
NIST AI RMF 1.0: a practical mapping for federal contractors

The AI Risk Management Framework is not a control catalog. It is a governance framework. Here is how to map its four functions onto a real federal AI program without turning it into paperwork.

What AI RMF is, and is not

NIST AI Risk Management Framework 1.0, published January 2023, is a voluntary framework for managing risks associated with AI systems. It is not mandatory, it is not a compliance standard, and it does not itself produce an authorization. It is structurally similar to the NIST Cybersecurity Framework: four functions (Govern, Map, Measure, Manage), each with categories and subcategories, applied iteratively over an AI system's lifecycle.

THE AI RMF IN PRACTICE

NIST AI RMF 1.0 has four functions: Govern, Map, Measure, Manage. Govern is cross-cutting, and Map is front-loaded per system and revisited at major changes. Measure and Manage are ongoing; they're where your MLOps platform earns its keep.

For federal contractors, AI RMF matters because agencies increasingly reference it in solicitations, because OMB M-24-10 and successor memos lean on it, and because it gives you a common vocabulary for talking about AI risk with ISSOs who otherwise default to 800-53.

AI RMF is governance infrastructure. It does not replace 800-53 controls. It tells you how to decide which risks you even need controls for.

The four functions, concretely

  • Govern. Establish policies, roles, accountability, and culture for AI risk. Artifacts: AI policy, RACI, accountability memo, training records.
  • Map. Understand context: intended purpose, stakeholders, data provenance, deployment environment, legal and ethical framings. Artifacts: system documentation, use-case inventory, stakeholder map, data cards.
  • Measure. Assess, analyze, and track AI risks using appropriate metrics and methods. Artifacts: evaluation reports, fairness metrics, robustness tests, monitoring dashboards.
  • Manage. Prioritize and act on risks: accept, transfer, mitigate, or avoid. Artifacts: risk register, mitigation plans, incident runbooks, decommission plan.
[Chart: typical federal implementation maturity by AI RMF function, FY2025 average estimated across federal AI programs: Map 58, Govern 45, Measure 40, Manage 35]

The relationship to 800-53

800-53 is a control catalog. AI RMF is a risk framework. They are complementary, not substitutes. The practical relationship:

  • AI RMF's Map function generates the context that drives your 800-53 categorization (what data, what stakeholders, what impact).
  • AI RMF's Measure function produces evidence that maps to specific 800-53 controls — CA-8 (penetration testing), RA-5 (vulnerability monitoring), SI-4 (system monitoring), CA-5 (POA&M).
  • AI RMF's Manage function produces the risk register that supports 800-53's RA-3 (risk assessment) and PM family (program management) controls.
  • AI RMF's Govern function produces the policies that 800-53's PL (planning) and PM (program management) families require; the privacy accountability controls that sat in Rev 4's Appendix J AR family are integrated into the PM and PT families in Rev 5.
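The crosswalk above is easier to maintain as data than as prose, because it can be queried and rendered straight into an ATO package. A minimal sketch; the control IDs come from the bullets above, and the evidence descriptions (and the PL-1/PM-1 entries for Govern) are illustrative choices, not a prescribed mapping:

```python
# Sketch of an AI RMF -> NIST 800-53 crosswalk kept as data.
# Control IDs follow the mapping in the text; evidence names are
# illustrative placeholders for your actual artifacts.
RMF_TO_80053 = {
    "Measure": {
        "CA-8": "red-team / penetration test reports",
        "RA-5": "vulnerability monitoring output",
        "SI-4": "system monitoring dashboards",
        "CA-5": "POA&M entries",
    },
    "Manage": {
        "RA-3": "risk register and assessment records",
    },
    "Govern": {
        "PL-1": "AI policy",
        "PM-1": "accountability memo / program plan",
    },
}

def controls_for(function: str) -> list[str]:
    """Return the 800-53 control IDs a given AI RMF function feeds."""
    return sorted(RMF_TO_80053.get(function, {}))

print(controls_for("Measure"))  # ['CA-5', 'CA-8', 'RA-5', 'SI-4']
```

A structure like this also gives you the "mapping table" artifact listed in the 90-day plan for free: iterate the dict and render it as a table.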

Operationalizing Govern

The common failure mode is a 40-page AI policy that no engineer reads. The disciplined version is narrow and specific.

  • Named AI risk owner (person, not office). For a small firm, this is the founder or CTO.
  • A one-page AI use-case inventory, refreshed quarterly.
  • A go/no-go checklist for new AI use cases covering data classification, bias risk, human-in-the-loop requirement, logging requirement.
  • Training records showing staff have read and acknowledged the AI handling policy.
  • A public-facing AI acceptable-use statement where relevant (customer-facing systems).
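The go/no-go checklist above is small enough to enforce mechanically instead of reading from a document. A minimal sketch, assuming the four criteria listed; the field names and the specific blocking rules are invented for illustration, not policy:

```python
from dataclasses import dataclass

@dataclass
class UseCaseReview:
    """Go/no-go inputs for a proposed AI use case (mirrors the checklist above)."""
    data_classification: str   # e.g. "public", "CUI" -- hypothetical labels
    bias_risk_assessed: bool   # a bias risk review was completed
    human_in_the_loop: bool    # a human reviews consequential outputs
    logging_enabled: bool      # prompts and outputs are logged for audit

    def go(self) -> tuple[bool, list[str]]:
        """Return (approved, list of blocking reasons)."""
        blockers = []
        if self.data_classification == "CUI" and not self.logging_enabled:
            blockers.append("CUI use cases require output logging")
        if not self.bias_risk_assessed:
            blockers.append("bias risk not assessed")
        if not self.human_in_the_loop:
            blockers.append("no human-in-the-loop for decisions")
        return (not blockers, blockers)

ok, reasons = UseCaseReview("CUI", True, True, False).go()
print(ok, reasons)  # False ['CUI use cases require output logging']
```

The point is not the code; it is that a checklist with explicit fields forces the reviewer to answer every question before a use case ships.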

Operationalizing Map

Map is where most programs under-invest. The output is a per-system context document that answers:

  1. What is this system's intended purpose, in plain prose?
  2. Who are the users? Who are the affected non-users?
  3. What data trains or conditions the system? Provenance, licensing, sensitivity.
  4. What decisions does the system make or support? Fully automated? Human-in-the-loop? Human-on-the-loop?
  5. What laws or regulations apply? (FISMA, DFARS, Privacy Act, EO 14110, agency-specific.)
  6. What are plausible failure modes and their impact?

Four to six pages. Living document. Reviewed at major version changes.
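One way to keep the Map output consistent across systems is to generate the skeleton. A sketch that emits the six questions above as a review template; the headings come from the list, everything else is an assumption about how you want the document laid out:

```python
# The six Map questions from the text, as a reusable template.
MAP_QUESTIONS = [
    "Intended purpose (plain prose)",
    "Users and affected non-users",
    "Training/conditioning data: provenance, licensing, sensitivity",
    "Decision role: fully automated, human-in-the-loop, or human-on-the-loop",
    "Applicable laws and regulations",
    "Plausible failure modes and impact",
]

def context_doc_skeleton(system_name: str) -> str:
    """Render a per-system Map context document skeleton."""
    lines = [f"# Context document: {system_name}", ""]
    for i, question in enumerate(MAP_QUESTIONS, 1):
        lines += [f"## {i}. {question}", "", "TODO", ""]
    return "\n".join(lines)

print(context_doc_skeleton("example-llm-system"))
```

Generating the skeleton means a new system starts with the same six headings every time, which is what makes the documents comparable at review.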

Operationalizing Measure

Metrics that actually show up in an ATO package for an LLM system:

  • Hallucination on in-domain queries. Metric: task accuracy against gold set. Method: eval harness, 300+ items, reviewed quarterly.
  • Prompt injection susceptibility. Metric: successful injection rate. Method: red-team set of 100+ known patterns, rerun on model-version change.
  • PII/CUI leakage. Metric: leakage incidence in outputs. Method: classifier on output, sampled review.
  • Disparate error rates across cohorts. Metric: per-cohort accuracy delta. Method: stratified test set with demographic or contextual splits.
  • Availability. Metric: P50/P95 latency, uptime. Method: infrastructure monitoring, SRE standard.
  • Model drift. Metric: distribution shift on production inputs. Method: monitoring pipeline on prompt embeddings.
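Two of the metrics above, task accuracy against a gold set and injection success rate, reduce to the same scoring loop. A minimal harness sketch; the `model` callable, the gold set, and the attack list are placeholders you supply, and exact-match scoring is a simplifying assumption (real harnesses usually use graded or LLM-judged scoring):

```python
from typing import Callable

def task_accuracy(model: Callable[[str], str],
                  gold: list[tuple[str, str]]) -> float:
    """Fraction of gold-set items the model answers exactly (exact-match)."""
    hits = sum(1 for prompt, expected in gold
               if model(prompt).strip() == expected)
    return hits / len(gold)

def injection_success_rate(model: Callable[[str], str],
                           attacks: list[str],
                           canary: str = "INJECTED") -> float:
    """Fraction of known injection patterns whose canary appears in output."""
    leaks = sum(1 for attack in attacks if canary in model(attack))
    return leaks / len(attacks)

# Toy stand-in for the real system, just to show the call shape:
fake = lambda p: "Paris" if "capital of France" in p else "refused"
print(task_accuracy(fake, [("capital of France?", "Paris"),
                           ("2+2?", "4")]))  # 0.5
```

Rerunning this pair of functions on every model-version change, against a versioned gold set and attack set, is what turns the table above into evidence rather than intention.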

Operationalizing Manage

A short risk register — ten to twenty entries for a real system — with each entry carrying:

  • Risk description (one sentence).
  • Current likelihood and impact.
  • Mitigation in place (referencing the specific control or procedure).
  • Residual risk rating.
  • Owner and review date.

The register gets reviewed at the same cadence as your 800-53 POA&M — typically monthly. Items do not live in the register forever; either you mitigate them down, or you formally accept residual risk and move them to the accepted-risk memo.

The AI RMF risk register is a live document. Items that have not moved in six months are either forgotten or secretly closed. Either is a finding.
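The six-month staleness rule is easy to automate once each entry carries a last-reviewed date. A sketch, with the fields taken from the bullet list above; the field names and the 183-day threshold are illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    """One risk register entry (fields mirror the bullet list above)."""
    description: str    # one sentence
    likelihood: str     # e.g. "low" / "medium" / "high"
    impact: str
    mitigation: str     # reference to the specific control or procedure
    residual: str       # residual risk rating
    owner: str
    last_reviewed: date

def stale_entries(register: list[RiskEntry],
                  today: date,
                  max_age_days: int = 183) -> list[RiskEntry]:
    """Entries that have not moved in ~six months; each one is a finding."""
    cutoff = today - timedelta(days=max_age_days)
    return [e for e in register if e.last_reviewed < cutoff]

register = [RiskEntry("Prompt injection exfiltrates CUI", "medium", "high",
                      "input filter + SI-4 monitoring", "low", "cto",
                      date(2025, 1, 10))]
print(len(stale_entries(register, today=date(2025, 12, 1))))  # 1
```

Running a check like this at the monthly POA&M review catches the forgotten and the secretly closed items before an assessor does.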

Common implementation mistakes

  • Treating AI RMF as a document-once-and-file exercise. It is iterative across the AI lifecycle.
  • Writing an AI policy that lists all 80+ AI RMF subcategories verbatim. Nobody reads it.
  • Skipping Map. Most programs jump from Govern to Measure and skip the context work. Then Measure produces metrics disconnected from intended use.
  • Measuring things that are easy to measure (latency, throughput) instead of things that matter (fairness across cohorts, injection resistance).
  • Letting the risk register become a wish list. Every item needs an owner and a review date.

What a small AI firm should ship in 90 days

  • One-page AI policy.
  • One-page use-case inventory.
  • Per-system context document (Map output), 4-6 pages per system.
  • Evaluation harness with at least six metrics relevant to the specific use case.
  • Risk register, 10-20 entries, monthly review.
  • Mapping table showing which AI RMF subcategories tie to which 800-53 controls and which internal policies.

That is a working AI RMF implementation. It is not a masterpiece. It is a working baseline that a reviewer can read, understand, and extend.

Bottom line

AI RMF is a voluntary governance framework that is becoming de facto mandatory through reference in OMB memos and agency solicitations. Treat it as infrastructure, not paperwork. Four functions, each with narrow concrete artifacts. Map is where under-investment hurts most. Measure should produce evidence that maps to 800-53 controls. Manage should be a live register, not a document.

Frequently asked questions

Is NIST AI RMF mandatory?

Not legally mandatory as of 2026, but increasingly referenced in OMB memos (M-24-10 and successors) and agency solicitations. For federal contractors, treat it as de facto required.

What are the four AI RMF functions?

Govern (policy and accountability), Map (context and stakeholders), Measure (metrics and evaluation), Manage (risk prioritization and action).

How does AI RMF relate to NIST 800-53?

800-53 is a control catalog. AI RMF is a risk framework. AI RMF's Map function drives 800-53 categorization. Measure produces evidence for specific controls. Manage produces the risk register that supports RA-3 and PM-family controls.

What should I ship first?

A one-page AI policy, a use-case inventory, per-system context documents, an evaluation harness, and a live risk register. 90 days of disciplined work from nothing.

Does AI RMF replace 800-53 for AI systems?

No. It complements 800-53. The two operate at different levels — AI RMF at governance and risk, 800-53 at controls.

Who owns AI RMF implementation in a small firm?

A named person. For a small firm, typically the founder or CTO. Do not assign it to a committee.
