The pattern
NIST 800-53 Rev 5 is vast. The Moderate baseline pulls 325 controls; the High baseline 421. Across real federal AI projects, findings concentrate in a predictable subset of roughly forty controls across seven families: AC, AU, CM, IA, SC, SI, and CA. If you harden these, you clear the vast majority of AI-specific ATO findings.
For AI under 800-53 Rev 5: SA (acquisition) for AI procurement policy, SI (integrity) for model integrity and poisoning detection, RA (risk assessment) for AI-specific risk, and AU (audit) for inference logging. These four families drive the majority of AI-related findings.
High-Impact Control Families — AI Finding Frequency at FedRAMP Moderate
Percentage = frequency with which findings in this family appear during AI system ATOs. SI and AU dominate because inference logging and model integrity documentation are commonly underprepared.
Access Control (AC)

| Control | AI failure mode |
|---|---|
| AC-2 Account Management | Service accounts used for model API calls never rotated; dormant developer accounts retain model access. |
| AC-3 Access Enforcement | Retrieval layer pulls from document store without enforcing per-user clearance. Model returns content the user should not see. |
| AC-4 Information Flow Enforcement | Prompts can carry data across classification boundaries without labels. Model output crosses boundary in the opposite direction. |
| AC-6 Least Privilege | Fine-tuning, model upload, and inference share the same role. One compromised developer can modify production models. |
| AC-16 Security Attributes | Classification labels do not travel with data into or out of the model. RAG retrieval loses labels; output is unlabeled. |
Audit and Accountability (AU)
| Control | AI failure mode |
|---|---|
| AU-2 Event Logging | No audit event emitted on every inference. Cannot tell who asked what. |
| AU-3 Content of Audit Records | Logs missing full prompt, full response, model version, or tool calls. |
| AU-6 Audit Review, Analysis, Reporting | Nobody actually reviews the logs. No alerting on anomalous prompt patterns. |
| AU-9 Protection of Audit Information | Prompt logs in plaintext. Log store not at the same impact level as the source data. |
| AU-12 Audit Generation | Orchestration layer logs but downstream model-endpoint log events are missing or not correlated. |
Configuration Management (CM)
| Control | AI failure mode |
|---|---|
| CM-2 Baseline Configuration | No baseline configuration for the model itself — which version, which parameters, which prompt templates. |
| CM-3 Configuration Change Control | Vendor-managed model updates. Version drifts under the ATO. No change ticket. |
| CM-6 Configuration Settings | Inference parameters (temperature, max tokens, top-p) not documented or parameterized per use case. |
| CM-8 System Component Inventory | SBOM missing model weights, training-data lineage, RAG index components. |
| CM-10 Software Usage Restrictions | Open-weight models whose licenses prohibit certain federal use cases not tracked. |
Identification and Authentication (IA)
| Control | AI failure mode |
|---|---|
| IA-2 User Identification and Authentication | Model endpoint accepts API keys without binding to a named user. No trail back to a human. |
| IA-5 Authenticator Management | Model API keys long-lived, not rotated, sometimes in source repositories. |
| IA-8 Identification and Authentication (Non-Organizational Users) | External agency users not authenticated via PIV/PIV-I when federal policy requires it. |
| IA-9 Service Identification and Authentication | Service-to-model authentication by IP allow-list instead of workload identity. |
System and Communications Protection (SC)
| Control | AI failure mode |
|---|---|
| SC-4 Information in Shared Resources | Model session or cache leaks data across user boundaries. |
| SC-7 Boundary Protection | Model endpoint exposed beyond the authorization boundary without explicit interconnection documentation. |
| SC-8 Transmission Confidentiality and Integrity | Prompts sent to internal model endpoints over unencrypted channels because "it is internal." |
| SC-12 Cryptographic Key Establishment and Management | Keys for encrypting prompt logs managed outside the authorization boundary. |
| SC-13 Cryptographic Protection | Non-FIPS-validated crypto used somewhere in the stack (often a third-party SDK). |
| SC-28 Protection of Information at Rest | Prompt/response logs and fine-tuned weights not encrypted at rest. |
System and Information Integrity (SI)
| Control | AI failure mode |
|---|---|
| SI-4 System Monitoring | No monitoring for anomalous prompt patterns, extraction attempts, or tool-call abuse. |
| SI-7 Software, Firmware, Information Integrity | Model weights loaded without hash verification. No integrity check on boot. |
| SI-10 Information Input Validation | No input filtering. Prompt injection patterns reach the model unchallenged. |
| SI-15 Information Output Filtering | No DLP on model output. PII, CUI, classification markers leave the boundary. |
| SI-16 Memory Protection | GPU memory not cleared between tenants on shared inference hardware. |
Assessment, Authorization, Monitoring (CA)
| Control | AI failure mode |
|---|---|
| CA-2 Control Assessments | Model-layer controls not assessed because FedRAMP baseline inherited without extension for AI. |
| CA-5 Plan of Action and Milestones | AI-specific findings (hallucination rates, bias metrics) not tracked in POA&M. |
| CA-7 Continuous Monitoring | No continuous monitoring of model behavior, only of infrastructure. |
| CA-8 Penetration Testing | Pen-test does not include red-team against the model — prompt injection, extraction, jailbreak. |
Personnel Security and Risk (PS, RA)
- PS-3 Personnel Screening. Staff with access to CUI training data or prompt logs must meet the applicable screening level. Often overlooked for ML engineers who "only touch test data."
- RA-5 Vulnerability Monitoring and Scanning. Extended for AI to include model-level vulnerabilities — known jailbreaks, model-card CVE-style disclosures, training-data poisoning research.
How to prepare
For each of the forty controls above, your SSP should contain an implementation statement that specifically addresses the AI layer. Generic boilerplate copied from the FedRAMP template is not enough; the 3PAO will ask AI-specific questions. Concrete, model-layer answers are the difference between a clean assessment and a findings-heavy one.
Bottom line
Forty controls across seven families catch the overwhelming majority of AI-specific 800-53 findings. Write specific, AI-aware implementation statements for each. Your ATO package improves more from depth on these forty than from surface-level coverage of the remaining 285-plus.
Frequently asked questions
Across real federal AI projects, findings concentrate in AC, AU, CM, IA, SC, SI, and CA — roughly forty controls. The remaining 285+ in the Moderate baseline apply but rarely drive unique AI findings.
AU-3 (audit record content) and SI-10 (input validation) are perennial. Prompt/response logging and prompt-injection handling are where AI systems consistently fall short.
Rev 6 is being drafted in OSCAL-native form. AI-specific overlays are under development. The forty-control concentration pattern will likely hold; the specific parameter values and enhancements will evolve.
AI RMF is a governance framework that complements 800-53. The two work together: AI RMF produces evidence that maps to specific 800-53 controls. Neither replaces the other.
Inherited controls still need an implementation statement, typically short, referencing the parent FedRAMP package and describing any customer responsibilities that remain. They are not zero-effort.
Author your SSP alongside the engineering. Write specific, model-layer implementation statements for the forty controls. Run your eval harness to generate evidence for CA-7, CA-8, SI-4, SI-10.