The DoD Zero Trust Strategy in one paragraph
The DoD Zero Trust Strategy (published November 2022, updated periodically) defines seven pillars that collectively describe a Zero Trust architecture: User, Device, Application & Workload, Data, Network & Environment, Automation & Orchestration, and Visibility & Analytics. Every pillar has capability targets and maturity levels. DoD components are working toward Target Level by FY27 and Advanced Level thereafter. For contractors operating inside DoD environments, ZT is not optional; the pillars shape everything from identity issuance to model-endpoint exposure.
AI systems introduce new identity problems that traditional zero-trust doesn't cover: model version identities, API key governance, tool-call permission scopes, and RAG retrieval access controls all need policy enforcement.
The seven pillars applied to AI

| Pillar | ZT intent | Applied to AI |
|---|---|---|
| User | Continuous authentication, risk-based authorization | PIV/PIV-I or FIDO2 MFA for every model API call. No service-account-only auth for human-originating inference. Risk signals drive step-up auth on privileged operations (fine-tuning, model upload). |
| Device | Device posture as an authorization input | Developer devices calling model endpoints assert posture (EDR, compliance state) via mTLS with device certs. No inference from unmanaged devices. |
| Application & Workload | Application-layer authorization, workload identity | Service mesh with per-service identity (SPIFFE/SPIRE). Model endpoint authorizes per-workload, not per-network-address. Every call carries verifiable workload identity. |
| Data | Data-level classification, access, and DLP | Training data, RAG sources, fine-tuned weights, prompt logs all tagged with classification. Access enforced at data level, not only at service boundary. DLP on model outputs. |
| Network & Environment | Microsegmentation, encrypted everywhere | Model endpoint sits in its own segment. RAG store in another. Logging in another. Only specific identities can traverse. TLS 1.3 with FIPS-validated cipher suites everywhere. |
| Automation & Orchestration | Policy-as-code, automated response | Authorization policies expressed as code (OPA, Cedar). Model-version changes gated by policy. Automated quarantine of model endpoints on anomalous behavior. |
| Visibility & Analytics | Pervasive telemetry, behavioral analytics | Full prompt/response logging. User-behavior analytics on query patterns. Anomaly detection on model outputs and tool-call patterns. |
ICAM, concretely for model endpoints
Identity, Credential, and Access Management is the foundation of the User pillar. For model endpoints:
- PIV or PIV-I for DoD users; FIDO2 hardware keys for contractor users who cannot hold a PIV.
- No password-only access to any model endpoint, ever.
- OAuth 2.0 / OIDC for service-to-model authentication. Token audience bound to the specific endpoint. Short expiry.
- Workload identity via SPIFFE/SPIRE for service-to-service calls.
- Per-model fine-grained authorization: a user authorized to query Model A is not automatically authorized to query Model B, or to fine-tune, or to upload.
- Attribute-based access control (ABAC) driven by user clearance, request classification, and target data sensitivity.
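The per-model, attribute-based check in the last two bullets can be sketched as follows. All names, levels, and grants here are illustrative assumptions, not an official marking or authorization scheme:

```python
from dataclasses import dataclass, field

# Illustrative sensitivity ordering -- not an official marking scheme.
LEVELS = {"PUBLIC": 0, "CUI": 1, "SECRET": 2}

@dataclass
class User:
    id: str
    clearance: str                                     # highest level the user may access
    model_grants: dict = field(default_factory=dict)   # model -> set of allowed actions

def authorize(user: User, action: str, model: str, data_level: str) -> bool:
    """ABAC decision: the user needs an explicit grant for this model/action
    pair AND a clearance at or above the sensitivity of the data involved."""
    if action not in user.model_grants.get(model, set()):
        return False  # no grant: "query" on Model A implies nothing about Model B
    return LEVELS[user.clearance] >= LEVELS[data_level]

# A user granted "query" on model-a cannot fine-tune it, and cannot touch model-b.
alice = User("alice", "CUI", {"model-a": {"query"}})
```

In a deployed system this decision would live in a policy engine (OPA, Cedar) rather than application code, consistent with the Automation & Orchestration pillar.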
CASB and inline controls for LLM traffic
Commercial cloud access security brokers (Netskope, Zscaler, Microsoft Defender for Cloud Apps) increasingly enforce policy on LLM traffic. Applied in a federal context:
- Block outbound connections to commercial LLM endpoints from any device with access to federal data.
- Inline DLP on outbound prompts — classify content, block or redact if it matches CUI patterns.
- Allow-list specific federal-authorized endpoints (Azure OpenAI Government, Bedrock GovCloud, Vertex Assured Workloads).
- Log all LLM API traffic to a central SIEM for user-behavior analytics.
The CASB is a defense-in-depth layer. It does not replace application-layer authorization; it catches attempts by developers or users to route around those application-layer controls.
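A minimal sketch of the inline prompt-screening step described above, with toy patterns. A production DLP engine uses validated CUI detectors and classification logic, not these two regexes:

```python
import re

# Toy patterns for illustration only -- real DLP uses validated detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "cui_marking": re.compile(r"\bCUI(//[A-Z-]+)?\b"),
}

def screen_prompt(prompt: str, mode: str = "redact"):
    """Return (allowed, text). In 'block' mode any hit rejects the prompt;
    in 'redact' mode hits are masked and the prompt passes through."""
    hit_names = [name for name, pat in PATTERNS.items() if pat.search(prompt)]
    if not hit_names:
        return True, prompt
    if mode == "block":
        return False, ""
    redacted = prompt
    for name in hit_names:
        redacted = PATTERNS[name].sub(f"[REDACTED:{name}]", redacted)
    return True, redacted
```

Whether a given content class blocks or redacts is policy, which is why the decision belongs in the CASB/DLP layer rather than hard-coded in each client.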
Data pillar is where AI really diverges
The Data pillar is where AI systems depart most from traditional IT. Four specific applications:
Training data as a first-class protected asset
Training datasets carry the same classification as the data they were drawn from. Storage, access, and export of training data require the same controls as live CUI.
Fine-tuned weights inherit classification
A model fine-tuned on CUI is itself a CUI artifact. ZT data-pillar controls apply to the weights — encrypted at rest, access-controlled, exfiltration-monitored.
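For both training data and fine-tuned weights, the inheritance rule is a high-water mark: the derived artifact carries the highest classification among its sources. A sketch, assuming a three-level ordering:

```python
# Assumed ordering for illustration -- use your program's actual marking scheme.
ORDER = ["PUBLIC", "CUI", "SECRET"]

def inherit_classification(source_levels: list) -> str:
    """High-water mark: a derived artifact (training set, fine-tuned weights)
    is classified at the highest level of any source it was built from."""
    return max(source_levels, key=ORDER.index)
```

So a model fine-tuned on a mix of public and CUI data is a CUI artifact, and every control listed above applies to the weights file itself.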
RAG sources enforce user-level access
Retrieval must check whether the requesting user is authorized for each retrieved document before including it in model context. This is the single biggest failure mode in federal RAG systems.
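A sketch of that post-retrieval check, assuming each indexed chunk carries the read ACL of its source document (field names are hypothetical):

```python
def filter_retrieved(user_groups: set, hits: list) -> list:
    """Enforce source-document ACLs after vector search and before prompt
    assembly: a chunk the user could not read in the source system must
    never reach the model context."""
    return [h for h in hits if user_groups & h["read_groups"]]

# Hypothetical retrieval hits tagged with their source systems' ACLs.
hits = [
    {"id": "policy-memo", "read_groups": {"all-staff"}, "text": "..."},
    {"id": "salary-table", "read_groups": {"hr-only"}, "text": "..."},
]
```

A user in `all-staff` gets only `policy-memo` in context. Group membership should come from the same directory the source systems use, so the filter and the source enforce the same answer.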
Prompt and response logs as sensitive stores
Users will paste sensitive content into prompts. Your log store inherits that sensitivity. Encrypt, restrict, retain, purge.
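The purge step can be sketched as a retention sweep; the 90-day window here is an assumption, and the real window comes from your records schedule:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # assumed window -- set from the records schedule

def purge(records: list, now: datetime = None) -> list:
    """Drop prompt/response log records older than the retention window.
    Encryption and access restriction happen in the store itself; this
    covers only the 'purge' step."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["ts"] <= RETENTION]
```

In practice this runs as a scheduled job against the log store, with the deletions themselves audited.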
Automation and analytics for AI
Visibility & Analytics combined with Automation & Orchestration is where AI systems need purpose-built tooling:
- Anomaly detection on query patterns per user — sudden volume spikes, sudden content-class shifts.
- Detection of prompt-injection patterns on the ingress side.
- Detection of potential extraction attacks (users probing the model for training-data leakage).
- Automated quarantine of model endpoints when anomaly thresholds trip.
- Policy-as-code for model-version promotion — new model versions gated on passing eval suite.
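The promotion gate in the last bullet can be sketched as follows; the gate names and thresholds are assumptions, stand-ins for your program's actual eval suite:

```python
# Assumed gates: every required eval must meet its threshold before a new
# model version is promoted. A missing result counts as a failure.
GATES = {"accuracy": 0.90, "prompt_injection_block_rate": 0.99}

def can_promote(eval_results: dict) -> bool:
    """True only if every gate is present and at or above its threshold."""
    return all(eval_results.get(name, 0.0) >= floor for name, floor in GATES.items())
```

Treating a missing eval result as a failure is the important design choice: a version that skipped a gate is held, not promoted by default.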
Maturity reality check
Most federal AI programs in 2026 operate somewhere between ZT Baseline and Intermediate across the pillars. Target Level by FY27, and Advanced Level thereafter, are aspirational for many programs. The practical approach is to hit Baseline across all seven pillars first, then prioritize Data, User, and Visibility & Analytics uplift for AI-specific risks, and let less AI-sensitive pillars (Network & Environment, Device) mature on the program's broader schedule.
[Chart omitted: estimated per-pillar maturity on a scale of 0% = Preparation, 25% = Basic, 50% = Intermediate, 75% = Advanced, 100% = Optimal. Estimates based on DoD ZT program reporting and program observations; AI-specific gaps are largest in the Data and Visibility & Analytics pillars.]
Bottom line
DoD Zero Trust's seven pillars apply to AI systems with specific emphasis on User (ICAM for model endpoints), Data (training, weights, RAG, logs), and Visibility & Analytics (behavior-aware monitoring). The general-IT language of the strategy translates cleanly to AI when you think of the model endpoint as a sensitive workload and each prompt as an authorization event.
Frequently asked questions
**What are the seven pillars of the DoD Zero Trust Strategy?**
User, Device, Application & Workload, Data, Network & Environment, Automation & Orchestration, and Visibility & Analytics.

**How does Zero Trust apply to AI inference?**
Each inference request is an authorization event. PIV/FIDO2 MFA for users, workload identity for services, per-model fine-grained authorization, data-level classification on prompts and responses, and behavioral analytics on query patterns.

**What is ICAM?**
Identity, Credential, and Access Management. For AI endpoints: PIV/PIV-I, FIDO2, OAuth 2.0/OIDC, workload identity via SPIFFE/SPIRE, attribute-based access control.

**Do we need a CASB for LLM traffic?**
Yes, as defense-in-depth. Block commercial LLM endpoints, enforce DLP on outbound prompts, allow-list federal-authorized endpoints, and log all LLM API traffic.

**Which pillar matters most for AI systems?**
Data. Training data, fine-tuned weights, RAG sources, and prompt/response logs all carry classification that must be enforced at the data level, not only at the service boundary.

**What are the Zero Trust deadlines?**
Target Level across all seven pillars by FY27, with Advanced Level thereafter, per DoD direction. Most programs in 2026 are between Baseline and Intermediate. Prioritize Data, User, and Visibility & Analytics for AI-specific risks.