
Digital engineering for DoD: where AI fits in the MBSE mandate

The DoD Digital Engineering Strategy is a decade-long mandate to replace document-driven acquisition with authoritative digital models. AI and ML are central to making it work — and there are a dozen SBIR topic families funding the capability gaps right now.

What the DoD Digital Engineering Strategy actually requires

The DoD Digital Engineering Strategy, issued in 2018 and carried forward through subsequent policy memos and the 2023 Digital Engineering Fundamentals update, commits the department to a fundamental shift: from a document-driven acquisition process to one anchored in authoritative digital models. Every major program is expected to establish a single source of truth in a digital engineering environment, maintain a digital thread across lifecycle phases, and use model-based systems engineering rather than paper specifications and Visio diagrams. The strategy has five goals, but the operational implication is simple: the model is the contract, not the document.

That is a large change. The legacy acquisition pattern is documents — specifications, CDRLs, ICDs, test reports — each produced by a different team, stored in different systems, and reconciled by human effort at program reviews. The new pattern is models — SysML, Capella, Cameo, Modelica, and increasingly AI-augmented representations — that are queryable, verifiable, and shareable across the program. The old pattern scaled to World War II. The new pattern is necessary for a force that fields software-defined weapons, multi-domain operations, and systems that change faster than the acquisition cadence that birthed them.

Digital Engineering AI Application Areas — SBIR Funding Activity

  • Requirements traceability and gap analysis: 88%
  • Simulation and surrogate modeling: 85%
  • Automated test case generation and coverage: 82%
  • Digital twin data pipeline and anomaly detection: 80%
  • Model verification and validation (V&V) automation: 75%
  • System architecture optimization AI: 68%

MBSE and the digital thread concept

Model-Based Systems Engineering is the engineering method that implements the strategy. Rather than write a requirements document, engineers build a requirements model. Rather than produce an interface control document, engineers maintain an interface model linked to the architecture model. The models use formal languages — SysML is the dominant standard, with ARCADIA/Capella, Modelica, and several domain-specific languages filling specialized niches. Commercial tools include Cameo Systems Modeler (now part of Dassault Systèmes' 3DEXPERIENCE line), IBM Rhapsody, Capella, and Innoslate.

The digital thread is the connective tissue. It is the set of links, transforms, and traceability relationships that flow data across lifecycle phases — from operational concept to requirements to architecture to design to manufacturing to test to sustainment. Done well, a change to a top-level requirement propagates through the thread, flagging impacts to components, tests, and sustainment plans. Done poorly, the thread is a PowerPoint diagram with no underlying implementation.
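
The change-propagation behavior described above is, at its core, a graph traversal over traceability links. A minimal sketch, assuming trace links are exported as a simple adjacency map (all element names here — REQ-12, ARCH-Radar, and so on — are invented for illustration):

```python
# Hypothetical sketch of digital-thread change impact analysis. Trace links
# point downstream: from each model element to the elements that depend on it.
from collections import deque

TRACE = {
    "REQ-12": ["ARCH-Radar", "TEST-044"],
    "ARCH-Radar": ["COMP-Antenna", "COMP-SigProc"],
    "COMP-SigProc": ["TEST-101", "SUSTAIN-Plan-7"],
}

def impact_of_change(element: str) -> set[str]:
    """Breadth-first walk of trace links to collect every impacted artifact."""
    impacted, queue = set(), deque([element])
    while queue:
        for child in TRACE.get(queue.popleft(), []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

# A change to REQ-12 flags the architecture element, both components,
# two tests, and a sustainment plan.
print(sorted(impact_of_change("REQ-12")))
```

The hard part in practice is not the traversal — it is keeping the `TRACE` map accurate across tools, which is exactly the coherence problem the next paragraph describes.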

The gap between done-well and done-poorly is enormous, and it is where most AI opportunity lives. Programs have partial models, orphaned requirements, stale traceability, and inconsistent states across tools. Keeping the thread coherent is currently a labor-intensive, error-prone, and expensive human task. Program offices know it. The capability gap is the entry point.

The model is the contract, not the document. That is the operational shift the Digital Engineering Strategy demands — and it is exactly the kind of structured-data substrate that makes AI useful.

Where AI fits in: requirements, simulation, test automation

Requirements traceability is the first and most legible AI application. A program with 8,000 shall statements distributed across a requirements baseline, an architecture model, and a test plan needs automated consistency checking, gap identification, and change-impact analysis. Language models are surprisingly good at this. They can parse shall statements, extract verbs and noun phrases, match them to architecture elements, and flag orphans and contradictions. Several SBIR topics across Army, Navy, and Air Force have funded specific tooling in this space over the last three cycles.
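
The orphan-flagging step can be shown in miniature. This is a deliberately naive sketch — real tooling would use a language model or embedding similarity rather than keyword overlap, and all requirement IDs and element names are invented:

```python
# Minimal orphan-requirement detection: flag shall statements that mention
# no known architecture element. Keyword overlap stands in for NLP matching.
import re

requirements = {
    "SRS-001": "The radar subsystem shall detect targets at 200 km.",
    "SRS-002": "The datalink shall encrypt all traffic.",
    "SRS-003": "The cooling loop shall maintain 40 C.",  # no matching element
}
architecture_elements = ["Radar Subsystem", "Datalink", "Antenna Array"]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

element_tokens = [tokens(e) for e in architecture_elements]

# A requirement is an orphan if no element's tokens all appear in it.
orphans = [
    rid for rid, text in requirements.items()
    if not any(et <= tokens(text) for et in element_tokens)
]
print(orphans)
```

At 8,000 requirements the same shape holds; the matching function gets smarter and a human reviews the flagged set rather than the whole baseline.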

Surrogate modeling is the second high-value area. Physics-based simulations — CFD for air flow, finite element for structural loads, six-DOF flight simulation — are expensive. Programs run them hundreds or thousands of times during design and test. Machine learning surrogates trained on simulation outputs can approximate the expensive model at a fraction of the runtime, enabling design space exploration and optimization that would otherwise be computationally infeasible. DARPA, AFRL, and NAVAIR have all funded surrogate-model SBIR topics.
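
The surrogate pattern is simple to state: run the expensive model a bounded number of times, fit a cheap approximation, then sweep the design space with the approximation. A toy sketch, with a made-up analytic function standing in for a real CFD or FEA run and a polynomial fit standing in for a learned surrogate:

```python
# Surrogate-modeling sketch: fit a cheap model on a few "expensive" runs,
# then evaluate the cheap model across the whole design space.
import numpy as np

def expensive_sim(x: np.ndarray) -> np.ndarray:
    """Stand-in for a physics simulation (e.g., one CFD run per point)."""
    return 0.5 + 0.1 * np.sin(3 * x) + 0.05 * x ** 2

# Train on 25 simulated runs across the design variable's range.
train_x = np.linspace(0.5, 1.5, 25)
train_y = expensive_sim(train_x)
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=6))

# Sweep 1,000 design points with the surrogate instead of the full sim,
# and check accuracy against the truth (possible here only because the
# stand-in function is cheap).
sweep = np.linspace(0.5, 1.5, 1000)
max_err = float(np.max(np.abs(surrogate(sweep) - expensive_sim(sweep))))
print(f"max surrogate error over sweep: {max_err:.2e}")
```

In a real program the surrogate is typically a neural network or Gaussian process and the training set is generated by a design-of-experiments plan, but the train-on-few, evaluate-on-many economics are the same.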

Automated test case generation is the third. Test and evaluation is the bottleneck in digital engineering — a program with thousands of requirements and tens of thousands of test cases cannot test everything manually. AI-assisted test generation, coverage analysis, and regression prioritization are active research and procurement areas. The Navy T&E community and the Joint Test and Evaluation Methodology office have recurring topics in this space.
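
Regression prioritization, one piece of that stack, has a well-known greedy baseline: order tests so each next test covers the most not-yet-covered requirements. A sketch with invented test and requirement IDs:

```python
# Greedy additional-coverage test prioritization: a standard heuristic for
# ordering a regression suite when there is no time to run everything.
def prioritize(coverage: dict[str, set[str]]) -> list[str]:
    """Order tests by marginal requirement coverage, highest first."""
    remaining = dict(coverage)
    covered: set[str] = set()
    order: list[str] = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "TC-1": {"R1", "R2"},
    "TC-2": {"R2", "R3", "R4"},
    "TC-3": {"R4"},
}
# TC-2 covers three new requirements, then TC-1 adds R1, then TC-3 adds none.
print(prioritize(coverage))
```

AI-assisted variants replace the static coverage map with learned predictions of which tests are likely to fail given a change, but the scheduling skeleton looks like this.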

Digital twin programs and their AI data requirements

A digital twin is a live model of a physical system or process, connected to real telemetry from the physical asset. Digital twins are the operational extension of MBSE — the same models used in design are used in operations and sustainment, continuously updated with sensor data. DoD digital twin programs span aircraft (Air Force Digital Thread), ships (NAVSEA), ground vehicles (Army), and installations (Air Force Installation Twin, Navy Facilities Digital Twin).

The AI requirements in digital twin programs cluster around four themes. Anomaly detection on telemetry streams — finding the signal that precedes a fault. State estimation — reconciling sensor data with the model to produce a best estimate of system state. Remaining useful life prediction — fusing telemetry with maintenance history and physics models to forecast component life. And scenario simulation — running the twin forward under hypothetical conditions to test operational decisions.
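
The first theme, anomaly detection, often starts far simpler than a learned model: a rolling statistical baseline over the telemetry stream. A sketch with invented telemetry values and thresholds:

```python
# Telemetry anomaly detection via rolling z-score: flag samples more than
# `threshold` standard deviations from the trailing window's mean. This is
# the baseline a learned detector has to beat.
import statistics

def anomalies(stream: list[float], window: int = 10, threshold: float = 3.0) -> list[int]:
    flagged = []
    for i in range(window, len(stream)):
        hist = stream[i - window:i]
        mu, sigma = statistics.fmean(hist), statistics.pstdev(hist)
        if sigma > 0 and abs(stream[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Synthetic sensor channel with one injected fault precursor.
telemetry = [100.0 + 0.5 * (i % 3) for i in range(30)]
telemetry[20] = 140.0
print(anomalies(telemetry))  # flags the injected spike at index 20
```

Production pipelines layer multivariate and learned detectors on top, but the ingest-window-score-flag loop is the common spine.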

The data volumes involved are large. A single F-35 generates tens of gigabytes per flight hour across its sensor suite. A ship generates similar volumes across propulsion, navigation, and combat systems. The AI pipeline — data ingest, feature extraction, model training, inference, and feedback — is where most of the actual engineering work lives. Most programs are not wrangling this data well, and the SBIR topics funding the gap tend to be technically specific and well-scoped.

SBIR topic families funding MBSE and DE AI

A rough map of the recurring SBIR topic families in this space, based on recent cycles:

  • DARPA — topics under the Engineering for Zero Downtime, AIMEE (AI for MBSE), and adjacent digital engineering program calls. DARPA topics in this area tend to be ambitious and technology-forward.
  • Air Force (AFRL, LCMC) — MBSE tooling for aircraft programs, model curation and consistency, AI-assisted engineering design review, and digital twin telemetry analytics.
  • Army (PEO GCS, DEVCOM) — ground vehicle digital engineering, systems engineering automation, and test case generation for combat systems.
  • Navy (NAVAIR, NAVSEA, ONR) — naval digital twin programs, ship health monitoring, and MBSE-enabled sustainment forecasting.
  • OSD and DAU — cross-cutting digital engineering workforce tooling, including AI-assisted systems engineering training and decision support.

Topic counts in this family run roughly 15 to 30 per annual DoD SBIR solicitation, with clear growth year-over-year since 2022.

The T&E AI connection

Test and Evaluation is where digital engineering meets reality, and where AI investment has been accelerating fastest. The Director, Operational Test and Evaluation (DOT&E) has been explicit that AI-enabled systems require new test methodologies — existing T&E processes assume deterministic systems with bounded state spaces, assumptions that machine learning systems do not satisfy. The T&E AI stack spans automated test case generation, coverage analysis for ML systems, adversarial and robustness testing, and continuous test in deployment.

For AI firms, T&E is a particularly attractive niche because the problems are technically specific, the customer base is concentrated (DOT&E, service test agencies, the major test ranges), and the requirements are well-documented. The Joint Test and Evaluation Methodology (JTEM) office and the Test Resource Management Center (TRMC) both sponsor directly relevant SBIR topics, and several have Phase III pathways into program-of-record test infrastructure.

Positioning an AI firm for digital engineering contracts

A small AI firm that wants to compete in digital engineering should invest in three specific things. First, fluency with at least one MBSE tool stack — Cameo, Capella, or similar. The ability to ingest models from standard tools and produce artifacts that integrate back in is table stakes. Pure ML capability without MBSE fluency looks generic; MBSE fluency plus ML depth is scarce.

Second, a credible story about model validation. The hardest question program offices ask about ML surrogates or AI-assisted consistency checking is "how do we trust it?" A firm that can answer that with a concrete V&V methodology — test coverage, uncertainty quantification, human-in-the-loop review, and fallback behavior — is dramatically more credible than a firm that waves at accuracy metrics.
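
One concrete shape that answer can take is an uncertainty gate with a trusted fallback: the surrogate serves predictions only when an ensemble agrees within a bound, and defers to the authoritative simulation otherwise. A hedged sketch — every function, model, and threshold here is illustrative, not a prescribed methodology:

```python
# Uncertainty-gated surrogate with fallback: ensemble disagreement triggers
# a fallback to the trusted physics model (and, in practice, a log entry
# for human review).
import statistics

def full_simulation(x: float) -> float:
    """Stand-in for the expensive, validated physics model."""
    return x ** 2

# Toy ensemble: members agree in-distribution, diverge off-nominal.
ensemble = [
    lambda x: x ** 2 + 0.01,
    lambda x: x ** 2 - 0.01,
    lambda x: x ** 2 + (0.0 if abs(x) < 2 else 5.0),  # degrades off-nominal
]

def predict(x: float, max_spread: float = 0.1) -> tuple[float, str]:
    preds = [m(x) for m in ensemble]
    if max(preds) - min(preds) > max_spread:
        # Ensemble disagrees: do not trust the surrogate here.
        return full_simulation(x), "fallback"
    return statistics.fmean(preds), "surrogate"

print(predict(1.0))  # in-distribution: surrogate path
print(predict(3.0))  # off-nominal: fallback path
```

The point a program office hears in this design is not the ensemble itself but the explicit, testable boundary on where the AI is allowed to answer.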

Third, at least one reference project — even a Phase I — that demonstrates delivered capability inside a real MBSE tool chain. Reviewers in this space can smell generic AI pitches. A specific screenshot of a consistency check running against a Cameo model, or a surrogate trained on a real CFD dataset, is worth more than three pages of capability claims.

Bottom line

The DoD Digital Engineering Strategy is not a passing fad. It is a decade-long structural commitment with real appropriations behind it and program offices under mandate to execute it. The capability gaps are real, the SBIR topic cadence is steady, and the Phase III transitions into program-of-record tooling are active. For a small AI firm with technical depth and the willingness to learn MBSE fluency, digital engineering is one of the cleaner paths into DoD — less crowded than generic "AI for defense" topics, more legible than classified spaces, and with a funded road ahead through the end of the decade.

Frequently asked questions

What is the DoD Digital Engineering Strategy?

The DoD Digital Engineering Strategy, issued in 2018 and reinforced by subsequent policy memos, mandates a shift from document-driven to model-based acquisition. Programs must establish an authoritative source of truth in digital models, use model-based systems engineering across the lifecycle, and maintain a digital thread connecting requirements, design, manufacturing, test, and sustainment.

How does AI fit into MBSE and the digital thread?

AI plugs into MBSE at several points: automated requirements traceability, surrogate modeling for expensive simulations, test case generation, anomaly detection on digital twin telemetry, and architecture optimization. The digital thread produces large amounts of structured data across lifecycle stages, which is exactly the substrate ML models need.

Which SBIR agencies are funding digital engineering AI?

DARPA runs explicit digital engineering programs and adjacent SBIR topics. The Air Force issues MBSE-related topics through AFRL and Life Cycle Management Center. The Army and Navy both have digital engineering offices and recurring SBIR topics on MBSE, digital twin, and T&E automation. OSD occasionally sponsors MBSE-adjacent topics.


Building AI for DoD digital engineering?

We help small firms scope MBSE and digital twin AI proposals that slot cleanly into program-of-record digital engineering environments.
