Why this matters for AI/ML scope specifically
Every prime on a large federal AI program has a subcontracting plan that promises percentages against the standard small business categories. The AI and ML scope is where those plans most often break down. Primes either cannot find small businesses with real AI depth; or they find small businesses whose value to the plan is their certifications rather than their delivery capability; or they end up carrying the technical work on the prime's own team and writing the small business dollars onto low-value scope.
The outcome of any of those three paths is the same: the small business participation in AI scope is either not meaningful, not real, or not sustainable. The Defense Contract Management Agency (DCMA), the Defense Contract Audit Agency (DCAA), agency OSDBUs, and program offices increasingly push back on all three. This post is about how to do it differently: how to select AI small business subs that deliver, what scopes actually transfer, and what it looks like when meaningful subcontracting is working.
The goals you are measured against
The government-wide prime and subcontracting goals under the Small Business Act:
- 23 percent of prime contract dollars to small businesses (and a corresponding subcontracting goal).
- 5 percent to women-owned small businesses (WOSB).
- 5 percent to small disadvantaged businesses (SDB), including 8(a) participants.
- 3 percent to HUBZone small businesses.
- 3 percent to service-disabled veteran-owned small businesses (SDVOSB).
Agency-level goals may exceed these. DoD and VA have historically carried higher SDVOSB utilization goals; HHS and Treasury have carried higher SDB and 8(a) goals; individual agency-specific supplements may also apply. Read your contract.
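Checking a plan's proposed percentages against these floors is simple arithmetic, but it is worth automating so shortfalls surface before submission. A minimal sketch, using the government-wide goals above; the plan figures are hypothetical, and real contracts may impose higher agency-level goals.

```python
# Hypothetical check of a plan's proposed percentages against the
# government-wide statutory goals. Illustrative only: agency-level
# goals in your contract may be higher than these floors.

GOALS = {              # government-wide goals, percent of dollars
    "small_business": 23.0,
    "wosb": 5.0,
    "sdb": 5.0,
    "hubzone": 3.0,
    "sdvosb": 3.0,
}

def check_plan(planned: dict[str, float]) -> dict[str, bool]:
    """Return True per category where the planned percentage meets the goal."""
    return {cat: planned.get(cat, 0.0) >= goal for cat, goal in GOALS.items()}

# Hypothetical plan: strong overall small business number, short on HUBZone.
plan = {"small_business": 30.0, "wosb": 6.0, "sdb": 5.5,
        "hubzone": 2.0, "sdvosb": 3.0}
shortfalls = [cat for cat, ok in check_plan(plan).items() if not ok]
print(shortfalls)  # the categories the plan is short on
```

The same check is worth rerunning whenever scope modifications change the dollar base, since percentages that were compliant at award can drift under mods.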
The governing clauses
- FAR subpart 19.7 establishes the subcontracting plan requirement. Prime contracts over the threshold (currently $750,000 for most acquisitions, $1.5 million for construction) that offer subcontracting opportunities require a subcontracting plan before award.
- FAR 52.219-9 Small Business Subcontracting Plan — the clause that gets included in the contract. It specifies plan contents, measurement, reporting (SSR / ISR in eSRS), and consequences of non-performance.
- FAR 52.219-8 Utilization of Small Business Concerns — the policy clause applicable more broadly.
- DFARS 252.219-7003 and 252.219-7004 for DoD-specific supplements.
Plans are audited. Repeated under-performance against plan, or plans that were clearly unrealistic at submission, have led to past performance findings that follow a prime across future competitions.
Why AI/ML is hard to subcontract to true small businesses
The credential bar on federal AI/ML work is high for reasons that compound:
- Production experience. "We can train a model on Kaggle" is not the same as "we can ship ML into a system that has an ATO, a continuous monitoring package, and an ISSM who asks hard questions." Many small businesses marketing AI capability have the first and not the second.
- Clearances and suitability. The team touching the data often needs a clearance, a public trust determination, or an agency suitability. That gates who can staff the scope. Small businesses without cleared personnel or a cleared facility cannot take the scope even when the technical capability exists.
- Federal engineering culture. Documentation, change control, evidence production, slow deploys, regression discipline. Commercial ML teams sometimes find the federal cadence painful; small businesses that have not worked in federal before often underestimate how much of the work is engineering discipline rather than modeling.
- Security posture. FedRAMP-authorized services, FIPS-validated crypto, boundary-aware architectures, RMF packages. Small businesses that have not operated inside an authorization boundary cannot contribute meaningfully to boundary-sensitive scope.
- Proposal and reporting overhead. Federal subcontracting is not just the work; it is the DCAA-compliant timekeeping, the CPARS-affecting performance, the monthly status. Small shops sometimes cannot carry that overhead on a small scope.
The result: the population of small businesses that can credibly deliver federal AI/ML scope is smaller than it looks on paper. Primes that do not invest in finding the real ones end up with plans that measure dollars but not delivery.
What to look for in an AI small business sub
The signals that separate a real AI sub from a marketing-led one:
Past performance at the engineering level
Not "we have worked with federal agencies." Specifics: what systems they helped build, which ATO they contributed to, what the scope of their role was, and what engineering artifacts they produced. A real sub can walk you through a block diagram of a system they delivered and point to exactly what their team built.
Technical depth on the founding team
Production ML is a craft. Small businesses whose founding team has a decade of hands-on ML engineering — not a layer of program managers above a contracted staffing arrangement — deliver differently than ones that do not. Ask to meet the technical principal. If you never meet them, that is a signal.
Credentials that matter
Federal certifications (SAM registration, DSIP account, SBA portal registration), security credentials (clearance status, FedRAMP familiarity, cloud provider certifications in GovCloud/Gov regions), and technical credentials (demonstrated performance on public benchmarks, published work, conference talks, notable OSS contributions). No one credential proves capability, but the absence of all of them is loud.
Capability statements that are specific
A capability statement that lists every AI buzzword is a weak signal. One that says "we built X production system with Y documented engineering discipline for Z class of agency" is a strong one. Ask for samples. Ask what they would not take on. Teams that can articulate their limits are usually more credible about their strengths.
Pricing that reflects the work
A sub who can do ML platform engineering for $85 per hour is either bleeding cash, using offshore engineering they did not disclose, or bidding to win and figuring it out later. Federal engineering on the scopes that matter prices in a recognizable range. Extreme outliers on either side deserve a second look.
Red flags
Patterns that indicate a sub is either a pass-through, a fronting arrangement, or unable to deliver the scope:
- Capability statement reads like a prime's capability statement. If everything is "we do everything," they probably do very little well.
- No named technical leads. If there is no one on the org chart whose LinkedIn shows the actual AI/ML engineering, the engineering is either being done by the prime, being done by a staffing partner, or not being done.
- No portal registrations or certifications. Not being in SAM, DSIP, or the relevant agency portals signals either a recent entrant or a firm that has never operated seriously in federal.
- Unable to produce sample deliverables. Redacted SSPs, anonymized model cards, architecture diagrams from completed work. A sub who cannot show any artifact of federal engineering work probably has not done any.
- Owner/principal is not available during the pursuit. At small businesses, the owner almost always works a pursuit personally. If they are not engaged in the conversation, you are talking to a sales layer.
- Pricing changes radically between discussions. Honest pricing does not move 40 percent between a conversation and a formal quote.
- Asking to be listed in the plan but not to do the work. This is the textbook fronting request. Run.
Workshare models that work
Meaningful AI/ML subcontracting has recognizable shapes. A few patterns we have seen succeed:
The scope-owned module
The sub owns a bounded technical module — a RAG ingestion pipeline, a feature store, a model registry and CI/CD platform, an evaluation harness — end to end. The prime integrates. The sub is accountable for delivery of that module, its documentation, its security posture, and its evidence. This is the cleanest form of real subcontracting because accountability is unambiguous.
The embedded specialist
The sub provides a small number of named engineers with specific specialization — ML ops, security engineering for ML, data engineering for federal data at scale — who work inside the prime's team on specific deliverables. Success depends on the sub actually bringing the specialization and not just body-shopping.
The supporting capability
The sub provides a supporting capability the prime does not have in house — accessibility engineering for AI interfaces, document parsing for a federal corpus, a specific modeling capability, federal-region cloud architecture — on an as-needed basis. The prime does the bulk of the work; the sub supplies the depth where needed.
The 8(a) mentor-protégé or JV
Where the program permits and the small business qualifies, a mentor-protégé relationship or a small business joint venture lets the prime share the work more deeply while preserving small business credit. These are formal structures that SBA recognizes, and they require investment on both sides.
How Precision Delivery Federal fits
We are a small business focused specifically on federal AI, ML, and data engineering work. Relevant credentials for primes evaluating us as a subcontractor:
- SAM.gov active. UEI Y2JVCZXT9HP5, CAGE 1AYQ0, NAICS 541512 primary, additional NAICS for data processing and engineering services.
- DSIP account active for DoD SBIR proposals.
- Based in Ames, Iowa. An Iowa small business.
- Principal with a decade of production ML experience on SAMHSA and adjacent federal data programs. Hands-on technical lead, not a sales front.
- Technical credentials including a Kaggle Top 200 ranking, production ML deployments, and a federal data engineering portfolio.
- Federal proposal depth. Active SBIR proposal program across DoD components, HHS, DOE, NASA, and agency-specific vehicles. We understand the federal acquisition cadence because we live in it.
- Scope focus. Agentic AI systems, machine learning platforms, RAG over federal corpora, ATO-ready engineering, DevSecOps for AI, and data engineering at federal scale. Not a generalist IT shop.
We sub on scopes where we can own delivery and produce the engineering artifacts that pass assessor review. We do not take pass-through scope.
Measurement that stands up to audit
The plan is measured in eSRS. The reality is measured by whether the work happened. Questions an OSDBU or contracting officer may ask that a meaningful subcontracting program can answer cleanly:
- Which specific deliverables did the small business own?
- Whose names show up on those deliverables?
- What engineering artifacts (diagrams, docs, code, reports) did the small business produce?
- What percentage of the small business's effort on this contract was delivered by their own W-2 employees versus their own subcontractors?
- How did the prime evaluate the small business's performance at each gate?
- What scope modifications have been made, and did small business participation scale with them?
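Primes that record these answers continuously fare better than ones that reconstruct them at audit time. As one illustration of the record-keeping involved, a minimal sketch of an effort log that makes the W-2 question above answerable at any gate; all deliverable names and hours here are hypothetical.

```python
# Hypothetical effort log for a small business sub on a contract.
# Goal: answer, at any gate, what share of the sub's effort was
# delivered by its own W-2 employees versus lower-tier subcontractors.

from dataclasses import dataclass

@dataclass
class EffortEntry:
    deliverable: str    # deliverable the hours were charged against
    hours: float
    w2_employee: bool   # True if worked by the sub's own W-2 staff

def w2_share(entries: list[EffortEntry]) -> float:
    """Percent of total hours delivered by the sub's own W-2 employees."""
    total = sum(e.hours for e in entries)
    if total == 0:
        return 0.0
    w2 = sum(e.hours for e in entries if e.w2_employee)
    return 100.0 * w2 / total

# Hypothetical month of effort across two owned deliverables.
log = [
    EffortEntry("evaluation harness", 320.0, True),
    EffortEntry("evaluation harness", 40.0, False),  # lower-tier specialist
    EffortEntry("model registry docs", 140.0, True),
]
print(f"{w2_share(log):.1f}% delivered by W-2 employees")
```

Keying every hour to a named deliverable also answers the first three questions in the list: ownership, names, and artifacts all trace back to the same records.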
Plans that can answer these questions crisply are the ones that survive. Plans that cannot are the ones that draw findings.
Setting up the AI/ML scope in the subcontracting plan
A few practical recommendations for primes structuring the plan at proposal time:
- Identify the AI/ML carve-out explicitly. Call out the percentage of small business participation that lives in technical AI/ML scope, not just overall dollar percentages.
- Name the subs where the contract permits. Teaming letters on pursuit, named subs on award where selected. Reduces ambiguity downstream.
- Tie scopes to subs to deliverables. "Sub X owns the evaluation harness deliverable" is a subcontracting commitment that can be tracked. "Sub X supports overall ML effort" is not.
- Budget real dollars. Pricing the scope to values that no sub can actually deliver at is a setup for later failure. Either the sub walks, the scope gets starved, or the prime absorbs the work — any of which breaks the plan.
- Identify contingency subs. Programs change. Subs change. A plan that accommodates substitution without blowing up the percentage model is a more resilient plan.
- Plan for mentor-protégé where structurally appropriate. Especially for multi-year primes, mentor-protégé can compound into a stronger small business pipeline for the prime over time.
Where this fits in our practice
We sub on federal AI/ML scopes where small business participation is measured by delivery, not by line items. See our teaming page for how we structure partnerships, our capabilities statement for the official one-pager, and our past performance for the work behind the claims.