Why the numbers are hard
Before talking about trends, a reality check: the federal AI market is harder to measure than most reports imply. USAspending.gov captures contract obligations but does not tag "AI" consistently. Agency AI use case inventories describe what is deployed but not what it cost. SBIR data is clean but covers only a fraction of the spend. Industry reports combine all of the above with proprietary estimates and produce numbers that can differ by a factor of three.
Rather than pretend the market size is known to the nearest billion, this article looks at what is unambiguous: the direction of the trends, the shape of the distributions, and the patterns that shape outcomes for small businesses. On the questions "is AI spending growing" and "where is it growing fastest," the data converges clearly. On the question "exactly how big," honest analysts still disagree.
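One reason the tagging is inconsistent is that most analysts fall back on keyword matching over contract descriptions. A minimal sketch of that approach, with an illustrative keyword list that is an assumption on our part rather than any official taxonomy, shows both failure modes at once:

```python
import re

# Illustrative keyword list -- an assumption, not an official AI taxonomy.
AI_KEYWORDS = [
    r"\bartificial intelligence\b",
    r"\bmachine learning\b",
    r"\bdeep learning\b",
    r"\bnatural language processing\b",
    r"\bcomputer vision\b",
]
AI_PATTERN = re.compile("|".join(AI_KEYWORDS), re.IGNORECASE)

def looks_like_ai(description: str) -> bool:
    """Flag a contract description as AI-related by keyword match."""
    return bool(AI_PATTERN.search(description))

# Failure mode 1: false negative -- AI work described in mission language.
print(looks_like_ai("Predictive model development for claims fraud analytics"))  # False
# Failure mode 2: false positive -- boilerplate AI mention in a non-AI contract.
print(looks_like_ai("IT helpdesk support; familiarity with artificial intelligence a plus"))  # True
```

Widen the keyword list and the false positives grow; narrow it and the false negatives grow. That tradeoff, made differently by each report, is a large part of why published market sizes differ by a factor of three.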
Trend 1: Narrow AI obligations are growing faster than overall IT services
Across the past three cycles, obligations explicitly coded or described as AI on civilian and defense contracts have grown at roughly twice the rate of broader IT services spending. Even after discounting for tagging inconsistency, the direction is clear. Agencies are spending a bigger share of their IT dollar on AI every year, and the denominator (overall IT) is also growing.
Within "AI" spend, three subcategories dominate:
- Classical ML and data science services — production ML pipelines, analytics, decision support. The largest share by dollars, stable growth.
- Generative and agentic AI — smaller share in 2026 but growing fastest by rate. Many pilots; fewer large production awards yet.
- AI-enabling infrastructure — cloud, data engineering, GPU compute. Reported under IT rather than AI in many cases but essential to AI delivery.
Trend 2: DoD leads, HHS is the surprise
DoD components, collectively, spend more on AI than any civilian department — no surprise. The surprise in 2026 is HHS: NIH, CMS, FDA, and CDC combined now rival individual military services on AI obligation volume. Research AI at NIH, claims analytics at CMS, and post-market surveillance at FDA account for most of it; clinical ML at VA (a separate department, though often bundled into "federal health AI") adds a comparable stream. HHS is not the first name that comes to mind when a founder hears "federal AI," but by dollar volume it belongs in the top tier. See our agency-by-agency tour for specifics.
Other notable movers:
- Treasury and IRS — growing faster than most civilian peers on fraud, compliance, and forecasting AI.
- DHS CISA — AI-for-cyber obligations rising sharply, tied to network defense modernization.
- DOE national labs — large obligation volume concentrated in a small number of labs, high technical depth.
- Space Force — fastest-growing DoD component for AI as a share of total spend, driven by space domain awareness.
Trend 3: Average award size is bimodal
Federal AI awards cluster in two bands: a long tail of small awards ($150K-$2M, mostly SBIR and small prime contracts) and a thin head of very large awards (strategic DoD programs above $100M). The middle ($5M-$50M) is thinner than many expect.
| Award size band | Approx. share of count | Approx. share of dollars |
|---|---|---|
| Under $500K (mostly SBIR Phase I, small task orders) | ~55-65% | ~3-5% |
| $500K - $2M (Phase II, small task orders) | ~20-25% | ~8-12% |
| $2M - $10M (task orders, mid-size BPAs) | ~8-12% | ~15-20% |
| $10M - $100M (large task orders, vehicles) | ~3-5% | ~30-40% |
| $100M+ (strategic programs, large IDIQs) | ~0.5-1% | ~25-35% |
What that means: if you count awards, small business dominates. If you count dollars, primes dominate. Both narratives are true at the same time, and readers who cite either one as the whole picture are misleading themselves.
Trend 4: AI work is increasingly flowing through existing vehicles
Five years ago, a new agency AI initiative often meant a new contract. In 2026, it more often means a task order under an existing IDIQ. The major vehicles — GSA OASIS+ and Alliant 2, NITAAC CIO-SP3 (and the pending CIO-SP4) and CIO-CS, and DoD IDIQs like ESI-3 and the component-level vehicles — absorb the majority of new AI work.
For a small business, the implication is critical: teaming with a prime that holds a relevant vehicle is often the fastest path to AI revenue in the agency, even if the small business could win a standalone contract in principle. (See our teaming playbook.)
Trend 5: SBIR continues to outperform as a small-business entry
SBIR Phase I and Phase II awards remain the highest-leverage entry for small AI firms. The data shows:
- DoD SBIR AI-labeled topics have grown roughly 30-50% in count over the last three cycles.
- NIH SBIR AI content is heavier in imaging, biomedical NLP, and research informatics.
- NSF SBIR Phase I AI cohort has expanded significantly, driven by the broad AI topic areas.
- Phase III transitions from AI SBIRs remain uneven — a winning Phase II is not an automatic Phase III, and transition planning matters more than proposal writing.
For detailed win-rate analysis see our SBIR agency comparison.
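The portfolio logic behind proposing in volume can be made concrete. With an assumed per-proposal Phase I win rate (the 15% figure below is a hypothetical for illustration, not an agency statistic, and it treats proposals as independent, which real portfolios are not):

```python
def p_at_least_one(win_rate: float, n_proposals: int) -> float:
    """Probability of at least one award from n independent proposals."""
    return 1 - (1 - win_rate) ** n_proposals

# Hypothetical 15% per-proposal Phase I win rate -- an assumption, not data.
for n in (4, 8, 12):
    print(f"{n:2d} proposals: {p_at_least_one(0.15, n):.0%} chance of at least one win")
```

At that assumed rate, moving from 4 to 12 proposals roughly doubles the odds of landing at least one award, which is why the recommendation later in this article is a portfolio of 8-12 proposals rather than a handful of carefully chosen shots.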
Trend 6: OTAs are quietly becoming a bigger channel
OTA (Other Transaction Authority) consortia — AMTC, S2MARTS, C5, NSTXL, TReX, and the service-specific consortia — have grown faster than traditional contracts over the past three years. AI prototypes in particular flow through OTAs more often than through FAR-based awards when speed matters. A typical OTA award runs 4-8 weeks from solicitation to award, versus 4-8 months for a comparable FAR-based procurement. Prototype-to-production transition via OTA is also more flexible than a traditional Phase III.
If your firm has not joined a relevant OTA consortium yet, 2026 is the year. Membership fees are modest, opportunity flow is significant, and the learning curve is short.
Trend 7: Cloud-native AI is the default; monolithic AI is dying
Agencies increasingly reject AI proposals that assume on-premises deployment or custom infrastructure. The expected default is cloud-native on a FedRAMP-authorized platform, with clear control inheritance from the platform and a clear model-and-inference architecture. Proposals that do not articulate this posture cleanly lose technical points.
The model-service landscape has settled into three practical options (we compare them here): Amazon Bedrock on AWS GovCloud, Azure OpenAI on Azure Government, and Google Vertex on Assured Workloads. Each has advantages; all are production-ready for federal work.
Trend 8: Compliance and responsible-AI dollars are rising separately
A notable slice of 2026 AI spend is on the compliance and governance side of AI itself — model evaluation, red-teaming, documentation, monitoring for bias and drift, independent verification. NIST's AISI and the AI evaluation programs have attracted substantial civilian spend. Within agencies, responsible-AI offices are buying tooling and third-party assessment services. This is a newer and narrower market, but it is growing fast and has less incumbency than core AI delivery.
A firm with depth in AI evaluation, safety testing, or responsible-AI operations has a cleaner runway in 2026 than one that looks identical to every other AI delivery firm.
Trend 9: Workforce AI training is its own budget line now
Federal agencies are spending measurably on workforce AI literacy, upskilling, and role-specific AI training. Not modeling budget — training budget. OMB M-24-10 implementation, the AI in Government Act, and the follow-on executive actions all create explicit workforce obligations. Firms that can deliver federal-appropriate AI training (not consumer online courses) see a growing and under-contested opportunity.
Trend 10: The prime concentration is loosening, slowly
A decade ago, roughly a dozen primes captured the clear majority of federal AI-labeled work. By 2026, the top-twenty concentration remains high on dollar volume but the middle of the market — the $20M-$100M band — has meaningfully more mid-tier and specialist firms winning directly. This is partly policy (SBA goals, small-business carve-outs), partly the rise of specialist vendors with genuine AI depth, and partly that primes are stretched across too many capability areas to dominate every one.
For a small business, this matters less than the overall pattern: primes still capture most dollars, but specialist small firms win more directly than they did five years ago, especially when they bring genuine technical depth and disciplined federal-delivery hygiene.
Where the market is going
Three things to watch over the next 12-18 months:
- cATO becomes the default for cloud-native AI systems. Continuous ATO changes the spending pattern toward sustained operations and away from large one-time authorization events. Firms that deliver on cATO will capture a growing share of post-award dollars.
- Agentic AI moves from pilot to production. 2026 is the year several high-visibility pilots transition into production programs. Firms positioned on these pilots will see Phase III flows start to land.
- SBIR reauthorization reshapes the middle market. The 2026 reauthorization (details here) introduces new strategic-initiative set-asides and tighter Phase III metrics. The downstream effect will be visible in 2027 award data.
Implications for a small AI firm
Distilled into a firm-level strategy:
- Concentrate on two or three agencies where mission fit is real.
- Run an SBIR portfolio of 8-12 proposals per year aimed at those agencies.
- Join two OTA consortia relevant to your capability area.
- Team with at least one prime that holds a relevant IDIQ in your target agency.
- Build cloud-native, compliance-aware AI delivery as your default — not a retrofit.
- Develop an AI evaluation / responsible-AI capability as a secondary revenue line; it grows faster than delivery and has less incumbency.
None of this is novel. All of it is hard. The firms that execute it consistently are the ones that will look back at 2026 from 2029 as the year they caught the curve.
Frequently asked questions
How big is the federal AI market?
Narrow AI-labeled obligations are in the single-digit billions annually. Broader definitions that include AI-enabling infrastructure run much higher. The growth rate is faster than that of the overall IT services market.

Which agencies spend the most on AI?
DoD collectively dominates. Among civilian agencies, VA, HHS (NIH, CMS), Treasury, and DHS lead. DOE is the most technically concentrated.

What is the typical award size?
Bimodal. Most awards by count are under $2M (SBIR and small task orders). Most dollars sit in large awards over $10M. Strategic DoD programs can exceed $500M.

Do primes or small businesses win more?
By dollar volume, primes dominate. By award count, small businesses are a significant share. The gap is narrowing but not closing on dollars.

Where can I find the underlying data?
USAspending.gov, SAM.gov, DSIP for DoD SBIR, grants.gov for research grants, and agency AI use case inventories. No single source is complete; combine them.

Are OTA consortia worth joining?
Yes. OTA consortia absorb a growing share of federal AI prototype work, with shorter timelines and more flexible transitions than traditional FAR-based awards. Consortium membership is low-friction and worth the fee for active firms.