Why this matters right now
DoD SBIR solicitation 26.1 opened on April 13, 2026, the same day the reauthorization bill was signed into law. The release contains 115 topics spread across Army, Navy, Air Force, Space Force, DARPA, DTRA, MDA, and other DoD components. Topics are listed on the Defense SBIR/STTR Innovation Portal (DSIP). The window is compressed because the lapse pushed what would have been a late-2025 or early-2026 solicitation into mid-April, and close dates follow the normal rhythm from there.
If you try to read 115 topics the way you read three, you will drown. This post is the read-and-score process I use. It is opinionated. It is also not fancy — the whole thing fits in a spreadsheet, and a good first pass takes four to six hours.
How to read a DoD topic
Every DoD topic follows roughly the same shape. Learn the shape and you can skim fast.
Objective
One to three sentences at the top. States the technical outcome the sponsor wants. If the objective is fuzzy ("improve capability in area X"), the topic is often under-specified — that can be an opportunity or a trap, depending on whether you have direct access to the sponsor to clarify scope.
Description
The longest block. Explains the operational problem, prior attempts, why those failed or stopped short, and what the sponsor believes a successful solution looks like. Read it paragraph by paragraph. Underline every concrete technical requirement (data rates, accuracy thresholds, platform constraints, environment conditions). These are the rubric the evaluator will use, whether the sponsor admits it or not.
Phase I deliverables
Usually a feasibility study, sometimes a prototype at TRL 3 or 4, always a final report. Look for the phrase "demonstrate feasibility" vs. "build a prototype" — the former lets you lean on paper analysis, the latter forces you to budget for hardware or compute.
Phase II expectations
The section people skip. Don't. Phase II language tells you whether the sponsor has actual transition intent. Concrete transition language ("will integrate with Program of Record X," "the PEO has identified this as a gap," "anticipated Phase II ceiling of $Y") signals a real pipeline. Vague transition language ("potential integration with future systems") is a warning sign — the topic exists, but the money and the program manager to sustain Phase II may not.
References
The reading list at the bottom. A small but real signal. If the references are to in-house AFRL/ARL/NRL reports from the last 24 months, a specific government researcher is almost certainly behind the topic, and your proposal needs to respect what they have already found. If the references are all commercial sources, the topic is more open.
The scoring rubric
Score every candidate topic on five axes, 100 points total. Two thresholds rule most decisions — below either threshold, the topic is a no regardless of how interesting it is.
A topic under 60 out of 100 drops out. A topic under 15 on technical fit drops out regardless of total. The remaining set is your shortlist.
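In spreadsheet or code terms, the cutoff logic is a one-liner. A minimal sketch in Python — the axis names and point weights here are illustrative placeholders, not from the solicitation; substitute your own five axes:

```python
# Hypothetical rubric: five axes, 20 points each, 100 total.
# Axis names are illustrative -- only "technical_fit" and the two
# thresholds (total >= 60, technical fit >= 15) come from the text.
AXES = ["technical_fit", "team_depth", "transition_signal",
        "sponsor_access", "effort_to_write"]

def passes_thresholds(scores: dict) -> bool:
    """Apply the two hard cutoffs: total >= 60 and technical fit >= 15."""
    total = sum(scores[a] for a in AXES)
    return total >= 60 and scores["technical_fit"] >= 15

topic = {"technical_fit": 18, "team_depth": 14, "transition_signal": 12,
         "sponsor_access": 10, "effort_to_write": 9}
print(passes_thresholds(topic))  # total 63, technical fit 18 -> True
```

Note that the technical-fit floor is checked separately from the total: a topic can score 80 overall and still drop out if your team's fit is weak.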
Filtering 115 down to 3
You will not manually score 115 topics. You should not try. The filtering is a funnel.
Pass 1 — Keyword exclusion (30 minutes)
Pull the DSIP topic list as a table. Mark every topic where the title and objective use language that is outside your technical center of gravity. If your team builds LLM systems, topics on hypersonic materials or undersea acoustics are out. This is almost always 50 to 70 percent of topics.
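If the DSIP list exports as a table, Pass 1 is a few lines of code. A sketch with a hypothetical exclusion list for a team that builds LLM systems — the keywords and topic IDs are made up, and your export's column layout will differ:

```python
# Hypothetical exclusion keywords for an LLM-focused team.
EXCLUDE = ["hypersonic", "undersea", "acoustic", "propulsion", "warhead"]

def survives_pass1(title: str, objective: str) -> bool:
    """Keep the topic only if no exclusion keyword appears."""
    text = f"{title} {objective}".lower()
    return not any(kw in text for kw in EXCLUDE)

# Made-up topic rows: (topic_id, title, objective)
topics = [
    ("A26-001", "Hypersonic thermal protection materials", "..."),
    ("A26-002", "LLM-assisted logistics decision support", "..."),
]
shortlist = [t for t in topics if survives_pass1(t[1], t[2])]
print([t[0] for t in shortlist])  # ['A26-002']
```

Keyword exclusion is deliberately crude — it only needs to be good enough to cut the obvious misfits before the human read in Pass 2.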
Pass 2 — Objective read (2 to 3 hours)
For the 30 to 60 topics that survive keyword exclusion, read the one-paragraph objective. Mark the ones where your first reaction is "I know exactly who on the team would work on this." Be honest with yourself. If you hesitate, it is a no.
Pass 3 — Full score (2 to 3 hours)
For the 10 to 15 topics that survive objective read, read the full description, Phase I deliverables, Phase II language, and references. Score each on the rubric. Sort by total score. The top three to five become your write list.
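The sort at the end of Pass 3 can be sketched as follows, reusing the same two hard thresholds. Topic IDs and axis names are illustrative, not from the live list:

```python
def rank_shortlist(scored):
    """scored: list of (topic_id, {axis: points}) pairs after Pass 3.
    Drops anything below threshold, returns the top 5 by total score."""
    survivors = []
    for topic_id, s in scored:
        total = sum(s.values())
        if total >= 60 and s.get("technical_fit", 0) >= 15:
            survivors.append((total, topic_id))
    survivors.sort(reverse=True)              # highest total first
    return [tid for _, tid in survivors[:5]]  # write list: top 3 to 5

# Made-up topic numbers and scores for illustration.
scored = [
    ("N26-014", {"technical_fit": 18, "team_depth": 16, "transition_signal": 14,
                 "sponsor_access": 12, "effort_to_write": 10}),   # total 70
    ("AF26-031", {"technical_fit": 12, "team_depth": 20, "transition_signal": 18,
                  "sponsor_access": 16, "effort_to_write": 14}),  # total 80, fails fit floor
]
print(rank_shortlist(scored))  # ['N26-014']
```

The higher-scoring topic drops out here because it misses the technical-fit floor — which is the point of keeping that threshold separate from the total.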
Pass 4 — Gut check
For the top candidates, ask three questions:
- Who on my team signs the technical volume and presents at a kickoff?
- Do I have a plausible Phase II story that ends with a named program office buying?
- If I do not win this, do I regret the time spent?
If any answer is weak, drop the topic.
Three focused submissions beat twelve diluted ones.
Per-firm caps in the 2026 reauthorization explicitly penalize high-volume submitters. Evaluators read dozens of proposals per topic — the ones that win are the ones where the team's fit is obvious in the first page. Dilution is expensive.
Common mistakes
Four mistakes account for most of the bad submissions I have seen or written in past cycles.
1. Chasing topics outside your technical center
The topic sounds cool. The team has zero direct experience. You tell yourselves you can learn it in Phase I. Evaluators have read hundreds of proposals in that area and can detect unfamiliarity within the first technical paragraph. Save the effort for a topic you already know.
2. Assuming the listed PoC is the topic author
DoD topics are often drafted by a technical lead and routed through a contracting office. The listed point of contact may be a KO, a program manager, or a technical representative who did not write the text. Pre-solicitation questions should be specific enough that you learn from the answer regardless of who responds.
3. Writing for the topic text instead of the topic intent
The description is a snapshot of what the sponsor wrote months ago. The evaluator reads your proposal with what the sponsor knows today. Your proposal should address the text exactly and then add one or two paragraphs that demonstrate you understand where the problem has moved since the topic was written. That signals domain depth.
4. Burying the transition story
Phase I wins are decided on feasibility; Phase II wins are decided on transition. A Phase I proposal that does not articulate a credible Phase II transition path is starting the race 20 points behind. Put the Phase III commercial path in the first two pages. Do not bury it in the back.
The 30-day sprint from topic pick to submission
If you finish scoring this weekend, here is a defensible four-week sprint.
- Week 1: Final topic selection. One-page technical outline for each of three topics. Pre-solicitation questions submitted where DSIP allows.
- Week 2: First draft of technical volume for topic #1. Begin technical volume for topic #2. Cost volume draft for topic #1.
- Week 3: Technical volume drafts complete for all three. Internal red-team review. Resolve any open questions from topic PoCs. Cost volumes complete.
- Week 4: Final edits. Compliance matrix check against solicitation. Formatting pass. Submission through DSIP at least 48 hours before deadline (portals fail at the wire).
Four weeks is tight but not unrealistic for a disciplined small business with two to three domain experts and an administrative lead. It is unrealistic if you are starting from zero on topic research during week one.
Where AI/ML topics cluster in 26.1
Without naming specific topic numbers (check DSIP for the live list), AI/ML-heavy topics in recent DoD cycles have clustered in five areas. Expect 26.1 to look similar:
- Agentic systems for decision support. Mission planning, logistics, C2. Often DARPA or service-level AI shops.
- Computer vision and ISR exploitation. Overhead, airborne, ground. Usually Air Force, Space Force, or Army.
- Autonomy and swarm control. Navy and Air Force have led here for three cycles.
- Cybersecurity LLM applications. Threat triage, code analysis, log exploitation. Cross-service.
- Model assurance and evaluation. Red-teaming, adversarial robustness, prompt injection. Growing share every cycle.
Bottom line
115 topics is a lot. Most of them are not for you. The ones that are for you are usually obvious within the first sixty seconds of reading the objective. Build the funnel, do the scoring, write three strong proposals. A disciplined small business that submits three well-matched proposals per cycle will outperform a firm that submits fifteen.
Frequently asked questions
How many topics are in the 26.1 release?
115 topics released April 13, 2026 across Army, Navy, Air Force, Space Force, DARPA, DTRA, and other DoD components. Topics are hosted on DSIP.
How many proposals should we submit?
For most small businesses, three focused submissions beat twelve diluted ones. Per-firm caps in the 2026 reauthorization explicitly penalize high-volume submitters.
Is the listed point of contact the person who wrote the topic?
Not always. DoD topics are often written by a technical lead and routed through a contracting shop. The listed point of contact may be a KO, a program manager, or a technical representative. Write your pre-solicitation questions so you learn something useful regardless of who responds.
What TRL should Phase I and Phase II target?
Phase I typically targets TRL 2 to 4. Phase II targets TRL 4 to 6. Each topic states its own TRL expectations — read them.
What does "dual-use" mean, and why does it matter?
Dual-use means the technology has a defensible commercial market in addition to the defense use case. Dual-use topics are scored more favorably on Phase II transition and Phase III commercialization criteria, and dual-use firms are more resilient if a defense program office pulls back.
Can we submit more than one proposal to the same topic?
No. One firm, one proposal per topic. If your team has two competing approaches, pick the stronger one before submission.