
Federated learning for federal data silos.

April 5, 2026 · 14 min read · Cross-agency learning without data sharing, FedAvg limits, DP budgets, and what actually ships in 2026.

The real problem federated learning tries to solve

Two federal data stewards have similar problems, similar data, and a mission case for collaborative modeling. They cannot share raw data because of statute, classification, or policy. The naive path is a data-sharing agreement that takes two years to negotiate and often dies in the process. Federated learning offers a different path: train a shared model across the two silos without raw data leaving either side.

That is the pitch. The delivery in 2026 is more nuanced. Federated learning works, sometimes produces useful models, and carries real complexity. The right question on any program is not "should we do federated learning" — it is "is federated learning the simplest sufficient approach for this problem." Often the answer is no. When it is yes, these are the patterns that hold up.

Short version. Federated learning is real, shipping, and useful in specific 2026 federal contexts — mostly cross-hospital medical research, multi-base operational modeling, and cross-agency anti-fraud. It is oversold for most other use cases.

Federated learning architecture across federal agency nodes

```mermaid
flowchart LR
  A1[Agency Node A] --> T1[Local Model Training] --> G1[Gradient Update]
  A2[Agency Node B] --> T2[Local Model Training] --> G2[Gradient Update]
  A3[Agency Node C] --> T3[Local Model Training] --> G3[Gradient Update]
  G1 --> S[Secure Aggregation Server]
  G2 --> S
  G3 --> S
  S --> GM[Global Model]
  GM --> A1
  GM --> A2
  GM --> A3
```

Federated learning in one diagram

Server
  |
  |-- broadcast global model --> Client A (local data, local gradient)
  |-- broadcast global model --> Client B (local data, local gradient)
  |-- broadcast global model --> Client C (local data, local gradient)
  |
  <-- encrypted updates --
  |
  aggregate (FedAvg / FedProx / secure aggregation)
  |
  apply to global model
  |
  repeat

Every federated learning deployment is a variation on this loop. The variations matter.

FedAvg and its limits

FedAvg is the original algorithm: clients train locally for some epochs, send updated weights, the server averages them (weighted by local dataset size), and the cycle repeats. It is simple and it often works. It fails in specific, predictable ways that are the norm in federal settings.
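The aggregation step is small enough to sketch. A minimal NumPy version (not tied to any framework) of the weighted average:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

# Three clients; the third holds twice the data, so it gets twice the influence.
w_a, w_b, w_c = np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([2.0, 2.0])
global_w = fedavg([w_a, w_b, w_c], client_sizes=[100, 100, 200])  # -> [2.0, 2.0]
```

In a real deployment each client array is the flattened parameter vector after local training; the loop otherwise looks exactly like this.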

Non-IID data

Two agencies' data follow different distributions, so FedAvg converges slowly or not at all. Mitigations: FedProx (a regularization penalty on client drift), SCAFFOLD (variance reduction), and personalized federated learning (a shared base plus a per-client head).
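The FedProx idea fits in one line. A sketch assuming a plain gradient-step client: the proximal term mu * (w - w_global) pulls each local update back toward the global model, bounding drift on non-IID data.

```python
import numpy as np

def fedprox_local_step(w, w_global, grad, lr=0.1, mu=0.01):
    """One FedProx-style local update (illustrative, not a framework API).

    The extra mu * (w - w_global) term penalizes the local model for
    drifting away from the global model it started the round with.
    """
    return w - lr * (grad + mu * (w - w_global))
```

With mu=0 this reduces to plain local SGD, i.e. FedAvg's client step; raising mu trades local fit for stability of the global model.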

Client heterogeneity

One client has GPUs, another is CPU-only. FedAvg synchronizes on every round, so the slowest client dominates wall-clock time. Mitigations: asynchronous federated learning, stale-gradient updates, and per-round client selection.
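Per-round client selection is the simplest of those mitigations to sketch: sample a subset of clients each round, so a straggler only slows the rounds it actually participates in. (`select_clients` is an illustrative helper, not a framework API.)

```python
import random

def select_clients(client_ids, fraction=0.3, seed=None):
    """Pick a random subset of clients to participate in this round."""
    rng = random.Random(seed)
    k = max(1, int(len(client_ids) * fraction))
    return rng.sample(client_ids, k)

chosen = select_clients(list(range(10)), fraction=0.3, seed=1)  # 3 of 10 clients
```

Frameworks like Flower expose this as a configurable fraction on the server strategy; the mechanics are no more than this.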

Client imbalance

One client has 10x the data of the others. Weighted averaging gives them 10x the influence. The aggregate model effectively trains on their data.
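One common mitigation (illustrative, not from any specific framework) is to cap any single client's aggregation weight and renormalize, so a large silo contributes but cannot dominate:

```python
import numpy as np

def capped_weights(client_sizes, cap=0.5):
    """Size-proportional aggregation weights with a per-client cap.

    Normalize by dataset size, clip any single client's share at `cap`,
    then renormalize so the weights still sum to 1.
    """
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    w = np.minimum(w, cap)
    return w / w.sum()

# A 10x client drops from an 83% share to 75% with cap=0.5.
weights = capped_weights([1000, 100, 100], cap=0.5)  # -> [0.75, 0.125, 0.125]
```

The cap value is a governance decision as much as a technical one; it belongs in the program agreement alongside the privacy budget.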

Adversarial clients

One participant sends malicious updates to poison the global model. Mitigations: Krum, trimmed-mean aggregation, anomaly detection on updates.
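Trimmed-mean aggregation is small enough to sketch: sort each coordinate across clients, discard the extremes, and average the rest. A single poisoned update lands in the discarded tail instead of being averaged in.

```python
import numpy as np

def trimmed_mean(updates, trim=1):
    """Coordinate-wise trimmed mean over client updates.

    Drops the `trim` largest and `trim` smallest values per coordinate
    before averaging, so outlier (possibly malicious) updates are excluded.
    """
    u = np.sort(np.stack(updates), axis=0)
    return u[trim:len(updates) - trim].mean(axis=0)

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = honest + [np.array([100.0, -100.0])]  # one adversarial client
robust = trimmed_mean(poisoned, trim=1)          # -> [1.05, 0.95]
```

Krum and median-based aggregation follow the same pattern with different selection rules; all cost one sort per round, which is negligible next to local training.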

Privacy: what federated learning does and does not provide

"We do federated learning so we have privacy" is wrong. Federated learning keeps raw data local; it does not prevent model updates from leaking training data. Membership inference, gradient inversion, and model-extraction attacks are real. Privacy requires additional mechanisms.

Differential privacy

Add calibrated noise to client updates such that any single training example's influence on the output is bounded. Expressed as a privacy budget (epsilon, delta). DP is the strongest available guarantee; it also costs model quality and requires careful accounting across training rounds. Typical federal privacy budgets we see in 2026: epsilon 1-10 for research settings, epsilon 0.1-1 for stronger privacy needs.
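The per-update mechanics can be sketched in a few lines: clip the update's L2 norm to bound any one example's influence, then add Gaussian noise scaled to the clip norm. Mapping the noise multiplier to an (epsilon, delta) budget requires an accountant, which lives in libraries like Opacus or TensorFlow Privacy; the function below shows only the sanitization step.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip an update to L2 norm `clip_norm`, then add calibrated Gaussian noise.

    `noise_mult` is the noise standard deviation as a multiple of the clip
    norm; the (epsilon, delta) it buys must be tracked across rounds by a
    privacy accountant, which is out of scope for this sketch.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
```

Note that the accounting is cumulative: each training round spends budget, which is why the per-round count belongs in the written governance terms.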

Secure aggregation

A cryptographic protocol so the aggregator learns only the sum of client updates, not individual contributions. Protects against a malicious server and against aggregator compromise. Implementations: the Bonawitz et al. protocol, Flower Secure Aggregation, NVIDIA FLARE homomorphic aggregation. Adds overhead; the security is worth it when the aggregator is not fully trusted.
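A toy sketch of the pairwise-masking idea behind the Bonawitz et al. protocol: each pair of clients shares a random mask that one adds and the other subtracts, so individual masked updates look random to the server but the masks cancel in the sum. The real protocol adds key agreement, secret sharing, and dropout recovery on top of this core.

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Pairwise-mask client updates so only their sum is recoverable (toy sketch).

    For each client pair (i, j), a shared random mask is added to i's update
    and subtracted from j's. The server sees only masked vectors; summing
    them cancels every mask and yields the true aggregate.
    """
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
hidden = masked_updates(updates)  # individually random-looking; sum is [9, 12]
```

In production the masks derive from pairwise key agreement rather than a shared seed, which is what makes the protocol robust to a curious server.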

Trusted execution environments

Run aggregation inside an enclave (Intel SGX, AMD SEV-SNP, NVIDIA Confidential Computing). The aggregator sees the data but the hardware prevents the hosting party from inspecting it. Real in 2026, deployed in specific federal pilots, not yet standard.

Frameworks in honest perspective

| Framework | Strengths | Watch-outs |
| --- | --- | --- |
| Flower (flwr.ai) | Language-agnostic clients, huge community, active dev, production-used | You build a lot of the glue; privacy primitives are add-ons |
| NVIDIA FLARE | Healthcare/defense focus, homomorphic aggregation, strong secure aggregation | NVIDIA-centric; steeper curve |
| PySyft (OpenMined) | Strong privacy primitives, active research community | Historically unstable APIs; check current stability |
| TensorFlow Federated | Google-backed, well-documented simulation tooling | Production deployments are thinner than simulation |
| IBM Federated Learning | Enterprise integrations, policy engine | Smaller community; IBM-specific |

Federal use cases that actually work

  • Multi-hospital clinical research. NIH-funded federated studies across VA, DOD, and academic medical centers, training models without moving PHI. Deployed and published.
  • Anti-fraud across financial agencies. Treasury, IRS, and SBA cross-agency fraud signal sharing via federated training, updates only.
  • Multi-base readiness models. DoD installations train shared readiness or maintenance prediction models across bases without centralizing operational data.
  • Cross-domain threat models. Intel community partners training on each side of a classification boundary with secure aggregation and cross-domain transfer of model updates.
  • Multi-agency NLP fine-tuning. Shared language-model adapters across agencies with overlapping text domains (law enforcement, inspectors general, etc.).

When federated learning is the wrong answer

  • A data-sharing agreement is achievable. Then do the agreement. At the same data scale, federated training produces lower-quality models than centralized training.
  • The signal is weak across clients. If the useful pattern is in one silo, you are training noise from the others.
  • The clients have very heterogeneous data. You will spend the program on algorithmic tuning and get poor results.
  • The problem is really about evaluation, not training. Then federated evaluation (centralized model, distributed test data) is the answer.
  • The client count is small (2-3). Benefits are minor; complexity is large.
Federated learning is not a legal workaround. It is a technical pattern that still requires the same governance, MOUs, and audit posture as any cross-agency data effort. The artifacts change; the policy work does not.

Governance on federated programs

What must be agreed in writing before code is written:

  • Who owns the global model? What are the use rights for each participant?
  • What happens to the model if a participant withdraws?
  • What is the privacy budget per participant? How is it accounted?
  • What are the update-inspection rights — can a server-side operator read raw updates? Under what circumstances?
  • What are the anomaly-detection and client-removal procedures for adversarial participants?
  • What is the publication policy for results trained on the federated system?
  • What is the audit schedule and what evidence is captured per round?

Operational patterns we use

Central coordinator in a neutral boundary

Aggregator hosted in a GovCloud account separate from any participant's production account. Strong access controls, audit logging.

Secure aggregation mandatory

Any federated program with more than a single-agency threat model uses secure aggregation, not plain-average.

Differential privacy if the model is released

If the trained model will be used outside the training participants, DP is mandatory. If the model stays internal to the training group, DP is a program decision.

Per-round eval at each client

Each client runs the global model against their local held-out test set every round and reports metrics. Divergence between clients is the earliest signal of trouble.
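The divergence check itself can be as simple as tracking the spread of per-client held-out metrics each round. (`eval_divergence` is a hypothetical helper, and the 0.10 threshold is a placeholder, not a recommendation.)

```python
def eval_divergence(client_metrics, max_spread=0.10):
    """Flag when per-client held-out metrics for the global model diverge.

    client_metrics: {client_id: held-out accuracy (or other metric) this round}.
    Returns (spread, alarm); a widening best-to-worst spread is the earliest
    sign the global model is drifting toward one silo's data.
    """
    scores = list(client_metrics.values())
    spread = max(scores) - min(scores)
    return spread, spread > max_spread

spread, alarm = eval_divergence({"va": 0.90, "dod": 0.70, "academic": 0.88})
```

Plot the spread per round; a flat line is healthy, a rising one means revisit the aggregation weights or the non-IID mitigations before burning more privacy budget.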

Byzantine-robust aggregation by default

Trimmed mean or median-based aggregation; the computational overhead is small and the robustness is free insurance.

Where this fits in our practice

We evaluate federated learning as part of the broader cross-agency collaboration toolkit, including data-sharing agreements, synthetic data, and secure enclaves. See our synthetic data post for an adjacent approach and our MLOps on GovCloud post for the underlying infrastructure.

FAQ

What is federated learning and when do federal agencies need it?
Federated learning trains a model across multiple parties without sharing raw data. Each party computes gradients or model updates on its local data; updates are aggregated into a global model. Federal needs it when legal or policy constraints prevent data sharing across agencies or across domestic/international boundaries, but collaborative modeling is still a mission goal.
Does federated learning give you real privacy?
By itself, no. Model updates can leak training data. Real privacy requires differential privacy (noise added to gradients with a bounded privacy budget), secure aggregation (cryptographic protocol so the aggregator sees only the sum), or both. Federated learning without those is data sharing by another name.
What is FedAvg and where does it fail?
FedAvg averages client model weights, weighted by local dataset size. It fails when clients have non-IID data (different distributions), when clients have heterogeneous compute or network, and when one client can disproportionately influence the aggregate. All three are the norm in federal cross-agency settings.
What frameworks actually work in 2026?
Flower is the most widely used open-source federated learning framework, with language-agnostic clients and an active community. PySyft (OpenMined) is another viable option with stronger privacy primitives. NVIDIA FLARE targets healthcare and defense. TensorFlow Federated is still around but narrower.
Is federated learning a workaround for data-sharing agreements?
Sometimes, but it is not a free pass. A federated learning agreement still requires memorandums of understanding, data-access controls on each side, model-update-sharing terms, and joint governance of the aggregated model. The legal work is similar scope to a data-sharing MOU, just with different artifacts.
When is federated learning the wrong choice?
When a data-sharing agreement is feasible, federated learning adds complexity for no gain. When the shared signal is weak relative to local signal, federated learning hurts quality. When one participant dominates the data, FedAvg becomes a fancy way of training on their data only.


Considering federated learning across federal data silos?

We will tell you honestly whether federated learning solves your problem or whether a simpler data-sharing agreement would.