Sharing a Step on the Path to Distributed Intelligence

by David Stout
September 25, 2025
Key Takeaways

  • The future of intelligence will be distributed, not centralized.
  • Artificial Super Intelligence will emerge from a civilization of models, not a single one.
  • Sovereignty, modularity, and federation are the path forward.

At webAI, we believe the future of intelligence will not be centralized.

It will be distributed — billions of nodes, each learning in context, collaborating securely, and refining together. This is how we get to Artificial Super Intelligence: not from a single monolithic model in a datacenter, but from a sovereign, resilient, living network acting as a civilization of models.

Most of the work we do to build this future happens quietly.

We’ve been developing runtimes, distributed networks, AI frameworks, and modular tooling designed to make AI sovereign and edge-ready. Day by day, the pieces are coming together.

This is a glimpse into how we’re thinking at webAI — and every so often, we share a piece of that progress publicly.

A New Paper

Last week, we released one such glimpse: Federated Learning with Ad-hoc Adapter Insertions: The Case of Soft-Embeddings for Training Classifier-as-Retriever.

It’s a technical paper, but the essence is simple: we found a way to let federated retrieval models adapt locally without violating privacy budgets or communication constraints. The mechanism — ad-hoc soft-embedding adapters — is lightweight and surprisingly powerful.
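For a rough intuition, here is a minimal sketch in PyTorch of the general soft-embedding adapter idea. It is not the paper’s implementation; the class name, dimensions, and token count are illustrative assumptions. A handful of trainable vectors is prepended to the inputs of a frozen encoder, so only those few parameters are ever trained on-device:

```python
import torch
import torch.nn as nn

class SoftEmbeddingAdapter(nn.Module):
    """Illustrative adapter: a few trainable 'soft' embedding vectors are
    prepended to the inputs of a frozen base encoder. Only these vectors
    are trained locally; the base model never changes."""

    def __init__(self, base_encoder: nn.Module, embed_dim: int, num_soft_tokens: int = 8):
        super().__init__()
        self.base_encoder = base_encoder
        for p in self.base_encoder.parameters():
            p.requires_grad = False  # the base stays frozen on-device
        # The only trainable parameters: num_soft_tokens x embed_dim.
        self.soft_tokens = nn.Parameter(torch.randn(num_soft_tokens, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        batch_size = token_embeddings.size(0)
        soft = self.soft_tokens.unsqueeze(0).expand(batch_size, -1, -1)
        # Prepend the soft tokens, then run the frozen encoder as usual.
        return self.base_encoder(torch.cat([soft, token_embeddings], dim=1))

# Toy usage with a small frozen transformer encoder.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
adapter = SoftEmbeddingAdapter(encoder, embed_dim=64)
out = adapter(torch.randn(2, 16, 64))  # shape: (2, 8 + 16, 64)
```

The appeal in a federated setting is that only the soft tokens, a few kilobytes at most, ever need to leave the device.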

It’s not the whole story, but it’s one of the steps.

Why It Matters

We’ve seen strong results — accuracy improvements from 12% to 99.9% and 2.6× faster training in distributed settings, backed by formal convergence and privacy guarantees.

But more importantly, this work validates a direction we’ve been investing in for years: modular, federated, sovereign AI.

  • On-device learning instead of cloud dependence
  • Adapters and modularity instead of one-size-fits-all retraining
  • Peer-to-peer intelligence instead of central command (see the sketch below)
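
To make those bullets concrete, here is a toy sketch of federated aggregation over adapter parameters alone. The function and weighting scheme are hypothetical, not our production protocol; the point is that peers exchange and average tiny adapter tensors instead of shipping full model weights:

```python
import torch

def aggregate_adapters(adapter_states, weights=None):
    """Hypothetical FedAvg-style aggregation over adapter-only state dicts.

    adapter_states: one dict per client, mapping parameter names to tensors
                    (e.g., just the soft-token tensors).
    weights: optional per-client weights, e.g., local dataset sizes.
    """
    if weights is None:
        weights = [1.0] * len(adapter_states)
    total = sum(weights)
    norm = [w / total for w in weights]
    # Weighted average of each named tensor across clients.
    return {
        name: sum(w * state[name] for w, state in zip(norm, adapter_states))
        for name in adapter_states[0]
    }

# Three clients each contribute an 8 x 64 soft-token tensor (a few KB),
# weighted by how much local data each trained on.
clients = [{"soft_tokens": torch.randn(8, 64)} for _ in range(3)]
global_adapter = aggregate_adapters(clients, weights=[100, 40, 60])
```

Because the payload is the adapter alone, per-round communication stays small no matter how large the frozen base model grows.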

One Step of Many

This paper is not the whole of our work. It’s a milestone on a much larger journey.

We’ve made a series of internal breakthroughs — in retrieval, federation, modular runtimes, and distributed orchestration — that together point to one conclusion:

Distributed AI is how we reach ASI.

What excites us is not just this result, but what comes next: connecting these components into a network that can learn faster, adapt locally, and operate everywhere — from a smartphone to a datacenter to an IoT node.

Looking Forward

We’ll keep most of our work under wraps until it’s ready. But from time to time, we’ll continue to share pieces of the journey publicly.

The full paper is here: arXiv:2509.16508
