A frontier lab building systems that learn
In stealth
We are changing how AI scales by changing how it learns.
Backpropagation, the standard learning rule in AI, is upstream of many of the technology's limits: training cost, data requirements, brittle long-horizon behavior, centralized deployment, and the inability of systems to learn continually or improve through use.
We are replacing it.
We are building a paradigm based on local, plasticity-based, distributed learning: a path to training systems that also learn from experience during inference. We believe this capability is essential for robust long-horizon agents, personalized models, adaptable robotics, efficient inference, and the next regime of model scaling.
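As an illustration of what "local, plasticity-based" means in contrast to backpropagation, here is a minimal sketch of a Hebbian-style update, in which each weight changes using only the activity of the two units it connects, with no global error signal. This is a generic textbook rule for illustration only, not a description of our method.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(w, pre, post, lr=0.01):
    """Local plasticity rule: delta_w[i, j] depends only on the
    postsynaptic activity post[i] and presynaptic activity pre[j],
    so no gradient needs to be propagated through the network."""
    return w + lr * np.outer(post, pre)

w = rng.normal(scale=0.1, size=(3, 5))   # 5 inputs -> 3 outputs
pre = rng.normal(size=5)                 # presynaptic activity
post = np.tanh(w @ pre)                  # postsynaptic activity
w = hebbian_update(w, pre, post)
print(w.shape)  # (3, 5)
```

Because the update is purely local, it can in principle run during inference as well as training, which is the capability described above.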
Changing the learning rule also changes the hardware frontier: requirements for memory, bandwidth, power, architecture, and interconnects all shift when learning happens at both training and inference time.
Our team has led work on biologically plausible learning across neuroscience, machine learning, AI accelerators, and memory nanotechnology. We have previously founded unicorns, deployed deep technology in Antarctica and in space, built products used by millions, and sold to dozens of Fortune 500 companies.
We have been building in private. Follow us.
