
Noumenal: Thermodynamic Brains for Robots

Extropic — a thermodynamic computer company — just dropped a nuclear missile on the entire AI industry by releasing their new thermodynamic chip. And it’s a multi-warhead missile, one with incredible fallout: In addition to the chip, they will be dropping a number of breakthrough research papers through the end of the year, co-authored by our portfolio company Noumenal Labs, which is building a “thermodynamic brain” for robots based on the principles of Active Inference. 

Working closely with their key scientific advisor, Professor Karl Friston, the originator of Active Inference and the most-cited neuroscientist in history, the Noumenal Labs team integrates deep theoretical insight with practical engineering to create systems that do not merely compute; they understand, adapt, and evolve.

Noumenal has secured early access to the Extropic thermodynamic computer. Bill Gates won his decade by getting early time on scarce machines — Lakeside’s terminal, C³’s PDP-10, UW’s mainframes — logging thousands of hours before the world caught up. Noumenal is in this leading position today.

Thermodynamic computers are going to catch an entire industry off guard and reshape AI, in particular for applications in edge computing. Robotics is the main product-market-fit vertical: thermodynamic computers enable us to run extremely powerful, state-of-the-art AI models on edge devices like robots with only a tiny fraction of the energy and compute that running such models on classical CPUs and GPUs would require. Hence we can have edge applications that do not sacrifice intelligence for portability.

It is time for us to explain why we invested in Noumenal, what a thermodynamic computer is, and how it will enable embodied intelligence using Active Inference.

The Hardware Lottery

Computing breakthroughs don't happen through careful planning. They happen by happy accident, when algorithms serendipitously match available hardware.

In the late 2000s, Geoffrey Hinton’s group discovered that video cards, originally built for rendering graphics, were remarkably effective at the large-scale matrix multiplications needed for neural networks.

The Transformer architecture, introduced in 2017, then won big in the “hardware lottery”. Its reliance on large, regular matrix multiplications made it perfectly suited to GPU tensor cores, avoiding the sequential processing bottlenecks that had prevented previous forms of AI from scaling.

At their core, Transformers were selected as the backbone of modern AI not because they are the most performant architecture — not because they are fit for purpose — but because they are extremely easy to scale on existing hardware.

This serendipitous fit between the Transformer software architecture and GPU hardware architecture sparked the multibillion-dollar GPU era.

Now that we know that AI models are not science fiction but science fact, that they are possible and useful, the imperative is to reinvent our computers and upgrade model architectures.

Digital Computers

Today’s computers implement Boolean logic and are made out of inherently noisy transistors that act as binary switches. 

We have spent over 70 years engineering perfectly deterministic machines: high voltages suppress thermal noise, error correction prevents bit flips, cooling systems remove thermal fluctuations. This suppression of noise (aka stochasticity) and probability in favor of strict determinism is precisely what enables digital computers to implement the crisp (but frozen) calculations of binary logic.

Meanwhile, mainstream generative AI is fundamentally probabilistic or stochastic: Diffusion models like Stable Diffusion require iterative sampling at every generation step. Bayesian inference demands thousands of MCMC samples for uncertainty quantification. Energy-based models need expensive sampling during training. Active Inference systems continuously update probabilistic beliefs. 

This presents a profound paradox: we run probabilistic algorithms on deterministic machines, the machines fueled by the first revolution in mass chip production. To do so, we spend energy and chip-design complexity suppressing noise at the device level, then burn even more energy adding stochasticity back by emulating it in software. Current compute stacks are thus fundamentally misaligned with the algorithms we want to run: we simulate uncertainty on hardware designed to suppress it.
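To make that cost concrete, here is a minimal, illustrative Python sketch of a plain Metropolis sampler (the Gaussian target and step size are arbitrary choices, not anything from Extropic or Noumenal): every step spends deterministic arithmetic just to manufacture the pseudo-randomness the algorithm needs.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    return -0.5 * x**2                                     # toy target: a standard Gaussian, up to a constant

x, accepted, draws = 0.0, 0, 0
for _ in range(100_000):
    proposal = x + 0.5 * rng.standard_normal()             # pseudo-random draw #1, emulated in software
    if np.log(rng.random()) < log_p(proposal) - log_p(x):  # pseudo-random draw #2, emulated in software
        x, accepted = proposal, accepted + 1
    draws += 2

print(f"{draws:,} pseudo-random numbers generated for {accepted:,} accepted moves")
```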

The results of this fundamental mismatch are well known: GPU clusters that consume the power of an entire city. If current scaling laws hold, we will soon produce more energy than the earth can radiate to space and start heating our planet beyond its capacity to adapt. From the perspective of neuroscience and biology, the paradox borders on ridiculous. The contemporary approach to AI requires that we build and harness the power of a literal miniature sun to feed the required data centers. The human brain, on the other hand, requires only a banana and a glass of water for similar performance.

Fortunately, this disparity in computational demand and performance is in fact an enormous opportunity. Alternative approaches are inevitable, and Noumenal Labs is the only company in the world poised to bring to production the algorithms that run natively on the chips that will eclipse and ultimately annihilate first-generation hardware.

Thermodynamic Computers

The thermodynamic computer is a mass-manufacturable source of randomness that can sample from the most computationally useful distributions. A thermodynamic computer operates at the physical limit of computation by using thermal fluctuations as a resource, rather than suppressing them. Because these fluctuations have a temperature-dependent, statistically predictable structure, the device can shape and harness them for computation. 

Think of it as a programmable random number generator that you can “load” with any target distribution, so that it emits samples directly from that distribution, replacing the heavy transformations that classical digital devices use to turn uniform noise into the distribution they actually need.
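For contrast, here is a minimal Python sketch of the classical digital route the analogy refers to (the exponential target is an arbitrary illustrative choice): generate uniform pseudo-random noise, then push it through an inverse-CDF transform to obtain the distribution you actually want. A thermodynamic sampler would, on this picture, emit such draws natively.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000)          # uniform pseudo-random noise, itself emulated on deterministic hardware
samples = -np.log1p(-u) / 2.0      # inverse-CDF transform into an Exponential(rate=2) target
print("empirical mean:", samples.mean(), "(theory: 0.5)")
```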

In this paradigm, the physics itself is the sampler: Instead of simulating stochastic dynamics on a digital machine, which is extremely costly, the hardware embeds the energy landscape of the problem as a real physical potential, and the system’s state evolves under actual physical forces and thermal noise. This ultimately allows us to sample from the equilibrium distribution of this physical system.

So this isn't just physics-inspired computing. This is physics as computing. This is natural intelligence.

Thermodynamic computing directly accelerates the most expensive step in probabilistic AI: generating samples from complex probability distributions. Tasks such as Markov chain Monte Carlo inference, diffusion-based generative models, and energy-based learning all depend on repeatedly adding noise and relaxing toward equilibrium. This involves calculating the gradients that enable learning and generating high-quality random samples that keep the system from getting stuck in suboptimal solutions (so-called local optima).

On digital hardware, this means millions upon millions of simulated pseudo-random draws and gradient updates computed in software. 

In a thermodynamic computer, these operations happen in the fabric of the device itself: The downhill gradient becomes a real physical force acting on the system, while the required randomness arises naturally from intrinsic thermal noise. 

Instead of numerically integrating stochastic differential equations (that is, creating expensive simulations of this kind of physics in digital hardware), the thermodynamic hardware embodies them, allowing sampling, relaxation, and convergence to unfold as native physical processes rather than simulated ones.
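As a rough picture of what is being replaced, here is a minimal Python sketch that numerically integrates such a stochastic differential equation with the Euler-Maruyama scheme on a toy double-well energy (the energy function, step size, and temperature are illustrative assumptions, not a model of Extropic's hardware). This loop is the work a thermodynamic device would carry out as physics rather than as software.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_E(x):
    """Gradient of a toy double-well energy E(x) = (x**2 - 1)**2, standing in for a model's landscape."""
    return 4.0 * x * (x**2 - 1.0)

dt, kT, steps = 1e-3, 0.3, 200_000
x = 0.0
samples = np.empty(steps)
for t in range(steps):
    noise = np.sqrt(2.0 * kT * dt) * rng.standard_normal()   # thermal fluctuation, here merely pseudo-random
    x += -grad_E(x) * dt + noise                              # drift down the energy landscape plus noise
    samples[t] = x

# After burn-in the trajectory samples the equilibrium (Boltzmann) distribution p(x) ∝ exp(-E(x)/kT)
print("fraction of samples in the right-hand well:", (samples[50_000:] > 0).mean())
```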

This is what we mean when we say it’s not physics-inspired computing; it is physics as computing. This allows orders-of-magnitude improvements in speed and energy efficiency for inference, optimization, and generative modeling, the computational backbone of modern AI.

The performance claims are extraordinary and experimentally validated:

  • Energy Efficiency: Orders of magnitude more energy efficient than digital computing. A 2025 benchmark study showed 9,721× energy improvement over classical computing on graph optimization tasks, competitive with quantum annealers (Chowdhury et al., arXiv:2503.10302). SPICE simulations on 28nm CMOS predict 10^6× energy reduction for sampling operations (Van Brandt & Delvenne, 2025).

  • Speed: Orders of magnitude faster than digital computing. Published research demonstrates ~500× faster Markov Chain Monte Carlo sampling compared to software implementations (Camsari et al., 2023). FPGA prototypes achieve 50-64 billion flips per second, enabling contrastive divergence with 100,000 sampling sweeps, a regime completely impractical on conventional hardware (Niazi et al., 2023).

  • Practical Demonstrations: Comparable performance to state of the art, but more efficient. On MNIST digit classification, sparse Boltzmann machines achieved 90% accuracy with 30,000 parameters versus 3.25 million for dense implementations, a 100× parameter reduction, demonstrating that hardware constraints don't prevent high performance (Niazi et al., 2023).

And this is just the publicly available, academic research. Extropic has spent years developing patented, proprietary technology that demonstrates even greater efficiency. 

Probabilistic Robotic Platforms and Active Inference

Noumenal Labs is leading the way at the next frontier of AI because they are pioneering the only framework tailor-made to deliver embodied intelligence that acts in the uncertain, dynamic physical world. Autonomous ground vehicles in constrained and outdoor environments, dexterous manipulators, and aerial drones must make split-second decisions under uncertainty, where mistakes have costly, material consequences. Yet today’s deep networks output single-point predictions with no measure of confidence. These systems diverge drastically from humans in that they cannot represent, reason about, or minimize uncertainty. Most importantly, they have no way to calibrate their confidence because they have no way to determine or infer what they do not know, or when they do not know it.

Robots that achieve any commercial traction whatsoever must instead use probabilistic inference — implemented through methods like particle filtering, belief propagation, and grounded world models, all of which are monopolized by talent trained in the neuro-inspired Active Inference framework — to maintain and update probability distributions over latent states and actions.

In Active Inference, perception and control are unified: An agent continually updates beliefs about the world and selects actions that minimize expected free energy, effectively reducing prediction error through movement and perception. However, these Bayesian approaches are computationally demanding: up to 90% of their runtime goes into sampling, making real-time Active Inference impractical for embedded or mobile systems.
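To make that perception-action loop concrete, here is a minimal, hypothetical Python sketch of discrete Active Inference: a two-state, two-action toy agent (the likelihood, transition, and preference matrices are invented for illustration and are not Noumenal's models) that updates its beliefs from an observation and scores each action by its expected free energy, the sum of risk and ambiguity. Real systems face the same loop over vastly larger state and policy spaces, which is where the sampling cost explodes.

```python
import numpy as np

# Likelihood p(o|s): rows index observations, columns index hidden states (hypothetical values)
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
# Transition p(s'|s, action) for two candidate actions (hypothetical values)
B = {0: np.array([[0.8, 0.2],
                  [0.2, 0.8]]),
     1: np.array([[0.2, 0.8],
                  [0.8, 0.2]])}
# Log-preferences over observations: this agent "wants" to observe outcome 0
log_C = np.log(np.array([0.99, 0.01]))

def update_belief(prior, obs):
    """Perception: Bayesian belief update over hidden states given one observation."""
    posterior = A[obs] * prior
    return posterior / posterior.sum()

def expected_free_energy(belief, action):
    """Action scoring: risk (KL from preferred outcomes) plus ambiguity (expected observation entropy)."""
    qs = B[action] @ belief                     # predicted hidden-state distribution after acting
    qo = A @ qs                                 # predicted observation distribution
    risk = np.sum(qo * (np.log(qo + 1e-16) - log_C))
    ambiguity = -np.sum(qs * np.sum(A * np.log(A + 1e-16), axis=0))
    return risk + ambiguity

belief = np.array([0.5, 0.5])
belief = update_belief(belief, obs=0)                                   # perceive
G = np.array([expected_free_energy(belief, a) for a in (0, 1)])         # evaluate each action
policy = np.exp(-G) / np.exp(-G).sum()                                  # softmax over negative EFE
print("posterior beliefs:", belief, "action probabilities:", policy)
```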

Thermodynamic computing removes this bottleneck, unlocking unbounded intelligence at the edge. In Active Inference, every perceptual and motor update involves stochastic sampling — from posterior inference to policy selection. A thermodynamic computer performs these operations directly as physics: posterior sampling corresponds to the natural equilibration of p-bit networks encoding probability distributions, while policy exploration arises from the same thermal fluctuations that drive the hardware. What digital processors simulate through massively expensive loops of random draws, thermodynamic systems achieve inherently.
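Below is a rough digital emulation, in Python, of what that equilibration means: a tiny network of probabilistic bits with symmetric couplings, relaxed toward its Boltzmann distribution by Gibbs sweeps (the couplings, biases, and sweep count are arbitrary illustrative choices). On a thermodynamic chip this relaxation would happen physically; in software it costs a sigmoid and a pseudo-random draw for every bit in every sweep.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                            # a tiny network of 8 probabilistic bits
J = rng.normal(scale=0.5, size=(n, n))
J = (J + J.T) / 2.0                              # symmetric couplings between p-bits
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)                # per-bit biases
s = rng.choice([-1, 1], size=n)                  # random initial configuration

def energy(state):
    """Energy of a configuration; the network equilibrates toward p(s) proportional to exp(-energy(s))."""
    return -0.5 * state @ J @ state - h @ state

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for sweep in range(2000):                        # software needs many sweeps; hardware simply relaxes
    for i in range(n):
        local_field = J[i] @ s + h[i]            # effective field on p-bit i from its neighbours
        p_up = sigmoid(2.0 * local_field)        # Boltzmann probability of s_i = +1 at unit temperature
        s[i] = 1 if rng.random() < p_up else -1  # one pseudo-random draw per bit per sweep

print("equilibrated configuration:", s, "energy:", energy(s))
```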

By transforming sampling from an algorithm into a physical process, thermodynamic computing enables real-time, energy-efficient Active Inference — bringing adaptive, uncertainty-aware intelligence to edge applications, beginning with autonomous robots.

Noumenal stands to lead this profound paradigm shift, building thermodynamic brains that think and act in the same probabilistic, physical terms as the world itself.

Noumenal has become the leading force in Active Inference, the emerging framework that unites physics, neuroscience, and AI under a single theory of self-organizing intelligence. Working closely with Professor Karl Friston (advisor to Noumenal), the originator of Active Inference, the team integrates deep theoretical insight with practical engineering to create systems that do not merely compute — they understand, adapt, and evolve.

Noumenal Team

Noumenal Labs is the right team to bring applications of thermodynamic computing to industry. Programming thermodynamic computers is not merely difficult — it requires mastery across energy-based modeling, probabilistic programming, and free-energy minimization — a skill set possessed by only a handful of researchers worldwide. Noumenal’s cross-disciplinary expertise in these domains forms a powerful moat, positioning the company to define the standards and architectures of this new computational era. The founding team’s DNA is extraordinary and uniquely well suited to accomplishing its mission and vision:

Candice Pattisapu, Ph.D. — Chief Executive Officer & Co-Founder

Candice Pattisapu is a brilliant strategist, a seasoned operator, and an inspiring team leader. Her background is in cognitive science and Bayesian latent-knowledge modeling in educational contexts, focused on bridging human and machine intelligence. Her research on artificial models of episodic memory and event segmentation informs Noumenal’s work on data-compressed world modeling for thermodynamic computing. She later developed an Active Inference model of emotional state space, which has recently been adopted by Waymo’s autonomous R&D group.

Maxwell Ramstead, Ph.D. — Chief Science Officer & Co-Founder

Maxwell Ramstead is a field-defining Active Inference and Free Energy Principle researcher and co-author of many key Extropic papers, including the breakthrough white papers arriving through the end of the year. Alongside his mentor Karl Friston, Maxwell is a leading global authority on Active Inference and natural intelligence, driving Noumenal’s theoretical and applied R&D. Deeply embedded in the contemporary thermodynamic hardware industry, he has worked extensively on Active Inference algorithms that enable energy-efficient, physics-aligned computation — a key pillar of thermodynamic computing. Known in the field as the “Karl Friston Whisperer”, Maxwell is celebrated for translating Friston’s Free Energy framework into actionable insights for researchers, technologists, and investors alike.

Jason Fox — Chief Technology Officer & Co-Founder

Jason Fox is a seasoned builder and systems architect who (together with Maxwell) assembled the world’s first industry lab for Active Inference under Karl Friston’s guidance. A former Navy technologist and Microsoft engineer, Jason leads hardware-software integration for real-world embodied agents, applying thermodynamic principles to overcome sampling bottlenecks in robotics. His track record spans defense systems, pandemic response technologies, and humanitarian computer vision deployments.

Patrick Huembeli, Ph.D. — Founding Research Engineer

Patrick Huembeli leads Noumenal’s Thermodynamic Computing Lab, bridging statistical physics and intelligent systems. Formerly a Staff Scientist at Extropic, he architected platforms that fused energy-based models with thermodynamic hardware, pioneering the first core thermodynamic inference systems: physical samplers for AI. Trained in quantum information theory, Patrick brings rare end-to-end expertise — from stochastic physics and generative modeling to hardware acceleration and real-time inference on thermodynamic chips. His work turns inference from a simulated process into a physical computation substrate.

Mirko Klukas, Ph.D. — Founding Research Engineer

Mirko Klukas leads Noumenal’s Probabilistic Perception Lab, drawing on his background as a mathematician to advance the theoretical foundations of perception and inference in robotics. Using applied AI techniques like SLAM, probabilistic perception, and differentiable rendering, his work integrates mathematics, neuroscience, and machine learning to build structured, energy-efficient intelligent systems that operate in the wild. Formerly at Josh Tenenbaum’s Probabilistic Computing Group and the Fiete Lab at MIT, he has developed GPU-accelerated inference methods, multi-resolution Gaussian scene models, and spatial cognition algorithms — all crucial for thermodynamic acceleration of inference in physical AI.

Prof. Karl Friston, FRS, FMedSci, FRSB — Scientific Advisor

Karl Friston is the inventor of Active Inference and the Free Energy Principle, which provide the theoretical basis for thermodynamic computing. His work unifies perception, action, and learning under a single physical and mathematical framework for self-organizing intelligence. The most cited neuroscientist in history, Friston continues to define the scientific frontier of artificial intelligence and physics-based, energy-efficient AI.

Noumenal recognized early that thermodynamic computing, Active Inference and probabilistic programming will be the cornerstones of the robotics and embodied intelligence revolution. 

This insight led to the development of Noumenal’s flagship product (currently in stealth), designed to harness thermodynamic principles for radically efficient, adaptive machine intelligence.

Early Access to Thermodynamic Computing

As Seattle teens, Bill Gates and Paul Allen lucked into scarce time-sharing access — first via a Teletype terminal at Lakeside School, then deep hours on a PDP-10 at Computer Center Corporation, even trading bug-finding for computer time. That head start let them rack up thousands of programming hours before most peers had even touched a computer, setting the stage for Traf-O-Data and, later, Microsoft.

Noumenal has secured early access to Extropic’s thermodynamic computing hardware, positioning them among the first organizations in the world able to explore this new computational substrate. This early access phase will allow them to validate the core principles of probabilistic bits (p-bits) and to gain first-hand understanding of how thermodynamic devices behave in practice.

The first generation hardware is primarily designed for proof-of-concept evaluation, but it provides an extraordinary opportunity to experiment with real physical samplers rather than simulated ones. Noumenal will use this opportunity to:

  • Characterize the device’s speed, stability, and energy profile compared to digital simulations;

  • Prototype early use cases in probabilistic robotics and adaptive control;

  • Develop a programming and compiler stack for connecting thermodynamic devices with modern machine-learning frameworks;

  • Benchmark performance on constrained optimization and sampling tasks that form the computational backbone of Active Inference.

This phase will give Noumenal an insurmountable practical edge: the intuition and tooling to program the first generation of physical samplers, all well before the wider industry learns of their existence, let alone understands how to use them.

Looking Ahead: Physical AI and Active Inference

In the longer term, thermodynamic computers promise to unlock a new class of physical intelligence. Active Inference, an agentic framework inspired by neuroscience, has so far been limited by the sampling bottleneck. Every belief update and action selection requires thousands of stochastic samples, making real-time Active Inference in robots computationally prohibitive.

Thermodynamic hardware will remove this barrier. With probabilistic updates performed directly as physics, Active Inference agents will be able to infer, plan, and adapt in real time, not by simulating the world but by physically equilibrating with it. This is learning as adaptation, just like in nature. It enables a future of low-power, self-organizing robotic systems that learn and act through thermodynamic principles rather than purely digital computation. Natural, not artificial, intelligence.

We see this as the ultimate path to more brain-like AI, finally answering the question of why the human brain can do all its computation running on a banana and a glass of water (and maybe a cup of coffee) instead of gigawatts of power.

Noumenal’s early access and deep expertise in Active Inference position it to lead this transition, bridging robotics, physics, and AI into a single coherent discipline of Physical Intelligence.

Short NVidia Now

NVidia’s business model is premised on the idea that scaling the current architecture will unlock AGI. But this is prohibitive at both the large and the small scale.

At the large scale, the ultimate barrier is catastrophic cost scaling. Hyper-scaling current architectures will require trillions of dollars in new data centers and in power plants to feed them. From the point of view of natural intelligence and neuroscience, as we have seen, this is insane. The most powerful computing device in the known universe, the human brain, runs on approximately 20 watts. That’s not a data center; it’s a light bulb. If we need this much resource scaling to run intelligent applications, we are doing it wrong. This is not a small or trivial mistake. It will waste trillions of dollars in investment.

At the small scale, the ultimate barrier is the laws of physics. We are arriving at the end of Moore’s law, the empirical observation that for decades the density of transistors on a chip has roughly doubled every two years. Extrapolating this curve to 2030, we are faced with the alarming conclusion that the miniaturization of digital computers runs into physical limits. By 2030, transistor features will become so small that stochastic effects and quantum weirdness (like quantum tunneling) will begin to take over, making digital computation impossible to scale further. Moore’s law will become Moore’s wall. Here, in the old paradigm, noise is the enemy.

Thermodynamic computing involves switching teams, harnessing the power of noise to break through “Moore’s wall.” 

This will destroy the business proposition of companies like NVidia, which are premised on blindly scaling what works now with no consideration for what is coming.

It is time to short NVidia, and invest in the future of AI technology — a future led by Noumenal Labs and Extropic Corp.