We Can, Must, and Will Simulate Nematode Brains

Michael Skuhersky

Scientists have spent over 25 years trying — and failing — to build computer simulations of the smallest brain we know. Today, we finally have the tools to pull it off.

A near-perfect simulation of the human brain would have profound implications for humanity. It could offer a pathway for us to transcend the biological limitations that have constrained human potential, and enable unimaginable new forms of intelligence, creativity, and exploration. This represents the next phase in human evolution, freeing our cognition and memory from the limits of our organic structure.

Unfortunately, it’s also a long way off. The human brain contains on the order of one hundred billion neurons — interconnected by up to a quadrillion synapses. Reverse-engineering this vast network would require computational resources far exceeding what’s currently available. Scientists seeking a proof of concept for whole brain emulation have had to turn to simpler model organisms. And by far the simplest available brain — at just 302 neurons — belongs to the nematode Caenorhabditis elegans.

Scientists have been working on the problem of simulating C. elegans in some form or another for over 25 years. So far, they’ve been met with little success. But with today’s technology, the task is finally possible, and — as I’ll argue — necessary. 

Motion patterns of C. elegans. Credit: Hiroshima University, Osaka University

A brief history of worm brains

The biologist Sydney Brenner became interested in C. elegans as a model organism for developmental biology in the 1960s. Its simplicity and small size made it an ideal lab subject. In 1986, John G. White, a scientist in Brenner’s research group, produced a nearly complete map of the neural connections that make up the C. elegans brain — what scientists now call the connectome. As computers became more accessible, other scientists started building on Brenner’s work. Ernst Niebur and Paul Erdős kicked things off with a biophysical model of nematode locomotion in 1991. Two different teams (one at the University of Oregon and the other in Japan) published plans for building more ambitious models in the late 1990s. Both would have utilized White’s work on neural circuitry. Unfortunately, neither got off the ground.

In 2004, the Virtual C. elegans project at Hiroshima University got somewhat further: they released two papers describing their model, which simulated the nematode’s motor control circuits. The simulated nematode could respond to virtual pokes on its head, but it didn’t do much else. And even this was, arguably, not a true simulation. Although the researchers had a map of the nematode’s neurons, they didn’t know their intrinsic biophysical parameters — that is, the precise electrical characteristics of the neurons and the connections between them. Instead, the researchers used machine learning to produce a set of values for each neuron that made their simulated nematode respond to a poke the way a real one would. As a result, this approach was not entirely grounded in biological reality — a recurring theme that would surface in several future simulation attempts.

That is where things stood at the dawn of the 2010s. While work continued on simulating nematode locomotion, there was no progress on simulating a nematode’s brain — let alone a realistic one. Then, on January 1st, 2010, the engineer Giovanni Idili tweeted at the official account of the Whole Brain Catalog, a project to consolidate data from mouse brains: “new year's resolution: simulate the whole C.Elegans brain (302 neurons)!” Stephen Larson, then a neuroscience grad student at U.C. San Diego, noticed the tweet, and by August he was pitching the idea at conferences. By early 2011, Larson and Idili had put together a team to start work on what would become the OpenWorm project — a decentralized effort by academics to create a complete, realistic, and open source model of C. elegans.

This was a heady time to be interested in simulating extremely tiny brains. Over the next few years, OpenWorm published a series of papers and model updates. In 2013, they hosted their first conference in Paris and landed an optimistic story in The Atlantic (title: “Is This Virtual Worm the First Sign of the Singularity?”). Meanwhile, the researcher David Dalrymple was working on a parallel project at MIT, which he dubbed Nemaload. OpenWorm scientists largely used data from dead nematodes, but Dalrymple wanted to use the then-new technique of optogenetics to study living specimens. Optogenetics allows scientists to control neurons and other cells with light. In this case, the technique could be used to collect data on how a nematode’s brain responds to different stimuli by perturbing it thousands upon thousands of times. In a 2011 comment on LessWrong, Dalrymple wrote: “I would be Extremely Surprised, for whatever that's worth, if this is still an open problem in 2020.”

It’s now 2025, and nematode simulation remains an open problem. Dalrymple abandoned Nemaload in 2012. OpenWorm still exists, but a lack of available data has kept it from making substantial progress toward a truly scientific whole brain simulation over the past ten years. Occasionally, more modern (though still heavily assumption-based) simulations are published, including integrative models that strive to make fewer assumptions. We’re not quite where we were in 2010: we have much better data on the C. elegans nervous system and — as I’ll discuss later — much better tools to study it. But we aren’t much closer to simulating a whole brain.

What went wrong? Why has it taken over 25 years to build a working computer simulation of one of the simplest brains known to mankind? And, more importantly, why do I think that this time we can actually pull it off?

Why we got stuck

Before explaining what happened, we should ask a more fundamental question: what does it mean to successfully simulate a brain? This is a topic where it’s important to be specific. The term “simulation” in academic neuroscience often evokes the notorious failures of the Human Brain Project. In 2013, neuroscientist Henry Markram secured about 1 billion euros from the European Union to “simulate the human brain” — a proposal widely deemed unrealistic even at the time. The project faced significant challenges and ultimately did not meet its ambitious yet vague goals. These events cast something of a stigma on brain simulation research, making it especially important for those in the field to set clearer, more realistic goals with concrete milestones along the way.

What makes a good simulation is a debate in itself, so I’ll just share my view: a good simulation of a nervous system is one that both accurately replicates its functionality and reliably predicts the future activity of a real system under the same initial conditions. That is, a simulated nematode in a simulated plate of agar should behave the same way as a real nematode in a real plate of agar. If we disturb the simulation — say, by poking or shining a light on it — it should respond the same way the real nematode would. And it should keep acting like a real nematode over time, instead of accumulating more error as time goes on. 
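
To make this criterion testable rather than rhetorical, here is a minimal sketch, in Python with entirely invented numbers (nothing below comes from a real tracking setup), of how one might score a simulation against a recording: measure the gap between the real and simulated worm’s position at each timestep and check whether it stays flat or grows.

    # A minimal scoring sketch (toy data, not from any real tracking rig):
    # a good simulation keeps its error roughly flat over time; a bad one drifts.
    import numpy as np

    rng = np.random.default_rng(0)
    steps = 1000
    real = np.cumsum(rng.normal(0, 1.0, size=(steps, 2)), axis=0)         # stand-in worm track
    sim = real + np.cumsum(rng.normal(0, 0.05, size=(steps, 2)), axis=0)  # simulator with drift

    error = np.linalg.norm(real - sim, axis=1)   # per-timestep distance, real vs. simulated
    print(f"mean error, first 100 steps: {error[:100].mean():.2f}")
    print(f"mean error, last 100 steps:  {error[-100:].mean():.2f}")      # growing => diverging

The same comparison extends beyond position to any measurable behavior: responses to pokes, reversals, or whole-brain activity traces.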

This definition can help us clarify what is and isn’t simulation. In October 2024, a consortium of scientists across 127 institutions published the complete connectome of the fruit fly, Drosophila melanogaster. This is a massive accomplishment by any standard: it is only the second complete connectome ever assembled, after that of C. elegans, and contains nearly 140,000 neurons (compared to C. elegans’s 300). The success of the project, called FlyWire, has rekindled interest in brain simulation. And, in a sense, the FlyWire connectome can be used to simulate a fruit fly. When Philip Shiu, a researcher on the project, test-‘fired’ the neurons responsible for sensing sugar, the model predicted that other neurons that extend the fly’s proboscis would fire, as they would in a real fly. Other researchers have since used Shiu’s model to accurately predict neural patterns involved in the fly’s sense of taste, grooming, and locomotion.

Shiu’s model represents an important advance in our understanding of fruit fly brains, but it isn’t really a simulation. (Nor is it trying to be; Shiu himself has been clear that the model is extremely simplified and makes assumptions about key parameters governing how neurons behave.) While the model can successfully predict the behavior of particular groups of neurons, it cannot mimic the exact functionality of an entire fly brain. That’s because the FlyWire model is missing the same thing OpenWorm and other nematode simulation attempts were missing: good data on the relationship between neural structure and neural function.
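
To see the spirit of a connectome-only model, here is a five-neuron toy network in Python. The wiring, weights, and thresholds below are all invented; the point is that the connectome supplies only the synapse counts, while every other parameter has to be assumed.

    # A toy, five-neuron illustration (all numbers invented) of a connectome-only
    # model: integrate-and-fire neurons wired by synapse counts, with every
    # parameter the connectome cannot supply -- weight per synapse, threshold,
    # leak -- simply assumed uniform across neurons.
    import numpy as np

    n = 5
    C = np.zeros((n, n))       # C[j, i]: synapse count from neuron j to neuron i
    C[0, 2] = 8                # hypothetical "sugar sensor" (0) -> interneuron (2)
    C[2, 4] = 12               # interneuron (2) -> "proboscis motor" (4)

    w_per_synapse = 0.3        # assumed, not measured
    tau, v_thresh = 10.0, 1.0  # assumed, identical for every neuron

    v = np.zeros(n)
    drive = np.zeros(n)
    drive[0] = 2.0             # experimenter 'fires' the sensory neuron

    fired = np.zeros(n, dtype=bool)
    for step in range(20):
        spikes = v >= v_thresh
        fired |= spikes
        v = np.where(spikes, 0.0, v)                             # reset after a spike
        v = v * (1 - 1 / tau) + C.T @ spikes * w_per_synapse + drive

    print("neurons that ever fired:", np.flatnonzero(fired))     # 0, then 2, then 4

A real brain, of course, has no reason to share one weight and one threshold across all of its neurons; that uniformity is precisely the assumption standing in for the missing data.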

Think of the connectome as a map of the brain. It can tell us how neurons connect to each other through electrical and chemical synapses. But while it reveals which neurons connect to one another, it doesn’t tell us anything about how those connections work. To fully model a brain, we need to understand the biophysical parameters governing each neuron’s behavior. These include not only the variable strengths of synapses (in neuroscience, these are called weights) but also each cell’s membrane properties, such as capacitance, and the shapes of its dendrites and axons, which affect how electrical signals propagate. We need to know both a neuron’s firing threshold and how that threshold changes as the animal learns new things (learning involves shifts in both synaptic weights and the intrinsic properties of neurons themselves). A simulation based only on a static connectome can’t learn — so it won’t behave very much like the real creature it’s trying to simulate.
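
One way to make the gap concrete is to write down the textbook single-compartment equation for a neuron’s voltage (a generic form, not the specific model used by any of these projects):

    C_i \frac{dV_i}{dt} = -g_i (V_i - E_i) + \sum_j w_{ij} s_j(t)

The connectome tells us only which terms appear in the sum: which neurons j synapse onto neuron i. The membrane capacitance C_i, the leak conductance g_i and resting potential E_i, the synaptic weights w_ij, and the threshold at which neuron i fires must all come from somewhere else, and in a learning animal, several of them are moving targets.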

Unfortunately, learning the dynamic biophysical features of a living brain is much harder than understanding its structure (which, as we’ve seen, is hard enough). The primary technique used to map a connectome is electron microscopy. Because electrons have a wavelength up to one hundred thousand times smaller than that of visible light, they can be used to produce images at a much higher resolution than light microscopes can. But electron microscopy has a serious disadvantage: it can only be used on sliced brain tissue, so it can’t tell us how a living brain responds to stimuli or changes over time. The technique can give us extremely detailed, high-quality images, but it can’t tell us a neuron’s electrical characteristics, like the strength of its synapses or how its membranes store electrical charge.

For decades, the only way to learn such things was through a technique called patch clamping. The advantage of patch clamping is that it is highly accurate. The disadvantage is that it requires the painstaking placement of electrodes on each individual neuron. Even with effort, it’s only feasible to patch clamp about three neurons at once, making it a less-than-ideal choice for capturing information about neural activity throughout the whole brain.

This is where things stood when earlier attempts to simulate C. elegans stalled out. It was a problem of timing: in 2013, the tools that would let us understand what happens inside neurons either didn’t exist or weren’t ready for practical use.

New ways to see

As C. elegans simulation research was losing steam, other researchers were pushing forward our ability to observe cells. First, advances in optical microscopy made it possible to capture fast, relatively sharp images of living cells without destroying them. Since the late 1950s, biologists have relied on confocal microscopes, which use a tiny pinhole to block out-of-focus light. This produces higher-resolution images, but the method is slow, since capturing a whole sample means scanning it point by point. That is a serious problem for studying traits that change rapidly (like neuronal activity). This is where modern techniques like light sheet microscopy prove particularly useful. Instead of focusing light through a point, light sheet microscopes use a laser sheet to illuminate an entire 2D cross-section of a sample. The process is dramatically faster and gentler on tissue than traditional confocal methods.

Light sheet microscopes have existed since the 1990s, but early versions of the technology struggled to capture fast intracellular processes. That changed with a series of innovations in the early 2010s. First came techniques that allowed optical microscopy below the diffraction limit — the smallest distance between two points at which an optical system can still tell them apart. For visible light, this distance is between 200 and 250 nanometers: too big to distinguish most cellular features. Super-resolution microscopy pushed resolution to 100 nanometers and below. Another major advance was DiSPIM,¹ invented in 2014. In a light sheet microscope, the light illuminating the sample has to be perpendicular to the camera picking it up; originally, this meant that the camera and the light sheet were part of separate assemblies. DiSPIM microscopes use two perpendicular lens assemblies, each equipped with a light source and a camera. This approach doubled the speed with which the microscope could capture images of living samples, and ensured that images could be reconstructed at the same resolution across all three dimensions. In 2015, a group at Columbia University developed a method called SCAPE,² which uses an oblique sheet of light to scan and image a sample through a single lens assembly. SCAPE is even faster than earlier light sheet techniques, making it particularly useful for tracking rapid neuronal activity.
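
To put a number on that diffraction limit: the classic Abbe formula says the smallest resolvable separation is the wavelength divided by twice the objective’s numerical aperture. Here is a quick check in Python, with an aperture value I’ve assumed as typical rather than taken from any particular instrument:

    # Abbe diffraction limit: d = wavelength / (2 * numerical_aperture)
    wavelength_nm = 550         # green light, middle of the visible spectrum
    numerical_aperture = 1.3    # assumed: a good immersion objective
    print(f"resolution limit: {wavelength_nm / (2 * numerical_aperture):.0f} nm")  # ~212 nm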

Another set of innovations has to do with what the microscopes are looking at. All the methods we’ve discussed depend on fluorescent reporters — engineered proteins that fluoresce under certain conditions, such as the presence of a specific protein or the expression of a particular gene. In our case, that trigger is calcium. When a neuron fires, calcium ions flood into the cell, making calcium influx a reliable proxy for neuronal activity. The key breakthrough here was the development of the GCaMP6 family of reporters by a team at the Janelia Research Campus between 2013 and 2015. This new generation of calcium indicators was brighter and more sensitive than earlier versions, and it quickly became the go-to tool for imaging neuronal circuits in living organisms. While GCaMP6 revolutionized calcium-based imaging, even more precise measurements could come from fluorescent reporters that respond to voltage directly. These already exist for larger organisms and are actively being developed for use in C. elegans.
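
The logic of calcium imaging fits in a few lines. The sketch below is my own toy forward model, not drawn from any GCaMP paper, and the decay constant is an assumption: each spike adds a pulse of calcium that decays exponentially, and the reported quantity is ΔF/F, the fractional change in fluorescence over baseline.

    # Toy forward model: spikes -> calcium -> fluorescence (all values invented).
    import numpy as np

    rng = np.random.default_rng(1)
    dt = 0.01                          # seconds per sample
    spikes = rng.random(2000) < 0.02   # ~2 Hz random spike train over 20 s
    tau_decay = 0.6                    # assumed indicator decay constant, seconds

    calcium = np.zeros(len(spikes))
    for t in range(1, len(spikes)):
        calcium[t] = calcium[t - 1] * np.exp(-dt / tau_decay) + spikes[t]

    baseline = 1.0                              # resting fluorescence, arbitrary units
    fluorescence = baseline + 0.5 * calcium     # assumed linear indicator response
    dff = (fluorescence - baseline) / baseline  # the trace read off the microscope
    print(f"{spikes.sum()} spikes -> peak dF/F of {dff.max():.2f}")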

Today, the combination of calcium imaging and microscopy techniques like DiSPIM and SCAPE means that we can see how neurons behave throughout the entire C. elegans brain — in real time. The next challenge is to actually do it. And to do it a lot. Our understanding of the C. elegans connectome has improved significantly since White’s groundbreaking work in 1986. But White’s connectome was a mosaic of five individual worms, and the same neuron in different animals might differ in size or in its capacity to store electric charge. To fully understand the C. elegans brain and its operation during a broad range of behaviors, we need to collect data from thousands of individuals.

Then there’s the question of what to do with the data once we have it. This is another area where recent advances — this time, in machine learning — make the process much more feasible. For all its biological complexity, the C. elegans brain still consists of just 300 neurons — tiny compared to state-of-the-art large language models. Using symbolic regression, a machine learning technique for discovering mathematical formulas that explain observed data, we can take our data on neuronal activity and use it to derive key parameters like capacitance and synaptic strength for every single neuron and every single neuronal connection. These equations would likely resemble the biophysical models that scientists have already derived from patch-clamp experiments, but inferred directly from whole-brain data.
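
To give a flavor of the inference step: symbolic regression searches over both the form of the equations and their constants, which is more than a few lines can show. The stripped-down sketch below (all values invented) assumes the form is already known — a single leaky-integrator neuron — and recovers its time constant and input weight from a noisy trace by least squares; tools like PySR extend this by discovering the equation’s structure as well.

    # A stripped-down stand-in for symbolic regression (all numbers invented):
    # assume the equation's *form* -- dV/dt = -V/tau + w * I(t) -- and recover
    # its constants from a noisy recording by least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    dt, T = 0.1, 500
    t = np.arange(T) * dt
    drive = (np.sin(0.5 * t) > 0).astype(float)    # square-wave input current

    def simulate(drive, tau, w):
        v = np.zeros(len(drive))
        for i in range(1, len(drive)):
            v[i] = v[i - 1] + dt * (-v[i - 1] / tau + w * drive[i - 1])
        return v

    # The 'recording': ground truth tau=2.0, w=1.5, plus imaging noise.
    rng = np.random.default_rng(2)
    recorded = simulate(drive, 2.0, 1.5) + rng.normal(0, 0.05, T)

    def model(_t, tau, w):                         # wrapper in curve_fit's format
        return simulate(drive, tau, w)

    (tau_hat, w_hat), _ = curve_fit(model, t, recorded, p0=[1.0, 1.0],
                                    bounds=([0.1, 0.0], [10.0, 10.0]))
    print(f"recovered tau={tau_hat:.2f}, w={w_hat:.2f}")   # ~2.0 and ~1.5

Scaling this from two constants in a single neuron to every parameter of all 300 neurons at once is the hard version of the problem; whole-brain activity data is what would make it tractable.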

Fish, flies, and beyond

I don’t mean to suggest that building an accurate C. elegans simulation will be easy. There are many considerations that the technologies I’ve described may not account for, from extra-synaptic signaling to the role of specific neuron morphology (not to mention the fact that neurons and synapses change over the course of a nematode’s life). But with modern techniques, which continue to rapidly improve, I do believe that it is possible.

And if we want to one day build simulations of larger animals — including humans — I also believe that it is necessary. The optical microscopy techniques that let us observe the neural activity of living organisms have one key limitation: depth. Light can only penetrate so far into tissue. With current techniques, that limit is roughly 750 microns, a bit less than a millimeter. To build an accurate whole brain simulation, we need activity data from a whole brain — which means that we’re currently limited to brains less than a millimeter deep. In other words, C. elegans, larval zebrafish, and fly brains are our only options. By investigating small organisms, we can develop new methods that allow us to predict neural activity by looking at the brain’s structure and other indirect forms of data. These techniques will make it possible for us to model more complex brains, including those that are too large for us to image their activity directly. 

My research focuses on creating a scientifically grounded simulation of C. elegans by integrating these recently developed microscopy, fluorescent reporter, and machine learning methods into a cohesive pipeline and methodological framework. The idea is to create a proven blueprint for building simulations that can then be applied to more complex brains. But achieving a successful simulation of C. elegans would be a remarkable scientific accomplishment on its own. More importantly, it would help us begin to decipher how the structure of a brain relates to the dynamic processes unfolding within it. Over time, this understanding will open the doors to simulating more complex organisms, ultimately including humans. We have a long journey ahead of us, but now is the best time to begin — expeditiously, and with tractable, well-defined milestones along the way.

  1. Dual-view Plane Illumination Microscopy
  2. Swept, Confocally Aligned Planar Excitation

Michael Skuhersky holds a PhD in neuroscience from MIT and is currently founding a nonprofit research institute focused on brain simulation.

Published March 2025

Have something to say? Email us at letters@asteriskmag.com.
