
Much of the current conversation around the rise of artificial intelligence falls into one of two categories: uncritical optimism or dystopian fear. The truth tends to land somewhere in the middle, and the truth is much more interesting. These stories are meant to help you explore, understand, and get even more curious about AI, and to remind you that as long as we’re willing to confront the complexities, there will always be something new to discover.

Feature

Mapping the Mind

Research from a team of Harvard and Google scientists has revealed never-before-seen details of the human brain’s structure. The findings pave the way for a better understanding of AI—and, in turn, ourselves.

By Maya Kosoff • Illustration by Anna Lucia

The average adult human brain is a mosaic of different kinds of cells. Roughly 100 billion neurons govern how distinct parts of the brain talk to each other and how information flows from one region to another. Think of yourself driving a car: if you see a red light, information flows through neuron after neuron until it reaches your leg muscles, prompting your foot to lift off the gas pedal and move to the brake.

“There’s no way a red light should make your leg muscle move unless there’s some connection between your retina and your leg, and that has many relays, but there is no simple pathway between those things that anyone understands,” Dr. Jeff Lichtman, a neuroscientist at Harvard University, explains.

Understanding how these processes work in the human brain is the focus of a discipline in neuroscience called connectomics, which aims to unveil the wiring diagrams of the nervous system. Eventually, the field hopes to explain not just simple behaviors, like the wiring behind how traffic signals affect us when we drive, but also the pathways that underlie every memory we hold and every piece of information we have ever learned.

To do that, researchers need to dig down to the subcellular level. That’s what Lichtman’s team at Harvard, in collaboration with Google, did in developing the most detailed map of a section of the human brain to date.

Charting a Map of the Brain

Lichtman’s team started its research 10 years ago with a sample of a brain smaller than a grain of rice. That cubic millimeter of brain matter contained 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses—which altogether yielded 1,400 terabytes of data, or a volume of content equivalent to more than 1 billion books.
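A quick back-of-the-envelope check makes the book comparison concrete. Assuming a typical book holds roughly a megabyte of plain text (an illustrative figure, not one from the study), the arithmetic looks like this:

```python
# Sanity check of the "more than 1 billion books" comparison.
# The ~1 MB-per-book figure is an illustrative assumption.
dataset_bytes = 1_400 * 10**12           # 1,400 terabytes
bytes_per_book = 1 * 10**6               # ~1 MB of text per book

print(dataset_bytes / bytes_per_book)    # 1.4e9, about 1.4 billion books
```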

The project, Lichtman says, was a huge undertaking. “When we started, we didn’t appreciate just what it means to have one thousand terabytes of data,” he says. “We’re still somewhere toward the beginning of this great adventure of neuroscience. This realm should be quite familiar to us, since we carry these things on our shoulders, every one of us, but they’re largely unknown and unexplored.”

Despite the astounding amount of information Lichtman and his team had to contend with, they were able to assemble those datasets relatively efficiently with the aid of AI; the same work would have taken hundreds of thousands, or even millions, of people to do by hand. To work with Lichtman’s massive imaging datasets, Google developed a software library called TensorStore, which “makes it possible to manipulate large datasets from thousands of computers simultaneously,” says Dr. Viren Jain, a research scientist who leads Google Research’s Connectomics team. “This software has been critical for our neuroscience research but has also turned out to be useful for other applications at Google, such as training [generative AI chatbot] Gemini.”
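To give a sense of what TensorStore does, here is a minimal sketch of opening a large chunked volume in cloud storage and reading one small block out of it. The bucket and dataset path are hypothetical placeholders, not the location of the team’s data; the point is that only the requested region is ever fetched.

```python
# Minimal TensorStore sketch: open a large volumetric dataset stored
# in the cloud and read a small block. The bucket and path are
# hypothetical placeholders, not a real dataset location.
import tensorstore as ts

volume = ts.open({
    'driver': 'neuroglancer_precomputed',
    'kvstore': {'driver': 'gcs', 'bucket': 'example-bucket'},
    'path': 'em/raw',
}).result()

# Indexing is lazy: bytes move only when .read() is called, so a
# petabyte-scale volume never has to fit in memory.
block = volume[10000:10256, 10000:10256, 500:564].read().result()
print(block.shape)
```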

Google’s Connectomics team is focused on better understanding the brain with the help of AI. “We develop algorithms to interpret large-scale data about the structure of the brain and produce connectomes, or descriptions of how the brain is wired at a cellular and synaptic level,” Jain adds. “Neuroscientists can then use this data to explore, test, and develop theories about how the structure of the brain is related to its function.”
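In data terms, a connectome is essentially a directed graph: neurons are nodes, synapses are edges, and edge attributes can record how many synapses connect a given pair of cells. Here is a toy sketch along those lines, with invented neuron IDs and counts, using the networkx library:

```python
# Toy connectome as a directed graph. All neuron IDs and synapse
# counts are invented for illustration.
import networkx as nx

connectome = nx.DiGraph()
connectome.add_edge("neuron_017", "neuron_042", synapses=3)
connectome.add_edge("neuron_017", "neuron_008", synapses=1)
connectome.add_edge("neuron_042", "neuron_008", synapses=5)

# The kind of structural question such data supports: which cells
# does neuron_017 project to, and through how many synapses?
for _, target, attrs in connectome.out_edges("neuron_017", data=True):
    print(target, attrs["synapses"])
```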

What AI Can Reveal About Our Brains—and What Our Brains Can Reveal About AI

Artificial intelligence not only sped up Lichtman’s research but also shed light on the findings themselves. Artificial neural networks share a deep conceptual connection with the biological neural networks in the brain.

AI, particularly in the field of machine learning, is inspired by the structure and function of our brains’ neural connections and pathways. In our brains, neurons linked in sophisticated networks process and transmit information through chemical and electrical signals. Artificial neural networks are computational models inspired by those connections, and they enable machines to learn from data and make predictions based on it.

In our brains, neurons are connected by synapses. Each neuron can have thousands of synaptic connections, which together form a network that supports cognitive tasks and motor functions. Similarly, artificial neural networks have nodes (or “neurons”) organized in layers. Each node processes input data and passes the result on to other nodes, loosely analogous to the synaptic signaling that helps us tie our shoes, write emails, and make scrambled eggs.
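To make that layered structure concrete, here is a minimal feed-forward network in plain NumPy. The layer sizes, weights, and input are arbitrary; the sketch shows only how a signal passes from layer to layer, not how such a network is trained.

```python
# A tiny feed-forward network illustrating "nodes organized in
# layers." Sizes and weights are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each node computes a weighted sum of its inputs, then applies
    # a nonlinearity, loosely analogous to a neuron firing.
    return np.tanh(inputs @ weights + biases)

x = rng.normal(size=4)                         # input signal, 4 features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # 8 hidden nodes
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # 2 output nodes

hidden = layer(x, w1, b1)    # every hidden node sees every input
output = layer(hidden, w2, b2)
print(output)
```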

“Historically, advances in neuroscience have inspired computer scientists and led to major new developments such as the overall concept of an artificial neural network, reinforcement learning algorithms for training AI systems, and even the famous convolutional neural network architecture,” Jain says, referring to a deep learning neural network that has contributed much to our present-day understanding of computer vision and image analysis. “So, indeed, while we can’t anticipate specific new advances, we do believe that achieving a much more precise understanding of biological computation will lead to important new ideas for artificial computing systems.” One area of interest, Jain says, is low-energy computing: “The human brain uses about as much energy as a light bulb. When we better understand how the brain works, perhaps we will be able to use some of the key insights to improve the energy efficiency of our AI systems.”

Though artificial neural networks are nowhere near as complex as our own brains (yet), the parallels between the two systems highlight how understanding biological intelligence can drive advances in AI, in what Lichtman calls a “virtuous loop.”

“It’s profoundly interesting that we are depending on machines that have learned how to do something to give us insights into how brains learn to do things,” he says. “You could imagine those machines eventually providing insights into how we should program our future AI algorithms. That’s a virtuous loop—maybe not ‘virtuous’ for humans, but virtuous in the general sense that it makes these machines smarter and smarter.”

Newer AI systems can learn without being explicitly taught, Lichtman says, and he predicts that eventually they could have a built-in understanding of their own wiring. Machines don’t use human logic: when they make a mistake, a human supplies corrective training, and they don’t make the same mistake again.

They’re “superhuman,” Lichtman says, and they’ll be able to do everything humans can do, but currently they don’t know why or how they’re doing it. “They’re still at this coloring-book level. They’re not looking at the picture they end up with from filling in the coloring book. They can’t say, ‘That’s Mickey Mouse.’” But at some point, Lichtman says, these algorithms will recognize what they’re doing and jump straight to the answer, without going through the painstaking motions of coloring in every last pixel before identifying the final shape.

This process, Lichtman says, is called segmentation—and it’s a big part of the work Google did with his team on the brain-mapping project. “The group came up with an algorithm that made ‘coloring in’ very efficient, so these machines can do it faster and more accurately,” he says.

To transform the data from Lichtman’s team into something AI could process, Google used two advanced techniques. The first, flood-filling networks, tackled the monumental task of mapping individual wires and cells in massive 3D datasets—“a difficult problem that would be infeasible with purely manual effort,” says Google’s Jain. The second, an algorithm called SegCLR, helped label each wire by its type, making it easier for neuroscientists to connect the structures to familiar references in existing research.
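To make the “coloring in” metaphor concrete, here is the classic, rule-based flood fill on a 2D array. A flood-filling network keeps the same seed-and-grow loop but replaces the simple intensity test with a learned convolutional model that decides which voxels belong to the same cell; the sketch below is the textbook version, not Google’s algorithm.

```python
# Classic seed-based flood fill, the idea that flood-filling
# networks extend with a learned model. Toy data, for illustration.
from collections import deque
import numpy as np

def flood_fill(image, seed, threshold=0.5):
    # Label every pixel connected to `seed` whose intensity exceeds
    # `threshold` (a stand-in for "belongs to the same cell").
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or image[y, x] < threshold:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                queue.append((ny, nx))
    return mask

# Toy "image" with two bright blobs; the fill claims only the blob
# containing the seed, leaving the other untouched.
img = np.zeros((8, 8))
img[1:4, 1:4] = 1.0
img[5:7, 5:7] = 1.0
print(flood_fill(img, seed=(2, 2)).astype(int))
```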

Information Is the Enemy of Understanding

Instead of setting out with a goal of understanding something as complicated as the brain, Lichtman says, it’s much better to ambitiously attempt to describe it. “Millions of things are happening simultaneously,” he says. “It’s a wiring diagram like a road map, except it’s in three dimensions, and every road forks thousands of times. It’s impossible to look at it and say, ‘Oh, now I get it.’” The human brain may be impossible to understand in depth, but we can describe it, and we couldn’t do that nearly as completely before Lichtman’s team published its paper and its map of a section of the human brain.

Mapping—an age-old practice that began with humans trying to understand the physical geography of the world around them—is driven by innate curiosity. “Humans have done this over and over, and we’re learning things, but I wouldn’t say the world is getting simpler as a result,” Lichtman says. “It’s not that we understand all this stuff. What we understand, usually, is how far we are from understanding.”

Ironically, information, Lichtman believes, is the enemy of understanding. Before his team had data about the brain, research scientists could develop untested theories about how the brain worked. But now that he can see what certain sections of the brain really look like, those theories go up in smoke. While someone who has seen their hard-won concepts disappearing over time might, understandably, become jaded and cynical, Lichtman is anything but. Instead, this process, he says, is indicative of the way human knowledge builds on itself. “We didn’t understand the physical basis of inheritance until we mapped the genome, and then we knew the name of every single gene. But no one would ask, ‘Do you understand the genome?’ It’s impossible to understand. So many things are simultaneously turning on and turning off as the genes are working to generate your body shape,” he says.

What’s Next for Lichtman’s Team

In tech, the 10x concept describes the goal of challenging traditional thinking to achieve order-of-magnitude growth, such as scaling up how many people can use a product at once. Though the term originated in the IT world to describe software engineers who are 10 times more productive than the average developer, it also aptly describes Lichtman’s ambitions for his next project.

Working with Google and six other laboratories across the United States and Europe, Lichtman hopes to do “something 10 times as big as the project we just finished”: mapping the parts of a mouse brain that underpin memory, the hippocampal formation. That, Lichtman says, will be a proof of concept. If the team is successful, it will take on a task a thousand times more ambitious than its 2024 brain-mapping project: mapping a mouse’s entire brain. “All you have to do is scale up and find the money to do this on a bigger scale. For us scientists who are in academia, it’s an unbelievable opportunity to work with truly the world’s best programmers on a project,” he says.

The field of connectomics is still nascent, and Lichtman knows that his milestone achievement is soon to be surpassed. “It’s not like we could rest on our laurels, to put it mildly. A number of us working in this field are feeling that, finally, we have the wind in our sails—as opposed to feeling more like Sisyphus,” he adds with a laugh. “Virtually everything we tried failed and was very difficult, and now, finally, things are working.”

Before working with Jain’s team at Google, Lichtman had a “somewhat dim” view of AI, he says, colored by people’s initial reactions to large language models. “But once the training gets going, it’s a profound thing,” he says. “We are definitely the [beneficiaries] of AI.” Hard, ambitious problems like mapping the brain, he says, are the best problems to study. “The reason it’s attractive to do this work isn’t because it’s easy, but because it’s hard. We should always try to do things that are a bit beyond our capability.”