In 1997, while Demis Hassabis was a student at Cambridge, IBM Deep Blue, a chess program, defeated grandmaster Garry Kasparov—the first victory of a computer over a reigning chess champion under tournament conditions. It was a moment that would prove formative for Hassabis, a former child chess prodigy who had used his tournament winnings to buy his first computer.
Why? Because it was the human being who lost the match, not the technology that won it, that most impressed him. “Kasparov could not only play chess more or less to the same level as this brute of a calculation machine,” Hassabis says. “[He], of course, could [also] ride a bike, talk many languages, do politics, all the rest… Deep Blue, brilliant as it was at chess, couldn’t do anything else… Something was missing from that system that we would regard as intelligence.”
The search for that absent intelligence has driven Hassabis’s career ever since.
He began that career as a teenage designer of best-selling video games (and, as with his chess winnings, used the proceeds to pay his way through Cambridge). In 2010, a few years after earning a Ph.D. in cognitive neuroscience, he co-founded DeepMind. His goal: to create artificial general intelligence (AGI) and to use it to achieve the widest possible social impact.
In 1972, accepting the Nobel Prize in Chemistry, Christian Anfinsen had thrown down the gauntlet for what would become known as “the protein-folding problem.” Proteins are the building blocks of life. Every enzyme and hormone in the body is a protein. They’re responsible for everything from digestion and neurological function to growth, repair, and reproduction.
Proteins are made up of chains of amino acids, strung together much as DNA is strung from nucleotides. Each amino acid helps to determine the protein’s structure through its chemical interactions with the others: like tiny magnets, the amino acids, singly or in combination, attract and repel one another, guiding the protein into its final three-dimensional shape. Only then, after the folded protein has correctly positioned its various channels, receptors, and binding sites, can it function. Glitches in this process are implicated in diseases as different as cancer, diabetes, and Alzheimer’s.
The amino-acid chains of proteins were no secret to researchers. Therefore, Anfinsen argued, scientists ought to be able to extrapolate from them the three-dimensional shape a given protein would assume. As an idea, it seemed simple enough; in practice, it meant confronting an almost unfathomable complexity. The protein-folding problem would baffle scientists for the next five decades—a kind of Fermat’s Last Theorem for the field of biology.
Of the 200 million proteins known in nature, scientists had slowly and painstakingly documented the structure of approximately 150,000. Using this dataset, DeepMind in 2016 began training its AI system AlphaFold to predict the structure of the others. “I wanted to finally apply the AI to real-world domains,” Hassabis told a journalist for Scientific American in 2022. “Protein folding was right up there for me always, since the 1990s.” The first iteration of the technology, which debuted in 2018, fell short of the required “atomic” level of accuracy, but AlphaFold 2, unveiled in November 2020 and built from a complex architecture of 32 component algorithms, essentially solved the protein-folding problem. By July 2022, DeepMind’s database encompassed all 200 million known proteins. AlphaFold 2 was, a Forbes columnist declared, “the most important achievement in AI—ever.”
We caught up with Demis Hassabis to ask him about present and future applications of this breakthrough technology, as well as some of the issues it has raised involving privacy, safety, and ethics. Our conversation, edited and condensed for clarity, is below.
Question It took more than 50 years for science to solve this problem. Why was protein-folding so tough to crack? And what does solving it mean, exactly? In other words, how quickly can AlphaFold predict the structure of a single protein today?
Demis Hassabis There’s an astronomical number of potential shapes a protein could theoretically fold into, by some estimates 10^300 (a 1 followed by 300 zeroes), which would take longer than the age of the universe to search through. And yet somehow in nature proteins spontaneously fold in fractions of a second. This is sometimes referred to as Levinthal’s paradox, and it is this complexity that makes the problem so tough to crack.
It often takes a graduate student their entire Ph.D. to experimentally determine the structure of just a single protein, and after decades scientists had only been able to determine around 150,000 protein structures experimentally. This is the problem we wanted our AlphaFold AI to solve, by making it possible to predict protein structures quickly and accurately directly from the amino acid sequence (roughly the genetic sequence for the protein).
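The scale Hassabis describes can be sanity-checked with a quick back-of-envelope calculation. This is only an illustrative sketch: the 10^300 figure comes from the interview, but the trillion-conformations-per-second search rate and the universe-age value are assumptions chosen for illustration.

```python
# Back-of-envelope illustration of Levinthal's paradox, using the 10^300
# estimate quoted above. The search rate and universe age are illustrative
# assumptions, not figures from the interview.
conformations = 10**300                 # rough upper bound on possible folds
rate_per_second = 10**12                # assume a trillion conformations tested per second
seconds_per_year = 60 * 60 * 24 * 365   # ~3.15e7 seconds in a year
age_of_universe_years = 1.4e10          # ~14 billion years

years_needed = conformations / (rate_per_second * seconds_per_year)
print(f"Brute-force search would take ~{years_needed:.1e} years")
print(f"That is ~{years_needed / age_of_universe_years:.1e} times the age of the universe")
```

Even with these wildly generous assumptions, the search takes on the order of 10^280 years, which is why a brute-force enumeration was never a viable path and a predictive model was needed instead.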
Question This speed isn’t just an astounding technological achievement. It also has important repercussions for AlphaFold’s applications in the real world. For instance, limitations of both manpower and time have significantly hampered research into neglected diseases. What does the database’s accessibility, in concert with AlphaFold’s speed, accuracy, and negligible cost, mean for research into potential treatments for those diseases?
Hassabis Some diseases disproportionately impact communities in less affluent parts of the world, which also have fewer resources for researching new treatments. Making our AlphaFold predictions freely available to the scientific community is making a big impact here: more than 1 million researchers in 190 countries have already accessed the AlphaFold Protein Structure Database, and many of them would not otherwise have access to the expensive experimental facilities needed to determine the structures of the proteins implicated in the diseases they study.
One of AlphaFold’s earliest adopters, the Drugs for Neglected Diseases initiative (DNDi), has used AlphaFold to advance research into diseases like leishmaniasis and Chagas disease that disproportionately affect poorer parts of the world. We’ve also supported World Neglected Tropical Diseases Day by creating structure predictions for organisms the World Health Organization has identified as high priorities for research, helping with the study of diseases like leprosy and schistosomiasis, which together have affected more than 1 billion people globally.
Question Researchers are also using AlphaFold for some particularly interesting de novo protein design aimed at applications outside the human body: creating proteins not found in nature to serve climatic and environmental purposes. Can you talk about some of those applications and their social impact?
Hassabis We’re seeing researchers use AlphaFold to study protein design and specifically enzymes, which could be particularly valuable in helping us achieve a more sustainable future. For example, a team at the University of Portsmouth has been using AlphaFold in their work to discover and engineer enhanced enzymes that can eventually be applied at scale to break down some of the most polluting single-use plastics. We also know that scientists are using AlphaFold to explore carbon-capture technologies.
Question DeepMind decided to release AlphaFold’s code publicly and to upload, free of charge, its enormous database of protein structures—“a gift from us to the scientific community,” as you’ve said. To date, 500,000 researchers have used it, which you believe to be the vast majority of the biologists in the world. But before releasing this data, DeepMind consulted 30 bioethicists about the safety of doing so. What ethical concerns did they bring to your attention, and how did you address them?
Hassabis Before releasing AlphaFold we consulted a range of experts, including bioethicists, as well as experts from fields like protein engineering and biosecurity. They determined that the risk of releasing AlphaFold was likely to be low and that the benefits far outweighed the risks. We have a very rigorous program for ensuring our technology is developed and deployed in a way that’s safe, responsible, and ethical, including ongoing engagement with biosecurity experts.
Question At the same time, the technology itself doesn’t seem to function transparently. AI has been described as a “black box”: its reasoning is incomprehensibly complex and opaque, which means we can’t reverse-engineer the steps it takes to arrive at its conclusions. If we don’t know how it reaches its conclusions, we perhaps don’t know what those conclusions are going to be, either—and perhaps they won’t be hospitable to us as human beings. In the worst-case scenario that organizations such as MIRI (Machine Intelligence Research Institute) have articulated, humankind itself will be at risk from a superintelligent AI. Before this technology becomes sentient, numerous scientists have warned, we must solve the so-called alignment problem—we must be able to train machines so that their interests and ours are permanently and inseparably aligned.
Hassabis AI systems are not actually black boxes: unlike with the brain, we can in principle inspect every weight and activation of an AI system. However, the extraordinary complexity of neural networks means that even with current advances in the science of interpretability, we have a long way to go before we can meaningfully understand them.
AI is an engineering science: we need to first build an AI system before we can take it apart and study it. Advances in AI increase the challenges with respect to safety, but they also amplify our ability to conduct AI safety research, by giving us more advanced systems to study and assist us.
As we begin to build increasingly powerful and general systems, one promising idea is to first test them in hardened simulation sandboxes and conduct safety evaluations, deploying them into the real world only once we have gained confidence in their safety. For this we need to advance the science of scalable alignment: methods to train models to do what we intend that will scale with their increasing capability. Ultimately, success at scalable alignment is critical for unlocking the vast benefits of advanced AI in health, science, and well-being.
Question AI is very probably the most powerful tool humankind has ever built, and it’s largely in the hands of the private sector. Earlier this year, you co-signed a 22-word statement, along with many of the world’s most eminent researchers, scholars, and ethicists—among them some of your peers at Google. The statement reads, “Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.” There has been much discussion about creating a code of ethics to govern work involving AI, as you know. To what degree should members of the public be concerned about the motives of some Silicon Valley entrepreneurs when it comes to AI, particularly given its vast, possibly even unimaginable rewards in terms of both power and money?
Hassabis We believe the right way to respond to this moment in AI is with cautious optimism—with a firm grasp of the incredible benefits that AI could create, but also a sober understanding of the near and long-term challenges that we need to prepare for.
AI promises extraordinary new capabilities and opportunities to help us solve some of the biggest challenges of our time. It has the potential to help us to cure diseases, to deliver a more sustainable future for the world, and to unlock a new era of greater prosperity and opportunity for humanity.
Alongside this incredible potential, AI will obviously create some big challenges. We’ve always been committed to pioneering safely and responsibly, and have created industry-leading technical safety, ethics, and governance programs. These will remain important priorities for us, but we’re also working to drive action across the tech industry, government, and society, so that other innovators and leaders are preparing responsibly for the future.
Question You’ve said that you created DeepMind in the image of Bell Labs, the celebrated R&D division of AT&T whose researchers earned eight Nobel Prizes in numerous disciplines. DeepMind’s mission statement, you have said, is “Step one, solve intelligence; step two, use it to solve everything else.” What does this mean, and how does it guide DeepMind’s ambitions? How does it influence what problems you choose to take on (including the protein-folding problem)?
Hassabis When we set up DeepMind, I took inspiration for our research culture from many innovative organizations, including Bell Labs and the Apollo program, but also creative cultures like Pixar’s. Our fundamental goal has always been to create AI technologies that help us better understand the world around us and solve important challenges facing society, from curing diseases, to creating a sustainable future, to powering products that enrich the daily lives of billions of people. Aiming for that kind of scale and impact is what drives our efforts.
Question Looking to the (perhaps distant) future, what huge problem that seems impossible to resolve today—be it scientific, technological, societal, or otherwise—seems conceivably solvable to you with the assistance of AI?
Hassabis There are many huge scientific and mathematical problems on my list to solve (one of them was protein folding!). Modeling a virtual cell has been one of my dreams for a long time. If you could use AI to build a highly accurate simulation of a cell, one capable of making useful predictions, it would be incredible for our understanding of biology, as well as for things like drug discovery. Lots of experiments could be conducted quickly and cheaply in the virtual cell, and only at the last stage would the predictions be validated in the wet lab.
This would be revolutionary for processes like drug discovery. Currently it takes roughly 10 years to go from identifying a target to having a drug candidate. With a virtual cell, you could potentially shorten those timescales from years to months by much more efficiently exploring the search space of possible compounds. I think getting to a virtual cell might be possible in the next decade, and it’s something I’m really excited about.