Artificial general intelligence is a term that’s been around for more than two decades, but there is still debate about how far away it is, what it might be capable of, or even exactly what it is. Questions that felt comfortably theoretical even a few years ago are now beginning to require more practical answers. We asked three leaders at Google and Google DeepMind to explain where the technology stands, what breakthroughs are still needed, and what is still unknown.
Demis Hassabis
Cofounder & CEO, Google DeepMind
ON ADVANCING SCIENCE

I’ve always been most passionate about building AI to help us answer fundamental questions about the universe and solve critical societal problems in medicine, materials, and energy. Whether by processing data, coding, or generating novel insights, AI could prove to be the ultimate tool to accelerate scientific discovery. It can find structure and reveal abstractions in huge amounts of data, allowing scientists to interpret patterns that would otherwise be very difficult for people to see. As these systems approach AGI, they could usher in a new golden age of scientific discovery that pushes forward the frontier of human knowledge.
Shane Legg
Chief AGI Scientist, Google DeepMind
ON DEFINING AGI

We’ve long defined AGI as a system exhibiting all the cognitive capabilities that humans have. But intelligence is a spectrum with different levels of capability, and two in particular warrant being called out: what we call minimal AGI and full AGI. Minimal AGI should at least be able to do all the cognitive things that people can typically do. It could solve simple logic puzzles, come up with new jokes, and learn a game after a few minutes of instruction. Minimal AGI would open up countless applications, including universal assistants, personalized learning, and help in tackling scientific challenges. That doesn’t necessarily mean we’d be done, though, because in some domains there are people who can do extraordinary things beyond what is merely typical. Full AGI would be a system capable of the full extent of what human intelligence can achieve, from paradigm-shifting scientific theories on par with Einstein’s special relativity to elegant inventions like the game of Go to masterworks of art.
Shane Legg
Chief AGI Scientist, Google DeepMind
ON SHARED TERMINOLOGY

When I first proposed adopting the term AGI in early 2002, we weren’t overly concerned about reaching an agreement on its definition, since the technology was still decades away. That is no longer the case, and we urgently need a common language so we can speak cogently about this technology and understand its potential impacts. The lack of consensus on a definition has created so much hype about AGI, and so much confusion about its potential impact, that it is distracting society from the important work of preparing for it.
James Manyika
Senior Vice President, Research, Labs, Technology & Society, Google
ON PREPARING FOR AGI

On the one hand, we should think about the exciting possibilities that this technology is going to open up. With more capable and more general systems, we should be able to tackle some of humanity’s greatest challenges and opportunities. How do we collectively ensure that we deliver on the things that motivated us to pursue AGI in the first place: powering prosperity, advancing science, improving lives, and progressing humanity? We should prepare to fully capitalize on that when we get there. On the other hand, we have to think about the risks and complexities. How do we govern such technology responsibly? How do we make sure it’s safe and aligned with human values and preferences? Then I think there’s a third thing to think about, which is, How do we reimagine and adapt our systems, our institutions, and ourselves as individuals in the age of AI? This will require us to think about not only what could be lost but what could be gained. These are profound questions that we’re going to need to think about and prepare for.
Demis Hassabis
Cofounder & CEO, Google DeepMind
ON THE MISSING PIECES

AI systems today have some impressive capabilities, but they are “jagged intelligences” in that they lack consistency across the board. They can fail at relatively trivial tasks, and they don’t continually learn. They also aren’t capable of true creativity or originality, which is an important benchmark for AGI. Can a system propose a novel and meaningful scientific hypothesis, not just prove an existing one? Can it invent a game as elegant as Go, not just master it? That is much more difficult. We don’t fully understand how humans come up with creative ideas, and we certainly haven’t cracked it yet in AI.
Demis Hassabis
Cofounder & CEO, Google DeepMind
ON WORLD MODELS

We are building what we call “world models,” or models capable of understanding how the physical world works. If we want AI systems to plan and reason in the real world, whether in robotics or as a helpful assistant on your devices, they need to understand the physical setting. One way to demonstrate a system’s understanding is by showing it can generate realistic videos and interactive worlds. The videos our models are generating now suggest they have some notion of intuitive physics, such as how liquids flow, how light reflects in glass, or how objects cast shadows. We think a world model is necessary for understanding causality and will prove to be a critical component of an AGI system.
Shane Legg
Chief AGI Scientist, Google DeepMind
ON THE ARRIVAL OF AGI

What might surprise many people is that AGI likely won’t arrive in a single, dramatic moment. Our imagination is shaped by portrayals of artificial intelligence being suddenly “switched on.” The reality is that it will be a much more gradual, maybe even subtle, emergence, reflecting the continuum of intelligence and evolving from the AI systems we have today. The surprise isn’t a sudden arrival but that we might one day realize the world’s most complex problems are being solved at an incredible speed.
James Manyika
Senior Vice President, Research, Labs, Technology & Society, Google
ON WHAT’S STILL UNKNOWN

One type of question is a technical, scientific one: What additional research breakthroughs are going to be needed to build more capable and more useful systems? It’s not clear to me that simply scaling what we’re doing, as much as that is helping us make progress, will be sufficient to build more capable systems. I find myself thinking a lot about what additional innovations and breakthroughs will be needed. But I’m also asking more philosophical questions: When we get there, whenever we reach that point (which I think is still a ways away), how are we going to grapple with the opportunities, the complexities, the adaptation challenges, and what it will mean to be human in the age of AI?