Mira Lane Where do we stand today in terms of AI development, and how close are we to realizing the possibility of AGI?
Shane Legg We’ve all seen that AI is progressing quickly. I think we could be living with artificial general intelligence in five years, and I think the probability is even higher in 10 years. When I made my first AGI timeline predictions back in 2009, I predicted a 50 percent probability of AGI by 2028. So, let’s just take seriously, for a moment, the possibility that this might happen in the next 10 years. AGI affects all these different areas, and you can’t be an expert in all these fields. The advent of AGI will actually require deep expertise across all human endeavors. What we really need is for all the different departments and all the different faculties in universities to be thinking about the arrival of AGI. What does medicine look like in a post-AGI world? What does accounting look like in a post-AGI world? What does education look like in a post-AGI world? What does research look like in a post-AGI world? What does economics look like in a post-AGI world?
Lane How are you defining AGI right now? I feel like that’s a big open question.
Legg I define an AGI to be an artificial agent that can do the kinds of cognitive things that people can typically do. I see this as the natural minimum bar. For higher levels of AGI capability, see the paper “Levels of AGI,” written by a group of us at Google DeepMind last year.
Lane What are the most significant open questions in AI research today, and what breakthroughs are still needed to propel the field forward?
Legg We are working on AI in different areas, and many people think that achieving AGI will probably come from refining and improving the sorts of things we already know, and combining some of the methods we already know in the right way. It’s not guaranteed. Maybe there is really a big breakthrough that is required, which we don’t know yet. But there probably doesn’t need to be a breakthrough as big as transformers—which were developed at Google—to get to AGI. There will be some advances. There is all sorts of low-hanging fruit at the moment for making our models better, like advances in datasets, memory, planning, and reasoning. And as we work on all of them, we see progress in all the different areas. So we are confident that, at least for the next few years, we can make these models—which are already getting very good—much better. And then as we start making agents out of these models, those agents will generate data as they interact with different kinds of environments and try to achieve goals in those environments. We’ll then train new foundation models on that data. The resulting models will then be much better for building agents. This process may get us to AGI in as little as five years.
Lane How do you envision measuring the level of understanding in AI systems, and what frameworks inform your thinking on this complex issue?
Legg Yeah, it’s hard. Not all aspects of intelligent behavior by AI agents are easy to measure. And if there are aspects that are hard to measure, you don’t know how well you’re doing on them, and so you may not even realize whether you need to do better on them or are already doing well.
The other problem is that even if you can measure something, there are just so many things you could measure. Because if what you are measuring is an AGI, it has generality—it doesn’t do just one specific thing. If it were doing a very specific thing, you could measure it thoroughly on that aspect. But if it’s very, very general, it can do everything from writing code to understanding 20 languages, to making music, pictures, and poems, to legal work, to all kinds of things. That’s a lot of things to try to measure. So the measurement problem is very difficult, and it’s very important.
It’s also difficult because it’s not a glamorous thing to do. Think about it this way: The most glamorous Olympics event is the 100-meter sprint, right? But you’re not going to have a 100-meter sprint event if somebody doesn’t build the track and get the starting guns and the photo-finish equipment all set up. As you know, you’re not going to have a good event [without those people], but the glory goes to the runners.
Lane Not the designers of the track.
Legg The design doesn’t attract the same kind of attention, but if you don’t have a good track and the photo equipment and all that, it’s just not going to go very well. You need a good track that is level, with the right surface, and all these sorts of things. It’s been a problem with machine learning for a long time that, psychologically, people are drawn to building the agent or being state-of-the-art on the benchmark rather than building the benchmark itself.
Lane What governance models do you consider important to ensure this positive transformation? Where do you think public understanding needs to grow?
Legg I mean, they’re all enormous questions. I think the biggest thing is how advanced public understanding of LLMs has recently become, in that this isn’t a technology you just read about; you can get it on your mobile phone and you can talk and interact with it. And so you can at least start to get some grounding in terms of what this thing is. Members of the public are doing that en masse. Does it mean that people understand that powerful AGI is coming? Weirdly, I think many people do. And I actually think that, sometimes, laypeople who have some interest in technology have a better mental model of this than some experts, who tend to be very skeptical and come at this with a lot of long-standing beliefs and biases. I’ve seen this throughout my career.
Lane How so?
Legg When we started DeepMind, everybody said it was ridiculous that we thought machine learning was going to be huge. People thought it was ridiculous that we were going to go and do this, and that we were going to get Nature papers, be in academic journals, or win awards. But no, AI keeps delivering the goods, and it keeps getting better and better.
Lane We were talking about how highly capable AI systems will transform every single industry and human endeavor. I use LLMs for so many things, and they are remarkably good. I have shared emails, documents, and text messages with a model and have had it help me examine different perspectives.
Legg I did something like that recently. I received a few messages and wanted to understand better what this person was trying to say. It seemed like they were hinting at something, so I put the messages into an LLM and asked, “What is this person really trying to say?”
Lane Fascinating, isn’t it?
Legg It’s a whole new world, really.
Lane Looking ahead 50 years, what do you think will be the most profound ways in which AGI will have transformed society, and what are your biggest hopes and concerns for the future of AI? What would it look like if we got it right?
Legg Reductions in poverty and increased access to various kinds of resources and education. I think medicine could be advanced significantly, as well as scientific research. It could be good for the environment. We might have new types of clean energy sources or new types of materials and products. I see potential for improvements in every aspect of society.