Something shifted in AI over the past year. The pace changed. Capabilities that researchers expected to arrive gradually started appearing in rapid succession, each one raising the ceiling on what these systems could do in the real world.
James Manyika, Google’s senior vice president of Research, Labs, Technology & Society, has spent the year helping to drive that acceleration, and he’s equally focused on what will need to keep pace: the social, economic, and institutional infrastructure needed to integrate these capabilities effectively and responsibly.
This issue of Dialogues explores what happens when AI capabilities move into the world—across scientific discovery, economic transformation, and how we learn.
Dialogues The pace of AI development this year has been extraordinary. What stands out to you in terms of the major technological thresholds we’ve crossed? Have any of those surprised you or your colleagues at Google?
Manyika Surprised? No. But some of the things I would point to have been real milestones. Thinking models have become remarkably good, achieving new benchmarks in performance and the ability to reason. We’ve also seen video and image models reach new heights. In our case, just seeing the response to Veo 3, along with the quality and performance of video and images themselves, has been quite remarkable.
The other thing, which was a first this year, was the first set of world models—in our case, Genie 3—which can generate and process physical worlds in real time. In that same vein, we now have Gemini Robotics 2.0. For the first time, these vision-language-action models can work with and manipulate robotic systems, allowing you to essentially just talk to them without having to program them. I think these are significant advances that are bending the curve.
There’s also been much progress in AI-adjacent areas: Since the last time we talked for this volume, we saw AI-enabled breakthrough progress in quantum computing. We achieved what’s called “below-threshold error correction” with our Willow chip. This means the rate of logical errors decreases exponentially as more physical qubits are added to encode a logical qubit—solving a major issue that challenged scientists for nearly 30 years. More recently, we announced Quantum Echoes. This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers and is also intractable via classical heuristic methods.
Running on our Willow quantum chip, Quantum Echoes demonstrated a proof-of-concept application in Hamiltonian learning to probe and successfully learn an unknown parameter in a many-body quantum system using NMR (nuclear magnetic resonance) data. This paves a path toward useful applications, such as learning the structure of quantum systems from molecules to magnets to black holes.
As remarkable as all of this is, there’s still so much more to do in terms of capabilities and performance.
Dialogues Given all that, it’s becoming clear that these AI systems can make expert-level decisions, often in real time. How does that reality inform how Google designs AI from the outset?
Manyika It’s important to design these tools for collaboration from the outset because they can enable people to be remarkably productive and creative. You may have seen some of our tools like AMIE (Articulate Medical Intelligence Explorer), which are designed to assist medical practitioners in diagnosis, or even Veo, which we’ve designed from the ground up to assist filmmakers and storytellers.
Dialogues What does this approach to building collaborative or assistive AI reveal more broadly about the AI future that Google is envisioning?
Manyika We’re trying to build with the understanding that these tools can help automate tasks, but their impact will be more consequential in an assistive capacity. This topic of automation versus collaboration is one that the MIT economist David Autor and I explored in some detail in an article in The Atlantic. Focusing more on collaboration rather than just automation is important because it benefits not just workers, but also the economy and productivity growth. You want workers and people in the economy—whether they’re individuals, small businesses, or large companies—to leverage these tools to become more productive, create new outputs, new value-added products, and innovations. The International Labour Organization (ILO), for example, in a broad study across 140 countries, showcased that the assistive effect of these tools is six times the displacement effect.
So I think if we get this right, where we focus on assisting workers and others in the economy, and on expanding and improving innovations and outputs—that will create a virtuous cycle that should benefit the economy and everyone in it.
Dialogues What should readers understand about this specific moment in AI that we find ourselves in?
Manyika First, the technical and scientific progress is moving at a breathtaking pace. At the same time, though, it’s one thing for the technology to move fast, but I think adoption in the wider economy is going to be a little slower than that. It’ll be faster than in the past but still slower than the pace of technological innovation. So it’s important to note that while the pace of innovation is remarkable, the pace of adoption and transformation will have to catch up. Adoption in the wider economy is one of the factors that will be critical if we are to realize AI’s potential to enable productivity growth and economic gains. Beyond adoption, other factors will also be needed: complementary co-investments; transformation of processes and workflows to fully capitalize on AI’s capabilities; training and enabling workers to fully utilize AI; and ensuring that AI is applied to productivity-enhancing use cases in the respective sectors. In other words, the economic gains from AI will not be automatic; nor are they guaranteed.
Second, it’s worth recognizing that we’re in a moment where we should be extraordinarily bold. The possibilities of this technology benefiting people, the economy, science, and society are enormous. At the same time, it’s also important to be extremely responsible because we know this technology comes with all kinds of complications, challenges, and even risks. That’s why this idea of being bold and responsible—for us as developers, for individuals using these technologies, for companies deploying this technology, and for policymakers—is even more important for everyone to keep in mind.