
Issue 2

Much of the current conversation around the rise of artificial intelligence falls into one of two camps: uncritical optimism or dystopian fear. The truth tends to land somewhere in the middle—and the truth is much more interesting. These stories are meant to help you explore, understand, and grow even more curious about AI, and to remind you that as long as we’re willing to confront the complexities, there will always be something new to discover.


Beyond the Brain

An interview with Blaise Agüera y Arcas on intelligence and Paradigms of Intelligence, his research team at Google.

By Terrence Russell • Portrait by Oriana Fenwick

Blaise Agüera y Arcas has been at the forefront of AI research for years. As VP/Fellow and CTO of Technology & Society at Google, he leads the Paradigms of Intelligence (Pi) team, which takes a unique approach to AI research by interweaving ideas and expertise from fields such as computer science, neuroscience, biology, and philosophy. We spoke with the multihyphenate researcher, engineer, and author about how Paradigms of Intelligence is helping shape the future of AI.

On the roots of his fascination with artificial intelligence:

I’ve been interested in AI from the beginning, from the moment that I was a kid and started to understand that we have brains. I was a computer nerd hacking around on a computer rather than having a normal social life. I did all kinds of crazy stuff as a kid. I broke a lot of copy protection on video games and I was very, very interested in the computational view of minds. I was also just very interested in the fundamentals of how nature works. Originally, I thought that I was going to be a physicist, but eventually I found that the deepest and most interesting questions were more connected with neuroscience than particle physics or cosmology.

On the Pi team’s mission to challenge assumptions about how intelligence develops, in favor of developing a more holistic and long-term paradigmatic view of AI:

We’re looking at the basic assumptions about intelligence by exploring questions like “How can those be broken or changed, and where might that lead us?” For instance, one of those core assumptions is that brains are predictors—that the reason we have them is to predict the future given the past. And I never quite took those ideas 100 percent seriously. It’s worth reassessing the paradigm of how brains actually work and exploring the high-level principles. We’re in this very exciting period of having cracked one of the central problems of AI, but at the same time, if we were to go to sleep and wake up in 20 years like Rip Van Winkle, I don’t think any of us would still believe it’ll be about transformers and chatbots. Our work is about advancing fundamental areas of understanding. That’s why we’re bringing in all these diverse perspectives. Whether it’s complexity science or speculative philosophy, it’s all about rethinking intelligence and computation from the ground up.

On ensuring that the Pi team has assembled people capable of tackling its heady, multifaceted mission:

We’re working with folks from Mila, the Santa Fe Institute, and philosophers like Pi Visiting Researcher Benjamin Bratton, who think about computation in new ways. We have a wide range of thinkers: engineers, philosophers, and researchers from various fields, including those outside of AI. Everyone’s bringing a different perspective to the table. There’s no formula for it, but my intuition is that everything interesting humans do comes from hybridity of some kind—multiple currents crossing in some unusual way. So I try to match up people from very different worlds—maybe an engineer with a philosopher or a scientist. The magic happens when their different perspectives collide.

On collective intelligence—progressing from individual knowledge to advanced networks of shared understanding—as one of the keys to redefining our view of what’s possible with AI:

We’re trying to understand the systems that inform other systems—the meta-systems, if you will. Intelligence isn’t just a product of neurons firing in a brain, or circuits in a machine. It’s shaped by evolution, by development, and by interactions within larger ecosystems. I’ve always had this sense that locating human intelligence solely within the individual human brain is a little limiting, even provincial. Intelligence, to me, is a collective phenomenon. It’s about cooperation, scale, and societies, not just the brain. When we attribute something to “human intelligence,” we’re often talking about the superhuman intelligence of all of us—our collective efforts, rather than what any one person can do alone. And this leads us to think about intelligence not as something that’s located in a brain—whether human or silicon—but as something that emerges from interactions across systems, across minds.

People often criticize AI for failing certain high-level tests, like solving complex math problems. But how many humans can solve those kinds of problems? It’s a rarefied skill. AI models, much like individual humans, aren’t universally intelligent; they draw from vast amounts of collective human output. I don’t think AI models will break new territory until they can operate at this human social scale, just as an individual person is not going to become one of the 50 best number theorists in the world by clicking around at random on the internet. You can become one of the 50 best number theorists in the world—or even one of the 50 best poets—only by interacting with the other 49 and forming communities of interest through committed interactions.

On expanding our concept of learning well beyond the capabilities of a single brain—viewing intelligence as a phenomenon of cooperative social networks of shared knowledge:

If you were to take an individual human and raise them alone from birth, I don’t think that a visiting alien would be astounded by their intelligence relative to the other fauna on earth. We’re not that different individually from the other great apes. A lot of the differences are subtle. They have to do more with motivation, with the fact that we are not quite as strong as other mammals, the fact that we’re a little bit better cooperators, etc. The really magical thing happens when you start to get humans working together and collaborating in larger and larger groups.

There’s a bunch of really cool work in anthropology that’s happened in the past 10, 15 years showing how the scale of a society relates to the complexity of the technologies and art that it produces. There are these famous cases from when Tasmania was cut off from the mainland of Australia, and [Tasmanians’] level of technological complexity dropped quite a bit. They lost a bunch of technologies that would actually have been very useful for fishing and other tasks. They didn’t lose these technologies because they were not as intelligent as anybody on the Australian mainland. They lost them because they were smaller in number and were isolated. A lot of things that we attribute to human intelligence are highly collective. The individual is just not that smart.

On breaking old molds and going beyond conventional research boundaries to develop a new understanding of intelligence:

I’m very interested in that paradigm where we’re pursuing communities of interaction that are generative and mutual. And from that perspective, I don’t see AI as being particularly separate from human intelligence. I think that we are starting to have nodes in that graph, if you like, that are silicon-based, as opposed to brain-based, but the interactions are actually where the intelligence occurs.

Paradigms of Intelligence isn’t bound by the traditional disciplinary boundaries of research. Instead, we’re trying to explore intelligence from multiple angles: human, social, computational. My hope is that by challenging these old assumptions, we can help lay the groundwork for new ways of thinking about intelligence that go far beyond what we imagine today.