Issue 1
Chapter: How should AI be governed?

Much of the current conversation around the rise of artificial intelligence can be categorized in one of two ways: uncritical optimism or dystopian fear. The truth tends to land somewhere in the middle—and the truth is much more interesting. These stories are meant to help you explore, understand, and get even more curious about AI, and to remind you that as long as we’re willing to confront the complexities, there will always be something new to discover.

Q&A

The Sweet Spot

A conversation with Google’s Kent Walker on AI regulation and striking the right balance

By Nicholas Thompson • Photography by Cayce Clifford

Kent Walker, president of global affairs at Google and Alphabet

Almost everyone agrees that we need to regulate AI. Surely, we think, we can come up with new government policies that will help to maximize the benefits and minimize the harms of AI. But regulating AI is like trying to paint an airplane while it’s in flight: it’s moving so fast that it’s hard to place the brushes right.

I spoke with Kent Walker, president of global affairs at Google and Alphabet, about what the company thinks effective regulation might look like. Our conversation has been edited for length and clarity.

Nicholas Thompson Everybody wants AI regulation—what should be the central objectives?

Kent Walker An AI agenda needs to rest on three key pillars: opportunity, responsibility, and security. If we get those right, we think we can deliver on the promise of AI for everybody.

Thompson So a regulatory framework should optimize those three pillars?

Walker That’s right. We tend to focus on AI as a chatbot. But it’s important to remember not only that we have all been using AI for many years—if you’ve used Google Search or Translate or Maps, you’ve been using AI—but also that its biggest potential still lies ahead, in changing how we do science and technology. That will lead to advances in areas from medicine to energy to sustainability to agriculture to economic productivity. So the opportunity side of AI and enabling its potential is going to be critical.

At the same time, you need to focus on the responsibility and the security aspects. Responsibility means things like [ensuring] high-quality results and avoiding problems with discrimination, toxicity, and the risk of misuse and abuse. Security means addressing challenges around cybersecurity, national security, and global competitiveness.

Thompson So any regulator thinking about AI needs to make sure that the innovators are allowed to innovate, companies are allowed to grow, inventions are allowed to move forward, that the AI that’s developed doesn’t cause harm, doesn’t create biases, doesn’t create toxicity, and that we’re not hacked and destroyed. Is that it?

Walker That’s a good way of framing it. We view AI as an advance in mathematics. I think we’d be having a different conversation if we were calling it computational statistics, which is probably a more accurate way of describing what’s going on here. But thinking through how we can apply these breakthroughs in computer science to achieve these benefits and minimize the harms is exactly the right balance.

Thompson Let’s go to one of the specific frameworks or policies that you propose, which is a hub-and-spoke approach with the National Institute of Standards and Technology taking the lead. Explain what this idea is.

Walker If you think of AI as a general-purpose technology, and we believe it is—something like electricity—we don’t have a Department of Electricity. We have government agencies that are focused on particular areas where issues come up. The issues that AI presents in healthcare are different from the opportunities and issues in financial services or in transportation. We have agencies across governments that have worked on those issues for many, many years and have a lot of expertise about the potential risks of abuse. It’s going to be a lot easier to make every agency an AI agency than to have a one-size-fits-all solution where you try to take all that learning and put it into one place.

Thompson We don’t have a Department of Electricity, but is there not something fundamentally different about modern AI in its ability to precisely replicate and supersede human intelligence in so many ways?

Walker Again, there’s no one thing that you can call AI. AlphaFold, developed by Google DeepMind, used AI to predict the shapes of 200 million proteins—nearly all the proteins known to science—in just a matter of weeks. That’s a very different kind of pattern recognition than what you’re seeing in some of the generative AI chatbot tools. This is a huge leap forward in computational ability, and it’s going to play out in different ways.

If you’re concerned about misuse by terrorists, that’s one class of issues you need to deal with. If you’re concerned about the potential for fraud and abuse, that may lead you down a different track. Or if you’re thinking through how best to regulate its use in stock trading, that’s yet another category. Having a center of expertise, like the National Institute of Standards and Technology, is helpful to develop more state capacity to understand what’s going on and keep up with all the different flavors of AI that we’re going to see in the coming years.

Thompson How does Google balance its work on the opportunities of AI versus the responsibilities and security of AI? Clearly you have an economic imperative to build out the tools that can make the most money for the company. How do you weigh that against the need to make sure that you’re hitting your requirements for responsibility and security?

Walker We’ve been doing this for many years now, both on the technology side and on the responsibility side. We had our first team working on ML fairness in 2014. We published our AI principles in 2018. And we’ve continued working through the internal governance of how best to do that ever since. There are products that we have decided not to bring to market because we didn’t think the appropriate policy frameworks were in place—things like generally available facial recognition tools. It’s a continuing balance.

We have a long-term interest in encouraging general social trust in and adoption of AI in a whole variety of areas. And we think, if responsibly managed, it will be a huge positive for societies around the world.

Thompson This is one of the most interesting things to me. You have regulators at the local level, at the state level, at the national level. But you also have decisions being made inside the companies themselves.

Walker Well, you’re right that it needs to be a multilayered framework. It’s not going to be just government regulation or individual companies working on this. We need a spectrum of approaches: individual companies, cross-industry groups like the Frontier Model Forum, which we co-founded with other leading AI companies and labs, and [companies] working with governments.

And governments are moving at pace to address this. In the United States, the Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights. The National Institute of Standards and Technology put out its AI Risk Management Framework. We joined the White House commitments this summer and, more recently, took part in Sen. Chuck Schumer’s AI forum in the Senate. So the right conversations are happening, and there’s been good public-private collaboration.

Thompson What is the most interesting question you’re grappling with right now?

Walker There’s a lot of discussion around the balance between open and closed. Openness creates democratized access to tools, but also risks of abuse by nation-states or bad actors. How do you reconcile that, and what’s the right balance? How do you think about the scope of regulation?

When people talk about AI regulation, they’re not typically talking about Google Maps. They’re thinking about the challenges posed by emerging next-generation AI tools. But how do you draw the line between those two things when there’s not an obvious cutoff? How do you think through the standards for what “morally good” looks like, and how do you measure that? That turns out to be a deep and hard question across all the different areas we’ve talked about, whether it’s bias or toxicity or privacy or other issues.

Thompson I know that with Google Duplex, the voice assistant that seems like a human, you always declare that it’s a machine. Do you think that there should be a formal policy that any system using AI that could be reasonably thought to be impersonating a human should declare that it’s a machine?

Walker We took a step in this direction recently when we announced that we would require disclosure of the use of generative AI if it resulted in inauthentic and misleading election ads. So, for example, people for years have used Photoshop or other tools to remove red-eye in photos or to touch things up. And we don’t require disclosure of that. Or if somebody uses AI to pose somebody with the U.S. Capitol’s dome in the background, that probably is not misleading. But if AI is used to replicate a human voice so that somebody seems to be saying something they never said, or to create an image of something that never occurred in a way that could mislead the viewer, we think it should be disclosed.

Of course you don’t want to require overbroad disclosure because AI is likely going to be used in almost everything we do, from the writing of articles to the creation of new photos to many of Google’s tools and services. Labeling all of that as AI would be like labeling pictures with “This was created with a camera” or your article with “This was created with a computer.” So we have to figure out exactly how we have meaningful disclosure so that people aren’t misled.

Thompson What is the biggest risk of regulation?

Walker There’s always a risk of under-regulation and over-regulation. Under-regulation could open the door to misuse by rogue actors, or abuse that creates real harm in the world and undercuts trust in AI. But bad regulation could slow our ability to achieve the promise of AI. We said some years ago that AI is too important not to regulate and too important not to regulate well. That sums up the balance needed. There’s a sweet spot, and we’re hoping to share the benefits of our experience with governments to see if we can hit it.

Thompson Do you think there’s a risk that because of some concerns about AI—the doomsday scenarios and other rhetoric—we could end up with regulation that is focused in the wrong direction? That trying to prevent the worst outcomes prevents many of the best ones?

Walker There’s a phrase in the AI community, “the AI half-pipe of heaven and hell,” referring to headlines that go back and forth between “AI is wonderful” and “AI is terrible.” In fact, like most technologies, AI involves both opportunities and challenges, and we have to figure out how to get the balance right. We need to move beyond the bumper-sticker conversations and get into a more detailed discussion of the trade-offs and how to develop risk-based regulations that don’t limit that promise.

Thompson Do you think that these powerful tools necessitate differently structured data privacy laws now that so much of our data can be used to train models that have so many uses?

Walker The vast majority of training for the large language models we’re seeing now is actually on public data that’s on the web. We’ve been clear that we have used that kind of data for many years to improve the quality of Google Search. Privacy is going to be an important piece of this, and we also have to think about the security of data, making sure that the models aren’t misused.

We want to be clear with people when information is being used to improve the services. When you use Google Search, if you click on the fourth item in your search results, that’s a signal to our system that maybe that item should have ranked higher. When you use Google Maps on the road, it helps other people know where there are traffic jams. It’s a collective-action benefit. We will continue to figure out how to make sure that people are aware of that and feel comfortable sharing feedback.
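To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python of how click-position signals might be aggregated into a re-ranking score. The function names and the position-based weighting are hypothetical simplifications for this article, not a description of Google’s actual ranking system.

```python
from collections import defaultdict

# Hypothetical sketch of the idea described above: a click on a
# lower-ranked result is treated as evidence that the result may
# deserve a higher position. Not Google's actual algorithm.

clicks = defaultdict(float)     # weighted clicks per result URL
impressions = defaultdict(int)  # times each result URL was shown

def record_impression(url: str) -> None:
    impressions[url] += 1

def record_click(url: str, position: int) -> None:
    # A click at position 4 is weighted more than one at position 1,
    # since the user scrolled past higher-ranked results to reach it.
    clicks[url] += position

def feedback_score(url: str) -> float:
    # Weighted click-through rate: one of many signals a real
    # ranker would blend with relevance and quality measures.
    return clicks[url] / impressions[url] if impressions[url] else 0.0

# Example: the result shown at position 4 gets clicked,
# boosting its score relative to the results above it.
results = ["a.example", "b.example", "c.example", "d.example"]
for url in results:
    record_impression(url)
record_click("d.example", position=4)

reranked = sorted(results, key=feedback_score, reverse=True)
print(reranked)  # d.example now ranks first on feedback alone
```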

Thompson So it’s really a question of proper and clear disclosure of how data you’ve entered, or data about you, will and could be used?

Walker I would say both disclosure and control. Individuals should have the ability to choose whether or not to have a system trained on their data in a way that’s personalized and customized to their needs.

Thompson Do you think there’s any risk that aggressive AI policy could lock in the playing field as it is? You can imagine, for example, a regulation insisting that every AI model be audited, which would impose huge compliance costs and make it extremely hard for anybody but a large, well-funded tech company to comply.

Walker It’s always a risk that regulation becomes a barrier to entry for new competitors. We’ve seen that with other tech regulations around the world—where we have devoted significant time and expense to coming into compliance in a way that would be harder for a smaller company. So you have to take that into account. That’s not a reason not to regulate, but it is a reason to say that you should be regulating only where you need to, in response to specific concerns.

Of course, some of the abuses could happen with big companies or with small companies. So you want to make sure that you have appropriate rules that apply broadly but are tailored enough to the underlying issue that you’re not making it harder for smaller companies or start-ups to innovate.

Thompson There’s an idea I’ve heard: that one of the goals of regulators shouldn’t be just setting the right frameworks and the right rules, but also creating data commons. These would be publicly accessible data sets that anybody, big companies or small, could use to train AI models and facilitate their creation. Do you think this is a helpful idea or a potentially dangerous one?

Walker In general, we’re in favor of data commons structures. In fact, Google’s Data Commons team just partnered with the United Nations to use data and AI to help track progress toward the UN’s Sustainable Development Goals for the globe. And we support proposals to fund a National AI Research Resource. But right now, in many ways, the biggest challenge is not the availability of data. Remember that the world’s three leading AI labs—OpenAI, Anthropic, and DeepMind—all succeeded without access to any proprietary data.

The biggest challenge for a lot of the up-and-coming companies right now is access to computing power. Now, as AI develops into more specialized areas, specialized data may become more important. If you’re developing an AI chemistry tutor, you may want to train it on chemistry textbooks. Or, for applications for a given company, you may want to train on that company’s data to help it stock its shelves or manage itself more efficiently. But the general-purpose models, the generative AI models that most folks are concentrating on right now, are mostly being trained on publicly available data.

Thompson What is the best way for governments to quickly develop the expertise to make the right decisions and move nimbly?

Walker This is a core question of state capacity. There’s a huge amount of technical complexity that underlies the dramatic computer science advances we’ve seen in the past few years. Figuring out how to get governments up to speed on developments that have been happening largely in the private sector is one of the key challenges we face. We are organizing virtual gatherings for policymakers around the world to give them an overview of how AI works and of the issues and developments we’re seeing.

We are hoping to continue our engagement with groups like the National Institute of Standards and Technology and other technical experts in different countries to help develop a core of expertise. It’s going to be hard to get everybody to be an AI expert overnight, but key folks should be in a position to make intelligent risk assessments and help influence the direction of regulation, which is clearly going to evolve over the next few years.

Thompson I’ve heard a couple of interesting arguments about how AI will change the way geopolitics works and the way AI regulation will work. One is that this is such a new moment and such a powerful force that it will necessarily allow for a reset and greater global cooperation. The other is that AI regulation and AI development will further harden the lines between the United States and China, and that the world, East and West, will grow further and further apart. What do you think is most likely to happen?

Walker Somebody earlier said that Google is in the optimistic middle of the AI debate, and I think that’s right. We do see an important role for a global conversation around this, whether that’s creating regulatory alignment or setting norms that help influence the direction of research. Norms matter.

To tell a quick story, in the 1970s, genetic researchers got together at Asilomar in California to come up with standards of practice for recombinant DNA research. And those standards have continuing vitality today with regard to what kind of research on human beings is appropriate and not appropriate. There’s a real interest in the AI community to come up with similar frameworks. Ideally, you’d like those frameworks to be global, to bring as many countries around the world into alignment as you can. It’s challenging, given current geopolitical tensions. But it’s usually better to have people inside the tent than outside the tent.

The advances we hope to see with AI—like curing cancer or achieving nuclear fusion—are goals that are shared by everybody in the world. If you can have free power that allows you to create clean water for people across Africa, or if you can make dramatic advances in combating diseases that affect everybody, those goals are as broadly shared as the UN’s Sustainable Development Goals. So we hope that there will be an incentive for countries around the world to work together to make sure we get AI right.