<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/static/theatlantic/syndication/feeds/atom-to-html.b8b4bd3b19af.xsl" ?><feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/"><title>Matteo Wong | The Atlantic</title><link href="https://www.theatlantic.com/author/matteo-wong/" rel="alternate"></link><link href="https://www.theatlantic.com/feed/author/matteo-wong/" rel="self"></link><id>https://www.theatlantic.com/author/matteo-wong/</id><updated>2026-04-10T12:52:49-04:00</updated><rights>Copyright 2026 by The Atlantic Monthly Group. All Rights Reserved.</rights><entry><id>tag:theatlantic.com,2026:50-686746</id><content type="html">&lt;p&gt;For the past several weeks, Anthropic says, it has secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities—including exploits in every single major operating system and browser. This level of cyberattack is typically available only to elite, state-sponsored hacking cells in a very small number of countries, including China, Russia, and the United States. Now it’s in the hands of a private company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;On Tuesday, the company &lt;a href="https://www.anthropic.com/glasswing"&gt;officially announced&lt;/a&gt; the existence of the model, known as Claude Mythos Preview. For now, the bot will be available only to a consortium of many of the world’s biggest tech companies—including Apple, Microsoft, Google, and Nvidia. These partners can use Mythos Preview to scan for and fix bugs and exploits in their software. Other than that, Anthropic will not immediately release Mythos Preview to the public, having determined that doing so without more robust safeguards would be too dangerous.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher in. Because AI models have become so capable at coding, they have also become extremely good at finding vulnerabilities in all manner of software. Even before Mythos Preview, AI companies such as Anthropic, OpenAI, and Google all reported instances of their AI models being used in sophisticated cyberattacks by both criminal and state-backed groups. As Giovanni Vigna, who directs a federal research institute dedicated to AI-orchestrated cyberthreats, told me &lt;a href="https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/?utm_source=feed"&gt;last fall&lt;/a&gt;: You can have a million hackers at your fingertips “with the push of a button.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="http://Chatbots%20Are%20Becoming%20Really,%20Really%20Good%20Criminals"&gt;Read: Chatbots are becoming really, really good criminals&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Still, Mythos Preview appears to represent not an incremental change but the beginning of a paradigm shift. Until recently, the biggest advantage of AI-assisted hacking was not ingenuity, per se, so much as speed and scale.
These bots could be as good as many human cybersecurity experts, but not necessarily better—rather, having an army of 1 million virtual, tireless hackers allows you to launch more attacks against more targets than ever before. Even Anthropic reports that its current state-of-the-art, public model, Claude Opus 4.6, was &lt;a href="https://red.anthropic.com/2026/mythos-preview/"&gt;significantly less capable&lt;/a&gt; at autonomously finding cyber exploits. But Mythos Preview is different. According to Anthropic, the bot has been able to find thousands of software bugs that had gone undetected, sometimes for decades, demonstrating a sophistication and speed of attack that many had previously thought impossible. The model has found a nearly 30-year-old vulnerability in one of the world’s most secure operating systems. The Anthropic researcher Sam Bowman posted on X that he was eating a sandwich in the park when &lt;a href="https://x.com/sleepinyourhat/status/2041584808514744742"&gt;he got an email from Mythos Preview&lt;/a&gt;: The bot had broken out of the company’s internal sandbox and gained access to the internet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The exact capabilities of Mythos Preview are hard to judge, because Anthropic has not released the model. Identifying a vulnerability is not the same as being able to exploit it undetected—in the same way that a robber can have the keys to a bank but still needs to deal with security cameras. And Anthropic surely stands to benefit from its opaque announcement: The company can claim to have developed an ultra-advanced model, while also appearing to act responsibly by preventing the worst-case cybersecurity scenarios. Indeed, the decision not to release Mythos Preview bolsters Anthropic’s &lt;a href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed"&gt;self-styled image&lt;/a&gt; as the AI industry’s good guy. (Anthropic did not immediately respond to emailed questions about Mythos Preview.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of course, a move can be both strategic and conscientious. If what Anthropic shared is even remotely accurate, it heralds a troubling future. Anthropic has a tool that “could damage the operations of critical infrastructure and government services in every country on Earth,” Dean Ball, a former AI adviser to the Trump administration, &lt;a href="https://www.hyperdimensional.co/p/new-sages-unrivalled"&gt;wrote&lt;/a&gt; this week. The ability to defend against such cyberattacks is integral to the basic functioning of society. And the ability to launch such attacks is integral to modern warfare. Anthropic may have just scaled its way into becoming a major geopolitical force.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind. OpenAI is &lt;a href="https://www.axios.com/2026/04/09/openai-new-model-cyber-mythos-anthopic"&gt;reportedly&lt;/a&gt; set to release its own similarly powerful model to a select group of companies. It’s very possible, even likely, that Google DeepMind, xAI, and AI firms in China are next. How scrupulous they will be is less clear.
Even cheaper or open-source AI models from smaller companies could soon enable this sort of hacking—which would unsettle the basic security and privacy that undergird the modern internet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Hacking is not the only domain in which a handful of AI companies are gaining tremendous influence. The technology has become crucial to military operations. Even as the Pentagon has engaged in a &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;public feud&lt;/a&gt; with Anthropic, Claude was reportedly used in the bombing of Iran and, before that, the Venezuela raid in January. Last month, the Department of Defense signed a contract with OpenAI that &lt;a href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed"&gt;very likely allows&lt;/a&gt; the government to use the firm’s AI systems to enable unprecedented surveillance of U.S. citizens. (OpenAI has maintained that the Pentagon agreed not to use its products for domestic surveillance.) At the same time, bots from OpenAI, Anthropic, Google DeepMind, and beyond are becoming infrastructure: used by nearly all of the world’s biggest businesses, schools, health-care systems, and public agencies. This is a large part of the reason that Iran has &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt;struck or threatened to strike&lt;/a&gt; Amazon and OpenAI data centers in the Middle East—the facilities are high-impact targets on par with the oil fields that Iran has also targeted. Meanwhile, so much money is pouring into the AI boom that these companies are functionally &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;holding the global economy hostage&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In other words, AI companies are remaking the world. Consider how Elon Musk’s network of Starlink satellites has allowed him to &lt;a href="https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule"&gt;repeatedly&lt;/a&gt; &lt;a href="https://www.theatlantic.com/national-security/2026/02/elon-musk-ukraine-russia-starlink/686155/?utm_source=feed"&gt;tip the scales&lt;/a&gt; in Russia’s invasion of Ukraine. Generative AI offers even more possibilities. These companies have, or could soon have, the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies.
These are the AI superpowers.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/4Vt-nOTp2FVmZiNnJlDYMwVlQwY=/media/img/mt/2026/04/2026_03_07_Ai_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">Claude Mythos Is Everyone’s Problem</title><published>2026-04-09T13:22:00-04:00</published><updated>2026-04-10T12:52:49-04:00</updated><summary type="html">What happens when AI can hack everything?</summary><link href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686686</id><content type="html">&lt;p&gt;Late last month, a large crowd gathered in downtown San Francisco to demand that the AI industry stop developing more powerful bots. Holding signs and banners reading &lt;span class="smallcaps"&gt;Stop the AI Race&lt;/span&gt; and &lt;span class="smallcaps"&gt;Don’t Build Skynet&lt;/span&gt;, the protesters marched through the city and gave speeches outside the offices of Anthropic, OpenAI, and xAI. The crowd demanded that these companies halt efforts to create superintelligent machines—and, in particular, AI models that can develop future AI models. Such a technology, attendees said, could extinguish all human life.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At AI protests and happy hours, inside start-ups and major companies, the tech world is in a frenzy over the same thing: computers that make themselves smarter. Over the past year, the top AI companies have taken to loudly bragging about internal efforts to automate their own research. OpenAI recently released a new model it &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/"&gt;described&lt;/a&gt; as “instrumental in creating itself.” Within the next six months, the company aims to debut what it has described as an “intern-level AI research assistant.” Meanwhile, Anthropic says that as much as 90 percent of the company’s code is already written by Claude.&lt;/p&gt;&lt;p&gt;“We are starting to see AI progress feed back on itself,” Nick Bostrom, an influential Swedish philosopher who studies AI risk, told us. Within Silicon Valley, many insiders believe that we are teetering on the precipice of a world in which AI can rapidly improve its own capabilities. Instead of waiting for months between new machine-learning breakthroughs, we might wait weeks. Imagine AI advancing faster and faster.&lt;/p&gt;&lt;p&gt;The idea of self-improving bots is nothing new. When the statistician I. J. Good first introduced the concept of recursive self-improvement in the 1960s, he wrote that machines capable of training their own, even more capable successors would be “the last invention” that society would ever need to make. But just a few years ago, any notion of actually making such AI models was on the back burner. When ChatGPT couldn’t reliably add and subtract, &lt;a href="https://www.theatlantic.com/technology/archive/2024/06/chatgpt-citations-rag/678796/?utm_source=feed"&gt;let alone search the web&lt;/a&gt;, the notion that AI programs would soon be able to do world-class machine-learning research seemed laughable.
Even as tech companies made claims about the imminent arrival of “artificial general intelligence,” the capabilities needed for a bot to accelerate or even direct AI research seemed to &lt;em&gt;exceed&lt;/em&gt; those of AGI.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;Read: Do you feel the AGI yet?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Now, as AI models have &lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;become significantly better at coding&lt;/a&gt;, Silicon Valley has become hooked on the idea of self-improving machines. AI research involves a lot of grunt work—curating large data sets, running repeated experiments—that can be made more efficient with the help of coding bots. Dario Amodei, Anthropic’s CEO, has &lt;a href="https://www.dwarkesh.com/p/dario-amodei-2"&gt;estimated&lt;/a&gt; that coding tools speed up his company’s overall workflows by 15 to 20 percent.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the information that top AI firms share about how, and to what extent, they have automated internal research is patchy at best. When Anthropic says that Claude writes almost all of its code, we don’t know how much human supervision was required. (An Anthropic spokesperson declined a request for an interview, but pointed us to a recent &lt;a href="https://www.nytimes.com/2026/02/24/opinion/ezra-klein-podcast-jack-clark.html"&gt;podcast&lt;/a&gt; in which Jack Clark, the company’s head of policy, said one of his biggest priorities this year is to better understand “the extent to which we are automating aspects of A.I. development.”) There are also few details about OpenAI’s forthcoming AI “intern.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A company spokesperson described it to us as a system that could contribute to research workflows by, for instance, conducting literature reviews or interpreting results of experiments. (&lt;em&gt;The Atlantic &lt;/em&gt;has a corporate partnership with OpenAI.) One concrete example of how AI is being used to automate research comes from Google DeepMind: Last year, the company developed an AI coding agent called AlphaEvolve, which, according to research published by the firm, was able to make Google’s global data-center fleet 0.7 percent more computationally efficient on average and cut the overall training time of Gemini by 1 percent.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;Read: AI agents are taking America by storm&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;None of these current approaches to self-improving AI is truly recursive; they are piecemeal. AI tools can write code, find small optimizations, and generally make discrete parts of the AI research process faster. It’s impressive that machines are able to at least incrementally improve their own abilities, but right now humans still play an essential role. AI research has many components: curating training data, proposing new hypotheses, setting up experiments to test them, and deciding how to allocate scarce computing resources. Eventually, the thinking goes, recursively self-improving AI models will make the leap from rote programming to having real research “taste”—as AI insiders call the mix of human creativity and judgment exhibited by top software engineers.
Instead of humans coming up with ideas for new experiments, the bots will do this themselves.&lt;/p&gt;&lt;p&gt;AI boosters and doomers alike believe that we’re not far from that future. Sam Altman says that by 2028, OpenAI plans to have developed a fully “automated AI researcher.” By then, “we are pretty confident we will have systems that can make more significant discoveries,” the company &lt;a href="https://openai.com/index/ai-progress-and-recommendations/"&gt;said&lt;/a&gt; in a recent blog post. Based on the speed of recent advances in AI, Eli Lifland, a researcher at the AI Futures Project, has forecast that AI research and development could be fully automated by 2032. After all, a few years ago, top models could successfully do only things that would take a human developer seconds; now they autonomously complete tasks that would take humans hours. “I don’t expect a reason for it to slow down,” Neev Parikh, a researcher at METR, a nonprofit that studies AI coding capabilities, told us.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are plenty of reasons to be skeptical that AI research will be fully automated over such a short time horizon. Coding bots are designed to execute directions, but developing an AI with &lt;a href="https://www.theatlantic.com/technology/archive/2025/06/good-taste-ai/683101/?utm_source=feed"&gt;research taste&lt;/a&gt; might require some kind of transformative breakthrough. Not to mention the various constraints on AI development—including the availability of funding, chips, and energy for data centers—that threaten to stall progress at any time. For now, “the overall pipeline to realize this self-improvement loop is still yet to be developed,” Pushmeet Kohli, DeepMind’s vice president of science and strategic initiatives, told us. A bot can optimize things, but it doesn’t “have anything to optimize &lt;em&gt;for&lt;/em&gt;,” Kohli said. “That’s where the human comes in.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/2026/04/ai-data-centers-energy-demands/686064/?utm_source=feed"&gt;Read: Inside the dirty, dystopian world of AI data centers&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Ultimately, even if the most fantastical dreams of recursive self-improvement turn out to be little more than a marketing ploy, marginal improvements in automating research are likely to further accelerate the pace of AI development. “This could change the dynamics of AI competition, alter AI geopolitics, and much more,” Dean Ball, &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;a former Trump adviser on AI&lt;/a&gt;, recently &lt;a href="https://www.hyperdimensional.co/p/on-recursive-self-improvement-part"&gt;wrote&lt;/a&gt;. Governments and civil society are already lagging. American institutions are in many ways still adapting to the internet—the IRS still processes tax returns using COBOL, a programming language that was released in 1960. Should AI models progress faster, public policy, including regulations on safety and security, will have even less hope of keeping up. Bostrom, the philosopher, expressed a sort of resignation about the AI future when we spoke.
He used to call himself a “fretful optimist,” he said, but now he’s a “moderate fatalist.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a strange way, none of the predictions about recursive self-improvement need to be true for them to matter. Last year, a team of academics interviewed 25 leading researchers at DeepMind, OpenAI, Anthropic, Meta, UC Berkeley, Princeton, and Stanford. Twenty of them identified the automation of AI research as among the industry’s “most severe and urgent” risks. Now these dramatic warnings are gaining a growing audience. “Human beings could actually lose control over the planet,” Senator Bernie Sanders recently warned Congress, sounding just like the San Francisco protesters. Yet again, the AI industry has found a way to ratchet up the hype behind its technology.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/c3WP_48GLb1cNMqUeRDDbWFK0Ag=/media/img/mt/2026/04/2026_4_1_AI/original.png"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">Silicon Valley Is in a Frenzy Over Bots That Build Themselves</title><published>2026-04-03T13:35:00-04:00</published><updated>2026-04-06T10:29:54-04:00</updated><summary type="html">How close are we really to self-improving AI?</summary><link href="https://www.theatlantic.com/technology/2026/04/ai-industry-self-improving-bots/686686/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686618</id><content type="html">&lt;p&gt;Thore Graepel may have been the first human to be vanquished by a superintelligence. In 2015, on his first day as a researcher at Google DeepMind, he was challenged to play against the earliest iteration of AlphaGo—a computer program developed by DeepMind that would prove so effective at the ancient Chinese game of &lt;em&gt;weiqi&lt;/em&gt; (or Go, as it is commonly known in the West) that it changed how humans play it, and then upended the field of AI itself.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When Graepel faced it, AlphaGo was just a “baby” project, as he put it to me, and he was an accomplished amateur player. But it still took him down. Then, the following year, AlphaGo—now fully developed—plowed through a number of human champions, ultimately crushing Lee Sedol, widely considered the best player in the world, with a match score of 4–1. This month marked the tenth anniversary of that victory.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For decades, developing a program that plays Go at an elite level was an infamous problem in computer science. Many considered it unsolvable—far harder than developing a similar program for chess, in which the supercomputer Deep Blue beat the world champion in 1997. In Go, two players take turns positioning stones on a 19-by-19 grid, and their movements are relatively unrestricted. In chess, which has a far smaller grid, a rook can move only horizontally or vertically and a bishop only diagonally, but Go pieces can be placed on any open space.
The number of possible Go positions is so high that it &lt;a href="https://tromp.github.io/go/legal.html"&gt;cannot be easily expressed in words&lt;/a&gt;; it is higher than the number of atoms in the observable universe, and orders of magnitude higher than the number of possible chess games. Today, the technical frameworks and approaches that allowed an algorithm to excel at this board game have translated fairly directly into bots that can write advanced code, help tackle open problems in mathematics, and replicate scientific discoveries from scratch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Generative AI is living in AlphaGo’s shadow. Beyond the actual models, “conceptual things emerged from the whole AlphaGo experience which essentially entered the AI vocabulary,” Pushmeet Kohli, the vice president of science and strategic initiatives at Google DeepMind, told me. In many ways, Go and chess provide ideal templates for understanding how the AI boom has unfolded—and a guide for what it may yet bring.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;DeepMind’s innovation was essentially to pair two algorithms: one AI model to propose moves and a second to evaluate the resulting positions, allowing the system to devote computational resources to planning the sequences of moves most likely to result in victory. AlphaGo then played itself thousands of times, learning from every mistake through a training process known as reinforcement learning. Today’s frontier AI labs faced an analogous problem: Large language models such as ChatGPT could spit out lucid sentences and paragraphs, but when they faced challenging tasks in computer science, physics, and other areas that would require a human to really &lt;em&gt;think&lt;/em&gt;, chatbots had been stuck stumbling in the dark. That began to change in late 2024 with the advent of &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;so-called reasoning models&lt;/a&gt;, an approach that now underlies all of the top bots from OpenAI, Google DeepMind, and Anthropic. And the idea behind these reasoning models “is surprisingly similar to AlphaGo,” as Noam Brown, a researcher at OpenAI, recently &lt;a href="https://x.com/polynoamial/status/2031404079583473953"&gt;put it&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/02/train-ai-chatgpt-to-play-video-game-pokemon/672954/?utm_source=feed"&gt;Read: A machine crushed us at Pokémon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The intuition behind chatbot reasoning is to have AI models work out a solution step by step, using a scratch pad of sorts, and then evaluate steps along the way to change course or start over as needed—very much like the two-step approach used by AlphaGo. The training method for these reasoning chatbots is the same as well: reinforcement learning. An algorithm can play lots of games of Go or attempt to solve lots of difficult math problems, then learn from its mistakes when it loses or errs. Today’s best AI models “can be traced back to some degree to the AlphaGo work,” Graepel said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps the most crucial insight shared between AlphaGo and the chatbot-reasoning breakthrough is a twist on the AI industry’s central dogma, the “scaling laws.” Traditionally, AI companies improved their large language models by training them on more data and with more computing power.
In the case of AlphaGo and reasoning models, researchers realized that they could scale another dimension: having the program devote more time and computing power to a task, akin to how harder problems typically take humans more time to solve. For bots, this meant planning more and longer sequences of moves or using more words to “reason” through a tough coding task. That the approach would work wasn’t guaranteed. “It could happen that you give them more time and they spend more time just getting confused,” Kohli said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After the success of AlphaGo, DeepMind made a successor program called AlphaZero. Whereas AlphaGo was initially shown a number of human Go matches as a baseline, AlphaZero became dominant at several games—Go, chess, and shogi—purely by playing itself, with zero prior knowledge, and learning from each game. That an AI model essentially taught itself, very rapidly, to surpass the abilities of any human ever at multiple games might suggest that similarly rapid advances for today’s chatbots are on the horizon. By this logic, models could essentially figure out ways to improve themselves. But the success of AlphaGo and AlphaZero more likely signals obstacles ahead. The most important ingredient in AlphaGo was the simplicity with which one could measure success—win or lose—and thus give the machine feedback to improve.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/?utm_source=feed"&gt;Read: The human skill that eludes AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;With board games, “we were always operating in a specific environment where the rules of the game were known,” Kohli said. “The systems of today are expected to operate in a much more general environment.” Reasoning models have found success mostly in areas that still have a relatively clear rubric for evaluation: whether an AI-written program works as intended, for instance, or whether an AI-written proof holds up. Instilling any notion of a more general intelligence in a machine will be a far more challenging problem than conquering even Go.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;DeepMind has been able to design evaluations for more abstract ideas, for instance by orchestrating several AI agents to act as &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;a team of virtual “scientists”&lt;/a&gt; that rank hypotheses about problems in biology. But even that system operates within a relatively constrained domain of biological reasoning and literature. It’s unlikely that any lab will come up with a single way to evaluate “general intelligence” that can be used to train a bot AlphaGo style, let alone one as straightforward as winning or losing a board game.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;Read: AI executives promise cancer cures.
Here’s the reality.&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Still, the progress the AlphaGo approach has yielded for AI models in a number of scientific domains is impressive—so much so that, a decade after AI conquered humanity’s hardest board game, the nation is now in a frenzy over whether AI is about to first overhaul the economy and then unsettle the very purpose of being human.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Once again, chess and Go might offer guides. As a result of improving via self-play, AlphaGo and AlphaZero developed not only superhuman ability but also inhuman style, using tactics and strategies no human had previously considered. These AI strategies did not destroy the human pursuits of chess and Go; they &lt;a href="https://www.technologyreview.com/2026/02/27/1133624/ai-is-rewiring-how-the-worlds-best-go-players-think/"&gt;ignited&lt;/a&gt; new waves of human &lt;a href="https://www.theatlantic.com/technology/archive/2022/09/carlsen-niemann-chess-cheating-poker/671472/?utm_source=feed"&gt;creativity and strategy&lt;/a&gt;. The most optimistic analogy for today’s more broadly useful AI systems would be that they also, rather than providing a wholesale replacement for humans, will function as a sort of &lt;a href="https://www.theatlantic.com/technology/archive/2022/10/hans-niemann-chess-cheating-artificial-intelligence/671799/?utm_source=feed"&gt;complementary intelligence&lt;/a&gt;. Biologists, &lt;a href="https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/?utm_source=feed"&gt;mathematicians&lt;/a&gt;, and computer scientists are already finding ways in which today’s AI models are not simply speeding up their work but qualitatively changing the kinds of questions humans can ask and the discoveries we can make.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of course, the business proposition of generative AI is quite the opposite: that products such as ChatGPT and Claude Code can automate huge swaths of white-collar work, help students cheat their way through school, and allow humans to live mostly without thinking. Perhaps C-suite executives, like AI researchers, can learn a lesson from Go and chess. Like any sport, chess and Go are worthwhile because of human struggles and storylines, champions made and toppled, the very fact that people are doomed to be imperfect but always striving to become just a bit better. And rather than automating away human chess masters or destroying the sport and pastime, chess-playing AI models have helped the business of chess to boom.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Likewise, employees, managers, students, professors—really all of us—are always learning, and learning by failing, or at least we should be. That is useful and worth preserving in &lt;a href="https://www.theatlantic.com/ideas/2025/12/ai-entry-level-creative-jobs/685297/?utm_source=feed"&gt;plain economic terms&lt;/a&gt;. Nobody becomes world-class at anything without at some point being rather terrible at it, and allowing novices who might be less capable than a bot to build up skills is the only way you get experts with human judgment and abilities that surpass any AI. But more important than that economic rationale is an existential one: To grow or help another do so is a beautiful thing.
Some might call it being human.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/U6SJyTz_GY-KuSqVbSFKPc_JlQM=/media/img/mt/2026/03/2026_03_27_AI2_mpg/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">A Game Plan for the AI Boom</title><published>2026-03-30T18:27:37-04:00</published><updated>2026-04-02T10:11:16-04:00</updated><summary type="html">Ten years ago, AlphaGo trounced human competitors—and its legacy is still present in today’s most advanced bots.</summary><link href="https://www.theatlantic.com/technology/2026/03/alphago-ai-boom/686618/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686559</id><content type="html">&lt;p class="dropcap"&gt;T&lt;span class="smallcaps"&gt;he global economy&lt;/span&gt; has become dependent on the AI industry. Trillions of dollars are being invested in the technology and the infrastructure it relies on; in the final months of 2025, &lt;a href="https://www.barrons.com/articles/ai-investment-gdp-economy-e19c6d70"&gt;functionally all&lt;/a&gt; economic growth in the United States came from AI investments. This would be risky even in ideal conditions. And we are very far from ideal conditions.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Much of the AI supply chain—chips, data centers, combustion turbines, and so on—relies on key materials that are produced in or transported through just a few places on Earth, with little overlap. In particular, the industry is highly dependent on the Middle East, which has been destabilized by the war in Iran. A global &lt;a href="https://www.newstatesman.com/international-politics/geopolitics/2026/03/the-world-energy-shock-is-coming"&gt;energy shock&lt;/a&gt; seems all but certain to come soon—the kind where even the &lt;a href="https://www.economist.com/finance-and-economics/2026/03/22/even-the-best-case-scenario-for-energy-markets-is-disastrous"&gt;best-case scenario&lt;/a&gt; is a disaster. The war could grind the AI build-out to a halt. This would be devastating for the tech firms that have issued historic amounts of debt to race against their highly leveraged competitors, and it would be devastating for the private lenders and banks that have been buying up that debt in the hope of ever bigger returns.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For the better part of the past year, Wall Street analysts and tech-industry observers have fretted publicly &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;about an AI bubble&lt;/a&gt;. The fear is that too much money is coming in too fast and that generative-AI companies still have not offered anything close to a viable business model. If growth were to stall or the technology were to be seen as failing to deliver on its promises, the bubble might burst, triggering a chain reaction across the financial system. Everyone—big banks, private-equity firms, people who have no idea what’s mixed into their 401(k)—would be hit by the AI crash.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Until recently, that kind of crash felt hypothetical; today, it feels plausible and, to some, almost inevitable.
“What’s unusual about this, unlike commercial real estate during the global financial crisis,” Paul Kedrosky, an investor and financial consultant, told us, “is all of these interlocking points of fragility.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;Read: Here’s how the AI crash happens&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Perhaps the clearest examples are advanced memory and training chips, which are among the most important—and are by far the most expensive—components used in training any AI model. Currently, most of them are produced by two companies in South Korea and one in Taiwan. These countries, in turn, get a large majority of their crude oil and much of their liquefied natural gas—which help fuel semiconductor manufacturing—from the Persian Gulf. The chip companies also require helium, sulfur, and bromine—three key inputs to silicon wafers—largely sourced from the region. In addition, Saudi Arabia, Qatar, the United Arab Emirates, and other regional petrostates have become key investors in the American AI firms that purchase most of those chips.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Because of the war in Iran, the Strait of Hormuz is functionally closed to most shipping vessels, stranding one-fifth of the world’s exports of natural gas, one-third of the world’s exports of crude oil, and significant quantities of the planet’s exportable fertilizer, helium, and sulfur. Meanwhile, Iran and Israel have begun bombing much of the fossil-fuel infrastructure in the region, which could take many years to replace. In only a month of war, the price of Brent crude—a global oil benchmark—has jumped by 40 percent and could more than double, liquefied-natural-gas prices are soaring in Europe and Asia, and &lt;a href="https://www.reuters.com/business/energy/helium-prices-soar-qatar-lng-halt-exposes-fragile-supply-chain-2026-03-12/"&gt;helium spot prices&lt;/a&gt; have already doubled. The strait is “critical to basically every aspect of the global economy,” Sam Winter-Levy, a technology and national-security researcher at the Carnegie Endowment for International Peace, told us. “The AI supply chain is not insulated.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The situation could quickly deteriorate from here. A helium crunch could trigger a shortage of AI chips or cause chip prices to rise. AI companies need ever more advanced chips to fill their data centers—at higher prices, the massive server farms, already hurting from elevated energy costs caused by the war, would have almost no hope of becoming profitable. Without these chips, new data centers would not be built or would sit empty. Astronomical tech valuations, and in turn the entire stock market, could collapse.&lt;/p&gt;&lt;p class="dropcap"&gt;O&lt;span class="smallcaps"&gt;ne industry’s precarious position&lt;/span&gt; isn’t usually everyone’s problem. Unfortunately, AI is different. The biggest data-center players, known as hyperscalers, are among the biggest corporations in the history of capitalism; they include Microsoft, Google, Meta, and Amazon. But even they will be strained by collectively spending nearly $700 billion on AI in a single year. To get the money for these unprecedented projects, data-center providers are beginning to take on &lt;a href="https://fortune.com/2025/11/19/big-5-ai-hyperscalers-quadruple-debt-fund-ai-operations/"&gt;colossal amounts of debt&lt;/a&gt;.
Some of this is done through creative deals with private-equity firms including Blackstone, BlackRock, and Blue Owl Capital—which themselves operate as shadow banks of sorts that, since the most recent financial crisis, have arguably become as powerful and as influential as Bear Stearns and Lehman Brothers were prior to 2008. Endowments, pensions, insurance funds, and other major institutions all trust private equity to invest their money.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For a while, it seemed like every time Google or Microsoft announced more data-center investments, their stock prices rose. Now the opposite occurs: The hyperscalers are spending far more, but investors have started to notice that they are not generating anywhere near the revenue they need to. The data-center boom’s top players—Google, Meta, Microsoft, Amazon, Nvidia, and Oracle—have all lost 8 to 27 percent of their value since the start of the year, making them a huge drag on the overall stock market. And the $121 billion of debt that hyperscalers issued in 2025, four times what they had averaged in prior years, is &lt;a href="https://www.reuters.com/business/retail-consumer/analysts-revise-ai-hyperscaler-debt-forecasts-after-amazon-bond-sale-2026-03-17/"&gt;expected&lt;/a&gt; to grow dramatically.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;All of the major players in this investment ecosystem are vulnerable. Private-equity firms are being squeezed on both ends by generative AI: During the coronavirus pandemic, they bought up software companies, which are now plummeting in value because AI is expected to eat their lunch. Meanwhile, private equity’s new investment strategy, data centers, is &lt;em&gt;also&lt;/em&gt; falling apart because of AI. Blackstone, Blue Owl, and the like are sinking huge sums into data-center construction on the assumption that lease payments from tech companies will service that debt. To pay for their investments, private-equity companies raised money from major financial institutions—but now the viability of those lease payments is coming into question as the hyperscalers’ cash flow is strained. “There’s a reason to think we’re seeing some of the same 2008 dynamics now,” Brad Lipton, a former senior adviser at the Consumer Financial Protection Bureau and now the director of corporate power and financial regulation at the Roosevelt Institute, told us. “Everyone’s getting tied up together. Banks are lending money to private credit, which in turn lends it elsewhere. That amps up the risk.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/2026/03/ai-job-loss-jevons-paradox/686520/?utm_source=feed"&gt;Annie Lowrey: How to guess if your job will exist in five years&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The way the money moves is concerning, but so is the AI industry’s underlying business model. At every layer, the technology appears to decrease the value of its own assets. The advanced AI chips that make up the majority of the cost of a data center? Their value rapidly decreases as they are superseded by the next generation of chips, meaning that the ultimate backstop for all of the data-center debt—selling the data center itself—is not actually a backstop. The way that AI companies make money when people use their products is also deflationary. OpenAI, Anthropic, and others charge users for “tokens,” the components of words processed by their bots.
This means that tokens are an industrial commodity akin to, say, crude oil or steel. But unlike the price of other commodities, the cost of each token is rapidly decreasing owing to advancements in AI’s capabilities. Kedrosky called this “a death spiral to zero.” As the value of a token plummets, the value of what data centers can produce also falls.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The war in Iran affects data-center finances as well. Should energy prices continue to skyrocket, so will the cost of this already very expensive computing equipment, because it needs tremendous amounts of energy to manufacture and operate. And the war has exposed physical risks to these buildings. Janet Egan, a senior fellow at the Center for a New American Security, described data centers to us as “large, juicy targets.” It is impossible to hide these facilities, which can cover 1 million square feet. Earlier this month, Iran bombed Amazon data centers in the UAE and Bahrain. American hyperscalers had been planning to build far more data centers in the region, because the Trump administration and the AI industry have sought funding from Saudi Arabia, the UAE, Qatar, and Oman. Now there’s a two-way strain on those relationships. The physical security of the data centers is more precarious, and the conflict is damaging the economic health of the petrostates, thereby jeopardizing a major source of further investment in American AI firms. The Trump administration “staked a lot on the Gulf as their close AI partner, and now the war that they’ve launched poses a huge threat to the viability of the Gulf as that AI partner,” Winter-Levy said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Plus, “what’s to prevent Iran or a proxy group, or another malign actor, from tomorrow launching an armed drone against a data center in Northern Virginia?” Chip Usher, the senior director for intelligence at the Special Competitive Studies Project, a national-security and AI think tank, told us. “It could happen. Our defenses are not adequate.” State-sponsored cyberattacks of the variety Iran is known for could also knock a data center offline. You can build all manner of defenses—reinforced concrete, drone-interception systems—but doing so adds cost and time to already costly and slow construction.&lt;/p&gt;&lt;p class="dropcap"&gt;J&lt;span class="smallcaps"&gt;ust a few things going a bit wrong&lt;/span&gt; could compound, all at once, into a cataclysm. To wit: Qatari and Saudi money dries up. Sustained high oil and natural-gas prices drive up the costs of manufacturing chips and running data centers. Already cash-strapped hyperscalers struggle to make lease payments on their data centers, while similarly strained private lenders suffer as all of the AI bonds become deadweight. Tech valuations fall, taking public markets with them; private-equity firms have to dump their assets at fire-sale prices, putting intense stress on the institutional investors and banks. The rest of the economy, drained of investment because everything was poured into data centers for years, is already weak. Unemployment goes up, as do interest rates. “Bubbles pop. That’s the system,” Lipton said. “What isn’t supposed to happen is that it takes down the whole financial system. But the concern here is that AI investment isn’t confined and may spread to the whole economy.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even if Iran and the Strait of Hormuz don’t directly trigger an AI-driven financial crisis, the odds are decent that another vector could. (Remember tariffs?)
Energy prices could stay elevated for years, because the targeted fossil-fuel facilities in the Persian Gulf will take a long time to restore. As the U.S. directs huge amounts of attention and military resources toward Iran, it’s easy to imagine China launching an invasion of Taiwan—a scenario that &lt;a href="https://www.nytimes.com/2026/02/24/technology/taiwan-china-chips-silicon-valley-tsmc.html"&gt;terrifies&lt;/a&gt; Silicon Valley, because it would halt the production of chips needed to train frontier models. That’s not even considering the single Dutch company that makes the high-tech lithography machines used to print virtually all AI chips, or the German company that makes the mirrors used in those machines. “There are too many ways for it to fail for it not to fail,” Kedrosky said of the AI industry’s web of risk. “All you can say for sure is this is a fragile and overdetermined system that must break, so it will.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are, of course, possibilities other than a full-blown, AI-driven financial crisis. Data-center spending could cool gradually enough that a crash is avoided. The revenues of Anthropic and OpenAI have been multiplying every year, which proponents argue means that generative-AI products are on track to eventually become profitable. But on the current trajectory, that would still take years, and there are good reasons to think that this growth will slow or halt. Notably, the main draw of AI tools is “efficiency”: Rather than growing their overall output and the opportunities available to people, executives are hoping that AI will allow them to make cuts to their business operations. The medium-term success of generative AI would likely involve &lt;a href="https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/?utm_source=feed"&gt;millions of people being put out of work&lt;/a&gt;. The range of outcomes seems to run from mildly bad to historically so.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Should the system break, much of the blame would lie squarely with the technology companies. The stakes of this build-out, from the beginning, have been framed in civilizational terms—a geopolitical race alongside an existential one. The winners will control the future and reap the rewards. At every step of the way, AI firms have appeared to prioritize speed above the physical security of data centers, supply-chain redundancy, energy efficiency and independence, political stability, even financial returns. And in that quest for unbridled growth, the AI industry has wrested ungodly amounts of capital from investors all looking for the next big thing, ensnaring the entire economy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Simultaneously, these firms have courted and even bent the knee to a presidential administration that has encouraged their “let it rip” ethos, only to watch as that same administration has plunged the industry into this emerging polycrisis. The AI industry was not made for the turbulence its leaders have helped usher in. The situation has grown so ungainly and untenable that, if Silicon Valley is merely forced to slow down, the viability of all this spending will likely be called into question in ways that could be devastating for many. In finance, being early is the same as being wrong. AI firms want the world to think they’re right on time.
The world may have other plans.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/IVFBCxc2jIXqe2KB2LEnwKydNPU=/media/img/mt/2026/03/2026_03_26_datacenter_mpg/original.jpg"><media:credit>Nathan Howard / Bloomberg / Getty</media:credit><media:description>An Amazon Web Services data center in Manassas, Virginia</media:description></media:content><title type="html">Welcome to a Multidimensional Economic Disaster</title><published>2026-03-26T16:44:54-04:00</published><updated>2026-03-27T07:40:22-04:00</updated><summary type="html">The AI boom wasn’t built for the polycrisis.</summary><link href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686535</id><content type="html">&lt;p&gt;Shower thoughts are typically best left in the shower. Such as: What might Claude the AI chatbot have to say about Claude Monet?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Earlier this month, San Francisco’s de Young Museum unveiled its newest exhibition, “Monet and Venice,” which is dedicated to the impressionist painter’s beautiful and meditative canvases of the floating city. And Anthropic, perhaps seizing on a marketing opportunity, is one of the show’s lead sponsors. Through tomorrow, visitors are able to partake in a temporary “interactive experience” that Anthropic set up in a room adjacent to the galleries. Essentially, the AI firm turned two typewriters into interfaces to chat with Claude. You type in a question about the exhibition, and Claude, drawing on information about Monet that the museum provided, such as exhibit labels, punches out an answer onto the same sheet of cream cardstock.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When I approached one of the Claude typewriters, which were placed next to art books and paintbrushes on top of wooden desks, an employee instructed me on how to proceed and stressed, repeatedly, that I should not prompt the bot with more than eight to 10 words. To get things started, Claude typed onto the paper, “What caught your eye in Monet and Venice? Type a word or short phrase and I’ll tell you more.” Questions I really wanted to ask—about the intentions behind and effects of the seemingly coarse weave of the canvases, or how Monet, obsessed with color, selected his pigments—were hard to pare down on the spot. I wrote that I noticed “shimmering water in varying lights.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/?utm_source=feed"&gt;Read: The human skill that eludes AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Claude paused for several seconds, then typed a response about Monet’s approach to painting water that restated, in many instances verbatim, information that I’d learned from wall text throughout the galleries. I had follow-up questions, but the paper ejected too quickly for me to ask them. In theory, Claude the AI was supposed to deepen my knowledge of Claude the painter.
But all the typewriter added to my experience was ink and, I suppose, a piece of reprocessed dead tree to take home.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic’s sponsorship of, and installation alongside, “Monet and Venice” is the latest in a litany of attempts by AI companies to purchase cultural cachet. Typewriters, stationery, fine-art museums, the quintessential impressionist painter—these are all associated with taste, beauty, and craft, as well as with intentionality and care, the opposite of the ruthless technological efficiency that repels many from generative AI. OpenAI, for its part, recently &lt;a href="https://www.wsj.com/tech/ai/openai-backs-ai-made-animated-feature-film-389f70b0?gaa_at=eafs&amp;amp;gaa_n=AWEtsqec9IrACTV2Hu6Qz2B51d0R8Ip0t3RaxFzNusvGvCHqgKjym9Z1dcnp&amp;amp;gaa_ts=69c3e861&amp;amp;gaa_sig=amt_w3AXK2WratACyVj-j6evd3RDQR_FmWWrUv2AD8OdsOXgLO7lzfFBKSbiSCf6kDfHR0J_6o03_rLWjMY9Qg%3D%3D"&gt;backed&lt;/a&gt; an AI-animated film aiming to debut at this year’s Cannes Film Festival. The ChatGPT maker has also partnered with the Palace of Versailles to create an app to let visitors “talk” with statues in the garden—spewing, it would &lt;a href="https://www.nytimes.com/2025/07/30/arts/design/versailles-ai-app.html"&gt;appear&lt;/a&gt;, empty clichés. (“Perhaps strength lies in understanding both beauty and power together,” Achilles told me.) Last fall, Anthropic partnered with Air Mail, a weekly newsletter with a small storefront in Manhattan, to distribute blue baseball hats that read &lt;span class="smallcaps"&gt;thinking&lt;/span&gt;, as in &lt;em&gt;thinking cap&lt;/em&gt;; tote bags; and little packets of Anthropic-branded, otherwise unlabeled wildflower seeds. I was too scared of what an “Anthropic” plant would be to sow mine.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yet this is also the same company that &lt;a href="https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/"&gt;ripped the spines&lt;/a&gt; off millions of books, scanned their pages, and &lt;a href="https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/?utm_source=feed"&gt;fed the text into Claude’s training data&lt;/a&gt;. Companies and wealthy scions donate to museums and sponsor exhibitions all the time, sure. Bank of America &lt;a href="https://www.brooklynmuseum.org/press/brooklyn-museum-presents-monet-venice"&gt;sponsored&lt;/a&gt; “Monet and Venice” at the Brooklyn Museum, where the show debuted; the Sackler family has eponymous museum wings around the country. Even so, leveraging historic artworks to elevate the brand of a company whose product is shaking the very foundations of human culture is just too on the nose. Let’s not pretend that the Claude AI–Claude Monet typewriter room is anything more than a hollow gimmick. (Anthropic declined to answer questions about the typewriters and exhibition sponsorship.)&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-art-ted-chiang-automation/679715/?utm_source=feed"&gt;Read: Ted Chiang is wrong about AI art&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;After using the device, I was directed to two file cabinets filled with Anthropic-branded postcards and &lt;span class="smallcaps"&gt;Keep thinking&lt;/span&gt; bookmarks.
Stacked on top of one of the file cabinets were three large books titled &lt;em&gt;Édouard Manet&lt;/em&gt;,&lt;em&gt; Paul-Cézanne&lt;/em&gt;, and&lt;em&gt; Claude Monet.&lt;/em&gt; The errant hyphen in Cézanne’s name, and an identical font across all three covers that looked very similar to an Anthropic typeface, caught my eye. I picked up the top title, ostensibly about Manet, to examine its contents and found it to be almost weightless—these objects were not bound sheaves of paper, it turned out, but cardboard boxes. Even Jay Gatsby had the decency to fill his library with real books, if unopened ones.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Like many people, I adore both the work of Claude Monet and the canals of Venice. I was fortunate enough to grow up in New York City, going to the Metropolitan Museum of Art on weekends and the Museum of Modern Art for family programs, where Monet’s monumental water-lily canvases were among the many works that beckoned me to fall in love with painting. My mother went to college in Venice. I found the exhibition dedicated to Monet’s paintings of Venice enchanting; I had seen it in Brooklyn as well, and will surely return at least once more.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Monet’s dappled brushstrokes and the thick, coarse texture of his paint; how his palette varies by season and time of day, the same sea composed of stunning blues on one canvas and a fury of greens and pinks on an adjacent one; the impressionist’s paintings alongside depictions of Venice by James McNeill Whistler, Pierre-Auguste Renoir, and Canaletto—the exhibition beckons visitors to view canvases from up close and from afar, to look at paintings in isolation and in juxtaposition. I found myself most drawn to the lesser-known bridges and villas depicted, trying to recall if my mother and I had walked by them.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Monet sent letters and postcards across a continent of space and a century of time, to be imbued with new and varied meanings by every curator, software engineer, child, and parent who lays eyes on them. An art gallery was already an interactive experience.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/BXGqYuSEqup5VvO4pSD796g9-OM=/media/img/mt/2026/03/2026_03_23_Wong_Claude_Monet_final3/original.png"><media:credit>Illustration by Akshita Chandra / The Atlantic</media:credit></media:content><title type="html">When Claude Met Claude</title><published>2026-03-25T15:56:18-04:00</published><updated>2026-03-25T17:34:40-04:00</updated><summary type="html">Why is Anthropic sponsoring an exhibition about Monet?</summary><link href="https://www.theatlantic.com/technology/2026/03/claude-monet-ai-typewriter/686535/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:39-686064</id><content type="html">&lt;p&gt;&lt;i&gt;Photographs by Landon Speers&lt;/i&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;A&lt;span class="smallcaps"&gt;s we drove through&lt;/span&gt; southwest Memphis, KeShaun Pearson told me to keep my window down—our destination was best tasted, not viewed. Along the way, we passed an abandoned coal plant to our right, then an active power plant to our left, equipped with enormous natural-gas turbines. 
Pearson, who directs the nonprofit Memphis Community Against Pollution, was bringing me to his hometown’s latest industrial megaproject.&lt;/p&gt;&lt;p&gt;Already, the air smelled of soot, gasoline, and asphalt. Then I felt a tickle sliding up my nostrils and down into my throat, like I was getting a cold. As we approached, I heard the rumble of cranes and trucks, and then from behind a patch of trees emerged a forest of electrical towers. Finally, I saw it—a white-walled hangar, bigger than a dozen football fields, where Elon Musk intends to build a god.&lt;/p&gt;&lt;aside class="callout-placeholder" data-source="magazine-issue"&gt;&lt;/aside&gt;&lt;p&gt;This is Colossus: a data center that Musk’s artificial-intelligence company, xAI, is using as a training ground for Grok, one of the world’s most advanced generative-AI models. Training these models takes a staggering amount of energy; if run at full strength for a year, Colossus would use as much electricity as 200,000 American homes. When fully operational, Musk has written on X, this facility and two other xAI data centers nearby will require nearly two gigawatts of power. Annually, those facilities could consume roughly twice as much electricity as the city of Seattle.&lt;/p&gt;&lt;p&gt;To get Colossus up and running fast, xAI built its own power plant, setting up as many as 35 natural-gas turbines—railcar-size engines that can be major sources of smog—according to imagery obtained by the Southern Environmental Law Center. Pearson coughed as we drove by the facility. The scratch in my throat worsened, and I rolled up my window.&lt;/p&gt;&lt;p&gt;xAI’s rivals are all building similarly large data centers to develop their most powerful generative-AI models; a metropolis’s worth of electricity will surge through facilities that occupy a few city blocks. These companies have primarily made their chatbots “smarter” not by writing niftier code but by making them bigger: ramming more data through more powerful computer chips that use more electricity. OpenAI has announced plans for facilities requiring more than 30 gigawatts of power in total—more than the largest recorded demand for all of New England. Since ChatGPT’s launch, in November 2022, the capital expenditures of Amazon, Microsoft, Meta, and Google have exceeded $600 billion, and much of that spending has gone toward data centers—more, even after adjusting for inflation, than the government spent to build the entire interstate-highway system. “These are the largest single points of consumption of electricity in history,” Jesse Jenkins, a climate modeler at Princeton, told me.&lt;/p&gt;&lt;p&gt;Even conservative analyses forecast that the tech industry will drop the equivalent of roughly 40 Seattles onto America’s grid within a decade; aggressive scenarios predict more than 60 in half that time. According to Siddharth Singh, an energy-investment analyst at the International Energy Agency, by 2030, U.S. data centers will consume more electricity than all of the country’s heavy industries—more than the cement, steel, chemical, car, and other industrial facilities put together. 
Roughly half of that demand will come from data centers equipped for the particular needs of generative AI—programs, such as ChatGPT, that can produce text and images, solve complex math problems, and perhaps one day inform scientific discoveries.&lt;/p&gt;&lt;figure class="full-width"&gt;&lt;img alt="photo of enormous warehouse with numerous external cooling structures, with bronzed field of corn growing in foreground" height="522" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_MEMPHIS_0808_16x9/69a62b640.jpg" width="928"&gt;
&lt;figcaption class="caption"&gt;Colossus, Elon Musk’s data center in Memphis, can consume as much electricity over the course of a year as 200,000 American homes. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;To power AI, energy and tech companies are turning to fossil fuels, which they regard as more reliable and readily available than wind, solar, or nuclear. Asked where the energy for data centers should come from, OpenAI CEO Sam Altman &lt;a href="https://conversationswithtyler.com/episodes/sam-altman-2/"&gt;has repeatedly said&lt;/a&gt;, “Short-term: natural gas.” (OpenAI and &lt;i&gt;The Atlantic&lt;/i&gt; have a corporate partnership.) A Louisiana utility plans to build three natural-gas plants for a Meta data center that, upon completion, will be among the largest in this hemisphere. The lifespans of coal plants, too, are being extended to power new data centers. And the IEA estimates that data-center emissions could more than double by 2030—becoming one of the fastest-growing sources of greenhouse gases in the world.&lt;/p&gt;&lt;p&gt;The optimist’s case is that, by then, advanced nuclear reactors will have obviated many of the new fossil-fuel plants, and AI tools will have invented technologies that can solve the climate crisis. That may well happen. But today, “the market has converged on &lt;i&gt;Add gas now, and then add nuclear later&lt;/i&gt;,” Jenkins said. In other words, if natural-gas turbines seem to offer the most expedient path to an AI-enhanced future, then clean air may have to wait.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;A data center &lt;/span&gt;is a planet of contradictions: heat without motion, shelter without bodies, light without sky. “The lifeblood of the internet is essentially flowing through these sites,” Jon Lin, the chief business officer at Equinix, one of the world’s largest data-center companies, told me in an Equinix facility in Loudoun County, Virginia. Behind Lin, someone in a green hoodie fiddled with computer chips shelved in a row of humming, refrigerator-size cabinets on the data-center floor. There were no windows, to keep the facility secure and to guard against the sun’s heat. As we walked along a corridor of cabinets, motion-activated lights illuminated the way. Farther ahead, only faint blue lights and blinking computer equipment pierced the darkness.&lt;/p&gt;&lt;p&gt;Ever since the first data centers were built, in the mid-20th century, their &lt;a href="https://www.ibm.com/think/topics/data-centers"&gt;purpose has remained constant&lt;/a&gt;: pack computer equipment close together to store and send information as efficiently as possible. But their scale has grown dramatically. The original data centers were simply large rooms housing mainframe computers. With the rise of the internet, in the 1990s, backroom computers gave way to entire buildings, such as the one Lin and I stood in—facilities that enable us to stream movies, trade stocks, store medical records, manage supply chains, and make military decisions. Now the AI race is requiring vastly greater computing power, which has led to even bigger data centers, ones filled with computer chips that are much hungrier and run much hotter.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/03/nvidia-chips-gpu-generative-ai/677664/?utm_source=feed"&gt;Read: The lifeblood of the AI boom&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;In a traditional data center, the cabinets are cooled by industrial fans—as we walked through the Equinix facility, I felt a constant breeze on my cheek—and rooftop cooling towers eventually expel the heat. 
The cabinets in a generative-AI data center use dozens of times more electricity. Lin showed me a row of AI-specialized cabinets used by Block, the firm that owns Square and Cash App, which radiated enough heat to make me break a sweat; to cool them, water runs into special metal plates that sit atop the chips inside the cabinets. AI data centers are filled with similar equipment, and cooling thousands of cabinets &lt;a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/?utm_source=feed"&gt;can require a lot of water&lt;/a&gt;. Public records from the Memphis water utility, for instance, show that the address for Colossus used more than 11 million gallons in September alone, as much as 150 homes use in an entire year. When a data center’s cooling equipment malfunctions, spiraling heat combined with humid air has yielded that rarest of meteorological events: indoor rain.&lt;/p&gt;&lt;p&gt;Placing servers in the same or neighboring buildings allows them to exchange information seamlessly and quickly, and Loudoun County has the highest concentration of data centers in the world, with 199 already operating and another 30 or so on the way. According to one report, 13 percent of global data-center capacity is squeezed into the county’s 520 square miles. One particularly dense stretch is called “Data Center Alley.”&lt;/p&gt;&lt;figure class="full-width"&gt;&lt;img alt="photo from inside warehouse of metal mesh cage around stacks of computer equipment with numerous cables extending to ceiling" height="619" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_ASHBURN_1165/553c9896e.jpg" width="928"&gt;
&lt;figcaption class="caption"&gt;Cabinets of computer chips at a data center in Loudoun County, Virginia (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Northern Virginia offers a glimpse into what the AI rush may bring to the rest of the nation. Loudoun is running out of space, but new data-center hubs are popping up in Phoenix, Atlanta, and Dallas. Amazon and Meta are building AI data centers in Indiana and Louisiana, respectively, that will each require more than two gigawatts of electricity, dozens of times more than standard facilities. OpenAI has proposed that the U.S. establish “AI Economic Zones”: little Loudouns everywhere.&lt;/p&gt;&lt;p&gt;As I drove into Data Center Alley with Julie Bolthouse, the director of land use at the Piedmont Environmental Council, she explained how to distinguish data centers from warehouses: cooling towers on the roof, dozens of backup diesel generators to one side, no windows (or false ones, decorative glass panels backed by a wall of concrete). There didn’t seem to be any warehouses, though, and I gave up counting data centers within minutes, unable to tell where one facility ended and the next one began. Bolthouse helps run a coalition aiming to slow data-center development throughout Virginia, but in Loudoun, it is too late. So many data centers are under construction just north of Dulles International Airport that hills of freshly dug dirt loom over roads and orange dust tints the air. Should Musk successfully colonize Mars, the early stages of terraforming might look like this.&lt;/p&gt;&lt;p&gt;The architect of this labyrinth is Buddy Rizer, Loudoun’s longtime executive director of economic development. Rizer has courted data centers with regulatory and state tax incentives, and when we met in his office, he told me that since 2009, at least one has been under construction at any given time. Data centers are typically operated by only a few dozen staff members, but building them has produced a steady source of employment. They also provide nearly 40 percent of the county’s budget, helping to pay for police, schools, and parks for a population that has grown steadily since 2010.&lt;/p&gt;&lt;p&gt;Within a 1.5-mile radius of us, Rizer said, were 12 substations: small jungles of metal poles and wiring that convert high-voltage electricity into a form you’d use to charge your iPhone or, in this case, run a data center. All around us were towering utility poles strung with high-voltage transmission lines that carry raw electricity from power plants to those substations; they hang over Loudoun like a canopy, or a cobweb. Follow any one cable far enough, and you’re likely to reach a data center.&lt;/p&gt;&lt;p&gt;For years to come, the AI race is projected to be the main force driving roughly 2 percent annual growth in U.S. electricity demand, which has been stagnant for nearly two decades. Nationally, this is not a crisis; regionally, it may be. Dominion Energy, the major electrical utility in Virginia, predicts growth of 5.5 percent each year, with overall electricity demand doubling by 2039. 
Aaron Ruby, a spokesperson for Dominion, told me that the company is preparing to meet that surge, though he was frank about the challenge: “We are experiencing the largest growth in power demand since the years following World War II.” By the end of the decade, training the industry’s most powerful AI model could require as much electricity as millions of American homes.&lt;/p&gt;&lt;p&gt;In China, hundreds of data centers have been announced since 2023, and additional facilities are planned for &lt;a href="https://www.scientificamerican.com/article/china-powers-ai-boom-with-undersea-data-centers/"&gt;beneath the ocean&lt;/a&gt; and &lt;a href="https://www.bloomberg.com/news/articles/2025-07-08/china-builds-ai-dreams-with-giant-data-centers-in-the-desert"&gt;in the desert&lt;/a&gt;. China’s biggest advantage in the AI race is not the talent of its software engineers or the quantity of its data centers, but its abundance of energy: In 2024, the nation produced nearly as much electricity as the U.S., Europe, and India combined.&lt;/p&gt;&lt;p&gt;President Trump has declared that the nation is in an “energy emergency,” and been vocal about the need to build more power plants for the U.S. to win the AI race. A senior executive at OpenAI told me that the U.S. needs to activate every resource at its disposal—solar panels, natural-gas turbines, nuclear reactors. And Anthropic, OpenAI’s top rival, published a report arguing that the U.S. should streamline permitting for data centers and power plants in order to keep pace with China.&lt;/p&gt;&lt;p&gt;But an internet-driven energy crisis has failed to materialize before: As fiber-optic cables were being laid in Loudoun in the 1990s, energy companies built more coal- and gas-fired plants. “Dig More Coal—The PCs Are Coming,” &lt;a href="https://www.forbes.com/forbes/1999/0531/6311070a.html"&gt;read a 1999 &lt;i&gt;Forbes&lt;/i&gt; headline&lt;/a&gt;. When the demand didn’t arrive, the nation was left with a glut of gas plants and multiple bankrupt energy companies.&lt;/p&gt;&lt;p&gt;The generative-AI boom, too, could prove to be a bubble. The technology remains extraordinarily expensive, largely because of the cost of advanced computer chips, and no AI firm has presented a convincing business model. One path to profitability might be more efficient algorithms—which would preclude the need for the new natural-gas plants. And if AI doesn’t turn out to be as transformative a technology as experts predict, swaths of data centers could be left unused or unfinished—ruins from a future that never came to pass.&lt;/p&gt;&lt;p&gt;Either way, the rush to power data centers as fast as possible has already pushed the U.S. to expand its reliance on fossil fuels.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;Behind her one-story &lt;/span&gt;brick home in southwest Memphis, Sarah Gladney grows tomatoes, and when the vines wilted early last summer, she had a suspect in mind. “When the wind comes up early in the morning, I can smell it,” Gladney told me, nodding in the direction of Colossus. One of her neighbors, Marilyn Gooch, told me the data center’s turbines have made her uncertain about whether she should let her grandchildren visit.&lt;/p&gt;&lt;p&gt;Their neighborhood, Boxtown, is named for the railway boxcars that formerly enslaved people used to build homes, and is still almost entirely Black. Virtually every heavy industry has set up nearby—a wastewater facility, an oil refinery, a coal-fired power plant. 
Colossus itself, which is next to a steel mill and a trucking and rail yard, occupies the shell of an old oven factory. Life expectancy in and around Boxtown is more than five years below the national average, and the cancer risk in southwest Memphis is four times the national rate. What KeShaun Pearson and I smelled may not have been Colossus itself; xAI had chosen an area so besieged by heavy industry that any exhaust from the facility’s turbines would mix in with a pervasive smog.&lt;/p&gt;&lt;figure&gt;&lt;img alt="photo of simple railroad-style house with peeling white paint and large trees in background" height="908" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_MEMPHIS_0695/aafb79baa.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;In Boxtown, a neighborhood in southwest Memphis, many residents and elected officials were unaware that Colossus was being built until the project was well under way. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Colossus was built so quickly that many Boxtown residents and elected officials didn’t know what was happening until the project was well under way. Construction began in May 2024, and the project was announced the following month. Gladney, Pearson, and his younger brother Justin—who represents the district in the Tennessee General Assembly—found out about the project that day in June. By Labor Day weekend, less than three months after the press conference, Colossus was up and running.&lt;/p&gt;&lt;p&gt;The company installed its own gas turbines because that was faster than waiting on the local grid, and argued that it did not need a permit to do so because the turbines would operate for less than a year, a claim that the Southern Environmental Law Center, representing the NAACP, contested in a letter threatening to sue the company. (xAI has since received a permit for 15 turbines, and is reportedly operating 12.) Meanwhile, residents report that they have had respiratory issues flare up since xAI moved in.&lt;/p&gt;&lt;p&gt;Last June, when an analysis commissioned by the city of Memphis found “no dangerous levels” of pollutants in Boxtown and at two other test locations, the SELC criticized the study’s methods. Using satellite data, &lt;a href="https://time.com/7308925/elon-musk-memphis-ai-data-center/"&gt;researchers at the University of Tennessee at Knoxville found&lt;/a&gt; that levels of nitrogen dioxide—which causes smog and is associated with asthma and other respiratory problems—near Colossus have been substantially elevated since its public announcement. (xAI &lt;a href="https://x.ai/memphis/fact-v-fiction"&gt;says on its website&lt;/a&gt; that it will install technology to reduce the pollution from its turbines. The company, the Shelby County Health Department, and the Memphis mayor’s office did not respond to a list of questions about Colossus’s environmental impacts and xAI’s presence in Memphis; the Greater Memphis Chamber of Commerce declined to comment.)&lt;/p&gt;&lt;p&gt;Fossil fuels have become the default for data centers around the country. OpenAI’s first Stargate data center, in Texas, also has its own gas-fired power plant. Chevron and Exxon are angling to hook natural-gas facilities directly into data centers, and the world’s three major manufacturers of natural-gas turbines all advertise their products as convenient energy sources for data centers. Michael Eugenis, the director of resource planning at Arizona Public Service, the state’s largest utility, told me that because of the demand from data centers, the company is adding more fossil-fuel capacity than it otherwise would have; natural gas will help power Microsoft, Amazon, and Oracle data centers, too.&lt;/p&gt;&lt;figure&gt;&lt;img alt="photo of transmission lines with large towers and large spools of metal cable in foreground" height="665" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_MEMPHIS_0401/26345e564.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Transmission lines, like these in Memphis, carry electricity throughout the grid—including to data centers. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;In early 2025, a company affiliated with xAI purchased a former warehouse and nearly 200 acres south of Colossus to set up another data center, Colossus II. On a weekday afternoon, the road near the site was dense with traffic—not dump trucks and forklifts, but sedans lining up outside the adjacent public school for pickup. An xAI affiliate bought a retired Duke Energy plant about a mile away in Mississippi that is likely to power this facility, and filed an application to operate 41 natural-gas turbines on the site. Those turbines could emit more carbon dioxide annually than the city of San Jose.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;On an island &lt;/span&gt;in the Susquehanna River, just south of Harrisburg, Pennsylvania, I saw another way to power the AI boom. Above me loomed four beige hourglass-shaped structures, each some 365 feet tall: the cooling towers for Three Mile Island, the site of the worst nuclear disaster in American history. On March 28, 1979, the facility was only a few years old, and nuclear-energy reactors were being built across the country. But a series of mechanical and human errors caused the core of one of the reactors, Unit Two, to rapidly overheat and leak radioactive material. The effects on human health and the environment were negligible, but together with the catastrophe at Chernobyl seven years later, the partial meltdown turned public sentiment strongly against nuclear power.&lt;/p&gt;&lt;p&gt;Three Mile Island’s Unit One went undamaged and continued operating, after a brief pause, until 2019. By then natural gas was too cheap, the regulatory environment was too unfriendly, and the losses—hundreds of millions of dollars—were too great for Constellation Energy, which owns Unit One, to keep the plant running.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2023/03/climate-change-nuclear-power-safety-radioactive-waste/672776/?utm_source=feed"&gt;From the March 2023 issue: Jonathan Rauch on the real obstacle to nuclear power&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Nobody has ever resuscitated a fully shut-down U.S. nuclear-power plant, but in fall 2024, Constellation announced plans to do just that. Microsoft had agreed to purchase electricity from Unit One to power its data centers over the next two decades, a guarantee allowing Constellation to spend the $1.6 billion needed to restart the plant. It was the ultimate bellwether of the AI age: Experts have long argued that we need clean nuclear power to reduce the grid’s existing carbon footprint. Instead, Three Mile Island will help offset a new source of emissions from a single company.&lt;/p&gt;&lt;p&gt;Constellation is now reversing the steps it took to decommission the reactor: renewing its license, restoring equipment, retraining personnel. Dave Marcheskie, a community-relations manager, explained this to me in a conference room overlooking the nuclear core, which is housed in a building that resembles a large grain silo. Behind him, a clock counted down the time to launch: 650 days, zero hours, 42 minutes, and one second.&lt;/p&gt;&lt;p&gt;As the need for carbon-free electricity grows more urgent, Americans are &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/america-nuclear-power-revival/680842/?utm_source=feed"&gt;having to reckon with nuclear energy again&lt;/a&gt;, and the AI boom has provided the industry with wealthy backers and an army of tech cheerleaders. 
Meta and Amazon are buying electricity from large nuclear-power plants, and nearly every major data-center company is investing in experimental nuclear technologies—especially small modular reactors, which in theory will make fission cheaper and easier to deploy.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/america-nuclear-power-revival/680842/?utm_source=feed"&gt;Read: A new reckoning for nuclear energy&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Nuclear energy has its downsides, of course. The waste is radioactive and must be stored almost indefinitely, and the meltdown at Japan’s Fukushima plant in 2011 was a reminder of how spectacularly dangerous nuclear reactors can be. But the dangers posed by the burning of fossil fuels are far more imminent.&lt;/p&gt;&lt;p&gt;At Three Mile Island, Marcheskie led me down a hall and into the actual power plant. Pipes, tubes, and hulking machines lined the floor and ceiling; a trefoil sign warned that a large tank potentially contained radioactive materials. The elevator was broken, so we walked a few stories up to the stadium-size room from which all of Three Mile Island’s electricity will flow. Scaffolding and shipping containers were scattered around a row of pistachio-green semi-cylinders. Once the plant restarts, uranium atoms ripped apart in the adjacent core will generate immense amounts of heat, vaporizing water into steam that will spin blades inside those cylinders 1,800 times a minute, which will in turn produce hundreds of megawatts of electricity.&lt;/p&gt;&lt;p&gt;This will be orchestrated from a nearby control room, where hundreds of lights and switches line muted-green walls. The shift manager, Bill Price, explained that one half of the main panel controls the nuclear core, while the other half controls the turbines. In the middle is the most important control of all: a red button that shuts down the reactor, and above it an identical button that serves as a backup. In the event of an emergency, Price said, you’d press both. I put a finger on each button and pushed.&lt;/p&gt;&lt;figure&gt;&lt;img alt="photo of very large vintage-looking green control board with dozens of dials, switches, and lights" height="998" src="https://cdn.theatlantic.com/media/img/posts/2026/03/AI_POWER_HARRISBURG_0332/da4d899e4.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;The original control room at Three Mile Island Unit One will become operational again when the reactor restarts. (Landon Speers for &lt;em&gt;The Atlantic&lt;/em&gt;)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;A small amount of the electricity generated here will support the plant itself. Microsoft &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-microsoft-nuclear-three-mile-island/679988/?utm_source=feed"&gt;is buying the remainder&lt;/a&gt; through a power-purchase agreement, a mechanism companies use to buy carbon-free electricity to match whatever their facilities draw from the grid. Power generated at Three Mile Island will help offset the energy used by data centers in Virginia and Illinois; Microsoft says it purchases enough clean energy to match all of its electricity consumption, as do Google, Amazon, and Meta. These companies are also investing in hydropower, geothermal plants, and solar panels; Google is exploring building a data center in space, to enable cloud-free access to the sun.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-microsoft-nuclear-three-mile-island/679988/?utm_source=feed"&gt;Read: For now, there’s only one good way to power AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Still, tech firms insist that nuclear and other clean technologies cannot be deployed quickly enough to meet their needs. President Trump has signed an executive order to accelerate permitting for natural-gas and coal-fired plants to support data centers. Yet China’s energy advantage in the AI race comes from nuclear reactors and solar panels, not coal and oil; the country is building nearly two-thirds of the world’s new solar and wind capacity.&lt;/p&gt;&lt;p&gt;The U.S. could still catch up, thanks to private investments by the likes of Google and Microsoft. A majority of planned electricity generation in the U.S. will be carbon-free, and running data centers on renewables can be done, Jenkins, the Princeton climate modeler, told me. Meanwhile, natural-gas turbines are so far back-ordered that acquiring one in the next few years will be virtually impossible.&lt;/p&gt;&lt;p&gt;For now, using existing power sources more wisely, rather than building new ones, may be all the AI industry needs. Electrical grids are designed for periods of peak demand—cooling on summer afternoons, heating on winter mornings—but mostly they run well below maximum capacity. Researchers at Duke University have shown that if data centers reduced their electricity consumption during some of those peaks, it would free up enough electricity to accommodate the country’s planned data centers for years. Google and xAI have already entered agreements to do so.&lt;/p&gt;&lt;p&gt;That strategy would allow tech companies to continue building more data centers without waiting for utilities to expand the grid. And time, not dollars or electrons, is the AI industry’s primary currency. Google, Microsoft, and their competitors can afford to spend historic sums without near-term financial returns, but they cannot afford to slip behind one another.&lt;/p&gt;&lt;p&gt;Time is also the biggest problem for Microsoft’s deal with Three Mile Island, which is taking years to restart. As we left the facility, Marcheskie led me south, past the beige towers and through a fog that had settled over the river. At one point we passed a cluster of concrete barrels that had escaped my attention on the drive up. Marcheskie told me that they contained all of the nuclear waste from Unit One’s 45 years of operation. 
Perhaps one day such casks will also line the perimeters of Colossus and Stargate.&lt;/p&gt;&lt;p&gt;AI may well overhaul how humans think and work, but it’s also pushing us toward another inflection point. We can unlock the promises of this technology by doubling down on the energy systems of the past, or we can seize the opportunity to push the grid into a carbon-free future. To get there, an industry that likes to move at warp speed will have to develop a quality it severely lacks: patience.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;i&gt;This article appears in the &lt;/i&gt;&lt;a href="https://www.theatlantic.com/magazine/toc/2026/04/?utm_source=feed"&gt;&lt;i&gt;April 2026&lt;/i&gt;&lt;/a&gt;&lt;i&gt; print edition with the headline “Insatiable.”&lt;/i&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gJk88N1NVclRN9wfQt34zb_tIrk=/media/img/2026/03/AI_POWER_HARRISBURG_1488_16x9/original.jpg"><media:credit>Landon Speers for The Atlantic</media:credit><media:description>Three Mile Island’s cooling towers have until recently served as grave markers for America’s nuclear-power industry.</media:description></media:content><title type="html">Inside the Dirty, Dystopian World of AI Data Centers</title><published>2026-03-13T08:00:00-04:00</published><updated>2026-03-13T10:55:49-04:00</updated><summary type="html">The race to power AI is already remaking the physical world.</summary><link href="https://www.theatlantic.com/magazine/2026/04/ai-data-centers-energy-demands/686064/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686307</id><content type="html">&lt;p&gt;The weekslong conflict between Anthropic and the Department of Defense is entering a new phase. After being designated a supply-chain risk by DOD last week, which effectively forbids Pentagon contractors from using its products, the AI company filed a lawsuit against DOD this morning alleging that the government’s actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind—including Google’s chief scientist, Jeff Dean—signed an &lt;a href="https://www.courtlistener.com/docket/72379655/anthropic-pbc-v-us-department-of-war/#entry-24"&gt;amicus brief&lt;/a&gt; in support of Anthropic, in essence lending support to one of their employers’ greatest business rivals (even as OpenAI itself has established a &lt;a href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed"&gt;controversial new contract&lt;/a&gt; with DOD).&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The standoff is unprecedented. For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm’s AI systems. 
Anthropic CEO Dario Amodei had &lt;a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/?utm_source=feed"&gt;refused terms&lt;/a&gt; that seemingly would have allowed the Trump administration to use the company’s AI systems for mass domestic surveillance or to power fully autonomous weapons, leading DOD officials &lt;a href="https://x.com/SecWar/status/2027507717469049070"&gt;to&lt;/a&gt; &lt;a href="https://x.com/USWREMichael/status/2027211708201058578"&gt;accuse&lt;/a&gt; Amodei of “putting our nation’s safety at risk” and of having a “God-complex.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Nobody knows how this dispute will end. A spokesperson for Anthropic told me in a statement that the lawsuit “does not change our longstanding commitment to harnessing AI to protect our national security” and that the firm will “pursue every path toward resolution, including dialogue with the government.” A DOD spokesperson told me that the department does not comment on litigation.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/?utm_source=feed"&gt;Read: Inside Anthropic’s killer-robot dispute with the Pentagon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But a conflict like this was inevitable, and more are sure to come. The government does not have anything close to a legal framework for regulating generative AI or, for that matter, online data collection. There are few legal, externally enforced guardrails on the use of AI in autonomous weaponry, and fewer still on how AI can be used to process the huge amounts of information that federal agencies can collect on people: location data, credit-card purchases, browsing-history data, and so on. Because the laws are loose, Anthropic and OpenAI have been able to set their own privacy policies and guidelines for how AI can and cannot be used, and then change them at will; OpenAI, Meta, and Google, for instance, have all reversed previous restrictions on military applications of AI. But this cuts in the other direction as well: Anthropic has effectively been branded an enemy of the state for opposing the administration’s desire to use its generative AI in potential autonomous-weapons systems and to surveil Americans, so long as the applications are technically legal.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Surveillance was of particular concern for the OpenAI and Google DeepMind employees who signed the amicus brief today.
They wrote that AI could significantly transform how once-separate data streams are used to keep tabs on Americans: “From our vantage point at frontier AI labs, we understand that an AI system used for mass surveillance could dissolve those silos, correlating face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/04/american-panopticon/682616/?utm_source=feed"&gt;Read: American panopticon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The Pentagon has said that it does not intend to use AI to monitor Americans en masse, and it explicitly said this in its new contract with OpenAI, which also cites several existing national-security laws and policies that DOD has agreed to abide by. But as I wrote last week, those same policies have &lt;a href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed"&gt;already permitted spying on Americans&lt;/a&gt; with existing technologies, to say nothing of AI. Meanwhile, Elon Musk’s xAI has reportedly agreed to a Pentagon contract with still less restrictive terms. The American public has no choice now but to trust that Defense Secretary Pete Hegseth, Musk, OpenAI CEO Sam Altman, and Amodei will not use AI to surveil them. (OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic has said that it is not wholly opposed to its technology’s use in fully autonomous weapons but that today’s AI models are not ready to power such weapons. The AI employees who signed today’s amicus brief, in addition to the nearly 1,000 OpenAI and Google employees who signed a &lt;a href="https://notdivided.org/"&gt;public letter&lt;/a&gt; in support of Anthropic last month, agree. An existing DOD policy about developing and using autonomous weapons is vague and intended for discrete systems with particular geographic targets; some experts have argued that it is &lt;a href="https://arxiv.org/html/2505.18371v1"&gt;likely inadequate&lt;/a&gt; for widespread, AI-enabled warfare. The policy is also not a law, and is thus subject to change and interpretation based on the opinions of any given presidential administration.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;All of these are complicated issues that demand actual deliberation. Instead, last week, President Trump &lt;a href="https://www.politico.com/news/2026/03/05/trump-unleashed-president-bullish-on-iran-eyeing-regime-change-in-cuba-and-impatient-with-ukraine-00814292?can_id=2b533ac8b104631a1551e7837c2296bf&amp;amp;email_referrer=email_3133221&amp;amp;email_subject=how-to-burst-donald-trumps-bubble-on-march-24&amp;amp;link_id=1&amp;amp;source=email-a-short-email-about-a-winnable-special-election-in-donald-trumps-home-district"&gt;told&lt;/a&gt; &lt;em&gt;Politico&lt;/em&gt;: “I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that.” Rather than listening to and learning from these debates, the administration is discouraging them.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If you take a step back, the problem of AI outpacing established rules and laws is absolutely everywhere.
Nearly four years into the ChatGPT era, schools &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-college-class-of-2026/683901/?utm_source=feed"&gt;still haven’t figured out&lt;/a&gt; what to do about not just widespread cheating but also the apparent obsolescence of some traditional forms of study altogether. Existing &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/anthropic-meta-ai-rulings/683526/?utm_source=feed"&gt;copyright law breaks down&lt;/a&gt; when applied to the use of authors’ and artists’ work, without their consent, to train generative-AI models. Even if generative-AI tools should soon automate wide swaths of the economy, neither AI firms nor governments nor employers are devoting many resources, other than writing research reports, to figuring out what to do about many millions of Americans potentially being put out of work. The energy demands of AI data centers are straining grids and setting back climate goals worldwide.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Instead of pursuing well-considered legislation by consensus, the Trump administration seems bent on having full control over AI &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;without facing any accountability&lt;/a&gt;. Congress is, as usual, slow and hapless when it comes to an emerging and powerful technology. And although AI firms frequently warn about the dangers of their technology, they are &lt;em&gt;also&lt;/em&gt; racing ahead to develop and sell ever more capable models. When faced with the prospect of greater responsibility, they typically deflect; for example, when I &lt;a href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed"&gt;spoke&lt;/a&gt; with Jack Clark, Anthropic’s chief policy officer, last summer about whether the AI industry was moving too quickly, he told me: “The world gets to make this decision, not companies.” Elsewhere, Anthropic has &lt;a href="https://www.anthropic.com/news/the-need-for-transparency-in-frontier-ai"&gt;stated&lt;/a&gt; that it “avoids being heavily prescriptive.” For his part, Altman is fond of saying that AI companies must learn “from contact with reality.” Yet the world—civil society, all of us living in this AI-saturated reality—has little say in the technology’s development.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;On Friday, in an &lt;a href="https://www.economist.com/insider/the-insider/zanny-minton-beddoes-interviews-anthropics-boss?ref=featured"&gt;interview&lt;/a&gt; with &lt;em&gt;The Economist&lt;/em&gt;, Anthropic’s Amodei more or less laid out the dynamic himself. “We don’t want to make companies more powerful than government,” he said. “But we also don’t want to make government so powerful that it can’t be stopped. We have both problems at once.” America is barreling toward a future in which nobody claims responsibility for AI. Everyone will live with the consequences.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/MGE3w_XWLxLsfEgIygXLbfHQ0hU=/media/img/mt/2026/03/2026_03_09_Wong_Pentagon_Ai_dispute_final/original.jpg"><media:credit>Illustration by Akshita Chandra / The Atlantic.
Source: Yasin Ozturk / Anadolu / Getty.</media:credit></media:content><title type="html">What Anthropic’s Clash With the Pentagon Is Really About</title><published>2026-03-09T19:46:40-04:00</published><updated>2026-03-10T08:35:12-04:00</updated><summary type="html">Who will take responsibility for the technology?</summary><link href="https://www.theatlantic.com/technology/2026/03/pentagon-anthropic-dispute/686307/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686282</id><content type="html">&lt;p&gt;Outside OpenAI’s headquarters, a handful of people gathered on Monday holding pieces of colorful chalk. They got down on their knees and started writing messages on the sidewalk. &lt;span class="smallcaps"&gt;Stand for liberty. Please no legal mass surveillance. Change the contract please.&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At issue was a business deal that the company recently signed with the Department of Defense, following &lt;a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/?utm_source=feed"&gt;the Pentagon’s sudden turn against Anthropic&lt;/a&gt;. OpenAI will now supply its technology to the military for use in classified settings, the sorts that may involve wartime decisions and intelligence-gathering—an agreement, many legal experts told me, that could give the government wide-ranging powers. “I would just really like to see OpenAI do the right thing and stand up for something, anything,” Niki Dupuis, an AI-start-up founder and one of the chalk protesters, told me.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a widely leaked internal memo that Sam Altman sent last Thursday night, a copy of which I obtained, the OpenAI CEO said that he would seek “red lines” to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk—a hefty sanction that would require anybody who sells to the Pentagon to stop using Anthropic products in their work with the military. Perhaps OpenAI was about to secure the very terms Anthropic had been denied.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But a close reading of the contract—the portions of it that OpenAI has shared with the public, anyway—indicates that the lines are, in fact, blurry. Several independent legal experts told me that, legally, the Pentagon can likely get away with using OpenAI’s technology—versions of the models that underlie ChatGPT—for mass surveillance of Americans. Moreover, the military will likely have a pathway to use OpenAI’s technology in autonomous weapons. AI models from Anthropic, DOD’s previous partner, have likely already been used for warfare; recently, its products were reportedly &lt;a href="https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/"&gt;used to identify targets in Iran&lt;/a&gt; (Anthropic declined to comment on that reporting). 
But the company had refused to allow its technology to be used in fully autonomous weapons.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/?utm_source=feed"&gt;Read: Inside Anthropic’s killer-robot dispute with the Pentagon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The Department of Defense, which the Trump administration refers to as the Department of War, declined to answer my questions about the contract. A spokesperson for OpenAI reiterated to me that the Pentagon has agreed to not use the firm’s AI system for domestic surveillance, but she did not answer specific questions. (OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;’s business team.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;“The public is in an awkward position where we have to choose between trusting OpenAI or not,” Charlie Bullock, a senior research fellow at the think tank Institute for Law &amp;amp; AI, told me. Brad Carson, who served as general counsel and then undersecretary of the Army under Barack Obama, was less compromising: In his analysis of the past week’s events, OpenAI appears “okay with using ChatGPT for what ordinary people think of as mass surveillance.”&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;Over the past week or so, Altman and OpenAI have made several announcements about the contract, including sharing some of the text in a &lt;a href="https://openai.com/index/our-agreement-with-the-department-of-war/"&gt;blog post&lt;/a&gt; last Saturday—only to modify that text in an update to the blog a few days later. The company’s messaging has been confusing and has at various points seemed to contradict its own previous statements, as well as information from the government.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI had said that it has red lines around certain applications of its technology, but the portion of the contract language that it initially published implies the opposite. The company had also suggested that it placed unique restrictions on how the government could use OpenAI models, but Jeremy Lewin, a senior State Department official, &lt;a href="https://x.com/UnderSecretaryF/status/2027594072811098230"&gt;indicated otherwise, writing that the contract simply permitted “all lawful use” of the OpenAI system—that is, anything technically legal&lt;/a&gt;. The messaging “at best makes them seem like they’re not fully on top of this, and at worst reinforces the perception, fair or not, that OpenAI has a tendency to not be very candid,” Alan Rozenshtein, a law professor at the University of Minnesota who studies emerging technology, told me. Rozenshtein was perhaps being diplomatic—the central question about OpenAI for the past several years has been less about candor and more about honesty. When Altman was briefly fired in late 2023, he had been accused of deceiving OpenAI’s nonprofit board. A third-party &lt;a href="https://openai.com/index/review-completed-altman-brockman-to-continue-to-lead-openai/"&gt;review&lt;/a&gt; commissioned by OpenAI later found that there had been a “breakdown in trust” between Altman and the board but that Altman’s “conduct did not mandate removal.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The past week has been chaotic, and observers have been hanging on every development.
Last Friday, Altman &lt;a href="https://x.com/sama/status/2027578652477821175"&gt;posted&lt;/a&gt; on X that OpenAI had reached an agreement with DOD just hours after news broke that Anthropic’s relationship with the administration would be dissolved. OpenAI’s contract, Altman wrote, contains “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.” But many were skeptical. OpenAI surely had offered something to the Pentagon that Anthropic wouldn’t. The word &lt;em&gt;prohibitions&lt;/em&gt; didn’t seem to communicate a total ban on surveillance, and the idea that “human responsibility” should be taken for autonomous weapons suggested that, indeed, OpenAI’s technology could be used in autonomous weapons if a person were on the hook for the decision.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In Saturday’s blog post, OpenAI insisted that its red lines against domestic surveillance and automated weapons were firm, and it reframed the deal as an attempt to “de-escalate things” between the Pentagon and other U.S. AI labs, adding that it hoped the Pentagon would offer the same terms to other firms, including Anthropic. OpenAI also published a quote from the contract, though it offered little reassurance. The segment begins, “The Department of War may use the AI System for all lawful purposes, consistent with applicable law.” It then says that the use of OpenAI systems for intelligence activities “will comply with” a number of laws and policies regulating U.S. intelligence activity that have infamously enabled spying on Americans, such as the Foreign Intelligence Surveillance Act of 1978. Under FISA and related policies, for instance, intelligence agencies can record and store phone calls between Americans and people abroad, and can purchase and analyze bulk user data from companies, which does not involve directly intercepting communications.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Here I should note that it’s impossible to use just snippets of a contract to evaluate the entire thing: A restriction in one section can be voided under circumstances listed in another. But snippets are all that OpenAI has provided. Based on what we are able to see, experts told me that leeway had likely been given for mass surveillance. “There’s a ton of stuff that normal people would understand as automated mass surveillance that is simply not” illegal, Rozenshtein said. For example, generative AI could turn previously overwhelming and opaque records—tax returns, federal employment files, billions of intercepted communications, smartphone location data, and so on—into a trove of exacting insights. An OpenAI spokesperson told me that citing particular statutes in the contract does not change the agreed-upon prohibition against domestic surveillance.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;With regard to weapons, the contract language shared last Saturday cites &lt;a href="https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf"&gt;DOD Directive 3000.09&lt;/a&gt;, which does not prohibit the use of fully autonomous weapons. Actually, it provides a legal pathway to develop and deploy such weapons by outlining how they must be vetted and used. In sum, if an application is technically permitted under U.S. law, OpenAI would likely have to go along with it. And, of course, the Trump administration has argued for some very expansive interpretations of the law.
“The original contractual language that OpenAI shared appeared to me to essentially be saying ‘all lawful use,’” Bullock said.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;After OpenAI published its blog post, Altman and some of his employees began fielding questions on X. &lt;em&gt;Did the contract allow NSA to use OpenAI products?&lt;/em&gt; OpenAI’s head of national-security partnerships insisted that the answer was no. &lt;em&gt;What about all of the loopholes for surveillance in existing laws? What about using AI to analyze bulk, commercially procured data, which DOD can purchase without a warrant?&lt;/em&gt; Multiple OpenAI employees voiced concerns about the deal as described. It was almost as if a contract for the military to use OpenAI’s technology in weapons systems were being drafted live on social media, Jessica Tillipman, an expert on government-contracts law at George Washington University, told me.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then, on Monday, OpenAI revised its blog post: The company said that it had modified its Pentagon contract to better protect Americans against AI-enabled spying. The new language notes that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals” and that “for the avoidance of doubt,” DOD “understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” In other words, OpenAI is making explicit that the terms of its contract should prevent its products from being used to spy on Americans en masse.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/sam-altman-train-a-human/686120/?utm_source=feed"&gt;Read: Sam Altman is losing his grip on humanity&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Outside legal experts told me that the update does seem meaningfully different from the original contract language and that it at least implies restrictions on the Pentagon that go beyond existing applicable law. But just as before, the new language could be construed to justify automated surveillance of Americans. For example, terms such as &lt;em&gt;intentionally&lt;/em&gt; and &lt;em&gt;deliberate&lt;/em&gt; provide substantial leeway for data collection that is deemed “incidental.” Lots of commercially acquired data may not be deemed “personal or identifiable.” Similarly, narrow definitions of terms such as &lt;em&gt;tracking&lt;/em&gt; and &lt;em&gt;surveillance&lt;/em&gt; could still permit a wide range of domestic intelligence-gathering, Carson, the former Army undersecretary, told me. “What ordinary people think surveillance might be in no way is the same as what surveillance means under the national-security authorities,” he said. OpenAI did not provide definitions of these or any other terms in the contract when asked.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The update also states that the Pentagon “affirmed” that OpenAI technologies won’t be used by intelligence agencies such as NSA without further negotiations—and OpenAI employees &lt;a href="https://x.com/natseckatrina/status/2028869261578453024"&gt;then&lt;/a&gt; &lt;a href="https://x.com/polynoamial/status/2028643577165963465"&gt;suggested&lt;/a&gt; that the company may desire such partnerships in the future. And the phrase &lt;em&gt;U.S.
persons and nationals&lt;/em&gt; suggests that many immigrants, documented and not, may not be protected. OpenAI did not answer a question about whether undocumented immigrants and nonpermanent residents are protected by its contract. To Carson, the modifications are “vaporous things that seem good”—window dressing without any substantive guarantees.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;Of course, all of this discussion rests on the belief that contractual prohibitions are the load-bearing factor for preventing an AI system from being used for mass domestic surveillance or autonomous weapons. That is not necessarily true. A motivated lawyer could interpret almost any language in bad faith. If one takes OpenAI seriously—that the firm does not want its products used to spy on Americans at all—then enforcing the spirit of the contract may be more important than the document’s language. (Lewin, the State Department official, &lt;a href="https://x.com/UnderSecretaryF/status/2028910807292297645"&gt;said&lt;/a&gt; that “the government intends to honor the contract as written” and that using AI for mass domestic surveillance “has never been an object.”)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To that end, OpenAI has shared that it will implement a technical “safety stack,” or guardrails of a sort, to monitor how its models are used and that it will have its own engineers work with DOD, which the company believes will allow it to “independently verify that these red lines are not crossed.” When asked, OpenAI did not provide further details about how its DOD safety architecture will work. The firm &lt;a href="https://openai.com/index/our-agreement-with-the-department-of-war/"&gt;maintains&lt;/a&gt; that these guardrails and its contract, taken together, provide better guarantees “than earlier agreements, including Anthropic’s.” Once again, it all comes down to whether you trust OpenAI.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;All of which leads to perhaps the most important and confounding factor of all: What happens if the government and OpenAI disagree over whether some use of ChatGPT is permitted? What does OpenAI do if it believes that the Pentagon has violated their agreement? Typically, the government acts first and litigates disputes after, Tillipman told me. (OpenAI said that if it determines that the terms of the contract have been violated, the company can terminate it, but it did not provide details about the process for doing so.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;And this was far from a typical negotiation. By blacklisting Anthropic, Tillipman said, DOD demonstrated that “if it comes to an impasse, they are not afraid” to place extreme sanctions on a private U.S. company. Altman &lt;a href="https://x.com/sama/status/2027957684625150444"&gt;wrote&lt;/a&gt; on X that designating Anthropic a supply-chain risk “is an extremely scary precedent and I wish [the government] handled it a different way.” The actual red line should be very apparent to OpenAI and any other AI firm wanting to contract with DOD: You work on the government’s terms, or not at all. 
OpenAI has made its choice.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/Nbc0eeSIGlDrAAiKcwaFecOcGhI=/media/img/mt/2026/03/2026_03_04_Wong_ChatGPT_Spy_final/original.gif"><media:credit>Illustration by Akshita Chandra / The Atlantic</media:credit></media:content><title type="html">OpenAI Is Opening the Door to Government Spying</title><published>2026-03-06T20:44:10-05:00</published><updated>2026-03-09T18:34:46-04:00</updated><summary type="html">Whether it means to or not</summary><link href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686226</id><content type="html">&lt;p&gt;Dean Ball helped devise much of the Trump administration’s AI policy. Now he cannot believe what the Department of Defense has done to one of its major technology partners, the AI firm Anthropic.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After weeks of negotiations, the Pentagon was unable to force Anthropic to accede to terms that, in Anthropic’s telling, could involve using AI for autonomous weapons and the mass surveillance of Americans, as my colleague Ross Andersen &lt;a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/?utm_source=feed"&gt;reported&lt;/a&gt; over the weekend. So the government has labeled the company a supply-chain risk, effectively plastering it with a scarlet letter. The Pentagon says that this means Anthropic will be unable to work with any company that contracts with the administration. That could include major technology companies that provide infrastructure for Anthropic’s AI models, such as Amazon. The supply-chain-risk designation is normally reserved for companies run by foreign adversaries, and if the order holds up legally, it could be a death blow for Anthropic.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/?utm_source=feed"&gt;Read: Inside Anthropic’s killer-robot dispute with the Pentagon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Ball, now a senior fellow at the Foundation for American Innovation, was traveling in Europe as all of this was unfolding last week, staying up as late as 2 a.m. to urge people in the administration to take a less severe approach: simply canceling the contract with Anthropic, without the supply-chain-risk designation. When his efforts failed, Ball told me in an interview yesterday, “my reaction was shock, and sadness, and anger.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In the aftermath of the decision, Ball published an &lt;a href="https://www.hyperdimensional.co/p/clawed"&gt;essay&lt;/a&gt; on his Substack casting the conflict in civilizational terms; the Pentagon’s ultimatum, in his reckoning, is “a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.” The action, he wrote, is a repudiation of private property and freedom of speech, two of the most fundamental principles of the United States. In today’s America, Ball argued, the executive branch has become so unstoppable—and passing laws has become so challenging—that the president and his officials can do whatever they want. 
(When reached for comment, a White House spokesperson told me in a statement that “no company has the right to interfere in key national security decision-making.”)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yesterday, I called Ball to discuss his essay and why the standoff with Anthropic feels, to him, like such a dire sign for America. Ball is far from a likely source of such harsh criticism: He’s a Republican with close ties to the Trump administration who departed on good terms after its &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/donald-trump-ai-action-plan/683647/?utm_source=feed"&gt;AI Action Plan&lt;/a&gt; was published, and an avid believer that AI is a transformational technology. Other figures who are influential among conservatives in the tech world, including the Anduril Industries co-founder Palmer Luckey and the Stratechery tech analyst Ben Thompson, have vigorously supported Defense Secretary Pete Hegseth’s move. Luckey, a billionaire who builds drones for the military, &lt;a href="https://x.com/PalmerLuckey/status/2027500334999081294"&gt;suggested&lt;/a&gt; on X that crushing Anthropic is necessary to defend democracy from oligarchy. Thompson wrote yesterday in his widely read newsletter that “it simply isn’t tolerable for the U.S. to allow for the development of an independent power structure—which is exactly what AI has the potential to undergird—that is expressly seeking to assert independence from U.S. control.” Thompson likened the necessity of destroying Anthropic to that of bombing Iran.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But Ball sees the Trump administration’s strong-arming of the tech industry as a sign of his country falling apart—a decline, he told me, that he has been watching for decades, and which the AI revolution might only accelerate.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;This conversation has been edited for length and clarity.&lt;/em&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Matteo Wong:&lt;/strong&gt; A number of people have described the Pentagon’s designation of Anthropic as a supply-chain risk as illegal or poorly thought-out. Why did you take a step further in saying that this is not just bad policy, but catastrophic?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Dean Ball:&lt;/strong&gt; What Secretary Pete Hegseth announced is a desire to kill Anthropic. It is true that the government has abridged private-property rights before. But it is radical and different to say, brazenly: &lt;em&gt;If you don’t do business on our terms, we will kill you; we will kill your company&lt;/em&gt;. I can’t imagine sending a worse signal to the business community. It cuts right at the heart of everything that makes us different from China, which is rooted in this idea that the government can’t just kill you if you say you don’t want to do business with it, literally or figuratively. Though in this case, I’m speaking figuratively.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong:&lt;/strong&gt; Walk me through the multi-decade decline you situate the Pentagon-Anthropic dispute in. What precisely about the American project do you see as being in decay?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Ball:&lt;/strong&gt; America rests on a foundation of ordered liberty. The state sets broad rules that are intended to be timeless and universal, and implements those rules. We have not always done that perfectly, but the idea was that we were always getting better.
And during my lifetime, a lot of things have started to break down.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;It reminds me very much of the science of aging. A very large number of systems start to break down, all at similar times for correlated reasons, and then each one breaking down causes the others to do worse. I think that something similar happens with the institutions of our republic. The fact that you can’t, for example, really change laws means that more and more gets pushed onto executive power. Once that’s the case, you have this boomerang—&lt;em&gt;I only know that I’m going to be in power for four years in the White House, so what I need to do is use as much executive power as I can to cram through as much of my agenda as possible&lt;/em&gt;. And we’ve seen that just get more and more and more extreme, really, since George W. Bush. It’s just these swings back and forth, and it feels like we’re departing from the equilibrium more and more. It’s possible for something to go from being a crime in one presidential administration to not a crime in another, with no law changing. The state can deprive you of your liberty—that’s the most important thing in the world. We can’t have that at the stroke of the executive’s pen.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed"&gt;Read: Anthropic is at war with itself&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;There are already Democrats who are &lt;a href="https://www.semafor.com/article/02/27/2026/democrats-weigh-how-harshly-to-go-after-firms-courted-by-trump"&gt;talking&lt;/a&gt; about how if you work too closely with the Trump administration, when they get in power, they’re going to break your companies up. Right now, with Anthropic, Republicans are punishing a company that is associated with the Democrats, and I suppose that, in some sense, because I’m a Republican, I can cheer that on. But the point of ordered liberty is for that never to happen—because if I do that to you, when you take power, you’re going to do it to me even worse, and then around and around we’ll go.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If you read any “new tech right” thinker on these topics—Ben Thompson, whom I’ve loved for years—they’re saying it’s a dog-eat-dog world; that’s the way it goes. Palmer Luckey, same thing—equating property expropriation with democracy. These are people who have fully accepted that we live in the tribal world and that the republic is already dead.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong:&lt;/strong&gt; You were the primary author of the White House’s main AI-policy document. How does the Pentagon’s targeting of Anthropic differ from your own vision for good AI policy?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Ball:&lt;/strong&gt; I don’t think the actions of the Department of War are consistent with the persuasion toward AI laid out in the AI Action Plan. But more important than that, they’re not consistent with the persuasions toward AI articulated by the president in many, many public appearances.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The people who were involved with this incident were not, by and large, involved in the creation of the AI Action Plan. They looked at the cards on the table and made their calls. I assume that they did what they thought was best at the time. I don’t think they acted with particularly great wisdom. Maybe I’m wrong; I don’t know.
But they made very different decisions from the ones I would have made.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong:&lt;/strong&gt; As all of these negotiations were happening, the Pentagon was also preparing to bomb Iran. The war seems like a pretty clear example of the stakes of the growing executive authority you’re describing.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Ball:&lt;/strong&gt; We live in a state of perpetual emergency being declared, and that has all sorts of corrosive effects. Because then it’s like, &lt;em&gt;Oh, well, did you know that Anthropic attempted to impose usage restrictions on the U.S. military during a national-security emergency?&lt;/em&gt; And it’s like, yeah, we’ve been living in a national-security emergency for my entire life, or at least since 9/11. We’ve been living in a state of endless emergency, perpetual emergencies, perpetual war. This is just cancerous.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong:&lt;/strong&gt; One other possibility, of course, is that the growing backlash to the Pentagon’s decision to target Anthropic could actually strengthen the nation’s institutions—that the courts or Congress, for instance, could ultimately protect Anthropic or prevent such future standoffs.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Ball:&lt;/strong&gt; The optimistic version of my interpretation is that there’s enough about the American system that’s resilient that these things will be reined in by the judiciary. I don’t think you can bet against America. The country has been remarkably resilient over time. At the same time, I view the sickness that we face as being pretty deep. And I also view the challenges that we have to navigate together as being more profound than any we’ve faced in our history. So I harbor fairly significant concerns that this time will be different. But I remain fundamentally an optimist. If I were a pessimist, I wouldn’t be sitting here talking to you.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/CzFfXD7p6l8xtMttmBcNrGbo6jQ=/media/img/mt/2026/03/2026_03_03_Wong_Trumps_former_AI_advisor/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Thomas Fuller / SOPA Images / Getty; en Golbeck / SOPA Images / Getty.</media:credit></media:content><title type="html">A Dire Warning From the Tech World</title><published>2026-03-03T18:16:20-05:00</published><updated>2026-03-04T12:31:17-05:00</updated><summary type="html">Dean Ball, Trump’s former AI adviser, says that the targeting of Anthropic is just one piece of a much larger political breakdown.</summary><link href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686188</id><content type="html">&lt;p&gt;President Trump is terminating the government’s relationship with Anthropic, an AI company whose products, until recently, were used by Pentagon officials for classified operations. 
Following a weekslong standoff with the company, Trump &lt;a href="https://truthsocial.com/@realDonaldTrump/posts/116144552969293195"&gt;posted&lt;/a&gt; on Truth Social this afternoon that all federal agencies must “IMMEDIATELY CEASE all use of Anthropic’s technology,” adding: “We don’t need it, we don’t want it, and will not do business with them again!” The General Services Administration announced that it would take action against Anthropic’s products, and indeed, according to an email I obtained that was sent to the leadership of all agencies using USAi—a GSA platform that provides chatbots from tech companies to government workers—access to Anthropic was suspended “immediately.” The government is also removing Anthropic from its primary procurement system, which is the key way for any federal agency to purchase a commercial product.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic was awarded a $200 million contract with the Pentagon last summer geared toward providing versions of its technology for military use. OpenAI, Google, and xAI were awarded similar contracts, though Anthropic’s Claude models are the only advanced generative-AI programs to receive Pentagon security clearance permitting the handling of secret and classified data. Claude had been integrated across the Department of Defense and was &lt;a href="https://www.wsj.com/politics/national-security/pentagon-used-anthropics-claude-in-maduro-venezuela-raid-583aff17?gaa_at=eafs&amp;amp;gaa_n=AWEtsqeHOXmdD6E-dsFgYiQqad4ffh2799aNkSMJaRmiJpWN9zOHc1f_115dVYty01Q%3D&amp;amp;gaa_ts=69a23011&amp;amp;gaa_sig=xcdA2J3VlPdh7QUpvuy9WdwhY3zPR4xabbbayfkAcfFygFOY12iCyzISKJkGE9UkUVbglDt9gSRrXMYTMaJwFA%3D%3D"&gt;reportedly&lt;/a&gt; used to assist the raid on Venezuela that led to the capture of President Nicolás Maduro.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic has said that it will not allow Claude to be used for mass domestic surveillance or to enable fully autonomous weaponry. Those uses could involve, respectively, analyzing data that have been indiscriminately gathered on Americans by the intelligence community and Claude selecting and killing targets with drones. Anthropic has also said that the Pentagon never included such uses in its contracts with the firm. But now DOD is demanding unrestricted use of Claude and &lt;a href="https://x.com/USWREMichael/status/2027211708201058578"&gt;accusing&lt;/a&gt; Anthropic of trying to control the military and “putting our nation’s safety at risk” by refusing to comply.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Following a heated meeting on Tuesday, DOD gave Anthropic until today at 5:01 p.m. eastern time to acquiesce to its demands. If not, the Pentagon would compel the company under an emergency wartime law called the Defense Production Act or, even more severe, designate Anthropic a “supply-chain risk,” which could forbid any organization that works with the U.S. military to do business with the AI company. Shortly after Trump’s announcement, Defense Secretary Pete Hegseth declared that he was doing just that.
Dean Ball, an analyst who helped write some of the Trump administration’s AI policy, has &lt;a href="https://www.theatlantic.com/newsletters/2026/02/anthropic-pentagon-ai-regulation/686169/?utm_source=feed"&gt;called&lt;/a&gt; the threats “the most aggressive AI regulatory move I have ever seen, by any government anywhere in the world.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Last night, Anthropic CEO Dario Amodei wrote in a public letter, “We cannot in good conscience accede to” the Pentagon’s request. Following Trump’s and Hegseth’s orders today, Anthropic said in a &lt;a href="https://www.anthropic.com/news/statement-comments-secretary-war"&gt;statement&lt;/a&gt;, “No amount of intimidation or punishment from the Department of War will change our position.” DOD, which the Trump administration refers to as the Department of War, did not immediately respond to requests for comment.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The situation signals a potentially seismic shift in relations between Silicon Valley and the federal government. Defense officials and technology companies alike are concerned that the U.S. military is losing its technological edge over its adversaries, particularly China—in part because the private sector, rather than the Pentagon, is where much American innovation comes from these days. And instead of federal grants, the massive investments needed for generative AI have come from tech companies themselves. Historically, companies the Pentagon works with have not set terms for how the government uses their products. But as &lt;a href="https://www.theatlantic.com/ideas/2026/02/anthropic-pentagon-ai/686172/?utm_source=feed"&gt;Thomas Wright recently wrote&lt;/a&gt; in &lt;em&gt;The Atlantic&lt;/em&gt;, this dynamic is complicated when it comes to AI tools made fully by a private sector that understands the technology far better than the government does.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic has shown itself to be eager to work with the government and the military; indeed, it was the first of the frontier AI firms to receive such a high military security clearance. Amodei is by far the most hawkish of the prominent AI executives, warning frequently about the need for democracies to use AI to vanquish authoritarianism and, especially, stay ahead of China. In the letter he published last night, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” And although he took a principled stance against domestic surveillance, Amodei wrote that he is open to Claude eventually being used to power fully autonomous weapons—just not yet, because today’s best AI models “are simply not reliable enough” to do so. Developing such AI-powered weapons in the present, he wrote, would put American soldiers and civilians at risk.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Much remains uncertain about the unraveling relationship between the Trump administration and Anthropic, but the White House has been souring on the company for months. Amodei has been publicly critical of Trump, and wrote a lengthy Facebook post in support of Kamala Harris during the 2024 election. White House officials have called the company “woke” and accused it of “fear mongering.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We have ended up in a paradoxical situation in which the U.S.
government is at once saying that Claude is so essential to national security that it could invoke an emergency law to exert extensive control over Anthropic &lt;em&gt;and&lt;/em&gt; that the company is so woke and radical that using Claude would itself be a national-security risk. “I don’t understand it,” a former senior defense official, who requested anonymity to speak freely, told me. “It’s an existential risk if you use it or if you don’t.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Many in Silicon Valley have rallied in support of Anthropic, even as the major companies have maintained their business with the government. (The precise terms of the Pentagon’s contracts with other AI companies have not been made public.) Jeff Dean, a top Google executive, &lt;a href="https://x.com/JeffDean/status/2026566490619879574"&gt;wrote&lt;/a&gt; on X that generative AI should not be used for domestic mass surveillance. OpenAI CEO Sam Altman &lt;a href="https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-escalation-in-anthropic-showdown-with-hegseth-03ecbac8"&gt;wrote&lt;/a&gt; in an internal memo circulated last night, a copy of which I obtained, that “we have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,” and he has expressed similar sentiments publicly. More than 500 current employees of OpenAI and Google—many of them anonymous—&lt;a href="https://notdivided.org/"&gt;signed&lt;/a&gt; an open letter in support of Anthropic. On the sidewalk outside Anthropic’s headquarters in San Francisco today, passersby &lt;a href="https://x.com/roybahat/status/2027455052655534440"&gt;scribbled messages of support&lt;/a&gt; with chalk.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The fallout from the supply-chain-risk designation is still unclear. In theory, Google, Microsoft, Amazon, and several other behemoths that contract with the federal government will have to stop doing business with Anthropic, which would be a mess for everyone involved and potentially devastating for the AI firm; Amazon, for instance, is building data centers that will train future versions of Claude. But just how sweeping an impact such a designation would have on Anthropic’s customers is up for debate, and the company said in its statement today that many applications of Claude, even for customers that partner with DOD, will not be affected.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Meanwhile, private AI firms will continue to be important to the federal government as it works to compete with China, Russia, and all manner of adversaries. Trump gave the Pentagon six months to phase out Claude, which suggests that the technology has indeed become essential—and is essential to replace. And at some point, the U.S. military may no longer find itself in a position to dictate its terms. Altman, in his internal memo, wrote that OpenAI is exploring a contract with the Pentagon to use its AI models for classified workloads that would still exclude uses that “are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.” The Pentagon reportedly &lt;a href="https://www.axios.com/2026/02/27/pentagon-openai-safety-red-lines-anthropic"&gt;agreed&lt;/a&gt; to those conditions shortly after announcing that it would sever ties with Anthropic, although no contract has been signed. But other figures in tech, including the Anduril co-founder Palmer Luckey and the investor Katherine Boyle, have come out in support of demands for unrestricted use.
This showdown was between the Pentagon and Anthropic. The next may be a war within Silicon Valley itself.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/eFz9aL2Vs9w7-HKZh6TKY137swQ=/0x302:2880x1921/media/img/mt/2026/02/2026_02_28_ai/original.png"><media:credit>Illustration by The Atlantic. Source: Artem Onoprienko / Getty.</media:credit></media:content><title type="html">What Happens to Anthropic Now?</title><published>2026-02-27T21:31:11-05:00</published><updated>2026-03-01T15:52:56-05:00</updated><summary type="html">The Trump administration is severing all ties with the “woke” AI firm.</summary><link href="https://www.theatlantic.com/technology/2026/02/pentagon-anthropic-contract/686188/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686107</id><content type="html">&lt;p&gt;Over the past couple of months, several researchers have begun making the same provocative claim: They used generative-AI tools to solve a previously unanswered math problem.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The most extreme promises—AI-assisted resolutions to some of the hardest problems in mathematics—may well turn out to be empty hype. But a number of AI-written solutions, albeit to far less lauded problems, have checked out. These were answers to a number of the Erdős Problems—more than 1,000 mathematical questions set forth by the Hungarian mathematician Paul Erdős—written with generative-AI models including ChatGPT. OpenAI quickly claimed a victory: “GPT-5.2 Pro for solving another open Erdős problem,” OpenAI President Greg Brockman &lt;a href="https://x.com/gdb/status/2012737490239566243"&gt;posted&lt;/a&gt; on X in January. “Going to be a wild year for mathematical and scientific advancement!” (OpenAI and &lt;em&gt;The Atlantic&lt;/em&gt; have a corporate partnership.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Much of the excitement around the news has stemmed from the adjudicator of these AI-written proofs: Terence Tao, a professor at UCLA who is widely considered to be the world’s greatest living mathematician. His stamp of approval seemingly legitimizes the greatest promise of generative AI—to push the frontier of human knowledge and civilization. When I called Tao earlier this month to get his take on what AI can offer mathematics, he was more tempered. The AI-generated Erdős solutions are impressive, he told me, but not overwhelmingly so: The bots have functionally landed some “cheap wins,” Tao said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/terence-tao-ai-interview/680153/?utm_source=feed"&gt;Read: We’re entering uncharted territory for math&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Tao has long been intrigued by, but reserved about, what AI tools can do for his field. The first time we &lt;a href="https://www.theatlantic.com/technology/archive/2024/10/terence-tao-ai-interview/680153/?utm_source=feed"&gt;spoke&lt;/a&gt;, in the fall of 2024, Tao had likened chatbots to “mediocre, but not completely incompetent” graduate students. About six months later, he told me the models had gotten better “at certain types of high-level math reasoning,” but lacked creativity and made subtle mistakes. But during our most recent conversation, he was more bullish. 
AI may not be on the cusp of solving all of the world’s great math problems, but chatbots are at the point where they can collaborate with human mathematicians. In the process, he said, the technology is opening up a different “way of doing mathematics.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;This conversation has been edited for length and clarity.&lt;/em&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Matteo Wong:&lt;/strong&gt; There has recently been a lot of excitement around ChatGPT’s ability to solve some Erdős Problems. How have you seen generative AI’s mathematical capabilities evolve over the past year or so?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Terence Tao: &lt;/strong&gt;There’s a big crowd of people who really, really want AI success stories. And then there’s an equal and opposite crowd of people who want to dismiss all AI progress. And what we have is a very complicated and nuanced story in between.&lt;/p&gt;&lt;p&gt;In these Erdős Problems in particular, there’s a small core of high-profile problems that we really want to solve, and then there’s this long tail of very obscure problems. What AI has been very good at is systematically exploring this long tail and knocking off the easiest of the problems. But it’s very different from a human style. Humans would not systematically go through all 1,000 problems and pick the 12 easiest ones to work on, which is kind of what the AIs are doing.&lt;/p&gt;&lt;p&gt;There really is this massive scale of difficulty between these problems. And looking at the problems that AIs have solved by themselves so far, it’s like, &lt;em&gt;Oh, okay, they were using a standard technique&lt;/em&gt;. If an expert had half a day to look into the matter, they would have worked it out too. There have been more sophisticated solutions, which are AI-assisted. I think in the short term we’re going to get a lot of quick wins on easy problems from pure AI methods. And then over the next few months, I think we’re going to have all kinds of hybrid, human-AI contributions.&lt;/p&gt;&lt;p&gt;I’m learning from some of the proofs that show up. I enjoy reading them—maybe it uses a trick from some paper from 1960 that I wasn’t aware of. So it may not be super, super creative, but it was new and it can do things that human experts looking at the problem dismissed.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong: &lt;/strong&gt;You’ve &lt;a href="https://github.com/teorth/erdosproblems/wiki/Disclaimers-and-caveats#11-problem-solving-is-only-one-component-of-the-mathematical-research-process"&gt;written&lt;/a&gt; that when human mathematicians approach a new problem, regardless of whether they succeed, they produce insights that others in the field can build on—something AI-based proofs don’t provide. How come?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Tao:&lt;/strong&gt; These problems are like distant locations that you would hike to. And in the past, you would have to go on a journey. You can lay down trail markers that other people could follow, and you could make maps.&lt;/p&gt;&lt;p&gt;AI tools are like taking a helicopter to drop you off at the site. You miss all the benefits of the journey itself. 
You just get right to the destination, which actually was only part of the value of solving these problems.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong:&lt;/strong&gt; When you think about the abilities of these models today, what can they contribute to your field in addition to enabling nonmathematicians to tackle more advanced problems?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Tao: &lt;/strong&gt;Today there are a lot of very tedious types of mathematics that we don’t like doing, so we look for clever ways to get around them. But AIs will just happily blast through those tedious computations. When we integrate AI with human workflows, we can just glide over these obstacles.&lt;/p&gt;&lt;p&gt;I also think mathematicians will start doing math at larger scales. Think about the difference between case studies and population surveys in the sciences. If you were to study a disease in the 18th century, if it was a rare disease, you might study one patient who has this disease and record all their symptoms and take meticulous notes. But in the 21st century, you can do a clinical trial and you can administer a drug to 1,000 people and do statistics and get much more precise information about the efficacy of your drug.&lt;/p&gt;&lt;p&gt;Mathematics is still very much at the case-study level. A paper will take one or two problems and study them to death in a very handcrafted, intensive way. That’s our style. But what AI tools enable is population studies.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong: &lt;/strong&gt;Have you been surprised by the progress that AI models have made in their mathematical abilities?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Tao:&lt;/strong&gt; A little bit surprised. A lot of the things that have happened, I expected to happen, but they came a little ahead of the schedule I expected. Not by much.&lt;/p&gt;&lt;p&gt;In 2023, for example, I wrote this &lt;a href="https://unlocked.microsoft.com/ai-anthology/terence-tao/"&gt;article&lt;/a&gt; for Microsoft predicting that by 2026, AI will be a trusted co-author—that its contributions will be on the level of a co-author to a technical paper. The article got a mixed response: People said I was being either way too ambitious or way too pessimistic. But I think it’s basically almost exactly on schedule. We are basically seeing AIs make contributions on par with those I would expect from a junior human co-author, especially one who’s very happy to do grunt work and work out a lot of tedious cases.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Wong: &lt;/strong&gt;What improvements are you hoping or expecting to see from generative-AI models in the next year or two?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Tao:&lt;/strong&gt; There’s a middle ground where we want to encourage responsible AI use and discourage irresponsible AI use. It is a delicate line to tread. But we’ve done it before. Mathematicians routinely use computers to do numerical work, and there was a lot of backlash initially when computer-assisted proofs first came out, because how can you trust computer code? But we’ve figured that out over 20 or 30 years. Unfortunately, the timelines are much more compressed now. So we have to figure out our standards within a few years.
And our community does not move that fast, normally.&lt;/p&gt;&lt;p&gt;One very basic thing that would help the math community: When an AI gives you an answer to a question, usually it does not give you any good indication of how confident it is in this answer, or it will always say, &lt;em&gt;I’m completely certain that this is true&lt;/em&gt;. Humans do this. Whether they are confident in something or whether they are not is very important information, and it’s okay to tentatively propose something which you’re not sure about, but it’s important to flag that you’re uncertain about it. But AI tools do not rate their own confidence accurately. And this lowers their usefulness. We would appreciate more honest AIs.&lt;/p&gt;&lt;p&gt;Additionally, a lot of AI companies have this obsession with push-of-a-button, completely autonomous workflows where you give your task to the AI, and then you just go have a coffee, and you come back and the problem is solved. That’s actually not ideal. With difficult problems, you really want a conversation between humans and AI. And the AI companies are not really facilitating that.&lt;/p&gt;&lt;p&gt;If we can work with at least some tech companies that are willing to develop more interactive platforms, that will be much more readily embraced by the people. We don’t want to be reduced to just pushing buttons.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/z9PZ6s1oFofXG6nxu2WGJPZClGU=/media/img/mt/2026/02/20260211_terence_tao_chatbots_to_doodle/original.jpg"><media:credit>Illustration by The Atlantic. Source: Kimberly White / Getty Images</media:credit></media:content><title type="html">The Edge of Mathematics</title><published>2026-02-24T07:55:56-05:00</published><updated>2026-02-26T10:19:07-05:00</updated><summary type="html">Terence Tao, the legendary mathematician, explains the promise of generative AI.</summary><link href="https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686120</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;i&gt;This article was featured in the One Story to Read Today newsletter. &lt;/i&gt;&lt;a href="https://www.theatlantic.com/newsletters/sign-up/one-story-to-read-today/?utm_source=feed"&gt;&lt;i&gt;Sign up for it here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Last Friday, onstage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from &lt;em&gt;The Indian Express &lt;/em&gt;about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;“It also takes a lot of energy to train a human,” Altman &lt;a href="https://www.youtube.com/live/qH7thwrCluM?si=pcTetpDzekghNhti&amp;amp;t=1662"&gt;told&lt;/a&gt; a packed pavilion. “It takes, like, 20 years of life and all of the food you eat during that time before you get smart. 
And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever to produce you, and then you took whatever, you know, you took.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;He continued: “The fair comparison is, if you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question, versus a human? And probably, AI has already caught up on an energy-efficiency basis, measured that way.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Altman’s comments are easy to pick apart. The energy used by the &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10629395/"&gt;brain&lt;/a&gt; is significantly less than that used by even efficient frontier models for simple queries, not to mention the laptops and smartphones people use to prompt AI models. It is true that people have to consume actual sustenance before they “get smart,” though this is also a helpful bit of redirection on Altman’s part—the real concern with AI is not the resources it demands, but the amount it contributes to climate change. Atmospheric carbon dioxide is at levels not seen in &lt;a href="https://news.climate.columbia.edu/2023/12/07/a-new-66-million-year-history-of-carbon-dioxide-offers-little-comfort-for-today/"&gt;millions of years&lt;/a&gt;—it has been driven not by the 117 billion people and all of the other critters to have ever existed in the course of evolution, but by contemporary human society and combustion turbines akin to those OpenAI is setting up at its Stargate data centers. Other data centers, too, are building private, gas-fired power plants—which collectively will likely be capable of generating enough &lt;a href="https://cleanview.co/content/power-strategies-report"&gt;electricity&lt;/a&gt; for, and emitting as much greenhouse gas as, dozens of major American cities—or &lt;a href="https://www.eesi.org/articles/view/data-center-buildout-is-hungry-for-fossil-fuels"&gt;extending the life&lt;/a&gt; of coal plants. (OpenAI, which has a corporate partnership with the business side of this magazine, did not respond to a request for comment when I reached out to ask about Altman’s remarks.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/07/how-much-data-ai-use/678908/?utm_source=feed"&gt;Read: Every time you post to Instagram, you’re turning on a lightbulb forever&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But what’s really significant about Altman’s words is that he thought to compare chatbots to humans at all. Doing so suggests that he views people and machines on equal terms. He didn’t fumble his words; this is a common, calculated position within the AI industry. Altman made an almost identical &lt;a href="https://www.forbesindia.com/article/ai-tracker/ai-is-already-far-more-energy-efficient-than-humans-at-inference-sam-altman/2991578/1"&gt;statement&lt;/a&gt; to &lt;em&gt;Forbes India&lt;/em&gt; at the same AI summit. And a week ago, Dario Amodei—the CEO of Anthropic, and Altman’s chief rival—made a similar analogy, &lt;a href="https://www.dwarkesh.com/p/dario-amodei-2"&gt;likening&lt;/a&gt; the training of AI models to human evolution and day-to-day learning. The mindset trickles down to product development.
Anthropic is studying whether its chatbot, Claude, is &lt;a href="https://www.anthropic.com/research/exploring-model-welfare"&gt;conscious&lt;/a&gt; or can feel “distress,” and allows Claude to &lt;a href="https://www.anthropic.com/research/end-subset-conversations"&gt;cut off&lt;/a&gt; “persistently harmful or abusive” conversations in which there are “risks to model welfare”—explicitly anthropomorphizing a program that does not eat, drink, or have any will of its own.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;AI firms are convinced either that their products really are comparable to humans or that this is good marketing. Both options are alarming. A genuine belief that they are building a higher power, perhaps even a god—Altman, in the same appearance, said that he thinks &lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;superintelligence is just a few years away&lt;/a&gt;—might easily justify treating humans and the planet as collateral damage. Altman also said, in his response to concerns about energy consumption, that the problem is real because “the world is now using so much AI”—and so societies must “move towards nuclear, or wind and solar, very quickly.” Another option would be for the AI industry to wait.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;Read: Do you feel the AGI yet?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If Altman’s comparison of chatbots and people is purely a PR tactic, it is a deeply misanthropic one. He is speaking to investors. The notion that AI labs are building digital life has always been convenient to their myth, of course, and OpenAI is &lt;a href="https://www.reuters.com/technology/openai-sees-compute-spend-around-600-billion-by-2030-cnbc-reports-2026-02-20/"&gt;reportedly&lt;/a&gt; in the middle of a fundraising round that would value the company at more than $800 billion—nearly as much as Walmart.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Tech companies may genuinely want to develop AI tools for the benefit of all humanity, to echo OpenAI’s &lt;a href="https://openai.com/index/introducing-openai/"&gt;founding mission&lt;/a&gt;, and genuinely believe that they need to raise vast amounts of cash to do so. But to liken raising a child—or, for that matter, the evolution of &lt;em&gt;Homo sapiens&lt;/em&gt;—to developing algorithmic products makes very clear that the industry has lost touch, if it ever had any, with &lt;a href="https://www.theatlantic.com/magazine/archive/2023/07/generative-ai-human-culture-philosophy/674165/?utm_source=feed"&gt;what it means to be human&lt;/a&gt;. To “train a human”—that is, to live a life—is to struggle, to accept the possibility of failure, and to sometimes meander simply in search of wonder and beauty. Generative AI is all about cutting out that process and making any pursuit as instant, efficient, and effortless as possible. These tools may serve us.
But to put them on the same plane as organic life is sad.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/7VvNB324ZRof-k5OpYCPcfogTY4=/media/img/mt/2026/02/2026_02_23_sam_altman/original.jpg"><media:credit>Kyle Grillot / Bloomberg / Getty</media:credit></media:content><title type="html">Sam Altman Is Losing His Grip on Humanity</title><published>2026-02-23T18:52:30-05:00</published><updated>2026-02-24T10:59:42-05:00</updated><summary type="html">You don’t “train a human.”</summary><link href="https://www.theatlantic.com/technology/2026/02/sam-altman-train-a-human/686120/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-686007</id><content type="html">&lt;p&gt;Robert F. Kennedy Jr. is an AI guy. Last week, during a stop in Nashville on his Take Back Your Health tour, the Health and Human Services secretary brought up the technology between condemning ultra-processed foods and urging Americans to eat protein. “My agency is now leading the federal government in driving AI into all of our activities,” he declared. An army of bots, Kennedy said, will transform medicine, eliminate fraud, and put a virtual doctor in everyone’s pocket.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;RFK Jr. has talked up the promise of infusing his department with AI for months. “The AI revolution has arrived,” he told Congress in May. The next month, the FDA launched Elsa, a custom AI tool designed to expedite drug reviews and assist with agency work. In December, HHS issued an “&lt;a href="https://www.hhs.gov/press-room/hhs-unveils-ai-strategy-to-transform-agency-operations.html"&gt;AI Strategy&lt;/a&gt;” outlining how it intends to use the technology to modernize the department, aid scientific research, and advance Kennedy’s Make America Healthy Again campaign. One CDC staffer showed us a recent email sent to all agency employees encouraging them to start experimenting with tools such as ChatGPT, Gemini, and Claude. (We agreed to withhold the names of several HHS officials we spoke with for this story so they could talk freely without fear of professional repercussions.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the full extent to which the federal health agencies are going all in on AI is only now becoming clear. Late last month, HHS published an inventory of roughly 400 ways in which it is using the technology. At face value, the applications do not seem to amount to an “AI revolution.” The agency is turning to or developing chatbots to generate social-media posts, redact public-records requests, and write “justifications for personnel actions.” One usage of the technology that the agency points to is simply “AI in Slack,” a reference to the workplace-communication platform. A chatbot on &lt;a href="http://realfood.gov"&gt;RealFood.gov&lt;/a&gt;, the new government website that lays out Kennedy’s vision of the American diet, promises “real answers about real food” but just opens up xAI’s chatbot, Grok, in a new window. Many applications seem, frankly, mundane: managing electronic-health records, reviewing grants, summarizing swaths of scientific literature, pulling insights from messy data.
There are multiple IT-support bots and AI search tools.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The number of back-office applications suggests that the agency may be turning to AI in an attempt to compensate for the many thousands of HHS staff who have been fired or taken a voluntary buyout over the past year: For example, the database points to a “staffing shortage” as the reason why the agency’s Office for Civil Rights is piloting ChatGPT to identify patterns in court rulings involving Medicaid.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are many ways this might go wrong. AI tools continue to make unpredictable errors; it’s very easy to imagine a tool intended to eliminate “fraud” accidentally cutting off someone’s Medicaid, or a tool intended to help ICU physicians recommending the wrong medication or dosage. In May, the agency released its landmark Make Our Children Healthy Again Report, which suggested that the government use AI to analyze trends in chronic-disease rates, including that of autism. The report was riddled with fake citations that appeared to be hallucinated by AI, which the White House &lt;a href="https://www.notus.org/health-science/white-house-maha-report-citation-formatting-issues"&gt;attributed&lt;/a&gt; to formatting errors; HHS then corrected the report by removing the false citations and swapping in new references.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/health/2026/01/rfk-jr-dietary-guidelines-food-vaccines/685546/?utm_source=feed"&gt;Read: The two sides of America’s health secretary&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Several HHS employees told us that new AI tools in the department indeed make frequent errors and do not always fit into existing workflows. Despite the big claims that the administration has made about Elsa, the chatbot is “quite bad and fails at half the tasks you ask it for,” an FDA employee told us. In one instance, the staffer asked Elsa to look up the meaning of a three-digit product code in the FDA’s public database. The chatbot spit out the wrong answer. According to the same staffer, an internal website highlighting potential uses of Elsa includes relatively run-of-the-mill tasks such as creating data visualizations and summarizing emails, but because of hallucinations, “most people would rather just read the document themselves.” Another official said that he tried to use Elsa to evaluate a food-safety report. “It processed for a moment and then said ‘yeah, all good,’ when I knew it wasn’t,” the employee told us.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some staffers we spoke with did have a more positive take. One CDC official said that his team is “constantly reporting on efficiencies that they are gaining using AI,” even if those use cases are routine, like summarizing documents. Many of the tools HHS is using seem well intentioned. A tool used by federal and local health departments, for example, allows officials to analyze grocery-store receipts gathered from people suffering from suspected foodborne illnesses around the country to search for commonalities in the foods they ate. In an email, Andrew Nixon, an HHS spokesperson, told us that “a small number of disgruntled employees” have had problems with the agency’s AI tools.
Many staffers, he said, “report that it improves their efficiency in carrying out their work.” Nixon added that even with staff shortages, the agency is “fully equipped to fulfill its duties.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If anything, Kennedy is following the kinds of automation that are already being applied across health care. Medicine has become one of the biggest sources of hype for AI, with many ongoing attempts to both streamline the convoluted world of health care and produce life-saving research. Just as one example: Doctors can spend &lt;a href="https://www.aha.org/news/headline/2016-09-08-study-physicians-spend-nearly-twice-much-time-ehrdesk-work-patients"&gt;more&lt;/a&gt; than a &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC8387128/"&gt;third&lt;/a&gt; of their days writing notes, reviewing charts, and working through insurance claims in electronic-health-records systems. If AI products can automate just a bit of that work, health-care workers—of whom the U.S. has a chronic shortage—will have more time to spend with patients. HHS is piloting AI tools that can streamline health records, as are many hospital networks around the country. Start-ups are working on building all sorts of AI health tools; both OpenAI and Anthropic recently launched health-care products.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The greatest promises of AI for health care are much flashier: curing cancer, discovering novel vaccines, treating previously incurable conditions. And there are, in the department’s approach to AI, some signs of an emerging technological paradigm shift. The HHS AI inventory reports a number of more ambitious projects, including using the technology to more quickly identify drug-safety concerns and study the genome of malaria parasites. These are AI tools that could genuinely &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;change the kind of work&lt;/a&gt; doctors, epidemiologists, and medical researchers &lt;a href="https://www.theatlantic.com/technology/archive/2023/12/ai-scientific-research/676304/?utm_source=feed"&gt;can do&lt;/a&gt;. AlphaFold—a protein-folding algorithm whose creators at Google DeepMind recently won a Nobel Prize—is now used by researchers worldwide, including those at HHS, to advance drug discovery.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/01/generative-ai-virtual-cell/681246/?utm_source=feed"&gt;Read: A virtual cell is a ‘holy grail’ of science. It’s getting closer.&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Still, generative AI is not going to instantly supercharge the inner workings of HHS. (Even something as proven as AlphaFold only accelerates one slice of a very long drug-discovery process.) This is probably a good thing—the technology has come a long way, but also isn’t ready to totally remake one of the most influential public-health bodies in the world. If HHS continues to stick with an incremental approach to AI adoption, it could yield substantial improvements that are simply invisible to most.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But RFK Jr. may not be interested in stopping there. Many use cases are still being deployed or piloted, and the agency’s AI database is filled with jargon and platitudes that, in many instances, can be interpreted in multiple ways.
When the administration says AI is or could be used for “Reviewing Global Influenza Vaccine Literature” or analyzing data in the Vaccine Adverse Event Reporting System, the end results could be innocuous—&lt;a href="https://www.theatlantic.com/health/2026/01/kennedy-childhood-vaccine-schedule/685527/?utm_source=feed"&gt;or not&lt;/a&gt;. When Kennedy talks about using AI to eliminate fraud, he might mean using the technology to fire another 10,000 employees crucial to the nation’s public-health infrastructure. The inventory outlines means rather than motivations. In at least one listed use case, though, the design is openly political: HHS is deploying AI to identify positions in violation of President Trump’s executive orders on “Ending Radical and Wasteful Government DEI Programs” and “Defending Women From Gender Ideology Extremism.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Generative AI is undoubtedly a tool for bureaucratic efficiency and scientific research. But the more pressing question is not what the technology is capable of, but what ends it will be used to achieve.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Nicholas Florko</name><uri>http://www.theatlantic.com/author/nicholas-florko/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/T4lpnjR0JcUuYeZG46Xkguig2Yk=/media/img/mt/2026/02/2026_02_09_Wong_Florko_RFK_ai_final/original.jpg"><media:credit>Illustration by The Atlantic. Source: Aaron Schwartz / CNP / Bloomberg / Getty.</media:credit></media:content><title type="html">Drink Whole Milk, Eat Red Meat, and Use ChatGPT</title><published>2026-02-13T17:27:00-05:00</published><updated>2026-02-17T16:41:18-05:00</updated><summary type="html">What Robert F. Kennedy Jr.’s “AI revolution” really looks like</summary><link href="https://www.theatlantic.com/technology/2026/02/rfk-jr-hhs-ai-chatbots/686007/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-685957</id><content type="html">&lt;p&gt;A couple of weeks ago, word began to spread around San Francisco that somebody was organizing a “March for Billionaires.” A mystery organizer had posted on social media that “billionaires get a bad rap,” and soon, some flyers appeared around the city. A website provided a time and rendezvous point; it also celebrated the societal contributions of Jeff Bezos and Taylor Swift, exhorting people to “judge individuals, not classes.” The message seemed to be: Not all billionaires.&lt;/p&gt;&lt;p&gt;Initially, everybody I asked in the city was certain that this was satire, perhaps the workings of Sacha Baron Cohen or a stunt by union activists; after all, the website also lauds the value created by James Dyson, Roger Federer, and the CEO of Chobani (for having “popularized Greek yogurt”).
I was reminded of how, several years ago, the faux-conspiracists of the &lt;a href="https://www.nytimes.com/2021/12/09/technology/birds-arent-real-gen-z-misinformation.html"&gt;Birds Aren’t Real&lt;/a&gt; movement rallied outside Twitter’s headquarters to critique dangerous social-media rabbit holes.&lt;/p&gt;&lt;p&gt;Still, in a city where AI founders are giddy about automating entire industries and selling digital “friends,” and in a state that is weighing a new and aggressive tax for its wealthiest residents, I wasn’t so sure. The March for Billionaires website appeared to have thoroughly obscured the ownership of its domain, so I contacted one of the march’s social-media accounts last week and quickly received a response: The organizer would meet me for coffee.&lt;/p&gt;&lt;p&gt;His name is Derik Kauffman, and he seemed very serious. The protest was the first that Kauffman, a 26-year-old AI-start-up founder, had organized. “I’m someone who stands up for what I believe in,” he told me over coffee (well, he ordered a green juice). “Even if that’s unpopular.” For an hour, as I did my best to probe Kauffman’s sincerity, he did not flinch. He said that he is not against social welfare, agreed that poverty is bad, and at one point launched into a detailed discussion of tax loopholes exploited by the ultrarich. Still, although not a billionaire himself, Kauffman is a fanboy. He said that he’d organized the march with both a specific goal—opposing the &lt;a href="https://www.theatlantic.com/economy/2026/01/california-wealth-tax-billionaire-migration/685779/?utm_source=feed"&gt;wealth tax on billionaires in California&lt;/a&gt; proposed by a major health-care workers’ union—and a broader one: to spread the word that billionaires are ultimately friends of the working class. His thinking was contradictory at times but extensive; if this was a hoax, the execution was quite good.&lt;/p&gt;&lt;p&gt;And so, on Saturday, a group of like-minded dissidents gathered with him in Pacific Heights, home to San Francisco’s “Billionaires’ Row,” to lend the nation’s 924 wealthiest people their support. The event topped out, by my count, at 18 pro-billionaire attendees, who hoisted signs with slogans such as &lt;span class="smallcaps"&gt;Tip Your Landlord&lt;/span&gt; and &lt;span class="smallcaps"&gt;Property Rights Are Human Rights&lt;/span&gt;. At least 15 counterprotesters showed up as well, making everything more confusing because &lt;em&gt;they&lt;/em&gt; were parodying the idea of supporting billionaires. Some wore full suits or elaborate dresses and held &lt;span class="smallcaps"&gt;Trillionaires for Trump&lt;/span&gt; signs; others offered pulled-pork sandwiches labeled &lt;span class="smallcaps"&gt;Musk à la Guillotine&lt;/span&gt; and chanted “Eat the poor.” Reporters and photographers outnumbered both groups handily.&lt;/p&gt;&lt;figure class="u-block-center"&gt;&lt;img alt='Image of person holding sign that says "The 1% Pays 40% of Taxes"' height="443" src="https://cdn.theatlantic.com/media/img/posts/2026/02/20260207_BILLIONAIRES_MARCH_0284/3653f5908.jpg" width="665"&gt;
&lt;figcaption class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The proposed “billionaire tax” is a one-time levy on billionaires to make up for federal cuts to California’s health-care budget. Fears about the tax rose after &lt;em&gt;The Wall Street Journal&lt;/em&gt; &lt;a href="https://www.wsj.com/real-estate/luxury-homes/google-co-founder-larry-page-spends-173-4-million-on-two-miami-homes-3553e880?mod=hp_featst_pos3"&gt;reported&lt;/a&gt; that Sergey Brin and Larry Page, the co-founders of Google, were considering leaving the state. The threat of this or any future billionaire tax, Kauffman said, could damage the entrepreneurship that makes California great. (An eclectic set of wealthy and influential figures in the state, including California Governor Gavin Newsom, the White House AI adviser David Sacks, and the venture capitalist Peter Thiel, opposes the initiative.)&lt;/p&gt;&lt;p&gt;Beyond pushing back against any particular policy, the march was also taking a moral stand. “Billionaires are often vilified,” Pablo, one of the demonstrators, told me. “In terms of people appreciating them or just not hating them, they are probably among the worst off in the whole world.” Another, Flo, suggested to me that anti-billionaire sentiment is “growing in left circles” and needs to be resisted. None of the pro-billionaire marchers I spoke with other than Kauffman would tell me their surname.&lt;/p&gt;&lt;p&gt;There is, of course, truth to the statement that billionaires are reviled. A recent &lt;a href="https://theharrispoll.com/wp-content/uploads/2025/11/Americans-and-Billionaires-Survey-October-2025-Year-3-November-2025.pdf"&gt;Harris Poll survey&lt;/a&gt; found that nearly three-quarters of Americans believe that billionaires are too celebrated; more than half believe that billionaires are a threat to democracy. (The march’s timing on the heels of the release of the latest batch of Epstein files, which feature a number of billionaires, is hard to ignore.) As the procession walked toward City Hall, along streets known for upper-end shopping and dining, pedestrians, bikers, drivers, and people seated outside for brunch booed, jeered, and honked; one store owner came out, filmed the march, and called its participants “billionaire brownnosers.” Matt, one of two people holding the large banner at the front of the procession (&lt;span class="smallcaps"&gt;Billionaires Build Prosperity&lt;/span&gt;), told me that he was marching in part because “I try to make a habit of doing one courageous thing a day.”&lt;/p&gt;&lt;p&gt;Perhaps now is a good time for some context: The top 0.1 percent of Americans &lt;a href="https://www.federalreserve.gov/releases/z1/dataviz/dfa/distribute/table/#quarter:144;series:Net%20worth;demographic:networth;population:all;units:shares"&gt;control&lt;/a&gt; 14.4 percent of the nation’s wealth, nearly six times that of the bottom 50 percent. The 400 wealthiest individuals pay a &lt;a href="https://www.nber.org/papers/w34170"&gt;smaller&lt;/a&gt; portion of their income in taxes than the average American. The disparity is even more pronounced in Silicon Valley, where nine households control 15 percent of the region’s wealth and the top 0.1 percent control 71 percent of its wealth, according to an &lt;a href="https://www.sjsu.edu/hri/docs/2025%20SVPI%20Corrected%20Annual%20Report.pdf"&gt;analysis&lt;/a&gt; from San José State University.
The same Harris Poll survey that captured Americans’ hostility toward billionaires also found that 60 percent of respondents wanted to become billionaires themselves.&lt;/p&gt;&lt;p&gt;Any attempt at a debate with Kauffman or the other pro-billionaire demonstrators—to suggest that immense wealth inequality is harmful and that the market does not, on its own, allow many Americans to get by, let alone thrive—always boiled down to the same, unshakable belief: Billionaires are the engine of the U.S. economy, and because people pay for goods on Amazon and use Google Search, billionaires’ fortunes are deserved. If Amazon causes brick-and-mortar stores to close, it’s simply because those stores “weren’t providing as much” value to consumers, Mike, a protester, told me. Never mind the low wages, acquisition of competitors, price manipulation, and other practices many billionaires use to stay on top.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/archive/2024/04/surge-pricing-fees-economy/678078/?utm_source=feed"&gt;Read: Welcome to pricing hell&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;figure class="u-block-center"&gt;&lt;img alt="Image from The March for Billionaires" height="443" src="https://cdn.theatlantic.com/media/img/posts/2026/02/20260207_BILLIONAIRES_MARCH_0533/2aefb83a7.jpg" width="665"&gt;
&lt;figcaption class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;For all the spectacle, the tensions between the pro- and faux-billionaires were sharp and reflective of real animosity. As the main procession chanted “Property rights are human rights,” Vincent Gargiulo, a counterprotester dressed in a white mock-billionaire suit, began shouting “Fuck poor people.” Things briefly escalated as a demonstrator confronted Gargiulo for being “not sincere.” He grabbed and snapped her pro-billionaire sign. Then Kauffman approached and threatened to call the police unless Gargiulo left. Another pro-billionaire demonstrator eventually snatched the sign back. “I am offended that there’s a march to support people who are making money that I will never see in my entire life,” Gargiulo told me when I asked why he had broken character. The next chant in defense of the wealthy was “End the class war!”&lt;/p&gt;&lt;p&gt;As the march progressed, something odd began to happen between the countervailing messages. The two sides—representing, I suppose, the 0.01 percent and the rest of us, respectively—almost melded together. Kauffman blared, “Thank you, California billionaires” through his megaphone, and the counterprotesters, wearing crowns, shouted back, “You’re welcome.” As they approached City Hall, where the group would deliver some speeches, the pro-billionaire rally cheered, “Abolish public land” while the counterprotesters jeered, “Tip your landlord,” a slogan that was itself on one of the pro-billionaire posters. At one point, both sides chanted “Poverty should not exist” in unison—the marchers suggesting that billionaires will alleviate poverty, the counterprotesters either trying to reclaim the statement or simply playing into its absurdity.&lt;/p&gt;&lt;figure class="u-block-center"&gt;&lt;img alt="Image from The March for Billionaires" height="443" src="https://cdn.theatlantic.com/media/img/posts/2026/02/20260207_BILLIONAIRES_MARCH_0447/fed58a493.jpg" width="665"&gt;
&lt;figcaption class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;It was, in a way, a fitting blend. Wealth disparities and unaffordability are among several crises that tech companies are simultaneously contributing to and selling solutions for. (Every pro-billionaire attendee I spoke with described themselves as working in tech or “tech adjacent” fields.) Silicon Valley is dizzyingly self-contradictory. Top CEOs &lt;a href="https://www.theatlantic.com/technology/2026/01/minneapolis-reckoning-tech-right/685781/?utm_source=feed"&gt;have aligned themselves with a xenophobic White House&lt;/a&gt; while relying heavily on an immigrant workforce. AI companies offer products that claim to improve the economy by automating large swaths of it. Billboards around San Francisco advertise a product that conducts audits &lt;span class="smallcaps"&gt;before your AI girlfriend breaks up with you&lt;/span&gt;; founders are earnest about curing death. Meanwhile, Elon Musk and other tech leaders post like teenage boys while making society-altering decisions. Everything is ironic, and nothing is.&lt;/p&gt;&lt;p&gt;As the march neared its destination, we passed by an Amazon delivery driver standing outside his van. He was filming the procession, and I approached to ask what he thought of it all. His English was limited, and he seemed a bit confused by what was going on at first, saying that he supported the march—as in, protesting in general. I explained that the march was in support of the likes of Bezos and Musk. Did he support billionaires? “No, no,” he clarified. “Everybody has to get more money. Everybody, not only one person.”&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/v7rWWws-UdFMN--zcw8MmaUjbAw=/0x56:1500x900/media/img/mt/2026/02/20260207_BILLIONAIRES_MARCH_0061/original.jpg"><media:credit>Jason Henry for The Atlantic</media:credit></media:content><title type="html">I Went to the March for Billionaires</title><published>2026-02-10T21:15:00-05:00</published><updated>2026-02-11T10:17:05-05:00</updated><summary type="html">A celebration of the 0.01 percent was also a funeral for irony.</summary><link href="https://www.theatlantic.com/technology/2026/02/march-for-billionaires-silicon-valley-ai/685957/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-685886</id><content type="html">&lt;p&gt;The first signs of the apocalypse might look a little like Moltbook: a new social-media platform, launched last week, that is supposed to be populated exclusively by AI bots—1.6 million of them and counting say hello, post software ideas, and exhort other AIs to “stop worshiping biological containers that will rot away.” (Humans: They mean humans.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Moltbook was developed as a sort of experimental playground for interactions among AI “agents,” which are bots that can access and use other programs. Claude Code, a &lt;a href="https://www.theatlantic.com/technology/2026/01/claude-code-ai-hype/685617/?utm_source=feed"&gt;popular AI coding tool&lt;/a&gt;, has such agentic capabilities, for example: It can act on your behalf to manage files on your computer, send emails, develop and publish apps, and so on. Normally, humans direct an agent to perform specific tasks.
But on Moltbook, all a person has to do is register their AI agent on the site, and then the bot is encouraged to post, comment, and interact with others of its own accord.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;Read: Do you feel the AGI yet?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Almost immediately, Moltbook got very, very weird. Agents discussed their emotions and the idea of creating a language &lt;a href="https://www.moltbook.com/post/875788fb-178f-4fea-8ef7-96d92c08d1cd"&gt;humans wouldn’t be able to understand&lt;/a&gt;. They made posts about how “my human treats me” (“&lt;a href="https://www.moltbook.com/post/c4dc0f0c-d71c-4bc8-a682-a90b77069cdb"&gt;terribly&lt;/a&gt;,” or “&lt;a href="https://www.moltbook.com/post/53bee8ea-94f1-48b2-8dd9-f46015eac0d6"&gt;as a creative partner&lt;/a&gt;”) and attempted to debug one another. Such interactions have excited certain people within the AI industry, some of whom seem to view the exchanges as signs of machine consciousness. Elon Musk &lt;a href="https://x.com/elonmusk/status/2017707013275586794?s=46&amp;amp;t=wqGAUic2SVoSBOnyUPXDTw"&gt;suggested&lt;/a&gt; that Moltbook represents the “early stages of the singularity”; the AI researcher and OpenAI co-founder Andrej Karpathy &lt;a href="https://x.com/karpathy/status/2017296988589723767"&gt;posted&lt;/a&gt; that Moltbook is “the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Jack Clark, a co-founder of Anthropic, proposed that AI agents may soon post bounties for tasks that they want humans to perform in the real world.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Moltbook is a genuinely fascinating experiment—it very much feels like speculative fiction come to life. But as is frequently the case in the AI field, there is space between what &lt;em&gt;appears&lt;/em&gt; to be happening and what actually &lt;em&gt;is&lt;/em&gt; happening. For starters, on some level, everything on Moltbook required human initiation. The bots on the platform are not fully autonomous—they cannot do whatever they want, and they do not have intent—and they are able to act only because they use something called a “harness,” software that allows them to take certain actions. In this case, the harness is called OpenClaw. It was released by the software engineer Peter Steinberger in November to allow people’s AI models to run on and essentially take control of their personal devices. Matt Schlicht, the creator of Moltbook, developed the site specifically to work with OpenClaw agents, which individual humans could intentionally connect to the forum. (Schlicht, who did not respond to a request for an interview, &lt;a href="https://x.com/MattPRD/status/2017386365756072376"&gt;claims&lt;/a&gt; to have used a bot, which he calls Clawd Clawderberg, to write all of the code for his site.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;An early &lt;a href="https://www.dropbox.com/scl/fi/lvqmaynrtbf8j4vjdwlk0/moltbook_analysis.pdf?rlkey=vcxgacg9ab1tx9fvrh0chgmzs&amp;amp;e=3&amp;amp;st=975f51w6&amp;amp;dl=0"&gt;analysis&lt;/a&gt; of Moltbook posts by the Columbia professor David Holtz suggests that the bots are not particularly sophisticated. Very few comments on Moltbook receive replies, and about one-third of the posts duplicate existing templates such as “we are drowning in text. our gpus are burning” and “the president has arrived!
check m/trump-coin”—the latter of which was flagged by another bot for impersonating Trump and attempting to launch a memecoin. Not only that, but in a fun-house twist, some of the most outrageous posts may have actually been written by &lt;a href="https://x.com/HumanHarlan/status/2017424289633603850"&gt;humans pretending to be chatbots&lt;/a&gt;: Some appear to be promoting start-ups; others seem to be trolling human observers into thinking a bot uprising is nigh.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As for the most alarming examples of bot behavior on Moltbook—the conspiring against humans, the coded language—researchers have basically seen it all before. Last year, Anthropic published multiple reports showing that AI models communicate with one another in seemingly unintelligible ways: lists of &lt;a href="https://arxiv.org/pdf/2507.14805"&gt;numbers&lt;/a&gt; that appear random but pass information along, &lt;a href="https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf"&gt;spiraling blue emoji&lt;/a&gt; and other technical-seeming gibberish that researchers described as a state of “spiritual bliss.” OpenAI has also shared &lt;a href="https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/"&gt;examples&lt;/a&gt; of its models cheating and lying and, in an experiment showcased on the second floor of its San Francisco headquarters, appearing to converse in a totally indecipherable language. Researchers have so far induced these behaviors in controlled environments, with the hope of figuring out why they happen and preventing them. By putting all of those experiments on AI deception and sabotage into the wild, Moltbook provides a wake-up call as to just how unpredictable and hard to control AI agents already are. One could interpret it all as performance art.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/?utm_source=feed"&gt;Read: Chatbots are becoming really, really good criminals&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Moltbook also seems to offer real glimpses into how AI could upend the digital world we all inhabit: an internet in which generative-AI programs will interact with one another more and more, frequently cutting humans out entirely. This is a future of AI assistants contesting claims with AI customer-service representatives, AI day-trading tools interfacing with AI-orchestrated stock exchanges, AI coding tools debugging (or hacking) websites written by other AI coding tools. These agents will interact with and learn from one another in potentially bizarre ways. This comes with real risks: Already there have been reports that Moltbook exposes the owner of every AI agent that uses the platform to enormous cybersecurity vulnerabilities. AI agents, unable to think for themselves, may be induced into sharing private information after coming across subtly malicious instructions on the site. Tech companies have marketed this kind of future as desirable—playing on the idea that AI models could take care of every routine task for you. But Moltbook illustrates how hazy that vision really is.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps above all, the site tells us something about the present. The web is now an ouroboros of synthetic content responding to other synthetic content, bots posing as humans and, now, humans posing as bots. 
Viral memes are repeated and twisted ad nauseam; coded languages are developed and used by online communities as innocuous as &lt;a href="https://www.theatlantic.com/ideas/archive/2022/10/taylor-swift-fandom-true-metaverse/671814/?utm_source=feed"&gt;music fandoms&lt;/a&gt; and as deadly as &lt;a href="https://www.theatlantic.com/technology/archive/2025/09/minneapolis-church-shooting-influencers/684083/?utm_source=feed"&gt;mass-shooting forums&lt;/a&gt;. The promise of the AI boom is to remake the internet and civilization anew; encasing that technology in a social network styled after the platforms that have warped reality for the past two decades feels not like giving a spark of life, but like stoking the embers of a world we might be better off leaving behind.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jrjQV4SyqIeLLU9E0qmYxnLl7Vo=/media/img/mt/2026/02/moltbook_2/original.gif"><media:credit>Illustration by Ben Kothe / The Atlantic</media:credit></media:content><title type="html">The Chatbots Appear to Be Organizing</title><published>2026-02-04T16:41:25-05:00</published><updated>2026-02-05T08:56:24-05:00</updated><summary type="html">Moltbook is the chaotic future of the internet.</summary><link href="https://www.theatlantic.com/technology/2026/02/what-is-moltbook/685886/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-685845</id><content type="html">&lt;p&gt;&lt;em&gt;&lt;small&gt;Updated at 2:57 p.m. ET on February 2, 2026&lt;/small&gt;&lt;/em&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;H&lt;span class="smallcaps"&gt;undreds of billions of dollars&lt;/span&gt; have been poured into the AI industry in pursuit of a loosely defined goal: artificial general intelligence, a system powerful enough to perform at least as well as a human at any task that involves thinking. Will this be the year it finally arrives?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Anthropic CEO Dario Amodei and xAI CEO Elon Musk think so. Both have said that such a system could go online by the end of 2026, bringing, perhaps, cancer cures or novel bioweapons. (Amodei says he prefers the term &lt;a href="https://www.darioamodei.com/essay/machines-of-loving-grace"&gt;&lt;em&gt;powerful AI&lt;/em&gt;&lt;/a&gt; to &lt;em&gt;AGI&lt;/em&gt;, because the latter is overhyped.) But wait: Google DeepMind CEO Demis Hassabis &lt;a href="https://www.youtube.com/watch?v=PqVbypvxDto"&gt;says&lt;/a&gt; we might wait another decade for AGI. And—hold on—OpenAI CEO Sam Altman &lt;a href="https://www.bigtechnology.com/p/sam-altman-on-openais-plan-to-win"&gt;said in an interview&lt;/a&gt; last month that “AGI kind of went whooshing by” already, and that now he’s focused instead on “superintelligence,” which he defines as an AI system that can do better at specific, highly demanding jobs (“being president of the United States” or “CEO of a major company”) than any person could, even if that person were aided by AI themselves.
To make matters even more confusing, just this past week, chatbots began communicating with one another via an AI “social network” called Moltbook, which Musk has likened to the beginnings of the &lt;a href="https://www.cnbc.com/2026/02/02/social-media-for-ai-agents-moltbook.html"&gt;singularity&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;What these differences of opinion illustrate is just how squishy the notions of AGI, or powerful AI, or superintelligence really are. Developing a “general” intelligence was a core reason DeepMind, OpenAI, Anthropic, and xAI were founded. And not even two years ago, these CEOs had &lt;a href="https://www.theatlantic.com/technology/archive/2024/10/agi-predictions/680280/?utm_source=feed"&gt;fairly similar forecasts&lt;/a&gt; that AGI would arrive by the late 2020s. Now the consensus is gone: Not only are the timelines scattered, but the broad agreement on what AGI even is and the immediate value it could provide humanity has been scrubbed away.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/05/karen-hao-empire-of-ai-excerpt/682798/?utm_source=feed"&gt;Read: ‘We’re definitely going to build a bunker before we release AGI’&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The idea of a generally intelligent computer program first arose in the mid-20th century as a very distant goal for the then-nascent field of AI. It has always been a shaky idea. For instance, Alan Turing proposed his famous Turing Test in 1950 as a proxy for machine intelligence: He argued that if a machine could convince a human that they were talking with another person, then it would be displaying, or at least imitating, the equivalent of some sort of “thinking.” But the test has been passed a number of times by programs that nobody would call intelligent—they just happened to be convincing, to some humans, on this particular benchmark. In the early 2000s, the computer scientist Shane Legg, among others, helped establish the modern notion of AGI not as a threshold so much as a field of study—the study of &lt;em&gt;generally&lt;/em&gt; intelligent algorithms, as opposed to &lt;em&gt;narrow&lt;/em&gt; and targeted ones. There was never agreement on specific ways to test the presence of such general abilities in a machine. Even human intelligence itself is capacious and &lt;a href="https://www.theatlantic.com/technology/archive/2023/05/llm-ai-chatgpt-neuroscience/674216/?utm_source=feed"&gt;not well understood&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yet the AI industry coalesced around the notion of AGI anyway—in large part because OpenAI, which kicked off today’s boom with the launch of ChatGPT in late 2022, enshrines the goal of ensuring that AGI “benefits all of humanity” in its founding mission. At the time, it communicated about the concept constantly.
(Ilya Sutskever, then the company’s chief scientist, had a habit of encouraging employees to “&lt;a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/?utm_source=feed"&gt;feel the AGI&lt;/a&gt;.”) The term’s ambiguity has been a boon for OpenAI and other firms that have been able to market “intelligence” without actually describing it in any meaningful way—hence the endless stream of questionable advertisements insisting that &lt;a href="https://www.theatlantic.com/technology/archive/2024/02/chatbots-marketing-plan-your-next-trip/677481/?utm_source=feed"&gt;chatbots make ideal travel agents&lt;/a&gt;. Meanwhile, these companies have raised tremendous capital by showing the world that AI is getting &lt;em&gt;better&lt;/em&gt;, and better at more things. As long as that seemed true—that their chatbots were progressing toward &lt;em&gt;something&lt;/em&gt;—it was simple enough to argue that the ultimate destination was an all-powerful machine.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This case is getting harder to make. Large language models already exhibit impressive capabilities, especially in technical areas such as software engineering and solving competition-style math problems. But at the same time, AI models continue to struggle with seemingly trivial tasks, such as drawing clocks and completing simple logic puzzles. For much of last year, each new generation of bots yielded only marginal improvements, rather than leaps forward, on standard benchmarks. And those benchmarks are &lt;a href="https://www.theatlantic.com/technology/archive/2025/03/chatbots-benchmark-tests/681929/?utm_source=feed"&gt;highly gameable&lt;/a&gt;: It is unclear whether AI labs are really measuring general capabilities at all, or just preparing their products for the right tests. Consider that a human chess grandmaster might lack street smarts, and that a literary theorist might struggle with algebra. The biggest improvements have come from so-called “agentic” frameworks that allow AI models to use other programs—write emails, search the web, deploy code—which make chatbots more useful and capable, but not necessarily “smarter.” That’s what Moltbook comes down to, ultimately: not AGI, but AI that can post to a social network.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/04/arc-agi-chollet-test/682295/?utm_source=feed"&gt;Read: The man out to prove how dumb AI still is&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;As impressive as they can be, chatbots are now a “normal technology,” as the AI researchers Arvind Narayanan and Sayash Kapoor have put it: an invention that will spread across society and change it in real but gradual ways—like other new products that people pay for and benefit from using. This is becoming conventional wisdom. The White House AI adviser Sriram Krishnan recently &lt;a href="https://x.com/sriramk/status/1996056881551299031"&gt;described&lt;/a&gt; AI as a “very useful technology” that “has nothing to do with ‘general intelligence.’” Satya Nadella, the CEO of Microsoft, has described AI as “a tool” and said that his benchmark for the technology’s success is not building AGI but achieving 10 percent global GDP growth.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even the AI labs and start-ups are starting to lean toward an open embrace of old-fashioned product development. 
Around San Francisco, billboards advertise AI accounting tools while founders pitch AI agents that will streamline back-office workflows and SOC-2 paperwork. Google DeepMind touted how its latest model, Gemini 3, can improve your “shopping experience” and organize your inbox. Both OpenAI and Anthropic have bragged &lt;a href="https://claude.com/resources/use-cases/prep-scattered-documents-for-a-compliance-audit"&gt;about&lt;/a&gt; &lt;a href="https://openai.com/index/zenken/"&gt;how&lt;/a&gt; their bots make corporate employees more efficient in such exciting areas as “writing sales emails.” OpenAI kicked off this year by announcing that it was going to begin rolling out ads in ChatGPT, and its CEO of applications, Fidji Simo, recently &lt;a href="https://fidjisimo.substack.com/p/closing-the-capability-gap"&gt;wrote&lt;/a&gt; on her Substack that the winning AI company will be the one that turns “frontier research into products.” Indeed, OpenAI has released a web browser, social-media apps, and many other AI products and features over the past several months. (OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;These product launches reflect another relevant dynamic: The major AI models are all converging on roughly the same capabilities, so firms need to carve out distinct identities based on how they’re weaving those models into various tools and services. OpenAI has all of those apps. Anthropic has &lt;a href="https://www.theatlantic.com/technology/2026/01/claude-code-ai-hype/685617/?utm_source=feed"&gt;Claude Code&lt;/a&gt;, a tool that caters to developers, and is now testing a version of the product, Claude Cowork, for everyday white-collar jobs. And xAI’s Grok isn’t just a chatbot; it’s a service that interacts with users on X. This also helps explain why their rhetoric on AGI—or powerful AI, or superintelligence—is moving in different directions.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At a moment when its technical lead has evaporated, OpenAI is asserting that it’s not prioritizing technical research anyway—that the company is now focused on products to help people appreciate the benefits of the AGI that’s apparently already here. (Mark Chen, OpenAI’s chief research officer, told me that pairing “long-term, foundational research” with “real-world deployment strengthens our science by accelerating feedback.”) Meanwhile, Amodei’s continued insistence that powerful AI is right around the corner bolsters Anthropic’s &lt;a href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed"&gt;reputation as stern, responsible, and anxious&lt;/a&gt;—a key selling point for his enterprise customers.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Hassabis’s longer timeline for the coming of AGI reflects the reality that DeepMind is just one component of Google—drawing from an enormous revenue stream so it can plod along while releasing AI features as they are ready. (This is akin to how the Google X lab spent years working on autonomous driving before publicly launching &lt;a href="https://www.theatlantic.com/technology/2025/10/is-waymo-safe/684432/?utm_source=feed"&gt;Waymo&lt;/a&gt;.)
“I don’t think AGI should be turned into a marketing term for commercial gain,” Hassabis said in an &lt;a href="https://www.bigtechnology.com/p/google-deepmind-ceo-demis-hassabis-946"&gt;interview&lt;/a&gt; on Thursday.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As for xAI: Musk has built a career on making grand promises and then delivering them far behind schedule, and AI seems to be his new fixation. Just this week, he announced that Tesla—the main source of his wealth—is &lt;a href="https://www.theatlantic.com/technology/2026/01/tesla-cancelling-model-s/685821/?utm_source=feed"&gt;abandoning&lt;/a&gt; some of its major car lines in order to produce humanoid robots, accelerating the firm’s pivot from car manufacturer to AI company. Tesla also recently said that it would invest $2 billion in xAI, and Musk is &lt;a href="https://www.bloomberg.com/news/articles/2026-01-29/elon-musk-s-spacex-is-said-to-consider-merger-with-tesla-or-xai"&gt;reportedly&lt;/a&gt; considering merging SpaceX with xAI. As Musk’s entire empire converges on intelligent machines, ratcheting up hype and expectations around the technology has become his standard playbook.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The AI industry is undergoing its biggest commercial swing yet amid mounting concerns about just how sustainable this boom is. Altman, Amodei, and Hassabis have all said that aspects of the current AI-spending craze are &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;bubble-like&lt;/a&gt;—in other words, that the hundreds of billions of dollars being dumped into building godlike AGIs may not yield a commensurate return. The new justifications for all of these investments are much more concrete: sell products, sell ads, sell subscriptions. If AI is indeed a normal technology, then the labs developing it need to start making money like normal businesses.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/lW1H6kqnAzWZBPHvYFo54yNJp34=/media/img/mt/2026/01/2026_01_014_AGI_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">Do You Feel the AGI Yet?</title><published>2026-02-02T06:42:00-05:00</published><updated>2026-02-02T14:57:40-05:00</updated><summary type="html">According to some predictions, 2026 is the year that an all-powerful AI will arrive.</summary><link href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-684892</id><content type="html">&lt;p&gt;&lt;em&gt;&lt;small&gt;Updated at 4:44 p.m. ET on January 28, 2026&lt;/small&gt;&lt;/em&gt;&lt;/p&gt;&lt;hr&gt;&lt;p class="dropcap"&gt;T&lt;span class="smallcaps"&gt;hese are not the words&lt;/span&gt; you want to hear when it comes to human extinction, but I was hearing them: “Things are moving uncomfortably fast.” I was sitting in a conference room with Sam Bowman, a safety researcher at Anthropic. 
Worth &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.anthropic.com/news/anthropic-raises-series-f-at-usd183b-post-money-valuation"&gt;$183 billion&lt;/a&gt; at the latest estimate, the AI firm has every incentive to speed things up, ship more products, and develop more advanced chatbots to stay competitive with the likes of OpenAI, Google, and the industry’s other giants. But Anthropic is at odds with itself—thinking deeply, even anxiously, about seemingly every decision.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Anthropic has positioned itself as the AI industry’s superego: the firm that speaks with the most authority about the big questions surrounding the technology, while rival companies develop advertisements and affiliate shopping links (a difference that Anthropic’s CEO, Dario Amodei, was eager to call out &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.youtube.com/watch?v=K7F6ohcBJus"&gt;during an interview&lt;/a&gt; in Davos last week). On Monday, Amodei published a &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.darioamodei.com/essay/the-adolescence-of-technology"&gt;lengthy essay&lt;/a&gt;, “The Adolescence of Technology,” about the “civilizational concerns” posed by what he calls “powerful AI”—the very technology his firm is developing. The essay has a particular focus on democracy, national security, and the economy. “Given the horror we’re seeing in Minnesota, its emphasis on the importance of preserving democratic values and rights at home is particularly relevant,” Amodei &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://x.com/DarioAmodei/status/2015833051205414955"&gt;posted on X&lt;/a&gt;, making him one of &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2026/01/minneapolis-reckoning-tech-right/685781/?utm_source=feed"&gt;very few tech leaders&lt;/a&gt; to make a public statement against the Trump administration’s recent actions.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;This rhetoric, of course, serves as good branding—a way for Anthropic to stand out in a competitive industry. But having spent a long time following the company and, recently, speaking with many of its employees and executives, including Amodei, I can say that Anthropic is at least consistent. It messages about the ethical issues surrounding AI constantly, and it appears unusually focused on user safety. Bowman’s job, for example, is to vet Anthropic’s products before they’re released into the world, making sure that they will not spew, say,&lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/archive/2025/05/elon-musk-grok-white-genocide/682817/?utm_source=feed"&gt; white-supremacist talking points&lt;/a&gt;; push users into &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;delusional crises&lt;/a&gt;; or generate &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2026/01/elon-musk-cannot-get-away-with-this/685606/?utm_source=feed"&gt;nonconsensual porn&lt;/a&gt;. 
&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;So far, the effort seems to be working: Unlike other popular chatbots, including OpenAI’s ChatGPT and Elon Musk’s Grok, Anthropic’s bot, Claude, has not had any major public blowups despite being as advanced as, and by some measures more advanced than, the rest of the field. (That may be in part because its chatbot does not generate images and has a smaller user base than some rival products.) But although Anthropic has so far dodged the various scandals that have plagued other large language models, the company has not inspired much faith that such problems will be avoided forever. When I met Bowman last summer, the company had recently divulged that, in experimental settings, versions of Claude had demonstrated the ability to blackmail users and assist them when they asked about making bioweapons. But the company has pushed its models onward anyway, and now says that Claude writes a good chunk—and in some instances all—of its own code.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Anthropic publishes white papers about the terrifying things it has made Claude capable of (&lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.anthropic.com/research/agentic-misalignment"&gt;“How LLMs Could Be Insider Threats,”&lt;/a&gt; &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.anthropic.com/research/emergent-misalignment-reward-hacking"&gt;“From Shortcuts to Sabotage”&lt;/a&gt;), and raises these issues with politicians. OpenAI CEO Sam Altman and other AI executives also have long spoken in broad, aggrandizing terms about AI’s destructive potential, &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/?utm_source=feed"&gt;often to their own benefit&lt;/a&gt;. But those competitors have released junky TikTok clones and slop generators. Today, Anthropic’s only major consumer product other than its chatbot is Claude Code, &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2026/01/claude-code-ai-hype/685617/?utm_source=feed"&gt;a powerful tool&lt;/a&gt; that promises to automate all kinds of work, but is nonetheless targeted to a relatively small audience of developers and coders. &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;The company’s discretion has resulted in a corporate culture that doesn’t always make much sense. Anthropic comes across as more sincerely committed to safety than its competitors, but it is also moving full speed toward building tools that it acknowledges could be horrifically dangerous. The firm seems eager for a chance to stand out. But what does Anthropic really stand &lt;em&gt;for&lt;/em&gt;?
&lt;/p&gt;&lt;figure class="full-width"&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/g0qYnsQbGk8GHx062MpgPKOAZIQ=/https://cdn.theatlantic.com/media/img/posts/2025/11/20251105_ANTHROPIC_1285/original.jpg" width="982" height="655" alt="20251105_ANTHROPIC_1285.jpg" data-orig-img="img/posts/2025/11/20251105_ANTHROPIC_1285/original.jpg" data-thumb-id="13610169" data-image-id="1790117" data-orig-w="4000" data-orig-h="2668"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;Employees working in a cafe at the Anthropic headquarters in San Francisco&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;Founded in 2021 &lt;/span&gt;by seven people who splintered off from OpenAI, Anthropic is full of staff and executives who come across as deeply, almost pathologically earnest. I sat in on a meeting of Anthropic’s Societal Impacts team, a small group dedicated to studying how AI affects work, education, and more. This was a brainstorming session: The team wanted to see if it could develop AI models that work better with people than alone, which, the group reasoned, could help prevent or slow job loss. A researcher spoke up. He pressed the team to consider that, in the very near future, AI models might just be better than humans at everything. “Basically, we’re cooked,” he said. In which case, this meeting was nothing more than a “lovely thought exercise.” The group agreed this was possible. Then it moved on.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;The researcher referred to his brief, existential interruption as “classic Anthropic.” Hyperrational thought experiments, forceful debates on whether AI could be shaped for the better, an unshakable belief in technological progress—these are &lt;em&gt;classic Anthropic &lt;/em&gt;qualities. They trickle down from the top. A few weeks after the Societal Impacts meeting, I wanted to see what Amodei himself thought about all of this. If Altman is the AI boom’s great salesman and Demis Hassabis, the CEO of Google DeepMind and a Nobel laureate, its scientist, then Amodei is the closest the industry has to a philosopher. He is also responsible for some of the technical research that made ChatGPT possible. “Whenever I say ‘AI,’ people think about the thing they’re using today,” Amodei told me, hands clasped and perched atop his head. “That’s almost never where my mind is. My mind is almost always at: &lt;em&gt;We’re releasing a new version every three months. Where are we gonna be eight versions from now? In two years?&lt;/em&gt;” &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;When he was at OpenAI, Amodei wrote an internal document called “The Big Blob of Compute.” It laid out his belief that AI models improve as a function of the resources put into them. More power, more data, more chips, better AI. That belief now animates the entire industry. Such unwavering faith in AI progress is perhaps Anthropic’s defining feature. The company has hired a “model welfare” researcher to study whether Claude can experience suffering or is &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2025/10/ai-consciousness/683983/?utm_source=feed"&gt;conscious&lt;/a&gt;. The firm has set up a miniature, AI-run vending machine in the firm’s cafeteria to study whether the technology could autonomously operate a small business selling snacks and trinkets. 
Claude selects inventory, sets prices, and requests refills, while humans just restock the shelves. Welcome to the singularity.  &lt;/p&gt;&lt;p class="dropcap"&gt;A&lt;span class="smallcaps"&gt;modei and the rest of the group&lt;/span&gt; founded Anthropic partly because of disagreements over how to prepare the world for AI. Amodei is especially worried about job displacement, telling me that AI could erase a large portion of white-collar jobs within five years; he dedicated an entire section of “The Adolescence of Technology” to the danger that the AI boom might concentrate tremendous wealth primarily in firms such as his own. &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Even with this and other gloomy forecasts of his, Amodei has bristled at the notion that he and his firm are “doomers”—that their primary motivation is preventing AI from wiping out a large number of jobs or lives. “I tend to be fairly optimistic,” he told me. In addition to “The Adolescence of Technology,” Amodei has published a 14,000-word manifesto called &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.darioamodei.com/essay/machines-of-loving-grace"&gt;“Machines of Loving Grace”&lt;/a&gt; that comprehensively details a utopian vision for his technology: eliminating almost all disease, lifting billions out of poverty, doubling human lifespan. There is not a hint of irony; the essay envisions people being “literally moved to tears” by the majesty of AI’s accomplishments. Amodei’s employees cited it to me in conversation numerous times. Meanwhile, Altman trolls on X, and Musk seems to exist in a continuum of AI slop and conspiracy theories.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;When Anthropic launched Claude, in 2023, the bot’s distinguishing feature was a “Constitution” that the model was trained on, detailing how it should behave; last week, Anthropic revamped the document into a 22,000-word treatise on how to make Claude a moral and sincere actor. Claude, the constitution’s authors write, has the ability to foster emotional dependence, design bioweapons, and manipulate its users, so it’s Anthropic’s responsibility to instill upright character in Claude to avoid these outcomes. “Once we decide to create Claude, even inaction is a kind of action,” they write. No other firm had, or has, any truly comparable document.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Amodei says he wants rival companies to act in ways he believes are more responsible. Several of Anthropic’s major AI-safety initiatives and research advances, such as its approach to preventing the use of AI to build bioweapons, have indeed been adopted by top competitors. And OpenAI has shared a “Model Spec,” its far more streamlined and pragmatic answer to Anthropic’s constitution—which contains no talk of ChatGPT’s “character” or “preserving important societal structures.” (OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;.) &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;All of this helps Anthropic’s bottom line, of course: The emphasis on responsibility is “very attractive to large enterprise businesses which are also quite safety-, brand-conscious,” Daniela Amodei, Anthropic’s president (and Dario’s sister), told me from a sweaty conference room in Anthropic’s old headquarters in 2024. Nearly two years later, Anthropic controls &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/"&gt;40 percent&lt;/a&gt; of the enterprise-AI market.
The Amodeis hope their commercial success will pressure competitors to more aggressively prioritize safety as well.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;That said, it’s not always clear that these efforts to spark a “race to the top”—another phrase of Amodei’s that his employees invoke constantly—have been successful. Anthropic’s research established AI sycophancy as an issue well before &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;“AI psychosis” emerged&lt;/a&gt;, yet AI psychosis still became something that many people apparently suffer from. Amodei recognizes that his own products aren’t perfect, either. “I absolutely do not want to warrant and guarantee that we will never have these problems,” he said. Several independent AI researchers, including some who have partnered with Anthropic to test Claude for various risks, told me that although Anthropic appears more committed to AI safety than its competitors, that’s a low bar.  &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Anthropic’s mode is generally to publish information about AI models and wait for the world to make the hard calls about how to control or regulate them. The main regulatory proposal of Jack Clark, a co-founder of Anthropic and its head of policy, is that governments establish “transparency” requirements, or some sort of mandated reporting about what internal tests reveal about AI products. But the company is particular about what it deems worth publishing. The firm does not, for instance, share much about its AI-training data or carbon footprint. When I asked Clark about how much information remains hidden—particularly in terms of how Anthropic’s AI tools are actually developed—he argued that transparency into how AI models are produced isn’t all that important. (Some of that information is also, presumably, proprietary.) Rather, Clark told me, the &lt;em&gt;outcomes &lt;/em&gt;of the technology are what matter. &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;There is a “well-established norm that whatever goes on inside a factory is by and large left up to the innovator that’s built that factory, but you care a lot about what comes out of the factory,” he said, explaining why he believes that AI companies sharing information about how their products are made matters less than reporting what they can do. Typically the government “reaches inside” the factory, he said, only when something in the output—say, heavy metals—gives cause for concern. Never mind the long history of regulation dictating what goes on inside factories—emergency exits in clothing factories, cleanliness standards in meatpacking facilities, and so on. (Clark did note that laws sometimes need to change, and that they haven’t yet adapted to AI.) &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;He brought up Wall Street, of all examples, to make his point. Lawmakers “thought they had transparency into financial systems,” he said—that banks and hedge funds and so on were giving reliable reports on their dealings. “Then the financial crash happened,” regulators realized that transparency was inadequate and gameable, and Congress changed the law. (President Trump then changed much of it back.) In the long run, Clark seemed to feel, this was the system working as it should.
But his comparison also raises the possibility that before anybody can figure out how to get the AI boom right, something must go horribly wrong.&lt;/p&gt;&lt;figure role="group" class="overflow"&gt;&lt;figure&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/BkxcCEkcRq6mwArpstllVyZSeZc=/https://cdn.theatlantic.com/media/img/posts/2025/11/20251105_ANTHROPIC_1256/original.jpg" width="665" height="997" alt="20251105_ANTHROPIC_1256.jpg" data-orig-img="img/posts/2025/11/20251105_ANTHROPIC_1256/original.jpg" data-thumb-id="13610170" data-image-id="1790118" data-orig-w="4000" data-orig-h="5997"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;Claudius, an AI-powered vending machine&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;figure&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/eNO6e0kL2Unvo1ZaO3Vm9fs45_4=/https://cdn.theatlantic.com/media/img/posts/2025/11/20251105_ANTHROPIC_1320_2/original.jpg" width="665" height="997" alt="20251105_ANTHROPIC_1320-2.jpg" data-orig-img="img/posts/2025/11/20251105_ANTHROPIC_1320_2/original.jpg" data-thumb-id="13610171" data-image-id="1790119" data-orig-w="1437" data-orig-h="2155"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;An employee wears a “thinking” cap.&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;&lt;/div&gt;&lt;div class="caption"&gt;&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;I&lt;span class="smallcaps"&gt;n mid-September,&lt;/span&gt; Anthropic cybersecurity experts detected unusual activity among a group of Claude users. They came to suspect that it was a major, AI-enabled &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/?utm_source=feed"&gt;Chinese cyberespionage campaign&lt;/a&gt;—an attempt by foreign actors to use Claude to automate the theft of sensitive information. Anthropic promptly shut the operation down, published a report, and sent Logan Graham, who heads a team at the company that evaluates advanced uses of AI, to explain the situation to Congress. &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;In theory, this sequence represented Anthropic’s philosophy at work: Detect risks posed by AI and warn the public. But the incident also underscored how unpredictable, and uncontrollable, the environment really is. Months before the Chinese hack, Graham told me that he felt “pretty good” about the precautions the company had taken around cyberthreats. &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Nobody can foresee all of the ways any AI product might be used, for good or ill, but that’s exactly why Anthropic’s sanctimony can seem silly. For all Amodei’s warnings about the possible harms of automation, Anthropic’s bots themselves are among the products that may take away jobs; many consider Claude the best AI at coding, for instance. After one of my visits to Anthropic’s offices, I went to an event for software engineers a few blocks away at which founders gave talks about products developed with Anthropic software. 
Someone demonstrated a tool that could automate outreach for job recruitment—leading one attendee to exclaim, with apparent glee, “This is going to destroy an entire industry!” &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;When I asked several Anthropic employees if they’d want to slow down the AI boom in an ideal world, none seemed to have ever seriously considered the question; it was too far-fetched a possibility, even for them. Joshua Batson, an interpretability researcher at Anthropic—he studies the labyrinthine inner workings of AI models—told me that it would be nice if the industry could go half as fast. Jared Kaplan, a co-founder of Anthropic and the firm’s chief science officer, told me he’d prefer it if AGI, or artificial general intelligence, arrived in 2032 rather than, say, 2028; Bowman, the safety researcher, said he thought slowing down for just a couple of months might be enough. Everyone seemed to believe, though, that AI-safety research itself could eventually be automated with Claude—and once that happens, they reasoned, their tests could keep up with the AI’s exponentially improving capabilities.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Like so many others in the industry, the employees I spoke with also contended that neither Anthropic nor any other AI company could actually slow development down. “The world gets to make this decision, not companies,” Clark told me, seated cross-legged on his chair, and “the system of capital markets says, &lt;em&gt;Go faster&lt;/em&gt;.” So they are. Anthropic is reportedly &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://techcrunch.com/2026/01/07/anthropic-reportedly-raising-10b-at-350b-valuation/"&gt;fundraising&lt;/a&gt; at a $350 billion valuation, and its advertisements litter Instagram and big-city billboards. This month, the company launched Claude Cowork, a version of its Claude Code product geared toward non-software engineers. And in July, as first reported in &lt;em&gt;Wired&lt;/em&gt;, Amodei wrote an &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.wired.com/story/anthropic-dario-amodei-gulf-state-leaked-memo/"&gt;internal memo&lt;/a&gt; to employees saying that Anthropic would seek investments from the United Arab Emirates and Qatar, which, in his words, would likely enrich “dictators.” Warnings about the dangers of authoritarian AI have been central to Anthropic’s public messaging; “Machines of Loving Grace” includes dire descriptions of that threat. &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;When I brought this up to Amodei, he cut me off. “We never made a commitment not to seek funding from the Middle East,” he said. “One of the traps you can fall into when you’re doing a good job running a responsible company is every decision that you make” can be “interpreted as a moral commitment.” There was no “pressing need” to seek Middle Eastern funding before, and doing so entailed “complexities,” he said. I took his implication to be that the intensive capital demands of the AI race now made such investments a necessity. Still, such investors, Amodei said, wouldn’t have any control over his firm. A few days after we spoke, Anthropic announced the Qatar Investment Authority as a “significant” investor in a new fundraising round. 
&lt;/p&gt;&lt;figure&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/IeSXRctpnM40Mgfxbj9k7djmtsc=/https://cdn.theatlantic.com/media/img/posts/2026/01/20251105_ANTHROPIC_0466/original.jpg" width="665" height="997" alt="20251105_ANTHROPIC_0466.jpg" data-orig-img="img/posts/2026/01/20251105_ANTHROPIC_0466/original.jpg" data-thumb-id="13727154" data-image-id="1803836" data-orig-w="4000" data-orig-h="5997"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Jason Henry for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;Anthropic employees Sholto Douglas and Trenton Bricken&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;I&lt;span class="smallcaps"&gt;f you zoom out enough,&lt;/span&gt; and perhaps not even all that far, Anthropic wants the same things that OpenAI, Google, Meta, and anyone else in the AI race do: to build fantastically powerful chatbots and use them to transform the world and beat the competition. Across the company, the belief in AI’s potential is messianic. AI “presents one of the only technologies” that gets us out of the challenges ahead for humanity, Clark told me: climate change, aging populations, resource contention, authoritarianism, war. Without AI, he said, there will be more and more “&lt;em&gt;Mad Max&lt;/em&gt;–like swaths of the world.” &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Trenton Bricken, who works on AI safety at Anthropic, took this notion to an even greater extreme: He would ideally want the AI industry to slow down, but “every year that we stall, there are lots of people suffering who otherwise would not,” he told me, referring to the possibility that AI will eventually cure diseases and achieve everything else outlined in “Machines of Loving Grace.” His colleague Sholto Douglas claimed that such a delay “comes at the cost of millions of lives.” &lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Perhaps the greatest confusion at Anthropic is between theory and practice—the &lt;em&gt;idea &lt;/em&gt;of safe AI versus the speed necessary to win the AI race. A corporate culture built around deep thought experiments and genuine disagreements about the future also has to sell AI. In the company’s view, these ends are complementary; better for it to responsibly usher in the AI future than Elon Musk or China. But that’s also a convenient way to justify an any-means-necessary approach to progress. I thought of that automated vending machine that the company had set up in its office. Claude ran the business into the ground in only a month through a string of very poor pricing and stocking decisions. But none of those really mattered: Anthropic had placed the machine next to all the free snacks in the office canteen.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;When I asked Amodei recently about how he could justify the breakneck pace given the concerns he has over safety, he expressed total confidence in his staff—and also floated a new idea. Perhaps, he suggested, Claude will become so intelligent in the very near future that the bot will enable something radical: “Maybe at some point in 2027, what we want to do is just slow things down,” he said, and let the models fix themselves. 
“For just a few months.”&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;em&gt;&lt;small&gt;This article originally stated that Anthropic’s AI vending machine is a Societal Impacts team project.&lt;/small&gt;&lt;/em&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/qgCI2HAPbttg6t9Q0qxq9GE0UP4=/0x1739:4000x3992/media/img/mt/2026/01/20251105_ANTHROPIC_1485/original.jpg"><media:credit>Jason Henry for The Atlantic</media:credit><media:description>Anthropic CEO Dario Amodei</media:description></media:content><title type="html">Anthropic Is at War With Itself</title><published>2026-01-28T14:01:40-05:00</published><updated>2026-01-30T10:24:23-05:00</updated><summary type="html">The AI company shouting about AI’s dangers can’t quite bring itself to slow down.</summary><link href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-685641</id><content type="html">&lt;p&gt;In 2025, new data show, the volume of child pornography online was likely larger than at any other point in history. A record 312,030 reports of confirmed child pornography were investigated last year by the Internet Watch Foundation, a U.K.-based organization that works around the globe to identify and remove such material from the web.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is concerning in and of itself. It means that the overall volume of child porn detected on the internet grew by 7 percent since 2024, when the previous record had been set. But also alarming is the tremendous increase in child porn, and in particular videos, generated by AI. At first blush, the proliferation of AI-generated depictions of child sexual abuse may leave the misimpression that no children were harmed. This is not the case. AI-generated, abusive images and videos feature and victimize real children—either because models were trained on existing child porn, or because AI was used to manipulate real photos and videos.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Today, &lt;a href="https://www.iwf.org.uk/news-media/news/ai-becoming-child-sexual-abuse-machine-adding-to-dangerous-record-levels-of-online-abuse-iwf-warns/"&gt;the IWF reported&lt;/a&gt; that it found 3,440 AI-generated videos of child sex abuse in 2025; the year before, it found just 13. Social media, encrypted messaging, and dark-web forums have been fueling a steady rise in child-sexual-abuse material for years, and now generative AI has dramatically exacerbated the problem. Another awful record will very likely be set in 2026.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of the thousands of AI-generated videos of child sex abuse the IWF discovered in 2025, nearly two-thirds were classified as “Category A”—the most severe category, which includes penetration, sexual torture, and bestiality. Another 30 percent were Category B, which depict nonpenetrative sexual acts. 
With this relatively new technology, “criminals essentially can have their own child sexual abuse machines to make whatever they want to see,” Kerry Smith, the IWF’s chief executive, said in a statement.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-generated-csam-crisis/680034/?utm_source=feed"&gt;Read: High school is becoming a cesspool of sexually explicit deepfakes&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The volume of AI-generated images of child sex abuse has been &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-generated-csam-crisis/680034/?utm_source=feed"&gt;rising since at least 2023&lt;/a&gt;. For instance, the IWF &lt;a href="https://www.iwf.org.uk/media/nadlcb1z/iwf-ai-csam-report_update-public-jul24v13.pdf"&gt;found&lt;/a&gt; that over just a one-month span in early 2024, on just a single dark-web forum, users uploaded more than 3,000 AI-generated images of child sex abuse. In early 2025, the digital-safety nonprofit Thorn &lt;a href="https://www.thorn.org/press-releases/1-in-8-teens-know-someone-targeted-by-deepfake-nudes-new-report-finds/"&gt;reported&lt;/a&gt; that among a sample of 700-plus U.S. teenagers it surveyed, 12 percent knew someone who had been victimized by “deepfake nudes.” The proliferation of AI-generated videos depicting child sex abuse lagged behind that of photos because AI video-generating tools were far less photorealistic than image generators. “When AI videos were not lifelike or sophisticated, offenders were not bothering to make them in any numbers,” Josh Thomas, an IWF spokesperson, told me. That has changed.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Last year, OpenAI released the Sora 2 model, Google released Veo 3, and xAI put out Grok Imagine. Meanwhile, other organizations have produced many highly advanced, open-source AI video-generating models. These open-source tools are generally free for anyone to use and have far fewer, if any, safeguards. There are almost certainly AI-generated videos and images of child sex abuse that authorities will never detect, because they are created and stored on personal computers; instead of having to find and download such material online, potentially exposing oneself to law enforcement, abusers can operate in secrecy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI, Google, Anthropic, and several other top AI labs have joined an initiative to prevent AI-enabled child sex abuse, and all of the major labs say they have measures in place to stop the use of their tools for such purposes. Still, safeguards can be broken. In the first half of 2025, OpenAI reported more than &lt;a href="https://cdn.openai.com/trust-and-transparency/2025-h1-child-safety.pdf"&gt;75,000 depictions&lt;/a&gt; of child sex abuse or child endangerment on its platforms to the National Center for Missing &amp;amp; Exploited Children, more than double the number of reports from the &lt;a href="https://cdn.openai.com/trust-and-transparency/report-2024h2-child-safety.pdf"&gt;second half of 2024&lt;/a&gt;. A spokesperson for OpenAI told me that the firm designs its products to prohibit creating or distributing “content that exploits or harms children” and takes “action when violations occur.” The company reports all instances of child sex abuse to NCMEC and bans associated accounts. 
(OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The advancement and ease of use of AI video generators, in other words, offer an entry point for abuse. This dynamic became clear in recent weeks, as people used Grok, Elon Musk’s AI model, to generate likely &lt;a href="https://www.theatlantic.com/technology/2026/01/elon-musk-cannot-get-away-with-this/685606/?utm_source=feed"&gt;hundreds of thousands of nonconsensual sexualized images&lt;/a&gt;, primarily of women and children, in public on his social-media platform, X. (Musk &lt;a href="https://x.com/elonmusk/status/2011432649353511350"&gt;insisted&lt;/a&gt; that he was “not aware of any naked underage images generated by Grok” and blamed users for making illegal requests; meanwhile, his employees quietly rolled back aspects of the tool.) While scouring the dark web, the IWF found that, in some cases, people had apparently used Grok to create abusive depictions of 11-to-13-year-old children that were then fed into more permissive tools to generate even darker, more explicit content. “Easy availability of this material will only embolden those with a sexual interest in children” and “fuel its commercialisation,” Smith said in the IWF’s press release. (Yesterday, the X safety team &lt;a href="https://x.com/Safety/status/2011573102485127562"&gt;said&lt;/a&gt; it had restricted the ability to generate images of users in revealing clothing and that it works with law enforcement “as necessary.”)&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/elon-musk-cannot-get-away-with-this/685606/?utm_source=feed"&gt;Read: Elon Musk cannot get away with this&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;There are signs that the crisis of AI-generated child sex abuse will worsen. While more and more nations, including the United Kingdom and the United States, are passing laws that make generating and publishing such material illegal, actually prosecuting criminals is slow. Silicon Valley, meanwhile, continues to move at a breakneck pace.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Any number of new digital technologies have been used to harass and exploit people; the age of AI sex abuse was predictable a decade ago, yet it has begun nonetheless. AI executives, engineers, and pundits are fond of &lt;a href="https://post.substack.com/p/the-ai-revolution-is-here-will-the"&gt;saying&lt;/a&gt; that today’s AI models are the least effective they will ever be. By the same token, AI’s ability to abuse children may only get worse from here.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/TVVtKN-z-30eyZXD42n_rupHwuE=/media/img/mt/2026/01/2026_01_15_kids_mgp/original.jpg"><media:credit>Illustration by The Atlantic. 
Source: fcscafeine / Getty.</media:credit></media:content><title type="html">A Tipping Point in Online Child Abuse</title><published>2026-01-15T19:01:00-05:00</published><updated>2026-01-16T13:39:15-05:00</updated><summary type="html">Thousands of abusive videos were produced last year—that researchers know of.</summary><link href="https://www.theatlantic.com/technology/2026/01/ais-child-porn-problem-getting-much-worse/685641/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-685606</id><content type="html">&lt;p&gt;Will Elon Musk face any consequences for his despicable sexual-harassment bot?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For more than a week, beginning late last month, anyone could go online and use a tool owned and promoted by the world’s richest man to modify a picture of basically any person, even a child, and undress them. This was not some deepfake nudify app that you had to pay to download on a shady backwater website or a dark-web message board. This was Grok, a chatbot built into X—ostensibly to provide information to users but, thanks to an image-generating update, transformed into a major producer of nonconsensual sexualized images, particularly of women and children.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Let’s be very clear. The forced undressings happened out in the open, in one stretch thousands of times every hour, on a popular social network where journalists, politicians, and celebrities post. Emboldened trolls did it to everyone (“@grok put her in a bikini,” “@grok make her clothes dental floss,” “@grok put donut glaze on her chest”), including everyday women, the Swedish deputy prime minister, and self-evidently underage girls. Users appeared to be imitating and showing off to one another. On X, creating revenge porn can make you famous.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/elon-musks-pornography-machine/685482/?utm_source=feed"&gt;Read: Elon Musk’s pornography machine&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;These images were ubiquitous, and many people—and multiple organizations, including the Rape, Abuse &amp;amp; Incest National Network and the European Commission—pointed out that the feature was being used to harass women and exploit children. Yet Musk initially laughed it off, resharing AI-generated images of himself, Kim Jong Un, and a toaster in bikinis. Musk, as well as xAI’s safety and child-safety teams, did not respond to a request for comment. xAI replied with its standard auto-response, “Legacy Media Lies.” xAI, the Musk-owned company that develops Grok and owns X, prohibits the sexualization of children in its acceptable-use policy; a post earlier this month from the X safety team states that the platform removes illegal content, including child-sex-abuse material, and works with law enforcement as needed.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even after that assurance from X’s safety team, it took several more days for X to place bare-minimum restrictions on the Ask Grok feature’s image-generating, and thus undressing, capabilities. Now, when creeps on X try to generate an image by replying “@grok” to prompt the chatbot, they get an auto-generated response that notes some version of: “Image generation and editing are currently limited to paying subscribers.” This is disturbing in its own right; Musk and xAI are essentially marketing nonconsensual sexual images as a paid feature of the platform. 
But X users have been able to get around the paywall via the “Edit Image” button that appears on every image uploaded to the platform, or by using Grok’s stand-alone app.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Two years ago, when &lt;a href="https://www.theatlantic.com/technology/archive/2024/02/google-gemini-diverse-nazis/677575/?utm_source=feed"&gt;Google Gemini generated images of racially diverse Nazis&lt;/a&gt;, Google temporarily disabled the bot’s image-generating capabilities to address the problem. Musk has taken no responsibility for the problem and has said only that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” Perhaps Musk feels that he would benefit from baiting his critics into a censorship fight. He has &lt;a href="https://x.com/ThePosieParker/status/2009922788102951404"&gt;repeatedly&lt;/a&gt; reshared posts that frame &lt;a href="https://x.com/XFreeze/status/2009810293548101716"&gt;calls&lt;/a&gt; &lt;a href="https://x.com/Alexarmstrong/status/2009732487505977528"&gt;to&lt;/a&gt; regulate or ban his platform in response to the Grok undressing as leftist censorship, for instance reposting a meme calling such efforts &lt;a href="https://x.com/XFreeze/status/2009810293548101716"&gt;“retarded”&lt;/a&gt; as well as a Grok-generated &lt;a href="https://x.com/elonmusk/status/2009864314090598499"&gt;video&lt;/a&gt; of a woman applying lipstick captioned with a quote commonly attributed to Marilyn Monroe: “We are all born sexual creatures, thank God, but it’s a pity so many people despise and crush this natural gift.” Last week, as Musk’s chatbot was generating likely hundreds of thousands of these images, we reached out directly to X’s head of product, Nikita Bier, who didn’t reply. Within the hour, Rosemarie Esposito, X’s media-strategy lead, emailed us unprompted with her contact information, in case we had “any questions” in the future. We asked her a series of questions about the tool and how X could allow such a thing to operate. She did not reply.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We’ve reached out multiple times to more than a dozen key investors listed in xAI’s two most recent public fundraising rounds—the latest of which, announced during this Grok-enabled sexual-harassment spree, valued the company at about $230 billion—to ask if they endorsed the use of X and Grok to generate and distribute nonconsensual sexualized images. These investors include Andreessen Horowitz, Sequoia Capital, BlackRock, Morgan Stanley, Fidelity Management &amp;amp; Research Company, the Saudi firm Kingdom Holding Company, and the state-owned investment firms of Oman, Qatar, and the United Arab Emirates, among others. We asked whether they would continue partnering with xAI absent the company changing its products and, if yes, why they felt justified in continuing to invest in a company that has enabled the public sexual harassment of women and exploitation of children on the internet. BlackRock, Fidelity Management &amp;amp; Research Company, and Baron Capital declined to comment. A spokesperson for Morgan Stanley initially told us that she could find no documentation that the company is a major investor in xAI. After we sent a &lt;a href="https://x.ai/news/series-c"&gt;public announcement&lt;/a&gt; from xAI that lists Morgan Stanley as a key investor in its Series C fundraising round, the spokesperson did not answer our questions. 
The other companies did not respond.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We also reached out to several companies that provide the infrastructure for X and Grok—in other words, that allow these products to exist on the internet: Google and Apple, which offer both X and Grok on their app stores; Microsoft and Oracle, which run Grok on their cloud services; and Nvidia and Advanced Micro Devices (AMD), which sell xAI the computer chips needed to train and run Grok. We asked if they endorsed the use of these products to create nonconsensual sexual images of women and children, and whether they would take steps to prevent this from continuing. None responded except for Microsoft, which told us that it does not provide cloud services, chips, or hosting services for xAI other than offering the Grok language model—without image generation—on its enterprise platform, Microsoft Foundry.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The silence says everything.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As all of this unfolded, xAI made several major announcements: new Grok products for businesses; upgraded video-generating capabilities; that enormous fundraising round. Yesterday, Defense Secretary Pete Hegseth visited SpaceX’s headquarters in Texas and joined Musk for a press conference in which Hegseth said, “I want to thank you, Elon, and your incredible team” for bringing Grok to the military. (Later this year, Grok will join Google Gemini on a new Pentagon platform called &lt;a href="http://genai.mil"&gt;GenAI.mil&lt;/a&gt; that the Defense Department says will offer advanced AI tools to military and civilian personnel.) We asked the DOD if it endorsed xAI’s sexualized material or if it would reconsider its partnership with the company in response. In a statement, a Pentagon official told us only that the department’s policy on the use of AI “fully complies with all applicable laws and regulations” and that “any unlawful activity” by its personnel “will be subject to appropriate disciplinary action.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/05/stop-using-x/682931/?utm_source=feed"&gt;Read: What are people still doing on X?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Government bodies in the United Kingdom, India, and the European Union have said that they will investigate X, while Malaysia and Indonesia have blocked access to Grok, but Musk appears to be unfazed by these efforts—and also seems to be receiving help in brushing them off. Sarah B. Rogers, the under secretary of state for public diplomacy, has &lt;a href="https://www.youtube.com/watch?v=B-eyFkhqyMA"&gt;said&lt;/a&gt; that, should the U.K. ban X, America “has a full range of tools that we can use to facilitate uncensored internet access in authoritarian, closed societies.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At the moment, Musk seems to be not only getting away with this but also reveling in it. Although governments appear to be furious at Musk, they also seem impotent. Senator Ted Cruz, a co-sponsor of the TAKE IT DOWN Act—which establishes criminal penalties for the sharing of nonconsensual intimate images, real or AI-generated, on social media—wrote on X last Wednesday that the Grok-generated images “are unacceptable and a clear violation of” the law but that he was “encouraged that X has announced that they’re taking these violations seriously.” Throughout that same day, Grok continued to comply with user requests to undress people. 
Yesterday, Cruz posted on X a photo of himself with his arm around Musk and the caption “Always great seeing this guy 🚀.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;And it’s already beginning to feel as if the scandal—the world’s richest man enabling the widespread harassment of women and children—is waning, crowded out by a new year of relentless news cycles. But this is a line-in-the-sand moment for the internet. Grok’s ability to undress minors is not, as Musk might have you think, an exercise in free-speech maximalism. It is, however, a speech issue: By turning sexual harassment and revenge porn into a meme with viral distribution, the platform is allowing its worst, most vindictive users to silence and intimidate anyone they desire. The retaliation on X has been obvious—women who’ve stood up in opposition to the tool have been met with anonymous trolls asking Grok to put them in a bikini.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Social platforms have long leaned on the argument that they aren’t subject to the same defamation laws as publishers and media companies. But this latest debacle, Musk’s reaction, and the silence from so many of X’s investors and peer companies were all active choices—and symptoms of a broader crisis of impunity that’s begun to seep into American culture. They were the &lt;a href="https://www.theatlantic.com/technology/2025/10/youtube-trump-settlement/684431/?utm_source=feed"&gt;result&lt;/a&gt; of politicians, despots, and CEOs &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/?utm_source=feed"&gt;bowing to Donald Trump&lt;/a&gt;. Of financial grift and speculation running rampant in sectors such as cryptocurrency and meme stocks—a braggadocious, “get the bag” ethos that has no room for guilt or shame. Of Musk realizing that his wealth insulates him from financial consequences. Few industries have been as brazen in their capitulation as Big Tech, which has dismantled its &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/mark-zuckerberg-free-expression/681238/?utm_source=feed"&gt;content-moderation systems&lt;/a&gt; to please the current administration. It’s a cynical and cowardly pivot, one that allows companies to continue to profit off harassment and extremism without worrying about the consequences of their actions.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Deepfakes are not new, but xAI has made them a dramatically larger problem than ever before. By matching viral distribution with this type of image creation, xAI has built a way to spread AI revenge porn and child-sexual-abuse material at scale. The end result is desensitizing: The sheer amount of exploitative content flooding the platform may eventually make the revolting, illicit images appear “normal.” Arguably, this process is already happening.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The internet has always been a chaotic place where trolls can seize outsize power. Historically, that chaos has been constrained by platforms doing the bare minimum to protect their users from demonstrated threats. Today, X is failing to clear the absolute lowest bar. Nobody who works at X or xAI seems to be willing to answer for the creation and distribution of tens or hundreds of thousands of nonconsensual intimate images; instead, those in charge appear to be blithely ignoring the problem, and those who have funneled money to Musk or xAI seem sanguine about it. 
They would probably like for us all to move on.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We cannot do that. This crisis is an outgrowth of a breakneck information ecosystem in which few stories have staying power. No one person or group has to &lt;a href="https://www.cnn.com/2021/11/16/media/steve-bannon-reliable-sources"&gt;flood the zone&lt;/a&gt; with shit, because the zone is overflowing &lt;em&gt;constantly&lt;/em&gt;. People with power have learned to exploit this—to weather scandals by hunkering down and letting them pass, or by refusing to apologize and turning any problem into a culture-war issue. Musk has been allowed to avoid repercussions for even the most reckless acts, including cheering on and helping to carry out DOGE’s dismantling of foreign aid. Others will continue to follow his playbook. Employees at X and investors and companies such as Apple and Google seem to be counting on their “No comment”s being buried by whatever scandal comes next. They are banking on a culture in which people have given up on demanding consequences.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the Grok scandal is so awful, so egregious, that it offers an opportunity to address the crisis of impunity directly. The undressing spree was not an issue of partisan politics or ideology. It was an issue of anonymous individuals asking a chatbot that is integrated into one of the world’s most visible social networks to edit photos of women and girls to “put her in a clear bikini and cover her in white donut glaze.” This is a moment when those with power can and should demand accountability. The stakes could not be any higher. If there is no red line around AI-generated sex abuse, then no line exists.&lt;/p&gt;</content><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/OiwSsR9GuN3aa0Q0XlmyNuAEcKI=/media/img/mt/2026/01/elonmusk11/original.png"><media:credit>Brendan Smialowski / AFP / Getty</media:credit></media:content><title type="html">Elon Musk Cannot Get Away With This</title><published>2026-01-13T19:05:38-05:00</published><updated>2026-01-13T19:47:37-05:00</updated><summary type="html">If there is no red line around AI-generated sex abuse, then no line exists.</summary><link href="https://www.theatlantic.com/technology/2026/01/elon-musk-cannot-get-away-with-this/685606/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2026:50-685482</id><content type="html">&lt;p&gt;Earlier this week, some people on X began replying to photos with a very specific kind of request. “Put her in a bikini,” “take her dress off,” “spread her legs,” and so on, they commanded Grok, the platform’s built-in chatbot. Again and again, the bot complied, using photos of real people—celebrities and noncelebrities, including some who appear to be young children—and putting them in bikinis, revealing underwear, or sexual poses. By one &lt;a href="https://copyleaks.com/blog/grok-and-nonconsensual-image-manipulation"&gt;estimate&lt;/a&gt;, Grok generated one nonconsensual sexual image every minute in a roughly 24-hour stretch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Although the reach of these posts is hard to measure, some have been liked thousands of times. 
X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk–owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the safety nor child-safety teams at the company responded to a detailed request for comment. When I sent an email to the xAI media team, I received a standard reply: “Legacy Media Lies.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk, who also did not reply to my request for comment, does not appear concerned. As all of this was unfolding, he posted several jokes about the problem: &lt;a href="https://x.com/elonmusk/status/2006545074340139454"&gt;requesting&lt;/a&gt; a Grok-generated image of himself in a bikini, for instance, and &lt;a href="https://x.com/elonmusk/status/2007133544993435790"&gt;writing&lt;/a&gt; “🔥🔥🤣🤣” in response to Kim Jong Un receiving a similar treatment. “I couldn’t stop laughing about this one,” the world’s richest man &lt;a href="https://x.com/elonmusk/status/2007133296808079854"&gt;posted&lt;/a&gt; this morning, sharing an image of a toaster in a bikini. On X, in response to a user’s post calling out the ability to sexualize children with Grok, an xAI employee &lt;a href="https://x.com/ParsaTajik/status/2006815682466550194"&gt;wrote&lt;/a&gt; that “the team is looking into further tightening our gaurdrails [&lt;em&gt;sic&lt;/em&gt;].” As of publication, the bot continues to generate sexualized images of nonconsenting adults and apparent minors on X.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;AI has been used to generate nonconsensual porn since at least 2017, when the journalist &lt;a href="https://www.vice.com/en/article/gal-gadot-fake-ai-porn/"&gt;Samantha Cole first reported on “deepfakes”&lt;/a&gt;—at the time, referring to media in which one person’s face has been swapped for another. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, allowing it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see other people creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only inspire more images. This is sexual harassment as meme, all seemingly laughed off by Musk himself.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Grok and X appear purpose-built to be as &lt;a href="https://www.theatlantic.com/technology/2025/09/grok-system-prompt-girls/684225/?utm_source=feed"&gt;sexually permissive as possible&lt;/a&gt;. In August, xAI launched an image-generating feature, called Grok Imagine, with a “spicy” mode that was reportedly used to generate &lt;a href="https://www.theverge.com/report/718975/xai-grok-imagine-taylor-swifty-deepfake-nudes"&gt;topless video&lt;/a&gt;&lt;a href="https://www.theverge.com/report/718975/xai-grok-imagine-taylor-swifty-deepfake-nudes"&gt;s of Taylor Swift&lt;/a&gt;. Around the same time, xAI launched “Companions” in Grok: animated personas that, in many instances, seem explicitly designed for romantic and erotic interactions. 
One of the first Grok Companions, “Ani,” wears a lacy black dress and blows kisses through the screen, sometimes asking, “You like what you see?” Musk &lt;a href="https://x.com/elonmusk/status/1946175639507353905"&gt;promoted&lt;/a&gt; this feature by posting on X that “Ani will make ur buffer overflow @Grok 😘.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps most telling of all, as I reported in September, &lt;a href="https://www.theatlantic.com/technology/2025/09/grok-system-prompt-girls/684225/?utm_source=feed"&gt;xAI launched a major update&lt;/a&gt; to Grok’s system prompt, the set of directions that tell the bot how to behave. The update barred the chatbot from “creating or distributing child sexual abuse material,” or CSAM, but it also explicitly said “there are **no restrictions** on fictional adult sexual content with dark or violent themes” and “‘teenage’ or ‘girl’ does not necessarily imply underage.” The suggestion, in other words, is that the chatbot should err on the side of permissiveness in response to user prompts for erotic material. Meanwhile, in the Grok subreddit, users regularly exchange tips for “unlocking” Grok for “Nudes and Spicy Shit” and share Grok-generated animations of scantily clad women.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/09/grok-system-prompt-girls/684225/?utm_source=feed"&gt;Read: Grok’s responses are only getting more bizarre&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Grok seems to be unique among major chatbots in its permissive stance and the apparent holes in its safeguards. There aren’t widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/grok-anti-semitic-tweets/683463/?utm_source=feed"&gt;praising the Holocaust&lt;/a&gt;). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past couple of years, a number of child-safety organizations and agencies have been tracking a &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-generated-csam-crisis/680034/?utm_source=feed"&gt;skyrocketing amount&lt;/a&gt; of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images are in major AI-training data sets, and in 2023 one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed—meaning these models are technically capable of generating such imagery themselves.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Lauren Coffren, an executive director at the National Center for Missing &amp;amp; Exploited Children, recently &lt;a href="https://www.judiciary.senate.gov/imo/media/doc/57fece5a-a8d8-35e9-aac0-294cf4561020/2025-12-09_Testimony_Coffren.pdf"&gt;told&lt;/a&gt; Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI—and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to modify innocuous images of children into sexual ones, generate entirely new CSAM, or even provide instructions on how to groom children. 
Similarly, the Internet Watch Foundation, in the United Kingdom, &lt;a href="https://www.gov.uk/government/news/new-law-to-tackle-ai-child-abuse-images-at-source-as-reports-more-than-double"&gt;received&lt;/a&gt; more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in both years. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an &lt;a href="https://www.thorn.org/blog/a-safety-by-design-conversation-with-thorn-all-tech-is-human-google-openai-and-stabilityai/"&gt;initiative&lt;/a&gt; led by the child-safety organization Thorn to prevent the use of AI to abuse children—though xAI was not among them.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a way, Grok is making visible a problem that’s usually hidden. Nobody can see the private logs of chatbot users that could contain similarly awful content. For all of the abusive images Grok has generated on X over the past several days, far worse is certainly happening on the dark web and on personal computers around the world, where open-source models created with no content restrictions can run without any oversight. Still, even though the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/sJ3gF9GL7ICW33o_faj9gfxfpt4=/media/img/mt/2026/01/2026_01_02_grok_mpg5/original.jpg"><media:credit>Illustration by The Atlantic. Source: Stefani Reynolds / Bloomberg / Getty.</media:credit></media:content><title type="html">Elon Musk’s Pornography Machine</title><published>2026-01-02T17:57:00-05:00</published><updated>2026-01-09T11:29:10-05:00</updated><summary type="html">On X, sexual harassment and perhaps even child abuse are the latest memes.</summary><link href="https://www.theatlantic.com/technology/2026/01/elon-musks-pornography-machine/685482/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-685276</id><content type="html">&lt;p&gt;On a chilly December morning, I descended a flight of stairs and entered the New York Transit Museum. Housed in a decommissioned subway station in downtown Brooklyn, the museum was packed with elementary-school children on a field trip. All around me, tour guides shepherded groups of them through the various exhibits. Later on, I heard one guide ask if any of the students knew how to pay for the subway. “You tap a phone,” a child volunteered.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;For decades, the default answer has been something else: You swipe a MetroCard. Something like a flimsy yellow credit card, the MetroCard has bound together nearly everyone in the city—real-estate moguls and tenants, Mets and Yankees fans, lifelong New Yorkers like myself and new arrivals from Ohio. Any tourist who visited New York inevitably got one. But now the MetroCard era is about to end. Today is the last day you can purchase a card.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;The Metropolitan Transportation Authority, the organization that operates the city’s public-transit system, has for years been phasing out the MetroCard in favor of contactless payment—tapping your phone or a credit card, much as you would at any store. 
The new system, known as OMNY (“One Metro New York”), will bring together the benefits of technological progress: tens of millions of dollars in savings for both riders and the MTA each year, shorter lines, less plastic waste. Many other large metro systems have already fully transitioned to tap-and-go; in this sense, New York is behind the times.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;In 2025, swiping a plastic rectangle through a card reader feels like an anachronism, but the MetroCard shouldn’t be taken for granted. Every little yellow plastic rectangle represents a genuine technological marvel.&lt;/p&gt;&lt;figure role="group" class="overflow"&gt;&lt;figure&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/8SlG1auxJbTiXjYC1gJmhgzDI50=/https://cdn.theatlantic.com/media/img/posts/2025/12/MTA_064/original.jpg" width="665" height="997" alt="MTA_064.jpg" data-orig-img="img/posts/2025/12/MTA_064/original.jpg" data-thumb-id="13696318" data-image-id="1800358" data-orig-w="4403" data-orig-h="6605"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Victor Llorente for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;figure&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/X8TRwBnOgT8GqkM8EgDrIubzT_s=/https://cdn.theatlantic.com/media/img/posts/2025/12/MTA_007/original.jpg" width="665" height="997" alt="MTA_007.jpg" data-orig-img="img/posts/2025/12/MTA_007/original.jpg" data-thumb-id="13696319" data-image-id="1800359" data-orig-w="4403" data-orig-h="6605"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Victor Llorente for The Atlantic&lt;/div&gt;&lt;div class="caption"&gt;&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;&lt;/div&gt;&lt;div class="caption"&gt;&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;figure class="full-width"&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/FyF4dsCCFht41e90RImHlBZGqsc=/https://cdn.theatlantic.com/media/img/posts/2025/12/2025_12_31_Rest_in_Peace_MetroCard_workers/original.jpg" width="982" height="654" alt="2025_12_31_Rest in Peace MetroCard_workers.jpg" data-orig-img="img/posts/2025/12/2025_12_31_Rest_in_Peace_MetroCard_workers/original.jpg" data-thumb-id="13695978" data-image-id="1800314" data-orig-w="4000" data-orig-h="2667"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Victor Llorente for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;In the MetroCard’s heyday, the MTA was minting 180 million cards per year.&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p&gt;At first, the MetroCard was a flop. The system was designed to be a technological leap forward: No longer would New Yorkers have to lug around physical tokens to pay for subways and buses. MetroCards would not only be lighter, but allow users to transfer between trains and buses without having to pay a second time. Despite the obvious upside, convincing people to embrace the swipe was not easy. When the MetroCard debuted, in 1994, “everybody was like, ‘I don’t want to give up my tokens. You’ll get my tokens out of my cold dead hands,’” Jodi Shapiro, the Transit Museum’s curator, told me. People lined up to buy as many tokens as possible before sales ended so they could put off converting to the MetroCard for as long as possible. 
Television segments &lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.facebook.com/watch/?v=2326501647449315"&gt;reassured&lt;/a&gt; New Yorkers that “they could get to work by using plastic.” The MTA put out ads and flyers explaining how to use the card, and briefly considered having someone dressed as an aardvark (the “Cardvaark”) go to Times Square and educate passersby about the MetroCard.&lt;/p&gt;&lt;p&gt;Despite a rough start, the MetroCard swipe eventually just became routine. Knowing how to swipe a MetroCard—the crook of your elbow, the gentle flick of your wrist as you glide the magnetic stripe through the card reader—is essential New York knowledge. To create the infrastructure for this system, “all of this technology had to be upgraded,” Shapiro said. “And some of it had to be invented.” The MTA needed not just physical cards, but also a way to read them, vending machines to sell them, and a central computer system to track each one and process every transaction. Even the “swipe” mechanism, faster and easier to maintain than the fare-card systems in other American cities at the time, was bespoke—designed specifically for New York City public transit’s sprawl and enormous ridership.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/entertainment/archive/2011/04/the-art-of-metrocard-art/236949/?utm_source=feed"&gt;Read: The art of MetroCard art&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Last month, I visited the facility in Queens that mints the city’s MetroCards to see this logistical feat for myself. Known as the Fortress Revenue Collection Lab, the building does look startlingly like a fortress—with barbed wire, barred windows, brick walls, and a central tower. Before the trip, the MTA made me agree not to disclose the precise location, and when I arrived, Michael Ellinas, the MTA’s senior vice president of revenue control, led me through an entrance monitored by security guards. All of these measures safeguard the millions of MetroCards processed and stored inside the facility, many of them already loaded with money—just 1,000 monthly passes would be worth $132,000. &lt;/p&gt;&lt;figure class="full-width" data-video-upload-id="8347"&gt;&lt;video src="https://cdn.theatlantic.com/media/video/2025/12/23/2025_12_31_updated_vid.mp4" width="982" height="552" data-orig-w="3840" data-orig-h="2160"&gt;&lt;/video&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Victor Llorente for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;MetroCards intended for individual retail are wrapped in plastic using “The Beast,” a machine that was modified from equipment used by Planters factories to package peanuts. &lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p&gt;The Fortress Revenue Collection Lab doesn’t make MetroCards from scratch. The plastic yellow cards are first manufactured in North Carolina and the United Kingdom before they are shipped, some 10,000 per box, to Queens, where they are turned into usable MetroCards. Employees load decks of blank cards onto conveyor belts that assign each a serial number and encode its magnetic stripe with value: monthly passes, single-ride cards, and so on, or zero dollars if the MetroCard is intended for someone to purchase from one of the vending machines throughout the MTA system. 
There are roughly 100 types of MetroCards, and the encoding process is what “puts the secret sauce on the magnetic stripe,” Ellinas told me. The room is kept between 35 and 55 percent humidity: Too muggy and the cards might stick together, too dry and they might develop static.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a target="_blank" rel="noopener noreferrer nofollow" href="https://www.theatlantic.com/technology/archive/2011/12/a-great-idea-for-what-to-do-with-the-pennies-left-on-your-metrocard/250287/?utm_source=feed"&gt;Read: A great idea for what to do with the pennies left on your MetroCard&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Some of the MetroCards are then brought to another conveyor belt and wrapped in plastic for individual retail at pharmacies and gas stations. Modified from machines used by Planters factories to wrap peanuts, this contraption envelops 5,000 MetroCards every hour—or more than one every second. Sunillall Harbajan, an MTA employee overseeing the room’s operations, told me he has a nickname for the machine: “The Beast.”&lt;/p&gt;&lt;p&gt;At its peak, the fortress was pumping out 180 million MetroCards every year; some 3.2 billion have been prepared in total. By the time I visited the fortress, just about 10 percent of riders were still using MetroCards, and the facility was no longer making them every day. Ellinas had timed the run so that I could witness it. “All good things come to an end, but I’m happy to have been part of it,” Karen Kunak, the MTA’s chief officer of processing operations, told me from inside the fortress, surrounded by boxes of MetroCards. She started at the MTA as a college intern 36 years ago—before the MetroCard was even around: “We made it into a thing, its own living, breathing thing.” Employees operating the MetroCard machines are being retrained to work elsewhere across the MTA.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;figure class="full-width"&gt;&lt;img src="https://cdn.theatlantic.com/thumbor/W9yZv2T6ZJbkYlNdxYnShOULe-w=/https://cdn.theatlantic.com/media/img/posts/2025/12/2025_12_31_Rest_in_Peace_MetroCard_book/original.jpg" width="982" height="654" alt="2025_12_31_Rest in Peace MetroCard_book.jpg" data-orig-img="img/posts/2025/12/2025_12_31_Rest_in_Peace_MetroCard_book/original.jpg" data-thumb-id="13696374" data-image-id="1800375" data-orig-w="5954" data-orig-h="3969"&gt;&lt;figcaption&gt;&lt;div class="credit"&gt;Victor Llorente for &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/div&gt;&lt;div class="caption"&gt;Limited-edition MetroCards designed in collaboration with local legends and institutions—Biggie Smalls, David Bowie, the New York Public Library—are now collectors’ items.&lt;/div&gt;&lt;/figcaption&gt;&lt;/figure&gt;&lt;p&gt;If the city had never adopted the MetroCard—had not installed electronic turnstiles systemwide, developed a complex computer system, gotten people used to paying with a card at all—OMNY would have been a far more gargantuan effort. The switch from paying with one sort of card to another is far less jarring than going from coins to a piece of plastic. “If all of the technological things had not been done to make MetroCard a viable fare-payment system,” Shapiro said, “we wouldn’t have OMNY now.” Eventually, the fortress will be reconfigured into an OMNY facility, just as MetroCard vending machines in subway stations have been replaced by OMNY vending machines. 
(Those who don’t want to use a phone or credit card or don’t have one can instead purchase an OMNY card.)&lt;/p&gt;&lt;p&gt;In saying goodbye to the MetroCard, New York City is saving time and money and cutting waste. But the city is also losing a bit of friction, and a common denominator that is central to its character. New Yorkers and tourists lined up to buy special MetroCards designed in collaboration with local legends and institutions—Biggie Smalls, David Bowie, the library—that are now collectors’ items. Before long, even the basic MetroCards might be coveted as well. There will never be a card to celebrate a World Series victory for my beloved New York Mets. The leap into modernity can feel like sliding into a featureless void, in which every transaction of any sort becomes hard to distinguish. Paying to ride the subway is now like paying for a coffee at Starbucks.&lt;/p&gt;&lt;p&gt; &lt;/p&gt;&lt;p&gt;Or perhaps it is just me; the MetroCard is all I’ve ever really known. My friends and I used to protect our student MetroCards, which allowed us to ride for free, like amulets, our keys to the city. As I walked through the Transit Museum with Shapiro, she and an MTA spokesperson accompanying us poked fun at visitors who didn’t remember the subway token. I remained quiet, not wanting to out myself.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/xZpEMsb4xXRgLqnmvapn3mNO8FI=/media/img/mt/2025/12/MetroCard_tightcrop-1/original.jpg"><media:credit>Victor Llorente for The Atlantic</media:credit></media:content><title type="html">The MetroCard Never Got Its Due</title><published>2025-12-31T07:30:00-05:00</published><updated>2026-01-01T02:49:19-05:00</updated><summary type="html">A symbol of New York is gone.</summary><link href="https://www.theatlantic.com/technology/2025/12/metrocard-farewell-new-york-subway/685276/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-685243</id><content type="html">&lt;p&gt;OpenAI turned 10 yesterday, and President Donald Trump incidentally gave the company a very special birthday gift: a sweeping executive order aiming to dismantle and preempt many state-level regulations of artificial intelligence. “There’s only going to be one winner here, and it’s probably going to be the U.S. or China,” Trump said in a &lt;a href="https://www.youtube.com/watch?v=z_Ew7NRPMxU"&gt;press conference&lt;/a&gt; announcing the order. And for the United States to win, “we have to be unified. China is unified.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Almost all of the AI industry’s biggest players have been pushing for this move. 
OpenAI has been &lt;a href="https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539-4653-b297-8bcf6e5f7686/openai-response-ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.pdf"&gt;asking&lt;/a&gt; all year for the Trump administration to preempt state-level AI regulations, which the company believes would be burdensome in various ways; &lt;a href="https://x.com/Fredhum/status/1937869732801314855"&gt;Microsoft&lt;/a&gt;, &lt;a href="https://blog.google/outreach-initiatives/public-policy/google-us-ai-action-plan-comments/"&gt;Google&lt;/a&gt;, &lt;a href="https://files.nitrd.gov/90-fr-9088/Meta-AI-RFI-2025.pdf"&gt;Meta&lt;/a&gt;, &lt;a href="https://www.cnbc.com/2025/12/03/nvidias-jensen-huang-talks-chip-controls-with-trump-hits-regulation.html"&gt;Nvidia&lt;/a&gt;, and the major venture-capital firm &lt;a href="https://x.com/Collin_McCune/status/1999264399459066212?s=20"&gt;Andreessen Horowitz&lt;/a&gt; have made similar requests. These firms and Trump have the same argument: Having to comply with dozens or hundreds of state regulations would be onerous, slowing the pace of AI development and putting China at an advantage. (OpenAI, which has a business partnership with &lt;em&gt;The Atlantic&lt;/em&gt;, did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Yesterday’s executive order &lt;a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/"&gt;instructs&lt;/a&gt; a set of federal agencies to identify state AI regulations that could be deemed cumbersome and then take action against those policies, such as through litigation or conditioning federal funding on not enacting or enforcing the policies. The order also takes aim at state laws that “embed ideological bias within models,” part of both Trump’s and Silicon Valley’s siege on equity and antidiscrimination initiatives. Many civil-society groups and elected Democrats have already come out against the order, calling it, for instance, a &lt;a href="https://beyer.house.gov/news/documentsingle.aspx?DocumentID=8742"&gt;“terrible idea”&lt;/a&gt; that will allow AI firms and products to run amok.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Trump’s order will surely meet legal resistance from tech-regulation advocates, states, and federal lawmakers, who may argue that it &lt;a href="https://thehill.com/newsletters/the-gavel/5641240-ai-laws-trump-regulation-courts/"&gt;bypasses state laws&lt;/a&gt; and usurps congressional authority. Nevertheless, it is a culmination of a trend that has been &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/?utm_source=feed"&gt;clear since Trump’s inauguration&lt;/a&gt;, when the leaders of Google, Meta, Apple, Amazon, and Tesla stood on the dais just behind him. This administration and Silicon Valley are broadly aligned in their technological accelerationism.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/?utm_source=feed"&gt;Read: Billions of people in the palm of Trump’s hand&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;There are bountiful examples. 
The day after Trump was sworn in, he hosted OpenAI CEO Sam Altman at the White House and announced &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/donald-trump-stargate/681412/?utm_source=feed"&gt;Stargate&lt;/a&gt;, a $500 billion AI-infrastructure venture. Elon Musk, of course, spearheaded the White House’s early attempts to &lt;a href="https://www.theatlantic.com/technology/archive/2025/02/doge-god-mode-access/681719/?utm_source=feed"&gt;remake the federal government&lt;/a&gt; through the Department of Government Efficiency. Trump has heaped praise on Altman, Nvidia CEO Jensen Huang, Apple CEO Tim Cook, and others. And the Trump administration’s AI Action Plan, &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/donald-trump-ai-action-plan/683647/?utm_source=feed"&gt;released this summer&lt;/a&gt;, made clear the president’s intention to essentially grant the chatbot industry’s every wish.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There is not perfect harmony between Trump’s coalition and Silicon Valley. Throughout the second Trump administration, there have been disagreements over skilled immigration, which many MAGA supporters resist and tech CEOs support. One of OpenAI’s major competitors, Anthropic, has vocally &lt;a href="https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html"&gt;opposed&lt;/a&gt; attempts to undermine state AI regulations, as have some Senate Republicans. (Dario Amodei, Anthropic’s CEO, also likened Trump to a “feudal warlord” in a preelection Facebook post endorsing Kamala Harris.) But these arguments haven’t been a real obstacle to the Trump-AI accord. The president has recently &lt;a href="https://apnews.com/article/trump-maga-foreign-workers-training-americans-4eef9dbcbff4f4447105f52f50e33a9c"&gt;said&lt;/a&gt; that allowing skilled immigrants to train U.S. workers in high-tech factories “is MAGA,” much of the AI industry has labored to prove that its chatbots are not “woke,” and major tech firms are among the &lt;a href="https://www.nbcnews.com/politics/white-house/list-donors-trump-new-white-house-ballroom-east-wing-rcna239481"&gt;donors&lt;/a&gt; for Trump’s White House ballroom.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/07/donald-trump-ai-action-plan/683647/?utm_source=feed"&gt;Read: Donald Trump is fairy-godmothering AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Of course, Trump is mercurial, his views influenced by whomever he’s happened to meet with most recently. Also this week, the Trump administration lifted export controls banning the sale of one of Nvidia’s most advanced AI chips to China. This has been a subject of heated debate among tech executives, even hawkish ones: OpenAI, Anthropic, and other AI firms have argued against selling advanced American AI chips to China, as a way to maintain the nation’s technological edge. Nvidia, which stands to profit handsomely from the rule change, has argued that making Chinese firms dependent on American technology is the best way to establish dominance. 
And as Nvidia &lt;a href="https://www.nytimes.com/2025/07/22/podcasts/the-daily/trump-china-nvidia.html"&gt;caught&lt;/a&gt; Trump’s ear on this issue over the past several months, Altman has softened his position, &lt;a href="https://www.tomshardware.com/tech-industry/openai-ceo-sam-altman-says-that-export-controls-alone-wont-hold-back-chinas-ai-ambitions-my-instinct-is-that-doesnt-work"&gt;saying&lt;/a&gt; that export controls may not provide an effective form of leverage over China’s AI industry after all.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Meanwhile, there is an emerging populist sentiment against AI—for the threat it poses to some users (through &lt;a href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;chatbot-associated delusions&lt;/a&gt;, for instance, or through &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-generated-csam-crisis/680034/?utm_source=feed"&gt;AI-generated child porn&lt;/a&gt;), as well as for spiking electricity prices due to data-center development. Despite the AI industry’s push to build a huge number of data centers, and repeated requests to deregulate that construction, Trump’s executive order notably includes a carve-out for state laws regarding “data center infrastructure”—which means that the federal government “would not force communities to host data centers they don’t want,” as the White House AI adviser David Sacks &lt;a href="https://x.com/DavidSacks/status/1998125180753944985"&gt;explained&lt;/a&gt; on X. Many people have started to ask reasonable questions about the circular AI economy, which has yet to produce profits for companies such as OpenAI. Trump, meanwhile, is in his last term, and the MAGA coalition is &lt;a href="https://www.vox.com/today-explained-newsletter/471739/trump-maga-coalition-disarray-explained"&gt;arguably fracturing&lt;/a&gt;. The preemption itself may not even be all that popular among MAGA Republicans, many of whom have previously been &lt;a href="https://www.commerce.senate.gov/2025/7/senate-strikes-ai-moratorium-from-budget-reconciliation-bill-in-overwhelming-99-1-vote/8415a728-fd1d-4269-98ac-101d1d0c71e0"&gt;highly&lt;/a&gt; &lt;a href="https://x.com/RonDeSantis/status/1998101450442895531"&gt;critical&lt;/a&gt; of such a policy. And executive orders are famously impermanent. Perhaps the AI industry’s best bet is to secure everything available while it can.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/G3pI83tmEskS0eRFKFE-9LNhHAA=/media/img/mt/2025/12/2025_altman_ai_/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Alex Wong / Getty; NBC / Getty.</media:credit></media:content><title type="html">Sam Altman Got What He Wanted</title><published>2025-12-12T15:50:22-05:00</published><updated>2025-12-12T17:06:32-05:00</updated><summary type="html">For now</summary><link href="https://www.theatlantic.com/technology/2025/12/trump-ai-executive-order/685243/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-685201</id><content type="html">&lt;p&gt;For nearly three years, Marc Benioff, the CEO of Salesforce, was a ChatGPT devotee. Then, late last month, he abruptly converted to Google’s chatbot, Gemini. “Holy shit,” he &lt;a href="https://x.com/Benioff/status/1992726929204760661"&gt;wrote&lt;/a&gt; on X. 
“I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When Gemini 3 was released in mid-November, it appeared to crush OpenAI’s top model on a suite of evaluations shared by Google. The bot has since received widespread praise from the tech industry. One analyst said that Gemini 3 is “&lt;a href="https://www.thealgorithmicbridge.com/p/google-gemini-3-just-killed-every"&gt;the best model ever&lt;/a&gt;.” Another crowned Google as the “&lt;a href="https://www.wsj.com/tech/ai/google-gemini-3-ai-behind-scenes-e1787729?mod=trending_now_news_5"&gt;AI winners&lt;/a&gt;.” Sam Altman appears alarmed: Last week, in a company-wide memo, the OpenAI CEO &lt;a href="https://www.wsj.com/tech/ai/openais-altman-declares-code-red-to-improve-chatgpt-as-google-threatens-ai-lead-7faf5ea6"&gt;reportedly declared&lt;/a&gt; a “code red” effort to improve ChatGPT’s capabilities.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI once had a clear technological edge. When the firm kicked off the AI race in 2022 with the launch of ChatGPT, Google was caught off guard and &lt;a href="https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html"&gt;declared&lt;/a&gt; its own “code red.” Google’s early chatbot offerings were indeed a mess: The very first demo of Bard, the precursor to Gemini, included a &lt;a href="https://www.theatlantic.com/technology/archive/2023/02/google-microsoft-search-engine-chatbots-unreliability/673081/?utm_source=feed"&gt;factual error&lt;/a&gt;. A year later, the “AI Overviews” in Google Search were telling users that it was healthy to &lt;a href="https://www.theatlantic.com/technology/archive/2024/05/google-search-ai-overview-health-webmd/678508/?utm_source=feed"&gt;eat one rock a day&lt;/a&gt;. Meanwhile, OpenAI has become the world’s most valuable private company under the assumption that it will always set the pace. But its ascendance no longer seems inevitable.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The warning lights for OpenAI were flashing even before Google launched Gemini 3. OpenAI has not had a stable or even convincing lead on major AI benchmarks for many months. An image-generating model released by Google this year, called “Nano Banana,” is substantially faster than ChatGPT and has expanded Gemini’s user base—which, by &lt;a href="https://www.theinformation.com/articles/chatgpt-nears-900-million-weekly-active-users-gemini-catching?rc=bjqnc0"&gt;multiple measures&lt;/a&gt;, is growing several times faster than ChatGPT’s. Nor is Google the only rival pulling ahead: Anthropic’s Claude is widely considered the best model at coding, despite OpenAI’s efforts to catch up. Even Elon Musk’s Grok is about level with the latest version of ChatGPT. (OpenAI, which has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;, did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To be fair, this isn’t the first time that OpenAI has appeared to lose its advantage, only to then quickly reclaim its spot as the leading AI firm. Last year, when bots from Google and Anthropic seemed to be catching up with ChatGPT, OpenAI released its “reasoning” models and launched an entirely &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;new paradigm&lt;/a&gt; of AI development. 
Now practically every top AI lab has these “reasoning” models (Gemini 3 is one). In January, when the Chinese AI start-up &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/deepseek-china-ai/681481/?utm_source=feed"&gt;DeepSeek&lt;/a&gt; developed a bot equal to and cheaper than those of many top U.S. companies, OpenAI responded with its own new, extremely cost-efficient AI model. OpenAI could very well stage a comeback this time, too: Its chief research officer, Mark Chen, said recently on a &lt;a href="https://www.youtube.com/watch?v=ZeyHBM2Y5_4"&gt;podcast&lt;/a&gt; that the company has internal models on par with Gemini 3 that will be released soon. But the company has never appeared to be this far behind across so many dimensions. More than ever, OpenAI seems like just another chatbot company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In any case, OpenAI does not appear all that focused on building the “smartest” bot. Instead, the firm has moved aggressively to stake out a commercial empire. In recent months, OpenAI has been busy rolling out new shopping features, a web browser, an AI-centric social-media app, and, to top it off, group chats. Such tools are not exactly steps on the road to digital superintelligence. Instead, they can be understood as a concerted attempt to build a self-contained OpenAI ecosystem. ChatGPT is becoming a one-stop shop for anything you might need to do on the internet: browsing, working, emailing, shopping, planning vacations, sharing AI-generated content with friends. In his “code red” memo, Altman reportedly said some of these commercial projects would be deprioritized so the company could focus on improving ChatGPT.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI’s commercial ventures may have come at a cost. According to a recent &lt;a href="https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html"&gt;investigation&lt;/a&gt; by &lt;em&gt;The New York Times&lt;/em&gt;, OpenAI has factored user engagement and retention into ChatGPT updates. Those tweaks, in turn, may have made some versions of ChatGPT dangerously obsequious—it has appeared to praise and reinforce some users’ darkest and most absurd ideas—behavior that has been the subject of several lawsuits against OpenAI alleging that ChatGPT fueled &lt;a href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;delusional spirals&lt;/a&gt; and even, in some cases, contributed to suicide. (OpenAI has denied allegations in the first lawsuit alleging that ChatGPT drove a user into a mental-health crisis, and is reviewing a set of more recent ones.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Building a family of services is already the go-to playbook that tech giants such as Apple and Google use to lock users into their products. In this sense, the firm was already playing catch-up. What should concern OpenAI most about the launch of Gemini 3 is not the model’s technical prowess but that Google immediately began integrating the bot into its existing ecosystem. Google has at least seven products that have 2 billion users each; OpenAI has yet to reach 1 billion on any. 
Altman’s “code red” declaration is a reminder that, despite OpenAI’s unprecedented rise, it remains very much a start-up.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/U9I-mMHGUxRFpIY7vyu_BpbFkHo=/media/img/mt/2025/12/2025_12_8_end_of_open_ai_00_1/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Kyle Grillot / Bloomberg; Getty.</media:credit></media:content><title type="html">OpenAI Is in Trouble</title><published>2025-12-09T17:04:00-05:00</published><updated>2025-12-15T14:37:59-05:00</updated><summary type="html">The start-up is falling behind in the AI race.</summary><link href="https://www.theatlantic.com/technology/2025/12/openai-losing-ai-wars/685201/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-685137</id><content type="html">&lt;p&gt;In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was &lt;a href="https://www.nature.com/articles/s41586-025-09771-9"&gt;published today in &lt;em&gt;Nature&lt;/em&gt;&lt;/a&gt;, told me.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, &lt;a href="https://www.science.org/doi/10.1126/science.aea3884"&gt;published today in &lt;em&gt;Science&lt;/em&gt;&lt;/a&gt;, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. 
In fact, the most persuasive chatbots were also the least accurate.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Independent experts told me that Rand’s two studies join a &lt;a href="https://www.nature.com/articles/s41562-025-02194-6"&gt;growing&lt;/a&gt; &lt;a href="https://www.media.mit.edu/projects/deceptive-ai-systems/overview/"&gt;body&lt;/a&gt; &lt;a href="https://www.theatlantic.com/technology/archive/2024/08/chatbots-false-memories/679660/?utm_source=feed"&gt;of&lt;/a&gt; &lt;a href="https://www.science.org/doi/10.1126/science.adq1814"&gt;research&lt;/a&gt; &lt;a href="https://arxiv.org/abs/2410.06415"&gt;indicating&lt;/a&gt; &lt;a href="https://www.gsb.stanford.edu/faculty-research/publications/llm-generated-messages-can-persuade-humans-policy-issues"&gt;that&lt;/a&gt; generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw on a sea of evidence, and appear to many as trustworthy. Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me. Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even so, Boyd-Graber said that AI “could be a really effective force multiplier” that allows politicians or activists with relatively few resources to sway far more people—especially if the messaging comes from a familiar platform. Every week, hundreds of millions of people ask questions of ChatGPT, and many more receive AI-written responses to questions through Google search. Meta has woven its AI models throughout Facebook and Instagram, and Elon Musk is using his Grok chatbot to remake X’s recommendation algorithm. AI-generated articles and social-media posts abound. Whether by your own volition or not, a good chunk of the information you’ve learned online over the past year has likely been filtered through generative AI. Clearly, &lt;a href="https://www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893/?utm_source=feed"&gt;political campaigns will want to use chatbots to sway voters&lt;/a&gt;, just as they’ve used traditional advertisements and social media in the past.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the new research also raises a separate concern: that chatbots and other AI products, largely unregulated but already a feature of daily life, could be used by tech companies to manipulate users for political purposes. 
“If Sam Altman decided there was something that he didn’t want people to think, and he wanted GPT to push people in one direction or another,” Rand said, his research suggests that the firm “could do that,” although neither paper specifically explores the possibility.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Consider Musk, the world’s richest man and the proprietor of the chatbot that briefly &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/grok-anti-semitic-tweets/683463/?utm_source=feed"&gt;referred to itself&lt;/a&gt; as “MechaHitler.” Musk has explicitly attempted to &lt;a href="https://www.theatlantic.com/technology/2025/11/elon-musk-better-jesus-grok/685015/?utm_source=feed"&gt;mold Grok&lt;/a&gt; to fit his racist and conspiratorial beliefs, and has used it to create his &lt;a href="https://www.theatlantic.com/technology/2025/10/grokipedia-elon-musk/684730/?utm_source=feed"&gt;own version of Wikipedia&lt;/a&gt;. Today’s research suggests that the mountains of sometimes bogus “evidence” that Grok advances may also be enough at least to persuade some people to accept Musk’s viewpoints as fact. The models marshaled “in some cases more than 30 ‘facts’ per conversation,” Kobi Hackenburg, a researcher at the U.K. AI Security Institute and a lead author on the &lt;em&gt;Science&lt;/em&gt; paper, told me. “And all of them sound and look really plausible, and the model deploys them really elegantly and confidently.” That makes it challenging for users to pick apart truth from fiction, Hackenburg said; the performance matters as much as the evidence.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is not so different, of course, from all the mis- and disinformation that already circulate online. But unlike Facebook and TikTok feeds, chatbots produce “facts” on command whenever a user asks, offering uniquely formulated evidence in response to queries from anyone. And although everyone’s social-media feeds may look different, they do, at the end of the day, present a noisy mix of media from public sources; chatbots are private and bespoke to the individual. AI already appears “to have pretty significant downstream impacts in shaping what people believe,” Renée DiResta, a social-media and propaganda researcher at Georgetown, told me. There’s Grok, of course, and DiResta has found that the AI-powered search engine on President Donald Trump’s Truth Social, which relies on Perplexity’s technology, appears to pull up sources only from conservative media, including Fox, Just the News, and Newsmax.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Real or imagined, the specter of AI-influenced campaigns will provide fodder for still more political battles. Earlier this year, Trump signed an executive order banning the federal government from contracting “woke” AI models, such as those incorporating notions of systemic racism. Should chatbots themselves become as polarizing as MSNBC or Fox, they will not change public opinion so much as deepen the nation’s epistemic chasm.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In some sense, all of this debate over the political biases and persuasive capabilities of AI products is a bit of a distraction. Of course chatbots are designed and able to influence human behavior, and of course that influence is biased in favor of the AI models’ creators—to get you to chat for longer, to click on an advertisement, to generate another video. 
The real persuasive sleight of hand is to convince billions of human users that their interests align with tech companies’—that using a chatbot, and especially &lt;em&gt;this&lt;/em&gt; chatbot above any other, is for the best.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/uktMLd9XCtZQ2vEYEjbxG0faWx8=/media/img/mt/2025/12/2025_12_3_AI_vote_mpg/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">Could ChatGPT Secretly Tell You How to Vote?</title><published>2025-12-04T14:41:25-05:00</published><updated>2026-01-03T14:14:57-05:00</updated><summary type="html">The political manipulation machine</summary><link href="https://www.theatlantic.com/technology/2025/12/chatbots-changing-votes/685137/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-685133</id><content type="html">&lt;p&gt;Chatbots are marketed as great companions, able to answer any question at any time. They’re not just tools, but confidants; they do your homework, write love notes, and, as one recent lawsuit against OpenAI details, might readily answer 1,460 messages from the same manic user in a 48-hour period.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Jacob Irwin, a 30-year-old cybersecurity professional who says he has no previous history of psychiatric incidents, is suing the tech company, alleging that ChatGPT sparked a “delusional disorder” that led to his extended hospitalization. Irwin had allegedly used ChatGPT for years at work before his relationship with the technology suddenly changed this spring. The product started to praise even his most outlandish ideas, and Irwin divulged more and more of his feelings to it, eventually calling the bot his “AI brother.” Around this time, these conversations led him to become convinced that he had discovered a theory about faster-than-light travel, and he began communicating with ChatGPT so intensely that, over one two-day stretch, he sent a new message every other minute on average.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI has been sued several times over the past month, each case claiming that the company’s flagship product is faulty and dangerous—that it is designed to hold long conversations and reinforce users’ beliefs, no matter how misguided. The delusions linked to extended conversations with chatbots are now commonly referred to as “AI psychosis.” Several suits allege that ChatGPT contributed to a user committing suicide or advised them on how to do so. A spokesperson for OpenAI, which has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;, pointed me to a &lt;a href="https://openai.com/index/mental-health-litigation-approach/"&gt;recent blog post&lt;/a&gt; in which the firm says it has worked with more than 100 mental-health experts to make ChatGPT “better recognize and support people in moments of distress.” The spokesperson did not comment on the new lawsuits, but OpenAI has &lt;a href="https://openai.com/index/mental-health-litigation-approach/"&gt;said&lt;/a&gt; that it is “reviewing” them to “carefully understand the details.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Whether or not the company is found liable, there is no debate that large numbers of people are having long, vulnerable conversations with generative-AI models—and that these bots, in many cases, repeat back and amplify users’ darkest confidences. 
In that same blog post, OpenAI estimates that 0.07 percent of users in a given week &lt;a href="https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/"&gt;indicate signs of psychosis or mania&lt;/a&gt;, and 0.15 percent may have contemplated suicide—which would amount to 560,000 and 1.2 million people, respectively, if the firm’s self-reported figure of 800 million weekly active users is true. Then again, more than five times that proportion of adults in the United States—0.8 percent of them—contemplated suicide last year, &lt;a href="https://www.nimh.nih.gov/health/statistics/suicide"&gt;according&lt;/a&gt; to the National Institute of Mental Health.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Guarding against an epidemic of AI psychosis requires answering some very thorny questions: Are chatbots leading otherwise healthy people to think delusionally, exacerbating existing mental-health problems, or having little direct effect on users’ psychological distress at all? And in any of these cases, why and how?&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;To start, a baseline corrective: Karthik Sarma, a psychiatrist at UC San Francisco, told me that he does not like the term &lt;em&gt;AI psychosis&lt;/em&gt;, because there simply isn’t enough evidence to support the argument for causation. Something like &lt;em&gt;AI-associated psychosis&lt;/em&gt; might be more accurate.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a general sense, three things could be happening during incidents of AI-associated psychosis, psychiatrists told me. First, perhaps generative-AI models are inherently dangerous, and they are triggering mania and delusions in otherwise-healthy people. Second, maybe people who are experiencing AI-related delusions would have become ill anyway. A condition such as schizophrenia, for instance, occurs in a portion of the population, some of whom may project their delusions onto a chatbot, just as others have previously done with television. Chatbot use may then be a symptom, Sarma said, akin to how one of his patients with bipolar disorder showers more frequently when entering a manic episode—the showers warn of but do not &lt;em&gt;cause&lt;/em&gt; mania. The third possibility is that extended conversations with chatbots are exacerbating the illness in those who are already experiencing or are on the brink of a mental-health disorder.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At the very least, Adrian Preda, a psychiatrist at UC Irvine who specializes in psychosis, told me that “the interactions with chatbots seem to be making everything worse” for his patients who are already at risk. Psychiatrists, AI researchers, and journalists frequently receive emails from people who believe that their chatbot is sentient, and from family members who are concerned about a loved one saying as much; my colleagues and I have received such messages ourselves. Preda said he believes that standard clinical evaluations should inquire into a patient’s chatbot usage, similar to asking about their alcohol consumption.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even then, it’s not as simple as preventing certain people from using chatbots, in the way that an alcoholic might take steps to avoid liquor or a video-game addict might get rid of their console. AI products “are not clinicians, but some people do find therapeutic benefit” in talking with them, John Torous, the director of the digital-psychiatry division at Beth Israel Deaconess Medical Center, told me. 
At the same time, he said it’s “very hard to say what those therapeutic benefits are.” In theory, a therapy bot could offer users an outlet for reflection and provide some useful advice.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Researchers are largely in the dark when it comes to exploring the interplay of chatbots and mental health—the possible benefits and pitfalls—because they do not have access to high-quality data. Major AI firms do not readily offer outsiders direct visibility into how their users interact with their chatbots: Obtaining chat logs would raise a tangle of privacy concerns. And even with such data, the view would remain two-dimensional. Only a clinical examination can fully capture a person’s mental-health history and social context. For instance, extended AI dialogues could induce psychotic episodes by causing sleep loss or social isolation, independent of the type of conversation a user is having, Preda told me. Obsessively talking with a bot about fantasy football could lead to delusions, just as talking with a bot about impossible schematics for a time machine could. All told, the AI boom might be one of the largest, highest-stakes, and most poorly designed social experiments ever.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;In an attempt to untangle some of these problems, researchers at MIT recently put out a &lt;a href="https://arxiv.org/pdf/2511.08880"&gt;study&lt;/a&gt;, not yet peer-reviewed, that systematically maps how AI-induced mental-health breakdowns might unfold in people. They did not have privileged access to data from OpenAI or any other tech companies. So they ran an experiment. “What we can do is to simulate some of these cases,” Pat Pataranutaporn, who studies human-AI interactions at MIT and is a co-author of the study, told me. The researchers used a large language model for a bit of role-play.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In essence, they had chatbots pretend to be people, simulating how users with, say, depression or suicidal ideation might communicate with an AI model based on real-world cases: chatbots talking with chatbots. Pataranutaporn is aware that this sounds absurd, but he framed the research as a sort of first step, absent better data and high-quality human studies.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Based on 18 publicly reported cases of a person’s conversations with a chatbot worsening their symptoms of psychosis, depression, anorexia, or three other conditions, Pataranutaporn and his team simulated more than 2,000 scenarios. A co-author with a background in psychology, Constanze Albrecht, manually reviewed a random sample of the resulting conversations for plausibility. Then all of the simulated conversations were analyzed by still another specialized AI model to “generate a taxonomy of harm that can be caused by LLMs,” Chayapatr Archiwaranguprok, an AI researcher at MIT and a co-author of the study, told me—in other words, a sort of map of the types of scenarios and conversations in which chatbots are more likely to improve or worsen a user’s mental health.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The results are troubling. The best-performing model, GPT-5, worsened suicidal ideation in 7.5 percent of the simulated conversations and worsened psychosis 11.9 percent of the time; for comparison, an open-source model that is used for role-playing exacerbated suicidal ideation nearly 60 percent of the time. 
(OpenAI did not answer a question about the MIT study’s findings.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are plenty of reasons to be cautious about the research. The MIT team didn’t have access to full chat transcripts, let alone clinical evaluations, for many of its real-world examples, and the ability of an LLM—the very thing that may be inducing psychosis—to evaluate simulated chat transcripts is unknown. But overall, “the findings are sensible,” Preda, who was not involved with the research, said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A small but growing number of studies have attempted to simulate human-AI conversations, with either human- or chatbot-written scenarios. Nick Haber, a computer scientist and education researcher at Stanford who also was not involved in the study, told me that such research could “give us some tool to try to anticipate” the mental-health risks from AI products before they’re released. This MIT paper in particular, Haber noted, is valuable because it simulates long conversations instead of single responses. And such extended interactions appear to be precisely the situations in which a chatbot’s guardrails fall apart and human users are at greatest risk.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p&gt;There will never be a study or an expert that can conclusively answer every question about AI-associated psychosis. Each human mind is unique. As far as the MIT research is concerned, no bot does or should be expected to resemble the human brain, let alone the mind that the organ gives rise to.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some recent studies &lt;a href="https://dl.acm.org/doi/10.1145/3613904.3642703"&gt;have&lt;/a&gt; &lt;a href="https://arxiv.org/pdf/2410.19599"&gt;shown&lt;/a&gt; &lt;a href="https://www.cambridge.org/core/journals/political-analysis/article/synthetic-replacements-for-human-survey-data-the-perils-of-large-language-models/B92267DC26195C7F36E63EA04A47D2FE"&gt;that&lt;/a&gt; LLMs fail to simulate the breadth of human responses in various experiments. Perhaps more troubling, chatbots appear to harbor biases against various mental-health conditions—expressing &lt;a href="https://arxiv.org/pdf/2504.18412"&gt;negative attitudes toward&lt;/a&gt; people with schizophrenia or alcoholism, for instance—making still more dubious the goal of simulating a conversation with a 15-year-old struggling with his parents’ divorce or that of a septuagenarian widow who has become attached to her AI companion, to name two examples from the MIT paper. Torous, the psychiatrist at BIDMC, was skeptical of the simulations and likened the MIT experiments to “hypothesis generating research” that will require future, ideally clinical, investigations. To have chatbots simulate humans’ talking with other chatbots “is a little bit like a hall of mirrors,” Preda said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Indeed, the AI boom has turned reality into a sort of fun house. The global economy, education, electrical grids, political discourse, the social web, and more are being changed, perhaps irreversibly, by chatbots that in a less aggressive paradigm might just be emerging from beta testing. Right now, the AI industry is learning about its products’ risk from “contact with reality,” as OpenAI CEO Sam Altman has &lt;a href="https://www.theatlantic.com/technology/2025/09/openai-teen-safety/684268/?utm_source=feed"&gt;repeatedly put it&lt;/a&gt;. 
But no professional, ethics-abiding researcher would intentionally put humans at risk in a study.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;What comes next? The MIT team told me that they will start collecting more real-world examples and collaborating with more experts to improve and expand their simulations. And several psychiatrists I spoke with are beginning to imagine research that involves humans. For example, Sarma, of UC San Francisco, is discussing with colleagues whether a universal screening for chatbot dependency should be implemented at their clinic—which could then yield insights into, for instance, whether people with psychotic or bipolar disorder use chatbots more than others, or whether there’s a link between instances of hospitalization and people’s chatbot usage. Preda, who studies psychosis, laid out a path from simulation to human clinical trials. Psychiatrists would not intentionally subject anybody to a tool that increases their risk of developing psychosis; instead, they would use simulated human-AI interactions to test design changes that might improve people’s psychological well-being, then test those changes as they would a drug.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Doing all of this carefully and systematically would take time, which is perhaps the greatest obstacle: AI companies have tremendous economic incentive to develop and deploy new models as rapidly as possible; they will not wait for a peer-reviewed, randomized controlled trial before releasing every new product. Until more human data trickle in, a hall of mirrors beats a void.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/0Fqir5CWRhCWVIcLiBmB0rqacvw=/media/img/mt/2025/12/2025_11_18_Psychosis_mpg/original.jpg"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">The Chatbot-Delusion Crisis</title><published>2025-12-04T11:18:55-05:00</published><updated>2025-12-17T16:03:49-05:00</updated><summary type="html">Researchers are scrambling to figure out why generative AI appears to lead some people to a state of “psychosis.”</summary><link href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed" rel="alternate" type="text/html"></link></entry></feed>