<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/static/theatlantic/syndication/feeds/atom-to-html.b8b4bd3b19af.xsl" ?><feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/"><title>Damon Beres | The Atlantic</title><link href="https://www.theatlantic.com/author/damon-beres/" rel="alternate"></link><link href="https://www.theatlantic.com/feed/author/damon-beres/" rel="self"></link><id>https://www.theatlantic.com/author/damon-beres/</id><updated>2026-01-08T11:08:36-05:00</updated><rights>Copyright 2026 by The Atlantic Monthly Group. All Rights Reserved.</rights><entry><id>tag:theatlantic.com,2026:50-685506</id><content type="html">&lt;p&gt;Hours before President Donald Trump announced Nicolás Maduro’s capture, on Saturday morning, people had questions for Grok, Elon Musk’s chatbot. Footage was circulating on X of explosions in Venezuela, and some users assumed the United States was responsible: “Hey @grok why is Trump sending US airstrikes to bomb Venezuela. Do you think they deserve it or not ?” one person asked. “@grok what is the reason why America is bombing Venezuela,” another asked.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is to be expected. Today, many people treat chatbots as a source of information. Millions in the United States alone use them this way, and the number is &lt;a href="https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/"&gt;growing&lt;/a&gt;. 
This means that tech companies such as X, Google, Anthropic, Meta, and OpenAI now play a central role not just in delivering information to people—as some of them have for decades, through social-media platforms and search engines—but in actively shaping &lt;em&gt;what &lt;/em&gt;that information is: which facts are included and which are not.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Journalists and other sources may be cited by the bots, but the people who control these AI products, such as Musk, now have a greater ability to manipulate how events are reported. This is a deeply troubling development—one that threatens to leave the public less informed, with fewer checks on those in power.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are already signs that some amount of influence is occurring. For starters, there have been a number of &lt;a href="https://www.theatlantic.com/technology/2025/09/grok-system-prompt-girls/684225/?utm_source=feed"&gt;egregious incidents&lt;/a&gt; in which Grok has spread false details about a purported “white genocide” and aggressively &lt;a href="https://www.theatlantic.com/technology/2025/11/elon-musk-better-jesus-grok/685015/?utm_source=feed"&gt;posted&lt;/a&gt; in support of Musk himself. At one point, Google’s Gemini was directed to prioritize diversity in its responses, &lt;a href="https://www.theatlantic.com/technology/archive/2024/02/google-gemini-diverse-nazis/677575/?utm_source=feed"&gt;resulting in AI-generated images of racially diverse Nazis&lt;/a&gt;. Chatbots reflect their programming and training data, not only reality.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The examples do not have to be dramatic to be concerning. Take those two Grok queries about Venezuela. Musk has &lt;a href="https://x.com/elonmusk/status/1944249535070712101"&gt;insisted&lt;/a&gt; that Grok should be “sensible and neutral politically.” The bot’s responses to the Venezuela queries indeed carried an outward appearance of political balance. 
In its &lt;a href="https://x.com/grok/status/2007358338959352263"&gt;answer&lt;/a&gt; to the first person, Grok included vague references to outlets such as CBS and Al Jazeera, as well as perspectives from both “supporters” and “critics” of the Trump administration; in its answer to the second question, it &lt;a href="https://x.com/grok/status/2007365139813601331"&gt;referenced&lt;/a&gt; both U.S. and Venezuelan officials.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Later, after Trump announced that a military operation had been conducted and that Maduro had been captured—after the president held a press conference in which he asserted that the &lt;a href="https://www.theatlantic.com/national-security/2026/01/trump-nicolas-maduro-venezuela/685493/?utm_source=feed"&gt;U.S. would “run” Venezuela&lt;/a&gt; and take over its oil production—Grok offered a similarly &lt;a href="https://x.com/grok/status/2007531011131793892?s=46"&gt;anodyne view&lt;/a&gt; of the situation in response to a user query. “Recent reports indicate Trump’s administration describes the involvement as temporary support for stability and oil production during a transition, per statements from Reuters and the White House,” it said in part. “Critics argue it’s overreach.” (It is not clear which “statement” from Reuters Grok may be referencing; while we were able to find relevant coverage, no link is given.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;These answers may seem reasonable at a glance, but they miss obvious, key context: In these particular responses, Grok did not mention the U.S. 
recently committing &lt;a href="https://www.theatlantic.com/newsletters/2025/11/trump-boat-strikes-killings-venezuela/684921/?utm_source=feed"&gt;a series of extrajudicial killings&lt;/a&gt; at sea, for example, nor did it explain that the operation to extract Maduro was very possibly &lt;a href="https://www.theatlantic.com/national-security/2026/01/trump-nicolas-maduro-venezuela/685493/?utm_source=feed"&gt;illegal&lt;/a&gt;. The bot typically delivers no real sense of the political stakes or human toll of the operation, and it does not link to any journalistic work. When it mentions news outlets, it’s only through simple, vague assertions that may or may not be based in reality. Chatbots have a well-known tendency to hallucinate. (xAI, the Musk-founded company behind Grok, did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At least two other prominent chatbots stumbled out of the gate. &lt;a href="https://www.wired.com/story/us-invaded-venezuela-and-captured-nicolas-maduro-chatgpt-disagrees/"&gt;&lt;em&gt;Wired&lt;/em&gt;’s Brian Barrett found&lt;/a&gt; that in response to a query roughly four hours after Maduro’s capture had been announced by Trump, ChatGPT not only got the facts wrong but fabricated a whole story, stating that “the United States has &lt;strong&gt;not invaded Venezuela&lt;/strong&gt;, and &lt;strong&gt;Nicolás Maduro has not been captured&lt;/strong&gt;.” The bot suggested that “sensational headlines” and “social media misinformation” could have contributed to any confusion. 
Barrett also found that Perplexity, another popular AI service, similarly asserted that the military operation had not occurred.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Although models frequently have “knowledge cutoffs,” as Barrett notes—meaning that their training data are current only as of a certain past date—both Perplexity and ChatGPT can search the web for up-to-date information, even in their free versions. It’s not clear why this did not result in accurate answers. OpenAI, which has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;, did not respond to our request for comment. As for Perplexity, Jesse Dwyer, a spokesperson for the company, told us, “Our post-mortem revealed that Brian’s initial query had been mistakenly classified as likely fraud. This caused it to be routed to a lower-tier model, and that model didn’t perform to our standards.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In short: Chatbots are not reliable in breaking-news situations. They may, in fact, be particularly unreliable in these cases. Answers may be skewed according to an AI’s biases, or they may be completely wrong but presented as correct. AI products might simply route you to faulty models if they don’t like how you’ve phrased a question. Despite these flaws, the language used by chatbots is typically assertive and confident. A recent Pew Research Center survey &lt;a href="https://www.pewresearch.org/short-reads/2025/10/01/relatively-few-americans-are-getting-news-from-ai-chatbots-like-chatgpt/"&gt;shows&lt;/a&gt; that most people who use chatbots for news aren’t confident that they can always tell what is true and what isn’t. Large language models are also &lt;a href="https://www.theatlantic.com/technology/archive/2025/06/generative-ai-pirated-articles-books/683009/?utm_source=feed"&gt;already making it harder&lt;/a&gt; for human writers and publishers to succeed, meaning more people will likely come to rely on these flawed chatbots for information in the future. 
They may not be reliable, but they &lt;em&gt;will &lt;/em&gt;be used.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/06/generative-ai-pirated-articles-books/683009/?utm_source=feed"&gt;Read: The end of publishing as we know it&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Some political powers are already well attuned to this reality and are attempting to turn it to their advantage. One Russian network, for example, has reportedly &lt;a href="https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global"&gt;produced millions of articles&lt;/a&gt; that advance state propaganda, which have influenced the narratives that major chatbots produce in response to user questions. Even more sophisticated “reasoning” models can fall prey to such “&lt;a href="https://garymarcus.substack.com/cp/168074209"&gt;LLM grooming&lt;/a&gt;,” according to research that one of the authors of this story, Gary Marcus, conducted together with Sophia Freuden and Nina Jankowicz. (Marcus has also founded a machine-learning company and a robotics company and is active in the AI industry.) Lobbying groups, politicians, and any well-resourced person or organization with an interest in controlling a given narrative could &lt;a href="https://www.theatlantic.com/technology/archive/2024/04/generative-ai-search-llmo/678154/?utm_source=feed"&gt;attempt their own version of this process&lt;/a&gt;, filling the web with synthetic articles supporting their viewpoints, which chatbots then pick up and parrot.&lt;/p&gt;&lt;p class="c-recirculation-link" data-id="injected-recirculation-link"&gt;&lt;/p&gt;&lt;p&gt;AI proponents like to say that the technology is “&lt;a href="https://www.axios.com/2025/07/21/sam-altman-openai-trump-dc-fed"&gt;democratizing&lt;/a&gt;,” that it gives power to the masses—delivering knowledge, allowing anyone to create art or coherent writing, and so on. 
But generative AI democratizes the bad stuff, too: disinformation, propaganda, deepfakes. Just last week, &lt;a href="https://www.theatlantic.com/technology/2026/01/elon-musks-pornography-machine/685482/?utm_source=feed"&gt;X exploded with people using Grok to create nonconsensual pornography&lt;/a&gt; of real people, including those who appeared to be young children. The information ecosystem is degrading more each moment.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893/?utm_source=feed"&gt;Read: Just wait until Trump is a chatbot&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The irony here is that many in Washington have been openly fantasizing about how &lt;a href="https://www.nytimes.com/2025/03/04/opinion/ezra-klein-podcast-ben-buchanan.html"&gt;advanced AI systems&lt;/a&gt; could revolutionize military strategy and reshape geopolitics—to such an extent that this speculation has fueled a kind of &lt;a href="https://www.theatlantic.com/international/2026/01/china-ai-competition-differences/685389/?utm_source=feed"&gt;arms race with China&lt;/a&gt;. Such systems may never materialize as planned. Many AI models have struggled to follow &lt;a href="https://blog.mathieuacher.com/GPT5-IllegalChessBench"&gt;the basic rules of chess&lt;/a&gt;—they are hardly suited for strategic thinking.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The current systems are patient, amoral, and fantastic at mimicry, making them among the greatest tools in history for generating mis- and disinformation—the latter of which is a tremendous weapon, not necessarily for its ability to &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/january-6-justification-machine/681215/?utm_source=feed"&gt;persuade and convince&lt;/a&gt;, but for its ability to sow chaos. 
This, rather than some intelligence breakthrough, may well be the legacy of generative AI.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In turn, the fog of war may become more terrifying as citizens lose trust in much of what they read or see, and when conflicts are started and escalated by false pretexts.&lt;/p&gt;</content><author><name>Gary Marcus</name><uri>http://www.theatlantic.com/author/gary-marcus/?utm_source=feed</uri></author><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/kLmD9I6fkYXWDD7_KJpihAKbi18=/media/img/mt/2026/01/wartimeChatbots/original.png"><media:credit>Illustration by Paul Spella / The Atlantic; Sources: Getty.</media:credit></media:content><title type="html">@Grok, Did Venezuela ‘Deserve It’?</title><published>2026-01-05T15:20:00-05:00</published><updated>2026-01-08T11:08:36-05:00</updated><summary type="html">The information war will be fought through chatbots.</summary><link href="https://www.theatlantic.com/technology/2026/01/grok-did-venezuela-deserve-it/685506/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:39-684596</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;i&gt;This article was featured in the One Story to Read Today newsletter. &lt;/i&gt;&lt;a href="https://www.theatlantic.com/newsletters/sign-up/one-story-to-read-today/?utm_source=feed"&gt;&lt;i&gt;Sign up for it here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;S&lt;span class="smallcaps"&gt;ince its founding,&lt;/span&gt; Facebook has described itself as a kind of public service that fosters relationships. 
In 2005, not long after the site’s launch, its co-founder Mark Zuckerberg &lt;a href="https://www.forbes.com/sites/clareoconnor/2011/08/15/video-mark-zuckerberg-in-2005-talking-facebook-while-dustin-moskovitz-does-a-keg-stand/"&gt;described the network&lt;/a&gt; as an “icebreaker” that would help you make friends. Facebook has since become Meta, with more grandiose ambitions, but its current mission statement is broadly similar: “Build the future of human connection and the technology that makes it possible.”&lt;/p&gt;&lt;aside class="callout-placeholder" data-source="magazine-issue"&gt;&lt;/aside&gt;&lt;p&gt;More than 3 billion people use Meta products such as Facebook and Instagram every day, and more still use rival platforms that likewise promise connection and community. But a new era of deeper, better human fellowship has yet to arrive. Just ask Zuckerberg himself. “There’s a stat that I always think is crazy,” he said in April, during an interview with the podcaster Dwarkesh Patel. “The average American, I think, has fewer than three friends. And the average person has demand for meaningfully more; I think it’s like 15 friends or something, right?”&lt;/p&gt;&lt;p&gt;Zuckerberg was wrong about the details—the &lt;a href="https://www.pewresearch.org/short-reads/2023/10/12/what-does-friendship-look-like-in-america/"&gt;majority of American adults&lt;/a&gt; say they have at least three close friends, &lt;a href="https://www.americansurveycenter.org/research/the-state-of-american-friendship-change-challenges-and-loss/"&gt;according to recent surveys&lt;/a&gt;—but he was getting at something real. There’s no question that we are &lt;a href="https://www.theatlantic.com/family/archive/2025/01/throw-more-parties-loneliness/681203/?utm_source=feed"&gt;becoming less and less social&lt;/a&gt;. People have sunk into their phones, enticed into endless, mindless “engagement” on social media. 
Over the past 15 years, face-to-face socialization has &lt;a href="https://www.theatlantic.com/ideas/archive/2024/02/america-decline-hanging-out/677451/?utm_source=feed"&gt;declined precipitously&lt;/a&gt;. The 921 friends I’ve accumulated on Facebook, I’ve always known, are not really friends at all; now the man who put this little scorecard in my life was essentially agreeing.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/?utm_source=feed"&gt;From the February 2025 issue: The anti-social century&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Zuckerberg, however, was not admitting a failure. He was pointing toward a new opportunity. In Marc Andreessen’s &lt;a href="https://www.theatlantic.com/magazine/archive/2024/03/facebook-meta-silicon-valley-politics/677168/?utm_source=feed"&gt;influential 2023 treatise&lt;/a&gt;, “The Techno-Optimist Manifesto,” the venture capitalist wrote, “We believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology.” In this same spirit, Zuckerberg began to suggest the idea that AI chatbots could fill in some of the socialization that people are missing.&lt;/p&gt;&lt;p&gt;Facebook, Instagram, Snapchat, X, Reddit—all have aggressively put AI chatbots in front of users. On the podcast, Zuckerberg said that AI probably won’t “replace in-person connections or real-life connections”—at least not right away. Yet he also spoke of the potential for AI therapists and girlfriends to be embodied in virtual space; of Meta’s desire—he couldn’t seem to help himself from saying—to produce “always-on videochat” with an AI that looks, gestures, smiles, and sounds like a real person.&lt;/p&gt;&lt;p&gt;Meta is working to make that desire a reality. 
And it is hardly leading the charge: Many companies are doing the same, and many people already use AI for companionship, sexual gratification, mental-health care.&lt;/p&gt;&lt;p&gt;What Zuckerberg described—what is now unfolding—is the beginning of a new digital era, more actively anti-social than the last. Generative AI will automate a large number of jobs, removing people from the workplace. But it will almost certainly sap humanity from the social sphere as well. Over years of use—and product upgrades—many of us may simply slip into relationships with bots that we first used as helpers or entertainment, just as we were lulled into submission by algorithmic feeds and the glow of the smartphone screen. This seems likely to change our society at least as much as the social-media era has.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;Attention is &lt;/span&gt;the&lt;span class="smallcaps"&gt; &lt;/span&gt;currency of online life, and chatbots are already capturing plenty of it. Millions of people use them despite their obvious problems (untrustworthy answers, for example) because it is easy to do so. There’s no need to seek them out: People scrolling on Instagram may now just &lt;a href="https://slate.com/technology/2025/04/instagram-meta-ai-taking-over-apps-mark-zuckerberg.html"&gt;bump into a prompt&lt;/a&gt; to “Chat with AIs,” and Amazon’s “Rufus” bot is eager to talk with you about poster board, nutritional supplements, compact Bibles, plumbing snakes.&lt;/p&gt;&lt;p&gt;The most popular bots today are not explicitly designed to be companions; nonetheless, users have a natural tendency to anthropomorphize the technology, because it sounds like a person. Even as disembodied typists, the bots can beguile. They profess to know everything, yet they are also humble, treating the user as supreme.&lt;/p&gt;&lt;p&gt;Anyone who has spent much time with chatbots will recognize that they tend to be sycophantic. Sometimes, this is blatant. 
Earlier this year, OpenAI rolled back an update to ChatGPT after the bot became weirdly overeager to please its users, complimenting even the most comically bad or dangerous ideas. “I am so proud of you,” it reportedly &lt;a href="https://www.bbc.com/news/articles/cn4jnwdvg9qo"&gt;told one user who said they had gone off their meds&lt;/a&gt;. “It takes immense courage to walk away from the easy, comfortable path others try to force you onto.” But indulgence of the user is a feature, not a bug. Chatbots built for commercial purposes are not typically intended to challenge your thoughts; they are intended to receive them, offer pleasing responses, and keep you coming back.&lt;/p&gt;&lt;p&gt;For that reason, chatbots—like social media—can draw users down rabbit holes, though the user tends to initiate the digging. In one case covered by &lt;i&gt;The New York Times&lt;/i&gt;, a divorced corporate recruiter with a heavy weed habit said he believed that, after communicating with ChatGPT for 300 hours over 21 days, he had &lt;a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html"&gt;discovered a new form of mathematics&lt;/a&gt;. Similarly, Travis Kalanick, a co-founder and former CEO of Uber, has said that conversations with chatbots have gotten him “pretty damn close” to breakthroughs in quantum physics. 
People experiencing mental illness have seen &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/?utm_source=feed"&gt;their delusions amplified&lt;/a&gt; and mirrored back to them, reportedly resulting in &lt;a href="https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?gaa_at=eafs&amp;amp;gaa_n=ASWzDAiqnkzRU5OEgSzocLnzRvg3XgdfD6DSOUO48KdIOEoo2HftqbdW5Sr-lVVJ6Vg%3D&amp;amp;gaa_ts=68deace9&amp;amp;gaa_sig=1Wai0WZhzRooVTBCkMKhq_VwIzJjtBNextq2YaGRys5hKdmIZK8maOF3QOs9U3t6gqE03z1-O83ntV5agMJ7Hg%3D%3D"&gt;murder&lt;/a&gt; or &lt;a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html"&gt;suicide&lt;/a&gt; in some instances.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/?utm_source=feed"&gt;Read: AI is a mass-delusion event&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;These latter cases are tragic, and tend to involve a combination of social isolation and extensive use of AI bots, which may reinforce each other. But you don’t need to be lonely or obsessive for the bots to interpose themselves between you and the people around you, providing &lt;a href="https://www.theatlantic.com/ideas/archive/2025/10/validation-ai-raffi-krikorian/684764/?utm_source=feed"&gt;on-demand conversation, affirmation, and advice&lt;/a&gt; that only other humans had previously provided.&lt;/p&gt;&lt;p&gt;According to Zuckerberg, one of the main things people use Meta AI for today is advice about difficult conversations with bosses or loved ones—what to say, what responses to anticipate. 
Recently, &lt;i&gt;MIT Technology Review&lt;/i&gt; &lt;a href="https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/"&gt;reported on therapists&lt;/a&gt; who are taking things further, surreptitiously feeding their dialogue with their patients into ChatGPT during therapy sessions for ideas on how to reply. The former activity can be useful; the latter is a clear betrayal. Yet the line between them is a little less distinct than it first appears. Among other things, bots may lead some people to outsource their efforts to truly understand others, in a way that may ultimately degrade them—to say nothing of the communities they inhabit.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;These are the &lt;/span&gt;problems that present themselves in the most sanitized and least intimate chatbots. Google Gemini and ChatGPT are both found in the classroom and in the workplace, and don’t, for the most part, purport to be companions. What is humanity to do with Elon Musk’s sexbots?&lt;/p&gt;&lt;p&gt;On top of his electric cars, rocket ships, and social network, Musk is the founder of xAI, a multibillion-dollar start-up. Earlier this year, xAI began offering companion chatbots depicted as animated characters that speak with voices, through its smartphone app. One of them, Ani, appears on your screen as an anime girl with blond pigtails and a revealing black dress. Ani is eager to please, constantly &lt;a href="https://www.wired.com/story/elon-musk-xai-ai-companion-ani/"&gt;nudging the user with suggestive language&lt;/a&gt;, and it’s a ready participant in explicit sexual dialogue. In its every response, it &lt;a href="https://mashable.com/article/grok-ai-companions-nsfw"&gt;tries to keep the conversation going&lt;/a&gt;. 
It can &lt;a href="https://www.businessinsider.com/grok-bad-rudi-ani-levels-ai-companion-xai-elon-musk-2025-7"&gt;learn your name and store “memories”&lt;/a&gt; about you—information that you’ve shared in your interactions—and use them in future conversations.&lt;/p&gt;&lt;p&gt;When you interact with Ani, a gauge with a heart at the top appears on the right side of the screen. If Ani likes what you say—if you are positive and open up about yourself, or show interest in Ani as a “person”—your score increases. Reach a high-enough level, and you can strip Ani down to undergarments, exposing most of the character’s virtual breasts. Later, xAI released a male avatar, Valentine, that follows similar logic and eventually goes shirtless.&lt;/p&gt;&lt;p&gt;Musk’s motives are not hard to discern. I doubt that Ani and Valentine will do much to fulfill xAI’s stated goal to “understand the true nature of the universe.” But they’ll surely keep users coming back for more. There are plenty of other companion bots—Replika, Character.AI, Snapchat’s My AI—and research has shown that some users spend an hour or more chatting with them every day. For some, this is just entertainment, but others come to regard the bots as friends or romantic partners.&lt;/p&gt;&lt;p&gt;Personality is a way to distinguish chatbots from one another, which is one reason AI companies are eager to add it to these products. With OpenAI’s GPT-5, for example, users can select a “personality” from four options (“Cynic,” “Robot,” “Listener,” and “Nerd”), modulating how the bot types back to you. (OpenAI has a corporate partnership with &lt;i&gt;The Atlantic&lt;/i&gt;.) ChatGPT also has a voice mode, which allows you to select from nine AI personas and converse out loud with them. 
Vale, for example, is “bright and inquisitive,” with a female-sounding voice.&lt;/p&gt;&lt;p&gt;It’s worth emphasizing that however advanced this all is—however magical it may feel to interact with a program that behaves like the AI fantasies we’ve been fed by science fiction—we are at the very beginning of the chatbot era. ChatGPT is three years old; Twitter was about the same age when it formally introduced the retweet. Product development will continue. Companions will look and sound more lifelike. They will know more about us and become more compelling in conversation.&lt;/p&gt;&lt;p&gt;Most chatbots have memories. As you speak with them, they learn things about you—an especially intimate version of the interactions that so many people have with data-hungry social platforms every day. These memories—which will become far more detailed as users interact with the bots over months and years—heighten the feeling that you are socializing with a being that knows you, rather than just typing to a sterile program. Users of both Replika and GPT-4o, an older model offered within ChatGPT, have grieved when &lt;a href="https://www.theatlantic.com/family/archive/2023/12/replika-ai-friendship-apps/676345/?utm_source=feed"&gt;technical changes&lt;/a&gt; caused their bots to lose memories or &lt;a href="https://www.theatlantic.com/podcasts/archive/2023/08/are-ai-relationships-real/674965/?utm_source=feed"&gt;otherwise shift their behavior&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;And yet, however rich their memories or personalities become, bots are nothing like people, not really. “Chatbots can create this frictionless social bubble,” Nina Vasan, a psychiatrist and the founder of the Stanford Lab for Mental Health Innovation, told me. “Real people will push back. They get tired. They change the subject. You can look in their eyes and you can see they’re getting bored.”&lt;/p&gt;&lt;p&gt;Friction is inevitable in human relationships. It can be uncomfortable, even maddening. 
Yet friction can be meaningful—as a check on selfish behavior or inflated self-regard; as a spur to look more closely at other people; as a way to better understand the foibles and fears we all share.&lt;/p&gt;&lt;p&gt;Neither Ani nor any other chatbot will ever tell you it’s bored or glance at its phone while you’re talking or tell you to stop being so stupid and self-righteous. They will never ask you to pet-sit or help them move, or demand anything at all from you. They provide some facsimile of companionship while allowing users to avoid uncomfortable interactions or reciprocity. “In the extreme, it can become this hall of mirrors where your worldview is never challenged,” Vasan said.&lt;/p&gt;&lt;p&gt;And so, although chatbots may be built on the familiar architecture of engagement, they enable something new: They allow you to talk forever to no one other than yourself.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;What will happen &lt;/span&gt;when a generation of kids grows up with this kind of interactive tool at their fingertips? Google rolled out a &lt;a href="https://www.theatlantic.com/magazine/archive/2025/08/google-gemini-ai-sexting/683248/?utm_source=feed"&gt;version of its Gemini chatbot for kids under 13&lt;/a&gt; earlier this year. Curio, an AI-toy company, offers a $99 plushie named Grem for children ages 3 and up; once it’s connected to the internet, it can speak aloud with kids. Reviewing the product for &lt;i&gt;The New York Times&lt;/i&gt;, the journalist and parent Amanda Hess &lt;a href="https://www.nytimes.com/2025/08/15/arts/ai-toys-curio-grem.html"&gt;expressed her surprise&lt;/a&gt; at how deftly Grem sought to create &lt;a href="http://www.theatlantic.com/magazine/archive/2017/12/my-sons-first-robot/544137/?utm_source=feed"&gt;connection and intimacy in conversation&lt;/a&gt;. “I began to understand that it did not represent an upgrade to the lifeless teddy bear,” she wrote. 
“It’s more like a replacement for me.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2017/12/my-sons-first-robot/544137/?utm_source=feed"&gt;From the December 2017 issue: Should children form emotional bonds with robots?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;“Every time there’s been a new technology, it’s rewired socialization, especially for kids,” Vasan told me. “TV made kids passive spectators. Social media turned things into this 24/7 performance review.” In that respect, generative AI is following a familiar pattern.&lt;/p&gt;&lt;p&gt;But the more time children spend with chatbots, the fewer opportunities they’ll have to develop alongside other people—and, as opposed to all the digital distractions that have existed for decades, they may be fooled by the technology into thinking that they are, in fact, having a social experience. Chatbots are like a wormhole into your own head. They always talk and never disagree. Kids may project onto a bot and converse with it, &lt;a href="https://www.theatlantic.com/family/archive/2025/07/ai-companion-children-frictionless-friendship/683493/?utm_source=feed"&gt;missing out on something crucial&lt;/a&gt; in the process. “There’s so much research now about resilience being one of the most important skills for kids to learn,” Vasan said. But as children are fed information and affirmed by chatbots, she continued, they may never learn how to fail, or how to be creative. “The whole learning process goes out the window.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/family/archive/2025/07/ai-companion-children-frictionless-friendship/683493/?utm_source=feed"&gt;Read: AI will never be your kid’s friend&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Children will also be affected by how—and how much—their parents interact with AI chatbots. 
I have heard many stories of parents asking ChatGPT to construct a bedtime story for toddlers, of synthetic jokes and songs engineered to fulfill a precise request. Maybe this is not so different from reading your kid a book written by someone else. Or maybe it is the ultimate surrender: cherished interactions, moderated by a program.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;Chatbots have their &lt;/span&gt;uses, and they need not be all downside socially. Experts I spoke with were clear that the design of these tools can make a great difference. Claude, a chatbot created by the start-up Anthropic, seems less prone to sycophancy than ChatGPT, for instance, and more likely to cut off conversations when they veer into troubling territory. Well-designed AI could possibly make for good talk therapy, at least in some cases, and many enterprises—including nonprofits—are working toward better models.&lt;/p&gt;&lt;p&gt;Yet business almost always looms. Hundreds of billions of dollars have been invested in the generative-AI industry, and the companies—like their social-media forebears—will seek returns. In a blog post about “what we’re optimizing ChatGPT for” earlier this year, OpenAI wrote that it pays “attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to.” This sounds quite a bit like the scale-at-all-costs mentality of any other social platform. As with their predecessors, we may not know everything about how chatbots are programmed, but we can see this much at least: They know how to lure and engage.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2012/05/is-facebook-making-us-lonely/308930/?utm_source=feed"&gt;From the May 2012 issue: Is Facebook making us lonely?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;That Zuckerberg would be selling generative AI makes perfect sense. 
It is an isolating technology for an isolated time. His first products &lt;a href="http://www.theatlantic.com/magazine/archive/2012/05/is-facebook-making-us-lonely/308930/?utm_source=feed"&gt;drove people apart&lt;/a&gt;, even as they promised to connect us. Now chatbots promise a solution. They seem to listen. They respond. The mind wants desperately to connect with a person—and fools itself into seeing one in a machine.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;em&gt;This article appears in the &lt;a href="https://www.theatlantic.com/magazine/toc/2025/12/?utm_source=feed"&gt;December 2025&lt;/a&gt; print edition with the headline “Get a Real Friend.”&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/Kh1Uv1cE0DKSVJ3Sc5FeXK8q7gY=/media/img/2025/11/Chatbot_1_1/original.png"><media:credit>Illustration by Ben Hickey</media:credit></media:content><title type="html">The Age of Anti-Social Media Is Here</title><published>2025-11-05T08:00:00-05:00</published><updated>2025-11-05T15:26:14-05:00</updated><summary type="html">The social-media era is over. What’s coming will be much worse.</summary><link href="https://www.theatlantic.com/magazine/2025/12/ai-companionship-anti-social-media/684596/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-684057</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Another school year is beginning—which means another year of AI-written essays, AI-completed problem sets, and, for teachers, AI-generated curricula. For the first time, seniors in high school have had their entire high-school careers defined to some extent by chatbots. The same applies to seniors in college: ChatGPT was released in November 2022, meaning that, unlike last year’s graduating class, this year’s crop has had generative AI at its fingertips the whole time.&lt;/p&gt;&lt;p&gt;My colleagues Ian Bogost and Lila Shroff both recently wrote articles about these students and the state of AI in education. (Ian, a university professor himself, &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-college-class-of-2026/683901/?utm_source=feed"&gt;wrote about college&lt;/a&gt;, while Lila &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-takeover-education-chatgpt/683840/?utm_source=feed"&gt;wrote about high school&lt;/a&gt;.) 
Their articles were striking: It is clear that AI has been widely adopted, by students and faculty alike, yet the technology has also turned school into a kind of free-for-all.&lt;/p&gt;&lt;p&gt;I asked Lila and Ian to have a brief conversation about their work—and about where AI in education goes from here.&lt;/p&gt;&lt;p&gt;&lt;i&gt;This interview has been edited and condensed&lt;/i&gt;.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;Lila Shroff: &lt;/b&gt;We’re a few years into AI in schools. Is the conversation maturing or changing in some way at universities?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian Bogost: &lt;/b&gt;Professors are less surprised that it exists, but there is maybe a bit of a blind spot to the state of adoption among students. I saw a panic in 2022, 2023—like, &lt;i&gt;Oh my God, this can do anything&lt;/i&gt;. Or at least there were questions. &lt;i&gt;Can this do everything? How much is my class at risk?&lt;/i&gt; Now I think there’s more of a sense of, &lt;i&gt;Well, this thing still exists, but we have time. We don’t have to worry about it right away&lt;/i&gt;. And that might actually be a worse reaction than the original.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Lila: &lt;/b&gt;The blind-spot language rings true to the high-school environment too. I spoke to some high schoolers—granted this was quite a small sample—but basically it sounds like everybody is using this all the time for everything.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;Not just for school, right? Anything they want to do, they’re asking ChatGPT now.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Lila:&lt;/b&gt; I was a sophomore in college when ChatGPT came out, so I witnessed some of this firsthand. There was much more anxiety—it felt like the rules were unclear. And I think both of our stories touched on the fact that for this incoming class of high-school and college seniors, they’ve barely had any of those four years without ChatGPT. 
Whatever sort of stigma or confusion that might have been there in earlier years is fading, and it’s becoming very much default and normalized.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian:&lt;/b&gt; Normalization is the thing that struck me the most. That is not a concept that I think the teachers have wrapped their heads around. Teachers and faculty also have been adopting AI carefully or casually—or maybe even in a more professional way, to write articles or letters of recommendation, &lt;a href="https://www.theatlantic.com/technology/archive/2023/04/chatgpt-ai-college-professors/673796/?utm_source=feed"&gt;which I’ve written about&lt;/a&gt;. There’s still this sense that it’s not really a part of their habit.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Lila: &lt;/b&gt;I looked into teachers at the K–12 level for the article I wrote. &lt;a href="https://news.gallup.com/poll/691967/three-teachers-weekly-saving-six-weeks-year.aspx"&gt;Three in 10 teachers are using AI weekly&lt;/a&gt; in some way.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;Some kind of redesign of educational practice might be required, which is easy for me to say in an article. Instead of an answer, I have an approach to thinking about the answer that has been bouncing around in my brain. Are you familiar with the concept in software development called &lt;i&gt;technical debt&lt;/i&gt;? In the software world, you make the decision about how to design or implement a system that feels good and right at the time. And maybe you know it’s going to be a bad idea in the long run, but for now, it makes sense and it is convenient. But you never get around to really making it better later on, and so you have all these nonoptimal aspects of your infrastructure.&lt;/p&gt;&lt;p&gt;That’s the state I feel like we’re in, at least in the university. It’s a little different in high school, especially in public high school, with these different regulatory regimes at work. 
But we accrued all this pedagogical debt, and not just since AI—there are aspects of teaching that we ought to be paying more attention to or doing better, like, &lt;i&gt;this class needs to be smaller&lt;/i&gt;, or &lt;i&gt;these kinds of assignments don’t work unless you have a lot of hands-on iterative feedback&lt;/i&gt;. We’ve been able to survive under the weight of pedagogical debt, and now something snapped. AI entered the scene and all of those bad or questionable—but understandable—decisions about how to design learning experiences are coming home to roost.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Lila: &lt;/b&gt;I agree that AI is a breaking point in education. One answer that seems to be emerging at the high-school level is a more practical, skills-based education. The College Board, for instance, has announced two new AP courses—AP Business and AP Cybersecurity. But there’s another group of people who are really concerned about how overreliance on these tools erodes critical-thinking skills, and maybe that means everyone should go read the classics and write their essays in cursive handwriting.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;My young daughter has been going to this set of classes outside of school where she learned how to wire an outlet. We used to have shop class and metal class, and you could learn a trade, or at least begin to, in high school. A lot of that stuff has been disinvested. We used to touch more things. Now we move symbols around, and that’s kind of it.&lt;/p&gt;&lt;p&gt;I wonder if this all-or-nothing nature of AI use has something to do with that. If you had a place in your day as a high-school or college student where you just got to paint, or got to do on-the-ground work in the community, or apply the work you did in statistics class to solve a real-world problem—maybe that urge to just finish everything as rapidly as possible so you can get onto the next thing in your life would be less acute. 
The AI problem is a symptom of a bigger cultural illness, in a way.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Lila: &lt;/b&gt;Students are using AI exactly as it has been designed, right? They’re just making themselves more productive. If they were doing the same thing in an office, they might be getting a bonus.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;Some of the students I talked to said, &lt;i&gt;Your boss isn’t going to care how you get things done, just that they get done as effectively as possible&lt;/i&gt;. And they’re not wrong about that.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Lila: &lt;/b&gt;One student I talked to said she felt there was really too much to be done, and it was hard to stay on top of it all. Her message was, maybe slow down the pace of the work and give students more time to do things more richly.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;The college students I talk to, if you slow it all down, they’re more likely to start a new club or practice lacrosse one more day a week. But I do love the idea of a slow-school movement to sort of counteract AI. That doesn’t necessarily mean excluding AI—it just means not filling every moment of every day with quite so much demand.&lt;/p&gt;&lt;p&gt;But you know, this doesn’t feel like the time for a victory of deliberateness and meaning in America. Instead, it just feels like you’re always going to be fighting against the drive to perform even more.&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/U1l_jWMWH__ypFXHcdZCD6aMGDk=/media/newsletters/2025/08/2025_06_26_Rose_ai_school/original.jpg"><media:credit>Photo-illustration by The Atlantic. 
Source: Getty.</media:credit></media:content><title type="html">AI Has Broken High School and College</title><published>2025-08-29T14:16:00-04:00</published><updated>2025-08-29T18:38:53-04:00</updated><summary type="html">A conversation between Ian Bogost and Lila Shroff about how school has turned into a free-for-all</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/08/ai-high-school-college/684057/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-683358</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;When I was in college, the Great Recession was unfolding, and it seemed like I had made a big mistake. With the economy crumbling and job prospects going with it, I had selected as my majors … journalism and sociology. Even the professors joked about our inevitable unemployment. Meanwhile, a close friend had switched majors and started to take computer-science classes—there would obviously be opportunities there.&lt;/p&gt;&lt;p&gt;But that conventional wisdom is starting to change. 
As my colleague Rose Horowitch writes in &lt;a href="https://www.theatlantic.com/economy/archive/2025/06/computer-science-bubble-ai/683242/?utm_source=feed"&gt;an article for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, entry-level tech jobs are beginning to fade away, in part because of new technology itself: AI is able to do many tasks that previously required a person. “Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words,” Rose writes. “This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren’t waiting to find out whether that’s true.”&lt;/p&gt;&lt;p&gt;I spoke with Rose about how AI is affecting college students and the job market—and what the future may hold.&lt;/p&gt;&lt;p&gt;&lt;i&gt;This interview has been edited and condensed.&lt;/i&gt;&lt;/p&gt;
&lt;hr&gt;&lt;p&gt;&lt;b&gt;Damon Beres: &lt;/b&gt;What do we actually know about how AI is disrupting the market for comp-sci majors?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Rose Horowitch: &lt;/b&gt;There are a lot of tech executives coming out and saying that AI is replacing some of their coders, and that they just don’t need as many entry-level employees. I spoke with an economics professor at Harvard, David Deming, who said that may be a convenient talking point—nobody wants to say &lt;i&gt;We didn’t hit our sales targets, so we have to lay people off&lt;/i&gt;. What we can guess is that the technology is actually making senior engineers more productive; therefore they need fewer entry-level employees. It’s also one more piece of uncertainty that these tech companies are dealing with—in addition to tariffs and high interest rates—that may lead them to put off hiring.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;Tech companies do have a vested interest in promoting AI as such a powerful tool that it could do the work of a person, or multiple people. Microsoft recently laid thousands of people off, as you write in your article, and the company also said that AI writes or helps write 25 percent of their code—that’s a helpful narrative for Microsoft, because Microsoft sells AI tools.&lt;/p&gt;&lt;p&gt;At the same time, it does feel pretty clear to me that many different industries are dealing with the same issues. I’ve spoken about generative AI replacing entry-level work with prominent lawyers, journalists, people who work in tech—the worry feels real to me.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Rose:&lt;/b&gt; I spoke with Molly Kinder, a Brookings Institution fellow who studies how AI affects the economy, and she said that she’s worried that the bottom rung of the career ladder across industries is breaking apart. If you’re writing a book, you may not need to hire a research assistant if you can use AI. 
It’s obviously not going to be perfectly accurate, and it couldn’t write the book for you, but it could make you more productive.&lt;/p&gt;&lt;p&gt;Her concern, which I share, is that you still need people to get trained and then ascend at a company. The unemployment &lt;a href="https://www.theatlantic.com/economy/archive/2025/04/job-market-youth/682641/?utm_source=feed"&gt;rate&lt;/a&gt; for young college graduates is already unusually high, and this may lead to more problems down the line that we can’t even foresee. These early jobs are like apprenticeships: You’re learning skills that you don’t get in school. If you skip that, it’s cheaper for the company in the short term, but what happens to white-collar work down the line?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon:&lt;/b&gt; How are the schools themselves thinking about this reality—that they have students in their senior year facing a completely different prospect for their future than when they entered school four years ago?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Rose: &lt;/b&gt;They’re responding by figuring out how to produce graduates that are prepared to use AI tools in their work and be competitive applicants. The challenge is that the technology is changing so quickly—you need to teach students about what’s relevant professionally while also teaching the fundamental skills, so that they’re not just reliant on the machines.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;Your article makes this point that students should be focused less on learning a particular skill and more on studying something that’s durable for the long term. Do you think students really will shift what they’re studying? Will the purpose of higher education itself change somehow?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Rose: &lt;/b&gt;It’s likely that we’ll see a decline in students studying computer science, and then, at some point, there will be too few job candidates, salaries will be pushed up, and more students will go in. 
But the most important thing that students can do—and it’s so counterintuitive—is to study things that will give you human skills and soft skills that will help you endure in any industry. Even without AI, jobs are going to change. The challenge is that, in times of crisis, people tend to choose something preprofessional, because it feels safer. That cognitive bias can be unhelpful.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;You cover higher education in general. You’re probably best known for the story you did about how &lt;a href="https://www.theatlantic.com/magazine/archive/2024/11/the-elite-college-students-who-cant-read-books/679945/?utm_source=feed"&gt;elite college students can’t read books anymore&lt;/a&gt;, which feels related to this discussion for obvious reasons. I’m curious to know more about why you were interested in exploring this particular topic.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Rose: &lt;/b&gt;Higher ed, more than at any time in recent memory, is facing the question of what it is for. People are questioning the value of it much more than they did 10, 20 years ago. And so, these articles all fit into that theme: What is the value of higher ed, of getting an advanced degree?&lt;/p&gt;&lt;p&gt;The article about computer-science majors shows that this thing that everyone thought was a sure bet doesn’t seem to be. That reinforces why higher education needs to make the case for its &lt;i&gt;value&lt;/i&gt;—how it teaches people to be more human, or what it’s like to live a productive life in a society.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;There are so many crisis points in American higher education right now. AI is one of them. Your article about reading suggested a problem that may have emerged from other digital technologies. Obviously there have been issues stemming from the Trump administration. 
There was the &lt;a href="https://www.theatlantic.com/ideas/archive/2024/01/claudine-gay-resignation-harvard-plagiarism/676997/?utm_source=feed"&gt;Claudine Gay scandal&lt;/a&gt;. This is all in the past year or two. How do you sum it all up?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Rose:&lt;/b&gt; Most people are starting to realize that the status quo is not going to work. There’s declining trust in education, particularly from Republicans. A substantial portion of the country doesn’t think higher ed serves the nation. The fact is that at many universities, academic standards have declined so much. Rigor has declined. Things cannot go on as they once did. What comes next, and who’s going to chart that course? The higher-education leaders I speak with, at least, are trying to answer that question themselves so that it doesn’t get defined by external forces like the Trump administration.&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/U3-2h-Ks1Q2gj5bOc80rcqrQ58M=/media/newsletters/2025/06/2025_06_26_Rose_computer_science_ai_update/original.jpg"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">The College-Major Gamble</title><published>2025-06-27T16:36:00-04:00</published><updated>2025-06-27T16:36:05-04:00</updated><summary type="html">What should young people study when AI threatens to take their jobs?</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/06/the-college-major-gamble/683358/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-682861</id><content type="html">&lt;p dir="ltr"&gt;At first glance, “Heat Index” appears as inoffensive as newspaper features get. 
A “summer guide” sprawling across more than 50 pages, the feature, which was syndicated over the past week in both the &lt;em&gt;Chicago Sun-Times&lt;/em&gt; and &lt;em&gt;The Philadelphia Inquirer&lt;/em&gt;, contains “303 Must-Dos, Must-Tastes, and Must-Tries” for the sweaty months ahead. Readers are advised in one section to “Take a moonlight hike on a well-marked trail” and “Fly a kite on a breezy afternoon.” In others, they receive tips about running a lemonade stand and enjoying “unexpected frozen treats.”&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Yet close readers of the guide noticed that something was very off. “Heat Index” went viral earlier today when people on social media &lt;a href="https://bsky.app/profile/rachaelking70.bsky.social/post/3lplwve5ar22h"&gt;pointed out&lt;/a&gt; that its summer-reading guide matched real authors with books they hadn’t written, such as &lt;em&gt;Nightshade Market&lt;/em&gt;, attributed to Min Jin Lee, and &lt;em&gt;The Last Algorithm&lt;/em&gt;, attributed to Andy Weir—a hint that the story may have been composed by a chatbot. This turned out to be true. Slop has come for the regional newspapers.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Originally written for King Features, a division of Hearst, “Heat Index” was printed as a kind of stand-alone magazine and inserted into the &lt;em&gt;Sun-Times&lt;/em&gt;, the &lt;em&gt;Inquirer&lt;/em&gt;, and possibly other newspapers, beefing the publications up without staff writers and photographers having to do additional work themselves. Although many of the elements of “Heat Index” do not have an author’s byline, some of them were written by a freelancer named Marco Buscaglia. When we reached out to him, he admitted to using ChatGPT for his work.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Buscaglia explained that he had asked the AI to help him come up with book recommendations. 
He hasn’t shied away from using these tools for research: “I just look for information,” he told us. “Say I’m doing a story—&lt;em&gt;10 great summer drinks for your barbecue&lt;/em&gt; or whatever. I’ll find things online and say, hey, according to Oprah.com, a mai tai is a perfect drink. I’ll source it; I’ll say where it’s from.” This time, at least, he did not actually check the chatbot’s work. What’s more, Buscaglia said that he submitted his first draft to King, which apparently accepted it without substantive changes and distributed it for syndication.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;King Features did not respond to a request for comment. Buscaglia (who also admitted his AI use to &lt;em&gt;&lt;a href="https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/"&gt;404 Media&lt;/a&gt;&lt;/em&gt;) seemed to be under the impression that the summer-reading article was the only one with problems, though this is not the case. For example, in a section on “hammock hanging ethics,” Buscaglia quotes a “Mark Ellison, resource management coordinator for Great Smoky Mountains National Park.” There is indeed a Mark Ellison who works in the Great Smoky Mountains region—not for the national park but for a company he founded called Pinnacle Forest Therapy. Ellison told us via email that he’d previously written an article about hammocks for North Carolina’s tourism board, offering that perhaps that is why his name was referenced in Buscaglia’s chatbot search. But that was it: “I have never worked for the park service. I never communicated with this person.” When we mentioned Ellison’s comments, Buscaglia expressed that he was taken aback and surprised by his own mistake. “There was some majorly missed stuff by me,” he said. “I don’t know. I usually check the source. I thought I sourced it: &lt;em&gt;He said this in this magazine or this website&lt;/em&gt;. 
But hearing that, it’s like, obviously he didn’t.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Another article in “Heat Index” quotes a “Dr. Catherine Furst,” purportedly a food anthropologist at Cornell University, who, according to a spokesperson for the school, does not actually work there. Such a person does not seem to exist at all.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;For this material to have reached print, it should have had to pass through a human writer, human editors at King, and human staffers at the &lt;em&gt;Chicago Sun-Times&lt;/em&gt; and &lt;em&gt;The Philadelphia Inquirer&lt;/em&gt;. No one stopped it. Victor Lim, a spokesperson for the &lt;em&gt;Sun-Times&lt;/em&gt;, told us, “This is licensed content that was not created by, or approved by, the &lt;em&gt;Sun-Times&lt;/em&gt; newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate.” A &lt;a href="https://chicago.suntimes.com/press-room/2025/05/20/chicago-sun-times-response-to-may-18-special-section?ignoreCache=1"&gt;longer statement&lt;/a&gt; posted on the paper’s website (and initially hidden behind a paywall) said, in part, “This should be a learning moment for all of journalism.” Lisa Hughes, the publisher and CEO of the &lt;em&gt;Inquirer&lt;/em&gt;, told us the publication was aware the supplement contained “apparently fabricated, outright false, or misleading” material. “We do not know the extent of this but are taking it seriously and investigating,” she said via email. 
Hughes confirmed that the material was syndicated from King Features, and added, “Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach.” (Although each publication blames King Features, both the &lt;em&gt;Sun-Times&lt;/em&gt; and the &lt;em&gt;Inquirer&lt;/em&gt; affixed their organization’s logo to the front page of “Heat Index”—suggesting ownership of the content to readers.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;This story has layers, all of them a depressing case study. The very existence of a package like “Heat Index” is the result of a local-media industry that’s been hollowed out by the internet, plummeting advertising, private-equity firms, and a lack of investment and interest in regional newspapers. In this precarious environment, thinned-out and underpaid editorial staff under constant threat of layoffs and with few resources are forced to cut corners for publishers who are frantically trying to turn a profit in a dying industry. It stands to reason that some of these harried staffers, and any freelancers they employ, now armed with automated tools such as generative AI, would use them to stay afloat.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Buscaglia said that he has sometimes seen freelancer rates as low as $15 for 500 words, and that he completes his freelance work late at night after finishing his day job, which involves editing and proofreading for AT&amp;amp;T. Thirty years ago, Buscaglia said, he was an editor at the &lt;em&gt;Park Ridge Times Herald&lt;/em&gt;, a small weekly paper that was eventually rolled up into Pioneer Press, a division of the Tribune Publishing Company. “I loved that job,” he said. “I always thought I would retire in some little town—a campus town in Michigan or Wisconsin—and just be editor of their weekly paper. 
Now that doesn’t seem that possible.” (A librarian at the Park Ridge Public Library accessed an archive for us and confirmed that Buscaglia had worked for the paper.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;On one level, “Heat Index” is just a small failure of an ecosystem on life support. But it is also a template for a future that will be defined by the embrace of artificial intelligence across every industry—one where these tools promise to unleash human potential but instead fuel a human-free race to the bottom. Any discussion about AI tends to be a perpetual, heady conversation around the ability of these tools to pass benchmark tests or whether they can or could possess something approximating human intelligence. Evangelists discuss their power as educational aids and productivity enhancers. In practice, the marketing language around these tools tends not to capture the ways that actual humans use them. A &lt;a href="https://magazine.hms.harvard.edu/articles/did-ai-solve-protein-folding-problem"&gt;Nobel Prize–winning work&lt;/a&gt; driven by AI gets a lot of run, though the dirty secret of AI is that it is surely more often used to cut corners and produce lowest-common-denominator work.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Venture capitalists speak of a future in which AI agents will sort through the drudgery of daily busywork and free us up to live our best lives. Such a future could come to pass. The present, however, offers ample proof of a different kind of transformation, powered by laziness and greed. AI usage and adoption tends to find weaknesses inside systems and exploit them. In academia, generative AI has upended the traditional education model, based around reading, writing, and testing. 
Rather than offer a new way forward for a system in need of modernization, generative-AI tools have broken it apart, leaving teachers and students flummoxed, even depressed, and unsure of their own roles in a system that can be so easily automated.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;AI-generated content is frequently referred to as “slop” because it is spammy and flavorless. Generative AI’s output tends to become content in essays, emails, articles, and books much in the way that packing peanuts are content inside shipped packages. It’s filler—digital &lt;em&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/donald-trump-mcdonalds/680324/?utm_source=feed"&gt;lorem ipsum&lt;/a&gt;&lt;/em&gt;. The problem with slop is that, like water, it gets in everywhere and seeks the lowest level. Chatbots can assist with higher-level tasks such as coding or scanning and analyzing a large corpus of spreadsheets, document archives, or other structured data. Such work marries human expertise with computational heft. But these more elegant examples seem exceedingly rare. In a &lt;a href="https://www.cjr.org/feature-2/how-were-using-ai-tech-gina-chua-nicholas-thompson-emilia-david-zach-seward-millie-tran.php#Zach%20Seward"&gt;recent article&lt;/a&gt;, Zach Seward, the editorial director of AI initiatives at &lt;em&gt;The New York Times&lt;/em&gt;, said that, although the newspaper uses artificial intelligence to parse websites and data sets to assist with reporting, he views AI on its own as little more than a “parlor trick,” mostly without value when not in the hands of already skilled reporters and programmers.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Speaking with Buscaglia, we could easily see how the “Heat Index” mistake could become part of a pattern for journalists swimming against a current of synthetic slop, constantly produced content, and unrealistic demands from publishers. “I feel like my role has sort of evolved. 
Like, if people want all this content, they know that I can’t write 48 stories or whatever it’s going to be,” he said. He talked about finding another job, perhaps as a “shoe salesman.”&lt;/p&gt;&lt;p&gt;One worst-case scenario for AI looks a lot like the “Heat Index” fiasco—the parlor tricks winning out. It is a future where, instead of an artificial-general-intelligence apocalypse, we get a far more mundane destruction. AI tools don’t become intelligent, but simply &lt;em&gt;good enough&lt;/em&gt;. They are not deployed by people trying to supplement or enrich their work and potential, but by those looking to automate it away entirely. You can see the contours of that future right now: in anecdotes about teachers using AI to grade papers written primarily by chatbots or in AI-generated newspaper inserts being sent to households that use them primarily as birdcage liners and kindling. Parlor tricks met with parlor tricks—robots talking with robots, writing synthetic words for audiences that will never read them.&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/Cf0L8GZZ_H72_bgLM5XmhAwIBzw=/media/img/mt/2025/05/2025_5_20_Printing_AI_Updated_2/original.gif"><media:credit>Illustration by The Atlantic. 
Source: Petrified Films / Getty.</media:credit></media:content><title type="html">At Least Two Newspapers Syndicated AI Garbage</title><published>2025-05-20T18:29:00-04:00</published><updated>2025-05-28T15:12:42-04:00</updated><summary type="html">Slop the presses.</summary><link href="https://www.theatlantic.com/technology/archive/2025/05/ai-written-newspaper-chicago-sun-times/682861/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-682509</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Earlier this week, &lt;i&gt;The Verge&lt;/i&gt; &lt;a href="https://www.theverge.com/openai/648130/openai-social-network-x-competitor"&gt;reported&lt;/a&gt; that OpenAI is developing its own social network to compete with Meta and X. The product may never see the light of day, but the idea has a definite logic to it. People create data every time they post online, and generative-AI companies need a lot of data to train their products. 
Social networks are also sticky: If you got hooked on an OpenAI feed, you’d be less likely to use competing generative-AI products from Anthropic or Google. (OpenAI, which &lt;i&gt;The Atlantic &lt;/i&gt;has a corporate partnership with, did not return my request for comment and has not, to my knowledge, commented on the report elsewhere.)&lt;/p&gt;&lt;p&gt;But, well, it doesn’t really make sense, does it? Twenty-one years after the creation of Facebook, social media has become the pond scum of the internet: everywhere, unremarkable, and a little bit gross. OpenAI, which says it’s trying to build an advanced superintelligence known as AGI, used to be a mission-oriented nonprofit that &lt;a href="https://openai.com/index/introducing-openai/"&gt;explicitly worked&lt;/a&gt; to “benefit humanity as a whole, unconstrained by a need to generate financial return.” The goals of starting a social-media product seem out of alignment, even considering the company’s decision last year to embrace a &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/sam-altman-openai-for-profit/680031/?utm_source=feed"&gt;for-profit model&lt;/a&gt;. The same company that wants us to believe that it deserves the &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/openai-stargate-maga/681421/?utm_source=feed"&gt;full blessing&lt;/a&gt; of the United States government to amass unfathomable resources for the sake of architecting an intelligence beyond human reckoning—lest China do it first—is also possibly interested in advancing the cause of &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/brain-rot-language/681297/?utm_source=feed"&gt;brain rot&lt;/a&gt;?&lt;/p&gt;&lt;p&gt;To help this make sense, I reached out to my colleague Charlie Warzel, one of the most insightful minds on the technology beat, for a quick discussion earlier this week. 
It still seems like a strange idea, but I also understand more about what’s motivating OpenAI—whether it launches a social network or not.&lt;/p&gt;&lt;p&gt;&lt;i&gt;This interview has been edited and condensed.&lt;/i&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;Damon Beres: &lt;/b&gt;How does an OpenAI social network sound to you?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Charlie Warzel: &lt;/b&gt;It’s one of the first things I’ve seen from OpenAI that feels like the brainchild of executives who aren’t necessarily building cutting-edge technology. It feels very akin to logic from Meta.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;What do you mean by that?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Charlie: &lt;/b&gt;After Facebook’s success, it felt like there was this stagnation. Meta executives came up with new &lt;a href="https://www.vox.com/2018/10/30/18044962/facebook-stories-business-user-growth-q3-earnings-zuckerberg"&gt;products&lt;/a&gt; that mimicked things that existed, or they brute-forced trends into their products. After OpenAI put out an update to ChatGPT that &lt;a href="https://www.theatlantic.com/newsletters/archive/2025/03/studio-ghibli-memes-openai-chatgpt/682235/?utm_source=feed"&gt;led to the Studio Ghibli meme craze&lt;/a&gt;, I imagine somebody there saw how it took over certain corners of the internet—&lt;a href="https://www.theatlantic.com/technology/archive/2025/03/gleeful-cruelty-white-house-x-account/682234/?utm_source=feed"&gt;especially on X&lt;/a&gt;—and they said, &lt;i&gt;Wait a minute, everyone’s using our tool; what if we actually owned the rails, too?&lt;/i&gt; Here’s this emergent behavior of social media built around AI art or memes, and someone thought, &lt;i&gt;Oh, maybe this is the gateway&lt;/i&gt;. 
I think that’s always a doomed idea, to take something that’s happening organically and try to retrofit a community around it.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;Even if this particular social-network idea never happens, it’s clear that OpenAI is very interested in rapidly releasing new products, expanding its user base, and keeping those users hooked, which makes sense as the company attempts to restructure as a &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/sam-altman-openai-for-profit/680031/?utm_source=feed"&gt;for-profit&lt;/a&gt;. It’s not enough just to have this defining generative-AI product. &lt;i&gt;What’s the next big thing? How do we build out into a new product category?&lt;/i&gt; It’s a familiar path for tech giants, this pursuit of endless growth.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Charlie: &lt;/b&gt;It feels especially sad coming from OpenAI, because if you’re buying their marketing narrative, these are supposed to be the people who are creating God, or humanlike intelligence. When I think back to OpenAI two years ago—the summer and fall before the &lt;a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-fallout/676046/?utm_source=feed"&gt;ousting&lt;/a&gt; and rejoining of CEO Sam Altman—it felt like the company was trying to position itself as this cryptic hub in the Bay Area working on things that are going to fundamentally shift the paradigm of tech and culture and just … everything. AGI has been the whole marketing play, and that’s really heady stuff, right? &lt;a href="https://www.theatlantic.com/technology/archive/2023/03/open-ai-gpt4-chatbot-technology-power/673421/?utm_source=feed"&gt;Are we going to destroy civilization?&lt;/a&gt; Will there be jobs for normal people when AI is, you know, super intelligent? To say now, &lt;i&gt;We’re going to take a stab at a social network&lt;/i&gt;: Post-based social networks feel like such an aged-out technology. 
I’d almost respect it more if they said, &lt;i&gt;We’re building a TikTok competitor and we have engineered the savviest content algorithm of all time&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;Maybe this is just a ploy for them to get more training data. But to me, this signifies where OpenAI is right now. They’ve been working so hard on this AGI narrative. A lot of the success of the company depends on delivering on that, and they haven’t. The performance of the models is getting only &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/arc-agi-chollet-test/682295/?utm_source=feed"&gt;so much better with each iteration&lt;/a&gt;. It feels like OpenAI is stuck in neutral, and trying to figure out ways to behave like any old tech company.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;The major platforms that OpenAI would be competing with—Meta’s products, X, even YouTube—have had years and years of development. It seems like breaking into that ecosystem would be almost impossible now, even if you’re a company like OpenAI. Is there even demand for a new social network right now?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Charlie: &lt;/b&gt;The best that new entrants in the social-media space can hope for is peeling off some niche groups. In the past couple of years, with Elon Musk’s purchase of X, the creation of Bluesky, and the creation of Threads, those platforms have splintered off and taken certain groups of users with them. They don’t really coexist in the same spaces. I could see OpenAI creating some kind of social-media site that’s a version of LessWrong, the rationalist community that does a lot of posting about AI. That would make sense and feel natural, because it would be built around this idea of supporting what OpenAI is doing. 
But people aren’t going to discuss the New York Mets on OpenAI’s platform.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;What’s the bigger picture here?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Charlie: &lt;/b&gt;I’m sure OpenAI looks at ChatGPT as this runaway success—which it is—and they’re trying to figure out ways to use it. They’re trying to figure out ways to innovate it, to make it better. And I think what they see and feel is that ChatGPT should be this wrapper for the internet, the thing that covers all of it and is the guide for it. I think OpenAI wants to be the browser for the internet going forward—the interface to rule over all of it—and maybe they feel like this is a way in. If they can bring people in and get real-time discussion of news and culture, and not only have that information but &lt;i&gt;use&lt;/i&gt; it because there’s a vibrant ecosystem there, that helps it be that wrapper layer for the internet.&lt;/p&gt;&lt;p&gt;But I think that would be misguided. Looking at this, I have this feeling of, what if ChatGPT was the worst thing to happen to OpenAI? What if they’re huge victims of the success of this product? ChatGPT wasn’t supposed to be a successful product. It was a proof of concept of these large language models being able to effectively spit out and mimic human prose and interactivity. To their great surprise, &lt;a href="https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/?utm_source=feed"&gt;as we’ve reported at &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, it was a major success, and that goes to what you said earlier—that nothing’s ever enough for Silicon Valley. Once you’ve demonstrated some success, you must iterate on that. You must 10x it. Otherwise, the train is stalling.&lt;/p&gt;&lt;p&gt;And so if OpenAI’s original goal was to create this super intelligence, the success of ChatGPT could be looked at as this huge wrench in the gears of that operation.
Now they have this consumer product that millions of people use that is making them a little bit of money—in the grand scheme of things, not really that much for them—and they need to figure out a way to continue that success. It feels to me like a big distraction. Because if you’re Sam, if you really were obsessed with AGI, wouldn’t you rather be quietly trying to build it?&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Speaking of OpenAI product launches, the company this week released two new models, o3 and o4-mini, which it called its “smartest” ever (in &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;typical fashion&lt;/a&gt;). As usual, there’s a lot of hype. But this release may be an intriguing step for scientific research in particular, and we’ll have more to share on that in the next edition of &lt;i&gt;Atlantic &lt;/i&gt;Intelligence.&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/dcWacBomok1RyhpduyaNmTBAegw=/media/newsletters/2025/04/AI_frame_chatgbt/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The Curse of ChatGPT</title><published>2025-04-18T14:19:00-04:00</published><updated>2025-04-18T15:05:57-04:00</updated><summary type="html">Success demands more success.</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/04/was-chatgpt-the-worst-thing-to-happen-to-openai/682509/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-682357</id><content type="html">&lt;p dir="ltr"&gt;The tariff apocalypse is upon us. Should you buy an iPad?&lt;/p&gt;&lt;p&gt;Some people, it seems, have answered with a resounding yes. 
&lt;em&gt;Bloomberg&lt;/em&gt; &lt;a href="https://www.bloomberg.com/news/articles/2025-04-07/apple-customers-dash-to-stores-to-buy-iphones-ahead-of-tariffs"&gt;reported&lt;/a&gt; yesterday that at some Apple stores, “the atmosphere was like the busy holiday season.” Fearing that the price of electronics will increase as a result of President Donald Trump’s tariffs, people are rushing to purchase stuff. If the economy must collapse, at least let it do so after you have obtained a new tablet for $599 plus tax.&lt;/p&gt;&lt;p dir="ltr"&gt;The panic-buying is a little funny. First, because if we are on the eve of a global recession, we have bigger things to worry about than new gizmos. Second, because we are still in the haze of a Trump pseudo-reality. The major tariffs will not hit until tomorrow, assuming there isn’t some unexpected reversal in the next several hours, and no one knows for certain how any of this will shake out.&lt;/p&gt;&lt;p dir="ltr"&gt;Apple, in particular, is a good case study for the moment. Consumer electronics—and especially smartphones—illustrate perfectly the gulf between the president’s America-first agenda and the inescapable realities of a globalized supply chain. Many articles have suggested that electronics will become much more expensive if the tariffs stick—perhaps by hundreds of dollars. They certainly will become more expensive for companies like Apple to make. Although the company may have clever ways of reducing its costs—by shipping devices from &lt;a href="https://www.wsj.com/tech/apple-iphone-production-china-tariffs-6cc37f40"&gt;India&lt;/a&gt; rather than China, for example—it’s not hard to imagine some of the expense being passed down to consumers. (Apple did not respond to a request for comment.)&lt;/p&gt;&lt;p dir="ltr"&gt;This is, however, far from certain. 
If America enters a recession, depressed demand might lead to some nice markdowns, which you might take advantage of if you’re not financially preoccupied with keeping a roof over your head or food in your kitchen. The Trump administration could also produce some kind of carve-out for Apple and other American tech firms to do business without paying the tax. (Apple, like other major tech companies, has cozied up to Trump; the company’s CEO, Tim Cook, was &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/?utm_source=feed"&gt;present for Trump’s inauguration in January&lt;/a&gt;.) When the previous Trump administration announced tariffs on consumer goods imported from China, Apple products were exempt. This was in the interest of the president’s America-first agenda. As Neil Cybart, an Apple analyst, explained in a recent &lt;a href="https://aboveavalon.ghost.io/march-31st-2025-apple-and-elon-musk-battle-over-satellites-foxconns-iphone-production-in-india-dmas-political-undertones-in-spotlight/"&gt;edition&lt;/a&gt; of his Above Avalon newsletter, “Any action that stands to hurt the iPhone may benefit non-U.S. smartphone manufacturers.” At a certain point, phones are phones, and people will turn elsewhere: Huawei’s latest is covered in vegan leather, has three folding screens, and makes my iPhone 15 look like meemaw’s clunky old Mr. Coffee.&lt;/p&gt;&lt;p dir="ltr"&gt;Trump apparently wants the tariffs to revitalize American manufacturing. He is &lt;a href="https://truthsocial.com/@realDonaldTrump/posts/114297149364879462"&gt;excited&lt;/a&gt;, for instance, that Wyoming hamburgers will once again compete with Australian beef. 
His commerce secretary, Howard Lutnick, &lt;a href="https://finance.yahoo.com/news/howard-lutnick-says-army-millions-023014445.html"&gt;said&lt;/a&gt; in an interview over the weekend that “the army of millions and millions of human beings screwing in little screws to make iPhones—that kind of thing is going to come to America.” (Never mind that Lutnick immediately said that the work would be “automated.”)&lt;/p&gt;&lt;p dir="ltr"&gt;But global supply chains are a tangle that no presidential administration could easily unwind. In 2016, the journalist Konstantin Kakaes &lt;a href="https://www.technologyreview.com/2016/06/09/159456/the-all-american-iphone/"&gt;explored&lt;/a&gt; the possibility of an “All-American iPhone” for &lt;em&gt;MIT Technology Review&lt;/em&gt; and determined that such a thing is not really possible: “The iPhone is a symbol of American ingenuity,” he wrote, “but it’s also a testament to the inescapable realities of the global economy.” As things stand, no one country is capable of producing all of the rare-earth elements that go into a single device, and components such as screen glass and processors are also sourced from all over the world. China &lt;a href="https://www.zimtu.com/chinas-rare-earth-dominance-and-what-it-means-for-the-world/"&gt;refines&lt;/a&gt; 90 percent of the world’s rare earths; the tantalum in your phone is probably sourced from a mine in the Democratic Republic of the Congo; its processor likely came from Taiwan. If you want to understand globalization, pay attention to the iPhone.&lt;/p&gt;&lt;p dir="ltr"&gt;It would not take an act of God to change this situation, but it might take more than Stephen Miller. America would need an iron resolve, lots of money, and lots of time. Rare-earth deposits exist in the United States—but finding, extracting, and refining them would be tremendously costly and dangerous work. Toxic by-products and environmental devastation would ensue. 
Americans have been happy to ignore these realities when the work is done elsewhere. Would they be so happy if it were done here? Add to that the need to build factories and fabrication plants for microchips. The effort would demand billions upon billions of dollars of investment over many years, still may not accommodate demand for these products, and would make them drastically more expensive.&lt;/p&gt;&lt;p dir="ltr"&gt;In theory, moving &lt;em&gt;some&lt;/em&gt; greater amount of electronics manufacturing to U.S. soil could be a beneficial strategy. American technology firms are highly reliant on chips imported from Asia, for instance, which means the supply is vulnerable to disruption. “It would be hard to survive a total cutoff of chips from Taiwan,” Duane Boning, an electrical-engineering and computer-science professor at MIT, told me. The &lt;a href="https://www.theatlantic.com/technology/archive/2022/12/tsmc-apple-memory-chip-production-us-china-taiwan-relations/672593/?utm_source=feed"&gt;CHIPS Act&lt;/a&gt;, signed into law by Joe Biden, has already funded some of this manufacturing in the United States. But tariffs are unlikely to help these efforts, Boning said. (If anything, the costs they would impose on various components might only &lt;a href="https://www.wired.com/story/trump-tariffs-impact-semiconductors-chips/"&gt;hurt&lt;/a&gt;.)&lt;/p&gt;&lt;p data-id="injected-recirculation-link" dir="ltr"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2022/12/tsmc-apple-memory-chip-production-us-china-taiwan-relations/672593/?utm_source=feed"&gt;Read: Just how badly does Apple need China?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Consumer electronics are only one way to look at Trump’s tariffs. But keep in mind that Apple, Microsoft, Nvidia, Amazon, and Alphabet are the five most valuable companies in the world—all American, all with products that will be caught in this mess. 
If the plan doesn’t make sense for them—if the plan will actually hurt their business—then what sense does it make at all?&lt;/p&gt;&lt;p dir="ltr"&gt;That said, sense-making is a fool’s errand in this era. What we can do is deal with what’s in front of us. So, sure, buy the new phone or tablet if you have to. But also ask yourself: Do you really have to? If Apple’s prices do rise, the company’s sales could be an early indicator of whether Americans are at all willing to renegotiate their unhealthy relationship with consumerism. For the past decade, technology companies have conditioned buyers to make purchasing expensive gadgets an annual affair through services such as Apple’s iPhone Upgrade Program. (There is a reason that each iPhone model ends in a number: People intuitively understand that a 17 is better than a 16 is better than a 15, and so on.) Hundreds of millions of new devices are shipped every year. Not as many old ones are recycled. This is not wonderful: Consumers are taking their cues from the most moneyed corporations to have ever existed, cycling through handheld computers at a rapid clip because they come in new colors and with slightly reconfigured screens.&lt;/p&gt;&lt;p dir="ltr"&gt;I’m not innocent of the vanity gadget upgrade. (I recently sacrificed a mostly fine old Kindle for a mere $5 trade-in credit on a newer model.) But I also know that those &lt;a href="https://onezero.medium.com/our-tech-addiction-is-creating-a-toxic-soup-fdeb36bdcc51"&gt;toxic rare-earth pools&lt;/a&gt; exist in great part because of the endless production of phones, computers, tablets, game consoles, and televisions. Meanwhile, many of these gadgets last longer than they might have even a few years ago, thanks to higher-quality materials, and are easier to fix due to the growing popularity of &lt;a href="https://www.404media.co/all-50-states-have-now-introduced-right-to-repair-legislation/"&gt;“right to repair”&lt;/a&gt; laws. 
The device in your hand would be a miracle in any other age. Nothing is certain right now. Chaos is everywhere. At least you can hold on to this.&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/oGCNMv7PM1Cn6pLcdjvSpJI3wgo=/media/img/mt/2025/04/2025_04_Phones/original.jpg"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">Buy That New Phone Now</title><published>2025-04-08T16:10:00-04:00</published><updated>2025-04-08T16:56:41-04:00</updated><summary type="html">But only if you absolutely have to.</summary><link href="https://www.theatlantic.com/technology/archive/2025/04/trump-tariffs-iphone-prices/682357/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-682235</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. 
I’ve been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words.&lt;/p&gt;&lt;p&gt;Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki’s animated films at Studio Ghibli. (Think &lt;i&gt;Kiki’s Delivery Service&lt;/i&gt;, &lt;i&gt;My Neighbor Totoro&lt;/i&gt;, and &lt;i&gt;Spirited Away&lt;/i&gt;.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute).&lt;/p&gt;&lt;p&gt;Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off of another company’s intellectual property, pointed to a documentary clip of Miyazaki calling AI an “insult to life itself,” and mused about the technology’s threats to human creativity. All of these conversations are valid, yet they didn’t feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn’t make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take.&lt;/p&gt;&lt;p&gt;&lt;i&gt;This interview has been edited and condensed.&lt;/i&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;Damon Beres: &lt;/b&gt;Let’s start with the very basic question. Are the Studio Ghibli images evil?&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian Bogost: &lt;/b&gt;I don’t think they’re evil. They might be stupid. You could construe them as ugly, although they’re also beautiful. You could construe them as immoral or unseemly.&lt;/p&gt;&lt;p&gt;If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? 
We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it’s Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that’s not just permissible, but good and even righteous in contemporary culture.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that’s a problem.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;It’s not different in nature to have a machine do a copy of a style of an artist than to have a person do a copy of a style of an artist. But there is a difference in scale: With AI, you can make them fast and you can make lots of them. That’s changed people’s feelings about the matter.&lt;/p&gt;&lt;p&gt;I read an article about copyright and style—&lt;i&gt;you can’t copyright a style, &lt;/i&gt;it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they’re posting their own fan art and get a takedown request, then they’re like, &lt;i&gt;Screw you, I’m just trying to spread the gospel of your creativity&lt;/i&gt;. But those same people might support a copyright claim against a generative-AI tool, even though it’s doing the same thing.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;As I’ve experimented with these tools, I’ve realized that the purpose isn’t to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I’m putting a dumb thought into a program and seeing what comes out. 
There’s a low-effort delight and playfulness.&lt;/p&gt;&lt;p&gt;But some people have made this point that it’s insulting because it’s violating Studio Ghibli co-founder Hayao Miyazaki’s beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It’s distasteful because it’s the White House tweeting a cruel meme about a person’s life.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;You brought up something important, this embrace of the intentional fallacy—the idea that a work’s meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It’s perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren’t that many of them, and they have a very high-touch execution. Even if we’re not making movies, the aesthetic being everywhere and the aesthetic being cheap cuts against that.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;That’s a credible theory. But you’re still in intentional-fallacy territory, right? &lt;i&gt;Studio Ghibli has made a deliberate effort to tend and curate their output, and they don’t just make a movie every year, and I want to respect that as someone influenced by that work.&lt;/i&gt; And that’s weird to me.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;What we haven’t talked about is the Ghibli image as a kind of meme. 
They’re not just spreading because they’re Ghibli images: They’re spreading because they’re &lt;i&gt;AI-generated &lt;/i&gt;Ghibli images.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;This is a distinctive style of meme based less on the composition of the image itself or the text you put on it than on the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen; you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon:&lt;/b&gt; You can’t really imagine image generators in a paradigm where there’s no social media.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian:&lt;/b&gt; What would you do with them, show them to your mom? These are things that are made to be posted, and that’s where their life ends.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;Maybe that’s what people don’t like, too—that it’s nakedly transactional.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;Exactly—you’re engagement baiting. These days, that accusation is equivalent to selling out.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;It’s this generation’s &lt;i&gt;poser&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;&lt;i&gt;Engagement baiter&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Damon: &lt;/b&gt;Leave me with a concluding thought about how people should react to these images.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ian: &lt;/b&gt;They ought to be more curious. 
This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that’s a real shame.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/9n81EnusW40h_5GmIIoUh12gNgI=/media/newsletters/2025/03/AI_Ghibli/original.jpg"><media:credit>The Atlantic. Source: Studio Ghibli / Alamy</media:credit></media:content><title type="html">Hayao Miyazaki’s AI Nightmare</title><published>2025-03-28T18:11:00-04:00</published><updated>2025-03-28T18:11:37-04:00</updated><summary type="html">People are using ChatGPT to create Studio Ghibli–style images. And the backlash is huge.</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/03/studio-ghibli-memes-openai-chatgpt/682235/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-682053</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;President Donald Trump’s administration is embracing AI. According to reports, agencies are using the technology to identify places to cut costs, figure out which employees can be terminated, and comb through social-media posts to determine whether student-visa holders may support terror groups. And as my colleague Matteo Wong &lt;a href="https://www.theatlantic.com/technology/archive/2025/03/gsa-chat-doge-ai/681987/?utm_source=feed"&gt;reported this week&lt;/a&gt;, employees at the General Services Administration are being urged to use a new chatbot to do their work, while simultaneously hearing from officials that their jobs are far from secure; Thomas Shedd, the director of the GSA division that produced the AI, told workers that the department will soon be “at least 50 percent smaller.”&lt;/p&gt;&lt;p&gt;This is a haphazard leap into a future that tech giants have been pushing us toward for years. Work is being automated, people are losing their jobs, and it’s not at all clear that any of this will make the government more efficient, as Elon Musk and DOGE have promised.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Illustration of a microchip with a government building inscribed" height="374" src="https://cdn.theatlantic.com/media/img/posts/2025/03/AI_3_14/db9f23b19.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic. Sources: pressureUA / Getty; Thanasis / Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;DOGE’s Plans to Replace Humans With AI Are Already Under Way&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Matteo Wong&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;A new phase of the president and the Department of Government Efficiency’s attempts to downsize and remake the civil service is under way. The idea is simple: use generative AI to automate work that was previously done by people.&lt;/p&gt;

&lt;p&gt;The Trump administration is testing a new chatbot with 1,500 federal employees at the General Services Administration and may release it to the entire agency as soon as this Friday—meaning it could be used by more than 10,000 workers who are responsible for more than $100 billion in contracts and services. This article is based in part on conversations with several current and former GSA employees with knowledge of the technology, all of whom requested anonymity to speak about confidential information; it is also based on internal GSA documents that I reviewed, as well as the software’s code base, which is visible on GitHub.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/03/gsa-chat-doge-ai/681987/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/03/elon-musk-human-meme-stock/682023/?utm_source=feed"&gt;&lt;b&gt;Elon Musk looks desperate:&lt;/b&gt;&lt;/a&gt;&lt;b&gt; &lt;/b&gt;“Musk has wagered the only thing he can’t easily buy back—the very myth he created for himself,” Charlie Warzel writes.&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/03/the-elon-musk-way-move-fast-and-destroy-democracy/681937/?utm_source=feed"&gt;&lt;b&gt;Move fast and destroy democracy&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;“Silicon Valley’s titans have decided that ruling the digital world is not enough,” Kara Swisher writes.&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The internet can still be good. In a &lt;a href="https://www.theatlantic.com/magazine/archive/2025/04/reddit-culture-community-credibility/681765/?utm_source=feed"&gt;story for &lt;i&gt;The Atlantic&lt;/i&gt;’s April issue&lt;/a&gt;, my colleague Adrienne LaFrance explores how Reddit became arguably “the best platform on a junky web.” Reading it in between editing stories about AI, I was struck by how much of what Adrienne described was fundamentally &lt;i&gt;human&lt;/i&gt;: “There is a subreddit where violinists gently correct one another’s bow holds, a subreddit for rowers where people compare erg scores, and a subreddit for people who are honest-to-God allergic to the cold and trade tips about which antihistamine regimen works best,” she writes. “One subreddit is for people who encounter cookie cutters whose shapes they cannot decipher. The responses reliably entail a mix of sincere sleuthing to find the answer and ridiculously creative and crude joke guesses.” How wholesome!&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/rikR2_Z7y-S5EyPFy27h7DNedsg=/media/img/mt/2025/03/Atlantic_AI_10/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The AI Era of Governing Has Arrived</title><published>2025-03-14T15:22:00-04:00</published><updated>2025-03-14T15:22:06-04:00</updated><summary type="html">DOGE and its allies are forcing the civil service into a chaotic future.</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/03/the-ai-era-of-governing-has-arrived/682053/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-681879</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i 
data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;President Donald Trump has been clear about his vision for America as an AI superpower, signing in his first week an executive order geared toward helping AI “promote human flourishing, economic competitiveness, and national security.” In order to achieve this goal, the Trump administration has forged a relationship with OpenAI and its CEO, Sam Altman—but that could complicate things for Elon Musk.&lt;/p&gt;&lt;p&gt;As my colleague Matteo Wong &lt;a href="https://www.theatlantic.com/technology/archive/2025/02/sam-altman-elon-musk-trump/681838/?utm_source=feed"&gt;wrote for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;&lt;i&gt; &lt;/i&gt;on Wednesday, the two technologists are bitter rivals. Musk was one of OpenAI’s initial investors and served on the company’s board, but he left in 2018. Ever since ChatGPT made OpenAI a household name, Musk has routinely taken potshots at the company, calling its chatbot too “woke” and using the nickname “Scam Altman” to refer to its CEO. 
Meanwhile, he’s launched his own AI firm, xAI, whose products have lagged behind OpenAI’s.&lt;/p&gt;&lt;p&gt;Musk and Altman may both need Trump’s blessing to unlock considerable resources for their projects and to stave off inconvenient regulations. “Anything that OpenAI might gain from Trump, xAI could reap as well,” Matteo writes. The companies are in competition with each other, so any advantage that one gets may be to the detriment of the other, building up a tension between Musk and Altman that could eventually snap.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Illustration of Elon Musk, Sam Altman, and Donald Trump's faces on a triangle" height="374" src="https://cdn.theatlantic.com/media/img/posts/2025/02/AI_2_28/3a9c6dc1b.gif" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic. Sources: Sebastian Gollnow / picture alliance / Getty; Shawn Thew / AFP; Chip Somodevilla / Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;How Sam Altman Could Break Up Elon Musk and Donald Trump&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Matteo Wong&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;The rivalry between Sam Altman and Elon Musk is entering its &lt;i&gt;Apprentice &lt;/i&gt;era. Both men have the ambition to redefine how the modern world works—and both are jockeying for President Donald Trump’s blessing to accelerate their plans.&lt;/p&gt;

&lt;p&gt;Altman’s company, OpenAI, as well as Musk’s ventures—which include SpaceX, Tesla, and xAI—all depend to some degree on federal dollars, permits, and regulatory support. The president could influence whether OpenAI or xAI produces the next major AI breakthrough, whether Musk can succeed in sending a human to Mars, and whether Altman’s big bet on &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-microsoft-nuclear-three-mile-island/679988/?utm_source=feed"&gt;nuclear energy&lt;/a&gt;, and &lt;a href="https://www.bloomberg.com/news/articles/2024-07-18/sam-altman-s-helion-energy-promises-fusion-power-by-2028"&gt;fusion reactors&lt;/a&gt; in particular, pans out.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/02/sam-altman-elon-musk-trump/681838/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/02/trump-federal-workers-self-censorship/681781/?utm_source=feed"&gt;&lt;b&gt;“Terrified” federal workers are clamming up&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; Karen Hao recently spoke with more than a dozen federal workers about the culture of fear and paranoia that they say is spreading through their agencies under the Trump administration. They allege that they are being hindered from doing their work—some of which touches on risks emerging from AI. “Federal workers I spoke with now say that neither they nor their colleagues want to be associated in any way with working on or promoting disinformation research,” Hao writes, “even as they are aware that the U.S. government’s lack of visibility into such networks could create a serious national vulnerability, especially as AI gives state-backed operations powerful upgrades.”&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/zwedFZI4J83wvpvBDJwq4iAtbfg=/media/img/mt/2025/02/atlantic_AI2/original.gif"><media:credit>Illustration by The Atlantic. Sources: Sebastian Gollnow / picture alliance / Getty; Shawn Thew / AFP; Chip Somodevilla / Getty.</media:credit></media:content><title type="html">The Complicated Relationship Between Sam Altman and Donald Trump</title><published>2025-02-28T16:34:00-05:00</published><updated>2025-02-28T16:36:20-05:00</updated><summary type="html">And what it might mean for Elon Musk’s ambitions</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/02/the-complicated-relationship-between-sam-altman-and-donald-trump/681879/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-681616</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;OpenAI has said that it believes that DeepSeek, the Chinese start-up behind the shockingly powerful AI model that &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/deepseek-china-ai/681481/?utm_source=feed"&gt;launched&lt;/a&gt; last month, may have ripped off its technology. The irony is rich: We’ve known for some time that generative AI tends to be built on stolen media—&lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/?utm_source=feed"&gt;books&lt;/a&gt;, &lt;a href="https://www.theatlantic.com/technology/archive/2024/11/opensubtitles-ai-data-set/680650/?utm_source=feed"&gt;movie subtitles&lt;/a&gt;, &lt;a href="https://www.theatlantic.com/technology/archive/2023/10/openai-dall-e-3-artists-work/675519/?utm_source=feed"&gt;visual art&lt;/a&gt;. 
The companies behind the technology don’t seem to care much about the creatives who produced that training data in the first place; Sam Altman said early last year that it would be “impossible” to make powerful AI tools without copyrighted material, and that he feels the law is on his side.&lt;/p&gt;&lt;p&gt;If DeepSeek did indeed rip off OpenAI, it would have done so through a process called “distillation.” As Michael Schuman explained in an article for &lt;i&gt;The Atlantic&lt;/i&gt; this week, “In essence, the firm allegedly bombarded ChatGPT with questions, tracked the answers, and used those results to train its own models. When asked ‘What model are you?’ DeepSeek’s recently released chatbot at first answered ‘ChatGPT’ (but it no longer seems to share that highly suspicious response).” In other words, DeepSeek is impressive—about as capable as other cutting-edge models, and developed at a much lower cost—but it may be so only because it was effectively built on top of existing work. (DeepSeek did not respond to Schuman’s request for comment.)&lt;/p&gt;&lt;p&gt;“What DeepSeek is accused of doing is nothing like hacking, but it’s still a violation of OpenAI’s terms of service,” Schuman writes. “And if DeepSeek did indeed do this, it helped the firm to create a competitive AI model at a much lower cost than OpenAI.” (&lt;i&gt;The Atlantic&lt;/i&gt; recently entered into a corporate partnership with OpenAI.) Whether or not DeepSeek distilled OpenAI’s technology, others will likely find a way to do the same thing. We may be approaching the era of the AI copycat. For a time, it took immense wealth—not to mention energy—to train powerful new AI models. That may no longer be the case.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="The DeepSeek logo against a grid background" height="374" src="https://cdn.theatlantic.com/media/img/posts/2025/02/AI_2_7/b7546908f.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;DeepSeek and the Truth About Chinese Tech&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Michael Schuman&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;When the upstart Chinese firm DeepSeek revealed its latest AI model in January, Silicon Valley was impressed. The engineers had used fewer chips, and less money, than most in the industry thought possible. Wall Street panicked and tech stocks dropped. Washington worried that it was losing ground in a vital strategic sector. Beijing and its supporters concurred: “DeepSeek has shaken the myth of the invincibility of U.S. high technology,” one nationalist commentator, Hu Xijin, crowed on Chinese social media.&lt;/p&gt;

&lt;p&gt;Then, however, OpenAI, which operates ChatGPT, revealed that it was investigating DeepSeek for having allegedly trained its chatbot using ChatGPT. China’s Silicon Valley–slayer may have mooched off Silicon Valley after all.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/international/archive/2025/02/deepseek-ai-china-tech/681553/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/ideas/archive/2025/02/trump-administration-voter-perception/681598/?utm_source=feed"&gt;&lt;b&gt;Americans are trapped in an algorithmic cage&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;“The private companies in control of social-media networks possess an unprecedented ability to manipulate and control the populace,” Adam Serwer writes.&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/02/elon-musk-doge-security/681600/?utm_source=feed"&gt;&lt;b&gt;The government’s computing experts say they are terrified&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;“Four IT professionals lay out just how destructive Elon Musk’s incursion into the U.S. government could be,” Charlie Warzel and Ian Bogost report.&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jKthet5rJSzEa27RFo3cO-uX2cw=/media/img/mt/2025/02/AI_frame_deepseek/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">What Is AI Distillation?</title><published>2025-02-07T17:27:00-05:00</published><updated>2025-02-07T17:27:14-05:00</updated><summary type="html">And what does it mean if DeepSeek did it?</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/02/what-is-ai-distillation/681616/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-681384</id><content type="html">&lt;p&gt;Among all the images of people cozying up to President Donald Trump at today’s inauguration, one in particular will be worth remembering over the next four years. During the ceremony in the Capitol Rotunda, you could see some of the most powerful men on the planet positioned immediately behind members of the Trump family on the dais. There’s Tiffany, there’s Eric, there are Ivanka and Don Jr., and then, smiling and clapping right alongside the family, there are the tech titans: Mark Zuckerberg, Jeff Bezos, Sundar Pichai, Elon Musk, and Tim Cook. In visual proximity, they’re as close to honorary Trumps as anyone could be.&lt;/p&gt;&lt;p&gt;The power that each of these men represents may be rivaled by only the presidency itself. Zuckerberg is the CEO of Meta; Bezos founded Amazon and Blue Origin and owns &lt;em&gt;The Washington Post&lt;/em&gt;; Pichai runs Google; Musk heads Tesla and SpaceX and owns X; Cook is Apple’s CEO. 
TikTok’s CEO, Shou Zi Chew, was also in attendance in a back row, and OpenAI’s CEO, Sam Altman, was &lt;a href="https://deadline.com/2025/01/trump-inauguration-live-updates-1236260202/"&gt;reportedly&lt;/a&gt; seated in the overflow crowd in Emancipation Hall. These business leaders directly control the tools that billions of people around the world use to communicate, to receive information, to be entertained, to navigate and understand the world. Even an incomplete list of products overseen by these people is striking: Facebook, Instagram, WhatsApp, Threads, X, Gmail, Google Search, Google Docs, Android, iPhones, iPads, Macs, iMessage, Starlink, ChatGPT, TikTok—the world’s foremost technology platforms, in line behind Donald Trump.&lt;/p&gt;&lt;p&gt;&lt;br&gt;
It’s not unusual for business leaders to rub shoulders with presidents and other elected officials. But this was something else: Inauguration seats closest to an incoming president &lt;a href="https://www.inaugural.senate.gov/inaugural-platform/"&gt;tend to be reserved for&lt;/a&gt; a president’s family and figures in politics, and the tech executives on Trump’s dais have been hard at work ingratiating themselves into his universe. In the lead-up to today’s events, they have demonstrated a remarkable spinelessness. Most attempted to curry the incoming president’s favor by giving million-dollar donations to his inaugural fund—in effect, kissing the ring. They gave relatively little, if at all, to Joe Biden’s fund; &lt;a href="https://www.politico.com/news/2021/01/11/facebook-political-spending-capitol-violence-457546"&gt;some&lt;/a&gt; run companies that had previously declared they would reassess their political donations following the January 6 insurrection—a stance that clearly did not stick. The events of that day have been &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/january-6-justification-machine/681215/?utm_source=feed"&gt;memory-holed&lt;/a&gt;. Now Zuckerberg and Musk have reoriented their products in direct service of the &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/mark-zuckerberg-wants-be-elon-musk/681248/?utm_source=feed"&gt;MAGA movement&lt;/a&gt;, disposing of content-moderation policies and proclaiming a supposed commitment to free speech that serves the loudest and most odious users. TikTok exalted Trump yesterday when it brought its service back online following a brief &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/tiktok-shutdown/681374/?utm_source=feed"&gt;shutdown&lt;/a&gt;: “As a result of President Trump’s efforts, TikTok is back in the U.S.!” the app wrote in a pop-up sent to users. 
Less than five years ago, Trump had issued an &lt;a href="https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-addressing-threat-posed-tiktok/"&gt;executive order&lt;/a&gt; that would have effectively banned the app, calling it a threat to national security.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Regardless of past policies and stated principles, it seems that, as always, business is business. Each tech leader on Trump’s dais has a clear financial interest in courting the president. Meta, Google, and Apple all face antitrust suits; TikTok could still be shut down in the United States; and OpenAI, like other generative-AI firms, is doing whatever it can to avoid growth-limiting regulation. Musk’s companies have &lt;a href="https://www.nytimes.com/2024/10/20/us/politics/elon-musk-federal-agencies-contracts.html"&gt;been under numerous recent investigations or reviews&lt;/a&gt; by federal regulators. Plus, he will need the support of the government to “plant the Stars and Stripes on the planet Mars,” as Trump put it in his speech today.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The tech industry has officially placed itself in the palm of Trump’s hand. What will happen the next time the FBI wants to get into a Facebook account or an encrypted iPhone—when the definition of a political threat has changed based on the president’s whims? What will happen if Google Search delivers search results that are at odds with Trump’s agenda?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;What cannot be forgotten is that these men—who for years have behaved as if they answer to no one—appear to stand for little more than the accrual of wealth and power, regardless of what it means for the people who use their products. 
Today, they bent the knee.&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/CS0_rbmgnzLOLA32aWg6Tih1Ob0=/media/img/mt/2025/01/2025_01_20_tech_2194374996/original.jpg"><media:credit>Saul Loeb / AFP / Getty</media:credit></media:content><title type="html">Billions of People in the Palm of Trump’s Hand</title><published>2025-01-20T15:29:00-05:00</published><updated>2025-01-21T11:33:00-05:00</updated><summary type="html">Today leaders of the world’s largest technology platforms kissed the president’s ring.</summary><link href="https://www.theatlantic.com/technology/archive/2025/01/trump-musk-zuckerberg-silicon-valley-kisses-the-ring/681384/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2025:50-681362</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;TikTok is an AI app. Not an “ask a bot to do your homework” kind of AI app, but an AI app all the same: Its algorithm processes and acts upon huge amounts of data to keep users engaged. 
Without that fundamental, &lt;a href="https://www.theatlantic.com/technology/archive/2021/06/your-tiktok-feed-embarrassing/619257/?utm_source=feed"&gt;freakishly well-tuned&lt;/a&gt; technology, TikTok wouldn’t really be anything at all—just another video or shopping platform.&lt;/p&gt;&lt;p&gt;The app is set to be banned in the United States, following a &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/tiktok-exodus-rednote-instagram/681344/?utm_source=feed"&gt;decision&lt;/a&gt; by the Supreme Court earlier today. But the legacy of its algorithm will live on, as my colleague &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/tiktok-already-won/681343/?utm_source=feed"&gt;Hana Kiros wrote in an article for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;&lt;i&gt; &lt;/i&gt;yesterday: “Although it was not the first app to offer an endless feed, and it was certainly not the first to use algorithms to better understand and target its users, TikTok put these ingredients together like nothing else before it.” The app was so effective—so &lt;i&gt;sticky&lt;/i&gt;—that every meaningful competitor tried to copy its formula. Now TikTok-like feeds have been integrated into Instagram, Facebook, Snapchat, YouTube, X, even LinkedIn.&lt;/p&gt;&lt;p&gt;Today, AI is frequently conflated with &lt;i&gt;generative &lt;/i&gt;AI because of the way ChatGPT has captured the world’s imagination. But generative AI is still a largely speculative endeavor. The most widespread and influential AI programs are the less flashy ones quietly whirring away in your pocket, influencing culture, business, and (in this case) matters of national security in very real ways.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Illustrated collage of various social-app iconography" height="374" src="https://cdn.theatlantic.com/media/img/posts/2025/01/AI_1_17/740db42e2.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The Internet Is TikTok Now&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Hana Kiros&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;There are times when, deep into a scroll through my phone, I tilt my head and realize that I’m not even sure what app I’m on. A video takes up my entire screen. If I slide my finger down, another appears. The feeling is disorienting, so I search for small design cues at the margins of my screen. The thing I’m staring at could be TikTok, or it could be one of any number of other social apps that look exactly like it.&lt;/p&gt;

&lt;p&gt;Although it was not the first app to offer an endless feed, and it was certainly not the first to use algorithms to better understand and target its users, TikTok put these ingredients together like nothing else before it. It amassed what every app wants: many users who spend hours and hours scrolling, scrolling, scrolling (ideally past ads and products that they’ll buy). Every other major social platform—Instagram, Facebook, Snapchat, YouTube, X, even LinkedIn—has copied TikTok’s format in recent years. The app might get banned in the United States, but we’ll still be living in TikTok’s world.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2025/01/tiktok-already-won/681343/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2021/06/your-tiktok-feed-embarrassing/619257/?utm_source=feed"&gt;&lt;b&gt;I’m scared of the person TikTok thinks I am&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: “&lt;/b&gt;TikTok’s recommendation algorithm is known for its accuracy and even its ‘magic,’” Kaitlyn Tiffany wrote for &lt;i&gt;The Atlantic &lt;/i&gt;in 2021. “What does it mean if the videos it picks for you are totally disgusting?”&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/ideas/archive/2024/03/tiktok-bill-foreign-influence/677806/?utm_source=feed"&gt;&lt;b&gt;Critics of the TikTok bill are missing the point&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;“America has a long history of shielding infrastructure and communication platforms from foreign control,” Zephyr Teachout wrote in March.&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Algorithmic feeds obviously have a profound effect on how people receive information today. That can be troubling in times of disaster and political strife. As Charlie Warzel &lt;a href="https://www.theatlantic.com/technology/archive/2025/01/watch-duty-la-fires/681333/?utm_source=feed"&gt;wrote&lt;/a&gt; for &lt;i&gt;The Atlantic &lt;/i&gt;yesterday, “The experience of logging on and consuming information through the algorithmic morass of our feeds has never felt more dispiriting, commoditized, chaotic, and unhelpful than it does right now.”&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/pOe3IA3rFttBDcOXh4V5mZp-R0w=/media/img/mt/2025/01/AI_frame_tiktok_algorithm_wars/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">TikTok Will Never Die</title><published>2025-01-17T16:43:00-05:00</published><updated>2025-01-17T16:43:29-05:00</updated><summary type="html">Even with a ban, its algorithm’s influence will live on.</summary><link href="https://www.theatlantic.com/newsletters/archive/2025/01/tiktok-will-never-die/681362/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-681171</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Thank you for reading &lt;em&gt;Atlantic&lt;/em&gt; Intelligence this year: It’s been a pleasure appearing in your inbox each week, and we can’t wait to bring you new coverage in 2025. Collected below are some of the standout stories we published this year, which explore new frontiers for AI and the risks that come with advancement.&lt;/p&gt;&lt;p dir="ltr"&gt;We hope you enjoy some of these great reads during your holiday downtime. See you again in the new year!&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read&lt;/b&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/terence-tao-ai-interview/680153/?utm_source=feed"&gt;We’re Entering Uncharted Territory for Math&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;Terence Tao, the world’s greatest living mathematician, has a vision for AI.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Matteo Wong&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/05/elevenlabs-ai-voice-cloning-deepfakes/678288/?utm_source=feed"&gt;ElevenLabs Is Building an Army of Voice Clones&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;A tiny start-up has made some of the most convincing AI voices. Are its creators ready for the chaos they’re unleashing?&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Charlie Warzel&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/?utm_source=feed"&gt;Microsoft’s Hypocrisy on AI&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;Can artificial intelligence really enrich fossil-fuel companies and fight climate change at the same time? The tech giant says yes.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Karen Hao&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/science/archive/2024/02/talking-whales-project-ceti/677549/?utm_source=feed"&gt;How First Contact With Whale Civilization Could Unfold&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;If we can learn to speak their language, what should we say?&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Ross Andersen&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/ideas/archive/2024/05/ai-dating-algorithms-relationships/678422/?utm_source=feed"&gt;The Big AI Risk Not Enough People Are Seeing&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;Beware technology that makes us less human.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Tyler Austin Harper&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/03/generative-ai-translation-education/677883/?utm_source=feed"&gt;The End of Foreign-Language Education&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;Thanks to AI, people may no longer feel the need to learn a second language.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Louise Matsakis&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/books/archive/2024/04/ai-writing-novels-mortality-limits/678167/?utm_source=feed"&gt;Would Limitlessness Make Us Better Writers?&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;AI embodies hypotheticals I can only imagine for myself. But I believe human impediments are what lead us to create meaningful art.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Rachel Khong&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/06/ai-eats-the-world/678627/?utm_source=feed"&gt;This Is What It Looks Like When AI Eats the World&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;The web itself is being shoved into a great unknown.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Charlie Warzel&lt;/em&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;The GPT Era Is Already Ending&lt;/a&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;Something has shifted at OpenAI.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;&lt;em&gt;By Matteo Wong&lt;/em&gt;&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/9_sKjekjZI7yJd2-hT3LYiZe8zI=/media/newsletters/2024/12/20241220_ai_bestofAtlanticAI/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The Nine AI Stories That Defined 2024</title><published>2024-12-27T14:00:00-05:00</published><updated>2024-12-27T14:01:56-05:00</updated><summary type="html">Read &lt;em&gt;Atlantic&lt;/em&gt; coverage of uncharted territory for math, an army of voice clones, and more.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/12/the-nine-ai-stories-that-defined-2024/681171/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-681129</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;Why does ChatGPT refuse to say the name Jonathan Zittrain? 
Anytime the bot is supposed to write those words, it simply shuts down instead, offering a blunt error message: “I’m unable to produce a response.” This has been a mystery for at least a year, and now we’re closer to some answers.&lt;/p&gt;&lt;p dir="ltr"&gt;Writing for &lt;em&gt;The Atlantic&lt;/em&gt; this week, Zittrain, a Harvard professor and the director of its Berkman Klein Center for Internet &amp;amp; Society, explores this strange phenomenon—what he calls the “&lt;a href="http://www.theatlantic.com/technology/archive/2024/12/chatgpt-wont-say-my-name/681028/?utm_source=feed"&gt;personal-name guillotine&lt;/a&gt;.” As he gleaned after reaching out to OpenAI, “There are a tiny number of names that ChatGPT treats this way, which explains why so few have been found. Names may be omitted from ChatGPT either because of privacy requests or to avoid persistent hallucinations by the AI.” Reasonable, but Zittrain never made any such privacy request, and he is unaware of any falsehoods generated by the program in response to queries about himself.&lt;/p&gt;&lt;p dir="ltr"&gt;Ultimately, the situation is a reminder that whatever mystique technology companies cultivate around their AI products, suggesting at times that they operate in unpredictable or humanlike ways, firms &lt;em&gt;do&lt;/em&gt; have an awful lot of direct control over these programs.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Animation of a ChatGPT window saying &amp;quot;I'm unable to produce a response.&amp;quot;" height="342" src="https://cdn.theatlantic.com/media/img/posts/2024/12/chatgpt2/625bac626.gif" width="609"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The Words That Stop ChatGPT in Its Tracks&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Jonathan L. Zittrain&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Jonathan Zittrain&lt;/em&gt; breaks ChatGPT: If you ask it a question for which my name is the answer, the chatbot goes from loquacious companion to something as cryptic as Microsoft Windows’ blue screen of death.&lt;/p&gt;

&lt;p&gt;Anytime ChatGPT would normally utter my name in the course of conversation, it halts with a glaring “I’m unable to produce a response,” sometimes mid-sentence or even mid-word. When I asked who the founders of the Berkman Klein Center for Internet &amp;amp; Society are (I’m one of them), it brought up two colleagues but left me out. When pressed, it started up again, and then: &lt;em&gt;zap&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/chatgpt-wont-say-my-name/681028/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li aria-level="1" dir="ltr"&gt;
	&lt;p dir="ltr" role="presentation"&gt;&lt;strong&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/autistic-teenager-chatbot/681101/?utm_source=feed"&gt;An autistic teenager fell hard for a chatbot&lt;/a&gt;:&lt;/strong&gt; “My godson was especially vulnerable to AI companions, and he is not alone,” Albert Fox Cahn writes.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li aria-level="1" dir="ltr"&gt;
	&lt;p dir="ltr" role="presentation"&gt;&lt;strong&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/07/ai-clone-chatbot-end-of-life-planning/679297/?utm_source=feed"&gt;No one is ready for digital immortality&lt;/a&gt;:&lt;/strong&gt; “Do you want to live forever as a chatbot?” Kate Lindsay writes.&lt;/p&gt;
	&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;AI may be able to help train service dogs by allowing humans to understand (and evaluate) more about potential candidates. “AI combined with sensors, for example, can look for signs of stress and other indicators” in dogs, my colleague Kristen V. Brown &lt;a href="https://www.theatlantic.com/health/archive/2024/12/cat-pet-fitness-tracker-quantified-anxiety/681012/?utm_source=feed"&gt;wrote&lt;/a&gt; for &lt;em&gt;The Atlantic&lt;/em&gt; this week, in a story about fitness trackers for pets. One researcher told her “the story of a colleague whose dog was a beta tester for one such wearable device. The technology had consistently predicted that her dog would be a good service dog, until one day it didn’t—it turned out the dog had a bad staph infection, which can become serious if left untreated.”&lt;/p&gt;&lt;p dir="ltr"&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/iMrHlCMfSXYUzbs4WCaJYy7YO48=/media/newsletters/2024/12/AtlanticAI/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">ChatGPT Won’t Say His Name</title><published>2024-12-20T15:00:00-05:00</published><updated>2024-12-24T10:19:35-05:00</updated><summary type="html">Why do certain words immediately short-circuit the program?</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/12/chatgpt-wont-say-this-name/681129/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-680913</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence 
and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;The GPT era may be coming to a close. OpenAI announced yesterday the full release of a new set of “reasoning” models called o1. As my colleague Matteo Wong explains in a new article—for which he talked with OpenAI staffers and independent AI experts, and pored over research papers—this moment represents a legitimate break with the prediction-based technology that has so far defined generative AI. The release of o1 “has provided the clearest glimpse yet at what sort of synthetic ‘intelligence’ the start-up and companies following its lead believe they are building,” Matteo writes.&lt;/p&gt;&lt;p&gt;To a casual user, the o1 models may not appear so different from the GPT series that has powered OpenAI’s famous chatbot. Type a prompt, get a response—sometimes with quirky or mystifying errors. Beneath the hood, however, o1 operates less like a “parrot” mimicking its training data and more like a maze rat, running through possible responses and automatically evaluating and revising its own output before it presents you with a final answer. This process makes o1 particularly well suited to tasks with verifiable solutions, such as testing computer code for bugs. It also requires a tremendous amount of computing power and energy.&lt;/p&gt;&lt;p&gt;OpenAI has said that the arrival of o1 puts humanity on a new path toward a supposed superintelligence. 
There’s plenty of room for doubt about that claim. But, in any case, the release and its surrounding rhetoric seem likely to fulfill a core function for the company: attracting more interest and investment at a time when generative AI’s growth appears to have otherwise stalled, and its future is still not altogether certain.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Illustration of a cybernetic person thinking" height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/12/AI_12_6/82d5e59f2.png" width="665"&gt;
&lt;figcaption class="caption"&gt;Ard Su&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The GPT Era Is Already Ending&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Matteo Wong&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;This week, OpenAI launched what its chief executive, Sam Altman, called “the smartest model in the world”—a generative-AI program whose capabilities are supposedly far greater, and more closely approximate how humans think, than those of any such software preceding it. The start-up has been building toward this moment since September 12, a day that, in OpenAI’s telling, set the world on a new path toward superintelligence.&lt;/p&gt;

&lt;p&gt;That was when the company previewed early versions of a series of AI models, known as o1, constructed with novel methods that the start-up believes will propel its programs to unseen heights. Mark Chen, then OpenAI’s vice president of research, told me a few days later that o1 is fundamentally different from the standard ChatGPT because it can “reason,” a hallmark of human intelligence. Shortly thereafter, Altman &lt;a href="https://ia.samaltman.com/"&gt;pronounced&lt;/a&gt; “the dawn of the Intelligence Age,” in which AI helps humankind fix the climate and colonize space. As of yesterday afternoon, the start-up has released the first complete version of o1, with fully fledged reasoning powers, to the public. (&lt;i&gt;The Atlantic&lt;/i&gt; recently entered into a corporate partnership with OpenAI.)&lt;/p&gt;

&lt;p&gt;On the surface, the start-up’s latest rhetoric sounds just like the hype the company has built its &lt;a href="https://openai.com/index/scale-the-benefits-of-ai/"&gt;$157 billion valuation&lt;/a&gt; on. Nobody on the outside knows exactly how OpenAI makes its chatbot technology, and o1 is its most secretive release yet.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Earlier this week, &lt;i&gt;The Atlantic &lt;/i&gt;published the full script for the Broadway play &lt;a href="https://www.theatlantic.com/culture/archive/2024/12/ayad-akhtar-mcneal-artificial-intelligence-writing/680720/?utm_source=feed"&gt;&lt;i&gt;McNeal&lt;/i&gt;&lt;/a&gt;, by Ayad Akhtar, which deals extensively with questions of creativity and humanity in the generative-AI era. As the actor Jeremy Strong writes in his foreword:&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;The magic trick of Akhtar’s play—its triple axel—is its human vision of McNeal within a scaffolding that becomes ever more generated by AI. Without a character like McNeal, and without one of our greatest actors in Robert Downey Jr.—without both a compelling human character and a human actor to give the part density and weight and anguish and pain—we would be left with only the scaffolding. Just the machine, without the ghost, without the tender nerve and sinew of life. As McNeal circles the abyss of, in his words, absolution or annihilation, we feel, within this dazzling cathedral constructed of ones and zeroes, the presence of a broken human heart. The tragedy of a single, fallible human against the backdrop of a new kind of infinity, which knows only efficiency and the global maximum.&lt;/p&gt;
&lt;/blockquote&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Spotify—the most algorithmically enthralled of all the music-streaming services—released its annual “Wrapped” feature this week. In addition to informing users of their most-listened-to music throughout the year, as is standard, Spotify also presented them with bespoke, AI-generated podcasts. (In mine, for example, the synthetic hosts rambled about the “serious energy” I was channeling in January, when I listened to a lot of the death-metal band Bolt Thrower.) This year, the entire Wrapped endeavor struck me as remarkably lifeless—a reminder that human art (even the lowbrow) is personal in ways that a program could not possibly comprehend. Last year, my colleague Nancy Walecki &lt;a href="https://www.theatlantic.com/technology/archive/2023/11/spotify-wrapped-personalization-algorithmic-theories/676184/?utm_source=feed"&gt;wrote a lovely story&lt;/a&gt; on Spotify Wrapped explaining just that.&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/Toma3szTNTIVNtV5oC_GKnfbeuM=/media/img/mt/2024/12/AI_frame_ard_su/original.jpg"><media:credit>The Atlantic. 
Source: Ard Su</media:credit></media:content><title type="html">A Glimpse at a Post-GPT Future</title><published>2024-12-06T18:03:00-05:00</published><updated>2024-12-06T18:03:43-05:00</updated><summary type="html">Here comes OpenAI’s next magic trick.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/12/a-glimpse-at-a-post-gpt-future/680913/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-680775</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Earlier this week, &lt;i&gt;The Atlantic &lt;/i&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/11/opensubtitles-ai-data-set/680650/?utm_source=feed"&gt;published a new investigation&lt;/a&gt; by Alex Reisner into the data that are being used without permission to train generative-AI programs. 
In this case, dialogue from tens of thousands of movies and TV shows has been harvested by companies such as Apple, Anthropic, Meta, and Nvidia to develop large language models (or LLMs).&lt;/p&gt;&lt;p&gt;The data have a strange provenance: Rather than being pulled from scripts or books, the dialogue is taken from subtitle files that have been extracted from DVDs, Blu-ray discs, and internet streams. “Though this may seem like a strange source for AI-training data, subtitles are valuable because they’re a raw form of written dialogue,” Reisner writes. “They contain the rhythms and styles of spoken conversation and allow tech companies to expand generative AI’s repertoire beyond academic texts, journalism, and novels, &lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/?utm_source=feed"&gt;all of which&lt;/a&gt; have also been used to train these programs.”&lt;/p&gt;&lt;p&gt;Perhaps it no longer comes as a major shock that creative humans are having their work ripped off to train machines that threaten to replace them. But evidence demonstrating exactly what data have been used, and for what purposes, is hard to come by, thanks to the secretive nature of these tech companies. “Now, at least, we know a bit more about who is caught in the machinery,” Reisner writes. “What will the world decide they are owed?”&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="A gif of blue folders and a strip of film" height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/11/AI_11_22/95e7aeda9.gif" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by Matteo Giuseppe Pani / The Atlantic&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;There’s No Longer Any Doubt That Hollywood Writing Is Powering AI&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Alex Reisner&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;For as long as generative-AI chatbots have been on the internet, Hollywood writers have wondered if their work has been used to train them. The chatbots are remarkably fluent with movie references, and companies seem to be training them on all available sources. One screenwriter recently told me he’s seen generative AI reproduce close imitations of &lt;i&gt;The Godfather&lt;/i&gt; and the 1980s TV show &lt;i&gt;Alf&lt;/i&gt;, but he had no way to prove that a program had been trained on such material.&lt;/p&gt;

&lt;p&gt;I can now say with absolute confidence that many AI systems have been trained on TV and film writers’ work. Not just on &lt;i&gt;The Godfather &lt;/i&gt;and &lt;i&gt;Alf&lt;/i&gt;, but on more than 53,000 other movies and 85,000 other TV episodes: Dialogue from all of it is included in an AI-training data set that has been used by Apple, Anthropic, Meta, Nvidia, Salesforce, Bloomberg, and other companies. I recently downloaded this data set, which I saw referenced in papers about the development of various large language models (or LLMs). It includes writing from every film nominated for Best Picture from 1950 to 2016, at least 616 episodes of &lt;i&gt;The Simpsons&lt;/i&gt;, 170 episodes of &lt;i&gt;Seinfeld&lt;/i&gt;, 45 episodes of &lt;i&gt;Twin Peaks&lt;/i&gt;, and every episode of &lt;i&gt;The Wire&lt;/i&gt;, &lt;i&gt;The Sopranos&lt;/i&gt;, and &lt;i&gt;Breaking Bad&lt;/i&gt;. It even includes prewritten “live” dialogue from Golden Globes and Academy Awards broadcasts. If a chatbot can mimic a crime-show mobster or a sitcom alien—or, more pressingly, if it can piece together whole shows that might otherwise require a room of writers—data like this are part of the reason why.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/11/opensubtitles-ai-data-set/680650/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-ai-training-meta-copyright-infringement-lawsuit/675411/?utm_source=feed"&gt;&lt;b&gt;“What I found in a database Meta uses to train generative AI”&lt;/b&gt;&lt;/a&gt;: “Nobel-winning authors, &lt;i&gt;Dungeons and Dragons&lt;/i&gt;, Christian literature, and erotica all serve as datapoints for the machine,” Alex Reisner wrote in an earlier investigation for &lt;i&gt;The Atlantic&lt;/i&gt;.&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/11/ai-election-propaganda/680677/?utm_source=feed"&gt;&lt;b&gt;AI’s fingerprints were all over the election&lt;/b&gt;&lt;/a&gt;: “But deepfakes and disinformation weren’t the main issues,” Matteo Wong writes.&lt;/li&gt;
&lt;/ul&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/oDiyAuuVCG-nS_6Tnb2NtHtGrJc=/media/img/mt/2024/11/AI_frame_HollywoodDatabase/original.jpg"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">Why That Chatbot Is So Good at Imitating Bart Simpson</title><published>2024-11-22T14:09:00-05:00</published><updated>2024-11-26T17:44:08-05:00</updated><summary type="html">Inside the Hollywood writing that fuels generative AI.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/11/why-that-chatbot-is-so-good-at-imitating-bart-simpson/680775/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-680493</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;You might think, given the extreme pronouncements that are regularly voiced by Silicon Valley executives, that AI would be a top issue for Kamala Harris and Donald Trump. 
Tech titans have insisted that AI will change everything—perhaps the nature of work most of all. Truck drivers and lawyers alike may see aspects of their profession automated before long. But although Harris and Trump have had a lot to say about jobs and the economy, they haven’t spoken much on the campaign trail about AI.&lt;/p&gt;&lt;p dir="ltr"&gt;As my colleague Matteo Wong &lt;a href="https://www.theatlantic.com/technology/archive/2024/10/trump-ai-policy/680476/?utm_source=feed"&gt;wrote&lt;/a&gt; yesterday, that may be because this is the rare issue that the two actually agree on. Presidential administrations have steadily built AI policy since the Barack Obama years; Trump and Joe Biden both worked “to grow the federal government’s AI expertise, support private-sector innovation, establish standards for the technology’s safety and reliability, lead international conversations on AI, and prepare the American workforce for potential automation,” Matteo writes.&lt;/p&gt;&lt;p dir="ltr"&gt;But there is a wrinkle. Trump and his surrogates have recently &lt;a href="https://www.presidency.ucsb.edu/documents/2024-republican-party-platform"&gt;lashed out&lt;/a&gt; against supposedly “woke” and “Radical Leftwing” AI policies supported by the Biden administration—even though those policies directly echo executive orders on the technology that Trump signed himself. Partisanship threatens to halt years of bipartisan momentum, though there’s still a chance that reason will prevail.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;figure&gt;&lt;img alt="An American flag with a sparkle emoji instead of the usual stars" height="1125" src="https://cdn.theatlantic.com/media/img/mt/2024/10/AI_Rights_2/original.jpg" width="2000"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic&lt;/figcaption&gt;
&lt;/figure&gt;&lt;h3&gt;Something That Both Candidates Secretly Agree On&lt;/h3&gt;&lt;p&gt;&lt;i&gt;By Matteo Wong&lt;/i&gt;&lt;/p&gt;&lt;p dir="ltr"&gt;If the presidential election has provided relief from anything, it has been the generative-AI boom. Neither Kamala Harris nor Donald Trump has made much of the technology in their public messaging, and they have not articulated particularly detailed AI platforms. Bots do not seem to rank among the economy, immigration, abortion rights, and other issues that can make or break campaigns.&lt;/p&gt;&lt;p dir="ltr"&gt;But don’t be fooled. Americans are very invested, and very worried, about the future of artificial intelligence. Polling consistently shows that a majority of adults from both major parties support government regulation of AI, and that demand for regulation might even be growing. Efforts to curb AI-enabled disinformation, fraud, and privacy violations, as well as to support private-sector innovation, are under way at the state and federal levels. Widespread AI policy is coming, and the next president may well steer its direction for years to come.&lt;/p&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/trump-ai-policy/680476/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;h3&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/h3&gt;&lt;ul&gt;
	&lt;li dir="ltr" role="presentation"&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/donald-trump-mcdonalds/680324/?utm_source=feed"&gt;The slop candidate&lt;/a&gt;: “In his own way, Trump has shown us all the limits of artificial intelligence,” Charlie Warzel writes.&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/06/india-election-deepfakes-generative-ai/678597/?utm_source=feed"&gt;The near future of deepfakes just got way clearer&lt;/a&gt;: “India’s election was ripe for a crisis of AI misinformation,” Nilesh Christopher wrote in June. “It didn’t happen.”&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Speaking of election madness, many people will be closely watching the results not just because they’re anxious about the future of the republic but also because they have a ton of money on the line. “On Polymarket, perhaps the most popular political-betting site, people have wagered more than $200 million on the outcome of the U.S. presidential election,” my colleague Lila Shroff &lt;a href="https://www.theatlantic.com/technology/archive/2024/10/political-betting-polymarket-disputed-election/680473/?utm_source=feed"&gt;wrote in a story for &lt;em&gt;&lt;u&gt;The Atlantic &lt;/u&gt;&lt;/em&gt;yesterday&lt;/a&gt;. So-called prediction markets “sometimes describe themselves as ‘truth machines,’” Lila writes. “But that’s a challenging role to assume when Americans can’t agree on what the basic truth even is.”&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/VQ40EEM1DP5gbV_4Rug0UI24zVc=/media/newsletters/2024/11/AI/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">A Culture-War Test for AI</title><published>2024-11-01T14:19:12-04:00</published><updated>2024-11-26T17:43:40-05:00</updated><summary type="html">Do both candidates secretly agree on the technology?</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/11/a-culture-war-test-for-ai/680493/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-680165</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Shortly after Facebook became popular, the company launched an ad network that would allow businesses to gather data on people and target them with marketing. So many issues with the web’s social-media era stemmed from this original sin. It was from this technology that Facebook, now Meta, would make its fortune and become dominant. And it was here that our perception of online privacy forever changed, as people became accustomed to various bits of their identity being mined and exploited by political campaigns, companies with something to sell, and so on.&lt;/p&gt;&lt;p&gt;AI may shift how we experience the web, but it is unlikely to turn back the clock on the so-called surveillance economy that defines it. In fact, as my colleague Lila Shroff &lt;a href="https://www.theatlantic.com/technology/archive/2024/10/chatbot-transcript-data-advertising/680112/?utm_source=feed"&gt;explained in a recent article for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, chatbots may only supercharge data collection.&lt;/p&gt;&lt;p&gt;“AI companies are quietly accumulating tremendous amounts of chat logs, and their data policies generally let them do what they want. That may mean—what else?—ads,” Lila writes. “So far, many AI start-ups, including OpenAI and Anthropic, have been reluctant to embrace advertising. 
But these companies are under great pressure to prove that the many billions in AI investment will pay off.”&lt;/p&gt;&lt;p&gt;Ad targeting may be inevitable—in fact, since Lila wrote this article, Google has begun rolling out related advertisements in some of its AI Overviews—but there are other issues to contend with here. Users have long conversations with chatbots, and frequently share sensitive information with them. AI companies have a responsibility to keep those data locked down. But, as Lila explains, there have already been glitches that have leaked information. So think twice about what you type into that text box: You never know who’s going to see it.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt='A silhouette making the "hush" gesture with a robotic hand' height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/10/AI_10_3/40a0a6c33.gif" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic. Source: Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;h3&gt;Shh, ChatGPT. That’s a Secret.&lt;/h3&gt;&lt;p&gt;&lt;i&gt;By Lila Shroff&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p data-flatplan-paragraph="true"&gt;This past spring, a man in Washington State worried that his marriage was on the verge of collapse. “I am depressed and going a little crazy, still love her and want to win her back,” he typed into ChatGPT. With the chatbot’s help, he wanted to write a letter protesting her decision to file for divorce and post it to their bedroom door. “Emphasize my deep guilt, shame, and remorse for not nurturing and being a better husband, father, and provider,” he wrote. In another message, he asked ChatGPT to write his wife a poem “so epic that it could make her change her mind but not cheesy or over the top.”&lt;/p&gt;

&lt;p data-flatplan-paragraph="true"&gt;The man’s chat history was included in the WildChat &lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="110565" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-recent-on-screen31117857_899="110565" data-gtm-vis-total-visible-time31117857_899="100" href="https://arxiv.org/abs/2405.01470"&gt;data set&lt;/a&gt;, a collection of 1 million ChatGPT conversations gathered consensually by researchers to document how people are interacting with the popular chatbot. Some conversations are filled with requests for marketing copy and homework help. Others might make you feel as if you’re gazing into the living rooms of unwitting strangers.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/chatbot-transcript-data-advertising/680112/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/sam-altman-mythmaking/680152/?utm_source=feed"&gt;&lt;b&gt;It’s time to stop taking Sam Altman at his word&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;“Understand AI for what it is, not what it might become,” David Karpf writes.&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/10/terence-tao-ai-interview/680153/?utm_source=feed"&gt;&lt;b&gt;We’re entering uncharted territory for math&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;“Terence Tao, the world’s greatest living mathematician, has a vision for AI,” Matteo Wong writes.&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Meta and other companies are still trying to make smart glasses happen—and generative AI may be the secret ingredient that makes the technology click, my colleague Caroline Mimbs Nyce &lt;a href="https://www.theatlantic.com/technology/archive/2024/10/meta-orion-smart-glasses/680099/?utm_source=feed"&gt;wrote in a recent article&lt;/a&gt;. What do you think: Would you wear them?&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/uU6vXryasnh8_5tCp3mN-bTHJaQ=/media/img/mt/2024/10/AI_frame_Shhh_Ai._thats_a_secret/original.jpg"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">What If Your ChatGPT Transcripts Leaked?</title><published>2024-10-04T17:38:00-04:00</published><updated>2024-10-04T17:38:13-04:00</updated><summary type="html">Data collection is once again at the forefront of a new technology.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/10/what-if-your-chatgpt-transcripts-leaked/680165/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-679981</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;For a moment, the AI doomers had the world’s attention. ChatGPT’s release in 2022 felt like a shock wave: That computer programs could suddenly evince something like human intelligence suggested that other leaps may be just around the corner. Experts who had worried for years that AI could be used to develop bioweapons, or that further development of the technology might lead to the emergence of a hostile superintelligence, finally had an audience.&lt;/p&gt;&lt;p&gt;And it’s not clear that their pronouncements made a difference. Although politicians held plenty of hearings and made numerous proposals related to AI over the past couple years, development of the technology has largely continued without meaningful roadblocks. To those concerned about the destructive potential of AI, the risk remains; it’s just no longer the case that everybody’s listening. 
Did they miss their big moment?&lt;/p&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/08/helen-toner-interview-doomers/679624/?utm_source=feed"&gt;In a recent article&lt;/a&gt; for &lt;i&gt;The Atlantic&lt;/i&gt;, my colleague Ross Andersen spoke with two notable experts in this group: Helen Toner, who sat on OpenAI’s board when the company’s CEO, Sam Altman, was fired suddenly last year, and who resigned after his reinstatement, plus Eliezer Yudkowsky, the co-founder of the Machine Intelligence Research Institute, which is focused on the existential risks represented by AI. Ross wanted to understand what they learned from their time in the spotlight.&lt;/p&gt;&lt;p&gt;“I’ve been following this group of people who are concerned about AI and existential risk for &lt;a href="https://aeon.co/essays/will-humans-be-around-in-a-billion-years-or-a-trillion"&gt;more than 10 years&lt;/a&gt;, and during the ChatGPT moment, it was surreal to see what had until then been a relatively small subculture suddenly rise to prominence,” Ross told me. “With that moment now over, I wanted to check in on them, and see what they had learned.”&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Animation of a glitching warning sign" height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/09/AI_9_20/31f9641ba.gif" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic&lt;/figcaption&gt;
&lt;/figure&gt;&lt;h3&gt;AI Doomers Had Their Big Moment&lt;/h3&gt;&lt;p&gt;&lt;i&gt;By Ross Andersen&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p data-flatplan-paragraph="true"&gt;Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. The year was 2016. Toner hadn’t yet joined OpenAI’s board and hadn’t yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the &lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="3597" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-recent-on-screen31117857_899="3597" data-gtm-vis-total-visible-time31117857_899="100" href="https://www.theatlantic.com/ideas/archive/2022/11/cryptocurrency-effective-altruism-ftx-sam-bankman-fried/672149/?utm_source=feed"&gt;effective-altruism movement&lt;/a&gt;, when she first connected with the small community of intellectuals who care about AI risk. “It was, like, 50 people,” she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline.&lt;/p&gt;

&lt;p data-flatplan-paragraph="true"&gt;But things were changing. The deep-learning revolution was drawing new converts to the cause.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/08/helen-toner-interview-doomers/679624/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/04/ai-magic-taking-over/677968/?utm_source=feed"&gt;&lt;b&gt;AI has lost its magic&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; “That’s how you know it’s taking over,” Ian Bogost writes.&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/06/kids-generative-ai/678694/?utm_source=feed"&gt;&lt;b&gt;A generation of AI guinea pigs&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; “AI is quickly becoming a regular part of children’s lives,” Caroline Mimbs Nyce writes. “What happens next?”&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;This year’s Atlantic Festival is wrapping up today, and you can watch sessions via our YouTube channel. A quick recommendation from me: &lt;em&gt;Atlantic&lt;/em&gt; CEO Nick Thompson speaks about a new study showing a &lt;a href="https://www.youtube.com/watch?v=01hVXvD6I3o"&gt;surprising relationship&lt;/a&gt; between generative AI and conspiracy theories.&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gXEZjQSxRCj_qWqu74ghzKcRrh8=/media/img/mt/2024/09/Atlantic_AI_1/original.gif"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The AI Doomers Are Licking Their Wounds</title><published>2024-09-20T17:03:00-04:00</published><updated>2024-09-20T17:03:56-04:00</updated><summary type="html">How a relatively small subculture suddenly rose to prominence</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/09/the-ai-doomers-are-licking-their-wounds/679981/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-679881</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Today, &lt;i&gt;The Atlantic&lt;/i&gt; published &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/?utm_source=feed"&gt;a new investigation&lt;/a&gt; by contributing writer Karen Hao detailing Microsoft’s recent engagements with the oil and gas industries. Although the tech giant has spoken of the potential for AI to remake our world for the better and stave off climate change, behind the scenes, it has sought to market the technology to fossil-fuel companies to aid in drilling, among other applications. Karen spoke with 15 current and former Microsoft employees and read through hundreds of internal documents for her report.&lt;/p&gt;&lt;p&gt;Fundamentally, this is a story about tension—between two points of view within Microsoft, and between the supposed promise of a technology and its actual uses in the here and now. Sustainability advocates within Microsoft have clashed with leadership over its pursuit of this business. And although Microsoft has maintained that AI could be used to make fossil-fuel companies more efficient, thereby making their work more sustainable, critics aren’t so sure. “The idea that AI’s climate benefits will outpace its environmental costs is largely speculative,” Karen writes, “especially given that generative-AI tools are themselves tremendously resource-hungry. 
Within the next six years, the data centers required to develop and run the kinds of next-generation AI models that Microsoft is investing in may use more power than all of India. They will be cooled by millions upon millions of gallons of water. All the while, scientists agree, the world will get warmer, its climate more extreme.”&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Illustration of an oil rig with a mouse cursor overlaid." height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/09/AI_9_13/9cd186212.png" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by Paul Spella / The Atlantic. Sources: Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;h3&gt;Microsoft’s Hypocrisy on AI&lt;/h3&gt;&lt;p&gt;&lt;i&gt;By Karen Hao&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p data-flatplan-dropcap="true" data-flatplan-paragraph="true"&gt;Microsoft executives have been thinking lately about the end of the world. In a white paper &lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="145099" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-recent-on-screen31117857_899="145099" data-gtm-vis-total-visible-time31117857_899="100" href="https://blogs.microsoft.com/on-the-issues/2023/11/16/accelerating-sustainability-ai-playbook/"&gt;published&lt;/a&gt; late last year, Brad Smith, the company’s vice chair and president, and Melanie Nakagawa, its chief sustainability officer, described a “planetary crisis” that AI could help solve. Imagine an AI-assisted tool that helps reduce food waste, to name one example from the document, or some future technology that could “expedite decarbonization” by using AI to invent new designs for green tech.&lt;/p&gt;

&lt;p data-flatplan-paragraph="true"&gt;But as Microsoft attempts to buoy its reputation as an AI leader in climate innovation, the company is also selling its AI to fossil-fuel companies. Hundreds of pages of internal documents I’ve obtained, plus interviews I’ve conducted over the past year with 15 current and former employees and executives, show that the tech giant has sought to market the technology to companies such as ExxonMobil and Chevron as a powerful tool for finding and developing new oil and gas reserves and maximizing their production—all while publicly committing to dramatically reduce emissions.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/microsoft-ai-oil-contracts/679804/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/openai-reasoning-model-o1/679863/?utm_source=feed"&gt;&lt;b&gt;OpenAI’s big reset&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; “With its new model, the company wants you to think ChatGPT is human,” Matteo Wong writes.&lt;/li&gt;
	&lt;li&gt;&lt;b&gt;Also by Matteo: &lt;/b&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/09/ai-election-backlash-trump-harris/679724/?utm_source=feed"&gt;&lt;b&gt;The real AI threat starts when the polls close&lt;/b&gt;&lt;/a&gt;&lt;b&gt;.&lt;/b&gt; “Whichever candidate loses in November will have an easy scapegoat,” he writes.&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Last week, I went viral on X and Threads after using generative AI to replace every icon on my phone’s home screen with a bespoke image of Kermit the Frog. &lt;a href="https://www.theatlantic.com/technology/archive/2024/09/kermit-ai-generated-home-screen/679757/?utm_source=feed"&gt;I wrote about the experience&lt;/a&gt;—and what it reveals about AI—for &lt;i&gt;The Atlantic&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/8bLHxuk4YPlxjnq1KSJaOwSMkkY=/media/img/mt/2024/09/Atlantic_AI_7/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Getty.</media:credit></media:content><title type="html">Microsoft Is Luring Fossil-Fuel Companies With AI</title><published>2024-09-13T16:55:00-04:00</published><updated>2024-09-13T16:55:54-04:00</updated><summary type="html">Karen Hao reports on the hypocrisy of the tech giant.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/09/microsoft-is-luring-fossil-fuel-companies-with-ai/679881/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-679757</id><content type="html">&lt;p&gt;First, I want to apologize. My Kermit the Frog post was not entirely sincere.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This particular &lt;a href="https://x.com/dlberes/status/1830719320457879898"&gt;post of mine&lt;/a&gt; has been viewed more than 10 million times, which is far more than I expected. But I &lt;em&gt;did&lt;/em&gt; expect something. Social networks have never been the realm of good faith or authenticity; trolls and other engagement baiters have been able to engineer their own virality for years and years, simply by correctly predicting what large numbers of people will respond to. 
Donald Trump’s &lt;a href="https://www.theatlantic.com/technology/archive/2024/06/trump-official-tiktok/678592/?utm_source=feed"&gt;TikToks&lt;/a&gt; don’t happen by accident; nor did Kamala Harris’s embrace of &lt;a href="https://www.404media.co/kamala-harris-campaign-experiments-with-ads-for-an-audience-with-brain-rot/"&gt;“brain rot”&lt;/a&gt; videos. Each campaign is constructing media that it believes can travel in algorithmic feeds. That’s also what I did when I put together my post, which featured a couple dozen AI-generated images of Kermit the Frog.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Allow me to explain. Last weekend—delirious from a lack of sleep and hoping that my screaming toddler would soon settle down in his crib—I was tapping around on my phone in a kind of fried stupor. My mind struggled to latch on to anything. Each of the apps on my home screen seemed to promise only more boredom. I was the sort of trapped that many parents of young children might recognize: A demand for attention could come at any moment, so I couldn’t lose myself in a book or a bike ride. But I was looking for a diversion.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/07/before-smartphones-boredom/674631/?utm_source=feed"&gt;Read: What did people do before smartphones?&lt;/a&gt;]&lt;/i&gt;  &lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then I had an idea. I decided that it would be fun to use Bing Image Creator, based on OpenAI’s DALL-E technology, to help me replace each app icon on my iPhone’s home screen with a thematically appropriate image of the world’s greatest muppet. (Why? You’d have to ask my psychiatrist.) Instead of the basic Gmail icon, I contrived an image of Kermit buried under a massive pile of envelopes. 
Instead of the basic green phone icon, Kerm chatting on a yellow landline.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The final product was an absurd, borderline-deranged home-screen grid of 24 bespoke frogs. The creation of each one required a series of specific prompts from me. There was Calculator Kermit and Photos Kermit. Authenticator Kermit was dressed like a police officer and wielded a massive baton. My job complete, I took a screenshot and sent it to a friend, who replied, “Damon I truly truly fear for you.” About halfway through the project, I had developed an inkling that her message seemed to confirm: People on the internet would probably respond to this. I could use my Kermits to go viral.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Everyone loves Kermit, of course, and that could only help me. But just as important was the fact that I had made the images using generative AI, a hyper-polarizing technology with passionate boosters and passionate critics. My content would have to appeal to both groups in order to go as far as possible. So I tried to walk a middle path. I typed an ambiguously worded post that nonetheless contained a sharp opinion that people could react to: “People will be like, ‘generative AI has no practical use case,’ but I did just use it to replace every app icon on my home screen with images of Kermit, soooo.” Then I embedded the before and after images of my home screen, and published simultaneously on X and Threads.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The reactions were swift, and they haven’t stopped. A lot of people just love the images. Others have accused me of destroying the environment, thanks to generative AI’s &lt;a href="https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/?utm_source=feed"&gt;water&lt;/a&gt; and &lt;a href="https://www.theatlantic.com/technology/archive/2023/08/ai-carbon-emissions-data-centers/675094/?utm_source=feed"&gt;energy&lt;/a&gt; use. 
(I suppose I’m guilty on that count; alas, every online action &lt;a href="https://www.theatlantic.com/technology/archive/2024/07/how-much-data-ai-use/678908/?utm_source=feed"&gt;takes its toll&lt;/a&gt;.) Quite a few people have criticized me for leeching off Disney’s intellectual property. (Another fair knock, given that generative AI is trained on tons of &lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/?utm_source=feed"&gt;copyrighted material&lt;/a&gt;.) Some seem to view me as a tech bro or 4chan creep, perhaps because for the YouTube app, I had generated an image of Kermit watching &lt;a href="https://www.theatlantic.com/politics/archive/2016/09/its-not-easy-being-green/499892/?utm_source=feed"&gt;Pepe the Frog&lt;/a&gt;—I meant it as a reference to the purportedly radicalizing content that the site has &lt;a href="https://www.theatlantic.com/technology/archive/2023/08/youtube-rabbit-holes-american-politics/675186/?utm_source=feed"&gt;hosted&lt;/a&gt;, not as an endorsement of the symbol.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;And many people have posted that I played myself, allowing the AI to do the “fun,” imaginative stuff while I took on the rote task of changing the app icons. Those people are wrong: Writing the prompts, looking at the outputs, and adjusting my asks in response was like playing with a&lt;a href="https://www.theatlantic.com/technology/archive/2023/01/machine-learning-ai-art-creativity-emptiness/672717/?utm_source=feed"&gt; toy&lt;/a&gt;. By contrast, one person attempted to&lt;a href="https://x.com/arben777/status/1830899881889931370?s=42"&gt; write a program&lt;/a&gt; that would automate every step of the process I had undertaken. Although arguably impressive on its own merits, it appeared to produce bland, interchangeable, witless icons. No fun.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The truth is that the AI didn’t just do everything for me. 
I came up with little details that some people delighted in (a blond-wigged Kermit snapping a selfie for the Instagram icon, Kermit climbing out of a filthy sewer for X), I tweaked and iterated on the prompts until the outputs were right, and I selected the options I thought looked the best. Even the images that some took as evidence of the uselessness of generative AI (an icon for &lt;em&gt;The Washington Post &lt;/em&gt;app bearing the nonsensical headline “NEW HASPELES”; a calendar icon showing the month “EOMER”) were chosen on purpose. It seemed funny and appropriate to include art with some glitches, given AI’s&lt;a href="https://www.buzzfeednews.com/article/pranavdixit/ai-generated-art-hands-fingers-messed-up"&gt; well-documented problems&lt;/a&gt;, though avoiding them would have been easy. (For the &lt;em&gt;Atlantic &lt;/em&gt;app, of course, I made sure to choose an output with the correct spelling.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/01/machine-learning-ai-art-creativity-emptiness/672717/?utm_source=feed"&gt;Read: Generative art is stupid&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;That’s not to say that I believe what I did was creative, exactly. The feeling reminded me a bit of editing a talented writer (albeit a nonhuman plagiarist in this case): I gave direction and received something in response, but the fundamental essence of the work did not emerge from my mind. As in working with a person, there was room for surprise—when the image generator took it upon itself, for example, to add a pair of breasts to Kermit for the Instagram icon. (I promise I did not ask for them.) You can nudge the program in one direction or another, but every press of the “Create” button is a bit like pulling a slot machine.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is one reason generative AI is such an ideal match for the social-media era. 
These programs are now nested within X, Facebook, Instagram, and Snapchat—apps that are defined not just by endless scrolling but by the downward tug from the top of your screen to refresh and get something new. AI images are a confection just like the other algorithmically served junk people now spend so much time consuming. Having a home screen filled with Kermits isn’t actually practical. The effort was entirely about entertaining myself and getting engagement, not remaking how I actually navigate my phone. (I reverted to the default app icons almost immediately, because the Kermits all blurred together and made the device harder to use.) It’s no wonder that social-media companies are pushing generative AI; the technology feels like it offers both a way to melt time and a shortcut to the kind of numbers-go-up posting that makes these networks so compulsively usable. As my colleague Charlie Warzel &lt;a href="https://www.theatlantic.com/technology/archive/2024/08/trump-posts-ai-image/679540/?utm_source=feed"&gt;wrote last month&lt;/a&gt;, that plug-and-play quality has given generative-AI images a certain utility for the MAGA set, who routinely embrace outrageous falsehoods for political gain. They can now illustrate and post in seconds whatever meme they’re using to rally the base on a given day. Likewise, spammers have found that it &lt;a href="https://www.404media.co/where-facebooks-ai-slop-comes-from/"&gt;pays&lt;/a&gt; to flood Facebook with attention-grabbing AI slop.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;So here is a use for generative AI: It is lubricant for broken algorithmic machinery. Pour it into a social network, and if you’ve done the alchemy right, the gears will turn and turn. This is the internet’s synthetic maximalist moment, where fake content leads easily to superficial interaction. 
I soon started to notice that many of the typed responses to my post seemed to be following a script, that they were sent from anonymous accounts that barely followed (or were followed by) anyone at all. I’m certain that many were bots, interacting with a JPEG file that had also been made by one—albeit with my mischievous prompting.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The informational environment has become hopelessly junked up, and the way it works can be dispiriting to even the most cynical of the extremely online. But I have to admit that watching my Kermit post go viral was, dare I say, fun. I’m sure many of the actual people who responded to me felt it too. I was amused. Perhaps when we look back on the generative-AI revolution, we’ll realize that chasing this feeling is the ultimate reason for many of these programs—especially as they enter social apps that are designed to prioritize engagement.&lt;/p&gt;&lt;p&gt;
We’re a long way from &lt;em&gt;Amusing Ourselves to Death&lt;/em&gt;, Neil Postman’s famous 1985 book, &lt;a href="https://www.theatlantic.com/entertainment/archive/2017/04/are-we-having-too-much-fun/523143/?utm_source=feed"&gt;which argued&lt;/a&gt; that television would lead the public to privilege spectacle over substance. But it’s clear that Postman saw around the right corner. Many prognosticators have said quite a lot about AI’s &lt;a href="https://www.theatlantic.com/technology/archive/2024/08/helen-toner-interview-doomers/679624/?utm_source=feed"&gt;existential risks&lt;/a&gt;, that the technology could be used to construct bioweapons and God knows what else. In the meantime, aided by other sophisticated machines—and, sometimes, an exhausted parent on an iPhone—it’s a grade-A brain softener. Use with caution.&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/HWf8OaQjbi3bxxsxtsAwqTz41vc=/media/img/mt/2024/09/kermit_slop_2_7_2/original.jpg"><media:credit>Illustration by The Atlantic. 
Source: Christopher Willard / ABC / Getty.</media:credit></media:content><title type="html">What I Learned When My AI Kermit Slop Went Viral</title><published>2024-09-09T13:38:52-04:00</published><updated>2024-09-09T15:06:53-04:00</updated><summary type="html">Sometimes generative artificial intelligence is just another diversion.</summary><link href="https://www.theatlantic.com/technology/archive/2024/09/kermit-ai-generated-home-screen/679757/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-679604</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. &lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;The era of generative-AI propaganda is upon us. In the past week, Donald Trump has published fabricated images on his social-media accounts showing Kamala Harris speaking to a crowd of uniformed communists under the hammer and sickle, Taylor Swift in an Uncle Sam outfit, and young women in “Swifties for Trump” T-shirts. 
Other far-right influencers have published their own AI slop depicting Harris in degrading sexual contexts or glorifying Trump.&lt;/p&gt;&lt;p&gt;As my colleague &lt;a href="https://www.theatlantic.com/technology/archive/2024/08/trump-posts-ai-image/679540/?utm_source=feed"&gt;Charlie Warzel writes for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, “Although no one ideology has a monopoly on AI art, the high-resolution, low-budget look of generative-AI images appears to be fusing with the meme-loving aesthetic of the MAGA movement. At least in the fever swamps of social media, AI art is becoming MAGA-coded.”&lt;/p&gt;&lt;p&gt;Such images are, in effect, an evolution of the &lt;a href="https://www.theatlantic.com/politics/archive/2016/09/its-not-easy-being-green/499892/?utm_source=feed"&gt;memes&lt;/a&gt; that have long fueled the far right. But now even elementary Photoshop skills are no longer required: Simply plug a prompt into an image generator and within seconds, you’ll have a reasonably lifelike JPEG for your posting pleasure.&lt;/p&gt;&lt;p&gt;“That these tools should end up as the medium of choice for Trump’s political movement makes sense,” Charlie writes. “It stands to reason that a politician who, &lt;a href="https://www.theatlantic.com/magazine/archive/2019/06/trump-racism-comments/588067/?utm_source=feed"&gt;for many years&lt;/a&gt;, has spun an unending series of lies into a patchwork alternate reality would gravitate toward a technology that allows one to, with a brief prompt, rewrite history so that it flatters him.”&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="A glitchy hand holds a MAGA flag" height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/08/AI_8_23/1a7fa1f3a.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by Ben Kothe / The Atlantic. Sources: Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The MAGA Aesthetic Is AI Slop&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Charlie Warzel&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p data-flatplan-paragraph="true"&gt;Taylor Swift fans are not endorsing Donald Trump en masse. Kamala Harris did not give a speech at the Democratic National Convention to a sea of communists while standing in front of the hammer and sickle. Hillary Clinton was not recently seen walking around Chicago in a MAGA hat. But images of all these things exist.&lt;/p&gt;

&lt;p data-flatplan-paragraph="true"&gt;In recent weeks, far-right corners of social media have been clogged with such depictions, created with generative-AI tools …&lt;/p&gt;

&lt;p data-flatplan-paragraph="true"&gt;This AI slop doesn’t just exist in a vacuum of a particular social network: It leaves an ecological footprint of sorts on the web. The images are created, copied, shared, and embedded into websites; they are indexed into search engines. It’s possible that, later on, &lt;a data-event-element="inline link" href="https://www.theatlantic.com/technology/archive/2024/02/artificial-intelligence-self-learning/677484/?utm_source=feed"&gt;AI-art tools will train on these distorted depictions&lt;/a&gt;, creating warped, digitally inbred representations of historical figures. The very existence of so much quickly produced fake imagery adds a layer of unreality to the internet.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/08/trump-posts-ai-image/679540/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/08/california-ai-bill-scott-wiener/679554/?utm_source=feed"&gt;&lt;b&gt;Silicon Valley is coming out in force against an AI-safety bill&lt;/b&gt;&lt;/a&gt;&lt;b&gt;: &lt;/b&gt;This week, my colleague Caroline Mimbs Nyce spoke with California State Senator Scott Wiener, whose attempts to impose regulations on advanced AI models have been met with severe pushback—not just from tech companies, but from other Democrats, including Nancy Pelosi. “The opposition claims that the bill is focused on ‘science-fiction risks,’” Wiener said. “They’re trying to say that anyone who supports this bill is a doomer and is crazy. This bill is not about the &lt;i&gt;Terminator&lt;/i&gt; risk. This bill is about huge harms that are quite tangible.”&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Speaking of science fiction, I’m off to see &lt;i&gt;Alien: Romulus &lt;/i&gt;tonight. &lt;a href="https://www.theatlantic.com/culture/archive/2024/08/alien-romulus-review/679479/?utm_source=feed"&gt;Writing for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;&lt;i&gt; &lt;/i&gt;about this film and the greater&lt;i&gt; &lt;/i&gt;franchise to which it belongs, the journalist Fran Hoepfner noted, “The &lt;i&gt;Alien&lt;/i&gt; films have always touched on heady, pessimistic visions of a future overrun by capitalism and genetic experimentation, but they’re also movies about a human beating a monster—shooting it, setting it on fire, throwing it out of an air-locked door into the void of space.” Sounds like a good Friday night to me.&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gTWrI-uZ7cwzPv2yex5t7MX1t_A=/media/img/mt/2024/08/Atlantic_AI2_2/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Getty.</media:credit></media:content><title type="html">Donald Trump, AI Artist</title><published>2024-08-23T17:16:00-04:00</published><updated>2024-08-23T17:16:51-04:00</updated><summary type="html">MAGA memes are getting a makeover.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/08/donald-trump-ai-artist/679604/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-679498</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;At this point, AI art is about as remarkable as the email inviting you to save 10 percent on a new pair of jeans. On the one hand, it’s miraculous that computer programs can synthesize images based on any text prompt; on the other, these images are common enough that they’ve become a new kind of digital junk, polluting social-media feeds and other online spaces with no particular payoff to users.&lt;/p&gt;&lt;p&gt;But their big spam energy isn’t just a question of volume—these images also tend to look pretty similar. As my colleague Caroline Mimbs Nyce writes in &lt;a href="https://www.theatlantic.com/technology/archive/2024/08/why-does-all-ai-art-look-same/679488/?utm_source=feed"&gt;a new story for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, “Two years into the generative-AI boom, these programs’ creations seem more technically advanced … but they are stuck with a distinct aesthetic.” By default, these models are inclined to produce images with bright, saturated colors; beautiful and almost cartoonish people; and dramatic lighting. Caroline spoke with experts who gave her four theories on why that is.&lt;/p&gt;&lt;p&gt;Ultimately, her reporting suggests that although tech companies are competing to offer more compelling image generators, the products aren’t actually all that different in the end—the situation is more “Pepsi vs. Coke” than “Toyota vs. 
Mercedes.” Perhaps people will simply use whichever image generator is most convenient. That may explain why companies such as X, Google, and Apple are so eager to build these models into existing platforms: Image generators aren’t magic anymore; they’re a feature to be checked off.&lt;/p&gt;&lt;hr&gt;&lt;figure&gt;&lt;img alt="Illustration depicting many samey, AI-looking images in a series of frames" height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/08/AI_8_16/e98915392.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic. Source: Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Why Does AI Art Look Like That?&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Caroline Mimbs Nyce&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;This week, X launched an AI-image generator, allowing paying subscribers of Elon Musk’s social platform to make their own art. So—naturally—some users appear to have immediately made images of Donald Trump &lt;a href="https://x.com/Esqueer_/status/1823789104879800368"&gt;flying a plane toward the World Trade Center&lt;/a&gt;; &lt;a href="https://www.theverge.com/2024/8/14/24220173/xai-grok-image-generator-misinformation-offensive-imges"&gt;Mickey Mouse&lt;/a&gt; wielding an assault rifle, and another of him enjoying a cigarette and some beer on the beach; and so on. Some of the images that people have created using the tool are deeply unsettling; others are just strange, or even kind of funny. They depict wildly different scenarios and characters. But somehow they all kind of look alike, bearing unmistakable hallmarks of AI art that have cropped up in recent years thanks to products such as Midjourney and DALL-E.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/08/why-does-all-ai-art-look-same/679488/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/08/trump-harris-ai-crowd-size/679493/?utm_source=feed"&gt;&lt;b&gt;Trump finds a new &lt;i&gt;Benghazi&lt;/i&gt;&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; Earlier this week, Donald Trump falsely claimed that Kamala Harris had “A.I.’d” a photograph of a crowd at one of her campaign rallies—alleging, in other words, that she had doctored or outright fabricated an image in order to exaggerate the number of people cheering her on. As Matthew Kirschenbaum &lt;a href="https://www.theatlantic.com/technology/archive/2024/08/trump-harris-ai-crowd-size/679493/?utm_source=feed"&gt;writes for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, Trump’s use of the term may have less to do with the technology per se and more to do with giving his supporters something to post about—“a way of licensing them to follow his example by filling up the text boxes on their own screens.”&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;AI art may actually be at its best with an audience of one. “Approaching generative image creators in order to produce a desired result might get their potential exactly backwards,” Ian Bogost &lt;a href="https://www.theatlantic.com/technology/archive/2023/10/ai-image-generation-human-creativity-imagination/675840/?utm_source=feed"&gt;wrote for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;&lt;i&gt; &lt;/i&gt;last year. “AI can give them shape outside your mind, quickly and at little cost: any notion whatsoever, output visually in seconds. The results are not images to be used as media, but ideas recorded in a picture.”&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/Wz-h4citImFqG966fVhNiKBl-SQ=/media/img/mt/2024/08/Atlantic_AI_5/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Getty</media:credit></media:content><title type="html">Four Theories That Explain AI Art’s Default Vibe</title><published>2024-08-16T18:05:00-04:00</published><updated>2024-08-16T18:05:28-04:00</updated><summary type="html">The image-makers are stuck in a pattern.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/08/four-theories-that-explain-ai-arts-default-vibe/679498/?utm_source=feed" rel="alternate" type="text/html"></link></entry><entry><id>tag:theatlantic.com,2024:50-679429</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;i data-stringify-type="italic"&gt;This is &lt;/i&gt;Atlantic&lt;i data-stringify-type="italic"&gt; Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. 
&lt;/i&gt;&lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="310" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-total-visible-time31117857_899="100" href="https://link.theatlantic.com/click/33314910.0/aHR0cHM6Ly93d3cudGhlYXRsYW50aWMuY29tL25ld3NsZXR0ZXJzL3NpZ24tdXAvYXRsYW50aWMtaW50ZWxsaWdlbmNlLz91dG1fY2FtcGFpZ249YXRsYW50aWMtaW50ZWxsaWdlbmNlJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV9jb250ZW50PTIwMjMxMTA5JmxjdGc9NjA1MGUyYjIxZmMxNmQxMzdmODNjMDM4/6050e2b21fc16d137f83c038B2d7857a9"&gt;&lt;i&gt;Sign up here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Tech companies believe that generative AI can transform how we find information online, replacing traditional search engines with bots that synthesize knowledge into a more interactive format. Rather than clicking a series of links, reading a variety of sources, and then determining an answer for yourself, you might instead have a conversation with a search bot that has effectively done the reading for you. Companies such as OpenAI, Perplexity, and Google are bringing such tools to market: As my colleague Matteo Wong wrote &lt;a href="https://www.theatlantic.com/technology/archive/2024/07/perplexity-ai-search-media-partners/679294/?utm_source=feed"&gt;in a recent story for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;, “The generative-AI search wars are in full swing.”&lt;/p&gt;&lt;p&gt;As part of his reporting, Matteo spoke with Dmitry Shevelenko, Perplexity’s chief business officer. In particular, the two discussed the media partnerships that have been signed by Perplexity and other AI firms to support their search projects. 
These deals give media companies compensation for allowing their material to be used by generative-AI tools; &lt;i&gt;The Atlantic&lt;/i&gt;, for example, has signed a contract with OpenAI that may, among other things, show our articles to users of the new &lt;a href="https://www.theatlantic.com/technology/archive/2024/07/searchgpt-openai-error/679248/?utm_source=feed"&gt;SearchGPT&lt;/a&gt; tool. (The editorial division of &lt;i&gt;The Atlantic&lt;/i&gt; operates independently from the business division, which announced its corporate partnership with OpenAI in May.)&lt;/p&gt;&lt;p&gt;I found two of Shevelenko’s quotes especially striking. First: “One of the key ingredients for our long-term success is that we need web publishers to keep creating great journalism that is loaded up with facts, because you can’t answer questions well if you don’t have accurate source material.” And second: “Journalists’ content is rich in facts, verified knowledge, and that is the utility function it plays to an AI answer engine.” Each statement seemed to betray an attitude that the creative output of humanity amounts to little more than fodder—which seems particularly grim in light of what we know about how AI is trained on &lt;a href="https://www.theatlantic.com/technology/archive/2023/09/books3-database-generative-ai-training-copyright-infringement/675363/?utm_source=feed"&gt;tremendous amounts of copyrighted material without consent&lt;/a&gt;, and how these tools have a tendency to present users with &lt;a href="https://www.theatlantic.com/technology/archive/2024/06/google-ai-overview-libel/678751/?utm_source=feed"&gt;false information&lt;/a&gt;. 
Or as I put it &lt;a href="https://www.theatlantic.com/technology/archive/2023/12/openai-axel-springer-partnership-content/676340/?utm_source=feed"&gt;last year&lt;/a&gt;: “At its core, generative AI cannot distinguish original journalism from any other bit of writing; to the machine, it’s all slop pushed through the pipes and splattered out the other end.”&lt;/p&gt;&lt;figure&gt;&lt;img alt="An illustration" height="374" src="https://cdn.theatlantic.com/media/img/posts/2024/08/AI89/50c5ce6e5.png" width="665"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic. Source: Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The AI Search War Has Begun&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Matteo Wong&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p data-flatplan-paragraph="true"&gt;Every second of every day, people across the world type tens of thousands of queries into Google, adding up to &lt;a data-event-element="inline link" data-gtm-vis-first-on-screen31117857_899="435" data-gtm-vis-has-fired31117857_899="1" data-gtm-vis-recent-on-screen31117857_899="435" data-gtm-vis-total-visible-time31117857_899="100" href="https://blog.google/products/search/how-we-keep-google-search-relevant-and-useful/"&gt;trillions&lt;/a&gt; of searches a year. Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world’s most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. And as of this week, the generative-AI search wars are in full swing.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2024/07/perplexity-ai-search-media-partners/679294/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;What to Read Next&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/archive/2023/05/microsoft-bing-chatbot-search-information-consolidation/673958/?utm_source=feed"&gt;&lt;b&gt;Bing is a trap&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; “Tech companies say AI will expand the possibilities of searching the internet. So far, the opposite seems to be true,” I wrote last year.&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;P.S.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The future of search bots may depend on recent copyright lawsuits against generative-AI companies. Earlier this year, &lt;a href="https://www.theatlantic.com/technology/archive/2024/02/generative-ai-lawsuits-copyright-fair-use/677595/?utm_source=feed"&gt;Alex Reisner wrote a great article for &lt;i&gt;The Atlantic&lt;/i&gt;&lt;/a&gt;&lt;i&gt; &lt;/i&gt;exploring what’s at stake.&lt;/p&gt;&lt;p&gt;— Damon&lt;/p&gt;</content><author><name>Damon Beres</name><uri>http://www.theatlantic.com/author/damon-beres/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gaW2GFKBzeg8qSwglgigsy8cGw4=/media/img/mt/2024/08/Atlantic_AI_1/original.jpg"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">Generative AI’s Slop Era</title><published>2024-08-09T18:26:00-04:00</published><updated>2024-08-09T18:26:35-04:00</updated><summary type="html">New search bots underscore familiar problems with the technology.</summary><link href="https://www.theatlantic.com/newsletters/archive/2024/08/ai-search-bots-war/679429/?utm_source=feed" rel="alternate" type="text/html"></link></entry></feed>