Kyle Lehning makes for a pretty unlikely AI pioneer. He’s a genial, white-haired, 75-year-old music producer based in Nashville. It was 1985 when he was introduced to a fledgling singer by the name of Randy Ray who would soon change his stage name to Randy Travis. With his rich, sonorous baritone and preternatural phrasing, Travis would become one of the most revered country music singers of all time, and Lehning would produce or co-produce nearly every track by Travis during his storied career, which included 16 No. 1 Billboard country hits. In 2013, their 30-year professional relationship came to a tragic halt when Travis suffered a near-fatal stroke. It would take him years to learn how to walk again, even longer to regain a limited ability to speak, and performing live or recording new music was no longer possible. One of country music’s greatest voices had been silenced.
That is, until July 2023, when Lehning received a call from Cris Lacy, co-chair of Travis’s record label, Warner Music Nashville. “Pretty quickly into the conversation,” he recalls, “Cris asked me, ‘What would I think about getting Randy’s voice back using AI?’ And my immediate response was, ‘That sounds pretty creepy to me.’” Lehning says that he’d paid little attention to the headlines about the rise of AI, and while he was intrigued by Lacy’s pitch, “it all boiled down to what Randy and his wife, Mary, thought about this.”
When they signed off on the idea, Lehning was put in contact with a British company called Myvox and sent it 42 Travis vocal tracks for its machine to learn from. Next, Lehning dug up a song that he’d co-produced in 2011, by a singer named James Dupré. “I’d always loved James’s ‘Where That Came From,’ but for one reason or another, it was never released.” The plan was simple: What would it sound like if vintage Randy Travis covered “Where That Came From”? When Lehning first heard the AI Randy, he was stunned: “It wasn’t a weird-sounding, robot version. It was Randy Travis’s voice.”
After a couple of months of polishing, he sent the finished version to Randy and Mary, who were thrilled with the result. The track, which was released in May and cracked the top 40 of Billboard’s Hot Country Songs chart, even spurred a return, of sorts, to the stage for Travis: He will take part in a concert tour in which James Dupré and Travis’s original touring band will perform Travis’s songs, with Randy and Mary, in nonperforming roles, acting as guiding spirits and hosts.
We live in an era of widespread anxiety over AI’s role in content creation. There are concerns that AI will replace graphic designers, write journalism, critique art, and even host podcasts. But what you find when you look closely at the creators who are the early adopters of this technology is a less drastic and more nuanced picture. Yes, the technology is powerful, but it can also be quite dopey: It doesn’t seem that good at writing song lyrics, for example. Yet it does act as a spur to creation, as a way to sketch out a project or imagine an alternate headline or a different way of composing a scene. The AI is a tool without judgment: It will attempt to suggest, analyze, or reconfigure any scenario that you throw its way. Most importantly, AI is not what gives a piece of art its value. It’s the human who made the art, who imbued the artwork with their own distinct identity and vision.
Travis’s “Where That Came From” is about as warm and fuzzy an example of the technology’s power as you’ll find. Crucially, the song was made with the approval of the artists (Travis, Dupré) and the various copyright holders (the record label, publishing companies); it was lovingly and painstakingly put together; and it could not have existed without AI. Don Was, the president of the venerable jazz label Blue Note, is a fan of the Travis song, and of AI’s potential. “If you have an emotional attachment to Randy Travis’s voice, which millions of people do, you got to hear him sing something new,” Was says. “It really got me choked up.”
Most applications of AI in music-making aren’t this blue-sky, however. The ability to mimic a popular artist’s voice or style or to create songs lickety-split and en masse, simply by using text or voice prompts, is fraught, to say the least. In April 2023, the music industry was gripped with panic when a song called “Heart on My Sleeve” went viral. The AI-generated track, created by a mysterious entity known as Ghostwriter, sounded, quite intentionally, like a collaboration between megastars Drake and the Weeknd. The Universal Music Group (UMG), the world’s largest record company and home to both artists, quickly had the “Fake Drake” song removed from streaming platforms due to copyright infringement, but not before it prompted fears that flesh-and-blood musicians were about to be supplanted by AI deepfakes. “The talk was, AI’s going to knock off all the popular music in the world, and there will be no copyright associated with it at all, and the business model of the music industry is going to disappear overnight,” says Michael Nash, executive vice president, chief digital officer of UMG. “Well, it’s a year and a half later, and that didn’t happen.”
Still, the seeds had been planted for disruption to an industry that is regularly tossed about by new technologies: Innovations from synthesizers to sampling and Auto-Tune were all considered extinction-level threats to skilled musicians, and in the early 2000s, Napster and peer-to-peer file sharing nearly decimated the recorded music business. “There is a long continuum of music technologies creating both great excitement and also great panic,” says Mark Katz, a professor of music at the University of North Carolina. “The long view is that these upheavals are neither utopian nor apocalyptic.”
Was recalls buying one of the first LinnDrums, in 1982. “This was the first commercially available drum machine that used digitally recorded samples of real drums,” he says. “Every drummer in the world thought this was, like, doomsday, man, the end of drummers. I bought serial number 003. Prince had been to the same store a couple of days before, and he bought serial number 002. And when you listen to ‘When Doves Cry,’ you hear an artist taking this new tool that people found threatening and creating a drum part that no human drummer would have played. He changed music with that. And it didn’t put drummers out of business.”
Dozens of AI music companies have popped up in the past couple of years. Nearly all offer a similar promise: to make the often-arduous process of creating songs simpler and more efficient and to lower the barrier to entry for amateurs. Some permit the user to mimic the style of established artists who have been compensated for allowing their musical DNA to be mapped and replicated; others provide the tools for the AI-curious to create an instrumental with a few clicks of a mouse.
Brian Transeau, better known as BT, is a popular electronic music composer and DJ and a co-founder of the assistive AI company SoundLabs. Its first product is a plug-in called MicDrop, which, BT explains, “allows you to change your voice into any other human voice in real time, and also to change your voice into instruments.” Female-sounding voices can be transformed into male-sounding voices; a vocal can be turned into a trumpet line. And if you can’t sing, you can still generate a studio-quality vocal. “Let’s say you’re a dance music or hip-hop producer and you want a vocal for your record,” he says. “It’s as simple as talking into your iPhone and playing a couple of notes on a MIDI keyboard, and you can make a vocal that you can use on a record or that you could play to a singer and have them re-sing it.” The idea is to “help reduce friction in the compositional process and aid in creativity, without replacing humans.”
The software program has been “ethically trained,” which is to say, the inputs—the voices used to train the machine—were provided with the consent of artists. In June, UMG announced a strategic agreement with SoundLabs, wherein UMG artists can choose to supply their voice data for training while “retaining control over ownership and giving them full artistic approval and control of the output.” One of the first collaborations between the companies is an AI-generated Spanish-language version of Brenda Lee’s holiday classic, “Rockin’ Around the Christmas Tree.” Originally recorded by a 13-year-old Lee in 1958, the 65-year-old song, thanks to seasonal streaming, topped the Billboard Hot 100 in December 2023, making Lee, at age 78, the oldest artist ever to do so. To find new audiences and recapture that top spot, SoundLabs trained MicDrop on Lee’s 1958 vocal track and merged it with a work-for-hire vocalist performing it en Español, and out popped a teenage Lee singing her seasonal classic in Spanish.
This translation of songs into other languages could be a boon for artists looking to expand their global fan base. “If we have an artist touring in Manila, for instance, and they can release a song in Tagalog, that could be huge,” says Carletta Higginson, executive vice president, chief digital officer for Warner Music Group. Nash, her fellow music executive, cites the American artist Lauv, who is popular in Korea and used voice-to-voice technology to create a single for his Korean fan base. “We’re just at the beginning of what is possible,” Nash says.
Post Malone’s longtime producer, Louis Bell, is part of a select group of musicians in YouTube’s Music AI Incubator who have been field-testing Google DeepMind’s music AI tools. Bell says he’s been using AI to help generate songwriting ideas. “It’s the modern-day version of crate digging,” he says, referring to the old-school practice of hip-hop DJs foraging in record-store bins for songs to sample. “AI gives you billions more options now.” He also senses that the technology will need guardrails for all the creative arts: “It’s kind of like they have to create some kind of Geneva Conventions for AI, where everyone comes together and figures out what is going to be our path moving forward.”
The producers I spoke with tended to view AI as a new step in a procession of technologies that have transformed music creation—and not as a threat to music itself. At heart, popular music is driven not just by songcraft and technique, but also by the personas of the artists, their narratives, their points of view. Taylor Swift fans, to choose the most outsize example, are emotionally invested in even the most quotidian details of her life, and they come together at her stadium concerts to share that bond and to see themselves—their longings, their heartbreak—in her songs. Despite the scare over “Fake Drake” and other replicas, AI won’t take the place of Swift or Bad Bunny or Adele.
However, there is a lot of music, heretofore created by people, that AI can generate more efficiently and at greater quantity, without listeners noticing the difference. Think of the soothing background music you might hear when entering the lobby of a boutique hotel, or the ambient lo-fi songs you might stream to help you sleep or study. This “functional” music has no star at the center of it—anonymity is a feature, not a bug—and it’s a big business: A 2023 report said that functional music accounted for 15 billion streams a month across all music platforms.
A dozen years ago, Alex Bestall started Rightsify, a company specializing in creating and licensing songs to be used in advertisements and film and video-game soundtracks and played in hotels, airport lounges, gyms, and retail stores. He and his team commissioned millions of tracks, 95 percent of them instrumental. When AI started to make inroads in content generation, Bestall foresaw the future: “My take was, music is next. And that our kind of music was a prime suspect to be disrupted. So we dove right in.” Beginning in early 2023, Bestall licensed his catalog to multiple AI companies that needed songs in bulk to train their machines. Those licensing fees now make up 60 percent of Rightsify’s revenue and have even led to Bestall hiring additional musicians and contracting more work-for-hire instrumentals to meet the demand of the AI companies.
The functional music space offers an important lesson for all content creators: The AI itself will become a sort of platform, and they can find new ways to engage with its capabilities. I think of the artist Brian Eno, who has played around with chance and randomness in his work and has been experimenting with generative music since the 1970s. More recently, the economist Tyler Cowen wrote a book titled GOAT: Who Is the Greatest Economist of All Time and Why Does It Matter? that was meant to be analyzed and then queried by readers. Will this approach replace traditional books? Of course not. What it offers is an interactive way to learn about economics that might appeal to some. Every major technological shift presents new challenges for artists, both on the level of the individual artwork and in how the art will be seen or heard or experienced by the public.
The coming year could prove telling in charting AI’s place in the music ecosphere. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, legislation backed by various artist-advocacy trade groups and aimed at protecting the voices, names, and likenesses of creators from nonconsensual use, has received bipartisan congressional support. Among creators, there is a strong desire for a legal framework. Lyor Cohen, YouTube’s global head of music, who ran major record labels during Napster’s insurgency, believes that his “colleagues in the music industry are much more inclined to be on their front foot” with regard to AI. “Everybody realizes that GenAI is here, and it’s up to us to shape it. Otherwise, it will shape us.”