
Public debate tends to treat AI music as a single, existential threat. Inside studios the reality is far more nuanced. AI tools are already part of everyday production, yet the industry still lacks shared language or standards to explain what counts as AI-made, AI-assisted, or something in between.
To understand the disconnect, we spoke with Nathan Brackett, Founder of Stems, a weekly newsletter analyzing music, culture, and technology. An industry veteran whose career, from Rolling Stone to Amazon Music, spans legacy media and big tech, Brackett says that before the industry can solve its AI problem, it must first learn to talk about it clearly.
"There's a huge difference between AI as a production tool and songs that are generated entirely from a prompt. Right now, those two things are getting lumped together in ways that confuse both creators and listeners," says Brackett. He argues that making sense of the moment requires viewing AI not as a single technology but as a spectrum.
On one end are assistive technologies for tasks like EQ, mastering, or stem separation, which many producers are adopting as standard gear. But according to Brackett, much of the industry's resistance and consumer anxiety is focused on the other end of the spectrum: fully generative songs created from a text prompt in tools like Suno or Udio. Some platforms are already beginning to distinguish between the two.
- Supercharged scammers: While AI's threat to creativity can feel intangible, its impact on economics is concrete. Though chart manipulation is as old as the music business itself, Brackett notes that AI has changed the game. "AI gives manipulators a lower start-up cost and a lot more runway to get fraudulent tracks onto platforms. It gives them superpowers."
- Manipulation at scale: It's a problem Brackett says is placing significant technical strain on the entire streaming ecosystem. "Streamers have strong infrastructures to fight manipulation, but I don't know if they're equipped to deal with the scale of this new era," he says. With some reports suggesting a high percentage of AI-generated streams are fraudulent, platforms have been forced to demonetize millions of tracks.
Brackett believes that fundamental unpreparedness is exactly why the industry needs a new framework built on disclosure. For this to work, he says, the industry needs companies to lead the way, pointing to transparency tools on TikTok and other platforms as precedents. "Deezer now identifies 100% AI songs. Bandcamp does too."
- A multifaceted fix: As for the near future of AI-music identification, Brackett envisions an integrated approach where accountability is shared between creators, platforms, and technology. "I could see a combination of self-reporting and then using some form of tool to spot check or verify as best as they can," he says.
- Flipping a switch: On the listener's side, a simple mechanism such as an on-off toggle for filtering AI content would offer greater choice and peace of mind. "If you're a lo-fi listener and you think 'hey, this stuff is fine,' you can toggle it off when you're listening to that kind of music. But when you're listening to the top 40 charts, or singer-songwriters, you might really care about it and turn it on."
Ultimately, Brackett says, an effective solution must center on the listener, empowering them to make their own choices and rebuilding trust through clarity and control. "We're seeing a strong desire from consumers to know what they're listening to. Nobody wants to hear music they thought was human-made, only to find out it was AI," he concludes. "I hope that every streamer adopts a disclosure system."
