AI News

Ethnomusicologist Calls for Human-Led Discovery in AI-Driven Music Era

Key Points

  • AI tools are now capable of generating complete musical performances, challenging long-held assumptions about authorship and artistic ownership.
  • William Smith, ethnomusicologist and founder of ComposerSync, says that what machines can produce in sound, they cannot reproduce in biography or cultural memory.
  • He argues the industry must pair human-led creative direction with enforceable transparency standards to prevent music from losing its historical traceability.

With Generative AI now performing music itself, the industry is forced to rethink authorship, discovery, and value. Unlike past innovations that expanded how artists recorded or sampled sound, this technology produces the performance without the lived experience behind it. The result is technically convincing music that lacks the identity, memory, and cultural context that give songs their meaning.

William Smith is a PhD-trained ethnomusicologist and the founder of ComposerSync, a GenAI SaaS venture that translates research into applied tools for creative workflows. Smith's perspective comes from his dual experience as a technologist architecting the future of music AI and as a world-class jazz saxophonist who has played with legends like Donald Byrd, Kenny Burrell, and Wynton Marsalis. For Smith, the conversation about AI must be rooted in the fundamental nature of music itself.

“At the end of the day, music is humanly patterned sound. Someone introduced it to you, you stumbled upon it, you vibed with it. AI can perform, but it can’t carry the stories, the context, or the life behind those notes,” Smith explains. As AI-generated music becomes more common, algorithmic playlists excel at pattern-matching, but they work nothing like the human journey of finding music. Smith recalls accidentally picking up a Miles Davis cassette from a discount bin, expecting the artist’s acoustic work but getting his electric music instead. After the initial disappointment, he sat with the album, In a Silent Way, and it became a personal favorite. That kind of misfire-turned-discovery is difficult to engineer.

  • Curation still needs a human: AI systems can surface similarity, but they cannot recreate the social friction and context that shape taste. Smith argues that AI should assist rather than replace human guidance. "When it comes to curation, AI is maybe a five out of 10. I don't think that AI should be the one doing the guiding. A human should guide you, because the interaction with music is a fundamentally human experience," Smith says. The risk is that AI reflects a narrow set of experiences: when discovery is mediated by algorithms at scale, the assumptions embedded in their design shape what listeners encounter, as the sketch after this list illustrates.
  • Programmers are the new gatekeepers: In a system where discovery is mediated by code, those writing the code determine what gets surfaced and what remains obscure. "It all depends on who's programming the algorithm," Smith says. "The gatekeepers, who are the programmers, need to have more diverse life experiences. You can't be stuck in a box with limited awareness and expect to be programming code that will lead us down a new path of discovery." He argues builders should bring in subject-matter experts and broader datasets to expand the system’s cultural range.
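For concreteness, here is a minimal sketch of the similarity-based recommendation both points describe. Everything in it is invented for illustration: the feature vectors, tracks, and scores are hypothetical, and real platforms' models are proprietary and far more complex.

```python
# Minimal sketch of similarity-based recommendation
# (hypothetical feature vectors; real systems are proprietary).
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(seed, catalog, k=3):
    """Return the k catalog tracks whose vectors best match the seed."""
    ranked = sorted(catalog, key=lambda t: cosine_similarity(seed, catalog[t]),
                    reverse=True)
    return ranked[:k]

# Each dimension stands in for some audio feature (tempo, timbre,
# energy, ...). Choosing the features is itself an editorial act:
# whatever the vectors do not encode, the system can never surface.
catalog = {
    "In a Silent Way": np.array([0.2, 0.9, 0.3]),
    "Kind of Blue":    np.array([0.3, 0.8, 0.2]),
    "Bitches Brew":    np.array([0.7, 0.6, 0.9]),
}
print(recommend(np.array([0.25, 0.85, 0.25]), catalog, k=2))
```

The last comment is where Smith's argument lands: such a system can only recommend along the dimensions its designers thought to measure, which is why he wants gatekeepers with broader life experience and broader datasets.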

As artists experiment with generative tools, Smith’s own workflow offers a model for human-AI collaboration. He uses platforms like Suno to generate takes that he then arranges, mixes, and masters in his own DAW.

  • AI as a session player: Smith treats generative tools the way a producer treats hired musicians. "I treat it like a studio session where I have a singer, a bass player, and a drummer giving me a take," he says. "I'm the producer. If it's not good, we do another take. I've written the lyrics, the melody, the chords, and made the production choices. The AI is just performing it for me." In this model, the human remains composer and producer, and the machine supplies performance.
  • Attribution belongs in the metadata: That approach only works if authorship remains traceable. Without clear disclosure, AI-generated tracks risk severing the historical thread that connects artists to their influences. When a performance is machine-generated, its lineage can disappear unless the industry builds standards to preserve it. "Attribution must be included directly in the metadata. You should have to disclose whether a piece was AI-assisted or fully AI-generated. Even the prompts themselves should be included to help curators, librarians, and archivists understand the motivation behind the work," Smith says. Without those standards, influence becomes untraceable and musical history harder to map. A sketch of what that disclosure could look like follows below.
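As a concrete illustration, here is a minimal sketch of Smith's proposal written as custom ID3 text frames with the Python mutagen library. The frame names AI_GENERATION and AI_PROMPT are hypothetical, since no industry standard for AI provenance metadata exists yet; only the mutagen calls themselves are real.

```python
# Hedged sketch: AI-provenance disclosure as custom ID3 frames.
# The frame names (AI_GENERATION, AI_PROMPT) are hypothetical;
# there is no agreed industry standard for these fields yet.
from mutagen.id3 import ID3, TXXX

tags = ID3("track.mp3")  # assumes the file already carries an ID3 tag
tags.add(TXXX(encoding=3, desc="AI_GENERATION",
              text=["ai-assisted"]))  # or "fully-ai-generated" / "human"
tags.add(TXXX(encoding=3, desc="AI_PROMPT",
              text=["moody late-night ballad, brushed drums, sparse piano"]))
tags.save()

# Curators, librarians, and archivists can read the lineage back:
for frame in ID3("track.mp3").getall("TXXX"):
    print(frame.desc, "->", frame.text)
```

Carrying the prompt alongside the disclosure flag is what preserves the "motivation behind the work" Smith wants archivists to be able to see.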

Building that metadata layer requires industry standards and clearer disclosure from AI model creators. It also has to account for the fact that not every genre assigns value the same way. In pop, the value is often tied to producing a hit. But in jazz, the value is in the artist's journey, and the audience wants to hear the mistakes because they reveal the process of mastering an instrument. In that world, the imperfections contain the story.

The distinction highlights a core risk of AI: the loss of human context. Smith recalls generating a piece of music with a "pretty darn good trumpet solo," yet something about it felt "odd." He realized the feeling came from the performance having no story, producing a result that is technically impressive but culturally unmoored. "There's no context," Smith says. "That's the disconnect with AI and music, especially in jazz. It doesn't matter if you're playing a bunch of notes if I don't know that individual and their story."