
With Charlie Puth recently having been appointed as Chief Music Officer of the AI music platform Moises, algorithmic tools seem closer to mainstream acceptance. But among working creators, the reception is mixed. Audiences are already showing signs of fatigue when they encounter obvious AI content online, and creators remain skeptical about how platforms will use their work or likeness. That raises the question: Where does the human perspective fit in this evolution, and what more needs to be done to make mainstream acceptance a reality?
Bradley Johnson is Director and Head of Design at Twelve South, a premium tech accessories company. He has helped scale consumer products from early-stage growth into major revenue drivers and has designed best-selling products for brands like Under Armour. Now incorporating AI into his day-to-day workflow, he sees algorithmic tools not as a threat, but as a high-speed communication layer for creative work.
"AI is revealing that art is fundamentally a human point of view. When paired with someone who has a unique perspective on the world, it becomes the right combination for how AI in the arts will be adopted and embraced," says Johnson. For many commercial designers, AI tools most commonly serve as a visualization engine, helping stakeholders see an artist’s concepts faster. But Johnson notes this utility still relies heavily on consumer psychology: people might be intrigued by a synthetic output, but they usually look for a connection with the person behind it.
- Craving the creator: Johnson points out that listeners eventually gravitate toward the person behind the work. "When you think about music you love, you eventually learn to love the musician and their personality," he says, referencing producer Rick Rubin’s philosophy of music as a deeply human experience in which the listener connects with the creator on a fundamental level. From this perspective, music loses much of its value without the creator.
- Art on autopilot: The human perspective matters for more than connection, however; technical perfection fails without that human anchor. Johnson recalls seeing an algorithmically generated art display at an airport about serious real-world issues that missed the mark. "Because it wasn’t coming from a real person, it was hard to take seriously," he says. "As AI becomes more common, you miss that human perspective."
As algorithmic suggestions creep into more enterprise software and design platforms, creators like Johnson are using AI tools to make ideas easier to refine. That might mean isolating specific details in complex audio via stem separation or building richer product visuals for internal reviews. Johnson views these tools as a natural extension of creative iteration, comparing the dynamic to the historical evolution of jazz or modern remix culture.
- Sketching on steroids: "A designer will do a sketch by themselves, and then use an AI program to create a realistic rendering," Johnson says. "Beyond that, we’re seeing short animated videos of a pair of legs in a realistic setting wearing an interpretation of that rendering. Just the upfront investment of time and effort to get that result wasn’t realistic before. But now with AI, it is very possible, and it is totally something that people like me can utilize."
- Downloading kung fu: Johnson says the software helps people with strong ideas bypass a lack of formal training, comparing the instant enablement to downloading skills in The Matrix. "If you have an original idea or something really cool you want to share, but you maybe lack the basic talent or skills to present it in the way you imagine it, you are not as held back as you used to be."
- Cloning the voice: But lowering the barrier to entry for creation also lowers the barrier to intellectual property theft, as Johnson points out. "A marketing department I worked with recreated my voice," he recalls. "That was great because if I were narrating an ad and forgot a line, they could easily add it in. But it’s also disconcerting that someone could use my cloned voice to make it sound like I said something I never would. In this case, I knew the marketing team would never do that, but it does make you wonder what would happen if my voice got into the hands of people who would." Johnson equates using a person’s voice or likeness without permission to unlicensed sampling. "You can’t just take someone’s song and add it to your own without paying a royalty for a recognizable part of the melody. Why would taking someone’s voice or likeness be any different? If anything, it is even more egregious."
Current creative contracts often treat consent in broad strokes, relying on legal frameworks built long before algorithmic cloning was possible. As convincing media becomes easier to generate, and as large tech companies pour billions into speech and voice technology, platforms will need clearer rules. Johnson does not see this friction as a new phenomenon: When the printing press made it easy to reproduce text, societies eventually developed libel and slander laws to respond. He views generative software as a modern version of that cycle: a powerful utility that exposes gaps in existing rules, often creating pressure to update them.
"These challenges simply require time and deliberate effort," Johnson says. "Once that is figured out and dealt with, I think it’s just going to be a great tool for speed and productivity, and it’s going to open doors that were previously locked to so many creators and so many people. As long as we get it right, like so many technologies, it will be extremely beneficial. So the important thing is that we just get it right."
