Policy & Ethics

Stability AI Pitches Artist Payouts as Lawsuits Mount


Key Points

  • Stability AI plans to launch an opt-in marketplace to compensate artists for using their work in AI model training.
  • The move comes amid multiple copyright lawsuits, including a major suit from Getty Images over unauthorized use of 12 million photos.
  • New CEO Prem Akkaraju acknowledges internal disagreements over the use of copyrighted work, aiming to address artist concerns.
  • The initiative is seen as a potential solution to ongoing legal challenges and artist dissatisfaction.

Facing multiple copyright lawsuits, Stability AI announced it is developing an opt-in marketplace to compensate artists whose work is used for training its AI models, a significant shift detailed by new CEO Prem Akkaraju in an interview with the Financial Times.

A mountain of lawsuits: The pivot follows immense legal pressure, including a suit from Getty Images over 12 million photos. Artists like Sarah Andersen and Karla Ortiz also hit the company with a class-action lawsuit, alleging their work was used for training without consent.

Peace offering or PR?: The move addresses long-standing internal friction, which saw former audio lead Ed Newton-Rex resign because he disagreed that training on copyrighted work was “fair use.” Still, Akkaraju appears to be hedging, telling the FT that artist campaigns haven’t had that “profound of an impact” and insisting the company’s current use of “free-to-use data” is “the right way.”

An olive branch on a battlefield: Stability AI is extending a peace offering only after building its business on scraped data that has already drawn heavy legal fire. Whether artists accept it depends on whether the compensation model is a genuine fix or just a way to calm the waters while the lawsuits play out.

The wider view: The fight over training data is escalating, with major record labels suing AI music generators Suno and Udio. Beyond copyright, the societal impact of AI misuse remains a flashpoint, as seen with the AI-generated Taylor Swift images that drew White House concern. Meanwhile, some startups are skipping the controversy entirely by building ethical, opt-in AI models from the ground up.