Artificial intelligence rose to fame in late 2022 with the popularization of OpenAI’s ChatGPT. Since then, AI has become an increasingly integral part of everyday life.
With that rise, however, came fear of the power AI held. Few industries feared that power more than the music industry.
Now, three years after ChatGPT’s popularization, that fear is starting to be realized.
The line between creativity and code is blurring as AI becomes more involved in the songwriting process. Some see this as the dawn of a new creative era; others fear it is degrading what makes music human.
So how did we get here?
AI has opened a door long locked to rising independent artists, giving them access to songwriting and production styles they never could have used in the past. They can do this through programs like Suno, a platform that generates a complete song from a single text prompt.
Hybrid models, which fuse human input with artificial generation, are also proving successful. Research projects like Amuse explore how text, images and audio can inspire chords and melodies, helping artists overcome creative blocks.
None of this, however, comes without critics. The harsher among them say AI-generated works lack the emotional depth and authenticity of human expression. Kelly Jones of the Stereophonics said “art should come from the people.”
Peter Hook, of New Order and Joy Division, has gone on record saying that every song made by AI has been and always will be “shit.”
Then there is the legal uncertainty. Who owns an AI-generated voice? If an AI model was trained on existing music without permission, is that infringement? These questions are being asked in courts as we speak.
Country musician Anthony Justice and the firms Delgado Entertainment Law and Lovey + Lovey have filed a class-action lawsuit against Suno and another generative AI company, Udio. The suit alleges the companies scraped tens of millions of publicly available recordings from independent artists without their consent to train their AI models.
Suno and Udio have responded to the suits, contending that their use of copyrighted material falls under “fair use,” and have filed motions to dismiss on the grounds that the music their platforms generate does not contain samples of existing recordings.
This is just one example of the myriad legal issues this kind of technology has created, and it raises a larger question: is it worth using technology that creates more problems than it solves?
Platforms are already seeing an influx of “AI slop”: low-effort tracks that ride on the novelty of AI rather than artistic talent. Streaming platforms like Spotify have made progress in regulating it, reportedly removing large numbers of AI tracks that commit the same kind of infringement Suno and Udio are being sued over.
All of this has led to growing pressure for regulations. Laws like the ELVIS Act in Tennessee are early steps toward defining artists’ control over their voices and likenesses when AI is involved.
Artificial intelligence in music is no longer just a possibility; it is a certainty. Its role, however, remains uncertain. Will it be a net positive or a net negative? The answer may not be as black and white as it seems.
Music has always adapted to new tools. Synthesizers, sampling, digital audio workstations: the industry has survived and adapted through each of them. And each time it evolves, the same question comes up: “What makes art, art?”
In this age, it’s not just about what music sounds like. It’s about who makes it, and how they choose to do so.