Tag: policy

Spotify Strengthens AI Protections for Artists, Songwriters, and Producers

Music has always been shaped by technology. From multitrack tape and synthesizers to digital audio workstations and Auto-Tune, every generation of artists and producers has used new tools to push sound and storytelling forward. 

However, the pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives. At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push “slop” into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors.

The future of the music industry is being written, and we believe that aggressively protecting against the worst of generative AI is essential to unlocking its potential for artists and producers.

We envision a future where artists and producers are in control of whether and how they incorporate AI into their creative processes. As always, we leave those creative decisions to artists themselves while continuing our work to protect them against spam, impersonation, and deception, and providing listeners with greater transparency about the music they hear.

This journey isn’t new to us. We’ve invested massively in fighting spam over the past decade. In fact, in the past 12 months alone, a period marked by the explosion of generative AI tools, we’ve removed over 75 million spammy tracks from Spotify.

AI technology is evolving fast, and we’ll continue to roll out new policies frequently. Here is where we are focusing our policy work today:

    • Improved enforcement of impersonation violations
    • A new spam filtering system
    • AI disclosures for music with industry-standard credits

Stronger impersonation rules

The issue: We’ve always had a policy against deceptive content. But AI tools have made generating vocal deepfakes of your favorite artists easier than ever before.

What we’re announcing: We’ve introduced a new impersonation policy that clarifies how we handle claims about AI voice clones (and other forms of unauthorized vocal impersonation), giving artists stronger protections and clearer recourse. Vocal impersonation is only allowed in music on Spotify when the impersonated artist has authorized the usage. 

We’re also ramping up our investments to protect against another impersonation tactic—where uploaders fraudulently deliver music (AI-generated or otherwise) to another artist’s profile across streaming services. We’re testing new prevention tactics with leading artist distributors to equip them to better stop these attacks at the source. On our end, we’ll also be investing more resources into our content mismatch process, reducing the wait time for review, and enabling artists to report a mismatch even in the pre-release stage.

Why it matters: Unauthorized use of AI to clone an artist’s voice exploits their identity, undermines their artistry, and threatens the fundamental integrity of their work. Some artists may choose to license their voices to AI projects—and that’s their choice to make. Our job is to do what we can to ensure that the choice stays in their hands.

Music spam filter

The issue: Total music payouts on Spotify have grown from $1B in 2014 to $10B in 2024. But big payouts entice bad actors. Spam tactics such as mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop have become easier to exploit as AI tools make it simpler for anyone to generate large volumes of music.

What we’re announcing: This fall, we’ll roll out a new music spam filter—a system that will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them. We want to be careful not to penalize the wrong uploaders, so we’ll roll the system out conservatively over the coming months and continue adding new signals as new schemes emerge.
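Spotify has not published the filter’s actual signals or thresholds, but the tactics named above lend themselves to simple heuristics. The sketch below is purely illustrative: every name, weight, and cutoff (the batch size, the 31-second duration) is a hypothetical stand-in, not Spotify’s implementation.

```python
# Illustrative sketch only: all signal names and thresholds here are
# hypothetical, chosen to show how mass uploads, duplicates, and
# artificially short tracks could trip rule-based checks.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    duration_sec: int
    audio_fingerprint: str  # stand-in for a perceptual audio hash

def spam_signals(uploads: list[Track]) -> dict[str, bool]:
    """Return which hypothetical spam heuristics an upload batch trips."""
    fingerprints = [t.audio_fingerprint for t in uploads]
    return {
        # Mass uploads: an unusually large batch from one uploader.
        "mass_upload": len(uploads) > 100,
        # Duplicates: the same audio delivered under multiple titles.
        "duplicates": len(set(fingerprints)) < len(fingerprints),
        # Short-track abuse: most tracks barely past a payout threshold
        # (31 seconds used purely as an illustrative cutoff).
        "short_tracks": sum(t.duration_sec <= 31 for t in uploads) > len(uploads) // 2,
    }

batch = [Track("Rain Sounds 1", 31, "fp_a"), Track("Rain Sounds 2", 31, "fp_a")]
flags = spam_signals(batch)  # same fingerprint twice, both 31s long
```

A production system would combine many such signals with learned models and human review; the point here is only that each tactic the post lists maps to a checkable pattern in upload metadata.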

Why it matters: Left unchecked, these behaviors can dilute the royalty pool and divert attention from artists playing by the rules. Our new music spam filter will protect against this behavior and help prevent spammers from generating royalties that could otherwise be distributed to professional artists and songwriters.

AI disclosures for music with industry-standard credits

The issue: Many listeners want more information about what they’re listening to and the role of AI technology in the music they stream. And, for artists who are responsibly using AI tools in their creation processes, there’s no way on streaming services for them to share if and how they’re using AI. We know the use of AI tools is increasingly a spectrum, not a binary, where artists and producers may choose to use AI to help with some parts of their productions and not others. The industry needs a nuanced approach to AI transparency, not to be forced to classify every song as either “is AI” or “not AI.” 

What we’re announcing: We’re helping develop and will support the new industry standard for AI disclosures in music credits, developed through DDEX. As this information is submitted through labels, distributors, and music partners, we’ll begin displaying it across the app. This standard gives artists and rights holders a way to clearly indicate where and how AI played a role in the creation of a track—whether that’s AI-generated vocals, instrumentation, or post-production. This change is about strengthening trust across the platform. It’s not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made.
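Because the standard describes a spectrum rather than an “is AI / not AI” binary, a disclosure is naturally structured as per-role information attached to a track’s credits. The sketch below is hypothetical: the real field names and vocabulary are defined by the DDEX standard, not shown here; this only illustrates the per-role, spectrum-style disclosure the post describes.

```python
# Hypothetical shape of an AI disclosure in music credits. The actual
# DDEX field names and controlled vocabulary differ; every key and
# value below is an illustrative assumption.
ai_disclosure = {
    "track_title": "Example Song",
    "ai_usage": [
        {"role": "vocals", "ai_involvement": "none"},
        {"role": "instrumentation", "ai_involvement": "ai_assisted"},
        {"role": "post_production", "ai_involvement": "ai_generated"},
    ],
}

def summarize(disclosure: dict) -> str:
    """Render a listener-facing summary of which roles involved AI."""
    used = [u["role"] for u in disclosure["ai_usage"] if u["ai_involvement"] != "none"]
    if not used:
        return "No AI use disclosed"
    return "AI used in: " + ", ".join(used)

summary = summarize(ai_disclosure)
```

The per-role structure is what makes the nuance possible: a track can disclose AI-generated post-production while its vocals remain fully human, instead of being forced into a single all-or-nothing label.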

This is an effort that will require broad industry alignment, and we’re proud to be working on this standard alongside a wide range of industry partners, including Amuse, AudioSalad, Believe, CD Baby, DistroKid, Downtown Artist & Label Services, EMPIRE, EmuBands, Encoding Management Service – EMS GmbH, FUGA, IDOL, Kontor New Media, Labelcamp, NueMeta, Revelator, RouteNote, SonoSuite, Soundrop, and Supply Chain.

Why it matters: By supporting an industry standard and helping to drive its wide adoption, we can ensure listeners see the same information, no matter which service they’re listening on. And ultimately, that preserves trust across the entire music ecosystem, as listeners can understand what’s behind the music they stream. We see this as an important first step that will undoubtedly continue to evolve.


While AI is changing how some music is made, our priorities are constant. We’re investing in tools to protect artist identity, enhance the platform, and provide listeners with more transparency. We support artists’ freedom to use AI creatively while actively combating its misuse by content farms and bad actors. Spotify does not create or own music; this is a platform for licensed music where royalties are paid based on listener engagement, and all music is treated equally, regardless of the tools used to make it.

These updates are the latest in a series of changes we’re making to support a more trustworthy music ecosystem for artists, for rightsholders, and for listeners. We’ll keep them coming as the tech evolves, so stay tuned.

How Spotify Approaches Safety With Sarah Hoyle, Global Head of Trust and Safety

Spotify’s mission is to unlock the potential of human creativity by giving a million artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it. In support of that endeavor, our global teams work around the clock to ensure that the experience along the way is safe and enjoyable for creators, listeners, and advertisers. 

While there is some user-generated content on Spotify, the vast majority of listening time is spent on licensed content. Regardless of who created the content, our top priority is to allow our community to connect directly with the music, podcasts, and audiobooks they love. When we think about the safety aspect of this, it can be helpful to do so in the context of seeing a show at a performance venue.

Like a performance venue, Spotify hosts different types of shows across a variety of genres. Not every show may be suitable for all audiences or in line with everyone’s unique tastes. Just like people select which shows they want to see, Spotify provides opportunities for users to seek out and curate content that they like and that is appropriate for their preferences. For example, users can skip music tagged by creators or rights holders as “explicit” by using our content toggle. Mobile users can block artists or songs they wish to hide and exclude playlists from their taste profiles or use the “not interested” button to better control their experiences.

While Spotify strongly supports enabling creative expression and listener choice, this does not mean that anything goes. In the same way that a venue has rules to ensure that shows run smoothly and are safe, Spotify has Platform Rules to guide what’s acceptable content and behavior on our platform. Bad behavior at a concert can lead to things like backstage access being revoked or, in egregious situations, someone being kicked out of the venue. Breaking Spotify’s rules can have consequences like removal, reduced distribution, and/or demonetization. We will also remove content that violates the law and/or our Terms of Service. Creators or rights holders may also choose to remove content themselves.

Measures we continue to take around responsible content recommendations and search also play key roles in creating a safe and enjoyable experience. For example, product and engineering teams across the company work with Trust & Safety to conduct impact assessments that allow us to evaluate and better mitigate potential algorithmic harms and inequities. We’ve also been introducing search warnings and in-app messaging for users searching for content related to suicide, self-harm, and disordered eating, which link to Spotify’s Mental Health Resources page. This work is being done in partnership with experts like Ditch the Label and the Jed Foundation with the goal of connecting potentially at-risk users with trusted help resources.

Keeping our platform safe is a challenging job and, as the landscape evolves, we’re committed to evolving along with it. Safety is a company-wide responsibility and our efforts involve ongoing coordination between engineers, product managers, data scientists, researchers, lawyers, and social impact experts, as well as the policy and enforcement experts in Trust & Safety. Many of the folks on these teams have long careers in online safety, as well as in fields like human rights, social work, academia, health care, and consulting. We have also established an internal Safety Leadership group that regularly brings executives from different departments together to help ensure awareness of safety needs and monitor progress on our efforts. 

To complement our in-house expertise, we also seek counsel and feedback from third-party experts around the world, including our Safety Advisory Council, to ensure we’re considering multiple points of view when shaping our safety approach. In 2022, we invested in the local and linguistic expertise of start-up Kinzen, now known as the Content Safety Analysis team within Spotify, which has a nuanced understanding of the global safety landscape and works proactively to deliver a safe and enjoyable experience across our content offerings. 

Click here to learn more about Spotify’s approach to safety.

US Senators, Tech CEOs, and More Make Their Voices Heard in Our Fight for Fair Competition on the Latest ‘For the Record’ Podcast

Over the past few years, it’s become abundantly clear that Apple tilts the playing field. It does this in favor of its own services in order to disadvantage rivals and make it harder for companies like Spotify, and so many others, to compete. This behavior harms consumers and app developers—and it stifles innovation from companies just trying to get off the ground.

This is about much more than just Spotify, which is why we have publicly advocated for platform fairness and pushed for expanded payment options, among other things, for a number of years. We are committed to fighting for fair competition, which, in turn, will unleash innovation as well as choice for consumers.

Today, we released a special episode of Spotify: For the Record featuring a chorus of voices who are as passionately focused as we are on creating a level playing field for all. Tune in to hear from our CEO, Daniel Ek; U.S. Senators Amy Klobuchar (D-Minnesota), Marsha Blackburn (R-Tennessee), and Richard Blumenthal (D-Connecticut); Agrin Health CEO Karen Thomas; Fanfix CEO Harry Gestetner; Schibsted CEO Kristin Skogen Lund; and ProtonMail CEO Andy Yen as they express their concerns about the impact of Apple’s unfair App Store rules on consumers and innovators alike and discuss the need for action.

Take a glimpse at what each of them had to say. 

“Our view is quite simple. We think that there needs to be regulation in this space. We think it is one where it has to make it clear that you as a developer or a company should be able to interact with your consumers. You should have the ability to bring new innovations to the market on equal terms as the platforms themselves, and that there should be a choice for how these consumers should be able to pay for goods and services on these platforms. And that can’t be dictated by Apple.” – Daniel Ek, CEO, Spotify

“The news that Apple plans to let rival app stores operate on iPhones in Europe shows that the arguments against our bill were simply scare tactics designed to stop it, and that’s why we must pass it.” – U.S. Senator Amy Klobuchar

“At the end of the day, what I would ask Tim Cook is to please support my bill. If you’re not doing any of these bad things, why not support the bill? If you’re in favor of competition and innovation, support the bill. If you believe that there’s no unfair charges, or rents, or whatever—no copy and kill. Support the bill.” – U.S. Senator Richard Blumenthal  

“It doesn’t matter if you’re Democrat, or Republican, or another party affiliation, app developers and innovators are saying we have an issue with market access and there is a way to solve this problem.” – U.S. Senator Marsha Blackburn  

“I think we have to re-envision what an app store is and the boundaries and the barriers that they put up in terms of gatekeepers . . . Status quo isn’t even an option anymore. We’re at a fork in the road. So either we pass this legislation and we send a signal to Apple and Google to say that monopoly won’t work—you’re going to have to behave better and participate in a free market—or we don’t.” – Karen Thomas, CEO, Agrin Health 

“I do think the majority of Gen Z is probably pretty unaware, but it’s going to take things like this and small businesses speaking out, creators speaking out, waking consumers up to the fact that this is going on and this is impacting their daily lives.” – Harry Gestetner, co-CEO, Fanfix 

“Probably almost the worst issue is that Apple blocks us from having access to data about our own customers. So that means we don’t know what kind of subscriptions our customers have bought via the Apple system. It means that we will either lose our business altogether or we will have very unhappy customers.” – Kristin Skogen Lund, CEO, Schibsted 

“The lack of people speaking up isn’t because there is no problem. The lack of people speaking up is actually a sign of the problem because people are so afraid that they’re just afraid to even say anything. And if that is the state of the Internet today, then I think that’s a terrible place for the world to be.” – Andy Yen, CEO, ProtonMail

And they’re all coming together with more than any single company at stake: “I’m fighting not because of just Spotify, but because I truly, at the core of my being, believe this is right,” Daniel Ek noted in the episode. “And it’s very important for the future of the economy and for app developers and creators alike.” 

We know that fair and open platforms enable better consumer experiences and allow developers to grow and thrive. When this happens, everybody wins.

Hear for yourself in the episode below. 

Access the full episode transcript here

Spotify Policy Update

Spotify recently shared a new policy around hate content and conduct. While we believe our intentions were good, the language was too vague, it created confusion and concern, and we didn’t spend enough time getting input from our own team and key partners before sharing new guidelines.

It’s important to note that our policy had two parts. The first related to promotional decisions in the rare cases of the most extreme artist controversies. As some have pointed out, this language was vague and left too many elements open to interpretation. It created concern that an allegation might affect artists’ chances of landing on a Spotify playlist and negatively impact their futures. Some artists even worried that mistakes made in their youth would be used against them.

That’s not what Spotify is about. We don’t aim to play judge and jury. We aim to connect artists and fans – and Spotify playlists are a big part of how we do that. Our playlist editors are deeply rooted in their respective cultures, and their decisions focus on what music will positively resonate with their listeners. That can vary greatly from culture to culture, and playlist to playlist. Across all genres, our role is not to regulate artists. Therefore, we are moving away from implementing a policy around artist conduct.

The second part of our policy addressed hate content. Spotify does not permit content whose principal purpose is to incite hatred or violence against people because of their race, religion, disability, gender identity, or sexual orientation. As we’ve done before, we will remove content that violates that standard. We’re not talking about offensive, explicit, or vulgar content – we’re talking about hate speech.

We will continue to seek ways to impact the greater good and further the industry we all care so much about. We believe Spotify has an opportunity to help push the broader music community forward through conversation, collaboration and action. We’re committed to working across the artist and advocacy communities to help achieve that.

Spotify Announces New Hate Content and Hateful Conduct Public Policy

UPDATE: Since this posting on May 10, the policy outlined below regarding hate content and hateful conduct has been updated, and Spotify is moving away from implementing a policy around artist conduct. You can find our updated position here. While we believe our intentions were good, the language was too vague, it created confusion and concern, and we didn’t spend enough time getting input from our own team and key partners before sharing new guidelines.

We have tens of millions of tracks on Spotify, growing by approximately 20,000 recordings a day. Nothing makes us more excited than discovering and sharing that music. One of the most amazing things about all that music is the range of genres, cultures, experiences, and stories embodied in it. We love that our platform is home to so much diversity because we believe in openness, tolerance, respect, and freedom of expression, and we want to promote those values through music on our platform.

However, we do not tolerate hate content on Spotify – content that expressly and principally promotes, advocates, or incites hatred or violence against a group or individual based on characteristics including race, religion, gender identity, sex, ethnicity, nationality, sexual orientation, veteran status, or disability.

Today, we are announcing our policy on Hate Content and Hateful Conduct. You can read the whole policy here, but here are the basics:

When we are alerted to content that violates our policy, we may remove it (in consultation with rights holders) or refrain from promoting or playlisting it on our service. It’s important to us that our values are reflected in all the work that we do, whether it’s distribution, promotion, or content creation.

At the same time, however, it’s important to remember that cultural standards and sensitivities vary widely. There will always be content that is acceptable in some circumstances, but is offensive in others, and we will always look at the entire context.

To help us identify hate content, we have partnered with rights advocacy groups, including The Southern Poverty Law Center, The Anti-Defamation League, Color Of Change, Showing Up for Racial Justice (SURJ), GLAAD, Muslim Advocates, and the International Network Against Cyber Hate. We also built an internal content monitoring tool, Spotify AudioWatch, which identifies content on our platform that has been flagged as hate content on specific international registers. And we listen to our users – if you think something is hate content, please let us know and we will review it carefully against our policy.

We’ve also thought long and hard about how to handle content that is not hate content itself, but is principally made by artists or other creators who have demonstrated hateful conduct personally. We work with and support artists in different ways – we make their music available on Spotify and help connect them to new and existing fans, we program and promote their music, and we collaborate with them to create content. While we don’t believe in censoring content because of an artist’s or creator’s behavior, we want our editorial decisions – what we choose to program – to reflect our values. So, in some circumstances, when an artist or creator does something that is especially harmful or hateful (for example, violence against children and sexual violence), it may affect the ways we work with or support that artist or creator.

This is our first iteration of this new policy. These are complicated issues, and we’re going to continue to revise our Policy on Hate Content and Hateful Conduct. We’ll make some mistakes, we’ll learn from them, and we’ll always listen to you as we work to keep building the Spotify platform.