Spotify Strengthens AI Protections for Artists, Songwriters, and Producers

Music has always been shaped by technology. From multitrack tape and synthesizers to digital audio workstations and Auto-Tune, every generation of artists and producers has used new tools to push sound and storytelling forward. 

However, the pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives. At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push “slop” into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors.

The future of the music industry is being written, and we believe that aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers.

We envision a future where artists and producers are in control of how or if they incorporate AI into their creative processes. As always, we leave those creative decisions to artists themselves while continuing our work to protect them against spam, impersonation, and deception, and providing listeners with greater transparency about the music they hear.

This journey isn’t new to us. We’ve invested massively in fighting spam over the past decade. In fact, in the past 12 months alone, a period marked by the explosion of generative AI tools, we’ve removed over 75 million spammy tracks from Spotify.

AI technology is evolving fast, and we’ll continue to roll out new policies frequently. Here is where we are focusing our policy work today:

    • Improved enforcement of impersonation violations
    • A new spam filtering system
    • AI disclosures for music with industry-standard credits

Stronger impersonation rules

The issue: We’ve always had a policy against deceptive content. But AI tools have made generating vocal deepfakes of your favorite artists easier than ever before.

What we’re announcing: We’ve introduced a new impersonation policy that clarifies how we handle claims about AI voice clones (and other forms of unauthorized vocal impersonation), giving artists stronger protections and clearer recourse. Vocal impersonation is only allowed in music on Spotify when the impersonated artist has authorized the usage. 

We’re also ramping up our investments to protect against another impersonation tactic—where uploaders fraudulently deliver music (AI-generated or otherwise) to another artist’s profile across streaming services. We’re testing new prevention tactics with leading artist distributors to equip them to better stop these attacks at the source. On our end, we’ll also be investing more resources into our content mismatch process, reducing the wait time for review, and enabling artists to report “mismatch” even in the pre-release state.

Why it matters: Unauthorized use of AI to clone an artist’s voice exploits their identity, undermines their artistry, and threatens the fundamental integrity of their work. Some artists may choose to license their voices to AI projects—and that’s their choice to make. Our job is to do what we can to ensure that the choice stays in their hands.

Music spam filter

The issue: Total music payouts on Spotify have grown from $1B in 2014 to $10B in 2024. But big payouts entice bad actors. Spam tactics such as mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop have become easier to exploit as AI tools make it simpler for anyone to generate large volumes of music.

What we’re announcing: This fall, we’ll roll out a new music spam filter—a system that will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them. We want to be careful to ensure we’re not penalizing the wrong uploaders, so we’ll roll the system out conservatively over the coming months and continue adding new signals as new schemes emerge.
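
To make the idea concrete, here is a rough sketch of what a conservative scoring pass over uploads could look like. The signal names, weights, and threshold are illustrative assumptions, not a description of the actual filter.

    from dataclasses import dataclass

    @dataclass
    class UploadSignals:
        """Illustrative per-uploader signals; names and thresholds are assumptions, not Spotify's system."""
        tracks_uploaded_last_30d: int   # mass-upload volume
        near_duplicate_ratio: float     # share of uploads that near-match existing audio (0..1)
        median_track_seconds: float     # artificially short tracks are a known abuse pattern
        keyword_stuffing_score: float   # metadata/SEO manipulation heuristic (0..1)

    def spam_score(signals: UploadSignals) -> float:
        """Combine signals into a single 0..1 score; weights are placeholders."""
        score = 0.0
        if signals.tracks_uploaded_last_30d > 1000:
            score += 0.4                                # mass uploads
        score += 0.3 * signals.near_duplicate_ratio     # duplicates
        if signals.median_track_seconds < 35:
            score += 0.2                                # artificially short tracks
        score += 0.1 * signals.keyword_stuffing_score   # SEO hacks and metadata manipulation
        return min(score, 1.0)

    def enforcement_action(signals: UploadSignals, threshold: float = 0.7) -> str:
        """Tag and stop recommending rather than remove outright, to limit the cost of false positives."""
        return "tag_and_exclude_from_recommendations" if spam_score(signals) >= threshold else "no_action"

The key design point is the conservative action: flagged uploads are tagged and excluded from recommendations rather than deleted, which limits the harm of a false positive while new signals are added over time.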

Why it matters: Left unchecked, these behaviors can dilute the royalty pool and impact attention for artists playing by the rules. Our new music spam filter will protect against this behavior and help prevent spammers from generating royalties that could otherwise be distributed to professional artists and songwriters.

AI disclosures for music with industry-standard credits

The issue: Many listeners want more information about what they’re listening to and the role of AI technology in the music they stream. And for artists who are responsibly using AI tools in their creation processes, there’s no way on streaming services to share if and how they’re using AI. We know the use of AI tools is increasingly a spectrum, not a binary, where artists and producers may choose to use AI to help with some parts of their productions and not others. The industry needs a nuanced approach to AI transparency, not a forced classification of every song as either “is AI” or “not AI.”

What we’re announcing: We’re helping develop, and will support, the new industry standard for AI disclosures in music credits, created through DDEX. As this information is submitted through labels, distributors, and music partners, we’ll begin displaying it across the app. This standard gives artists and rights holders a way to clearly indicate where and how AI played a role in the creation of a track—whether that’s AI-generated vocals, instrumentation, or post-production. This change is about strengthening trust across the platform. It’s not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made.
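
The exact fields will be defined by the DDEX standard itself. Purely to illustrate the “spectrum, not binary” idea, a per-track disclosure passed from a distributor to a streaming service might look something like the sketch below; the field names are hypothetical placeholders, not the DDEX schema.

    from dataclasses import dataclass

    @dataclass
    class AIUsageDisclosure:
        """Hypothetical per-track AI disclosure; field names are placeholders, not the DDEX schema."""
        track_isrc: str                              # placeholder identifier for the recording
        ai_generated_vocals: bool = False
        ai_generated_instrumentation: bool = False
        ai_assisted_post_production: bool = False

        def summary(self) -> str:
            parts = [label for label, used in (
                ("vocals", self.ai_generated_vocals),
                ("instrumentation", self.ai_generated_instrumentation),
                ("post-production", self.ai_assisted_post_production),
            ) if used]
            return ("AI used in: " + ", ".join(parts)) if parts else "No AI use disclosed"

    # Example: a track that used AI tools only in post-production
    disclosure = AIUsageDisclosure(track_isrc="EXAMPLE-0001", ai_assisted_post_production=True)
    print(disclosure.summary())  # -> AI used in: post-production

Because each part of the production is flagged separately, a display surface can show nuance (for example, “AI-generated vocals”) instead of a blunt “is AI” label.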

This is an effort that will require broad industry alignment, and we’re proud to be working on this standard alongside a wide range of industry partners, including Amuse, AudioSalad, Believe, CD Baby, DistroKid, Downtown Artist & Label Services, EMPIRE, EmuBands, Encoding Management Service – EMS GmbH, FUGA, IDOL, Kontor New Media, Labelcamp, NueMeta, Revelator, RouteNote, SonoSuite, Soundrop, and Supply Chain.

Why it matters: By supporting an industry standard and helping to drive its wide adoption, we can ensure listeners see the same information, no matter which service they’re listening on. And ultimately, that preserves trust across the entire music ecosystem, as listeners can understand what’s behind the music they stream. We see this as an important first step that will undoubtedly continue to evolve.


While AI is changing how some music is made, our priorities are constant. We’re investing in tools to protect artist identity, enhance the platform, and provide listeners with more transparency. We support artists’ freedom to use AI creatively while actively combating its misuse by content farms and bad actors. Spotify does not create or own music; this is a platform for licensed music where royalties are paid based on listener engagement, and all music is treated equally, regardless of the tools used to make it.

These updates are the latest in a series of changes we’re making to support a more trustworthy music ecosystem for artists, for rightsholders, and for listeners. We’ll keep them coming as the tech evolves, so stay tuned.

Spotify Opens Up Support for ElevenLabs Audiobook Content

As Spotify continues to roll out audiobooks to new listeners worldwide, we’re committed to providing authors with tools that help them reach those listeners and to lowering the barrier to entry so that more authors than ever can have their books heard.

Starting today, we’re excited to begin accepting audiobooks from ElevenLabs, an AI software company that provides easy-to-use, high-quality voice narration technology.

For authors looking for a cost-effective way to create high-quality audiobooks, digital voice narration by ElevenLabs is a great option. Authors can use the ElevenLabs platform to narrate their audiobooks in 29 languages, with complete control over voice and intonation.  

In one of our most requested features, authors can now distribute their ElevenLabs content to Spotify and select other audiobook retailers via Findaway Voices, reaching millions of new listeners and book fans.

How to quickly and easily submit ElevenLabs audiobooks on Spotify

  • After creating their digital voice-narrated audiobook in ElevenLabs, authors simply download the file package and upload it to Findaway Voices by Spotify.
  • Following a standard review, the book will go live on Spotify, as well as other retailers that accept digital voice-narrated titles from Findaway Voices.

For listeners, all digitally narrated titles will be clearly marked in the metadata on Spotify and across the other retailers where Findaway Voices distributes. The first sentence of each book description will state, “This audiobook is narrated by a digital voice.”

We previously announced support for digital voice-narrated content created with Google Play Books, and will continue to find ways for more authors and publishers to make and distribute their audiobook content to Spotify. To learn more, be sure to visit elevenlabs.io.

Mark Zuckerberg and Daniel Ek on Why Europe Should Embrace Open-Source AI: It Risks Falling Behind Because of Incoherent and Complex Regulation, Say the Two Tech CEOs

Editor’s Note: At Spotify, we believe that AI has the potential to offer real benefits for innovation and creators. Read on for thoughts from our CEO, Daniel Ek, and Meta CEO Mark Zuckerberg on the promise of open-source AI and its ability to drive progress and create economic opportunity globally.

This is an important moment in technology. Artificial intelligence (AI) has the potential to transform the world—increasing human productivity, accelerating scientific progress and adding trillions of dollars to the global economy.

But, as with every innovative leap forward, some are better positioned than others to benefit. The gaps between those with access to build with this extraordinary technology and those without are already beginning to appear. That is why a key opportunity for European organisations is through open-source AI—models whose weights are released publicly with a permissive licence. This ensures power isn’t concentrated among a few large players and, as with the internet before it, creates a level playing field.

The internet largely runs on open-source technologies, and so do most leading tech companies. We believe the next generation of ideas and startups will be built with open-source AI, because it lets developers incorporate the latest innovations at low cost and gives institutions more control over their data. It is the best shot at harnessing AI to drive progress and create economic opportunity and security for everyone.

Meta open-sources many of its AI technologies, including its state-of-the-art Llama large language models, and public institutions and researchers are already using these models to speed up medical research and preserve languages. With more open-source developers than America has, Europe is particularly well placed to make the most of this open-source AI wave. Yet its fragmented regulatory structure, riddled with inconsistent implementation, is hampering innovation and holding back developers. Instead of clear rules that inform and guide how companies do business across the continent, our industry faces overlapping regulations and inconsistent guidance on how to comply with them. Without urgent changes, European businesses, academics and others risk missing out on the next wave of technology investment and economic-growth opportunities.

Spotify is proud to be held up as a European tech success but we are also well aware that we remain one of only a few. Looking back, it’s clear that our early investment in AI made the company what it is today: a personalised experience for every user that has led to billions of discoveries of artists and creators around the world. As we look to the future of streaming, we see tremendous potential to use open-source AI to benefit the industry. This is especially important when it comes to how AI can help more artists get discovered. A simplified regulatory structure would not only accelerate the growth of open-source AI but also provide crucial support to European developers and the broader creator ecosystem that contributes to and thrives on these innovations.

Regulating against known harms is necessary, but pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation. Europe’s risk-averse, complex regulation could prevent it from capitalising on the big bets that can translate into big rewards.

Take the uneven application of the EU’s General Data Protection Regulation (GDPR). This landmark directive was meant to harmonise the use and flow of data, but instead EU privacy regulators are creating delays and uncertainty and are unable to agree among themselves on how the law should apply. For example, Meta has been told to delay training its models on content shared publicly by adults on Facebook and Instagram—not because any law has been violated but because regulators haven’t agreed on how to proceed. In the short term, delaying the use of data that is routinely used in other regions means the most powerful AI models won’t reflect the collective knowledge, culture and languages of Europe—and Europeans won’t get to use the latest AI products.

These concerns aren’t theoretical. Given the current regulatory uncertainty, Meta won’t be able to release upcoming models like Llama multimodal, which has the capability to understand images. That means European organisations won’t be able to get access to the latest open-source technology, and European citizens will be left with AI built for someone else.

The stark reality is that laws designed to increase European sovereignty and competitiveness are achieving the opposite. This isn’t limited to our industry: many European chief executives, across a range of industries, cite a complex and incoherent regulatory environment as one reason for the continent’s lack of competitiveness.

Europe should be simplifying and harmonising regulations by leveraging the benefits of a single yet diverse market. Look no further than the growing gap between the number of homegrown European tech leaders and those from America and Asia—a gap that also extends to unicorns and other startups. Europe needs to make it easier to start great companies, and to do a better job of holding on to its talent. Many of its best and brightest minds in AI choose to work outside Europe.

In short, Europe needs a new approach with clearer policies and more consistent enforcement. With the right regulatory environment, combined with the right ambition and some of the world’s top AI talent, the EU would have a real chance of leading the next generation of tech innovation.

We believe that open-source AI can help European organisations make the most of this new technology by levelling the playing field, and we hope that the EU doesn’t limit the possibilities that we are only starting to explore. Though Spotify and Meta use AI in different ways, we agree that thoughtful, clear and consistent regulation can foster competition and innovation while also protecting people and giving them access to new technologies that empower them.

While we can all hope that with time these laws become more refined, we also know that technology moves swiftly. On its current course, Europe will miss this once-in-a-generation opportunity. Because the one thing Europe doesn’t have, unless it wants to risk falling further behind, is time.

Mark Zuckerberg is the founder and chief executive of Meta. Daniel Ek is the founder and chief executive of Spotify.


Originally published at https://www.economist.com/by-invitation/2024/08/21/mark-zuckerberg-and-daniel-ek-on-why-europe-should-embrace-open-source-ai © The Economist Newspaper Limited, London, 2023

How Spotify Uses Design To Make Personalization Features Delightful

Every day, teams across Spotify leverage AI and machine learning to apply our personalization capabilities on a large scale, leading to the features, playlists, and experiences Spotify users have come to know and love. And when you spend your days working with emerging technologies, it’s easy to get transfixed by complicated new advancements and opportunities. So how do our forward-thinking teams ensure they can tackle this technical work while also prioritizing the experience of our users? 

That’s a question constantly on the mind of Emily Galloway, Spotify’s Head of Product Design for Personalization. Her team’s role is to design content experiences that connect listeners and creators. This requires understanding our machine learning capabilities as they relate to personalization to leverage them in a way that is engaging, simple, and fun for our users. 

“Design is often associated with how something looks. Yet when designing for content experiences, we have to consider both the pixels and decibels. It’s more about how it works and how it makes you feel,” Emily explains to For the Record. “It’s about being thoughtful and intentional—in a human way—about how we create our product. I am a design thinker and a human-centric thinker at my core. People come to Spotify to be entertained, relaxed, pumped up, and informed. They come for the content. And my team is really there to think about that user desire for personalized content. What are we recommending, when, and why?”

The Personalization Design team helps create core surfaces like Home and Search, along with much-loved features like Discover Weekly, Blend, and DJ. So to better understand just how to think about the design behind each of these, we asked Emily a few questions of our own.

How does design thinking work to help us keep our listeners in mind?

When you work for a company, you know too much about how things work, which means you are not the end user. Design helps us solve problems by thinking within the user’s mindset. It’s our job to be empathetic to our users. We have to put ourselves in their shoes and think about how they experience something in their everyday life. A big thing to keep in mind is that when people use Spotify, their phones are often in their pockets and they look at the screen in quick, split-second moments.

Without design, the question often becomes, “How do we do something technically?” For those of us working at Spotify, we understand how or why we’re programming something technically in a certain way, but users don’t understand that—nor should they have to. What they need is to experience the product positively, to get something out of it. We’re accountable for creating user value. We really are there to keep the human, the end user, at the forefront. 

Without this thinking, our products would be overcomplicated. Things would be confusing and hard to use, from a functionality perspective. Good design is about simplicity and should largely remain invisible. 

But design is also additive: It adds delight. That’s what I love about projects like DJ or Jam that are actually creating connection and meaning. Design is not afraid to talk about the emotional side—how things make you feel. 

How does design relate to personalization?

Personalization is at the heart of what we do, and design plays an important role in personalization.  

Historically, Spotify’s personalization efforts happened across playlists and surfaces like Home and Search. But over time we utilized new technologies to drive more opportunities for personalization. It started with a Hack Week project that became Discover Weekly, our first successful algorithmically driven playlist. That gave way to Blend, which was designed for a more social listening experience, and more recently to DJ, our new experience that harnesses the power of AI and editorial expertise to help tell artists’ stories and better contextualize their songs. It utilizes an AI voice that makes personalization possible like never before—and it’s a whole new way for our listeners to experience Spotify’s personalization.

When designing personalized experiences like these, we must think “content first,” knowing people come to Spotify for the content. Design ultimately makes it feel simple and human and creates experiences that users love. If recommendations are a math problem, then resonance is a design problem.

But we also have to have what I like to call “tech empathy”—empathy for the technology itself. My team, which is a mix of product designers and content designers, has to understand how the technology works in order to design recommendation experiences around it. Personalization designers need to understand the ways in which we’re working with complex technology like machine learning, generative AI, and algorithms. Our designers need to consider what signals we’re getting that will allow our recommendations to get better in real time and over time. And when a recommendation is wrong, or a user just wants a different mood, we need to design mechanisms for feedback and control. That really came into play when we developed our AI DJ.
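
To make “signals” and “feedback and control” a little more concrete, here is a small illustrative sketch; the event names and weights are assumptions for illustration, not how our systems actually work.

    from dataclasses import dataclass

    @dataclass
    class FeedbackEvent:
        """Hypothetical listener feedback signal; names are illustrative only."""
        user_id: str
        track_id: str
        kind: str    # e.g. "like", "completed", "skip", "change_the_vibe"

    def adjust_affinity(current_affinity: float, event: FeedbackEvent) -> float:
        """Nudge a 0..1 affinity score in response to a feedback event."""
        direction = {"like": 1.0, "completed": 0.5, "skip": -0.7, "change_the_vibe": -1.0}
        delta = 0.1 * direction.get(event.kind, 0.0)
        return max(0.0, min(1.0, current_affinity + delta))

    # Example: a skip slightly lowers the affinity that shapes future recommendations
    print(adjust_affinity(0.6, FeedbackEvent("u1", "t1", "skip")))  # roughly 0.53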

Tell us the story of the inception of DJ.

We’re always trying to create more meaningful connections between listeners and creators in new and engaging ways. And we use technology to deliver this value. DJ is the perfect example of how we’re driving deeper, more meaningful connections through technology.

Prior to generative AI, pulling off a “trusted friend” DJ would have required thousands of writers, voice actors, and producers—something that wasn’t technically, logistically, or financially possible. Now, new technologies have unlocked quality at scale. Xavier “X” Jernigan’s voice and personality deliver on our mission of creating more meaningful connections to hundreds of millions of people. Generative AI made the once impossible feel magical.

To bring DJ to life, we answered some core experiential questions, knowing we were taking listeners on a journey with both familiar and unfamiliar music. We asked questions such as: What does it mean to give context to listening? How do we visualize AI in a human way? You can see this in how the DJ introduces itself in a playful way—owning that it’s an AI that doesn’t set timers or turn on lights.

We also put a lot of thought into how we designed the character, since it is more than a voice. 

Ultimately, we really wanted to lean into making it feel more like a trusted music guide, as well as having an approachable personality. So much of our brand is human playfulness, so we made a major decision to acquire Sonantic and create a more realistic, friendly voice. And that led to Xavier training the model to be our first voice. His background and expertise made him the perfect choice.

With new technologies like generative AI, what are some of the new ways you’re thinking about your team and their work?

I’m challenging our team to think differently about the intersection of design and generative AI. We keep coming back to the conclusion that we don’t need to design that differently because our first principles still stand true. For example, we are still taking a content-first approach and we continue to strive for clarity and trust. We’ve realized that tech advancements are accelerating faster than ever, which makes design’s role more important than ever. 

Because there’s so much more complexity out there with generative AI, human needs must be kept in mind even more. At the end of the day, if our users aren’t interested in a product or they don’t want to use it, what did we create it for?

Emerging technology inspires you to think differently and to look from different angles. The world is trying to figure this out together, and at Spotify we’re not using technology to use technology. We’re using technology to deliver joy and value and meet our goals of driving discovery and connections in the process.