
Hello, Brave New World!


8:02 a.m., July 12, 2019


Dear me (hopefully),

I don't even know if you'll be able to see this note, but I'll write it anyway, because I really need your help.

I'm trying to finish an assignment for NPR Music about the impact that technological advances will have on music streaming and listening in 20 years. The notion that we can be in full control of the music we consume on a biological and psychological level — like an always-on, always-adaptive mood playlist — seems radical at best, and detrimental to culture at worst. I'm hoping to unpack all the impending implications in written form before it's too late.

I've already talked with a handful of entrepreneurs, researchers and futurists to get a better sense of where we are currently (e.g., whether Spotify, Apple Music, Amazon Music and other incumbents are investing in biometric recommendation technology), and what kinds of questions we should be asking to inform future product and research decisions. But now I'm left with a dozen pages of interview transcripts filled with speculative, scientific jargon, and trying to go through and make sense of it all has, frankly, become quite time-consuming.

I have a lot of other work I need to get to in the coming weeks, so I was hoping you'd be able to lend me a helping hand. If I can't make sense of the future while stuck in the present, why not get the answers straight from the source?

With the help of some smart friends, I've managed to cobble together a jank-looking time machine — but it only has the bandwidth to send back and forth a tiny amount of matter, equivalent to a single sheet of paper, which hopefully you're reading right now.

If you find this note, could you respond with a quick summary of, to put it simply, WTF is going on with music in 2040? Then put it through the machine? For now, I'm most interested in learning how people listen to music — how, or if, recommendation algorithms have evolved, and whether the screens and visual interfaces on our music services look any different from today's.

Hope to hear from you soon — fingers crossed you even see this in the first place...

-- Cherie


9:15 p.m., July 13, 2040

Hello from the future,

First of all, great news — that machine, while very dusty and noisy, seems to still actually work! (It scared the crap out of me when it turned on by itself, flashed like a lightning bolt, and then said out loud: "Jump complete.") Thanks so much for your note. (And yes, I still love music.)

I'm glad you reached out to me about this, because our current — that is, your future? — music landscape is changing so quickly, beyond even my own recognition. And not all of the changes are great. I appreciate how busy you are (obviously — I remember it well), and will try to give you as much preliminary detail as I can fit.

Let's start by talking about screens and visual interfaces because, well, there are none. The dominant interfaces through which people consume music and culture nowadays are primarily conversational, physiological, psychological and neural, rather than visual.

If I recall correctly, smart speakers were all the rage back then. Sure, they did introduce the mainstream to the concept of music streaming experiences beyond our phones. But they failed to build compelling, screen-free voice experiences outside of the home, and remained relatively limited in the types of inputs their recommendation algorithms could handle.

To demonstrate how far we've come since then: There's a relatively new device in my world called YouNite, which debuted in Hong Kong in 2035 and is credited with building the first brain-computer interface for music consumption (take that, Neuralink). It's widely considered the next great music "format" revolution after static streaming services like Spotify (RIP), and everyone's using it. Walk down the street nowadays and you'll see people sporting a tiny metal sphere behind their right ear, similar to the "Experiencer Disks" from Black Mirror. Those are YouNite "beads," which curate music for you on the fly — no visual screens involved.

Unlike smart speakers, which operate at the resolution of individual sentences and conversations, YouNite operates at the level of individual heartbeats, breaths and even neurons. I can maybe explain in a later note, but the primary use case of a YouNite bead comes down to self-regulation — listening to music to improve one's physical and mental health and control one's negative emotions. In this vein, music has become both more unnoticeable in its omnipresence, like wallpaper, and more functional — generally, consumed to fulfill a certain mental or physiological task.

I'm running out of room on this sheet of paper, so I'll end it here and let you ask any follow-up questions. As you can imagine, there are lots of things that could go right, and wrong...

Best,

Cherie


2:57 p.m., July 14, 2019

Wow, I'm not surprised that YouNite literally looks and sounds like it came straight out of Black Mirror. But I guess that's what we're really asking for with the "Internet of Things," right — a completely "frictionless," "seamless" media experience that transcends the physical limitations of static hardware, molding itself directly to our minds and thought processes in real time.

I'm curious as to what exactly you mean by brain-computer interfaces bringing in "the next great format revolution" — because here in 2019, we're still debating how long streaming will last. Some have argued that we still have a long way to go in convincing more of the world's population to pay for a streaming subscription, particularly in markets like Asia, Africa and Latin America. Others suggest that there are maybe too many streaming services now, and that the market overall is close to reaching its saturation point.

Still others are already trying to predict and build for the next format. Take Lars Rasmussen, co-founder and CEO of adaptive music startup Weav (and former director of engineering at Facebook). With the company's first app, music can adapt in real time to your walking or running cadence, such that you're always exercising in sync with the beat — a phenomenon that has been shown to improve athletic performance.

"It's time for the concept of a record to change," he recently told me. "Records are still being streamed as single, static audio files, when they could become much more dynamic, especially as more of our listening is done on computers and mobile devices. Imagine always having a band with you, playing the perfect version of a given song for whatever you're currently doing — playing with kids, running, lovemaking — instead of having to listen to the same version every time."

I can't imagine traditional artists and record-label executives being open to this idea. But based on your description, it seems like YouNite has done precisely that — on a much more granular level in terms of monitoring, and with wider applications to health and wellness management in addition to other everyday activities. Could you share more details about how the YouNite bead actually works?

-- Cherie


4:41 p.m., July 15, 2040

Lars' ideas around how records would change were spot on.

Using a YouNite bead is relatively simple. Once you place it behind your ear, the operating system will turn on and start speaking to you in an androgynous voice, offering three modes of consumption: "Freestyle," "Stasis" and "Adaptation." Freestyle mode is the most similar to a standard jukebox-style streaming experience, in that you can shuffle any album or playlist you want, request a specific track or album with voice commands or control music from your phone. But in a world where everyone is shunning visual interfaces in the pursuit of ultimate convenience, the other two modes are ultimately your better friends.


Stasis mode, introduced two years after the bead first launched, allows you to use music to maintain a mental or physiological state of your choice. For instance, say you walked into a coffee shop today and wanted to stay as relaxed as possible during your visit. You could activate Stasis on your YouNite using a voice command ("launch Stasis for relaxation"), and the bead would automatically curate an infinite playlist of songs that it thinks will help you relax, based on predictive correlations with your heart rate, body temperature and breathing cadence. If you start showing key stress signals, such as a rising body temperature and/or a drop in heart rate variability (HRV), YouNite will send you an audio push notification to confirm that you are no longer relaxed and, with a single swipe by your ear, it will change the soundtrack accordingly.
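In case it helps you explain the curation logic to your readers, here's a crude sketch, in the Python of your era, of what one tick of that loop might look like. Every name and threshold below is my own invention for illustration, not anything from YouNite's actual code:

# Hypothetical sketch of one tick of Stasis mode ("launch Stasis for
# relaxation"). All names and thresholds are illustrative, not YouNite's.
from dataclasses import dataclass
import random

@dataclass
class Vitals:
    heart_rate: float   # beats per minute
    hrv_ms: float       # heart-rate variability, in milliseconds
    body_temp_c: float  # body temperature, in degrees Celsius

RELAXING_TRACKS = ["Weightless", "Clair de lune", "Gymnopedie No. 1"]

def is_relaxed(v: Vitals) -> bool:
    # Stand-in classifier: stress tends to raise heart rate and body
    # temperature and to suppress HRV, so "relaxed" is the opposite.
    return v.heart_rate < 80 and v.hrv_ms > 50 and v.body_temp_c < 37.2

def stasis_tick(v: Vitals, current_track: str) -> str:
    if is_relaxed(v):
        return current_track                # keep the soundtrack as-is
    print("Audio notification: you no longer seem relaxed. Swipe to adjust.")
    return random.choice(RELAXING_TRACKS)  # re-curate against current vitals

# A stressed reading (high heart rate, low HRV) swaps the soundtrack:
print(stasis_tick(Vitals(heart_rate=95, hrv_ms=30, body_temp_c=37.6), "Lo-fi beats"))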

Adaptation mode is where a truly new format comes to the forefront, and is definitely the most interesting sonically. A natural extension of Weav-style technology, it'll take any existing recording and adapt the rhythm, timbre and energy level in real time to an internal or external variable of your choice, including but not limited to your walking cadence, overall velocity (e.g. driving a car or riding a bike), heart rate, body temperature, weather, geographic location and number of people in the same room. For instance, if you listen to Ariana Grande all day and commute to a grueling office job during the week, her songs will sound smoother while you're driving to work on a crowded highway than when you get out of your car and walk into your office building, and will be slowest when you're sitting dormantly at your cubicle during a late night with no one else around. (Of course, there will still be just enough energy and volume to keep you awake and focused at your seat, which YouNite confirms by tracking biometric inputs in the background; incidentally, corporate accounts comprise a growing percentage of YouNite's annual revenue.)
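And to make the beat-matching slightly more concrete: here's a toy calculation, again my own invention rather than YouNite's real algorithm, showing how a system like this might pick a time-stretch ratio so that a track's tempo locks to your step cadence:

# A toy version of Adaptation mode's tempo mapping, invented for
# illustration only.

def adaptation_ratio(track_bpm: float, cadence_spm: float) -> float:
    """Choose a time-stretch ratio so the track's beat locks to the
    listener's steps per minute, allowing half- or double-time alignment
    so the required stretch stays as small (and natural) as possible."""
    candidates = (cadence_spm / 2, cadence_spm, cadence_spm * 2)
    target = min(candidates, key=lambda bpm: abs(bpm - track_bpm))
    return target / track_bpm

# A 100-BPM song for a runner at 170 steps per minute: the closest
# alignment is half-time (85 BPM), i.e. a stretch ratio of 0.85.
print(f"ratio: {adaptation_ratio(100.0, 170.0):.2f}")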

It can't be overstated how fundamentally different this paradigm for music curation is from what you're used to. To compare it to another example from around your time, Spotify's Daily Drive playlist wove audio snippets from news talk shows with personalized music recommendations. I recall the feature was heralded as innovative for combining multiple audio formats into a single interface, but it was still fundamentally limited in how it relied on metadata around past listening activity. In contrast, the music information retrieval (MIR) techniques used in YouNite draw on real-time and forward-looking predictions around both present physiological states and desired future emotional outcomes.

Hope this all makes sense?

-- Cherie


12:38 a.m., July 16, 2019

Of course the logical endpoint of all this is corporations using adaptive music to control their labor force — again, something I'm not sure artists will be entirely happy with.

But before diving into that, I want to address an arguably more important question: Do YouNite's biometrically and psychometrically generated recommendations actually work? It's really difficult to answer that question these days. Yes, recommendations have become incredibly sophisticated in their ability to parse content- and behavior-related consumption data on the surface. But whether, say, Spotify's "Sad Songs" playlists actually make listeners more sad remains an open question.

In fact, the answer might not even matter, because Spotify's business model as of now doesn't rely on diagnostic accuracy. As Nick Seaver, an assistant professor of anthropology at Tufts University who's working on a book about music recommendation algorithms, recently suggested to me over the phone: "It might be the case that Spotify can guess with some probability that you're sad, but their goal isn't to report back to you as a listener about what 'percentage' sad you are. The point is more vague: to get users to believe that Spotify knows what their emotions are, which allows the company to do things like land lucrative ad contracts." In that sense, seemingly creepy campaigns like Spotify's data-sharing partnership with 23andMe are merely instances of false mythologizing that overstates the company's technical capabilities for surveillance — for now.

Even startups using AI and body sensors to help users diagnose and manage holistic health are facing an uphill battle in making sure their products are actually functional in the first place. David Plans, CEO of digital therapeutics startup BioBeats, was rather frank about the challenges at hand during a recent phone call. "The music data perspective is only half the picture," he told me. "The real problem is the human: not just capturing physiological states, but also matching that to the emotional states and ambitions of the user. If I have data showing you have a high HRV, that suggests you're suffering from serious stress, but I can't tell you whether or not that stress is actually disrupting you emotionally. To find that out, I still kind of need to ask you."

And even if the diagnoses are accurate, what then? Say there's an option to choose "Sad" in YouNite's Stasis mode as the mental state a given user wants to maintain. I recently came across some studies suggesting that listening to music for channeling or curing one's negative emotions actually correlates with lower quality of life and higher levels of anxiety and depression. What responsibilities, if any, does YouNite have to treat us in the wake of a negative diagnosis? Is the verdict that they should leave us alone in our worst times — or intervene before it's too late? Has YouNite addressed these questions at all to the public?

-- Cherie


7:41 a.m., July 17, 2040

These are all such important questions that we're still debating — which goes to show you that emerging technologies will always develop much more quickly than our ability to grapple with them morally and legally.

In terms of diagnosis, our ability to detect calm, stress, anger and other mental and physiological states has definitely improved. There are now detailed, user-friendly dashboards where you can review your historical biometric and psychometric data, and even export it, in the event you need to meet with a healthcare provider.

But unfortunately, YouNite can't seem to make a decision about treatment, and about whether it owes any duty of care to its users. Everyone's asking now: If you use your YouNite bead the most when you're down in the dumps, should YouNite be responsible for "healing" you and making you feel happier? Or do they have the right to keep you just "optimally sad" enough that you keep using your bead on a daily basis? Company execs have been deflective during press conferences and investor calls, usually muttering something about "optimizing engagement."

There's also a giant elephant in the room that everyone knows about: advertising. Yes, media-tech companies like YouNite still rely heavily on ads to monetize their free tiers and drive paid-user acquisition. And perhaps the most alarming aspect of YouNite, which a growing number of consumers are protesting, is that the company could potentially reveal your stress levels and health status to advertisers and other third parties without your knowledge or consent — like how long-term-care insurers have access to potential customers' genetic testing results, except with a lot more data involved.

I also don't think it's a coincidence that a lot of the groundbreaking research that led to YouNite drew inspiration from procedurally generated game design. Biometrically and psychometrically mediated music only reinforces the gamification of life; it's not just about capturing and adapting to our emotions and behaviors, but also about controlling them, as if we were animated characters in an MMORPG.

The false mythologizing behind the Spotify/23andMe partnership is actually not that far off from the effect YouNite is having here, even though the diagnoses are more accurate. But regardless of its accuracy, YouNite is selling us a belief in the benefits of eternal self-optimization through media. The accompanying, obsessive mania around self-regulation has become its own draw, more than any kind of treatment, cure or musical experience. I'm afraid that the key phrase everyone uses to describe YouNite's value — "mood-hacking" — will simply become a justification for emotional repression and isolation, the same way that "bio-hacking" became a Silicon Valley-endorsed stand-in for eating disorders and starvation. As you suspect, artists aren't feeling ecstatic about their work being treated this way.

-- Cherie


11:02 a.m., July 18, 2019

Well, that's kind of depressing. I guess the more things seem to change, the more they stay the same...

And yes, let's please talk about artists! I feel like artists have found themselves between a rock and a hard place nowadays, simultaneously mistrusting streaming services and conforming to their whims because their bottom lines demand it. Songwriters face external pressure to write music with functional playlists like Spotify's "Teen Party" in mind, and fears still abound about faceless "fake artists" homogenizing and diluting playlist culture.

Of course, the mainstream music industry has historically adapted its creative output to the dominant technological formats of its time — e.g., cutting 22-minute sides for the first 12-inch vinyl records, or trimming song lengths for broadcast radio — so artists' tethered relationship to the current streaming format is, fundamentally, nothing new. And newer paradigms around biofeedback could be a positive rather than negative force for driving innovation in music. In recent years, Imogen Heap and Chagall van den Berg have given beautiful concert performances while wearing custom body sensors that turn movement into generative sound — a creative and academic realm known as "embodied music interaction" — and perhaps these artists were also astute predictors of how bodily motion would inform music consumption as well.

But it sounds like the impact of musical biofeedback has taken a turn for the worse. Why aren't artists happy?

-- Cherie


2:39 a.m., July 19, 2040

You're right in that music and technology have been intertwined for as long as the human race has been around. But many independent artists today argue that YouNite puts them at a significant disadvantage when it comes to building audiences and loyal fan bases.

For one, they argue, YouNite is mining, quantifying and optimizing culture for profit, and changing the nature of record labels as a result. Labels and publishers now prefer to sign artists and songwriters with formal training in neuroscience, psychology and/or machine learning, to help create, tag and optimize tracks for YouNite's Stasis and Adaptation modes. As a sign of its allegiance, YouNite has freely shared data with labels on the latest trends in what types of music its users consider "relaxing," "stressful" and "uplifting," which artists and songwriters then incorporate into their creative processes. Most of the resulting output is the opposite of cutting-edge, since provoking and challenging YouNite users would stop them from paying for the platform, and hence from paying for music.

Secondly, YouNite's model threatens the very idea of artist development altogether. In particular, local scenes and subcultures have historically been instrumental in surfacing groundbreaking talent, which then bubbles up to the mainstream (e.g., Atlanta's trap economy). But in order to sustain themselves, subcultures require a sense of community, in which multiple people bond over a shared identity that they collectively own.

From my observation, in YouNite's world, there's no chance of a "biometric subculture" for music ever succeeding. Everyone who owns a YouNite bead looks like they're stuck in their own world, talking and gesturing to themselves, unaware of the other realities being experienced around them (maybe that sounds familiar). Just as the rise of podcasts back then was (arguably) related to rising feelings of loneliness among certain populations, so is biometric media in danger of exacerbating our own navel-gazing at the expense of deeper community development. In 2040, the only shared cultures that exist are not in the social and artistic sense, but rather in the biological sense; we are giant dishes of microbes acting and transforming on command, in controlled experiments run by big tech.

-- Cherie


3:06 a.m., July 20, 2019

So much for "data-driven A&R" ... I guess at its most extreme, that concept just vanishes altogether.

Thank you so much for sharing your perspective — I have almost everything I need for this article. The only question I have left is: What do we do about it? There are several research groups already active in 2019 that could potentially be relevant to YouNite, including MIT's Affective Computing Research Group, Stanford's and Microsoft Research's Fairness, Accountability, Transparency and Ethics (FATE) group. At the same time, 20 years seems so far away that I almost feel helpless with respect to the real impact that I and other everyday people can have on the future of technological change. What would you suggest we do to best prepare?

-- Cherie


8:42 p.m., July 21, 2040

Dear Cherie,

In general, even more than better diagnostics (which we already have), we need more transparent privacy policies with respect to how tech companies like YouNite use our data around physical and mental health; more open discussions around whether these companies should be responsible for "treating" us versus exacerbating our existing behaviors; and systems in place to support physical cultural communities, so that artists can be at the steering wheel as much as consumers and tech companies.

Above all, just keep in mind that while the future is mind-blowing, it's never as blissful as you think it's going to be. We should be careful what we wish for.

-- Cherie


Cherie Hu writes regularly about the intersection of music and technology for Billboard, Forbes, Music Business Worldwide and Resident Advisor, and is also the owner of Water & Music, an independent music-industry newsletter and podcast.
