“I actually think 10 years from now, you won’t be listening to music.”
The remark was made by Indian American venture capitalist and tech mogul Vinod Khosla, speaking at a fireside chat at Creative Destruction Lab’s second annual Super Session event in Toronto in June 2019.
It’s a shocking premise to virtually any music lover, but what does that mean, exactly?
According to Khosla, music as we know it will be replaced by what he called “custom song equivalents” — sonic landscapes custom designed to our individual brain structure, preferences, and emotional needs. Those custom song equivalents will be generated automatically by artificial intelligence (AI) systems that will know us better than we know ourselves.
The groundwork for the AI-generated music revolution is already being laid, says Khosla. In his talk, he pointed out that one of the newest trends in music streaming is to organize tracks not by genre or even artist, but by mood — a trend that corresponds to recent research into the links between musical preferences and brain structure.
A string of new companies is already eager to cash in on the AI music boom, ready to fill the gaps in an era of digital media that is video rich but cash poor. There is no money for human composers, the argument goes, and so more and more commercials, video games, and other applications will use music written by a machine. Streaming services like Spotify have a built-in incentive to promote machine-generated tracks: the margins are bigger without the licensing restrictions and fees paid to artists.
Classical music is often marketed these days for its calming and mood-enhancing properties, and may be particularly vulnerable to replacement by AI-generated alternatives — especially when music for movies is factored into the equation.
Khosla foresees custom playlists specific to the individual user, replacing the music industry’s current artist-driven model. The Creative Destruction Lab Super Session is the culmination of a program designed for tech companies just out of the box. Their second annual showcase was held at the Rotman School of Management, University of Toronto in June 2019. Other speakers included astronaut and first Canadian Commander of the International Space Station Chris Hadfield.
Khosla, an engineer by education, is a billionaire venture capitalist. A co-founder of Sun Microsystems, he went on to establish Khosla Ventures, based in California’s Silicon Valley, and was named among the 400 richest people in the world by Forbes Magazine in 2014.
But, does he truly understand the nature of music? Science itself suggests not — at least, not entirely.
There are times, it’s true, when music can be used as an escape from the surrounding world — as many commuters can attest. But even though the vast majority of people now listen to their music via earbuds or headphones, it’s crucial to note that music can be experienced in a variety of ways, with various implications.
Example of an AI-generated film score from Aiva Technologies
A study by researchers at l’Université Paul Valéry in Montpellier, France looked at the experience of concert audiences, a topic that has only received attention relatively recently. The social value of experiencing live music in a concert format has been documented by more than one such study, and those benefits come whether you attend concerts frequently or not. Both short-term and long-term communities are formed around the shared experience of a concert, and the social interaction is just as important as aesthetic preferences.
A broad study on the cross-cultural impact of music over human history was published by researchers from the Department of Psychology, University of Toronto Mississauga, among others, in 2015. It noted the lack of cultural perspectives on music in the research community, with its focus on the experience of solitary listeners and the experience at the neural level. Music’s significance stretches far beyond those boundaries.
The paper notes that music is a universal experience, but that not all cultures separate music per se from dance or other performance. Music is related to religious and other rituals, and bound to everyday life. Studies show that babies respond better to their mother’s voice singing than speaking.
There is a social component to music even when it is experienced alone. When music is performed alone, studies of music students show that the musician typically imagines a listener. When listening to music alone, in the modern Western model of iPods and earbuds, the music evokes memories and is woven into a social context. Music is linked to social cohesion and communal values.
Just listening to music enhances our ability to connect with other human beings. In an academic paper published in 2013, a music psychologist at the Freie Universität Berlin described the ways in which music adds to social contact and cooperation with others. At the neural level, listening to music activates the parts of our brain linked to empathy. And who — or what — the composer is does make a difference.
One study compared the responses of participants as they listened to a piece of music. The listeners were told that the music was composed either by a human being or by a computer — although, in fact, the researchers used the same piece of music for all of the respondents. If the participants believed the music was composed by a human, the cortical network associated with empathy and related qualities lit up. If they thought the music was computer generated, that network remained quiet. It matters who writes the music, because the listener considers not only the sound but also the meaning, and what the composer intended.
Those findings seem to be backed up by another study of primary-school-aged children who were divided into three groups. For one hour a week for the whole academic year, one group was exposed to musical games with other children, another to games but without music, and the third was left to their own devices. At the end of the year, the students exposed to musical games increased their empathy scores on tests by a significant margin.
The implication is that AI-generated music would respond only to a listener’s individual needs and preferences, without the socially enhancing benefits of human-composed music — isolating the isolated listener.
When the first Homo sapiens wandered north from the Mediterranean into Europe some 40,000 years ago, the hunter-gatherers brought flutes and drums with them. Certainly, in their hardscrabble lives, they could have spent their time on seemingly far more vital tasks than carving pipes from swan bones. Yet they chose to create music — music that was undoubtedly shared with their group.
There are some things that a custom song equivalent will never replace.
LUDWIG VAN TORONTO