The musical stylings of artificial intelligence


As a creative art form, music is found in every known culture throughout history, and it is often described as the universal language of humankind. While composers and performers rely on their own creative inspiration, artificial intelligence is changing how music is created altogether. Let’s examine how and why AI could create the next big hit:

Music – decoded & imitated 

In its most technical form, music is sound organized in time, with elements such as pitch, rhythm, dynamics and timbre. Traditionally this information is transcribed into sheet music, a form of musical notation. In many ways, written music is already a kind of code that can be fed to and parsed by algorithms. With enough data, AI could theoretically analyze a composer’s style, learn from it, and create original music within that style.
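To make that idea concrete, here is a minimal Python sketch of one way symbolic music can be represented as data. The (pitch, duration) encoding is purely illustrative, not the format of any particular AI system; once music is data like this, analyzing and transforming it becomes ordinary computation:

```python
# A minimal sketch of music as data: each note is a (pitch, duration) event.
# Pitches use MIDI note numbers (60 = middle C); durations are in quarter notes.
# This encoding is illustrative only, not any specific system's format.

melody = [
    (60, 1.0),    # C4, quarter note
    (62, 1.0),    # D4, quarter note
    (64, 2.0),    # E4, half note
    (None, 1.0),  # a rest
    (67, 1.0),    # G4, quarter note
]

def transpose(notes, semitones):
    """Shift every pitch by a fixed interval -- the kind of symbolic
    transformation that becomes trivial once music is represented as data."""
    return [(p + semitones if p is not None else None, d) for p, d in notes]

print(transpose(melody, 2))  # the same melody, two semitones higher
```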

BachBot is a research project from the University of Cambridge that investigated the compositional style of Johann Sebastian Bach, the famous German composer. Using deep sequence learning, the researchers built an AI composition system that composes music in the style of Bach. For the average listener, the AI-generated music is very hard to tell apart from actual Bach: the researchers found that participants could differentiate BachBot’s music from Bach’s original work only 9 percent better than random guessing.

Take the BachBot Challenge yourself and see if you can tell the difference between music composed by Bach and music composed by AI.
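Setting BachBot’s exact architecture aside, the core idea of deep sequence learning can be sketched in a few lines: train a recurrent network to predict the next musical token in a sequence, then sample from it to generate new music. The sketch below uses PyTorch with a placeholder vocabulary and random stand-in data; it is an assumption-laden illustration of the technique, not BachBot’s actual code:

```python
# Minimal sketch of next-token sequence learning over symbolic music.
# Vocabulary, sizes, and data are placeholders, not BachBot's real setup.
import torch
import torch.nn as nn

VOCAB_SIZE = 128  # e.g. one token per MIDI pitch (placeholder vocabulary)

class NextTokenLSTM(nn.Module):
    def __init__(self, vocab_size=VOCAB_SIZE, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)   # (batch, seq, embed_dim)
        out, _ = self.lstm(x)    # (batch, seq, hidden_dim)
        return self.head(out)    # logits over the next token at each step

model = NextTokenLSTM()
seq = torch.randint(0, VOCAB_SIZE, (1, 16))  # a random stand-in "chorale"
logits = model(seq)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, VOCAB_SIZE),  # predictions at each position
    seq[:, 1:].reshape(-1),                  # the actual next tokens
)
loss.backward()  # one training step's gradient; repeat over a real corpus
print(loss.item())
```

After training on tokenized chorales, repeatedly sampling from the model’s next-token distribution yields new sequences in the learned style.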

It’s no secret that composers and musicians develop their own style and technique by listening to other artists. With enough data, machine learning systems can do the same: learn from existing artists and generate original music in a style similar to a particular composer or genre.

Collaborating with AI


Rather than fear the advance of AI-generated music, many composers are embracing AI as a collaborator to push their work into new areas. Former American Idol contestant Taryn Southern’s 2018 album I AM AI was the first pop album to be co-written and co-produced with artificial intelligence. The album was produced using Amper, a music-making AI platform designed to work in collaboration with human musicians. Amper lets an artist pick a music genre and mood before generating a basic song. The artist can then tweak the song by adjusting the tempo, adding or removing instruments, and making other changes until they are happy with the final product.
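Amper’s real interface isn’t documented here, so the following is a purely hypothetical Python sketch of that generate-then-tweak workflow. `Song`, `generate_song`, and every parameter name are invented for illustration; this is not Amper’s actual API:

```python
# Hypothetical sketch of a genre-and-mood generation workflow like the one
# described above. All names are invented; this is not Amper's real API.
from dataclasses import dataclass, field

@dataclass
class Song:
    genre: str
    mood: str
    tempo_bpm: int = 100
    instruments: list = field(default_factory=lambda: ["piano", "drums"])

def generate_song(genre: str, mood: str) -> Song:
    """Stand-in for the platform's one-click generation step."""
    return Song(genre=genre, mood=mood)

# The artist generates a basic track, then tweaks it until satisfied.
song = generate_song(genre="pop", mood="uplifting")
song.tempo_bpm = 118                   # adjust the tempo
song.instruments.append("synth lead")  # add an instrument
song.instruments.remove("drums")       # remove another
print(song)
```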

In another case of AI aiding creativity, award-winning beatboxer Harry Yeff (aka Reeps One) collaborated with Nokia Bell Labs to produce We Speak Music – a documentary series that highlights the intersection of art and technology. 

Reeps One’s beatboxing was fed into an AI algorithm to create Second Self, an AI-simulated version of Yeff’s voice.

“We are living in an age of ever-engaging voice technology and we thought it would be good to bring his perspective on voice to that conversation,” said Domhnaill Hernon, Head of Experiments in Art and Technology (E.A.T) at Nokia Bell Labs. “Second Self is an AI digital twin of Reeps One that can beatbox. We trained machine learning algorithms on Yeff’s voice and after a period of time, the algorithms started giving us sounds and techniques that he never made in his lifetime and in the process has inspired Yeff to take his voice to whole new levels.”

The trailer for We Speak Music, a documentary series that follows beatboxer Reeps One around the world as he reflects on the beatboxing community and the technologies that can advance creativity.

Hernon believes AI technology will be employed to enhance the creative potential of artists, and of humanity in general. “It will be used as a tool in the general creative process flow to either automate some mundane tasks thereby giving the creative more time to think and/or it will be used as a tool to provide unique connections between things upon which the artist can enhance their creativity.”

Conversely, major labels are signing AI companies to record deals. Warner Music signed the little-known startup Endel to a 20-album contract, making Endel the first AI algorithm to sign a major-label deal. Endel generates personalized soundscapes to help people focus, relax and sleep. The algorithm is built around circadian rhythms, and its sound adapts to inputs such as the time of day, weather, heart rate and location. In typical AI fashion, all 600 songs across the 20-album deal were generated at the click of a single button.
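Endel’s actual algorithm is proprietary, so as a loose, hypothetical illustration of sound that adapts to inputs like time of day, weather and heart rate, consider this sketch; the function, the mapping, and every formula are invented for the example:

```python
# Hypothetical sketch of adaptive, input-driven sound generation in the
# spirit of the description above. Not Endel's actual algorithm.
from datetime import datetime

def soundscape_params(now: datetime, heart_rate_bpm: int, weather: str) -> dict:
    hour = now.hour
    # Follow a rough circadian curve: calmer at night, brighter at midday.
    energy = max(0.1, 1.0 - abs(hour - 13) / 13)
    if weather == "rain":
        energy *= 0.8                       # soften the palette in rain
    tempo = int(50 + 0.4 * heart_rate_bpm)  # nudge tempo toward the listener
    return {"energy": round(energy, 2), "tempo_bpm": tempo, "key": "A minor"}

print(soundscape_params(datetime.now(), heart_rate_bpm=72, weather="rain"))
```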

With the rise of “context playlists” – selections of songs that cater to moods as opposed to music genres – labels are looking for fast and easy wins with AI-generated playlists that attract millions of subscribers.

Creativity, copyright and art


In virtually every industry, technology moves faster than new laws can be written, and the music industry is no exception. Just as digital music and file-sharing challenged copyright law, AI-generated music blurs the line between creator and computer. For example, if a composer merely has to push a button to generate an original composition, should that music have copyright protection? Or if an artist’s body of work is analyzed by AI to mimic that artist’s style, should the artist be compensated? Can or should music created by an AI composer earn royalties?

Outside the legal ramifications of AI-generated music, the big debate among music purists is whether AI-assisted music can be considered art. Should composers who use AI be placed in the same category as artists who create music the traditional way?

Whatever the case may be, AI music is already being used to create backing tracks for videos and commercials, songs for context playlists, and even musical scores for movies. As these musical algorithms develop and analyze more data, it won’t be long before AI creates the next big hit – without human intervention.

UPDATED 11/27/2019 This article was updated to include quotes from Domhnaill Hernon on the “We Speak Music” series, Reeps One and Second Self.
