Google Magenta’s new Lo-Fi Player can make repetitive, electronic beats. Composing powerful music that moves you to tears will be a lot harder. [Pixabay | CC0]
When Bob Dylan said, “Creativity has much to do with experience, observation and imagination, and if any one of those key elements is missing, it doesn’t work,” he was likely talking about human musicians.
But now, a growing crop of programmers are building sophisticated songwriting tools that use artificial intelligence, or AI, to assist aspiring composers. Some of the computer programs can create entire pieces of music from scratch, but the results might not be as captivating as Dylan’s “Tangled Up in Blue” or “Hurricane.”
One of those tools is the Lo-Fi Player, a virtual room full of clickable, music-making objects. “I want to make music easier to play with for almost everyone,” explains Thio, its creator, who built the website while interning at Magenta, Google’s research program exploring AI as a tool for creators.
To create music, Thio’s website uses machine learning, an application of AI where computers learn from experience and adapt automatically, without human intervention. Two AI tools are hidden in the Lo-Fi Player’s virtual room. The TV, when clicked, can generate electronic beats by combining other beats. Or as Thio describes it, “Imagine creating a new face for a virtual sibling by mixing yours and your mom’s faces, but with music.” Another AI tool for generating melodies — called Melody RNN — is hidden in the radio. Click on it, and a new melody is instantly created.
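The beat-mixing trick Thio describes rests on a common machine-learning idea: encode each beat as a list of numbers (a “latent vector”) and blend the two lists, then decode the blend back into music. The sketch below illustrates only that blending step, with made-up four-number vectors standing in for the high-dimensional encodings a real model such as Magenta’s would produce.

```python
# Toy sketch of latent-space interpolation -- the idea behind
# "mixing" two beats into a new one. The vectors here are
# hypothetical stand-ins, not output from a real model.

def interpolate(z_a, z_b, alpha):
    """Blend two latent vectors: alpha=0 returns z_a, alpha=1 returns z_b."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

beat_a = [0.9, 0.1, 0.4, 0.7]   # hypothetical encoding of one beat
beat_b = [0.1, 0.8, 0.6, 0.3]   # hypothetical encoding of another

# Halfway between the two encodings: a new "sibling" beat that a
# decoder network would turn back into audio.
halfway = interpolate(beat_a, beat_b, 0.5)
print(halfway)
```

In a real system, the encoder and decoder are trained neural networks; the midpoint vector decodes to a beat that shares traits of both parents, much like Thio’s face-mixing analogy.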
Thio’s Lo-Fi Player is only one of the newest music creation tools in a rapidly expanding roster. In the last few years, AI has been used to create choral harmonies based on Bach’s music, compose hip-hop and jazz songs, and sing in the voices of different celebrities. In April, OpenAI — a nonprofit competitor to Google Magenta — unveiled Jukebox, an algorithm that can generate entire songs in the style of artists such as 2Pac, Ella Fitzgerald and Frank Sinatra.
Still, Robert Laidlow, a British composer and researcher who has used AI to create music for the BBC Philharmonic Orchestra, thinks that truly compelling AI-generated music is still a long way off.
“I’ve yet to hear a piece of AI-generated music that is either breathtakingly beautiful, or very surprising,” he says.
Laidlow relies on a variety of AI tools, each suited to a specific task, to make music. For a recent composition entitled “Alter,” he used MuseNet (from OpenAI) to generate unique melodies, and WaveNet (from DeepMind) to create human-sounding vocals.
Using a hodgepodge of tools suits Laidlow just fine, because he sees AI as more of a musical assistant — to make beats and simple melodies, for example — rather than as a replacement for human composers.
OpenAI, Sony and other companies have released full-length songs composed entirely by AI, but that’s not the best use of the technology, says Gus Xia, a computer scientist researching AI and music at New York University Shanghai.
“The goal is not to compose a piece fully from AI, from scratch and then — boom — you have a masterpiece, even though some day we could achieve that,” Xia says.
Even if it becomes technically possible to create convincing music entirely with AI in a few years, Xia thinks the results wouldn’t be exciting to listen to.
Laidlow agrees, pointing out that machines lack cultural context — they can’t tell timely stories that strike an emotional chord with listeners. “People like to know why a piece was written and what it responds to,” he says. And with AI, “I think that’s going to be a very difficult problem to overcome.”
In other words, even if AI could create a stunning piece of music, says Laidlow, “What would it value? Why would it write music about one thing rather than another?”
For now, the answer to those questions is, as Dylan would say, blowin’ in the wind.