Researchers fed a patient’s brain scans into Google’s AI, which was able to reproduce snippets of a tune the patient had recently listened to. (Source: Adobe Stock)

Research: Google AI Reads fMRIs to Recreate Music Derived from Brain Signals

From time to time everyone gets a song stuck in their head, often called a musical earworm. Now AI researchers say they can identify the song and even recreate it, up to a point.

Artificial intelligence can produce music that sounds similar to tunes people were listening to while their brains were scanned, a collaborative study from Google and Osaka University shows. Carissa Wong’s story on livescience.com delves into the brain-scan study and the findings the researchers drew from a fairly simple process.

By examining a person’s brain activity, AI “can produce a song that matches the genre, rhythm, mood, and instrumentation of music that the individual recently heard,” according to the story.

fMRI

Previously, scientists have “reconstructed” sounds from brain activity, such as human speech, bird songs, and horse whinnies. For this study, the researchers reconstructed music, which is far more complex, from brain signals.

Researchers chose five research subjects and conducted functional magnetic resonance imaging (fMRI) while each listened to 15-second snippets of music. The scientists then tracked the brain regions where blood flow increased as the subjects reacted to the music.

“Now, researchers have built an AI-based pipeline, called Brain2Music, that harnesses brain imaging data to generate music that resembles short snippets of songs a person was listening to when their brain was scanned. They described the pipeline in a paper, published July 20 to the preprint database arXiv, which has not yet been peer-reviewed.”
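To make the idea more concrete, here is a rough, hypothetical sketch of how a Brain2Music-style pipeline could work in principle: learn a mapping from fMRI voxel responses to a music-embedding space, then retrieve the candidate clip whose embedding best matches the prediction. This is not the authors’ actual code; the data, dimensions, library choices (Python with scikit-learn), and names below are placeholders for illustration only.

```python
# Hypothetical sketch of a Brain2Music-style pipeline (not the authors' code).
# Idea: learn a linear map from fMRI voxel responses to a music-embedding space,
# then pick the candidate clip whose embedding is closest to the prediction.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder shapes: 400 training snippets, 10,000 voxels, 128-dim music embeddings.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 10_000))   # fMRI responses recorded while hearing known clips
Y_train = rng.normal(size=(400, 128))      # embeddings of those clips (e.g., from a music model)

# One decoder per subject, since brain responses differ from person to person.
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# At test time: predict an embedding from a new scan, then retrieve the nearest clip.
x_test = rng.normal(size=(1, 10_000))              # scan taken while hearing an unknown clip
candidate_embeddings = rng.normal(size=(50, 128))  # embeddings of a candidate music library
predicted = decoder.predict(x_test)
best_match = int(np.argmax(cosine_similarity(predicted, candidate_embeddings)))
print(f"Closest candidate clip: #{best_match}")
```

In the published pipeline the retrieved or predicted embedding is used to condition a music-generation model; the retrieval step above simply illustrates the decoding idea in a few lines.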

Using AI in the Tests

The scientists personalized the data they fed into the AI for each subject, since each person reacts differently to various instruments and music genres. Wong explains how the researchers conducted the tests that led to the results. Retrieving a recent memory and having AI imitate the song that was in the test subject’s head may sound like sci-fi movie stuff, but it’s happening in reality.

“The agreement, in terms of the mood of the reconstructed music and the original music, was around 60%,” study co-author Timo Denk, a software engineer at Google in Switzerland, told Live Science. “The method is pretty robust across the five subjects we evaluated,” Denk said. “If you take a new person and train a model for them, it’s likely that it will also work well.”

Essentially, the AI reads a person’s brain scan taken while they heard a song and generates short clips approximating it, which you can hear online.

read more at livescience.com