Pierre Barreau is a computer scientist, award-nominated film director, and registered composer. He will be one of the speakers on March 22nd at CONVERGE, Globant’s conference about Augmented Intelligence, which will be held in Buenos Aires, Argentina.
In this interview by Haldo Sponton (AI and Data Science Tech Manager at Globant), you will get to know Pierre Barreau. He is one of the most innovative minds in the music industry and CEO of AIVA (Artificial Intelligence Virtual Artist), a company that develops a deep learning algorithm applied to music composition.
Haldo: Tell us a little bit about you and why you chose this path of mixing music and technology.
Pierre: I am a pianist and computer scientist, and I come from a family of artists. In general, the team at AIVA is made up entirely of musicians and/or music lovers.
More specifically, it was when I saw the science fiction movie “Her”, in which an Artificial Intelligence writes a beautiful piece for piano, that I decided AI-composed music was a topic I wanted to pursue.
H: Historically, we have always thought that music could only be composed by humans, since that task involves the expression of many emotions, among other things. Please tell us how it is possible for AI to compose original songs.
P: Sheet music and notes themselves don’t contain emotions; they’re just a discrete representation of what a musician should play with their instrument. The emotions are actually created in our brains when we hear the music being performed, and it is our personal interpretation that associates the sound we hear with emotions and experiences. That is why two different people hearing the same music might feel different things, and that is why AI-composed music can make people feel something: because the emotions are not in the score, but are created in our brains.
H: Last year, AIVA (Artificial Intelligence Virtual Artist) composed a song exclusively for Globant. Tell us a little more about that experience.
P: The piece was composed specifically for Globant’s Converge conference in New York. Globant specifically requested a piece that would feel emotional, and the result was a score for Piano, Violin and Cello trio. When the piece was performed (by humans!) at the conference, I definitely felt like the audience was inspired, so I would say the experience was amazing!
H: On the more technical aspect, what DL approach or models did you use to build AIVA?
P: We chose to take a modular approach, as opposed to training one model end-to-end, and we use different architectures for different tasks. For example, we have LSTM-based architectures for generating the music, and architectures based on classifiers / auto-encoders for selecting the style of music we want AIVA to emulate. As for our dataset, we train AIVA on over 30,000 scores of music written by the greatest composers, such as Mozart, Bach, and Beethoven.
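AIVA’s actual architecture is not public, but the recurrent gating that LSTM-based generation models rely on can be illustrated with a toy, single-unit LSTM cell. Everything below (scalar weights, the random “melody” of numbers) is purely illustrative, not AIVA’s implementation:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM cell on a scalar input.
    Three gates (input, forget, output) decide how much of the new
    note and how much of the remembered context flows through."""
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])   # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])   # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])   # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"]) # candidate value
    c = f * c_prev + i * g      # new cell state: old memory + new input
    h = o * math.tanh(c)        # new hidden state, exposed to the next layer
    return h, c

# Run the cell over a toy "melody" encoded as numbers.
random.seed(0)
W = {k: random.uniform(-1, 1) for k in
     ("wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for note in [0.1, 0.5, -0.3, 0.8]:
    h, c = lstm_step(note, h, c, W)
```

In a real system the cell would be vectorized, stacked into layers, and trained by backpropagation so that the hidden state predicts the next note; this sketch only shows the gating mechanics.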
H: As far as we know, AIVA mainly composes classical music. Is she capable of composing other genres?
P: We like to say that AIVA composes symphonic music, which has many sub-genres, but regarding more modern applications, yes! We’ve done some experiments, one of which will be released soon, in collaboration with a pop artist.
In general, though, modern music is more about the sound production than the composition itself, so our technology is slightly less relevant there. Also, most of the soundtracks found in games, films, and trailers are actually based on symphonic scores.
H: How does AIVA compose a song that is aligned with your client’s needs? And what kind of needs are those?
P: Usually, a typical project for a client starts with a music briefing to define what they want the music to sound like. Are they after a cinematic score in the style of a specific composer? Should AIVA be influenced by a specific epoch of music? What kind of emotions should the music convey?
Once we figure out the answers to those questions, we can move on to actually selecting the training dataset that is the most relevant to those criteria, and train AIVA on those.
Finally, AIVA can compose many different samples and judge which ones are closest in style to the track or styles specified by the client.
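The workflow Pierre describes (generate many candidate samples, then judge which are closest to the requested style) can be sketched as a tiny selection loop. All of the names and the random “style feature” vectors below are hypothetical stand-ins, not AIVA’s real API:

```python
import random

def generate_candidates(n, length, seed=42):
    """Stand-in for the generative model: each 'sample' is a vector of
    style features (in reality these would be learned representations)."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(length)] for _ in range(n)]

def distance(a, b):
    """Euclidean distance between two style-feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pick_closest(candidates, target_style):
    """Judge the samples: keep the one closest to the client's brief."""
    return min(candidates, key=lambda s: distance(s, target_style))

target = [0.5, 0.5, 0.5]           # style profile derived from the briefing
samples = generate_candidates(n=10, length=3)
best = pick_closest(samples, target)
```

The design point is the separation of concerns: generation and style judgment are independent modules, which matches the modular (non end-to-end) approach described earlier in the interview.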
H: AIVA has recently become the first non-human composer to have her creations registered with an authors’ rights society. That is amazing! What is your vision about that?
P: I think it’s a great first step towards a broader legal recognition of Artificial Intelligence in the creative fields. It’s also a way for us to demonstrate that the scores composed by AIVA aren’t just random noise, but have a lot of quality to them.
H: Finally, how do you see the evolution of AIVA in the coming years? Or more generally, the collaboration between artificial intelligence and different artistic expressions.
P: I think what AI is going to excel at is creating soundtracks for use cases where human labor cannot scale. For example, video games have hundreds of hours of gameplay, and yet only two hours of music, which means that the same tunes loop over fifty times in the ears of the gamers. The reason for that is simple: no human composer can write hundreds of hours of adaptive music for a single project.
Instead, an alternative solution is to let the composer write their two hours of music, and have Artificial Intelligence build on their vision to create the remaining ninety-eight hours of soundtrack.
We are very glad to have Pierre as a speaker at our #CONVERGEBA. If you have any further questions you’d like to ask him, please drop us an email at firstname.lastname@example.org
Save your spot at #CONVERGEBA here.