Hear the words ‘artificial intelligence’ and what do you think? Robots? Alexa? Haley Joel Osment? Whatever first comes to mind, the chances are that collaborative music composition and performance isn’t it. But that’s exactly what musicians Ben Hayes and Hector Plimmer will be exploring as they bring together music, artificial intelligence (AI) and machine learning (ML) for a unique Purcell Session performance.
In February, electro-musician Hayes, and DJ, producer and designer Plimmer will take up residency in Southbank Centre’s Purcell Room as they compose, rehearse and ultimately perform a live concert created in collaboration with artificial intelligence. Confused? Yep, we were a little, too. Which is why we sat down with the pair to get a better idea of how exactly this remarkable musical project will be realised.
Southbank Centre: How did you each get into music? Was there a particular moment, or experience that drew you in?
Hector Plimmer: Music was a big part of my household. My dad had a big set of shelves with records of all sorts of music, and I was encouraged to play and listen from an early age. I think that helped me to keep an open mind when it came to searching for music, and form my own tastes.
Ben Hayes: It was a family thing for me too. My dad was always playing guitar, my grandma was a pianist, and two of my cousins were musicians. I guess it just rubbed off. I think growing up just as internet connections were becoming fast enough to actually download music had a pretty big impact. There was a period of time when I was listening pretty indiscriminately and that definitely turned me onto all sorts of things I’d never have checked out otherwise.
SC: What have been your respective proudest moments in your musical careers to date?
HP: I’d have to say my two LPs ‘Sunshine’ and ‘Next to Nothing’. Together they are a pretty good representation of the musical blocks I’m made up of.
BH: Probably the first time I heard something I’d worked on come on in a public place. It’s a nice feeling realising that people actually want to listen to something you created.
SC: How did your collaboration come about? How did you meet, and when did you first start working together?
HP: We’ve vaguely known each other for a few years now; I think we first met at Brainchild Festival. It wasn’t until last year that I started working with Ben. I was asked to perform a piece of music that highlighted positive outcomes of human and AI collaboration at a dinner hosted by Nesta.
Purely by chance, Ben phoned me about something completely unrelated to any of this. I had heard he had worked somewhere that integrated AI, ML and music so I brought the performance up. He told me he’d been itching to work on something like this for ages and so he got involved in the project. After the performance for Nesta it was pretty clear that we should keep exploring this subject and here we are now.
BH: What Hector said. Though I had also been fanboy-ing over his work for a while before we finally met.
SC: What drew you to the Purcell Sessions project?
BH: The creative use of AI is in a strange place. The results of the deep learning boom are in the process of finding creative applications, and there are lots of possibilities. But at the same time there’s this sort of residual reticence to accept AI as a valid creative tool — maybe inherited from the (pretty understandable) response to some of its more nefarious applications.
For a lot of people, I think AI is associated with things like facial recognition, large-scale automation, and hyper-targeted advertising, so it represents something pretty dystopian, which has no place in art or music. So for me, doing the Purcell Sessions, and developing a concept through open rehearsals, seems like an ideal chance to contribute something positive to the narrative, and disentangle the very exciting creative possibilities of these technologies from their other applications.
SC: As you’ve just mentioned, your Purcell Sessions gig here at Southbank Centre is set to be a collaborative performance, not just with each other, but also with AI-generated music. Can you tell us how this will be realised?
BH: We’re going to be interacting with some deep learning models which have been trained to develop their own sort of understanding of music. The tools we’re building essentially ‘listen’ to us and each other, and learn to respond to what’s going on around them by sampling from their own internal representation of music. A big part of the rehearsal process during our time in the Purcell Room will be exploring these representations to find ideas that work, and then training the models to respond with these ideas when the time is right. In a way it’ll be like having rehearsals with AI band members.
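The ‘internal representation’ Hayes describes can be pictured as a compressed latent space: the model encodes a phrase it hears into a vector, samples a nearby point, and decodes that point back into music. The sketch below is purely illustrative (toy linear encoder and decoder in place of a trained network, invented dimensions); it shows the listen-sample-respond loop in miniature, not the duo’s actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a trained model: a linear "encoder" and "decoder".
# Real systems learn these mappings from large musical corpora.
LATENT_DIM, NOTE_DIM = 4, 16
W_enc = rng.standard_normal((NOTE_DIM, LATENT_DIM))
W_dec = rng.standard_normal((LATENT_DIM, NOTE_DIM))

def encode(phrase):
    """Map a musical phrase (here, a feature vector) into latent space."""
    return phrase @ W_enc

def decode(z):
    """Map a latent vector back into the musical feature space."""
    return z @ W_dec

def respond(heard_phrase, temperature=0.5):
    """'Listen' to a phrase, then sample a nearby point in latent space
    and decode it as the model's musical reply."""
    z = encode(heard_phrase)
    z_sampled = z + temperature * rng.standard_normal(z.shape)
    return decode(z_sampled)

heard = rng.standard_normal(NOTE_DIM)  # a phrase played by a human
reply = respond(heard)                 # the model's answering phrase
```

Because the reply is sampled rather than looked up, the same input phrase can produce different answers each time, which is where the unpredictability the pair mention later comes from.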
SC: How easy is it to work with, and respond to AI musicians in this way? Do you follow the AI musicians, or do they follow you?
HP: I’ll let Ben go into detail here, but personally, as someone new to AI and ML, my main hurdle has been accepting that the AI isn’t a human. It listens differently and reacts in unexpected, sometimes completely surprising, ways. This is a big part of why we’re doing this. The aim isn’t to create an AI musician who plays and responds exactly like a human would; we are trying to explore what a machine can contribute as a collaborator.
BH: Definitely both. They follow us, we follow them, and they even follow each other. And that feedback relationship is something we’re really excited to explore. I would agree with Hector though that unpredictability plays a big part. Humans are pretty good at extrapolating their experience to new situations, so if you do something a little differently during a performance they can handle it. The hope with AI is that it can learn to extrapolate in the same way, but it’s not a guarantee, so there’s always the chance that it’ll respond with something seriously unpredictable – especially when you’re working with abstract creative material like music.
SC: Am I right in saying that the technology you’re using to bring this performance to life is custom-built? Was that a challenging process?
BH: That’s right. Though everything we’re building stands on the shoulders of the amazing research done by some really incredible teams. For example, to create our AI’s ‘understanding’ of music we’ll be using the MusicVAE model developed by Magenta – the creative machine learning team at Google.
It’s definitely challenging. The biggest hurdle is taking these technologies and making them usable in a live situation. Cutting-edge deep learning techniques push modern computers to their limits, so part of the challenge is finding ways to get them to respond meaningfully in real time. It’s also an ongoing process – we’ll be continuing to develop and improve our tools right up until the residency.
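A common pattern for keeping a heavy model from stalling a live performance (a general technique, not necessarily what Hayes and Plimmer use) is to run inference on a background thread and pass phrases through queues, so the performance loop hands work off and keeps playing rather than waiting. A minimal sketch, with a deliberately slow stand-in model:

```python
import queue
import threading
import time

def slow_model(phrase):
    """Stand-in for an expensive deep learning model."""
    time.sleep(0.05)                  # simulated inference latency
    return [n + 12 for n in phrase]   # e.g. transpose up an octave

incoming = queue.Queue()   # phrases heard from the performers
outgoing = queue.Queue()   # the model's replies

def inference_worker():
    """Runs off the performance thread so playback never blocks."""
    while True:
        phrase = incoming.get()
        if phrase is None:     # shutdown sentinel
            break
        outgoing.put(slow_model(phrase))

worker = threading.Thread(target=inference_worker, daemon=True)
worker.start()

# The live loop hands off a phrase and keeps going; the reply is
# collected whenever it is ready instead of stalling the performance.
incoming.put([60, 64, 67])          # a C major arpeggio, as MIDI notes
reply = outgoing.get(timeout=2.0)   # a real loop would poll non-blockingly
incoming.put(None)
worker.join()
```

The design choice here is decoupling: the performers’ timing is never at the mercy of the model’s latency, which only affects how soon the AI’s answer arrives.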
SC: And you’re planning to include a visual element to the performance too. What form do you envisage that taking?
HP: We wanted to include a visual element in the performance to help the audience understand a bit more about what’s happening on stage. The visuals will help to show a bit of the process, but also differentiate between what Ben and I are contributing to the music and what the AI is.
SC: In advance of the performance, you’ll effectively be in residency with us for a few days as you compose and write with the AI tools in open rehearsals. How do you feel about allowing the audience to see your unfinished work?
HP: We’ll no doubt be learning a lot about the process ourselves during the residency, so I think it’ll be a great opportunity to share what we’re finding as we go.
BH: I’m very excited by it. We see this as an opportunity to demystify AI and ML and hopefully help some people feel empowered to apply it in their own creative work. There already exist some incredible tools for doing this, such as Rebecca Fiebrink’s Wekinator, ml5.js and others, and we hope to contribute in our own small way to this ecosystem by releasing what we create and making it open source so that others can use it and extend it.
SC: It’s a very distinctive project; what are you hoping to gain from it as artists? And what do you hope the audience will take from it?
HP: This is all a big experiment for me, and my main personal aim is to explore how far we can push this collaboration between humans and AI. I’m hoping that people may come out of it with a broader view on the possibilities of AI/ML in art and music, and see the potential of AI/ML as a collaboration tool rather than something that will replace the human role altogether.
BH: I feel very similarly to Hector about this. I have no doubt that as time goes on, AI will come to enable entirely new musical aesthetics and creative practices, in much the same way that other developments in music technology have; just think how much the digital audio workstation (DAW) changed the way music is created. I often think about what it would have been like to be making music when Robert Moog and Don Buchla were building their first synthesisers, and now, maybe, we can get to experience something like this with AI.