Artificial intelligence composes songs and albums, hologram avatars fill concert halls, robots replace DJs, and algorithms uncover the hit formula for the next chart-topper. Nevertheless, working with AI is still uncharted territory for many artists and professionals in the music world. Many remain rather skeptical, some even fearing that human creativity could be replaced by technology. But is that really true?

We asked Dr. Maya Ackerman, who will appear as a speaker at the Berlin music conference Most Wanted: Music (MW:M) on November 4, 2020, exactly that. The American is a professor of AI, a singer, and CEO and co-founder of the start-up WaveAI, which explores whether AI can compose music and has developed the advanced lyric-writing assistant LyricStudio. In this interview, conducted in English, she explains how AI can help foster creativity sustainably, how you can use it to write entire songs, and how it can even take on your children's early musical education.

The role of AI in music production and music education is just one of many exciting topics at MW:M. The 7th edition of the Berlin music conference is taking place entirely online, and you can participate from home; online tickets are available here. Under the motto #Togetherness, this year's conference addresses the central question of how the music industry can restart after the standstill of the COVID-19 pandemic. In shaping a viable future, the program focuses on topics such as digital transformation, sustainability, gender balance, diversity, and inclusion. You can expect digital workshops, performances, live interviews and chats, as well as interactive talks, networking formats, and much more.

Interview with Dr. Maya Ackerman

Welcome, Dr. Maya Ackerman. It is an absolute pleasure to interview you for MW:M20. You are an expert in Computational Creativity, a professor of AI, a CEO, a singer, and were named a “2020 Woman of Influence” by the Silicon Valley Business Journal. You have so many strings to your bow, but what inspired you to start applying your computer science skills to music?

Thank you, it is a pleasure to be here. It all began when, as a PhD student in Computer Science, I signed up to take voice lessons. It was supposed to be just for fun. To my great surprise, within 9 months I became a semi-professional opera singer.

As a musician, I longed to sing my own material. But songwriting didn’t come easily to me. For three years, I took piano lessons, studied improvisation, and learned to produce music – but songwriting remained elusive. I was already a professor of Computer Science when I realized that my expertise in Artificial Intelligence could make my music dreams come true.

The part that I found most difficult was creating vocal melodies. So my research team and I created a system that took a line of lyrics and suggested original melodies for it. I still remember printing out a whole bunch of melodies and excitedly searching through them, editing, and combining. It was exhilarating! Our AI has come a long way since then. For example, it can now also help with writing lyrics.

I see that you have an opera background. How does it feel to fuse the most cutting-edge AI with a musical tradition as old as opera?

That’s an excellent question! I can see how it might seem contradictory at first. I believe that my training in classical music gives me a wider lens. It’s also not just opera. I love playing the acoustic piano and performing with live musicians.

I guess it was never an either/or question for me. I connect deeply with the old and the new. Some of the earliest works created with our technology were operatic pieces. By now, I doubt that there is a single genre of music to which our technology has not been applied. It has always been, and will always be, about enabling self-expression through any tools at our disposal, old or new.

Also at MW:M, Ricardo Simian will be presenting his 3D printed musical instruments, but admits that it can be difficult to present new technologies in an industry that favours nostalgia. Do you ever come up against resistance when introducing the use of AI to artists/musicians?

Yes, this happens sometimes with people who have not tried our products. AI has a certain reputation: all-powerful, making decisions for us, stealing our jobs. Of course, there is going to be fear.

What we created here is a very different kind of AI. It is, by design, set up to help rather than replace. Our systems, ALYSIA and LyricStudio, cannot decide to sit down and write a song. Their role is to get our creative juices flowing.

I’ve watched hundreds of people of all ages create their first song. That includes people who never thought that this would be possible for them. Often from that very first song, they are already editing the AI’s suggestions and starting to come up with their own material.

It’s a new perspective for many people. But, when they experience it, it is undeniable: The AI is helping us to express ourselves. The AI is helping us to be more human.

The live music industry is at a standstill, and while this is devastating for so many, it gives artists and creators more time to produce music and think about creative processes. Do you think this is a good time for artists to consider incorporating Computational Creativity and AI in their music?

It is indeed absolutely devastating that the music industry is at a standstill. My heart goes out to the countless musicians who have been affected by this crisis on a multitude of levels.

I’ve been thinking about this a lot – how to best make use of this time. I think it is so important to be gentle and kind to ourselves these days. When it is possible to make time to learn and grow, learning about how AI can be used to further your art is a great way to go. AI will soon become an indispensable part of music making. Those who get acquainted with it now will be ahead of the curve.

Do you think that there is a role for AI in the future of live music?

I personally love improvisational AI. There is a system by the name of Impro-Visor, made by my late colleague Robert Keller of Harvey Mudd College. I once spent hours improvising with it at a conference.

Another role that AI can play is to enrich the audience experience. One of my students is working on a fascinating project to visualize the story conveyed by a live orchestral performance. There is a lot of potential in incorporating AI to engage audiences and to help them to connect more deeply with a musical experience.

As a professor of AI, you are also an educator. Do you find that this inspires you to create more inclusive technology, according to the varying needs and abilities that you come across in your teaching?

That’s a great question! Everyone is different, so a good educational experience must be flexible. Allowing people to go at their own pace makes a world of difference. Another key element is empowerment, letting people see their abilities and talents.

With LyricStudio and ALYSIA, the user is instantly able to write lyrics and songs, which eliminates the all too common “I can’t do it” belief. At first, they rely more heavily on the AI. But very quickly, within a few minutes, people of all levels of expertise start to edit the AI’s suggestions and soon begin integrating their own original ideas.

The flexible design also allows it to be useful across a surprising range of skill levels. We have professionals using our system to get out of writer’s block and explore new creative spaces. We also have many young people who are just learning to write songs for the first time. Perhaps one of the most fulfilling experiences I’ve had was when, in collaboration with CoachArt, we offered a songwriting class to children with chronic illnesses. They created some of the most moving and memorable songs I’ve ever heard.

Where do you see AI in the future of music education?

One of the greatest opportunities with AI is to adapt to the individual needs of our students. We all know that AI can be used to give us personalized recommendations, but it can also become a personalized teacher. There is also an opportunity to offer the student unprecedented freedom and control.

For example, our AI adapts to the preferences and writing style of the user, so the suggestions it gives you will be completely different from what it suggests to anyone else. As the user improves, the AI effectively steps out of the way to let them grow.

Finally, since kids already play games on their devices, an AI-based education system can make learning fun. Learning doesn’t have to feel like learning. The learn-by-doing nature of our systems makes them feel more like play than work.

ALYSIA and LyricStudio are incredible apps that can be used by everyone, regardless of training, and encourage people to create for the love of music. At a time when so many countries are returning to lockdown restrictions, how important do you think it is for one’s mental health to remain creative?

These times have certainly been very difficult for just about everyone. Even before the pandemic, many people seeking mental health support were denied care due to long wait times or high costs. The problem has now been amplified many times over.

Shortly before the pandemic hit, we finished a study at Dundee University, looking at how our songwriting systems can help people who are grieving the loss of a loved one. People came in having trouble believing that they would be able to write a song. Every single person came out with a song that they wrote about their loved ones. What’s more, this experience helped them process and engage with their grief and bereavement.

But what’s most fascinating is that we found that our songwriting systems helped people to connect with their emotions, at times discovering emotions of which they were previously unaware. This is critical to mental health. The ability to connect with and express our feelings is a primary goal of therapy. The fact that AI, affordable and judgement-free, can help us on an emotional level is a real game changer.

While enabling people to create who may not previously have had the skills or resources to write songs is undoubtedly a positive thing, where is the line between furthering creativity and turning the craft of songwriting into something unoriginal and machine-made?

We value creativity so much not just because it can lead to beautiful things, but because it is a vehicle for self-expression. Just as we have come to recognize the value of electric pianos and digital audio workstations as tools to enrich human creativity, I believe that the role of AI in music is to enable us to reach more deeply into ourselves.

When it comes to our musical AI, the person is in the driver’s seat. Even those new to lyric and songwriting take the lead after the first few sessions, if not from the very first. It is entirely about empowering the person to create. If you have any doubt, give it a try!

Humans have emotions. We have stories to tell. We have a deep rooted need to share our pains and our joys. AI has none of that. But it can be there for us to help us share ourselves more fully. AI can help us tap more deeply into our deepest emotions and help us heal our wounds.

This interview was produced in collaboration between the Berlin music conference Most Wanted: Music (MW:M) and Blogrebellen.