Sunday, December 10, 2017

AI Remixing Sounds using Neural Networks

Photo by Denisse Leon on Unsplash

With an ever-growing appetite for unique sounds, music producers invest a lot of money in synthesizers, virtual instruments, samplers, and recording equipment. This sound design trend has fueled the rise of popular software synthesizers such as Serum, which its developer describes as "a wavetable synthesizer with a truly high-quality sound, visual and creative workflow-oriented interface to make creating and altering sounds fun instead of tedious, and the ability to 'go deep' when desired - to create / import / edit / morph wavetables, and manipulate these on playback in real-time." In a fairly recent project, NSynth, Google's Magenta team accomplished similar audio synthesis using neural networks.

At its core, NSynth uses neural networks to encode and decode sounds, allowing artists to interpolate between multiple instruments and generate unique hybrids. It differs from Serum's approach to synthesis in that NSynth relies entirely on neural networks instead of wavetables. The creativity of this instrument is limited only by your sample library. You can try a demonstration of this in practice on the web here. My favorite combination is the Cat + Vibraphone.
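Concretely, NSynth is a WaveNet-style autoencoder: each sound is compressed into a compact embedding, and blending instruments amounts to averaging those embeddings before decoding back to audio. Here is a minimal sketch of that workflow, assuming Magenta's published NSynth WaveNet checkpoint and its fastgen helpers; the file names and the even 50/50 mix are illustrative choices, not part of the original project.

```python
import numpy as np
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

# Illustrative paths: substitute your own samples and the
# wavenet-ckpt checkpoint downloaded from the Magenta project.
ckpt = 'wavenet-ckpt/model.ckpt-200000'
sample_length = 64000  # 4 seconds at NSynth's 16 kHz sample rate

# Load two source sounds and encode each into its latent embedding.
cat = utils.load_audio('cat.wav', sample_length=sample_length, sr=16000)
vibes = utils.load_audio('vibraphone.wav', sample_length=sample_length, sr=16000)
z_cat = fastgen.encode(cat, ckpt, sample_length)
z_vibes = fastgen.encode(vibes, ckpt, sample_length)

# Interpolating between instruments is just a weighted average of
# their embeddings; 0.5 / 0.5 gives an even cat/vibraphone hybrid.
z_mix = 0.5 * z_cat + 0.5 * z_vibes

# Decode the blended embedding back into a waveform.
fastgen.synthesize(z_mix, save_paths=['cat_vibes.wav'], checkpoint_path=ckpt)
```

Changing the two weights (say, 0.8 cat / 0.2 vibraphone) slides the timbre smoothly from one instrument toward the other, which is exactly the knob the web demo exposes.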

While this is an incredibly cool use of neural networks to generate sounds, the project could be taken further by combining it with other sound synthesis algorithms to produce far more interesting results. For example, if NSynth were paired with granular synthesis, AI could make some truly unique cinematic pads. Paired with FM (frequency modulation) synthesis, AI could use basic waveforms to build harmonically rich sounds suited to something like a dubstep or trap bass. The possibilities could be endless, and the result could be a virtual instrument far more powerful than Serum.
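To make the FM pairing concrete, here is a short, self-contained sketch in plain NumPy (nothing NSynth-specific; the frequencies and modulation index are arbitrary picks) of how a modulator bends a carrier's phase to produce the dense sidebands behind a dubstep- or trap-style bass:

```python
import numpy as np

sr = 16000                  # sample rate in Hz
t = np.arange(2 * sr) / sr  # two seconds of sample timestamps

carrier_hz = 55.0   # low A, a typical bass fundamental
mod_hz = 110.0      # modulator one octave above the carrier
mod_index = 4.0     # modulation depth: higher index, more sidebands

# Classic two-operator FM: the modulator perturbs the carrier's phase,
# spraying harmonically related sidebands around the fundamental.
modulator = np.sin(2 * np.pi * mod_hz * t)
bass = np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)

# Sweeping the index over time turns a static tone into a moving,
# growling texture, the kind of parameter a neural model could drive.
index_env = np.linspace(0.0, mod_index, t.size)
swept_bass = np.sin(2 * np.pi * carrier_hz * t + index_env * modulator)
```

In the hybrid instrument imagined above, a network like NSynth could generate or morph the raw operators while the FM stage supplies the harmonic motion.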

6 comments:

  1. Who do you think would get the credit for the new sounds AI creates? Would you give the credit to the humans behind creating the machine, or the machine itself? This question could be asked again if AI ends up creating songs down the road.

    It was fun playing with the NSynth link! Try out the Trombone and Cow combination!

    1. I really think the credit will still go to the human behind the sound. This is no different from any other virtual instrument, except that this one uses neural networks. The user still tweaks and plays with the virtual instrument's parameters to get a sound they like.

    2. I also think the credit goes to the human "creator." It would probably be similar to DJs who remix or simply create beats for songs on computers. While people say it isn't "real music" because it's more digital, they still put in the time and have the ears to create a pleasing sound for the masses.

  2. What do you think the significance of this could be for the future of music? Could AI be our next greatest artist, or would people shy away from it if they knew what was going on? I feel like a lot of today's music is already so processed and created on computers that it may be very hard to clearly draw or even identify a line between "human" and "robot" music. Another thing I wondered is whether music could exist outside the frame of a human neural network. Relax the constraints on the AI, and I think it would be very interesting to see where it could go, even if it doesn't produce any "good" music.

    1. I think this experiment specifically signals better music production technologies that embrace AI-driven algorithms to create new sounds. This Magenta experiment doesn't focus on AI as a music artist but on AI as a tool in the music industry. It is possible for AI to create music, and with more effort it could produce "good" music. However, I think the greatest potential lies in humans collaborating with AI, whether through re-sampling AI-generated music or by using AI as a tool.

  3. As a society we've made moves (at least in rap music) toward more robot-sounding music (read: autotune). There is an incredible number of fans of virtual singers using Yamaha's Vocaloid software, so much so that these singers tour as holograms (of sorts) to perform. I think there's a big place in the future of music for AI.

