In this post I want to mention something that's been on my mind for a few days now: AI and its role in music performance and composition. Here is a compilation of what I've discovered so far:
Robot trumpet player:
Robotic music playing system by Intel:
Robot violinist:
Robot pianist:
Robot band:
Festo Soundmachine:
...And probably much more floating around on YT.
Unfortunately, while these robotic systems are indeed impressive, they appear to play only what they have been pre-programmed to play, making them little more than sophisticated playback devices. In other words, they cannot create their own "original" musical compositions.
The good news is that we're in luck. There are AI programs out there that do seem capable of just that: drawing on principles of music theory and taking "inspiration" from existing musical pieces and their styles, which their algorithms "remake" into entirely new styles and melodies. News like this (Iamus, a computer cluster) and this (Emily Howell, a program by David Cope) leaves little doubt that this really is the case. Another interesting find is WolframTones, which can generate an almost endless array of compositions in MIDI. While it might not be able to create a piece worthy of Beethoven, the output may yet find sound application as background music for 2D platform games (pun intended).
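To give a flavor of how a system like WolframTones can work, here is a toy sketch in the same spirit: evolve a one-dimensional cellular automaton and map each row to a MIDI pitch. The rule number, scale, and cell-to-pitch mapping below are my own illustrative choices, not WolframTones' actual algorithm.

```python
# Toy cellular-automaton melody generator, loosely in the spirit of
# WolframTones. All parameter choices here are illustrative assumptions.

def ca_step(cells, rule=30):
    """Advance a 1-D binary cellular automaton one step (periodic boundary)."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def ca_melody(width=16, steps=32, seed_pos=8, rule=30):
    """Map each CA row to a MIDI pitch: the centroid of live cells picks a scale degree."""
    c_major = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of C major, MIDI numbers
    cells = [0] * width
    cells[seed_pos] = 1  # single live cell as the initial condition
    melody = []
    for _ in range(steps):
        live = [i for i, v in enumerate(cells) if v]
        if live:
            centroid = sum(live) // len(live)
            melody.append(c_major[centroid * len(c_major) // width])
        cells = ca_step(cells, rule)
    return melody

print(ca_melody()[:8])
```

Because the automaton is deterministic, the same seed always yields the same melody; varying the rule number or initial row gives the "almost endless array" of distinct outputs.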
And even though all of these are still primitive in their capabilities, it may be only a matter of time before they can replicate what we do so passionately: play a piece by Beethoven, Chopin, Bach, Mozart, or any other great musical genius with feeling, varying tempo and volume, intensity and attack, in an appropriate manner. They might play a piece in not just one but multiple interpretations, with slight variations in the notes, influenced by whatever sensory input they receive: flash a variety of colors past their visual sensors, vary the temperature of their environment, let them 'feel' and 'taste' the quality of the air flowing through their chemical and pressure receptors... and the next time they play the piece, they will play a new variation of it. We could then say it had been "inspired" by such "experiences". Goodness knows what results this kind of approach would generate.
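The mechanism hinted at above can be sketched very crudely: take a fixed score and let made-up "sensor" readings nudge the rendered tempo and dynamics, so different inputs produce different performances of the same notes. The `warmth` and `brightness` parameters below are hypothetical stand-ins for temperature and light sensors, not any real system's API.

```python
import random

def perform(melody, warmth=0.5, brightness=0.5, seed=0):
    """Render a melody (list of (midi_pitch, beats)) as (pitch, duration_s, velocity)
    events. 'warmth' and 'brightness' are hypothetical 0..1 sensor readings:
    brighter input speeds the tempo, warmer input raises the dynamics."""
    rng = random.Random(seed)  # seeded, so each "experience" is reproducible
    base_tempo = 120 * (0.8 + 0.4 * brightness)  # beats per minute
    events = []
    for pitch, beats in melody:
        velocity = int(60 + 50 * warmth + rng.uniform(-5, 5))  # small human-like jitter
        duration = beats * 60.0 / base_tempo * (1 + rng.uniform(-0.03, 0.03))
        events.append((pitch, round(duration, 3), max(1, min(127, velocity))))
    return events
```

Change the sensor values or the seed and the same score comes out as a new "interpretation"; a serious system would of course need far richer state than two scalars.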
This is just one simplistic example of such a system attempting to replicate our own neural processes and the way we come to express "feeling" when playing a piece of music; additional layers of complexity could certainly be added for greater effect. Of course, there are limits to how far this can go, considering how complex our human brains are. We are far from replicating, to take an exaggerated example, the effect of being madly in love, a circumstance under which many great artists of the past are known to have shown elevated levels of creativity.
I envision a day when such advanced AI musicians will churn out, at hyper-speed, the most human-like, ingeniously formulated compositions, each one as chillingly exhilarating and emotionally captivating as those made by the great musical geniuses of the past, if not more so, creating an endless stream of what is possible in the universe of music, perhaps even saturating all the musical possibilities that could ever be conceived... we could almost go so far as to say that music itself had then experienced a singularity event.