Georgia Tech’s Shimon robot writes and plays its own music using machine learning.
The Shimon robot was trained on 5,000 songs and two million motifs, riffs, and licks. The algorithm behind it uses a neural network that simulates the brain's bottom-up cognition process. The result sounds very soothing; according to the article, the song it wrote is a blend of jazz and classical music. What I admire most about this project is that both the music and the performance are entirely generated, and yet it still sounds human rather than robotic. The robot is arguably making "creative artistic decisions" by synthesizing novel music from pre-existing material.

I also admire the performance itself. Instead of pre-defining each note's location on the keyboard with a position variable, the robot uses computer vision: a camera on its robot head actively rotates, pans, and scans its field of vision, much the way an actual musician does while playing the keyboard. If I closed my eyes, I could be fooled into thinking this was a human.
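To make the contrast concrete, here is a minimal sketch (not Shimon's actual code, and the threshold and area values are my own assumptions) of what vision-based key localization could look like versus a hard-coded position table:

```python
# Minimal sketch: finding key positions from a camera frame with OpenCV
# instead of looking them up in a fixed table. Hypothetical values throughout.
import cv2

# The hard-coded alternative: every note mapped to a fixed (x, y) ahead of time.
HARD_CODED_POSITIONS = {"C4": (120, 300), "D4": (160, 300)}  # made-up coordinates

def locate_keys(frame):
    """Return pixel centroids of candidate keys found in a camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Bright regions stand in for white keys; the threshold is an assumption.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 500:  # ignore small specks; area cutoff is arbitrary
            centroids.append((x + w // 2, y + h // 2))
    # Sorting left-to-right gives a rough key order that could be mapped to pitches.
    return sorted(centroids)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)      # any webcam stands in for the robot's head camera
    ok, frame = cap.read()
    if ok:
        print(locate_keys(frame))  # positions are re-derived from vision, not stored
    cap.release()
```

The point of the sketch is the design choice the article highlights: the positions are recomputed from what the camera sees each time, so the robot adapts the way a human player would instead of relying on a fixed map.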