The synthesizer is composed of two main parts: the motion-data reading section and the music control section. I used an existing Myo-to-OSC bridge application (https://github.com/samyk/myo-osc), which sends the armband data over UDP as OSC messages. From that stream I can obtain normalized quaternion values as well as several gesture readings. These data laid a solid foundation for a stable translation from motion to sound.
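To give a sense of the reading side, here is a minimal sketch of an OSC listener in Python, assuming the bridge is forwarding to localhost on port 7777 and using a /myo/orientation address pattern (the exact port and address depend on how myo-osc is launched):

```python
# Minimal OSC listener sketch (python-osc), assuming myo-osc forwards
# quaternion data to localhost:7777 under the /myo/orientation address.
from pythonosc import dispatcher, osc_server

def on_orientation(address, *args):
    # The arguments are assumed to be the quaternion components (x, y, z, w).
    x, y, z, w = args[:4]
    print(f"{address}: x={x:.3f} y={y:.3f} z={z:.3f} w={w:.3f}")

disp = dispatcher.Dispatcher()
disp.map("/myo/orientation", on_orientation)

server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 7777), disp)
print("Listening for Myo OSC data...")
server.serve_forever()
```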
I selected pitch, playback speed, timbre, and reverberation as the manipulation parameters. I downloaded the music as separate instrument stems so that I could play with the parameters on an individual track without interfering with the overall music flow. After many trials, I eventually arrived at the following mapping relationships:
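As a rough illustration of how each mapping is applied in code (the ranges below are placeholders, not the exact values from my mapping):

```python
# Rough sketch of rescaling a normalized control (0..1) into a parameter range;
# the ranges and control values below are placeholders, not my exact mapping.
def scale(value, lo, hi):
    """Linearly map a normalized 0..1 control value into [lo, hi]."""
    value = max(0.0, min(1.0, value))
    return lo + value * (hi - lo)

# Illustrative only: pretend these came from the armband stream
roll, yaw = 0.3, 0.8
pitch_shift = scale(roll, -12.0, 12.0)  # semitones of pitch shift
reverb_mix  = scale(yaw,    0.0,  1.0)  # dry/wet balance
print(pitch_shift, reverb_mix)
```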
I recorded a section of the generated music, which is shown below:
The code for the project is as follows:
I used a Leap Motion sensor to read the absolute position of my left hand along the z (vertical) axis, and the range of that data stream is translated into 8 MIDI notes from C3 to C4. The velocity of my right ring finger is normalized and then mapped onto the computer system’s volume scale, so the faster my right hand moves, the louder the volume.
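A minimal sketch of this mapping, assuming C3 = MIDI note 48 and a plain C major scale; the position and speed ranges are placeholders that have to be calibrated to the actual playing space:

```python
# Sketch of the note/volume mapping, assuming C3 = MIDI 48 and a C major scale.
# The z range (in mm) is a placeholder and needs calibrating to the play space.
C_MAJOR_C3_TO_C4 = [48, 50, 52, 53, 55, 57, 59, 60]  # 8 diatonic notes

def z_to_note(z, z_min=50.0, z_max=400.0):
    """Quantize a vertical hand position into one of the 8 notes."""
    t = (z - z_min) / (z_max - z_min)
    t = max(0.0, min(1.0, t))
    index = min(int(t * len(C_MAJOR_C3_TO_C4)), len(C_MAJOR_C3_TO_C4) - 1)
    return C_MAJOR_C3_TO_C4[index]

def speed_to_volume(speed, max_speed=1500.0):
    """Map finger speed (mm/s) onto a 0..1 volume scale."""
    return max(0.0, min(1.0, speed / max_speed))

print(z_to_note(220.0), speed_to_volume(600.0))
```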
I also added a slowly rotating point cloud of noise to create some visual atmosphere. Note changes are reflected in the color of the visualization, and volume changes alter the size of the cloud.
More specifically, I’m interested in using different audio patterns to represent the qualities of people’s movement, that is, how people move between two points in time. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the choreography, but how the dancer travels from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard…
Since the differences between movement qualities might be too subtle for the eye to catch, I wanted to see if I could analyze the speed, or the changes in speed, of the body parts and map them to different notes/melodies to help people better understand movement qualities. I want to make this a real-time piece.
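A minimal sketch of the kind of analysis I have in mind, assuming a stream of (x, y, z) joint positions sampled at a fixed rate; speed comes from frame-to-frame differences and acceleration from the change in speed:

```python
import math

def speeds(positions, dt):
    """Frame-to-frame speed of one joint from a list of (x, y, z) samples."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        out.append(math.dist((x0, y0, z0), (x1, y1, z1)) / dt)
    return out

def accelerations(speed_values, dt):
    """Change in speed between frames; large swings suggest jerky movement."""
    return [(b - a) / dt for a, b in zip(speed_values, speed_values[1:])]

# Illustrative samples at 30 fps; a real version would read the sensor stream
samples = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.05, 0.0, 0.0), (0.06, 0.0, 0.0)]
s = speeds(samples, dt=1 / 30)
a = accelerations(s, dt=1 / 30)
print(s, a)
```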
It’s interesting to see how the result of this feedback system does not look like what I predicted at all. Please see the video documentation below to find out how the picture evolved. The filter was applied 1000 times.
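The feedback loop itself is simple: the filtered image is fed back in as the next input. A sketch of that loop, using a Pillow blur only as a stand-in for the actual filter:

```python
# Sketch of the feedback loop: feed the filtered image back in 1000 times.
# The blur here is only a stand-in; any filter can go in its place.
from PIL import Image, ImageFilter

img = Image.open("source.png")   # placeholder input file
for _ in range(1000):
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
img.save("result.png")
```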