For the final project I decided to further explore the connection between motion and sound. I incorporated data from the Myo armband into a music synthesizer that uses several techniques I learned in this class.
The synthesizer is composed of two main parts: the motion-data reading section and the music-control section. I used an open-source myo-osc bridge (https://github.com/samyk/myo-osc) and UDP messaging to read the armband data, which gives me normalized quaternion readings as well as several gesture readings. These data laid a solid foundation for a stable translation from motion to sound.
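For illustration, the normalized quaternion readings can be converted into roll/pitch/yaw angles, which are easier to map onto sound parameters. This is a minimal pure-Python sketch of that conversion, not code from my project; the (w, x, y, z) component order is an assumption.

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to roll, pitch, yaw in radians."""
    # roll: rotation about the x axis
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # pitch: rotation about the y axis (clamped for numerical safety)
    sinp = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.asin(sinp)
    # yaw: rotation about the z axis
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

# the identity quaternion corresponds to no rotation
print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)
```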
I selected pitch, playback speed, timbre, and reverberation as the manipulation parameters. I downloaded the music as separate instrument stems so that I could play with the parameters on individual tracks without interfering with the overall flow of the music. After many trials, I settled on the following mappings:
- The up/down motion of the arm changes the pitch of the timpani part.
- The left/right motion of the arm changes the playback speed of both the timpani and percussion parts.
- The fist/rest gesture switches between a piano-based and a bass-based core melody.
- The rotation of the arm changes the reverberation delay time of the piano melody.
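Each of these mappings boils down to scaling a normalized sensor range onto a musical parameter range. The sketch below illustrates that idea; the specific ranges, function names, and gesture labels are hypothetical, not my actual patch.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def pitch_shift(arm_pitch):
    """Arm up/down (normalized -1..1) -> timpani transposition in semitones."""
    return round(scale(arm_pitch, -1.0, 1.0, -12, 12))

def playback_speed(arm_yaw):
    """Arm left/right (normalized -1..1) -> playback speed ratio."""
    return scale(arm_yaw, -1.0, 1.0, 0.5, 2.0)

def melody_stem(gesture):
    """Fist/rest gesture -> which melody stem is active."""
    return "bass" if gesture == "fist" else "piano"

print(pitch_shift(0.0), playback_speed(0.0), melody_stem("fist"))  # 0 1.25 bass
```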
I recorded a section of the generated music, which is shown below:
The code for the project is as follows:
For this project I explored the connection between movement and music and essentially created my own theremin: an instrument that controls the frequency and amplitude of sound with hand movement.
I used a Leap Motion sensor to read the absolute position of my left hand along the z (vertical) axis, and the range of that data stream is translated into 8 MIDI notes from C3 to C4. The velocity of my right ring finger is normalized and then mapped onto the computer's system volume scale, so the faster my right hand moves, the louder the sound.
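The two mappings can be sketched as below. The position range, the maximum finger speed, and the C4 = 60 MIDI numbering convention are assumptions for illustration, not the exact values from my patch.

```python
# C major scale degrees spanning one octave: C D E F G A B C (8 notes)
SCALE = [0, 2, 4, 5, 7, 9, 11, 12]
C3 = 48  # MIDI note number for C3, using the C4 = 60 convention

def z_to_note(z, z_min=0.0, z_max=400.0):
    """Map a hand-height reading to one of 8 notes from C3 to C4."""
    t = (z - z_min) / (z_max - z_min)
    t = max(0.0, min(1.0, t))
    index = min(int(t * len(SCALE)), len(SCALE) - 1)
    return C3 + SCALE[index]

def speed_to_volume(speed, max_speed=500.0):
    """Map finger speed (mm/s) to a 0..127 volume value."""
    return round(min(speed / max_speed, 1.0) * 127)

print(z_to_note(0.0), z_to_note(400.0), speed_to_volume(250.0))  # 48 60 64
```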
I also added a slowly rotating noise point cloud for visual atmosphere: note changes are reflected in the color of the visualization, and volume changes alter the size of the cloud.
For this assignment I used two Pokémon models to represent the frequency spectrum: Larvitar is displayed when the sound frequency is lower, and Pikachu is shown when the frequency is higher. The scale of each model varies with the amplitude at its frequency. In the video, the audio frequency sweeps from 1000 Hz to 3000 Hz; Larvitars clearly dominate at the beginning, and Pikachus gradually take over the space toward the end.
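The per-bin model choice amounts to a simple frequency threshold with the magnitude driving the scale. A toy sketch of that decision (the 2000 Hz threshold is an assumption, roughly the midpoint of the sweep):

```python
def models_for_spectrum(freqs, mags, threshold=2000.0):
    """Pick a model and a size for each frequency bin.

    Bins below `threshold` Hz become Larvitars, bins above become
    Pikachus; the model's scale follows the bin's magnitude.
    """
    return [("Larvitar" if f < threshold else "Pikachu", m)
            for f, m in zip(freqs, mags)]

print(models_for_spectrum([1000.0, 3000.0], [0.8, 0.3]))
# [('Larvitar', 0.8), ('Pikachu', 0.3)]
```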
I want to capture motion data using a camera or a Kinect, and translate that data into audio signals using Max.
More specifically, I'm interested in using different audio patterns to represent the qualities of people's movement, that is, how people move between two points in time. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the choreography, but how the dancers travel from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard…
Since the differences between movement qualities might be too subtle for the eye to catch, I want to see whether I can analyze the speed, or the changes in speed, of the body parts and map them to different notes or melodies to help people better understand movement qualities. I want to make this a real-time piece.
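The analysis I have in mind could start from finite differences of the tracked positions: speed for how fast a body part moves, and acceleration for how jerky the motion is. A rough 1-D sketch (the sample interval and the choice of descriptors are placeholders):

```python
def diffs(samples, dt):
    """Finite differences of scalar samples taken every dt seconds."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def movement_quality(positions, dt=0.1):
    """Return mean speed and mean absolute acceleration as rough
    smooth-vs-jerky descriptors of a 1-D position trace."""
    speeds = diffs(positions, dt)
    accels = diffs(speeds, dt)
    mean_speed = sum(abs(s) for s in speeds) / len(speeds)
    mean_accel = sum(abs(a) for a in accels) / len(accels)
    return mean_speed, mean_accel

# constant-velocity motion: nonzero speed, zero acceleration
print(movement_quality([0.0, 1.0, 2.0, 3.0], dt=1.0))  # (1.0, 0.0)
```

These two numbers could then be quantized onto notes or melodies, with high acceleration signaling jerky movement.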
For this project I narrated a horror story. Below are the steps I took:
- Wrote a short horror story and recorded it in CFA’s sound recording studio;
- Recorded the balloon popping sounds in Scott Hall elevator (IR1) and CFA Atrium (IR3);
- Downloaded a garden ambient sound (IR2) and a scary background sound (IR4) online;
- Edited the original voice and the IRs in Audacity and convolved them in Max (IR1=bedroom, IR2=garden, IR3=basement, IR4=horror movie sound effect);
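The convolution step places the dry voice "inside" each recorded space by convolving it with that space's impulse response. A minimal time-domain illustration of what that operation does (Max handles the real audio far more efficiently; this is only a conceptual sketch):

```python
def convolve(signal, ir):
    """Direct (time-domain) convolution of a dry signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            # each input sample triggers a scaled copy of the IR
            out[i + j] += s * h
    return out

# a unit impulse convolved with an IR returns the IR itself
print(convolve([1.0], [0.5, 0.25, 0.125]))  # [0.5, 0.25, 0.125]
```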
- Added a time-shifting + feedback effect to heighten the scary atmosphere;
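The time-shifting + feedback effect amounts to mixing in delayed copies of the signal, each one attenuated by the feedback gain. A toy sketch of that echo structure (delay in samples, not my actual Max patch):

```python
def feedback_delay(signal, delay, feedback, repeats=3):
    """Mix `repeats` delayed copies of the signal, each attenuated by `feedback`."""
    out = signal + [0.0] * (delay * repeats)
    for n in range(1, repeats + 1):
        gain = feedback ** n  # each echo is quieter than the last
        for i, s in enumerate(signal):
            out[i + n * delay] += gain * s
    return out

# a single impulse produces a decaying train of echoes
print(feedback_delay([1.0], delay=2, feedback=0.5, repeats=2))
# [1.0, 0.0, 0.5, 0.0, 0.25]
```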
- Exported the final audio file from Max using the "sfrecord~" object.
For this assignment I made a fun photo-booth effect using the time-shifting technique. Each pose is transformed into three colorful delayed snapshots, coupled with three voices at gradually rising levels.
The system I chose is the "Facet" algorithm in the Photoshop filter gallery. I opened a picture in Photoshop and applied the "Facet" filter over and over, so that the color and composition of the pixels were transformed into something very different from the original.
It's interesting to see that the result of this feedback system does not look like what I predicted at all. Please see the video documentation below to find out how the picture evolved. The filter was applied 1000 times.
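In code terms, this kind of feedback system is just a filter whose output is fed back in as its next input. A toy sketch with a stand-in "filter" (a simple posterize step, not Photoshop's Facet algorithm):

```python
def apply_repeatedly(filter_fn, image, times):
    """Feed a filter's output back into itself `times` times."""
    for _ in range(times):
        image = filter_fn(image)
    return image

# toy "filter": quantize each pixel value to the nearest lower multiple of 32
def posterize(img):
    return [(px // 32) * 32 for px in img]

print(apply_repeatedly(posterize, [10, 100, 200], 3))  # [0, 96, 192]
```

This toy filter quickly reaches a fixed point; part of what makes the Facet experiment interesting is that the real filter kept changing the image across 1000 iterations.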