Proposal 1 – Willow Hong

I want to capture motion data using a camera or a Kinect, and translate that data into audio signals using Max.

More specifically, I’m interested in using different audio patterns to represent the qualities of people’s movement, that is, how people move between two points in time. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the choreography, but how the dancers travel from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard.

Since the differences between movement qualities might be too subtle for the eye to catch, I want to see whether I can analyze the speed, or the changes in speed, of individual body parts, and map them to different notes or melodies to help people better perceive movement qualities. I want this project to run in real time.
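As a rough illustration of the mapping idea (not the actual Max patch, and with made-up parameter values), the sketch below takes a sequence of 2D joint positions from some tracker, estimates per-frame speed and acceleration, and maps speed onto a hypothetical MIDI note range:

```python
# Minimal sketch: joint positions -> speed/acceleration -> MIDI-style notes.
# All ranges (max_speed, note bounds) are illustrative assumptions.

def speeds(positions, dt):
    """Per-frame speed (units/sec) from consecutive (x, y) positions."""
    return [
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / dt
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    ]

def accelerations(spds, dt):
    """Per-frame change in speed (units/sec^2) -- a simple 'jerkiness' cue."""
    return [(s2 - s1) / dt for s1, s2 in zip(spds, spds[1:])]

def to_midi_note(speed, max_speed=5.0, low=48, high=84):
    """Map speed linearly onto a hypothetical MIDI note range, clamped."""
    clamped = max(0.0, min(speed, max_speed))
    return low + round((high - low) * clamped / max_speed)

# Example: a hand accelerating between two beats at 30 fps.
dt = 1 / 30
positions = [(0.0, 0.0), (0.01, 0.0), (0.03, 0.0), (0.07, 0.0), (0.15, 0.0)]
spds = speeds(positions, dt)
notes = [to_midi_note(s) for s in spds]
print(notes)  # pitch rises as the movement accelerates
```

In the real piece, the position stream would come from the camera or Kinect and the note mapping would live inside Max, but the same two-step structure (estimate motion features, then map features to pitch or melody) would apply.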