Final Project – Willow Hong
Mon, 04 Dec 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/12/04/final-project-willow-hong/

For the final project I decided to further explore the connection between motion and sound. I incorporated data from a Myo armband into a music synthesizer built with several techniques I learned in this class.

The synthesizer is composed of two main parts: a motion-data reading section and a music-control section. I used an open-source myo-osc bridge (https://github.com/samyk/myo-osc) and UDP messaging to read the armband data, which gives me normalized quaternion values as well as several gesture readings. These data laid a solid foundation for a stable translation from motion to sound.
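
My patch receives these messages in Max, but the same stream can be inspected from any OSC client. Below is a minimal Python sketch using python-osc; the /myo/orientation and /myo/pose address patterns and port 7777 are assumptions based on a typical myo-osc setup, so check your myo-osc build for the exact format.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_orientation(address, *args):
    # The last four floats are the normalized quaternion (x, y, z, w).
    x, y, z, w = args[-4:]
    print(f"quaternion: ({x:.3f}, {y:.3f}, {z:.3f}, {w:.3f})")

def on_pose(address, *args):
    # Gesture name such as "fist" or "rest".
    print("pose:", args)

dispatcher = Dispatcher()
dispatcher.map("/myo/orientation", on_orientation)  # assumed address
dispatcher.map("/myo/pose", on_pose)                # assumed address

BlockingOSCUDPServer(("127.0.0.1", 7777), dispatcher).serve_forever()
```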

I selected pitch, playback speed, timbre, and reverberation as the manipulation parameters. I downloaded the music as separate instrument stems so that I could play with the parameters on an individual track without interfering with the overall flow of the music. After many trials, I settled on the following mappings (a sketch of how they might be computed follows the list):

  1. The up/down motion of the arm changes the pitch of the timpani part.
  2. The left/right motion of the arm changes the playback speed of both the timpani and percussion parts.
  3. The fist/rest gesture switches the core melody between piano and bass.
  4. The rotation of the forearm changes the reverberation delay time of the piano melody.
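
To make the mappings concrete, here is a sketch of how the four controls might be derived from the quaternion stream, assuming standard roll/pitch/yaw conventions; the ranges and scalings are illustrative placeholders rather than the values used in my patch.

```python
import math

def quat_to_euler(x, y, z, w):
    """Convert a unit quaternion to (roll, pitch, yaw) in radians."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def scale(v, lo, hi, out_lo, out_hi):
    """Linearly map v from [lo, hi] onto [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (v - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def map_motion(x, y, z, w, pose):
    roll, pitch, yaw = quat_to_euler(x, y, z, w)
    return {
        # 1. up/down (pitch angle) -> timpani pitch shift in semitones
        "timpani_semitones": scale(pitch, -math.pi / 2, math.pi / 2, -12, 12),
        # 2. left/right (yaw angle) -> playback speed multiplier
        "playback_speed": scale(yaw, -math.pi, math.pi, 0.5, 2.0),
        # 3. fist/rest gesture -> melody source
        "melody": "bass" if pose == "fist" else "piano",
        # 4. forearm rotation (roll angle) -> reverb delay time in ms
        "reverb_delay_ms": scale(roll, -math.pi, math.pi, 10, 500),
    }
```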

I recorded a section of the generated music, which is included below:

The code for the project is as follows:

[Embedded Max patch not preserved in this export.]

Project 1 – Willow Hong
Wed, 01 Nov 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/11/01/project-1-willow-hong/

For this project I explored the connection between movement and music and essentially created my own theremin: an instrument that controls the frequency and amplitude of sound with hand movement.

I used a Leap Motion sensor to read the absolute position of my left hand along the z (vertical) axis, and the range of that data stream is translated into 8 MIDI notes from C3 to C4. The velocity of my right ring finger is normalized and then mapped onto the computer's system volume scale, so the faster my right hand moves, the louder the sound.
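
As a rough illustration of these two mappings, the sketch below quantizes a normalized height into eight C-major notes from C3 to C4 and converts a normalized finger speed into a volume level; the C3 = MIDI 48 convention and the C-major scale choice are assumptions for illustration.

```python
C_MAJOR_DEGREES = [0, 2, 4, 5, 7, 9, 11, 12]  # C D E F G A B C

def height_to_note(z_norm):
    """Quantize a normalized (0..1) vertical position into 8 notes, C3..C4."""
    index = min(7, int(z_norm * 8))
    return 48 + C_MAJOR_DEGREES[index]          # MIDI 48 = C3 in this convention

def speed_to_volume(speed_norm):
    """Map a normalized (0..1) finger speed onto a 0..100 volume scale."""
    return max(0.0, min(1.0, speed_norm)) * 100
```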

I also added a slowly rotating noise point cloud to create some visual atmosphere. Note changes are reflected as color changes in the visualization, and volume changes alter the size of the cloud.

Assignment 4 – Willow Hong
Mon, 16 Oct 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/16/assignment-4-willow-hong/

For this assignment I used two Pokemon models to represent the frequency spectrum: Larvitar is displayed where the sound frequency is lower, and Pikachu where it is higher. The scale of each model varies with the amplitude at its frequency. As the audio frequency sweeps from 1000 Hz to 3000 Hz in the video, we can clearly see a greater number of Larvitars at the beginning, with Pikachus gradually taking over the space toward the end.
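
The per-bin logic can be sketched as follows: FFT magnitudes set each model's scale, and a frequency threshold decides which model appears. The 2 kHz split point and the normalization are assumptions for illustration, not the patch's actual values.

```python
import numpy as np

def spectrum_models(samples, sample_rate=44100, threshold_hz=2000.0):
    """Return one (model, scale) pair per frequency bin of the input block."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    scales = spectrum / (spectrum.max() + 1e-12)  # normalize amplitudes to 0..1
    return [("larvitar" if f < threshold_hz else "pikachu", s)
            for f, s in zip(freqs, scales)]
```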

Proposal 1 – Willow Hong
Sun, 01 Oct 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/10/01/proposal-1-willow-hong/

I want to capture motion data using a camera or a Kinect and translate those data into audio signals using Max.

More specifically, I’m interested in using different audio patterns to represent the qualities of people’s movement, that is, how people move between two points in time. For example, dancers need to move their bodies from one gesture to another between two beats. The two ends of this movement are fixed by the choreography, but how the dancers travel from one end to the other can vary: the movement can be smooth or jerky, accelerated or decelerated, soft or hard…

Since the differences between movement qualities might be too subtle for the eye to catch, I want to see whether I can analyze the speed, or the change in speed, of individual body parts and map those values to different notes or melodies to help people better understand movement qualities. I want to make this a real-time piece.
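
As a first sketch of the analysis I have in mind, the code below estimates a joint's speed and change in speed from tracked positions and maps them to a note; the frame rate, units, and thresholds are all placeholders rather than settled design choices.

```python
import numpy as np

def movement_quality(positions, fps=30.0):
    """positions: (N, 3) array of one joint's x, y, z over time (placeholder units)."""
    velocity = np.diff(positions, axis=0) * fps      # position change per second
    speed = np.linalg.norm(velocity, axis=1)
    jerkiness = np.abs(np.diff(speed)) * fps         # change in speed per second
    return speed.mean(), jerkiness.mean()

def quality_to_note(mean_speed, mean_jerk, jerk_threshold=500.0):
    """Jerky motion jumps an octave; faster motion shifts the pitch upward."""
    base = 60 if mean_jerk < jerk_threshold else 72  # C4 for smooth, C5 for jerky
    return base + int(min(11, mean_speed / 100))
```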

Assignment 3 – Willow Hong
Thu, 28 Sep 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/28/assignment-3-willow-hong/

For this project I narrated a horror story. Below are the steps I took:

  1. Wrote a short horror story and recorded it in CFA’s sound recording studio;
  2. Recorded balloon-popping sounds in the Scott Hall elevator (IR1) and the CFA atrium (IR3);
  3. Downloaded a garden ambience (IR2) and a scary background sound (IR4) online;
  4. Edited the original voice recording and the IRs in Audacity and convolved them in Max (IR1 = bedroom, IR2 = garden, IR3 = basement, IR4 = horror-movie sound effect); a sketch of this convolution step appears after the audio list below;
  5. Added a time-shifting + feedback effect to heighten the scary atmosphere;
  6. Exported the final audio file from Max using the sfrecord~ object.

Audio: convolved result, original recording, and IR1–IR4.
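
For reference, the convolution in step 4 could be reproduced offline as sketched here, with scipy standing in for Max; the file names are placeholders and the sketch assumes mono WAV files.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names; both files are assumed to be mono WAVs.
rate, voice = wavfile.read("story_dry.wav")
_, ir = wavfile.read("ir1_bedroom.wav")

# Convolve the dry narration with the impulse response.
wet = fftconvolve(voice.astype(np.float64), ir.astype(np.float64))
wet /= np.abs(wet).max()  # normalize to avoid clipping

wavfile.write("story_bedroom.wav", rate, (wet * 32767).astype(np.int16))
```
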
Assignment 2 – Willow Hong
Sun, 17 Sep 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/17/assignment-2-willow-hong/

For this assignment I made a fun photo-booth effect using the time-shifting technique. Each pose is transformed into three colorful delayed snapshots, coupled with three voices that rise gradually in level.
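
The underlying time-shifting idea can be sketched as a frame delay line with three taps, each producing a progressively older, tinted copy of the live image; the delay lengths, tints, and blend below are illustrative assumptions rather than my actual patch values.

```python
from collections import deque
import numpy as np

DELAYS = [15, 30, 45]                        # tap positions, in frames
TINTS = [(1.0, 0.6, 0.6), (0.6, 1.0, 0.6), (0.6, 0.6, 1.0)]  # per-echo RGB tints
buffer = deque(maxlen=max(DELAYS) + 1)       # rolling frame history

def process(frame):
    """frame: HxWx3 float array in 0..1; returns frame blended with 3 echoes."""
    buffer.append(frame)
    out = frame.copy()
    for delay, tint in zip(DELAYS, TINTS):
        if len(buffer) > delay:
            echo = buffer[-1 - delay] * np.array(tint)  # tinted past frame
            out = np.maximum(out, 0.5 * echo)           # blend the echo in
    return out
```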

Assignment 1 – Willow Hong
Tue, 05 Sep 2017
https://courses.ideate.cmu.edu/18-090/f2017/2017/09/05/assignment-1-willow-hong/

The system I chose is the “Facet” algorithm located in the Photoshop filter gallery. I opened a picture in Photoshop and applied the “Facet” filter over and over, so that the color and composition of the pixels were transformed into something very different from the original.

It’s interesting to see that the result of this feedback system looks nothing like what I predicted. Please see the video documentation below to find out how the picture evolved; the filter was applied 1000 times.
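
Photoshop’s Facet algorithm is proprietary, so as an assumed stand-in the sketch below iterates a median filter, which similarly flattens pixel neighborhoods into solid regions, to reproduce the same repeated-application feedback loop.

```python
from PIL import Image, ImageFilter

# Median filter used here as an assumed analog of Photoshop's "Facet" filter.
image = Image.open("original.png")
for i in range(1000):
    image = image.filter(ImageFilter.MedianFilter(size=3))
    if i % 100 == 0:
        image.save(f"facet_like_{i:04d}.png")  # periodic snapshots for the video
image.save("facet_like_final.png")
```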
