Author Archives: Joey Santillo

Project 2: FacePlace

FacePlace explores the intersection of real and virtual worlds through the examination, digital representation, and sonification of faces. Utilizing FaceOSC, interfaced through Max, we were able to capture single frames, or pictures, of guests’ faces. Although the goals of this project were mostly technical, the experience has proven to be quite thought-provoking: the idea of “scanning” your face, only for it to exist in a virtual world, leaves the installation’s concept very open-ended.

These frames were sent into a virtual world, powered by Max and experienced through an HTC Vive. Simultaneously, the raw data received over OSC from FaceOSC was analyzed in Max, then scaled into ranges appropriate for controlling multiple processing parameters.

Adrian worked on producing the VR space: creating the virtual world and handling the technical issues surrounding VR delivery, communication between devices, and overall user experience design.

Joey worked hand in hand with Adrian on user experience design, spliced the data from FaceOSC (supplying Adrian with matrices over jit.net.send objects), and used that data to create a unique sonic environment for each face out of custom sound files. To place these sound files in the virtual world, the HOA library (RIP) was used for spatialization.
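The face-data handling all lives inside Max patches rather than text code, but as a rough illustration of the idea, here is a minimal Processing sketch that listens for a couple of FaceOSC messages over OSC and scales them into normalized control values. It assumes the oscP5 library, FaceOSC's usual port and address names, and made-up input ranges; none of these are taken from our actual patch.

import oscP5.*;

OscP5 oscP5;
float mouthHeight, eyebrowLeft;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 8338);   // FaceOSC's usual default port (assumed)
}

// Catch incoming FaceOSC messages and stash the gesture values.
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/gesture/mouth/height")) {
    mouthHeight = m.get(0).floatValue();
  } else if (m.checkAddrPattern("/gesture/eyebrow/left")) {
    eyebrowLeft = m.get(0).floatValue();
  }
}

void draw() {
  // Scale the raw gesture values into 0-1 ranges a processor parameter could use.
  float grainAmount  = constrain(map(mouthHeight, 1, 7, 0, 1), 0, 1);
  float filterAmount = constrain(map(eyebrowLeft, 6, 10, 0, 1), 0, 1);
  background(0);
  text("grain " + nf(grainAmount, 1, 2) + "   filter " + nf(filterAmount, 1, 2), 10, 100);
}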

First tests of a virtual world in the Vive, powered by Max.

Testing the VR world, using Seymour as our test subject.

First playtests of the installation.

Experience photos to come.

Git Repository

Project 1: Convolution Kinect

For my first project, I utilized a Microsoft Kinect to gather movement information about a person within a space and apply that tracking information to the parameters of a granular synthesizer. At the end of the pipeline, the resulting sound was processed into a Lissajous pattern and projected onto a wall behind the performance space.

To gather the information from the Kinect, I used Processing, an open-source graphical programming environment, to collect data from the Kinect and parse out only the left-hand, right-hand, and head X-Y coordinates. Then, using OSC, I sent this bundle of coordinates to Max/MSP.
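The exact Processing file is in the gist at the end of this post; as a rough sketch of what that side looks like, the example below tracks the head and hands and sends their screen coordinates to Max every frame. It assumes the SimpleOpenNI and oscP5 libraries, and the port number and OSC address are placeholders rather than the values from my sketch.

import SimpleOpenNI.*;
import oscP5.*;
import netP5.*;

SimpleOpenNI kinect;
OscP5 oscP5;
NetAddress maxPatch;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();                          // turn on skeleton tracking
  oscP5 = new OscP5(this, 12000);
  maxPatch = new NetAddress("127.0.0.1", 7400); // Max's udpreceive port (placeholder)
}

void draw() {
  kinect.update();
  for (int userId : kinect.getUsers()) {
    if (!kinect.isTrackingSkeleton(userId)) continue;
    OscMessage msg = new OscMessage("/kinect/joints");
    appendJointXY(msg, userId, SimpleOpenNI.SKEL_LEFT_HAND);
    appendJointXY(msg, userId, SimpleOpenNI.SKEL_RIGHT_HAND);
    appendJointXY(msg, userId, SimpleOpenNI.SKEL_HEAD);
    oscP5.send(msg, maxPatch);                  // one message of six floats per frame
  }
}

// Look up a joint, project it to screen coordinates, and append X and Y to the message.
void appendJointXY(OscMessage msg, int userId, int joint) {
  PVector world = new PVector();
  PVector screen = new PVector();
  kinect.getJointPositionSkeleton(userId, joint, world);
  kinect.convertRealWorldToProjective(world, screen);
  msg.add(screen.x);
  msg.add(screen.y);
}

void onNewUser(SimpleOpenNI context, int userId) {
  context.startTrackingSkeleton(userId);        // begin tracking newly detected users
}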

Using the udpreceive object, I received the coordinate data from Processing.

Next, I scaled these numbers into ranges useful for the granular synthesis parameters.

Scaling the data received from the Kinect into ranges useful for the granular synthesis parameters.
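In the patch this is just a handful of scale objects, but the underlying math is a plain linear remap. The little Java sketch below shows the idea; the input ranges and the parameter ranges are illustrative guesses, not the values from my patch.

public class GrainScaling {
    // Linear remap of x from [inLo, inHi] to [outLo, outHi], like Max's scale object.
    static float scaleLin(float x, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (x - inLo) / (inHi - inLo);
    }

    public static void main(String[] args) {
        float rightHandY = 240;  // sample Kinect screen coordinate (0-480 range)
        float leftHandX  = 320;  // sample Kinect screen coordinate (0-640 range)

        float grainSizeMs = scaleLin(rightHandY, 0, 480, 20, 400);    // grain length in milliseconds
        float pitchRatio  = scaleLin(leftHandX,  0, 640, 0.5f, 2.0f); // playback rate of each grain

        System.out.printf("grain %.1f ms, pitch ratio %.2f%n", grainSizeMs, pitchRatio);
    }
}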

Originally, I started with a rather bland audio file to be processed; however, I soon realized that I would need a more complex sound file to really accentuate the differences in the granular synthesis. To do this, I used a recording of wind chimes, which proved to be much more textural and rewarding.

Below is a video of the piece in action, as well as the Max patch and Processing files.

(Video: IMG_1881)

https://gist.github.com/jsantillo/9441d9acfa79fbc7548cc4aeaf5d0150

 

Assignment 4: Vocoder

The vocoder was originally invented as a method of reducing the bandwidth of a vocal signal, and thereby the cost of sending it over expensive transatlantic copper cables; however, artists through the ’70s, the ’80s, and even today have used vocoders as a way of processing instruments and vocals. This distinctive technique creates unique, sometimes robotic sounds and is an effective route to bringing science fiction to life.

The child patch is really quite simple:

 

Inside the parent patch, I created harmonics within the vocoder by passing a chord built on a fifth into the algorithm. This makes the timbre of the vocoder flexible, allowing for a large range of sounds.
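Since the patch itself isn't text, here is a rough plain-Java sketch of the same idea: a tiny channel vocoder whose carrier is a two-note fifth chord of sawtooth oscillators, which is what supplies the extra harmonics. The band frequencies, filter Q, envelope times, and the noise stand-in for the voice input are all assumptions, not values pulled from my patch.

public class VocoderSketch {
    static final float SR = 44100f;

    // Band-pass biquad (RBJ cookbook, constant 0 dB peak gain).
    static class BandPass {
        double b0, b2, a1, a2, x1, x2, y1, y2;
        BandPass(double f0, double q) {
            double w = 2 * Math.PI * f0 / SR, alpha = Math.sin(w) / (2 * q), a0 = 1 + alpha;
            b0 = alpha / a0;
            b2 = -alpha / a0;
            a1 = -2 * Math.cos(w) / a0;
            a2 = (1 - alpha) / a0;
        }
        double process(double x) {
            double y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;
            y2 = y1; y1 = y;
            return y;
        }
    }

    // Naive sawtooth oscillator (aliasing is acceptable for a sketch).
    static class Saw {
        double phase, inc;
        Saw(double freq) { inc = freq / SR; }
        double next() { phase += inc; if (phase >= 1) phase -= 1; return 2 * phase - 1; }
    }

    public static void main(String[] args) {
        double[] bands = {200, 400, 800, 1600, 3200};   // analysis/synthesis band centers (Hz)
        BandPass[] voiceBands = new BandPass[bands.length];
        BandPass[] carrierBands = new BandPass[bands.length];
        double[] env = new double[bands.length];
        for (int i = 0; i < bands.length; i++) {
            voiceBands[i] = new BandPass(bands[i], 4);
            carrierBands[i] = new BandPass(bands[i], 4);
        }
        Saw root = new Saw(110);    // carrier note...
        Saw fifth = new Saw(165);   // ...plus a fifth above it (3:2 ratio)
        java.util.Random rng = new java.util.Random(1);

        for (int n = 0; n < (int) SR; n++) {            // one second of audio
            double voice = rng.nextGaussian() * 0.2;    // stand-in for the vocal input
            double carrier = 0.5 * (root.next() + fifth.next());
            double out = 0;
            for (int i = 0; i < bands.length; i++) {
                double level = Math.abs(voiceBands[i].process(voice));
                env[i] += (level > env[i] ? 0.01 : 0.001) * (level - env[i]);  // envelope follower
                out += carrierBands[i].process(carrier) * env[i];  // impose the voice envelope on the carrier band
            }
            if (n % 11025 == 0) System.out.printf("t=%.2fs out=%.4f%n", n / SR, out);
        }
    }
}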

Here is an example of the vocoder, with tweaks to the parameters as time goes on.

 

Find child and parent code here!

 

 

Assignment 3: Convolve

Howdy y’all,

After gathering impulse responses from a local church, a stairwell, some thunder, and breaking glass, I loaded all of them into Waves’ IR reverb plugin. The stairwell IR was actually really disappointing to me, but I recorded that one with an omnidirectional microphone as an experiment, which might’ve resulted in a weaker signal.

I thoroughly enjoyed the result of convolving the Mongolian singing with the breaking glass. It created a really cool effect that I could definitely see myself using in the future.

You must dance!

I decided to use this project as a platform to force people to dance!

The patch uses a movement-analysis method similar to the one we covered earlier in the semester. The amount of movement is converted into the sample rate of a video feed of the guest, and it also determines the amount of feedback in our delay patch. More or less: if you don’t dance, you get a super-granular video and a horrible mess of feedback over the song, but if you dance, you get clear video and better audio. Enjoy!
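The actual patch does this in Max/Jitter, but the movement analysis boils down to frame differencing. Here is a rough Processing analogue, assuming the processing.video library; the mapping ranges are made up, not the numbers from the patch.

import processing.video.*;

Capture cam;
int[] prevFrame;
float motion;                       // smoothed overall motion amount

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  if (prevFrame == null) prevFrame = cam.pixels.clone();

  // Sum the per-pixel brightness change since the last frame (subsampled for speed).
  float diff = 0;
  for (int i = 0; i < cam.pixels.length; i += 10) {
    diff += abs(brightness(cam.pixels[i]) - brightness(prevFrame[i]));
  }
  prevFrame = cam.pixels.clone();
  motion = lerp(motion, diff / (cam.pixels.length / 10), 0.1);   // smooth the reading

  // More motion means cleaner video ("higher sample rate") and less delay feedback.
  float videoQuality  = constrain(map(motion, 0, 20, 0.1, 1.0), 0.1, 1.0);
  float delayFeedback = constrain(map(motion, 0, 20, 0.95, 0.1), 0.1, 0.95);

  image(cam, 0, 0);
  text("motion " + nf(motion, 1, 2) + "  quality " + nf(videoQuality, 1, 2)
       + "  feedback " + nf(delayFeedback, 1, 2), 10, 20);
}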

Back it up, Terry: Impulse Response

Working from the latest and greatest video featuring Terry, I used a convolution reverb plugin to generate my recursive art. To make the piece recursive, I built the process around the video’s original audio:

  1. Run the original audio through the convolution reverb, using that same original audio as the impulse response.
  2. Record the output.
  3. Make that output both the new impulse response and the file being fed into the reverb plugin.
  4. Repeat steps 2 and 3.

(Note: I did some EQing of the reverb to keep high frequencies from feeding back.)

Using the output of the reverb device as both the impulse response and the content being fed into the reverb really expedited the process; it didn’t take many iterations to lose sight of the original altogether.
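For the curious, the same recursion can be sketched in a few lines of plain Java; the actual piece was done with Waves’ IR plugin in a DAW, not code, and the brute-force convolution and synthetic stand-in signal below are only there to illustrate the loop.

public class RecursiveConvolution {
    // Direct (brute-force) convolution of a signal with an impulse response.
    static float[] convolve(float[] x, float[] h) {
        float[] y = new float[x.length + h.length - 1];
        for (int n = 0; n < x.length; n++)
            for (int k = 0; k < h.length; k++)
                y[n + k] += x[n] * h[k];
        return y;
    }

    // Peak-normalize each pass so levels stay sane (the EQ step in the DAW played a similar taming role).
    static float[] normalize(float[] x) {
        float peak = 1e-9f;
        for (float v : x) peak = Math.max(peak, Math.abs(v));
        for (int i = 0; i < x.length; i++) x[i] /= peak;
        return x;
    }

    public static void main(String[] args) {
        // Stand-in for the video's original audio; in practice you would load real samples.
        float[] audio = new float[2048];
        for (int i = 0; i < audio.length; i++)
            audio[i] = (float) Math.sin(2 * Math.PI * i / 64.0) * (1f - i / (float) audio.length);

        for (int pass = 1; pass <= 4; pass++) {
            audio = normalize(convolve(audio, audio));   // the signal is both the input and its own IR
            System.out.println("pass " + pass + ": " + audio.length + " samples");
        }
    }
}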

 

Enjoy!