These frames were sent into a virtual world, powered by Max and experienced in an HTC Vive. Simultaneously, the raw data received over OSC from FaceOSC was analyzed in Max and then scaled to control multiple parameters of the audio processors.
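For anyone curious what that scaling step looks like in code, here is a minimal sketch of the same idea written as a Processing sketch with the oscP5 library (in the piece itself the analysis and scaling happened inside Max, using objects like scale); the OSC address, port, and input range below are assumptions for illustration.

import oscP5.*;

OscP5 oscP5;
float mouthControl = 0;                       // scaled control value, 0..1

void setup() {
  size(400, 100);
  oscP5 = new OscP5(this, 8338);              // FaceOSC's default output port
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/gesture/mouth/height")) {
    float raw = msg.get(0).floatValue();
    // map() does the same arithmetic as Max's scale object;
    // the 1-7 input range is a guess at typical FaceOSC values
    mouthControl = constrain(map(raw, 1.0, 7.0, 0.0, 1.0), 0.0, 1.0);
  }
}

void draw() {
  background(0);
  rect(0, 0, mouthControl * width, 20);       // quick visual check of the value
}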
Adrian worked on producing the VR space: creating the virtual world and handling the technical issues surrounding VR delivery, communication between devices, and overall user experience design.
Joey worked hand in hand with Adrian on user experience design, splicing the data from FaceOSC (supplying Adrian with matrices over jit.net.send objects) and using it to create a unique sonic environment for each face with custom sound files. To place these sounds within the virtual world, the HOA library (RIP) was used for spatialization.
First tests of a virtual world in the Vive, powered by Max.
Testing the VR world, using Seymour as our test subject.
First playtests of the installation.
Experience photos to come.
To harvest the information from the Kinect, I used Processing, an open-source graphics programming environment. The sketch collects data from the Kinect and parses out only the X-Y coordinates of the left hand, right hand, and head; then, using OSC, it sends this bundle of coordinates to Max/MSP.
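The actual sketch is in the gist at the end of this post; as a rough illustration of the idea, assuming the SimpleOpenNI and oscP5 libraries (the joint constants, ports, and OSC address here are my own placeholders), the Processing side boils down to something like this:

import SimpleOpenNI.*;
import oscP5.*;
import netP5.*;

SimpleOpenNI kinect;
OscP5 oscP5;
NetAddress maxPatch;

void setup() {
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();                           // turn on skeleton tracking
  oscP5 = new OscP5(this, 12000);                // local listening port (unused here)
  maxPatch = new NetAddress("127.0.0.1", 7400);  // Max listening with udpreceive 7400
}

void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);      // auto-calibrate each new user
}

void draw() {
  kinect.update();
  int[] users = kinect.getUsers();
  if (users.length == 0) return;
  int id = users[0];
  if (!kinect.isTrackingSkeleton(id)) return;

  // bundle head, left-hand, and right-hand screen coordinates into one message
  OscMessage msg = new OscMessage("/kinect/joints");
  addJointXY(msg, id, SimpleOpenNI.SKEL_HEAD);
  addJointXY(msg, id, SimpleOpenNI.SKEL_LEFT_HAND);
  addJointXY(msg, id, SimpleOpenNI.SKEL_RIGHT_HAND);
  oscP5.send(msg, maxPatch);
}

// convert one joint to 2-D screen coordinates and append its X and Y
void addJointXY(OscMessage msg, int id, int joint) {
  PVector world = new PVector();
  PVector screen = new PVector();
  kinect.getJointPositionSkeleton(id, joint, world);
  kinect.convertRealWorldToProjective(world, screen);
  msg.add(screen.x);
  msg.add(screen.y);
}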
Using the udpreceive object in Max, I received the coordinate data from Processing.
Next, I scaled these numbers into ranges useful for the granular synthesis parameters.
Scaling the data received from the Kinect so it can drive the granular synthesis parameters.
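As a rough illustration, the scaling itself is just the linear mapping that Max's scale object performs; here it is as a Processing-style snippet. The parameter names and target ranges are assumptions for illustration, not the exact values in my patch.

float grainSizeMs, playbackPos, grainPitch;

void setup() {
  scaleJoints(120, 500, 200);                      // example Kinect coordinates
  println(grainSizeMs + " " + playbackPos + " " + grainPitch);
}

void scaleJoints(float headY, float leftX, float rightX) {
  // head height in the frame -> grain size (y = 0 is the top of the 480-pixel depth image)
  grainSizeMs = map(headY, 0, 480, 20, 400);
  // left hand sweeps through the sound file (0..1 playback position)
  playbackPos = constrain(map(leftX, 0, 640, 0.0, 1.0), 0.0, 1.0);
  // right hand bends pitch from half speed to double speed
  grainPitch = map(rightX, 0, 640, 0.5, 2.0);
}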
Originally, I started with a rather bland audio file to process; however, I soon realized I would need a more complex sound file to really accentuate the differences in the granular synthesis. I switched to a recording of wind chimes, which proved to be much more textural and rewarding.
Below is a video of the piece at work, as well as the Max patch and Processing files.
https://gist.github.com/jsantillo/9441d9acfa79fbc7548cc4aeaf5d0150.js
The child patch is really quite simple:
Inside the parent patch, I created harmonics within the vocoder by passing a major fifth chord into the algorithm. This makes the timbre of the vocoder flexible, allowing for a wide range of sounds.
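For a rough idea of how a chord like that turns into actual carrier frequencies, here is a small sketch using the standard MIDI-to-frequency formula (the same math as Max's mtof object); the root note and the exact interval set are assumptions for illustration, not necessarily the voicing in my patch.

// convert a MIDI note number to frequency in Hz (equal temperament, A4 = 440)
float mtof(float midi) {
  return 440.0 * pow(2.0, (midi - 69.0) / 12.0);
}

// build carrier frequencies from a root note plus stacked chord intervals
float[] carrierChord(float rootMidi) {
  float[] intervals = {0, 4, 7, 12};        // root, major third, fifth, octave (assumed voicing)
  float[] freqs = new float[intervals.length];
  for (int i = 0; i < intervals.length; i++) {
    freqs[i] = mtof(rootMidi + intervals[i]);
  }
  return freqs;
}

void setup() {
  println(carrierChord(48));                // C3 root -> one frequency per carrier oscillator
}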
Here is an example of the vocoder, with tweaks to the parameters as time goes on.
Find child and parent code here!
After gathering impulse responses from a local church and a stairwell, plus recordings of thunder and glass breaking, I loaded all of them into Waves' IR reverb plugin as impulse responses. The stairwell IR was actually really disappointing to me, but I recorded that one with an omnidirectional microphone as an experiment, which might have resulted in a weaker signal.
I thoroughly enjoyed how the Mongolian singing turned out when convolved with the glass breaking. It created a really cool effect that I could definitely see myself using in the future.
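For reference, the convolution itself boils down to the function below (written as a Processing-style sketch): every sample of the dry recording triggers a scaled copy of the impulse response. Waves' plugin does this far more efficiently with FFTs, but the direct form gives the same result and makes it clear why each transient in the chant gets smeared into its own little glass crash; the arrays here are placeholders, not the actual recordings.

// direct-form convolution: slow, but exactly what a convolution reverb computes
float[] convolve(float[] dry, float[] ir) {
  float[] wet = new float[dry.length + ir.length - 1];
  for (int n = 0; n < dry.length; n++) {
    for (int k = 0; k < ir.length; k++) {
      wet[n + k] += dry[n] * ir[k];
    }
  }
  return wet;
}

void setup() {
  float[] dry = {1, 0, 0, 0.5};             // placeholder "audio"
  float[] ir  = {0.8, 0.4, 0.2, 0.1};       // placeholder impulse response
  println(convolve(dry, ir));
}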
Audio: ThunderIR, GlassBreakingIR, Mongolian Folk Chant, StairwellIR, ChurchIR, ThunderProcessed_IR4-St, GlassProcessed_IR3-St, StairwellProcessed_IR2-St, ChurchProcessed_IR1-St

The patch uses a movement-analysis method similar to the one we covered earlier in the semester: it converts that movement into the sample rate of a video feed of the guest, and also uses it to set the amount of feedback in our delay patch. More or less, if you don't dance you get a super-granular video and a horrible mess of feedback on the song, BUT if you dance, you get clear video and better audio. Enjoy!
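If you want a feel for the mapping without the Jitter patch, here is a rough Processing sketch of the same idea, using the video library's camera capture and simple frame differencing; the smoothing factor, sensitivity constant, and output ranges are guesses, not the values in our patch.

import processing.video.*;

Capture cam;
int[] prev;                                  // previous frame, for frame differencing
float motion = 0;                            // smoothed amount of movement, 0..1

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
  prev = new int[width * height];
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();

  // total brightness change between this frame and the previous one
  float diff = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    diff += abs(brightness(cam.pixels[i]) - brightness(prev[i]));
    prev[i] = cam.pixels[i];
  }
  // 50 is a rough sensitivity constant; 0.1 smooths out frame-to-frame jitter
  motion = lerp(motion, constrain(diff / (cam.pixels.length * 50.0), 0, 1), 0.1);

  // more movement -> finer video "sample rate" (smaller pixel blocks)
  // and less delay feedback (cleaner audio)
  float blockSize = map(motion, 0, 1, 32, 1);
  float feedback  = map(motion, 0, 1, 0.95, 0.1);

  image(cam, 0, 0);
  println("block size: " + blockSize + "  feedback: " + feedback);
}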
(Note: I did some EQ'ing of the reverb to tame high frequencies that were feeding back.)
Using the output of the reverb device as both the impulse response and the content being fed into the reverb really expedited the process; it didn't take many iterations to lose sight of the original altogether.
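Conceptually, the loop looks something like the sketch below (the actual process went through the Waves plugin rather than code, but the math is the same): each pass convolves the previous output with itself, since the same audio is acting as both source and impulse response, and then renormalizes so it doesn't clip. The five passes here parallel the five iterations posted below.

float[] selfConvolve(float[] audio, int iterations) {
  float[] current = audio;
  for (int i = 0; i < iterations; i++) {
    // convolve the signal with itself (the signal doubles as the impulse response)
    float[] next = new float[current.length * 2 - 1];
    for (int n = 0; n < current.length; n++) {
      for (int k = 0; k < current.length; k++) {
        next[n + k] += current[n] * current[k];
      }
    }
    // peak-normalize so each pass stays within range
    float peak = 0;
    for (float s : next) peak = max(peak, abs(s));
    if (peak > 0) for (int j = 0; j < next.length; j++) next[j] /= peak;
    current = next;
  }
  return current;
}

void setup() {
  float[] source = {1, 0.5, 0.25, 0.1};      // placeholder "audio"
  println(selfConvolve(source, 5));
}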
Enjoy!
Audio: Iteration 5, Iteration 4, Iteration 3, Iteration 2, Iteration 1, Original