For this past semester, I have been conducting a research project under Prof. Susan Finger to install projection systems around the IDeATe Hunt Basement, creating a platform for students in the animation, game design, and intelligent environment minors to publicly display their work. Consequently, my projects for Twisted Signals revolved around creating demos in Max specifically for the interactive projection system. My first project, a virtual ball pit, was a good exercise in learning how to use the Kinect, but it was not a conceptually heavy demo. For my second project, therefore, I wanted to make a system that would actually teach its users something.
The concept I settled on was a system that lets users interact with the Hunt Swiss Poster collection, an extensive set of extraordinary Swiss design posters housed in the Hunt Library that very few students know exists. Originally, I had planned on using the Kinect to let users “draw something” with a colored depth map, which would then be processed to display the closest-matching Swiss design poster. However, during my early prototyping it became apparent that the interaction was not as obvious as it could be, which was leading to a weaker installation. Moreover, since I have had to borrow all of my equipment from IDeATe for every project, I ran into the issue that every Kinect, as well as my specific computer, was checked out for the entire span I needed to work on this project. Therefore, I had to pivot.
While planning the projection installation, we were hit with the news that the Kinect was no longer going to be produced. Since I was forced to work without a Kinect anyway, I decided to build an interesting interaction with just an RGB camera, which thankfully will probably always be produced. Additionally, I realized that, although it was a far more difficult path, the best possible way for users to interact with these Swiss posters was to become a literal part of them, which would mean designing every single poster interaction uniquely. This direction also opens an avenue for other students to contribute to the project if they are short on ideas of their own.
Therefore, for my Project 2, I created two different Swiss poster exhibits, along with a very simple UI that an IDeATe staff member would use when turning on the projection system each morning. Each exhibit consists of an interactive display that mimics a Swiss poster design, placed next to the original poster, some information about the poster, and some information about the project.
This project allows a person, via the Kinect, to use their hand to move balls around in a virtual ball pit. Much of this patch was built upon the dp.kinect2 reference patches as well as a physics reference from https://cycling74.com/tutorials/00-physics-patch-a-day, integrating the two into a Kinect system that uses the closest player’s right hand to move the main movable physics force. Most of the work involved figuring out good bounding boxes in both the physical and virtual worlds, and how the user would actually interact with the Kinect (I wanted the output animation to be very obviously user controlled – almost painfully so). Additionally, I had some fun changing the aesthetics of the actual ball system.
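The heart of that bounding-box work is a linear remap from the Kinect's physical tracking volume into the virtual physics world. The patch does this with Max scaling objects; the sketch below shows the same idea in Python, with placeholder box extents (the actual values were tuned by hand in the patch):

```python
def remap(value, in_min, in_max, out_min, out_max):
    """Linearly remap value from [in_min, in_max] to [out_min, out_max], clamping first."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def hand_to_force_position(hand_xyz, physical_box, virtual_box):
    """Map a tracked hand position (Kinect space) into the virtual
    physics world's coordinate box, axis by axis."""
    return tuple(
        remap(h, p_lo, p_hi, v_lo, v_hi)
        for h, (p_lo, p_hi), (v_lo, v_hi) in zip(hand_xyz, physical_box, virtual_box)
    )
```

Clamping matters here: a hand that wanders outside the physical bounding box should pin the force to the edge of the virtual world rather than fling it off-screen.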
Video of the system working:
Gist of code: https://gist.github.com/anonymous/9ddab8deb04b40090d8efeb8cd0b5f06
The goal of this patch was to create a rendering that reacts to the amplitude of the microphone’s input. I started by looking online for a rendering tutorial I found interesting, located at https://www.youtube.com/watch?v=qf1OGUeIs1s. I began with the video’s original patch and removed all of its audio processing. After playing around with the rendering portion I had kept, I changed the noise type of the rendering, as well as its scale and appearance. During this process, I discovered a “distortion” input that the rendering originally had set to a fixed value, and decided this was the input I wanted to depend on the amplitude of the audio input (as it was producing an interesting zoom effect). Thus, I wrote my pfft~ subpatch to filter the signal and pass out only the amplitude, which is then scaled down to act as my distortion input.
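Outside of Max, the same amplitude-to-parameter mapping can be sketched in a few lines of Python. The RMS measure and the distortion range below are illustrative assumptions, not values taken from the actual patch:

```python
import math

def rms_amplitude(samples):
    """Root-mean-square amplitude of one block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def amplitude_to_distortion(samples, dist_min=0.0, dist_max=2.0):
    """Scale the block's amplitude (clipped to [0, 1]) into the
    distortion parameter's range."""
    amp = min(1.0, rms_amplitude(samples))
    return dist_min + amp * (dist_max - dist_min)
```

Louder input blocks push the distortion parameter higher, which is what produces the amplitude-driven zoom effect described above.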
For this example video, I simply used ambient noise as a catalyst (people walking by and talking), as I’m interested in making renderings that use ambient noise/images from an environment in a way that is obvious yet still interesting. Unfortunately, the YouTube compression ruins the effect quite a bit, but the general visual is preserved. A Google Drive link to the video is located here: https://drive.google.com/open?id=0Byn46tolhCwUUlNzNDVObGppY1k
For my Assignment 3, I convolved a series of impulse recordings with the quintessential audio clip of Dobby (the house-elf from the Harry Potter series) being freed from his master, a twenty-second clip of which I obtained from SoundBoard.com.
Original Audio Clip (Dobby is Free):
My two acoustic-space IR recordings were taken by recording the audio of a balloon popping (with its accompanying reverb). The locations I chose were the women’s bathroom in the basement of Baker Hall and the overlook on the second floor of Baker Hall.
IR from Women’s Bathroom in Baker Basement:
Convolution of Dobby and Women’s Bathroom IR:
IR from Overlook in Baker:
Convolution of Dobby and Overlook in Baker:
I then explored what the Dobby recording would sound like when convolved with common soundtrack noises that might appear in a bad movie. Specifically, I chose gurgling water, seagulls cawing, applause, and crickets, which I downloaded from SoundBible.com.
Gurgling Water IR:
Convolution of Dobby and Gurgling Water:
Convolution of Dobby and Seagulls:
Convolution of Dobby and Applause:
Convolution of Dobby and Crickets:
After making these samples, I started exploring some of the built-in Max examples and ran into one named “convolution workshop.” Curious about what it would do, I merged our original convolution reverb patch with this patch. Specifically, I pushed the “Dobby is free” audio and the “applause” IR through the original convolution, then pushed the result into a source-filter convolution with the “Dobby is free” audio again. The result sounds significantly noisier than the previous results.
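For readers unfamiliar with convolution reverb, the underlying operation is simply a convolution of the dry signal with the impulse response; the Max patch performs it in the spectral domain for speed. The NumPy sketch below shows the time-domain equivalent on a toy signal (all sample values are made up for illustration):

```python
import numpy as np

def convolve_ir(dry, ir):
    """Apply an impulse response to a dry signal by direct convolution,
    then peak-normalise to avoid clipping."""
    wet = np.convolve(dry, ir)          # length: len(dry) + len(ir) - 1
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# An IR with a direct spike plus a delayed spike acts like one echo:
dry = np.array([1.0, 0.0, 0.5, 0.0])
ir = np.array([1.0, 0.0, 0.0, 0.5])    # direct sound + echo 3 samples later
wet = convolve_ir(dry, ir)
```

A delayed spike in the IR shows up as a scaled echo of the dry signal, which is exactly how a room's reflections (or a balloon pop's reverb tail) get imprinted onto the Dobby clip.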
For my first project, I would like to use the Kinect, Max, and a projector to create a generative art piece that enables some sort of live interaction. Preferably, I would like to create a piece similar to the one depicted in this picture:
Depending on whether particle interaction, shape manipulation, etc. proves easier for me to get running, I might pivot the project in one of those directions. At a minimum, however, I would like to create a real-time interactive piece with the Kinect and Max.
For my time-shifting assignment, I decided to make a “horror movie” webcam filter, which takes a webcam image and 1) plays the video normally until it time-shifts 700 frames back for about 100 frames (to give the unsettling video-playback effect that some horror movies have), and 2) converts the RGB values from the webcam into pure luminance values and then filters them to create a grainy, distorted image.
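The actual filter is a Max/Jitter patch, but both operations are easy to sketch outside of Max. The NumPy code below is a hedged approximation: the Rec. 601 luminance weights are standard, while the noise amount and the helper names are my own placeholders:

```python
import numpy as np

def delayed_frame(history, delay=700):
    """Return the frame `delay` steps back in the history, or the newest
    frame if the history is still too short (the 'playback jump' effect)."""
    return history[-delay] if len(history) >= delay else history[-1]

def luminance_grain(frame, noise=0.05, rng=None):
    """Collapse an RGB frame (HxWx3, values in [0, 1]) to Rec. 601
    luminance, then add uniform noise for a grainy, distorted look."""
    if rng is None:
        rng = np.random.default_rng()
    luma = frame @ np.array([0.299, 0.587, 0.114])
    return np.clip(luma + rng.uniform(-noise, noise, luma.shape), 0.0, 1.0)
```

The 700-frame jump matches the description above; in a live patch the history would be a fixed-size ring buffer rather than an ever-growing list.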
Over the years, Amazon has given me quite a few interesting results under the “Customers who bought this item also bought” tab, some suggestions that have resulted in me impulsively buying more items online and others that have left me just plain confused. Therefore, I thought it would be interesting to explore this suggested purchases feedback loop.
As I have recently been setting up my new townhouse, I thought I would start with a simple Amazon search for “office chair.” I then recorded the first suggested item in the list and clicked on it to build the chain. When a suggested item looped back to one I had already seen, I simply recorded the next suggested item I had not previously clicked on and continued down the chain. I followed the chain for about 50 items, by which point I had not seen an office chair for quite a while.