For this project I decided to revisit something I was working on previously and spice it up with some field recordings from freesound.org, some EQing, and heaps of convolution reverb. I originally made this piece as I was watching the sunrise and tried to capture that feeling.
An attempt at rave visuals – ESS Personal Project
Earlier in the semester I attempted to make reactive rave visuals in Max and decided to continue with it for my personal project. I wanted to make something that was exactly in time to the music and ideally utilize older footage of people dancing (particularly my favorite video on the internet).
The Max patch takes MIDI input from Traktor (a popular program for DJing), which arrives in the patch as a bang on every beat of the current song. Each bang causes a jit.noise matrix to reconfigure itself. This noise matrix is laid over a Jitter video playback object which, on each bang, receives a semi-random frame number to jump to from a drunk object. As a result, the video keeps repeating certain segments while advancing overall.
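The beat-driven frame-jumping behavior can be sketched outside of Max. Here's a minimal Python stand-in for the drunk object driving the playhead; the class name, step size, and the slight forward bias (my way of approximating "advancing overall") are my own assumptions, not part of the patch:

```python
import random

class DrunkFrameJumper:
    """Rough stand-in for Max's [drunk] object driving video frame jumps.

    On each beat (bang), the playhead takes a bounded random step, so the
    video repeats nearby segments while slowly drifting forward overall.
    """

    def __init__(self, total_frames, max_step=30, seed=None):
        self.total_frames = total_frames
        self.max_step = max_step          # largest jump per bang
        self.frame = 0
        self.rng = random.Random(seed)

    def bang(self):
        # Random step with a small forward bias so playback advances overall
        step = self.rng.randint(-self.max_step, self.max_step) + 2
        self.frame = max(0, min(self.total_frames - 1, self.frame + step))
        return self.frame
```

Each call to `bang()` stands in for one beat arriving from Traktor and returns the frame to jump to.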
----------begin_max5_patcher---------- 3047.3oc6cs0jihaE94YqZ+OP4ZeHoRutzcIRpT0l7KHuO6VSgwzcyLXvKf6 YlrUxu8nKfM31sP1FgMSl9ACMFDG8oyMcNGI+G+3O7tEqJ9RR0hf+Zv6Cd26 9C4Udm9Zpq7t1K7tEah9RbVTk9FWjuaypjxEOz7caipieNM+oOTlDWaZItfu D7P.kwzG.pOQnkffeq8gjsQZdVRstEgGZpxnMI0IkeHIOZUVh5KAcdjhc0sO Cp8xlqU+0sIl28hEODrXUT9SKN71pp+powVrmpSWq++hUe7mErE5K9e9wePc Td3gwDK3Fr.fmEXAwmXAMTCBLnXNfEbwUfEIeV1D6a05jungfE+k.H.XChXT nFanZQGBb.HBcZT.9lnPZds6..a7Af0k6x+TPXXnq.AjnNXTnXAHvdEHnWNP rMK5qqhFPrPKIvCQKwTZHkq5o5tMBNrvw4v+eNc5qf6OtXylD4q5Ui9U0EaC peNIXSwKoIA0EA6pRB93tMaCJJCdTIku+gxRyShK1kW2qacJMrHvRH.RAPIF hOfgPjVFBKNWLbupkGKxqqR+2ZLBJ48Zu9pcqZzBAsfkpmNW0iTW8eTlFkcJ XFBtbbdSRUUzSIuFmKShVGTtqpJMJ+SoqWJgaqxZbPG8xPiH2Un0oGtAWJGN fB9ayXZkizQT7JTUEUWWtK0AATAh9JAT7vLWpdviQw1rZcZjCefiyFnonecG IqnX61BoLd0BK.p51eLMSZMsooJipSTJGVuSdVZQt57WJxTGh1UWTUGUVq9G UyqNpDWUGaDWCVTmZN148ef5rL9oXW+vmSWW+r5KUp3w.Pn3DCuHeHjbfbC9 IXvOgrwAHH3i4.BYyNoDD9xgwxprz01cuqAj5ZGCZjbrHlrGkdLqHpV1s2tq tGLspnT9diKxJZXXAKoOz4idlHuDOEyRqp2TrNwt+BJFb6iAqdpCQpcdo2Gc DIdpWuAK.3Pjp2vfPNi18rdOWKyBR6ZzIFdodPJwpXQHmpMbfYGFwEyOwBlG vMnUbih9V.2Dd.2rNofPL3a.bCi7.tQs5fGPyuwgzYMtQ8kqxOEsZURozS4p gbUlvB0bfHwAjDFN+7WFDd4XozRsriX0S.l16XpPnrPi49LPOmI385n.UWlV EG07xVFdYX6iEkahz8d1o3bE6uQIRltQhdJp8fUbIXzdUoFtwmKW6qtzMWqJ IXZWU3jCtuQmeyDDG5AkD5HTL.50nhsK5AlcnGg5oIZM.5oLr2FCGF6fUJBe 1Agg9UypwtMSpSsEivg++t5UH.cJ8qzSpe8mo9PAqJHJCvi2Nz0k8lNC8bvG d8qBLr0rngwGic34m1UnWB1aS.5rXZJj9JvCNCAOnG.u3rD6Img23veWvaFF ebHc7yk2GSqW960KM4x4WdoHKv5j2objZ1.BA9vTmLfKJbThh9osYIrDEcYO 3CR6JkoeY3v68ZfN3eVjs9znMyCIOCD72CJd7w+1uVBUmkG7mVm7Xztr5+7u VhjWYaTVZ95xh2JGZXKiMP.foc.ig66bgQ0AY7yg1UvJ6g7xmUDsV62hKg2F bvCegqtdcIZBdNc85jb6gkd3JZvQG+49vxTRsDKj2a.PEz5yL2AgdMrJt.ti BvNdbqmB1LUWDgoESkRwWI+3YBAcKW.gOqbHHlxVRX.JWNAQDirjfnTLezJh nK15RGMZJ19dI1A7PmO5kImKr3jbNCSOKMo0iTBQRU6gJRIjDBzIaBPU4Zs+ yc7iwT+w0IjJTkkJ0YbHmQ3GmRKmjEn9kGAIUd7cdj4NOxUXHZ3TTCAMS4Vn AbrYhQhQMzIWpxTmG49TdwpdCA5wp1O5cqc.VVnG7I8qVAaHPiuXSDpZRSBX T702e9Wh7AN8E4sYW2kzmaFV.EDIDQ0HEALKvKLvK3UZtcVK4rHAbDVH1iWM 
oY5tGuf9.urCVpphjffpI20.VyDgQL2udLf63wff9cOFlidLvfdkGQRXK4X. BQ+NOx7kGAM9wLxjUKqrNHcNgw5wp.THVE+SKbLrqtjgGKlhQpPGe6wl99uh kbIL8fMACw5gctjGAPr3NK1moYc+Pmw0U2k0iixkMlTdJ4aMU.RIT48bDoOK 0KXOYxLheMmXXr9tkj4pkDOjHs35rz7.3.AtfnhLtwMDhYBhHW4at0rFp0cl KK+r6CKKdcsDQ.bkW.jPSmh5yH1uekAUEUuew83Tz7g9DBvlUM.AHlPHHVxw TFUUaC.5ufjz4e5DwGBN9J.j5rd5ojxfUAqrWELl4tCLkj.8xvMj8jIdtKSZ jObs9q4w+WqkzBVWKAgnySOXef.+l.QU5S4Q5U.24tpYggiOZTVaOXXMfA1r fz3iszzY08o9TwAho0XfMo4DxAiY12Qm4psbc5lEWe1gIiO6x1n3OEjFX2Jj IMHMHIYTWKHNBjWNlg8fHVT95hMAH6EvDxvv0rZomLPaPAPGwMj2vMrcfqga qA33z4FvA8SsyUEGk+4xnsAjf3miJUnnz0dqHIhhMgkvnAjxuRW.bFJutJk6 jnJvOn5px3hpHG.QBwr8DENewPrG1gVpBTKIt+kJRCUNXLFZR6l3BcF0ioNx ClVqGxC8FVqFOzGrPgGMVK28b2Qv6JrwtZWccgUGVolUZFTPZSBoaohzUOVO u4uv8ZW0ryEcmzUI9QkqQo1QlwB9Eo42jxsAxypetbWfKF1PPyABX9pTF5IT dq5lbvrlfcM9zeWffX+ff4EoUIsroJtTqrjXiMsFQWHBNiYI8wNNvi6xxphK SRxGXguY1rKnl3SvCmSy07ZVyQ0EO8j8DD2.LlsFR2sMLJ0h34DUGODSqpjL 47AGFbHbiRM7ToMq0Spwf6gO931mR9pCf19Y1XudCdClFxEMQ6GN0gQ.Duho 3nsZ94z70Ee1ACmMJnTqIMr7OjJwhpnoNla.iKNYFm7zZyyzyCVklkYuNcM9 JvL64nB9TOokQIZBiOD96RcrkEAxoBMLuSy9f0jEYqQadd9z7VqCov6byaux 4HyCpW8jGu2Xq6lpu3HnopXWYb6KuYaf3gfN800IU0o4lbvd3tTqTzd21gkk Vm8tf0s6phlsJivkr1sJiPVm0hnYOEa+k57tSqT.otGaYuOv89lacsA6YiMc IXtPXb0cAmVBi3DgQmdBySLozPsuGBp3X9yCWxe7mb23CDSNCpaDFNb5ILpy RNSLg4BcM83EyIUfL5fDliUT0QhWP.knS4uYC5BBLUSzQ+KRXByJDC0SDPzq IGcHwIdaV3XAIiN86jH.Ce2R+NYiY5MJeLh8F3J4tEWQNyW.uKoenyzO5tj9 cxxzjyVSENCqjokv3NSX3IkvHNgXn2zlIzaDlSHFQL8DF0StmyfZOB3bvRQe 2yObI+IQSbxRkpPXlXYZhSlpHjomvbxFDAO4yej.clvlXDC3zLsl9Izf8Ubg T+Htb6jowNYlFN8LBXmzgRnSOgcjz5avgRtMD1vCk2.ByIsfgSez.bBvvSOu ORbmBXHm73BhldBicuhXNoFCcC3wbyYNz8JgM8ADAg8jG.BJ5FFzcDx4bbLw 3MxECsH9sgvFVld5Mzhf2oNOi.eSluJm3CldU6vPmDbldU6Pl2lCklMPkCcw qSqtumCEzsv8L8Ne.uWCXCzISoz6T5Z5UrCQ2oSsC5lHM8dkvtEHluBNLmKZ UBdCpsHHD5q9EfbS6W9x2E0O4T2x90cp7AvWAZkbakO.9plsDH9Mse4lsIvz W0VSwrgZJ4znsaeIorpooMjzhMQezj3cyOdApeLqL+uokWTl7RZ6iX1SSVDU F+bZcRb8tRScu9EUsQZd5h0Ik46R0YtTdEEnnd05Zj83xcceEDug7R6ZpdQy 
O6HGAacqc42qqd4Cfwp9+hGyEBDV+6brP+w9slotaiWsa2TeHMWAIICW4AcK 4422VzyA+VKR+P6vdYhjROthfWrQ1kR2OzzmUoGLnnrdK7y2.PhhikunyXCs pC5Q59cIYIa52RfkHFgRLa42DFA1bFGhOdG.uR9zwJd096kZL.WsUSKAeDmP gpyfTlPsCACsLx47qcft8XNlTktY69BQ28lRQDxC+OvCYIO. -----------end_max5_patcher-----------
Roles: Luigi Cannatti – Max Patch Programming and Audio Recording
Arnav Luthra – Documentation and Audio Recording
Max Patch Description: Our project utilized cv.jit to read a recorded game of Go and determine the number of pieces in each row. Using this data, we created a step sequencer in which the number of pieces determines the intensity of the player’s “voice”. We used two separate sample banks for the white and black pieces respectively.
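The count-to-intensity mapping can be sketched like this. It's a hypothetical reconstruction, not the actual patch logic: the file names are invented, and I'm assuming a 19-line Go board and the 11 intensity levels mentioned in the recordings below:

```python
# Hypothetical sketch: piece counts per row drive vocal sample intensity.
# Banks are indexed 0..10, matching the 11 recorded intensity levels.
WHITE_BANK = [f"white_intensity_{i}.wav" for i in range(11)]
BLACK_BANK = [f"black_intensity_{i}.wav" for i in range(11)]

def step(row_counts, bank, max_count=19):
    """One sequencer pass: map each row's piece count to a sample.

    row_counts: pieces of one color counted per row by the vision patch.
    A fuller row selects a more intense vocal sample; an empty row rests.
    """
    events = []
    for row, count in enumerate(row_counts):
        if count == 0:
            events.append((row, None))      # rest: no pieces in this row
            continue
        level = min(10, round(count / max_count * 10))
        events.append((row, bank[level]))
    return events
```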
Sound recordings: Luigi recorded himself grunting, moaning, and screaming at 11 different levels of intensity. I recorded a few clips of my little sister and then filled in the blanks with a few grunting and breathing samples of my own.
Final Impressions: Overall we were both quite happy with the result. Our original intention with this project was to riff on the stoic nature of Go and I think we accomplished this well. To improve it further, we could get it working with an actual Go board.
Computer vision programming – Dan Moore
Max Patch Programming and Sound Design – Kaitlin Schaer
Percussion Patch and Documentation – Arnav Luthra
Our goal for the project was to create a live performance system that would create music from the act of drawing. To do this, we utilized computer vision to recognize shapes being drawn on a piece of paper and generate sounds in response. We had three main “instruments”: one was melodic, while the other two were “whooshey” sounds.
For the project, we ended up using two Max patches and a separate instance of OpenCV. The computer vision processing was done on Dan’s laptop and allowed us to get the colors of various blobs, the location of each blob’s centroid (its central point), and the velocity at which the blob was growing. We then took these parameters and sent them over to Kaitlin’s laptop using OSC (Open Sound Control). On Kaitlin’s laptop, we used them to control an arpeggiator as well as resonant filters on the sounds. The arpeggiator added different notes within a fixed key depending on the location of the blob and then triggered two different MIDI instruments (the melodic saw wave and one of the whooshey sounds). The third instrument took white noise and applied resonant filters at a rhythmic interval to create a percussive effect. Parts of this patch were pieced together from various sources online, then compiled and modified to suit the needs of our project.
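The in-key note selection can be sketched as follows. This is only an illustration of the idea, not the patch itself: the actual scale, camera resolution, and mapping aren't documented here, so C major pentatonic over a 640x480 frame is assumed:

```python
# Hypothetical sketch of the centroid-to-arpeggio mapping described above.
PENTATONIC = [0, 2, 4, 7, 9]   # scale degrees, in semitones from the root

def centroid_to_note(cx, cy, width=640, height=480, root=60):
    """Pick a MIDI note from a fixed key based on blob centroid position.

    x chooses the scale degree, y chooses the octave, so every note
    stays in key no matter where the blob sits in the frame.
    """
    degree = int(cx / width * len(PENTATONIC)) % len(PENTATONIC)
    octave = 2 - int(cy / height * 3)   # higher on screen = higher pitch
    return root + PENTATONIC[degree] + 12 * octave
```

Because the mapping quantizes to scale degrees rather than raw pitch, any blob position produces a consonant note, which is what keeps a free-form drawing sounding musical.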
Video of final performance:
Overall our presentation went well despite a few technical difficulties at the start (we had trouble getting Kaitlin’s laptop to receive the information from Dan’s). We were limited in what we could do with the computer vision aspect, but if we were to continue this project we could find other interesting parameters to extract from the drawings.
From Golan Levin’s really awesome presentation last week, my main takeaway was the work of Norman McLaren. The audio-visual pieces he showed us really caught my attention. Upon further research, I found that McLaren’s main years of activity ran from the 50s to the 80s, which made me appreciate his work even more. The kind of audio-visual art he was doing was extremely groundbreaking for his time, not to mention extremely difficult and time-consuming to create with the means available to him. With modern-day computing, generating simple geometric visuals in time with sounds or music is quite simple (I’ve even made simple geometric music visualizations using Max). McLaren’s “A Phantasy in Colors” was animated entirely in pastel, and the film was then manually synced up to the audio. That level of effort and coordination is absolutely mind-boggling.
Anyways, I leave you all with a really neat short film of his I found (apparently an Oscar-winning film) with some really interesting sound work. Even though it was released in the 1950s, all of the sounds used in the film were synthesized.
The roles for the project were the following:
Nick Ericson – Max programming
Jack Kasbeer – Research and ideas
Arnav Luthra – Sound Editing & Documentation
Project Summary: To approach the project goal of making a sound space, we decided to literally create a space with an interactive sound piece. Our initial idea was to create a sort of “room” where a webcam tracks the movements of people within a designated space and triggers certain sounds based on their location. After configuring Jitter and a webcam, we found that the space we had was rather enclosed and the software could only reliably track one person. That limitation gave us the idea to create an experience of claustrophobia for a single person who enters the space. We mapped our field recordings (which were edited in different ways) to different parts of the box, deliberately using a mix of harsh and subtle noises to create a dynamic space.
Arnav: For my recordings, I set up the microphone in two environments: driving around in a car and cooking in my kitchen. In the kitchen I was making some noodles, so I was able to get a crackling sound from the noodle package and then a crunching of the noodles as I broke them up. I also got the sound of onions sautéing in a pan. We used these sounds as more textural background noises. From the car I was able to get the sound of a turn signal, some general engine noise, and some ignition sounds. The best sound from the car recordings, however, was the car door closing. I took this sound, looped it, and added ample delay to get a loud, harsh thud that ended up being central to our piece.
Nick: Some of my top field recording acquisitions were the radiator in my girlfriend’s apartment, a rattling fan in a Gates stairwell, and a creaky door spring. My recording method for stationary objects was to place the recorder directly next to the sound source and walk away for a few minutes.
Programming: The patch uses the cv.jit.track object to follow the participant’s position in real space and map it onto nodes. By clicking on the participant’s head as they enter the space, cv.jit.track continuously updates the patch with the location of the participant’s head within the space.
We then use the tracked position’s proximity to each node to determine the volume of the looped field recordings, which we mixed and spatialized with the hoa.2d suite.
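The proximity-to-volume step can be sketched in a few lines. This is a hypothetical linear falloff, assuming my own `radius` cutoff; the actual patch's curve and units aren't documented here:

```python
import math

def node_gains(head_xy, nodes, radius=150.0):
    """Hypothetical version of the proximity-to-volume mapping: each looped
    recording's gain rises linearly as the tracked head nears its node.

    head_xy: (x, y) head position from the overhead tracker.
    nodes:   (x, y) position of each sound node in the same coordinates.
    Returns one gain in [0, 1] per node/loop.
    """
    hx, hy = head_xy
    gains = []
    for nx, ny in nodes:
        dist = math.hypot(hx - nx, hy - ny)
        gains.append(max(0.0, 1.0 - dist / radius))  # silent beyond `radius`
    return gains
```

Standing directly on a node plays that loop at full volume; loops fade out as the participant moves away, which is what lets the box feel like distinct sound regions.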
Setup: The day before the presentation, we booked the media lab to setup and troubleshoot our project. We mounted a webcam to a beam in the ceiling of the lab and used a hefty amount of masking tape to secure it. We had some issues getting the webcam to work properly with Nick’s laptop and Max but we managed to get everything working in the end. Once we had everything working, we played around with the arrangements of the sounds in the space and made last minute edits to the sounds.
Presentation and Closing Remarks: Below are the videos we recorded of the brave participants in our project. Luigi’s interaction with the project was definitely very interesting!
Arnav: It was nice to hear in our classmates’ reflections that the car door sound was harsh and jarring, as that was exactly the effect I meant for it to have. The comments about having the space’s sounds evolve over time so that the participant doesn’t get “comfortable” were really valuable, and if I were to continue this project that is definitely something I would implement.