
Project 1 – Alex

I like taking digital mediums and making them more accessible for untrained artists. I think that everyone should be able to mess around with computers and make cool stuff. For this project, I decided to make a physical interface for the media lab control patch Jesse gave us.

(Photo of the Arduino omitted; I forgot to take one.)

Arduino control code

// constants won't change. They're used here to set pin numbers:
const int buttonOnePin = 3; // red-light pushbutton pin
const int ledOnePin = 9; // red LED pin
const int buttonTwoPin = 4; // yellow-light pushbutton pin
const int ledTwoPin = 10; // yellow LED pin
const int buttonThreePin = 5; // blue-light pushbutton pin
const int ledThreePin = 8; // blue LED pin

// variables for the buttons
int buttonOneState = 0; // variable for reading the pushbutton status
int buttonTwoState = 0;
int buttonThreeState = 0;

// variables for the rotary potentiometer (knob)
const int potPin = 3; // analog input pin for the potentiometer
int potVal = 0; // variable to store the value coming from the sensor
int mappedPotVal = 0;

// variables for the slide potentiometer
const int slidePin = 0; // analog input pin for the slider
int slideVal = 0;
int mappedSlideVal = 0;

// variables for the IR distance sensor
const int irPin = 5; // analog input pin for the IR detector
int irVal = 0;
int mappedIrVal = 0;

// variables for the joystick
const int swPin = 2; // digital pin connected to switch output (unused below)
const int xPin = 2; // analog pin connected to X output
const int yPin = 1; // analog pin connected to Y output

int xState = 0;
int yState = 0;
int mappedXState = 0;
int mappedYState = 0;

void setup() {
  Serial.begin(9600);
  // initialize the LED pins as outputs:
  pinMode(ledOnePin, OUTPUT);
  pinMode(ledTwoPin, OUTPUT);
  pinMode(ledThreePin, OUTPUT);
  // initialize the pushbutton pins as inputs:
  pinMode(buttonOnePin, INPUT);
  pinMode(buttonTwoPin, INPUT);
  pinMode(buttonThreePin, INPUT);
}

void loop() {
  // read the state of each pushbutton:
  buttonOneState = digitalRead(buttonOnePin);
  buttonTwoState = digitalRead(buttonTwoPin);
  buttonThreeState = digitalRead(buttonThreePin);

  // check if the red-light pushbutton is pressed; if it is, the state is HIGH:
  if (buttonOneState == HIGH) {
    digitalWrite(ledOnePin, HIGH); // turn LED on
  } else {
    digitalWrite(ledOnePin, LOW); // turn LED off
  }

  // check if the yellow-light pushbutton is pressed:
  if (buttonTwoState == HIGH) {
    digitalWrite(ledTwoPin, HIGH);
  } else {
    digitalWrite(ledTwoPin, LOW);
  }

  // check if the blue-light pushbutton is pressed:
  if (buttonThreeState == HIGH) {
    digitalWrite(ledThreePin, HIGH);
  } else {
    digitalWrite(ledThreePin, LOW);
  }

  // read the rotary potentiometer and map it to the light-bay range
  potVal = analogRead(potPin);
  mappedPotVal = map(potVal, 0, 1023, 1, 9);

  // read the slide potentiometer and map it to the saturation range
  slideVal = analogRead(slidePin);
  mappedSlideVal = map(slideVal, 0, 1023, 0, 255);

  // read the joystick axes
  xState = analogRead(xPin);
  yState = analogRead(yPin);
  mappedXState = map(xState, 0, 1023, 0, 255);
  mappedYState = map(yState, 0, 1023, 0, 255);

  // read the IR detector
  irVal = analogRead(irPin);
  mappedIrVal = map(irVal, 0, 550, 0, 8);

  // send all values as one space-delimited line
  Serial.print(buttonOneState);
  Serial.print(" ");
  Serial.print(buttonTwoState);
  Serial.print(" ");
  Serial.print(buttonThreeState);
  Serial.print(" ");
  Serial.print(mappedPotVal);
  Serial.print(" ");
  Serial.print(mappedSlideVal);
  Serial.print(" ");
  Serial.print(mappedXState);
  Serial.print(" ");
  Serial.print(mappedYState);
  Serial.print(" ");
  Serial.println(mappedIrVal);
  delay(50);
}
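For reference, here is a minimal sketch of how one space-delimited line of this output could be parsed on the receiving end. This is plain C++ for illustration only; the actual control patch reads the stream with Max's serial object, and the sample line below is made up.

#include <iostream>
#include <sstream>
#include <string>

int main() {
  // One made-up sample line: three buttons, pot (bay), slider (saturation),
  // joystick X/Y, and IR, separated by spaces as in the sketch above.
  std::string line = "1 0 0 5 128 127 130 4";
  std::istringstream in(line);
  int b1, b2, b3, pot, slide, x, y, ir;
  if (in >> b1 >> b2 >> b3 >> pot >> slide >> x >> y >> ir) {
    std::cout << "bay: " << pot << ", saturation: " << slide << "\n";
  }
  return 0;
}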

Control patch

bpatcher edits

The video of it working is too big to upload, sorry.

Here’s the gist:

Project 1 – Kevin Darr

I decided to make use of the MIDI Fighter 3D MIDI controller, which has a built-in accelerometer, to control the lighting system in the Media Lab. The Fighter serves as a drum machine with kick, snare, hi-hats, crash, and a series of bass notes, while the accelerometer channels control the lights as well as some digital effects on the drum machine, including distortion and reverb. When sent MIDI notes on the correct channel, the MIDI Fighter lights up the appropriate button with a color dependent on the note’s velocity. I used Ableton Live to send these MIDI messages, as well as to play the samples for the drum machine. To get control and MIDI messages into Max, I used the ctlin and notein objects. When the device is tilted left or right, both the MIDI Fighter and the overhead lights turn red in proportion to how far it is tilted; the same is true forwards and backwards, but with blue instead of red. The snare drum triggers a flash from the UV LEDs, and the bass notes trigger a green flash.
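As a rough sketch of the tilt-to-color mapping (the real routing happens in Max with ctlin; the resting value and ranges here are assumptions, not the MIDI Fighter’s documented behavior):

#include <algorithm>
#include <cstdlib>
#include <iostream>

// Map a 7-bit tilt CC value (0-127, assumed ~64 at rest) to a 0-255
// red intensity that grows with how far the device is tilted.
int tiltToRed(int cc) {
  int offset = std::min(std::abs(cc - 64), 63); // distance from rest
  return offset * 255 / 63;                     // full red at either extreme
}

int main() {
  std::cout << tiltToRed(64) << " " << tiltToRed(0) << " "
            << tiltToRed(127) << "\n"; // prints: 0 255 255
}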


I had trouble screen capturing audio and video at the same time, so here is a short audio example of a drumbeat I played on the MIDI Fighter.


And here is a video of what the patch looks like, with added pwindows to imagine how the lights in the media lab would react.


Project 1 – Tanushree Mediratta

I took Project 1 as an opportunity to explore controlling systems with mere hand gestures. I used a Leap Motion device to detect my hand gestures and movements, which controlled different aspects of granular synthesis of an audio signal. While one hand controlled the pitch rate, grain size, and speed of the synthesis, the motion of the other hand was used to choose the audio file the synthesis was performed on.
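The mapping itself amounts to rescaling hand coordinates into parameter ranges; here is a minimal sketch in C++ (the ranges are invented for illustration; the real patch routes Leap Motion data into the modified sugarSynth sub-patch):

#include <iostream>

// Hypothetical granular-synthesis parameters driven by one hand's position.
struct GrainParams {
  float pitchRate; // playback rate of each grain
  float grainSize; // grain length in milliseconds
  float speed;     // how fast the grains scrub through the file
};

// Map normalized hand coordinates (0..1) to illustrative parameter ranges.
GrainParams mapHand(float x, float y, float z) {
  return { 0.5f + x * 1.5f,    // pitch rate: 0.5x to 2x
           10.0f + y * 190.0f, // grain size: 10 to 200 ms
           z * 2.0f };         // speed: 0 to 2x
}

int main() {
  GrainParams p = mapHand(0.5f, 0.5f, 0.5f);
  std::cout << p.pitchRate << " " << p.grainSize << " " << p.speed << "\n";
}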

Here is a short video:

I created the main patch entirely from scratch. I used a modified sugarSynth patch as a sub-patch, and the Leap Motion patches for collecting and routing data.

This is the gist of my main patch:

This is the modified sugarSynth patch:

I modified the visual sub-patch and fingers sub-patch in the Leap Motion patch:

  1. Visual sub-patch
  2. Fingers sub-patch

Project 1: Enhance It! – Anish Krishnan

As I make a lot of videos and short films in my free time, anything related to processing video excites me, so I really wanted to learn how to use the computer vision (cv.jit) objects for Max. For this project I used the cv.jit.faces object to alter a face in a movie, either by blurring it or by placing a virtual spotlight on it. First I downscale the image to 1/5th of its original size, convert it to greyscale, and run it through the cv.jit.faces object. I then use the output matrix to determine the position of the face, and accordingly either place a blurred image with an alpha layer that I made on top of the face or add a spotlight. I hope you like my project!
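Since cv.jit.faces is built on OpenCV’s Haar-cascade face detector, the same pipeline can be sketched in plain OpenCV C++ (illustrative only; the file names and blur size are assumptions, not the patch’s actual settings):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
  cv::Mat frame = cv::imread("frame.png"); // one video frame (assumed path)
  cv::CascadeClassifier faces;
  faces.load("haarcascade_frontalface_default.xml"); // stock OpenCV cascade

  // Downscale to 1/5 size and convert to greyscale before detecting,
  // mirroring the pre-processing described above.
  cv::Mat shrunk, grey;
  cv::resize(frame, shrunk, cv::Size(), 0.2, 0.2);
  cv::cvtColor(shrunk, grey, cv::COLOR_BGR2GRAY);

  std::vector<cv::Rect> found;
  faces.detectMultiScale(grey, found);

  // Scale each detection back up to full resolution and blur that region.
  for (const cv::Rect& r : found) {
    cv::Rect full(r.x * 5, r.y * 5, r.width * 5, r.height * 5);
    full &= cv::Rect(0, 0, frame.cols, frame.rows); // clip to the frame
    cv::GaussianBlur(frame(full), frame(full), cv::Size(51, 51), 0);
  }
  cv::imwrite("blurred.png", frame);
}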

Original Image:

Blurred Face:

Enhanced Face/Spotlight:


Google Drive Link to Code AND Necessary Media:

https://drive.google.com/drive/folders/0B_T97VaALHA0U1Z6bVllU0MzWEE?usp=sharing


The code:

The helper patch “process”:

Project 1 – Sarika Bajaj

This project allows a person to use their hand, tracked by the Kinect, to move balls around in a virtual ball pit. Much of the patch is built on the dp.kinect2 reference patches and on a physics example from https://cycling74.com/tutorials/00-physics-patch-a-day, integrating the two into a Kinect system that uses the closest player’s right hand to move the main movable physics force. Most of the work involved figuring out good bounding boxes in the physical and virtual worlds, and how the user would actually interact with the Kinect (I wanted the output animation to be very obviously user controlled, almost painfully so). Additionally, I had some fun changing the aesthetics of the ball system.
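The bounding-box work boils down to a linear rescale from the physical tracking volume into the virtual scene, something like the sketch below (both ranges are invented for illustration; the patch does this with dp.kinect2 data and scale-style objects):

#include <iostream>

// Linearly rescale a value from one range to another, like Max's scale object.
float rescale(float v, float inLo, float inHi, float outLo, float outHi) {
  return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
}

int main() {
  float handX = 0.3f; // right-hand X in metres from the Kinect (assumed bounds)
  // Physical box -0.6..0.6 m mapped onto the virtual pit's -1..1 X axis.
  std::cout << rescale(handX, -0.6f, 0.6f, -1.0f, 1.0f) << "\n"; // 0.5
}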

Video of the system working:

Gist of code: https://gist.github.com/anonymous/9ddab8deb04b40090d8efeb8cd0b5f06

Assignment 4 – Tanushree Mediratta

This assignment required us to use fft~, an object that separates out the different frequencies a signal is composed of. I used Alvin Lucier’s “I am sitting in a room” as my audio signal and got rid of frequencies above a certain threshold. I then delayed this signal and added feedback, which gave Alvin’s voice a certain villainous tone. I also modified the patch we made in class to create an audio visualizer that shows the audio signal before and after the fft manipulation.
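Conceptually, the spectral step just zeroes every bin above a cutoff. Here is a minimal sketch (the frame size and cutoff are assumptions; the real version operates per frame inside the pfft~ sub-patch, with the delay and feedback applied afterwards in the time domain):

#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

// Discard all frequencies above a threshold by zeroing their FFT bins.
void highCut(std::vector<std::complex<float>>& bins, std::size_t cutoffBin) {
  for (std::size_t k = cutoffBin; k < bins.size(); ++k)
    bins[k] = {0.0f, 0.0f};
}

int main() {
  std::vector<std::complex<float>> frame(512, {1.0f, 0.0f}); // dummy spectrum
  highCut(frame, 64); // keep only the lowest 64 bins (illustrative cutoff)
  std::cout << std::abs(frame[63]) << " " << std::abs(frame[64]) << "\n"; // 1 0
}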

The code for the main pfft~ can be found here:

The code for the sub-patch is here:

Assignment 4 – Jacob Randall Holmes

For this assignment I created a spectral system to be used in live performance, based on a tutorial video I found on YouTube. The patch consists of a random pitch generator and a noise reduction patch linked through pfft~.


<pre><code>
———-begin_max5_patcher———-
1714.3oc0akzjiZCE9r8uBJW4Th6NHP.lbJyobO4PNLUJWxFY2ZBVxADd5Yl
Z7u8nE.ClM4tA6jKfaIgdu22aUK82lOawF1q3zEV+h0GslM6aymMS0jrgY4+
8rEGPutMFkpF1hsrCGvT9hk5933W4p1+MlEglRhvVblEZKmbBwwVGQ7suTLV
Z1AVFOFyUyjcdqo7uDiUSQw3TeDgtecBdKWyaAvm8VZ4F9r8RKfms7ki3o0e
k+IjH0Lv17om7gUnGgVPNfrsuOet7wRCkU7WiPaOOBruqshyc7fxWPuhmsw+
vfV4em2.+SweVLiMTUG2sie1R73I7+jghIeEmX4AbZWNAcJm5Aw+xQrVHWjR
1SQwKJEptABfSfTeBbbUvhSGHg+noI6.InLRJ97cTvWoz798J0dimTmcXCNo
cwywbwawRqEaPz88Jffv.oT4qDybOztjvN7PKm7DzALGmrFSQazLm8aP52Ey
Dy+8Q587j1yddNFH8N2tzm22NVxAjhh9uA73.NMEsG2vI3mkN+O+tcBLAerU
V9PPe3i6pQK928zBXkqTBg9JID35zqH5+fLA1jw4L56VUOHX.UQAb0tCNvhm
shENScP9nziobQwHcDmGZtbSj08nLqPpej+2xWCiFPfJrfeuo7bsmZzPm7Wk
36oDbTlnRMF8dk9GB0Y+0uf8BENgSdcPVe.Z8mjs+MNx5OHGxhEFIQV+Nd66
NhQNTr7F.kU5RaC6ERZOznc04VjuWCD4HgHjAIFeBmjJzyUznyVfNdrRyyp7
IR36SL0DErrrIBU2jaYSI3SjhuGV1JJQHgbg3kkngoWWUDuSNMrHbBMiTZVq
TjyKlvBPw2UUNAvVEBYktPwfvJ07KT56iYRkWELPnKNhoD5wDbpXUJHdN2U1
cDdGJKludGixSEU9pXCIMZo+cns3N+XpHDsR59PBQppyGx9DRDiJYhZXsr4B
xIpUxSqrqJLpQPQGa4iEpdAtzQmxPaYoaPIRUQdxBmhN4LVb8tJ+tX7Ndd2G
IT5UnHmcr6NSH6eomucCSz4g9laUOoqyn5dWKbT4qSQmpi1bTbbtma8o+UDk
HRBh4DsJvwtrScByWR2lvhiqIu5dN0ROQBq3s3OSh3unHTUiAwvIGKLhVTpk
iH6wo75swQ6Sq2Ri.Ehlx1j6ktliObTFwo9.psd7ptjUiwUq89p4o6nXcvfl
U+SWQxbzY5bB5nL3qimAbuhOaTKzP0C0VMQyJRPjGa4MBj0V7zCBHcBLCHCd
633HhXwjS3m2iHzyUYmSnjxXksOh53pWeHXiDqK0nZYIYwjT9fvafpHcOnd0
35Lut98XmF1I95L.9VE+kQ3hVi37DhnPb7kekVAbqftRrKNCy1Us6p8WkjB2
988iy0Fd5KrD9ML9BcfcG8ePfIx9eJ3R7yliAoLi76bDDJgKylpzR10UH0FX
lXfkNcvKiozJt5OGifAUqfrdUj+TO1xfaOFwUEL1uILLTuBqP4ZO6MRqW2Vv
iHLUeKFpiS+.PjndTgpAAIPw1t5091Nz.kf2ETpGioXBEedTy4bINoQYdJfr
Pk8kq+fPF3QCYhJM28XLq.pEt4BGDir6N878AiRPzH1A41tOpHU0M9X.vBnq
ILvaHvJ7tXOUaivFGv3Vbu.5U21bmwtFMV8nsbDoaSXVPp0upNSOrE39CZg4
aEfpLX.XPSH+GcHoe7rn3kvGQYA41WtZuMnyPPk2iFpRQe97iDnbUVStgCAT
t+mvlB7HfJ8xOcCWYlIkyiFopcntOBjx1s8S4sQcA2kf6Zgoa3vdP3nykeDn
25ZW6hmcZS3azFTTb2RtlAHzH7q+u.W..vsALAiCvzlq0k8VVtphN1TQkLH6
ucDKkkkrsvKH2l0ptzDgS4DZ4Ns+wxn.xwYj95l4AaCYB49Kd0.YIQ5SHw9A
yZvtYMvzvZlp5bmRUGzT7ALkbgmobgnHNKvc0.5lXs6qATno999SotqoJoCt
HXB4BGS0RdSHS3ZHSLk7f2s.DfIhI7MjIVMg.wJC4gvIjGL0AskrTSbrCS4r
faKgcNNVbABxOgE84k2xwq7s2x45tOlsAEmepskmXReGx6kSBd9E98Qc0i0U
h53CF9lGC7G7F2n9r50SpklqMHxkolANq4mzUn6JNIMvMioTnATRJxWxlbkw
1cfz1MIM3cQZ4EHqh.0QEd9i.955XfP5ZOFTx2DJsZLnjIVmPmQfRPno9Af2
Kk7LQlFCKBnI5I4+dCiuGm4jd.ONc.tqtZbRpd0Uh6pqCWyqBW2WCtquBbBJ
+84+Ki.dzi.
———–end_max5_patcher———–
</code></pre>

Assignment 4 – Sarika Bajaj

The goal of this patch was to create a rendering that reacts to the amplitude of the microphone’s input. I started from a rendering tutorial I found interesting, located at https://www.youtube.com/watch?v=qf1OGUeIs1s, and removed all of the audio processing done in the original patch. After playing around with the rendering portion I had kept, I changed the noise type of the rendering, as well as its scale and appearance. During this process, I discovered the “distortion” input that the rendering originally had set to a fixed value, and decided that this was the input I wanted to depend on the amplitude of the audio input (as it was giving an interesting zoom effect). Thus, I wrote my pfft~ sub-patch to filter the signal and pass out only the amplitude, which is then scaled down to act as my distortion input.
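Here is a minimal sketch of that amplitude extraction (the scaling constant is invented; the real patch does this inside its pfft~ sub-patch with a scale-style object):

#include <complex>
#include <iostream>
#include <vector>

// Sum the magnitudes of one spectral frame; the total, scaled down,
// becomes the rendering's "distortion" input.
float frameAmplitude(const std::vector<std::complex<float>>& bins) {
  float sum = 0.0f;
  for (const auto& b : bins) sum += std::abs(b); // per-bin magnitude
  return sum;
}

int main() {
  std::vector<std::complex<float>> bins(512, {0.1f, 0.0f}); // dummy frame
  float distortion = frameAmplitude(bins) * 0.01f; // assumed scale factor
  std::cout << distortion << "\n";
}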

For this example video, I simply used ambient noise as a catalyst (people walking by and talking), as I’m interested in making renderings that use ambient noise/images from an environment in a way that is obvious yet still interesting. Unfortunately, the YouTube compression ruins the effect quite a bit, but the general visual is preserved. A Google Drive link to the video is located here: https://drive.google.com/open?id=0Byn46tolhCwUUlNzNDVObGppY1k

Github Gist Here: https://gist.github.com/anonymous/f69fd0c33650aeab618f81ad8d37ecfe
*** When I tested the compressed code to make sure my file was all right, the rendering just stays stationary for some reason, while my actual patch works fine. For this reason, I am also uploading a zip file of my patch, in case something went wrong with the copy-compressed feature.
Zip of Files: Assignment 4 – Sarika Bajaj

Assignment 4 – Anish Krishnan

For this assignment, I used the pfft~ Fourier transform object to cut out certain frequencies in an audio file, controlled through a slider. I combined the output audio with a modified version of the sound visualizer that we developed in class. By moving the slider up and down, you will notice a change in the quality of the audio, which is also reflected in the characteristics of the moving shapes in the visualizer.

Input Audio:

Output Audio:


Main Patch:

Frequency Gate Patch:

Sound Visualization Patch:

Assignment 4 – Jonathan Namovic

For assignment 4 I took the piece “Pa Pa Papageno” from the opera The Magic Flute and separated the frequencies using pfft~ so that all frequencies within the human vocal range were allocated to one matrix and all other frequencies were allocated to another. I took these two matrices and used an altered version of the patch from class to create two groups of shapes: green polyhedrons and red cubes. The red cubes fluctuate in size with the orchestra, and the green polyhedrons fluctuate with the opera singers.
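Per frame, that routing amounts to sending each bin to one of two spectra depending on its frequency. Here is a minimal sketch (the vocal range, sample rate, and FFT size are all assumptions for illustration):

#include <complex>
#include <cstddef>
#include <vector>

// Route each FFT bin to a "vocal" frame or an "other" frame, zeroing the
// bins that don't belong, so each band can be resynthesized separately.
void splitBands(const std::vector<std::complex<float>>& in,
                std::vector<std::complex<float>>& vocal,
                std::vector<std::complex<float>>& other,
                float sampleRate, std::size_t fftSize) {
  vocal.assign(in.size(), {0.0f, 0.0f});
  other.assign(in.size(), {0.0f, 0.0f});
  for (std::size_t k = 0; k < in.size(); ++k) {
    float hz = k * sampleRate / fftSize; // centre frequency of bin k
    if (hz >= 80.0f && hz <= 1100.0f)    // rough vocal fundamental range
      vocal[k] = in[k];
    else
      other[k] = in[k];
  }
}

int main() {
  std::vector<std::complex<float>> in(512, {1.0f, 0.0f}), vocal, other;
  splitBands(in, vocal, other, 44100.0f, 1024);
}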