Assignment 1 – Neeraj Sharma

This is a simple example of feeding output back into input in a piece of software we use very often: file zipping.

Ever seen a file’s size increase when you zip it? In the video below, I have tried to give an example. Interestingly, if we zip an already-zipped file something unusual happens, and the algorithm’s erratic behavior continues as we keep going around this loop of “zipping a zip.”
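The loop is easy to reproduce yourself. Here is a minimal sketch using Python’s `gzip` module as a stand-in for the zip tool in the video (the starting data is made up for illustration):

```python
import gzip

# Start with highly compressible data, then keep compressing the output.
data = b"hello world " * 1000
sizes = [len(data)]
for _ in range(5):
    data = gzip.compress(data)
    sizes.append(len(data))

print(sizes)
# The first pass shrinks the file dramatically; every pass after that
# makes it slightly larger, because compressed data is nearly random
# and each pass adds its own header overhead.
```

This is exactly the “unusual” behavior in the video: after the first pass there is no redundancy left to squeeze out, so each further pass can only add overhead.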

Assignment 1 – Adrienne Cassel

I used FaceApp to make myself into the Ultimate Woman using the “young” filter, the “smile” filter, and the “female” filter.

This ^ is the original picture.

I cycled the picture through the app until I got here:

To get the *Ultimate* ageless look, I used the “young” filter. Here’s what it’s like to be 30 times younger:

Cycle 4

Cycle 10

Cycle 20

Cycle 30

I’ve been told I should smile more, so naturally I thought I would include it in the criteria. Here’s what the *Ultimate* smile looks like after 104 passes:

Cycle 3

Cycle 20

Cycle 70

Cycle 104

I also used the “female” filter. Here’s what it looks like after eight passes:

Assignment 1 – Taylor Vence

I used an online voice changer (https://voicechanger.io/) as a system to modify the song Here It Goes Again by OK Go. As the song was fed through, it was interesting to see which sounds/instruments kept their integrity the longest. Bass and middle notes seemed to fade out first, and the vocals were distinguishable almost until the very end. There were quite a few artifacts in the later iterations, but I think that was part of using low-tech digital equipment.

The process of uploading the song file and applying the filter was repeated only eight times before the song was completely unrecognizable.

Listen here:  https://drive.google.com/file/d/0B0PpTvWRSoHFc1M2T0tXbjc4MVU/view?usp=sharing

Assignment 1 – Amanda Sari Perez

In the above video, I demonstrated the gradual degradation of the speech-to-text feature on my Android phone. Starting from the original text of our assignment, I used Convert Text to Speech V2 on my laptop to convert the text to a sound file. I played the sound from my laptop speakers and dictated it into speech-to-text on my phone through Facebook Messenger (just for speedy transfer back to my laptop).

I fed this text back into Convert Text to Speech V2 and ran this loop for 20 iterations. I’d had the phone on my lap for most of the trials, but around version 14 I shifted positions and leaned back in my chair. That change of distance from the laptop speakers introduced more errors into the speech-to-text. I maintained that distance for the rest of the trials, which led to even weirder misinterpretations. At the end of the video you can see how drastically the text had changed from the original passage.

Assignment 1 – Bri Hudock

I wanted to see where ifyoudig.net would take me.  Ifyoudig.net is a website that returns the name of an artist it believes you will “dig” based on the name of an artist you already “dig.”  If you were wondering, yes, this lingo is in fact killing me.

I started with Michael Jackson as the input artist and the original signal.  Then I kept clicking the topmost artist in the list, the only artificial constraint being that I would not select an artist that I had already selected previously.  Then I watched as the ifyoudig system did its magic.

After 45 iterations, the website was beginning to suggest artists like Austrian duo Kruder & Dorfmeister, known for their electronic downtempo sound.

After about 40 more iterations, the website suggested Yonder Mountain String Band, a progressive bluegrass group from Colorado.

I attached a video that takes you through the first 45 iterations by showing the major album of each successive artist/band, all to the backdrop of one of MJ’s most popular tunes.  Then the video jumps to number 85 and includes an audio clip from one of Yonder’s most famous tracks.  This provides you (the listener) with a side-by-side comparison of the sound of the input signal versus the sound of the output signal after 85 iterations of “feedback” through the system.

link to video

Assignment 1 – Matthew Xie

The system I processed my signal through was a pixel-sorting algorithm for images, written for the application Processing. Pixel sorting is the process of isolating a horizontal or vertical line of pixels in an image and reordering them based on some criterion; in my case, I sorted the pixels by brightness. In more detail, the script begins sorting when it finds a bright pixel in a column or row, and stops sorting when it finds a dark one. It tells the two apart by comparing each pixel’s brightness value to a threshold: if the value is lower than the threshold, the pixel is deemed dark; if it’s higher, it’s deemed bright.
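The row-by-row logic above can be sketched in a few lines of Python. This is my own toy reimplementation of the idea, not the actual Processing script; the pixel values and threshold below are invented for illustration:

```python
def sort_row(row, threshold=100):
    """Sort runs of bright pixels within a row; dark pixels break the runs."""
    out = list(row)
    i = 0
    while i < len(out):
        if out[i] >= threshold:            # found a bright pixel: start a run
            j = i
            while j < len(out) and out[j] >= threshold:
                j += 1                     # extend until a dark pixel stops it
            out[i:j] = sorted(out[i:j])    # sort the bright run by brightness
            i = j
        else:
            i += 1                         # dark pixels stay where they are
    return out

row = [10, 200, 150, 250, 5, 180, 120]
print(sort_row(row))  # [10, 150, 200, 250, 5, 120, 180]
```

Note that the dark pixels (10 and 5) act as fences: each bright run is sorted independently between them, which is what produces the characteristic streaks.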

I started with a simple bird’s-eye-view picture of an island. I ran my image through the system 630 times, to be exact. The changes are quite noticeable at the beginning, but they slowly become harder to notice as the signal gets twisted further. I also noticed that the algorithm itself has a few bugs, leaving some blocks of pixels entirely unsorted. Especially in the middle of the islands, where brightness levels are even and high, the effect won’t appear no matter how many times the process is run.

For more info about pixel sorting, please see: http://datamoshing.com/2016/06/16/how-to-glitch-images-using-pixel-sorting/

I am sitting in a PVC pipe – Arnav Luthra – Assignment 1

The first idea I had for this assignment was to feed my name into the Wu-Tang Clan Name Generator and then feed the output of that back into itself, over and over. I did this 5 times and got the following:

Arnav Luthra -> Arrogant Menace -> Lazy-assed Killah -> Vizual Professional -> Annoyin’ Bstrd -> Shriekin’ Hunter

I didn’t really get any meaningful insight into the system so I decided to do something else.

Similar to I Am Sitting in a Room, I recorded myself briefly speaking, then fed it through Max for Live’s convolution reverb (set with the impulse of a PVC pipe) and repeated the process to yield the following:

It sounded harsher overall compared to the original I Am Sitting in a Room, which could be due to the resonant frequencies of a PVC tube. Also, there was a certain warmness in the original that likely stemmed from some form of tape distortion.
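The feedback process itself is simple to sketch: each pass is one convolution of the current signal with the same impulse response. Below is a pure-Python stand-in; the two-sample impulse response is invented for illustration and is not the actual PVC-pipe impulse used in Max for Live:

```python
def convolve(signal, ir):
    """Direct convolution: one pass of the signal through the 'pipe'."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

ir = [1.0, 0.5]           # toy impulse response standing in for the pipe
signal = [1.0, 0.0, 0.0]  # a single click
for n in range(1, 4):
    signal = convolve(signal, ir)
    print(n, [round(x, 3) for x in signal])
# Each pass lengthens the tail and reshapes the signal toward the
# impulse response's own character, just as repeated passes through
# the pipe converge on its resonances.
```

Frequencies the impulse response favors get reinforced on every pass, which is why the pipe’s resonances (and the harshness) build up so quickly.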

Assignment 1 – Kevin Darr

For this assignment I took inspiration from the example of I Am Sitting in a Room by Alvin Lucier by iteratively re-recording audio. The readymade system I used was Ableton Live. I took a default drum loop, recorded the audio using my computer speakers and a condenser mic, then applied a built-in audio effect called Redux, which is essentially a bitcrusher/downsampler. I did this 11 times (including the original loop). It became somewhat painful to listen to, since certain high frequencies were being amplified each pass.
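The core of a bitcrusher/downsampler like Redux can be sketched in a few lines. This is my own toy version, not Ableton’s implementation, and the bit depth, hold factor, and sample values below are arbitrary:

```python
def bitcrush(samples, bits=4, hold=2):
    """Quantize to 2**bits amplitude levels and hold every `hold`-th
    sample (a crude downsample). Input floats are assumed in [-1.0, 1.0]."""
    half = 2 ** bits / 2
    out = []
    for i in range(0, len(samples), hold):
        q = round(samples[i] * half) / half   # quantize the amplitude
        out.extend([q] * hold)                # sample-and-hold
    return out[:len(samples)]

loop = [0.1, 0.33, -0.52, 0.9, -0.07, 0.61]   # a pretend drum loop
print(bitcrush(loop))  # [0.125, 0.125, -0.5, -0.5, -0.125, -0.125]
```

Applied purely digitally at fixed settings, a second pass changes little; in the piece the damage compounded because every pass also went out through speakers and back in through a microphone before being crushed again.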

An interesting and unintended side effect of the setup I used was the latency of the system. Each recorded section is exactly 3 measures at 70 BPM, but as you can hear, the recordings become increasingly late and eventually cut off the end of the original loop.

Here is a link to the piece.

WARNING: As you can see in the Soundcloud waveform view, it gets very loud at the end. Please avoid damaging your ears/audio equipment.

Assignment 1 – Jonathan Namovic

I decided to use the built-in photo editor in Android Messenger as my found system. I started with a close-up photo of my friend Miles and repeatedly put it through the editor.  The photo editor has multiple simplistic sliders, such as contrast, shadows, and sharpness. In order to keep each individual jump relatively small, I turned each of these sliders only halfway up. The entire process took 45 iterations to reach the final picture.

The sharpness slider definitely had the most impact on the picture overall, as its attempt to remove “blurriness” often resulted in the colors blending later in the sequence. It is also interesting to see how the sliders’ effects changed as the photo changed. Toward the end of the process, the shadow slider, which usually makes the darkness in a photo much more prominent, actually accentuated the whites.
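The compounding effect of even a modest slider setting can be sketched with a toy contrast adjustment. This is my own stand-in for Messenger’s editor, and the gain value and pixel values are made up; the point is only how small per-pass changes accumulate:

```python
def contrast(pixels, gain=1.2):
    """One modest contrast pass: push grayscale values away from mid-gray."""
    return [min(255, max(0, round(128 + gain * (p - 128)))) for p in pixels]

pixels = [90, 120, 140, 200]
for _ in range(45):              # 45 iterations, as in the assignment
    pixels = contrast(pixels)
print(pixels)
# Every value saturates to 0 or 255: small per-step changes compound
# until the mid-tones are gone, much like the blended colors above.
```

Even though a single pass barely moves any pixel, after a few dozen passes nothing between black and white survives.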

Assignment 1 – Sarika Bajaj

Over the years, Amazon has given me quite a few interesting results under the “Customers who bought this item also bought” tab, some suggestions that have resulted in me impulsively buying more items online and others that have left me just plain confused. Therefore, I thought it would be interesting to explore this suggested purchases feedback loop.

As I have recently been setting up my new townhouse, I thought I would start with a simple Amazon search for “office chair.” I then recorded the first suggested item in the list and clicked on it to build the chain. If I encountered a situation where the suggested item looped, I simply recorded the next suggested item in line that I had not previously clicked on and continued down the chain. I followed the chain for about 50 items, by which time I had not seen an office chair for quite a while.
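The click-through rule (always take the top suggestion you haven’t visited yet) is itself a small algorithm. Here is a sketch with a made-up suggestion graph standing in for Amazon’s real one; the items and the `suggest` function are invented for illustration:

```python
def follow_chain(start, suggest, steps):
    """Follow top suggestions, skipping anything already visited."""
    visited = [start]
    current = start
    for _ in range(steps):
        # take the first suggestion not seen before
        nxt = next((s for s in suggest(current) if s not in visited), None)
        if nxt is None:
            break          # dead end: every suggestion already visited
        visited.append(nxt)
        current = nxt
    return visited

# Toy "customers also bought" graph (invented for illustration).
graph = {
    "office chair": ["desk", "office chair"],
    "desk": ["office chair", "desk lamp"],
    "desk lamp": ["light bulb", "desk"],
    "light bulb": ["desk lamp", "light bulb"],
}
print(follow_chain("office chair", lambda item: graph[item], steps=10))
```

The visited list is what keeps the walk moving outward instead of bouncing between the same two items, which is exactly why the chain drifts so far from office chairs.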

The video of the path across Amazon is below: