This week, I looked at two projects similar to what my group is building for our final project. The first is a fairly “name brand” effort undertaken by Snapchat: Snapchat filters. The filters are quite interesting; they bring real-time augmented reality to users’ own smartphones, letting them tune their faces, create mini animations, or simply make themselves look like a cute puppy. Facial recognition is a relatively old technology, but it is interesting to see how Snapchat innovated on it and revitalized it for the current market!
Moreover, there is going to be an aspect of our project that plays music, so I also found a project by Pierry Jaquillard at the ECAL Media and Interaction Design Unit in which he turns a person’s DNA into musical notes. For now the project uses only Pierry’s own DNA: all 23 chromosomes are run through different interfaces to create different results. He uses JavaScript to process the notes into a visualization, then uses the visuals to encode a music file. Though this project is well outside the scope of our final, it is interesting what designers can make out of the smallest and least-connected things!
This project will be done by Taisei Manheim, Carly Sacco, and myself. Our proposal is a smartphone whose screen shows different apps; clicking each app does something different. One app would resemble Snapchat and let users put a few filters on a picture of themselves (using the computer camera). Another would resemble a music app and let the user choose different songs to play. A third would resemble Instagram and let users scroll through images and like certain photos. The last would be a clock app that shows the time. We are thinking of building the Snapchat app together, and then each of us working on one of the three remaining apps.
Mary Flanagan is an inventor, artist, writer, and designer who creates games, installations, poetry, and essays. Perfect.city is an exploration of the South Korean city of Songdo, a planned international metropolis developed by Gale International; the project explores the city’s ambition of being designed perfectly. Flanagan modeled the city in the video game The Sims 2, then produced a series of renders, along with the video below showcasing the design.
While I do think it is smart to use software to model the city, I believe that building it in The Sims limits the possibilities due to the game’s constraints. Nonetheless, the project was quite interesting, and I look forward to seeing her future work!
//Jai Sawkar
//jsawkar@andrew.cmu.edu
//Section C
//Project 11: Generative Landscape
var objectSpeed = 0.00005; // speed of objects
var mountainDetail = 0.007; // threshold of variance in mountains
var waterDetail = 0.002; // threshold of variance in water
var xMove = 520; // initial x position of the sliding text
var spec = []; // snow
function setup() {
    createCanvas(480, 480);
    for (var i = 0; i < 100; i++) { // sets up the snow within the canvas bounds
        var specX = random(0, width); // determines x
        var specY = random(0, height / 2); // determines y
        spec[i] = makeSpec(specX, specY);
    }
}
function draw() {
    background(249, 233, 161);
    drawMountains(); // draws the mountains
    displaySpec(); // draws the snow
    fill(228, 118, 47);
    noStroke();
    ellipse(width / 2, height / 2, 95); // sun
    drawWater(); // draws the water
    drawText(); // draws the sliding text
    xMove -= 0.1; // moves the text
    if (xMove < -90) { // when the text goes off screen, it reappears on the right
        xMove = 480;
    }
}
function drawText() {
    // album title text
    fill(0);
    textFont('Courier');
    textSize(24);
    textAlign(CENTER);
    text('K A U A I', xMove, 30);
    textSize(8);
    text('WITH JADEN SMITH AS', xMove, 40);
    text('"THE BOY"', xMove, 50);
    textSize(24);
    text('G A M B I N O', xMove, 70);
}
function drawMountains() { // generative mountains
    push();
    beginShape();
    fill(90, 57, 27);
    vertex(0, height / 2 + 50); // left edge of the mountain shape
    for (var x = 0; x < width; x++) {
        var t = (x * mountainDetail) + (millis() * objectSpeed); // time-based noise input
        var y = map(noise(t), 0, 1, 0, height); // remaps noise values to the canvas
        vertex(x, y - 15);
    }
    vertex(width, height / 2 + 50); // right edge of the mountain shape
    endShape();
    pop();
}
function makeSpec(specX, specY) { // makes a snow object
    var s = {x: specX, // x coordinate
             y: specY, // y coordinate
             draw: drawSpec}; // method used to draw the flake
    return s;
}

function drawSpec() {
    fill('white');
    ellipse(this.x, this.y, 3); // draws one flake
}

function displaySpec() {
    for (var i = 0; i < spec.length; i++) {
        spec[i].draw(); // draws each flake
    }
}
function drawWater() { // generative water
    push();
    beginShape();
    noStroke();
    fill(170, 210, 241);
    vertex(0, height); // bottom-left corner closes the shape
    for (var x = 0; x < width; x++) {
        var t = (x * waterDetail) + (millis() * objectSpeed); // time-based noise input
        var y = map(noise(t), 0, 1, height / 2 - 20, height / 2 + 50); // remaps noise to the water band
        vertex(x, y);
    }
    vertex(width, height); // bottom-right corner closes the shape
    endShape();
    pop();
}
This week, I used the generative landscape project to create an album cover for one of my favorite artists, Childish Gambino. I began with the original cover, a Hawaiian sunset abstracted into three objects: the sky, the water, and a semicircle. I first created the randomly generated water, then added mountains in the back to create depth. I also used a JavaScript object to create snow, juxtaposing the calm nature of Kauai with Gambino’s other, more rowdy album, Because the Internet. Lastly, I made the album title slide in and out of the landscape.
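The sliding title works by decrementing an x position every frame and resetting it once the text has fully left the canvas. That wraparound can be pulled out into a tiny helper and checked on its own (the `scrollX` name and the integer step below are my own illustration, not part of the sketch):

```javascript
// Advance a scrolling x position and wrap it back to the right edge once it
// has fully left the canvas on the left. In the sketch, step is 0.1,
// limit is -90 (title fully off screen), and resetX is the canvas width, 480.
function scrollX(x, step, limit, resetX) {
  var next = x - step;
  return next < limit ? resetX : next;
}

console.log(scrollX(100, 1, -90, 480)); // 99 — still sliding left
console.log(scrollX(-90, 1, -90, 480)); // 480 — wrapped to the right edge
```

Keeping the wrap logic in one place like this makes it easy to tune how far off screen the text travels before it reappears.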
// Jai Sawkar
// Section C
// jsawkar@andrew.cmu.edu
// Project 10 - Sonic Sketch
// globals assigned in preload()
var famPic;
var hi1;
var hi2;
var hi3;
var hi4;

function preload() {
    var facesURL = "https://i.imgur.com/3aPIhhm.jpg?"; // picture of 4 people
    famPic = loadImage(facesURL); // sets the family picture as the variable
    hi1 = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/hi1.wav");
    hi2 = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/hi2.wav");
    hi3 = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/hi3.wav");
    hi4 = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/hi4.wav");
}
function setup() {
    createCanvas(580, 366);
    useSound();
}

function soundSetup() {
    // sets the volume so the clips do not play so loud they frighten people
    hi1.setVolume(0.5);
    hi2.setVolume(0.5);
    hi3.setVolume(0.5);
    hi4.setVolume(0.5);
}

function draw() {
    // places the picture to fill the canvas
    background(200);
    image(famPic, 0, 0);
}
function mousePressed() {
    // pressing on a person plays their clip and pauses the others
    if (mouseX > 0 && mouseX < 134) {
        hi1.play();
    } else {
        hi1.pause();
    }
    if (mouseX > 134 && mouseX < 314) {
        hi2.play();
    } else {
        hi2.pause();
    }
    if (mouseX > 314 && mouseX < 454) {
        hi3.play();
    } else {
        hi3.pause();
    }
    if (mouseX > 454 && mouseX < width) {
        hi4.play();
    } else {
        hi4.pause();
    }
}
This week, I used sound to make a simple yet interactive sketch of different people saying hi! It took some time to figure out how to actually put sound into my sketch, and then more time to figure out how to reduce the volume of the sound files. Once that was done, it was a matter of using previous skills to incorporate mouse interaction into the sketch.
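The hit testing in mousePressed boils down to mapping the click’s x coordinate to one of four regions. That lookup can be expressed against an array of region boundaries (the `regionFor` helper and `bounds` array are my own sketch of the idea, not the course API):

```javascript
// Right-hand edges of the four click regions, taken from the sketch
// (canvas width is 580, so the last region ends there).
var bounds = [134, 314, 454, 580];

// Map an x coordinate to the index of the person clicked,
// or -1 if x falls outside every region.
function regionFor(x, edges) {
  for (var i = 0; i < edges.length; i++) {
    if (x < edges[i]) {
      return i;
    }
  }
  return -1;
}

console.log(regionFor(100, bounds)); // 0 — first person
console.log(regionFor(500, bounds)); // 3 — fourth person
```

With an index in hand, the play/pause calls could be driven from an array of sounds instead of four near-identical if/else blocks.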
This week, I looked at the work of Mileece Petre, an English sound artist and environmental designer who makes generative and interactive art through plants. She believes that plants are observational and interactive beings, and she makes music with them by attaching electrodes to them. The electromagnetic currents from the plants are captured as data, translated into code, and then transformed into musical notes, from which she composes music.
This project is super cool to me because it truly thinks outside the box: she is able to make minimalist, introspective music from nothing more than small currents in plants. Moreover, it reflects the true possibilities of what music made with computational tools can be!
For this week’s Looking Outwards, I am looking at Taisei Manheim’s Looking Outwards from week 2, in which he took a look at Taxi, Taxi!, a generative project by Robert Hodgin. It is a digital installation in the Samsung Experience Center (SEC) in NYC. The project focuses on New York taxis, using open data to create real-time simulations of all the different routes cabbies can take. It combines code with an OpenStreetMap model of NYC in 3D to create the installation. It is super cool to take something iconic in NYC, the yellow taxis, and use the information they generate to see the flow of life and individuality in a completely different light. Thanks for the read, Tai!
//Jai Sawkar
//jsawkar
//Project 9: Portrait
//Section C
var bImage; // portrait image, assigned in preload()

function preload() {
    var imgURL = "https://i.imgur.com/4e3vWfI.jpg?1"; // image of Jaclyn & Sophia
    bImage = loadImage(imgURL);
}

function setup() {
    createCanvas(480, 480);
    background('black');
    bImage.loadPixels(); // loads pixels from the image
    frameRate(1000000); // draws dots as fast as the browser allows
}
function draw() {
    var px = random(width);
    var py = random(height);
    var sx = random(0, 240); // 'Sophia' on the x axis
    var sy = random(height); // 'Sophia' on the y axis
    var jx = random(240, 480); // 'Jac' on the x axis
    var jy = random(height); // 'Jac' on the y axis
    // ix & iy floor the random point and clamp it to a valid pixel index
    var ix = constrain(floor(px), 0, width - 1);
    var iy = constrain(floor(py), 0, height - 1);
    var colXY = bImage.get(ix, iy); // samples the image color at that pixel
    noStroke();
    fill(colXY); // fills based on the sampled color
    ellipse(px, py, 5);
    textFont('FUTURA');
    textSize(2);
    text("Sophia", sx, sy); // randomly prints Sophia on Sophia's side of the canvas
    textSize(3);
    text("Jac", jx, jy); // randomly prints Jac on Jaclyn's side of the canvas
}
For this project, I used a picture I took two years ago of two of my best friends, Jaclyn and Sophia. Along with having the picture generated from dots, I wanted some aspect of it to be more personalized, so I split the canvas so that each name makes up the picture on its owner’s side. Below are two screenshots of the process, as well as the original photo!
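The sampling step in draw() comes down to flooring a random point and clamping it into the image bounds before reading a pixel. That index math can be checked on its own; here is a plain-JavaScript version (the `clampIndex` name is my own, mirroring p5’s `constrain(floor(p), 0, size - 1)`):

```javascript
// Convert a continuous coordinate into a valid pixel index in [0, size - 1],
// the same math as constrain(floor(px), 0, width - 1) in the sketch.
function clampIndex(p, size) {
  var i = Math.floor(p);
  return Math.min(Math.max(i, 0), size - 1);
}

console.log(clampIndex(123.7, 480)); // 123
console.log(clampIndex(480.0, 480)); // 479 — clamped to the last pixel
console.log(clampIndex(-0.3, 480)); // 0 — clamped to the first pixel
```

Without the clamp, a random point exactly at the canvas edge would read one pixel past the image and `get()` would return a transparent color.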
Deray McKesson and Samuel Sinyangwe are American civil rights activists; more specifically, they are prominent voices in the Black Lives Matter movement. The piece presented here is part of an effort to find a solution that addresses the sustained rise in police violence in America.
Their work is very interesting and respectable to me, as this is a hot topic in politics right now and a sweeping movement across America. It has come to the point that, when pulled over by police, many citizens fear for their lives rather than merely the speeding ticket they may face. I believe the work they are doing to spread information about protests, along with reporting the news of police violence, is a crucial stepping stone toward a possible solution. Moreover, their work follows key design principles that make their ideas straightforward and easy to understand. There is great work happening with McKesson and Sinyangwe, and it is refreshing to see citizens using design and computational practice for the greater good rather than commercial gain.
This week, I read about the work of Stefanie Posavec in information visualization. She took the silent wireless signals that surround us, converted into sound audible through specially modified hearing aids, and created a system for visualizing those signals as they were heard on walks around London.
Stefanie describes this as an “experimental platform which aims to answer this question by translating the characteristics of wireless networks into sound”. I find not only the graphic very interesting but also the premise from which it was derived. She essentially brings a sonic ambiance into visual resolution, opening a conversation about what else we cannot see but can still represent through information visualization.