Taisei Manheim – Looking Outward – 12

The two projects I chose were Mirror Number 2 by Daniel Rozin and MilkDrop by Geisswerks.  Mirror Number 2 consists of projected screens or kiosks connected to video cameras and computers.  Rozin has made a number of mirror-inspired pieces: when a viewer stands in front of one, their image is reflected on the screen after it has been interpreted by the computer, and the display updates rapidly, yielding smooth transitions tightly linked to the viewer's movements.  In this particular piece, six different effects cycle for a minute each, for a total of six minutes.  All six effects use 1,000 moving pixels that follow a few simple rules, such as traveling in circles or wandering at random.  As they move, the pixels adopt the colors of the viewer standing in front of the piece, producing a rough approximation of the viewer's image that resembles an impressionist painting.  I chose this project because it is an interesting way to transform an image of the user; in our own project, we could alter the user's image in a similar manner.
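To get a feel for the rule-based pixels, here is a minimal p5.js sketch in that spirit. It is my own toy approximation, not Rozin's implementation: the particle count, dot size, and random-walk rule are assumptions, and a webcam via createCapture() stands in for his camera setup.

// Toy approximation of "moving pixels adopt the viewer's colors" (not Rozin's code).
var capture;        // webcam feed standing in for the piece's camera
var particles = []; // the moving "pixels"

function setup() {
    createCanvas(480, 480);
    capture = createCapture(VIDEO);
    capture.size(480, 480);
    capture.hide();
    for (var i = 0; i < 1000; i++) {
        particles.push({x: random(width), y: random(height)});
    }
    noStroke();
}

function draw() {
    background(0, 20); // faint trail so transitions read as smooth
    capture.loadPixels();
    if (capture.pixels.length === 0) return; // camera not ready yet
    for (var i = 0; i < particles.length; i++) {
        var p = particles[i];
        // simple movement rule: a small random walk, kept on the canvas
        p.x = constrain(p.x + random(-3, 3), 0, width - 1);
        p.y = constrain(p.y + random(-3, 3), 0, height - 1);
        // adopt the color of the camera pixel underneath this particle
        var idx = 4 * (floor(p.y) * capture.width + floor(p.x));
        fill(capture.pixels[idx], capture.pixels[idx + 1], capture.pixels[idx + 2]);
        ellipse(p.x, p.y, 6, 6);
    }
}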

MilkDrop is a music visualizer first written in 2001 and updated in 2007 to use pixel shaders, flexible programs that run on modern GPUs and can produce spectacular imagery and special effects.  MilkDrop's content consists of presets, each with its own look and feel.  A preset is defined by a dozen or so short pieces of code, along with several dozen variables that can be tuned. Users can create their own presets, even writing new code on-screen while the preset runs so they can see the effects of their changes immediately. This has spawned a large community that has authored many thousands of presets, creating new visuals that react to the music.  I chose this project because it is an interesting way to visualize music; when creating a music player, this is one way to think about what is displayed while the music plays.
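The preset idea, a short piece of per-frame code plus a handful of tunable variables, can be sketched in p5.js terms. The sketch below is only a toy analogue I put together, not MilkDrop's actual preset format; the variable names and the use of p5.Amplitude (from p5.sound) are assumptions.

// Toy "preset": tunable variables plus one short per-frame routine (not real MilkDrop syntax).
var preset = {
    waveScale: 120,  // tunable: how far the ring bulges with the music
    spin: 0.01,      // tunable: how fast the figure rotates
    hue: 200,        // tunable: base color
    perFrame: function(level, t) { // level = audio amplitude 0..1, t = frame count
        push();
        translate(width / 2, height / 2);
        rotate(t * this.spin);
        stroke(this.hue, 80, 100);
        noFill();
        beginShape();
        for (var a = 0; a < TWO_PI; a += 0.1) {
            var r = 100 + level * this.waveScale * sin(6 * a + t * 0.1);
            vertex(r * cos(a), r * sin(a));
        }
        endShape(CLOSE);
        pop();
    }
};

var amp;

function setup() {
    createCanvas(480, 480);
    colorMode(HSB);
    amp = new p5.Amplitude(); // needs p5.sound and some audio playing
}

function draw() {
    background(0);
    preset.perFrame(amp.getLevel(), frameCount);
}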

Image of Mirror 2 at Israel Museum.
Visualization created based on music played in Milkdrop.

Taisei Manheim – Project 12 – Final Project Proposal

This project will be done by Carly Sacco, Jai Sawker, and me.  Our proposal is a simulated smartphone: the screen shows several apps, and clicking each one does something different. One app would resemble Snapchat and let users apply a few filters to a picture of themselves (taken with the computer's camera).  Another would resemble Spotify and let users choose different songs to play. A third would resemble Instagram and let users scroll through images and like certain photos.  The last would be a clock app that shows the current time.

Sketch of our proposed project.
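As a rough interaction sketch, the home screen could be a grid of icons with simple hit-testing in mousePressed(), where clicking an icon switches to that app's screen. This is only a hypothetical starting point for the proposal; the icon names, layout, and sizes are assumptions, not final code.

// Hypothetical home-screen sketch for the proposal: four app icons, click to "open" one.
var apps = ["camera", "music", "photos", "clock"];
var current = "home"; // which screen is showing

function setup() {
    createCanvas(270, 480); // phone-shaped canvas
}

function draw() {
    background(230);
    textAlign(CENTER);
    if (current === "home") {
        // draw a 2 x 2 grid of app icons
        for (var i = 0; i < apps.length; i++) {
            var x = 45 + (i % 2) * 120;
            var y = 60 + floor(i / 2) * 120;
            fill(255);
            rect(x, y, 80, 80, 15);
            fill(0);
            text(apps[i], x + 40, y + 100);
        }
    } else {
        // placeholder screen for whichever app was opened
        fill(0);
        text(current + " app would run here", width / 2, height / 2);
    }
}

function mousePressed() {
    if (current !== "home") {
        current = "home"; // click anywhere to go back home
        return;
    }
    // hit-test the icon grid
    for (var i = 0; i < apps.length; i++) {
        var x = 45 + (i % 2) * 120;
        var y = 60 + floor(i / 2) * 120;
        if (mouseX > x && mouseX < x + 80 && mouseY > y && mouseY < y + 80) {
            current = apps[i];
        }
    }
}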

Taisei Manheim – Project 10 – Interactive Sonic Sketch


sketch

For this project I started with my Project 7 sketch because it was one of my favorite projects that I have created.  In that project the geometries change as the mouse moves, so I added a pitch that changes with the mouse as well.  I then added four differently sized circles centered on the canvas; clicking inside each one plays a different sound.  The smallest sound, a ding, is heard when the smallest circle is clicked, then a dream sound for the second circle, a mail sound for the third, and the loudest sound, a blare, for the outermost circle.  The sounds were taken from freesound.org.

//Taisei Manheim
//Section C
//tmanheim@andrew.cmu.edu
//Assignment-10

// global sound objects (clips loaded in preload, oscillator created in soundSetup)
var ding, dream, mail, blare;
var osc;

function preload() {
    // call loadImage() and loadSound() for all media files here
    ding = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/ding.wav");
    ding.setVolume(1.0);
    dream = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/dream.wav");
    dream.setVolume(1.0);
    mail = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/mail.wav");
    mail.setVolume(1.0);
    blare = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/11/blare.wav");
    blare.setVolume(1.0);
}

function setup() {
    createCanvas(480, 480);
    useSound();
}

function soundSetup() { // setup for audio generation
    // you can replace any of this with your own audio code:
    osc = new p5.TriOsc();
    osc.freq(500.0);
    osc.amp(0.1);
    osc.start();
}

function draw() {
    //background color determined by mouse
    background(mouseX, mouseY, mouseX - mouseY);
    noFill();

    //ellipse
    stroke(255);
    ellipse(width / 2, height / 2, 600, 600);
    ellipse(width / 2, height / 2, 450, 450);
    ellipse(width / 2, height / 2, 300, 300);
    ellipse(width / 2, height / 2, 150, 150);

    stroke(0);
    drawHypotrochoidCurve();
    drawRanunculoidCurve();

    //background pitch based on mouseX and mouseY values
    osc.freq(mouseX + mouseY / 1.5);
}

function mousePressed() {
    //if click within first circle play ding sound
    var a = dist(mouseX, mouseY, width / 2, height / 2);
    if (a < 75) {
        ding.play();
    }

    //if click within second circle play dream sound
    var b = dist(mouseX, mouseY, width / 2, height / 2);
    if (b > 75 && b < 150) {
        dream.play();
    }

    //if click within third circle play mail sound
    var c = dist(mouseX, mouseY, width / 2, height / 2);
    if (c > 150 && c < 225) {
        mail.play();
    }

    //if click within fourth circle play blare sound
    var d = dist(mouseX, mouseY, width / 2, height / 2);
    if (d > 225 && d < 300) {
        blare.play();
    }
}

function drawHypotrochoidCurve() {
    //http://mathworld.wolfram.com/Hypotrochoid.html

    //mouse only affects image while on canvas
    var x = constrain(mouseX, 0, width); 
    var y = constrain(mouseY, 0, height);

    //map mouse position to curve parameters so the figure changes as the mouse moves away from center
    var a = map(x, 0, width, 0, width / 64); 
    var b = map(y, 0, height, 0, height / 64);
    var h = width / 2;

    push();
    translate(width / 2, height / 2); // starts in the center of the canvas
    noFill();
    beginShape();
    for(var i = 0; i < 360; i += 1) {
        var x = (a - b) * cos(i) + h * cos((a - b) * i);
        var y = (a - b) * sin(i) - h * sin((a - b) * i);
        vertex(x, y);
    }
    endShape();
    pop();
}

function drawRanunculoidCurve() {
    //http://mathworld.wolfram.com/Ranunculoid.html
    
    //mouse only affects image while on canvas
    var x = constrain(mouseX, 0, width); 
    var y = constrain(mouseY, 0, height);

    //map mouse position to curve parameters so the figure changes with the mouse
    var a = map(x, 0, width, 0, width / 8); 
    var b = map(y, 0, height, 0, height / 8);

    push();
    translate(width / 2, height / 2); // starts in the center of the canvas
    rotate(mouseX/ mouseY);
    noFill();
    beginShape();
    for(var i = 0; i < 360; i += 1) {
        var x = a * (6 * cos(i) - cos(6 * i));
        var y = a * (6 * sin(i) - sin(6 * i));
        vertex(x, y);
    }
    endShape();
    pop();
}

Taisei Manheim – Looking Outward – 10


The project I chose is called Amper Score, by Amper, a company made up of engineers, musicians, and composers who have worked on many movies and television shows. The idea for the project arose from the frustration many people in film and other industries feel when obtaining stock or precomposed music: finding the music, getting over the legal and financial hurdles, and the lack of customization. Amper's solution is an AI composer that writes and composes unique, royalty-free music. There are two workflows: one that lets you work quickly and another that allows deep customization of the music. The simple workflow asks the user for the style, mood, and length of the song, and the AI generates a track in seconds, while the more advanced workflow allows changes to tempo, instruments, and more. This lets businesses, films, and others create originally composed music that is unique to them. Amper's AI is built from the ground up on a massive dataset created by an internal team of composer-developers. Rather than using an approach that limits contextual awareness and Score's ability to collaborate with its users, Amper's data team describes every facet of music, including genre, mood, and instrumentation. What I admire is that anybody could use it: the luxury of original music for a business or film becomes far more accessible to people who wouldn't have had the money or resources before this tool.

Taisei Manheim – Project 11 – Generative Landscape

sketch

//Taisei Manheim
//Section C
//tmanheim@andrew.cmu.edu
//Assignment-11

var trees = [];
var frames;
var backgroundImage;

function preload() {
    //background  gradient
    backgroundImage = loadImage("https://i.imgur.com/L0VpcqE.jpg")

    //frames for person animation
    frames = [];
    frames[0] = loadImage("http://i.imgur.com/svA3cqA.png");
    frames[1] = loadImage("http://i.imgur.com/jV3FsVQ.png");
    frames[2] = loadImage("http://i.imgur.com/IgQDmRK.png");
    frames[3] = loadImage("http://i.imgur.com/kmVGuo9.png");
    frames[4] = loadImage("http://i.imgur.com/jcMNeGq.png");
    frames[5] = loadImage("http://i.imgur.com/ttJGwkt.png");
    frames[6] = loadImage("http://i.imgur.com/9tL5TRr.png");
    frames[7] = loadImage("http://i.imgur.com/IYn7mIB.png");
}

function setup() {
    createCanvas(480, 480); 
    
    // create an initial collection of trees
    for (var i = 0; i < 10; i++){
        var rx = random(width);
        trees[i] = makeTree(rx);
    }
    frameRate(10);
}

function draw() {
    image(backgroundImage, 0, 0, width * 2, height);
    mountain();
    mountain2();

    //ground
    fill(210,218,255);
    rect(-1, height-50, width + 1 , 50)

    updateAndDisplayTrees();
    removeTrees();
    addNewTrees(); 

    //person on ground
    push();
    scale(.35, .35);
    image(frames[frameCount % 8], width * 2.75, height * 2.33); 
    pop();
}

//upper mountain
function mountain() {
    var speed = 0.0005;
    var terrain = 0.01;
    stroke(70,119,187);

    for (var x = 0; x < width; x += 1) {
        var t = (x * terrain) + (millis() * speed);
        var y = map(noise(t), 0, 1, 0 + 100, height / 2 + 100);
        line(x, y, x, height); 
    }

    //person on upper mountain (y still holds the last column height from the loop above, so the runner follows the terrain)
    push();
    scale(.10, .10);
    image(frames[frameCount % 8], width * 9.85, y * 10 - 100); 
    pop();
}

//lower mountain
function mountain2() {
    var speed = 0.0003;
    var terrain = 0.005;
    stroke(50,99,167);

    for (var x = 0; x < width; x += 1) {
        var t = (x * terrain) + (millis() * speed);
        var y = map(noise(t), 0, 1, height / 2 + 150, height / 4 + 150);
        line(x, y, x, height); 
    }

    //person on lower mountain (again using the last y value from the loop above)
    push();
    scale(.25, .25);
    image(frames[frameCount % 8], width * 3.9, y * 4 - 110); 
    pop();
}

function updateAndDisplayTrees(){
    // Update the tree's positions, and display them.
    for (var i = 0; i < trees.length; i++){
        trees[i].move();
        trees[i].display();
    }
}

function removeTrees(){
    // Copy all the trees we want to keep into a new array.
    var treesToKeep = [];
    for (var i = 0; i < trees.length; i++){
        if (trees[i].x + trees[i].treeWidth > 0) {
            treesToKeep.push(trees[i]);
        }
    }
    trees = treesToKeep; // remember the surviving trees
}

function addNewTrees() {
    // With a very tiny probability, add a new tree to the end.
    var newTreeLikelihood = 0.05; 
    if (random(0,1) < newTreeLikelihood) {
        trees.push(makeTree(width));
    }
}

// method to update position of tree every frame
function treeMove() {
    this.x += this.speed;
}
    
// draw the tree
function treeDisplay() {

    //tree leaves
    fill(22,138,130); 
    noStroke(); 
    push();
    translate(this.x, height - 60);
    triangle(0, -this.treeHeight, 0 - this.treeWidth / 2, 0, 0 + this.treeWidth / 2, 0)
    pop();

    //tree trunk
    fill(40,59,107);
    push();
    translate(this.x, height - 60);
    rect(- 2.5, 0, 5, 10);
    pop();
}

function makeTree(birthLocationX) {
    var tr = {x: birthLocationX,
                treeWidth: random(20,30),
                speed: -5.0,
                treeHeight: random(30,60),
                move: treeMove,
                display: treeDisplay}
    return tr;
}

For this project I spent some time playing with different colors and mountain heights to get a look that I liked.  I couldn't get the sky gradient to look the way I wanted with code, so I used an image for it instead. The trees have random heights and appear at random intervals.  The hardest part was getting the racing people on the right to run along the mountains rather than at a constant y-value. I scaled the people down to give a sense of depth, but it was difficult to control their movements once they were scaled down.  Overall, I am pretty happy with this project.

Sketch of concept

Taisei Manheim – Looking Outwards – 11

Sculpture created from Digital Simulation.

The project that I chose was Liquid Sound Collision by Eva Schindling.  Schindling creates hardware and software in the interdisciplinary zone between art, science, technology, and design.  She received an MSc in Art and Technology from Chalmers University in Sweden and a degree in Interaction and Media Design from FH Joanneum in Austria.  Her work ranges from embodied evolutionary robotics and algorithmic pattern formation to the visualization of otherwise invisible sound waves, and has been shown at the Japan Media Arts Festival, Hong Kong's Museum of Art, Moscow's Biennale of Contemporary Art, Burning Man, the Pixxelpoint festival in Slovenia, and Nuit Blanche Toronto.

The project is a study of the interactions that occur when recorded voices engage with computer-simulated fluids.  Each individual study takes two words considered opposites, such as chaos and order or body and mind, and sends them as vibration sources through opposite ends of a fluid simulation.  The sounds create waves that collide, and the moment of collision is translated into a digital 3D model that is then 3D printed as a sculpture. I find it interesting how she uses such a scientific technology to create these fluid sculptures.  The chosen words are opposites, and there is something poetic about capturing the intersection of those opposites.

Taisei Manheim – Project 09 – Computational Portrait

Press 'u' to increase the grid size and 'd' to decrease it!

(If it doesn’t work make sure your caps lock is off and click on the image first before typing)

sketch

//Taisei Manheim
//Section C
//tmanheim@andrew.cmu.edu
//Assignment-09

var underlyingImage;
//starting grid size
var grid = 20;

function preload() {
    var myImageURL = "https://i.imgur.com/9Jjikbd.jpg";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(360, 480);
    background(255);
    underlyingImage.loadPixels();
}

function draw() {
    //creates grid of circles
    noStroke();
    for (var x = 0; x < underlyingImage.width; x += grid) {
        for (var y = 0; y < underlyingImage.height; y += grid) {
            var theColorAtLocationXY = underlyingImage.get(x, y)
            fill(theColorAtLocationXY);
            ellipse(x, y, grid, grid);
        }
    }
}

function keyTyped() {
    //if grid reaches 5 it wont get smaller (or else it would crash)
    if (grid < 5) {
        grid = 5;
    }
    //if grid reaches 100 it wont get bigger
    if (grid > 100) {
        grid = 100;
    }
    //if you hit u key size of grid would increase
    if (key === 'u') {
        clear(); //need this or else old grids would show underneath
        grid += 1
    }
    //if you hit d key size of grid would decrease
    if (key === 'd') {
        clear(); //need this or else old grids would show underneath
        grid -= 1
    }
}

For my portrait I used a picture of myself.  When I first started this project I experimented with the number of circles in the grid to make the portrait more abstract or more realistic.  I then thought it would be interesting to let people viewing the portrait control the resolution themselves.  I first tried tying the grid size to the mouse location, but that was too much for the program to process that quickly, so instead the 'u' and 'd' keys move the grid size up and down (a sketch of the abandoned mouse-tied variant appears after the screenshots below).

Starting position (grid size at 20). Medium level abstractness.
Grid size at 5 (lowest limit). Higher level of realism.
Grid size at 100 (highest limit). Higher level of abstraction.
Original picture.
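For reference, the mouse-tied variant I mentioned might have looked like the sketch below. This is a reconstruction under assumptions, not the code I ended up using; because the whole grid is resampled with get() on every frame, it redraws far more often than the key-controlled version, which is why it was too slow.

// Hypothetical reconstruction of the abandoned mouse-tied variant (not the final code).
// The grid size follows mouseX, so the whole portrait is resampled every frame.
var underlyingImage;

function preload() {
    underlyingImage = loadImage("https://i.imgur.com/9Jjikbd.jpg");
}

function setup() {
    createCanvas(360, 480);
    underlyingImage.loadPixels();
    noStroke();
}

function draw() {
    background(255);
    // map the horizontal mouse position to a grid size between 5 and 100
    var grid = floor(map(constrain(mouseX, 0, width), 0, width, 5, 100));
    for (var x = 0; x < underlyingImage.width; x += grid) {
        for (var y = 0; y < underlyingImage.height; y += grid) {
            fill(underlyingImage.get(x, y));
            ellipse(x, y, grid, grid);
        }
    }
}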

Taisei Manheim – Looking Outward – 09

A project that I thought was interesting while looking through different posts was Jai Sawkar's Looking Outwards post for week 7.  The project, Phantom Terrains, is by Stefanie Posavec.  In it she tries to make us aware of the invisible, silent wireless data signals that surround us by translating the characteristics of those signals into sound.  She also created a graphic depicting the wireless network landscape on a walk around the BBC Broadcasting House.

I agree with Jai that this makes for a very compelling graphic, and it is interesting how she took a phenomenon that is constantly occurring around us without our sensing it and recreated it so we can experience it with several of our senses.  Also, for our architecture studio we had a mapping project in which we had to map parts of an island that can't be seen, such as wind or the island's history, so it is interesting to see how different pitches, colors, and line thicknesses were used here to denote different aspects of the wireless signals.

link to project

Stefanie Posavec, Frank Swain and Daniel Jones: Phantom Terrains

Taisei Manheim – Looking Outward – 08


Refik Anadol is a media artist and director born in Istanbul who currently lives in Los Angeles. He is the co-founder and creative director of Antilop, a technology-driven creative design studio operating at the intersection of audio-visual performances, moving images, and interactive installations. He creates site-specific public art using a parametric data-sculpture approach, as well as live visual and audio performances within immersive installations. His work often builds a hybrid relationship between architecture and media arts through machine intelligence. As someone who studies architecture, I appreciate how he uses light as an architectural medium to create buildings with memories that can learn, dream, and remember.

My favorite project of his is WDCH Dreams, in which he projects visualizations onto the Walt Disney Concert Hall. To make the building "dream", he collected 45 terabytes of data from the LA Phil's archives, then applied machine intelligence to the digital archives to create a computerized mind that processes these memories into new combinations of images and ideas. I admire the amount of preparation that went into the project, from remodeling every detail of the concert hall from old CATIA models to processing all of the data. In addition, the Disney Concert Hall is a building I admire, and it is fascinating to see a new layer of meaning placed on top of it.

Eyeo 2019 – Refik Anadol

Taisei Manheim – Project 07 – Composition with Curves


sketch

For this project, I was intimidated at first because I was not sure how some of the mathematical equations for the curves worked, since I haven't taken a math class in a while.  However, I realized that just plugging different equations into the for loop created interesting results that I was not expecting.  In the end I chose a hypotrochoid and a ranunculoid.  The combination of the overlapping geometries, along with the colors changing according to the mouse location, gives it a bit of a psychedelic vibe.

//Taisei Manheim
//Section C
//tmanheim@andrew.cmu.edu
//Assignment-07


function setup() {
    createCanvas(480, 480);
}

function draw() {
    //background color determined by mouse
    background(mouseX, mouseY, mouseX - mouseY);
    drawHypotrochoidCurve();
    drawRanunculoidCurve();
}

function drawHypotrochoidCurve() {
    //http://mathworld.wolfram.com/Hypotrochoid.html

    //mouse only affects image while on canvas
    var x = constrain(mouseX, 0, width); 
    var y = constrain(mouseY, 0, height);

    //map mouse position to curve parameters so the figure changes as the mouse moves away from center
    var a = map(x, 0, width, 0, width / 64); 
    var b = map(y, 0, height, 0, height / 64);
    var h = width / 2;

    push();
    translate(width / 2, height / 2); // starts in the center of the canvas
    noFill();
    beginShape();
    for(var i = 0; i < 360; i += 1) {
        var x = (a - b) * cos(i) + h * cos((a - b) * i);
        var y = (a - b) * sin(i) - h * sin((a - b) * i);
        vertex(x, y);
    }
    endShape();
    pop();
}

function drawRanunculoidCurve() {
    //http://mathworld.wolfram.com/Ranunculoid.html
    
    //mouse only affects image while on canvas
    var x = constrain(mouseX, 0, width); 
    var y = constrain(mouseY, 0, height);

    //map mouse position to curve parameters so the figure changes with the mouse
    var a = map(x, 0, width, 0, width / 8); 
    var b = map(y, 0, height, 0, height / 8);

    push();
    translate(width / 2, height / 2); // starts in the center of the canvas
    rotate(mouseX/ mouseY);
    noFill();
    beginShape();
    for(var i = 0; i < 360; i += 1) {
        var x = a * (6 * cos(i) - cos(6 * i));
        var y = a * (6 * sin(i) - sin(6 * i));
        vertex(x, y);
    }
    endShape();
    pop();
}