Lee Chu – Looking Outwards 10

visuals from Porter Robinson’s Worlds tour

Porter Robinson, a renowned producer of electronic music, is also well known for the visuals at his live performances. Not only do the visuals need to be stunning, they also need to move with the music in a way that makes sense.

Porter aimed to bring beautiful, immersive sounds and visions to a genre of music too often associated with heavy bass drops. He sought to transport his audiences into hyper-real, neon-toned worlds of anime and MMORPGs. These videogame-esque worlds unfold before you, bringing back that childhood escapism into fantasy and fictional worlds.

Carly Sacco – Looking Outwards – 10

LINES by Anders Lind.

LINES is an interactive sound art exhibit by Anders Lind that lets users make music by moving their hands and feet over lines. The lines are paired with sensors and electronics to create the sounds of three different instruments: distance sensors connected to an Arduino board and a Mac mini read the position of a user's hand, and the software translates distance into sound. Lind created LINES to let people experience a new form of musical interaction.
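
Lind's own code isn't published in the post, so here is only a minimal sketch of the distance-to-pitch idea in p5.js (with the p5.sound library): mouseY stands in for a distance-sensor reading, and the sensor range (0-80 cm) and note range are assumptions of mine, not Lind's values.

var osc;

function setup() {
    createCanvas(400, 200);
    osc = new p5.Oscillator();
    osc.setType('sine');
    osc.amp(0.3);
}

function mousePressed() {
    osc.start(); //browsers require a user gesture before audio plays
}

function draw() {
    background(220);
    //treat mouseY as a distance reading in centimeters (0-80 cm)
    var distanceCm = map(mouseY, 0, height, 0, 80);
    //nearer hand -> higher pitch, rounded to whole semitones
    var midiNote = round(map(distanceCm, 0, 80, 84, 48));
    osc.freq(midiToFreq(midiNote));
}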

I admire that this project has a direct correlation between human interaction and the sound produced. People can work together or participate alone, and the results will always differ based on how they move their hands. I thought it was very interesting that Lind incorporated lines on the floor as well, creating even more possibilities for interaction.

Kimberlyn Cho – Looking Outwards 10

“Rhapsody in Grey” by Brian Foo (2015)

Brian Foo draws on his fascination with brain activity in “Rhapsody in Grey” by translating brainwave data into music. He specifically used the brainwaves of a female pediatric patient with epilepsy so that listeners can empathize with, or briefly experience, what may be going on during a seizure. I was intrigued by the distinctiveness of this project, as well as by his creative approach to a scientific topic like brainwaves. I admire Foo's use of his fluency in programming to portray a personal interest (brain activity) in his own creative way (music). I found this project inspiring for the endless possibilities of programming and computational art.

Foo uses different variables from EEG brainwave data to computationally generate the intensity and fluidity of the rhapsody. He used Python to extract an excerpt from the EEG data and calculate its average amplitude, frequency, and synchrony, then assigned instrumental and vocal samples according to those calculations. The sounds were synthesized into a rhapsody using ChucK, a music programming language. Lastly, he used Processing, a visual programming language, to generate the visual waves that play along with the music in the video above.

Foo's artistic sensibilities come through in the final work in the sound samples he chose, as well as in the connections he made to the EEG data. For example, he raised the pitch of the string instruments for higher frequencies and added louder sounds for higher amplitudes. The connections he draws between his calculations and the sound samples reflect his interest in the human subject, as well as his artistic priorities and decisions.
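
Foo worked in Python and ChucK, but the averaging-and-mapping idea he describes can be paraphrased in a short JavaScript (p5.js-style) sketch. The zero-crossing frequency estimate and every mapping constant below are my assumptions, not values from the project.

//samples: one excerpt of EEG readings; sampleRate in Hz
function describeWindow(samples, sampleRate) {
    var sumAbs = 0;
    var crossings = 0;
    for (var i = 0; i < samples.length; i++) {
        sumAbs += Math.abs(samples[i]); //for average amplitude
        //count sign changes to roughly estimate average frequency
        if (i > 0 && (samples[i - 1] < 0) != (samples[i] < 0)) {
            crossings += 1;
        }
    }
    var seconds = samples.length / sampleRate;
    return {amplitude: sumAbs / samples.length,
            frequency: (crossings / 2) / seconds};
}

//higher frequency -> higher string pitch; higher amplitude -> louder
function chooseSampleSettings(stats) {
    return {pitchShift: map(stats.frequency, 1, 30, -12, 12), //semitones
            gain: map(stats.amplitude, 0, 100, 0.2, 1.0)};
}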

Taisei Manheim – Looking Outwards – 10


The project I chose is called Amper Score, by Amper, a company of engineers, musicians, and composers who have worked on many movies and television shows. The idea for the project arose from the frustration many people in film and other industries feel when obtaining stock or precomposed music: finding the music, clearing the legal and financial hurdles, and the lack of customization. Amper's solution is an AI composer that writes and composes unique, royalty-free music. There are two workflows: one that lets you work quickly and another that allows deep customization. The simple workflow asks the user for the style of music, the mood, and the length of the song, and the AI generates a track in seconds, while the advanced workflow allows changes to tempo, instruments, and more. This lets businesses, films, and others create originally composed music that is unique to them.

Amper's AI is built from the ground up on a massive dataset created by an internal team of composer-developers. Rather than employing an approach that limits contextual awareness and Score's ability to collaborate with its users, Amper's data team describes every facet of music, including genre, mood, and instrumentation. What I admire is that anybody could use this: the luxury of original music for any business or film suddenly becomes accessible to those who wouldn't have had the money or resources before this tool.
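
Amper's actual interface isn't documented in this post, so the sketch below is purely hypothetical: every field name is invented, and it only illustrates how the simple workflow's three inputs differ from the advanced workflow's deeper controls.

//simple workflow: style, mood, and length are all the user supplies
var simpleRequest = {
    style: 'cinematic',
    mood: 'uplifting',
    lengthSeconds: 90
};

//advanced workflow: the same request plus deeper customization
var advancedRequest = Object.assign({}, simpleRequest, {
    tempoBpm: 110,
    instruments: ['strings', 'piano', 'percussion']
});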

Fanjie Jin – Looking Outwards – 10

Artificial intelligence researchers have made huge gains in computational creativity, and a number of artists have employed computational algorithms to produce albums in multiple genres, as well as scores for films, games, and smartphone apps.

Bach-style Prelude 29, Experiments in Musical Intelligence

David Cope, a professor at the University of California, Santa Cruz, has been exploring the intersection of algorithms and creativity for decades. He specializes in what he terms algorithmic composition, which is essentially computer-authored music. He writes sets of instructions that enable computers to automatically generate complete compositions, and his algorithms have produced classical music ranging from single-instrument arrangements all the way up to full orchestral scores; it is genuinely hard to believe the music was composed by a computer.

I really admire “Bach-style Prelude 29” from Experiments in Musical Intelligence, for which Cope let the computer study Bach’s compositional style. As you can hear, the melody the algorithm generates is an accurate representation of Bach’s style, and some parts of the AI-generated music contain unexpectedly beautiful melodies that are nonetheless based entirely on Bach’s techniques. Perhaps the biggest advantage of algorithmic composition is, as Cope says, that “algorithms that produce creative work have a significant benefit, then, in terms of time, energy, and money, as they reduce the wasted effort on failed ideas.”
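
Cope's Experiments in Musical Intelligence is far more sophisticated than anything that fits here, but the general flavor of style-learning can be hinted at with a toy Markov chain in JavaScript: record which note follows which in a corpus, then walk those transitions to produce a new melody. The tiny corpus in the usage comment is an invented stand-in, not Bach data.

//learn note-to-note transitions from a corpus of note names
function buildTransitions(corpus) {
    var transitions = {};
    for (var i = 0; i < corpus.length - 1; i++) {
        var from = corpus[i];
        if (!(from in transitions)) {
            transitions[from] = [];
        }
        transitions[from].push(corpus[i + 1]);
    }
    return transitions;
}

//walk the learned transitions to generate a new melody
function generateMelody(transitions, start, length) {
    var melody = [start];
    var current = start;
    for (var i = 1; i < length; i++) {
        var options = transitions[current];
        if (options === undefined) break;
        current = options[Math.floor(Math.random() * options.length)];
        melody.push(current);
    }
    return melody;
}

//e.g. generateMelody(buildTransitions(['C','E','G','E','C','G']), 'C', 8);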

Ammar Hassonjee – Looking Outwards 10

An image showing how Tune Table works.

The project related to computer music I chose to focus on is called Tune Table, produced by the researchers Anna Xambo and Brigid Drozda. Tune Table is a tabletop, game-like interface meant to teach users computer science topics by letting them program their own musical compositions. Users combine blocks of code that embody computer science elements such as loops to make unique songs; when each block is placed on the table, cameras beneath it read the marker on the block’s underside, and the system responds with auditory and visual feedback (a toy sketch of this block-expansion idea appears below). I like this project’s goal of using music to teach computer science because it’s a fun way to learn something very mathematics-based, and I think the creators’ original goal of linking computer science concepts to musical output was achieved. The link to the paper describing the work can be found here.

Video showing how Tune Table works.
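
The actual block encoding is described in the paper; the sketch below merely guesses at the idea in JavaScript, with an invented block format, to show how a loop block could expand into a flat list of notes.

//recursively expand note blocks and loop blocks into a note list
function expandBlocks(blocks) {
    var notes = [];
    for (var i = 0; i < blocks.length; i++) {
        var block = blocks[i];
        if (block.type === 'note') {
            notes.push(block.pitch);
        } else if (block.type === 'loop') {
            for (var r = 0; r < block.repeats; r++) {
                notes = notes.concat(expandBlocks(block.body));
            }
        }
    }
    return notes;
}

//e.g. a note followed by a two-repeat loop of two notes:
//expandBlocks([{type: 'note', pitch: 'C4'},
//              {type: 'loop', repeats: 2,
//               body: [{type: 'note', pitch: 'E4'},
//                      {type: 'note', pitch: 'G4'}]}]);
//returns ['C4', 'E4', 'G4', 'E4', 'G4']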

Timothy Liu – Looking Outwards – 10

An early photo of Junichi Masuda in his digital recording studio.

When I read the prompt for this week’s Looking Outwards, I immediately thought of video game music. I’ve always been a fan of video games, especially Nintendo franchises such as Pokémon and Mario, and their soundtracks have long been considered a gold standard of game music. One of the most prominent composers in video game history is Junichi Masuda, the mastermind behind most of the soundtracks in the Pokémon series. His works range from techno-like in nature to beautifully symphonic in his newer games. But the commonality among all the works he’s composed is that each was computationally created.

I first listened to some of Masuda’s soundtracks from his earlier games, like Pokémon Red and Blue (1998). I loved the techno-funk feeling conveyed by the music, and after reading more about Masuda’s process, I learned that this was partly a byproduct of the technical limitations of that era, but also due to Masuda’s self-proclaimed affinity for techno music at the time. Pokémon Red and Blue were developed on UNIX workstations, Sun SPARCstation 1 machines, on which program files were susceptible to crashing. These were real constraints that likely limited the quality of the games’ sound files and sound effects.

The soundtrack from Pokémon Red and Blue (1998).

Next, for the sake of comparison, I listened to music from Pokémon Black and White, games from 2012. I was blown away by the difference; the soundtracks from the newer games were not only crisper, smoother, and rendered more cleanly, but they legitimately sounded like orchestral movements. It was incredible to me how much Masuda’s work evolved, and after reading more about his inspirations, I learned that he was a big fan of the classical composers Igor Stravinsky and Dmitri Shostakovich. This was evident in the elegance of his compositions, and it blew my mind to learn that he programmed these tunes just like he did the techno-style music of 1998. It’s a testament to Masuda’s talent and understanding of the interplay between technology, computation, and music.

The soundtrack from Pokémon Black and White (2012).

Sources:

https://www.polygon.com/interviews/2018/9/27/17909916/pokemon-red-blue-junichi-masuda-interview

https://en.wikipedia.org/wiki/Junichi_Masuda

Katrina Hu – Looking Outwards – 10

The Computer Orchestra

A demonstration of the Computer Orchestra

The Computer Orchestra is an interactive installation consisting of multiple computers, created by Laura Perrenoud, Simon De Diesbach, and Jonas LaCôte in 2013. Its setup closely resembles that of a classical orchestra. The orchestra lets the user conduct with movements of their hands, which are recognized by a Kinect motion controller connected to a central computer. The central computer then sends instructions to a multitude of screens, and the screens send back both sounds and visual representations of those sounds. Entire music sets have now been created with the Computer Orchestra.

I admire how this project keeps many of the qualities of a classical orchestra. The “conductor’s” movements are like those of a real conductor, and the arrangement of the screens resembles that of a real orchestra. There is not much information about the algorithms that generated the work, but the software used includes SimpleOpenNI and Ableton Live.
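
Since the mapping itself isn't documented, here is only a guess, in p5.js-style JavaScript, at what the conducting logic might look like: a tracked hand position scaled into volume and tempo instructions for the musician screens. The coordinate ranges assume a Kinect's 640x480 image; everything else is invented.

//hand: {x, y} position from a Kinect-style skeleton tracker
function conduct(hand) {
    return {
        volume: map(hand.y, 0, 480, 1.0, 0.0),  //raise hand -> louder
        tempoBpm: map(hand.x, 0, 640, 60, 140)  //sweep right -> faster
    };
}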

Looking Outwards 10 – Joanne Chui

Performed by the Stanford Laptop Orchestra, Composed by GE Wang
https://ccrma.stanford.edu/~ge/

GE Wang is a professor at Stanford’s Center for Computer Research in Music and Acoustics. His research focuses on interactive design in music in combination with programming languages. He co-founded Smule and created Magic Piano for the iPhone. I was interested in this piece, Twilight, because it is both a musical performance and an art performance, and it integrates exactly what GE Wang focuses on: interactive design in music. Using their laptops, the orchestra is able to translate the performers’ body movements into sound and pitch, which creates a visual performance as well. Interestingly, the way they used the laptops seemed biased toward gradual changes in sound and slow build-ups of music. The algorithm used to create the music and select pitches appears to depend on the length of the string attached to each performer’s wrist.

Taisei Manheim – Project 11 – Generative Landscape

sketch

//Taisei Manheim
//Section C
//tmanheim@andrew.cmu.edu
//Assignment-10

var trees = [];
var frames;
var backgroundImage; //sky gradient image, loaded in preload()

function preload() {
    //background gradient
    backgroundImage = loadImage("https://i.imgur.com/L0VpcqE.jpg");

    //frames for person animation
    frames = [];
    frames[0] = loadImage("http://i.imgur.com/svA3cqA.png");
    frames[1] = loadImage("http://i.imgur.com/jV3FsVQ.png");
    frames[2] = loadImage("http://i.imgur.com/IgQDmRK.png");
    frames[3] = loadImage("http://i.imgur.com/kmVGuo9.png");
    frames[4] = loadImage("http://i.imgur.com/jcMNeGq.png");
    frames[5] = loadImage("http://i.imgur.com/ttJGwkt.png");
    frames[6] = loadImage("http://i.imgur.com/9tL5TRr.png");
    frames[7] = loadImage("http://i.imgur.com/IYn7mIB.png");
}

function setup() {
    createCanvas(480, 480); 
    
    // create an initial collection of trees
    for (var i = 0; i < 10; i++){
        var rx = random(width);
        trees[i] = makeTree(rx);
    }
    frameRate(10);
}

function draw() {
    image(backgroundImage, 0, 0, width * 2, height);
    mountain();
    mountain2();

    //ground
    fill(210,218,255);
    rect(-1, height - 50, width + 1, 50);

    updateAndDisplayTrees();
    removeTrees();
    addNewTrees(); 

    //person on ground
    push();
    scale(.35, .35);
    image(frames[frameCount % 8], width * 2.75, height * 2.33); 
    pop();
}

//upper mountain
function mountain() {
    var speed = 0.0005;
    var terrain = 0.01;
    stroke(70,119,187);

    for (var x = 0; x < width; x += 1) {
        var t = (x * terrain) + (millis() * speed);
        var y = map(noise(t), 0, 1, 0 + 100, height / 2 + 100);
        line(x, y, x, height); 
    }

    //person on mountain
    //(y is declared with var, so after the loop it still holds the
    //terrain height of the rightmost column, keeping the runner on
    //the mountain's surface)
    push();
    scale(.10, .10);
    image(frames[frameCount % 8], width * 9.85, y * 10 - 100);
    pop();
}

//lower mountain
function mountain2() {
    var speed = 0.0003;
    var terrain = 0.005;
    stroke(50,99,167);

    for (var x = 0; x < width; x += 1) {
        var t = (x * terrain) + (millis() * speed);
        var y = map(noise(t), 0, 1, height / 2 + 150, height / 4 + 150);
        line(x, y, x, height); 
    }

    //person on lower mountain (same var-scoped y trick as above)
    push();
    scale(.25, .25);
    image(frames[frameCount % 8], width * 3.9, y * 4 - 110);
    pop();
}

function updateAndDisplayTrees(){
    // Update the tree's positions, and display them.
    for (var i = 0; i < trees.length; i++){
        trees[i].move();
        trees[i].display();
    }
}

function removeTrees(){
    // Copy all the trees we want to keep into a new array.
    var treesToKeep = [];
    for (var i = 0; i < trees.length; i++){
        if (trees[i].x + trees[i].treeWidth > 0) {
            treesToKeep.push(trees[i]);
        }
    }
    trees = treesToKeep; // remember the surviving trees
}

function addNewTrees() {
    // With a very tiny probability, add a new tree to the end.
    var newTreeLikelihood = 0.05; 
    if (random(0,1) < newTreeLikelihood) {
        trees.push(makeTree(width));
    }
}

// method to update position of tree every frame
function treeMove() {
    this.x += this.speed;
}
    
// draw the tree
function treeDisplay() {

    //tree leaves
    fill(22,138,130); 
    noStroke(); 
    push();
    translate(this.x, height - 60);
    triangle(0, -this.treeHeight, 0 - this.treeWidth / 2, 0, 0 + this.treeWidth / 2, 0);
    pop();

    //tree trunk
    fill(40,59,107);
    push();
    translate(this.x, height - 60);
    rect(- 2.5, 0, 5, 10);
    pop();
}

function makeTree(birthLocationX) {
    var tr = {x: birthLocationX,
                treeWidth: random(20,30),
                speed: -5.0,
                treeHeight: random(30,60),
                move: treeMove,
                display: treeDisplay}
    return tr;
}

For this project I spent some time playing with different colors and mountain heights to get a look that I liked. I couldn’t get the sky gradient to look right when I drew it in code, so I used an image for the gradient instead. The trees have random heights and appear at random intervals. The hardest part was getting the racing people on the right to run along the mountains rather than at a constant y-value. I had the people decrease in size to give a sense of depth, but it was difficult to control their movements once they were scaled down. Overall, I am pretty happy with this project.

Sketch of concept