Chelsea Fan-Project 10-Sonic-Sketch

SonicSketch

/* Chelsea Fan
Section 1B
chelseaf@andrew.cmu.edu
Project-10
*/
//important variables
var myWind;
var myOcean;
var myBirds;
var mySand;
var currentImage;

function preload() {
    //load ocean image 
    var myImage = "https://i.imgur.com/cvlqecN.png"
    currentImage = loadImage(myImage);
    currentImage.loadPixels();
    //loading sounds
    //sound of wind
    myWind = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/winds.wav");
    myWind.setVolume(0.1);
    //sound of ocean
    myOcean = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/oceans.wav");
    myOcean.setVolume(0.1);
    //sound of birds
    myBirds = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/birds.wav");
    myBirds.setVolume(0.1);
    //sound of sand
    mySand = loadSound("https://courses.ideate.cmu.edu/15-104/f2019/wp-content/uploads/2019/10/sand.wav");
    mySand.setVolume(0.1);
}

function soundSetup() { // setup for audio generation
}

function setup() {
    createCanvas(480, 480);
}

function sandDraw() {
    noStroke();
    //sand background color
    fill(255, 204, 153);
    rect(0, height-height/4, width, height/4);
    //sand movement
    for (var i = 0; i < 1000; i++) {
        var sandX = random(0, width);
        var sandY = random(height-height/4, height);
        fill(255, 191, 128);
        ellipse(sandX, sandY, 5, 5);
    }
}

var x = 0;
var cloudmove = 1;

function skyDraw() {
    noStroke();
    //sky color
    fill(179, 236, 255);
    rect(0, 0, width, height/2);
    //cloud color
    fill(255);
    //cloud move
    x = x + cloudmove;
    if(x>=width+100){
        x = 0;
    }
    //cloud parts and drawing multiple clouds in sky section 
    for (var i = 0; i <= 4; i++) {
        push();
        translate(-200*i, 0);
        ellipse(x + 10, height / 6, 50, 50);
        ellipse(x + 50, height / 6 + 5, 50, 50);
        ellipse(x + 90, height / 6, 50, 40);
        ellipse(x + 30, height / 6 - 20, 40, 40);
        ellipse(x + 70, height / 6 - 20, 40, 35);
        pop();
    }
}
function birdDraw() {
    noFill();
    stroke(0);
    strokeWeight(3);
    //bird positions (not randomized because I chose
    //the coordinates for aesthetic reasons)
    var offsets = [[0, 0], [-110, 0], [-100, 80], [-30, 40], [70, 50],
                   [100, 100], [150, 25], [200, 75], [250, 13]];
    //each bird is a pair of curves translated to its position
    for (var i = 0; i < offsets.length; i++) {
        push();
        translate(offsets[i][0], offsets[i][1]);
        curve(100, 150, 120, 120, 140, 120, 160, 140);
        curve(120, 140, 140, 120, 160, 120, 180, 150);
        pop();
    }
}
function draw() {
    //draw sand 
    sandDraw();
    //draw ocean
    image(currentImage, 0, height/2);
    //draw sky
    skyDraw();
    //draw birds
    birdDraw();
    //sounds are triggered in the mousePressed() handler below
}
function mousePressed() {
    //if mouse is in section of canvas where clouds are
    if (mouseY >= 0 && mouseY <= height / 4) {
        //sound of wind
        myWind.play();
    }
    //if mouse is in section of canvas where birds are
    if (mouseY > height / 4 && mouseY <= height / 2) {
        //sound of birds
        myBirds.play();
    }
    //if mouse is in section of canvas where ocean is
    if (mouseY > height / 2 && mouseY <= 3 * height / 4) {
        //sound of waves
        myOcean.play();
    }
    //if mouse is in section of canvas where sand is
    if (mouseY > 3 * height / 4 && mouseY <= height) {
        //sound of sand
        mySand.play();
    }
}

My code has four different sounds (sounds of wind, birds, waves, and sand). Each is enabled by clicking on the respective quarter of the canvas. For example, the wind sound is enabled by clicking the top layer where the clouds are located.

Getting the sounds to work took me a very long time, but the idea of an ocean landscape with different sounds came to me quickly.

Lanna Lang – Looking Outwards – 10

Google Magenta // NSynth and NSynth Super // 2018

The goal Google Magenta had with NSynth and NSynth Super was to build a machine learning tool that gave musicians new ways to express themselves. NSynth (Neural Synthesizer) is a new way to approach audio synthesis using neural networks that creates the sound of the actual instrument that is being played instead of the note that’s being played. Magenta wanted the algorithm to be more accessible to musicians, so they created interfaces such as the Sound Maker and the Ableton Live plugin, and Magenta encourages creative use with the algorithm, from dubstep to scenic atmospherics. NSynth is Google’s neural network, but NSynth Super is the tool/musical instrument that brings NSynth to life.

What I love about this piece are the infinite possibilities it brings to artists and to anyone, anywhere. In the video, they show how, using NSynth and NSynth Super, you can combine a flute and a snare to create a whole new instrument (i.e., a “Fnure”). NSynth Super isn’t just layering sounds on top of each other; instead, it synthesizes an entirely new sound based on the acoustics of the individual instruments. This technology isn’t making the work of a musician easier, but it is enhancing it and opening up more possibilities and artistic directions. Although NSynth Super isn’t available for purchase, Google has provided instructions for artists to build one from scratch using a Raspberry Pi and explore it themselves.
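As a minimal sketch of that idea (my own illustration, not Magenta’s actual code; the embedding numbers are made up), NSynth-style blending interpolates between learned embeddings of two sounds before a decoder turns the result back into audio:

//conceptual sketch only, not Magenta's API: NSynth blends sounds by
//interpolating between learned embedding vectors, then decoding
function interpolate(z1, z2, t) {
    var z = [];
    for (var i = 0; i < z1.length; i++) {
        z.push((1 - t) * z1[i] + t * z2[i]); //linear mix of the two sounds
    }
    return z;
}

var zFlute = [0.2, -0.5, 1.1]; //made-up stand-ins for real embeddings
var zSnare = [0.9, 0.3, -0.4];
var zBlend = interpolate(zFlute, zSnare, 0.5); //halfway between the two
//a trained decoder network would then turn zBlend back into audio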

The background behind creating NSynth and NSynth Super
An example of how someone can make music using NSynth Super

Crystal-Xue-Project-09

sketch-231.js

//Crystal Xue
//15104-section B
//luyaox@andrew.cmu.edu
//Project-09

var underlyingImage;
var xarray = [];
var yarray = [];

function preload() {
    var myImageURL = "https://i.imgur.com/Z0zPb5S.jpg?2";
    underlyingImage = loadImage(myImageURL);
}

function setup() {
    createCanvas(500, 500);
    background(0);
    underlyingImage.loadPixels();
    frameRate(20);
}

function draw() {
    var px = random(width);
    var py = random(height);
    var ix = constrain(floor(px), 0, width-1);
    var iy = constrain(floor(py), 0, height-1);
    var theColorAtLocationXY = underlyingImage.get(ix, iy);

    stroke(theColorAtLocationXY);
    strokeWeight(random(1,5));
    var size1 = random(5,15);
    //brush strokes in the bottom-left to top-right diagonal direction
    line(px, py, px - size1, py + size1);

    var theColorAtTheMouse = underlyingImage.get(mouseX, mouseY);
    var size2 = random(1,8);
    //keep only the most recent mouse positions in the trail
    while (xarray.length > 10) {
        xarray.shift();
        yarray.shift();
    }
    for (var i = 0; i < xarray.length; i++) {
        stroke(theColorAtTheMouse);
        strokeWeight(random(1,5));
        //an array of brush strokes in the top-left to bottom-right
        //diagonal direction, following the mouse
        line(xarray[i], yarray[i], xarray[i] - size2, yarray[i] - size2);
        size2 = size2 + 1;
    }
}

function mouseMoved(){
    xarray.push(mouseX);
    yarray.push(mouseY);
}

phase-1
phase-2
phase-3
original picture

This is a weaving portrait of my friend Fallon. The color pixels are concentrated where strokes from the two directions cross.

Fanjie Jin-LookingOutwards-10

Artificial intelligence researchers have made huge gains in computational creativity, and a number of artists have employed computational algorithms to produce albums in multiple genres, as well as scores for films, games, and smartphone apps.

Bach-style Prelude 29, Experiments in Musical Intelligence

David Cope, a professor at the University of California, Santa Cruz, has been exploring the intersection of algorithms and creativity for decades. He specializes in what he terms algorithmic composition, which is essentially computer-authored music production: he writes sets of instructions that enable computers to automatically generate complete compositions. His algorithms have produced classical music ranging from single-instrument arrangements all the way up to full orchestral pieces, and it is hard to believe the music was composed by a computer.
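Cope’s Experiments in Musical Intelligence is far more elaborate, but a toy Markov-chain generator (my own sketch, not Cope’s method) gives the flavor of composing music from a written set of instructions:

//toy illustration of algorithmic composition using a simple Markov chain:
//each note picks its successor from a hand-written transition table
var transitions = {
    "C": ["E", "G", "C"],
    "E": ["G", "C"],
    "G": ["C", "E", "G"]
};

function composeMelody(startNote, length) {
    var melody = [startNote];
    for (var i = 1; i < length; i++) {
        //choose the next note from the current note's options
        var options = transitions[melody[i - 1]];
        melody.push(options[Math.floor(Math.random() * options.length)]);
    }
    return melody;
}

console.log(composeMelody("C", 8)); //e.g. ["C","G","C","E","G","G","C","E"]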

I really admire the project “Bach-style Prelude 29” from Experiments in Musical Intelligence, for which Cope had the computer study Bach’s compositional style. As you can hear, the melody the algorithm generates is an accurate representation of Bach’s style, and parts of the AI-generated music contain unexpectedly beautiful melodies based entirely on Bach’s composition techniques. Perhaps the biggest advantage of algorithmic music composition, as Cope put it: “Algorithms that produce creative work have a significant benefit, then, in terms of time, energy, and money, as they reduce the wasted effort on failed ideas.”

Siwei Xie – Looking Outwards – 10

Microscale, by Ales Tsurko, is a generative, web-based album. I admire it because, although Tsurko had written generative/algorithmic music before and almost all of his previous work contains procedurally generated material, microscale is his first fully generative album, created from a “generative” idea. The creator’s artistic sensibilities manifest in that the album was created not so much by thinking as by emotion, so it is not purely artificial intelligence or computer music.

The music on microscale is generated in real-time from random Wikipedia articles. Each article becomes a step sequencer, where the letters are the sequencer steps and the track titles are regular expressions that switch the steps of the sequencers on and off.
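A minimal sketch of that mechanism (my own illustration with a made-up article snippet and a made-up regex, not Tsurko’s code) could mark a step “on” whenever the track title’s regular expression matches the letter at that position:

//hypothetical illustration of microscale's idea: the article text is a
//step sequencer whose steps are letters; a regex "track title" switches
//individual steps on and off
var articleText = "generative music from wikipedia";
var trackTitle = /[aeiou]/; //made-up regular expression as the track title

var steps = [];
for (var i = 0; i < articleText.length; i++) {
    //the step is "on" when the regex matches this letter
    steps.push(trackTitle.test(articleText.charAt(i)));
}
//steps now holds an on/off pattern that a sequencer could walk through
//in time, triggering a sound on every "on" step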

The concept of the album is to show that by transforming one medium (text) into another (music), the meaning is transformed as well: the article has its own meaning, but the music has a completely different one. And it is not just a one-to-one transformation; there are six articles (i.e., six meanings) which, although unrelated to each other, create a whole piece of music with one singular meaning.

Ales Tsurko, Microscale, 2017

Link to original source.

Emma NM-LO-10

Sonic Playground in Atlanta

Sonic Playground (2018) – Yuri Suzuki Design

Sonic Playground was an outdoor sound installation in Atlanta, Georgia featuring colorful sculptures that modify and transmit sound in unusual but playful ways. I admire how the installation engages the community in an art experience and gives people the opportunity to explore how sound is constructed, altered, and experienced. I like that it is for all people, regardless of age; anyone can enjoy it. The installation itself is not computational, but the designers used Rhinoceros 3D to create a raytracing tool that lets the user choose certain aspects of the sound’s path. Users could “select a sound source and send sound in a certain direction or towards a certain geometry, in this case the shape of the acoustic mirrors or the bells at the start and end of the pipes to see how the sound is reflected and what is the interaction with the object.”
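At the heart of any such acoustic raytracer is the mirror-reflection rule applied at each surface; a minimal sketch of that rule (my own illustration, not the Rhinoceros 3D tool itself) looks like this:

//reflection rule r = d - 2(d.n)n, where d is the incoming ray direction
//and n is the surface's unit normal
function reflectRay(dx, dy, nx, ny) {
    var dot = dx * nx + dy * ny;
    return {x: dx - 2 * dot * nx, y: dy - 2 * dot * ny};
}

//example: a ray heading down-right hits a floor mirror whose normal
//points up, and bounces up-right
var r = reflectRay(1, 1, 0, -1); //gives {x: 1, y: -1}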

The artist’s creativity comes out in the paths and shapes he chose for the final sculptures, which in turn shape the sound that comes out. He decided which sounds were more interesting and the path each takes to make that sound.

Sonic Playground Installation
Raytracing using Rhinoceros 3D

Ammar Hassonjee – Looking Outwards 10

An image showing how Tune Table works.

The project related to computer music I chose to focus on is Tune Table, produced by the researchers Anna Xambo and Brigid Drozda. Tune Table is a tabletop, game-like interface meant to teach users computer science topics by letting them program their own musical compositions. Using blocks of code that employ computer science elements like loops, users combine the blocks to make unique songs; when a block is placed on the table, cameras under the table read the imprint on its underside and output auditory and visual feedback. I like this project’s goal of using music to teach computer science because it is a fun way to learn something so rooted in mathematics. I think the creators’ original goal of finding a link between computer science and musical outputs was achieved. The link to the paper describing the work can be found here.
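To give a rough sense of the kind of construct those blocks teach (a made-up example, not Tune Table’s actual block language), a “loop” block might repeat a short phrase to build a song:

//made-up example of a loop-based composition block
var phrase = ["C4", "E4", "G4"]; //three "note" blocks
var song = [];
for (var rep = 0; rep < 4; rep++) { //a "loop" block: repeat 4 times
    for (var i = 0; i < phrase.length; i++) {
        song.push(phrase[i]);
    }
}
//song now holds the phrase repeated four times, ready for playback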

Video showing how Tune Table works.

Paul Greenway – Looking Outwards – 10

NSynth by Google Magenta

NSynth Super is part of Magenta, an ongoing Google project exploring how machine learning can become a new tool for musicians. NSynth, or neural synthesizer, uses algorithms to learn the characteristics of existing sounds and then creates new sounds based on those inputs. The results are completely original sounds that may be produced by a combination of more than one instrument. NSynth not only generates these unique sounds but also gives artists control over their dynamics through a custom interface and well-designed hardware. All of the code for the project is open source, as the project, like all other Magenta projects, is meant to be freely accessible to anyone.

What I found most interesting about this project was its potential for generating brand-new sounds unrestricted by existing tools or instruments. In addition, the ease of use and accessibility of the project, in both its hardware and its software, make it a great project that anyone interested could try out.

NSynth sound input to output flow

Timothy Liu — Looking Outwards — 10

An early photo of Junichi Masuda in his digital recording studio.

When I read the prompt for this week’s Looking Outwards, I immediately thought of video game music. I’ve always been a fan of video games, especially Nintendo franchises such as Pokémon, Mario, and more, and their soundtracks have long been considered the gold standard of technical music. One of the most prominent composers in video game history is Junichi Masuda, the mastermind behind most of the soundtracks in the Pokémon series. His works have ranged from techno-like in nature to beautifully symphonic in his newer games. But the commonality among all the works he’s composed is that each was computationally created.

I first listened to some of Masuda’s soundtracks from his earlier games, like Pokémon Red and Blue (1998). I loved the techno-funk feeling conveyed by the music, and after reading up on Masuda’s process, I learned that this was partly a byproduct of the technical limitations of that era, but also due to Masuda’s self-proclaimed affinity for techno music at the time. Pokémon Red and Blue were developed on UNIX workstations called Sun SPARCstation 1s, a setup that made program files susceptible to crashing. These were clear programming limitations that likely constrained the quality of the sound files and sound effects.

The soundtrack from Pokémon Red and Blue (1998).

Next, for the sake of comparison, I listened to music from Pokémon Black and White, games from 2012. I was blown away by the difference; the soundtracks from the newer games were not only crisper, smoother, and rendered more cleanly, but they legitimately sounded like orchestral movements. It was incredible to me how much Masuda’s work evolved, and after reading more about his inspirations, I learned that he was a big fan of the classical composers Igor Stravinsky and Dmitri Shostakovich. This was evident in the elegance of his compositions, and it blew my mind to learn that he programmed these tunes just like he did the techno-style music of 1998. It’s a testament to Masuda’s talent and understanding of the interplay between technology, computation, and music.

The soundtrack from Pokémon Black and White (2012).

Sources:

https://www.polygon.com/interviews/2018/9/27/17909916/pokemon-red-blue-junichi-masuda-interview

https://en.wikipedia.org/wiki/Junichi_Masuda

Katrina Hu – Looking Outwards – 10

The Computer Orchestra

A demonstration of the Computer Orchestra

The Computer Orchestra is an interactive installation consisting of multiple computers, created by Laura Perrenoud, Simon De Diesbach, and Jonas LaCôte in 2013. Its setup closely resembles that of a classical orchestra. The orchestra allows the user to conduct with their hand movements, which are recognized by a Kinect motion controller connected to a central computer. The central computer then gives instructions to a multitude of screens, and the screens send back both sounds and visual representations of those sounds. Entire music sets have now been created with the Computer Orchestra.
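As a hypothetical sketch of the conducting idea (my own illustration, not the installation’s actual software), a tracked hand position like the one a Kinect reports could be mapped to volume and tempo instructions for each screen:

//map a tracked hand position to conducting instructions (made-up mapping)
function conduct(handX, handY, frameW, frameH) {
    //raising the hand makes it louder; moving right makes it faster
    var volume = 1 - handY / frameH; //0 (bottom) to 1 (top)
    var tempo = 60 + 120 * (handX / frameW); //60 to 180 BPM
    return {volume: volume, tempo: tempo};
}

//example: hand at mid-height, three quarters across a 640x480 frame
var msg = conduct(480, 240, 640, 480); //gives {volume: 0.5, tempo: 150}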

I admire how this project keeps many of the qualities of a classical orchestra: the “conductor’s” movements are like those of a real conductor, and the arrangement of the screens resembles a real orchestra. There is not much information about the algorithms behind the work, but the software used includes SimpleOpenNI and Ableton Live.