Looking Outwards 11 Lydia Jin

Computational Design of Metallophone Contact Sounds

I found this interesting computational design of sound online at this website. The project uses 3D printing to fabricate precisely specified shapes that make sound. The system has three parts: the mallet, the fabricated shape, and an optimized stand. The fabricated pieces take animal shapes whose geometry is very precisely calculated so that each one produces the best possible sound. The work combines engineering and acoustic design skills. The project was created in 2013, and from this work we can assume that the artist pays heavy attention to detail. I really like this project because it combines music and art in the most creative ways, and I admire how precise the pieces are, especially since they are made using innovative technology such as 3D printing.

Kyle Lee Project 11

For my project, I used turtle graphics to create a spiral. With each iteration, I modified the segment length and starting rotation to create a smooth spiral. I also made the color change as the turtle draws, which further emphasizes the spiral: energetic red at the outer points and subdued blue toward the back.


kdlee-project-11

var iterations = 100;//number of forloop iterations
var rAngle = 90;//turn angle

function setup() {
    createCanvas(600, 600);
    background(0);
    var kyle = makeTurtle(300, 200);
    for(var i = 0; i < iterations; i++){
        var len = 100 - i;
        var r = map(i, 0, iterations, 0, 255);
        var g = 50;
        var b = map(i, 0, iterations, 150, 0);
        kyle.setColor(color(r, g, b));

        kyle.penDown();//drawing
        kyle.forward(len);
        kyle.right(rAngle);
        kyle.forward(len);
        kyle.right(rAngle);
        kyle.forward(len);
        kyle.right(rAngle);
        kyle.forward(len);
        kyle.right(rAngle);

        kyle.penUp();//adjusting for next draw
        kyle.forward(100);
        kyle.right(rAngle);
        kyle.forward(100);
        kyle.right(1);
    }
}

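// Turtle-graphics helper functions and constructor, condensed below: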
function turtleLeft(d){this.angle-=d;}function turtleRight(d){this.angle+=d;}
function turtleForward(p){var rad=radians(this.angle);var newx=this.x+cos(rad)*p;
var newy=this.y+sin(rad)*p;this.goto(newx,newy);}function turtleBack(p){
this.forward(-p);}function turtlePenDown(){this.penIsDown=true;}
function turtlePenUp(){this.penIsDown = false;}function turtleGoTo(x,y){
if(this.penIsDown){stroke(this.color);strokeWeight(this.weight);
line(this.x,this.y,x,y);}this.x = x;this.y = y;}function turtleDistTo(x,y){
return sqrt(sq(this.x-x)+sq(this.y-y));}function turtleAngleTo(x,y){
var absAngle=degrees(atan2(y-this.y,x-this.x));
var angle=((absAngle-this.angle)+360)%360.0;return angle;}
function turtleTurnToward(x,y,d){var angle = this.angleTo(x,y);if(angle< 180){
this.angle+=d;}else{this.angle-=d;}}function turtleSetColor(c){this.color=c;}
function turtleSetWeight(w){this.weight=w;}function turtleFace(angle){
this.angle = angle;}function makeTurtle(tx,ty){var turtle={x:tx,y:ty,
angle:0.0,penIsDown:true,color:color(128),weight:1,left:turtleLeft,
right:turtleRight,forward:turtleForward, back:turtleBack,penDown:turtlePenDown,
penUp:turtlePenUp,goto:turtleGoTo, angleto:turtleAngleTo,
turnToward:turtleTurnToward,distanceTo:turtleDistTo, angleTo:turtleAngleTo,
setColor:turtleSetColor, setWeight:turtleSetWeight,face:turtleFace};
return turtle;}

Looking Outwards 11: Mika Vainio

Mika Vainio is of Finnish descent but is currently based in Oslo, Norway. Since the 80’s, Vainio has been involved in the experimental sound art movement and has worked with electronics as part of the Finnish industrial and noise scene.

Vainio’s earlier practice delves into minimal avant techno. He also works under several pseudonyms, such as ‘Ø’ and ‘Philus’, and as half of the duo Pan Sonic. Pan Sonic credits a lot of its inspiration to early industrial sound artists like Throbbing Gristle and Suicide. Vainio often remarks that their music is a cross between these two schools, taking the harsh, pure sounds typical of industrial techno and stretching them out into longer, subdued soundscapes reminiscent of instrumental reggae and dub.

Many of their sounds are created using samples and an MPC2000 sequencer. The device is now considered a “Vintage Unit,” but at the time it was central to the advancement of Vainio’s experimental sound practice.

On his website, I found a list of the technology he currently uses during his performances and installations:

  • 1 MIDI keyboard (minimum of 2 octaves, 24 keys) to control his synthesizer, plus stand (with power adaptor, no USB)
  • 1 mixing desk with faders, 4 mono channels and 4 STEREO (!!) channels minimum, pre and post switches for fx, three-range EQ and – most important! – 2 aux out-sends
  • 2 good monitor speakers
  • Mika Vainio can use either 1/4 inch (jack) or XLR for output
  • Strong, good quality PA
    Powerful subwoofers should go down to 20 Hz
    Overall range 20 Hz – 20 kHz
  • Mika Vainio will bring a case of approx. 20 kg which includes:
    Korg SX sampler/sequencer, Lexicon FX unit, Vermona Mono Lancet synthesizer, OTO sound processing unit.

These two pieces in particular really caught my eye (ear). They’re from the early 90’s, but that’s beside the point. They’re both very experimental, and give off almost “extraterrestrial” vibes, if that makes sense. There’s not much white noise, so the gaps between the sounds are very quiet… silent, almost. It’s very eerie; it reminds me of the stillness and curious nature of outer space itself. I love it.

http://www.mikavainio.com/

Hannah K-Looking Outwards-11

This week for my Looking Outwards, I looked at John Wynne’s untitled installation for 300 Speakers, Pianola, and Vacuum Cleaner. It was released in September 2011 and explores the boundaries between sound, space, and music. In terms of sound, there are three elements: the sound of the space in which the installation occurs, the notes played by the pianola, and a computer-controlled soundtrack of synthetic sounds. These three elements are not synchronized, meaning that the track never repeats. The pianola in the exhibit has been modified to play only notes whose frequencies resonate best in the space, so there was careful consideration of which notes would be played. The exhibit has been described as creating a sort of “epic, abstract 3-D opera in slow motion,” and I agree with that statement completely.
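
As a side note, one simple way to pick “resonant” notes for a given room (this is just my own rough sketch of the idea, not necessarily how Wynne actually chose them, and the function names and numbers are made up for the example) is to compute the room’s axial standing-wave frequencies, f = n·c/(2L), and keep only the piano notes that land near one of them:

var SPEED_OF_SOUND = 343; // m/s at room temperature

// axial standing-wave frequencies for one room dimension: f = n * c / (2 * L)
function axialModes(lengthMeters, maxModes) {
    var modes = [];
    for (var n = 1; n <= maxModes; n++) {
        modes.push(n * SPEED_OF_SOUND / (2 * lengthMeters));
    }
    return modes;
}

// equal-tempered pitch of a MIDI note number (A4 = note 69 = 440 Hz)
function midiToFrequency(midiNote) {
    return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// keep the piano notes (MIDI 21-108) within `tolerance` Hz of any room mode
function resonantNotes(roomDimensions, tolerance) {
    var modes = [];
    for (var d = 0; d < roomDimensions.length; d++) {
        modes = modes.concat(axialModes(roomDimensions[d], 20));
    }
    var notes = [];
    for (var m = 21; m <= 108; m++) {
        var f = midiToFrequency(m);
        for (var i = 0; i < modes.length; i++) {
            if (Math.abs(f - modes[i]) < tolerance) {
                notes.push(m);
                break;
            }
        }
    }
    return notes;
}

// e.g. resonantNotes([20, 12, 6], 2) for a hypothetical 20 m x 12 m x 6 m gallery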

I found this installation to be particularly interesting because I think it demonstrates an intersection of sound and space. There is a strong visual element, because so many speakers have been carefully and deliberately placed throughout the space, but this visual element is only secondary, since the purpose of the exhibit is to highlight sound.

This massive display of different sonic elements works together to explore the idea of space.


Sarita Chen – Looking Outwards 11 – Music

This week’s blog post is about a well-known (and somewhat infamous) Japanese music synthesizer software called Vocaloid. It was originally developed in 2000 by Kenmochi Hideki at Pompeu Fabra University in Barcelona, Spain. The project, which was initially not intended for a widespread commercial release, was backed by the Yamaha Corporation. The software allows users to create songs and synthesize singing by typing in lyrics and a melody. The voice samples were provided by different actors and singers, and there are specific “characters” that each have their own unique voice. The software has allowed for the creation of many different songs and projects by both independent and professional artists and composers, like Hachi, DECO27 and sasakure.UK, to name a few.

The software works by taking the provided voice samples and sound libraries and running them through the synthesizer. Here is a visual diagram of how the sounds and program work together.

What this really means is that the voice samples are joined together in a sequence according to the lyrics and melody provided. To avoid clunky, disjointed singing, the Vocaloid synthesizer tries to smooth out the joins between samples as best it can to recreate a person singing. However, Vocaloid songs almost always still sound somewhat robotic.
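
To illustrate just the joining step, here is a rough JavaScript sketch of the simplest possible version of that idea: two recorded sample buffers spliced together with a short crossfade so the seam is less audible. This is only my own illustration of the general concept, not Vocaloid’s actual algorithm (which is far more sophisticated), and the function name and buffer format are made up for the example.

// Minimal sketch of concatenative joining (not Vocaloid's real engine):
// splice two arrays of audio samples together with a short linear crossfade.
function concatenateWithCrossfade(bufferA, bufferB, fadeLength) {
    var out = [];
    // copy bufferA up to the start of the crossfade region
    for (var i = 0; i < bufferA.length - fadeLength; i++) {
        out.push(bufferA[i]);
    }
    // blend the tail of bufferA with the head of bufferB
    for (var j = 0; j < fadeLength; j++) {
        var t = j / fadeLength; // ramps from 0 to 1 across the fade
        var a = bufferA[bufferA.length - fadeLength + j];
        var b = bufferB[j];
        out.push((1 - t) * a + t * b);
    }
    // copy the rest of bufferB after the crossfade
    for (var k = fadeLength; k < bufferB.length; k++) {
        out.push(bufferB[k]);
    }
    return out;
}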

The Vocaloid characters themselves have built up something of a fanbase on the internet, but for the sake of keeping this blog post short I won’t go too much into that. Although there are Vocaloids for languages besides Japanese (like Korean, English and Spanish), the Japanese Vocaloids tend to be used the most and are the most popular.

Here is an example of a Vocaloid singing a song (in Japanese).

Vocaloid has inspired many creators (animators, artists, composers, etc.), and a lot of songs have their own fanmade animated music videos (again in Japanese; you don’t have to watch it, the following is just provided as an example).

Shannon Case – Looking Outwards Week 10

Dr. Vinod Vidwans is the creator of these Indian computational music pieces.

I researched a project by Dr. Vinod Vidwans, a professor of New Media, Creativity and Innovation based in India. His research is an effort to take explorations in computer music a step further: it explores the possibility of computer-generated Indian music (computational music) using artificial intelligence. He has developed a Computational Theory of Indian Classical Music and has built a system that synthesizes a Bandish (a musical composition) in a given Raga and renders it in a traditional style. The system works on its own to generate these songs without human intervention. I think the creator’s artistic sensibilities are shown in how he represents his cultural heritage by developing a historical style in a new medium.

I couldn’t figure out how to embed an mp3 file, so here is a link to one of his works, and there are many more listed on his website.

LookingOutwards-11

The Computer Orchestra is a crowdsourced musical tool that allows people to upload sounds into a database for musicians to access and play. The technology was developed by Fragment.in. The crowdsourcing and the interface of the tool interest me the most. I think it is pretty unique that many people can contribute to the work of one musician; it makes the possibilities for musical sound far wider than any single performer could achieve alone. In addition, I think the interface is pretty unique because it uses a Kinect and several computers to play. Under the right maestro, I think this tool can produce some really great-sounding music, as shown in the TEDx video.

The Computer Orchestra from computer-orchestra on Vimeo.

LookingOutwards-11-mdambruc

Zimoun

http://www.zimoun.net/video.html


157 prepared dc-motors, cotton balls, cardboard boxes 60x20x20cm

Zimoun 2014


Picture of installation

Video of Piece

Since in week 4 I wrote about a piece of music, this week I decided to choose the sound artist Zimoun, a Swiss artist based in Bern, Switzerland. In Zimoun’s piece “157 prepared dc-motors, cotton balls, cardboard boxes 60x20x20cm,” I really enjoy his use of space. Zimoun specializes in creating large-scale sound sculptures composed of simple, functional components. I really admire how he uses simple materials such as cardboard and cotton balls to create grand, immersive sculptures. Zimoun uses prepared systems to control the dc-motors in his piece, allowing him to compute the movements they create. Zimoun is truly an artist capable of creating complex imagery and sound from simple space and simple machines. I admire his efficiency and his ability to make great art out of commonplace objects, because it proves that great art does not need to be expensive to create. It is art of the common people.

Looking-Outwards-11

This week, I looked at an instrument called the MikroKontrolleur that allows you to “play” your voice. It was invented in 2015-2016 by Katharina Hauke and Dominik Hildebrand Marques Lopes.

I really admire the idea behind this, and I think it’s an interesting concept and creation. I don’t particularly like the sound it produces; as you can hear in the video, it sounds like something from a horror movie. Nonetheless, I find it very interesting.

This instrument is computational because it analyzes different physical aspects of the singer in order to control the computed portion of the instrument, the part that causes such varying sounds to be made.

The link to the article describing the instrument can be found here along with a video. Below are some pictures taken of a singer using the instrument.

taken from http://www.creativeapplications.net/sound/mikrokontrolleur-an-instrument-to-play-ones-voice/

Diana Connolly – Looking Outwards 11

This project uses algorithmic composition to create computer-generated jazz improvisation solos with a walking bass line and a melodic guitar. I admire how a computer was able to simulate improvisation over defined chord changes, filling in with seemingly improvised notes. To me, improvisation in jazz is a purely human characteristic, so simulating it with a computer is mind-boggling. The algorithm is a computer program written by the creator who posted this video (I don’t know his/her name). The creator’s artistic sensibilities are manifested in the final form because the walking bass line and guitar work well together, acting as if two people were playing and improvising together. I also appreciate the visualization that accompanies the computer-generated sounds, because it makes the piece even more playful as the purple and green balls bounce along the projected music notes.
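
To give a sense of how improvisation over defined chord changes can be automated, here is a tiny JavaScript sketch of my own (not the creator’s actual program): it walks a bass line by picking a chord tone on each beat, with an occasional random chromatic tone thrown in. The chord voicings, note numbers, and probabilities are all invented for the example.

// Toy walking-bass generator (my own simplification, not the creator's code).
// Chords are listed as MIDI note numbers; the bass plays an octave lower.
var chordTones = {
    "Cmaj7": [60, 64, 67, 71],   // C, E, G, B
    "A7":    [57, 61, 64, 67],   // A, C#, E, G
    "Dm7":   [62, 65, 69, 72],   // D, F, A, C
    "G7":    [55, 59, 62, 65]    // G, B, D, F
};

function walkingBassLine(progression, beatsPerChord) {
    var line = [];
    for (var c = 0; c < progression.length; c++) {
        var tones = chordTones[progression[c]];
        for (var b = 0; b < beatsPerChord; b++) {
            if (Math.random() < 0.2) {
                // occasionally pick a random chromatic tone near the root
                line.push(tones[0] - 12 + Math.floor(Math.random() * 12));
            } else {
                // otherwise land on a chord tone, dropped an octave for the bass
                var pick = tones[Math.floor(Math.random() * tones.length)];
                line.push(pick - 12);
            }
        }
    }
    return line;
}

// e.g. walkingBassLine(["Cmaj7", "A7", "Dm7", "G7"], 4) returns 16 MIDI notes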