Alexandra Kaplan – Looking Outwards – 11

 

Video of the project

A computational musician I found interesting is Mari Kimura, a violinist. The project I am focusing on is Bebe, a piece performed by violin and computer. It was originally written in 2008, with edits made in 2012. From Kimura’s description of the project, it seems she used the Max programming environment to have the computer play music along with her. What caught my eye (or should I say ear) is how she designed the program to follow along with her so that she could improvise. From her comments, it seems that many computer-music projects she had previously worked on put the musician in a position where they had to play exactly to the note to keep up with the computer. I think it is amazing what she accomplished and what technology can do to improvise and stay in harmony with brilliant musicians.

Xindi Lyu-Looking Outwards 11


A world-famous Vocaloid song

Vocaloid is a singing-voice synthesizer. Its signal-processing component was developed through a joint research project led by Kenmochi Hideki at Pompeu Fabra University in Barcelona, Spain, in 2000, and was not originally intended to be a commercial product. However, thanks to the popularity of the anime-character designs representing each of its voice libraries, more and more independent music creators, mostly in Japan, and anime fans have joined the community of Vocaloid producers and listeners, making music with, and even for, these libraries. In ten years a great number of outstanding pieces have been created, and the community has grown large enough to become a culture of its own.

Nina Yoo- Looking-Outwards-11

Of Nature and Things – Fedde Ten Berge – December 7, 2017

This project uses objects’ forms to create sounds. It was interesting to see the creator, Fedde, use many organic and man-made objects to create these sounds. I was inspired by his ability to understand an object through sound, because it almost creates a new way to interpret objects. It reminds me of an instrument, except that it turns every object into an instrument. The algorithm behind this would probably have a preset standard for how curves and edges should sound, and depending on how extreme or ordinary those features are, the sound would come out differently. Along with this would come the range of pitch and volume. The artist’s creativity comes out through his interpretation of how each object should sound and through the objects he decides to use or create.

 

Of Nature and Things – The Shroom

 

 

Kevin Thies – Looking Outwards 11


A user’s generated music

While looking at various computational instruments, I ended up on a small tangent that led to the discovery of not a person, but a tool. Specifically, that tool was WolframTones, created in 2005 by Wolfram Research and based on research from the 1980s. I found it interesting in that, unlike what I looked at during week 4, this was a tool more centered around what I suppose you could call the formality of music. It allows for control over tempo, pitch mapping, and instrumentation. As an extra blessing or curse, the site has so many different options that one could really lose themselves in it. There are already hundreds of premade musical scales, instruments, and instrument roles. It’s crazy.
WolframTones is powered by Wolfram’s cellular automata. Basically, there is a row of squares, each black or white, that grows row by row based on a specific rule, generating complexity. There are 256 such rules, and Stephen Wolfram’s experiments went through all of them. Hopefully this image explains it a little better.

The 256 Rules

WolframTones takes the patterns these rules generate, flips them sideways, and uses the cells as notes.
The above video is an example of someone using the website and their generated music.
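The cellular automata behind WolframTones can be sketched in a few lines of Python. This is a minimal illustration of an elementary cellular automaton (the 256 rules mentioned above), with a made-up note mapping at the end; the scale and the chord mapping are my own assumptions for illustration, not Wolfram Research’s actual algorithm.

```python
# Sketch of an elementary cellular automaton like the ones behind
# WolframTones. The note mapping at the end is hypothetical.

def step(cells, rule):
    """Apply one of the 256 elementary CA rules to a row of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        # The three neighbors form a 3-bit index into the rule's bit pattern.
        idx = (left << 2) | (center << 1) | right
        new.append((rule >> idx) & 1)
    return new

def run(rule, width=16, steps=8):
    """Grow the pattern row by row from a single black cell."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        history.append(step(history[-1], rule))
    return history

# "Flipping sideways": read each time step as a chord, with live cells
# picking pitches from a scale (hypothetical mapping, MIDI C major).
scale = [60, 62, 64, 65, 67, 69, 71, 72]
for row in run(30, width=8, steps=4):
    print([scale[i] for i, cell in enumerate(row) if cell])
```

Rule 30 here is the classic example Wolfram studied; swapping in any other rule number from 0 to 255 gives a different pattern and therefore a different melody.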

Looking Outwards – 11 Min Lee

 

For this week’s post, I wanted to take a look at the computer-generated music from Clara Starkweather’s student project at Duke University. She uses a Kinect camera to detect the motion of her body parts and generate different sounds as the music progresses. More precisely, the algorithm she wrote plays different instruments in response to the camera’s detection of which body part is being moved.

In Starkweather’s project, she wrote her code in the audio synthesis programming language SuperCollider. In this video, Starkweather showcases her project by demonstrating the different sounds filling in as background music for the song “Golden” by Jill Scott. I admire this project because Starkweather says she had to learn to play a few instruments in order to write her code. Her musical stylistic choices are also manifested in the project, and despite how hard it is to create harmonizing sounds by controlling different instruments with different body parts, she does so seamlessly.

Source: https://www.youtube.com/watch?v=Q1ad8KG7tWc

Eunice Choe – Looking Outwards-11

People can interact with sounds on the walls, floor, and ceiling.

LINES (2016) is an interactive sound-art installation created by Anders Lind. I admire this project because it allows people to interact with sounds by touching the lines on the floor, walls, and ceiling. I admire how the exhibition is immersive and how it lets anyone create music, whether they are musically inclined or not. The sound interactions were programmed in Max/MSP. There are three instruments in the piece, and each consists of five to fifteen sensors connected to an Arduino board with an output sound card. The creator’s artistic sensibilities manifest in both the visual and sound elements: he uses different colors and shapes to distinguish between different sound effects.

Katherine Hua – Looking Outwards – 11

“NightFall” by Mileece (2003)

Mileece Petre is an English sound artist and environmental designer who makes generative and interactive music through plants. Believing that plants are sentient beings, she has them create music by attaching electrodes to them. The electrodes pick up the plants’ small electromagnetic currents; the signals are sent to amplifiers, translated into data, and fed into software that turns them into musical notes.

I admire this project because even though a single plant creates minimal sound, when she attaches electrodes to all the plants in a garden, it sounds like an orchestra’s symphony. Also, the music the plants make together sounds very soft and peaceful, reflecting the organic subtlety of plants. Furthermore, she has created a platform through which people can feel more connected to nature, furthering her passion for the environment.
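The signal-to-note pipeline described above could look something like the following sketch. Everything here is an assumption for illustration: the normalization range, the pentatonic scale, and the proportional mapping are my guesses at the kind of translation Mileece’s software performs, not her actual code.

```python
# Hypothetical sketch of mapping amplified plant-sensor readings to notes.
# The scale choice and voltage range are assumptions, not Mileece's method.

PENTATONIC = [60, 62, 64, 67, 69]  # MIDI C major pentatonic (soft-sounding)

def sonify(samples, low=0.0, high=1.0):
    """Map normalized sensor voltages to MIDI note numbers."""
    notes = []
    for v in samples:
        # Clamp out-of-range readings, then pick a scale degree
        # proportional to the voltage.
        v = min(max(v, low), high)
        degree = int((v - low) / (high - low) * (len(PENTATONIC) - 1))
        notes.append(PENTATONIC[degree])
    return notes

print(sonify([0.1, 0.5, 0.95]))  # quieter readings map to lower notes
```

Running many plants through a mapping like this at once, each on its own stream of readings, is what would give the layered, orchestra-like effect described above.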

Looking Outwards 11: Hans Zimmer and his music technology

Hans Zimmer is well known for his epic movie scores, but few people know about the computational power behind his music.

Though Zimmer is now more associated with his orchestral scores, his early film work was largely composed solo, on the synthesizer and through samples that Zimmer recorded himself. But as his career expanded, so did the scope of his music, and it is that scope that has made him so enduring in the musical cultural consciousness. Zimmer is a constant innovator, and his embrace of technology means he is able to adapt without compromising for the sake of whatever is trendy at the moment. More recently, Zimmer helped develop an app showcasing the score for Inception that took into account the user’s whereabouts and movements, and even launched a viral event to gather the 100,000 voices he wanted for the “rise up” chant that forms the base of much of The Dark Knight Rises’ score.

Hans composing

So much of his music nowadays is composed and performed with custom-built and custom-programmed synthesizers that create the iconic sound so often associated with him.

My grand musical education is two weeks of piano lessons. So I’m not a good player, but I’m a good programmer. I’ve always felt that the computer was my instrument.

Looking Outwards 11

https://studio.carnegiemuseums.org/evaluating-dawn-chorus-48b82e051967

Mobile technology has changed the way we interact with everything in our everyday lives, including museums. Dawn Chorus is an app that is also useful outside of the museum exhibition: a natural-history-themed alarm clock exploring the idea of “museum as utility.” Dawn Chorus takes its inspiration from the dawn chorus, the natural phenomenon of many birds singing together at daybreak. Like the birds each dawn, the app grows louder as different birds join in the chorus, allowing for a natural, musical way to wake up. I was inspired by this project because of its approach to the concept of museum as utility, combining inspiration from nature with a mobile app.

Jessica Timczyk – Looking Outwards 11

A picture of the Eigenharp instrument

The Eigenharp is an electronic instrument released in 2008 by Eigenlabs, a UK-based company, and invented by John Lambert. The instrument is essentially a highly flexible and portable controller, with the sound or music being generated in the software that it drives. I really like this project because it visually resembles a wind instrument such as a bassoon, yet is in fact an electronic instrument. Each of the keys is velocity-sensitive and acts like a joystick. Although I am not entirely sure how the algorithm behind the instrument works, I suppose it alters the sound that comes out of the instrument based on the keys the player presses. The artistic influence of the creator can be seen in the array of different manipulations a player can perform with the instrument’s keys.