Ian Chang’s “Spiritual Leader” is an interesting sound-art/music project because he uses both mechanical and human-produced drum sounds to create a percussive piece where the two blend together. This creates an effect where, sonically, it can be unclear what is the computer and what is Ian.
Chang’s collaboration with Endless Endless is a video in which they created a light-projection installation based on the drum beats, shining different lights and projections onto Chang as he plays the drums. This creates an environment that is simultaneously sonically and visually percussive. I found the effect really successful because, while the human and mechanical sounds blend, the lights only turn on through the human interaction that creates the beats. You can really feel the presence of the artist, even though the environment and many of the sounds are produced by a computer.
This project, created by Mark Wheeler, is generated by integrating computational music data and visual graphics. It is a performance that shows how the soundtrack can control the visual world, and how that world can in turn affect the sound, interactively.
The performance setup uses two synths: a monome running Mark Eats’ own Sequencer app and another monome controlling Ableton Live. These instruments connect via MIDI over wifi to a second laptop running a custom openFrameworks app that produces the visuals; Ableton Live and MidiPipe handle the routing. The visuals software operates much like a game engine: a map is created with rules for traffic flow, junctions, and traffic lights, and the simulation visualizes behavior that responds to manipulations of the sound. Through this project, computational music not only reached a more general audience, but also took on a more functional role than it had before.
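Wheeler’s actual openFrameworks code isn’t shown here, but the core idea described above — musical events steering a rule-based traffic simulation — can be sketched in a few lines. This is a minimal Python sketch under my own assumptions (the class names, the one-light map, and the note-toggles-light rule are all hypothetical, not taken from the project):

```python
# Minimal sketch (assumption: not Wheeler's actual code) of a sound-driven
# traffic simulation: each incoming MIDI note event toggles a junction's
# traffic light, and cars advance only while their light is green.

class TrafficLight:
    def __init__(self):
        self.green = False

    def toggle(self):
        self.green = not self.green


class Car:
    def __init__(self, position=0):
        self.position = position

    def step(self, light):
        # Cars move only on green, so the music's rhythm shapes traffic flow.
        if light.green:
            self.position += 1


def run(midi_notes, steps_per_note=1):
    """Drive the simulation with a list of MIDI note numbers."""
    light = TrafficLight()
    car = Car()
    for note in midi_notes:
        light.toggle()              # every note event flips the junction light
        for _ in range(steps_per_note):
            car.step(light)
    return car.position
```

With four notes, the light alternates green/red on each event, so the car only advances on every other note — a toy version of “the soundtrack controls the world.”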
The website touchpianist.com is a really fun site on which you can “play” songs using a single key on your keyboard while viewing the notes, abstracted as lights, on your browser page. I admire the mesmerizing way the lights appear on the HTML canvas, and the fact that you can play the songs at whatever speed you choose.
touchpianist.com uses an HTML canvas to display the lights and the Web Audio API to play the songs.
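The interaction described above — one key, any key, releasing the next notes of a pre-sequenced piece — can be sketched simply. This is a guess at the core mechanism, not touchpianist.com’s actual source, and the song excerpt is made up:

```python
# Sketch (assumption: not touchpianist.com's real code) of its core
# interaction: the piece is pre-sequenced, and each keypress simply
# releases the next chord, so timing -- not pitch -- is the player's job.

SONG = [
    ["C4"], ["E4"], ["G4"], ["C4", "E4", "G4"],  # hypothetical excerpt
]


class TouchPiano:
    def __init__(self, song):
        self.song = song
        self.index = 0

    def press(self):
        """Called on any keypress: return the next chord to be played."""
        if self.index >= len(self.song):
            return None                  # piece is over
        chord = self.song[self.index]
        self.index += 1
        return chord                     # the real site synthesizes this via Web Audio
```

Because the player controls only *when* each chord sounds, varying your keypress rhythm is exactly how you “tailor the songs to speeds of your choosing.”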
Batuhan Bozkurt is both a musician and a creative coder, and he combined these two talents to create the website by playing all the songs, and then coding the setup itself.
This Looking Outwards, I decided to focus on what is probably the funniest use of computer music ever: autorap.
The autorap app is by Smule, a company in San Francisco that makes mobile apps. It was founded in 2008 by Jeff Smith and Ge Wang, a professor at Stanford. The idea is that you can record yourself speaking/singing/whatever, and it will turn that recording into “rap.” I use this term loosely. It really just autotunes whatever you record, but it’s honestly hilarious.
Here’s a video of Ge Wang testing the app, which I highly recommend everyone watch because again, it’s hilarious:
They also have a very pretty graphic moving in the background when you record, which I thought was a nice touch.
Wendy Carlos (born 1939) is an American electronic composer. She has worked on a variety of projects, from solo albums to film scores, and assisted in the development of the first Moog synthesizer.
In Switched-On Bach (1968), one of her best-known albums, Carlos recorded a series of Bach’s most famous pieces using synthesizers. At a time when electronic music was largely experimental and technical, this album helped popularize the synthesizer and electronic music to a wider audience.
I really like electronic music, especially early electro-pop, and Wendy Carlos definitely laid the groundwork for every popular electronic record for the next 50 years. In general, I like artists who can balance making interesting work while also being accessible to everyone.
(If you’re interested in checking out her work, her other famous pieces include the film scores for both the original Tron and A Clockwork Orange. Her website is wendycarlos.com.)
The piece of computer music that I found very interesting was a video called “Pipe Dream”. If you watch the video you will see a series of different objects interact with one another and make a beautiful, harmonious melody. There are many other videos that do something similar, but the reason I found this one more interesting than the others was the work that must have gone into it. There were so many pieces and so much animation that, if the music was created strictly from the computations, this would have been a very long and arduous project. If it had been played on another instrument and the animation made to match the music, that would be different. I still encourage anyone to listen to this song and watch the video! It is fantastic! The second video below shows a real-life version of “Pipe Dream”. I found this at the last second and I am just in awe of what humans can do.
A work in the area of music that I found particularly interesting was the new musical instrument developed by Changxi Zheng. It’s called the “zoolophone,” a metallophone where each bar is shaped like a different animal. But the stylized shapes of the instrument aren’t nearly the most interesting thing about it, nor what I admire most. Zheng used computational design to create each bar, instead of painstakingly carving each one by hand so that it vibrates perfectly. The computer tests and retests each bar, leaving even less room for error, and each bar, when struck with a mallet, actually produces three notes (a full chord).
I don’t know much about the algorithms Zheng used to generate this work, but it’s really cool to think that, beyond the intricate fabrication of each piece, what we’ve learned in this course suggests the process is probably something along the lines of ‘if’ statements and continual testing until it works!
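That guess — test, compare, nudge, repeat — can be made concrete with a toy example. This is my own assumption of how such a loop might look, not Zheng’s actual algorithm (his real system optimizes full 3D shapes with physical simulation); the frequency model and `tune_bar` function here are entirely hypothetical:

```python
# Toy sketch (my assumption, not Zheng's method) of a test-and-retest loop:
# simulate a bar's pitch, compare it to the target, and nudge the design
# until it is close enough.

def simulated_frequency(length):
    # Idealized model: a bar's pitch falls off as 1/length^2.
    return 1000.0 / (length ** 2)


def tune_bar(target_hz, length=1.0, tolerance=1.0, max_iters=10000):
    """Iteratively adjust the bar length until its simulated pitch matches."""
    for _ in range(max_iters):
        f = simulated_frequency(length)
        if abs(f - target_hz) < tolerance:
            return length                # close enough: accept this shape
        if f > target_hz:
            length *= 1.001              # pitch too high: lengthen the bar
        else:
            length *= 0.999              # pitch too low: shorten the bar
    raise RuntimeError("did not converge")
```

Even this crude version shows why a computer helps: tuning to within 1 Hz takes hundreds of tiny adjustments, each one an ‘if’ statement and a retest — fast in simulation, agonizing with a file and a real metal bar.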
The artist’s sensibilities manifest in the final form by making each bar into a stylized animal shape – which is pretty incredible to think about, given instruments usually use regular bar shapes because it’s so difficult to get the sounds right with irregular shapes.
MicroKontrolleur is an engineered instrument that uses gesture as a means of sound-making. The instrument can be hooked up to any average microphone, and it consists of a series of pulleys and foot pedals that send signals to software. The software then manipulates the sound depending on what you are doing with the instrument. Playing it is naturally a very bodily, performative act, and it can create a range of sounds from small pops or clicks to storm-like rumbles. The musician playing the instrument must stand close to the microphone, looking as if they are about to sing into it. This breaks the viewer out of the common structure of how music is usually performed. In this way it also begins to question the power dynamic between a singer and a band.
I discovered this amazing project by NY-based artist Lisa Park, who is actively involved in the arts at the New Museum. She uses brainwaves to compose and perform music. The project has an atmosphere of Zen; it is about exploring vulnerability and self-control. Park performs wearing a futuristic headset containing electroencephalography (EEG) sensors, which translate her brainwaves into vibrations of the water in the plates surrounding her. The music is produced using Max/MSP and Reaktor. I think this project is a very interesting integration of technology, neuroscience, and art. It not only produces music, but more importantly reflects the mood, the thinking, and the neural activity of the performer.
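The pipeline described above — EEG signal in, sound parameter out — can be sketched at its simplest. This is a hypothetical illustration of the general idea, not Park’s actual Max/MSP patch; both function names and the power-to-volume mapping are my own assumptions:

```python
# Hypothetical sketch (not Park's actual patch) of the EEG-to-sound idea:
# the average power of a brainwave window becomes the amplitude of a tone,
# so a calmer signal produces a quieter, stiller pool of water.

def band_power(samples):
    """Mean squared amplitude of an EEG window -- a crude power estimate."""
    return sum(s * s for s in samples) / len(samples)


def to_amplitude(power, ceiling=100.0):
    """Map band power into [0, 1] for a synthesizer's volume control."""
    return min(power / ceiling, 1.0)
```

The interesting artistic consequence is the feedback loop: the performer hears (and sees, in the water) her own agitation, and must calm her mind to quiet the piece — which is exactly the vulnerability and self-control the work is about.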
I really liked Sony’s Evans-bot. An animated robot that creates songs, it is given a series of constraints for melody and harmony, a primary chord to work off of, and then a few backup chords. These constraints are encoded in a “lead sheet,” described by the creator as “monophonic melodies with underlying chord sequence, ‘in the style of’ arbitrary composers, or corpora.”
I really like this project especially because the music is programmed to sound like the style of a particular composer, such as George Gershwin or Irving Berlin. I’m a big Gershwin and Berlin fan, so I enjoyed hearing trademarks of their music within the program. I’m not sure how a computer would be programmed to create such a thing, but I highly enjoyed it and had fun trying to reason through how such a program might work.