Christine Seo – Looking Outwards 11

Caption: A video documentation, filmed in Montreal, Québec, of Mesa Musical Shadows (2016), an interactive musical pavement.

Mesa Musical Shadows is an interactive, musically driven public artwork by Daily Tous Les Jours, an interaction design studio focused on work in public spaces. The project draws on performance, musical instruments, and digital art, using contemporary tools such as sensors and real-time data. The artists wanted passing crowds to play a critical role in transforming their environment and their relationships with one another. The creators’ artistic sensibilities show in the final form: bright, geometrically tiled surfaces make the piece inviting to the audience. When visitors’ shadows fall across the tiles, sensors detect them and speakers project sounds and melodies. The sensors are driven by a Max/MSP patch linked to Arduino Mega boards via OSC, and they constantly recalibrate themselves to define new threshold values that determine what counts as a shadow and what does not.
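The recalibration idea can be sketched in a few lines. This is only a hypothetical illustration, not the studio’s actual Max/MSP patch: each sensor keeps a rolling baseline of recent ambient-light readings and treats any reading far enough below that baseline as a shadow, so the threshold adapts as clouds pass or the sun moves.

```javascript
// Hypothetical sketch of self-recalibrating shadow detection.
// The window size and ratio are invented parameters for illustration.
class ShadowSensor {
  constructor(windowSize = 50, ratio = 0.7) {
    this.readings = [];           // recent ambient-light samples
    this.windowSize = windowSize; // how much history the baseline uses
    this.ratio = ratio;           // shadow threshold as a fraction of baseline
  }

  calibrate(reading) {
    this.readings.push(reading);
    if (this.readings.length > this.windowSize) this.readings.shift();
  }

  get baseline() {
    // mean of the recent samples
    return this.readings.reduce((a, b) => a + b, 0) / this.readings.length;
  }

  isShadow(reading) {
    this.calibrate(reading);
    // a reading well below the recent average counts as a shadow
    return reading < this.baseline * this.ratio;
  }
}
```

Because the baseline is a moving average, a gradually darkening sky shifts the threshold instead of triggering false positives, which matches the piece’s need to work outdoors at any time of day.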

Shadows cast on different tiles trigger different voices, all singing in harmony. The music also depends on the weather and the time of day, since both change the length of the shadows. I found this project intriguing because the artists let the audience compose the music: visitors are welcomed by a moment of surprise and an invitation to engage with the piece, as well as to interact with other visitors and passersby. I believe music is a universal language, and it is exciting to see art and music work together to bring a community closer.

https://www.creativeapplications.net/maxmsp/playin-the-pavement-daily-tous-les-jours-mesa-musical-shadows/

John Legelis – Looking Outwards 11, Computer Music

Teenage Engineering OP-1 Synthesizer/Sampler/Sequencer

Teenage Engineering is a young company from Stockholm, Sweden. They developed the OP-1 instrument in response to the creativity-stifling environment of a completely unstructured digital audio workstation.

As a digital instrument, the OP-1 is far more restrictive to the user than creating music on a computer would be. The creators claim that “…[the] limits boost the creativity. Limitations are OP-1’s biggest feature.”

The device draws inspiration from retro ’80s synthesizers such as the Casio VL-1, which some of the developers grew up with; they found that the physical limitations of older synths spurred unconventional ideas and creativity.

Sample of OP-1 in use.

The device includes 11 separate synthesizer engines that use varying synthesis methods to create different types of sounds.

Hannah Cai—Looking Outwards—11

For this week’s Looking Outwards, I chose this sample of music, which was generated by David Cope’s “Experiments in Musical Intelligence” computer program. The title terms it a “Bach-style chorale,” which, judging by the comments, confused and angered a lot of the audience, who said it sounded nothing like Bach. One commenter clarified the distinction: “A ‘Bach chorale’ refers to a style, not a composer – pretty much any SATB vocal piece utilizing the rules of counterpoint. That’s why a computer can do it — once you lay in one of the voices, you really can fill in the rest ‘by the numbers.’ But don’t confuse that with actually composing or creating. And no offense, but any real lover of Bach can tell that this sounds nothing like him — it has none of his intrigue or quirkiness (some of which has to do with his deliberate breaking of said rules). It sounds incredibly bland and boring to me. Which is exactly what I’d expect from computer-generated music.”

I found the reactions to this experiment more interesting than the audio itself, which I enjoyed, but didn’t find really special. I guess that personally, I feel like music needs some kind of human creativity or spontaneity/randomness to truly be “art.” Because this piece only “fills in” harmonies based on preset rules, it is less interesting/valuable to me, and apparently to quite a few other people as well. I still find the experiment impressive though, and I’d love to experiment with some kind of generative music in the future, if I can manage to learn how to.

Dani Delgado Looking Outwards – 11

The laptop orchestra during a performance

The project I chose to research this week was the Stanford Laptop Orchestra (SLOrk for short). The orchestra, which creates full performances with performers using controllers as well as laptops to generate music, was founded in 2008 by Ge Wang together with other students, faculty, and staff at Stanford University’s Center for Computer Research in Music and Acoustics. They use the ChucK programming language to create and synthesize sounds and to design the instruments.

The controllers used to generate music. The knobs at the end can be pulled and twisted by the performers, sending signals to the laptops.

I find this project fascinating, as it requires skills similar to those of a traditional orchestra (cohesion, practice, and performance) but uses a completely new medium. I was honestly very excited to stumble across this, because I was expecting to find a single person composing music; instead, I found an entire group working to synthesize music live, which is impressive.

A video of one of their performances

Their website

Lan Wei-Looking Outwards-11

The project, named ‘Mr Shadow’, was created by scientists at the Sony CSL Research Laboratory in 2016.

I looked at many videos about computer music before deciding to investigate this one. The reason is that, for me, most ‘computer music’ only uses computers to save time in producing rhythms, without digging into what computers are uniquely capable of. But this piece, produced by artificial intelligence, is special in my view. It contains not only rhythms that lift people’s spirits but also, most importantly, human emotion. It is exciting to see how a ‘dead’ computer can produce such vivid music, with ups and downs, by analysing precedents. And this leads to another question: will computers ever totally replace artists? I could write an essay about this, but I am not going to discuss the topic here. I just want to say that this project is very exciting and inspiring.

Besides the music itself, the graphic effects are also very good and echo the music well. Overall, the computer-generated music really draws me into its world.

Lingfan Jiang – Looking Outwards 11

This week, I am interested in the project “Looks Like Music”, done by the Japanese creator Yuri Suzuki in 2013. It consists of a miniature robot that detects and follows a circuit – a black line traced in marker pen – interspersed with colored reference points that the device translates into sound. The main reason I admire it is that it is an easy but very fun project that everyone can be involved in. The general public normally cannot be part of an artwork, so sometimes people do not understand it. The algorithm behind it should be fairly simple, too: the robot has a sensor that differentiates between black and white, representing them as “0” and “1”, and each color is assigned a specific sound. Much like the get() function in p5.js, the robot reads different colors from the paper.
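The logic described above can be sketched in plain JavaScript. This is a hypothetical illustration, not Suzuki’s actual firmware: dark pixels read as the line to follow, light pixels as background, and sampled colors are matched to a small invented color-to-sound table.

```javascript
// Invented color-to-sound table, for illustration only.
const COLOR_SOUNDS = { red: 'kick', blue: 'snare', yellow: 'hi-hat' };
const REFERENCE_COLORS = {
  red: [255, 0, 0],
  blue: [0, 0, 255],
  yellow: [255, 255, 0],
};

function brightness([r, g, b]) {
  return (r + g + b) / 3;
}

// 1 = on the black line, 0 = on white paper
function onLine(pixel, threshold = 128) {
  return brightness(pixel) < threshold ? 1 : 0;
}

// Map a sampled pixel to the nearest reference color's sound,
// ignoring pixels that are not close to any reference color.
function soundFor(pixel) {
  let best = null;
  let bestDist = Infinity;
  for (const [name, [r, g, b]] of Object.entries(REFERENCE_COLORS)) {
    const d = (pixel[0] - r) ** 2 + (pixel[1] - g) ** 2 + (pixel[2] - b) ** 2;
    if (d < bestDist) { bestDist = d; best = name; }
  }
  return bestDist < 10000 ? COLOR_SOUNDS[best] : null;
}
```

In a p5.js sketch, the `pixel` argument would come from something like `img.get(x, y)`; here it is just an `[r, g, b]` array so the mapping idea stands on its own.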

As for the creator’s sensibilities: being a sound artist, designer, and electronic musician himself, Yuri Suzuki really brought his professions into the project. It is a simple graphic artwork combined with sound and electronics. His intention was “to raise public awareness of the way in which sound and music are produced”, and I think he accomplished that very well in this project.

Romi Jin – Looking Outwards 11

Keiko Uenishi is known for experimenting with “restructuring and reanalyzing one’s relationship with sounds in sociological, cultural and/or psychological environments.” One of her works that stood out to me is an installation entitled CDL, or Car Décalé (Légèrement)/Because Shifted (Slightly). Uenishi questions ideas of space through what she calls aural perceptions, and her experiments expose people to temporal spatial experiences that are usually new to them.

(Below is an example of one of her sound experiments about found objects.)

CDL is an “experiential/phenomenological sound-performance/installation/method” that uses audio feedback to redefine a space. Uenishi keeps the spatial parameters of the setting simple (no decorations, no special flooring or walls, no furniture, no columns, etc.), since there should be no unnecessary distractions. The version of CDL above uses found objects (mainly construction materials) to create a part-to-whole experience: a series of smaller, temporary structures within a larger structure (the room). After each experiment, she would add or alter the objects, altering in turn the spatial experience of that specific CDL.

Erin Fuller – LookingOutwards-11

The piece “Order from Chaos” is a beautiful composition of computed visual and audio work. The track was produced by the London-based electronica and techno producer Max Cooper. It was originally inspired by a moment when Cooper was captivated by the sound of hard rain hitting a roof window at his apartment. He recorded the sound with binaural mics, microphones that capture audio the same way your ears actually hear sounds, and used the initial recording as a seed for the rest of the track. The raindrop patterns create an emergent rhythm: an initially detailed and chaotic form that slowly develops into something with a recognizable structure.

Gif of the video

While the prompt is focused on computer music, it would not be fair to overlook the mesmerizing visual animations by the France-based Houdini FX artist, motion graphics artist, and director Maxime Causeret.

Gif of the video

Vicky Zhou – Looking Outwards 11

4G Network into Musical Sounds

For this week’s Looking Outwards, centered on computational music, I decided to focus on a project by Andrius Sarapovas: a kinetic generative music installation. The installation turns a 4G network into a series of musical sounds, based on an algorithm and realized through 77 segments distributed throughout the physical exhibition space. Each segment consists of a metal bar, a sound activator, a sound damper, a resonator, and mechatronics that pick up on signals in the 4G network. The placement of the segments was optimized for the actual physical exhibition space.

I appreciate this project because it takes very impersonal data that we commonly and frequently discard, and/or don’t understand, and transforms it into a medium that we are more familiar with and can enjoy.

Audrey Zheng – Looking Outwards – 11

Atlås from binaura on Vimeo.

 

The creators: Binaura is a collective working in creative coding, interaction design, experimental interfaces, sonic toys, and other reactive environments. The collective regularly gives international workshops and lectures on these topics, and consists of Ágoston Nagy and Bence Samu.

Atlås is an anti-game environment that generates music within a conversational cognitive space. As part of its existence, automatically generated tasks are solved by machine intelligence without any need for human input. It also asks questions ad infinitum: questions dealing with presence, human cognition and imagination, corporate-driven automatisms, advanced listening practices, and silence.

I picked this app because of its beautiful generative UI. I love the geometric forms, the flat design with shadows, the circular text, and the color scheme. It makes for a very aesthetically pleasing composition.

Atlas

How it was made: The project was developed with free and open-source tools, with an educational aspect in mind. The main application is written in JavaScript using the p5.js library, embedded into a regular Swift iOS application; an example template can be used to write and play with your own p5.js app within an iOS app. The sound synthesis is written in Pure Data, a graphical programming language that is useful for developing interactive sonic events. Pure Data can be integrated into all types of projects and applications using libPd, and a similar example template can be used to write and play with your own app using libPd within an iOS app.