Alice Fang – Looking Outwards – 12

Andrea Gysin’s website is an inspiration for the project Jaclyn and I are proposing. Her website is full of examples of interactive type, which flow across the screen and shift with the position of the mouse. Beyond the background’s response to mouseX and mouseY, the left-aligned body text flickers and ‘rotates’ when the mouse hovers, creating a really cool character-cycling ‘loading’ effect. Beyond the construction of her website, Andrea Gysin’s other work includes a program for graphic designers to build simple, animated alphabets, as well as other tools for creating visuals and installations. A lot of her work is inspiring and along the lines of the typographic interaction that Jaclyn and I are trying to build.


A project along similar but slightly different lines is Amnon Owed’s CAN Generative Typography. Using Processing, he created an alphabet of generated letters, each with different graphic characteristics. While it does not deal with bodies of text as Andrea Gysin’s website does, the generative aspect of this video is what I find interesting; it would be really cool if we could apply a generative quality to how the lines or stanzas of the poem in our project appear on the canvas. Owed’s alphabet is not interactive, but a hybrid with the interactions seen in Gysin’s website could produce the results we want.

CAN Generative Typography from Amnon Owed on Vimeo.

A demo video of his generative typography

Project 12: Final Project

This summer, I got sucked into a web-development hole and found a couple of examples of item parallaxing that I found extremely exciting. I thought I would take this opportunity to craft the effect entirely in JS and create a paper-cutout look. The example is just a page where the user scrolls around and the image changes perspective. I want to do something similar, where the sketches seem to sit on different planes as the user interacts with the webpage. Essentially, that means moving each type of sketch at a different rate. Depending on how this translates to code, I’m going to try to recreate these sketches and apply effects to them so they look like cutouts of paper.

example of images on the canvas

An additional goal would be to try to get it to react to mouseX and mouseY to control the parallax effect.
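The core of moving "each type of sketch at different rates" is just multiplying the scroll distance by a per-layer rate, so that slower layers read as farther away. A minimal sketch of that math in plain JavaScript — the layer names and rate values here are invented for illustration, not taken from any existing code:

```javascript
// Each "paper cutout" layer moves at its own fraction of the scroll
// distance; a rate near 0 barely moves (far away), a rate near 1
// moves with the page (close to the viewer).
const layers = [
  { name: "background", rate: 0.2 },
  { name: "midground",  rate: 0.5 },
  { name: "foreground", rate: 0.9 },
];

// Compute each layer's vertical offset for the current scroll position.
function parallaxOffsets(scrollY, layers) {
  return layers.map((layer) => ({
    name: layer.name,
    offsetY: scrollY * layer.rate,
  }));
}

// In a browser this would run on every scroll event, e.g.:
// window.addEventListener("scroll", () => {
//   for (const { name, offsetY } of parallaxOffsets(window.scrollY, layers)) {
//     document.getElementById(name).style.transform = `translateY(${offsetY}px)`;
//   }
// });
```

The same function could take mouseY instead of scrollY for the mouse-driven variant.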

Looking outwards 12: Parallaxing

I’ve always been interested in JavaScript features on websites, so I’m going to try to create a papercut-style artwork with a parallax effect.

I bring up this example because parallaxing creates an illusion of depth that I think very much enhances the digital experience. (click 0.25x to get the full effect)

As seen here, the artwork is directly interactive with the user’s mouse. I like this because it is a subtle interaction that changes the feeling of the entire webpage.

See the Pen Papercut effect – mouse parallax by Julien Barreira (@JulienBarreira) on CodePen.

Both of these examples were developed with a mixture of HTML, CSS, and JS. I’d love to see if I can compute this in pure JavaScript.
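In pure JavaScript, the mouse-driven version of the effect reduces to offsetting each layer by its depth times the mouse’s distance from the center of the window. A rough sketch of that calculation — the depth value and sign convention are my own assumptions for the example, not Barreira’s actual code:

```javascript
// Offset one layer based on the mouse's distance from the window center.
// Layers with larger depth values drift farther, and drifting opposite
// the mouse makes a layer read as sitting "behind" the page.
function mouseParallax(mouseX, mouseY, width, height, depth) {
  const dx = mouseX - width / 2;   // horizontal distance from center
  const dy = mouseY - height / 2;  // vertical distance from center
  return { x: -dx * depth, y: -dy * depth };
}

// In a browser, each papercut layer would get its own depth:
// document.addEventListener("mousemove", (e) => {
//   const { x, y } = mouseParallax(e.clientX, e.clientY,
//                                  innerWidth, innerHeight, 0.05);
//   layer.style.transform = `translate(${x}px, ${y}px)`;
// });
```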

Mimi Jiao – Looking Outwards 12 – Section E

Glitched image generated by pixel relocation via sound

I stumbled upon the user avseoul on Vimeo while looking for final project inspiration. Many of their works are created using Unity 3D and/or creative code to combine real-time audio or video with the visuals on screen, altering them live. Avseoul uses a lot of organic shapes that mimic cells, mountains, and water droplets; paired with interesting textures and colors, these create really interesting graphics that integrate sound and visuals. I am really intrigued by their glitched-image-by-sound piece and the Audio Reactive Slit Photo-Scan Test, in which an image is taken and constantly reprocessed and altered based on a song that is playing. As the song continues, the image becomes more and more distorted; it’s an interesting way of visualizing the changes throughout the song. These alterations are almost a way of visualizing audio, and they nearly cross into the domain of data visualization. I find this a really interesting starting point for exploring deeper ways information can be visualized. One thing I would like to see from this piece is how it translates into three-dimensional space; I think it would be really cool if these changes were incorporated on the Z axis. These works provide a really interesting starting point for Sophia and me, and we hope to branch off these ideas and explore further into WEBGL.

Alice Fang – Project 12 – Proposal

I am planning to work with my classmate Jaclyn Saik to create an interactive poem. We plan to use one of our favorite poems, “Still I Rise” by Maya Angelou, not only because it’s an excellent piece of writing but also because her message feels especially pertinent in today’s political and social climate. The poem is 43 lines and 9 stanzas long, and we plan to figure out a way to break it up and display it on separate slides, which the user can move through as they continue to read and interact. We want to create interactions specific to the different lines (or couplets, or stanzas). For example, for the line “I’m a black ocean, leaping and wide, / Welling and swelling I bear in the tide,” we plan to animate the text based on the mouse position to imitate waves.
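One way to get that wave behavior is to give each character of the line a vertical sine offset whose amplitude follows the mouse. The per-character math is plain JavaScript; the wavelength and the mouseY scaling shown in the comment are placeholder values we would tune, not a finished design:

```javascript
// Vertical offset for character i of a line of text, so the letters
// ripple like a wave. In a p5.js sketch, amplitude might come from
// mouseY and phase from frameCount so the wave animates over time.
function waveOffset(i, amplitude, phase, wavelength = 8) {
  return amplitude * Math.sin(phase + (i * 2 * Math.PI) / wavelength);
}

// Possible use inside a p5.js draw() loop:
// for (let i = 0; i < line.length; i++) {
//   text(line[i], x + i * charWidth,
//        y + waveOffset(i, mouseY / 20, frameCount * 0.1));
// }
```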

We were inspired by the work of programmer and poet Allison Parrish, who creates a lot of work involving interactive text and generative poetry.

Some sketches and storyboards for the ways users can interact with the lines of text

Dani Delgado – Looking Outwards 12

The two projects I looked at this week focus on combining sound with visuals and generative work. The first project is called “Ichographs I”. This project, created by Yiannis Kranidiotis, is an audiovisual composition that explores the relationship between these two components by transforming colors into sound frequencies. He took the colors from classical paintings and turned them into audio, which I think is a super interesting concept. Website.

Screencap of the code generating audio waves from the painting’s colors

The second project I looked at is a generative work for the rock group NAFF Chusma. This piece, created by Thomas Sanchez Lengeling, uses real-time graphics and sound to create visually stunning animations. Website

A screencap of the animation

Both projects were coded (at least in part) in C++ and created visual artwork from sound frequencies. I’m very interested in this intersection of audio and visual, not only because I find the ability to make invisible soundwaves somehow tangible fascinating, but also because code allows us to do this in such seamless and vibrant ways. These projects have both visual and intellectual intrigue, which is something I would like to reflect in my final project as well.
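As a rough illustration of the colors-to-frequencies idea (this is my own guess at a mapping, not Kranidiotis’s actual method), one could map a pixel’s hue linearly onto an audible frequency range:

```javascript
// Map a hue (0-360 degrees) onto an audible frequency range, so warm
// and cool colors sampled from a painting produce different pitches.
// The 200-2000 Hz default range is an arbitrary choice for the example.
function hueToFrequency(hue, minHz = 200, maxHz = 2000) {
  const t = (hue % 360) / 360;      // normalize hue to 0..1
  return minHz + t * (maxHz - minHz);
}
```

Feeding values like these into an oscillator, one per sampled color, would sonify a painting in the same spirit.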

^ Yiannis Kranidiotis’ work

^ Thomas Sanchez Lengeling’s work

Mimi Jiao – Project 12 Proposal – Section E

Sophia Kim and I plan on collaborating for this final project. We want to further explore interactive sound implementation and WEBGL. We started by exploring existing work and discovered code that integrates sound and visuals, using the frequency and amplitude of imported songs to alter an imported image. We first looked at how static imported images are altered by sound in this video, then branched off from static images to more dynamic, generative shapes through WEBGL. We found this interactive particle equalizer really interesting, and we definitely want to play around with geometries. Since our current skillsets are not developed enough to create something as complicated and fleshed out as that equalizer, we want to stay confined to shapes like ellipses, boxes, and custom shapes generated by basic math functions like sin, cos, and tan. From this, we want to play with the idea of engaging multiple human senses to create an experience. The audience can have a heightened experience because of the mix of visuals and audio, and the sound can make the visuals easier to comprehend. In a way, the visuals will almost be a method of data visualization for the structure of the song.
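In p5.js, the sound library’s p5.Amplitude and p5.FFT objects give exactly the song data we’d need; the part we would design is the mapping from those values to shape parameters. A minimal sketch of mapping an amplitude level (0–1) to an ellipse radius — the radius range is a placeholder we would tune:

```javascript
// Map an audio level in [0, 1] to a radius in [minR, maxR].
// With p5.sound, the level would come from new p5.Amplitude().getLevel().
function levelToRadius(level, minR = 20, maxR = 200) {
  const clamped = Math.min(1, Math.max(0, level)); // guard against spikes
  return minR + clamped * (maxR - minR);
}

// Possible use in a p5.js sketch:
// let amp;
// function setup() { createCanvas(400, 400); amp = new p5.Amplitude(); }
// function draw() {
//   background(0);
//   ellipse(width / 2, height / 2, levelToRadius(amp.getLevel()));
// }
```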

Eliza Pratt – Project Proposal

For my final project, my inner 8-year-old has been begging me to make an interactive dress-up game. I’m excited to do this since I plan on drawing all the clothes and then importing them into my code. It’ll have some click-and-drag features that snap into place when items are close to the body, as well as accessories and hairdo selections. I also plan on having customizable colors for eyes, hair, lips, and maybe the clothes as well. I’m excited because I’ve always been interested in designing for children, and young girls in particular. As a design student, creating things that are fun and playful has always been at the forefront of what I see myself doing in the future. While it’s true that games like this already exist, the prospect of both designing and coding my own validates that I now have the skills to create something I’ve wanted since I was little.
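The snap-into-place behavior can be as simple as checking, on mouse release, whether a dragged item landed within some distance of its target slot on the body. A small sketch of that check — the 40-pixel threshold is a guess to be tuned against the drawings:

```javascript
// On mouse release, snap a dragged clothing item to its target slot
// if it was dropped close enough; otherwise leave it where it is.
function snapToSlot(item, slot, threshold = 40) {
  const d = Math.hypot(item.x - slot.x, item.y - slot.y);
  if (d <= threshold) {
    return { x: slot.x, y: slot.y, snapped: true };
  }
  return { x: item.x, y: item.y, snapped: false };
}
```

In a p5.js sketch this would run inside mouseReleased(), once per draggable item.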

Jaclyn Saik-Looking Outwards 12

My final project is going to be focused on typography and type interactions. I want to explore different ways I can enhance the reading experience of a poem on screens. I looked around for artists who work with text interactions in JavaScript and found some pretty interesting people.

The first project I want to talk about is programmer and poet Allison Parrish’s Articulations, which scans a ton of open-source poetry and generates smaller poems from it. Her work is both an art piece and a critique of social norms, since it points out trends in poetry that highlight what humans are most compelled to write expressively about. Articulations was also compiled into a physical book, which I find inspiring: something as technical as coded poetry can still be published in print.

The cover of Parrish’s book, which was released in early 2018.

The other project I found was quite different. Artist Bruce Luo creates Processing sketches that imitate natural, organic movements such as wind, rock formations, and waves. One of his sketches in particular, Ripples (shown below), is a really beautiful animation driven by mouse activity.

Ripples is an interactive sketch that creates both randomized movement and movement dependent on mouse position.

In our final project, I want to synthesize ideas from both of these projects. Parrish’s generative poetry is a really interesting way to combine code and type, and as we continue to brainstorm how to implement our chosen poem, I’ll keep her approach in mind as inspiration. I also want to use an interaction similar to Luo’s, except that in our project the moving objects will all be type.

Jaclyn Saik-Project 12-Proposal

I am planning to work with my classmate Alice Fang to create an interactive poem. We plan to use one of our favorite poems, “Still I Rise” by Maya Angelou, not only because it’s an excellent piece of writing but also because her message feels especially pertinent in today’s political and social climate. The poem is 43 lines and 9 stanzas long, and we plan to figure out a way to break it up and display it on separate slides, which the user can move through as they continue to read and interact. We want to create interactions specific to the different lines (or couplets, or stanzas). For example, for the line “I’m a black ocean, leaping and wide, / Welling and swelling I bear in the tide,” we plan to animate the text based on the mouse position to imitate waves.

We were inspired by the work of programmer and poet Allison Parrish, who creates a lot of work involving interactive text and generative poetry.

Some sketches and storyboards for the ways users can interact with the lines of text