Fluid Leaves is a project made by Reinoud van Laar for the tea boutique ‘Tee & Cupp’ in Xi’an. The project generates prints of non-overlapping leaves onto paper cups, arranged in a fluid pattern similar to tea leaves floating in water.
The elegance of each print, with its delicate shapes and simple color palette, initially drew me into the project. I also found the application a great means of branding for the tea shop: by generating patterns that share a recognizable style while each remaining perceptually distinct, the shop can maintain a uniform look while ensuring each customer has a drink that is special in its own way.
This project uses Processing libraries such as Geomerative, ControlP5, Toxiclibs, and Mesh, and applies techniques such as Perlin noise, displacement maps, and fluid dynamics to create the water-like movements. Each leaf is generated in Processing by randomly varying certain characteristics, such as the traits of its veins and stem, to create a unique leaf shape. Additionally, the liquid feel of the pattern is created by fluid dynamics mapped to vector paths.
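The noise-driven displacement described above can be sketched in a few lines. This is my own minimal illustration in Python, not the project's Processing code, using simple value noise as a stand-in for Perlin noise to push points along a vector path:

```python
import math
import random

random.seed(7)

# Lattice of random values in [-1, 1]; value noise interpolates between them.
_grid = [random.uniform(-1, 1) for _ in range(256)]

def value_noise(x):
    """1-D value noise: smoothly interpolate random values at integer lattice points."""
    i = int(math.floor(x))
    t = x - i
    t = t * t * (3 - 2 * t)          # smoothstep easing
    a = _grid[i % 256]
    b = _grid[(i + 1) % 256]
    return a + (b - a) * t

def displace(path, amplitude=5.0, frequency=0.1):
    """Offset each (x, y) point on a path by smooth noise, keeping the path coherent."""
    return [(x + amplitude * value_noise(frequency * i),
             y + amplitude * value_noise(frequency * i + 100))
            for i, (x, y) in enumerate(path)]

leaf_outline = [(float(x), 0.0) for x in range(20)]   # stand-in for a leaf's vector path
wavy = displace(leaf_outline)
```

Because the noise is continuous, neighboring points move by similar amounts, which is what gives displaced outlines a fluid rather than jittery look.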
hash2ash by panGenerator is an installation that uses multiple modalities to convey a unique, individualized interaction between the visitor and the work. When a visitor uploads a selfie through the provided link, a granular rendering of it appears on a large display, and as soon as the image appears, all the particles that make it up begin to fall and disintegrate. The digital rendering is accompanied by real falling pebbles (“ashes”) underneath the display, adding an auditory and tangible dimension to what is usually a purely virtual act: taking selfies. The essential idea of the project is to convey the fear of losing all of your digital records to technical failure.
This project is one of my favorites, as the digital and physical renderings are beautifully linked together through code. I like how controlled the “waterfall” effect is on the screen and how precisely the animation matches the falling grains below it. Each grain feels alive when you take a closer look. I’m inspired by the way the computation imitates real-life physics yet opens a window into a fictional world.
I’m not sure how many people were involved in making the piece, but the studio that produced it is relatively small, so I imagine fewer than 20 people were involved. I’m fairly confident the artists developed custom software to create it.
Mirages & miracles is an augmented reality exhibition created by Adrien M & Claire B, and was first displayed in 2017. It is meant as “a tribute to us humans as we cling to seemingly lifeless, motionless, inorganic things”.
Mirages & miracles – trailer
What fascinated me most the first time I saw the project was how seamlessly the AR technology was integrated with the physical display. AR has been used on many occasions, but too often it feels detached from the physical world, and often somewhat unnecessary. This was the one project I saw and remembered where the technology was one with the art; as we mentioned in the first class, it delivers an emotional impact instead of merely leaving me wondering how they did it. There were also VR and projection parts of the display, but the AR part was the most memorable to me.
To my knowledge, this project was made by a team of around 30 people (throughout the whole process, from concept to final setup). I could not find information on what tools it was created with.
The project that compelled me to take this class is MLDRAW: ML DRAWING TOOL by Connie Ye and Aman Tiwari. The project is still in progress; in fact, Connie is currently working on it as her senior BXA project. MLDRAW was the first inspirational project I encountered within my first month of freshman year. Not only did it remind me of my childhood days scribbling on blank paper with my friends, it also made me wish I had access to such a program when I was little. What astounded me even more was that the project wasn’t made by some stranger out in the world, but by two upperclassmen at CMU. I later learned that it was made in the Interactive Art class also taught by Golan, which made me want to take this course even more, as it would help me take a first step toward building the technical and creative skills for projects like MLDRAW.
To introduce the project a bit more: MLDRAW is a vector drawing program that takes the line drawing made on the left and renders an ML-generated interpretation of it on the right.
The project is developed with TypeScript, choo.js, paper.js, pix2pix models, Python, and socket.io. When the user makes a sketch, each layer of the vector drawing is processed separately by a machine learning model, producing the rendered view on the right.
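The layer-wise flow described above might be sketched like this. Everything here is hypothetical, my own invented names and an identity function standing in for a pix2pix-style model, not MLDRAW's actual code:

```python
def render_drawing(layers, run_model):
    """layers: list of rasterized layer images (here, plain 2-D lists of ints);
    run_model: stands in for a pix2pix-style image-to-image model."""
    rendered = [run_model(layer) for layer in layers]   # one model pass per layer
    return composite(rendered)

def composite(images):
    """Stack layers back-to-front; nonzero pixels in upper layers win."""
    out = [row[:] for row in images[0]]
    for img in images[1:]:
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                if v:
                    out[y][x] = v
    return out

# Two tiny 2x2 "layers"; with an identity model, compositing just overlays them.
layers = [[[1, 0], [0, 0]], [[0, 2], [0, 0]]]
result = render_drawing(layers, lambda img: img)
```

Processing each layer independently, as the post describes, means one layer's strokes cannot bleed into another layer's model output before the final composite.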
The project was inspired by research on pix2pix and its potential to transform drawings into other kinds of images. As mentioned before, the project is still in progress: they are planning to add more ML models and features, refine the UI so users clearly understand what each tool does, and potentially release it to the public for everyone to play with.
These are some screenshots and a GIF of the project from Connie’s website. I tried to create my own GIF version of the Vimeo video above, but Giphy says the video’s owner has disabled GIF creation.
“Expressions” is a video created by the artist Kynd, with sound from “Collaged Improvisation In Digital VII” by the musician Yu Miyashita.
This video is a procedurally generated animation rendered in real time in response to the music piece by Yu Miyashita. The animation is 3D and was inspired in part by the thickness and texture of oil paints.
The creation process, for me, was more inspiring than the end result. To create convincing 3D graphics in real time on a limited GPU, the artist devised their own rendering method. The method was specific to the piece and limited in scope: perspective and lighting were fixed. An intuitive artistic understanding of how to make a shape read as 3D (color, shading, detail), combined with computational knowledge of efficient rendering, produced the generative algorithm behind the piece. It’s striking that they built a rendering method, normally a difficult and purely mathematical part of the graphics field, out of artistic knowledge.
(basic idea of the rendering used in the video)
This project would benefit from increased length. The animations produced are varied, and exploring a wider range of what the algorithm can produce would be fulfilling for the viewer. I also think a few parts of the animation resembled sound waveforms, which felt predictable and detracted from the abstraction in the rest of the video.
The artist, Kynd, demonstrates the foundational expertise needed to create this work through their pages dedicated to teaching physics in p5.js and machine learning for artists.
I absolutely love the interactive piece “What Will Football Look Like in The Future” by Jon Bois. It completely changed the way I think about interactive storytelling and how you can communicate ideas with your audience. The story itself is too long and complicated for me to explain entirely, but it’s basically a discussion of a future Earth by a group of sentient space probes. One of my favorite parts of the piece is right at the beginning, where the viewer’s frame of reference for time is completely changed by Bois, who presents the passage of decades through a seemingly never-ending, completely blank calendar that the viewer has to scroll through in order for the story to progress.
To the best of my knowledge, this piece was made by the author on his own, though the work is hosted on sbnation.com. Jon Bois is a sportswriter and video producer, but has made seemingly no other work comparable to this one in terms of medium.
Refik Anadol’s Melting Memories is an installation in Istanbul.
I admire the complexity and synchronicity of the work. The way certain components melt into one another and later reform or reappear has a striking aesthetic quality. I also admire the algorithms and the dataset it took to generate this work. Anadol’s work often takes massive datasets and expresses them visually. One thing I admire about his works is how he uses data as a pigment to express what is normally unseen.
Anadol partnered with the Neuroscape Laboratory at UCSF and collected EEG data from participants’ hippocampal region to create Melting Memories. Anadol’s creative team (made up of AI and CS experts) developed a neural network to convert the EEG data into procedural noise that could be visually interpolated. Given the uniqueness of this dataset, Melting Memories was likely created with custom software.
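Since the actual pipeline is not public, here is a purely illustrative sketch of the general idea of letting a signal drive interpolation between noise fields. Every name and number is invented; the fake list stands in for an EEG-derived feature over time:

```python
import random

random.seed(1)

# Two random 4x4 "noise fields" to interpolate between, frame by frame.
W = H = 4
field_a = [[random.random() for _ in range(W)] for _ in range(H)]
field_b = [[random.random() for _ in range(W)] for _ in range(H)]

def normalize(signal):
    """Rescale a 1-D signal to the range [0, 1]."""
    lo, hi = min(signal), max(signal)
    return [(s - lo) / (hi - lo) for s in signal]

def frame(t):
    """Blend the two fields by weight t in [0, 1]."""
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(field_a, field_b)]

eeg = normalize([3.1, 4.7, 2.2, 5.9])      # fake stand-in for EEG feature values
frames = [frame(t) for t in eeg]
```

The point of the sketch is only the mapping: each data sample selects a position between two visual states, which is one simple way data can act as "pigment."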
The work was inspired by Anadol’s uncle developing Alzheimer’s, which led him to consider the computational foundations of memory. Future works that take inspiration from Anadol’s may make clever use of data from other kinds of bodily signals.
I am impressed by the visual aspects of this project: Ross was able to create watery, transparent, and distorted imagery from what appear to be images of flowers. The imagery in the video seems simple but feels familiar and meaningful to me (the compressed quality of the audio adds to this feeling). In the video is a faint silhouette of the artist dancing to the song “Wuthering Heights” by Kate Bush. Self-portraits and selfies are a common visual element in the artist’s work; she claims and embraces her digital image. In a world where large tech companies design their programs and products to create behavioral addictions in their users, Olivia uses her work to help people develop a positive relationship with technology.
I don’t know how long it took the artist to create this work, but she made the piece while she was a student at the School for Poetic Computation. The work was created with C++, using the graphics toolkit openFrameworks. The artist’s early works comprise many slit-scan self-portraits, which could have been inspired by the artists Andrew Davidhazy and George Silk. From a technical standpoint, I do not know what opportunities or futures the project points at; however, I think Olivia’s emphasis on awareness, accessibility, and health and enjoyment in our relationships with technology is really important now and in an increasingly technological future.
JODI’s My%Desktop (2002) is a digital piece that takes the “user-friendly” Mac OS 9 operating system and manipulates the elements on the desktop through clicking and software manipulation to create a work of striking visuals and glitches. This “desktop performance” of glitches and interactions results in a chaotic frenzy of elements jumping and popping across the screen. When I first walked into this installation at the MoMA in New York, I was immediately drawn to the way positive and negative space can be manipulated on a computer desktop, where highlighting an icon or folder creates a contrasting, inverse shape. I never realized how many different formations and patterns could be created by overlaying folders and opening and closing files at a rapid pace. The aesthetic of the old operating system, used as a tool to create interesting visuals, inspired me to look for outdated tools that may still be useful. After further research, I learned that JODI’s intention in manipulating a computer desktop was to create a space where ordinary computers could be given a personality and could provoke irrational behavior in viewers through overwhelming amounts of data, emphasizing their concept that “The Computer is a device to get into someone’s head”.
Conway’s Game of Life was one of the first pieces of generative art I was ever exposed to. The rules of the game are simple: cells are either alive or dead; any live cell with two or three live neighbors survives; any dead cell with exactly three live neighbors becomes a live cell; and every other cell is dead in the next generation. The program is relatively simple to implement, but I didn’t realize the potential it had until I watched “epic conway’s game of life” on Oingo Boingo’s YouTube channel.
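Those rules fit in a few lines of Python. This is a minimal sketch of my own, not the implementation used in the video:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live-cell coordinates."""
    # Count how many live neighbors every cell adjacent to a live cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Born with exactly 3 live neighbors; survives with 2 or 3; all others die.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider travels one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
board = glider
for _ in range(4):
    board = step(board)
# board is now the same glider shifted by (1, 1)
```

Storing only live cells as a set keeps the board unbounded, which is how patterns like gliders can travel indefinitely.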
Accompanied by arguably the most epic music ever created, the theme from Requiem for a Dream, Oingo Boingo showcases increasingly mind-blowing applications of the Game of Life. I admire it because it’s a testament to how seemingly simple programs can be turned into captivating art. My favorite design is when the program spells out “Golly” continuously. Art created from the Game of Life is painstaking and requires crazy amounts of precision.
The program Oingo Boingo uses can be found at http://golly.sourceforge.net. While John Horton Conway created the Game of Life, the authors of the code include Andrew Trevorrow, Tom Rokicki, and a few other contributors. It’s not clear whether Oingo Boingo created the showcased programs; I believe most of the patterns have their own creators and the video is a compilation of them. Gosper’s glider gun is a staple of Game of Life projects and appears often.
Since John Horton Conway’s death from COVID-19, the Kate Vass Gallery has created an exhibition filled with works from four generative artists. Just scanning through the online gallery, it’s clear that Conway’s impact can be felt throughout media art and its evolution.