axol-LookingOutwards01

Mirages & miracles is an augmented reality exhibition created by Adrien M & Claire B, and was first displayed in 2017. It is meant as “a tribute to us humans as we cling to seemingly lifeless, motionless, inorganic things”.

Mirages & miracles – trailer

What fascinated me the most the first time I saw the project was how seamlessly the AR technology was integrated with the physical display. AR technology has been used on many occasions, but too often it feels detached from the physical world, and often somewhat unnecessary. This was the one project I saw and remembered where the technology was one with the art, and as we mentioned in the first class, it carries an emotional impact instead of merely leaving me to wonder how they did it. There were also VR and projection parts of the display, but it was the AR part that was most memorable to me.

To my knowledge, this project was made by a team of around 30 people (throughout the whole process, from concept to final setup). I could not find information on what tools it was created with.

tale-lookingoutwards01

The project that compelled me to take this class is MLDRAW: ML DRAWING TOOL by Connie Ye and Aman Tiwari. This project is still in progress; in fact, Connie is currently working on it as her senior BXA project. MLDRAW was the first inspirational project I encountered within the first month of freshman year. Not only did the project remind me of my childhood spent scribbling on blank paper with my friends, but it also made me wish I had had access to such a program when I was little. What astounded me more was that this project wasn’t made by some stranger out in the world who I know nothing about, but rather by two upperclassmen at CMU. I later learned that this project was made in the Interactive Art class also taught by Golan, which made me want to take this course even more, as it would greatly help me take a first step toward building the technical and creative skills needed for projects like MLDRAW.

To introduce the project itself: MLDRAW is a vector art drawing program that takes the line drawing made on the left and renders an ML-generated version of it on the right.

The project is being developed with TypeScript, choo.js, paper.js, pix2pix models, Python, and socket.io. When the user makes a sketch, each layer of the vector drawing is processed separately by the machine learning model, producing the rendered view on the right.
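
MLDRAW’s own source isn’t shown here, but as a rough sketch of the pipeline just described — assuming hypothetical event names, base64-encoded PNG layers, and a placeholder run_pix2pix function, none of which come from the actual project — the Python side of such a socket.io setup might look like this:

```python
# Hypothetical sketch only; event names and run_pix2pix are placeholders,
# not MLDRAW's actual code.
import base64

import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

def run_pix2pix(png_bytes: bytes) -> bytes:
    """Stand-in for pix2pix inference on one rasterized vector layer."""
    return png_bytes  # a real model would return the rendered image here

@sio.on("render_layer")
def render_layer(sid, data):
    # the client sends each vector layer rasterized to a base64 PNG
    png = base64.b64decode(data["png"])
    rendered = run_pix2pix(png)
    # return the rendered layer so the client can composite all layers
    sio.emit(
        "layer_rendered",
        {"layer_id": data["layer_id"], "png": base64.b64encode(rendered).decode()},
        to=sid,
    )

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```

On the browser side, the TypeScript client would presumably rasterize each paper.js layer to a canvas, emit it over the socket, and composite the returned renders into the right-hand view.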

The project was inspired by the research done on pix2pix and its potential to transform drawings into other kinds of images. As mentioned before, the project is still in progress. They are planning to add more ML models and features, refine the UI so that users clearly understand what each tool does, and potentially release it to the public for everyone to play with.

These are some screenshots and a GIF of the project from Connie’s website. I tried to create a GIF version of the Vimeo video above on my own, but Giphy says the video’s owner has disabled GIF creation.


miniverse-LookingOutwards1

“Expressions” is a video created by the artist Kynd, with sound from “Collaged Improvisation In Digital VII” by the musician Yu Miyashita.

This video is a procedurally generated animation rendered in real time in response to the music piece by Yu Miyashita. The animation is 3D and was inspired in part by the thickness and texture of oil paints.

The creation process, for me, was more inspiring than the end result. To create convincing 3D graphics in real time on a limited GPU, the artist devised their own method of rendering. This method was specific to the piece and limited in scope: perspective and lighting were fixed. An intuitive artistic understanding of how to create a convincing 3D shape (color, shading, details), combined with computational background knowledge of efficient rendering, birthed the generative algorithm behind the piece. It’s striking that they built a rendering method, something usually treated as a difficult and purely mathematical problem in the graphics field, out of artistic knowledge.

(basic idea of the rendering used in the video)
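
Kynd hasn’t published the exact technique, so purely as a minimal sketch of one common trick in this spirit — my guess, not the artist’s documented method — the “paint” can be treated as a grayscale heightfield and shaded with a single fixed light:

```python
# Speculative illustration, not Kynd's actual code: fake 3D "paint"
# shading by treating thickness as a heightfield under one fixed light.
import numpy as np

def shade_heightfield(height: np.ndarray, light=(0.5, 0.5, 0.7)) -> np.ndarray:
    """height: 2D array in [0, 1]; returns Lambertian shading in [0, 1]."""
    gy, gx = np.gradient(height)                  # slope of the paint surface
    normals = np.dstack([-gx, -gy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)                # fixed light direction
    return np.clip(normals @ light, 0.0, 1.0)     # per-pixel N . L

# shade a random blobby "thickness" map
rng = np.random.default_rng(0)
shaded = shade_heightfield(rng.random((256, 256)))
```

Because the camera and light never move, no real geometry, rasterization, or lighting pipeline is needed, which is what makes this kind of shortcut cheap on a limited GPU.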

This project would benefit from increased length. The animations produced are varied, and exploring a wider scope of what the algorithm could produce would be fulfilling for the viewer. I also think a few parts of the animation looked like sound waveforms; this felt predictable and detracted from the abstractions in the rest of the video.

The artist, Kynd, demonstrates the foundational expertise needed to create this work through their pages dedicated to teaching physics in p5.js and machine learning to artists.

OodBird – LookingOutwards01

I absolutely love the interactive piece “What Football Will Look Like in the Future” by Jon Bois. It completely changed the way I think about interactive storytelling and how you can communicate ideas to your audience. The story itself is too long and complicated for me to explain entirely, but it’s basically a discussion of a future Earth by a bunch of sentient space probes. One of my favorite parts of this piece is right at the beginning, where the viewer’s frame of reference for time is completely changed by Bois, who presents the passage of decades through a seemingly never-ending, completely blank calendar that the viewer has to scroll through in order for the story to progress.

To the best of my knowledge, this piece was made by the author on his own, though the work is hosted on sbnation.com. Jon Bois is a sportswriter and video producer, but has seemingly made no other work that is comparable to this one in terms of medium.

lampsauce-lookingoutwards01

Refik Anadol’s Melting Memories is an installation in Istanbul.

I admire the complexity and synchronicity of the work. The way certain components melt into one another and reform or reappear later has a striking aesthetic quality. I also admire the algorithms and dataset it took to generate this work. Anadol’s work often takes massive datasets and expresses them visually; one thing I admire about his works is how he uses data as a pigment to express what is normally unseen.

Anadol partnered with the Neuroscape Laboratory at UCSF and collected EEG data from participants’ hippocampus regions to create Melting Memories. Anadol’s creative team (made up of AI and CS experts) developed a neural network to convert the EEG data into procedural noise that could be visually interpolated. Given the uniqueness of this dataset, it is likely that Melting Memories was created using custom software.
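
The actual pipeline is custom and unpublished, so purely as a speculative sketch of the general idea — a scalar derived from EEG band power driving the parameters of a procedural noise field, with every name and number below invented for illustration — it might resemble:

```python
# Speculative sketch, not Anadol's pipeline: EEG band power -> noise field.
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].mean())

def noise_field(shape, amplitude, rng):
    """Toy value-noise stand-in: a lightly smoothed random field, scaled."""
    field = rng.random(shape)
    for axis in (0, 1):  # crude smoothing via shifted averaging
        field = 0.5 * (field + np.roll(field, 1, axis=axis))
    return amplitude * field

rng = np.random.default_rng(1)
eeg = rng.standard_normal(1024)            # stand-in for one EEG channel
alpha = band_power(eeg, fs=256.0, lo=8.0, hi=12.0)
frame = noise_field((128, 128), amplitude=alpha, rng=rng)
```

A real system would of course use a learned mapping (the neural network mentioned above) rather than this hand-rolled one.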

The work was inspired by Anadol’s uncle developing Alzheimer’s, which made him consider the computational foundations of memory. Future works that take inspiration from Anadol’s may make clever use of data from other kinds of bodily signals.

thumbpin-lookingoutwards01

Olivia Ross’s video “me doing my #gothexercises in my room”

I am impressed by the visual aspects of this project: Ross was able to create watery, transparent, and distorted imagery with what appear to be images of flowers. The imagery in the video seems simple but feels familiar and meaningful to me (the compressed quality of the audio adds to this feeling). In the video is a faint silhouette of the artist dancing to the song “Wuthering Heights” by Kate Bush. Self-portraits or selfies are a common visual element in the artist’s work; she claims and embraces her digital image. In a world where large tech companies design their programs and products to create behavioral addictions in their users, Olivia uses her work to help people develop a positive relationship with technology.

I don’t know how long it took the artist to create this work, but she made the piece when she was a student at the School for Poetic Computation. Her work was made in C++, using the graphics toolkit openFrameworks. The artist’s early works comprise many slit-scan self-portraits, which could have been inspired by the photographers Andrew Davidhazy and George Silk. From a technical standpoint, I do not know what opportunities or futures the project points at; however, I think Olivia’s emphasis on awareness, accessibility, and health/enjoyment in our relationships with technology is really important now and in an increasingly technological future.
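
Her tools were openFrameworks and C++, but the slit-scan idea itself is compact enough to sketch in a few lines of Python (a minimal stand-in, using synthetic frames instead of decoded video):

```python
# Minimal slit-scan sketch: take the same one-pixel-wide column from each
# video frame and lay the columns side by side, so x becomes time.
import numpy as np

def slit_scan(frames: np.ndarray, slit_x: int) -> np.ndarray:
    """frames: (num_frames, height, width, 3) uint8 video.
    Returns a (height, num_frames, 3) image built from column slit_x."""
    return frames[:, :, slit_x, :].transpose(1, 0, 2)

# example with synthetic frames; real use would decode video frames
# (e.g. with OpenCV) instead
rng = np.random.default_rng(2)
frames = rng.integers(0, 256, size=(120, 240, 320, 3), dtype=np.uint8)
image = slit_scan(frames, slit_x=160)   # 240 x 120 output
```

Because each output column comes from a different moment in time, anything that moves gets smeared and stretched, which is what gives slit-scan portraits their liquid look.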

sticks-LookingOutwards-01

JODI’s My%Desktop (2002) is a digital piece that uses a “user-friendly” Mac OS 9 operating system, manipulating the elements on the desktop through clicking and software manipulation to create a work of striking visuals and glitches. This “desktop performance” of glitches and interactions results in a chaotic frenzy of elements jumping and popping across the screen. When I first walked into this installation at MoMA in New York, I was immediately drawn to the way positive and negative space can be manipulated on a computer desktop, where highlighting an icon or folder creates a contrasting, inverse shape. I never realized how many different formations and patterns could be created by overlaying folders and opening and closing files at a rapid pace. The aesthetic of the old operating system, and how it was used as a tool to create interesting visuals, inspired me to look for outdated tools that may still be useful. After further research, I learned that JODI’s intention behind manipulating a computer desktop was to create a space where ordinary computers could be given a personality and provoke irrational behavior in viewers through overwhelming amounts of data, as they emphasize in their concept that “The Computer is a device to get into someone’s head”.

shoez-lookingoutwards01

Conway’s Game of Life was one of the first times I was exposed to generative art. The rules of the game are simple: cells are either alive or dead; any live cell with two or three live neighbors survives; any dead cell with exactly three live neighbors becomes a live cell; and all other live cells die in the next generation. The program is relatively simple to implement, but I didn’t realize the potential it had until I watched “epic conway’s game of life” on Oingo Boingo’s YouTube channel.
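
For reference, the rules above fit in a few lines; a minimal numpy version on a wrap-around grid (my own sketch, not Golly’s far more optimized code) could be:

```python
# One generation of Conway's Game of Life on a toroidal grid.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # count the eight neighbors of every cell at once using shifted copies
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # birth: dead cell with exactly 3 neighbors;
    # survival: live cell with 2 or 3 neighbors; everything else dies
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# a glider drifting across a small grid
grid = np.zeros((16, 16), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(8):
    grid = step(grid)
```

Golly itself scales to the enormous patterns shown in the video by using far more sophisticated algorithms such as Hashlife.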

Accompanied by arguably the most epic music ever created, the theme from Requiem for a Dream, Oingo Boingo showcases increasingly mind-blowing applications of the Game of Life. I admire it because it’s a testament to how seemingly simple programs can be turned into captivating art. My favorite design is when the program spells out “Golly” continuously. Art created with the Game of Life is painstaking and requires crazy amounts of precision.

The program Oingo Boingo uses can be found at http://golly.sourceforge.net. While John Horton Conway created the Game of Life, the authors of the code include Andrew Trevorrow, Tom Rokicki, and a few other contributors. It’s not clear whether Oingo Boingo created the showcased programs, but I believe most of the patterns have their own creators and the video is a compilation of them. Gosper’s glider gun is a staple of Game of Life projects, and it appears often.

Since John Horton Conway’s death from COVID-19, the Kate Vass Gallery has created an exhibition filled with works from four generative artists. Just by scanning through the online gallery, it’s clear that Conway’s impact can be felt throughout media art and its evolution.

epic Conway’s game of life

mokka- Looking Outwards 01

Mintabox

Mintabox is a user-collaborative, generative website created by the artist Joe Hedges. His intention with the website was to explore the idea of information stored within the internet space, or, as he calls it, “the information page”. Any user is able to generate a digital box that contains user-submitted text and/or an image. Hedges created the system using PHP, CSS, HTML, and JavaScript, drawing directly on digital photography, image manipulation, and oil painting.

How to Mintabox

What really drew me in about this project is the way it is so simple, yet so flavorful and thoughtful. It acts as both a time capsule and a simple platform for people around the globe to interact with each other by sending messages that are not directed at anyone in particular yet are meant for everyone at the same time (and in the form of a 2D digital box). Something so small and odd can easily bring a smile to my face, and I am here for it. You can choose the box’s pixel size, color, overall size, and a glitch/no-glitch effect, as you can see below in the various boxes on display.

A message from 2012, Joe Hedges
Mintabox, Joe Hedges

The process of choosing the character of your box seems like such a fun way to personalize how you enter this net space alongside many others.

This website has been occupied by many different users since 2012, and it is still a place people visit from time to time to leave messages even today.

Watch Joe go through different time boxes:

yanwen-lookingoutwards01

Something Similar to Me / About Seeing Things by Akihiko Taniguchi is an interactive essay on identity in the digital age. The work presents a digital space filled with 3D scans of environmental fragments from Taniguchi’s daily observations, and invites viewers to navigate the created world by controlling a 3D scan of the artist himself.

What I love about the work is that Taniguchi recontextualizes ordinary objects from physical spaces through digital processes, constructing a new environment for reflecting on the identities of these objects in digital space. The artist created both the 3D experience and the writing used in the piece. Nothing about the development process was presented, but I think the work was made with a combination of “off-the-shelf” software (maybe a 3D game engine) and custom scripts for the interactions and hardware setup.

Taniguchi listed domestic by Mary Flanagan and Legible City by Jeffrey Shaw as inspirations behind his work.

domestic, Mary Flanagan, 2003
Legible City, Jeffrey Shaw, Dirk Groeneveld, 1989

Both works deal with creating 3D digital spaces and finding connections between virtual and real-world spaces. The most intriguing part of the project, which could also point to future opportunities, is its selection of mundane objects and its direct presentation of their original forms (without reimagining or remodeling them through digital methods).

It also reminds me of how AR has been incorporated into online retail experiences. How will we present everyday objects digitally in the near future?

Here’s the video abstract of Taniguchi’s work: