Final Project – Passers-by Palette

This interactive art project uses Kinect and openFrameworks to capture the patterns on participants’ clothes as a 3D point cloud mesh.

Original idea and set-backs:

The project originally hoped to capture the most saturated color of each participant’s clothing and imprint these colors onto a palette. However, random artifacts introduced by the Kinect sensor produced spuriously high saturation values, so the idea was revised.

Set-up and Process:

The project uses Kinect to detect people within a distance of 500 to 1500 mm (0.5 to 1.5 m) and to produce point cloud information. Using openFrameworks, the point cloud can be drawn on the canvas in real time.
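The depth-range clipping step can be sketched as follows. This is a minimal Python stand-in for the actual openFrameworks/ofxKinect C++ code; the frame layout and the function name are illustrative assumptions, not the project’s real code:

```python
# Detection range from the write-up: 0.5 m to 1.5 m.
NEAR_MM, FAR_MM = 500, 1500

def points_in_range(depth_frame, near=NEAR_MM, far=FAR_MM):
    """Turn a 2D grid of depth readings (millimetres) into (x, y, depth)
    points, keeping only pixels inside the clip range. In the real
    openFrameworks app this corresponds to adding one ofMesh vertex
    per in-range depth pixel."""
    points = []
    for y, row in enumerate(depth_frame):
        for x, depth in enumerate(row):
            if near <= depth <= far:
                points.append((x, y, depth))
    return points

# Toy 2x3 "depth frame": only the 700 mm and 1200 mm readings survive.
frame = [[300, 700, 1600],
         [1200, 0, 2000]]
print(points_in_range(frame))  # [(1, 0, 700), (0, 1, 1200)]
```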

The code allows the program to take a “screenshot” of the current window. Then, using Kyle McDonald’s ofxCv addon, the program can detect and contour an area of a specific hue. By choosing the desired hue and adjusting a tolerance threshold, the program can grab an area of pattern on the detected person. It then adds the captured pattern to a public ofMesh variable, Imprint.
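The hue-selection logic can be sketched like this (in Python for illustration; the project itself uses ofxCv in C++, and the function name and wrap-around handling here are my assumptions):

```python
import colorsys

def hue_mask(pixels, target_hue, tolerance):
    """Flag pixels whose hue lies within `tolerance` degrees of
    `target_hue` (0-360). The comparison wraps around the hue circle,
    so 350 degrees and 10 degrees are treated as 20 degrees apart,
    matching how a hue-based contour finder treats the circular
    hue channel."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        diff = abs(h * 360 - target_hue)
        mask.append(min(diff, 360 - diff) <= tolerance)
    return mask

# Pure red (hue 0), green (120), and a reddish orange (about 20):
print(hue_mask([(255, 0, 0), (0, 255, 0), (255, 85, 0)],
               target_hue=0, tolerance=25))  # [True, False, True]
```

In the real app, the pixels flagged by a mask like this are the ones contoured out and appended to the Imprint mesh.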

This interactive project therefore builds up an imprinted point-cloud collage of clothing patterns.

Passers-by Palette Proposal

For this project, I am hoping to construct an interactive art device that captures the clothing colors of passers-by using Kinect and openFrameworks.

Using Kinect, the program should be able to detect human bodies passing by the device. Then the color pixels of each person can be extracted and processed. The program will start off with an empty white palette; as more and more people pass by, the color pixels from their bodies are detected and shown on the palette for a period of time. The people interacting with the piece gradually fill in the palette.
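The proposed palette behavior can be sketched roughly like this (hypothetical Python; the class and method names are invented for illustration): each detected color stays visible for a fixed lifetime and then fades away.

```python
import time

class Palette:
    """Minimal sketch of the proposed palette: sampled clothing colors
    are shown for `lifetime` seconds, then dropped."""

    def __init__(self, lifetime=30.0):
        self.lifetime = lifetime
        self._entries = []  # (rgb, timestamp) pairs

    def add(self, rgb, now=None):
        """Record a color sampled from a passer-by."""
        self._entries.append((rgb, time.time() if now is None else now))

    def visible(self, now=None):
        """Return the colors still within their lifetime, pruning the rest."""
        now = time.time() if now is None else now
        self._entries = [(c, t) for c, t in self._entries
                         if now - t < self.lifetime]
        return [c for c, _ in self._entries]

p = Palette(lifetime=30.0)
p.add((200, 30, 30), now=0.0)   # first passer-by
p.add((30, 30, 200), now=20.0)  # second passer-by
print(p.visible(now=25.0))  # both colors still shown
print(p.visible(now=35.0))  # the first color has expired
```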

Color of Monotony

For this project, I am using stop motion and paint to retrace my steps in the past two weeks.

This project was inspired by my friends and me joking about how we come out of fall break replenished but will be the walking dead within a week, and about how difficult it is to find time for the things we actually want to do.

By using the form of a stop-motion animation on a white canvas, I can visualize the monotony of a school week in a straightforward, convincing way. I attempt to capture how being trapped in the cycle of school “muddies up” a person. The colors, even though bright and clean at first, become tainted and lose their shine after repetition upon repetition of this cycle.

The final step in the video is of a depressing color, but it breaks out of the cycle and walks toward a new direction, represented by the white spot. (Hopefully that will be me tomorrow :< )

Process:

I recorded the locations that I’ve been to every day since the end of fall break on my phone:

Then, I used red paint to represent Mellon Institute, yellow paint to represent campus/ETC, and blue to represent my home on the canvas. I then used my two fingers to “retrace” my steps among these locations.

To create the stop-motion video, I reused the Processing code for stop-motion capture from the lecture. I also implemented a quick auto-clicker in Python that clicks the left mouse button every five seconds. This frees my hands from touching the computer repeatedly while I’m manipulating paint.
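The auto-clicker’s timing loop looks roughly like this (a Python sketch; the real script would use a GUI-automation library for the actual mouse press, which is injected here as a callback so the example stays self-contained):

```python
import time

def auto_click(click_fn, interval=5.0, clicks=10, sleep_fn=time.sleep):
    """Call click_fn every `interval` seconds, `clicks` times.
    In the real script, click_fn would press the left mouse button."""
    for _ in range(clicks):
        click_fn()
        sleep_fn(interval)

# Demo with a log instead of a real mouse click, and a no-op sleep
# so the demo finishes instantly:
log = []
auto_click(lambda: log.append("click"), interval=5.0, clicks=3,
           sleep_fn=lambda s: None)
print(len(log))  # 3
```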

Software testing using my lil pumpkin 🙂 :

Setup for recording the stop motion:

Inspiration:

A project that greatly inspired me is a performance piece by Zhilin (直林) for an Alienware laptop promotion. In this performance, he captures the monotony of a stay-at-home working routine trapped inside a house, which has become the norm because of the pandemic.

Person in Time Proposal

My ambidexterity is one of the traits that define me and help me understand myself. The idea for this project is to focus on and record the movement of my hands over time as they perform different tasks and react to different situations, in the hope that this would represent me as a person, a woman, a friend, a child, a scientist, a coder, an artist, a musician, etc. The hands themselves are embedded with my personality.

I am debating how to make the footage appear more intuitive and more “experimental”. My initial thought is inspired by the “MJ white glove” piece, where I can center my hands throughout the video. I am also thinking of doing something interesting with hand motion capture.

Two Cut Reading

According to the reading, a single cut that separates the “object of observation” from the technical conditions that produce it would render the data meaningless. Instead, two cuts create a “gap” in which the object appears between two different sets of instruments: an “opening cut” that generates the situation or object of interest, and a “closing cut” at which its manifestation is measured.

My current idea for Person in Time is to interpret a person through their hands alone by recording their hand movements, and specifically their ambidexterity, throughout the day as they perform different tasks and respond to different conditions. In this case, recording the hand-movement footage would be the “opening cut”, while the deliberate interpretation of that footage would be the “closing cut”.

Muted Identities

Introduction:

This project selects, explains, and visualizes through AI image generation the ignored Chinese names of international students.

link to “Muted Identities” project website

Inspiration:

As an international student from China, one of the things I hear the most while studying in the US is that “Chinese students tend to be shy and quiet”. In reality, the personalities of international students are often “muted” when they speak another language. While they might be sharp, funny, or wild in their mother tongue, such sparkling qualities do not always get “translated” when they’re living in an English-speaking country.

This project attempts to recover the identities hidden by the language barrier by unveiling these students’ original names, which are often ignored and forgotten in an English-speaking environment.

Process:

The participants were asked to sign their English and Chinese names digitally in their usual handwriting. They were then asked to describe the meaning of their name.

The description of the name was then given as a prompt to the Midjourney AI image generator, which produced a corresponding image.

The participants would then give feedback on the generated image, such as preferred composition, a specific art style, or a color scheme, until the image was fine-tuned to their own interpretation of their name.

Example: The Chinese name of Bella is 刘宇辰, which means the universe, stars, and dragon. The description “A Chinese dragon in a starry universe” was given as a prompt for Midjourney AI generator. The image went through several iterations until the name owner was satisfied with the result.

Explanation:

The website displays the English names of eight international students. Flip cards “reveal” the true names of these students upon hovering. Further explanation and a visualization of each name can be found when a flip card is clicked.

An interesting observation can be made when comparing their given names in Chinese and their chosen names in English. The comparison subtly reveals a process of choosing a self-identity, a “doppelgänger”, to represent them in a new country. The one who chose the feminine “Bella”, for example, was given a masculine name at birth, while the one who chose the self-created word “Rigeo” was given a commonly used name at birth.

Feedback:

The project, while conceptually very simple, resonated with people more than I expected. When it was shown in class and privately to my friends, many more international students, including Korean and Japanese students, showed interest in adding their names to the list. They were enthusiastic about creating visual representations of their names.

Typology Proposal – Chosen Doppelgänger

Proposal

Idea 1: Chosen Doppelganger

The idea is to capture the duality of the names of international students, particularly Chinese students. This duality of given and chosen names is an interesting phenomenon in the international student community: one name given in their native language, representing the best wishes and hopes of their parents; and one chosen by themselves as a new identity they relate to more as they begin a new chapter in a new country. The process of choosing a name is a process of self-discovery. This duality is often unseen or ignored because their native-language names sound like strange “gibberish” to people who don’t speak Chinese.

The challenge is finding a way to appropriately capture and visualize the participants’ native-language names. It could be done through a written description by the participants themselves, an AI-generated image, etc. As a test, I used Night Café to generate an image from my roommate’s name, and the result is quite promising. However, I do wish to find a more objective, automated capturing process.

翀(chōng)- A bird flying straight upwards.

Fun Fact: This idea is inspired by the inside joke between me and my friends that, while my Chinese name is a macho macho boy’s name, my English name is as feminine as it gets. Surprise, my given name literally represents “the universe”, “the stars”, and “dragon”.

Idea 2: Tiny Neighborhoods (I might turn this into a personal project 🙂)

The idea is to generate more microscopic photos and add creative interpretations to them to transform each photo into a cute little world with characters and neighborhoods.

Reading1 Reflection

I find it interesting that the early years of photography were so tightly tied to the sciences. Before this reading, I had always assumed that the invention of photography was almost entirely artistically driven. I’m also surprised by how complex early photography could be. In particular, “it was often the case in the nineteenth century that professional or amateur photographers were hired to work side by side with astronomers, microscopists and surveyors” (19). In the current world, where taking a photo is such an insignificant task, it seems hard to imagine a time when photography required multiple people working together to decipher an image.

An artistic opportunity made possible by scientific imaging that interests me is biomedical art. Using open resources of scanned biomedical models (viruses, protein structures, etc.) or doing manual modeling, artists can create biomedical render art or animations. While these creations were once for educational use, they are now more and more diverse and artistic.

Looking Outwards – DALL·E

The project I would like to share is DALL·E, a machine-learning program that generates images from natural-language descriptions. DALL·E has become insanely popular in recent months among artists who use it to create “AI art”.

There is plenty of room for improvement in DALL·E’s future. Currently, because of how the algorithm “mushes together” images found online, generated images usually have poorly defined edges and distorted features. They appear abstract, like a “fever dream” in which every object resembles something but the details are all wrong. This is part of the reason artists now mostly use it to generate abstract concept art, or fix up the images afterwards by drawing over them. With better algorithms, however, it could generate much more precise images in the future.

DALL·E has sparked heated discussions among traditional as well as digital artists about whether “machines will overtake human artists”. This artificial intelligence is drastically reshaping the art industry and forcing artists to rethink the definition, meaning, and scope of art. It reminds me of how the invention of photography caused revolution and panic in the traditional art world. DALL·E, and the many other AI-based image generators, should not be feared in the art world. They mark a turning point in art-related technology. Now we can turn the images in our heads into pictures with the click of a button; what’s next?

An example of how DALL·E 1 vs DALL·E 2 compare for the prompt “a painting of a fox sitting in a field at sunrise in the style of Claude Monet” (Image credit: OpenAI)