surveil

It’s always fascinating to see people use technology without awareness of, or concern for, digital privacy and security. Just think about the abundance of live security camera footage on websites like insecam, where you can peruse the globe through frames that others have deemed important enough to watch. With just one website, you get access to what other people value or find interesting in their vicinity.

If all that footage were available to me, I’d want to see what still frames look like across the globe. Did different people, in different locations, choose to aim their cameras at similar things? What do people with different backgrounds and experiences value in common?

Looking Outwards 01

Sele CT-EDS, a CT-based Explosive Detection System (EDS) used in some airports for security screening and checked-baggage inspection, is something I found interesting. The device produces not only 2D but also 3D scans of bags, so people don’t have to take out their large electronic devices. If I were to use it, I’d be curious to see how different types of objects, both organic and inorganic, look under the EDS, exploring things that “cannot be opened” or “revealed,” and then bringing them back to life at real scale using spatial projection or holograms to allow deeper interaction.

Imaging and Detection - Analogic

Barcode Reader

This seems super trivial, but I think there’s something beautiful in a ‘camera’ that looks at a 2D image and just returns a number. In reality all digital cameras do this; they just hide it behind a front end that converts those numbers back into something we can recognize as an image, whereas a barcode scanner doesn’t take any bs and just spits out a number directly. In theory this only works for very specifically formatted images (barcodes), but now AI is able to embed this kind of data into semi-normal-looking images…
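To illustrate the “image in, number out” idea, here is a tiny sketch in Python using the pyzbar library (my own choice for illustration, not something a hardware scanner actually runs), where barcode.png is a placeholder for any ordinary photo containing a barcode:

```python
# Minimal sketch: hand a picture to the decoder and get the encoded number back.
# Assumes pyzbar and Pillow are installed (pip install pyzbar pillow) and that
# "barcode.png" is a placeholder filename for an image containing a barcode.
from PIL import Image
from pyzbar.pyzbar import decode

for symbol in decode(Image.open("barcode.png")):
    # Each detection carries the symbology (EAN13, CODE128, ...) and the raw data.
    print(symbol.type, symbol.data.decode("utf-8"))
```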

Perceiving Peripersonal Space

Yeseul Song’s Two Subtle Bodies (2022) uses bone-conduction headphones and capes with embedded electronics to generate sound compositions that respond dynamically in real time to the movements of two participants in a space. The idea is to allow two individuals co-existing within a space to experience each other’s peripersonal space–which Song describes as extending “the senses of self and others by recognizing and perceiving this soft and fluid space between us” [1]. Song further explains that “[t]he subtle body is a concept that appears in Taoism and Dharmic religions to indicate bodies that are neither solely physical nor solely spiritual, in contrast to the mind-body dualism that has dominated western thought. In neurology, the space surrounding a body is called Peripersonal Space and enabled by visuo-tactile senses” [2].

I actually had the chance to try this project with a friend when Yeseul exhibited it at NYU (see photos)! In terms of how I’d want to use it, I’d be very curious to experiment with this system in the context of butoh/butoh-inspired dance. One of the founders of butoh, Kazuo Ohno, once described seeing with his feet and that the body is covered with so many eyes that it’s “virtually impossible to count them all” [3]. How would we perceive/interact with space differently if we had the aid of such a tool? It might also be interesting to modify this tool to perceive non-human bodies.

Here are some links to more information about this work:

[1] Two Subtle Bodies. An essay by Yeseul Song | by More Art | Medium

[2] Two Subtle Bodies – Yeseul Song

[3] Kazuo Ohno’s World: From Without & Within – Google Books

LoRaWAN Environmental Sensor

LoRaWAN Environmental Sensors: Capturing the Pulse of the Environment

One intriguing capture device that has caught my attention is the LoRaWAN Environmental Sensor. These sensors are capable of monitoring various environmental parameters such as air quality, temperature, and humidity across large areas. The idea of using real-time data to create art is what truly inspires me about these devices.

LoRa environmental sensors are environmental sensors equipped with LoRa nodes, such as temperature, humidity, air pressure, and indoor air quality sensors. Unlike traditional environmental sensors, LoRa environmental sensors send the collected data to LoRa gateways through the LoRaWAN protocol, and the gateways then forward it to a server. Because LoRa environmental sensors use LoRa technology to send data, their networking is also simpler than that of traditional sensors.

If I had access to LoRaWAN sensors, I would use them to develop a data-driven art installation visualizing fluctuating environmental conditions in urban spaces, transforming otherwise-unnoticed data into dynamic art forms. The installation would be a living reflection of the community we live in, encouraging viewers to consider their connection to the urban environment. I think the project could be made even more compelling by incorporating interactive elements where viewers could contribute data or see the impact of their immediate actions in real time, adding a layer of personal responsibility and immediacy to the environmental message. This concept is inspired by earlier data-driven projects like Sarah Cameron Sunde’s The Climate Ribbon, which also uses environmental data to create responsive art.
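To make the data side concrete, here is a rough sketch of how I imagine the installation could ingest readings, assuming the sensors report through a LoRaWAN network server (such as The Things Network) that republishes uplinks over MQTT. The broker address, topic pattern, credentials, and payload fields below are all placeholders, not details from any specific deployment:

```python
# Sketch: subscribe to LoRaWAN sensor uplinks over MQTT and map each reading
# to a color a visual installation could display. Broker, topic, and the JSON
# payload structure are illustrative assumptions, not a real deployment.
import json
import paho.mqtt.client as mqtt

def reading_to_color(temperature_c):
    """Cold readings trend blue, hot readings trend red."""
    t = max(0.0, min(1.0, temperature_c / 40.0))      # normalize roughly 0..40 C
    return (int(255 * t), 40, int(255 * (1 - t)))

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    temp = payload["uplink_message"]["decoded_payload"]["temperature"]  # assumed field
    print(msg.topic, "->", reading_to_color(temp))

client = mqtt.Client()                                # paho-mqtt 1.x style constructor
client.username_pw_set("my-app@ttn", "MY_API_KEY")    # placeholder credentials
client.on_message = on_message
client.connect("eu1.cloud.thethings.network", 1883)   # placeholder broker
client.subscribe("v3/my-app@ttn/devices/+/up")        # placeholder topic pattern
client.loop_forever()
```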

https://www.niubol.com/Product-knowledge/LoRa-environmental-sensors.html

https://www.youtube.com/watch?app=desktop&v=d_TclUAKN10

3D Gaussian Splatting – Polycam

Over the summer I got to read a little about 3D Gaussian Splatting, an innovative method for rendering complex 3D scenes from a video. It is not a device per se, but I thought it was an interesting technique to share. It renders faster and produces a remarkable level of detail compared to earlier neural approaches like NeRFs. Instead of modeling the subject with polygons, the system first runs structure-from-motion on the photos (frames from the input video) to recover camera positions and a sparse point cloud, and then models the scene as dots, or Gaussian points, in 3D space. Because each Gaussian is a soft, blurry blob, the renders take on a brushstroke-like style.
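Full 3D Gaussian Splatting is far more than a snippet, but a toy 2D version hints at why the results feel painterly: the image is literally a sum of soft, overlapping blobs. This is only an illustration of that intuition, not the actual pipeline:

```python
# Toy illustration only (not the real 3DGS pipeline): build an image by
# accumulating soft 2D Gaussian "splats", each with its own center, size, color.
import numpy as np
import matplotlib.pyplot as plt

H, W, N_SPLATS = 256, 256, 400
rng = np.random.default_rng(0)
ys, xs = np.mgrid[0:H, 0:W]
canvas = np.zeros((H, W, 3))

for _ in range(N_SPLATS):
    cx, cy = rng.uniform(0, W), rng.uniform(0, H)   # splat center
    sigma = rng.uniform(4, 20)                      # splat size
    color = rng.uniform(0, 1, size=3)               # splat color
    weight = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    canvas += weight[..., None] * color * 0.15      # soft, brushstroke-like blend

plt.imshow(np.clip(canvas, 0, 1))
plt.axis("off")
plt.show()
```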

Examples from Polycam.

Blind Camera

Artist Diego Trujillo Pisanty created a camera that uses AI to turn sound into images. The images he creates are really surreal and captivating to me; you can sometimes see basic outlines of the things he’s capturing (like the metro or a car), but there’s an otherworldly quality to the images that makes me feel kind of dizzy.

I think this is a really interesting idea, but his scope was pretty narrow (only Mexico City), and I wish I could see the tool used in other settings, like a park or the countryside, since the noise there is totally different from a bustling city. If I had access to this tool, I would want to use it to record some of the aspects of everyday life that are known for causing hearing damage, like planes, concerts, and gunshots. I’m interested in seeing a visual representation of the hearing damage being caused.
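As I understand it, Blind Camera relies on a generative model trained on paired city sounds and images, which is beyond a quick sketch, but a spectrogram is the simplest way to turn a recording of one of those damaging sounds into a picture, and it could serve as a naive visual baseline to compare against. The filename here is a placeholder:

```python
# Naive baseline (not the artist's method): render a sound as an image via its
# spectrogram. "recording.wav" is a placeholder for a plane/concert recording.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("recording.wav")
if samples.ndim > 1:                       # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.title("what a loud moment looks like")
plt.show()
```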

Additional Source: https://www.creativeapplications.net/sound/blind-camera-point-and-shoot-sound-to-image/

The PSiFI: Capturing Human Emotion

The PSiFI 

“Personalized Skin-Integrated Facial Interface” (PSiFI) is an advanced human emotion recognition system using a flexible wearable mask device designed to “recognize and translate human emotions” in real time (Nature Communications).

The PSiFI utilizes strategically placed multimodal triboelectric sensors (TES) to gather data by detecting changes in movement, facial strain, and vibration on the body, capturing speech, facial expressions, gestures, and various physiological signals (temperature, electrodermal activity).

The device incorporates a circuit for wireless data transfer to a machine learning algorithm, a Convolutional Neural Network (CNN), that classifies facial expressions and speech patterns. The more the classifier trains, the more accurate its analysis of emotion becomes (Nature Communications).
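Just to picture the classification step, here is an illustrative sketch, not the researchers’ actual architecture: a small 1D convolutional network that takes fixed-length windows of multichannel sensor signals and outputs an emotion class. The channel, class, and window sizes are assumptions:

```python
# Illustrative sketch only (not the PSiFI authors' model): a small 1D CNN that
# classifies windows of multichannel sensor signals into emotion categories.
import torch
import torch.nn as nn

N_CHANNELS = 6    # number of sensor channels (assumed)
N_CLASSES = 7     # number of emotion categories (assumed)
WINDOW = 256      # samples per window (assumed)

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # collapse the time axis
    nn.Flatten(),
    nn.Linear(32, N_CLASSES),
)

dummy_batch = torch.randn(8, N_CHANNELS, WINDOW)   # 8 fake sensor windows
print(model(dummy_batch).shape)                    # -> torch.Size([8, 7])
```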

Image: PSiFI. Credit: Nature Communications

 

Inspiration/Research

I’ve conducted research in which we used various devices to collect physiological data, and the sensors are ridiculously bulky and temperamental. The ability to capture a variety of human emotions using a combination of machine learning, sensors, and computational power within one device is incredible!

In my art practice, I often explore ways to evoke emotional reactions from the audience. I could imagine creating installations where participants encounter thought-provoking or uncomfortable situations while wearing a motion-sensor mask that analyzes and tracks their every movement and physiological response. This would not only reveal the external, visible reactions to the artwork but also provide insight into more internal, unseen responses.

 

Critique

The researchers’ decision to test the PSiFI in a VR environment allowed for an extremely controlled research setting. By employing a VR-based data concierge that adjusted content (music or a movie trailer) based on user reactions, they demonstrated the system’s ability to accurately keep up with an individual’s emotions.

 

Links

https://thedebrief.org/groundbreaking-wearable-technology-can-accurately-detect-and-decipher-human-emotions-in-real-time/

https://www.nature.com/articles/s41467-023-44673-2#citeas

 

Brain Scan

 

Many devices we saw Monday in class were ones I had never thought of as capture devices for creating art, including medical equipment like the ultrasound transducer we experimented with. This inspired me to research medical equipment, and here’s a short list of common technologies we have for scanning brain images, which I found very interesting:

CT (Computed Tomography): An X-ray-based scan that beams X-rays through the head, producing a picture that looks like a horizontal slice of the brain.

Image: How Neuroimaging Can Yield Better Diagnostic Information | Dartmouth

MRI (Magnetic Resonance Imaging): These scans construct an image of the brain by passing a magnetic field over the head. Hydrogen nuclei in the brain respond to this magnetic field by resonating and emitting a signal back. The scanner records this and turns it into a highly accurate image of the brain.

Image: Brain MRI | Adobe Stock

PET (Positron Emission Tomography): PET involves injecting a small amount of radioactive material into the body, which then accumulates in the brain. The scanner detects this radiation to create images that highlight areas of functional activity, producing a multi-color image of the brain that resembles a heat map.

Image: Amyloid PET Scans May Drastically Change Alzheimer's Diagnosis and Care, Study Finds

PET technology is particularly interesting for its ability to visualize brain activity, which I think could be used to create dynamic, time-lapse pieces representing changes in brain activity over time. For example, changes in brain activity during different emotional states could be translated into a series of animations.
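As a sketch of how such a time-lapse might be assembled, the snippet below assumes access to a dynamic (4D) PET volume saved as a NIfTI file and exports one frame per time point; the filename and slice choice are placeholders:

```python
# Sketch: turn a dynamic (4D) PET volume into a sequence of animation frames.
# Assumes a NIfTI file "dynamic_pet.nii.gz" shaped (x, y, z, time); the filename
# and the chosen axial slice are placeholders.
import nibabel as nib
import matplotlib.pyplot as plt

volume = nib.load("dynamic_pet.nii.gz").get_fdata()   # shape: (X, Y, Z, T)
z = volume.shape[2] // 2                               # a middle axial slice

for t in range(volume.shape[3]):
    plt.imshow(volume[:, :, z, t].T, cmap="hot", origin="lower")
    plt.axis("off")
    plt.title(f"frame {t}")
    plt.savefig(f"pet_frame_{t:03d}.png", dpi=150)     # stitch into a video later
    plt.clf()
```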

Looking Outwards 1

The project Breeze is a device created by the artist to capture and visualize wind using wind data and a robotic arm. The artist developed it during the COVID-19 lockdown, seeking to bring the outdoor weather into an isolated indoor space. I find the idea of visualizing wind through the swaying of plants particularly compelling. The use of a typically indoor plant to represent outdoor wind is an interesting contradiction, reflecting the artist’s experience of isolation during the lockdown. This parallel between the indoor plant and the outdoor wind highlights the tension between lockdown and the natural world. Since this project involves just a single device, I think it would be even more impactful if multiple devices were used together, potentially forming an indoor garden, which might create a more immersive experience. This project reminds me of Janet Echelman’s installations, as both attempt to capture the forms and directions of air and wind.
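I don’t know how Breeze actually reads its wind data, but here is the kind of loop I imagine driving it: poll a public weather API for wind speed and map it to a sway angle for the arm. The Open-Meteo endpoint, the coordinates, and the send_angle() stand-in for the actuator are all assumptions on my part:

```python
# Rough sketch (not the artist's implementation): poll wind speed and map it to
# a sway angle. The API endpoint and send_angle() placeholder are assumptions.
import time
import requests

API = ("https://api.open-meteo.com/v1/forecast"
       "?latitude=40.44&longitude=-79.99&current_weather=true")

def send_angle(angle_deg):
    # Placeholder: a real build would drive a servo or robotic arm here.
    print(f"sway to {angle_deg:.1f} degrees")

while True:
    wind_kmh = requests.get(API, timeout=10).json()["current_weather"]["windspeed"]
    angle = min(wind_kmh, 60.0) / 60.0 * 45.0   # 0-60 km/h mapped to 0-45 degrees
    send_angle(angle)
    time.sleep(60)                              # update about once a minute
```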

Breeze