There are a couple of people I’ve seen on YouTube who have made their own laser microphones, which I find very compelling. The first video uses a photosensor to detect vibrations from a mock-up window (glass pane), while the one below it builds on this method but changes it to send different kinds of sound to a speaker. Very cool and opens up a lot of possibilities! DIY-ing this seems really fun and could allow me to do different kinds of audio analysis or storytelling in a piece.
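The core trick in a laser mic is that sound vibrates the pane, which modulates the intensity of the reflected beam; the photosensor voltage is then just the audio riding on a large constant offset. A minimal numpy sketch of that recovery step (the function name and the simulated signal are my own assumptions, not from either video):

```python
import numpy as np

def recover_audio(photodiode_signal):
    """Recover an audio waveform from a raw photodiode reading.

    The reflected laser's intensity is the pane's vibration riding on a
    large steady offset; subtracting the mean and normalizing leaves
    just the audio-band signal.
    """
    sig = np.asarray(photodiode_signal, dtype=float)
    sig = sig - sig.mean()          # remove the DC (steady-beam) component
    peak = np.abs(sig).max()
    return sig / peak if peak > 0 else sig

# Simulated capture: a 440 Hz tone vibrating the pane,
# riding on a strong constant laser reflection.
sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 440 * t)
photodiode = 3.0 + 0.05 * voice     # big offset + tiny modulation
recovered = recover_audio(photodiode)
```

In a real build the `photodiode` array would come from an ADC or the computer’s audio input, and you’d likely band-pass filter it before playback.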
I’m really interested in the history of making these devices (in wartime espionage) and how they can be/are used in more musical ways now.
The last video describes radios and how interference in them works – not really a project, but still really fascinating. I had a lot of attachment to my own radio as a kid, so it’s really nice seeing how they work.
I’ve been thinking about doing some sort of self portrait related to mental illness (don’t hold me to it though!!), so I decided to research other artists who have focused on similar topics.
Artist Daniel Regan used his medical records to understand himself through other people’s eyes. The piece places pages of his medical records side by side with self portraits taken around the same time.
I’m a huge fan of analog video/capture so this piece really interested me. I think her portraits have “quiddity” but they’re also a form of performance art which is really interesting.
Elizabeth Jameson uses her own MRIs as self portraiture. I’m really intrigued by the idea of using “functional” imaging, like MRIs and X-rays, as a form of fine art.
This is a simple capture of pupil dilation. I am interested in potentially capturing changes in pupil size with varying light exposure and/or caffeine levels.
2. A time lapse of a baseball game shot with a tilt-shift lens:
The person shot an entire baseball game with a tilt-shift lens and then converted it into a time lapse. This was essentially the same idea I had for using the tilt-shift lens for this project, except I was going to record a CMU football or volleyball game. It seems this idea is a bit unoriginal, but it’s still good to see what’s out there.
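Converting full-length footage to a time lapse is just keeping every Nth frame. A tiny sketch of the arithmetic (the function name and the 300x speedup are my own illustrative choices):

```python
def timelapse_indices(n_frames, speedup):
    """Indices of the frames to keep when speeding footage up by `speedup`x."""
    return list(range(0, n_frames, speedup))

# 3 hours of 30 fps game footage squeezed down at 300x:
source_frames = 3 * 60 * 60 * 30        # 324,000 frames
kept = timelapse_indices(source_frames, 300)
# 1,080 kept frames -> a 36-second clip at 30 fps
```

In practice a tool like ffmpeg or a video editor does this selection for you; the point is just how aggressively a multi-hour game compresses.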
3. The use of a pupil detection/tracking algorithm:
This video demonstrates the work of a computer vision research project to enable fast and accurate eye tracking. If I proceed with capturing pupil dilation or eye motion this could be a great tool.
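The simplest version of the idea behind such trackers is that the pupil is usually the darkest region of an eye image, so its centroid can stand in for its position. A toy numpy sketch on a synthetic image (all names and thresholds here are my own assumptions; the research system in the video is far more sophisticated):

```python
import numpy as np

def find_pupil_center(gray):
    """Estimate the pupil center as the centroid of the darkest pixels.

    gray: 2-D array of intensities (0 = black). Real eye trackers add
    ellipse fitting, reflections handling, etc.; this is just the core idea.
    """
    threshold = gray.min() + 0.1 * (gray.max() - gray.min())
    ys, xs = np.nonzero(gray <= threshold)   # coordinates of dark pixels
    return xs.mean(), ys.mean()              # (column, row) centroid

# Synthetic "eye": a bright frame with a dark disk (the pupil) at (60, 40).
h, w = 80, 120
yy, xx = np.mgrid[0:h, 0:w]
eye = np.full((h, w), 200.0)
eye[(xx - 60) ** 2 + (yy - 40) ** 2 < 10 ** 2] = 20.0
cx, cy = find_pupil_center(eye)
```

Tracking pupil *dilation* rather than position would instead count the dark pixels per frame as a proxy for pupil area.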
Quantified Self Portrait (One Year Performance). Michael Mandiberg. 2017.
Quantified Self Portrait (One Year Performance) is a frenetic stop motion animation composed of webcam photos and screenshots that software captured from the artist’s computer and smartphone every 15 minutes for an entire year; this is a technique for surveilling remote computer labor.
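The capture mechanism is essentially a scheduler that fires a grab every fixed interval. A minimal sketch, assuming an injected `capture` callback (Mandiberg’s actual software and its names are not documented here):

```python
import time

def capture_loop(capture, interval_s, n_shots):
    """Call `capture()` every `interval_s` seconds, `n_shots` times.

    In a setup like Mandiberg's, the callback would grab a webcam frame
    and a screenshot; here it is left as an injected function.
    """
    frames = []
    for _ in range(n_shots):
        frames.append(capture())
        time.sleep(interval_s)
    return frames

# A year of 15-minute intervals:
shots_per_year = 365 * 24 * 60 // 15    # 35,040 capture events
```

At that cadence a year yields roughly 35,000 image pairs, which is what makes the resulting stop motion so frenetic.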
Quantified Self Portrait (Rhythms). Michael Mandiberg. 2017.
Quantified Self Portrait (Rhythms) sonifies a year of the artist’s heart rate data alongside the sound of email alerts. Mandiberg uses himself as a proxy to hold a mirror to a pathologically overworked and increasingly quantified society, revealing a personal political economy of data. The piece plays for one full year, from January 1, 2017 to January 1, 2018, with each moment representing the data of the exact date and time from the previous year.
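One straightforward way to sonify a heart-rate stream is to map each BPM reading to the pitch of a short tone. A numpy sketch of one plausible scheme (the linear BPM-to-frequency map is my own assumption; Mandiberg’s actual mapping is not documented here):

```python
import numpy as np

def sonify_heart_rate(bpm_series, seconds_per_sample=0.25, sr=8000):
    """Render each heart-rate reading as a short sine tone.

    Higher BPM -> higher pitch, via a simple linear map.
    """
    chunks = []
    for bpm in bpm_series:
        freq = 220 + (bpm - 40) * 4      # 40 BPM -> 220 Hz, 100 BPM -> 460 Hz
        t = np.arange(int(sr * seconds_per_sample)) / sr
        chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

# Four readings become one second of audio at the defaults above.
audio = sonify_heart_rate([62, 75, 110, 58])
```

Writing `audio` out with any WAV library (e.g. the stdlib `wave` module) would make it playable; layering in email-alert sounds, as the piece does, would be a separate mixing step.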
A four-month performance on Instagram that fooled her followers into believing in her character and following her journey from ‘cute girl’ to ‘life goddess’, bringing fiction to a platform that has been designed for supposedly “authentic” behaviour, interactions and content.
EEG AR: Things We Have Lost. John Craig Freeman. 2015.
Freeman and his team of student research assistants from Emerson College interviewed people on the streets of Los Angeles about things, tangible or intangible, that they have lost. A database of virtual lost objects was created based on what people said they had lost, along with avataric representations of the people themselves.
I thought these might not be related, but included them anyway:
Clocks. Christian Marclay. 2010.
Twenty-four hours long, the installation is a montage of thousands of film and television images of clocks, edited together so they show the actual time.
Cleaning the Mirror. Marina Abramovic. 1995.
Different parts of a skeleton – the head, the chest, the hands, the pelvis, and the feet – appear on five monitors stacked on top of each other, forming a slightly larger-than-life (human) body. Over the parts of the skeleton one sees the hands of the artist scrubbing the bones with a floor brush.
This project by Michael Sedbon created two artificial ecosystems of photosynthetic bacteria competing for light to explore the emergent behavior of technologies as life forms. I found this concept particularly interesting because I’m currently taking a class on modeling cognitive and neurobiological systems of adaptive decision making, which shares some conceptual similarities.
This project by Christian Mio Loclair is driven by a custom GAN solution called RayGan, which learns possible human poses through 120,000 postures of human bodies without understanding motion sequences. Visual elements like textures and colors are also generated through a separate GAN trained on a curated dataset from visual artists. I found the visualization of these human poses intriguing.
This project created by the Amana Prototyping Lab transformed basic rhythmic gymnastics ribbon movements into algorithms to create a systematized performance simulated in a 3D virtual environment and executed by the robot arm. The goal was to represent the sensibility of human movement using harsh robot mechanics, which is a concept I found quite interesting as it experimented with the contrast between fluid human motion and the rigid nature of robots.
First, Littered MVMNTS [https://www.instagram.com/litteredmvmnts/]: kind of an inverse portraiture of a human mimicking the motion of trash through choreographed movements. I like how the artist uses his art to bring attention to an environmental issue (and utilizes the Instagram/TikTok platforms to do so, e.g. he selects 15-second snippets), his choices of costume, and that, in line with his message, he picks up the trash when he’s done.
Next are these silver gelatin photographic prints of religious statues defaced by Khmer Rouge looters between 1975 and 1979. The photographs were taken by Zhi Wei Hiu’s uncle and stored for 30-40 years in non-archival conditions before Zhi developed them, resulting in signs of fungal growth on the negatives. Zhi further coats the surface of the paper with zinc oxide, an abrasive which captures the motion of a silver stylus across the paper. I thought this was an interesting example of “temporal capture” because the photographic duplicates of the real objects, left to the devices of nature and time, also capture the decay of the captured object.
In this photo Zhi is demonstrating how they want the photo to be displayed for an upcoming exhibition.
Finally, Pipilotti Rist’s Open My Glade (Flatten), which shows a video projection of her seemingly pressed against a glass surface, moving side to side, on the large windows at the New Museum’s entrance. This work appeals to me as an example of portraiture because it’s kind of grotesque (and with some scale) while being in a genre and exhibition setting that usually aims for flawlessness. I also like the paradoxes in this image: how we are both given and denied a sense of corporeality, and how the display medium is suggested by the media (a window) but is not the one that caused the effect (pressing against glass).
I’m still in my 3DGS era, so I’m looking a lot at 3D methods of capturing people, things, etc.
I’m not gonna lie, I didn’t understand anything more than the images on this ppt, BUT I think the premise of being able to simultaneously record movement + texture + mesh is so cool, especially since capturing a texture with a mesh is time consuming in and of itself (which makes recording in 3D the environment around a singular object and all of its faces kinda computationally expensive)¹. I’m reminded of the library from Ready Player One, the Halliday Journals, where players can view a moment from Halliday’s life from multiple different angles, zooming in and out, etc.
This project seems to be using MoCap + 3D reconstruction + stop motion-esque principles to create a VR/AR experience for demos. (not really what I’m looking for but the pipeline is interesting)
3D Temporal Scanners:
This project on motion analysis uses photogrammetry with markers and something they’re calling a 3D temporal scanner to analyze the gaits of adults. It reminds me of the chronophotography of a walking man by Marey (c. 1883), mainly because both projects focus on human movement and how data can be extrapolated from non-linear motion.
3DTS is lowkey what I’m interested in making (but maybe for 3DGS?). Didn’t really get satisfactory results from a Google search for 3D temporal scanners (wtf is wrong with Google’s search engine nowadays, so many ads), so I turned to my best friend ChatGPT:
A 3D temporal scanner is a type of device that captures and analyzes changes over time in three-dimensional space. It can track the motion or deformation of objects and environments across multiple time points, essentially combining spatial and temporal data into a single model or dataset. These scanners are typically used in various fields like medicine, animation, architecture, and scientific research. – ChatGPT 4o
ChatGPT says that some 3dTS can do motion capture + reconstruction + texture capture over a period of time and space (dependent on system).
A very expensive 3D temporal scanner/capture system -> OptiTrack
Aydin Buyuktas’ work is kinda also along the lines of changing the way we view the world, in the same way 3D reconstruction can change how we experience things. It makes me kinda wanna give the drone a go, especially since there have been quite a few projects in the PostShot Discord that talk about using drone footage for 3DGS.
Note¹: I’m actually not quite sure if this is true, but I’m drawing from my experience of doing the captures for 3DGS cause I must’ve spent like 1hr on capturing all angles of some of my jars 😭.
01A Out-of-Body Experience, by Tobias Gremmler, Adam Zeke
The piece is a fascinating visualization of an ethereal concept, where viewers experience a sense of seeing their body from a detached perspective. The project uses a combination of Kinect and Oculus to create a mesmerizing point-cloud rendering, which blurs the line between the physical self and its virtual representation. I appreciate its use of technology to manifest an intangible experience like an OBE, inviting the audience into a deeply personal exploration of presence and perception, understanding reality from an external viewpoint.
01B Virtual Actors in Chinese Opera, by Tobias Gremmler
Created for a theater production that fuses Chinese Opera with New Media, the virtual actors are inspired by shapes, colors and motions of traditional Chinese costumes and dance. The project made me think of how costumes and fashion could reshape a human body.
I like its concept of blending traditional art forms with cutting-edge technology, which is fascinating in the context of temporal capture: it immortalizes fleeting, live performances in a digital space. This form of capture moves beyond merely recording an event, allowing the audience to explore nuance. For example, it explores how traditional Chinese opera costumes and gestures, when captured digitally, become abstract patterns of motion, revealing their spiritual essence. I feel like the virtual actor, rooted in tradition, becomes a new entity through the lens of reinterpretation.
Gif of The Johnny Cash Project, in which more than 250,000 people individually drew frames for “Ain’t No Grave” to make a crowdsourced music video.
03 Human Blur Series – Penang Blur, Sven Pfrommer
“This mixed media collection is a series based on photographs I took while traveling PENANG / MALAYSIA in 2015. Back in my studio I added painting and mixed media techniques and finalized the work on acrylic, metal, resin coated wood panel or canvas. All works are limited edition of 10.”
I’m impressed by the blurring effect, which evokes a sense of transience, aligning with the idea of capturing people in time—fleeting moments that cannot be grasped in full detail. Instead, I experience the layered complexity of movement, where people are represented as part of a flowing system rather than discrete subjects. This also mirrors the way memory often works: impressions of people are sometimes remembered as hazy or fleeting. Lastly, this abstraction of human figures eliminates individuality, allowing the viewer to focus on the essence of motion, light, and shadow rather than on personal identity.