Motion Capture

Motion is super intuitive for a person. Your brain says “move” and then your limbs move: you’re typing on a keyboard, or you’re dancing the night away at your friend’s housewarming party. But the task of capturing motion and interpreting that data is considerably more difficult. Sougwen Chung takes motion and interprets it as a form in a virtual space. This process of capturing data and then running it through an AI system is quite interesting, and it produces some strange results. I’m curious what other results this sort of workflow can achieve.

GENESIS (2024)


Looking Outwards 02

Street Ghosts by artist Paolo Cirio took photos of people found on Google Street View and posted them at the same physical locations where they were taken. Life-size posters were printed in color, cut along their outlines, and then affixed to the walls of public buildings at the precise spots where the people appear in Google Street View. His project challenges our perception of privacy, consent, and the boundaries between digital and real-world spaces. By making these ghostly figures visible in the urban environment, Cirio forces us to confront the pervasive nature of surveillance and the erasure of personal identity in the digital age.

The Climate Ribbon

The Climate Ribbon by Sarah Cameron Sunde is a dynamic visual installation that uses real-time data from climate sensors to reflect the ongoing changes in our environment. The project operates by capturing environmental data, such as temperature, humidity, and air quality, and translating this data into an evolving visual display that symbolizes the delicate balance of our ecosystem.

It is an arts ritual to grieve what each of us stands to lose to Climate Chaos, and to affirm our solidarity as we unite to fight against it.

What inspires me about this project is its ability to merge art and activism seamlessly; it doesn’t just represent climate change, it actively responds to it in real time, making the audience viscerally aware of the impact of their actions on the environment. However, the project could have expanded its reach by incorporating more interactive elements, allowing viewers to directly influence the data inputs or visualize how their individual carbon footprints contribute to global changes. The concept draws on earlier works of environmental and data-driven art, such as Natalie Jeremijenko’s Environmental Health Clinic, where art and technology are used to engage the public in sustainability issues.

https://www.theclimateribbon.org/
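The project’s site doesn’t document how readings become visuals, so purely as a thought experiment, here is a minimal sketch of the kind of mapping described above: three environmental readings folded into a single display color. The sensor ranges and the mapping itself are my own assumptions, not the project’s.

```python
import colorsys

def reading_to_color(temp_c, humidity_pct, aqi):
    """Map environmental readings to an RGB color (hypothetical mapping).

    Hotter temperatures shift the hue from blue toward red, worse air
    quality desaturates the color, and humidity controls brightness.
    """
    # Normalize each reading into [0, 1] over assumed sensor ranges.
    t = min(max((temp_c + 10) / 50.0, 0.0), 1.0)  # -10..40 degrees C
    h = min(max(humidity_pct / 100.0, 0.0), 1.0)  # 0..100 %
    a = min(max(aqi / 300.0, 0.0), 1.0)           # 0..300 AQI

    hue = (1.0 - t) * 0.66        # 0.66 = blue (cold), 0.0 = red (hot)
    saturation = 1.0 - 0.7 * a    # polluted air washes the color out
    value = 0.4 + 0.6 * h         # humid air reads brighter
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return int(r * 255), int(g * 255), int(b * 255)

# A warm, humid, smoggy day renders as a muted orange.
print(reading_to_color(temp_c=32.0, humidity_pct=60.0, aqi=120))
```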

Baraka Custom 70mm Camera

Baraka is a non-narrative documentary from the 1990s shot on 70mm film, featuring some remarkably complex time-lapse shots that involve panning, tilting, and zooming at the same time. This required the team to build their own custom 70mm camera. I think this is interesting because the ‘experimental’ part here doesn’t come from the camera itself but from the way the camera is controlled. Anyway, the results are stunning and unlike anything I’ve seen, so I’d call this a successful experiment.

https://drive.google.com/file/d/1Rh7J7Jholr0WAxqz_nUWkSmU22fDGHrU/view?usp=drive_link


Capturing Captures

This is actually an ongoing project by a friend (produced as part of The NUUM Collective) that I saw on my Instagram feed. Part of the project presents a stacked series of ultrawide videos of two people in a room: one holds a fixed position while the other walks forward and stops at various points in space. The visual effect is quite powerful, so I thought I’d reach out to ask about it (capturing the capture?) and learn how it was captured and some of the background behind its creation. Nun explains:

“This video is about reducing everything to the distance between 2 people, and what meaning can be derived from that… We’re using a positioning system that uses ultra wideband tech to track the position and orientation in space. We’re exploring the different combination[s] and permutations of distance, orientation, and sequencing.” Overall, I’m also drawn to this project for its effective use of small changes within a context of repetition, which is an idea I’ve been trying to explore through my work as well.
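Nun didn’t share the collective’s code, but the core quantities a UWB positioning system gives you, the distance and relative bearing between two tracked performers, are easy to sketch. A minimal example of my own (the coordinate and heading conventions are assumptions):

```python
import math

def distance_and_bearing(p1, p2, heading1_deg):
    """Distance between two tracked tags, plus the bearing of tag 2
    as seen from tag 1, relative to tag 1's own heading.

    p1, p2: (x, y) positions in meters from the positioning system.
    heading1_deg: orientation of tag 1 in degrees (0 = +x axis).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    absolute_bearing = math.degrees(math.atan2(dy, dx))
    # Wrap into [-180, 180) so "slightly to the left" reads as a small angle.
    relative = (absolute_bearing - heading1_deg + 180.0) % 360.0 - 180.0
    return dist, relative

# The fixed performer stands at the origin facing +x; the walking
# performer has stopped 3 m ahead and 1 m to their left.
print(distance_and_bearing((0.0, 0.0), (3.0, 1.0), heading1_deg=0.0))
```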

Nun further reflected that, when the collective examined the footage, differences in cultural identity led to different interpretations of how the movements read. These were preliminary findings, but those with more Western influence tended to describe the geometry and formal relationships (e.g. counterpoint, sequencing) in more defined, formal terms, while those with more Eastern influence tended to describe the relativity over time (for example, frame 2 feeling closer because frame 1 is further) in a more relational, narrative-driven way. Nun confirmed that the collective is looking to build a way to collect such emergent themes more systematically.

Finally, it was interesting to note the connection back to this class! Nun said that she’s “personally very influenced by Golan’s teaching. I’ve studied a lot of his materials — specifically these are closely related to the stuff he teaches about interpolation. Getting from point A to point B through different curves.”
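That last idea, getting from A to B through different curves, is simple to demonstrate. Here is a minimal illustration of my own (not Golan’s materials) contrasting linear interpolation with a smoothstep easing curve:

```python
def lerp(a, b, t):
    """Linear interpolation: constant speed from a to b as t runs 0 to 1."""
    return a + (b - a) * t

def ease_in_out(t):
    """Smoothstep easing: slow start, fast middle, slow stop."""
    return t * t * (3.0 - 2.0 * t)

A, B = (0.0, 0.0), (4.0, 2.0)
for i in range(6):
    t = i / 5.0
    linear = (lerp(A[0], B[0], t), lerp(A[1], B[1], t))
    eased = (lerp(A[0], B[0], ease_in_out(t)), lerp(A[1], B[1], ease_in_out(t)))
    print(f"t={t:.1f}  linear={linear}  eased={eased}")
```

Both paths visit the same endpoints; only the timing differs, which is the kind of difference the collective’s distance-over-time readings would pick up.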

Captured: Are you a Bully, Target, or Bystander?

Captured by Hanna Haaslahti

Haaslahti’s immersive digital art installation thrusts viewers into the dual role of spectator and actor, assigning them a “new identity” within a virtual community. Incorporating viewers through face-capture technology and crowd simulation, the piece explores the theme of “Bully, Target, or Bystander,” simulating the viewer’s experience of human crowd behavior in which unpredictable moods fester (Diversion Cinema).

CREDIT: Diversion Cinema


By guiding the viewer through uncomfortable and traumatic scenarios of human crowd behavior, the installation provokes strong reactions and serious reflection about our environment and the individuals who surround us. Viewers witness “themselves” within these simulated scenarios, evoking guilt, horror, or a desire to escape. Seeing oneself in such a vulnerable position, regardless of the role assigned, forces individuals to confront uncomfortable and largely unavoidable situations.

Critique

While I love the concept, rendering the simulated individuals as solid-colored bodies somewhat detracts from the reality of the message, reminding viewers that the installation is not entirely real despite seeing their own faces. Employing a more visually realistic environment, rather than a white backdrop, would further emphasize the work’s message and increase its emotional resonance.


Chain of Influences

Haaslahti’s central tool is “computer vision and interactive storytelling,” and she is primarily “interested in how machines shape social relations” (Haaslahti). Her past artwork makes this perspective clear; her focus is shaped by:

    • Computer vision techniques replicating Big Brother: the feeling of being constantly watched
    • Visual perception and real-time mapping
    • Participatory simulations that cast viewers as actors, using hyper-realistic capture and 3D modeling techniques
    • The social implications for human relationships

Haaslahti has not listed sources or a biography.

Links

https://www.diversioncinema.com/post/captured-the-installation-by-hanna-haaslahti-enters-diversion-cinema-s-line-up

https://www.diversioncinema.com/captured

https://www.hannahaaslahti.net/about/


Hugues Bruyère – Dpt

I was interested in this ongoing exploration by Hugues Bruyère @Dpt. He uses Stable Diffusion and SDXL Turbo to create the real-time images shown in the magnifying glass. The images adapt and change to reflect the background behind the lens, restyled according to the prompt Hugues assigns in Stable Diffusion. What I found interesting about this particular project is that you get a real-time comparison between what we perceive and how the computer perceives the same subject. It is a constantly shifting, adapting filter in the lens of the computer.

Video: https://www.instagram.com/smallfly/reel/C9nKGOnpnh5/
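Bruyère hasn’t published code for this piece, but the general approach, continuously restyling a camera frame with SDXL Turbo’s image-to-image mode, can be approximated with the Hugging Face diffusers library. A rough sketch (the prompt and webcam loop are my own; the pipeline settings follow the standard SDXL Turbo image-to-image recipe):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

# SDXL Turbo can restyle a frame in a single denoising step.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

prompt = "a detailed ink and watercolor illustration"  # the assigned "style"
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # BGR camera frame -> RGB PIL image at the model's native resolution.
    init = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
    # For SDXL Turbo, strength * num_inference_steps must be >= 1.
    out = pipe(prompt, image=init, strength=0.5,
               num_inference_steps=2, guidance_scale=0.0).images[0]
    cv2.imshow("machine view", cv2.cvtColor(np.array(out), cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```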

Looking Outwards #2 – Black Box Camera

The Black Box Camera, created by Jasper van Loenen, uses artificial intelligence to generate a physical print of a subject based on its own interpretation. The user points the camera at a subject and presses a button; the internal system analyzes the photo, creates a caption, and uses that caption to generate a unique image, which is then printed. This project’s use of AI in direct, real-time interaction with its environment is what I find so inspiring. What I think the creators got right here is capturing the mystery of AI’s interpretation of real environments, in real time. I think they could take it a step further by giving users more creative liberty in how the AI generates the new image, allowing ‘AI photographers’ to exist in the same way that we see ‘AI artists’ emerging. It could also be interesting to build a similar device with a more specific transformation step, i.e. using the AI to enhance the image in a particular way rather than generally recreating the photo. The image produced is limited by the descriptive text that is generated; why go from image -> text -> image when you can go straight from image -> image?

Related Technologies: Internally, the Black Box Camera uses a Raspberry Pi with a camera module to take a photo when the user presses the button. The photo is then analyzed and a caption is generated, which is used as the input prompt for DALL-E. The resulting image is printed by an internal Instax portable photo printer, whose Bluetooth protocol was reverse-engineered so it could be controlled from custom software.
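To make that pipeline concrete, here is a hedged sketch of the capture -> caption -> generate flow. The BLIP captioner and the OpenAI image API are my stand-ins for whatever van Loenen actually runs, and the Instax printing step is left as a comment since his Bluetooth code is custom:

```python
from PIL import Image
from openai import OpenAI
from transformers import BlipForConditionalGeneration, BlipProcessor

# 1. "Press the button": load a freshly captured photo.
photo = Image.open("capture.jpg").convert("RGB")

# 2. Caption the photo (a stand-in for the camera's actual captioner).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)
inputs = processor(photo, return_tensors="pt")
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)
print("caption:", caption)

# 3. Use the caption as the prompt for a DALL-E image.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = client.images.generate(model="dall-e-3", prompt=caption, size="1024x1024")
print("generated image:", result.data[0].url)

# 4. The real camera would now push the result to the Instax printer
#    over its reverse-engineered Bluetooth protocol.
```

Collapsing steps 2 and 3 into a single image-to-image model would answer the image -> text -> image question above, though it would lose the caption as a human-readable trace of how the machine “saw” the scene.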

Black Box Camera – The ‘dark chamber’ of AI

Bit Fall

This sculpture by Julius Popp uses falling streams of water to display words that are visible only as the water falls. The installation draws its words from live news feeds, and the artist’s goal was to highlight how fast information and news move in the modern era.

The only thing I would change about this project is that the installation shows only words, not images or icons. If I had access to this system, I would want to experiment with using a very high-contrast live camera feed to show human faces and bodies in the water rather than words. If it worked (which it might not), I’d be really interested in using the water to capture a kind of stop-motion performance, where the audience only sees the performer through these brief snapshots in the water.
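As a back-of-the-envelope sketch of that idea (entirely hypothetical; the installation’s real control system is not public as far as I know), each camera frame could be thresholded and sliced into columns, with each column becoming an on/off pattern for the row of water valves:

```python
import cv2
import numpy as np

VALVES = 128  # hypothetical number of nozzles across the installation

def frame_to_valve_columns(frame_gray, threshold=128):
    """Turn one grayscale camera frame into a sequence of valve patterns.

    The frame is resized so its height matches the number of valves;
    each column of the thresholded image is then one instant of the
    falling-water "print", read left to right.
    """
    h, w = frame_gray.shape
    resized = cv2.resize(frame_gray, (w * VALVES // h, VALVES))
    binary = resized > threshold  # True where the subject is bright
    return [binary[:, i] for i in range(binary.shape[1])]

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    columns = frame_to_valve_columns(gray)
    print(f"{len(columns)} valve columns of {VALVES} droplets each")
```

Whether a face would actually read at the resolution and contrast water droplets allow is the open question; this only covers the data side of the experiment.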

Additional Source: https://www.illuminateproductions.co.uk/bitfall


Dan Hoopert – What Does a Tree Sound Like?

Audio Synthesis: What Does a Tree Sound Like?

“Here a single beam of light scans from left to right, creating a 2D cross section of the shape. Using the area of this cross section MIDI data is generated using Houdini and CHOPS. This can be fed into any DAW providing a base for content driven sound design. Field recordings from the object’s natural environment triggered by this data allows a close relationship between the light and sounds that are created, completely unique to the chosen object.”

This artwork was created by London-based 3D artist Dan Hoopert. It’s a data-driven piece exploring the relationship between the visual and the sonic. It uses photogrammetry to recreate a tree in 3D space. Silent objects from the real world are given voices in the virtual realm, yet with unnatural, electronic sounds that create an intriguing sense of conflict. Data from the scanning is also visualized through particles that follow the beam of light. I’m especially captivated by the surrealist imagery of the work.
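Hoopert built this in Houdini with CHOPs; a rough Python analogue of the core idea, sweeping a cutting plane across a scanned mesh and mapping each cross-section’s area to a MIDI pitch, might look like the following. The trimesh and mido libraries, the mesh file, and the area-to-pitch mapping are all my own substitutions:

```python
import mido
import numpy as np
import trimesh

mesh = trimesh.load("tree_scan.obj", force="mesh")  # hypothetical scan
x_min, x_max = mesh.bounds[:, 0]

# Pass 1: sweep the "beam of light" (a cutting plane) across the mesh
# and record the area of each 2D cross section.
areas = []
for x in np.linspace(x_min, x_max, 64):
    section = mesh.section(plane_origin=[x, 0, 0], plane_normal=[1, 0, 0])
    if section is None:
        areas.append(0.0)
    else:
        planar, _ = section.to_planar()
        areas.append(planar.area)

# Pass 2: map each area to a pitch (bigger slice = higher note) and
# write a MIDI file any DAW can use as a base for sound design.
peak = max(areas) or 1.0
midi = mido.MidiFile()
track = mido.MidiTrack()
midi.tracks.append(track)
for area in areas:
    note = 36 + int(48 * area / peak)  # C2..C6, an arbitrary range
    track.append(mido.Message("note_on", note=note, velocity=80, time=0))
    track.append(mido.Message("note_off", note=note, velocity=0, time=120))

midi.save("tree_scan.mid")
```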