Hugues Bruyère – Dpt

I was interested in this ongoing exploration by Hugues Bruyère @Dpt. He uses Stable Diffusion and SDXL Turbo to create the real-time images shown on the magnifying glass. The images adapt and change to reflect whatever is behind the lens, restyled according to the prompt Hugues assigns in Stable Diffusion. What I found interesting about this particular project is that you get a real-time comparison between what we perceive and how the computer perceives the same subject: a constantly shifting, adapting filter in the lens of the computer.

Video: https://www.instagram.com/smallfly/reel/C9nKGOnpnh5/
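For a sense of how this kind of live restyling works, here is a minimal Python sketch of image-to-image generation with SDXL Turbo via the Hugging Face diffusers library. Bruyère's actual pipeline (camera capture, latency tricks, display on the lens) is not public, so the style prompt, input frame, and parameters here are illustrative assumptions, not his setup.

```python
# A minimal img2img sketch with SDXL Turbo (diffusers). SDXL Turbo is
# distilled for very few denoising steps and no classifier-free
# guidance, which is what makes near-real-time frame rates plausible.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Stand-in for a webcam frame of whatever sits behind the lens.
frame = load_image("background_frame.jpg").resize((512, 512))

styled = pipe(
    prompt="a hand-drawn ink illustration",  # assumed style prompt
    image=frame,
    num_inference_steps=2,
    strength=0.5,        # how far the output may drift from the frame
    guidance_scale=0.0,  # Turbo models are run without CFG
).images[0]
styled.save("lens_view.png")
```

In a live installation this call would run in a loop over camera frames; with one effective denoising step per frame, the output tracks the background continuously.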

Looking Outwards #2 – Black Box Camera

The Black Box Camera, created by Jasper van Loenen, uses artificial intelligence to generate a physical print of a subject based on its own interpretation. The user points the camera at a subject and presses a button; the internal system analyzes the photo, creates a caption, uses that caption to generate a unique image, and prints the result. This project’s use of AI in direct, real-time interaction with an environment is what I find so inspiring. What I think the creators got right here is capturing the mystery of how AI interprets real environments, in real time. I think the creators could take it a step further by giving users more creative liberty in how the AI generates the new image, allowing ‘AI photographers’ to exist in the same way that we see ‘AI artists’ emerging. I also think it could be cool to build a similar technology with a more specific transformation step, i.e., using the AI to enhance the image in a particular way rather than producing a general recreation of the photo. The image produced is limited by the descriptive text that is generated; why go from image -> text -> image when you can go straight from image -> image?

Related Technologies: Internally, the Black Box Camera uses a Raspberry Pi with a camera module to take a photo when the user presses the button. The photo is then analyzed and a caption is generated, which is used as the input prompt for DALL-E. The resulting image is printed by an internal Instax portable photo printer, whose Bluetooth protocol was reverse-engineered so it could be controlled from custom software.
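The image -> text -> image core of that pipeline can be sketched in a few lines of Python. The write-up doesn't say which captioning model the camera uses, so BLIP stands in here, and the Instax printing step is reduced to a comment; filenames and model choices are assumptions.

```python
# Rough sketch of the camera's image -> text -> image pipeline.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from openai import OpenAI

# Captioning model (a stand-in; the project's actual captioner is unspecified).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

photo = Image.open("button_press.jpg")  # stand-in for the Pi camera capture

# 1. image -> text: describe what the camera saw.
inputs = processor(photo, return_tensors="pt")
out = captioner.generate(**inputs, max_new_tokens=30)
prompt = processor.decode(out[0], skip_special_tokens=True)

# 2. text -> image: let DALL-E reinterpret the caption.
client = OpenAI()  # expects OPENAI_API_KEY in the environment
result = client.images.generate(model="dall-e-2", prompt=prompt, size="512x512")
print(prompt, "->", result.data[0].url)

# 3. The real camera would now send this image to the Instax printer
#    over its reverse-engineered Bluetooth protocol.
```

Seeing the caption as an explicit intermediate string makes the limitation in the critique above concrete: everything the photo contained but the caption omitted is lost before generation even starts.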

Black Box Camera – The ‘dark chamber’ of AI

Dan Hoopert – What Does a Tree Sound Like?

Audio Synthesis: What Does a Tree Sound Like?

“Here a single beam of light scans from left to right, creating a 2D cross section of the shape. Using the area of this cross section MIDI data is generated using Houdini and CHOPS. This can be fed into any DAW providing a base for content driven sound design. Field recordings from the object’s natural environment triggered by this data allows a close relationship between the light and sounds that are created, completely unique to the chosen object.”

This artwork was created by London-based 3D artist Dan Hoopert. It’s a data-driven piece exploring the relationship between image and sound, using photogrammetry to recreate a tree in 3D space. Silent objects from the real world are given voices in the virtual realm, yet with unnatural, electronic sounds that create an intriguing sense of conflict. Data from the scan is also visualized through particles that follow the beam of light. I’m especially captivated by the surrealist imagery of the work.
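Hoopert generates the MIDI inside Houdini with CHOPs; as a rough stand-in for that step, this Python sketch maps a list of cross-section areas (one per position of the scanning beam) to MIDI notes using the mido library. The area values and the pitch/velocity mapping are invented for illustration.

```python
# Map cross-section areas from a left-to-right scan to MIDI notes.
import mido

areas = [0.0, 0.8, 2.3, 4.1, 3.2, 1.0, 0.1]  # hypothetical scan data

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

lo, hi = min(areas), max(areas)
for area in areas:
    # Larger cross sections -> higher pitch and louder velocity.
    t = (area - lo) / ((hi - lo) or 1.0)
    note = int(48 + t * 24)       # map into the C3..C5 range
    velocity = int(40 + t * 80)
    track.append(mido.Message('note_on', note=note, velocity=velocity, time=0))
    track.append(mido.Message('note_off', note=note, velocity=0, time=240))

mid.save('tree_scan.mid')  # import into any DAW to trigger field recordings
```

The resulting file can drive samplers in a DAW exactly as the quote describes: the object's geometry, via the scan data, decides which sounds play and how loudly.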

Looking Outwards 02

In a similar vein of contrast between the experience of the live audience and the captured experience, there is an artist I was introduced to recently who goes by the name Cassils. They’re a visual and performance artist who makes some really out-there work, but my favorite of theirs is a performance piece called Becoming an Image. In this work, performed in front of a live audience, they pound a giant block of clay in a dark room that is periodically lit by a camera flash. The images are incredible and dynamic, but what interests me more is how those photographs contrast with the images collectively formed in the minds of the live audience, who saw very little. The eye is our first capture tool, and I love how the artist plays with this concept throughout the piece: confronting the truths we capture with our eyes versus the cameras we have invented.


Looking Outwards 02

Trevi Fountain, 1,936 images, 656,699 points

The project “Building Rome in a Day” reconstructs entire cities from crowdsourced photos, collecting millions of images uploaded to Flickr and computing the viewpoint of each one to build a 3D digital replica of Rome in a day.

What’s fascinating is that each photo, captured by a different individual, serves as a piece of a larger puzzle, where snapshots are added to the bigger picture, creating something far greater than the sum of its parts. Cameras function as distributed storage units, each capturing a fragment of reality from a unique perspective.
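The full system matches features across millions of photos and solves for all camera poses at once; this two-image Python sketch with OpenCV shows the core step each photo pair contributes to that puzzle: matched features yield the relative pose between two cameras. The filenames and intrinsics matrix are placeholders, not values from the project.

```python
# Two-view structure-from-motion step: features -> matches -> relative pose.
import cv2
import numpy as np

img1 = cv2.imread("trevi_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("trevi_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching, as is standard in the SfM literature.
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Guessed camera intrinsics; real pipelines estimate these per photo.
K = np.array([[1000, 0, 640], [0, 1000, 480], [0, 0, 1]], dtype=np.float64)
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Repeat this pairwise reasoning across thousands of tourist photos, triangulate the matched points, and the 656,699-point Trevi Fountain cloud above is the kind of model that emerges.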

Beyond just a snapshot of Rome, it is an ever-evolving mosaic that spans time. Photos serve as historical markers, and as more are added, the model creates a living timeline that bridges Rome’s past, present, and future.

Reference: [1] https://grail.cs.washington.edu/projects/rome/