3D Gaussian Splatting – Polycam

Over the summer I got to read a little about 3D Gaussian Splatting, an innovative method for rendering complex 3D scenes from a video. It is not exactly a device, but I thought it was an interesting technique to share. It renders faster and produces an unprecedented level of detail compared to more traditional methods such as NeRFs. Instead of modeling the subject with polygons, the system uses a mathematical method called structure-from-motion to compare the photos (frames from the input video) and recover camera positions, then represents the scene with dots, or Gaussian points, in 3D space. Due to the soft nature of Gaussian points, I think the renders adopt a brushstroke-like style.
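To get a feel for why the renders look brushstroke-like, here is a minimal sketch of the core rendering idea: accumulating one soft 2D Gaussian "splat" into an image. This is only an illustration under simplifying assumptions (a single Gaussian already projected to screen space, with made-up center, covariance, color, and opacity values); real systems project millions of 3D Gaussians and alpha-blend them in depth order.

```python
import numpy as np

def splat_gaussian(image, center, cov, color, opacity):
    """Alpha-blend one 2D Gaussian 'splat' into an RGB image."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Offset of every pixel from the splat center
    d = np.stack([xs - center[0], ys - center[1]], axis=-1)
    cov_inv = np.linalg.inv(cov)
    # Gaussian falloff exp(-0.5 * d^T Σ⁻¹ d) at every pixel
    expo = np.einsum('...i,ij,...j->...', d, cov_inv, d)
    alpha = opacity * np.exp(-0.5 * expo)
    # Blend the splat's color over what is already on the canvas
    image += alpha[..., None] * (np.asarray(color) - image)
    return image

canvas = np.zeros((64, 64, 3))
splat_gaussian(canvas, center=(32, 32),
               cov=np.array([[40.0, 10.0], [10.0, 20.0]]),
               color=(1.0, 0.5, 0.2), opacity=0.8)
```

Because each splat has a soft Gaussian falloff rather than a hard polygon edge, overlapping splats blend like layered strokes of paint, which is where the painterly look comes from.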

Examples from Polycam.

Brain Scan

 

Many devices we saw Monday in class were ones I had never thought of as capture devices for creating art, including medical equipment like the ultrasound transducer we experimented with. This inspired me to research medical equipment, and here’s a short list of common technologies we have for scanning brain images, which I found very interesting:

CT (Computed Tomography): An X-ray-based scan that beams X-rays through the head, producing a picture that looks like a horizontal slice of the brain.

MRI (Magnetic Resonance Imaging): These scans construct an image of the brain by passing a strong magnetic field over the head. Hydrogen nuclei in the brain respond to this field and send a signal back, which the scanner records and turns into a highly detailed image of the brain.

PET (Positron Emission Tomography): PET involves injecting a small amount of radioactive material into the body, which then accumulates in the brain. The scanner detects this radiation to create images that highlight areas of functional activity, producing a multi-color image of the brain that resembles a heat map.

PET technology is particularly interesting for its ability to visualize brain activity, which I think could be used to create dynamic, time-lapse pieces representing changes in brain activity over time. For example, changes in activity during different emotional states could be visualized and translated into a series of animations.

Looking Outwards #1-Paragraphica

Paragraphica is an innovative camera that uses AI and location data to generate a photographic representation of a place and moment from descriptive text. It gathers data such as the address, weather, and nearby places to compose a paragraph describing the current environment, which is then converted into a unique image using a text-to-image API. The camera includes dials that let users control the radius of data collection, the noise in the AI's image generation, and how closely the image follows the descriptive paragraph. This project's use of AI with direct, real-time interaction with environments is what I find so inspiring. What I think the creators got right is their ability to capture and enhance the visuals of real environments in real time. They could take it a step further by allowing users to adjust the descriptive text used to generate the image; it would be even cooler to give the user the liberty to imagine their space in a new way, in real time. I also think it could be interesting to build a similar technology with a more specific transformation step, i.e. using the AI to enhance the image in a particular way rather than generally recreating the photo, since the image produced is limited by the descriptive text that is generated.

Related Technologies: Paragraphica was built with a Raspberry Pi 4, a 15-inch touchscreen, a 3D-printed housing, and custom electronics, using Noodl, Python, and the Stable Diffusion API.
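To make the pipeline concrete, here is a hedged sketch of the flow as I understand it from the project description: location data in, descriptive paragraph out, then an image from a text-to-image API. The function name, the example data, and the commented-out `generate_image` call are all hypothetical placeholders, not the creators' actual Noodl/Stable Diffusion code.

```python
def compose_prompt(address, weather, time_of_day, nearby_places):
    """Turn location data into the descriptive paragraph the camera 'sees'."""
    places = ", ".join(nearby_places)
    return (f"A photo taken at {address} in the {time_of_day}. "
            f"The weather is {weather}. Nearby are {places}.")

# Example data standing in for what the camera would fetch from location APIs
prompt = compose_prompt(
    address="5000 Forbes Ave, Pittsburgh",
    weather="overcast, 12°C",
    time_of_day="early evening",
    nearby_places=["a university campus", "a bus stop"],
)

# In the real device, this prompt plus the dial settings (data radius,
# generation noise, prompt adherence) would be sent to a text-to-image API:
# image = generate_image(prompt, noise=dial_noise, guidance=dial_guidance)
```

Seen this way, my suggestion above amounts to letting the user edit `prompt` directly before the image is generated, rather than accepting whatever paragraph the data produces.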

Paragraphica – Context to image (AI) camera

Looking Outwards 01

There was an infrared camera and a piece of tinted glass that I found interesting and fun to play with while we were looking at the capture devices during class. I was entertained to see how a piece of glass I could not see through became completely transparent the second I held it up to the camera. It is a concept that would be interesting to investigate theatrically or live: a performance the audience could not see unless the specific camera was held up to the glass to reveal the actions within. While playing with the glass and camera, I thought about how it would be cool to explore the forced contrast between the live audience's experience and the captured one. If I put a performer in a box of tinted glass who was constantly performing, knowing when they would be captured and displayed: what would that experience feel like?

Looking Outwards 01

Pendulum

Falling In Love Again

Stroboscopic photography, which Professor Harold Eugene Edgerton[1] transformed from a laboratory instrument into a common device in the 1930s, captures motion by breaking it into stages. Using a flashing light source during a long exposure, it freezes multiple moments of movement in a single frame. The magic lies in the timing of the flash, the motion, and the exact press of the shutter, making each shot a blend of control and unpredictability, but that's part of the charm.
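The timing described above comes down to simple arithmetic: a flash firing at some frequency during a long exposure freezes roughly frequency × exposure-time separate stages of the motion in one frame. A tiny sketch (the numbers are illustrative, not from any particular setup):

```python
def strobe_exposures(flash_hz, exposure_s):
    """Approximate number of frozen motion stages in one stroboscopic frame."""
    return int(flash_hz * exposure_s)

# e.g. a 20 Hz strobe over a 1.5 s exposure
stages = strobe_exposures(20, 1.5)
print(stages)  # 30 frozen stages of motion
```

So picking the flash rate is really picking how densely the subject's movement is sampled across the frame.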

It's most often used to capture human movement, but many other things around us move as well. If I had the opportunity, I'd capture the subtle movements of plants and cells, motions often invisible to the naked eye. I'm also curious about adding a second flash to see how it might alter the effect.

Reference:
[1] https://www.kalliopeamorphous.com/stroboscopic-photographs-1
[2] https://zidans.ru/blogs/blog/StroboscopicPhotography?lang=en