Typology Machine: Bubble Fisheye Selfie

(Radius = 960.0 px, CenterX = 1115.5 px)


Machine: Bubble Fisheye Camera & Typology: Selfie with Rich Context

The machine creates portraits extracted from de-warped photographs of soap bubbles, in three steps: (1) create bubbles, (2) capture bubbles, (3) de-fish the bubbles.

Bubbles are natural fisheye cameras that capture the colorful world around them. If we photograph a bubble, we accidentally capture ourselves in it. This gave rise to the idea of a bubble capture machine, and the project's later interactive design was influenced by 360° "portal picture" paintings. I think fisheye cameras compress the rich content of the real world, and viewing such an image becomes a way of exploring that content, part by part.


I researched the formula for making giant bubble solution:

If you want to replicate giant bubbles, there are two tips: (1) let the prepared solution rest for 24 hours; the longer it rests, the better the bubbles. (2) Compared with iron wire, cotton and linen rope holds more solution.

To verify the technical feasibility of de-fishing, we experimented with a real mirrored sphere and compared two workflows: Photoshop and Processing. Photoshop's Adaptive Wide Angle filter (1) loses edge pixels and (2) offers little customizable parameter control. Processing (1) can work directly with specific data from the fisheye image, (2) lets me customize every parameter, and (3) is more interactive when I write the code myself. I therefore chose Processing. The general idea is to take each pixel of the output photo and map it back to a 2D fisheye pixel through 3D space, using trigonometric calculations. Thanks to Golan for figuring out the underlying principle with me; we tried different mathematical formulas and finally found the right one!!


The technical points of this pipeline: – Bubble photograph settings: (1) autofocus, (2) continuous shooting mode, (3) high-speed shutter. – Programmable parameters: (1) thetaMax, (2) circle center + radius, which I draw dynamically on the canvas. – De-fish criteria: compare against the curves and straight lines of the real-world scene. – Interaction: (1) de-fish around different focus points based on the mouse position, (2) print the current parameters. A minimal sketch of the mapping follows.
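To make the mapping concrete, here is a minimal Processing sketch of the idea (not the project's actual code). It assumes an ideal equidistant fisheye; the file name, the default center/radius values, and the output field of view are placeholders to adjust per photo.

```
// Minimal de-fishing sketch, assuming an ideal equidistant fisheye.
PImage fish;                     // source fisheye photo of the bubble
float centerX = 1115.5;          // center of the fisheye circle in the source (placeholder)
float centerY = 960.0;
float radius  = 960.0;           // radius of the fisheye circle in pixels (placeholder)
float thetaMax = radians(90);    // half-angle covered from the center to the rim of the circle
float fovOut   = radians(100);   // field of view of the unwarped (rectilinear) output

void setup() {
  size(800, 800);
  fish = loadImage("bubble.jpg");  // placeholder file name
  fish.loadPixels();
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      // 1. Normalize the output pixel to [-1, 1] around the canvas center.
      float nx = (x - width  / 2.0) / (width  / 2.0);
      float ny = (y - height / 2.0) / (height / 2.0);
      // 2. Treat the output as a perspective view: angle of this ray from the optical axis.
      float theta = atan(sqrt(nx * nx + ny * ny) * tan(fovOut / 2));
      float phi   = atan2(ny, nx);
      // 3. Equidistant fisheye model: radial distance grows linearly with theta.
      float r  = radius * theta / thetaMax;
      int   sx = int(centerX + r * cos(phi));
      int   sy = int(centerY + r * sin(phi));
      if (sx >= 0 && sx < fish.width && sy >= 0 && sy < fish.height) {
        pixels[y * width + x] = fish.pixels[sy * fish.width + sx];
      }
    }
  }
  updatePixels();
}

void mousePressed() {
  // "Print parameters" interaction: log the current settings for reproducibility.
  println("center (" + centerX + ", " + centerY + ")  radius " + radius + "  thetaMax " + degrees(thetaMax));
}
```

Hooking centerX/centerY (or the output field of view) to mouseX and mouseY is one way to get the "de-fish around different focus points" interaction described above.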


260+ photos (12 bubbles are round; 7 bubbles are rich with context)

Some objective challenges and solutions. A. Weather: I prepared the bubble solution a week in advance and let it rest for 24 hours, but it rained every day in Pittsburgh. Strong wind bursts the bubbles, never mind rain, and capturing the reflections in a bubble demands bright sunlight, so I could only wait for sunny weather. B. Photographing moving bubbles: in my first batch of photos the resolution was sufficient, but the bubbles blurred as I focused on them because they were always flying. Switching to continuous shooting mode, autofocus, and a high-speed shutter finally gave me sharp photos of moving bubbles. C. Bubble shape: fisheye restoration wants round bubbles, but gravity, an uneven film of solution, and wind disturbances all deform them, so a truly round bubble is precious. D. Unwarping the fisheye image: fisheye math and Processing. Thanks to Golan & Leo!!! 🙂

Surprise! The fisheye image captured by the bubble machine actually contains two reflections of me. The lower, inverted image comes from the bubble's rear surface acting as a large concave mirror, while the upper, upright image comes from the bubble's front surface acting as a convex mirror!!

Some findings: I. Round bubbles are better capture machines! II. Dark environments are better for capturing bubble images! III. Don't blow the bubbles too big, or they will deform! From my perspective,

“Bubbles = Fisheye Camera + Prism Rainbow Filter for Selfie with Rich Context”

Firstly, the surface of a bubble can contain up to 150 uneven film layers, reflecting different colors of sunlight. Secondly, I like this capture machine because of its randomness: shooting angle, weather, bubble size, and film thickness all bring unexpected surprises to the final selfie!


slides link

Photogrammetry of UV Coloration with Flowers (ft. Polarizer)

Initially, I took an interest in polarizer sheets, which let me separate diffuse light from specular light. I was particularly interested in the look of an image containing only the diffuse light.

(Test run image: Left (original), Right (diffused only))

My original pipeline was to extract materials from real-life objects and bring them into a 3D space with correct diffuse and specular maps.

After our first crit class, I received a lot of feedback on my subject choice. I could not think of a subject I was particularly interested in, but after seeking advice I grew interested in capturing flowers and how some of them display different patterns when absorbing UV light. I therefore wanted to capture and display the UV coloration of different flowers that we normally do not see.

I struggled mostly with finding the “correct” flower. Other problems that came with my subject choice were that flowers wither quickly, are very fragile, and are quite small.

(A flower with the bullseye pattern that I found while scavenging the neighborhood, but it withered soon after.)

After trying different photogrammetry programs, RealityScan worked best for me. I also attached a small piece of polarizer sheet in front of my camera because I wanted the diffuse image for the model; it did not make a significant difference, though, since I couldn't use a point light for the photogrammetry.

Here is the collection:

Daisy mum

(Normal)

(Diffused UV)


Hemerocallis

(Normal)

(Diffused UV)

(The bullseye pattern is more visible with the UV camera)


Dried Nerifolia

(Normal)

(Diffused UV)

My next challenge was to merge two objects with different topology and UV maps so that one model carries both materials. Long story short, I learned that it is not possible… :’)

Some of the methods I tried in Blender were:

  • Join the two objects into one, bring the two UV maps together, then swap them
  • Transfer Mesh Data + Copy UV Map
  • Link Materials

They all resulted in a broken material like so…

The closest result was this, which is not terrible.

Left is the original model with the original material; middle is the original model with the UV material “successfully” applied; and right is the UV model with the UV material. However, the material still looked broken, so I thought it was best to keep the models separate.


Temporal Decay Slit-Scanner

Objective:

To compress the decay of flowers into a single image using the slit-scan technique, creating a typology that visually reconstructs the process of decay over time.

I’ve always been fascinated by the passage of time and how we can visually represent it. Typically, slit-scan photography is used to capture fast motion, often with quirky or distorted effects. My goal, however, was to adapt this technique for a slower process—decay. By using a slit-scan on time-lapse footage, each “slit” represents a longer period, and when compiled together, they reconstruct an object as it changes over time.

Process

Why Flowers?
I chose flowers as my subject because their relatively short lifespan makes them ideal for capturing visible transformations within a short period. The changes in their shape and contour as they decay fit perfectly with my goal of visualizing time through decay. Initially, I considered using food but opted for flowers to avoid insect issues in my recording space.

Time-Lapse Filming

The setup required a stable environment with constant lighting, a still camera, and no interruptions. I found an unused room in an off-campus drama building, which was perfect as it had once been a dark room. The ceiling had collapsed in the spring, so it’s rarely used, ensuring my setup could remain undisturbed for days.

I used Panasonic G7s, which I sourced from the VMD department. These cameras have built-in time-lapse functionality, allowing me to customize the intervals. I connected the cameras to continuous power and set consistent settings across them—shutter speed, white balance, etc.

The cameras were set to take a picture every 15 minutes over a 7-day period, resulting in 672 images. Not all recordings were perfect, as some flowers shifted during the decay process.

Making Time-Lapse Videos

I imported the images into Adobe Premiere, set each image to a duration of 12 frames, and compiled them into a video at 24 frames per second. This frame rate and duration gave me flexibility in controlling the slit-scan speed. I shot the images in a 4:3 aspect ratio at 4K resolution but resized them to 1200×900 to match the canvas size.
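As a sanity check on these numbers: 672 images × 12 frames each ÷ 24 fps = 336 seconds, so a full 7-day recording compiles to roughly 5.6 minutes of time-lapse video per camera.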


Slit-Scan

Using Processing, I automated the slit-scan process. Special thanks to Golan for helping with the coding.

Key Variables in the Code:

  • nFramesToGrab: Controls the number of frames skipped before grabbing the next slit (set to 12 frames in this case, equating to 15 minutes).
  • sourceX: The starting X-coordinate in the video, determining where the slit is pulled from.
  • X: The position where the slit is drawn on the canvas.

For the first scan, I set the direction from right to left. As the X and sourceX coordinates decrease, the image is reconstructed from the decay sequence, with each slit representing a 15-minute interval in the flowers’ lifecycle. In this case, the final scan used approximately 3,144 frames, capturing about 131 hours of the flower’s decay over 5.5 days.
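A minimal Processing sketch of this right-to-left scan might look like the following. It assumes the time-lapse has been exported as a numbered image sequence ("frame_0000.jpg", "frame_0001.jpg", …) at the 1200×900 canvas size; the project itself read the compiled Premiere video, but the column-copying logic is the same.

```
int nFramesToGrab = 12;  // frames to skip between slits (12 frames = one source photo = 15 minutes)
int sourceX = 600;       // column of the source frame to sample (placeholder value)
int x;                   // column on the canvas where the next slit is drawn
int frameIndex = 0;      // index of the next source frame to load

void setup() {
  size(1200, 900);
  background(0);
  x = width - 1;         // right-to-left: start at the right edge of the canvas
}

void draw() {
  if (x < 0) { noLoop(); return; }                 // canvas is full: stop scanning
  PImage frame = loadImage("frame_" + nf(frameIndex, 4) + ".jpg");
  if (frame == null) { noLoop(); return; }         // ran out of frames
  // Copy a one-pixel-wide column from the source frame onto the canvas.
  copy(frame, sourceX, 0, 1, frame.height, x, 0, 1, height);
  x--;                                             // move the destination slit leftward
  frameIndex += nFramesToGrab;                     // jump ahead in time for the next slit
}
```

A left-to-right scan simply increments x instead; a center-out scan keeps two destination columns that start at width/2 and step outward in opposite directions.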

Slit-Scan Result:

  • Hydrangea Right-to-Left: The scan proceeds from right to left, grabbing a slit every 12 frames of the video and pulling a moment from the flower’s decay. The subtle, gradual transformation is captured in a single frame, offering a timeline of the flower’s life compressed into one image.

Expanding the Typology

I experimented with different scan directions and speeds to see how they changed the visual outcome. Beyond right-to-left scans, I tested left-to-right scans, as well as center-out scans, where the slits expand from the middle of the image toward the edges, creating new ways to compress time into form.

  • Hydrangea Left-to-Right / every 6 frames

  • Hydrangea Center Out: This version creates a visual expansion from the flower’s center as it decays, offering an interesting play between symmetry and time. The top image scans every 30 frames and the bottom image every 36 frames, so we can also see an interesting comparison between the different scanning speeds.

  • Sunflower Center Out/ every 30 frames
  • The sunflower fell out of the frame, which created this stretching, warping effect.

  • Roses Left-to-Right/ every 12 frames

  • Roses Right-to-Left/ every 18 frames
  • While filming the time-lapse for the roses, they gradually fell toward the camera over time. Due to limited space and lighting, I had set up the camera with a wide aperture (shallow depth of field) and didn’t expect the fall, which resulted in some blurry footage. However, I think this unintended blur adds a unique artistic quality to the final result.

Conclusion

By adjusting the scanning speed and direction, I was able to create a variety of effects, turning the decay process into a typology of time-compressed visuals. This method can be applied to any time-lapse video to reveal the subtle, gradual changes in the subject matter. Additionally, by incorporating a Y-coordinate, I could extend this typology even further, allowing for multidirectional scans or custom-shaped images.

One challenge was keeping the flowers in the same position for 7 days, as any movement would result in distortions during the scan. Finding the right scanning speed also took some experimentation—it depended heavily on the decay speed and the changes in the flowers’ shapes over time.

Despite these challenges, the slit-scan process succeeded in capturing a beautiful, visual timeline. It condenses time into a single frame, transforming the subtle decay of flowers into an artistic representation of life fading away. This project not only visualizes time but also reshapes it into a new typology—a series of compressed images that track the natural decay of organic forms.

People as Palettes: A Typology Machine

How much does your skin color vary across different parts of your body?

While most of us think of ourselves as having one consistent skin color, this typology machine aims to capture the subtle variations of skin tone within a single individual, creating abstract color portraits that highlight these differences.

I started this project by determining which areas of the body would be the focus for color data collection. To ensure comfort and encourage participation, I selected nine areas, including the forehead, upper lip, lower lip, top of the ear (cartilage), earlobe, cheek, palm of the hand, and back of the hand. I also collected hair color data to include in the visuals.

I then constructed a ‘capture box’ equipped with an LED light and a webcam, with a small opening where participants placed their skin. This setup ensured standardized lighting conditions and a consistent distance from the camera. To avoid the camera’s automatic adjustments to exposure and tint, I used webcam software that disables color and lighting corrections, allowing me to capture raw, unfiltered skin tones.

Box building and light testing:

Next, I recruited 87 volunteers and asked each to have six photos taken that would allow me to capture the 9 specific color areas. The photos included front and back of their hands, forehead, ear, cheek, and mouth.

Once the images were collected, I developed an application that let me step through each photo, select a 10×10-pixel area, and identify the corresponding body part. The color data was then averaged across the 100 pixels, labeled accordingly, and stored in a JSON file organized by participant and skin location. A simplified sketch of this step appears below.
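A simplified Processing sketch of that sampling-and-averaging step might look like this; the file name, patch location, participant label, and JSON field names are placeholder assumptions, since the real application selected the patch and its body-part label interactively.

```
JSONArray samples = new JSONArray();

void setup() {
  PImage photo = loadImage("participant01_cheek.jpg");   // placeholder file name
  int x0 = 320, y0 = 240;                                // top-left corner of the 10x10 patch (placeholder)

  // Average the color over the 10x10 (100-pixel) patch.
  float r = 0, g = 0, b = 0;
  for (int y = y0; y < y0 + 10; y++) {
    for (int x = x0; x < x0 + 10; x++) {
      color c = photo.get(x, y);
      r += red(c);
      g += green(c);
      b += blue(c);
    }
  }
  r /= 100; g /= 100; b /= 100;

  // Store the labeled average, keyed by participant and skin location.
  JSONObject entry = new JSONObject();
  entry.setString("participant", "01");
  entry.setString("location", "cheek");
  entry.setFloat("r", r);
  entry.setFloat("g", g);
  entry.setFloat("b", b);
  samples.append(entry);

  saveJSONArray(samples, "data/skin_colors.json");       // placeholder output path
  println("saved " + entry);
}
```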

A snippet of the image labeling and color selection process:

Using Adobe Illustrator, I wrote another script to map the captured color values onto the colors of an abstract design, creating a unique image for each person.

The original shape in Adobe Illustrator and three examples of how the colors were mapped.

Overall, I’m pleased with the project’s outcome. The capture process was successful, and I gained valuable experience automating design workflows. While I didn’t have time to conduct a deeper analysis of the RGB data, the project has opened opportunities for further exploration, including examining patterns in the collected color data.

A grid-like visual representation of the entire dataset:

Separating the Work from the Surface: Typology of CFA Cutting Mats

In this project, I used the “Scan Documents” feature on the iPhone Notes app to isolate evidence of “work” (paint marks, scratches, debris) from the surface of cutting mats.

The Discovery Process

The project started with wondering whether I could use the scanning feature for something other than documents. The in-class activity where we used portable scanners inspired me, mainly because it was so fun to do! I also liked the level of focus it gave to the subject being scanned. However, the portable scanners were limited by their size and uniformity. This is why I turned to my Notes app scanner, which I knew served the same purpose.

I began testing with what happened to be in front of me, a cutting mat, and was surprised by the result:

The paint marks and scuffs are not as prominent in person as they are in the scans. To me, the damage looked like an overlay on top of the cutting mat, which made me test more:

This is the same app and the same cutting mat, but this time it was slightly in shadow. And when I say slightly, I mean the only shadow in this image was my own. I was intrigued by how clearly it isolated the scuff marks and paint on top of the mat.

I decided to pursue cutting mats as my subject of choice because they are often neglected in the creative process. There is nothing that shows your progress more than the surface you work on, and I wanted to see if I could isolate the evidence of work from the mat itself using the simple Apple Notes app document scanner feature.

The Workflow

The workflow I developed was to place the cutting mat either on the floor or on a table and use the auto-capture feature to select the mat and take a picture. Sometimes auto-capture would fail, and I would have to take a general picture and use the four circular handles to crop where I wanted the app to scan:

After taking the picture in even lighting (which I defined as fluorescent overhead lighting), I would move the cutting mat to a shadowed location, take another picture, and compare.

The Results 

Some Selected Scans:

Some Fun Ones:

Troubleshooting

Things I realized with this project:

  • The contrast between the floor and the mat matters for auto-capture to work. The scans look best when they are shot with auto-capture, as they are less likely to be warped and they capture details better. Example:
Manual Shot
Auto-Capture
  • Diffused lighting is key to getting the best scans without light spots.
  • The color of the cutting mat matters. Black cutting mats offer better contrast and separation between the surface damage and the mat. However, they are more vulnerable to light spots, and the details of the mat itself get lost, which makes it difficult to get a clean image. Example:


Final Thoughts

Ultimately, I am satisfied with where my project went. It is very different from what I had been ideating for the past month, but this project sparked my curiosity more, and it was fun running around Purnell and CFA scanning people’s cutting mats and recognizing some of the projects that had been worked on them.

However, due to the factors listed above, I struggled to strike the balance I was looking for, with both the cutting mat and the damage on top equally visible. If I pursue this further in the future, I would like to standardize my lighting in a studio setting; I feel that this would give me more flexibility and control with the different cutting mats. Further, I wondered whether there is a way to use the Notes app feature without the native iPhone camera, so that I could capture at an even higher resolution. Finally, I am pretty sure there are more cutting mats out there than the ones I’ve scanned. If I were to broaden the scope of this project, I would befriend an architecture kid and go around their studio taking scans.

The Rest of It (or at least the ones worth seeing)

Typology: Pittsburgh Bridges

When is the last time you looked up while driving or walking underneath a bridge?

Through this project, I set out to document bridge damage in a way that is difficult to experience with the naked eye, or that we are likely to overlook or take for granted in our day to day life.

Inspiration and Background

At the beginning of this process, I was really inspired by the TRIP Report on Pennsylvania bridges, a comprehensive report that listed PA’s most at risk bridges and ranked them by order of priority. I was interested in drawing attention to the “underbelly,”  an area that’s often difficult to access, as a way to reveal an aspect of our daily life that tends to be unnoticed.

Initially, I was planning to use a thermal camera to identify damage that’s not visible to the naked eye. After meeting with two engineers from PennDOT, I learned that this method would not be effective in the way I intended, as it only works when taken from the surface of the bridge. It would have been unsafe for me to capture the surface, so I decided to recalibrate my project from capturing invisible damage, to emphasizing visible damage that would otherwise be unnoticed or hard to detect. Here are some examples of my attempts at thermal imaging on the bottom of the bridge. I was looking for heat anomalies, but as you can see the bridges I scanned were completely uniform, regardless of the bridge material or time of day.

Process and Methodology

After thermal imaging, I moved to LiDAR (depth) based 3D modeling. My research revealed this is the favored method for bridge inspectors, but the iPhone camera I was using had a pretty substantial depth limitation that prevented me from getting useful scans. Here are a few examples of my first round.

A LiDAR scan of Murray Avenue Bridge
Murray Avenue Bridge
Polish hill, Railroad Bridge over Liberty Avenue
376 over Swinburne Bridge

These scans were not great at documenting cracks and rust, which are the focus of my project. At Golan’s recommendation, I made the switch to photogrammetry. For my workflow, I took 200-300 images of each bridge from all angles, which were then processed through Polycam. After that, I uploaded each one to Sketchfab, due to the limitations of Polycam’s UI.

I chose photogrammetry because it allows the viewer to experience our bridges in a way that is not possible with the naked eye or even static photography. Through these 3D captures, it’s possible to see cracks and rust that are 20+ feet away in great detail.

Here are some pictures of me out capturing. A construction worker saw me near the road and came over to give me a high-vis vest for my safety, which was so nice that I want to give him a shoutout here!

Results

This project is designed to be interactive. If you’d like to explore on your own device, please go here if you’re viewing on a computer, and here for augmented reality (phone only). I’ve provided a static image of each scan in this post, as well as 5 augmented reality fly throughs, out of the 9 models I captured.

Railroad Bridge over Liberty Ave
Race St Bridge
Penn Ave Bridge
Panther Hollow Bridge
Greenfield Railroad Bridge
Swinburne Bridge
Frazier St Viaduct (376)
Allegheny River Bridge

I am quite pleased with the end result compared to where I started, but there’s still a lot of room for improvement in the quality of the scans. After so many iterations I was restricted by time, but in the future I would prefer to use more advanced modeling software. I would also like to explore hosting them on my own website.

Special thanks to PennDOT bridge engineers Shane Szalankiewicz and Keith Cornelius for their extraordinary assistance in the development of this project.