Photography and Observation

The medium in which we decide to capture an image can substantially influence our perception of the subject. These mediums can expose the object in a way we haven’t seen before. A recent example is electron microscopy, which lets us see the low-level structure of objects. The images that contradict what we originally picture when we think of an object are the ones we remember most (looking at popcorn up close and seeing its papery, hexagonal structure will stick with me).

We can then abstract this idea to say that the medium can also influence the topology of an object. Since images are used to describe and give meaning to a topology, influencing the images also influences their corresponding topology.

I would argue that different mediums can be objective if we define them in terms of the equipment used. Since there is no disagreement about how different imaging equipment is classified, there shouldn’t be any disagreement about the mediums themselves. Mediums defined this way are both predictable and scientifically accurate, since they reference the technology used to produce them.

SEM Results

The sample above was a small piece of popcorn. The surface of the popcorn looks smooth, but in the areas where it was broken away there is an almost paper-like, hexagonal honeycomb structure. This could explain why popcorn feels so soft and airy.

Through this process, I learned how much structural variation a single object can have. The popcorn piece I provided appeared very homogeneous to the human eye, but upon closer inspection I found that different regions had different support structures. In some of the images above, you can see areas that blend between a hex grid and a blobby cave. Next time I eat popcorn I’ll think of this variation in structure (and people will think I’m weird for doing so).

Project To Share Response

Reviewed Posts By:

  • Izzy
  • Christian
  • Olivia
  • Steven
  • Philippe

In response to Izzy’s shared project, the concept of GANs/style transfer has been popping up a lot in the world of art. https://www.thispersondoesnotexist.com is a popular example of where we currently are with our generative abilities, and large companies such as Nvidia keep pushing out new algorithms that move us past the uncanny valley to the point where we can’t even tell real images from fakes. Machine learning is based on the idea that with enough data, you can approximate any function so precisely that not even we humans can tell the difference. With such a large database of images on Google, people can easily scrape larger and larger datasets to train on. Not only are our models growing more accurate, but our datasets are always getting larger.
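To make that approximation claim concrete, here is a minimal toy sketch (my own example, not anything from Izzy’s project or Nvidia’s pipelines): a small network fits an arbitrary 1-D function, and with enough samples and training steps the residual error becomes hard to notice.

```python
# Toy illustration of function approximation, assuming PyTorch is installed.
import torch
import torch.nn as nn

torch.manual_seed(0)

target = lambda x: torch.sin(3 * x) + 0.5 * torch.cos(7 * x)  # "unknown" function

# More samples generally means a tighter approximation.
x = torch.linspace(-2, 2, 2000).unsqueeze(1)
y = target(x)

model = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.6f}")  # small residual = fit is hard to tell from the target
```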

Gloomy Sunday was posted 2 years ago, and we’ve made such large progress in machine learning over that time. I wonder what kind of results we would get now if we retrained the model with the most up-to-date Pix2Pix implementation and a larger dataset (could we make it so convincing that humans would think it’s real?).
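For reference, the heart of the Pix2Pix objective is an adversarial term plus an L1 reconstruction term. The sketch below is a hedged, simplified version of that generator loss; the `discriminator`, the generated `fake_output`, and the data tensors are placeholders you would swap for real networks and data, not the original implementation.

```python
# Sketch of a pix2pix-style generator objective (adversarial + L1), assuming PyTorch.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA_L1 = 100.0  # reconstruction weight used in the original pix2pix paper

def generator_loss(discriminator, real_input, fake_output, real_target):
    """Fool the discriminator while staying close to the ground-truth image."""
    # The conditional discriminator sees (input, generated) pairs stacked on channels;
    # the generator wants those pairs to be judged "real".
    pred_fake = discriminator(torch.cat([real_input, fake_output], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    recon = l1(fake_output, real_target)
    return adv + LAMBDA_L1 * recon
```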

The Camera, Transformed By Machine Learning

This article traces the arrival of AI and ML in the field of computer vision, and more specifically in cameras. Before 2010, some of the best-performing vision algorithms ran on code written and reasoned out by humans. With the rise of AI, systems capable of learning patterns from thousands of pieces of data were able to outperform some of the best human-designed algorithms, bringing AI into the vision world.
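To make the "hand-written code vs. learned patterns" contrast concrete, here is a small hedged sketch: a fixed Sobel filter with human-chosen weights next to a convolution layer whose weights would instead be learned from data. This is toy code of my own, not any specific system from the article.

```python
# Hand-crafted filter vs. learnable filter, assuming PyTorch.
import torch
import torch.nn as nn

# Hand-crafted: Sobel edge filter, weights reasoned out by humans.
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)

image = torch.rand(1, 1, 64, 64)  # stand-in grayscale image
edges = nn.functional.conv2d(image, sobel_x, padding=1)

# Learned: same kind of operation, but the weights start random and would be
# adjusted by gradient descent over thousands of labeled examples.
learned_filters = nn.Conv2d(1, 8, kernel_size=3, padding=1)
features = learned_filters(image)

print(edges.shape, features.shape)
```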

Now, with intelligent vision systems, the operator has less responsibility when taking a photo and can leave more of the work to the camera. The operator’s role should be to exercise creative freedom in deciding on a photograph when taking it, and the AI should only step in to enhance the photo if the operator requests it (sometimes AI can make a photo worse). We are currently at the point where operators and cameras work together, but we may well be entering a future in which cameras make most of the decisions when taking a photograph.

Project To Share – Light Field Photography with a Hand-held Plenoptic Camera

https://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf

The above paper was one of the founding papers in micro-lens camera technology. Researchers at Stanford wanted to create a system that replicated bug eyes, and ended up using an array of micro-lenses, each of which captures a small patch of the image. These small patches overlap with their neighbors and can then be used in post-processing to exhibit parallax in the image. Having multiple micro-lenses also allows for depth detection of objects (since the amount of parallax movement corresponds to depth), a technology that Google currently employs. Rather than using multiple cameras as iPhones do, Google Pixel devices with a single lens have a micro-lens array behind the camera that can be used for depth detection and background blurring.
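As a rough illustration of how parallax between neighboring views turns into depth, the toy block-matching sketch below searches for the shift that best aligns a patch across two sub-aperture images; this is my own simplification, not the pipeline from the paper or from Google’s devices.

```python
# Toy depth-from-parallax: larger disparity between views means a closer object.
import numpy as np

def patch_disparity(view_a, view_b, y, x, size=8, max_shift=4):
    """Find the horizontal shift that best aligns a patch between two sub-aperture views."""
    ref = view_a[y:y + size, x:x + size]
    best_shift, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        if x + d < 0 or x + d + size > view_b.shape[1]:
            continue  # candidate patch would fall outside the second view
        cand = view_b[y:y + size, x + d:x + d + size]
        err = np.mean((ref - cand) ** 2)
        if err < best_err:
            best_shift, best_err = d, err
    return best_shift  # roughly proportional to inverse depth at this patch
```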

This research is interesting since it provides a more hardware-efficient way of capturing depth/parallax without the need for multiple cameras. It also lets photographers re-adjust the focal points of their images in post-processing, since the camera captures more data than a conventional camera and only decides which regions of each micro-lens to sample from when reconstructing a new image. Thus, it makes use of a large sample of data to create a conventional photo.
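That after-the-fact refocusing can be illustrated with a "shift and add" sketch: shift each sub-aperture view in proportion to its position in the lens array, then average, with the shift amount selecting the synthetic focal plane. The array layout and the `alpha` parameter below are my own simplification, not the paper’s exact formulation.

```python
# Toy shift-and-add refocusing over a grid of sub-aperture views, using NumPy.
import numpy as np

def refocus(subviews, alpha):
    """subviews has shape (U, V, H, W): one 2-D image per micro-lens position (u, v)."""
    U, V = subviews.shape[:2]
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros_like(subviews[0, 0], dtype=float)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the array center.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(subviews[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)  # changing alpha moves the synthetic focal plane
```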