Since the advent of the camera, the interaction between user and device has evolved into a complex series of choices that open artistic opportunities. One can take advantage of sensor options, as in long-exposure photography. While exploring, one may run into the limitations of a sensor, which can range from the technical, such as the maximum distance of a depth camera, to the phenomenal. The influx of machine learning tools allows the relationship between operator and camera to change. Limits such as the flatness of a regular photograph are surpassed with relative ease. The question of what the photographer is collecting becomes a focus: is it an unprocessed data set to be used? Furthermore, a Duchampian relationship with AI becomes apparent, raising the question of whether the photographer is the AI or the one who frames the camera the AI uses, and of the degree of collaboration between the two.
Reading01 – Olivia Cunnally
With advancing technology such as GANs, Google Clips, and Pinterest Lens, as discussed in the article, the relationship between user and camera becomes greyer. When it comes to questions of ownership and authorship, I think these terms have to be clearly defined. Since these cameras have the agency to take photos on their own, are they the authors, or are those who created the algorithms they learn from the authors, etc.? As technology advances, it seems that these discussions lean toward the reality that humanity may have to work alongside machines rather than controlling them and using them as tools. I would argue that this does not eliminate the role of the user, though, since images can always be altered by a variety of factors through the user's intention or artistic vision. Although the traditional role of the photographer is shifting, I do believe the user has input in the relationship with the camera, and therefore their role can have an impact on the images captured. Overall, I am still formulating my opinion on cameras that have their own agency. However, though some may interpret this technology as bleak for photographers or push back against it, I believe it can be viewed optimistically as a chance for a new kind of collaboration with technology.
The Camera, Response
When I was reading this article, I found the Clips project to be especially interesting in terms of operator and user, for it casts both the initial programmer and the program itself as the operator. However, the user has decided where to place the camera and has left it there to “record,” so in that sense it is very similar to someone strategically placing a video camera and leaving it to record for a prolonged period of time. Therefore, authorship and the role of operator still rest with the individual who initially placed the camera.
What drew my attention the most, however, was how image processing algorithms can use real images to “manufacture” reality, either by manipulating the image itself into an ideal or by creating data in places where it was lost or never received. This places concepts of reality, and how we perceive reality, into the programmer’s hands: when choosing and designing algorithms for image processing programs, they are deciding what is natural and what is real, and projecting those assumptions onto us. This can be extremely harmful, for I personally believe that we are extremely dependent on visual information and will almost always equate truth with seeing.
Reading01-The Camera, Transformed by Machine Learning
The article notes that computers aren’t exactly learning to “see” the way humans do. Instead, they’re learning to “see” like computers, and that’s precisely what makes them useful to humans. Pinterest Lens is one example: it acts as an on-the-go research aid that functions as an active database, recording what it “sees” and providing feedback. Although it uses a camera, I’d say the computer is merely using it as a communication device with the human. The camera simply provides the computer “eyes” to assist us with our query, instead of just a brain and words like a search engine would.
This ever-growing fear of machine learning and of computers with agency superior to humans’ is pinpointed in a design decision of the Google Clips camera: the designers chose to include a shutter button not for functionality, but for the familiarity and comfort of the user. Ordinarily, this makes sense, as any design should appeal to its target audience. In this case, however, the lack of required human input is a terrifying thought for many; we see this in shows like Black Mirror too. Humans like to feel in control, and like they accomplished something by capturing a photograph or using any kind of technology. There is value placed on the act of creation, or authorship. I wouldn’t say that cameras yet have any degree of independent agency that isn’t pre-programmed by humans, but they certainly are advancing and becoming smarter as tools in our everyday lives. The camera’s trajectory seems to be following the advancement path of the telephone.
The Camera, Transformed by Machine Learning
When I read this article, I was reminded of another article I’d read a while ago about a woman who explores the world in Google Street View and takes screenshots. This is her photography, and she sells it online. It presents an interesting question of authorship as well, as she was not the one to take the original photographs. She’s just curating certain frames from the larger collection of photos on Street View taken by Google’s cameras. In this case, I think it’s pretty clear that she is still the photographer and has authored the work; there was intention behind the selection and composition of each of her frames.
The Google Clips camera, on the other hand, doesn’t even require someone to curate the photos. The machine learning model does that. So who’s the author? I think the answer is the authors of all the photographs the model was trained on. The algorithm has been trained on what constitutes a moment worthy of photographing from others’ choices of such moments. I don’t really think we need to worry about machines making intentional choices about a “decisive moment” yet, given that all the ML is really doing is predicting what best matches the existing data it’s been trained on.
While the definition of a camera may be in constant flux due to changes in technology, I think that a steady definition of a photographer or author could be useful. Intentionality is definitely a part of that, and probably is the most liberal approach to encompassing everything that’s historically been thought of as authored by somebody. Even photographs taken by a machine learning model can be thought to be authored, just not by the machine itself.
Reading 01
We often think of cameras as devices that are commanded by the photographer, but they’ve been slowly moving further and further away from that definition. Most, if not all, of the students in the class grew up around cameras that had auto-focus and auto-aperture/exposure capabilities. But now that cameras can press the shutter (or the digital equivalent) on their own, we begin to question the role of authorship over the photograph. At this point, I don’t think we need to be asking that question yet. There is still an authorship role present in the use of these cameras: they still need to be placed somewhere, and then the resulting photographs need to be curated. While the actual photo-taking is mindless for the photographer, there is still a substantial amount of agency and consideration present in the act of setting up the camera and selecting the resulting photographs. Because of this (and I think this can also be extended to most automated art), the user of a Clips camera or any other camera that takes its own photos still has a role of authorship over the resulting art. That role may be different than it was in the past, but it has yet to vanish.
Reading 01
One of the assumptions we compiled in class was something along the lines of ‘photos are an intentional choice made by the author.’ The way this article conceives of certain image-making systems like GANs or Google Clips shows how this assumption can be challenged (beyond just ‘accidental photos’ in the sense of your finger slipping on the phone camera’s shutter button). Like all algorithmic art, the author concedes aesthetic control but retains a certain degree of conceptual control. Though the author themself doesn’t decide exactly what dog a GAN will generate or what Google Clips will deem salient, the author (or the authors who wrote the program) is still setting the rules for what the system will do. The conceptual space of possible images is limited.
I’ve never understood the concept of ‘creativity’, but maybe in order to have true authorship, there must have been the ability to make something else instead. A GAN (in 2020, at least) can’t be a true author because it doesn’t make true choices (and therefore by some definitions is not conscious.) (Also I don’t know where to draw the line on what makes a ‘true’ choice given the predictability of human behavior, but I definitely don’t think we’ve reached it.) This is why the whole Obvious/Christie’s thing was so silly. AI is not yet a creator, it is still a tool for creators, because its ‘own’ decisions are in fact highly predetermined. Actually, my whole argument is falling apart since I don’t believe in free will, never mind.
As the theorist and artist Grimes once noted, we may be nearing “the end of human-only art.” I’m not sure how we’ll know when it happens, exactly, but there will be a point when it can no longer be denied that AI cameras are making their own decisions, and have thus become photographers.
New algorithms, same quandaries: the AI-augmented camera
What is the role of a photographer when the camera takes its own photos? Christian Ervin, Design Director at Tellart, writes about the matter with a dusting of disempowerment in his piece The Camera, Transformed by Machine Learning. To him, examples of newfangled AI-powered cameras suggest “…a camera with a very different kind of relationship to its operator: a camera with its own basic intelligence, agency, and access to information.”
I would disagree with this characterization of a limp operator, which stems from how Ervin defines the act of photography. In discussing science fiction writer Bruce Sterling’s musings on futuristic computational photography, he describes photography’s “core action” as “the selection of a specific vantage point at a specific moment in time.”
Instead, I would suggest the act of photography begins when one selects the camera. The operator selects and sets up a capture device based on how they want to see the world. How they want to understand reality. Indeed, they’ve decided to be an operator of a camera in the first place, to comprehend and remember the world through visual moments.
The nature of the relationship between operator and “cameras with agency,” then, can be boiled down to the size of the black box. How apprised is the operator of the inner workings of the camera? How intimate is the operator with how that sensor understands the world, and how much knowledge do they have to alter it?
This gulf between operator and self-aware sensors is not the chasm that it seems. Indeed, a camera has always been a black box to its casual users, with the inner workings of light capture obscured away within. Similarly, I can’t be bothered to consider PNG/JPEG compression algorithms when using a smartphone camera. But conceptually, I understand the nature of capture I should expect from a Polaroid instant camera versus a digital smartphone camera. Likewise, in the case of Google Clips, I understand that an algorithm trained on well-composed images will capture similar kinds of images. Further, I still decide to photograph my world through that lens and place the camera in a location of my choosing; the curatorial tasks of the photographer abound.
The Camera, Transformed By Machine Learning
This article introduces the AI and ML movements in the field of computer vision, and more specifically in cameras. Before 2010, some of the best-performing vision algorithms relied on code written and reasoned about by humans. With the rise of AI, autonomous systems capable of finding patterns among thousands of pieces of data were able to outperform some of the best human-designed algorithms, thus introducing AI into the vision world.
Now, with intelligent vision systems, the operator has less responsibility when taking a photo and can leave the camera to do more of the work. The operator’s role should be to exercise creative freedom in deciding on a photograph when taking it, and the AI should only interfere to enhance the photo if the operator requests it (sometimes AI can make a photo worse). We are currently at a point where operators and cameras work together, but we may well be entering a future in which cameras make most of the decisions when taking a photograph.