The article notes that computers aren’t exactly learning to “see” the way humans do. Instead, they’re learning to “see” like computers, and that’s precisely what makes them useful to humans. Pinterest Lens is one example: it acts as an on-the-go research aid and functions as an active database, recording what it “sees” and providing feedback. Although it uses a camera, I’d say the computer is merely using the camera as a communication device with the human. The camera simply gives the computer “eyes” to assist us with our query, instead of just a brain and words the way a search engine would.
This ever-growing fear of machine learning and of computers with agency superior to humans is pinpointed in a design decision behind the Google Clips camera: the designers included a shutter button not for functionality, but for the familiarity and comfort of the user. Ordinarily this makes sense, since any design should appeal to its target audience. In this case, though, the deeper issue is that technology requiring no human input at all is a terrifying thought for many people. We see this anxiety in shows like Black Mirror, too. Humans like to feel in control, and to feel they accomplished something when capturing a photograph or using any kind of technology. There is value placed on the act of creation, or authorship. I wouldn’t say that cameras yet have any degree of independent agency that isn’t pre-programmed by humans, but they are certainly advancing and becoming smarter as tools in our everyday lives. The camera’s trajectory seems to be following the same path of advancement as the telephone.