Reading this article made me recall the candid Machinery Lecture and the collection of strange images taken, I think, from Google Earth. Even though the images were captured by a satellite, a human still needs to go through the stills to pick out the ones that indicate some strange phenomenon. The camera itself isn't able to distinguish between normal and strange behavior, even though it still has the agency to "take photos." My big takeaway when it comes to photography based on algorithms and computational methods is that it still needs a conscious human brain to make sense of the images and ascribe significance to them. I don't think the relationship between the user and the camera could ever be separated completely.