One particular example of nonhuman photography I have experienced is the trail camera: a camera people strap to a tree in their backyard that takes a photo whenever a motion sensor is tripped. The idea is that people can get pictures of the animals around their homes without having to physically be there to take them. I remember having one on a tree in the woods behind my house when I was a kid, but I honestly don't remember getting any photos from it, despite the area being full of deer and other animals. In theory, though, these cameras let people capture images they couldn't capture if they were present. A deer or a coyote might be scared off by a person standing nearby, so a candid image of that animal might not be possible for a human photographer without a lot of experience shooting wildlife.
The use of algorithms, computers, and networks in modern nonhuman photography intensifies the entanglement between the human and the nonhuman. It raises questions about intent and control, about how much control a photographer exerts over their camera (I'm using these terms loosely) when the taking of pictures is defined by a script rather than simply the press of a shutter. I would argue that the photographer may actually be exerting more control when the taking of the photo is scripted. The resulting photography is precise and consistent, something that is incredibly hard, if not impossible, for a person to achieve by hand.
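To make that idea of "scripted" picture-taking concrete, here is a minimal, hypothetical sketch of a motion-triggered capture loop. It assumes the OpenCV library and an ordinary webcam standing in for a trail camera, and it uses simple frame differencing rather than the infrared motion sensor a real trail camera would use; MOTION_THRESHOLD and COOLDOWN_SECONDS are made-up tuning values, not anything from an actual product.

```python
# Hypothetical sketch: a script, not a person, decides when the shutter fires.
import time
import cv2

MOTION_THRESHOLD = 500_000   # assumed tuning value: total pixel change between frames
COOLDOWN_SECONDS = 5         # assumed pause between captures so one animal isn't shot 100 times

cap = cv2.VideoCapture(0)    # default webcam standing in for the trail camera
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no camera available")
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
last_capture = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # "Motion" here is just the summed pixel-level difference from the last frame.
    change = cv2.absdiff(gray, previous).sum()
    previous = gray
    if change > MOTION_THRESHOLD and time.time() - last_capture > COOLDOWN_SECONDS:
        # The moment worth keeping is defined by the rule above, not by a finger on a button.
        cv2.imwrite(f"capture_{int(time.time())}.jpg", frame)
        last_capture = time.time()
```

The point of the sketch is that every capture follows the same rule, every time, which is exactly the kind of precision and consistency a human photographer standing in the woods could not match.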