While reading Joanna Zylinska’s text on ‘post-photography’, I kept thinking about how cameras have always been systems that combine natural intelligence with human agency to capture certain features of our reality. Aerial LIDAR systems, apart from capturing very accurate models of our environment, have proven useful for revealing long-term patterns on the earth’s surface, furthering our understanding of past civilizations. These systems rely heavily on our direction to work, but I agree that they cannot be categorized as cameras, only as capturing frameworks.
In this effort to record the nonhuman, researchers at the University of Vienna used acoustic cameras to measure elephant vocalizations in Nepal. Acoustic cameras use a geometric array of microphones, strategically oriented, to construct a two-dimensional representation of loud areas in a chosen direction. Beyond location information, the camera was also able to precisely visualize low-frequency sounds in the presence of ambient noise. The research showed that the sounds are produced in the elephants’ mouths rather than at the tips of their trunks, which helped the researchers better understand how elephants communicate. The difficulty of classifying this system as either sonic or visual encourages the institution of a post-photographic discipline charged with recording complex phenomena of our environment that no longer fall within the threshold of our senses.
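The imaging principle behind such devices is worth pausing on. What follows is a minimal sketch of delay-and-sum beamforming, the standard technique acoustic cameras of this kind rely on: the microphone array is electronically “steered” toward each direction on a grid, and the summed output power forms the two-dimensional sound map. The array geometry, test frequency, and noise level below are invented for illustration and are not the parameters of the Vienna study.

```python
import numpy as np

def steering_vector(mic_xyz, az, el, freq, c=343.0):
    """Phase shifts a plane wave from direction (az, el) produces at each mic."""
    d = np.array([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)])                 # unit vector toward the source
    delays = mic_xyz @ d / c                   # per-mic time advances (seconds)
    return np.exp(2j * np.pi * freq * delays)

def power_map(snapshots, mic_xyz, freq, az_grid, el_grid):
    """Delay-and-sum: steer the array across a grid and record output power."""
    P = np.zeros((len(el_grid), len(az_grid)))
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            w = steering_vector(mic_xyz, az, el, freq)
            y = snapshots @ w.conj() / len(w)  # beamformer output per snapshot
            P[i, j] = np.mean(np.abs(y) ** 2)
    return P

rng = np.random.default_rng(0)
mic_xyz = rng.uniform(-1.0, 1.0, size=(32, 3))   # hypothetical 2 m random array
freq = 500.0                                      # hypothetical test tone (Hz)
true_az, true_el = np.deg2rad(40.0), np.deg2rad(10.0)

# Simulate 200 noisy snapshots of a tone arriving from the true direction.
a = steering_vector(mic_xyz, true_az, true_el, freq)
s = np.exp(2j * np.pi * rng.uniform(size=200))    # random-phase source samples
noise = 0.3 * (rng.standard_normal((200, 32)) + 1j * rng.standard_normal((200, 32)))
snapshots = np.outer(s, a) + noise

az_grid = np.deg2rad(np.arange(-90, 91, 2))
el_grid = np.deg2rad(np.arange(-30, 31, 2))
P = power_map(snapshots, mic_xyz, freq, az_grid, el_grid)
i, j = np.unravel_index(np.argmax(P), P.shape)
est_az, est_el = np.rad2deg(az_grid[j]), np.rad2deg(el_grid[i])
print(f"loudest direction: azimuth {est_az:.0f} deg, elevation {est_el:.0f} deg")
```

The power grid `P` is exactly the kind of image an acoustic camera overlays on its video feed: the peak marks the loudest direction, recovered here despite the added noise. Real devices repeat this across many frequency bands, which is what lets them isolate low-frequency calls from broadband ambient sound.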