The Black Box Camera, created by Jasper van Loenen, uses artificial intelligence to generate a physical print of a subject based on its own interpretation. The user points the camera at a subject, presses a button, and the internal system analyzes the photo, creates a caption, and uses that caption to generate a unique image, which is then printed. This project’s use of AI in direct, real-time interaction with its environment is what I find so inspiring. What I think the creators got right here is capturing the mystery of how AI interprets real environments, in real time. I think they could take it a step further by giving users more creative liberty in how the AI is used to generate the new image, allowing ‘AI photographers’ to exist in the same way that we see ‘AI artists’ emerging. I also think it could be cool to make a similar technology with a more specific transformation step, i.e., using the AI to enhance the image in a particular way rather than producing a general recreation of the photo. The image produced is limited by the descriptive text that is generated, so why go from image -> text -> image when you can go straight from image -> image?
Related Technologies: Internally, the Black Box Camera uses a Raspberry Pi with a camera module to take a photo when the user presses the button. The photo is then analyzed and a caption is generated, which is used as the input prompt for DALL-E. The resulting image then gets printed by the internal Instax portable photo printer, whose Bluetooth protocol was reverse engineered so it could be controlled from custom software.
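To make that pipeline concrete, here is a rough Python sketch of how a capture-to-print loop like this could be wired together on a Raspberry Pi. This is my own reconstruction, not van Loenen’s code: the BLIP captioner is just a stand-in for whatever captioning model the camera actually uses, the image generation goes through the standard OpenAI API, and the Instax step is only stubbed out, since the reverse-engineered Bluetooth protocol is specific to the project.

```python
# Illustrative sketch of the capture -> caption -> generate -> print pipeline.
# The caption model, GPIO pin, and printer helper are assumptions, not the
# Black Box Camera's actual implementation.

import requests
from gpiozero import Button                      # physical shutter button on a GPIO pin
from picamera import PiCamera                    # Raspberry Pi camera module
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration  # stand-in captioner
from openai import OpenAI                        # DALL-E image generation

PHOTO_PATH = "capture.jpg"
RESULT_PATH = "generated.png"

client = OpenAI()  # reads OPENAI_API_KEY from the environment
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")


def caption_photo(path: str) -> str:
    """Describe the captured photo in one sentence (BLIP used as an example model)."""
    image = Image.open(path).convert("RGB")
    inputs = processor(image, return_tensors="pt")
    output = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(output[0], skip_special_tokens=True)


def generate_image(prompt: str) -> str:
    """Turn the caption into a new image with DALL-E and save it locally."""
    response = client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="512x512")
    image_url = response.data[0].url
    with open(RESULT_PATH, "wb") as f:
        f.write(requests.get(image_url, timeout=60).content)
    return RESULT_PATH


def print_on_instax(path: str) -> None:
    """Placeholder: sending the image over the reverse-engineered Instax
    Bluetooth protocol is project-specific and not reproduced here."""
    print(f"Would send {path} to the Instax printer over Bluetooth.")


def main() -> None:
    button = Button(17)                 # assumed GPIO pin for the shutter button
    camera = PiCamera()
    while True:
        button.wait_for_press()         # block until the user presses the button
        camera.capture(PHOTO_PATH)      # take the source photo
        prompt = caption_photo(PHOTO_PATH)
        result = generate_image(prompt)
        print_on_instax(result)


if __name__ == "__main__":
    main()
```

Even in this simplified form, you can see the point I made above: the only thing DALL-E ever sees is the one-sentence caption, so everything else about the original photo is thrown away before the new image is generated.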