I came across a 3D photo inpainting demo from researchers Shih et al. via a Twitter post from ML artist Gene Kogan. The neural networks involved estimate a depth map of the input, then synthesize new color and depth content to (somewhat) realistically fill the gaps revealed behind foreground objects. As Kogan’s experiment shows, it even works on artificially created images like Deep Dream output.
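To make the "gaps" concrete: once you have a depth map, you can naively re-project each pixel sideways by a depth-scaled disparity, and the regions left unfilled are exactly the disocclusions the inpainting network has to invent. This is a minimal sketch of that idea, not the method from Shih et al.'s paper (which uses a layered depth image representation and learned inpainting); the function name, the shift parameter, and the "larger depth value = closer" convention are my assumptions for illustration.

```python
import numpy as np

def warp_with_depth(image, depth, shift=2):
    """Naive forward warp: move each pixel horizontally by a
    depth-scaled disparity. Returns the warped image and a mask of
    holes (disoccluded pixels) that an inpainting model would fill.

    Assumptions (illustrative, not from the paper): grayscale image,
    larger depth value means closer to the camera, and no z-buffering,
    so overlapping pixels simply overwrite each other.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Normalize depth to [0, 1] so `shift` sets the maximum disparity.
    span = depth.max() - depth.min()
    d = (depth - depth.min()) / (span + 1e-8)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(shift * d[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    return out, ~filled  # ~filled marks the holes to be inpainted

# Toy example: a flat background (depth 0) next to a "close" object
# (depth 1). Warping exposes holes along the depth discontinuity.
img = np.arange(16, dtype=float).reshape(4, 4)
depth = np.zeros((4, 4))
depth[:, 2:] = 1.0
warped, holes = warp_with_depth(img, depth, shift=2)
```

In this toy case the close columns shift out of frame and nothing lands in their place, so `holes` is True there; a real pipeline would hand those regions to the inpainting network rather than leave them black.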
I wanted to use their Colab to restore the dimensionality of old family photographs. This photo is particularly interesting to me — I think it shows something about love, home, and maybe privilege. Beyond the initial shock factor of bringing these two-decade-old moments ‘to life,’ I thought it was interesting how the algorithm failed to do so perfectly. For example, it couldn’t understand the simple checkered pattern on my mom’s shirt and distorted it. The profile of her face is also imperfect, making the scene feel more like a paper cutout. And the kitchen wobbles a little, as if I’m viewing the scene through thick glass. As usual, ML has created a visual equivalent of the distortion and degradation of memories in the human brain (… insert my BHA Capstone here.)