I have been experimenting with different ways of implementing the 3D rolling shutter effect using the Intel RealSense depth camera. So far I’ve been able to modify an existing example file that displays a live depth feed using the Python library.
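For context, the capture side of that example boils down to something like the sketch below. I started from one of the librealsense Python samples; the stream settings here are placeholders rather than necessarily what I used:

```python
import numpy as np
import pyrealsense2 as rs

# Start a depth-only stream. Resolution and frame rate are assumptions.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

pc = rs.pointcloud()
try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        points = pc.calculate(depth)
        # Each vertex is a camera-relative (x, y, z) position in meters.
        verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
        # ...hand verts off to the viewer...
finally:
    pipeline.stop()
```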
The following is what the raw output from the camera looks like. As you can see, it displays a point cloud of the recorded vertices.
From here I wrote a function to modify the vertices being displayed in the live view. Vertices from each new frame are placed in a queue based on their distance from the lens, and the live view only shows points at the front of the queue. After each frame the queue shifts, so vertices farther from the camera reach the live view a little later than the vertices right in front of the camera.
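A simplified sketch of that queue (the class name, bin count, and depth range are placeholders I’m using for illustration):

```python
from collections import deque

import numpy as np

class DepthDelayBuffer:
    """Delays vertices by depth: the farther a point is from the lens,
    the older the frame it is drawn from."""

    def __init__(self, num_bins=32, max_depth=4.0):
        self.num_bins = num_bins    # queue length = number of depth bins
        self.max_depth = max_depth  # depth (meters) mapped to the last bin
        self.history = deque(maxlen=num_bins)

    def push(self, verts):
        # Newest frame sits at index 0; older frames shift toward the back.
        self.history.appendleft(verts)

    def sample(self, reverse=False):
        shown = []
        for age, verts in enumerate(self.history):
            # Bin each vertex by its distance from the lens.
            z = verts[:, 2]
            bins = np.clip(z / self.max_depth * self.num_bins,
                           0, self.num_bins - 1).astype(int)
            if reverse:
                bins = self.num_bins - 1 - bins
            # Keep only the vertices whose depth bin matches this frame's
            # age, so a bin-k point surfaces k frames after it was captured.
            shown.append(verts[bins == age])
        return np.concatenate(shown) if shown else np.empty((0, 3), np.float32)
```

Each capture iteration pushes the new vertex array and renders whatever `sample()` returns, so nearby points show up immediately while distant ones trail behind by up to `num_bins` frames.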
This video shows the adjustments made to the 3D vector array, which aren’t reflected in the color map, producing this interference-like color pattern. In this example the sampling is reversed, so farther vertices appear sooner than closer ones.
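In terms of the sketch above, that reversal is just the flipped bin index:

```python
buf = DepthDelayBuffer()

# Per frame, inside the capture loop:
buf.push(verts)
shown = buf.sample(reverse=True)  # far vertices now draw from the newest frames
```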
The main issue I’ve run into is the memory usage of doing these computations live. Because of the dropped frame rate, this clip was sped up several times to restore the fluidity of motion. The next thing I plan on doing is getting the raw data into Unity so I can make further changes with the code I wrote previously.
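In the meantime, if the bottleneck is simply vertex count, one thing worth trying is the decimation filter that ships with the RealSense SDK. It downsamples the depth frame before the point cloud is computed, which shrinks every buffer downstream:

```python
import pyrealsense2 as rs

# Downsample the depth frame before computing vertices. A magnitude of 2
# halves each axis (about a quarter of the points); the value is a guess.
decimate = rs.decimation_filter()
decimate.set_option(rs.option.filter_magnitude, 2)

# In the capture loop, filter before calculating the point cloud:
#   depth = decimate.process(depth)
#   points = pc.calculate(depth)
```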