For the Final Project, I plan to create an audio synthesizer that uses camera-based motion tracking of the user’s hand to generate sound and visualizations displayed on the screen. The visualizations will probably be layered over the live video of the user, and sound will be generated continuously. The hand’s position will control sound properties including volume, pitch, and wave type, with perhaps occasional “sweet spots” on the screen that trigger a special sound. A minimal sketch of this mapping follows.
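To make the mapping concrete, here is a rough sketch using the Web Audio API. I haven’t picked a hand-tracking library yet, so the sketch just assumes a normalized (x, y) hand position arrives from somewhere; `updateSynthFromHand`, `SWEET_SPOT`, and the specific frequency range are all placeholder choices, not final decisions.

```typescript
// Sketch of the hand-to-sound mapping with the Web Audio API.
// (x, y) is assumed to be the tracked hand position, normalized to [0, 1].

const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();

osc.connect(gain).connect(ctx.destination);
osc.start();

// Hypothetical "sweet spot": a small screen region that swaps the
// timbre for a special sound when the hand enters it.
const SWEET_SPOT = { x: 0.5, y: 0.5, radius: 0.05 };

function updateSynthFromHand(x: number, y: number): void {
  // Horizontal position -> pitch, spanning roughly two octaves (220-880 Hz).
  const freq = 220 * Math.pow(2, x * 2);
  osc.frequency.setTargetAtTime(freq, ctx.currentTime, 0.02);

  // Vertical position -> volume (top of the frame is loudest).
  gain.gain.setTargetAtTime(1 - y, ctx.currentTime, 0.02);

  // Inside the sweet spot, switch the wave type.
  const dist = Math.hypot(x - SWEET_SPOT.x, y - SWEET_SPOT.y);
  osc.type = dist < SWEET_SPOT.radius ? 'square' : 'sine';
}
```

`setTargetAtTime` smooths the parameter changes so the pitch and volume glide rather than click as the hand moves.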
As for the visualizations, these will be made to reflect the timbre of the sound itself. For example, if a sine wave is being generated, a smoother visualization will be drawn, whereas a square wave will produce something more rigid and angular. A rough sketch of how this might play out (without sound) is depicted below.
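As a first pass at the timbre-to-shape idea, here is a minimal drawing sketch on a plain 2D canvas (the real version would draw over the camera feed instead). The function name `drawWaveform` is just illustrative:

```typescript
// Sketch of timbre-driven drawing: a sine wave renders as a smooth
// curve, while a square wave renders with hard, rigid edges.

type WaveType = 'sine' | 'square';

function drawWaveform(
  g: CanvasRenderingContext2D,
  wave: WaveType,
  width: number,
  height: number
): void {
  const mid = height / 2;
  const amp = height / 4;
  const cycles = 3;

  g.clearRect(0, 0, width, height);
  g.beginPath();
  for (let px = 0; px <= width; px++) {
    const phase = (px / width) * cycles * 2 * Math.PI;
    const s = Math.sin(phase);
    // For a square wave, clamp the sine to +/-1, producing rigid steps.
    const y = mid - amp * (wave === 'square' ? Math.sign(s) : s);
    px === 0 ? g.moveTo(px, y) : g.lineTo(px, y);
  }
  g.stroke();
}

const canvas = document.querySelector('canvas')!;
const g = canvas.getContext('2d')!;
drawWaveform(g, 'sine', canvas.width, canvas.height);
```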