Project 2: FacePlace

FacePlace explores the intersection of real and virtual worlds through the examination, digital representation, and sonification of faces. Using FaceOSC, interfaced through Max, we captured single frames (pictures) of guests’ faces. Although the goals of this project were mostly technical, the experience proved quite thought-provoking: the idea of “scanning” your face only for it to exist in a virtual world leaves the installation’s concept very open-ended.

These frames were sent into a virtual world, built in Max and experienced through an HTC Vive. Simultaneously, the raw data received over OSC from FaceOSC was analyzed in Max, then scaled into appropriate ranges to control multiple parameters of audio processors.
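The scaling step above is essentially a linear range mapping, as done by Max's [scale] object. A minimal sketch in Python (the specific FaceOSC address and parameter ranges here are hypothetical examples, not our actual patch values):

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max],
    like Max's [scale] object (no clamping)."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Hypothetical mapping: a FaceOSC gesture value such as
# /gesture/mouth/height (roughly 0-10) driving a filter cutoff in Hz.
cutoff = scale(5.0, 0.0, 10.0, 200.0, 2000.0)  # -> 1100.0 Hz
```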

Adrian produced the VR space: creating the virtual world, handling technical issues around VR delivery and communication between devices, and shaping the overall user experience design.

Joey worked hand in hand with Adrian on user experience design, splicing the data from FaceOSC (supplying Adrian with matrices over jit.net.send objects) and using it to create a unique sonic environment for each face from custom sound files. To place these sounds within the virtual world, the HOA library (RIP) was used for spatialization.
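The HOA library handles higher-order ambisonic spatialization inside Max; the core idea can be illustrated with a first-order B-format encode of a mono source at a horizontal angle. This is a simplified sketch of the underlying math, not the HOA library's actual API:

```python
import math

def encode_bformat(sample, azimuth):
    """First-order ambisonic (B-format) encode of a mono sample at a
    horizontal azimuth in radians (0 = front, pi/2 = left).
    Illustrative only; HOA works at higher orders with its own objects."""
    w = sample * (1.0 / math.sqrt(2.0))  # omnidirectional component
    x = sample * math.cos(azimuth)       # front-back component
    y = sample * math.sin(azimuth)       # left-right component
    return w, x, y

# A source directly in front contributes fully to X and nothing to Y.
w, x, y = encode_bformat(1.0, 0.0)
```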

First tests of a virtual world in the Vive, powered by Max.

Testing the VR world, using Seymour as our test subject.

First playtests of the installation.

Experience photos to come.

Git Repository