MagiScan: Accessible 3D Scanning

MagiScan is an AI-driven 3D scanning app that captures real-world objects and turns them into 3D models in a variety of export formats; you can even transfer models into NVIDIA Omniverse or convert them into Minecraft block structures. It offers a fast, affordable way to model real-life objects, using AR and LiDAR technology to quickly create 3D schematic floor plans in both metric and imperial units. The LiDAR tech can capture entire spaces and surroundings up to 5 meters away. You’re essentially taking something from the physical world and turning it into a new possibility. While the 3D scans are wildly impressive and detailed for a phone-app scanner, processing took 7 hours on the free version, and unfortunately you’d need to pay for a subscription to continue using the app beyond its 3 free scans.

Image: Various 3D Scans


Link: https://magiscan.app/

 

Captured: Are you a Bully, Target, or Bystander?

Captured by Hanna Haaslahti

Haaslahti’s immersive digital art installation thrusts viewers into the dual role of spectator and actor, assigning them a “new identity” within a virtual community. Viewers are incorporated through face-capture technology and crowd simulation, and the community explores the theme of “Bully, Target, or Bystander,” simulating the viewer’s experience of human crowd behavior, where unpredictable moods fester (Diversion Cinema).

CREDIT: Diversion Cinema

By guiding the viewer through uncomfortable and traumatic scenarios of human crowd behavior, the installation provokes strong reactions and deep reflection about our environment and the individuals who surround us. Viewers witness “themselves” within these simulated scenarios, evoking feelings of guilt, horror, or a desire to escape. Seeing oneself in such a vulnerable position – regardless of the role assigned – forces individuals to confront uncomfortable, somewhat unavoidable situations.

Critique

While I love the concept, the current use of solid-colored avatar bodies somewhat detracts from the reality of the message, reminding viewers that the installation is not entirely real despite seeing their own faces. Employing a more visually realistic background or environment, rather than a white backdrop, would further emphasize the art’s message and increase its emotional resonance.

 

Chain of Influences

Haaslahti’s central tools are “computer vision and interactive storytelling,” and she is primarily “interested in how machines shape social relations” (Haaslahti). Her past artwork makes this perspective clear; her focus is influenced by:

    • Computer vision techniques replicating Big Brother: the feeling of being constantly watched
    • Visual perception and real-time mapping
    • Participatory simulations that cast viewers as actors, using hyper-realistic capture and 3D modeling techniques
    • The social implications of technology on human relationships

Haaslahti has not listed sources or a biography.

Links

https://www.diversioncinema.com/post/captured-the-installation-by-hanna-haaslahti-enters-diversion-cinema-s-line-up

https://www.diversioncinema.com/captured

https://www.hannahaaslahti.net/about/

 

The PSiFI: Capturing Human Emotion

The PSiFI 

The “Personalized Skin-Integrated Facial Interface” (PSiFI) is an advanced human emotion recognition system built around a flexible, wearable mask designed to “recognize and translate human emotions” in real time (Nature Communications).

The PSiFI utilizes strategically placed multimodal triboelectric sensors (TES) that detect changes in movement, facial strain, and vibration on the body, capturing speech, facial expression, gesture, and various physiological signals (temperature, electrodermal activity).

The device incorporates a circuit for wireless data transfer to a machine learning Convolutional Neural Network (CNN) algorithm that classifies facial expressions and speech patterns. The more the classifier trains, the more accurate the analysis of emotion (Nature Communications).
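The pipeline described above (sensor signal → convolutional feature extraction → classification) can be sketched in miniature. Everything below is an illustrative toy, not the PSiFI’s actual CNN or data: the strain traces, the single hand-picked filter, the two emotion labels, and the nearest-centroid classifier are all stand-in assumptions chosen to keep the example self-contained.

```python
# Toy sketch of a PSiFI-style pipeline: a windowed sensor signal is passed
# through a 1D convolution to extract features, which a simple classifier
# then maps to an emotion label. All shapes and labels are illustrative.

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as used in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def features(signal):
    """One change-detecting filter + max pooling -> a 2-number feature vector."""
    edge = conv1d(signal, [1.0, -1.0])   # responds to sudden drops in strain
    return [max(edge), max(signal)]      # peak change, peak amplitude

def nearest_centroid(train, x):
    """Classify x by the closest mean feature vector per label.
    More training examples -> better centroids -> more accurate labels."""
    grouped = {}
    for label, sig in train:
        grouped.setdefault(label, []).append(features(sig))
    best, best_d = None, float("inf")
    for label, feats in grouped.items():
        mean = [sum(col) / len(col) for col in zip(*feats)]
        d = sum((a - b) ** 2 for a, b in zip(mean, features(x)))
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical strain traces: a "smile" ramps up smoothly, a "surprise" spikes.
smile    = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.5]
surprise = [0.0, 0.0, 0.9, 0.9, 0.0, 0.0, 0.0, 0.0]
train = [("smile", smile), ("surprise", surprise)]

print(nearest_centroid(train, [0.0, 0.1, 0.25, 0.35, 0.45, 0.5, 0.5, 0.4]))
# prints: smile
```

A real CNN would learn its filters from labeled recordings rather than using a fixed kernel, but the flow — raw sensor window in, convolutional features out, label from a trained classifier — is the same shape as what the paper describes.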

Image: PSiFI      CREDIT: Nature Communications

 

Inspiration/Research

In my own research I have used various devices to collect physiological data, and the sensors are ridiculously bulky and temperamental. The ability to capture a variety of human emotions using a combination of machine learning, sensors, and computational power within one device is incredible!

In my art practice, I often explore ways to evoke emotional reactions from the audience. I could imagine creating installations where participants encounter thought-provoking or uncomfortable situations while wearing a motion-sensor mask that analyzes and tracks their every movement and physiological response. This would not only reveal the external, visible reactions to the artwork but also provide insight into more internal, unseen responses.

 

Critique

The researchers’ decision to test the PSiFI in a VR environment allowed for extremely controlled research conditions. By employing a VR-based data concierge that adjusted content (music or a movie trailer) based on user reactions, the study demonstrated the system’s ability to accurately keep up with an individual’s emotions.

 

Links

https://thedebrief.org/groundbreaking-wearable-technology-can-accurately-detect-and-decipher-human-emotions-in-real-time/

https://www.nature.com/articles/s41467-023-44673-2#citeas