LoRaWAN Environmental Sensor

LoRaWAN Environmental Sensors: Capturing the Pulse of the Environment

One intriguing capture device that has caught my attention is the LoRaWAN Environmental Sensor. These sensors are capable of monitoring various environmental parameters such as air quality, temperature, and humidity across large areas. The idea of using real-time data to create art is what truly inspires me about these devices.

LoRa environmental sensors are environmental sensors equipped with LoRa radio nodes: temperature sensors, humidity sensors, air pressure sensors, indoor air quality sensors, and so on. Unlike traditional environmental sensors, LoRa environmental sensors send their collected data to LoRa gateways over the LoRaWAN protocol, which then forward it to a server. Because LoRa environmental sensors transmit over long-range, low-power radio, their networking is also simpler than that of traditional sensors.
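Because LoRa uplinks are tiny (often just a handful of bytes), sensor readings are usually packed into a compact binary payload and decoded on the server side. Below is a minimal sketch of such a decoder in Python; the 6-byte payload layout is an assumption for illustration, not any specific product's format.

```python
import struct

def decode_uplink(payload: bytes) -> dict:
    """Decode a hypothetical 6-byte sensor uplink (layout assumed for
    illustration): int16 temperature (0.01 degC), uint16 humidity
    (0.01 %RH), uint16 battery (mV) -- all big-endian."""
    temp_raw, hum_raw, batt_mv = struct.unpack(">hHH", payload)
    return {
        "temperature_c": temp_raw / 100,
        "humidity_pct": hum_raw / 100,
        "battery_v": batt_mv / 1000,
    }

# Example uplink: 23.45 degC, 56.78 %RH, 3.30 V
print(decode_uplink(bytes.fromhex("0929162E0CE4")))
```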

If I had access to LoRaWAN sensors, I would use them to develop a data-driven art installation that visualizes fluctuating environmental features in urban spaces, transforming data we are normally unconscious of into dynamic art forms. The installation would be a living reflection of the community we live in, encouraging viewers to consider their connection to the urban environment. I think the project could be made even more compelling by incorporating interactive elements where viewers could contribute data or see the impact of their immediate actions in real time. This would add a layer of personal responsibility and immediacy to the environmental message. This concept is inspired by earlier data-driven projects like Sarah Cameron Sunde’s The Climate Ribbon, which also uses environmental data to create responsive art.

https://www.niubol.com/Product-knowledge/LoRa-environmental-sensors.html

https://www.youtube.com/watch?app=desktop&v=d_TclUAKN10

Captured: Are you a Bully, Target, or Bystander?

Captured by Hanna Haaslahti

Haaslahti’s immersive digital art installation thrusts viewers into the dual role of spectator and actor, assigning them a “new identity” within a virtual community. Incorporating viewers through face-capture technology and crowd simulation, the work explores the theme of “Bully, Target, or Bystander,” simulating the viewer’s experience of human crowd behavior, where unpredictable moods fester (Diversion Cinema).

CREDIT: Diversion Cinema

By guiding the viewer through uncomfortable and traumatic scenarios of human crowd behavior, the installation provokes strong reactions and deep reflection on our environment and the individuals who surround us. Viewers witness “themselves” within these simulated scenarios, evoking guilt, horror, or a desire to escape. Seeing oneself in such a vulnerable position – regardless of the role assigned – forces individuals to confront uncomfortable, somewhat unavoidable situations.

Critique

While I love the concept, the use of flat, single-color bodies for the simulated crowd somewhat detracts from the reality of the message, reminding viewers that the installation is not entirely real even as they see their own faces. Employing a more visually realistic background or environment, rather than a white backdrop, would further emphasize the work’s message and increase its emotional resonance.

 

Chain of Influences

Haaslahti’s central tool is “computer vision and interactive storytelling,” and she is primarily “interested in how machines shape social relations” (Haaslahti). Looking at her past artwork, Haaslahti’s perspective is clear: her focus is influenced by:

    • Computer vision techniques replicating Big Brother: the feeling of being constantly watched
    • Visual perception and real-time mapping
    • Participatory simulations that cast viewers as actors, using hyper-realistic capture and 3D modeling techniques
    • The social implications for human relationships

Haaslahti has not listed sources or a biography.

Links

https://www.diversioncinema.com/post/captured-the-installation-by-hanna-haaslahti-enters-diversion-cinema-s-line-up

https://www.diversioncinema.com/captured

https://www.hannahaaslahti.net/about/

 

Hugues Bruyère – Dpt

I was interested in this ongoing exploration by Hugues Bruyère at Dpt. He uses Stable Diffusion and SDXL Turbo to create the real-time images shown on the magnifying glass. The images adapt and change to reflect the background, re-rendered according to the style prompt Hugues assigns in Stable Diffusion. What I found interesting about this particular project is that you get a real-time comparison between what we perceive and how the computer perceives the same subject. It is a constantly shifting and adapting filter in the lens of the computer.
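For a sense of how such a real-time re-interpretation loop can be built, here is a minimal sketch using Hugging Face’s diffusers library with the public SDXL Turbo checkpoint; the prompt, resolution, and parameters are my assumptions, not Bruyère’s actual setup.

```python
# Sketch of a real-time img2img loop with SDXL Turbo (not Bruyère's code).
# Assumes: pip install diffusers transformers accelerate torch, a CUDA GPU,
# and a webcam frame already saved to disk.
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

frame = Image.open("webcam_frame.jpg").resize((512, 512))  # stand-in for a live frame

# SDXL Turbo is distilled for very few steps; strength controls how much
# of the source frame survives the re-interpretation.
out = pipe(
    prompt="ink wash painting, monochrome",  # the assigned style prompt
    image=frame,
    num_inference_steps=2,
    strength=0.5,
    guidance_scale=0.0,  # Turbo models are trained to run without CFG
).images[0]
out.save("reinterpreted_frame.png")
```

In a live piece this would run in a loop, grabbing a fresh camera frame each iteration and pushing the output to the display.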

Video: https://www.instagram.com/smallfly/reel/C9nKGOnpnh5/

3D Gaussian Splatting – Polycam

Over the summer I got to read a little about 3D Gaussian Splatting, an innovative method for rendering complex 3D scenes from a video. It is not a device per se, but I thought it was an interesting technique to share. It renders faster and produces an unprecedented level of detail compared to earlier NeRF-based methods. Instead of modeling the subject with polygons, the system uses a mathematical method called structure-from-motion to recover camera positions and a sparse point cloud from the differences between the photos (frames of the input video), then models the scene as Gaussian points, or "splats," in 3D space. Due to the soft falloff of the Gaussian points, I think the renders take on a brushstroke-like style.
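The core rendering idea is easy to sketch: each point is an anisotropic Gaussian whose contribution falls off with distance from its center, alpha-composited onto the image. Here is a toy 2D version in numpy; the real method projects millions of 3D Gaussians, sorts them by depth, and optimizes their parameters, so this only illustrates the "splat" itself.

```python
# Toy sketch of one "splat": an anisotropic 2D Gaussian alpha-blended
# onto an image grid (values below are arbitrary, for illustration).
import numpy as np

H, W = 64, 64
img = np.zeros((H, W, 3))

center = np.array([32.0, 32.0])
cov = np.array([[60.0, 25.0],   # anisotropic covariance gives the
                [25.0, 20.0]])  # stretched, brushstroke-like footprint
color = np.array([0.9, 0.4, 0.1])
opacity = 0.8

ys, xs = np.mgrid[0:H, 0:W]
d = np.stack([xs - center[0], ys - center[1]], axis=-1)  # pixel offsets
inv_cov = np.linalg.inv(cov)
# Gaussian falloff exp(-0.5 * d^T Sigma^-1 d), evaluated at every pixel
maha = np.einsum("hwi,ij,hwj->hw", d, inv_cov, d)
alpha = opacity * np.exp(-0.5 * maha)

# "over" compositing of the splat onto the (black) background
img = alpha[..., None] * color + (1 - alpha[..., None]) * img
```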

Examples from Polycam.

Looking Outwards #2 – Black Box Camera

The Black Box Camera, created by Jasper van Loenen, uses artificial intelligence to generate a physical print of a subject based on its own interpretation. The user points the camera at a subject, presses a button, and the internal system analyzes the photo, creates a caption, and uses that to generate a unique image, which is then printed. This project’s use of AI in direct, real-time interaction with environments is what I find so inspiring. What I think the creators got right here is capturing the mystery of AI’s interpretation of real environments in real time. I think the creators could take it a step further by giving users more creative liberty in how the AI generates the new image, allowing ‘AI photographers’ to exist in the same way that we see ‘AI artists’ emerging. I also think it could be cool to build a similar technology with a more specific transformation step, i.e., using the AI to enhance the image in a particular way rather than generally recreating the photo. The image produced is limited by the descriptive text that is generated; why go from image -> text -> image when you can go straight from image -> image?

Related Technologies: Internally, the Black Box Camera uses a Raspberry Pi with a camera module to take a photo when the user presses the button. The photo is then analyzed and a caption is generated, which is used as the input prompt for DALL·E. The resulting image is printed by the internal Instax portable photo printer, whose Bluetooth protocol was reverse-engineered so it could be controlled from custom software.
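As a rough illustration of that image -> caption -> image loop, here is a sketch built from off-the-shelf parts (BLIP for captioning, OpenAI’s image API for generation); van Loenen’s actual software and model choices may differ.

```python
# Sketch of the image -> caption -> image loop (not van Loenen's code).
# Assumes: pip install transformers torch pillow openai, an OPENAI_API_KEY
# in the environment, and a captured photo on disk.
from PIL import Image
from transformers import pipeline
from openai import OpenAI

# Caption the captured photo with an off-the-shelf image-to-text model
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
photo = Image.open("capture.jpg")
caption = captioner(photo)[0]["generated_text"]   # e.g. "a dog on a beach"

# Use the caption as the prompt for a fresh generated image
client = OpenAI()
result = client.images.generate(model="dall-e-3", prompt=caption, n=1, size="1024x1024")
print(caption, "->", result.data[0].url)          # URL of the reinterpreted image
```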

Black Box Camera – The ‘dark chamber’ of AI

bit.fall

This sculpture by Julius Popp uses falling streams of water to display words that are visible only while the water is falling. The installation draws its words from live news feeds, and the artist’s goal was to highlight how fast information and news move in the modern era.

The only thing I would change about this project is that the installation shows only words, not images or icons. If I had access to this system, I would want to experiment with using a very high-contrast live camera feed to show human faces and bodies in the water rather than words. If it worked (which it might not), I’d be really interested in using the water to capture a kind of stop-motion performance, where the audience sees the performer only through these brief snapshots in the water.
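As a thought experiment, here is a hypothetical sketch of how a camera frame could be turned into row-by-row valve states for a water curtain. The valve count, resolution, threshold, and driver are all stand-ins I made up; the installation’s actual control software is not public.

```python
# Hypothetical sketch: turning a high-contrast camera frame into droplet
# rows for a bit.fall-style water display. All hardware details assumed.
import numpy as np
from PIL import Image

N_VALVES = 128   # assumed number of nozzles across the curtain
ROWS = 64        # assumed vertical resolution of one droplet "frame"

def send_to_valves(states):
    """Stand-in for the real valve driver: just print the droplet row."""
    print("".join("#" if s else " " for s in states))

frame = Image.open("face.jpg").convert("L").resize((N_VALVES, ROWS))
bitmap = np.asarray(frame) < 96   # hard threshold: dark pixels become water

# Release the bottom row of the image first, so it has fallen furthest
# and sits lowest when the full frame is momentarily visible in mid-air.
for row in bitmap[::-1]:
    send_to_valves(row.tolist())
```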

Additional Source: https://www.illuminateproductions.co.uk/bitfall

 

Blind Camera

Artist Diego Trujillo Pisanty created a camera that uses AI to turn sound into images. The images he creates are really surreal and captivating to me; you can sometimes make out basic outlines of the things he’s capturing (like the metro or a car), but there’s an otherworldly quality to the images that makes me feel kind of dizzy.

I think this is a really interesting idea, but his scope was pretty narrow (only Mexico City), and I wish I could see the tool used in other settings, like a park or the countryside, since the noise there is totally different from a bustling city’s. If I had access to this tool, I would want to use it to record some of the aspects of everyday life that are known for causing hearing damage, like planes, concerts, and gunshots. I’m interested in seeing a visual representation of the hearing damage being caused.

Additional Source: https://www.creativeapplications.net/sound/blind-camera-point-and-shoot-sound-to-image/

The PSiFI: Capturing Human Emotion

The PSiFI 

“Personalized Skin-Integrated Facial Interface” (PSiFI) is an advanced human emotion recognition system using a flexible wearable mask device designed to “recognize and translate human emotions” in real time (Nature Communications).

The PSiFI uses strategically placed multimodal triboelectric sensors (TES) to gather data, detecting changes in movement, facial strain, and vibration on the human body: speech, facial expressions, gestures, and various physiological signals (temperature, electrodermal activity).

The device incorporates a circuit for wireless data transfer to a machine learning Convolutional Neural Network (CNN) algorithm that classifies facial expressions and speech patterns. The more the classifier trains, the more accurate the analysis of emotion (Nature Communications).
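To make the classification stage concrete, here is a toy sketch of a small 1D CNN over windows of multichannel sensor signals in PyTorch; the channel count, window length, emotion classes, and architecture are my assumptions, not the paper’s actual network.

```python
# Toy sketch of the classification stage (not the PSiFI paper's model).
import torch
import torch.nn as nn

N_CHANNELS = 8   # assumed: strain + vibration sensor channels
WINDOW = 256     # assumed: samples per classification window
N_EMOTIONS = 7   # assumed: e.g. anger, disgust, fear, happy, sad, surprise, neutral

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, N_EMOTIONS),
)

window = torch.randn(1, N_CHANNELS, WINDOW)  # one window of sensor data
logits = model(window)
print(logits.softmax(dim=-1))                # per-emotion probabilities
```

In the real system the windows would come in over the wireless link, and the classifier would be trained on each wearer’s labeled recordings, which is what the paper means by personalization.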

Image: PSiFI. CREDIT: Nature Communications

 

Inspiration/Research

I’ve conducted research where we have used various devices to collect physiological data, and the sensors are ridiculously bulky and temperamental. The ability to capture a variety of human emotions using a combination of machine learning, sensors, and computational power within one device is incredible!

In my art practice, I often explore ways to evoke emotional reactions from the audience. I could imagine creating installations where participants encounter thought-provoking or uncomfortable situations while wearing a motion-sensor mask that analyzes and tracks their every movement and physiological response. This would not only reveal the external, visible reactions to the artwork but also provide insight into more internal, unseen responses.

 

Critique

The researchers’ decision to test the PSiFI in VR allowed for an extremely controlled research environment. Employing a VR-based data concierge that adjusted content (music or a movie trailer) based on user reactions demonstrated the system’s ability to accurately keep up with an individual’s emotions.

 

Links

https://thedebrief.org/groundbreaking-wearable-technology-can-accurately-detect-and-decipher-human-emotions-in-real-time/

https://www.nature.com/articles/s41467-023-44673-2#citeas

 

Dan Hoopert – What Does a Tree Sound Like?

Audio Synthesis: What Does a Tree Sound Like?

“Here a single beam of light scans from left to right, creating a 2D cross section of the shape. Using the area of this cross section MIDI data is generated using Houdini and CHOPS. This can be fed into any DAW providing a base for content driven sound design. Field recordings from the object’s natural environment triggered by this data allows a close relationship between the light and sounds that are created, completely unique to the chosen object.”

This artwork is created by London-based 3D artist Dan Hoopert. It’s a data-driven piece exploring the relationship between visual and sound. It uses photogrammetry to recreate a tree in 3D space. Silent objects from the real world are given voices in the virtual realm, yet with unnatural, electronic sounds that create an intriguing sense of conflict. Data from the scanning is also visualized through particles that follow the beam of light. I’m especially captivated by the surrealist imagery of the work.