Exercise 2: Physical Computing as Foundation

First review the slides related to reaction and interaction.  (Slight correction: light switches can be interactive if they have the context of entertainment.)

Find and analyze an example of physical computing with a goal that is artistic or design-based; keep it to less than a page and include links to the original.  It should only take 30-60 minutes. YouTube is a great place to start, as are NYU ITP and IDeATe.  Due noon 7 September so I have time to review them before class.

If a project is an exercise or etude for a class, it doesn’t qualify for this exercise.  What you’re looking for has to do something with interaction, preferably something vaguely interesting, even if it’s just play/entertainment.  Sandnoisedevice meets plenty of engineering requirements, but it doesn’t hold long-term interest because it’s more about the technology than the interaction experience or functionality.  If I could add something to it, it would be a game where you generate music, something like Rez.  Or perhaps add a second device and use the two as consoles in a music competition game like DDR.

A sample format:

  1. basic description, elaborate on a title or use
  2. what is it supposed to do?
  3. how does it do this?
  4. does it work?
  5. how would you change it?

26 thoughts on “Exercise 2: Physical Computing as Foundation”

  1. The project I’ve chosen to analyze is one by a firm I know quite well, and have worked for in the past. The piece is titled “Stage”, not a great name I know, but it’s descriptive nonetheless. The piece is much more artistic than technical, having very few different hardware components and only one real sensor for interaction. That being said, it does provide a profound and sensational experience.

    The piece was commissioned for the backdrop of the keynote stage at a large conference. It was intended to serve not only as an animated backdrop for the speech, but also as an interactive entertainment device for the conference goers. It consists of a series of panels onto which a responsive animation of spheres floating through space is projected using careful projection mapping. A sensor on the stage tracks users’ skeletons, and the spheres move through space accordingly. The second way it is interactive is through Twitter. The piece pulls the most recent picture posted with the conference hashtag, analyzes it, and uses its pixels to color the spheres on the screen.

    I think there are several ways this piece could have been developed further. On the technological side, I think the use of some overhead screens, or further modulation of the projection surface, could have improved the immersion effect. Additionally, further functionality and responsiveness could have been added with a second sensor, perhaps a noise sensor that would make the visualization respond to applause, or even a single clap. On the interaction side, I think this could easily have been turned into a two-person experience. Having a sensor on each side could allow for more interesting animations and interactions between two sets of spheres, or even a competition, where opponents would use their bodies to direct the spheres, perhaps trying to get the most through to the opponent’s side.

    [ http://www.ultralowres.com/stage/ ]

  2. I looked at the introduction of Toyota’s Window to the World (link below). It’s an interactive transparent display designed as the backseat window of a car. Users can draw on it, and learn more about their surroundings with features that gauge distance, zoom in on distant objects, and label upcoming sights (e.g. “house” and “field”). Presumably, this would work using cameras, distance computation, and some form of processing that includes object recognition.

    The demo featured a little girl interacting with the window, which makes children/parents seem like the target users/buyers. I think that’s really limiting the capacity of the technology. This sort of display would be interesting as a more informational display, including detailed information about surrounding sights (instead of just labels), ETA to specific locations, weather information… Basically, I think they should integrate an AI (possibly one designed for traveling/sightseeing), and they should consider Ubers/Lyfts/taxis as the ideal users.

    Toyota’s Window to the World is not currently in production, and it hasn’t really made it past being introduced as a concept (originally introduced ~2011). Working models have potential to function fairly well, but I’d imagine transparent displays (probably OLED) that are *also* made of a road-safe material would be rather expensive (not that expense would necessarily stop them from being bought, but experimenting/testing would probably be a pretty big commitment for the company).

    https://www.youtube.com/watch?v=0PZzFsM-rjM

  3. The project I looked at is called “Dancing with Light,” created by Anna Henson and Javier Argota in fall 2016 for IDeATe. Using the existing light wall installation at the Carnegie Library in Knoxville, PA, the piece adds an interactive component. The lightwall is a series of bright white lights set up on the front wall of the library. Users tap on a grid on a mobile device that corresponds to the wall, and different sections of the wall light up in response to the user’s touch. The idea behind the name is that users “dance” with their fingers to interact with the light.

    Although the piece accomplishes exactly what it set out to do, which was to add an interactive component to an existing wall of lights, there are several ways the project could have been further developed. The light patterns generated could be more complex in response to the interaction. For example, the speed at which the user taps the boxes could affect the brightness of the light or generate different specific patterns. The lights are all white, so maybe the project could be expanded to include colored lights that change color based on the way the user taps. It could be expanded to a two-user interface, where one user would control the pattern and one would control the color of the lights. On the technical side, it seems like a fairly straightforward and direct interaction between the user and the lights. To make it even more complex, I think it would be cool if the lights also responded to sound. For example, the volume of surrounding sound from different sides could affect the brightness in different regions. In this way the light display would be a visualization of both touch and sound.

    http://ideate.xsead.cmu.edu/gallery/projects/lightwall-interaction-2e2b45d6-97cb-4115-a42a-45b292a4456d

  4. I want to use this exercise to talk about Adam Ben-Dror’s project, The Abovemarine:
    http://ben-dror.com/the-abovemarine/

    What is it:

    The Abovemarine is a fish tank with wheels. Using a camera and computer vision, the Abovemarine watches José the fish (who lives inside the Abovemarine) and moves according to his position. One could say it empowers José in a way that no fish living in a tank could have imagined before.

    What is it meant to do:

    This project is an artistic creation meant to engage our interest “in the beauty and intelligence of these creatures”. It’s not meant to be a true product, but meant to get us to ask questions like “Does the fish have intention?” and “Do fish have memories?” As we see the fish interact with other people/creatures and the world around him, we get to question our understanding of other living creatures.

    Why I think it’s interesting:

    I think this project is fun: it allows a fish to engage with the world in a totally different way! A lot of physical computing work has to do with the input of people, but other creatures and the environment can be just as significant a player. By empowering José the fish, we also empower a new way to interact with the companions we know and love.

    How does it work:

    Based on information from Adam’s website, the Abovemarine uses computer vision and a camera to watch José from above (using a rig) and has omni wheels placed underneath that allow the fish to travel anywhere, in any direction. (The rig is also beautifully integrated with the tank, which I find pretty impressive.) A rough sketch of what that tracking loop might look like is included at the end of this comment.

    How would I change it:

    I’m not sure this project needs too much change. I’m interested to see how we can bring clarity (or confusion) to José’s decisions by highlighting a space or problem for him to navigate. I’d love to invite people to interact with José by designing a game where both you (the human) and José (the fish) must work together to achieve a desired goal.
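    To make the “watch and drive” idea concrete, here is a minimal, hypothetical sketch of the kind of loop the Abovemarine might run, assuming an overhead camera, OpenCV colour thresholding to find José, and some motor driver behind a send_wheel_speeds() call (the HSV range and the driver function are placeholders, not Adam’s actual code):

        # Hypothetical sketch of an Abovemarine-style control loop (not the artist's code).
        # Assumes an overhead camera and an omni-wheel base driven by send_wheel_speeds().
        import cv2
        import numpy as np

        def send_wheel_speeds(vx, vy):
            """Placeholder for the motor-driver interface; just prints the command here."""
            print(f"drive vx={vx:+.2f} vy={vy:+.2f}")

        cap = cv2.VideoCapture(0)          # overhead camera looking down into the tank
        lower = np.array([5, 120, 120])    # assumed HSV range for an orange fish
        upper = np.array([25, 255, 255])

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, lower, upper)
            m = cv2.moments(mask)
            if m["m00"] > 0:                      # fish found: drive toward his offset
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                h, w = mask.shape
                vx = (cx - w / 2) / (w / 2)       # -1..1, how far Jose is from centre
                vy = (cy - h / 2) / (h / 2)
                send_wheel_speeds(vx, vy)
            else:
                send_wheel_speeds(0.0, 0.0)       # no fish detected: stop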

  5. https://www.youtube.com/watch?v=_3VstJSxZlc

    This physical computing project is a small robot that drives around shooting out colorful light patterns as if they are flames. It is supposed to be a fun toy to play with in the darkness. A user controls the robot’s movement with a Wii remote, and the robot tells a projector to display fire-breath-style lighting. It doesn’t seem to have any practical purpose, but it excels in the artistic sense; its purpose is mostly entertainment. In more detail, the robot uses inputs from a Wii controller to drive around. A PlayStation Eye tracks the movement and orientation of the car and is used in association with Community Core Vision. Then there is the projection of colors, created by the MSAFluid software. This creates cosmos/plasma-style lighting that occurs in front of the robot.
    One thing I would suggest to the creators is to make the lights reactive to their environment. For instance, if the robot approaches a wall or object in front of it, the lights could turn red. If there is a lot of empty space in front of the robot, it could shift its colors to blue. Creating reactivity between the robot and its environment would increase the level of sophistication. Another idea is to build a second robot and have the two battle, with the colors representing the clashing of “weapons” (actually colors).

  6. https://www.youtube.com/watch?v=koICfiod-vE&t=3s

    http://leoneckert.com/
    The project Biometric Intimacy, by Leon Eckert, is a machine-learning system that analyzes the keystroke patterns of a user and predicts the user’s mood. The project consists of two parts: a background keylogger to track patterns of typing and a user interface to input emotions. The keylogger tracks two mechanical metrics: keypress length and keystroke patterns (a small sketch of how these might be computed appears at the end of this comment). The emergent results from the system are the part I find most interesting. In fact, in a class I had yesterday we discussed a theoretical product to help students know when to take a break from work, using a similar AI system.

    Computer buttons are the main physical computing aspect of this project. It uses pre-built hardware, a keyboard, to register typed interaction. I wish the technology went further and used other types of measurements to reach its conclusions. Is the user’s breathing or heart rate changing? A system that cross-references data types to monitor various kinds of changes could produce interesting results. The current mood-input interface is not very exciting. I wish it had a physical input, perhaps some sort of physical desktop device. I wish there were some arrangement of sensors, like a Bop It, that the user could interact with to describe their mood.

    There is not much thought given to the output of this algorithm. How could the computer display the various emotions the user is feeling? Should it alert the user to their mood (if they don’t already realize it)? Should the system try to counteract it? Should it try to predict upcoming negative moods and prevent them? I imagine the system releasing fragrances to aid the person’s emotions. The algorithm could predict a person’s feelings and choose which fragrance best fits the mood, or people could mix and match their own!
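    To make the two keystroke metrics above concrete, here is a toy sketch of how a logger might compute key-press length (dwell time) and the gap between key presses (flight time) from timestamped key events. The event format is an assumption for illustration, not Eckert’s actual logger:

        # Toy sketch of the two keystroke metrics: dwell time (how long a key is held)
        # and flight time (gap between consecutive keys).  The event format is an
        # assumption, not Leon Eckert's actual logger.
        from statistics import mean

        # (key, time_pressed, time_released) in seconds, as a logger might record them
        events = [
            ("h", 0.00, 0.09),
            ("e", 0.21, 0.28),
            ("l", 0.45, 0.55),
            ("l", 0.70, 0.78),
            ("o", 0.95, 1.04),
        ]

        dwell_times = [up - down for _, down, up in events]
        flight_times = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

        features = {
            "mean_dwell": mean(dwell_times),    # longer holds might read as sluggish or tired
            "mean_flight": mean(flight_times),  # short gaps might read as agitated or energetic
        }
        print(features)   # these features would then feed a mood classifier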

  7. The project I will be describing is Nike’s LED Running Track: https://www.youtube.com/watch?v=ZfZZOIAVUW4

    The concept behind this project is to create a more interactive running experience where users can actually race against themselves around a track. Nike created an interactive LED wall placed next to a running track with a clearly defined “Start” location. A sensor attached to the bottom of the runner’s shoe helps track the runner’s time and location on the track. After the runner has completed one lap, the LED wall next to the track shows the runner’s previous lap running next to the real-life runner, thus giving the runner an opportunity to literally “race themselves.” I thought this was a particularly clever way to incentivize more running (and thus work as a profitable ad campaign, which was the ultimate goal), as I remember often feeling terrible in my swim practice when I had been left behind on the track once again. However, by creating this interactive sensor/display experience, a runner would notice less how slow or fast they are compared to other runners (which served as quite a deterrent to running for me) and would focus more on whether they are improving compared to themselves.

    This system itself works quite well; however, there are a few problems which I think could be addressed. First of all, it seems that a sensor is threaded through the laces of the shoe, which helps the interface keep track of the runner’s pace and location around the track. Although it is a solution that works, it seems rather clunky; if there were some way to use a sticker, a phone, or preferably computer vision to track each runner, then I think the use case would be a little cleaner (the sensor seemed clunky and out of place when laced through the shoes). Secondly, as one might expect, there is a limit to how many runners can run on the track at once; only 30 runners can partake in this experience, perhaps because of a sensor shortage and perhaps because there would be too many avatars displayed on the LED screen. This is a problem that I don’t quite know how to solve, as the sensors could potentially be switched out but the excess of avatars would remain.

  8. I was going through some of my youtube subscriptions, and noticed a creator I like had recently worked on a new project, called the Smile Camera:
    https://www.youtube.com/watch?v=XR4SKIIaEVY

    It is designed to capture video while its user smiles, intending to capture moments of happiness throughout the day. A camera is mounted on top of the user to capture video, and electrodes are mounted along muscles on the face to detect when the muscles responsible for smiling move, notifying the camera to begin recording.

    Andrew Maxwell-Parish, the creator of this project, designed it to highlight happiness in small moments throughout life. However, I think it has room for improvement. Technologically, it seems to record both grimaces and smiles, yet these two facial expressions tend to describe different feelings. Perhaps it could monitor other body functions – like heart rate or brain responses – to find some more accurate tell for happiness in a person.

    In terms of expanding the project, it may be interesting to observe other emotions throughout the day. Are there other emotions throughout the day that are meaningful, and how are these emotions connected to our environments? Can we learn about a person through the way they feel about moments in their life? I think a longer-term project studying different emotions and relating them to different events in the world could be interesting. There is also room to explore the question: how do people react to being told, at each moment, when they’re happy? The smile camera lets out a noise, and in Andrew’s video you can see him stifling his smile as he’s alerted when he smiles. This also leaves room for commentary on what effect a society devoted to finding happiness may have on a person, though this deviates from Andrew’s initial message.

  9. Time Tunnel Machine by Roxanne Kim
    https://roxannekimblog.wordpress.com/category/phycomp/

    The Time Tunnel Machine is an interactive immersive installation that gives the effect of moving through a tunnel as the user pedals a bicycle. The user sits in a chair and puts their head in a cone that blocks out ambient light, so the user only sees the lights spiraled inside of the cone. As the user pedals, the lights inside the cone turn on and off corresponding to the speed of the pedaling, to create the impression of travelling through space.

    The installation accomplishes this by using a hall-effect sensor, an Arduino, and the JavaScript library p5.js. The hall-effect sensor and the Arduino are used to calculate the revolutions per minute (RPM) of the pedals, and p5.js speeds up or slows down the audio based on the RPM. The Arduino is also used to control the timing of the lights. (A small sketch of this RPM-to-speed mapping appears at the end of this comment.)

    Based on the videos and the documentation from the artist, it seems like the installation works very well. However, I think that it could be improved by engaging more senses and by making the entire unit more cohesive and aesthetically pleasing. For example, the effect of moving only uses sight (the speed of the lights). I think this effect could be enhanced if a fan was added, with the speed of the blades/strength of the wind corresponding to the speed of the pedals. The wind blowing in the face of the user, I think, would greatly contribute to the immersive experience. The installation itself is made of three separate parts: the folding chair, the bicycle pedals, and the cone. From the outside, it looks rather improvised. If the installation had all the parts physically connected to each other and looked more ergonomic, I think that it would appear more inviting to users, and would therefore be able to share the experience with a wider audience.

    As far as interaction goes, I think this is a fairly simple piece: only entertainment for a few minutes. I’d be hesitant to recommend adding a robust video game component, because that would involve making significant modifications to the design and I think it wouldn’t stay true to the artist’s original concept. I also believe that there are more efficient ways to create such an immersive video game that incorporates pedaling, namely through a virtual reality device. However, I do think that adding a multiplayer component would be acceptable, perhaps turning it into a simple race between two players.
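    For reference, a small illustrative sketch of the pedal-speed logic described above, assuming one hall-effect pulse per revolution and that the playback rate and light-step interval are simply scaled from the RPM (the scaling constants are invented, not taken from Roxanne’s build):

        # Illustrative sketch of the pedal-speed logic (not the artist's code):
        # one hall-effect pulse per revolution -> RPM -> audio playback rate and light timing.
        import time

        last_pulse = None
        rpm = 0.0

        def on_hall_pulse():
            """Call this each time the magnet passes the hall-effect sensor."""
            global last_pulse, rpm
            now = time.monotonic()
            if last_pulse is not None:
                period = now - last_pulse          # seconds per pedal revolution
                rpm = 60.0 / period
            last_pulse = now

        def audio_rate(rpm, base_rpm=60.0):
            """p5-style playback rate: 1.0 at base_rpm, faster as you pedal harder."""
            return max(0.25, rpm / base_rpm)

        def light_step_ms(rpm):
            """How long each light in the spiral stays lit; faster pedalling = faster tunnel."""
            return 500 if rpm <= 0 else max(20, int(3000 / rpm))

        # e.g. pedalling at roughly 90 RPM
        on_hall_pulse(); time.sleep(60 / 90); on_hall_pulse()
        print(round(rpm), audio_rate(rpm), light_step_ms(rpm))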

  10. Holovect

    https://www.youtube.com/watch?v=kPW7ffUr81g

    This project is called Holovect, which stands for holographic vector display. The purpose of this project is to take computer generated renderings of three-dimensional objects and use lights to draw these images in a three-dimensional space. The user can work with an interface to manipulate the image and move it around using controls. The Holovect can take computer generated 3-D models that have either been programmed or designed. The purpose is to allow a user to make several prototypes of their design and have the opportunity to interact with it and see it in three dimensions without having to wait for a 3-D printer or making a physical prototype. This allows for quicker turn-around time for iterations of a design.
    This project is incredibly well executed, and I can imagine it being very useful in the design and prototyping realm. However, I believe it has much more potential beyond being a quick way to interact with a given 3-D model or design. If they were to continue working on this project, I think it would be interesting if they could implement a feature that uses depth-perception sensors or 3D scanners to project pre-existing models. This could be useful when trying to analyze objects that are difficult to view on the current display. Another idea might be to introduce a feature that can represent different layers of the mesh, possibly using different colored lights. Lastly, I think it would be interesting if the user could be creating and modifying a mesh and the Holovect could project the model in real time. I think this would enhance the interactive experience while boosting the functionality of the product.

  11. https://itp.nyu.edu/thesis2017/project/ishaan-m-chaudhary

    I chose the project “EKTA” by Ishaan Choudhary. Ekta, meaning unity in Hindi, is an interactive and immersive art installation meant to help bring strangers together. It relies primarily on sound and light displays to engage the audience. Physically, the installation is box-shaped, with four slots facing four different sides where audience members can put their heads in to immerse themselves. The small size of the box forces proximity upon the audience, thus bringing them closer to each other.

    The main feature of the project is the song played during the immersion process. This is the interactive portion of the project: when any one of the four audience members puts their head in a slot, a “stem,” or part of the song, starts playing. When other people join, other stems of the song add to the harmony, meaning that the song can only be fully enjoyed with four different people participating and all four stems synchronized. The aesthetic appeal of the exhibit is further enhanced by interactive lights that change color when an additional person steps in, as well as infinity mirrors that augment the effect of the lights.

    Overall, the piece does seem to fulfill its objective in bringing people together physically and emotionally. The demonstration video shows people bobbing their heads up and down, clearly in sync with the music and at ease despite the physical proximity. However, to better the product, I think the visual display could be improved. For instance, currently the light is of constant brightness and color. This could be varied to achieve different visual interactive effects, such as different spots of light around a person’s head.

  12. What can a £263k/$400k F35 helmet do?
    https://www.youtube.com/watch?v=Dj4cP4PMPPM

    Basic Description

    The product that I’ve chosen to analyze is from the newest generation of fighter jets developed for the US military: the F-35 Lightning II. The product that I am specifically looking at is the pilot’s helmet. The helmet is truly revolutionary because it incorporates convenience of use and versatility all in one small package.

    what is it supposed to do?

    This helmet allows the pilot to view all the information about the airplane directly from the helmet display, along with a 360-degree exterior view of the airplane. The helmet tracks the pilot’s head movements and updates the display accordingly in order to simulate a “transparent” airplane. The helmet is also synced up to the F-35 itself and displays real-time information about the plane’s surroundings as well as monitoring its internal systems. In addition to all of that, the pilot can also choose to switch display modes to infrared or other spectrums the naked eye can’t see.

    how does it do this?

    The helmet is directly connected to the F-35’s onboard computer which gathers and sorts all the information collected by hundreds of sensors located inside and outside the F-35. Following the pilot’s instructions, the computer then displays what the pilot wishes it to. In order to achieve a near Virtual Reality feel to the helmet display, the helmet must also gather head movement data from the pilot in order to make adjustments to the display.

    does it work?

    The product has been in development for several years and is basically ready for use by current military forces. However, the very high price tag of the helmet is preventing the widespread use of this technology until costs can be lowered. The video and various sources state that any malfunctions in the helmet are easily fixable, since most of the issues are software related as opposed to hardware related.

    how would you change it?

    Obviously, the price tag of the object could be lowered, since that is the main inhibitor to the usage of this product. I would also probably want to make this product smaller in size, as looking around with a heavy helmet could cause discomfort in the neck. Also, making this helmet compatible with older generations of fighter planes would be a huge advantage, as it would allow for wider use.

  13. https://www.youtube.com/watch?v=wZdelRBYRnc

    Talk Table is an interactive, touchable installation that visualizes people’s talking. By extending talking into two more senses, this project hopes that people can talk to each other more joyfully, watching and touching their conversation on the table. In other words, this project is focused on the new experience that is created when new media art is combined with natural human behavior like talking.

    Talk Table visualizes people’s voices as many particle graphics. The size of the particles corresponds to the loudness of the speech/sound. The particles also randomize their color, movement, and brightness. People can also change the color and movement of the particles by touching and moving them.

    The group used 4 mic inputs to receive sound data and diffused infrared light for motion detection. The randomization of the particles is computed by an Arduino and perhaps other software. From their video demonstration, the table works in the technological aspect: it is able to respond to changes in sound and pressure, the animation transitions smoothly without lagging, and the randomized particle graphics have sufficient variety. (A toy sketch of this voice-to-particle mapping appears at the end of this comment.)

    However, from an artistic and functionality aspect, I believe the project needs to rethink the focus of their final product. Since they stated that the purpose of this project is to make people converse more joyfully through the table, the focus of the project should be more about human-human interaction. Right now, and as seen in their video demonstration, participants’ attention is on the table not on the conversations. In fact, the participants are not conversing but rather making noises to make the table animate, and this behavior does not model a natural conversation.

    I would change the project to make the animation work in the service of a natural conversation, or of the relationship between users. One idea is that, instead of particles, the animation could show the emotion of the users by analyzing the tone or loudness of their voices, which would give more informational feedback to the users. In addition, the table currently targets two senses, sight and hearing, but what if it also incorporated the users’ sense of touch? The table could mimic the texture of sound and benefit users with visual or hearing impairments.
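    A toy sketch of the voice-to-particle mapping described above, assuming the loudness of each mic frame sets the size of a newly spawned particle while color and drift are randomized, and a touch detected by the infrared layer nudges nearby particles (illustrative only, not the group’s actual code):

        # Toy sketch of the voice-to-particle mapping (illustrative, not the group's code):
        # loudness sets particle size, color and drift are randomized, touches nudge particles.
        import random

        def spawn_particle(loudness, x, y):
            """loudness in 0..1, e.g. the normalized RMS of one mic frame."""
            return {
                "x": x, "y": y,
                "size": 5 + 60 * loudness,                       # louder speech -> bigger particle
                "color": [random.randint(0, 255) for _ in range(3)],
                "vx": random.uniform(-1, 1),
                "vy": random.uniform(-1, 1),
                "brightness": random.uniform(0.5, 1.0),
            }

        def nudge(particles, tx, ty, radius=80, strength=3.0):
            """Push particles away from a touch point detected by the infrared layer."""
            for p in particles:
                dx, dy = p["x"] - tx, p["y"] - ty
                if dx * dx + dy * dy < radius * radius:
                    p["vx"] += strength * (1 if dx >= 0 else -1)
                    p["vy"] += strength * (1 if dy >= 0 else -1)

        particles = [spawn_particle(0.8, 200, 150), spawn_particle(0.2, 300, 180)]
        nudge(particles, 210, 160)
        print(particles[0]["size"], particles[1]["size"])   # 53.0 vs 17.0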

  14. https://courses.ideate.cmu.edu/60-223/f2015/bernband/
    1. basic description, elaborate on a title or use
    BernBand, an IDeATe project made during the 2016 election season
    2. what is it supposed to do?
    It lets the wearer know whenever someone on Twitter uses #FeelTheBern. I think the most significant takeaway from this project is how it takes something that is normally an abstract concept that we relate to, tweets, and makes it something that we can physically feel. It’s an interesting crossover between two very different domains of everyday life.
    3. how does it do this?
    It uses a SparkCore to monitor Twitter over wifi, and when it detects a tweet it uses a Peltier element and a heatsink to heat the wearer’s wrist. (A rough sketch of this poll-and-pulse loop appears at the end of this comment.)
    4. does it work?
    Yep
    5. how would you change it?
    I’d possibly make it more configurable, add other methods of notification, etc., though at a certain point that starts to wander away from the main idea. I think there is a lot that could be done with non-traditional forms of notification like this.
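    A hedged sketch of the poll-and-pulse loop from step 3, written in Python rather than the Spark/Particle firmware the project actually uses; check_for_new_tweet() and pulse_peltier() are placeholders for the Twitter API call and the hardware driving the Peltier element:

        # Hedged sketch of BernBand's loop (the real build runs on a SparkCore;
        # check_for_new_tweet() and pulse_peltier() stand in for the Twitter poll
        # and the GPIO/transistor driving the Peltier element).
        import random
        import time

        def check_for_new_tweet(hashtag="#FeelTheBern"):
            """Placeholder for a Twitter API poll; randomly 'finds' a tweet here."""
            return random.random() < 0.3

        def pulse_peltier(seconds=2.0):
            """Placeholder for switching the Peltier element on briefly to warm the wrist."""
            print(f"Peltier on for {seconds}s -- someone tweeted!")
            time.sleep(seconds)

        def run(poll_interval=5.0, cycles=5):
            for _ in range(cycles):
                if check_for_new_tweet():
                    pulse_peltier()
                time.sleep(poll_interval)

        run(poll_interval=0.5, cycles=3)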

  15. Gaurav Balakrishnan

    Dot Braille Smartwatch : https://dotincorp.com/product/dot-watch/
    https://www.youtube.com/watch?v=wkXGERVYmDU

    Dot is a company that creates smart technology for the visually impaired members of our society. They released their Dot Watch only last year, catching the attention of many. It is a smartwatch that outputs braille on its watch face. It is designed to display not only the time but also notifications from a smartphone. It is also meant to be a cheaper medium for learning braille through interaction.

    It has 3 rows of pins that are magnetically activated and rise to form braille characters. The watch is connected to the phone via Bluetooth, and when there is a notification on the phone it is relayed to the watch. The watch then forms the characters in braille on the watch face. It has touch sensors on the face that are able to read input from the user, allowing users to scroll through text when they receive notifications, answer calls, and perform other simple actions. (A toy sketch of the text-to-braille step appears at the end of this comment.)

    From videos of it working, it is observable that it does have the functionality it claims. It is able to provide text feedback in real time and give blind users many of the functionalities of a smartwatch. By pricing it at $290, they are making it accessible to a larger range of financial capabilities compared to current braille e-readers. It can convert any text-based output on the phone to braille and display it on the watch.

    One area where I think the design could have been greatly improved is the form factor of the device. Given their goals, I believe it would have been more useful to make a larger portable appendage to the phone, like a modified phone case, with the capability of displaying more information at a time. Braille characters are quite long, and it is possible that even a single word doesn’t fit on the display. I think a design just as portable could be developed with more functionality.
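    As an aside, the text-to-braille step is a nice, compact data problem. A toy sketch is below; the dot numbering is standard 6-dot braille for a few lowercase letters, but the number of cells on the face and the set_cell() pin-driver call are assumptions, not Dot’s firmware:

        # Toy sketch of the text-to-braille step (not Dot's firmware).  Each 6-dot cell is
        # encoded as a tuple of raised dot numbers (standard braille numbering 1-6);
        # CELLS_PER_FACE and set_cell() are assumptions about the hardware interface.
        BRAILLE = {                     # a few letters of standard 6-dot braille
            "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
            "h": (1, 2, 5), "i": (2, 4), "l": (1, 2, 3), "o": (1, 3, 5),
            " ": (),
        }

        CELLS_PER_FACE = 4              # assumed number of cells on the watch face

        def set_cell(index, dots):
            """Placeholder for raising the magnetic pins of one braille cell."""
            print(f"cell {index}: raise dots {dots or '-'}")

        def show(text):
            """Show text one screenful of cells at a time; the user scrolls between pages."""
            cells = [BRAILLE.get(ch, ()) for ch in text.lower()]
            pages = [cells[i:i + CELLS_PER_FACE] for i in range(0, len(cells), CELLS_PER_FACE)]
            for page in pages:
                for i, dots in enumerate(page):
                    set_cell(i, dots)
                yield                   # pause here until the user swipes to the next page

        pager = show("hello")
        next(pager)                     # first page of cells: h, e, l, l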

  16. I checked out a new product by Snapchat called Spectacles. Spectacles are sunglasses with the ability to record Snapchat videos and send them to your phone. In order to buy a pair, you have to find a Snapbot, which is a large yellow vending machine with a big interactive eye that looks at you while you decide whether you want to spend $130 on the device. The vending machine also uses its screen to show you an animated pair on your face.

    To use the spectacles, you have to press a button on the top corner of the glasses, and the video will start recording. It will record for up to 30 seconds and then send the videos, broken into 10 second segments, to your phone on the snapchat app where you can then send the video to your friends. There is a light that faces toward your eye near the button that lets you know when you are recording and there is an outward facing light that lets other people know when you are recording. A cool feature of the glasses is that the case they come with has the ability to charge up and then charge the glasses while they sit in the case up to four times without needing to plug into the wall.

    The product is a good idea, but I think there is one major problem that makes them a lot less cool than they could be. That problem is the button you need to press in order to start recording. I think the product’s use is for capturing moments in time, from your perspective, more naturally than pulling out your smartphone and doing the same thing. But in order for it to really be as natural as possible, I think the method for interacting with the glasses should be hands free. Maybe something along the lines of a pattern of blinking, without needing to bring your hand up to your face, would make the glasses more interesting. This would mean adding another sensor and probably making them even more expensive. Spectacles are weird, but they are one of my favorite examples of physical computing.

    Spectacles:
    https://youtu.be/0lkDyH53H4I

    Snapbot:
    https://youtu.be/v-ffiS2HpGU

  17. Link https://www.youtube.com/watch?v=h5n0rw8wo14

    The piece of physical computing that I have chosen to analyze is an interactive proximity-sensing table created by YouTuber Graham Monahan. What he has created is a device that responds to motion by lighting up. If you hold your hand above the table and move it across the board, the LEDs will turn on and off, following the path of your hand. A secondary aspect of this design is that it not only illuminates in response to movement: depending on how close or far you place your hand, the lights will dim or brighten. The closer your hand, the brighter they are, and vice versa.

    This prototype is broken down into three modules, with three circuit boards controlling them to create a seamless flow. In order to properly sense the human hand, the form of detection is infrared. The sensors control the outputs and modulate the LEDs, brightening and dimming as the hand passes. Once turned on, it assesses the environment, current heat map, etc., and calibrates for further use. Once in use, the surface can sense movement up to one foot away, and from one foot and lower it adjusts the level of illumination accordingly. (A minimal sketch of this distance-to-brightness mapping appears at the end of this comment.)

    What is remarkable is that this is a simple prototype for the many iterations that came after, all still documented on YouTube. Its construction is not necessarily crude, but it is certainly not refined. When prompted with a hand, the tabletop follows the movement with only a slight delay and mimics back an LED path of your hand successfully and without manhandling, as I have seen with some other physical computing projects on YouTube. Interestingly, many of my initial critiques are addressed in later iterations of the prototype. I initially thought that implementing color would give it a greater level of complexity. Next, I thought that a higher-fidelity response via the LEDs would give the interaction that little kick of intrigue. In this specific prototype the LEDs responded in chunks that followed the path of your hand but not its shape. Fortunately, the designer implemented the critiques above and in the end made something highly successful. Something that could add a sense of dimension to the piece is a layer of sound: the closer you are, the louder the feedback, and vice versa. This could create a very immersive device that plays with multiple senses.
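    A minimal sketch of the distance-to-brightness mapping described above, assuming infrared distance readings in inches and an 8-bit PWM value for the LEDs (the one-foot cutoff comes from the video; the linear mapping is a guess, not Monahan’s code):

        # Minimal sketch of the hover-to-brightness behaviour (the linear mapping and
        # the PWM interface are assumptions, not Graham Monahan's code).
        def led_brightness(distance_in):
            """Map IR-sensed hand distance (inches) to 0-255 PWM: closer hand = brighter LED."""
            FAR_LIMIT = 12.0            # beyond about one foot the surface ignores the hand
            if distance_in is None or distance_in > FAR_LIMIT:
                return 0
            closeness = 1.0 - (distance_in / FAR_LIMIT)   # 0 at the edge of range, 1 at contact
            return int(255 * closeness)

        for d in (14, 12, 6, 1):
            print(d, "->", led_brightness(d))    # 0, 0, 127, 233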

  18. Haile is a drum-playing robot designed at the Georgia Institute of Technology by Professor Gil Weinberg. It can analyze beats played on a drum by a human and respond with its own beat, which it plays on a physical drum with a mechanical arm.
    One other cool thing about Haile is that all non-electrical components are made of wood.

    https://www.youtube.com/watch?v=veQS6tsogAA

    Haile makes decisions about how to play and what to play in real time, responding to what it hears. This is a robot you can have a jam session with.

    Haile is equipped with a microphone and detects the pitch of a drum hit, beat patterns in successions of drum hits, the amplitude of drum hits, and the frequency of drum hits.

    From the pitch and beat patterns, Haile generates a harmonious matching beat, and from the amplitude and frequency of hits it determines whether to take a leading role (making its own drumming louder and more densely packed with beats) or a following role (quieter drumming with less dense beats). (A rough sketch of this lead/follow decision appears at the end of this comment.)

    Haile has two arms that extend forward and back, with small batons at the ends that can swing up and down (one slower and harder than the other). Haile’s mounting is adjusted to the proper height of a drum placed in front of it, and calibrated so that its arms can reach both near the middle of the drum and near the edge. Haile and the user are meant to play on opposite sides of the same large drum. Haile controls the beat and volume by swinging the baton that strikes the drum, and the pitch by extending or retracting its arm to decide where on the drum to strike. Each arm has a linear motor controlling the extending and retracting, and a solenoid controlling the baton.

    It works really well, passing the Turing test most of the time. Lots of drummers find Haile to be an engaging jamming partner.

    I would give Haile more arms, and copy/alter the basic drumming code to accommodate other kinds of percussion instruments. It would be especially interesting to have Haile hitting two drums with different arm sets and responding and generating new beats according to its own, having a jam session with itself. I’d try to make a roomful of Hailes all playing different instruments, all responding to the sounds of all the other Hailes around them. All robot band!!
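    A rough, invented sketch of the lead/follow decision described above, treating it as simple thresholds on the loudness and density of the human’s recent hits (the thresholds and response scaling are made up for illustration; Weinberg’s real system does much richer analysis):

        # Rough illustration of a lead/follow decision like the one described above.
        # The features and thresholds are invented for the sketch; Haile's real analysis
        # (pitch tracking, beat induction, etc.) is far more sophisticated.
        from statistics import mean

        def choose_role(hit_times, hit_amplitudes, window=4.0):
            """Decide whether to lead or follow from the last `window` seconds of playing."""
            if not hit_times:
                return "lead"                                  # silence: take the initiative
            density = len(hit_times) / window                  # hits per second
            loudness = mean(hit_amplitudes)                    # 0..1
            if loudness > 0.6 or density > 4.0:
                return "follow"        # human is playing loud/busy: play quieter and sparser
            return "lead"              # human is playing soft/sparse: play louder and denser

        def respond(role, human_density):
            """Return (own_density, own_volume) for the next phrase."""
            if role == "lead":
                return human_density * 1.5 + 1.0, 0.9
            return max(0.5, human_density * 0.5), 0.4

        role = choose_role([0.1, 0.6, 1.4, 2.2, 3.0], [0.3, 0.4, 0.35, 0.3, 0.4])
        print(role, respond(role, human_density=1.25))         # lead (2.875, 0.9)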

  19. The project I have chosen to write about is called Sun Spot. It was made in the fall 2015 physical computing class by Alex Palatucci, Katelyn Smith, and Rehan Butt. It is a small yellow box that attaches to clothes or bags with a string.

    It is supposed to notify the holder/wearer when they have been exposed to excessive amounts of ultraviolet (UV) radiation, thus protecting them from harmful rays.

    Sun Spot works with a UV sensor, a microcontroller, and a motor. It records UV radiation once per second, and when readings stay at a peak for an extended amount of time, it buzzes once every hour for up to 4 hours of UV exposure, at which point it buzzes continuously until UV exposure is cut off. (A sketch of this notification logic appears at the end of this comment.)

    According to the write up, this project was “extremely successful” in achieving its purpose and notifying its user of extended UV radiation exposure.

    The creators also had some ideas for improving their project. I would ideally attempt to make it smaller and integrate it into a pair of sunglasses or a hat that shields the sun. That way, while being more convenient to carry, wear, and remember, and more aesthetically pleasing, it would also be more functional. If worn on the eyes or head, the areas Sun Spot is picking up radiation from are only shielded when the wearer is away from radiation. On the other hand, if it is attached to a bag, shade from the wearer’s body or clothes falling on Sun Spot might produce inaccurate readings and data. If it cannot be made smaller, I would make sure it was waterproof so that when doing water sports or sweating, both often associated with sun, it is still functional. I would also make it more easily attachable to different devices. Winter sports such as skiing and snowboarding in sunny states like Colorado are known to cause skin damage because of the sun, so I would make it attachable to a snowboard, surfboard, helmet, bike, or skis. In that case, I would also keep a small part of its box as a casing for a small tube of emergency sunscreen in case exposure is unavoidable. It could also record UV exposure times/locations and let you know which activities result in the largest amount of exposure.
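    A sketch of the notification logic as described in the write-up, assuming a once-per-second reading, an arbitrary UV-index threshold for “peak” exposure, and placeholder sensor and buzzer calls (the real device runs on a microcontroller):

        # Sketch of Sun Spot's notification logic as described above (UV_PEAK,
        # read_uv_index() and buzz() are placeholders; the original runs on a microcontroller).
        import random
        import time

        UV_PEAK = 6.0                       # assumed UV-index threshold for "peak" exposure

        def read_uv_index():
            return random.uniform(0, 11)    # stand-in for the UV sensor, sampled once a second

        def buzz():
            print("bzzzt")                  # stand-in for the vibration motor

        def run():
            hours_exposed = 0
            seconds_at_peak = 0
            while True:
                if read_uv_index() >= UV_PEAK:
                    seconds_at_peak += 1
                else:
                    seconds_at_peak = 0
                    hours_exposed = 0       # exposure broken: reset
                if seconds_at_peak and seconds_at_peak % 3600 == 0:
                    hours_exposed += 1
                    buzz()                  # one buzz per hour of continuous peak exposure
                if hours_exposed >= 4:
                    buzz()                  # after 4 hours, buzz continuously until out of the sun
                time.sleep(1)

        # run()  # loops forever, so it is left commented out in this sketch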

  20. An orthopedic drill that has sensors on it to tell you bone density, depth, and possibly some diagnostic features?

    Orthopedic surgery often involves drilling into the bone, and we obviously can’t see into the bone medium that we are drilling into. We can use x-rays, do different scans, and use biomarkers and biopsies (the latter of which are very invasive), and we try to understand what we are getting into before we open the patient up. It’d be great to have a record of how the drill was operating/interacting with the bone. Different layers have different densities, and being able to collect data like density, composition, mineral content, etc. can give a greater image of what the patient is experiencing and the state of their body, as well as contribute additional data to the medical literature.

    I think it would include pressure sensors and estimate density from how easily the drill advances and how much drag the bone exerts on it. Maybe it would have a small pocket in a disposable drill head to collect material from the hole made. What additional data and biometrics could be taken would have to be researched.

    A drill like this does exist to some extent at the moment, and it detects bone layers with thresholding technology based on associating applied torque/drill mechanics with bone characteristics. It currently does not, however, contribute to sample acquisition or biometrics, but rather just focuses on perfecting surgical technique and performance. Having these biometrics would contribute to understanding the greater picture of the patient’s wellbeing, as well as contribute to research.

  21. A visiting lecturer came to Carnegie Mellon two years ago to speak to my pre-college architecture class about artistic and technological research. This researcher at Disney talked to us about one of his company’s projects, called Botanicus Interacticus, which uses their newly developed interactive plant technology.
    Botanicus Interacticus involves artificial and living plants as mediums for users to interact and play with. With a computing device placed in virtually “any” plant’s soil, the plant turns into a touch pad for music, mouse-like features on a computer (selecting, directing to a new webpage, etc.), and other interactive functions. This computing device, in conjunction with the user’s chosen plant medium, detects human gestures, the location of touch, proximity between the plant and the user, and the amount of contact. Researchers on the team analyzed the electrical properties of plants and replicated them in the computing device, making these functions possible.
    This device is incredibly entertaining to its users, considering implementing such a design in a Disney park would attract curious children and adults. Today, as the line between nature and technology begins to fade, this type of interactive design may stand at the forefront where these realms of our environment collide. If I were to improve this device I would begin by thinking about what other types of plants or living materials found in nature could become mediums. For example, on a broader scale, they may wire a larger computing device in the ground near a bed of flowers or a field of corn. If humans brushed up against these things perhaps it could trigger not only music, but lights in buildings or on sidewalks as they approach them. Perhaps human interaction doesn’t have to be the only trigger. Sun, rain, or wind may play a factor in the output of the system.
    https://www.disneyresearch.com/project/botanicus-interacticus-interactive-plant-technology/

  22. https://www.media.mit.edu/projects/animastage/overview/

    The project that I would like to discuss is called Animastage. It was developed by the Tangible Media Group at the MIT Media Lab. It allows you to animate physical crafts on an actuated stage: you place your craft (paper, 3D-doodled, or otherwise) on the stage, and it lets you manipulate it. Additionally, you can create a variable landscape to add a scene. I think this project is really interesting because, after developing animated characters in the virtual realm for so long, we are now pivoting towards using technology as an aid to prototype in the physical world. It’s kind of goofy in that you’re basically prototyping puppets, but you no longer have to hold them; the stage does it itself. I’m a big fan of using technology in unexpected ways to bring back older artforms. I think Animastage does a good job of making animation/puppetry accessible and fun.

    With that being said, the prototype is definitely in its earlier stages. It could use some work in making the animations seem more seamless; at this point, a human could do a better job getting a narrative across through craft/puppetry than this installation. I think it could benefit from being a little quieter and from somehow facilitating translational movement across the stage. I really commend the group for a great idea and would love to see how it develops in further iterations.

  23. The artist Hamilton Mestizo has created an interactive artifact called the Echolocalizator. The piece was made as a reflection of technology and how it is integrated in our lives. Digital-virtual technology is an integral tool that has influenced our perception of the real world and its phenomena. This art piece stems from this question of how we develop our own sense of reality. Computing is now able to create a virtualized reality in a virtualized environment that completely removes the real environment the virtual reality is perceived in. The Echolocalizator is a leather helmet that allows the user to perceive the space through sound, based on the concept of echolocation.

    The artifact functions using sonars embedded in the headpiece. Inside is a simple electronic circuit. An Arduino Mini Pro runs a program that associates the sonar signal with an audio file (WAV) stored on a microSD card. The microcontroller translates the sonar signals into centimeters based on the distance between the object and the sonar; the range extends from 10 to 650 cm. The result is a binaural sound atmosphere that describes the surrounding space. (A minimal sketch of this distance-to-sound mapping appears at the end of this comment.)

    I would make the piece involve two people, or a group of people. This would be an interesting social response that would translate the private landscape created in the mind into a public space – harnessing the idea of being alone, together. I would also test out making the range smaller so there are fewer sound inputs and the user would be able to focus more on the subtle changes and nuances of the sound landscape.

    Website: http://librepensante.org/
    Process Blog: http://ecolocalizador.blogspot.com/
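    A minimal sketch of the distance-to-sound mapping described above, assuming the measured distance (10-650 cm) simply selects which pre-recorded WAV to trigger, pans it to the side whose sonar saw the object, and scales its gain with proximity (file names, band edges, and the panning rule are invented; the real piece runs on an Arduino Mini Pro):

        # Minimal sketch of a distance-to-sound mapping (file names, band edges and the
        # panning rule are invented; the real piece runs on an Arduino Mini Pro reading
        # sonar and triggering WAV files from a microSD card).
        BANDS = [          # (max distance in cm, WAV file to trigger) -- assumed banding
            (50,  "very_close.wav"),
            (150, "close.wav"),
            (350, "mid.wav"),
            (650, "far.wav"),
        ]

        def sound_for(distance_cm, side):
            """side is 'left' or 'right', i.e. which sonar on the helmet saw the object."""
            if not 10 <= distance_cm <= 650:
                return None                        # outside the sensor's stated range
            for max_cm, wav in BANDS:
                if distance_cm <= max_cm:
                    pan = -1.0 if side == "left" else 1.0
                    gain = 1.0 - (distance_cm / 650)   # nearer objects sound louder
                    return {"file": wav, "pan": pan, "gain": round(gain, 2)}

        print(sound_for(40, "left"))    # {'file': 'very_close.wav', 'pan': -1.0, 'gain': 0.94}
        print(sound_for(700, "right"))  # None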

  24. http://www.cardiffmiller.com/artworks/inst/cabinet_of_curiousness.html

    The Cabinet of Curiousness is a 20-drawer cabinet that, when you open a drawer, plays a different recording (singing, opera, radio, news, animal noises, etc.).

    It does this by using sensors to tell if a specific drawer is open; if it is, it plays the speaker in that drawer. I think the interesting moments occur when people open multiple drawers and make a sort of remix, either picking multiple sounds that are similar to create a harmonious audio mix or mixing very different songs.

    I think this piece is strong, and I don’t think the artist should really have altered it, as this was their vision, but if I were to make something similar I would have included a visual element when opening the drawers (for example, if the sound in a drawer is an opera piece I would have included a diorama of an opera scene).

  25. http://deeplocal.com/projects/netflix-socks.html
    http://makeit.netflix.com/projects/socks

    Netflix socks are an open-source project done by Pittsburgh’s own DeepLocal. The premise is this: people quite often fall asleep while watching their favorite show on Netflix. These socks detect when this happens and pause your show so you don’t miss a single minute of it. This is done by using an accelerometer to measure movement; when you are completely still, the assumption is that you have fallen asleep. When the sock thinks you’re asleep, it flashes an LED to warn that it is about to pause your show (if you notice, you can just move to tell it you are actually awake). To stop the show, an Arduino sends an infrared signal to your television telling it to pause. (A sketch of this detection logic appears at the end of this comment.)

    It seems to me that this would work, but it most likely gets triggered a lot when you don’t actually want it to. That being said, it really doesn’t need to work: this was all just a marketing campaign. If I were to change the design, I would probably use a pulse sensor to get a more reliable read on when the person is asleep. I would also program the Arduino to turn off the television when it pauses Netflix, which would conserve energy and make a nice, dark sleeping environment.
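    DeepLocal published the build as an open-source guide; without copying their code, here is a hedged sketch of the detection logic described above (the stillness threshold, the timings, read_accelerometer() and send_ir_pause() are placeholders):

        # Hedged sketch of the sleep-detection logic (thresholds, timings and the
        # hardware calls are placeholders, not DeepLocal's published firmware).
        import time

        STILL_THRESHOLD = 0.05     # movement (g) below which the wearer counts as "still"
        STILL_SECONDS = 60         # how long they must stay still before we assume sleep
        WARNING_SECONDS = 10       # LED flashes this long before pausing, in case they're awake

        def read_accelerometer():  # placeholder: magnitude of recent movement in g
            return 0.0

        def flash_led(on):         # placeholder for the warning LED
            pass

        def send_ir_pause():       # placeholder for the IR signal that pauses the TV
            print("pause sent")

        def watch():
            still_since = None
            while True:
                moving = read_accelerometer() > STILL_THRESHOLD
                now = time.monotonic()
                if moving:
                    still_since = None
                    flash_led(False)
                elif still_since is None:
                    still_since = now
                elif now - still_since > STILL_SECONDS:
                    flash_led(True)                       # warn first...
                    time.sleep(WARNING_SECONDS)
                    if read_accelerometer() <= STILL_THRESHOLD:
                        send_ir_pause()                   # ...then pause if still no movement
                        return
                    still_since = None                    # they moved, so cancel
                time.sleep(0.5)

        # watch()  # runs until a pause is sent; left commented out in this sketch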

  26. Interactive Particle Print Dress by Shenova Fashion

    https://www.youtube.com/watch?v=guALfLoto10

    1. Basic description, elaborate on a title or use
    This dress is designed by Shenova Fashion. It’s a dress printed with a particle-physics pattern that senses the model’s heart rate and lights up in time with the heartbeat.

    2. What is it supposed to do?
    The LEDs sewn under the dress flash on and off according to the measured fluctuations of the model’s heartbeat.

    3. How does it do this?
    With the sponsorship of IBM’s Bluemix, the designers created an application in Node.js that allows the heart-rate sensors to sync with the LEDs’ brightness and lighting.

    4. Does it work?
    As shown in the video, the LED lights flicker in accordance with the typical rhythm of a heartbeat.

    5. How would you change it?
    I would explore the aesthetic and technical capabilities of the LED dress further, using not only heart-rate data but also thermal sensors or brainwave sensors to further expand the possibilities for wearable sensors.
