A slow transition we miss on a day-to-day basis is the degradation of food. I often open my fridge to find expired milk or vegetables gone bad. The only way to know whether the food in your fridge has spoiled is to inspect it carefully.
Before placing food inside the fridge, attach an RFID tag to the product and input an associated expiration date into the Arduino, possibly using a potentiometer or rotary encoder. Place a 13.56 MHz RFID reader outside the refrigerator. The range on these readers is approximately 1 meter, which means they will be able to read all the RFID tags placed inside the fridge. A screen on the front of the fridge can display the food associated with every tag, with a red, yellow, or green backdrop to indicate at a glance where the food is in the transition from fresh to expired.
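One way the reader side could be organized is a small registry mapping each tag's UID to the food item and its expiration day. This is a minimal sketch under stated assumptions: the UIDs, food names, and the reduction of a tag's UID to a 32-bit key are all hypothetical, not from the post.

```cpp
#include <cstddef>

// Hypothetical tag registry. A 13.56 MHz reader would supply the UID;
// the entries here are illustrative placeholders.
struct FoodTag {
    unsigned long uid;      // tag UID as read from inside the fridge
    const char   *name;     // food item shown on the door display
    int           expiryDay; // expiration day entered on the encoder
};

// Linear scan is fine for the handful of items a fridge holds.
const FoodTag *lookupTag(const FoodTag *tags, size_t n, unsigned long uid) {
    for (size_t i = 0; i < n; ++i)
        if (tags[i].uid == uid) return &tags[i];
    return nullptr;  // unknown tag: item was shelved without being registered
}
```

The display loop would then call `lookupTag` for every UID the reader reports and render the item's name with the appropriate backdrop color.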
Proof of Concept:
I used a potentiometer and three LEDs for my proof of concept.
The potentiometer inputs the current state of the food, and I used millis() to demonstrate how the colors change as the time approaches the expiration date. For demonstration purposes, I made a “day” pass every 10 seconds.
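The color-transition logic described above can be sketched as a single function: elapsed time (as reported by millis()) is converted to a day count at the demo's accelerated rate, and the days remaining until expiry pick one of the three LEDs. The two-day yellow window is my assumption for illustration; the post doesn't specify the thresholds.

```cpp
// One simulated "day" lasts 10 seconds, as in the demo.
const unsigned long MS_PER_DAY = 10000UL;

enum Status { GREEN, YELLOW, RED };

// Map elapsed time (e.g. from millis()) and the expiry day dialed in on
// the potentiometer to a freshness status: green while plenty of time
// remains, yellow within two days of expiry, red once expired.
Status freshness(unsigned long elapsedMs, int expiryDay) {
    int currentDay = (int)(elapsedMs / MS_PER_DAY);
    int daysLeft = expiryDay - currentDay;
    if (daysLeft <= 0) return RED;
    if (daysLeft <= 2) return YELLOW;
    return GREEN;
}
```

In the sketch's loop(), the return value would simply select which of the three LED pins to drive high.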
Arrival is perhaps my favorite film, and while I was already familiar with the Wolfram documentary, and the original Ted Chiang story, I am always excited to revisit these ideas, and learn more about the interplay of language and cognition. I was impressed with the way the Electric Didact dissected this concept in the film and tied it back to the root of the very word “understand.”
Even more interesting to me is when we try to use language to express what we see, thereby translating visual cognition into an audible expression and back again. (As we are aiming to do with our projects in this course.)
This idea reminds me of a study that found that Russian speakers, who have separate words for light and dark blue, are quicker than English speakers to recognize the difference when shown two shades of blue, suggesting that language affects visual perception right here on our own blue planet.
On the other end of the same cycle, Vox did an interesting piece looking at how words for color evolve across different cultures, beginning almost always with just light and dark, then adding red before blue and green.
I’m curious if there is any way to actively adapt the interconnection between visual and linguistic cognition for use in interface design, or to create new connections by building a new vocabulary to map optical cues to concepts that do not have representations in the visual spectrum.
In relation to both the discussion regarding the visualization of sound/music and the work that Ghalya has posted about EJTECH’s work, I wanted to share this project because, well, it looks really cool. The project attaches a speaker to a metal plate; when notes are played, the vibrations move sand placed on top of the plate. It’s pretty visually interesting, and might be cool inspiration for something.
I came across this project and thought it would be cool to share. Soft Sound combines sound with fabric in order to play with textiles as an audio-emitting surface, and to create multi-sensory interactions. For example, not only can the fabric project sound, but the vibrations caused by the sound interact with the textile, causing it to throb and move. Soft Sound creates “soft” speakers by applying laser or vinyl cut copper and silver coils onto fabric, and running alternating current through the coils.
I found this project inspiring because it turns sound into a more tangible artifact: you can feel the sound’s vibrations through the fabric. It suggests a wide range of applications, from e-textiles for wearable technology to more traditional uses around the home and in everyday life.
“An island that cannot hear in an ocean that cannot see.”
Due 11:59pm, 25 September.
Use vision to make an interaction accessible to someone who cannot hear. This is more than the simple state machines in the weekly assignments — for this crit we want a full interactive experience where the device interacts with a person. “Cannot hear” is not defined only as deaf or hard of hearing; it can be any condition where listening for information is impossible. Examples: at a Baroque symphony, wearing ear protection while using loud construction equipment like a jackhammer, at night in a dorm room when the roommate is sleeping.
The inputs that can be used for this interaction are open to whatever makes the interaction work.
Take a look at the syllabus for more information on crits and the goals of this class before starting your project. Remember what we talked about in the first classes, the differences between reactions and interactions. The IDeATe Lending Library is also a good resource (and some of the staff have taken Making Things Interactive!).
Email me if you have any questions or hardware problems. I will be on the road most of Thursday but will have my laptop out at the conference taking notes on presentations.
I was watching college football the other day and caught this commercial during one of the breaks in the action. At first, I was reminded of this class because of the content (designing for accessibility), but then I realized the woman in the commercial was walking around CMU and Pittsburgh. Chieko Asakawa is an IBM fellow and a CMU professor… and blind. She’s widely regarded for her work designing everyday objects for the visually-impaired.
She also has a TED Talk that is pretty widely cited for the benefits of “accessibility” design for the entire population.
The standard alarm clock does not cater to the deaf and hearing impaired: it relies primarily on sound and, very occasionally, vibration. How do the hearing impaired wake up in the morning without having to wear uncomfortable vibrating wearables through the night?
A pulse-width-modulated LED alarm built into a wearable sleep mask or pillow would allow the deaf to be awakened in a flash. It could also serve as an unobtrusive way to wake one individual in a room rather than waking every party at the same time.
Proof of Concept
A light-based wakeup alarm would give the hearing impaired the ability to program wakeup times, colors, and light patterns, letting users find the settings that best fit their wakeup routine. For instance, softly increasing brightness would make for a calmer morning, while a user who needs to be certain of waking up could choose a bright flashing setting. The current proof of concept uses a yellow LED and a momentary push button to toggle the alarm on and off.
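The two wake patterns mentioned above can be sketched as one function that maps time-since-alarm to a PWM duty cycle (0–255, the range analogWrite() expects). The 30-second ramp and the 500 ms flash period are illustrative assumptions, not values from the post.

```cpp
enum Mode { GENTLE_RAMP, BRIGHT_FLASH };

// Compute an LED duty cycle from the time since the alarm fired.
// GENTLE_RAMP fades linearly up to full brightness over 30 seconds;
// BRIGHT_FLASH toggles between full-on and off every 500 ms.
int alarmBrightness(unsigned long msSinceAlarm, Mode mode) {
    if (mode == GENTLE_RAMP) {
        const unsigned long RAMP_MS = 30000UL;
        if (msSinceAlarm >= RAMP_MS) return 255;
        return (int)(msSinceAlarm * 255UL / RAMP_MS);
    }
    // BRIGHT_FLASH: alternate every 500 ms
    return ((msSinceAlarm / 500UL) % 2 == 0) ? 255 : 0;
}
```

In the sketch, loop() would feed this millis() minus the alarm's start time and write the result to the LED pin, with the push button toggling the alarm state.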
I have to admit, I kinda thought this particular solution was a little cliché at first, but just this week I accidentally left a burner on low and walked away for an hour. Luckily nothing was too badly damaged, but I’ve gained a new respect for practical solutions to everyday problems.
Here’s the plan:
Attach a sensor to the knob to know when it’s not in the off position. This could even be a simple switch (today we’re using a potentiometer in case we someday want to know how high the burner is set).
From there we add a sensor to tell when the cook has walked away. We don’t really want a visual indicator that’s always on when the burner is on, or we’ll learn to ignore it. Today I’m using the HC-SR04 provided in class.
From there it’s just a matter of selecting a timeframe, and an indicator. For the purposes of the demo, we’ll use 5 seconds, but in real life something like 5 minutes is probably about right. For an indicator, I’ve gone with a red LED for the demo, but perhaps a text message or IFTTT notification on my watch would be more practical long term.
Below I’ve laid out the state diagram, wiring of the demo, and a picture of it in action. There’s a link to the zipped code at the bottom.
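The state diagram above boils down to a small state machine: the knob sensor says whether the burner is on, the HC-SR04 says whether the cook is nearby, and a timestamp tracks how long they've been gone. This is my sketch of that logic, not the posted code; the names are hypothetical and the 5-second timeout matches the demo.

```cpp
// States for the unattended-burner monitor.
enum BurnerState { IDLE, ATTENDED, UNATTENDED, ALERT };

const unsigned long TIMEOUT_MS = 5000UL; // 5 s for the demo; ~5 min in real life

// One update step. knobOn comes from thresholding the potentiometer,
// cookPresent from the HC-SR04 distance reading, now from millis().
// *unattendedSince records when the cook last walked away.
BurnerState update(BurnerState state, bool knobOn, bool cookPresent,
                   unsigned long now, unsigned long *unattendedSince) {
    if (!knobOn) return IDLE;          // burner off: nothing to watch
    if (cookPresent) return ATTENDED;  // cook nearby: reset the clock
    if (state == ATTENDED || state == IDLE) {
        *unattendedSince = now;        // just walked away: start timing
        return UNATTENDED;
    }
    if (state == UNATTENDED && now - *unattendedSince >= TIMEOUT_MS)
        return ALERT;                  // light the red LED / send the notification
    return state;                      // still counting down, or already alerting
}
```

Swapping the demo's red LED for a text message or IFTTT notification only changes what happens on entry to ALERT; the state machine itself stays the same.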