Overview
Bike Buddy is a bike computer that derives its data from sound. Built from minimal components, it uses a simple contact mic that plugs directly into your phone for maximum convenience.
Inspiration
I bike a lot, so I like to keep track of the miles I ride and how fast I am going. This semester I also built a bike computer with a group of friends for Build18, so I had that project in mind when I approached this one. However, unlike the bike computer I helped make for Build18 (seen below), Bike Buddy is minimal and uses an Android phone to process and display the data from the wheel.
My Build18 bike computer used rare earth magnets mounted on the wheel to trigger a hall-effect sensor mounted on the fork. This information was then processed and displayed by a LightBlue Bean microcontroller.
Once I had the idea of using sound as the input to a bike computer, I was also inspired by the childhood practice of sticking a card into the spokes of a bike wheel to make lots of sound. I started from this concept of having something stationary on the frame hit all of the spokes, but after several iterations, I settled on what became Bike Buddy.
Technology
I used a piezo contact mic to pick up the sound of a zip tie on the wheel hitting a piece of wood mounted on the front fork. I then plugged this mic (with very minimal circuitry) into an Android phone running a custom app.
Process
My initial idea was to put a zip tie around the fork of my bike and have it stick into the spokes. I would then pick up the ticks with an electret microphone. I had intended to mount a LightBlue Bean on the fork in a laser-cut enclosure, and the Bean would do all of the data processing needed to get speed and distance. However, this proved to be overly complicated in several respects. First, many spokes would be hit every revolution, and the spoke pattern on my wheels is not completely even. Additionally, electret mics pick up a significant amount of noise (especially at high speeds), which would make detecting the spoke hits difficult. Finally, the Bean introduced the complex problem of visualizing the data after the sound was processed.
Because of this, I settled on mounting a piece of wood onto the fork and putting a zip tie around the air valve on my wheel. This way there would be only one tick per rotation, and I would get a great place to mount a contact mic. The contact mic rejected almost all outside noise, so I was hearing only the ticks of the zip tie hitting the wood piece.
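To make the numbers concrete (the wheel size here is illustrative rather than measured from my bike): a 700x25c road wheel has a circumference of roughly 2.1 m, so a tick every 0.3 s works out to 2.1 / 0.3 = 7 m/s, or about 15.7 mph, and total distance is simply 2.1 m times the number of ticks counted.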
To overcome the issue of visualizing data from the LightBlue Bean, I cut it out entirely and plugged the mic directly into the phone via a TRRS plug. In order to get good data, I had to wire the piezo up to a capacitor and a pull-down resistor. I also added a 100 ohm resistor between the right and left audio output channels and ground so that the phone would detect the plug as a pair of earbuds.
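For reference, the wiring looks roughly like the sketch below. The TRRS pinout assumes the common CTIA convention (tip = left, ring 1 = right, ring 2 = ground, sleeve = mic), and the capacitor and pull-down resistor are shown without values since I am not listing exact parts here:

    piezo (+) --[series cap]--+--------> sleeve (mic in)
                              |
                      [pull-down resistor]
                              |
    piezo (-) ----------------+--------> ring 2 (ground)

    tip (left out) ------+
                         +--[100 ohm]--> ring 2 (ground)
    ring 1 (right out) --+

The 100 ohm load on the output channels is what convinces the phone's headset detection that earbuds are attached, which keeps the external mic line active.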
I soldered up the circuit on some perfboard and then used heat shrink to protect it and the piezo.
After I had the physical and electronic hardware sorted out, I had to write an app that read the mic and processed the data. I used the official Android documentation and lots of googling to solve the many problems that came up while making the app (I have omitted these, as many were specific to my setup). To actually read the audio input in the app, I used the AudioRecord class in a separate thread.
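The core of the app is a read loop like the sketch below. This is a simplified illustration rather than the code from the repo linked under Code: the class name, threshold, debounce window, and wheel circumference are placeholder values meant to show the shape of the approach, and the app needs the RECORD_AUDIO permission.

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    // Reads the mic on a background thread and turns ticks into speed readings.
    public class TickReader implements Runnable {
        private static final int SAMPLE_RATE = 44100;
        private static final short THRESHOLD = 12000;                 // amplitude that counts as a tick; tune per setup
        private static final int DEBOUNCE_SAMPLES = SAMPLE_RATE / 20; // ignore re-triggers within 50 ms
        private static final double WHEEL_CIRCUMFERENCE_M = 2.1;      // placeholder wheel size

        private volatile boolean running = true;

        @Override
        public void run() {
            int bufSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufSize);
            short[] buffer = new short[bufSize / 2];
            long samplesSeen = 0;
            long lastTickSample = -1;
            record.startRecording();
            while (running) {
                int n = record.read(buffer, 0, buffer.length);
                for (int i = 0; i < n; i++) {
                    long sample = samplesSeen + i;
                    boolean pastDebounce = lastTickSample < 0
                            || sample - lastTickSample > DEBOUNCE_SAMPLES;
                    if (buffer[i] > THRESHOLD && pastDebounce) {
                        if (lastTickSample >= 0) {
                            // One tick per wheel revolution, so the tick spacing
                            // (in samples) gives the rotation period directly.
                            double period = (sample - lastTickSample) / (double) SAMPLE_RATE;
                            double speedMps = WHEEL_CIRCUMFERENCE_M / period;
                            // hand speedMps (and accumulated distance) to the UI thread here
                        }
                        lastTickSample = sample;
                    }
                }
                samplesSeen += n;
            }
            record.stop();
            record.release();
        }

        public void stop() { running = false; }
    }

The loop times ticks by counting samples rather than using the system clock, so the speed estimate inherits the accuracy of the audio sample clock. It would be started from the activity with new Thread(new TickReader()).start().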
Code
The code for the app is on GitHub: https://github.com/arathorn593/Bike-Buddy
Reflection
While working on this project, I learned that a simple project can still pose lots of interesting technical and design problems. However, I am most excited about the future possibilities of this device. Since the processing is done on the phone, the speed and distance data could easily be linked to GPS data or hooked into a quantified-self ecosystem.
Additionally, I am very interested in a potential variant of this device where the mic is mounted directly on the front fork. Potholes could then be detected and linked to GPS data from the phone, providing a way to map road conditions for cyclists in real time.
What started as a sound-generating glove mechanism turned into a gestural puppet controller.
Video (to be replaced with a better video shortly):
Idea genesis:
This project began with my fascination with gestural technology. Artists like Laetitia Sonami have been making waves in the world of unlikely sound generation. Sonami’s project “Lady’s Glove” features a sound generating glove that is controlled by finger movement. The result is a cohesive performance in which Sonami combines simple finger flexion with an array of sound effects.
My objective was to do something similar, but instead of embedding the glove with an arsenal of noise, I was more interested in creating a simple sound gradient from which the angle of flexion of each finger could be calculated.
How did that turn out?
Not exactly what I described.
Technologies used:
Photos:
coming soon!!!
Gist Link to Max Patch:
https://gist.github.com/LValley/e4b7d09429ce168d6ad0
Inspiration Links:
Lady’s Glove
http://sonami.net/ladys-glove/
(and very loosely) Stelarc's Third Hand:
http://stelarc.org/?catID=20265
Afterthoughts:
While this project was something of a wild ride, I am glad that I ended up with a functioning project that made sense.
In the future: definitely more concrete planning.
Inspiration
At first I wanted to have this exist out in the world, preferably out over one of the rivers. But due to the scope of the project, the time we had, and some technical issues, I scaled it down significantly. I decided on a light and silly output: a running animation. I also wanted to get more familiar with Processing and figured this would be a good project to start doing that.
Technologies Used
I used a Particle Photon board to do the initial signal processing from the piezo microphone that was on the instrument. Then I used Processing to control the animation via serial input.
Photos
Here are some photos of earlier prototypes for the propellers.
I even tried to make my own propellers so I wouldn’t have to use spoons, but unfortunately I couldn’t get a good form from the vacuum former.
Here’s a sketch for the final animation. Hopefully I can redo the animation so that it’s more than a woman running.
Code
https://github.com/dcamposzamora/windmillanimation
The file named “Switching_animations.pde” is the first code I showed at critique; it switched between two different animations. The second, “Slow_still_frames.pde”, is the one in the video, where the animation advances based on the serial input: the faster the wind hits, the faster the frames of the animation switch.
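To give a feel for how that works, here is a stripped-down Processing sketch of the serial-driven version. It is not the code from the repo: the frame count, file names, port index, and speed mapping are all placeholders, and it assumes the Photon sends a byte over serial each time the piezo crosses a threshold.

    import processing.serial.*;

    Serial port;
    PImage[] frames = new PImage[8]; // still frames of the running animation
    int current = 0;                 // index of the frame currently on screen
    float energy = 0;                // bumped by serial input, decays over time

    void setup() {
      size(640, 480);
      for (int i = 0; i < frames.length; i++) {
        frames[i] = loadImage("frame" + i + ".png");
      }
      port = new Serial(this, Serial.list()[0], 9600); // Photon on the first port
    }

    void draw() {
      // Each byte from the Photon is one "gust" registered by the piezo.
      while (port.available() > 0) {
        port.read();
        energy += 0.5;
      }
      energy *= 0.95; // decay so the figure slows down when the wind stops

      // Advance frames more often when more input is arriving.
      if (frameCount % max(1, int(10 - energy)) == 0) {
        current = (current + 1) % frames.length;
      }
      background(255);
      image(frames[current], 0, 0);
    }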
External Libraries
I used examples from the Processing reference libraries for this project.
Conclusion
I had a lot of difficulty with this project, so the final product is far from what I envisioned. Since I was caught up in conceptual roadblocks, I had less time to troubleshoot the technical difficulties that arose. But in the end I’m glad I got the microphone and animation working. Initially I thought of this as just a kind of dumb, fun project to get to know the software and hardware better, but during the critique, the suggestion that something like this could be applied to children’s toys or books was really interesting to me. It makes me wonder about the possibilities of using interactive technologies to expand on children’s books (Goosebumps choose-your-own-adventures x100) or cartoons and short animations.
Make a blog post documenting your project.
Make a sensor from a microphone that measures/detects an environmental condition that is not an audio source. You must convert some other physical energy (displacement, light, electricity, heat) to sound to be sensed by your microphone:
The first week of the project will follow these steps:
1) Identify the source that you are converting to audio. This may be a human interaction like a button push, or it may be an environmental condition such as wind speed or temperature. For the sake of describing an approach, we will use a button press as our example input, in the style of Valkyrie Savage’s Lamello.
2) Convert the energy into sound. For the button press, I would take the following steps: create a set of tines that get plucked as the button is depressed (see this video of a finger piano), connect those tines to a resonant chamber, and place an electret microphone in or on the chamber.
3) Transform your incoming signal to the frequency domain using an FFT to gain visual confirmation that you can differentiate the signal from noise.
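As a starting point for step 3, a minimal Processing sketch using the bundled Minim library will draw the live spectrum of the laptop mic. This is a generic sketch, not tied to any particular sensor:

    import ddf.minim.*;
    import ddf.minim.analysis.*;

    Minim minim;
    AudioInput in;
    FFT fft;

    void setup() {
      size(512, 300);
      minim = new Minim(this);
      in = minim.getLineIn(Minim.MONO, 1024); // laptop or USB mic input
      fft = new FFT(in.bufferSize(), in.sampleRate());
    }

    void draw() {
      background(0);
      fft.forward(in.mix); // transform the current audio buffer
      stroke(255);
      for (int i = 0; i < fft.specSize(); i++) {
        // One line per frequency band; a differentiable signal shows up
        // as a distinct peak rising above the noise floor.
        line(i, height, i, height - fft.getBand(i) * 4);
      }
    }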
The initial prototype is due Thursday, February 4th.
A working mechanism with the associated FFT displayed on your laptop is required.
The banner image is from Daniel Sierra’s Oscillate.