Video Link
Project Website
Research Paper
Machine manipulation in an industrial/entertainment context
Who: Stanford Shape Lab: Inrak Choi, Elliot W. Hawkes, David L. Christensen, Christopher J. Ploch, Mark Roman Miller, Sean Follmer
What: A haptic interface that mimics interaction with virtual objects, designed for use with a virtual reality headset
When: 2016
Where: Stanford Shape Lab
Context: A haptic handheld interface that allows the user to interact with and manipulate digital objects being viewed through VR
What was the question or problem the project is addressing?
The problem this device addresses is the lack of interaction available within immersive virtual reality experiences. It tries to bridge the gap between the physical world and the virtual world by making digital objects tangible. This also expands user input toward more intuitive manipulation, in place of handheld controllers and/or gestures, and provides “kinesthetic feedback”.
What new insight did it take to make it happen?
This project needed to consider how to mimic physical interactions with nonexistent objects. The team wanted to make a haptic interface that allows for more natural interaction while keeping the design low-cost and lightweight. That required research into ways to minimize the size and weight of components so the device would not restrict the user's experience.
How well did it work?
In my opinion it was very successful. There are some shortcomings, which I mention when discussing the uncertainty of the project. The fact that it is entirely self-contained (wireless and lightweight) makes it much more accessible than other available haptic interfaces and allows for a higher level of immersion.
What are ways it relates to our theme?
This device relates to our theme of interaction with objects because it was specifically made to let people use their hands to interact with objects that do not exist in the physical world, which ties into an understanding of what it means to interact with objects.
Is there a clear boundary between the machine and the material or object?
Physically, there is no clear boundary between the machine and the object, but visually there is: you can only ever see one or the other, and it is clear which is which. The object itself is simulated by the machine, since it does not actually exist.
Where is the agency?
There is agency in the sense that this is not a standalone machine; it functions only when in use by a person.
What kind of uncertainty is present in the process?
There is uncertainty in the material, or the feeling, of the object. The user can feel the presence and shape of the object, but the device does not convey the weight of objects or the textures of different materials. Additionally, there are no sensors connected to the pinky finger or the palm of the hand.
Related Projects:
SPIDAR: Allows you to interact with digital objects using both of your hands. Restrictive in that it can only be used within a specific space, which limits natural hand movements.
(time @ :34)
HIRO III: Similar concept, and restrictive for the same reason as SPIDAR. More similar to the Wolverine, but heavier and more expensive; it is more of a robot and is definitely bigger. The difference is that the Wolverine is lightweight and moves with your hand. website
PHANToM Haptic Interface: A haptic interface built around a pen that lets the user interact with their computer in three dimensions, but it is mainly focused on being a tool to draw or sculpt rather than on interaction. website
Maglev Haptic: Magnetic Levitation Haptic Device. Again, this is more of a tool for interaction than a simulation of different objects. It allows you to interact with three-dimensional objects in a space, but it does not provide as much feedback. website
What do you consider to be the one key takeaway lesson?
I think this is definitely one of the most accessible haptic interface devices, and it has improved a lot over other designs, but there is still a lot of room for improvement and many different factors to consider when trying to construct a device that allows us to interact with things outside the physical world. While it can be improved in the future, it does not appear too hindered by its limitations, since it is not a widely available device. The takeaway lesson here is that novel ideas and good innovation leave a lot of room for forgiveness.
Is there a way your peers could apply the core idea?
This would apply to projects that involve a degree of human interaction: thinking about how the work will be interacted with in a way that is not restrictive to the user but instead intuitive and natural.
Prepared Questions:
How do people behave differently within physical or virtual interactions?
If a work is interactive, what are ways to make this clear and inviting to the viewer?
What is the accessibility of your work? This can relate to the accessibility of the work itself (what can it do, what can't it do) and the accessibility of the viewer/user (consider people with disabilities, people of different backgrounds, and people of different ages).