Project 1: Polar Cleaner

[Image: DSC_0044]

 

This creation combines complex physical mechanisms and electronics, guided by the input of a camera.

Our main goal was for the polar system to find all the dirty areas on the whiteboard with a camera and group them, so that spatial coordinates could be sent to the Photon receiver and translated into polar movements of the machine. The machine would then move to each location and clean that area.

Mechanism

Reasoning:

The initial mechanism was based on a tower crane. Tower cranes move by turning stepper motors and traveling along rails with a rack and pinion; in other words, they move in a polar fashion. Compared to a gantry-crane approach, we believed this was the best option given the time and resources available. When considering other ways of completing this objective, we decided against building a robot car with an eraser on it, since its weight and wheel contact could introduce inaccuracies; it was not a proper solution to the problem. We believed a stable, fixed system would be more accurate and have fewer variables to deal with.

 

Physical Components:

The system is moved by two stepper motors.

One stepper increases and decreases the radius, while the other rotates the arm in either direction. With this combination, theoretically any point can be reached, given Cartesian coordinates that are converted into polar coordinates for the two-stepper system.
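As a minimal sketch of that conversion (the function and names are illustrative, not taken from our actual code):

#include <math.h>

// Hypothetical helper: convert a Cartesian target (x, y), measured from
// the pivot of the rotation stepper, into polar coordinates for the
// two-stepper system.
struct PolarTarget {
    double radius;    // distance from the pivot, in the same units as x and y
    double angleDeg;  // rotation angle in degrees
};

PolarTarget cartesianToPolar(double x, double y) {
    PolarTarget t;
    t.radius   = sqrt(x * x + y * y);
    t.angleDeg = atan2(y, x) * 180.0 / M_PI;  // atan2 handles all four quadrants
    return t;
}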

 

The whole system was connected to the table with multi-angle clamps for maximum stability.

The rail was able to move up and down thanks to a cut timing belt and a gear, which together formed a quick rack-and-pinion system. The gear was attached to one of the stepper motors.

 

 

We used OpenBeam, a T-slot extrusion system that allowed for rapid construction of the rail system and of the mounts that hold the stepper motors in place.

Design Issues:

It was very hard to get the rotation motor to work. We found that this was because we did not use a gear system for the rotation but instead attached the arm directly to the motor shaft, which did not give it the torque we were initially trying to achieve.

 

(Work in progress)

Camera Processing

  • Picture
  • Unwarp
  • Background
  • Detection
  • Grouping
  • Coordinates

 

Project 1: Whiteboard Cleaning Bot (OpenCV and stepper motor function part)

For Project 1, we adapted computer vision and image processing techniques to detect the marks on the board.

To capture images from an external webcam, I installed imagesnap on OS X. You can install it through Homebrew or another package manager:

brew install imagesnap
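A typical capture then looks like this (the device name is an example; running imagesnap -l lists the devices attached to your machine):

imagesnap -d "USB Camera" board.jpg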

I applied some adjustments to the webcam to increase the contrast between the marks and the whiteboard, for detection convenience:

[Image: thumb_IMG_2758_1024]

The first step was setting up the OpenCV library using CMake on OS X. I followed the steps in this post:

http://blogs.wcode.org/2014/10/howto-install-build-and-use-opencv-macosx-10-10/

Using the snapshots from the external webcam, I did camera calibration first, following the camera calibration tutorial built into the OpenCV library:

http://docs.opencv.org/3.0-last-rst/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv

Here is part of the camera data I got:

<camera_matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
    1.4598983691373969e+03 0. 9.5950000000000000e+02 0.
    1.4598983691373969e+03 5.3950000000000000e+02 0. 0. 1.</data></camera_matrix>
<distortion_coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    -7.1164648946664952e-02 1.2969996440506684e-01 0. 0.
    -1.6477242833950642e-01</data></distortion_coefficients>
<avg_reprojection_error>3.8687741483526999e-01</avg_reprojection_error>

Then, each time we do mark detection, the captured image is first preprocessed using this calibration data.
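A minimal sketch of that preprocessing step (the file and node names are assumptions based on the output format of the OpenCV calibration tutorial):

#include <opencv2/opencv.hpp>

// Load the saved calibration data and undistort a captured frame
// before mark detection.
cv::Mat undistortFrame(const cv::Mat& raw) {
    cv::FileStorage fs("out_camera_data.xml", cv::FileStorage::READ);
    cv::Mat cameraMatrix, distCoeffs;
    fs["camera_matrix"] >> cameraMatrix;
    fs["distortion_coefficients"] >> distCoeffs;

    cv::Mat undistorted;
    cv::undistort(raw, undistorted, cameraMatrix, distCoeffs);
    return undistorted;
}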

After camera calibration, we also need to apply a projective transform to the image. OpenCV has a function for this purpose: cv::getPerspectiveTransform(const Point2f src[], const Point2f dst[]). I used the correspondence between 4 pairs of points to compute the transformation:

[Image: Screen Shot 2016-01-25 at 2.06.20 AM]
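A minimal sketch of this transform (the corner coordinates below are illustrative, not measured values):

#include <opencv2/opencv.hpp>

// Warp the whiteboard region of the undistorted image to a
// fronto-parallel view using 4 point correspondences.
cv::Mat rectifyBoard(const cv::Mat& undistorted) {
    cv::Point2f src[4] = { {112, 80}, {1810, 95}, {1850, 1020}, {90, 1000} };  // board corners in the image
    cv::Point2f dst[4] = { {0, 0}, {1600, 0}, {1600, 900}, {0, 900} };         // target rectangle
    cv::Mat H = cv::getPerspectiveTransform(src, dst);
    cv::Mat warped;
    cv::warpPerspective(undistorted, warped, H, cv::Size(1600, 900));
    return warped;
}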

Considering the purpose of the project (cleaning the marks off the whiteboard), the foreground (marks) and background (whiteboard) are usually distinguishable due to the high contrast between them.

When figuring out what method to apply for mark detection, I thought about watershed segmentation at first. It performs well at distinguishing connected regions and can therefore provide a relatively accurate result. (An example implementation of the watershed algorithm can be found here:

http://www.pyimagesearch.com/2015/11/02/watershed-opencv/)

An example result of the watershed segmentation:

[Image: Screen Shot 2016-01-22 at 6.49.23 PM]

However, the goal of the project actually points the opposite way: instead of differentiating connected marks as separate objects, we want a “rougher” result; that is, we want to cluster marks near each other into the same object.

So instead I thresholded the image, found its contours, and applied a bounding box to each contour. The built-in sample from the OpenCV tutorials can be found here:

http://docs.opencv.org/master/da/d0c/tutorial_bounding_rects_circles.html#gsc.tab=0
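A minimal sketch of that pipeline, loosely following the tutorial above (the threshold value is illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// Threshold the rectified board image, find contours, and fit a
// bounding box around each one.
std::vector<cv::Rect> detectMarks(const cv::Mat& board) {
    cv::Mat gray, binary;
    cv::cvtColor(board, gray, cv::COLOR_BGR2GRAY);
    // Marks are dark on a light board, so invert the threshold.
    cv::threshold(gray, binary, 100, 255, cv::THRESH_BINARY_INV);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Rect> boxes;
    for (const auto& c : contours)
        boxes.push_back(cv::boundingRect(c));
    return boxes;
}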

Up to this point, the result was not yet ideal:

[Image: Screen Shot 2016-02-08 at 12.33.37 PM]

Writing that should be regarded as a single object was split into many smaller components.

So we need to cluster the nearby contours.

I thought about using k-means clustering. However, we don’t know how many clusters (objects) are in the image; that’s exactly what we want to find out! That means we don’t know the “k” in k-means, so there is no way to start.

Then I realized it is enough to measure distances between the contours and merge any contours that lie within a certain distance of each other.

There are also some references for this approach:

http://dsp.stackexchange.com/questions/2564/opencv-c-connect-nearby-contours-based-on-distance-between-them

http://answers.opencv.org/question/27897/creating-regions-of-interest-roi-by-clustering-fragmented-contours/

http://stackoverflow.com/questions/23842940/clustering-image-segments-in-opencv
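A minimal sketch of that merging step (the gap threshold is an illustrative assumption):

#include <opencv2/opencv.hpp>
#include <vector>

// Two boxes are "near" if one of them, grown by `gap` pixels on every
// side, overlaps the other.
static bool nearBy(const cv::Rect& a, const cv::Rect& b, int gap) {
    cv::Rect grown(a.x - gap, a.y - gap, a.width + 2 * gap, a.height + 2 * gap);
    return (grown & b).area() > 0;
}

// Greedily merge bounding boxes until no two remain within `gap` pixels.
std::vector<cv::Rect> mergeNearbyBoxes(std::vector<cv::Rect> boxes, int gap = 20) {
    bool merged = true;
    while (merged) {
        merged = false;
        for (size_t i = 0; i < boxes.size() && !merged; ++i) {
            for (size_t j = i + 1; j < boxes.size() && !merged; ++j) {
                if (nearBy(boxes[i], boxes[j], gap)) {
                    boxes[i] |= boxes[j];           // union of the two rectangles
                    boxes.erase(boxes.begin() + j);
                    merged = true;
                }
            }
        }
    }
    return boxes;
}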

Then we got the final result:

[Image: Screen Shot 2016-01-30 at 8.54.33 PM]

[Image: Screen Shot 2016-01-30 at 8.53.09 PM]

During the contour detection process, low-pass filtering and thresholding should be applied to get a more robust result. Also, the perspective warping process introduces more noise into the system (mostly along the outline of the image). Applying edge detection before contouring also turned out to be a good defense against noise:

[Image: Screen Shot 2016-01-25 at 2.07.38 AM]

After mark detection, the data we get for each mark is the corner position of its bounding box, along with the width and height of the region:

[Image: Screen Shot 2016-01-30 at 8.58.10 PM]

We then transfer this data to the Photon to realize the mechanical functions that follow.

As for the mechanical part, we drive the two stepper motors separately according to the position and size of the marks.

When transferring the data, we set an origin in the image and offset the data values relative to that origin. Then we convert the xy-coordinate data to polar coordinates; the radius and angle values each drive one motor.

The scalar relationship between the image, the whiteboard, and the motors:

[Image: FullSizeRender]
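As a minimal sketch of that chain from pixels to motor steps (every constant below is an illustrative assumption, not a value from our build):

#include <math.h>

const double PIXELS_PER_CM  = 12.0;   // image-to-board scale (assumed)
const double STEPS_PER_CM   = 80.0;   // radius motor: steps per cm of travel (assumed)
const double STEPS_PER_DEG  = 4.44;   // rotation motor: steps per degree (assumed)
const double ORIGIN_X_PX    = 960.0;  // pivot position in the image (assumed)
const double ORIGIN_Y_PX    = 540.0;

void markToSteps(double cx, double cy, long& radiusSteps, long& angleSteps) {
    // Offset the mark's center relative to the origin, then convert
    // pixels to centimeters on the board.
    double x = (cx - ORIGIN_X_PX) / PIXELS_PER_CM;
    double y = (cy - ORIGIN_Y_PX) / PIXELS_PER_CM;
    // Polar conversion, then scale each coordinate to its motor.
    double r        = sqrt(x * x + y * y);
    double angleDeg = atan2(y, x) * 180.0 / M_PI;
    radiusSteps = lround(r * STEPS_PER_CM);
    angleSteps  = lround(angleDeg * STEPS_PER_DEG);
}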

Final result:

Links to the code:

https://gist.github.com/fishirenee/c1f18b0359a4f1856d30

https://gist.github.com/fishirenee/5e7ee7ffefcebf9e3dcf

Project 1 – Sumo Bots

Process

 

[Image: IMG_20160124_163130459_TOP]

[Image: IMG_20160117_143435286_HDR]

[Image: IMG_8427]

 

Abstract

Sumo Bots are a pair of miniature robots, each trying to wipe its opponent out of a small arena. Inspired by the many robots built to do combat with one another, these Sumo Bots recognize the sumo ring so they can adjust course and avoid being pushed out by their foe. Each traces its path around the ring while a dry eraser mounted on the chassis erases the path of its opponent. These battery-powered robots are built from simple laser-cut chassis, Particle Photons (powerful, Arduino-like, Wi-Fi-enabled microcontrollers), DC motors, and sensors that control the robots’ motion.

 

Related Work

Autonomous Sumo Combat Robot with Pneumatic Flipper by kmmarlow

A sumo robot that uses a flipper to help get its opponents out of the ring.

Sumo Bot kit by Parallax

A complete kit sold by Parallax for $135 for a single robot, or $225 for a full competition kit. It uses BASIC Stamp control boards and comes equipped with servos, QTI line sensors, and other components, all mounted on an aluminum chassis.

C3D4 by Joao Luiz Almeida de Souza Ramos

A sumo robot that uses extreme speed to knock its opponent out of the ring.

 

Implementation

Each sumo bot used two DC motors for movement. A whiteboard eraser served as a third point of contact with the terrain, allowing the robot to balance and move easily while performing the erasing task at the same time. To hold everything together, we sandwiched the motors between two plates of acrylic. Additional acrylic pieces held the motors securely by sandwiching a tab on each motor and securing it with nuts and bolts. We also included holes in the plates to hold the marker in place.

On top of the robot we placed our breadboarded circuit. Power was supplied by a rechargeable 9V battery. For the Photon’s input voltage, we dropped the 9V down to 5V with a linear regulator. For sensing, we used a reflective object sensor that detected the line marking the edge of the sumo ring. The motors required the full 9V straight from the battery, so we used MOSFETs to control them. The 3.3V from the Photon I/O pins was not enough to switch the MOSFETs, so we used a level shifter to bump the 3.3V up to 5V. We also included diodes across the motor leads to prevent reverse current from entering our circuit.

To control the robots, we made a basic autonomous program that moved the robot forward until the line sensor saw the line. Then, it turned the robot around to get back into the ring.
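A minimal sketch of that behavior as Photon firmware (pin assignments and the sensor threshold are illustrative assumptions, not the values from our build):

// One PWM pin per motor, each switching a MOSFET; since the MOSFETs only
// drive the motors forward, the robot turns by pivoting on one wheel.
const int LEFT_MOTOR  = D0;
const int RIGHT_MOTOR = D1;
const int LINE_SENSOR = A0;   // reflective object sensor
const int LINE_THRESH = 2000; // ADC reading at the ring's edge (assumed; depends on wiring)

void setup() {
    pinMode(LEFT_MOTOR, OUTPUT);
    pinMode(RIGHT_MOTOR, OUTPUT);
}

void loop() {
    if (analogRead(LINE_SENSOR) > LINE_THRESH) {
        // Edge detected: stop, then pivot on one wheel to face back into the ring.
        analogWrite(LEFT_MOTOR, 0);
        analogWrite(RIGHT_MOTOR, 0);
        delay(100);
        analogWrite(LEFT_MOTOR, 200);
        delay(600);
    } else {
        // No line in sight: charge forward.
        analogWrite(LEFT_MOTOR, 255);
        analogWrite(RIGHT_MOTOR, 255);
    }
}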

The code, schematic, and SolidWorks design files can be found at: https://github.com/arathorn593/sumo_bots_phys_comp

 

Discussion

This project was a good way of getting used to the physical computing workflow, and it taught us how to structure our process. The two small fighting robots were very different from our original idea for the project, in which a large robot would fight a small robot so that the robots had different personalities. Essentially, we spent too much time planning and thinking about the final product instead of executing and prototyping right away.

We adjusted our focus to making two smaller robots that buzzed around the ring and were battle-ready. We got them operating on batteries, so they could fight without any wires getting tangled. The build quality was good enough for the robots to take some hits and keep running without issue, and each robot is balanced well enough to erase its opponent’s path while writing its own.

We would have liked to give the robots different personalities and create more of a story through their interaction. There are also other control systems we considered and would have explored, including OSC user control and a system using additional sensors to detect the opponent. To reinforce the changes in code and hardware, we would have made aesthetic changes to the robots, like vacuum-forming different bodies befitting their respective “personalities.” In the end, if we had more time, we would have focused on developing more of a narrative.

 

PROJECT 1 – Drawbots

OBJECTIVE

Make a wheeled robot that completes a challenge on a whiteboard table. Your team may choose from one of the challenges below:

Normal Challenge

  • Solve a Maze
    • bonus: make it draw the maze first
  • Draw an algorithmic pattern [S+A]
    • bonus: make a multi-color changer
  • Keep the table clean of marks and debris
    • bonus: do it with CV and an external camera
  • Trace objects placed on the table
    • bonus: color them in when the object is removed

Hard Challenge

  • Two Sumo Bots [D+J]
    • bonus: make them controlled by a phone/remote control
  • Draw a clock
    • bonus: make it an analog clock
  • Draw a face from a vector image
    • bonus: draw from a photograph/video
  • Draw from your phone
    • bonus: make a website anyone can draw from

DETAILS

This is a fast and dirty project which will give you the opportunity to program your Photon boards, design physical mechanisms, utilize the tools available in IDeATe, and build with available hardware. We don’t expect these prototypes to be pretty. This project is about the process, and the focus should be on functionality and simplicity. You only have a couple of weeks and a lot to learn.

You will work in teams of 2 (there may be one team of 3 if there is an odd number of students). We will assign groups based on the course survey and discussions in class. Your team will select a robot type from the list above and get cranking. We will provide components such as DC motors, servos, stepper motors, ultrasonic depth sensors, and photoresistors.

Some example projects are listed below for inspiration and instruction (we will add to this list as you guys find more useful examples):

Projects

Tutorials

Commercially Available

DELIVERABLES

The final (working) prototype is due Tuesday, January 26th.

In addition, a blog post including the following is due 1/26, as well:

  • Process documentation: images/drawings
  • Write-up: Abstract/Related Work/Implementation/Discussion
  • 30 second video: shot on tripod/edited