Irenee Liu – Physical Computing Studio
CMU | Spring 2016 | 48-390
https://courses.ideate.cmu.edu/48-390/s2016

Final Project Proposal – Irenee
Thu, 31 Mar 2016

 

Reference (the MIT CSAIL video magnification project):

http://people.csail.mit.edu/mrub/vidmag/

Project 1: Whiteboard Cleaning Bot (OpenCV and Stepper-Motor Control)
Mon, 08 Feb 2016

For Project 1, we applied computer vision and image-processing techniques to detect the marks on the board.

To capture images from an external webcam, I installed imagesnap on OS X. You can install it through Homebrew or another package manager:

brew install imagesnap
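
Once installed, you can grab a frame from the command line. The device name below is a placeholder; running imagesnap -l lists the cameras actually attached:

imagesnap -l                          # list available video devices
imagesnap -d "USB Camera" -w 1 board.jpg   # 1 s warm-up, then capture a frame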

I adjusted the webcam settings to increase the contrast between the marks and the whiteboard, which makes detection easier:

[Photo: the adjusted webcam setup]

The first step was setting up the OpenCV library with CMake on OS X. I followed the steps in this post:

http://blogs.wcode.org/2014/10/howto-install-build-and-use-opencv-macosx-10-10/

Using the snapshots from the external webcam, I first performed camera calibration, following the camera calibration tutorial built into the OpenCV library:

http://docs.opencv.org/3.0-last-rst/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv

This is part of the calibration data I got:

<camera_matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
    1.4598983691373969e+03 0. 9.5950000000000000e+02
    0. 1.4598983691373969e+03 5.3950000000000000e+02
    0. 0. 1.</data></camera_matrix>
<distortion_coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    -7.1164648946664952e-02 1.2969996440506684e-01 0. 0.
    -1.6477242833950642e-01</data></distortion_coefficients>
<avg_reprojection_error>3.8687741483526999e-01</avg_reprojection_error>

Then, each time we do mark detection, the captured image is preprocessed (undistorted) first.
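
A minimal sketch of that preprocessing step, not the project's exact code. The calibration file name here is an assumption; the node names match the XML above:

#include <opencv2/opencv.hpp>

// Load the saved calibration data, then undistort a captured frame
// before running mark detection on it.
cv::Mat undistortFrame(const cv::Mat& raw) {
    cv::FileStorage fs("out_camera_data.xml", cv::FileStorage::READ);  // assumed filename
    cv::Mat cameraMatrix, distCoeffs;
    fs["camera_matrix"] >> cameraMatrix;
    fs["distortion_coefficients"] >> distCoeffs;
    cv::Mat corrected;
    cv::undistort(raw, corrected, cameraMatrix, distCoeffs);
    return corrected;
}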

After camera calibration, we also need to apply a projective transform to the image. OpenCV provides a function for this purpose: cv::getPerspectiveTransform(const Point2f src[], const Point2f dst[]). For the implementation, I just used the correspondence between four pairs of points to compute the transform:

[Screenshot: the four-point perspective transform]
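
A minimal sketch of that step. The source corner coordinates are hypothetical stand-ins for the whiteboard's corners as seen in the captured image:

// Map the four board corners in the image onto a flat output rectangle.
cv::Point2f src[4] = { {112, 80}, {1180, 95}, {1205, 660}, {90, 640} };  // placeholders
cv::Point2f dst[4] = { {0, 0}, {1280, 0}, {1280, 720}, {0, 720} };
cv::Mat M = cv::getPerspectiveTransform(src, dst);
cv::Mat warped;
cv::warpPerspective(corrected, warped, M, cv::Size(1280, 720));  // corrected: the undistorted frame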

Given the purpose of the project, cleaning the marks off a whiteboard, the foreground (marks) and background (whiteboard) are usually easy to distinguish because of their high contrast.

When deciding which method to use for mark detection, I first considered watershed segmentation. It performs well at separating connected regions, so it can give a relatively accurate result. (An example implementation of the watershed algorithm can be found here:

http://www.pyimagesearch.com/2015/11/02/watershed-opencv/)

An example result of the watershed segmentation:

[Screenshot: example watershed segmentation result]

However, the goal of this project points the opposite way: instead of separating connected marks into distinct objects, we want a coarser result, one that clusters nearby marks into the same object.

So I thresholded the image, found its contours, and fitted a bounding box to each contour. The built-in sample from the OpenCV tutorials can be found here, and a short sketch of the pipeline follows the link:

http://docs.opencv.org/master/da/d0c/tutorial_bounding_rects_circles.html#gsc.tab=0
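
A minimal sketch of that pipeline, continuing from the warped image above. The threshold value is a placeholder, not the tuned one:

cv::Mat gray, bw;
cv::cvtColor(warped, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, bw, 128, 255, cv::THRESH_BINARY_INV);  // dark marks become white foreground
std::vector<std::vector<cv::Point>> contours;
cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
std::vector<cv::Rect> boxes;
for (const auto& c : contours)
    boxes.push_back(cv::boundingRect(c));  // one box per contour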

At this point, the result was still not ideal:

[Screenshot: per-contour bounding boxes]

Writing that should be treated as a single object was split into many much smaller components.

So we need to cluster the nearby contours.

I thought about using k-means clustering. However, we don’t know how many clusters (objects) are in the image; that is exactly what we want to find out! Without a value for k, there is no way to run k-means.

Instead, I realized it is enough to measure the distances between contours and merge any contours that lie within a certain distance of each other (a sketch follows the links below).

Some references for this approach:

http://dsp.stackexchange.com/questions/2564/opencv-c-connect-nearby-contours-based-on-distance-between-them

http://answers.opencv.org/question/27897/creating-regions-of-interest-roi-by-clustering-fragmented-contours/

http://stackoverflow.com/questions/23842940/clustering-image-segments-in-opencv
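
A minimal sketch of the merging idea, not the exact project code; the gap threshold is a placeholder. Growing one box by the threshold turns "nearby" into a simple rectangle-overlap test:

// Merge bounding boxes that lie within maxGap pixels of each other.
cv::Rect grow(const cv::Rect& r, int g) {
    return cv::Rect(r.x - g, r.y - g, r.width + 2 * g, r.height + 2 * g);
}

std::vector<cv::Rect> mergeNearby(std::vector<cv::Rect> boxes, int maxGap) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (size_t i = 0; i < boxes.size() && !changed; ++i) {
            for (size_t j = i + 1; j < boxes.size(); ++j) {
                // Boxes closer than maxGap intersect once one is inflated.
                if ((grow(boxes[i], maxGap) & boxes[j]).area() > 0) {
                    boxes[i] |= boxes[j];              // union covers both marks
                    boxes.erase(boxes.begin() + j);
                    changed = true;
                    break;
                }
            }
        }
    }
    return boxes;
}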

Then we got the final result:

[Screenshots: final result, with nearby marks clustered into single bounding boxes]

To get a more robust result during contour detection, low-pass filtering and thresholding should be applied. Also, the perspective warping step introduces extra noise into the system, mostly along the outline of the image. Applying edge detection before contouring turned out to be a good defense against this noise:

[Screenshot: contours after edge detection]
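
A minimal sketch of that preprocessing, continuing from the snippets above; the kernel size and Canny thresholds are placeholders:

cv::Mat smooth, edges;
cv::GaussianBlur(gray, smooth, cv::Size(5, 5), 0);   // low-pass filter against sensor noise
cv::Canny(smooth, edges, 50, 150);                   // keep only strong edges
cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);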

After mark detection, the data we have for each mark is the corner position of its bounding box, plus the width and height of the region:

[Screenshot: detected bounding-box positions and sizes]

We then transfer these data to the Particle Photon, which handles the mechanical side of the system.

For the mechanical part, we drive two stepper motors separately, according to the position and size of the marks.

For the data transfer, we set an origin in the image and offset the coordinate values relative to that origin. We then convert the xy-coordinates into polar coordinates: the radius value drives one motor and the angle value drives the other (a sketch of the conversion follows).
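
A minimal sketch of that conversion, assuming the bounding-box center is the point of interest; originX/originY stand in for whatever origin was configured for the board:

#include <cmath>

struct Polar { double radius, angle; };

// Convert an image-space point to polar coordinates around the chosen origin.
// The radius drives one stepper; the angle (in radians) drives the other.
Polar toPolar(double x, double y, double originX, double originY) {
    double dx = x - originX;
    double dy = y - originY;
    return { std::sqrt(dx * dx + dy * dy), std::atan2(dy, dx) };
}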

The scaling relationship between the image, the whiteboard, and the motors:

[Diagram: scaling between image, whiteboard, and motor coordinates]

Final result:

Links to the code:

https://gist.github.com/fishirenee/c1f18b0359a4f1856d30

https://gist.github.com/fishirenee/5e7ee7ffefcebf9e3dcf

Looking Outwards: Cleaning Bot
Sun, 24 Jan 2016

5RNP

[Photo: the 5RNP drawing-robot exhibition]

EXPLAINED: A drawing-robot exhibition in which the robots draw visitors’ portraits using image processing.

CHOSEN: This project is a good example of combining art and technology, which gives the robots a distinct charm. Its use of computer vision is also a good reference for our Project 1.

CRITIQUED: This is not the first time someone has used a camera to capture a portrait and had a robot draw it. However, the exhibition is a good attempt at bringing artistic elements to robotics, making the topic more appealing than an impassive technical demo.

RELATED: Several related projects also use OpenCV edge-detection algorithms to extract portraits.

For our Project 1, we adopted a gear-and-pinion control system similar to the one in the project shown below.

The link and video for this project:

http://patricktresset.com

 

Jetty Robot

[Photo: the Jetty Robot]

EXPLAINED: A cleaning robot with a vision control system built in.

CHOSEN: This cleaning robot integrates a camera that can be used not only for real-time control but also for analysis and recording. The robot’s mechanical design is also a good example of designing for a specific use.

CRITIQUED: A good feature of this robot is its vision-based control system, which lets it work remotely with more flexibility. It can also meet the demands of its working environment by assembling its chains in six different directions.

RELATED: The six-chain duct-climbing system reminds me of the wall-climbing robot the professor showed in class, which does an awesome job of omnidirectional climbing.

VertiGo – A Wall-Climbing Robot including Ground-Wall Transition

 
