Robotic Arm Manipulation Using Shape and Color Based On Visual Servoing

Authors

  • Anusha G, Student, Department of Electronics and Instrumentation, GSSSIETW, Mysuru, Karnataka, India
  • Ganavi D, Student, Department of Electronics and Instrumentation, GSSSIETW, Mysuru, Karnataka, India
  • Revathi H, Student, Department of Electronics and Instrumentation, GSSSIETW, Mysuru, Karnataka, India
  • Sahana P, Student, Department of Electronics and Instrumentation, GSSSIETW, Mysuru, Karnataka, India
  • M V Sreenivas Rao, Associate Professor, Department of Electronics and Instrumentation, GSSSIETW, Mysuru, Karnataka, India
  • Basavanna M, Associate Professor, Department of Electronics and Instrumentation, GSSSIETW, Mysuru, Karnataka, India

Keywords:

Robot Arm, Pick And Place, PCA, Color And Shape

Abstract

The robot arm is an ongoing computer-vision project that has seen many enhancements in recent years; the aim of this paper is to present a system for controlling a robot arm based on image processing and recognition. One of the common tasks performed in industry is picking an object up from one place and placing it in another, so the aim of our project is to pick and place objects using a vision system. The vision system locates objects scattered randomly on a plane; the gripper of the robotic arm then picks each object and places it at a designated location. Without the vision system, it is difficult for the robotic arm to detect a colored object. The Arm Edge robot arm is made to pick and place objects based on color thresholding and shape analysis using Principal Component Analysis (PCA).
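The pipeline the abstract describes (color thresholding to segment an object, then PCA on the segmented pixels to recover its position and orientation for the gripper) can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' actual implementation; the function name, the synthetic image, and the simple RGB threshold are assumptions made for the example:

```python
import numpy as np

def object_pose_from_mask(mask):
    """Estimate the centroid and orientation of a binary object mask via PCA."""
    ys, xs = np.nonzero(mask)                      # pixel coordinates of the object
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)                    # (x, y) pick point
    cov = np.cov((pts - centroid).T)               # 2x2 covariance of the pixel cloud
    eigvals, eigvecs = np.linalg.eigh(cov)         # principal axes of the shape
    major = eigvecs[:, np.argmax(eigvals)]         # direction of largest variance
    angle = np.degrees(np.arctan2(major[1], major[0]))  # gripper rotation, degrees
    return centroid, angle

# Demo: a synthetic 50x50 RGB image containing one reddish rectangle.
img = np.zeros((50, 50, 3), dtype=np.uint8)
img[23:27, 10:30] = [200, 30, 30]

# Color thresholding: keep pixels that are strongly red.
mask = (img[..., 0] > 150) & (img[..., 1] < 100) & (img[..., 2] < 100)

centroid, angle = object_pose_from_mask(mask)
```

In a real system the thresholding would typically be done in HSV space (as in reference 15) for robustness to lighting, and the centroid and angle would be mapped from image coordinates to arm coordinates through a camera calibration.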

References

  1. E. Marchand and F. Chaumette. “Feature tracking for visual servoing purposes”. Advances in Robot Vision - From Domestic Environments to Medical Applications, Sept. 2014.
  2. Mejías, S. Saripalli, P. Campoy, and G. S. Sukhatme. “Visual servoing of an autonomous helicopter in urban areas using feature tracking”. Journal of Field Robotics, vol. 23, no. 3-4, pp. 185-199, 2016.
  3. Prats, P. Martinet, A. del Pobil, and S. Lee. “Robotic execution of everyday tasks by means of external vision/force control”. Intelligent Service Robotics, vol. 1, no. 3, pp. 253-266, Jul. 2017.
  4. Mebarki R., Krupa, A., Chaumette F. “Image moments-based ultrasound visual servoing”. Robotics and Automation, 2017, pp. 113 -119 .
  5. Junping Wang and Hyungsuck Cho. “Micropeg and Hole Alignment Using Image Moments Based Visual Servoing Method”. Transactions on Industrial Electronics, Vol. 55, No. 3, pp. 1286-1294, March 2008.
  6. Rajruangrabin and D. Popa. “Robot head motion control with an emphasis on realism of Neck-Eye coordination during object tracking”. Journal of Intelligent and Robotic Systems, pp. 1-28, Sep. 2016.
  7. H. Liu and H. Wang. “Adaptive visual servoing of robot manipulators” in Advances in Robot Control: from Everyday Physics to Human-Like Movements, Springer, 2006, pp. 55-82.
  8. Maidi, J.-Y. Didier, F. Ababsa, and M. Mallem. “A performance study for camera pose estimation using visual marker based tracking”. Machine Vision and Applications, vol. 21, no. 3, pp. 365-376, Apr. 2015.
  9. Pomares, J., García, G. J., Payá, Torres, F. “Image Motion Estimator to Track Trajectories Specified With Respect to Moving Objects”. Informatics in Control Automation and Robotics, vol. 15, pp. 207-217. 2008.
  10. Greggio, A. Bernardino, C. Laschi, J. Santos-Victor, and P. Dario. “Real-Time 3D stereo tracking and localizing of spherical objects with the iCub robotic platform”. Journal of Intelligent & Robotic Systems, pp. 1-30, Feb. 2011.
  11. J. Sequeira Gonçalves, L. F. Mendonça, J. M. C. Sousa, and J. R. Caldas Pinto. “Uncalibrated Eye-to-Hand Visual Servoing Using Inverse Fuzzy Models”. IEEE Transactions on Fuzzy Systems, Vol. 16, No. 2, pp. 341-353, Apr. 2008.
  12. Janabi-Sharifi, F.; Marey, M. “A Kalman-Filter-Based Method for Pose Estimation in Visual Servoing”. IEEE Transactions on Robotics, vol. 26, no. 5, pp. 939-947, Oct. 2013.
  13. D. Kragic and H. I. Christensen. “Survey on visual servoing for manipulation”. Technical report, Computational Vision and Active Perception Laboratory, University of Stockholm, Sweden. 2012.
  14. OpenCV Reference Manual v2.2. Available online: https://code.ros.org/trac/opencv/export/4761/trunk/opencv/doc/opencv.pdf [June 2010].
  15. Tse-Wei Chen, Yi-Ling Chen, Shao-Yi Chien. “Fast image segmentation based on K-Means clustering with histograms in HSV color space”. 2008 IEEE 10th Workshop on Multimedia Signal Processing, pp. 322-325, Oct. 2008.
  16. Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing. Upper Saddle River, NJ. Prentice Hall. 2002.
  17. S. Suzuki and K. Abe. “Topological structural analysis of digitized binary images by border following”. Computer Vision, Graphics, and Image Processing, vol. 30, no. 1, pp. 32-46, Apr. 1985.
  18. C. Harris and M. J. Stephens. “A combined corner and edge detector”. Alvey Vision Conference, pp. 147-152, 1988.
  19. J. Shi and C. Tomasi. “Good features to track”. 9th IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, June 1994.
  20. Gómez-Allende, Darío Maravall. “Reconocimiento de Formas y Visión Artificial”. Wilmington, DE. Addison-Wesley Iberoamericana. 1994.
  21. Faugeras, Olivier and Luong, Quang-Tuan. “The Geometry of Multiple Images”. Cambridge, MA. The MIT Press. 2001.
  22. Hartley Richard and Zisserman Andrew. “Multiple View Geometry in Computer Vision”. Cambridge. Cambridge University Press.

Published

2018-05-08

Section

Research Articles

How to Cite

[1]
Anusha G, Ganavi D, Revathi H, Sahana P, M V Sreenivas Rao, Basavanna M. "Robotic Arm Manipulation Using Shape and Color Based On Visual Servoing". International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 4, Issue 6, pp. 733-739, May-June 2018.