Object Pose Estimation After Open-Loop Grasping



Overview


In this project we propose a technique for finding good grasps and for blindly estimating the pose of an object picked from a bin, using only finger encoder information.

Abstract


Traditionally, grasping in robotics has relied on accurate sensor information coupled with detailed 3D models of objects to produce a desired grasp, followed by careful planning toward the estimated pose of the object. Before grasping, the robot typically has some prior knowledge of where the object is, and it relies on this knowledge after the object has been grasped to perform a manipulation task. This approach is effective when good models of the object and hand are available and the object is in an uncluttered environment. However, it has considerable difficulty with cluttered environments, noisy sensors, and poorly articulated hands. In this project we propose a technique for finding good grasps and for blindly estimating the pose of an object picked from a bin, using only finger encoder information. Our technique relies on a three-stage cascade of machine learning algorithms that first decides whether a grasp is suitable and then, given a suitable grasp, estimates the object's pose in the hand. To test our approach, we used an ABB industrial robot arm with an inexpensive, single-actuator, three-degree-of-freedom, three-fingered robot hand, which blindly picked highlighters out of a bin using a pre-recorded grasp strategy. The robot then showed each grasped marker to a camera, which provided a ground-truth estimate of the marker's position in the robot's hand. We had the robot perform 2000 grasps, yielding a data set of ground-truth object poses and their corresponding finger poses.
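The report and video describe the actual learning cascade; purely as an illustration of the idea, the two core steps (classify grasp suitability from finger encoder readings, then estimate the object's pose from the same readings) can be sketched with a simple nearest-neighbour scheme. All encoder values, poses, and labels below are made up for the sketch, not data from the project:

```python
import math

# Hypothetical training data: three finger encoder readings at grasp
# closure, paired with a suitability label and (for suitable grasps) a
# ground-truth pose from the camera: (x_mm, y_mm, theta_deg).
training = [
    ((0.82, 0.79, 0.41), True,  (12.0,  3.5, 15.0)),
    ((0.80, 0.77, 0.44), True,  (11.0,  4.0, 12.0)),
    ((0.35, 0.36, 0.33), True,  (-2.0, -8.0, 80.0)),
    ((0.98, 0.97, 0.99), False, None),  # fingers fully closed: empty grasp
    ((0.10, 0.95, 0.12), False, None),  # inconsistent closure: bad grasp
]

def dist(a, b):
    """Euclidean distance between two encoder-reading tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_pose(encoders, k=2):
    """Stage 1: judge suitability by the nearest training sample.
    Stage 2: if suitable, average the poses of the k nearest suitable
    training grasps as the pose estimate."""
    nearest = min(training, key=lambda s: dist(encoders, s[0]))
    if not nearest[1]:
        return None  # grasp judged unsuitable: no pose estimate
    suitable = sorted((s for s in training if s[1]),
                      key=lambda s: dist(encoders, s[0]))[:k]
    return tuple(sum(s[2][i] for s in suitable) / len(suitable)
                 for i in range(3))

print(estimate_pose((0.81, 0.78, 0.42)))  # close to the first two samples
print(estimate_pose((0.97, 0.96, 0.98)))  # close to the empty-grasp sample
```

A real system would replace the nearest-neighbour lookup with trained classifiers and regressors fit to the 2000-grasp data set, but the cascade structure (reject unsuitable grasps first, estimate pose only for the rest) is the same.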

Links


Project Video (youtube)

Project Presentation (pptx)

Project Report (pdf)