  • Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator

    Paper number

    IAC-11,C1,8,4,x11084

    Author

    Dr. Marco Sabatini, Università di Roma "La Sapienza", Italy

    Coauthor

    Dr. Riccardo Monti, Università di Roma "La Sapienza", Italy

    Coauthor

    Prof. Paolo Gasbarri, Università di Roma "La Sapienza", Italy

    Coauthor

    Prof. Giovanni B. Palmerini, Università di Roma "La Sapienza", Italy

    Year

    2011

    Abstract
    Optical navigation for guidance and control of robotic systems is a well-established technique from both the theoretical and the practical point of view. Depending on the positioning of the camera, the problem can be approached in two ways: in the first, "hand-in-eye", a fixed camera external to the robot determines the position of the target object to be reached; in the second, "eye-in-hand", the camera is accommodated on the end-effector of the manipulator. In this case the target object position is determined not in an absolute reference frame but with respect to the mobile camera (relative position). In this paper, the algorithms and the test campaign applied, in both configurations, to the planar multibody manipulator developed in the Guidance and Navigation Laboratory at the University of Rome La Sapienza are reported.
    For both optical navigation methods, the first and fundamental step is the processing of the camera image in order to extract the needed information (the "features"). A noisy environment can lead to an incorrect identification of the target object and to a possible failure of the grasping mission. From this point of view, the eye-in-hand configuration appears to be more robust, since it allows a larger number of possible algorithms to overcome this problem.
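    As a purely illustrative sketch of this image-processing step (not the authors' implementation), the extraction of a single point feature from a grayscale frame can be reduced to thresholding and centroiding; in a noisy frame this simple scheme is exactly what can latch onto a non-target object:

    ```python
    import numpy as np

    def extract_feature(image, threshold=0.5):
        """Return the centroid (row, col) of pixels above `threshold`,
        a minimal stand-in for extracting one image feature.
        Returns None when no pixel exceeds the threshold."""
        mask = image > threshold
        if not mask.any():
            return None
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    # Synthetic 10x10 frame with a bright 2x2 "target" at rows 3-4, cols 6-7.
    frame = np.zeros((10, 10))
    frame[3:5, 6:8] = 1.0
    centroid = extract_feature(frame)  # (3.5, 6.5)
    ```

    A second bright blob in the same frame would shift this centroid away from the true target, which is the failure mode the abstract describes.
    
    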
    In fact, with a fixed camera one can only rely on advanced image processing to discard all the information related to non-target objects. If a non-target object is erroneously identified as the target, the manipulator will try to catch it, with no other means of interrupting the erroneous maneuver. On the contrary, the operative field of the eye-in-hand configuration is mobile: a non-target object detected by mistake can enter or exit the camera field of view during the maneuver, as the camera gets closer to the target.
    Clearly, the maneuver can be successful only if the guidance algorithm is able to distinguish a false, newly entered target from the real one. The proposed solution modifies the Image Based Vision System with an intelligent and adaptive correction based on the time evolution of the recorded image features. Its fundamental characteristics are a wipe-search mode followed by a tracking mode with a robust target position predictor. A number of test cases are analyzed and the main results are reported.
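    The tracking-mode idea can be sketched, under assumptions of our own (a 1-D constant-velocity alpha-beta predictor with a rejection gate; the class name, gains, and gating rule below are illustrative, not the authors' algorithm): a measured feature position far from the prediction, such as a false target newly entering the field of view, is rejected and the tracker coasts on its predicted state.

    ```python
    class RobustTracker:
        """Constant-velocity alpha-beta predictor with measurement gating:
        a minimal, hypothetical sketch of a tracking mode with a robust
        target position predictor."""

        def __init__(self, x0, alpha=0.5, beta=0.2, gate=2.0):
            self.x = float(x0)   # estimated target position
            self.v = 0.0         # estimated target velocity
            self.alpha, self.beta, self.gate = alpha, beta, gate

        def step(self, z, dt=1.0):
            pred = self.x + self.v * dt       # predicted position
            if abs(z - pred) > self.gate:     # gate: likely a false target
                self.x = pred                 # reject it, coast on prediction
            else:
                r = z - pred                  # innovation
                self.x = pred + self.alpha * r
                self.v += self.beta * r / dt
            return self.x

    trk = RobustTracker(x0=0.0)
    # Slowly drifting true target, with one spurious jump (false target)
    # injected as the fourth measurement.
    estimates = [trk.step(z) for z in [0.1, 0.2, 0.3, 9.0, 0.4]]
    ```

    With these gains the estimate stays near the slowly drifting track and the spurious 9.0 measurement is gated out instead of dragging the predictor toward the false target.
    
    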
    Abstract document

    IAC-11,C1,8,4,x11084.brief.pdf

    Manuscript document

    IAC-11,C1,8,4,x11084.pdf (🔒 authorized access only).

    To get the manuscript, please contact IAF Secretariat.