Autonomous Capture of Free-Floating Objects using Predictive Approach

    Paper number

    IAC-07-C1.2.07

    Author

    Mr. Joel Robert, McGill University, Canada

    Coauthor

    Prof. Inna Sharf, McGill University, Canada

    Year

    2007

    Abstract
    Robotic manipulators are playing an increasingly important role in the development of space. Applications include maintaining the space station and deploying and retrieving satellites from orbit for repair. For safety and economic reasons, automating these operations is clearly desirable. However, developing and testing such methods in space is prohibitively expensive, and doing so on Earth is difficult because of the presence of gravity.
    
    For this purpose, a new facility that enables the simulation of autonomous capture of free-floating objects has been developed. A neutrally buoyant and balanced airship emulates a free-floating object. The inertial properties of small to medium-size satellites can be artificially reproduced using six ducted propellers positioned along the airship's principal axes. The test bed also includes a six-degree-of-freedom manipulator that moves on a 3-meter track. The robot is equipped with an eye-in-hand stereo vision system.
    
    To date, a tracking-based controller has been successfully implemented in the facility. Redundancy resolution and joint-limit avoidance are achieved by a combination of the gradient-projection and task-reconstruction methods. This scheme guarantees capture as long as the target remains in the field of view of the camera and within the workspace of the robot. However, the method performs poorly against medium- to fast-moving targets.
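The gradient-projection method mentioned above has a standard textbook form: joint velocities track the end-effector task through the Jacobian pseudoinverse, while a secondary criterion (here, distance of the joints from their range centers) is projected into the Jacobian's null space so it does not disturb the task. The sketch below is a generic illustration of that technique, not the paper's controller; the function name, gain, and joint-limit criterion are assumptions.

```python
import numpy as np

# Generic gradient-projection redundancy resolution (textbook form, not the
# paper's exact controller). The null-space term pushes joints toward the
# middle of their ranges without affecting the end-effector task.

def gradient_projection(J, x_dot, q, q_min, q_max, k=1.0):
    """J: 6x7 task Jacobian, x_dot: desired end-effector velocity (6,),
    q, q_min, q_max: joint angles and limits (7,). Returns joint rates (7,)."""
    J_pinv = np.linalg.pinv(J)
    # Gradient of a joint-limit-avoidance criterion: deviation from mid-range
    q_mid = 0.5 * (q_min + q_max)
    grad_H = -(q - q_mid) / (q_max - q_min) ** 2
    # Null-space projector: J @ N == 0, so this term is task-invisible
    N = np.eye(q.size) - J_pinv @ J
    return J_pinv @ x_dot + k * N @ grad_H
```

Because `J @ N == 0`, the secondary objective alters only the self-motion of the redundant arm; the end-effector velocity remains exactly `x_dot` whenever the task is feasible.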
    
    The method presented in this paper improves the capture procedure developed to date, both in terms of time and degree of certainty, by adding a prediction aspect to the algorithm. The trajectory of the airship is predicted from vision data using a Kalman filter. The latter is based on a linear model of the deviation of the trajectory with respect to a nominal pure-spin trajectory. The corresponding joint-space capture positions of the robot are then approximated using a redundancy-resolution scheme based on the concept of redundancy angle. The joint kinematic limits of the robot and a point-to-point planner are finally used to compute the closest possible interception location. The capture location is actively recomputed while the robot moves toward the interception location, as updated information about the airship trajectory becomes available. This method has been called active-prediction-planning-execution in the literature. The final capture is achieved by the visual servoing algorithm.
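The predict-then-intercept idea above can be sketched as a small linear Kalman filter. Everything in this snippet is a hypothetical illustration under simplifying assumptions (a one-dimensional deviation state with a constant-rate model, arbitrary noise values, the class name `DeviationKF`), not the authors' implementation, which models the deviation from a nominal pure-spin trajectory.

```python
import numpy as np

# Minimal linear Kalman filter sketch: estimate the airship's deviation from
# a nominal trajectory along one axis, then propagate the estimate forward
# to anticipate where the target will be at interception time.

class DeviationKF:
    def __init__(self, dt=0.1):
        self.x = np.zeros(2)                         # [deviation, deviation rate]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-rate model
        self.H = np.array([[1.0, 0.0]])              # vision measures deviation only
        self.Q = 1e-4 * np.eye(2)                    # process noise
        self.R = np.array([[1e-2]])                  # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with vision measurement z
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x

    def predict_ahead(self, n):
        # Propagate the current estimate n steps into the future (no update)
        x = self.x.copy()
        for _ in range(n):
            x = self.F @ x
        return x

# Feed synthetic vision data: deviation growing at 0.05 m/s, dt = 0.1 s
kf = DeviationKF(dt=0.1)
for k in range(50):
    est = kf.step(np.array([0.005 * k]))
```

Recomputing the interception point as the robot moves corresponds to rerunning `step` with each new vision measurement and calling `predict_ahead` for the planner's current time-to-intercept.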
    
    In the paper, we present simulations of the new controller, together with experimental results from the robot-airship facility demonstrating the performance of the method.

    Abstract document

    IAC-07-C1.2.07.pdf

    Manuscript document

    IAC-07-C1.2.07.pdf (🔒 authorized access only)

    To obtain the manuscript, please contact the IAF Secretariat.