Nonlinear Estimation for Vision-based Target Attitude Measurement in Space Operation

    Paper number

    IAC-10.C1.4.11

    Author

    Mr. Haifeng Su, College of Astronautics, Northwestern Polytechnical University, China

    Coauthor

    Prof. Xiaokui Yue, Northwestern Polytechnical University, China

    Year

    2010

    Abstract
    The use of vision systems as navigation and tracking sensors for space assets has become increasingly attractive because of their compact size, low cost, and the rich information they provide. This paper addresses a nonlinear estimation framework for vision-based target measurement, with space robot manipulation as the reference scenario. The work deals with measuring the target attitude state, including kinematic and dynamic parameters and the target size, from stereo vision, since this state vector is the basis for navigation and control in unmanned space operations.
    The estimation framework for the vision measurement is composed of two parts: estimation from the vision information and feedback of the feature-point estimates. Within the framework, the components of the target state vector are divided into two groups: the position states and the remaining states, comprising velocity, acceleration, attitude angles, and target size. Since the position can be derived directly from the vision measurement at each frame, the position components of the state are estimated with a particle filter, while the other components, governed by nonlinear equations, are handled by an unscented Kalman filter (UKF); a minimal sketch of this partition follows the abstract.
    The sequence of real-time images obtained from the stereo cameras is processed to extract information about the target. Since image processing is time consuming, the feature locations predicted by the filters for the next interval, expressed in the image plane, are fed back to the image-processing stage to narrow the search region used for stereo matching and feature tracking; a second sketch below illustrates this search-window feedback.
    The approach is evaluated in simulation and implemented with a CCD binocular camera system in a scenario in which a camera-equipped terrestrial robot tracks a balloon moving with random velocity. The method can be readily adapted to practical space operation missions.
    The designed framework and algorithms are first simulated on a PC, processing a synthetic test video, using C++, and then implemented in an embedded stereo vision system mounted on a wheeled terrestrial robot that chases a balloon bouncing on the floor, the balloon standing in for the target of a space operation. The robot can catch the target (balloon) at moderate speeds (the balloon being released freely by hand). The vision-based estimation framework and algorithms provide a reasonable response rate and robustness.
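
    The abstract does not give implementation details of the particle filter; the following is only a minimal C++ sketch of the position part of the state partition described above, in which the 3-D position is estimated by a simple particle filter driven by a stereo-derived position fix. All class and variable names (PositionParticleFilter, stereoPosition, and so on) are illustrative assumptions, and the UKF that would handle the remaining components (velocity, acceleration, attitude angles, target size) is only indicated in a comment, not implemented.

    // Illustrative sketch only: particle filter for the position component of
    // the target state; the remaining components would be passed to a UKF,
    // which is not shown here.
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <vector>

    using Vec3 = std::array<double, 3>;

    struct Particle {
        Vec3 pos;       // hypothesised target position (m)
        double weight;  // importance weight
    };

    class PositionParticleFilter {
    public:
        PositionParticleFilter(std::size_t n, const Vec3& init, double spread)
            : gen_(42), particles_(n) {
            std::normal_distribution<double> d(0.0, spread);
            for (auto& p : particles_) {
                for (int i = 0; i < 3; ++i) p.pos[i] = init[i] + d(gen_);
                p.weight = 1.0 / n;
            }
        }

        // Predict: diffuse particles with process noise (constant-position model).
        void predict(double processNoise) {
            std::normal_distribution<double> d(0.0, processNoise);
            for (auto& p : particles_)
                for (int i = 0; i < 3; ++i) p.pos[i] += d(gen_);
        }

        // Update: weight particles by a Gaussian likelihood of the stereo-derived
        // position measurement, then resample.
        void update(const Vec3& stereoPosition, double measNoise) {
            double sum = 0.0;
            for (auto& p : particles_) {
                double d2 = 0.0;
                for (int i = 0; i < 3; ++i) {
                    double e = p.pos[i] - stereoPosition[i];
                    d2 += e * e;
                }
                p.weight = std::exp(-0.5 * d2 / (measNoise * measNoise));
                sum += p.weight;
            }
            if (sum <= 0.0) return;  // degenerate likelihood: keep prior particles
            for (auto& p : particles_) p.weight /= sum;
            resample();
        }

        // Weighted mean of the particles as the position estimate.
        Vec3 estimate() const {
            Vec3 m{0.0, 0.0, 0.0};
            for (const auto& p : particles_)
                for (int i = 0; i < 3; ++i) m[i] += p.weight * p.pos[i];
            return m;
        }

    private:
        void resample() {  // systematic resampling
            std::vector<Particle> out(particles_.size());
            double step = 1.0 / particles_.size();
            std::uniform_real_distribution<double> u(0.0, step);
            double r = u(gen_), c = particles_[0].weight;
            std::size_t i = 0;
            for (std::size_t m = 0; m < particles_.size(); ++m) {
                double target = r + m * step;
                while (c < target && i + 1 < particles_.size()) c += particles_[++i].weight;
                out[m] = particles_[i];
                out[m].weight = step;
            }
            particles_.swap(out);
        }

        std::mt19937 gen_;
        std::vector<Particle> particles_;
    };

    int main() {
        PositionParticleFilter pf(500, {0.0, 0.0, 5.0}, 0.5);
        // One frame: predict, then fuse a (hypothetical) stereo position fix.
        pf.predict(0.05);
        pf.update({0.2, -0.1, 4.8}, 0.1);
        Vec3 est = pf.estimate();
        std::cout << est[0] << " " << est[1] << " " << est[2] << "\n";
        // The remaining state components would be propagated and updated by a UKF here.
    }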
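
    The form of the search-window feedback from the filters to the image-processing stage is likewise not specified in the abstract; the sketch below shows one plausible form in plain C++. The predicted image-plane feature location and its uncertainty define a rectangular region of interest, clamped to the image bounds, so that stereo matching and feature tracking scan only that region instead of the whole image. All names (SearchWindow, predictSearchWindow) are hypothetical.

    // Illustrative only: restrict the feature search to a window around the
    // filter-predicted image-plane location (names are hypothetical).
    #include <algorithm>
    #include <iostream>

    struct SearchWindow {
        int x, y, width, height;  // top-left corner and size, in pixels
    };

    // Build a search window centred on the predicted feature location.
    // 'sigmaPx' is the predicted 1-sigma uncertainty in pixels; the window
    // spans +/- 3 sigma and is clamped to the image bounds.
    SearchWindow predictSearchWindow(double predU, double predV, double sigmaPx,
                                     int imageWidth, int imageHeight) {
        int half = static_cast<int>(3.0 * sigmaPx + 0.5);
        int x0 = std::clamp(static_cast<int>(predU) - half, 0, imageWidth - 1);
        int y0 = std::clamp(static_cast<int>(predV) - half, 0, imageHeight - 1);
        int x1 = std::clamp(static_cast<int>(predU) + half, 0, imageWidth - 1);
        int y1 = std::clamp(static_cast<int>(predV) + half, 0, imageHeight - 1);
        return {x0, y0, x1 - x0 + 1, y1 - y0 + 1};
    }

    int main() {
        // Suppose the filters predict the feature near (612.4, 200.7) with
        // roughly 8 px uncertainty in a 752x480 image.
        SearchWindow w = predictSearchWindow(612.4, 200.7, 8.0, 752, 480);
        std::cout << "search ROI: " << w.x << "," << w.y
                  << " " << w.width << "x" << w.height << "\n";
        // Stereo matching and feature tracking would then scan only this ROI.
    }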
    Abstract document

    IAC-10.C1.4.11.brief.pdf

    Manuscript document

    (absent)