Visual Localization of the “Jade Rabbit” Rover in the Chang’e-3 Lunar Probe Mission

    Paper number

    IAC-14,A3,2D,25,x26821

    Author

    Mrs. Jia Wang, 1) Science and Technology on Aerospace Flight Dynamics Laboratory, Beijing, China; 2) Beijing Aerospace Control Center, Beijing, China

    Year

    2014

    Abstract
    China's Chang'e-3 probe, carrying the country's first lunar rover, Yutu ("Jade Rabbit"), successfully landed on the Moon on December 14, 2013. To date, the Jade Rabbit has traveled about 114.82 meters on the lunar surface. During its traverse, the rover must approach and reach the scientific targets specified by scientists, and navigation and positioning are the foundation of this task: positioning accuracy directly affects the success of the scientific exploration. Currently, the two-dimensional positioning accuracy of same-beam interferometric measurement is approximately 100 meters, which is far from meeting the requirements of the rover's path planning. The rover's inertial navigation system can also locate the rover with an accuracy of about 10% of the driving distance, but this error accumulates as the distance increases. In the Chang'e-3 mission, a stereo vision system composed of two navigation cameras is therefore used for rover positioning. The distance between two adjacent stations to be located is typically about 7 meters, while the navigation cameras are mounted only 1.6 meters above the surface, so the images in the overlapping area between two adjacent stations suffer large deformation. The navigation-camera baseline is only 270 mm, so the left-camera and right-camera images taken at the same station overlap by no less than 90% and show good similarity. This paper presents an algorithm for positioning the rover from this visual information. First, SIFT (scale-invariant feature transform) matching is used to match feature points between the two left images acquired at adjacent stations. Then, correlation-coefficient matching followed by least-squares matching is used to match feature points between the left and right images acquired at the same station. Finally, with the feature points obtained by image matching, a bundle adjustment model is constructed to compute the relative position of the two adjacent stations.
    By iteratively solving for the positioning results, we obtain the precise coordinates of the rover's position in the lunar landing coordinate frame. The visual positioning method presented in this paper was fully verified in a ground laboratory, and its positioning accuracy is better than 5%. The method has been successfully applied to the localization of the Jade Rabbit, guiding the rover to complete exploration at 16 science stations.
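    The same-station matching and stereo-geometry steps described in the abstract can be sketched in simplified form below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (`ncc`, `match_along_row`, `depth_from_disparity`) are hypothetical, the same-row search assumes rectified epipolar geometry, the least-squares refinement stage is omitted, and only the 270 mm baseline figure is taken from the abstract.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Correlation coefficient (normalized cross-correlation) of two
    equally sized image patches; 1.0 means a perfect match."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_row(left, right, row, col, win=5, max_disp=20):
    """For a feature at (row, col) in the left image, search the same row
    of the right image (rectified-stereo assumption) for the column whose
    window maximizes the correlation coefficient."""
    half = win // 2
    template = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = col, -1.0
    for d in range(0, max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        window = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(template, window)
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score

def depth_from_disparity(disparity_px, focal_px, baseline_m=0.27):
    """Pinhole stereo depth Z = f * B / d, using the 270 mm navigation-
    camera baseline quoted in the abstract."""
    return focal_px * baseline_m / disparity_px
```

    In practice the correlation search only provides pixel-level candidates; the paper refines them with least-squares matching before feeding the matched feature points into the bundle adjustment that recovers the relative station positions.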
    Abstract document

    IAC-14,A3,2D,25,x26821.brief.pdf

    Manuscript document

    (absent)