  • A Novel Matching Method of Large Deformed Images for Positioning Lunar Rovers with Large Distance

    Paper number

    IAC-15,A3,2C,11,x31244

    Author

    Dr. Chuankai Liu, China

    Coauthor

    Dr. Baofeng Wang, Science and Technology on Aerospace Flight Dynamics Laboratory / Beijing Aerospace Control Center, China

    Year

    2015

    Abstract
    In lunar exploration, one important task of the ground teleoperation center is to estimate the position of each navigation station that the rover reaches, which enables the rover to gradually approach and finally arrive at its scientific probing targets. Accurate rover positioning is usually achieved in two steps. First, inertial navigation information is used to roughly estimate the position of the rover at the current station; then a vision-based positioning method, taking this coarse result as an initial input, is used to calculate the rover position with high accuracy. Currently, the most promising vision-based high-accuracy positioning methods are implemented with a multi-camera photogrammetric model, which takes the homonymous (corresponding) points extracted from camera images as the intersection points of camera bundles to establish observation equations. The number, spatial distribution, and matching accuracy of the homonymous points all affect the positioning accuracy of the rover. In the case of large-span moving, the images acquired by the rover at two navigation stations separated by a fairly large distance are difficult to match, owing to the large scale and rotation transformations, reflected views of the same scenery, and different illumination conditions between the acquired images. Traditional appearance-based matching algorithms, such as SIFT and its derivatives, often fail in these situations. Improved appearance-based matching algorithms that take the affine transformation of images into account, such as affine-SIFT, handle large scale transformations better than traditional ones, but they are still unable to obtain satisfactory results in the situations above.
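The abstract does not include code, but the SIFT-style matching it refers to conventionally filters candidate correspondences with Lowe's ratio test: a match is kept only when its best descriptor distance is clearly smaller than the second-best. A minimal numpy sketch with synthetic descriptors (the helper name `ratio_test_match` and the data are illustrative, not from the paper):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    Returns (i, j) pairs where descriptor i of desc_a matches descriptor j
    of desc_b; ambiguous matches (best distance not clearly smaller than
    the second-best) are discarded.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distances to all candidates
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Synthetic 128-D "SIFT-like" descriptors: desc_b repeats desc_a with a
# little noise, so every descriptor should survive the ratio test.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 128))
desc_b = desc_a + rng.normal(scale=0.01, size=desc_a.shape)
print(len(ratio_test_match(desc_a, desc_b)))
```

Under large scale/rotation changes the descriptors of the same terrain point diverge, so the best distance is no longer clearly smallest and the ratio test rejects the match, which is the failure mode the abstract describes.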
In this paper, we propose a novel matching method that first utilizes the imaging relation implied by the large-span motion of the rover to simulate the approximate geometric transformation between the large-deformed images, and then takes the approximately transformed images as transitive ones to achieve precise matching. In this method, a more general geometric transformation model compensates for the weaknesses of current matching algorithms in tackling these special situations. The approximate positioning information of the rover can thus be exploited during image matching, which reduces unnecessary computation and improves the efficiency of rover positioning. To validate the proposed method, we conduct experiments with lunar surface images acquired by the Chang'E-3 rover, and make a detailed comparison of the performance of different methods in terms of match count, speed, correct rate, and accuracy.
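The abstract does not specify the transformation model. One standard way to simulate an approximate geometric transformation from a coarse pose change, assuming the visible terrain is roughly planar, is the plane-induced homography H = K (R - t n^T / d) K^{-1}; pre-warping one station's image with H yields the "transitive" image, after which standard matching faces only small residual deformations. The intrinsics, pose, and plane numbers below are hypothetical, not Chang'E-3 values:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the world plane (n, d) between two views
    related by rotation R and translation t: H = K (R - t n^T / d) K^-1."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def warp_points(H, pts):
    """Apply homography H to an (N, 2) array of pixel coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    q = (H @ ph.T).T
    return q[:, :2] / q[:, 2:3]               # back to inhomogeneous pixels

# Hypothetical setup: a rover camera, a 10 m move with 30 deg yaw,
# and a ground plane 1.5 m below the camera.
K = np.array([[800.0, 0.0, 512.0],
              [0.0, 800.0, 384.0],
              [0.0,   0.0,   1.0]])
yaw = np.deg2rad(30.0)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0,         1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
t = np.array([0.0, 0.0, 10.0])
n, d = np.array([0.0, -1.0, 0.0]), 1.5
H = plane_homography(K, R, t, n, d)

# Where the image corners land after the simulated transformation.
corners = np.array([[0.0, 0.0], [1024.0, 768.0]])
print(warp_points(H, corners).round(1))
```

With no pose change (R = I, t = 0) the induced homography reduces to the identity, which is a quick sanity check on the model.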
    Abstract document

    IAC-15,A3,2C,11,x31244.brief.pdf

    Manuscript document

    IAC-15,A3,2C,11,x31244.pdf (🔒 authorized access only).

    To get the manuscript, please contact IAF Secretariat.