  • Vision based navigation for autonomous planetary landing

    Paper number

    IAC-17,A3,2C,1,x40949

    Author

    Mr. Luca Losi, Politecnico di Milano, Italy

    Coauthor

    Prof. Michèle Lavagna, Politecnico di Milano, Italy

    Year

    2017

    Abstract
    Vision Based algorithms for relative navigation are nowadays a trending topic in the Computer Vision field, but they are still not widely exploited in space exploration missions due to their high computational cost and low TRL. In recent years, however, several companies and agencies have taken steps in this direction by developing dedicated algorithms and hardware (e.g. LVS and APLNav by NASA, SINPLEX by DLR), improving the prospects of adapting Computer Vision techniques to space applications. Vision Based algorithms for planetary landing are a very promising tool: cameras are cheap and reliable sensors with great potential in terms of achievable accuracy in spacecraft state reconstruction.
    This paper presents a Vision Based algorithm for spacecraft Terrain Relative Navigation during landing, designed from scratch at PoliMi DAER and based on a monocamera working in the visible spectrum.
    The Vision Based Navigation algorithm works by processing images from the monocamera. The first two frames are used for initialization: features are extracted from the first image and tracked into the second; from the resulting set of 2D-to-2D correspondences, the relative pose between the frames is computed and a sparse map of 3D points is initialized via triangulation. For each subsequent frame, 2D features are tracked and matched against the 3D map; the resulting set of 3D-to-2D correspondences is used to solve the Perspective-n-Point problem, which, combined with a RANSAC routine that rejects incoming outliers (wrong matches between the target image and the map), yields a first estimate of the relative camera pose. Bundle Adjustment, an optimization technique widely used in Computer Vision, is applied to both the map and the relative pose during initialization, and at each subsequent step to the obtained camera pose only, to increase the overall navigation accuracy. Whenever the number of tracked features drops below a fixed threshold, a new map is triangulated and merged with the existing one.
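    For illustration only, the following is a minimal sketch, in Python with OpenCV, of the two core steps described above: the 2D-to-2D initialization with triangulation, and the 3D-to-2D localization via PnP with RANSAC. This is not the authors' implementation; the intrinsic matrix K, all thresholds and parameters, and the function names initialize_map and localize are hypothetical choices made for the sketch.

        import cv2
        import numpy as np

        # Assumed pinhole intrinsics for the monocamera (hypothetical values).
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def initialize_map(img0, img1):
            """Extract features in the first grayscale frame, track them into
            the second, recover the relative pose, and triangulate a sparse
            3D map (up to scale, as with any monocular initialization)."""
            pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                           qualityLevel=0.01, minDistance=7)
            pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
            good0 = pts0[status.ravel() == 1].reshape(-1, 2)
            good1 = pts1[status.ravel() == 1].reshape(-1, 2)
            # Relative pose from the essential matrix; RANSAC rejects outliers.
            E, inliers = cv2.findEssentialMat(good0, good1, K,
                                              method=cv2.RANSAC, threshold=1.0)
            _, R, t, pose_mask = cv2.recoverPose(E, good0, good1, K, mask=inliers)
            # Triangulate the inlier correspondences into a sparse 3D map.
            P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P1 = K @ np.hstack([R, t])
            m = pose_mask.ravel() > 0
            pts4d = cv2.triangulatePoints(P0, P1, good0[m].T, good1[m].T)
            map3d = (pts4d[:3] / pts4d[3]).T
            return map3d, good1[m]

        def localize(map3d, pts2d):
            """Estimate the camera pose from 3D-to-2D correspondences by
            solving the Perspective-n-Point problem inside a RANSAC loop."""
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                map3d.astype(np.float64), pts2d.astype(np.float64), K, None,
                reprojectionError=2.0, flags=cv2.SOLVEPNP_ITERATIVE)
            return rvec, tvec, inliers

    In the pipeline described above, the pose returned by the PnP step would then be refined by motion-only Bundle Adjustment, and a new map would be triangulated and merged whenever the tracked-feature count falls below the threshold.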
    Performance assessment of the navigation system on synthetic video sequences of different landing trajectories over the Moon's surface is presented, along with preliminary results of the experimental calibration and verification campaign on the PoliMi DAER facility for GNC testing with hardware in the loop (HIL). The facility includes a Mitsubishi PA-10 robotic arm, with the monocamera mounted on its tip, to simulate the lander's dynamics; a calibrated 2.4 × 4 m Lunar surface diorama; and a dimmable 5600 K LED lighting system that provides a fully controllable illumination environment.
    Abstract document

    IAC-17,A3,2C,1,x40949.brief.pdf

    Manuscript document

    IAC-17,A3,2C,1,x40949.pdf (authorized access only).

    To obtain the manuscript, please contact the IAF Secretariat.