Validation campaign of Vision-Based Navigation algorithm for autonomous planetary landing

    Paper number

    IAC-18,A3,IP,24,x48268

    Author

    Mr. Luca Losi, Italy, Politecnico di Milano

    Coauthor

    Prof. Michèle Lavagna, Italy, Politecnico di Milano

    Year

    2018

    Abstract

    Spacecraft autonomy is a key enabler of future space exploration missions and, mimicking human perception, visual sensors play a privileged role in supporting on-board navigation and state-vector reconstruction for autonomous localization. Indeed, image processing for Vision-Based Navigation is improving rapidly, making it the next frontier of navigation systems for space exploration. Dedicated algorithms and hardware are under constant development by different companies and agencies (e.g. APLNav by NASA, ATON by DLR, the PILOT system from Airbus, AGNC software developed by NGC Aerospace), with the first navigation systems fully relying on cameras currently flying on missions such as NASA's OSIRIS-REx or JAXA's Hayabusa2.

    Within this framework, the paper discusses the further enhancement of a single-camera relative Vision-Based Navigation algorithm for planetary landing, fully developed at the PoliMI Department of Aerospace Science and Technology (DAER). The proposed Vision-Based Navigation is based on feature extraction and tracking from images acquired on board. A local sparse map of the observed environment is built and used for relative navigation, with dedicated computer vision algorithms estimating the position and attitude (pose) of the spacecraft. Bundle Adjustment (BA), a widely used optimization technique commonly found in Structure from Motion (SfM) pipelines, is applied efficiently at each step, in different configurations, to both the reconstructed map and the relative pose in order to increase the overall navigation accuracy.

    A preliminary performance assessment of the navigation algorithm is presented, exploiting purposely generated synthetic video sequences of different landing trajectories on the lunar surface. Furthermore, the results of a first full calibration and verification campaign of the algorithm, performed on the dedicated hardware-in-the-loop experimental GNC test facility available at the PoliMI-DAER premises, are presented. The facility is equipped with a Mitsubishi PA-10 robotic arm reproducing the 6-DoF lander dynamics, a camera, a calibrated 2.4 x 2 m lunar surface diorama, and a dimmable 5600 K LED lighting system providing a fully controllable illumination environment; it supports verification and validation of vision-based systems up to TRL 5. In this first campaign, open-loop simulations proved that the navigation algorithm works properly, with good accuracy and robustness across different landing scenarios.
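    To make the pipeline described above concrete, here is a minimal sketch of the feature-tracking and pose-estimation step, written with OpenCV in Python. It is not the paper's implementation: the intrinsic matrix K, the KLT/PnP-RANSAC choices, and all thresholds are illustrative assumptions.

        import cv2
        import numpy as np

        # Hypothetical pinhole intrinsics; the paper's calibrated camera
        # parameters are not given in the abstract.
        K = np.array([[800.0, 0.0, 512.0],
                      [0.0, 800.0, 384.0],
                      [0.0, 0.0, 1.0]])

        def track_and_estimate_pose(prev_img, curr_img, prev_pts, landmarks_3d):
            """Track features into the current frame, then recover the camera
            pose relative to the local sparse map.

            prev_pts     : Nx1x2 float32 pixel locations detected in prev_img
            landmarks_3d : Nx3 float32 map points (assumed already triangulated),
                           expressed in the landing-site frame
            """
            # KLT optical flow: follow each feature from frame to frame.
            curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_img, curr_img, prev_pts, None)
            good = status.ravel() == 1

            # PnP with RANSAC: camera pose from 3D map points and their
            # tracked 2D observations, robust to tracking outliers.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                landmarks_3d[good], curr_pts[good], K, None,
                reprojectionError=2.0)
            if not ok:
                raise RuntimeError("pose estimation failed")
            R, _ = cv2.Rodrigues(rvec)   # rotation: world frame -> camera frame
            position = -R.T @ tvec       # camera position in the world frame
            return R, position

    Feature detection (e.g. cv2.goodFeaturesToTrack) and map triangulation would precede this step; the abstract does not specify which detector or tracker the DAER algorithm actually uses.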
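    Likewise, the Bundle Adjustment step can be sketched as a nonlinear least-squares refinement of camera poses and map points that minimizes reprojection error. The sketch below uses SciPy; the parameterization and cost are standard SfM practice and are assumptions, not details taken from the paper.

        import numpy as np
        from scipy.optimize import least_squares

        def reprojection_residuals(params, n_cams, n_pts, K,
                                   cam_idx, pt_idx, obs_2d):
            """Residuals for bundle adjustment. params packs, per camera,
            a Rodrigues rotation vector and a translation (6 values),
            followed by the 3D coordinates of every map point."""
            cams = params[:n_cams * 6].reshape((n_cams, 6))
            pts = params[n_cams * 6:].reshape((n_pts, 3))
            res = np.empty((len(obs_2d), 2))
            for i, (c, p, uv) in enumerate(zip(cam_idx, pt_idx, obs_2d)):
                rvec, tvec = cams[c, :3], cams[c, 3:]
                theta = np.linalg.norm(rvec)
                v = pts[p]
                if theta < 1e-12:
                    Xc = v + tvec
                else:
                    # Rodrigues rotation formula, then translate into the
                    # camera frame.
                    k = rvec / theta
                    Xc = (v * np.cos(theta) + np.cross(k, v) * np.sin(theta)
                          + k * (k @ v) * (1.0 - np.cos(theta))) + tvec
                proj = K @ Xc
                res[i] = proj[:2] / proj[2] - uv   # pixel reprojection error
            return res.ravel()

        # Joint refinement of all poses and landmarks; a real BA step would
        # also supply a sparse Jacobian structure for efficiency:
        # sol = least_squares(reprojection_residuals, x0,
        #                     args=(n_cams, n_pts, K, cam_idx, pt_idx, obs_2d))

    The "different configurations" mentioned in the abstract plausibly correspond to choices such as pose-only versus full BA, or the window of frames being optimized; the abstract does not specify.
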
    Abstract document

    IAC-18,A3,IP,24,x48268.brief.pdf

    Manuscript document

    (absent)