    Title

    Autonomous vision-based hazard map generator for planetary landing phases

    Paper number

    IAC-14,C1,7,3,x25607

    Author

    Mr. Paolo Lunghi, Politecnico di Milano, Italy

    Coauthor

    Prof. Michèle Lavagna, Politecnico di Milano, Italy

    Year

    2014

    Abstract
    The paper presents a hazard detection and landing-map generator based on a single camera acquisition, lightweight enough to run onboard during the landing phase of a planetary exploration mission.
    Autonomous, precise and safe landing capability is a key feature of next-generation space missions: scientifically relevant sites may be associated with hazardous terrain features or confined to very limited areas; in other cases it is not possible to completely characterize a predefined landing area with the required accuracy. The short duration of the landing phase, together with telecommunication delays, requires a high level of onboard autonomy in the guidance, navigation and control (GNC) system, coupled with lightweight computational methods. In such a scenario, the ability to distinguish hazardous from safe landing areas, and consequently to correct the landing trajectory, becomes crucial.
    Algorithm development is made more difficult by uncertainty in the knowledge of the morphological structures to be encountered during the landing phase, since the environment is not perfectly known in advance. The well-known generalization properties of Artificial Neural Networks (ANNs) are exploited here to build maps online that remain flexible with respect to the conditions used on the ground to set up the classification mechanism.
    The calibration on which the classification of hazardous regions is based is a long and complex operation: here again, ANNs are simple to implement and computationally efficient, even during their training phase.
    A set of self-organizing maps is exploited to characterize terrain features at different scales. Each activated neuron corresponds to a class of combinations of morphological properties, such as shadows, roughness and slopes. A set of feedforward ANNs then interprets these parameters to produce a single hazard index for each pixel of the original image. Finally, the map is filtered so that each index is correlated with the surrounding elements.
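    As a minimal sketch (not the authors' implementation), the following NumPy code illustrates how such a per-pixel pipeline could be wired together; the descriptor layout, the one-hot encoding of the SOM activations, the network sizes and the box filter are all illustrative assumptions, with Python standing in for the original MATLAB code.

```python
import numpy as np

def build_hazard_map(descriptors, som_weights, W1, b1, W2, b2, k=5):
    """Per-pixel hazard map from local terrain descriptors (sketch).

    descriptors : (H, W, d) array of local features per pixel, e.g.
                  shadow, roughness and slope measures at several scales.
    som_weights : (n_units, d) codebook of a trained self-organizing map.
    W1, b1, W2, b2 : weights of a small feedforward network, assumed
                  to have been trained offline on labelled terrain.
    """
    H, W, d = descriptors.shape
    flat = descriptors.reshape(-1, d)

    # 1. Best-matching SOM unit for each pixel: every unit stands for
    #    one class of combinations of morphological properties.
    dist2 = ((flat[:, None, :] - som_weights[None, :, :]) ** 2).sum(axis=2)
    bmu = dist2.argmin(axis=1)
    activations = np.eye(som_weights.shape[0])[bmu]   # one-hot encoding

    # 2. Feedforward network maps the activations to a hazard index
    #    in [0, 1] for every pixel (tanh hidden layer, sigmoid output).
    hidden = np.tanh(activations @ W1 + b1)
    hazard = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))
    hazard = hazard.reshape(H, W)

    # 3. Box filter so that each index also reflects the surrounding
    #    terrain rather than a single isolated pixel.
    r = k // 2
    padded = np.pad(hazard, r, mode="edge")
    smoothed = np.empty_like(hazard)
    for i in range(H):
        for j in range(W):
            smoothed[i, j] = padded[i:i + k, j:j + k].mean()
    return smoothed
```

    Encoding each pixel by its best-matching SOM unit keeps the feedforward stage small: the network only has to map a discrete set of terrain classes to a hazard score, rather than the raw descriptors themselves.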
    Once the final hazard map is available, the target landing site is updated, considering merit parameters such as the hazard index, the extent of the safe landing areas and the proximity to the nominal target.
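    A correspondingly hedged sketch of this retargeting step follows; the safety threshold, the merit weights and the neighbourhood-size proxy for the extent of the safe area are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def select_landing_site(hazard_map, nominal_target,
                        hazard_thresh=0.3, r=10,
                        w_hazard=1.0, w_size=1.0, w_dist=1.0):
    """Pick a landing pixel from the hazard map (sketch).

    Trades off the local hazard index, the amount of safe terrain in a
    (2r+1)x(2r+1) neighbourhood and the distance to the nominal target;
    threshold, radius and weights are illustrative, not from the paper.
    """
    H, W = hazard_map.shape
    safe = hazard_map < hazard_thresh

    # Fraction of safe pixels around each candidate, used as a crude
    # proxy for the extent of the safe landing area.
    padded = np.pad(safe.astype(float), r, mode="constant")
    area = float((2 * r + 1) ** 2)
    size = np.array([[padded[i:i + 2 * r + 1, j:j + 2 * r + 1].sum() / area
                      for j in range(W)] for i in range(H)])

    # Distance of each pixel to the nominal target, normalized.
    ii, jj = np.indices((H, W))
    dist = np.hypot(ii - nominal_target[0], jj - nominal_target[1])
    dist = dist / max(dist.max(), 1.0)

    # Lower score is better; pixels flagged as hazardous are excluded.
    score = w_hazard * hazard_map - w_size * size + w_dist * dist
    score[~safe] = np.inf
    return np.unravel_index(score.argmin(), score.shape)
```

    A full run would simply chain the two sketches: the smoothed map from the previous snippet feeds directly into this selection step, which returns the pixel coordinates of the updated target.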
    The algorithm requires less than 6 seconds to analyze a 1000x1000 image on an Intel i7-2630QM processor in a MATLAB environment. Different training methods are investigated, comparing the use of artificial and real images.
    Results for different scenarios in both Martian and Lunar landing cases are shown and discussed in order to highlight the effectiveness of the proposed system. Sensitivity to environmental parameters, such as lighting conditions, trajectory inclination and camera attitude, is investigated. Finally, possible future improvements are suggested.
    Abstract document

    IAC-14,C1,7,3,x25607.brief.pdf

    Manuscript document

    IAC-14,C1,7,3,x25607.pdf (authorized access only)

    To obtain the manuscript, please contact the IAF Secretariat.