  • Lunarpoint: Interest point detector and descriptor for Lunar Landscapes

    Paper number

    IAC-22,B2,7,12,x70240

    Author

    Mr. Quazi Saimoon Islam, Estonia, University of Tartu

    Coauthor

    Mr. Hans Teras, Estonia, University of Tartu

    Coauthor

    Ms. Karin Kruuse, Estonia, University of Tartu

    Coauthor

    Dr. Mihkel Pajusalu, Estonia, University of Tartu

    Year

    2022

    Abstract
    In this paper, we present our results from adapting and training interest point detectors and descriptors for Lunar applications. Similar solutions are used in terrestrial use cases; one example is SuperPoint, the Convolutional Neural Network (CNN) framework developed by Daniel DeTone et al. We use simulated textured Lunar landscape imagery and surface images from the Apollo missions as the training dataset for the network. Furthermore, we compare the trained models with traditional interest point detectors for the Lunar use case.
    
    Reliable interest point detection is a key element of geometric computer vision tasks such as visual odometry, Structure from Motion (SfM), and camera calibration. Machine learning based approaches to this task can improve reliability and accuracy over traditional interest point detection while remaining capable of real-time operation. At the same time, space-qualified hardware for machine learning is starting to emerge. Building upon the established self-supervised framework of SuperPoint, our methods strive to provide an improved interest point detector and descriptor for use on board future Lunar rover visual navigation architectures (such as the one being developed in parallel to, and supporting, our project “Operations and mobility planning system for Lunar rover missions” with Milrem Robotics and ESA ESOC).
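    As a rough illustration of how detector reliability is commonly scored, the sketch below computes a repeatability metric: the fraction of keypoints in one frame that, when warped by a known homography, land near a keypoint detected in the second frame. This is a generic, hypothetical helper for illustration only (it is not code from the paper); it assumes keypoints are given as (x, y) arrays and the homography H is known.

    ```python
    import numpy as np

    def repeatability(kpts_a, kpts_b, H, eps=3.0):
        """Fraction of keypoints from frame A that, warped into frame B by
        the known homography H, fall within eps pixels of some keypoint
        detected in B. Higher means a more repeatable detector."""
        if len(kpts_a) == 0 or len(kpts_b) == 0:
            return 0.0
        # Warp A's keypoints into B's image using homogeneous coordinates.
        pts = np.hstack([kpts_a, np.ones((len(kpts_a), 1))])
        warped = pts @ H.T
        warped = warped[:, :2] / warped[:, 2:3]
        # Distance from each warped A-point to its nearest B-point.
        d = np.linalg.norm(warped[:, None, :] - kpts_b[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) <= eps))
    ```

    With a perfect detector and exact geometry the score is 1.0; under changed lighting or viewpoint (the shadowed Lunar conditions discussed above), a less robust detector drops toward 0.
    
    
    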
    
    The training dataset for the framework includes images extracted from a simulated Lunar landscape that incorporates remote sensing data collected by various Lunar orbiters (such as the Lunar Reconnaissance Orbiter) to artificially generate a realistic sandbox Lunar environment. The simulated dataset uses the Unreal game development engine (Unreal Engine 5) to produce high-quality camera images, providing a close analogue to the realistic lighting and shadowed-region conditions expected on potential Lunar rover missions. The framework is also tested on actual images collected from the surface of the Moon, ranging from images taken during the Apollo missions to other publicly available images from recent Lunar rover missions (e.g., Yutu-2), to consolidate the fidelity of the interest point detection. The machine learning based system is compared against SIFT (the Scale Invariant Feature Transform), which is an industry standard and also works on Lunar images.

    Abstract document
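    For context, the "traditional" detectors the learned models are compared against score image locations by local gradient structure. The minimal NumPy sketch below computes a Harris corner response map; this is illustrative only, since the paper's actual baseline is SIFT, whose scale-space pipeline and descriptors go well beyond this.

    ```python
    import numpy as np

    def harris_response(img, k=0.05, window=3):
        """Harris corner response for a grayscale image: high where image
        gradients vary in two directions (corners), low or negative along
        plain edges and flat regions."""
        img = img.astype(float)
        Iy, Ix = np.gradient(img)                  # image gradients
        Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
        pad = window // 2
        h, w = img.shape

        def box_sum(a):
            # Sum a gradient product over a (window x window) neighbourhood.
            ap = np.pad(a, pad)
            out = np.zeros_like(a)
            for dy in range(window):
                for dx in range(window):
                    out += ap[dy:dy + h, dx:dx + w]
            return out

        Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
        det = Sxx * Syy - Sxy * Sxy                # structure tensor determinant
        trace = Sxx + Syy
        return det - k * trace * trace
    ```

    Thresholding this response and taking local maxima yields corner keypoints; SIFT and SuperPoint both replace this hand-crafted response with more robust scoring, which is exactly what the comparison in the paper evaluates.
    
    
    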

    IAC-22,B2,7,12,x70240.brief.pdf

    Manuscript document

    IAC-22,B2,7,12,x70240.pdf (🔒 authorized access only).

    To get the manuscript, please contact IAF Secretariat.