  • A Multi-ocular Smart System for Vision-based Space Navigation

    Paper number

    IAC-14,B4,6A,2,x25102

    Author

    Mr. Giuseppe Capuano, TECHNO SYSTEM DEV., Italy

    Coauthor

    Mr. Raffaele Ascolese, TECHNO SYSTEM DEV., Italy

    Coauthor

    Mr. Daniele Titomanlio, TECHNO SYSTEM DEV., Italy

    Coauthor

    Mr. Pasquale Longobardi, TECHNO SYSTEM DEV., Italy

    Coauthor

    Mr. Maurizio De Nino, Techno System Developments S.R.L., Italy

    Coauthor

    Mr. Giuseppe Formicola, TECHNO SYSTEM DEV., Italy

    Year

    2014

    Abstract
    Vision-based navigation can be considered a major enabling technology in support of numerous space applications, such as autonomous lander and rover navigation for deep-space exploration, precise Rendezvous and Docking for On-Orbit Servicing, and high-accuracy relative navigation for spacecraft Formation-Flying. In this context, the Multi-ocular Smart System (MOSS) has been designed to provide real-time processing capabilities significantly higher than those of similar systems currently available. The system is characterized by low mass and low power consumption, making it compatible with small space platforms as well. MOSS comprises four items: a trinocular camera, two monocular cameras and a High-performance Processing Unit for Visual-based Navigation (HPVN).
    The trinocular camera consists of two monochrome sensors, which provide stereovision, and one color HD image sensor for panoramic vision. The two monocular cameras adopt different image sensors and lenses: the former is based on a CMOS color HD image sensor with a fixed lens, while the latter is equipped with a CCD color HD sensor and a motorized lens offering auto-iris, autofocus and zoom.
    All the cameras can either operate in stand-alone mode, thus minimizing mass, volume and power consumption, or be integrated with very powerful processing electronics (the HPVN). Capable of supporting up to four HD video inputs, the HPVN performs a number of hardware-accelerated image processing algorithms: lossless and/or lossy compression, feature detection and tracking, and real-time disparity map calculation. Expected performance includes, for instance, an input video rate of at least 240 Mpixel/s for real-time compression and real-time disparity map calculation at 30 fps for an image size of 1280x720 pixels. The MOSS components can be combined so as to satisfy the requirements of different operational scenarios. The development of the MOSS hardware platform has been completed, and the paper reports an analysis of its basic functionalities and performance in terms of resolution, lossless and lossy compression of the acquired images, video frame rate, interfaces to the spacecraft OBDH, etc. The paper is then dedicated to the ongoing R&D activities regarding the advanced functionalities, such as stereovision and feature extraction and tracking, which are being validated through the implementation of specific vision-based navigation algorithms. Finally, the integrated MOSS is tested in several simulated mission environments.
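    As a rough ground-side illustration of two of the on-board functions listed above, the sketch below approximates disparity map calculation and feature detection/tracking using OpenCV. It is only an assumption-laden analogue for explanation: MOSS performs these operations in dedicated processing hardware, and the parameter values shown (disparity search range, block size, corner count) are illustrative choices, not the system's actual configuration.

        # Illustrative sketch only: software analogues of two hardware-accelerated
        # HPVN functions (disparity map calculation, feature detection and tracking).
        # Parameter values are assumptions for demonstration, not MOSS settings.
        import cv2
        import numpy as np

        def disparity_map(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
            """Compute a dense disparity map from a rectified stereo pair."""
            matcher = cv2.StereoSGBM_create(
                minDisparity=0,
                numDisparities=64,   # assumed search range, must be divisible by 16
                blockSize=7,
                P1=8 * 7 * 7,
                P2=32 * 7 * 7,
            )
            # StereoSGBM returns fixed-point disparities scaled by 16.
            return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        def detect_and_track(prev_gray: np.ndarray, next_gray: np.ndarray):
            """Detect corner features in one frame and track them into the next."""
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                          qualityLevel=0.01, minDistance=7)
            tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
            ok = status.ravel() == 1
            return pts[ok], tracked[ok]

    As a throughput sanity check against the figures quoted above, a single 1280x720 stream at 30 fps corresponds to roughly 27.6 Mpixel/s, so even four such HD inputs (about 110 Mpixel/s in total) remain well below the stated 240 Mpixel/s real-time compression capacity.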
    Abstract document

    IAC-14,B4,6A,2,x25102.brief.pdf

    Manuscript document

    IAC-14,B4,6A,2,x25102.pdf (authorized access only).

    To get the manuscript, please contact IAF Secretariat.