Segmentation-Driven 6D Pose Estimation of Spacecraft for Vision-based Relative Navigation in Space

    Paper number

    IAC-21,C1,1,4,x66766

    Author

    Mr. Karl Martin Kajak, Germany, DLR (German Aerospace Center)

    Coauthor

    Dr. Christie Maddock, United Kingdom, University of Strathclyde

    Coauthor

    Dr. Heike Frei, Germany, DLR (German Aerospace Center)

    Coauthor

    Dr. Kurt Schwenk, Germany, DLR (German Aerospace Center)

    Year

    2021

    Abstract

    Vision-based relative navigation technology is a key enabler of several areas of the space industry, such as on-orbit servicing and space debris removal. A particularly demanding scenario is navigating relative to a non-cooperative target that offers no navigational aids and is unable to stabilize its attitude. To date, the state of the art in vision-based relative navigation has relied on traditional image processing and template matching techniques. Outside the space industry, however, object pose estimation relies predominantly on convolutional neural networks (CNNs), owing to their flexibility towards arbitrary pose estimation targets, their ability to exploit whatever target features are available, and their robustness to lighting conditions and occlusions. How these unique advantages can best be exploited for visual relative navigation remains relatively unexplored.
    
    This paper presents a relative navigation system integrating a novel keypoint-regressing CNN for pose estimation, paving the way to capitalize on the potential of CNNs in relative navigation systems. The navigation system is shown to work on real images representative of a relative navigation scenario. A proven CNN-based navigation system can serve as a basis for further research aimed at surpassing traditional image processing and template matching methods in robustness and flexibility.
    
    The relative navigation system presented in this paper is structured as follows. The pose of the target spacecraft is initialized by combining a segmentation-based keypoint regression CNN with a Perspective-n-Point (PnP) solver, operating on monocular images. This pose estimate is then passed to a Kalman filter, completing the relative navigation system. A loss function is presented that enables training a keypoint regressor on targets exhibiting an axis of rotational symmetry of finite order. A synthetic image dataset is generated using Blender as the rendering engine. The CNN is trained on the synthetic image dataset, but the system’s navigation performance is evaluated on realistic images gathered from cameras at the European Proximity Operations Simulator 2.0 (EPOS 2.0) robotic hardware-in-the-loop laboratory. The realistic images represent a close-range approach manoeuvre relative to a representative physical model of a non-cooperative target spacecraft. The relative attitude and position estimation accuracy is evaluated along the approach trajectory.
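
    The pose-initialization and filtering steps above can be illustrated with a minimal sketch in Python, assuming OpenCV's solvePnP as the Perspective-n-Point solver and a constant-velocity Kalman filter over the relative position only (the system described in the paper filters the full relative pose). The keypoint coordinates, camera intrinsics, frame interval, and noise covariances below are placeholders, not values from the paper.

    import numpy as np
    import cv2

    # Placeholder 3D keypoints in the target body frame (metres) and the 2D
    # pixel locations a keypoint-regression CNN might output for one frame.
    # Both stand in for the paper's actual spacecraft model and network output.
    object_points = np.array([[ 0.5,  0.5,  0.0],
                              [-0.5,  0.5,  0.0],
                              [-0.5, -0.5,  0.0],
                              [ 0.5, -0.5,  0.0],
                              [ 0.0,  0.0,  0.8],
                              [ 0.0,  0.0, -0.8]])
    image_points = np.array([[612.0, 410.0],
                             [420.0, 405.0],
                             [415.0, 598.0],
                             [608.0, 602.0],
                             [515.0, 380.0],
                             [518.0, 640.0]])

    # Assumed pinhole intrinsics of the monocular camera (no distortion).
    camera_matrix = np.array([[1000.0,    0.0, 512.0],
                              [   0.0, 1000.0, 512.0],
                              [   0.0,    0.0,   1.0]])
    dist_coeffs = np.zeros(5)

    # Pose initialization: Perspective-n-Point on the regressed keypoints.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)

    # Constant-velocity Kalman filter, state [x y z vx vy vz], measuring [x y z].
    dt = 0.1  # assumed frame interval in seconds
    kf = cv2.KalmanFilter(6, 3)
    kf.transitionMatrix = np.eye(6, dtype=np.float32)
    kf.transitionMatrix[0, 3] = kf.transitionMatrix[1, 4] = kf.transitionMatrix[2, 5] = dt
    kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-4
    kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-2

    kf.predict()
    filtered_state = kf.correct(tvec.astype(np.float32))  # smoothed relative position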
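
    The symmetry-aware loss is described only at a high level. One common formulation, sketched here in PyTorch as an assumption rather than the paper's actual loss, scores the predicted keypoints against every symmetry-equivalent copy of the ground truth and keeps the minimum, so a regressor trained on a target with an axis of rotational symmetry of finite order is never penalized for choosing an equivalent orientation. The z-axis as symmetry axis and the use of 3D keypoints are illustrative choices.

    import torch

    def rotations_about_z(order: int) -> torch.Tensor:
        # Rotation matrices (order, 3, 3) generating a finite rotational
        # symmetry of the given order about the body z-axis (assumed axis).
        angles = torch.arange(order) * (2.0 * torch.pi / order)
        c, s = torch.cos(angles), torch.sin(angles)
        R = torch.zeros(order, 3, 3)
        R[:, 0, 0], R[:, 0, 1] = c, -s
        R[:, 1, 0], R[:, 1, 1] = s, c
        R[:, 2, 2] = 1.0
        return R

    def symmetry_aware_loss(pred: torch.Tensor, gt: torch.Tensor,
                            sym_rots: torch.Tensor) -> torch.Tensor:
        # pred, gt: (batch, n_keypoints, 3) keypoints in the target body frame.
        # Rotate the ground truth by each symmetry rotation, compute the mean
        # squared keypoint error per rotation, and keep only the smallest.
        gt_sym = torch.einsum('kij,bnj->bkni', sym_rots, gt)        # (B, K, N, 3)
        err = ((pred.unsqueeze(1) - gt_sym) ** 2).sum(-1).mean(-1)  # (B, K)
        return err.min(dim=1).values.mean()

    # Example: a target with 4th-order rotational symmetry.
    loss = symmetry_aware_loss(torch.randn(8, 6, 3), torch.randn(8, 6, 3),
                               rotations_about_z(4))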
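
    Blender's Python API (bpy) is one plausible way to script the synthetic dataset generation mentioned above; the abstract does not specify the setup, so the object names 'Camera', 'Target', and 'Sun', the pose ranges, and the output paths in this sketch are placeholders.

    import math
    import random
    import bpy  # Blender's Python API; run this inside Blender

    scene = bpy.context.scene
    cam = bpy.data.objects['Camera']    # assumed object names in the .blend file
    target = bpy.data.objects['Target']
    sun = bpy.data.objects['Sun']

    scene.render.resolution_x = 1024
    scene.render.resolution_y = 1024

    for i in range(1000):
        # Randomize the relative pose and illumination for each frame.
        cam.location = (0.0, -random.uniform(5.0, 30.0), 0.0)
        target.rotation_euler = tuple(random.uniform(0.0, 2.0 * math.pi)
                                      for _ in range(3))
        sun.rotation_euler = tuple(random.uniform(0.0, 2.0 * math.pi)
                                   for _ in range(3))
        scene.render.filepath = f"//renders/img_{i:05d}.png"  # relative to the .blend
        bpy.ops.render.render(write_still=True)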

    Abstract document

    IAC-21,C1,1,4,x66766.brief.pdf

    Manuscript document

    IAC-21,C1,1,4,x66766.pdf (authorized access only)

    To obtain the manuscript, please contact the IAF Secretariat.