Robust Navigation Framework for Uncooperative Proximity Operations using Monocular Images and Deep Learning
- Paper number
IAC-20,C1,1,1,x58493
- Author
Mr. Kuldeep Rambhai Barad, The Netherlands, Delft University of Technology (TU Delft)
- Coauthor
Mr. Lorenzo Pasqualetto Cassinis, The Netherlands, Delft University of Technology (TU Delft)
- Coauthor
Dr. Alessandra Menicucci, The Netherlands, Delft University of Technology (TU Delft)
- Coauthor
Prof. Eberhard Gill, The Netherlands, Delft University of Technology (TU Delft)
- Year
2020
- Abstract
This work proposes a navigation framework for closed-loop pose initialization and tracking of an uncooperative spacecraft in close proximity using convolutional neural networks (CNNs). The emphasis is placed on improving the robustness of CNN-based navigation systems to enable reliable on-orbit operations for applications such as active debris removal and on-orbit servicing. With a monocular camera as the sole navigation sensor, the problem of pose initialization and tracking relative to a known uncooperative target is tackled with a three-stage integrated navigation loop. In the first stage, state-of-the-art deep learning architectures are used to improve robustness to illumination and background conditions compared to traditional image processing algorithms. Building on advances in computer vision techniques for human pose estimation, the first stage uses a single-shot detection network in series with a high-resolution keypoint regression network. The detection network is trained to accurately localize the target spacecraft within a 2D bounding box. The regression network subsequently regresses the location of each keypoint within this region of interest and produces a heatmap for each corresponding keypoint of the known 3D model. The feature heatmaps are used to quantify the uncertainty of each keypoint detection as a covariance matrix. This information is used in the second stage to solve the perspective equations with a Covariant Efficient Procrustes Perspective-n-Point (CEPPnP) solver. Compared to methods that do not account for feature uncertainty, this approach improves pose estimation accuracy by taking the quality of each detection into account. In the final stage, the pose estimation pipeline is integrated with an Extended Kalman Filter to track the pose of the target spacecraft along close-proximity trajectories, using the associated covariance in the loop. The pose estimation pipeline of the first two stages is benchmarked on the Spacecraft Pose Estimation Dataset (SPEED) and compared with other state-of-the-art methods. Subsequently, an image dataset of ESA's Envisat spacecraft is introduced, which, in addition to improved training, enables testing of the integrated navigation loop on sequences of images emulating close-proximity scenarios. The dataset contains a training set of synthetically rendered, texture-randomized images of Envisat, while the test sets also contain real images of a scaled Envisat model acquired at ESA's GNC Rendezvous, Approach and Landing Simulator testbed. The complete framework is tested on the Envisat image datasets and shows measurable improvements in the robustness of the navigation solution to the variations in texture, scale, and illumination expected in real on-orbit operations.
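To make the heatmap-to-pose step described above more concrete, the sketch below (not taken from the paper) shows one way a per-keypoint 2x2 covariance can be extracted from a regression heatmap and how the resulting 2D-3D correspondences could then be fed to a PnP solver. All array shapes, the placeholder camera intrinsics `K`, the placeholder model points, and the helper `keypoint_mean_and_cov` are illustrative assumptions; OpenCV's EPnP is used only as a stand-in for the covariance-aware CEPPnP solver, since CEPPnP (which consumes the covariances directly) is not available in OpenCV.

```python
import numpy as np
import cv2

def keypoint_mean_and_cov(heatmap):
    """Estimate a keypoint location and its 2x2 covariance from a CNN heatmap.

    The heatmap is treated as an unnormalized probability surface over pixel
    coordinates: the weighted mean gives the keypoint estimate, and the second
    central moments give the detection covariance.
    """
    h, w = heatmap.shape
    hm = np.clip(heatmap, 0.0, None)
    hm = hm / (hm.sum() + 1e-12)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    mu = np.array([np.sum(xs * hm), np.sum(ys * hm)])
    dx, dy = xs - mu[0], ys - mu[1]
    cov = np.array([
        [np.sum(dx * dx * hm), np.sum(dx * dy * hm)],
        [np.sum(dx * dy * hm), np.sum(dy * dy * hm)],
    ])
    return mu, cov

# Hypothetical inputs: N heatmaps from the keypoint regression network and the
# corresponding 3D keypoints of the known target model (target body frame).
heatmaps = np.random.rand(8, 64, 64)                     # placeholder network output
model_points = np.random.rand(8, 3).astype(np.float64)   # placeholder 3D model keypoints
K = np.array([[1000.0, 0.0, 32.0],
              [0.0, 1000.0, 32.0],
              [0.0, 0.0, 1.0]])                          # placeholder camera intrinsics

image_points, covariances = [], []
for hm in heatmaps:
    mu, cov = keypoint_mean_and_cov(hm)
    image_points.append(mu)
    covariances.append(cov)
image_points = np.asarray(image_points, dtype=np.float64)

# Stand-in for CEPPnP: OpenCV's EPnP ignores the per-feature covariances, which
# in the described framework would weight the PnP solution and later inform the EKF.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
```

In this sketch the covariances are only collected; a covariance-weighted solver such as CEPPnP would use them to down-weight poorly detected keypoints, and the same matrices would serve as measurement noise for the Extended Kalman Filter in the tracking stage.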