Relative pose estimation of non-cooperative targets based on point cloud and image fusion

    Paper number

    IAC-20,C1,2,3,x57557

    Author

    Dr. Jiaqian Hu, China, Nanjing University of Aeronautics and Astronautics

    Coauthor

    Prof. Shuang Li, China, Nanjing University of Aeronautics and Astronautics

    Coauthor

    Dr. Hongwei Yang, China, Nanjing University of Aeronautics and Astronautics

    Coauthor

    Dr. Baichun Gong, China, Nanjing University of Aeronautics and Astronautics

    Year

    2020

    Abstract

    This paper proposes a Convolutional Neural Network (CNN) based algorithm for estimating the attitude of non-cooperative space targets. A point cloud and image fusion method is used to obtain a dense target point cloud, which is then used by a trainable end-to-end neural network to perform relative pose estimation. Cameras and LiDAR are the two types of sensors most commonly carried by on-orbit spacecraft. Image data has high resolution and carries rich information, but depth is difficult to recover from it; point cloud data provides depth directly, but the point clouds produced by current LiDAR sensors are relatively sparse and cannot support accurate, reliable attitude estimation on their own. This paper therefore uses a CNN to fuse the point cloud and image data, compensating the sparse point cloud with the high-resolution image information to obtain a denser target point cloud. With this densified point cloud, we employ a trainable end-to-end deep neural network to estimate the relative pose. First, PointNet++ extracts features from the source and target point clouds. The source points are then weighted according to their extracted features, and the N points with the highest weights are selected as keypoints. Using an initial estimate of the relative pose, corresponding points for these N keypoints are sampled in the target point cloud to generate C candidate corresponding point groups. Features are extracted from all candidate groups, and a set of virtual corresponding points is generated from them. The relative pose between the source and target point clouds is then estimated from these virtual corresponding points. Because the keypoint extractor is trained within the end-to-end structure, the system avoids target interference and makes full use of the salient features of the fixed target, achieving high robustness. The key contribution is that, instead of searching for correspondences among the existing points, we generate virtual corresponding points from the matching probabilities learned over a set of candidate points, thereby improving registration accuracy.
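    The registration step summarized above (learned keypoint weighting, candidate correspondence sampling from an initial pose, probability-weighted virtual corresponding points, and a final pose solve) can be illustrated with a short NumPy sketch. This is an illustrative reconstruction, not the authors' code: the learned components (PointNet++ features, the end-to-end keypoint extractor, and the matching-probability network) are replaced by placeholder weights and softmax-of-distance probabilities, and all function and parameter names (kabsch, estimate_pose_virtual_correspondences, n_keypoints, n_candidates) are assumed for the example.

    import numpy as np

    def kabsch(src, dst):
        """Least-squares rigid transform (R, t) aligning src to dst via SVD (Kabsch)."""
        src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_mean).T @ (dst - dst_mean)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst_mean - R @ src_mean
        return R, t

    def estimate_pose_virtual_correspondences(source, target, point_weights,
                                              R_init, t_init,
                                              n_keypoints=64, n_candidates=8):
        """Sketch of the registration step described in the abstract:
        1. keep the n_keypoints source points with the highest (learned) weights,
        2. use the initial pose to gather n_candidates candidate correspondences
           for each keypoint from the target point cloud,
        3. form virtual corresponding points as probability-weighted averages
           of the candidates,
        4. solve the rigid transform from keypoints to virtual correspondences.
        Here the weights are given and the matching probabilities are a softmax
        over negative distances; in the paper both come from learned networks."""
        # 1. keypoint selection by weight
        keep = np.argsort(point_weights)[-n_keypoints:]
        keypoints = source[keep]                                          # (N, 3)

        # 2. project keypoints with the initial pose and gather the
        #    n_candidates nearest target points as candidate correspondences
        projected = keypoints @ R_init.T + t_init                         # (N, 3)
        d2 = ((projected[:, None, :] - target[None, :, :]) ** 2).sum(-1)  # (N, M)
        cand_idx = np.argsort(d2, axis=1)[:, :n_candidates]               # (N, C)
        candidates = target[cand_idx]                                     # (N, C, 3)

        # 3. matching probabilities over candidates and virtual points
        logits = -np.take_along_axis(d2, cand_idx, axis=1)                # (N, C)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        virtual = (probs[..., None] * candidates).sum(axis=1)             # (N, 3)

        # 4. closed-form pose from keypoints to virtual corresponding points
        return kabsch(keypoints, virtual)

    In the paper, the point weights and matching probabilities are produced by the end-to-end network rather than by distances, and the estimate can be refined iteratively starting from the initial pose; the closed-form SVD solve shown above is a standard stand-in for the final pose computation step.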
    Abstract document

    IAC-20,C1,2,3,x57557.brief.pdf

    Manuscript document

    (absent)