  • A Space Targets Visual Localization Method for On-orbit Spacecrafts Based on Neural Network

    Paper number

    IAC-19,B2,IP,6,x49875

    Author

    Ms. Yue-Jiao Wang, China, Xi’an Microelectronics Technology Institute, China Aerospace Science and Technology Corporation (CASC)

    Coauthor

    Dr. Zhong Ma, China, Xi'an Microelectronics Technology Institute

    Coauthor

    Prof. Xuehan Tang, China, Xi'an Microelectronics Technology Institute, China Academy of Space Electronics Technology (CASET), China Aerospace Science and Technology Corporation (CASC)

    Coauthor

    Mr. Zhuping Wang, China, Xi'an Microelectronics Technology Institute, CASC

    Coauthor

    Mr. Longqing Gong, China, Xi'an Microelectronics Technology Institute, CASC

    Year

    2019

    Abstract
    The ability to measure the distance to various space targets is crucial for spacecraft performing complex tasks such as capturing space debris, removing orbital garbage, and rendezvous and docking with other spacecraft. For non-cooperative targets in particular, it is necessary to obtain the spatial distance to the target. In recent years, deep learning has developed rapidly and overcomes a drawback of traditional visual methods, which generally require hand-crafted features such as SIFT and HOG and are susceptible to changes in illumination and target location. This paper therefore proposes a new visual localization method for space targets based on a deep neural network, which collects and processes visual sensor data to sense the relative location of targets. Specifically, a binocular stereo camera is installed at the front of the spacecraft. The camera sequentially returns two adjacent frames of infrared images along the line of sight. The images are first sent to an embedded computing platform for image processing. A self-supervised framework is then used to train a convolutional neural network; the training phase is performed on the ground. The network is initialized with the SuperPoint architecture and retrained on our own spacecraft dataset, which consists of satellites, space debris, orbital garbage, space shuttles, runaway spacecraft, etc. After training, the resulting model jointly extracts feature points and descriptors from images on board. Moreover, a network-based motion consistency algorithm is proposed to match the feature points of the two images by their descriptors. Finally, a relatively high-resolution depth map of the scene, giving the distance between the spacecraft and the space targets, is obtained by triangulation, and the locations of the space targets are sent back to the spacecraft. The spacecraft then performs the corresponding action according to the location. The proposed neural-network-based visual localization method for space targets combines the perception capability of deep learning, enhances the robustness of the original method, and lays a solid technical foundation for improving the intelligence and autonomous controllability of our spacecraft.
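    The final triangulation step described in the abstract can be sketched as follows. This is a minimal illustration of the standard pinhole stereo model (depth Z = f·B/d, for focal length f in pixels, baseline B, and disparity d between matched feature points); the function name, parameter values, and invalid-disparity handling are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    def depth_from_disparity(disparity, focal_px, baseline_m):
        """Triangulate per-pixel depth from a stereo disparity map.

        Uses the pinhole stereo relation Z = f * B / d, where
        focal_px is the focal length in pixels, baseline_m the
        camera baseline in meters, and d the disparity in pixels.
        Pixels with non-positive disparity are marked as infinitely
        far (no valid match).
        """
        disparity = np.asarray(disparity, dtype=np.float64)
        depth = np.full_like(disparity, np.inf)
        valid = disparity > 0
        depth[valid] = focal_px * baseline_m / disparity[valid]
        return depth

    # Example: a matched feature with 10 px disparity, f = 500 px,
    # baseline = 0.2 m, gives a depth of 500 * 0.2 / 10 = 10 m.
    depth = depth_from_disparity([[10.0, 0.0]], focal_px=500.0, baseline_m=0.2)
    ```

    In practice the disparity values would come from the descriptor-matched feature points produced by the network, and the depth map would be resampled to the resolution discussed in the abstract.
    
    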
    Abstract document

    IAC-19,B2,IP,6,x49875.brief.pdf

    Manuscript document

    (absent)