Multi-Sensor Fusion for Autonomous Positioning of Robots in Deep Space

    Paper number

    IAC-21,B3,6-A5.3,11,x65463

    Author

    Ms. Zipei Shuai, China, University of Electronic Science and Technology of China (UESTC)

    Coauthor

    Prof. Hongyang Yu, China, University of Electronic Science and Technology of China (UESTC)

    Year

    2021

    Abstract

    Achieving more accurate autonomous positioning of mobile robots is the basis for exploring new applications in space. To localize an on-board mobile robot accurately and autonomously in space, a positioning method suited to the space environment must be selected from existing approaches and improved. Many algorithms for autonomous robot positioning have been proposed for ground environments; however, they often suffer from high computational complexity, and some of the underlying technologies, such as geomagnetic positioning, cannot be used in space. Positioning a robot inside a space station is an indoor positioning problem, so satellite navigation is also unavailable. To obtain a more robust and efficient autonomous localization algorithm for space mobile robots, this paper proposes an algorithm that localizes a mobile robot within a map by fusing a monocular camera with LiDAR (Light Detection and Ranging). The algorithm makes full use of the information provided by a deep learning model and the laser point cloud. The monocular camera is used to create the training data set: first, the 3D point cloud map of the known scene is projected onto a grid map, appropriate coordinate axes are defined for the grid map, and the precise coordinates of each grid cell are determined. Next, the monocular camera captures a number of images of the scene, and each image is assigned to a grid cell, so every image has specific coordinates associated with it. After the training set is established, a deep learning algorithm is used to train a model that determines, from a two-dimensional image, the coordinates of the camera location, i.e., the position of the mobile robot. Finally, the estimated coordinates are corrected using the robot's LiDAR information to obtain a more accurate position. The proposed localization algorithm, which fuses a vision sensor with LiDAR, improves on existing single-sensor localization algorithms by combining an existing deep learning model with the information provided by LiDAR. It improves the positioning accuracy and working efficiency of the space mobile robot, provides a stronger guarantee for navigation and other operations, and promotes the development of the robotics industry and space exploration.

    Abstract document

    IAC-21,B3,6-A5.3,11,x65463.brief.pdf

    Manuscript document

    (absent)
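
    Implementation sketch

    The abstract above outlines a concrete pipeline: project the known 3D point cloud map onto a grid, label monocular images with the coordinates of the grid cell they were taken in, train a deep learning model to regress the camera position from an image, and correct that estimate with LiDAR. The Python sketch below is not the authors' code; the grid resolution, network layout, and variance-weighted fusion step are illustrative assumptions standing in for details the abstract does not specify.

    import numpy as np
    import torch
    from torch import nn

    GRID_RES = 0.25  # metres per grid cell (assumed resolution)

    def point_cloud_to_grid(points_xyz, grid_res=GRID_RES):
        """Project a 3D point cloud map onto the x-y plane as an occupancy grid."""
        xy = points_xyz[:, :2]
        origin = xy.min(axis=0)                       # grid origin in the map frame
        cells = np.floor((xy - origin) / grid_res).astype(int)
        grid = np.zeros(cells.max(axis=0) + 1, dtype=bool)
        grid[cells[:, 0], cells[:, 1]] = True
        return grid, origin

    def grid_label(camera_xy, origin, grid_res=GRID_RES):
        """Label an image with the centre coordinates of the grid cell it was taken in."""
        cell = np.floor((camera_xy - origin) / grid_res)
        return origin + (cell + 0.5) * grid_res

    class PoseRegressor(nn.Module):
        """Small CNN that maps an RGB image to an (x, y) position in the map frame."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2),                     # regressed (x, y) coordinates
            )

        def forward(self, image):
            return self.net(image)

    def fuse_with_lidar(cnn_xy, lidar_xy, cnn_var=0.25, lidar_var=0.05):
        """Correct the CNN estimate with a LiDAR position fix (e.g. from scan
        matching against the known map) by a variance-weighted average; the
        variances here are assumptions, not values from the paper."""
        w = lidar_var / (cnn_var + lidar_var)
        return w * cnn_xy + (1.0 - w) * lidar_xy

    The fusion step is written as a simple variance-weighted average because the abstract only states that the LiDAR information corrects the camera-based estimate; a Kalman-style filter or scan-matching correction would slot into the same place.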