Experimental Validation of Synthetic Training Set for Deep Learning Vision-Based Navigation Systems for Lunar Landing
- Paper number
IAC-20,A3,2C,23,x60183
- Author
Mr. Stefano Silvestrini, Italy, Politecnico di Milano
- Coauthor
Dr. Paolo Lunghi, Italy, Politecnico di Milano
- Coauthor
Ms. Margherita Piccinin, Italy, Politecnico di Milano
- Coauthor
Mr. Giovanni Zanotti, Italy, Politecnico di Milano
- Coauthor
Prof. Michèle Lavagna, Italy, Politecnico di Milano
- Year
2020
- Abstract
A vision-based navigation system that uses artificial intelligence (AI) to solve the pinpoint lunar landing task is being developed by the ASTRA research team at Politecnico di Milano. The reference Moon landing scenario consists of the spacecraft descent over the South Pole from an altitude of 100 km down to 3 km; a 2D planar Moon landing is taken as reference. The AI system for landing uses a Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) architecture coupled with a navigation filter. The deep network is trained with a supervised learning approach, since in Moon landing scenarios knowledge of the target landing location can be exploited. The performance of the AI-based navigation algorithm derives from how the training is performed; for this reason, it is critical to set up a coherent dataset for AI-system training. On one hand, synthetic images compensate for the lack of real data; on the other hand, only real images include all the characteristics needed for the correct functioning of the AI system. This paper presents the dataset generation strategy for both training and validation, which combines synthetic images with data acquired in the ARGOS robotic facility at Politecnico di Milano. For synthetic image generation, the PANGU software is employed to create the images fed to the network. The work logic for image generation is as follows: a trajectory simulator derives the camera pose and illumination conditions, and a shape model is then employed to create the synthetic data. The image sequence is augmented with traditional data augmentation techniques, and the resulting images cover a variable range of illumination and surface visibility conditions. Furthermore, complex and realistic image generation features and noise sources can be included, such as field of view, aperture, exposure time, distortion and dark current noise. Such images are characterized and verified against the experimentally acquired ground truth. In ARGOS, the navigation camera is mounted on the end effector of a 6-DoF robotic arm; the arm is controlled so that its tip follows a predefined trajectory that reproduces the desired spacecraft dynamics in proximity of a target. The illumination system is composed of two sets of LEDs, which can be manually oriented to change the environmental conditions with respect to the target landing location. ARGOS is equipped with a lunar terrain diorama for planetary landing simulation.
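The abstract names a CNN-LSTM architecture coupled with a navigation filter but gives no layer details. The following is a minimal sketch in PyTorch, assuming an illustrative encoder size and a planar state output (position and velocity in the 2D descent plane); it is not the authors' actual network, and all dimensions are placeholders.

```python
# Minimal CNN-LSTM sketch (assumed architecture, not the paper's exact network):
# a small CNN encodes each descent frame, an LSTM fuses the sequence, and a
# linear head regresses an assumed planar state [x, z, vx, vz].
import torch
import torch.nn as nn

class CnnLstmNav(nn.Module):
    def __init__(self, state_dim=4, hidden_size=128):
        super().__init__()
        # CNN feature extractor applied to each frame independently
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),  # -> 32 * 4 * 4 = 512 features per frame
        )
        # LSTM provides the temporal context over the image sequence
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, state_dim)

    def forward(self, frames):
        # frames: (batch, seq_len, 1, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # state estimate at the last frame

# Supervised training step against ground-truth states from the simulator
model = CnnLstmNav()
frames = torch.randn(2, 8, 1, 128, 128)   # dummy 8-frame image sequences
truth = torch.randn(2, 4)                 # dummy ground-truth planar states
loss = nn.functional.mse_loss(model(frames), truth)
loss.backward()
```

In an actual landing pipeline, the network output would feed the navigation filter as a measurement rather than being used directly.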
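As a hedged illustration of the stated work logic (trajectory simulator, then camera pose and illumination, then rendering with a shape model), the sketch below generates per-frame camera poses and Sun elevations along a planar 100 km to 3 km descent. The linear altitude profile, pitch law, and all parameter values are hypothetical, and PANGU's actual interface is not reproduced; each tuple would be handed to the external scene generator as a rendering request.

```python
# Illustrative pose/illumination generator for synthetic image requests
# (assumed trajectory model; not the PANGU interface itself).
import math

def planar_descent(h0_km=100.0, hf_km=3.0, n_frames=200,
                   downrange_km=500.0, sun_elev0_deg=10.0):
    """Yield (frame, camera position [km], pitch [rad], Sun elevation [deg])."""
    for k in range(n_frames):
        s = k / (n_frames - 1)                  # normalized descent progress
        alt = h0_km + s * (hf_km - h0_km)       # linear altitude profile (assumed)
        x = s * downrange_km                    # downrange position in the plane
        pitch = math.radians(-90.0 + 60.0 * s)  # camera tilts toward the surface
        sun_elev = sun_elev0_deg + 2.0 * s      # slow illumination change
        yield k, (x, 0.0, alt), pitch, sun_elev

# Each tuple defines one rendering request for the external scene generator.
for frame, pos, pitch, sun in planar_descent(n_frames=5):
    print(f"frame {frame}: pos={pos}, pitch={pitch:.3f} rad, sun={sun:.1f} deg")
```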
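The camera effects listed in the abstract (exposure time, dark current noise, and similar) can be emulated during augmentation. Below is a minimal NumPy sketch with an assumed simple sensor model in electron units and illustrative noise magnitudes; it is not the paper's actual augmentation pipeline.

```python
# Hedged augmentation sketch: exposure jitter plus shot, dark current, and
# read noise on a clean rendered frame (all magnitudes are placeholders).
import numpy as np

def augment_frame(img, rng, exposure_jitter=0.2, dark_current_e=50.0,
                  read_noise_e=5.0):
    """img: float32 array in electrons; returns a noisy augmented copy."""
    out = img * rng.uniform(1.0 - exposure_jitter, 1.0 + exposure_jitter)
    out = rng.poisson(np.clip(out, 0, None)).astype(np.float32)  # shot noise
    out += rng.poisson(dark_current_e, size=out.shape)           # dark current
    out += rng.normal(0.0, read_noise_e, size=out.shape)         # read noise
    return np.clip(out, 0, None)

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1000, size=(128, 128)).astype(np.float32)
noisy = augment_frame(clean, rng)
```

Geometric effects such as field of view, aperture, and lens distortion would be applied at render time or with a separate camera model rather than by this per-pixel noise step.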
- Abstract document
- Manuscript document
IAC-20,A3,2C,23,x60183.pdf (authorized access only).
To get the manuscript, please contact IAF Secretariat.