Artificial Intelligence Techniques in Autonomous Vision-Based Navigation System for Lunar Landing
- Paper number
IAC-20,C1,2,1,x60080
- Author
Mr. Stefano Silvestrini, Italy, Politecnico di Milano
- Coauthor
Dr. Paolo Lunghi, Italy, Politecnico di Milano
- Coauthor
Ms. Margherita Piccinin, Italy, Politecnico di Milano
- Coauthor
Mr. Giovanni Zanotti, Italy, Politecnico di Milano
- Coauthor
Prof. Michèle Lavagna, Italy, Politecnico di Milano
- Year
2020
- Abstract
Traditional vision-based relative navigation algorithms are highly affected by non-nominal conditions, such as adverse illumination and environmental uncertainties. Thanks to their outstanding generalization capability and flexibility, deep neural networks (and AI algorithms in general) are excellent candidates to overcome the aforementioned shortcomings of navigation algorithms. The paper presents a vision-based navigation system that uses AI to solve the task of pinpoint landing on the Moon. The Moon landing scenario consists of the spacecraft descent to the South Pole, from the parking orbit to the powered descent phase. A 2D planar Moon landing is taken as reference; nevertheless, the approach is easily extensible to a 3D scenario. The presented architecture is based on a Convolutional Neural Network (CNN) coupled with a Recurrent Neural Network, trained with a supervised learning approach. The CNN extracts features that are then processed by a fully connected layer, which performs a regression and directly outputs the lander position and velocity. As mentioned, the regression task is executed by a recurrent neural network; in particular, a Long Short-Term Memory (LSTM) network is employed for its superior performance and its ability to mitigate the vanishing gradient issue during training. The training dataset includes images with different illumination and surface viewing conditions. The supervised learning approach is preferred because knowledge of the landing area can be exploited in the pinpoint Moon landing scenario: the CNN can be trained with an appropriate dataset of synthetic images of the landing area at different relative poses and illumination conditions. CNN-LSTM architectures have demonstrated excellent performance and are well established for image processing and model prediction in non-space applications. The use of a recurrent network brings the additional advantage of exploiting time-series information, which also enables estimation of the lander velocity. The AI system is coupled with a navigation filter, which refines the estimate and performs sensor fusion with other measurement sources. The developed AI-based algorithm is deployed on representative hardware: the hardware suite comprises a VPU board, in charge of executing the supporting navigation algorithm, coupled with an FPGA board, which executes the trained network. The AI system is tested using both synthetic images, distinct from the training set, and images generated with real hardware in the Politecnico di Milano - ASTRA robotics laboratory under different surface viewing conditions. The performance of the AI system is compared to that of classical algorithms.
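The CNN-LSTM regression pipeline described above can be illustrated in code. The following is a minimal, hypothetical sketch, not the authors' implementation: it assumes grayscale camera frames and a 2D planar lander state [x, z, vx, vz], and all layer sizes, the image resolution, and the training hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's architecture) of a
# CNN-LSTM regressor that maps an image sequence to the lander state.
import torch
import torch.nn as nn

class CNNLSTMNavigator(nn.Module):
    def __init__(self, hidden_size=128, state_dim=4):
        super().__init__()
        # Per-frame feature extractor (assumed layer sizes).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # -> (64, 4, 4)
            nn.Flatten(),              # -> 1024 features per frame
        )
        # LSTM over the frame sequence captures temporal information,
        # which is what allows the velocity to be inferred from consecutive images.
        self.lstm = nn.LSTM(input_size=1024, hidden_size=hidden_size, batch_first=True)
        # Fully connected head regresses the lander state directly.
        self.head = nn.Linear(hidden_size, state_dim)

    def forward(self, frames):
        # frames: (batch, seq_len, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)          # state estimate at every time step

# Supervised training step on synthetic images with known poses (placeholder data).
model = CNNLSTMNavigator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 10, 1, 128, 128)   # stand-in for rendered landing-site frames
true_states = torch.randn(8, 10, 4)        # stand-in for ground-truth [x, z, vx, vz]
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(images), true_states)
loss.backward()
optimizer.step()
```

In the pipeline described in the abstract, the network output would then feed a navigation filter for refinement and sensor fusion with other measurement sources; that stage is not shown in this sketch.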
- Abstract document
- Manuscript document
IAC-20,C1,2,1,x60080.pdf (authorized access only).
To get the manuscript, please contact the IAF Secretariat.