Artificial satellites substructures identification by automatic input image segmentation.

    Paper number

    IAC-17,D1,3,3,x40910

    Author

    Mr. Marco Ciarambino, Politecnico di Milano, Italy

    Coauthor

    Prof. Michèle Lavagna, Politecnico di Milano, Italy

    Year

    2017

    Abstract
    The knowledge of the composition of a particular object in space is crucial for a wide variety of missions. From satellite servicing to the estimation of the inertia properties of an uncooperative object, the capability to recognize on board the components of an orbiting object while flying in its proximity greatly expands the possibilities to interact effectively with that object.
    Currently, physical interaction with any kind of orbiting object requires complete a priori knowledge of its geometry and mass distribution, which restricts the range of robotic services practicable in space. To extend the applicability and feasibility of the services offered in orbit, software that recognizes the various components of an artificial satellite by means of a camera is under development at Politecnico di Milano - Department of Aerospace Science and Technology (PoliMi - DAER). The ultimate objective of the research is to obtain a first coherent estimate of the materials and the geometrical composition (and thus the mass) of the known or unknown artificial satellite on which the service-provider probe needs to operate. This information makes it possible to approximate the inertia matrix of the target system, to identify the most appropriate element of the target to grasp, to unambiguously detect the berth to which the service tool should connect, and so on.

    The recognition software relies on convolutional neural networks for image processing and segmentation. It acquires a single greyscale image and outputs a map with the corresponding classification for each image pixel. Multiple labels specify the different parts/materials of the object under examination. The convolutional neural network is trained and tested with photo-realistic synthetic images generated from the spacecraft CAD models of NASA 3D Resources (https://nasa3d.arc.nasa.gov/models), rendered over a range of sunlight conditions and satellite attitudes. Tests with actual images are performed as well. An extension of the system to include additional sensors, such as thermal cameras, alongside the visible-spectrum camera is already planned.

    In the paper, details about the dataset generation and the neural network training are presented. Test results from the calibration and verification campaign, for both synthetic and actual data, are analyzed and critically reported.
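
    As a rough illustration of the pipeline described above (a single greyscale image in, a per-pixel label map out), the following is a minimal sketch of a fully convolutional encoder-decoder in PyTorch. The architecture, layer sizes, class count, and label names are assumptions made for illustration; the abstract does not specify the network actually used.

        import torch
        import torch.nn as nn

        class SegmentationSketch(nn.Module):
            """Hypothetical encoder-decoder mapping a 1-channel greyscale
            image to per-pixel class logits; the paper's actual network
            is not described in the abstract."""
            def __init__(self, num_classes: int = 5):
                # Assumed labels: e.g. bus, solar panel, antenna, thruster, background.
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                    nn.ConvTranspose2d(32, num_classes, 2, stride=2),
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Logits of shape (N, num_classes, H, W).
                return self.decoder(self.encoder(x))

        # One 256x256 greyscale frame in, one per-pixel part/material map out.
        model = SegmentationSketch()
        image = torch.rand(1, 1, 256, 256)
        labels = model(image).argmax(dim=1)  # shape (1, 256, 256)

    Whatever network the paper actually uses may be far deeper; the sketch only captures the stated input/output contract: one greyscale image tensor in, one classification map of the same spatial size out.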
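
    The abstract also states that the synthetic training images are rendered over different sunlight conditions and satellite attitudes. A minimal sketch of how such variation might be sampled is shown below, assuming uniform distributions over sun directions and attitudes; the paper's actual rendering setup and parameter ranges are not given in the abstract.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_render_params():
            """Draw one hypothetical rendering configuration: a sun direction
            and a satellite attitude. Distributions are illustrative assumptions."""
            # Sun direction: uniform on the unit sphere.
            v = rng.normal(size=3)
            sun_dir = v / np.linalg.norm(v)
            # Attitude: unit quaternion (x, y, z, w), uniform on the 3-sphere,
            # i.e. a uniformly random rotation.
            q = rng.normal(size=4)
            attitude = q / np.linalg.norm(q)
            return sun_dir, attitude

        # Each sampled pair would drive one photo-realistic render of a CAD model.
        sun_dir, attitude = sample_render_params()

    Normalizing a vector of independent Gaussians is a standard way to draw uniformly from a sphere, which is why it is used here for both the sun direction and the attitude quaternion.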
    Abstract document

    IAC-17,D1,3,3,x40910.brief.pdf

    Manuscript document

    (absent)