Detecting objects in the geostationary ring from image sequences
- Paper number
IAC-21,A6,1,7,x62698
- Author
Mr. Lukasz Tulczyjew, Poland, KP Labs
- Coauthor
Mr. Michal Myller, Poland, KP Labs
- Coauthor
Mr. Pawel Sanocki, Poland, KP Labs
- Coauthor
Mr. Adam Mika, Poland, KP Labs
- Coauthor
Mr. Szymon Piechaczek, Poland, KP Labs
- Coauthor
Dr. Daniel Kostrzewa, Poland, KP Labs
- Coauthor
Dr. Michal Kawulok, Poland, KP Labs
- Coauthor
Dr. Jakub Nalepa, Poland, KP Labs
- Year
2021
- Abstract
Object detection and localization are important tasks in image analysis and, due to their practical applications, have been thoroughly researched in the literature. A plethora of both classical and machine learning methods exists for detecting objects in different image modalities, also in the context of detecting targets, such as satellites or space debris, in the geostationary ring. In this paper, we present our approach to this task, focusing on image sequences captured by a low-cost ground-based telescope. The dataset, provided by the European Space Agency (ESA) in the framework of the Spot GEO Challenge, consists of 6400 grayscale image sequences, each containing five consecutive frames of an unknown portion of the sky. Since the images are captured from the ground, each pixel corresponds to an arc length of approximately 800 m at GEO; hence, the target objects are no larger than one pixel in area. Given that the exposure time may be long and that atmospheric distortions affect the acquisition process (alongside other real-life factors, such as sensor noise and weather effects), the moving targets are commonly “blurred” over several neighboring pixels. Our approach exploits both classical image analysis and deep learning techniques in a hybrid processing pipeline. The algorithm is decomposed into three principal parts: (i) extraction of object candidates, in which we reduce the number of potential objects by employing a battery of techniques such as image filtering and thresholding, (ii) classification of the extracted candidates into object/background categories, performed by a convolutional neural network (CNN), and (iii) a line detection algorithm, which benefits from the information captured within the entire sequence of images to establish the trajectories of moving objects.
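The candidate-extraction stage (i) could be sketched as follows; this is a minimal illustration, not the authors' implementation, assuming a robust background estimate (global median plus MAD-based noise level) and a simple brightness threshold in place of the full battery of filtering techniques described above:

```python
import numpy as np

def extract_candidates(frame, n_sigma=5.0):
    """Illustrative candidate extractor: threshold pixels that are
    significantly brighter than the estimated sky background."""
    # Estimate the sky background as the global median; estimate the
    # noise level via the median absolute deviation (MAD), which is
    # robust to the few bright target pixels.
    background = np.median(frame)
    mad = np.median(np.abs(frame - background))
    sigma = 1.4826 * mad  # MAD-to-std conversion for Gaussian noise
    # Keep pixels more than n_sigma above the background.
    mask = frame > background + n_sigma * sigma
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))
```

Each returned (x, y) coordinate would then be cut out as an image patch and passed to the CNN classifier in stage (ii).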
Our CNN comprises a feature extractor with three two-dimensional convolutional layers, followed by a classifier with three fully connected layers. The CNN’s input is a two-dimensional image patch whose central pixel is the object of interest; for each such patch, we estimate the probability that the sample is a target. To validate the performance of our pipeline, we use the F1 score and the mean squared error. Finally, with this approach, we ranked among the 10 best solutions in the ESA Spot GEO Challenge (as the Barebones team).
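The abstract specifies only the layer counts (three convolutional, three fully connected), so the following PyTorch sketch fills in the remaining details with assumptions: the patch size, channel widths, kernel sizes, and hidden-layer sizes are all hypothetical choices, not the values used by the authors:

```python
import torch
import torch.nn as nn

class CandidateClassifier(nn.Module):
    """Sketch of the described CNN: a three-layer 2-D convolutional
    feature extractor followed by a three-layer fully connected
    classifier producing a target/background probability."""

    def __init__(self, patch=16):
        super().__init__()
        # Feature extractor: three 2-D convolutional layers
        # (channel widths and 3x3 kernels are assumptions).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Classifier: three fully connected layers ending in a
        # single logit for the object/background decision.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * patch * patch, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        # Probability of the patch's central pixel being a target.
        return torch.sigmoid(self.classifier(self.features(x)))
```

In such a setup, patches extracted around each candidate pixel would be batched and scored, and only high-probability detections would be forwarded to the line-detection stage.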
- Abstract document
- Manuscript document
IAC-21,A6,1,7,x62698.pdf (authorized access only).
To get the manuscript, please contact IAF Secretariat.