Title

    A Visual Identification Method of Orbital Fractionated Objects

    Paper number

    IAC-17,D1,6,9,x40703

    Year

    2017

    Abstract
    With the alarming rise in the number of spacecraft orbiting the Earth, there is a critical need to enable future satellites to perform autonomous obstacle detection and avoidance. In this paper, we propose a vision-based object detection technique to identify nearby spacecraft using structural cues.
    
    Depending on the customer’s requirements and the objectives of the mission, the shape and dimensions of a spacecraft will likely differ from those of other spacecraft. Thus, for the initial visual detection phase, the unique contour of each fractionated object (i.e. the satellite), extracted with a Canny edge detector, is used to visually identify and differentiate the subject.
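    The contour-extraction step described above can be sketched with OpenCV, which provides both the Canny detector and contour extraction. The synthetic frame, image size, and Canny thresholds below are illustrative assumptions, not the paper's values:

    ```python
    import cv2
    import numpy as np

    # Synthetic frame: a bright rectangular "spacecraft body" on a dark
    # background, standing in for a real camera frame.
    frame = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(frame, (100, 80), (220, 160), 255, -1)

    # Canny edge map; the 50/150 hysteresis thresholds are placeholders.
    edges = cv2.Canny(frame, 50, 150)

    # Extract outer contours from the edge map; in the detection phase these
    # contour shapes would be compared against the target's known silhouette.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ```

    In a real pipeline the contour geometry (e.g. via shape moments or matching) would then be scored against the expected outline of the subject satellite.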
    
    However, when the spacecraft is part of a constellation, differentiating it from its twin satellites requires a more detailed visual algorithm that further analyzes the surfaces of the subject. Furthermore, to reduce heat loss caused by thermal radiation, spacecraft are loosely wrapped in multi-layer insulation (MLI) blankets, which offer very little textural information for visual detection of the spacecraft surfaces. These constraints, together with the variety of illumination conditions, would reduce the effectiveness of detection in most cases. However, using an Oriented FAST and Rotated BRIEF (ORB) feature detector, which is rotation-invariant, the subject can be approached from any direction and at any angle while still being detected in real time.
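    The rotation-invariance property that makes ORB suitable here can be illustrated with OpenCV: descriptors computed on a view and on a rotated copy of the same view remain comparable. The random-texture image and feature count below are illustrative assumptions:

    ```python
    import cv2
    import numpy as np

    # Synthetic textured patch standing in for an imaged spacecraft surface.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (240, 320)).astype(np.uint8)

    # ORB assigns an orientation to each FAST keypoint before computing the
    # BRIEF descriptor, which is what makes the descriptors rotation-invariant.
    orb = cv2.ORB_create(nfeatures=500)
    kp, des = orb.detectAndCompute(img, None)

    # The same surface seen from a rotated viewpoint still yields descriptors
    # that can be matched against the first set.
    rot = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    kp_rot, des_rot = orb.detectAndCompute(rot, None)
    ```

    This is why an approach trajectory from an arbitrary angle does not, by itself, defeat the detector.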
    
    Early detection experiments used a total of 3000 positive and negative sample images of spacecraft in orbit, captured from various angles and under various lighting conditions. The vision algorithm achieves 85% detection accuracy with a 900 ms overall execution runtime. Due to limited available payload and processing power, the simulated tests were performed on a custom Android application using a Sony Xperia Z1 Compact smartphone for imaging and feature detection in real time.
    
    To further enhance the detector’s accuracy, we analyzed the use of solar cells with different textures to improve feature matching. By using non-reflective, highly textured solar panels, the vision algorithm can analyze a larger surface area of the satellite and, in turn, perform feature detection more accurately and efficiently. Preliminary feature detection results using highly textured Tesla solar cells increased the detection accuracy to 95% with a 500 ms overall execution runtime.
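    The feature-matching step that benefits from textured panels can be sketched as follows. ORB produces binary descriptors, so Hamming-distance brute-force matching with cross-checking is the standard pairing; the synthetic "panel" texture and second rotated view are illustrative assumptions:

    ```python
    import cv2
    import numpy as np

    # Two views of the same hypothetical textured panel; the rotated copy
    # stands in for an approach from a different angle.
    rng = np.random.default_rng(1)
    panel = rng.integers(0, 256, (200, 200)).astype(np.uint8)
    view2 = cv2.rotate(panel, cv2.ROTATE_90_CLOCKWISE)

    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(panel, None)
    kp2, des2 = orb.detectAndCompute(view2, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck=True keeps
    # only mutually-best matches, trading recall for match quality.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    ```

    A richer panel texture yields more distinctive keypoints, which is the mechanism behind the accuracy gain reported above.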
    Abstract document

    IAC-17,D1,6,9,x40703.brief.pdf

    Manuscript document

    (absent)