Deep Space Autonomous Navigation and Exploration System

    Paper number

    IAC-13,V,3-B2.8,2,x20311

    Author

    Mr. Anand Patil, Basaveshwar Engineering College, India

    Year

    2013

    Abstract

    Deep space navigation is a major challenge for a spacecraft because conventional navigation is unreliable and costly. In deep space missions the distance between the Earth and the spacecraft is very large, which makes controlling the spacecraft from Earth difficult: the bandwidth available for communication is limited, transmission latency is high, and opportunities for uplinking commands from Earth are scarce. In this paper, a complete system for autonomous navigation is described and analyzed. It operates in two stages: Stage 1 involves image processing, trajectory determination, and maneuver computation; Stage 2 involves a newly developed vision-based autonomous navigation system for increasing accuracy in deep space missions.

    In Stage 1, image processing is performed on star images taken with a CCD camera. The images are processed to identify the target object, which involves star pattern identification, measurement of individual star magnitudes, and locating the target object within the star cluster. Once the image is processed and the target is identified, orbit determination proceeds in three steps: first, the spatial alignment of consecutive data frames; second, the detection of loop closures; and third, the globally consistent alignment of the complete data sequence.

    Alignment between frames is computed by jointly optimizing over both appearance and pattern matching. The star pattern identification algorithm iterates between associating each point in one frame with the closest point in the other frame and computing the rigid transformation that minimizes the distance between the point pairs. In parallel, loop closure detection uses the sparse feature points to match the current frame against previous observations, taking spatial constraints into account. Fusing these two steps, the sparse visual features from the two frames are associated with their corresponding values to generate feature points of the target image, from which the vehicle's complete position and velocity can be obtained.
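
    The closest-point alternation the abstract describes for star pattern identification is, in effect, an iterative closest point (ICP) scheme: associate each point with its nearest neighbour in the other frame, fit the rigid transformation that minimizes the paired distances, and repeat. The sketch below is a minimal illustration of that loop under assumptions of this note, not of the paper: 2-D star centroids as input, Python with NumPy/SciPy, and an SVD-based (Kabsch) rigid fit.

    # Minimal ICP sketch (assumed Python/NumPy/SciPy; illustrative only).
    # Alternates closest-point association with a rigid (Kabsch) re-fit,
    # mirroring the iteration described in the abstract.
    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        # Least-squares rotation R and translation t mapping src onto dst.
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        return R, c_dst - R @ c_src

    def icp(src, dst, max_iters=50, tol=1e-9):
        # src, dst: (N, 2) and (M, 2) arrays of star centroids.
        tree = cKDTree(dst)
        R_tot, t_tot = np.eye(2), np.zeros(2)
        cur, prev_err = src.copy(), np.inf
        for _ in range(max_iters):
            dists, idx = tree.query(cur)     # associate closest points
            R, t = rigid_fit(cur, dst[idx])  # transform minimizing pair distances
            cur = cur @ R.T + t              # apply to row vectors
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
            err = dists.mean()
            if prev_err - err < tol:         # stop when error stabilizes
                break
            prev_err = err
        return R_tot, t_tot                  # frame-to-frame alignment

    In practice, the closest-point association for star fields could be made more robust by pruning candidate pairs on the measured star magnitudes the abstract mentions before the rigid fit, and the loop-closure step would reuse the same sparse features against earlier frames rather than only the previous one.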
    Abstract document

    IAC-13,V,3-B2.8,2,x20311.brief.pdf

    Manuscript document

    IAC-13,V,3-B2.8,2,x20311.pdf (authorized access only)

    To obtain the manuscript, please contact the IAF Secretariat.