Multiple-image super-resolution reconstruction using deep learning: A Sentinel-2 case study

    Paper number

    IAC-21,B1,4,8,x66091

    Author

    Dr. Michal Kawulok, Poland, KP Labs

    Coauthor

    Mr. Tomasz Tarasiewicz, Poland, KP Labs

    Coauthor

    Mr. Maciej Ziaja, Poland, KP Labs

    Coauthor

    Ms. Diana Tyrna, Poland, KP Labs

    Coauthor

    Dr. Daniel Kostrzewa, Poland, KP Labs

    Coauthor

    Dr. Jakub Nalepa, Poland, KP Labs

    Year

    2021

    Abstract
    Multiple-image super-resolution (SR) reconstruction consists in enhancing image spatial resolution based on multiple observations of the same area. Because the images captured at subsequent revisits of a satellite imaging sensor carry different portions of the underlying spatial information, appropriate information fusion can reconstruct high-resolution data that are close to the ground truth, which is important for most Earth observation scenarios. This is in contrast to the intensively explored problem of single-image SR, which can produce visually plausible outcomes even at high magnification factors, but has limited capability to recover the actual ground-truth information.
    
    Recently, several SR techniques underpinned with convolutional neural networks have emerged that reconstruct a high-resolution image from multiple images acquired with the PROBA-V sensor. PROBA-V captures images of different spatial resolution, namely at 100 m and 300 m ground sampling distance, which makes it relatively easy to collect sufficient amounts of real-world data for training deep models. Unfortunately, such a scenario is infeasible for most other satellites, including Sentinel-2, so the training data must be either simulated or acquired using imaging sensors of higher spatial resolution.
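    As a generic illustration only (not any of the specific architectures referred to above), the following sketch shows the basic idea behind CNN-based multiple-image SR: several co-registered low-resolution revisits are stacked along the channel axis, fused by a few convolutional layers, and upsampled with a sub-pixel (PixelShuffle) layer. The layer widths, the number of input images, and the x2 scale factor are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of CNN-based multiple-image SR (illustrative, not the paper's model).
import torch
import torch.nn as nn

class MultiImageSRNet(nn.Module):
    def __init__(self, num_images: int = 5, scale: int = 2, features: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            # The low-resolution revisits are stacked along the channel axis,
            # so the first convolution already fuses information across them.
            nn.Conv2d(num_images, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Produce scale**2 sub-pixel channels, then rearrange them into a
            # single high-resolution band with PixelShuffle.
            nn.Conv2d(features, scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_stack: torch.Tensor) -> torch.Tensor:
        # lr_stack: (batch, num_images, H, W) -> (batch, 1, scale*H, scale*W)
        return self.body(lr_stack)

model = MultiImageSRNet(num_images=5, scale=2)
lr_observations = torch.rand(1, 5, 48, 48)   # five revisits of one 48x48 tile
print(model(lr_observations).shape)          # torch.Size([1, 1, 96, 96])
```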
    
    In this paper, we present our end-to-end pipeline for enhancing multispectral images captured within the Sentinel-2 mission, and we demonstrate how we addressed the most important problems involved in deploying multiple-image SR in practice. First, we discuss the preparation of the training data, for which we consider three options: (i) simulating the low-resolution images, (ii) using transfer learning that exploits the PROBA-V images, and (iii) exploiting WorldView images as a high-resolution reference. Subsequently, we show how existing deep architectures can be adapted to process multispectral data and fully benefit from the correlations between the individual spectral bands. Finally, we address the problem of temporal changes within a set of images of the same region captured at different times.
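    To make option (i) concrete, the sketch below simulates several degraded observations of a single high-resolution band via random sub-pixel shifts, Gaussian blurring, decimation, and additive noise. The simulate_lr_series helper and every degradation parameter are hypothetical choices for illustration; they do not reproduce the degradation model used in the paper.

```python
# Illustrative sketch of simulating low-resolution training inputs (option i).
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def simulate_lr_series(hr_band: np.ndarray, num_images: int = 5,
                       scale: int = 2, sigma: float = 1.0,
                       noise_std: float = 0.01, rng=None) -> list:
    """Create several degraded observations of one high-resolution band."""
    rng = np.random.default_rng() if rng is None else rng
    lr_images = []
    for _ in range(num_images):
        # Random sub-pixel shift so each simulated revisit samples the scene
        # slightly differently, which is what multiple-image SR exploits.
        shifted = shift(hr_band, rng.uniform(-1.0, 1.0, size=2),
                        order=3, mode="reflect")
        blurred = gaussian_filter(shifted, sigma=sigma)  # crude PSF approximation
        decimated = blurred[::scale, ::scale]            # downsample by the scale factor
        noisy = decimated + rng.normal(0.0, noise_std, decimated.shape)
        lr_images.append(noisy.astype(np.float32))
    return lr_images

# Example: build a training pair from a random stand-in "high-resolution" tile.
hr = np.random.rand(96, 96).astype(np.float32)
lr_series = simulate_lr_series(hr)
print(len(lr_series), lr_series[0].shape)  # 5 (48, 48)
```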
    
    We report the results of our experimental study, in which we assess the reconstruction quality on a qualitative and quantitative basis. Overall, we demonstrate that multiple-image SR can be successfully performed for Sentinel-2 images, resulting in a substantial information gain compared with interpolation and single-image SR. This broadens the possibilities of using Sentinel-2 in a variety of practical Earth observation scenarios; even more importantly, the presented methodology may help in exploiting multiple-image SR for enhancing images captured by other satellites.
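    For context on the quantitative side of such an assessment, a minimal PSNR computation is sketched below; the metrics and evaluation protocol actually used in the paper are not detailed in this abstract, so the comparison shown is purely hypothetical.

```python
# Minimal sketch of a quantitative comparison via PSNR (higher = closer to reference).
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((reference.astype(np.float64) -
                   reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Hypothetical example: compare an interpolation-like result with an SR-like result.
reference = np.random.rand(96, 96)
interpolated = reference + np.random.normal(0, 0.05, reference.shape)
sr_output = reference + np.random.normal(0, 0.02, reference.shape)
print(f"interpolation: {psnr(reference, interpolated):.2f} dB, "
      f"SR: {psnr(reference, sr_output):.2f} dB")
```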
    Abstract document

    IAC-21,B1,4,8,x66091.brief.pdf

    Manuscript document

    IAC-21,B1,4,8,x66091.pdf (🔒 authorized access only).

    To obtain the manuscript, please contact the IAF Secretariat.