  • Reducing annotation efforts in the case of adding segmentation to a deep change detection neural network

    Paper number

    IAC-23,B1,IP,42,x77464

    Author

    Dr. Hyojung Ahn, Korea Aerospace Research Institute (KARI), Korea, Republic of

    Year

    2023

    Abstract
    In recent years, numerous studies have been conducted to improve the performance of artificial-intelligence-based building change detection methods using data acquired by satellites or aircraft systems through remote sensing. Compared with traditional methods, deep learning techniques exhibit superior performance; however, they require a sufficient amount of quality pre-annotated data in the correct format, and preparing such data is often a time-consuming and laborious task. In particular, change detection by segmentation is a popular class of change detection methods that incorporates a detection branch into the workflow to improve the final change detection results. However, the building segmentation annotations required for supervised learning are often not included in openly available change detection datasets, and creating a full set of pixel-wise annotations is a major drawback of this approach. Therefore, in this study we propose an improved segmentation-based change detection method that overcomes the deficiency in segmentation annotations. Our method comprises three stages: label matching, building detection, and final change detection. In the label matching stage, each change detection annotation is matched with the building it represents in the corresponding image, creating a dataset that is usable by the segmentation model but has incomplete annotations. In the detection stage, our proposed focused information guided segmentation (FIGS) method and a region-of-interest (ROI) mask constructed from the novel greenness index provide prior information during training, which helps guide the model to train on accurately labeled regions. Finally, in the change detection stage, a pixel-wise change map is generated using pre-trained features from the detection stage.
Our proposed method is verified on an open-source dataset and found to be comparably effective to models trained on fully labeled datasets during the segmentation stage. We demonstrate the robustness of the proposed label matching process by comparing it against a correctly matched dataset, and we show that incorporating FIGS and the greenness index improves the performance of the segmentation model, and subsequently the change detection results, even under a significant shortage of annotated samples. Furthermore, beyond change detection tasks, the proposed method can be applied to other segmentation problems that use partially labeled datasets.
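As an illustration of the ROI-mask idea described in the abstract, the sketch below builds a binary region-of-interest mask by thresholding a greenness index. The paper's novel greenness index is not specified in the abstract, so this sketch substitutes the classic excess-green index (ExG = 2g − r − b on channel-normalized RGB) as a hypothetical stand-in; the function name and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def roi_mask_from_greenness(image, threshold=0.1):
    """Build a binary ROI mask from a greenness index.

    Stand-in sketch: uses the excess-green index ExG = 2g - r - b on
    channel-normalized RGB (NOT the paper's novel index, which the
    abstract does not define).

    image: float array of shape (H, W, 3), values in [0, 1].
    Returns a boolean (H, W) mask that is True where greenness is LOW,
    i.e. pixels less likely to be vegetation and more likely to belong
    to built-up regions of interest.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    total = r + g + b + 1e-8                 # avoid division by zero
    exg = (2 * g - r - b) / total            # normalized excess-green index
    return exg < threshold                   # low greenness -> candidate ROI

# Usage: the mask could restrict segmentation training to non-vegetated
# regions, in the spirit of the prior information the abstract describes.
img = np.random.rand(256, 256, 3).astype(np.float32)
mask = roi_mask_from_greenness(img)
```

A mask of this kind is cheap to compute per image pair and needs no annotations, which is what makes it usable as prior guidance when segmentation labels are incomplete.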
    Abstract document

    IAC-23,B1,IP,42,x77464.brief.pdf

    Manuscript document

    IAC-23,B1,IP,42,x77464.pdf (🔒 authorized access only).

    To obtain the manuscript, please contact the IAF Secretariat.