  • Convolutional neural network for SAR image classification of the Amazon Rainforest

    Paper number

    IAC-23,B1,IP,14,x79385

    Author

    Mr. Diogo Sens, Private, Brazil

    Year

    2023

    Abstract
    This work presents a convolutional neural network (CNN) model developed to classify Synthetic Aperture Radar (SAR) images around the city of São Félix do Xingu, Brazil, in order to differentiate deforested regions from preserved ones in the Amazon Rainforest. São Félix do Xingu is one of the Brazilian municipalities with the largest degraded area in the Amazon biome: in 2020, its deforested area reached 19,886.2 $km^2$, according to the Brazilian government. At the same time, the city borders national parks and indigenous territories, which creates a sharp contrast between impacted and preserved forest. We collected the SAR images from the European Space Agency (ESA) Copernicus platform. The satellite was Sentinel-1A (Interferometric Wide Swath mode, VH and VV polarizations). The date of observation was 07/05/2021. We used the SNAP software for preprocessing, which consisted of the following steps: splitting, orbit-file application, thermal-noise removal, calibration, debursting, multi-looking, speckle filtering, and terrain correction. The result was a set of raster files with two channels, one for each polarization of the backscattering coefficient ($\sigma_{VH}$ and $\sigma_{VV}$). The remaining processing was done in Python, using Jupyter Notebook. The first step was creating a third channel with the cross-polarization ratio ($\dfrac{\sigma_{VH}}{\sigma_{VV}}$) and converting the raster files to PNG images (the three channels became the RGB bands of the images). Because the backscattering coefficient behaves differently for superficial objects (i.e., exposed landscape) than for volumetric ones (i.e., forest canopy), the resulting RGB images highlighted the deforested regions against the preserved ones. The images were split into 1,872 smaller tiles, manually labeled, and separated into training (70\%), validation (15\%), and test (15\%) datasets.
    Using the training and validation datasets, we developed a convolutional neural network based on the VGG16 architecture (16 layers, comprising convolutional, pooling, flattening, and dense layers). The training comprised three scenarios: 2 labels (deforested, preserved), 3 labels (totally preserved, partially deforested, totally deforested), and 4 labels (totally preserved, partially preserved, partially deforested, and totally deforested), using 10, 20, and 40 epochs. The best scenario was 2 labels and 10 epochs (89.32\% accuracy). Adding more labels tended to underfit the model, and adding more epochs to overfit it.
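    The third-channel construction described above (a ratio band stacked with the two polarizations to form an RGB composite) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: the function name `to_rgb_composite`, the min-max normalization, and the `eps` guard against division by zero are assumptions of this sketch.

    ```python
    import numpy as np

    def to_rgb_composite(sigma_vh, sigma_vv, eps=1e-6):
        """Stack VH, VV, and the VH/VV ratio into a three-band uint8 array.

        sigma_vh, sigma_vv: 2-D float arrays of backscattering coefficients.
        Returns an (H, W, 3) uint8 array suitable for saving as a PNG.
        """
        ratio = sigma_vh / (sigma_vv + eps)            # cross-polarization ratio channel
        rgb = np.stack([sigma_vh, sigma_vv, ratio], axis=-1)
        # Per-channel min-max scaling to 0..255 (illustrative normalization choice)
        lo = rgb.min(axis=(0, 1), keepdims=True)
        hi = rgb.max(axis=(0, 1), keepdims=True)
        return ((rgb - lo) / (hi - lo + eps) * 255).astype(np.uint8)

    vh = np.random.rand(64, 64).astype(np.float32)
    vv = np.random.rand(64, 64).astype(np.float32) + 0.1
    img = to_rgb_composite(vh, vv)
    print(img.shape, img.dtype)   # (64, 64, 3) uint8
    ```

    Each tile produced this way carries both polarizations and their ratio, which is what makes surface/volume scattering differences visible as color contrasts.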
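    The tiling and the 70/15/15 split can likewise be sketched with NumPy. Again this is a hedged illustration: tile size, the shuffling seed, and the helper names `tile_image` and `split_dataset` are assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def tile_image(img, tile=64):
        """Cut an (H, W, C) image into non-overlapping tile x tile patches."""
        rows, cols = img.shape[0] // tile, img.shape[1] // tile
        patches = [img[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                   for i in range(rows) for j in range(cols)]
        return np.array(patches)

    def split_dataset(patches, seed=0):
        """Shuffle and split into 70% train, 15% validation, 15% test."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(patches))
        n_train = int(0.70 * len(patches))
        n_val = int(0.15 * len(patches))
        train = patches[idx[:n_train]]
        val = patches[idx[n_train:n_train + n_val]]
        test = patches[idx[n_train + n_val:]]
        return train, val, test

    img = np.zeros((640, 640, 3), dtype=np.uint8)
    patches = tile_image(img)                  # 100 patches of 64x64x3
    train, val, test = split_dataset(patches)
    print(len(train), len(val), len(test))     # 70 15 15
    ```

    The labeled patches would then feed the VGG16-based classifier; shuffling before the split keeps the three subsets drawn from the same spatial distribution.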
    Abstract document

    IAC-23,B1,IP,14,x79385.brief.pdf

    Manuscript document

    (absent)