
AI Case Study

Facebook and CrowdAI researchers develop a machine learning method to identify geographic areas heavily affected by natural disasters

Researchers from Facebook and CrowdAI develop a CNN-based method to segment satellite images of disaster-affected areas, comparing man-made structures in the before and after images and calculating the differences. When tested on two natural disasters, the Hurricane Harvey flooding and the Santa Rosa fire, the method achieves a strong correlation between the ground truth of destruction for these events and the CNN-derived estimates.

Industry

Technology

Internet Services Consumer

Project Overview

The researchers "propose to identify disaster-impacted areas by comparing the change in man-made features extracted from satellite imagery. Using a pre-trained semantic segmentation model we extract man-made features (e.g. roads, buildings) on the before and after imagery of the disaster affected area. Then, we compute the difference of the two segmentation masks to identify
change. As our CNN model detects man-made features before disaster but fails to detect some of them after disaster, we can infer areas of maximal impact using change detection. We propose a metric to quantify this impact called Disaster Impact Index (DII)."
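
A minimal sketch of this change-detection step, assuming the DII for an image tile is simply the fraction of man-made pixels detected before the disaster that are no longer detected afterwards (the paper's exact DII formula is not reproduced here, and the function name disaster_impact_index is hypothetical):

import numpy as np

def disaster_impact_index(mask_before: np.ndarray, mask_after: np.ndarray) -> float:
    # Binary masks (1 = man-made feature such as a road or building)
    # produced by the same segmentation CNN on pre- and post-disaster
    # imagery of the same tile.
    lost = np.logical_and(mask_before == 1, mask_after == 0)
    detected_before = mask_before.sum()
    if detected_before == 0:
        return 0.0  # no man-made features to lose in this tile
    # Fraction of previously detected man-made pixels that disappeared.
    return lost.sum() / detected_before

Tiles with a higher index would then be flagged as areas of maximal impact.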

Reported Results

"In order to validate our results we identified two natural disasters: Hurricane Harvey flood and Santa Rosa fire. Using the human
annotated dataset of actual disaster impacted areas for the Harvey flood and the FRAP dataset for the Santa Rosa fire, we are able to prove a positive correlation between DII and actual disaster impacted areas." The high correlation between the CNN-based detection and the ground truth indicates the method is promising for further development and implementation.
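
As a rough illustration of how such a validation could look in code, the sketch below correlates per-tile DII values with a ground-truth impact measure; the numbers are placeholders, not the paper's data:

import numpy as np
from scipy.stats import pearsonr

# Placeholder per-tile values; the paper's ground truth comes from
# human-annotated flood extents (Harvey) and the FRAP fire dataset (Santa Rosa).
dii_per_tile = np.array([0.05, 0.40, 0.75, 0.10, 0.60])
ground_truth_impact = np.array([0.00, 0.35, 0.80, 0.15, 0.55])

r, p = pearsonr(dii_per_tile, ground_truth_impact)
print(f"Pearson correlation: {r:.2f} (p = {p:.3f})")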

Technology

"For semantic segmentation model, we use a Residual Inception Skip network following [8]. This model is a convolutional encoder-decoder architecture with the inception modules instead of standard convolution blocks. The inception models were originally proposed in [13] but with asymmetric convolutions. For example, a 3 × 3 convolution is replaced with a 3 × 1 convolution, then batch norm followed by 1 × 3 convolution. This is useful since it reduces the number of parameters and gives similar performance. All weights are initialized with the He norm [11] and all convolutional layers are followed by batch normalization layers which in turn are followed by activation layers. Following the architecture proposed in [8], we also used leaky ReLUs with a slope of −0.1x as our activation function. We used a continuous version of the Dice score as our loss function."

Function

Risk

Audit

Background

"The use of satellite imagery has become increasingly popular for disaster monitoring and response. After a disaster, it is important to prioritize rescue operations, disaster response and coordinate relief efforts. These have to be carried out in a fast and efficient manner since resources are often limited in disaster affected areas
and it’s extremely important to identify the areas of maximum damage. However, most of the existing disaster mapping efforts are manual which is time-consuming and often leads to erroneous results."

Benefits

Data

"We trained our model by combining two publicly available high-resolution satellite imagery semantic segmentation datasets, namely Spacenet [4] and Deepglobe [7]. Spacenet is a corpus of commercial satellite imagery and labeled training data which consists of building footprints for various cities around the world at resolutions ranging from 30-50 cm/pixel. The DeepGlobe dataset is created from DigitalGlobe Vivid+ satellite imagery [2] containing roads, buildings and landcover labels at resolution of 50 cm/pixel. To show that our method generalizes across feature types and datasets, we also used another dataset of lower resolution imagery (around 3 m/pixel) from Planet Labs [3] to train the roads model."
