2 Mapping mangroves using Sentinel-1 and Sentinel-2
I used Colombia as the study region for this analysis; however, the method can be applied anywhere. The steps to map mangroves and other common land cover classes are:
- Pre-processing/Exporting Sentinel-1 and Sentinel-2.
- Organizing a basemap (get training data!!!).
- Applying an object-based classifier.
All previous steps can be easily modified for specific regions and conditions.
The first step is to add the repository to your Earth Engine account:
https://code.earthengine.google.com/?accept_repo=users/murillop/mapping_mangroves
// and call the main library:
var man = require('users/murillop/mapping_mangroves:mangroves');
In the repository you will find the main library plus the examples needed to run the analysis.
2.1 Preprocessing/Exporting Sentinel-1 and Sentinel-2
Preprocessing Sentinel-1 data requires effective removal of speckle noise. We use the Perona-Malik filter to remove speckle from each image.
For Sentinel-2 we remove clouds and shadows. Together, these functions yield better spectral information for the 2019-2020 period.
After improving the data, we build an image composite using the median. We also calculate the standard deviation and the 20th and 80th percentiles. Finally, we save the resulting composite as an asset.
The only parameters needed are:
var man = require('users/murillop/mapping_mangroves:mangroves');

var region = ee.Geometry.Polygon(
    [[[-78.63099738574668, 2.746827033203151],
      [-78.63099738574668, 2.2295862108717603],
      [-77.70677253223106, 2.2295862108717603],
      [-77.70677253223106, 2.746827033203151]]], null, false);

var AOI = region;

var params = {
  START_DATE: '2019-01-01',
  END_DATE: '2021-01-01',
  CLOUD_FILTER: 60,      // CLOUDY_PIXEL_PERCENTAGE. Select less than 60%
  CLD_PRB_THRESH: 20,    // Cloud PROBABILITY greater than 20%
  NIR_DRK_THRESH: 0.15,  // For shadows
  CLD_PRJ_DIST: 1,       // Project shadows from clouds using distance = 1
  BUFFER: 50             // Buffer around clouds
};
After modifying your parameters, you can apply the functions needed for pre-processing.
Example: Get all S2 images based on parameters.
var s2_sr_cld_col_eval = man.get_s2_sr_cld_col(AOI, params);
print(s2_sr_cld_col_eval.size(), 'number of S2 images');
and then apply cloud and shadow masking:
var s2 = s2_sr_cld_col_eval.map(function(img){return man.add_cld_shdw_mask(img, params)});
s2 = s2.map(function(img){return man.apply_cld_shdw_mask(img, params)});
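The compositing and export step lives in the full script; as a rough sketch of what it does (the asset path below is a placeholder, and 1.Export_S1_S2_composite may organize it differently), the masked collection is reduced to a median composite with standard-deviation and percentile bands and saved as an asset:

// Sketch only: median composite plus per-band standard deviation and
// 20th/80th percentile layers from the masked collection.
var composite = s2.median()
    .addBands(s2.reduce(ee.Reducer.stdDev()))
    .addBands(s2.reduce(ee.Reducer.percentile([20, 80])));

// Save the composite as an asset (assetId here is a placeholder path).
Export.image.toAsset({
  image: composite,
  description: 'S2_composite_2019_2021',
  assetId: 'users/your_user/S2_composite_2019_2021', // placeholder
  region: AOI,
  scale: 20,
  maxPixels: 1e13
});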
See the rest of the script in the repository: 1.Export_S1_S2_composite.
2.2 Organizing a basemap (get training data)
Getting training data for a classification process is an exhausting task. Mostly collected visually and manually, training data is key to a successful classification. Here we use previous maps created by the Colombian government, from which we select specific land cover classes. The basemap follows the Corine Land Cover (CLC) methodology, the official methodology employed by Colombian agencies. The official CLC documentation can be found here.
For this exercise, five classes were used:
1. Mangroves
2. Water
3. Dense forest (>80% tree canopy cover)
4. Non-forest (<20% tree canopy cover)
5. Other vegetation (shrub, herbaceous, wet forest)
The code used, 2.Classification_S1_S2, is located in the repository.
This function retrieves the specific land cover classes:
var man = require('users/murillop/NASA_mangroves:mangroves');
// Get Corine Land Cover classes
var corine = man.getCORINE();
The CLC map is from 2012. I update it using the Hansen and JRC datasets for 2019. This process helps obtain "purer" pixels per class, discard outliers, and bring the classes up to date.
For instance, in the case of the mangrove class, I update the class taking into account the presence of water in S1 and a forest/non-forest mask derived from Hansen.
// Update the 'manglar' class when it is water and when it is non-forest.
var manglar1 = corine.select('manglar')
    .updateMask(vv_water.eq(0))               // remove water
    .updateMask(corine.select('F-NF').eq(1))  // remove when it is non-forest
    .remap([0], [1]).int()
    .rename('class');
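Note that vv_water and the F-NF (forest/non-forest) band are prepared earlier in the script. As an illustration of the idea only (not the exact code), a Sentinel-1 water mask can be obtained by thresholding the VV backscatter of the exported composite; the -16 dB threshold below is an assumption:

// Illustrative only: flag low VV backscatter as water (1 = water, 0 = land).
// This assumes VV is stored in dB; adjust the threshold to your data.
var vv = ee.Image("projects/mangrovescience/SilviaCarbon_COL/Sentinel_Predictors/s1_s2_2019-01-01_2021-01-01")
    .select('VV');
var vv_water = vv.lt(-16).rename('water');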
2.3 Indices and an object-based classifier
We calculate a set of relevant spectral metrics from the exported asset (the code used is 2.Classification_S1_S2). Relevant vegetation indices and GLCM textural metrics for mapping mangroves were calculated.
To add them to the exported asset:
// Read S1+S2
var s1_s2 = ee.Image("projects/mangrovescience/SilviaCarbon_COL/Sentinel_Predictors/s1_s2_2019-01-01_2021-01-01");

// Read S2 only and calculate different predictors (Indices + GLCM indicators)
var s2 = s1_s2.select('B.*');
s2 = man.doIndices2(ee.ImageCollection(s2));
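doIndices2 already computes the GLCM metrics; if you want to see how such textures can be produced, a minimal standalone sketch with Earth Engine's built-in glcmTexture() looks like this (the band choice and rescaling are assumptions, not the library's implementation):

// Sketch: GLCM textures on the NIR band. glcmTexture() needs an integer image,
// so the reflectance band is rescaled before computing the neighbourhood metrics.
var nirInt = s1_s2.select('B8').multiply(100).toInt().rename('B8');
var glcm = nirInt.glcmTexture({size: 3});
print(glcm.bandNames(), 'GLCM texture bands'); // e.g. B8_contrast, B8_ent, B8_corr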
Now that we have the S2 data, we add the S1 bands:
=ee.Image("projects/mangrovescience/SilviaCarbon_COL/Sentinel_Predictors/s1_s2_2019-01-01_2021-01-01")
var s1 .select('VV','VH');
= s2.addBands(s1);
var s1s2_new = s1s2_new.bandNames();
var bands print(s1s2_new, 's1s2_new');
Once we have the final set of predictors, we can apply an object-based classifier.
Object-based classifiers are more robust than conventional pixel-based analysis. We use the SNIC algorithm to detect clusters using the Mangrove Vegetation Index (MVI). See the article here.
You can swap MVI for any other index or metric. The currently available indices are NBR, NDVI, NDMI, MNDWI, MVI, CMRI, and GLCM (8 metrics). If you want to add your own index, or one that is not yet available, you should update the main library mangroves; a sketch of such a function is shown below.
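As an illustration of what such an index function could look like (this is not the library's implementation; MVI is commonly formulated as (NIR - Green) / (SWIR1 - Green), so check it against doIndices2 before relying on it):

// Sketch of a custom index function using Sentinel-2 band names (B3, B8, B11).
function addMVI(img) {
  var mvi = img.expression('(NIR - GREEN) / (SWIR1 - GREEN)', {
    NIR: img.select('B8'),
    GREEN: img.select('B3'),
    SWIR1: img.select('B11')
  }).rename('MVI');
  return img.addBands(mvi);
}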
// Define SNIC
var snic_sen1_2 = man.snic_mangroves_sentinel(s1s2_new.select(bands), 'MVI'); // You can pick another index
var predictionBands = snic_sen1_2.bandNames().remove('clusters');
snic_sen1_2 = snic_sen1_2.select(predictionBands);
Now we can create sampled points for the five classes.
var stratified = basemap.addBands(ee.Image.pixelLonLat())
  .stratifiedSample({
    numPoints: 10,
    classBand: 'class',
    scale: 20,
    region: snic_sen1_2.geometry(), // new_area
    classValues: [1, 2, 3, 4, 5],   // 1 = manglar, 2 = forest, 3 = nonforest, 4 = water, 5 = Herbazal
    classPoints: [1000, 1000, 1000, 1000, 500]
  })
  .map(function(f) {
    return f.setGeometry(ee.Geometry.Point([f.get('longitude'), f.get('latitude')]));
  });

print('Points per class', stratified.reduceColumns(ee.Reducer.frequencyHistogram(), ['class']));
We can export the training (70%) and testing (30%) points for further improvement, and export the final map with specific metadata.
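A simple way to make that 70/30 split (a sketch; the repository script may do it differently) is with a random column:

// Sketch: split the stratified points into 70% training and 30% testing.
var withRandom = stratified.randomColumn('random');
var training = withRandom.filter(ee.Filter.lt('random', 0.7));
var testing = withRandom.filter(ee.Filter.gte('random', 0.7));

// Export the point sets for later use (assetId is a placeholder path).
Export.table.toAsset({
  collection: training,
  description: 'training_points',
  assetId: 'users/your_user/training_points' // placeholder
});

The final classified map is then exported with its metadata: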
Export.image.toAsset({
  image: image_classified.setMulti({
    processing: '16-03-2021', // Change the date
    comment: "Second S1+S2 5 classes SNIC",
    code: 'https://code.earthengine.google.com/174f2b05945fd70898de48ef6d02c4d8', // Save the link of the code
    user: 'PauloJ'
  }),
  description: 'S1S2_classification_SNIC',
  scale: 20,
  assetId: 'SilviaCarbon_COL/S1S2_2019_2020/S1S2_class_SNIC_R2',
  region: image_classified.geometry(), // new_area
  maxPixels: 1e13
});
2.4 Accuracy and cleaning the final product
A basic accuracy assessment using the testing points helps us understand how well the classification performs. See code 3.Accuracy_test_S1_S2.
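The idea, roughly (variable names are illustrative, the classified band is assumed to be called 'classification', and the actual script is 3.Accuracy_test_S1_S2), is to sample the classified map at the testing points and build a confusion matrix:

// Sketch: compare the reference 'class' with the predicted 'classification'.
var validated = image_classified.sampleRegions({
  collection: testing, // the 30% testing points
  properties: ['class'],
  scale: 20
});
var errorMatrix = validated.errorMatrix('class', 'classification');
print('Confusion matrix', errorMatrix);
print('Overall accuracy', errorMatrix.accuracy());
print('Kappa', errorMatrix.kappa());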
Finally, post-processing is needed to clean the final map. The Digital Elevation Model and some geometries for the Caribbean and Pacific regions were used to produce the final map. See code 4.Cleaning S1_S2_classification.
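As a sketch of the DEM part of that cleaning (the dataset, elevation threshold, and class reassignment here are assumptions; see 4.Cleaning S1_S2_classification for the actual rules):

// Sketch: assume the classified map is re-loaded as image_classified and that
// class 1 = manglar. Mangrove pixels above ~30 m elevation are unlikely, so
// reassign them to class 5 (other vegetation); the threshold is an assumption.
var dem = ee.Image('USGS/SRTMGL1_003');
var implausibleMangrove = image_classified.eq(1).and(dem.gt(30));
var cleaned = image_classified.where(implausibleMangrove, 5);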