Automatic clouds/shadows extraction method from CBERS-2 CCD and LANDSAT data

HARB, MOSTAPHA; DE VECCHI, DANIELE; GAMBA, PAOLO ETTORE; DELL'ACQUA, FABIO
2015-01-01

Abstract

Satellite acquisitions from the LANDSAT (LS) and CBERS programs are widely used for monitoring land cover dynamics. In the acquired products, clouds form opaque objects that obscure parts of the scene and prevent a reliable extraction of information from these areas. Cloud shadows create similar problems, as the reflected intensity of the shadowed areas is strongly reduced, generating additional information gaps. The problem can be handled by replacing cloud/shadow pixels with pixels from other close-date acquisitions, but this assumes prior knowledge of the spatial distribution of clouds and their corresponding shadows in a scene. This research introduces a method that provides the cloud/shadow layers and their percentages in LS (TM & ETM+) and CBERS (HRCC) scenes. The approach relies on a set of indicators from the literature to create a composite image that enhances the visual differentiation of clouds/shadows from other objects. The composite RGB image is then converted to a relative luminance raster calculated from the linear band components. Afterwards, the raster is processed by a K-means unsupervised classifier with a fixed number of classes in order to isolate the target-layer pixels. Next, the statistical mode of the population of each class is calculated, compared and used to select the cloud/shadow class automatically, and finally the results are refined by a set of morphological filters. The processing chain avoids the use of thresholds and greatly reduces user intervention. The outcomes achieved on various test cases are promising and stable, and encourage further developments.
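The processing chain described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: the Rec. 709 luminance weights, the number of K-means classes, the histogram-based mode estimate, the brightest/darkest-mode selection rule and the 3x3 opening/closing refinement are all assumptions made for the example, and scikit-learn/SciPy stand in for whatever tools were actually used.

# Hypothetical sketch of the described pipeline (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage as ndi

def extract_clouds_shadows(rgb, n_classes=5, filter_size=3):
    """rgb: H x W x 3 composite with linear, reflectance-like values in [0, 1]."""
    # Relative luminance from the linear RGB components (Rec. 709 weights assumed).
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

    # Unsupervised K-means clustering of the luminance raster with a fixed class count.
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(
        lum.reshape(-1, 1)).reshape(lum.shape)

    # Statistical mode (most frequent luminance value) of each class population.
    modes = []
    for k in range(n_classes):
        values = lum[labels == k]
        hist, edges = np.histogram(values, bins=64)
        peak = np.argmax(hist)
        modes.append(0.5 * (edges[peak] + edges[peak + 1]))
    modes = np.asarray(modes)

    # Automatic class selection: brightest mode -> clouds, darkest mode -> shadows.
    cloud_mask = labels == int(np.argmax(modes))
    shadow_mask = labels == int(np.argmin(modes))

    # Refine both layers with simple morphological opening followed by closing.
    struct = np.ones((filter_size, filter_size), dtype=bool)
    refine = lambda m: ndi.binary_closing(ndi.binary_opening(m, struct), struct)
    cloud_mask, shadow_mask = refine(cloud_mask), refine(shadow_mask)

    # Percentage cover of each layer over the scene.
    cover = (100.0 * cloud_mask.mean(), 100.0 * shadow_mask.mean())
    return cloud_mask, shadow_mask, cover

A call such as cloud_mask, shadow_mask, cover = extract_clouds_shadows(composite_rgb) would return the two binary layers plus their percentage cover; no reflectance thresholds appear anywhere, which is the point of the mode-comparison step.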
2015
978-1-4799-7929-5
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11571/1121522
Citations
  • PMC: not available
  • Scopus: 3
  • ISI: 2