Pix2pix conditional generative adversarial network with MLP loss function for cloud removal in a cropland time series
Main author: | Christovam, Luiz E. [UNESP] |
---|---|
Publication date: | 2022 |
Other authors: | Shimabukuro, Milton H. [UNESP], Galo, Maria de Lourdes B. T. [UNESP], Honkavaara, Eija |
Document type: | Article |
Language: | eng |
Source: | Repositório Institucional da UNESP |
Full text: | http://dx.doi.org/10.3390/rs14010144 http://hdl.handle.net/11449/230138 |
Abstract: | Clouds are one of the major limitations to crop monitoring with optical satellite images. Despite all efforts to provide decision-makers with high-quality agricultural statistics, there is still a lack of techniques to optimally process satellite image time series in the presence of clouds. In this regard, this article proposes adding a Multi-Layer Perceptron (MLP) loss function to the objective function of the pix2pix conditional Generative Adversarial Network (cGAN). The aim was to push the generative model to deliver synthetic pixels whose values are proxies for the spectral response, thereby improving subsequent crop type mapping. Furthermore, the generalization capacity of the generative models was evaluated by producing pixels with plausible values for images not used in training. To assess the performance of the proposed approach, real images were compared with synthetic images generated by both the proposed approach and the original pix2pix cGAN. The comparative analysis was performed through visual analysis, pixel value analysis, semantic segmentation, and similarity metrics. In general, the proposed approach provided slightly better synthetic pixels than the original pix2pix cGAN, removing more noise and yielding better crop type semantic segmentation; the semantic segmentation of the synthetic image generated with the proposed approach achieved an F1-score of 44.2%, while the real image achieved 44.7%. Regarding generalization, models trained on different regions of the same image provided better pixels than models trained on other images in the time series. In addition, the experiments showed that models trained on pairs of images selected every three months along the time series also provided acceptable results for images without cloud-free areas. |
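The abstract describes augmenting the pix2pix cGAN objective with an extra MLP loss term computed on the synthetic pixels. As a rough illustration only (not the paper's implementation; the function names, weighting factors `lam_l1` and `lam_mlp`, and the exact form of each term are assumptions), the combined generator objective might be sketched like this:

```python
import numpy as np

# Hypothetical sketch: pix2pix's usual adversarial + L1 terms, plus a
# weighted loss from an MLP crop-type classifier applied to the
# generated pixels. All weights and shapes are illustrative.

def l1_loss(fake, real):
    # Standard pix2pix reconstruction term: mean absolute error
    # between synthetic and real pixel values.
    return np.mean(np.abs(fake - real))

def mlp_cross_entropy(probs, labels):
    # Cross-entropy of the MLP's per-pixel crop-type predictions.
    # probs: (n_pixels, n_classes) softmax outputs; labels: (n_pixels,) ints.
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def generator_objective(adv_loss, fake, real, probs, labels,
                        lam_l1=100.0, lam_mlp=1.0):
    # Combined objective: adversarial term + weighted L1 reconstruction
    # term + weighted MLP classification term on the synthetic pixels.
    return (adv_loss
            + lam_l1 * l1_loss(fake, real)
            + lam_mlp * mlp_cross_entropy(probs, labels))
```

The intent of the extra term, as the abstract explains, is that pixels which an MLP can still classify into the correct crop type are better proxies for the true spectral response than pixels that merely look plausible.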
id | UNSP_ba86d07b6b2ceb5e86de25587dbcd889
---|---
oai_identifier_str | oai:repositorio.unesp.br:11449/230138
network_acronym_str | UNSP
network_name_str | Repositório Institucional da UNESP
repository_id_str | 2946
spelling |
Keywords: CGAN; Cloud removal; Crop type mapping; Custom loss function; Image-to-image; Remote sensing; SAR to optical image translation; Synthetic images
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), grants 88882.433956/2019-01, 88887.310463/2018-00, 88887.473380/2020-00
Affiliations: Department of Cartography, São Paulo State University, Roberto Simonsen 305; Department of Mathematics and Computer Science, São Paulo State University, Roberto Simonsen 305; Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute (FGI), National Land Survey of Finland, Geodeetinrinne 2
Contributors: Universidade Estadual Paulista (UNESP); National Land Survey of Finland
Authors: Christovam, Luiz E. [UNESP]; Shimabukuro, Milton H. [UNESP]; Galo, Maria de Lourdes B. T. [UNESP]; Honkavaara, Eija
Deposited: 2022-04-29T08:38:07Z; published: 2022-01-01 (info:eu-repo/semantics/publishedVersion, info:eu-repo/semantics/article)
Identifiers: http://dx.doi.org/10.3390/rs14010144; Remote Sensing, v. 14, n. 1, 2022; ISSN 2072-4292; http://hdl.handle.net/11449/230138; DOI 10.3390/rs14010144; Scopus 2-s2.0-85122012580
Source: Scopus; reponame: Repositório Institucional da UNESP; instname: Universidade Estadual Paulista (UNESP); instacron: UNESP; language: eng; journal: Remote Sensing; rights: info:eu-repo/semantics/openAccess
Record updated: 2024-06-18T15:01:11Z (oai:repositorio.unesp.br:11449/230138) |
dc.title.none.fl_str_mv |
Pix2pix conditional generative adversarial network with MLP loss function for cloud removal in a cropland time series |
author |
Christovam, Luiz E. [UNESP] |
author2 |
Shimabukuro, Milton H. [UNESP] Galo, Maria de Lourdes B. T. [UNESP] Honkavaara, Eija |
publishDate |
2022 |
dc.date.none.fl_str_mv |
2022-04-29T08:38:07Z 2022-01-01
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/article |
dc.identifier.uri.fl_str_mv |
http://dx.doi.org/10.3390/rs14010144 Remote Sensing, v. 14, n. 1, 2022. 2072-4292 http://hdl.handle.net/11449/230138 10.3390/rs14010144 2-s2.0-85122012580 |
dc.language.iso.fl_str_mv |
eng |
dc.relation.none.fl_str_mv |
Remote Sensing |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.source.none.fl_str_mv |
Scopus reponame:Repositório Institucional da UNESP instname:Universidade Estadual Paulista (UNESP) instacron:UNESP |
repository.name.fl_str_mv |
Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP) |