Repurposing a deep learning network to filter and classify volunteered photographs for land cover and land use characterization

Bibliographic details
Lead author: Tracewski, Lukasz
Publication date: 2017
Other authors: Bastin, Lucy, Fonte, Cidalia C.
Document type: Article
Language: eng
Source title: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/10316/44075
https://doi.org/10.1080/10095020.2017.1373955
Abstract: This paper extends recent research into the usefulness of volunteered photos for land cover extraction, and investigates whether this usefulness can be automatically assessed by an easily accessible, off-the-shelf neural network pre-trained on a variety of scene characteristics. Geo-tagged photographs are sometimes presented to volunteers as part of a game which requires them to extract relevant facts about land use. The challenge is to select the most relevant photographs in order to most efficiently extract the useful information while maintaining the engagement and interests of volunteers. By repurposing an existing network which had been trained on an extensive library of potentially relevant features, we can quickly carry out initial assessments of the general value of this approach, pick out especially salient features, and identify focus areas for future neural network training and development. We compare two approaches to extract land cover information from the network: a simple post hoc weighting approach accessible to non-technical audiences and a more complex decision tree approach that involves training on domain-specific features of interest. Both approaches had reasonable success in characterizing human influence within a scene when identifying the land use types (as classified by Urban Atlas) present within a buffer around the photograph’s location.
This work identifies important limitations and opportunities for using volunteered photographs as follows: (1) the false precision of a photograph’s location is less useful for identifying on-the-spot land cover than the information it can give on neighbouring combinations of land cover; (2) ground-acquired photographs, interpreted by a neural network, can supplement plan view imagery by identifying features which will never be discernible from above; (3) when dealing with contexts where there are very few exemplars of particular classes, an independent a posteriori weighting of existing scene attributes and categories can buffer against over-specificity.
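The simpler of the two approaches compared in the abstract, a post hoc weighting of an off-the-shelf scene classifier's output, can be sketched as follows. The scene categories, confidence scores, and weight table below are hypothetical placeholders standing in for the real output of a pretrained scene-classification CNN and for the paper's actual weighting scheme, which may differ.

```python
# Hypothetical output of an off-the-shelf scene-classification network
# for one volunteered photograph: scene category -> confidence score.
scene_scores = {
    "field/cultivated": 0.41,
    "forest_path": 0.22,
    "highway": 0.19,
    "residential_neighborhood": 0.12,
    "harbor": 0.06,
}

# Post hoc weighting: each scene category gets illustrative weights
# expressing how strongly it indicates each broad land-use class.
# (These weights are invented for illustration only.)
category_weights = {
    "field/cultivated":         {"agricultural": 1.0, "artificial": 0.0},
    "forest_path":              {"agricultural": 0.3, "artificial": 0.1},
    "highway":                  {"agricultural": 0.0, "artificial": 1.0},
    "residential_neighborhood": {"agricultural": 0.0, "artificial": 1.0},
    "harbor":                   {"agricultural": 0.0, "artificial": 0.8},
}

def land_use_scores(scores, weights):
    """Aggregate CNN scene scores into land-use evidence by weighted sum."""
    totals = {}
    for category, score in scores.items():
        for land_use, w in weights[category].items():
            totals[land_use] = totals.get(land_use, 0.0) + score * w
    return totals

evidence = land_use_scores(scene_scores, category_weights)
best = max(evidence, key=evidence.get)
print(best, round(evidence[best], 3))
```

The paper's second approach instead trains a decision tree on such per-category scores as features, effectively learning the weight table from labelled examples rather than assigning it by hand.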
Record ID: RCAP_5c73f7d80ea5433840147777b6d95e5c
OAI identifier: oai:estudogeral.uc.pt:10316/44075
Network acronym: RCAP
Network name: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Repository ID: 7160
Keywords: Land cover; Land use; volunteered geographic information (VGI); photograph; convolutional neural network; machine learning
Publication date: 2017-09-18
Document type: article (publishedVersion)
ISSN: 1993-5153, 1009-5020
Rights: openAccess
Source repository: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Institution: Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
Collection: RCAAP