Robust image features creation by learning how to merge visual and semantic attributes
Main author: | Resende, Damares Crystina Oliveira de |
---|---|
Publication date: | 2021 |
Document type: | Master's thesis |
Language: | eng |
Source title: | Biblioteca Digital de Teses e Dissertações da USP |
Full text: | https://www.teses.usp.br/teses/disponiveis/55/55134/tde-17032021-122717/ |
Abstract: | There are known advantages to using semantic attributes to improve image representation. However, how such attributes can be used to improve visual subspaces, and their effects on coarse- and fine-grained classification, remained to be investigated. This research reports a Visual-Semantic Encoder (VSE), built from an undercomplete neural-network autoencoder, that combines visual features and semantic attributes into a compact subspace containing each domain's most relevant properties. It is observed empirically that the learned latent space can better represent image features and even allows results to be interpreted in light of the nature of the semantic attributes, offering a path toward explainable learning. Experiments were performed on four benchmark datasets in which VSE was compared against state-of-the-art algorithms for dimensionality reduction. The algorithm proves robust to up to 20% degradation of the semantic attributes and is as efficient as LLE at learning a low-dimensional feature space with rich class representativeness, opening possibilities for future work on automatically gathering semantic data to improve representations. Additionally, the study suggests experimentally that adding high-level concepts to image representations adds linearity to the feature space, allowing PCA to perform well in combining visual and semantic features to enhance class separability. Finally, experiments were performed on zero-shot learning, in which VSE and PCA outperform SAE, the state-of-the-art algorithm proposed by Kodirov, Xiang and Gong (2017), and JDL, the joint discriminative learning framework proposed by Zhang and Saligrama (2016), demonstrating the viability of merging semantic and visual data at both training and test time for learning aspects that transcend class boundaries and allow the classification of unseen data. |
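The VSE described in the abstract is an undercomplete autoencoder whose encoder maps concatenated visual and semantic features to a lower-dimensional latent space. As a minimal illustrative sketch (not the dissertation's implementation), the snippet below relies on the fact that a linear undercomplete autoencoder trained with MSE spans the same subspace as PCA, and so projects concatenated features with PCA; all dimensions and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the thesis' inputs: 100 images with
# 512-d visual features and 85-d semantic attributes (sizes are
# made up for illustration, not taken from the dissertation).
visual = rng.normal(size=(100, 512))
semantic = rng.normal(size=(100, 85))

# Fuse the two domains by concatenation before projection.
fused = np.concatenate([visual, semantic], axis=1)

def pca_project(X, k):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)          # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T             # compact k-dimensional subspace

latent = pca_project(fused, k=32)
print(latent.shape)  # (100, 32)
```

In the thesis' setting, such a latent representation would then feed a classifier; robustness to semantic degradation could be probed by zeroing or noising a fraction of the attribute columns before concatenation.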
id |
USP_d8fe48a0a2a435fc32616ebd916e2140 |
---|---|
oai_identifier_str |
oai:teses.usp.br:tde-17032021-122717 |
network_acronym_str |
USP |
network_name_str |
Biblioteca Digital de Teses e Dissertações da USP |
repository_id_str |
2721 |
dc.title.none.fl_str_mv |
Robust image features creation by learning how to merge visual and semantic attributes / Criando características de imagens robustas por meio do aprendizado da fusão de atributos visuais e semânticos |
author |
Resende, Damares Crystina Oliveira de |
author_role |
author |
dc.contributor.none.fl_str_mv |
Ponti, Moacir Antonelli |
dc.contributor.author.fl_str_mv |
Resende, Damares Crystina Oliveira de |
dc.subject.por.fl_str_mv |
Aprendizado de características; Aprendizado de variedades; Autoencoder; Classificação de imagens; Feature learning; Image classification; Manifold learning |
publishDate |
2021 |
dc.date.none.fl_str_mv |
2021-01-21 |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/masterThesis |
dc.identifier.uri.fl_str_mv |
https://www.teses.usp.br/teses/disponiveis/55/55134/tde-17032021-122717/ |
dc.language.iso.fl_str_mv |
eng |
dc.rights.driver.fl_str_mv |
Release the content for public access. info:eu-repo/semantics/openAccess |
dc.format.none.fl_str_mv |
application/pdf |
dc.publisher.none.fl_str_mv |
Biblioteca Digital de Teses e Dissertações da USP |
dc.source.none.fl_str_mv |
reponame: Biblioteca Digital de Teses e Dissertações da USP; instname: Universidade de São Paulo (USP); instacron: USP |
repository.name.fl_str_mv |
Biblioteca Digital de Teses e Dissertações da USP - Universidade de São Paulo (USP) |
repository.mail.fl_str_mv |
virginia@if.usp.br|| atendimento@aguia.usp.br||virginia@if.usp.br |