Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement
Main author: | Passos, Leandro A. |
---|---|
Publication date: | 2023 |
Other authors: | Papa, João Paulo [UNESP]; Del Ser, Javier; Hussain, Amir; Adeel, Ahsan |
Document type: | Article |
Language: | eng |
Source title: | Repositório Institucional da UNESP |
Full text: | http://dx.doi.org/10.1016/j.inffus.2022.09.006 http://hdl.handle.net/11449/247622 |
Abstract: | This paper proposes a novel multimodal self-supervised architecture for energy-efficient audio-visual (AV) speech enhancement that integrates Graph Neural Networks with canonical correlation analysis (CCA-GNN). The proposed approach builds on a state-of-the-art CCA-GNN that learns representative embeddings by maximizing the correlation between pairs of augmented views of the same input while decorrelating disconnected features. The key idea of the conventional CCA-GNN is to discard augmentation-variant information and preserve augmentation-invariant information while preventing the capture of redundant information. Our proposed AV CCA-GNN model addresses a multimodal representation learning context. Specifically, it improves contextual AV speech processing by maximizing the canonical correlation between augmented views of the same channel as well as between audio and visual embeddings. In addition, it proposes a positional node encoding that considers a prior-frame sequence distance instead of a feature-space representation when computing a node's nearest neighbors, injecting temporal information into the embeddings through the neighborhood's connectivity. Experiments on the benchmark CHiME-3 dataset show that the proposed prior-frame-based AV CCA-GNN ensures better feature learning in the temporal context, leading to more energy-efficient speech reconstruction than state-of-the-art CCA-GNN and multilayer perceptron baselines. |
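The correlation-plus-decorrelation objective described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name `cca_ssg_loss`, the trade-off weight `lam`, and all shapes are illustrative assumptions. The sketch aligns two augmented views sample-wise (invariance) while pushing each view's feature covariance toward the identity (decorrelation), which is the general shape of CCA-style self-supervised losses.

```python
import numpy as np

def cca_ssg_loss(z_a, z_b, lam=1e-3):
    """Hedged sketch of a CCA-style self-supervised objective.

    z_a, z_b: (n_samples, n_features) embeddings of two augmented views.
    lam: assumed trade-off weight between invariance and decorrelation.
    """
    n, d = z_a.shape
    # Standardize each view column-wise (zero mean, unit variance).
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Invariance term: align the two views sample by sample.
    invariance = ((z_a - z_b) ** 2).sum() / n
    # Decorrelation term: push each view's covariance toward identity,
    # discouraging redundant (correlated) feature dimensions.
    c_a = (z_a.T @ z_a) / n
    c_b = (z_b.T @ z_b) / n
    eye = np.eye(d)
    decorrelation = ((c_a - eye) ** 2).sum() + ((c_b - eye) ** 2).sum()
    return invariance + lam * decorrelation
```

In the multimodal AV setting the abstract describes, the same kind of term would additionally be applied across modalities, i.e. between audio and visual embeddings, not only between augmented views of one channel.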
id |
UNSP_023b7ea82d3ba541875b100f8ce5f70d |
oai_identifier_str |
oai:repositorio.unesp.br:11449/247622 |
network_acronym_str |
UNSP |
network_name_str |
Repositório Institucional da UNESP |
repository_id_str |
2946 |
spelling |
Title: Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement
Keywords: Canonical correlation analysis; Graph Neural Networks; Multimodal learning; Positional encoding; Prior frames neighborhood
Funding: Ministerio de Ciencia e Innovación; Eusko Jaurlaritza; Engineering and Physical Sciences Research Council (grant EP/T021063/1)
Affiliations: CMI Lab, School of Engineering and Informatics, University of Wolverhampton, England; Department of Computing, São Paulo State University, Bauru; TECNALIA, Basque Research & Technology Alliance (BRTA), Bizkaia; University of the Basque Country (UPV/EHU), Bizkaia; School of Computing, Edinburgh Napier University, Scotland; DeepCI, Scotland
Authors: Passos, Leandro A.; Papa, João Paulo [UNESP]; Del Ser, Javier; Hussain, Amir; Adeel, Ahsan
Dates: deposited 2023-07-29T13:21:14Z; published 2023-02-01
Publication: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/article; Information Fusion, v. 90, p. 1-11; ISSN 1566-2535; DOI 10.1016/j.inffus.2022.09.006; http://hdl.handle.net/11449/247622; Scopus 2-s2.0-85138109331
Access: info:eu-repo/semantics/openAccess; Language: eng
Repository: Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP); oai:repositorio.unesp.br:11449/247622 |
dc.title.none.fl_str_mv |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
title |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
spellingShingle |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement; Passos, Leandro A.; Canonical correlation analysis; Graph Neural Networks; Multimodal learning; Positional encoding; Prior frames neighborhood |
title_short |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
title_full |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
title_fullStr |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
title_full_unstemmed |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
title_sort |
Multimodal audio-visual information fusion using canonical-correlated Graph Neural Network for energy-efficient speech enhancement |
author |
Passos, Leandro A. |
author_facet |
Passos, Leandro A.; Papa, João Paulo [UNESP]; Del Ser, Javier; Hussain, Amir; Adeel, Ahsan |
author_role |
author |
author2 |
Papa, João Paulo [UNESP]; Del Ser, Javier; Hussain, Amir; Adeel, Ahsan |
author2_role |
author author author author |
dc.contributor.none.fl_str_mv |
University of Wolverhampton; Universidade Estadual Paulista (UNESP); Basque Research & Technology Alliance (BRTA); University of the Basque Country (UPV/EHU); Edinburgh Napier University; DeepCI |
dc.contributor.author.fl_str_mv |
Passos, Leandro A.; Papa, João Paulo [UNESP]; Del Ser, Javier; Hussain, Amir; Adeel, Ahsan |
dc.subject.por.fl_str_mv |
Canonical correlation analysis; Graph Neural Networks; Multimodal learning; Positional encoding; Prior frames neighborhood |
topic |
Canonical correlation analysis; Graph Neural Networks; Multimodal learning; Positional encoding; Prior frames neighborhood |
description |
This paper proposes a novel multimodal self-supervised architecture for energy-efficient audio-visual (AV) speech enhancement that integrates Graph Neural Networks with canonical correlation analysis (CCA-GNN). The proposed approach builds on a state-of-the-art CCA-GNN that learns representative embeddings by maximizing the correlation between pairs of augmented views of the same input while decorrelating disconnected features. The key idea of the conventional CCA-GNN is to discard augmentation-variant information and preserve augmentation-invariant information while preventing the capture of redundant information. Our proposed AV CCA-GNN model addresses a multimodal representation learning context. Specifically, it improves contextual AV speech processing by maximizing the canonical correlation between augmented views of the same channel as well as between audio and visual embeddings. In addition, it proposes a positional node encoding that considers a prior-frame sequence distance instead of a feature-space representation when computing a node's nearest neighbors, injecting temporal information into the embeddings through the neighborhood's connectivity. Experiments on the benchmark CHiME-3 dataset show that the proposed prior-frame-based AV CCA-GNN ensures better feature learning in the temporal context, leading to more energy-efficient speech reconstruction than state-of-the-art CCA-GNN and multilayer perceptron baselines. |
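The prior-frame neighborhood idea in the description, connecting each frame node to its temporally preceding frames rather than to feature-space nearest neighbors, can be sketched as a simple graph construction. This is a hypothetical helper, not the authors' code: the function name `prior_frame_adjacency` and the parameters `num_frames` and `k` are illustrative assumptions.

```python
import numpy as np

def prior_frame_adjacency(num_frames, k):
    """Adjacency matrix where frame t is linked to its k prior frames.

    Neighbors are chosen by sequence distance (temporal index), not by
    distance in feature space, so the graph connectivity itself carries
    temporal context into the node embeddings.
    """
    adj = np.zeros((num_frames, num_frames), dtype=int)
    for t in range(num_frames):
        # Connect t to frames t-k, ..., t-1 (clipped at the sequence start).
        for j in range(max(0, t - k), t):
            adj[t, j] = 1
            adj[j, t] = 1  # keep the graph undirected
    return adj
```

A feature-space kNN graph would instead pick each node's neighbors by embedding similarity, which can link temporally distant frames; the construction above guarantees every neighborhood spans a contiguous window of recent frames.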
publishDate |
2023 |
dc.date.none.fl_str_mv |
2023-07-29T13:21:14Z 2023-07-29T13:21:14Z 2023-02-01 |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/article |
format |
article |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
http://dx.doi.org/10.1016/j.inffus.2022.09.006; Information Fusion, v. 90, p. 1-11; 1566-2535; http://hdl.handle.net/11449/247622; 10.1016/j.inffus.2022.09.006; 2-s2.0-85138109331 |
url |
http://dx.doi.org/10.1016/j.inffus.2022.09.006 http://hdl.handle.net/11449/247622 |
identifier_str_mv |
Information Fusion, v. 90, p. 1-11; 1566-2535; 10.1016/j.inffus.2022.09.006; 2-s2.0-85138109331 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.relation.none.fl_str_mv |
Information Fusion |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
eu_rights_str_mv |
openAccess |
dc.format.none.fl_str_mv |
1-11 |
dc.source.none.fl_str_mv |
Scopus; reponame:Repositório Institucional da UNESP; instname:Universidade Estadual Paulista (UNESP); instacron:UNESP |
instname_str |
Universidade Estadual Paulista (UNESP) |
instacron_str |
UNESP |
institution |
UNESP |
reponame_str |
Repositório Institucional da UNESP |
collection |
Repositório Institucional da UNESP |
repository.name.fl_str_mv |
Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP) |
repository.mail.fl_str_mv |
|
_version_ |
1808128558383497216 |