Reinforcing learning in Deep Belief Networks through nature-inspired optimization

Bibliographic details
Lead author: Roder, Mateus [UNESP]
Publication date: 2021
Other authors: Passos, Leandro Aparecido [UNESP], de Rosa, Gustavo H. [UNESP], de Albuquerque, Victor Hugo C., Papa, João Paulo [UNESP]
Document type: Article
Language: English
Source title: Repositório Institucional da UNESP
Full text: http://dx.doi.org/10.1016/j.asoc.2021.107466
http://hdl.handle.net/11449/233128
Abstract: Deep learning techniques usually suffer from the vanishing gradient problem: the gradient becomes progressively weaker as it propagates from one layer to another, until it finally vanishes and no longer contributes to learning. Previous works have addressed this problem by introducing residual connections that assist gradient propagation, but the issue has received little attention in the context of Deep Belief Networks. In this paper, we propose a weighted layer-wise information reinforcement approach for Deep Belief Networks. Moreover, we introduce metaheuristic optimization to select proper weight connections that improve the network's learning capabilities. Experiments conducted on public datasets corroborate the effectiveness of the proposed approach in image classification tasks.
Record identifier: oai:repositorio.unesp.br:11449/233128
Keywords: Deep Belief Network; Metaheuristic optimization; Optimization; Residual networks; Restricted Boltzmann machines
Published in: Applied Soft Computing, v. 108, 2021-09-01 (ISSN 1568-4946)
DOI: 10.1016/j.asoc.2021.107466
Scopus ID: 2-s2.0-85105581286
Metadata source: Scopus
Rights: open access (info:eu-repo/semantics/openAccess), published version
Affiliations: Department of Computing, São Paulo State University (UNESP), Av. Eng. Luiz Edmundo Carrijo Coube, 14-01; Graduate Program on Teleinformatics Engineering, Federal University of Ceará, Fortaleza; Graduate Program on Telecommunication Engineering, Federal Institute of Education, Science and Technology of Ceará
Funding: Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grants #2013/07375-0, #2014/12236-1, #2017/25908-6, #2018/21934-5, #2019/02205-5, #2019/07665-4, #2019/07825-1; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grants #304315/2017-6, #307066/2017-7, #427968/2018-6, #430274/2018-1
Deposited in repository: 2022-05-01
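The abstract describes the approach only at a high level. The sketch below is a minimal, hypothetical illustration of the two ingredients it mentions: a weighted, skip-style reinforcement of lower-layer information inside a layer stack, and a nature-inspired search over the reinforcement weight. All names (W1, W2, R, alpha), the synthetic data, the reconstruction-error fitness, and the simple (1+1)-style evolutionary optimizer are assumptions made for illustration; they do not reproduce the paper's actual formulation or the specific metaheuristics it evaluates.

```python
# Illustrative sketch only: a toy two-layer sigmoid stack in which the second
# layer also receives a weighted "reinforcement" of the raw input (a
# residual-like skip connection), with the reinforcement weight alpha chosen
# by a simple (1+1)-style evolutionary search. This is NOT the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic binary "images" standing in for a public image dataset.
X = (rng.random((256, 64)) > 0.5).astype(float)

# Fixed random projections standing in for pre-trained layer weights.
W1 = rng.normal(0, 0.1, (64, 32))   # visible -> hidden layer 1
W2 = rng.normal(0, 0.1, (32, 16))   # hidden 1 -> hidden 2
R  = rng.normal(0, 0.1, (64, 16))   # skip projection: visible -> hidden 2

def forward(X, alpha):
    """Top representation with lower-layer information reinforced by alpha."""
    h1 = sigmoid(X @ W1)
    h2 = sigmoid(h1 @ W2 + alpha * (X @ R))  # weighted reinforcement term
    return h1, h2

def fitness(alpha):
    """Toy criterion: linear reconstruction error of X from the top layer."""
    _, h2 = forward(X, alpha)
    D, *_ = np.linalg.lstsq(h2, X, rcond=None)  # least-squares decoder
    return np.mean((X - h2 @ D) ** 2)

# (1+1)-style evolutionary search over the reinforcement weight alpha.
alpha, best = 0.0, fitness(0.0)
for _ in range(50):
    cand = np.clip(alpha + rng.normal(0, 0.2), 0.0, 2.0)
    f = fitness(cand)
    if f < best:
        alpha, best = cand, f

print(f"selected reinforcement weight alpha = {alpha:.3f}, error = {best:.4f}")
```

In this toy setting the search simply keeps any perturbed alpha that lowers the reconstruction error; the paper instead applies established nature-inspired metaheuristics to select the connection weights of the Deep Belief Network's reinforcement scheme.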