Reinforcement learning with spiking neural networks
Main author: | CHEVTCHENKO, Sergio Fernandovitch |
---|---|
Publication date: | 2023 |
Document type: | Thesis (doctorate) |
Language: | eng |
Source title: | Repositório Institucional da UFPE |
dARK ID: | ark:/64986/001300000fvcj |
Full text: | https://repositorio.ufpe.br/handle/123456789/54351 |
Abstract: | Artificial intelligence systems have made impressive progress in recent years, but they still lag behind simple biological brains in terms of control capabilities and power consumption. Spiking neural networks (SNNs) seek to emulate the energy efficiency, learning speed, and temporal processing of biological brains. However, in the context of reinforcement learning (RL), SNNs still fall short of traditional neural networks. The primary aim of this work is to bridge the performance gap between spiking models and powerful deep RL (DRL) algorithms on specific tasks. To this end, we have proposed new architectures that have been compared, both in terms of learning speed and final accuracy, to DRL algorithms and classical tabular RL approaches. This thesis consists of three stages. The initial stage presents a simple spiking model that addresses the scalability limitations of related models in terms of the state space. The model is evaluated on two classical RL problems: grid-world and acrobot. The results suggest that the proposed spiking model is comparable to both tabular and DRL algorithms, while maintaining an advantage in terms of complexity over the DRL algorithm. In the second stage, we further explore the proposed model by combining it with a binary feature extraction network. A binary convolutional neural network (CNN) is pre-trained on a set of naturalistic RGB images, and a separate set of images is used as observations in a modified grid-world task. We present improvements in architecture and dynamics to address this more challenging task with image observations. As before, the model is experimentally compared to state-of-the-art DRL algorithms. Additionally, we provide supplementary experiments to present a more detailed view of the connectivity and plasticity between different layers of the network. The third stage of this thesis presents a novel neuromorphic architecture for solving RL problems with real-valued observations. The proposed model incorporates feature extraction layers, with the addition of temporal difference (TD)-error modulation and eligibility traces, building upon prior work. An ablation study confirms the significant impact of these components on the proposed model's performance. Our model consistently outperforms the tabular approach and successfully discovers stable control policies in the mountain car, cart-pole, and acrobot environments. Although the proposed model does not outperform PPO in terms of optimal performance, it offers an appealing trade-off in terms of computational and hardware implementation requirements: the model requires neither an external memory buffer nor global error gradient computation, and synaptic updates occur online, driven by local learning rules and a broadcast TD-error signal. We conclude by highlighting the limitations of our approach and suggest promising directions for future research. |
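The final stage of the abstract describes online synaptic updates driven by local learning rules, eligibility traces, and a single broadcast TD-error signal, with no replay buffer or global gradient computation. The sketch below illustrates that general three-factor update pattern in plain numpy; the class, constants, and dynamics are illustrative assumptions for this record, not the thesis's actual architecture.

```python
import numpy as np

class ThreeFactorSynapses:
    """Minimal sketch of TD-error-modulated, eligibility-trace synapses (illustrative only)."""

    def __init__(self, n_pre, n_post, lr=1e-3, trace_decay=0.9):
        self.w = np.random.uniform(-0.1, 0.1, size=(n_pre, n_post))
        self.elig = np.zeros_like(self.w)   # one eligibility trace per synapse
        self.lr = lr
        self.trace_decay = trace_decay       # decay of the trace between updates

    def forward(self, pre_spikes):
        # Binary presynaptic spike vector -> postsynaptic drive (a crude rate proxy).
        return pre_spikes @ self.w

    def accumulate(self, pre_spikes, post_spikes):
        # Local (Hebbian) spike coincidence enters the trace; the old trace decays.
        self.elig = self.trace_decay * self.elig + np.outer(pre_spikes, post_spikes)

    def apply_td_error(self, td_error):
        # Third factor: one scalar TD-error broadcast to all synapses scales the local traces.
        self.w += self.lr * td_error * self.elig


def one_step_td_error(reward, value, next_value, gamma=0.99, done=False):
    # delta = r + gamma * V(s') - V(s); the only globally shared learning signal.
    return reward + (0.0 if done else gamma * next_value) - value
```

In an online control loop, `accumulate` would run at every simulation step and `apply_td_error` whenever a TD-error becomes available, so each weight change depends only on locally stored traces plus one broadcast scalar, consistent with the trade-off the abstract highlights.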
id |
UFPE_96b54a33fcf7505b8b991b12734fdbbe |
oai_identifier_str |
oai:repositorio.ufpe.br:123456789/54351 |
network_acronym_str |
UFPE |
network_name_str |
Repositório Institucional da UFPE |
repository_id_str |
2221 |
spelling |
CHEVTCHENKO, Sergio Fernandovitch (author; Lattes: http://lattes.cnpq.br/5146318019503884). Advisor: LUDERMIR, Teresa Bernarda (Lattes: http://lattes.cnpq.br/6321179168854922). Deposited: 2023-12-22. Issued: 2023-08-15. Funding: FACEPE.
Citation: CHEVTCHENKO, Sérgio Fernandovitch. Reinforcement learning with spiking neural networks. 2023. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023. https://repositorio.ufpe.br/handle/123456789/54351 ark:/64986/001300000fvcj
Abstract in English (as above) and in Portuguese. Language: eng. Publisher: Universidade Federal de Pernambuco, Programa de Pos Graduacao em Ciencia da Computacao, UFPE, Brasil. License: Attribution-NonCommercial-NoDerivs 3.0 Brazil (http://creativecommons.org/licenses/by-nc-nd/3.0/br/); open access. Subjects: Aprendizagem por reforço; STDP; Redes neurais de impulsos; FEAST; ODESA. Bitstreams (thesis PDF, license files, extracted text, thumbnail) are listed with URLs and MD5 checksums below. |
dc.title.pt_BR.fl_str_mv |
Reinforcement learning with spiking neural networks |
title |
Reinforcement learning with spiking neural networks |
spellingShingle |
Reinforcement learning with spiking neural networks CHEVTCHENKO, Sergio Fernandovitch Aprendizagem por reforço STDP Redes neurais de impulsos FEAST ODESA |
title_short |
Reinforcement learning with spiking neural networks |
title_full |
Reinforcement learning with spiking neural networks |
title_fullStr |
Reinforcement learning with spiking neural networks |
title_full_unstemmed |
Reinforcement learning with spiking neural networks |
title_sort |
Reinforcement learning with spiking neural networks |
author |
CHEVTCHENKO, Sergio Fernandovitch |
author_facet |
CHEVTCHENKO, Sergio Fernandovitch |
author_role |
author |
dc.contributor.authorLattes.pt_BR.fl_str_mv |
http://lattes.cnpq.br/5146318019503884 |
dc.contributor.advisorLattes.pt_BR.fl_str_mv |
http://lattes.cnpq.br/6321179168854922 |
dc.contributor.author.fl_str_mv |
CHEVTCHENKO, Sergio Fernandovitch |
dc.contributor.advisor1.fl_str_mv |
LUDERMIR, Teresa Bernarda |
contributor_str_mv |
LUDERMIR, Teresa Bernarda |
dc.subject.por.fl_str_mv |
Aprendizagem por reforço; STDP; Redes neurais de impulsos; FEAST; ODESA |
topic |
Aprendizagem por reforço; STDP; Redes neurais de impulsos; FEAST; ODESA |
description |
Artificial intelligence systems have made impressive progress in recent years, but they still lag behind simple biological brains in terms of control capabilities and power consumption. Spiking neural networks (SNNs) seek to emulate the energy efficiency, learning speed, and temporal processing of biological brains. However, in the context of reinforcement learning (RL), SNNs still fall short of traditional neural networks. The primary aim of this work is to bridge the performance gap between spiking models and powerful deep RL (DRL) algorithms on specific tasks. To this end, we have proposed new architectures that have been compared, both in terms of learning speed and final accuracy, to DRL algorithms and classical tabular RL approaches. This thesis consists of three stages. The initial stage presents a simple spiking model that addresses the scalability limitations of related models in terms of the state space. The model is evaluated on two classical RL problems: grid-world and acrobot. The results suggest that the proposed spiking model is comparable to both tabular and DRL algorithms, while maintaining an advantage in terms of complexity over the DRL algorithm. In the second stage, we further explore the proposed model by combining it with a binary feature extraction network. A binary convolutional neural network (CNN) is pre-trained on a set of naturalistic RGB images, and a separate set of images is used as observations in a modified grid-world task. We present improvements in architecture and dynamics to address this more challenging task with image observations. As before, the model is experimentally compared to state-of-the-art DRL algorithms. Additionally, we provide supplementary experiments to present a more detailed view of the connectivity and plasticity between different layers of the network. The third stage of this thesis presents a novel neuromorphic architecture for solving RL problems with real-valued observations. The proposed model incorporates feature extraction layers, with the addition of temporal difference (TD)-error modulation and eligibility traces, building upon prior work. An ablation study confirms the significant impact of these components on the proposed model's performance. Our model consistently outperforms the tabular approach and successfully discovers stable control policies in the mountain car, cart-pole, and acrobot environments. Although the proposed model does not outperform PPO in terms of optimal performance, it offers an appealing trade-off in terms of computational and hardware implementation requirements: the model requires neither an external memory buffer nor global error gradient computation, and synaptic updates occur online, driven by local learning rules and a broadcast TD-error signal. We conclude by highlighting the limitations of our approach and suggest promising directions for future research. |
publishDate |
2023 |
dc.date.accessioned.fl_str_mv |
2023-12-22T11:52:29Z |
dc.date.available.fl_str_mv |
2023-12-22T11:52:29Z |
dc.date.issued.fl_str_mv |
2023-08-15 |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/doctoralThesis |
format |
doctoralThesis |
status_str |
publishedVersion |
dc.identifier.citation.fl_str_mv |
CHEVTCHENKO, Sérgio Fernandovitch. Reinforcement learning with spiking neural networks. 2023. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023. |
dc.identifier.uri.fl_str_mv |
https://repositorio.ufpe.br/handle/123456789/54351 |
dc.identifier.dark.fl_str_mv |
ark:/64986/001300000fvcj |
identifier_str_mv |
CHEVTCHENKO, Sérgio Fernandovitch. Reinforcement learning with spiking neural networks. 2023. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023. ark:/64986/001300000fvcj |
url |
https://repositorio.ufpe.br/handle/123456789/54351 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.rights.driver.fl_str_mv |
Attribution-NonCommercial-NoDerivs 3.0 Brazil http://creativecommons.org/licenses/by-nc-nd/3.0/br/ info:eu-repo/semantics/openAccess |
rights_invalid_str_mv |
Attribution-NonCommercial-NoDerivs 3.0 Brazil http://creativecommons.org/licenses/by-nc-nd/3.0/br/ |
eu_rights_str_mv |
openAccess |
dc.publisher.none.fl_str_mv |
Universidade Federal de Pernambuco |
dc.publisher.program.fl_str_mv |
Programa de Pos Graduacao em Ciencia da Computacao |
dc.publisher.initials.fl_str_mv |
UFPE |
dc.publisher.country.fl_str_mv |
Brasil |
publisher.none.fl_str_mv |
Universidade Federal de Pernambuco |
dc.source.none.fl_str_mv |
reponame:Repositório Institucional da UFPE instname:Universidade Federal de Pernambuco (UFPE) instacron:UFPE |
instname_str |
Universidade Federal de Pernambuco (UFPE) |
instacron_str |
UFPE |
institution |
UFPE |
reponame_str |
Repositório Institucional da UFPE |
collection |
Repositório Institucional da UFPE |
bitstream.url.fl_str_mv |
https://repositorio.ufpe.br/bitstream/123456789/54351/1/TESE%20Sergio%20Fernandovitch%20Chevtchenko.pdf https://repositorio.ufpe.br/bitstream/123456789/54351/2/license_rdf https://repositorio.ufpe.br/bitstream/123456789/54351/3/license.txt https://repositorio.ufpe.br/bitstream/123456789/54351/4/TESE%20Sergio%20Fernandovitch%20Chevtchenko.pdf.txt https://repositorio.ufpe.br/bitstream/123456789/54351/5/TESE%20Sergio%20Fernandovitch%20Chevtchenko.pdf.jpg |
bitstream.checksum.fl_str_mv |
f5c348646e0c7dc6e8c2062063351117 e39d27027a6cc9cb039ad269a5db8e34 5e89a1613ddc8510c6576f4b23a78973 e68b6a4ddc2eeda241e7d334600303d8 274af5390bba9c224d87ded2a0fb87ac |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositório Institucional da UFPE - Universidade Federal de Pernambuco (UFPE) |
repository.mail.fl_str_mv |
attena@ufpe.br |
_version_ |
1815172811873845248 |