Exploring multi-agent deep reinforcement learning in IEEE very small size soccer
Main author: | MARTINS, Felipe Bezerra |
---|---|
Publication date: | 2023 |
Document type: | Master's thesis (Dissertação) |
Language: | eng |
Source: | Repositório Institucional da UFPE |
dARK ID: | ark:/64986/00130000100fg |
Full text: | https://repositorio.ufpe.br/handle/123456789/54823 |
Funding: | CAPES |
Abstract: | Robot soccer is regarded as a prime example of a dynamic and cooperative multi-agent environment, as it can demonstrate a variety of complexities. Reinforcement learning is a promising technique for optimizing decision-making in these complex systems, and it has recently achieved great success due to advances in deep neural networks, as shown in problems such as autonomous driving, games, and robotics. In multi-agent systems, reinforcement learning research tackles challenges such as cooperation, partial observability, decentralized execution, communication, and complex dynamics. On difficult tasks, modeling the complete problem in the learning environment can be too hard for the algorithms to solve. We can simplify the environment to enable learning; however, policies learned in simplified environments are usually not optimal in the full environment. This study explores whether deep multi-agent reinforcement learning outperforms its single-agent counterparts in an IEEE Very Small Size Soccer setting, a task that presents a challenging problem of cooperation and competition, with two teams of three robots each facing one another. We investigate the efficacy of diverse learning paradigms in achieving the core objective of goal scoring, assessing cooperation by comparing the results of multi-agent and single-agent paradigms. Results indicate that simplifications made to the learning environment to facilitate learning may diminish the importance of cooperation and also introduce biases, driving the learning process towards conflicting policies misaligned with the original challenge. |
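
The sketch below is an illustrative aid for the setup the abstract describes: a minimal, self-contained toy stand-in (Python, NumPy only) for a simplified VSSS-style learning environment. It is not the thesis' actual environment; the class name, field dimensions, contact threshold, and reward terms are all assumptions made for exposition. It shows the episode loop of the multi-agent paradigm, in which three teammates each submit an action per step and share a sparse goal-scoring reward.

```python
# Toy sketch only: all names, dimensions, and reward values are illustrative
# assumptions, not the thesis' actual environment or reward design.
import numpy as np

FIELD_LENGTH, FIELD_WIDTH = 1.5, 1.3   # a VSSS field is roughly 1.5 m x 1.3 m
N_ROBOTS = 3                           # three robots per team


class ToyVSSSEnv:
    """Minimal shared-reward episode loop for one team of three agents."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Robots and ball start at random positions inside the field.
        self.robots = self.rng.uniform(
            [-FIELD_LENGTH / 2, -FIELD_WIDTH / 2],
            [FIELD_LENGTH / 2, FIELD_WIDTH / 2],
            size=(N_ROBOTS, 2),
        )
        self.ball = self.rng.uniform(-0.2, 0.2, size=2)
        return self._observations()

    def _observations(self):
        # Decentralized execution: each agent observes its own pose and the ball.
        return [np.concatenate([r, self.ball]) for r in self.robots]

    def step(self, actions):
        # actions: one 2-D velocity command per teammate (multi-agent paradigm).
        # A single-agent baseline would learn actions[0] and script the others.
        self.robots += 0.05 * np.clip(np.asarray(actions), -1.0, 1.0)
        dists = np.linalg.norm(self.robots - self.ball, axis=1)
        if dists.min() < 0.08:  # crude "kick": step the ball toward the goal
            direction = np.array([FIELD_LENGTH / 2, 0.0]) - self.ball
            self.ball = self.ball + 0.1 * direction / (np.linalg.norm(direction) + 1e-9)
        # Goal mouth is 0.4 m wide in VSSS, hence the 0.2 m half-width check.
        scored = self.ball[0] >= FIELD_LENGTH / 2 and abs(self.ball[1]) < 0.2
        reward = 1.0 if scored else -0.001  # shared sparse goal reward, small time cost
        return self._observations(), reward, scored


env = ToyVSSSEnv()
obs = env.reset()
for _ in range(200):
    actions = [np.random.uniform(-1, 1, 2) for _ in obs]  # placeholder random policies
    obs, reward, done = env.step(actions)
    if done:
        break
```

Swapping the placeholder random policies for learned ones (one policy per agent, or a single learned agent on one robot with scripted teammates) illustrates the multi-agent versus single-agent comparison the abstract describes.
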
id |
UFPE_b675a4ea6b1a1cb60a873b1d5bc3f9b2 |
oai_identifier_str |
oai:repositorio.ufpe.br:123456789/54823 |
network_acronym_str |
UFPE |
network_name_str |
Repositório Institucional da UFPE |
repository_id_str |
2221 |
dc.title.pt_BR.fl_str_mv |
Exploring multi-agent deep reinforcement learning in IEEE very small size soccer |
author |
MARTINS, Felipe Bezerra |
author_role |
author |
dc.contributor.authorLattes.pt_BR.fl_str_mv |
http://lattes.cnpq.br/6129506437474224 |
dc.contributor.advisorLattes.pt_BR.fl_str_mv |
http://lattes.cnpq.br/1931667959910637 |
dc.contributor.advisor1.fl_str_mv |
BASSANI, Hansenclever de França |
dc.subject.por.fl_str_mv |
Computational intelligence; Reinforcement learning; Robotics; Multi-agent systems |
publishDate |
2023 |
dc.date.issued.fl_str_mv |
2023-09-27 |
dc.date.accessioned.fl_str_mv |
2024-01-26T18:28:09Z |
dc.date.available.fl_str_mv |
2024-01-26T18:28:09Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/masterThesis |
dc.identifier.citation.fl_str_mv |
MARTINS, Felipe Bezerra. Exploring multi-agent deep reinforcement learning in IEEE very small size soccer. 2023. Dissertação (Mestrado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023. |
dc.identifier.uri.fl_str_mv |
https://repositorio.ufpe.br/handle/123456789/54823 |
dc.identifier.dark.fl_str_mv |
ark:/64986/00130000100fg |
dc.language.iso.fl_str_mv |
eng |
dc.rights.driver.fl_str_mv |
Attribution-NonCommercial-NoDerivs 3.0 Brazil; http://creativecommons.org/licenses/by-nc-nd/3.0/br/; info:eu-repo/semantics/openAccess |
dc.publisher.none.fl_str_mv |
Universidade Federal de Pernambuco |
dc.publisher.program.fl_str_mv |
Programa de Pos Graduacao em Ciencia da Computacao |
dc.publisher.initials.fl_str_mv |
UFPE |
dc.publisher.country.fl_str_mv |
Brasil |
dc.source.none.fl_str_mv |
reponame:Repositório Institucional da UFPE; instname:Universidade Federal de Pernambuco (UFPE); instacron:UFPE |
bitstream.url.fl_str_mv | checksum (MD5) |
---|---|
https://repositorio.ufpe.br/bitstream/123456789/54823/1/DISSERTA%c3%87%c3%83O%20Felipe%20Bezerra%20Martins.pdf | 91ac2cd8433d0654ded77c47b85b4eca |
https://repositorio.ufpe.br/bitstream/123456789/54823/2/license_rdf | e39d27027a6cc9cb039ad269a5db8e34 |
https://repositorio.ufpe.br/bitstream/123456789/54823/3/license.txt | 5e89a1613ddc8510c6576f4b23a78973 |
https://repositorio.ufpe.br/bitstream/123456789/54823/4/DISSERTA%c3%87%c3%83O%20Felipe%20Bezerra%20Martins.pdf.txt | d7daca5b0d9cef337df4164e0660bd96 |
https://repositorio.ufpe.br/bitstream/123456789/54823/5/DISSERTA%c3%87%c3%83O%20Felipe%20Bezerra%20Martins.pdf.jpg | 13bee4f36153551d170ce59cb59bb697 |
repository.name.fl_str_mv |
Repositório Institucional da UFPE - Universidade Federal de Pernambuco (UFPE) |
repository.mail.fl_str_mv |
attena@ufpe.br |
_version_ |
1815172958669242368 |