Methods and algorithms for knowledge reuse in multiagent reinforcement learning.

Bibliographic details
Main author: Silva, Felipe Leno da
Advisor: Costa, Anna Helena Reali
Publication date: 2019
Document type: Doctoral thesis
Language: English
Keywords: Reinforcement Learning; Multiagent Reinforcement Learning; Artificial Intelligence; Multiagent Systems; Transfer Learning
Source title: Biblioteca Digital de Teses e Dissertações da USP
Institution: Universidade de São Paulo (USP)
Access rights: Open access
Format: application/pdf
Full text: http://www.teses.usp.br/teses/disponiveis/3/3141/tde-21112019-113201/
Abstract: Reinforcement Learning (RL) is a well-known technique for training autonomous agents through interactions with the environment. However, the learning process has a high sample complexity: many interactions are needed before an effective policy is inferred, especially when multiple agents are acting simultaneously in the environment. We propose to take advantage of previous knowledge to accelerate learning in multiagent RL problems. Agents may reuse knowledge gathered from previously solved tasks, and they may also receive guidance from more experienced, friendly agents to learn faster. However, specifying a framework that integrates knowledge reuse into the learning process requires answering challenging research questions, such as: How can task solutions be abstracted so that they can be reused later in similar yet different tasks? When should advice be given? How can the previous task most similar to the new one be selected, and how can correspondences between them be mapped? And how can an agent decide whether received advice is trustworthy? Although many methods exist for reusing knowledge from a specific knowledge source, the literature consists of methods so specialized to their own scenarios that they are not compatible with one another. In this thesis, we propose to reuse knowledge both from previously solved tasks and from communication with other agents, and we introduce several flexible methods to enable each of these two types of knowledge reuse. Our proposed methods include Ad Hoc Advising, an inter-agent advising framework in which agents share knowledge through action suggestions, and an extension of the object-oriented representation to multiagent RL, together with methods that leverage it for knowledge reuse. Combined, our methods reuse knowledge from both previously solved tasks and other agents with state-of-the-art performance. Our contributions are first steps toward more flexible and broadly applicable multiagent transfer learning methods, in which agents will be able to consistently combine knowledge reused from multiple sources, including solved tasks and other learning agents.