On-Board Deep Q-Network for UAV-Assisted Online Power Transfer and Data Collection

Bibliographic details
Lead author: Li, Kai
Publication date: 2019
Other authors: Ni, Wei; Tovar, Eduardo; Jamalipour, Abbas
Document type: Article
Language: English
Source: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/10400.22/15286
Abstract: Unmanned Aerial Vehicles (UAVs) with Microwave Power Transfer (MPT) capability provide a practical means of deploying a large number of wirelessly powered sensing devices in areas with no access to a persistent power supply. The UAV can charge the sensing devices remotely and harvest their data. A key challenge is carrying out MPT and data collection online, jointly with the on-board control of the UAV (e.g., its patrolling velocity), so as to prevent battery drainage and data-queue overflow at the devices, while up-to-date knowledge of the devices' battery levels and data queues is unavailable at the UAV. In this paper, an on-board deep Q-network is developed to minimize the overall data packet loss of the sensing devices by optimally deciding which device to charge and interrogate for data collection, and the instantaneous patrolling velocity of the UAV. Specifically, we formulate a Markov Decision Process (MDP) whose states capture the battery levels and data-queue lengths of the devices, the channel conditions, and the waypoints along the UAV's trajectory, and solve it optimally with Q-learning. Furthermore, we propose the on-board deep Q-network, which handles the enlarged state space of the MDP, and a deep-reinforcement-learning-based scheduling algorithm that asymptotically converges to the optimal solution online, even when the UAV has only outdated knowledge of the MDP states. Numerical results demonstrate that our deep reinforcement learning algorithm reduces packet loss by at least 69.2% compared with existing non-learning greedy algorithms.
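As a rough illustration of the formulation the abstract describes, the sketch below sets up a toy version of the scheduling MDP and applies a one-step tabular Q-learning update. Everything here is an assumption for illustration: the discretization of battery and queue levels, the candidate velocities, the stub transition dynamics, and the packet-loss-style reward are stand-ins, not the paper's exact model.

```python
import random
from collections import defaultdict

# Hypothetical discretization of the MDP state components (assumed, not from the paper).
BATTERY_LEVELS = 4               # discretized battery level per device
QUEUE_LEVELS = 4                 # discretized data-queue length per device
VELOCITIES = (5.0, 10.0, 15.0)   # candidate patrolling velocities in m/s
NUM_DEVICES = 3

# Action = (index of the device to charge and interrogate, velocity index).
ACTIONS = [(d, v) for d in range(NUM_DEVICES) for v in range(len(VELOCITIES))]

def random_state():
    """A state bundles each device's (battery, queue) plus channel and waypoint."""
    devices = tuple((random.randrange(BATTERY_LEVELS),
                     random.randrange(QUEUE_LEVELS)) for _ in range(NUM_DEVICES))
    channel = random.randrange(2)    # good/bad channel (assumed binary)
    waypoint = random.randrange(8)   # waypoint index along a fixed trajectory
    return (devices, channel, waypoint)

def step(state, action):
    """Stand-in for the real dynamics: returns (next_state, reward).
    The reward penalizes empty batteries and full queues, a crude proxy
    for the paper's packet-loss objective."""
    next_state = random_state()
    devices, _, _ = next_state
    loss = sum((b == 0) + (q == QUEUE_LEVELS - 1) for b, q in devices)
    return next_state, -float(loss)

Q = defaultdict(float)               # tabular Q-values, keyed by (state, action)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def choose(state):
    """Epsilon-greedy policy: explore with probability EPSILON, else exploit."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = random_state()
for _ in range(10_000):
    action = choose(state)
    next_state, reward = step(state, action)
    # Standard one-step Q-learning backup.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

Even in this toy setup, the joint state space runs into the tens of thousands of configurations and grows exponentially with the number of devices, which is precisely the motivation the abstract gives for replacing the tabular Q-table with a deep Q-network that approximates the Q-function and can act on outdated state information.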
Keywords: Unmanned aerial vehicle; Microwave power transfer; Online resource allocation; Deep reinforcement learning; Markov decision process
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1939-9359 (IEEE Transactions on Vehicular Technology)
DOI: 10.1109/TVT.2019.2945037
Format: application/pdf
Access rights: open access (the record is also marked "metadata only access")
Repository: Repositório Científico do Instituto Politécnico do Porto