SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously
Main Author: | Aragao, Dunfrey |
---|---|
Publication Date: | 2020 |
Other Authors: | Nascimento, Tiago; Mondini, Adriano [UNESP]; Paiva, A. C.; Conci, A.; Braz, G.; Almeida, JDS; Fernandes, LAF |
Document Type: | Conference paper |
Language: | eng |
Source Title: | Repositório Institucional da UNESP |
Full Text: | http://hdl.handle.net/11449/209189 |
Abstract: | One of the fundamental dilemmas of mobile robotics is the use of sensory information to locate an agent in geographic space. In this paper, we developed SpaceYNet, a global relocalization system that predicts the robot's position from a monocular image and helps it avoid unforeseen actions. We incorporated Inception layers into symmetric down-sampling and up-sampling layers to solve depth-scene and 6-DoF pose estimation simultaneously. We also compared SpaceYNet to PoseNet - a state-of-the-art CNN for robot pose regression - in order to evaluate it. The comparison comprised one public dataset and one created in a broad indoor environment. SpaceYNet showed higher overall accuracy than PoseNet. |
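The abstract describes a shared encoder whose down-sampled features feed two outputs: an up-sampling decoder that regresses a per-pixel depth map, and a head that regresses a 6-DoF pose (3D position plus an orientation quaternion, 7 values, as in PoseNet). A minimal NumPy sketch of that dual-output layout follows; the simple pooling/repeat stand-ins and all layer sizes are illustrative assumptions, not the paper's actual Inception-based architecture.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling, a stand-in for one encoder (down-sampling) stage."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x up-sampling, a stand-in for one decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def spaceynet_like_forward(image):
    # Shared encoder: three down-sampling stages.
    feats = image
    for _ in range(3):
        feats = downsample(feats)
    # Depth head: mirror the encoder with three symmetric up-sampling stages,
    # so the depth map matches the input resolution.
    depth = feats
    for _ in range(3):
        depth = upsample(depth)
    # Pose head: flatten the bottleneck and apply a fixed, illustrative
    # linear layer producing 7 values: x, y, z + quaternion (qw, qx, qy, qz).
    flat = feats.ravel()
    rng = np.random.default_rng(0)
    w_pose = rng.standard_normal((7, flat.size)) * 0.01
    pose = w_pose @ flat
    return depth, pose

image = np.random.default_rng(1).random((64, 64))
depth, pose = spaceynet_like_forward(image)
print(depth.shape, pose.shape)  # (64, 64) (7,)
```

Both outputs come from the same bottleneck features, which is the sense in which the pose and depth-scene regressions are solved "simultaneously"; in the real network each stage would be a learned convolutional/Inception block rather than fixed pooling.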
id |
UNSP_10c7a576b30c9917d7a39e301d9e3538 |
---|---|
oai_identifier_str |
oai:repositorio.unesp.br:11449/209189 |
network_acronym_str |
UNSP |
network_name_str |
Repositório Institucional da UNESP |
repository_id_str |
2946 |
spelling |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously. Keywords: Dataset; depth-scene; pose; regression; robot. Authors: Aragao, Dunfrey; Nascimento, Tiago; Mondini, Adriano [UNESP]; Paiva, A. C.; Conci, A.; Braz, G.; Almeida, JDS; Fernandes, LAF. Affiliations: Univ Fed Paraiba, Joao Pessoa, Paraiba, Brazil; Univ Estadual Paulista, Sao Paulo, Brazil. Publisher: IEEE. Citation: Proceedings Of The 2020 International Conference On Systems, Signals And Image Processing (IWSSIP), 27th Edition. New York: IEEE, p. 217-222, 2020. ISSN 2157-8672. http://hdl.handle.net/11449/209189. WOS:000615731300038. Open access. |
dc.title.none.fl_str_mv |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
title |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
spellingShingle |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously; Aragao, Dunfrey; Dataset; depth-scene; pose; regression; robot |
title_short |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
title_full |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
title_fullStr |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
title_full_unstemmed |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
title_sort |
SpaceYNet: A Novel Approach to Pose and Depth-Scene Regression Simultaneously |
author |
Aragao, Dunfrey |
author_facet |
Aragao, Dunfrey; Nascimento, Tiago; Mondini, Adriano [UNESP]; Paiva, A. C.; Conci, A.; Braz, G.; Almeida, JDS; Fernandes, LAF |
author_role |
author |
author2 |
Nascimento, Tiago; Mondini, Adriano [UNESP]; Paiva, A. C.; Conci, A.; Braz, G.; Almeida, JDS; Fernandes, LAF |
author2_role |
author author author author author author author |
dc.contributor.none.fl_str_mv |
Univ Fed Paraiba; Universidade Estadual Paulista (Unesp) |
dc.contributor.author.fl_str_mv |
Aragao, Dunfrey; Nascimento, Tiago; Mondini, Adriano [UNESP]; Paiva, A. C.; Conci, A.; Braz, G.; Almeida, JDS; Fernandes, LAF |
dc.subject.por.fl_str_mv |
Dataset; depth-scene; pose; regression; robot |
topic |
Dataset; depth-scene; pose; regression; robot |
description |
One of the fundamental dilemmas of mobile robotics is the use of sensory information to locate an agent in geographic space. In this paper, we developed SpaceYNet, a global relocalization system that predicts the robot's position from a monocular image and helps it avoid unforeseen actions. We incorporated Inception layers into symmetric down-sampling and up-sampling layers to solve depth-scene and 6-DoF pose estimation simultaneously. We also compared SpaceYNet to PoseNet - a state-of-the-art CNN for robot pose regression - in order to evaluate it. The comparison comprised one public dataset and one created in a broad indoor environment. SpaceYNet showed higher overall accuracy than PoseNet. |
publishDate |
2020 |
dc.date.none.fl_str_mv |
2020-01-01; 2021-06-25T11:50:57Z; 2021-06-25T11:50:57Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/conferenceObject |
format |
conferenceObject |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
Proceedings Of The 2020 International Conference On Systems, Signals And Image Processing (IWSSIP), 27th Edition. New York: IEEE, p. 217-222, 2020; ISSN 2157-8672; http://hdl.handle.net/11449/209189; WOS:000615731300038 |
identifier_str_mv |
Proceedings Of The 2020 International Conference On Systems, Signals And Image Processing (IWSSIP), 27th Edition. New York: IEEE, p. 217-222, 2020; ISSN 2157-8672; WOS:000615731300038 |
url |
http://hdl.handle.net/11449/209189 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.relation.none.fl_str_mv |
Proceedings Of The 2020 International Conference On Systems, Signals And Image Processing (iwssip), 27th Edition |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
eu_rights_str_mv |
openAccess |
dc.format.none.fl_str_mv |
217-222 |
dc.publisher.none.fl_str_mv |
IEEE |
publisher.none.fl_str_mv |
IEEE |
dc.source.none.fl_str_mv |
Web of Science; reponame:Repositório Institucional da UNESP; instname:Universidade Estadual Paulista (UNESP); instacron:UNESP |
instname_str |
Universidade Estadual Paulista (UNESP) |
instacron_str |
UNESP |
institution |
UNESP |
reponame_str |
Repositório Institucional da UNESP |
collection |
Repositório Institucional da UNESP |
repository.name.fl_str_mv |
Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP) |
repository.mail.fl_str_mv |
|
_version_ |
1808129551300034560 |