Inertial Sensed Ego-motion for 3D Vision

Bibliographic details
Lead author: Lobo, Jorge
Publication date: 2004
Other authors: Dias, Jorge
Document type: Article
Language: English
Source title: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/10316/8156
https://doi.org/10.1002/rob.10122
Abstract: Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system are fused with vision at an early processing stage. In this article we set out a framework for the combination of these two sensing modalities. Cameras can be seen as ray-direction measuring devices, and in the case of stereo vision, depth along the ray can also be computed. The ego-motion can be sensed by the inertial sensors, but there are limitations determined by the sensor noise level. Keeping track of the vertical direction is required so that gravitational acceleration can be compensated for, and it also provides a valuable spatial reference. Results are shown for stereo depth-map alignment using the vertical reference. The depth-map points are mapped to a vertically aligned world frame of reference. To detect the ground plane, a histogram of the point heights is computed. Taking the ground plane as a reference plane for the acquired maps, the fusion of multiple maps reduces to a 2D translation and rotation problem. The dynamic inertial cues can be used as a first approximation for this transformation, allowing a fast depth-map registration method. They also provide an image-independent location of the focus of expansion and center of rotation, useful during vision-based navigation tasks. © 2004 Wiley Periodicals, Inc.
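As a hedged illustration of the height-histogram ground-plane detection the abstract describes, the sketch below finds the dominant height peak in a vertically aligned point cloud. The point data, bin size, and function name are hypothetical choices for this example, not taken from the paper:

```python
import numpy as np

def detect_ground_plane(points, bin_size=0.05):
    """Estimate the ground-plane height from vertically aligned 3D points.

    points: (N, 3) array in a world frame whose z axis is the
    gravity-aligned vertical (as provided by the inertial sensors).
    Returns the z value at the center of the most populated height bin,
    taken as the ground plane.
    """
    z = points[:, 2]
    # Histogram the point heights; the ground plane shows up as the
    # dominant peak because many depth-map points lie on the floor.
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=edges)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])

# Synthetic example: a flat floor at z = 0 plus scattered obstacles.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-2, 2, 500),
                         rng.uniform(-2, 2, 500),
                         rng.normal(0.0, 0.01, 500)])
obstacles = rng.uniform([-2, -2, 0.2], [2, 2, 1.5], (100, 3))
cloud = np.vstack([floor, obstacles])
ground_z = detect_ground_plane(cloud)
```

With the ground height known, points near that plane can be treated as the reference plane, reducing the alignment of successive maps to the 2D translation and rotation problem mentioned above.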
Published in: Journal of Robotic Systems 21:1 (2004) 3-12
Rights: Open access