Vision and distance integrated sensor (Kinect) for an autonomous robot

Bibliographic details
Main author: Ribeiro, Paulo Rogério de Almeida
Other authors: Ribeiro, A. Fernando; Lopes, Gil
Publication date: 2011
Document type: Article
Language: English
Publisher: Universidade do Minho
Keywords: Kinect; Sensors; Autonomous robot
ISSN: 0874-9019
Access: Open access (application/pdf)
Source: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/1822/25743
Abstract: This work presents an application of the Microsoft Kinect camera on an autonomous mobile robot. To drive autonomously, one key requirement is the ability to recognize signalling panels positioned overhead. The Kinect camera suits this task because it integrates two sensors, vision and distance. The vision sensor perceives the signalling panel, while the distance sensor acts as a segmentation filter, eliminating background pixels by their depth. The approach adopted to recognize the symbol on the signalling panel consists of: a) applying the Kinect depth-image filter; b) applying morphological operators to segment the image; c) classifying the result with an Artificial Neural Network, a simple Multilayer Perceptron that correctly classifies the image. Because the Kinect depth sensor is exploited as a filter, computationally heavy algorithms for locating the signalling panels are avoided, which simplifies the subsequent segmentation and classification steps. A mobile autonomous robot equipped with this camera was used to recognize the signalling panels on a competition track of the Portuguese Robotics Open.
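
The three-step pipeline described in the abstract (depth filtering, morphological segmentation, MLP classification) can be illustrated with a minimal sketch. The code below assumes Python with OpenCV, NumPy and scikit-learn; the depth thresholds, patch size and MLP configuration are illustrative assumptions, not the authors' actual implementation, and the classifier would still need to be trained on labelled panel symbols.

import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the paper's MLP

def segment_panel(bgr, depth_mm, near=1500, far=2500):
    # a) Depth filter: keep only pixels whose depth (in mm) lies in the band
    #    where the overhead signalling panel is expected (thresholds are guesses).
    mask = cv2.inRange(depth_mm, near, far)
    # b) Morphological operators: opening removes speckle, closing fills holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Zero out everything outside the depth band in the colour image.
    return cv2.bitwise_and(bgr, bgr, mask=mask)

def classify_symbol(mlp, segmented_bgr, size=(32, 32)):
    # c) Classification: flatten the segmented patch and feed it to a trained MLP.
    gray = cv2.cvtColor(segmented_bgr, cv2.COLOR_BGR2GRAY)
    feat = cv2.resize(gray, size).astype(np.float32).ravel() / 255.0
    return mlp.predict(feat.reshape(1, -1))[0]

# Hypothetical usage: X_train/y_train would be flattened, labelled symbol images.
# mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_train, y_train)
# label = classify_symbol(mlp, segment_panel(bgr_frame, depth_frame))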