NLOOK : a computational attention model for robot vision

Bibliographic details
Main author: Heinen, Milton Roberto
Publication date: 2009
Other authors: Engel, Paulo Martins
Document type: Article
Language: English (eng)
Source title: Repositório Institucional da UFRGS
Full text: http://hdl.handle.net/10183/72579
Abstract: Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front ends to robotic vision systems, such as automatic object recognition and landmark detection. However, these applications have requirements different from those for which the models were originally proposed. More specifically, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image, such as in-plane translations, rotations, reflections, and scalings, and it should select fixation points in scale as well as in position. In this paper a new visual attention model, called NLOOK, is proposed. The model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known, publicly available visual attention models, NVT and SAFE. Moreover, NLOOK selects more accurate fixations than the other attention models, and it can also select the scales of fixations. Thus, the proposed model is a good tool for robot vision systems.
id UFRGS-2_5f1af7b9a6e8b7cd504508c51015aee7
oai_identifier_str oai:www.lume.ufrgs.br:10183/72579
network_acronym_str UFRGS-2
network_name_str Repositório Institucional da UFRGS
dc.title.pt_BR.fl_str_mv NLOOK : a computational attention model for robot vision
dc.contributor.author.fl_str_mv Heinen, Milton Roberto
Engel, Paulo Martins
dc.subject.por.fl_str_mv Inteligência artificial
Visão computacional
dc.subject.eng.fl_str_mv Robot vision
Visual attention
Selective attention
Focus of attention
Biomimetic vision
dc.date.issued.fl_str_mv 2009
dc.date.accessioned.fl_str_mv 2013-06-19T01:43:54Z
dc.type.driver.fl_str_mv info:eu-repo/semantics/article
info:eu-repo/semantics/other
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.identifier.uri.fl_str_mv http://hdl.handle.net/10183/72579
dc.identifier.issn.pt_BR.fl_str_mv 0104-6500
dc.identifier.nrb.pt_BR.fl_str_mv 000733068
dc.language.iso.fl_str_mv eng
dc.relation.ispartof.pt_BR.fl_str_mv Journal of the Brazilian Computer Society. Porto Alegre. Vol. 15, n. 3 (2009 Sept.), p. 3-17
dc.rights.driver.fl_str_mv info:eu-repo/semantics/openAccess
dc.format.none.fl_str_mv application/pdf
dc.source.none.fl_str_mv reponame:Repositório Institucional da UFRGS
instname:Universidade Federal do Rio Grande do Sul (UFRGS)
instacron:UFRGS
bitstream.url.fl_str_mv http://www.lume.ufrgs.br/bitstream/10183/72579/1/000733068.pdf
http://www.lume.ufrgs.br/bitstream/10183/72579/2/000733068.pdf.txt
http://www.lume.ufrgs.br/bitstream/10183/72579/3/000733068.pdf.jpg
bitstream.checksum.fl_str_mv 203d9a1abedd018458cda0c8e8232f68
fe5a339d955441953935e848f6e8ba58
301804ed103a45e481c0fc83c14f7e78
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositório Institucional da UFRGS - Universidade Federal do Rio Grande do Sul (UFRGS)
_version_ 1801224793479970816