Ocular Recognition Using Deep Features for Identity Authentication
Main author: | Vizoni, Marcelo V. [UNESP] |
---|---|
Other authors: | Marana, Aparecido N. [UNESP] |
Publication date: | 2020 |
Document type: | Conference paper |
Language: | English |
Keywords: | convolutional neural networks; deep learning; ocular biometrics; person authentication |
Published in: | International Conference on Systems, Signals, and Image Processing, v. 2020-July, p. 155-160 |
ISSN: | 2157-8702; 2157-8672 |
DOI: | 10.1109/IWSSIP48289.2020.9145418 |
Scopus ID: | 2-s2.0-85089143654 |
Rights: | Open access |
Source: | Repositório Institucional da UNESP |
Full text: | http://dx.doi.org/10.1109/IWSSIP48289.2020.9145418 http://hdl.handle.net/11449/199229 |
Abstract: Ocular biometrics has recently been gaining importance because, in some cases, biometric systems based on characteristics of the whole face perform poorly. This paper presents a new method for person authentication based on ocular deep features, extracted from the ocular region of the face with a very deep CNN (Convolutional Neural Network). Instead of feeding the deep features directly to the authentication system, the method uses the difference between the probe and gallery deep features, adopting a pairwise strategy: a binary support vector machine is trained to determine whether a given difference vector is genuine or impostor. The method was evaluated on the left-ocular set of the UBIPr dataset with five pre-trained CNN architectures. Using the pre-trained VGG-Face, it achieved a state-of-the-art result (3.18% Equal Error Rate).
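The pipeline described in the abstract (deep features extracted from an ocular crop, probe-gallery difference vectors, a binary SVM deciding genuine vs. impostor) can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the feature vectors are random placeholders standing in for CNN embeddings (e.g. from VGG-Face), and the use of the absolute difference is an assumption, since the abstract only states that a difference vector is used.

```python
# Minimal sketch of the pairwise verification strategy described in the abstract.
# Assumes deep features were already extracted from ocular crops by a pre-trained
# CNN; here they are random placeholders. The absolute difference is an assumption.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def build_pairs(features, subject_ids):
    """Build difference vectors and genuine (1) / impostor (0) labels for all pairs."""
    X, y = [], []
    for i, j in combinations(range(len(features)), 2):
        X.append(np.abs(features[i] - features[j]))          # probe-gallery difference
        y.append(1 if subject_ids[i] == subject_ids[j] else 0)
    return np.array(X), np.array(y)

# Toy data standing in for CNN deep features (one 512-D vector per ocular image).
rng = np.random.default_rng(0)
features = rng.normal(size=(40, 512))
subject_ids = np.repeat(np.arange(10), 4)                    # 10 subjects, 4 images each

X, y = build_pairs(features, subject_ids)
svm = SVC(kernel="linear", probability=True).fit(X, y)

# Verification: score a new probe/gallery pair by the SVM's genuine probability.
probe, gallery = features[0], features[1]
score = svm.predict_proba(np.abs(probe - gallery).reshape(1, -1))[0, 1]
print(f"genuine score: {score:.3f}")
```

In an evaluation like the one reported, the Equal Error Rate would be obtained by sweeping a decision threshold over such scores for genuine and impostor pairs of the test set.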