Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks
Principal author: | Proença, H. |
---|---|
Publication date: | 2018 |
Other authors: | Neves, João |
Document type: | Article |
Language: | eng |
Source title: | Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) |
Full text: | http://hdl.handle.net/10400.6/9178 |
Abstract: | This work is based on a disruptive hypothesis for periocular biometrics: in visible-light data, recognition performance is optimized when the components inside the ocular globe (the iris and the sclera) are simply discarded, and the recogniser's response is based exclusively on information from the surroundings of the eye. As a major novelty, we describe a processing chain based on convolutional neural networks (CNNs) that defines, in an implicit way, the regions of interest in the input data that should be privileged, i.e., without masking out any areas in the learning/test samples. By using an ocular segmentation algorithm exclusively on the learning data, we separate the ocular from the periocular parts. Then, we produce a large set of "multi-class" artificial samples by interchanging the periocular and ocular parts from different subjects. These samples are used for data augmentation purposes and feed the learning phase of the CNN, always considering as label the ID of the periocular part. This way, for every periocular region, the CNN receives multiple samples of different ocular classes, forcing it to conclude that such regions should not be considered in its response. During the test phase, samples are provided without any segmentation mask and the network naturally disregards the ocular components, which contributes to improvements in performance. Our experiments were carried out on the full versions of two widely known data sets (UBIRIS.v2 and FRGC) and show that the proposed method consistently advances the state-of-the-art performance in the closed-world setting, reducing the EERs by about 82% (UBIRIS.v2) and 85% (FRGC) and improving the Rank-1 by over 41% (UBIRIS.v2) and 12% (FRGC). |
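The core augmentation idea in the abstract — pasting the ocular region of one subject into the periocular surroundings of another, while keeping the label of the periocular subject — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, image sizes, and the rectangular mask are hypothetical, and in the paper the ocular mask comes from a segmentation algorithm.

```python
import numpy as np

def swap_ocular_region(periocular_img, ocular_img, ocular_mask):
    """Compose an artificial training sample: the periocular
    surroundings come from one subject, the ocular-globe pixels
    (iris and sclera) from another. The label assigned to the
    composite is always the ID of the periocular subject, so the
    CNN learns that the ocular region carries no identity signal.

    ocular_mask: boolean (H, W) array, True inside the ocular globe.
    """
    composite = periocular_img.copy()
    composite[ocular_mask] = ocular_img[ocular_mask]
    return composite

# Toy example with random "images" (H x W x 3).
h, w = 64, 96
img_a = np.random.rand(h, w, 3)   # subject A: periocular donor (keeps label "A")
img_b = np.random.rand(h, w, 3)   # subject B: ocular donor
mask = np.zeros((h, w), dtype=bool)
mask[24:40, 36:60] = True         # hypothetical ocular-globe region

sample = swap_ocular_region(img_a, img_b, mask)
label = "A"                       # always the ID of the periocular part
```

Repeating this for many ocular donors per periocular region yields the "multi-class" artificial samples used for data augmentation.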
id |
RCAP_49ff7eeebae43952fe4f75de136dd7ea |
---|---|
oai_identifier_str |
oai:ubibliorum.ubi.pt:10400.6/9178 |
network_acronym_str |
RCAP |
network_name_str |
Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) |
repository_id_str |
7160 |
dc.title.none.fl_str_mv |
Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks |
author |
Proença, H. |
author2 |
Neves, João |
dc.contributor.none.fl_str_mv |
uBibliorum |
dc.subject.por.fl_str_mv |
Periocular recognition; Soft Biometrics; Visual Surveillance; Homeland Security |
publishDate |
2018 |
dc.date.none.fl_str_mv |
2018; 2018-01-01T00:00:00Z; 2020-02-10T14:53:13Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/article |
dc.identifier.uri.fl_str_mv |
http://hdl.handle.net/10400.6/9178 |
dc.language.iso.fl_str_mv |
eng |
dc.relation.none.fl_str_mv |
10.1109/TIFS.2017.2771230 |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.format.none.fl_str_mv |
application/pdf |
dc.source.none.fl_str_mv |
reponame: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos); instname: Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação; instacron: RCAAP |