Automatic glaucoma screening with low-cost devices

Bibliographic details
Main author: Neto, Alexandre Henrique da Costa
Publication date: 2021
Document type: Master's dissertation
Language: eng
Source title: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/10348/10401
Abstract: Glaucoma is a silent disease that shows no symptoms until it is too late, causing irreversible blindness. Broader screening programs are limited by the need for specialised teams and the high cost of equipment. Current machine learning methods can help glaucoma screening, lower its cost, and extend it to larger populations. In less developed countries, medical centres capable of screening for glaucoma are scarce. Low-cost lenses attached to mobile devices can broaden screening coverage, alerting patients earlier so they can go to a medical centre for a more thorough evaluation. These devices can capture images of sufficient quality and run machine learning models (e.g. convolutional neural networks, CNNs) reported by the scientific community to achieve excellent results in medical imaging. However, these solutions are not yet in use and need further study. In this work, we explore and compare the contributions of state-of-the-art classification and segmentation CNN methods for automatic glaucoma screening with retinal images acquired both by retinographers and by low-cost lenses attached to mobile devices. We used classification methods based on the Xception, ResNet152 V2 and Inception ResNet V2 models. To support the glaucoma classifiers' predictions, we produced and analysed the models' activation maps, allowing specialists to understand and trust the results achieved. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR) criterion, a clear indicator for specialists. For this reason, we additionally used the U-Net architecture with the Inception ResNet V2 and Inception V3 models as the backbone to segment the optic disc and cup and estimate the CDR. For both tasks, the models were trained and evaluated with high-quality retinal images from public databases (RIM-ONE, DRISHTI-GS and REFUGE). The classification models were additionally trained and evaluated with a private dataset of low-quality retinal images acquired with a smartphone coupled to a low-cost lens. The classification models achieved performance comparable to the state of the art, and producing activation maps adds interpretability when analysing and discussing the output, making it easier for the ophthalmologist to understand the model's decision. Optic disc (OD) and cup segmentation on high-quality images (public datasets) reached high performance, and glaucoma classification through the CDR reached state-of-the-art results, confirming the significance of the CDR indicator for detecting glaucoma. The representations of the OD and cup outlines help clinicians perform an easier examination based on the CDR criterion. The same classification methods were applied to the low-quality private database with slightly lower results, nevertheless supporting the use of cheaper lenses attached to mobile devices. This type of lens can expand mass screening and make it more accessible in remote areas.
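As a rough illustration of the classification setup described in the abstract, the sketch below builds a binary glaucoma classifier on top of an ImageNet-pretrained Xception backbone in TensorFlow/Keras. The input size, frozen backbone, dropout rate and training configuration are illustrative assumptions, not the thesis' exact recipe.

import tensorflow as tf

def build_glaucoma_classifier(input_shape=(299, 299, 3)) -> tf.keras.Model:
    # ImageNet-pretrained Xception backbone without its classification head.
    base = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the backbone for the first training stage
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # predicted glaucoma probability
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

The same pattern would apply to the ResNet152 V2 and Inception ResNet V2 models mentioned in the abstract by swapping the tf.keras.applications constructor.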
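The abstract attributes interpretability to the models' activation maps. One common way to obtain such maps is Grad-CAM; the record does not state the exact method used, so the following is a generic, hypothetical sketch. The conv_layer_name argument is assumed to be the name of a late convolutional layer of the chosen backbone.

import numpy as np
import tensorflow as tf

def activation_map(model: tf.keras.Model, image: np.ndarray,
                   conv_layer_name: str) -> np.ndarray:
    """Grad-CAM-style heatmap in [0, 1] at feature-map resolution for one preprocessed image."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.input, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(image[np.newaxis, ...])
        score = pred[:, 0]                        # predicted glaucoma probability
    grads = tape.gradient(score, conv_out)        # gradient of the score w.r.t. feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))  # per-channel weights (global average of gradients)
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))[0]
    cam = cam / (tf.reduce_max(cam) + 1e-8)       # normalise to [0, 1]
    return cam.numpy()

The low-resolution heatmap would then be upsampled to the input size and overlaid on the retinal image for the ophthalmologist to inspect.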
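The CDR criterion itself is straightforward once binary optic-disc and cup masks are available from the segmentation model. Below is a minimal sketch assuming NumPy masks and vertical diameters; the 0.6 decision threshold is a commonly cited rule of thumb, not a value taken from the thesis.

import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Height in pixels of the foreground (non-zero) region of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: cup height divided by disc height."""
    disc_h = vertical_diameter(disc_mask)
    cup_h = vertical_diameter(cup_mask)
    return cup_h / disc_h if disc_h else float("nan")

# Illustrative screening rule (threshold is an assumption, not from the thesis):
def is_glaucoma_suspect(disc_mask: np.ndarray, cup_mask: np.ndarray,
                        threshold: float = 0.6) -> bool:
    return vertical_cdr(disc_mask, cup_mask) >= threshold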
id RCAP_19a42bea73837efd73de66306dec510d
oai_identifier_str oai:repositorio.utad.pt:10348/10401
network_acronym_str RCAP
network_name_str Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
repository_id_str 7160
dc.title.none.fl_str_mv Automatic glaucoma screening with low-cost devices
title Automatic glaucoma screening with low-cost devices
author Neto, Alexandre Henrique da Costa
dc.contributor.author.fl_str_mv Neto, Alexandre Henrique da Costa
dc.subject.por.fl_str_mv Machine learning
glaucoma screening
publishDate 2021
dc.date.none.fl_str_mv 2021-05-21T08:54:22Z
2021-03-24T00:00:00Z
2021-03-24
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.driver.fl_str_mv info:eu-repo/semantics/masterThesis
format masterThesis
status_str publishedVersion
dc.identifier.uri.fl_str_mv http://hdl.handle.net/10348/10401
url http://hdl.handle.net/10348/10401
dc.language.iso.fl_str_mv eng
language eng
dc.rights.driver.fl_str_mv info:eu-repo/semantics/openAccess
eu_rights_str_mv openAccess
dc.format.none.fl_str_mv application/pdf
application/pdf
application/pdf
dc.source.none.fl_str_mv reponame:Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
instname:Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
instacron:RCAAP
instname_str Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
instacron_str RCAAP
institution RCAAP
reponame_str Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
collection Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
repository.name.fl_str_mv Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) - Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
repository.mail.fl_str_mv
_version_ 1799137134307704832