Diabetic retinopathy detection based on deep learning

Bibliographic details
Main author: Zago, Gabriel Tozatto
Publication date: 2019
Document type: Doctoral thesis
Language: por
Source title: Repositório Institucional da Universidade Federal do Espírito Santo (riUfes)
Full text: http://repositorio.ufes.br/handle/10/13661
Abstract: Detecting the early signs of diabetic retinopathy (DR) is essential, as timely treatment might reduce or even prevent vision loss. Moreover, automatically localizing the regions of the retinal image that might contain lesions can assist specialists in the detection task. At the same time, poor-quality retinal images do not allow an accurate medical diagnosis, and it is inconvenient for a patient to return to a medical center to repeat the fundus photography exam. In this thesis, we argue that it is possible to propose a pipeline based on quality assessment and red lesion localization that achieves automatic DR detection with performance similar to that of experts, under the assumption that a rough segmentation is sufficient to produce a discriminant marker of a lesion. A robust automatic system is proposed to assess the quality of retinal images, aimed at assisting health care professionals during a fundus photography exam. We propose a convolutional neural network (CNN) pretrained on non-medical images for extracting general image features. The weights of the CNN are further adjusted via a fine-tuning procedure, resulting in a high-performing classifier trained with only a small quantity of labeled images. We also designed a lesion localization model using a deep-network patch-based approach. Our goal was to reduce the complexity of the implementation while improving its performance. For this purpose, we designed an efficient procedure (comprising two convolutional neural network models) for selecting the training patches, such that challenging examples receive special attention during the training process. Using the region labeling, a DR decision can be assigned to the original image without the need for additional training. Our patch-based approach allows the model to be trained with only 28 images, achieving results similar to those of works that used over a million labeled images. The CNN performance for quality assessment was evaluated on two publicly available databases (DRIMDB and ELSA-Brasil) using two different procedures: intra-database and inter-database cross-validation. The CNN achieves an area under the receiver operating characteristic curve (AUC) of 99.98% on DRIMDB and an AUC of 98.56% on ELSA-Brasil in the inter-database experiment, in which training and testing were not performed on the same database. These results suggest that the proposed model is robust to images from different acquisition devices without requiring special adaptation, making it a good candidate for use in operational clinical scenarios. The lesion localization model was trained on the Standard Diabetic Retinopathy Database, Calibration Level 1 (DIARETDB1) and tested on several databases (including Messidor) without any further adaptation. It reaches an AUC of 0.912 (95% CI 0.897-0.928) for DR screening and a sensitivity of 0.940 (95% CI 0.921-0.959). These values are competitive with other state-of-the-art approaches. The results suggest that the stated hypothesis is confirmed.
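As an illustration of the transfer-learning step described in the abstract, the minimal sketch below shows how a CNN pretrained on non-medical (ImageNet) images can be fine-tuned for the two-class retinal image quality task. It assumes a PyTorch implementation; the ResNet-50 backbone, learning rate, and number of epochs are illustrative assumptions and are not taken from the thesis.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Sketch only: reuse a CNN pretrained on non-medical images (ImageNet)
    # as a general feature extractor and fine-tune it on a small set of
    # labeled retinal images for quality assessment (gradable vs. ungradable).
    # The ResNet-50 backbone and the hyperparameters are assumptions.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: good / poor quality

    # Fine-tuning: all pretrained weights are further adjusted, but with a
    # small learning rate so the general features are only gently updated.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def fine_tune(loader, epochs=10):
        model.train()
        for _ in range(epochs):
            for images, labels in loader:  # small labeled training set
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()

Likewise, the following sketch illustrates the patch-based idea of deriving an image-level DR decision directly from patch-level lesion scores, without additional image-level training. The patch size, stride, decision rule (maximum patch probability), and threshold are assumptions for illustration only; patch_model stands for an already trained patch classifier returning one red-lesion logit per patch.

    import numpy as np
    import torch

    def lesion_probability_map(image, patch_model, patch=65, stride=32):
        """image: H x W x 3 float32 array; returns a 2-D grid of lesion probabilities."""
        h, w, _ = image.shape
        rows = []
        for y in range(0, h - patch + 1, stride):
            row = []
            for x in range(0, w - patch + 1, stride):
                crop = image[y:y + patch, x:x + patch]
                t = torch.from_numpy(np.ascontiguousarray(crop)).permute(2, 0, 1).unsqueeze(0).float()
                with torch.no_grad():
                    row.append(torch.sigmoid(patch_model(t)).item())
            rows.append(row)
        return np.array(rows)

    def dr_decision(prob_map, threshold=0.5):
        # One sufficiently suspicious region is enough to flag the image.
        return bool(prob_map.max() >= threshold)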
id UFES_0e1e2bfaa093d5d0ff4fccf8015d7068
oai_identifier_str oai:repositorio.ufes.br:10/13661
network_acronym_str UFES
network_name_str Repositório Institucional da Universidade Federal do Espírito Santo (riUfes)
repository_id_str 2108
dc.title.none.fl_str_mv Diabetic retinopathy detection based on deep learning
title.alternative
title Diabetic retinopathy detection based on deep learning
spellingShingle Diabetic retinopathy detection based on deep learning
Zago, Gabriel Tozatto
Imagens de retina
Aprendizado profundo
Retinopatia diabética
Redes neurais convolucionais
Qualidade de imagem
Localização de lesão
Retinal images
Deep learning
Diabetic retinopathy
Convolutional neural networks
Image quality
Lesion localization
subject.br-rjbn
Engenharia Elétrica
title_short Diabetic retinopathy detection based on deep learning
title_full Diabetic retinopathy detection based on deep learning
title_fullStr Diabetic retinopathy detection based on deep learning
title_full_unstemmed Diabetic retinopathy detection based on deep learning
title_sort Diabetic retinopathy detection based on deep learning
author Zago, Gabriel Tozatto
author_facet Zago, Gabriel Tozatto
author_role author
dc.contributor.none.fl_str_mv Andreao, Rodrigo Varejao
https://orcid.org/0000000268005700
http://lattes.cnpq.br/5589662366089944
https://orcid.org/0000000322286751
http://lattes.cnpq.br/8771088249434104
Conci, Aura
https://orcid.org/0000-0003-0782-2501
http://lattes.cnpq.br/5601388085745497
Ciarelli, Patrick Marques
https://orcid.org/0000000331774028
http://lattes.cnpq.br/1267950518719423
Fernandes, Mariana Rampinelli
https://orcid.org/0000-0001-8483-5838
http://lattes.cnpq.br/6481644695559950
Rauber, Thomas Walter
https://orcid.org/0000000263806584
http://lattes.cnpq.br/0462549482032704
dc.contributor.author.fl_str_mv Zago, Gabriel Tozatto
dc.subject.por.fl_str_mv Imagens de retina
Aprendizado profundo
Retinopatia diabética
Redes neurais convolucionais
Qualidade de imagem
Localização de lesão
Retinal images
Deep learning
Diabetic retinopathy
Convolutional neural networks
Image quality
Lesion localization
subject.br-rjbn
Engenharia Elétrica
topic Imagens de retina
Aprendizado profundo
Retinopatia diabética
Redes neurais convolucionais
Qualidade de imagem
Localização de lesão
Retinal images
Deep learning
Diabetic retinopathy
Convolutional neural networks
Image quality
Lesion localization
subject.br-rjbn
Engenharia Elétrica
description Detecting the early signs of diabetic retinopathy (DR) is essential, as timely treatment might reduce or even prevent vision loss. Moreover, automatically localizing the regions of the retinal image that might contain lesions can assist specialists in the detection task. At the same time, poor-quality retinal images do not allow an accurate medical diagnosis, and it is inconvenient for a patient to return to a medical center to repeat the fundus photography exam. In this thesis, we argue that it is possible to propose a pipeline based on quality assessment and red lesion localization that achieves automatic DR detection with performance similar to that of experts, under the assumption that a rough segmentation is sufficient to produce a discriminant marker of a lesion. A robust automatic system is proposed to assess the quality of retinal images, aimed at assisting health care professionals during a fundus photography exam. We propose a convolutional neural network (CNN) pretrained on non-medical images for extracting general image features. The weights of the CNN are further adjusted via a fine-tuning procedure, resulting in a high-performing classifier trained with only a small quantity of labeled images. We also designed a lesion localization model using a deep-network patch-based approach. Our goal was to reduce the complexity of the implementation while improving its performance. For this purpose, we designed an efficient procedure (comprising two convolutional neural network models) for selecting the training patches, such that challenging examples receive special attention during the training process. Using the region labeling, a DR decision can be assigned to the original image without the need for additional training. Our patch-based approach allows the model to be trained with only 28 images, achieving results similar to those of works that used over a million labeled images. The CNN performance for quality assessment was evaluated on two publicly available databases (DRIMDB and ELSA-Brasil) using two different procedures: intra-database and inter-database cross-validation. The CNN achieves an area under the receiver operating characteristic curve (AUC) of 99.98% on DRIMDB and an AUC of 98.56% on ELSA-Brasil in the inter-database experiment, in which training and testing were not performed on the same database. These results suggest that the proposed model is robust to images from different acquisition devices without requiring special adaptation, making it a good candidate for use in operational clinical scenarios. The lesion localization model was trained on the Standard Diabetic Retinopathy Database, Calibration Level 1 (DIARETDB1) and tested on several databases (including Messidor) without any further adaptation. It reaches an AUC of 0.912 (95% CI 0.897-0.928) for DR screening and a sensitivity of 0.940 (95% CI 0.921-0.959). These values are competitive with other state-of-the-art approaches. The results suggest that the stated hypothesis is confirmed.
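The confidence intervals quoted above (e.g., AUC 0.912, 95% CI 0.897-0.928) can be obtained in several ways; the sketch below shows one common option, a case-resampling bootstrap, purely as an illustration. The thesis may use a different interval estimator, and n_boot and the random seed below are arbitrary choices.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
        # Bootstrap the AUC by resampling cases with replacement and taking
        # the empirical (alpha/2, 1 - alpha/2) quantiles as the interval.
        rng = np.random.default_rng(seed)
        y_true, y_score = np.asarray(y_true), np.asarray(y_score)
        n, aucs = len(y_true), []
        while len(aucs) < n_boot:
            idx = rng.integers(0, n, n)          # resample images with replacement
            if len(np.unique(y_true[idx])) < 2:  # need both classes in the sample
                continue
            aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
        lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
        return roc_auc_score(y_true, y_score), (lo, hi)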
publishDate 2019
dc.date.none.fl_str_mv 2019-12-20
2024-05-29T22:11:58Z
2024-05-29T22:11:58Z
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.driver.fl_str_mv info:eu-repo/semantics/doctoralThesis
format doctoralThesis
status_str publishedVersion
dc.identifier.uri.fl_str_mv http://repositorio.ufes.br/handle/10/13661
url http://repositorio.ufes.br/handle/10/13661
dc.language.iso.fl_str_mv por
language por
dc.rights.driver.fl_str_mv info:eu-repo/semantics/openAccess
eu_rights_str_mv openAccess
dc.format.none.fl_str_mv Text
application/pdf
dc.publisher.none.fl_str_mv Universidade Federal do Espírito Santo
BR
Doutorado em Engenharia Elétrica
Centro Tecnológico
UFES
Programa de Pós-Graduação em Engenharia Elétrica
publisher.none.fl_str_mv Universidade Federal do Espírito Santo
BR
Doutorado em Engenharia Elétrica
Centro Tecnológico
UFES
Programa de Pós-Graduação em Engenharia Elétrica
dc.source.none.fl_str_mv reponame:Repositório Institucional da Universidade Federal do Espírito Santo (riUfes)
instname:Universidade Federal do Espírito Santo (UFES)
instacron:UFES
instname_str Universidade Federal do Espírito Santo (UFES)
instacron_str UFES
institution UFES
reponame_str Repositório Institucional da Universidade Federal do Espírito Santo (riUfes)
collection Repositório Institucional da Universidade Federal do Espírito Santo (riUfes)
repository.name.fl_str_mv Repositório Institucional da Universidade Federal do Espírito Santo (riUfes) - Universidade Federal do Espírito Santo (UFES)
repository.mail.fl_str_mv
_version_ 1818368043636490240