UNCERTAINTY IN MACHINE LEARNING: A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS

Bibliographic details
Main author: Barandas, Marília da Silveira Gouveia
Publication date: 2023
Document type: Dissertation
Language: eng
Source title: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/10362/162131
Abstract: Uncertainty is an inevitable and essential aspect of the world we live in and a fundamental aspect of human decision-making. It is no different in the realm of machine learning. Just as humans seek out additional information and perspectives when faced with uncertainty, machine learning models must also be able to account for and quantify the uncertainty in their predictions. However, uncertainty quantification in machine learning models is often neglected. By acknowledging and incorporating uncertainty quantification into machine learning models, we can build more reliable and trustworthy systems that are better equipped to handle the complexity of the world and support clinical decision-making. This thesis addresses the broad issue of uncertainty quantification in machine learning, covering the development and adaptation of uncertainty quantification methods, their integration in the machine learning development pipeline, and their practical application in clinical decision-making. Original contributions include the development of methods to support practitioners in developing more robust and interpretable models, which account for different sources of uncertainty across the core components of the machine learning pipeline: the data, the machine learning model, and its outputs. Moreover, these machine learning models are designed with abstaining capabilities, enabling them to accept or reject predictions based on the level of uncertainty present. This emphasizes the importance of using classification with a rejection option in clinical decision support systems. The effectiveness of the proposed methods was evaluated on databases of physiological signals from medical diagnosis and human activity recognition. The results show that uncertainty quantification is important for more reliable and robust model predictions. By addressing these topics, this thesis aims to improve the reliability and trustworthiness of machine learning models and contribute to fostering the adoption of machine-assisted clinical decision-making. The ultimate goal is to enhance the trust and accuracy of models' predictions and increase transparency and interpretability, ultimately leading to better decision-making across a range of applications.
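The abstaining behaviour described in the abstract (classification with a rejection option) can be illustrated with a minimal Python sketch: predictions whose uncertainty, measured here as the predictive entropy of the class probabilities, exceeds a chosen threshold are rejected and deferred to a clinician. The entropy measure, the 0.5 threshold, and the helper names are illustrative assumptions, not the thesis's actual implementation.

# Minimal sketch, assuming softmax-style class probabilities from any probabilistic classifier.
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities (higher = more uncertain)."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def classify_with_rejection(probs, threshold=0.5):
    """Return the predicted class per sample, or -1 when uncertainty exceeds the threshold."""
    preds = np.argmax(probs, axis=1)
    uncertainty = predictive_entropy(probs)
    preds[uncertainty > threshold] = -1  # -1 marks an abstained (rejected) prediction
    return preds

# Three hypothetical samples: confident, ambiguous, confident.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.90, 0.05]])
print(classify_with_rejection(probs))  # [0, -1, 1]: the ambiguous sample is deferred

In a clinical decision support setting, the rejection threshold would typically be tuned on held-out data to balance coverage against the risk of acting on uncertain predictions.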
id RCAP_f1beff0704c1f31ff3070aed2b6ee0b6
oai_identifier_str oai:run.unl.pt:10362/162131
network_acronym_str RCAP
network_name_str Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
repository_id_str 7160
dc.title.none.fl_str_mv UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
title UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
spellingShingle UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
Barandas, Marília da Silveira Gouveia
Machine learning
Uncertainty quantification
Classification with rejection option
Interpretability
Clinical decision making
Domínio/Área Científica::Engenharia e Tecnologia::Outras Engenharias e Tecnologias
title_short UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
title_full UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
title_fullStr UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
title_full_unstemmed UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
title_sort UNCERTAINTY IN MACHINE LEARNING A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS
author Barandas, Marília da Silveira Gouveia
author_facet Barandas, Marília da Silveira Gouveia
author_role author
dc.contributor.none.fl_str_mv Gamboa, Hugo
RUN
dc.contributor.author.fl_str_mv Barandas, Marília da Silveira Gouveia
dc.subject.por.fl_str_mv Machine learning
Uncertainty quantification
Classification with rejection option
Interpretability
Clinical decision making
Domínio/Área Científica::Engenharia e Tecnologia::Outras Engenharias e Tecnologias
topic Machine learning
Uncertainty quantification
Classification with rejection option
Interpretability
Clinical decision making
Domínio/Área Científica::Engenharia e Tecnologia::Outras Engenharias e Tecnologias
description Uncertainty is an inevitable and essential aspect of the world we live in and a fundamental aspect of human decision-making. It is no different in the realm of machine learning. Just as humans seek out additional information and perspectives when faced with uncertainty, machine learning models must also be able to account for and quantify the uncertainty in their predictions. However, uncertainty quantification in machine learning models is often neglected. By acknowledging and incorporating uncertainty quantification into machine learning models, we can build more reliable and trustworthy systems that are better equipped to handle the complexity of the world and support clinical decision-making. This thesis addresses the broad issue of uncertainty quantification in machine learning, covering the development and adaptation of uncertainty quantification methods, their integration in the machine learning development pipeline, and their practical application in clinical decision-making. Original contributions include the development of methods to support practitioners in developing more robust and interpretable models, which account for different sources of uncertainty across the core components of the machine learning pipeline: the data, the machine learning model, and its outputs. Moreover, these machine learning models are designed with abstaining capabilities, enabling them to accept or reject predictions based on the level of uncertainty present. This emphasizes the importance of using classification with a rejection option in clinical decision support systems. The effectiveness of the proposed methods was evaluated on databases of physiological signals from medical diagnosis and human activity recognition. The results show that uncertainty quantification is important for more reliable and robust model predictions. By addressing these topics, this thesis aims to improve the reliability and trustworthiness of machine learning models and contribute to fostering the adoption of machine-assisted clinical decision-making. The ultimate goal is to enhance the trust and accuracy of models' predictions and increase transparency and interpretability, ultimately leading to better decision-making across a range of applications.
publishDate 2023
dc.date.none.fl_str_mv 2023
2023-01-01T00:00:00Z
2024-01-11T11:20:57Z
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.driver.fl_str_mv info:eu-repo/semantics/masterThesis
format masterThesis
status_str publishedVersion
dc.identifier.uri.fl_str_mv http://hdl.handle.net/10362/162131
url http://hdl.handle.net/10362/162131
dc.language.iso.fl_str_mv eng
language eng
dc.rights.driver.fl_str_mv info:eu-repo/semantics/openAccess
eu_rights_str_mv openAccess
dc.format.none.fl_str_mv application/pdf
dc.source.none.fl_str_mv reponame:Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
instname:Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
instacron:RCAAP
instname_str Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
instacron_str RCAAP
institution RCAAP
reponame_str Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
collection Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
repository.name.fl_str_mv Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) - Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação
repository.mail.fl_str_mv
_version_ 1799138168106123264