Neural and Symbolic AI - mind the gap! Aligning Artificial Neural Networks and Ontologies

Bibliographic details
Main author: Ribeiro, Manuel António de Melo Chinopa de Sousa
Publication date: 2020
Document type: Dissertation (master's thesis)
Language: English (eng)
Source title: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: http://hdl.handle.net/10362/113651
Abstract: Artificial neural networks have been key to solving a variety of different problems. However, neural network models are still essentially regarded as black boxes, since they do not provide any human-interpretable evidence as to why they output a certain result. In this dissertation, we address this issue by leveraging ontologies and building small classifiers that map a neural network’s internal representations to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural networks. Using two image classification problems as a testing ground, we discuss how to map the internal representations of a neural network to the concepts of an ontology, examine whether the results obtained by the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method.
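
The approach summarized in the abstract lends itself to a short illustration: train one small classifier per ontology concept on the activations of an intermediate layer of the network, then combine the detected concepts with an ontology axiom to produce a symbolic justification for the network's output. The sketch below is a minimal, self-contained approximation of that idea, assuming scikit-learn-style probes; extract_activations, the HasWheels/HasEngine concepts, and the toy axiom are hypothetical placeholders, not the setup used in the dissertation.

# Illustrative sketch only (not code from the dissertation): small probe
# classifiers map a network's internal representations to ontology concepts,
# and the detected concepts are combined with a toy axiom
# (HasWheels AND HasEngine -> MotorVehicle) into a symbolic justification.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_activations(model, inputs):
    # Stand-in for reading an intermediate layer of a trained network;
    # here we simply fabricate random 64-dimensional feature vectors.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(inputs), 64))

# Toy dataset: placeholder inputs plus binary labels for two concepts.
inputs = list(range(200))
concept_labels = {
    "HasWheels": np.random.default_rng(1).integers(0, 2, size=200),
    "HasEngine": np.random.default_rng(2).integers(0, 2, size=200),
}

activations = extract_activations(model=None, inputs=inputs)

# One small probe classifier per ontology concept, trained on activations.
concept_probes = {
    concept: LogisticRegression(max_iter=1000).fit(activations, labels)
    for concept, labels in concept_labels.items()
}

def justify(activation_vector):
    # Map an internal representation to concepts, then apply the toy axiom
    # to produce a human-readable, symbolic justification.
    detected = [c for c, probe in concept_probes.items()
                if probe.predict(activation_vector.reshape(1, -1))[0] == 1]
    if {"HasWheels", "HasEngine"} <= set(detected):
        return "MotorVehicle, because " + " and ".join(detected) + " were recognized"
    return "Detected concepts: " + (", ".join(detected) if detected else "none")

print(justify(activations[0]))

In the dissertation itself, the activations would come from the trained image classifiers and the concept labels and axioms from the aligned ontology; the sketch only conveys the overall structure of the mapping-and-justification pipeline.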
Record metadata
Title: Neural and Symbolic AI - mind the gap! Aligning Artificial Neural Networks and Ontologies
Author: Ribeiro, Manuel António de Melo Chinopa de Sousa
Advisor: Leite, João
Subjects: artificial intelligence; neural networks; ontologies; explainable AI; Domínio/Área Científica::Engenharia e Tecnologia::Engenharia Eletrotécnica, Eletrónica e Informática
Dates: published 2020; issued 2021-01; deposited 2021-03-10
Document type: master's thesis (published version)
Format: application/pdf
Identifier (URI): http://hdl.handle.net/10362/113651
OAI identifier: oai:run.unl.pt:10362/113651
Language: English (eng)
Rights: open access
Repository: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos) - Agência para a Sociedade do Conhecimento (UMIC) - FCT - Sociedade da Informação (RCAAP)