Explainable automated pain recognition in cats

| Main author: | Feighelstein, Marcelo |
|---|---|
| Other authors: | Henze, Lea; Meller, Sebastian; Shimshoni, Ilan; Hermoni, Ben; Berko, Michael; Twele, Friederike; Schütter, Alexandra; Dorn, Nora; Kästner, Sabine; Finka, Lauren; Luna, Stelio P. L. [UNESP]; Mills, Daniel S.; Volk, Holger A.; Zamansky, Anna |
| Publication date: | 2023-12-01 (deposited 2023-07-29) |
| Document type: | Article (published version) |
| Language: | English (eng) |
| Citation: | Scientific Reports, v. 13, n. 1, 2023 |
| ISSN: | 2045-2322 |
| DOI: | 10.1038/s41598-023-35846-6 |
| Scopus ID: | 2-s2.0-85160893355 |
| Source: | Repositório Institucional da UNESP (oai:repositorio.unesp.br:11449/247508); indexed in Scopus |
| Full text: | http://dx.doi.org/10.1038/s41598-023-35846-6 http://hdl.handle.net/11449/247508 |
| Rights: | Open access |
| Affiliations: | Information Systems Department, University of Haifa; Faculty of Electrical Engineering, Technion Israel Institute of Technology; Department of Small Animal Medicine and Surgery, University of Veterinary Medicine Hannover; Cats Protection National Cat Centre, Sussex; School of Veterinary Medicine and Animal Science, São Paulo State University (Unesp); School of Life Sciences, Joseph Bank Laboratories, University of Lincoln |

Abstract: Manual tools for pain assessment from facial expressions have been suggested and validated for several animal species. However, facial expression analysis performed by humans is prone to subjectivity and bias, and in many cases also requires special expertise and training. This has led to an increasing body of work on automated pain recognition, which has been addressed for several species, including cats. Even for experts, cats are a notoriously challenging species for pain assessment. A previous study compared two approaches to automated ‘pain’/‘no pain’ classification from cat facial images: a deep learning approach, and an approach based on manually annotated geometric landmarks, reaching comparable accuracy results. However, that study used a very homogeneous dataset of cats, so further research is required to study the generalizability of pain recognition to more realistic settings. This study addresses the question of whether AI models can classify ‘pain’/‘no pain’ in cats in a more realistic (multi-breed, multi-sex) setting using a more heterogeneous and thus potentially ‘noisy’ dataset of 84 client-owned cats. Cats were a convenience sample presented to the Department of Small Animal Medicine and Surgery of the University of Veterinary Medicine Hannover and included individuals of different breeds, ages, and sexes, with varying medical conditions and histories. Cats were scored by veterinary experts using the Glasgow Composite Measure Pain Scale in combination with the well-documented and comprehensive clinical history of those patients; the scoring was then used to train AI models using two different approaches. We show that in this context the landmark-based approach performs better, reaching accuracy above 77% in pain detection as opposed to only above 65% reached by the deep learning approach. Furthermore, we investigated the explainability of this machine recognition by identifying which facial features are important to the models, revealing that the nose and mouth region seems more important for machine pain classification, while the ear region is less important; these findings were consistent across the models and techniques studied here.
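The landmark-based approach mentioned in the abstract can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' pipeline: it assumes the annotated facial landmarks arrive as arrays of (x, y) coordinates, derives simple pairwise-distance features, and trains an off-the-shelf classifier (`RandomForestClassifier` is a stand-in; the study's actual landmark scheme, features, and model are not specified in this record).

```python
# Minimal sketch of a landmark-based 'pain'/'no pain' classifier.
# Assumptions (not from the paper): landmarks arrive as an
# (n_images, n_landmarks, 2) array of (x, y) coordinates, labels as 0/1.
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def landmark_features(landmarks):
    """Turn raw (x, y) landmarks into position- and scale-invariant
    pairwise distances, one feature vector per image."""
    feats = []
    for pts in landmarks:
        # Normalize for position and scale so features compare across cats.
        pts = (pts - pts.mean(axis=0)) / (pts.std() + 1e-8)
        dists = [np.linalg.norm(pts[i] - pts[j])
                 for i, j in combinations(range(len(pts)), 2)]
        feats.append(dists)
    return np.asarray(feats)

# Toy data standing in for annotated cat-face landmarks; the 48-point
# count is an assumption for illustration, not taken from the abstract.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(84, 48, 2))   # 84 cats, 48 landmarks each
y = rng.integers(0, 2, size=84)        # 'pain' (1) / 'no pain' (0)

X = landmark_features(X_raw)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```

On random toy data this prints chance-level accuracy; the point is the shape of the pipeline (normalize, derive geometric features, classify), not any real performance figure.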
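For the deep learning approach that the abstract compares against, a common pattern is fine-tuning a pretrained CNN on face crops. The sketch below is an assumption-laden stand-in (ResNet18 via torchvision with a binary head); the paper's actual architecture and training regime are not given in this record.

```python
# Sketch of the deep-learning alternative: fine-tune a pretrained CNN
# on cat-face crops. ResNet18 is a stand-in architecture; the study's
# actual model and training setup are not specified in the abstract.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 'pain' / 'no pain'

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of (N, 3, 224, 224) face crops."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```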
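The explainability finding (the nose and mouth region being more informative to the model than the ears) can be probed with a simple region-level permutation test: shuffle one region's landmarks across test images and measure the resulting drop in accuracy. The region-to-landmark-index mapping below is hypothetical, and `landmark_features` refers to the earlier sketch; the paper's own explanation techniques may differ.

```python
# Sketch of region-level importance: permute one facial region's
# landmarks at a time and measure the drop in held-out accuracy.
# The region -> landmark-index mapping below is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

REGIONS = {"ears": range(0, 14), "eyes": range(14, 28),
           "nose_mouth": range(28, 48)}  # assumed index layout

def region_importance(X_raw, y, regions, featurize, n_repeats=20, seed=0):
    rng = np.random.default_rng(seed)
    Xtr, Xte, ytr, yte = train_test_split(
        X_raw, y, test_size=0.3, random_state=seed, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(featurize(Xtr), ytr)
    base = clf.score(featurize(Xte), yte)
    drops = {}
    for name, idx in regions.items():
        idx, scores = list(idx), []
        for _ in range(n_repeats):
            Xp = Xte.copy()
            # Shuffle this region's landmarks across test images, breaking
            # their relation to the label while leaving other regions intact.
            Xp[:, idx, :] = Xp[rng.permutation(len(Xp))][:, idx, :]
            scores.append(clf.score(featurize(Xp), yte))
        drops[name] = base - np.mean(scores)
    return drops  # larger drop => region matters more to the model

# Usage with the earlier sketch's data and featurizer:
# region_importance(X_raw, y, REGIONS, landmark_features)
```

A larger accuracy drop when a region is shuffled indicates the model relies more on that region, which is the kind of region-level comparison the abstract reports.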