Automatic pose detection in farm animals

Bibliographic details
Main author: Agudelo, John Fredy Ramirez
Publication date: 2022
Other authors: Montoya, Jose Fernando Guarín; Mazo, Sebastian Bedoya
Document type: preprint
Language: Spanish (spa)
Source title: SciELO Preprints
Full text: https://preprints.scielo.org/index.php/scielo/preprint/view/3705
Abstract: Contextualization: Animals display a wide range of body poses that can be interpreted as information about their health, welfare, or activity level. However, direct observation of these poses is time-consuming and economically unfeasible for farmers. With computer vision techniques, automatic observation systems can be implemented on farms, turning this complicated and costly activity into a viable alternative. Knowledge gap: There is currently no pose estimation model dedicated to farm animals that would enable the development of automatic posture detection systems. Purpose: The objective of this work was to evaluate the performance of a re-trained neural network model for pose detection in several ruminant species and in horses. Methodology: More than ten thousand images of ruminants and horses were downloaded from the ImageNet database. From these, 2000 images of cattle and 591 of other species were selected for re-training and evaluation of the model, respectively. The images were labelled with the COCO Annotator software by manually identifying eight key points on the animals' anatomy in each image. Re-training was carried out with the detectron2 library in Python, and Object Keypoint Similarity (OKS) was used to quantify the precision of the model. Results and conclusions: The OKS index showed that what the model learned for identifying key points in cattle transfers to the same task in other farm animals; horses and buffaloes had the best detection results. In conclusion, a relatively small animal-pose data set allows the generalizability of model inference to be evaluated within the domain (cattle) and outside it (other ruminants and equines). This type of work serves as a baseline for the development of automatic monitoring systems for farm animals.
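As a rough illustration of the workflow described in the abstract (COCO-format keypoint annotations fed into a detectron2 re-training run), the sketch below shows how such a fine-tuning setup is commonly wired up. The dataset names, file paths, keypoint names, and solver settings are assumptions for illustration only; the preprint states only that eight key points were labelled with COCO Annotator and that the model was re-trained with detectron2 and evaluated with Object Keypoint Similarity.

# Minimal sketch (assumptions marked): fine-tuning a COCO keypoint R-CNN with detectron2
# on cattle keypoint annotations exported from COCO Annotator.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Placeholder paths to COCO-format annotation files (not the authors' actual files).
register_coco_instances("cattle_pose_train", {}, "annotations/train.json", "images/train")
register_coco_instances("cattle_pose_val", {}, "annotations/val.json", "images/val")

# Eight anatomical key points; these names are assumed, the preprint only says eight were labelled.
KEYPOINTS = ["nose", "withers", "hip", "tail_base",
             "front_left_hoof", "front_right_hoof",
             "rear_left_hoof", "rear_right_hoof"]
for split in ("cattle_pose_train", "cattle_pose_val"):
    MetadataCatalog.get(split).keypoint_names = KEYPOINTS
    MetadataCatalog.get(split).keypoint_flip_map = [
        ("front_left_hoof", "front_right_hoof"),
        ("rear_left_hoof", "rear_right_hoof"),
    ]

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")        # start from COCO-pretrained weights
cfg.DATASETS.TRAIN = ("cattle_pose_train",)
cfg.DATASETS.TEST = ("cattle_pose_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1                          # single "animal" class (assumption)
cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = len(KEYPOINTS)   # 8 key points instead of COCO's 17
cfg.TEST.KEYPOINT_OKS_SIGMAS = [0.07] * len(KEYPOINTS)       # per-keypoint OKS sigmas (assumed values)
cfg.SOLVER.IMS_PER_BATCH = 2                                 # illustrative solver settings
cfg.SOLVER.BASE_LR = 0.0025
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)        # evaluation with COCOEvaluator reports OKS-based keypoint AP
trainer.resume_or_load(resume=False)
trainer.train()

For reference, OKS for a single instance is computed as sum_i exp(-d_i^2 / (2 s^2 k_i^2)) [v_i > 0] / sum_i [v_i > 0], where d_i is the distance between the predicted and labelled key point, s the object scale, k_i a per-keypoint constant, and v_i the visibility flag; detection precision is then averaged over OKS thresholds.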
Keywords: Deep Learning; Precision Livestock; Automation
Publication date: 2022-03-08
Version: published version
DOI: 10.1590/SciELOPreprints.3705
Article PDF: https://preprints.scielo.org/index.php/scielo/article/view/3705/6856
Rights: Copyright (c) 2022 John Fredy Ramirez Agudelo, Jose Fernando Guarín Montoya, Sebastian Bedoya Mazo
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)
Access: open access
Format: application/pdf
Publisher: SciELO Preprints
Repository: SciELO Preprints (SciELO)
Contact: scielo.submission@scielo.org