Synthesizing realistic human dance motions conditioned by musical data using graph convolutional networks

Bibliographic details
Main author: João Pedro Moreira Ferreira
Publication date: 2020
Document type: Master's dissertation
Language: English (eng)
Source: Repositório Institucional da UFMG
Full text: http://hdl.handle.net/1843/38880
https://orcid.org/0000-0002-8093-9880
Abstract: Synthesizing human motion with learning techniques is an increasingly popular approach to reducing the need for new motion-capture data when producing animations. Learning to move naturally to music, i.e., to dance, is one of the more complex tasks humans often perform effortlessly. Each dance movement is unique, yet such movements maintain the core characteristics of the dance style. Most approaches that address this problem with classical convolutional and recurrent neural models suffer from training and variability issues due to the non-Euclidean geometry of the motion manifold. In this thesis, we design a novel method based on graph convolutional networks (GCNs) to tackle the problem of automatic dance generation from audio. Our method uses an adversarial learning scheme conditioned on the input music to create natural motions that preserve the key movements of different music styles. We evaluate our method with three quantitative metrics for generative models and a user study. The results suggest that the proposed GCN model outperforms the state-of-the-art music-conditioned dance generation method across different experiments. Moreover, our graph-convolutional approach is simpler, easier to train, and capable of generating more realistic motion styles under both qualitative and quantitative metrics. The generated motions also showed a perceptual visual quality comparable to real motion data.
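The abstract describes the core technique only at a high level: treating the skeleton as a graph and conditioning motion features on music. The following minimal sketch (not the dissertation's actual model) illustrates that idea in NumPy: joint features are mixed along skeleton edges by a normalized adjacency matrix, with per-frame audio features concatenated to every joint as conditioning. The 5-joint skeleton, feature sizes, and conditioning scheme are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical 5-joint skeleton: root, spine, head, left arm, right arm.
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
n_joints = 5

# Adjacency with self-loops, symmetrically normalized
# (A_hat = D^{-1/2} (A + I) D^{-1/2}), the standard GCN normalization.
A = np.eye(n_joints)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

rng = np.random.default_rng(0)
pose = rng.standard_normal((n_joints, 3))   # 3-D coordinates per joint
audio = rng.standard_normal(8)              # e.g. 8 audio features per frame

# Conditioning: broadcast the audio vector to every joint and concatenate.
x = np.concatenate([pose, np.tile(audio, (n_joints, 1))], axis=1)  # (5, 11)

# One graph-convolution layer with ReLU; W would be learned, random here.
W = rng.standard_normal((x.shape[1], 16)) * 0.1
h = np.maximum(A_hat @ x @ W, 0.0)

print(h.shape)  # (5, 16): one 16-dim feature per joint, mixed along the skeleton
```

Stacking such layers (and training the weights adversarially against real motion sequences) is the general shape of music-conditioned GCN motion generators; the dissertation itself should be consulted for the actual architecture.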
id: UFMG_76a49c26f79af6253c1e49e0a91f4f2c
oai identifier: oai:repositorio.ufmg.br:1843/38880
network: UFMG (Repositório Institucional da UFMG)
Funding: CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico), FAPEMIG (Fundação de Amparo à Pesquisa do Estado de Minas Gerais), CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior)
dc.title.pt_BR.fl_str_mv Synthesizing realistic human dance motions conditioned by musical data using graph convolutional networks
dc.title.alternative.pt_BR.fl_str_mv Síntese de performances realísticas de dança condicionada a dados musicais utilizando redes convolucionais em grafos
author João Pedro Moreira Ferreira
author_role author
dc.contributor.advisor1.fl_str_mv Erickson Rangel do Nascimento
dc.contributor.advisor1Lattes.fl_str_mv http://lattes.cnpq.br/6900352659470721
dc.contributor.advisor-co1.fl_str_mv Renato José Martins
dc.contributor.referee1.fl_str_mv Diego Roberto Colombo Dias
dc.contributor.referee2.fl_str_mv Marcos de Oliveira Lage Ferreira
dc.contributor.referee3.fl_str_mv Mário Fernando Montenegro Campos
dc.contributor.authorLattes.fl_str_mv http://lattes.cnpq.br/0866273910879686
dc.contributor.author.fl_str_mv João Pedro Moreira Ferreira
contributor_str_mv Erickson Rangel do Nascimento
Renato José Martins
Diego Roberto Colombo Dias
Marcos de Oliveira Lage Ferreira
Mário Fernando Montenegro Campos
dc.subject.por.fl_str_mv Human motion generation
Sound and dance processing
Multi-modal learning
Conditional adversarial nets
Graph convolutional neural networks
dc.subject.other.pt_BR.fl_str_mv Computação – Teses
Movimento humano – Teses
Visão por computador – Teses
Redes neurais convolucionais – Teses
publishDate 2020
dc.date.issued.fl_str_mv 2020-10-30
dc.date.accessioned.fl_str_mv 2021-12-17T20:19:05Z
dc.date.available.fl_str_mv 2021-12-17T20:19:05Z
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.driver.fl_str_mv info:eu-repo/semantics/masterThesis
format masterThesis
status_str publishedVersion
dc.identifier.uri.fl_str_mv http://hdl.handle.net/1843/38880
dc.identifier.orcid.pt_BR.fl_str_mv https://orcid.org/0000-0002-8093-9880
dc.language.iso.fl_str_mv eng
language eng
dc.rights.driver.fl_str_mv info:eu-repo/semantics/openAccess
eu_rights_str_mv openAccess
dc.publisher.none.fl_str_mv Universidade Federal de Minas Gerais
dc.publisher.program.fl_str_mv Programa de Pós-Graduação em Ciência da Computação
dc.publisher.initials.fl_str_mv UFMG
dc.publisher.country.fl_str_mv Brasil
dc.publisher.department.fl_str_mv ICX - DEPARTAMENTO DE CIÊNCIA DA COMPUTAÇÃO
dc.source.none.fl_str_mv reponame:Repositório Institucional da UFMG
instname:Universidade Federal de Minas Gerais (UFMG)
instacron:UFMG
bitstream.url.fl_str_mv https://repositorio.ufmg.br/bitstream/1843/38880/4/license.txt
https://repositorio.ufmg.br/bitstream/1843/38880/3/joao_master_dissertation.pdf
bitstream.checksum.fl_str_mv cda590c95a0b51b4d15f60c9642ca272
59b7579a54e2f9da45a0178151cced47
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
repository.name.fl_str_mv Repositório Institucional da UFMG - Universidade Federal de Minas Gerais (UFMG)