A two-level item response theory model to evaluate automatic speech synthesis and recognition systems

Detalhes bibliográficos
Autor(a) principal: OLIVEIRA, Chaina Santos
Data de Publicação: 2023
Tipo de documento: Tese
Idioma: eng
Título da fonte: Repositório Institucional da UFPE
Texto Completo: https://repositorio.ufpe.br/handle/123456789/52538
Resumo: Automatic speech recognition systems (ASRs) have become popular in different applications. Ideally, ASRs should be tested under different scenarios by adopting diverse speech test data (e.g., diverse sentences and speakers). Relying on audio test data recorded by human speakers is time-consuming. An alternative is to use text-to-speech (TTS) tools to synthesize audio given a set of sentences and virtual speakers. The ASR under test receives the synthesized audio, and the transcription errors are recorded for evaluation. Despite the availability of TTS tools, not all synthesized speeches have the same quality. It is therefore important to evaluate the usefulness of speakers and the relevance of sentences for ASR evaluation. In this work, we propose a two-level Item Response Theory (IRT) model to simultaneously evaluate ASRs, speakers, and sentences, which is original in the literature. IRT is a paradigm from psychometrics for estimating the ability of human respondents based on their responses to items with different levels of difficulty. In the first level of the proposed model, an item is a synthesized speech, a respondent is an ASR system, and each response is the transcription accuracy observed when a synthesized speech is adopted for testing an ASR system. IRT is then used to estimate the difficulty of each synthesized speech as well as the ability of each ASR system. In the second level, the difficulty of each synthesized speech is decomposed into the sentence's difficulty and discrimination and the speaker's quality. The difficulty of a synthesized speech tends to be high when it is generated from a difficult sentence by a bad speaker, and sentences with greater discrimination tend to better differentiate between good and bad speakers. In turn, an ASR's ability is high when it is robust to difficult speeches.
Before performing the experiments with the two-level IRT model proposed in this work, we executed a preliminary case study to verify the viability of applying IRT in the context of speech evaluation. In this first case study, IRT was applied to evaluate 62 speakers (from four TTS tools) and to characterize the difficulty of 12 different sentences. The experiments provided interesting insights into the relevance of applying IRT to evaluate sentences and speakers, which inspired us to explore other scenarios. We then built the two-level IRT model already introduced and executed the second case study, in which four ASR systems were adopted to transcribe synthesized speeches from 100 benchmark sentences and 75 speakers. The experiments revealed useful insights into how the quality of speech synthesis and recognition can be affected by distinct factors (e.g., sentence difficulty and speaker ability). We also explored the impact of pitch, rate, and noise insertion on parameter estimation and system performance.
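The two-level idea in the abstract can be illustrated with a short sketch. The first function below is a standard 2PL-style logistic item response curve; the second-level decomposition is a hypothetical linear form chosen only for illustration (the thesis's exact parameterization may differ), capturing the stated intuition that a speech is harder when its sentence is difficult and its speaker is bad.

```python
import math

def p_correct(ability, difficulty, discrimination=1.0):
    # 2PL-style IRT curve: probability that a respondent (an ASR system)
    # with the given ability responds correctly to an item (a synthesized
    # speech) with the given difficulty.
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

def speech_difficulty(sentence_difficulty, sentence_discrimination, speaker_quality):
    # Hypothetical second-level decomposition: the speech gets harder as the
    # sentence gets harder and as speaker quality drops; the sentence's
    # discrimination scales how strongly speaker quality matters.
    return sentence_difficulty - sentence_discrimination * speaker_quality

# The same sentence read by a good speaker vs. a bad speaker: the bad
# speaker yields the harder synthesized speech.
easy_sentence = -1.0
good = speech_difficulty(easy_sentence, 1.5, speaker_quality=1.0)
bad = speech_difficulty(easy_sentence, 1.5, speaker_quality=-1.0)
assert good < bad
```

Under this sketch, a more able ASR has a higher `p_correct` on every speech, while a high-discrimination sentence spreads good and bad speakers further apart on the difficulty scale.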
id UFPE_6bb3116611453ab232d6b33215d8b9d1
oai_identifier_str oai:repositorio.ufpe.br:123456789/52538
network_acronym_str UFPE
network_name_str Repositório Institucional da UFPE
repository_id_str 2221
spelling OLIVEIRA, Chaina Santos; PRUDÊNCIO, Ricardo Bastos Cavalcante (advisor). A two-level item response theory model to evaluate automatic speech synthesis and recognition systems. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023-06-19. https://repositorio.ufpe.br/handle/123456789/52538. Funding: FACEPE.
dc.title.pt_BR.fl_str_mv A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
title A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
spellingShingle A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
OLIVEIRA, Chaina Santos
Inteligência computacional
Benchmark de fala
Reconhecimento da fala
title_short A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
title_full A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
title_fullStr A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
title_full_unstemmed A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
title_sort A two-level item response theory model to evaluate automatic speech synthesis and recognition systems
author OLIVEIRA, Chaina Santos
author_facet OLIVEIRA, Chaina Santos
author_role author
dc.contributor.authorLattes.pt_BR.fl_str_mv http://lattes.cnpq.br/8883571259444620
dc.contributor.advisorLattes.pt_BR.fl_str_mv http://lattes.cnpq.br/2984888073123287
dc.contributor.author.fl_str_mv OLIVEIRA, Chaina Santos
dc.contributor.advisor1.fl_str_mv PRUDÊNCIO, Ricardo Bastos Cavalcante
contributor_str_mv PRUDÊNCIO, Ricardo Bastos Cavalcante
dc.subject.por.fl_str_mv Inteligência computacional
Benchmark de fala
Reconhecimento da fala
topic Inteligência computacional
Benchmark de fala
Reconhecimento da fala
description Automatic speech recognition systems (ASRs) have become popular in different applications. Ideally, ASRs should be tested under different scenarios by adopting diverse speech test data (e.g., diverse sentences and speakers). Relying on audio test data recorded by human speakers is time-consuming. An alternative is to use text-to-speech (TTS) tools to synthesize audio given a set of sentences and virtual speakers. The ASR under test receives the synthesized audio, and the transcription errors are recorded for evaluation. Despite the availability of TTS tools, not all synthesized speeches have the same quality. It is therefore important to evaluate the usefulness of speakers and the relevance of sentences for ASR evaluation. In this work, we propose a two-level Item Response Theory (IRT) model to simultaneously evaluate ASRs, speakers, and sentences, which is original in the literature. IRT is a paradigm from psychometrics for estimating the ability of human respondents based on their responses to items with different levels of difficulty. In the first level of the proposed model, an item is a synthesized speech, a respondent is an ASR system, and each response is the transcription accuracy observed when a synthesized speech is adopted for testing an ASR system. IRT is then used to estimate the difficulty of each synthesized speech as well as the ability of each ASR system. In the second level, the difficulty of each synthesized speech is decomposed into the sentence's difficulty and discrimination and the speaker's quality. The difficulty of a synthesized speech tends to be high when it is generated from a difficult sentence by a bad speaker, and sentences with greater discrimination tend to better differentiate between good and bad speakers. In turn, an ASR's ability is high when it is robust to difficult speeches.
Before performing the experiments with the two-level IRT model proposed in this work, we executed a preliminary case study to verify the viability of applying IRT in the context of speech evaluation. In this first case study, IRT was applied to evaluate 62 speakers (from four TTS tools) and to characterize the difficulty of 12 different sentences. The experiments provided interesting insights into the relevance of applying IRT to evaluate sentences and speakers, which inspired us to explore other scenarios. We then built the two-level IRT model already introduced and executed the second case study, in which four ASR systems were adopted to transcribe synthesized speeches from 100 benchmark sentences and 75 speakers. The experiments revealed useful insights into how the quality of speech synthesis and recognition can be affected by distinct factors (e.g., sentence difficulty and speaker ability). We also explored the impact of pitch, rate, and noise insertion on parameter estimation and system performance.
publishDate 2023
dc.date.accessioned.fl_str_mv 2023-09-29T17:12:13Z
dc.date.available.fl_str_mv 2023-09-29T17:12:13Z
dc.date.issued.fl_str_mv 2023-06-19
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.driver.fl_str_mv info:eu-repo/semantics/doctoralThesis
format doctoralThesis
status_str publishedVersion
dc.identifier.citation.fl_str_mv OLIVEIRA, Chaina Santos. A two-level item response theory model to evaluate automatic speech synthesis and recognition systems. 2023. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023.
dc.identifier.uri.fl_str_mv https://repositorio.ufpe.br/handle/123456789/52538
identifier_str_mv OLIVEIRA, Chaina Santos. A two-level item response theory model to evaluate automatic speech synthesis and recognition systems. 2023. Tese (Doutorado em Ciência da Computação) – Universidade Federal de Pernambuco, Recife, 2023.
url https://repositorio.ufpe.br/handle/123456789/52538
dc.language.iso.fl_str_mv eng
language eng
dc.rights.driver.fl_str_mv Attribution-NonCommercial-NoDerivs 3.0 Brazil
http://creativecommons.org/licenses/by-nc-nd/3.0/br/
info:eu-repo/semantics/embargoedAccess
rights_invalid_str_mv Attribution-NonCommercial-NoDerivs 3.0 Brazil
http://creativecommons.org/licenses/by-nc-nd/3.0/br/
eu_rights_str_mv embargoedAccess
dc.publisher.none.fl_str_mv Universidade Federal de Pernambuco
dc.publisher.program.fl_str_mv Programa de Pos Graduacao em Ciencia da Computacao
dc.publisher.initials.fl_str_mv UFPE
dc.publisher.country.fl_str_mv Brasil
publisher.none.fl_str_mv Universidade Federal de Pernambuco
dc.source.none.fl_str_mv reponame:Repositório Institucional da UFPE
instname:Universidade Federal de Pernambuco (UFPE)
instacron:UFPE
instname_str Universidade Federal de Pernambuco (UFPE)
instacron_str UFPE
institution UFPE
reponame_str Repositório Institucional da UFPE
collection Repositório Institucional da UFPE
bitstream.url.fl_str_mv https://repositorio.ufpe.br/bitstream/123456789/52538/2/license_rdf
https://repositorio.ufpe.br/bitstream/123456789/52538/3/license.txt
https://repositorio.ufpe.br/bitstream/123456789/52538/1/TESE%20Chaina%20Santos%20Oliveira.pdf
https://repositorio.ufpe.br/bitstream/123456789/52538/4/TESE%20Chaina%20Santos%20Oliveira.pdf.txt
https://repositorio.ufpe.br/bitstream/123456789/52538/5/TESE%20Chaina%20Santos%20Oliveira.pdf.jpg
bitstream.checksum.fl_str_mv e39d27027a6cc9cb039ad269a5db8e34
5e89a1613ddc8510c6576f4b23a78973
4f7bae3e0939524ff74c6293c6a9cd05
571453b3c1bd70865f3b7dd6760f6965
6d59da0d266cf0b472e02d29ac3cd9d3
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositório Institucional da UFPE - Universidade Federal de Pernambuco (UFPE)
repository.mail.fl_str_mv attena@ufpe.br
_version_ 1802310783634243584