Sound pressure level prediction from video frames using deep convolutional neural networks
Main author: | Mazza, Leonardo Oliveira |
---|---|
Publication date: | 2019 |
Document type: | Master's thesis |
Language: | eng |
Source title: | Repositório Institucional da UFRJ |
Full text: | http://hdl.handle.net/11422/14030 |
Abstract: | Some CCTV systems do not have microphones. As a result, sound pressure information is not available in such systems. A method to generate traffic sound pressure estimates using solely video frames as input data is presented. To that end, we trained several combinations of models based on pretrained convolutional networks using a dataset that was automatically generated by a single camera with a mono microphone pointing at a busy traffic crossroad with cars, trucks, and motorbikes. For neural network training from that dataset, color images are used as neural network inputs, and true sound pressure level values are used as neural network targets. A correlation of 0.607 in preliminary results suggests that sound pressure level targets are sufficient for convolutional neural networks to detect sound-generating sources within a traffic scene. This hypothesis is tested by evaluating the class activation maps (CAM) of a model with the required global average pooling + fully connected layer structure. We find that the CAM strongly highlights sources that produce large sound pressure values, such as buses, and faintly highlights objects associated with lower sound pressure, such as cars. The neural network with the lowest MSE was cross-validated with 6 folds, and the best model was evaluated on the test set. The best model attained a correlation of approximately 0.6 in three of the test videos and correlations of 0.272 and 0.207 in the other two test videos. The low correlation in the last two videos was associated with a traffic warden who constantly whistles, a characteristic not present in the training set. The overall correlation using the whole test set was 0.647. A correlation of 0.844 with a longer-term (1 minute) sound pressure level (Leq) estimate using all test videos indicates that estimation of longer-term sound pressure levels is less sensitive to sporadic noise in the dataset. |
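The abstract describes a frame-level regression setup: a pretrained convolutional backbone followed by global average pooling and a fully connected layer, trained with color frames as inputs and measured sound pressure levels as targets. The sketch below illustrates that kind of setup in PyTorch; the ResNet-18 backbone, 224×224 input size, Adam optimizer, and MSE loss are illustrative assumptions, not the configuration documented in the thesis.

```python
# Sketch only: the record does not specify the thesis's exact backbone,
# preprocessing, or hyperparameters; ResNet-18 (torchvision >= 0.13), MSE loss,
# and Adam are illustrative choices.
import torch
import torch.nn as nn
from torchvision import models

class SPLRegressor(nn.Module):
    """Pretrained CNN backbone -> global average pooling -> fully connected
    layer producing one scalar: the estimated sound pressure level (dB)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        # Keep everything up to and including the global average pooling layer.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(backbone.fc.in_features, 1)  # single regression output

    def forward(self, x):             # x: (N, 3, H, W) color frames
        z = self.features(x)          # (N, 512, 1, 1) pooled feature vector
        return self.fc(z.flatten(1))  # (N, 1) predicted sound pressure level

model = SPLRegressor()
criterion = nn.MSELoss()              # targets are the measured SPL values per frame
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 224, 224)  # placeholder batch of video frames
spl_db = 60 + 30 * torch.rand(8, 1)   # placeholder SPL targets in dB
optimizer.zero_grad()
loss = criterion(model(frames), spl_db)
loss.backward()
optimizer.step()
```

Because the output is a single scalar per frame produced by a global average pooling + fully connected head, this structure is also what makes the class activation map analysis mentioned in the abstract possible.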
id |
UFRJ_70193d1e0855ff5dc80a7199719e7d1b |
---|---|
oai_identifier_str |
oai:pantheon.ufrj.br:11422/14030 |
network_acronym_str |
UFRJ |
network_name_str |
Repositório Institucional da UFRJ |
repository_id_str |
|
spelling |
Sound pressure level prediction from video frames using deep convolutional neural networks; Predição do nível de pressão sonora a partir de frames de vídeo com redes neurais convolucionais profundas
Keywords: Convolutional neural networks; Traffic noise intensity; Non-linear regression; Nonlinear prediction; CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA
Abstract (translated from the Portuguese resumo): Some CCTV systems do not have microphones. As a result, sound pressure information is not available in such systems. A method to generate sound pressure estimates using only video frames is presented. To that end, 64 combinations of models based on convolutional networks were trained on a dataset generated automatically from the data of a camera with a mono microphone pointed at a crossroad with heavy car, truck, and motorbike traffic. To train the neural networks, color images are used as network inputs and measured sound pressure values are used as network targets. A correlation of 0.607 in initial results suggests that using mean sound pressure values as targets is sufficient for convolutional neural networks to detect the sound-generating sources in a traffic scene. This hypothesis is tested by evaluating the class activation maps (CAM) of a model with the global average pooling + fully connected layer structure. The CAMs strongly highlighted objects associated with high sound pressure values, such as buses, and faintly highlighted objects associated with lower sound pressure levels, such as cars. The model with the lowest MSE was cross-validated with 6 folds, and the best model was evaluated on the test set. That model attained a correlation close to 0.6 in three of the test videos and correlations of 0.272 and 0.207 in the other two test videos. The low correlation was attributed to the constant whistling of a traffic warden present only in those last two videos, a characteristic absent from the training set. The correlation computed jointly over the test data was 0.647. A correlation of 0.844 when using Leq over a longer interval (1 minute) across all test videos indicates that estimation of longer-term sound pressure levels is less sensitive to sporadic noise in the dataset.
Publisher: Universidade Federal do Rio de Janeiro; Brasil; Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia; Programa de Pós-Graduação em Engenharia Elétrica; UFRJ
Contributors: Gomes, José Gabriel Rodriguez Carneiro; http://lattes.cnpq.br/0167354254513842; http://lattes.cnpq.br/8123046464465333; Torres, Julio Cesar Boscher; Haddad, Diego Barreto
Author: Mazza, Leonardo Oliveira
Dates: 2021-04-05T01:56:49Z; 2023-12-21T03:07:35Z; 2019-06
Type/status: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/masterThesis
Identifier: http://hdl.handle.net/11422/14030
Language: eng
Rights: info:eu-repo/semantics/openAccess
Source: reponame:Repositório Institucional da UFRJ; instname:Universidade Federal do Rio de Janeiro (UFRJ); instacron:UFRJ
OAI record: oai:pantheon.ufrj.br:11422/14030; Repositório Institucional; PUB; http://www.pantheon.ufrj.br/oai/request; pantheon@sibi.ufrj.br; opendoar; 2023-12-21T03:07:35; Repositório Institucional da UFRJ - Universidade Federal do Rio de Janeiro (UFRJ); false |
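The class activation map (CAM) analysis mentioned above exploits the global average pooling + fully connected structure: because the scalar prediction is a weighted sum of spatially averaged feature maps, applying the same fully connected weights to the unpooled feature maps localizes the regions that raise the estimated level. Below is a minimal, standalone sketch of that computation; the untrained ResNet-18 and linear head stand in for the thesis model and are assumptions for illustration only.

```python
# Sketch only: standalone illustration of a regression-style CAM for a model
# with a GAP + fully connected head; the untrained ResNet-18 and linear layer
# below are placeholders, not the thesis model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18()                                   # placeholder weights
conv_layers = nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool and fc
fc = nn.Linear(backbone.fc.in_features, 1)                     # scalar SPL head

@torch.no_grad()
def regression_cam(frame):
    """Weight the last convolutional feature maps by the fully connected
    layer's weights to show which regions push the predicted level up."""
    feats = conv_layers(frame.unsqueeze(0)).squeeze(0)   # (512, h, w) feature maps
    cam = (fc.weight.view(-1, 1, 1) * feats).sum(dim=0)  # (h, w) activation map
    cam = F.interpolate(cam[None, None], size=frame.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return cam  # high values ~ regions tied to louder sources (e.g. buses)

heatmap = regression_cam(torch.randn(3, 224, 224))       # one color frame (placeholder)
```

With a trained model, overlaying this map on the frame is what reveals buses as strong contributors and cars as faint ones, as reported in the abstract.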
dc.title.none.fl_str_mv |
Sound pressure level prediction from video frames using deep convolutional neural networks; Predição do nível de pressão sonora a partir de frames de vídeo com redes neurais convolucionais profundas |
title |
Sound pressure level prediction from video frames using deep convolutional neural networks |
spellingShingle |
Sound pressure level prediction from video frames using deep convolutional neural networks; Mazza, Leonardo Oliveira; Convolutional neural networks; Traffic noise intensity; Non-linear regression; Nonlinear prediction; CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA |
title_short |
Sound pressure level prediction from video frames using deep convolutional neural networks |
title_full |
Sound pressure level prediction from video frames using deep convolutional neural networks |
title_fullStr |
Sound pressure level prediction from video frames using deep convolutional neural networks |
title_full_unstemmed |
Sound pressure level prediction from video frames using deep convolutional neural networks |
title_sort |
Sound pressure level prediction from video frames using deep convolutional neural networks |
author |
Mazza, Leonardo Oliveira |
author_facet |
Mazza, Leonardo Oliveira |
author_role |
author |
dc.contributor.none.fl_str_mv |
Gomes, José Gabriel Rodriguez Carneiro; http://lattes.cnpq.br/0167354254513842; http://lattes.cnpq.br/8123046464465333; Torres, Julio Cesar Boscher; Haddad, Diego Barreto |
dc.contributor.author.fl_str_mv |
Mazza, Leonardo Oliveira |
dc.subject.por.fl_str_mv |
Convolutional neural networks; Traffic noise intensity; Non-linear regression; Nonlinear prediction; CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA |
topic |
Convolutional neural networks; Traffic noise intensity; Non-linear regression; Nonlinear prediction; CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA |
description |
Some CCTV systems do not have microphones. As a result, sound pressure information is not available in such systems. A method to generate traffic sound pressure estimates using solely video frames as input data is presented. To that end, we trained several combinations of models based on pretrained convolutional networks using a dataset that was automatically generated by a single camera with a mono microphone pointing at a busy traffic crossroad with cars, trucks, and motorbikes. For neural network training from that dataset, color images are used as neural network inputs, and true sound pressure level values are used as neural network targets. A correlation of 0.607 in preliminary results suggests that sound pressure level targets are sufficient for convolutional neural networks to detect sound-generating sources within a traffic scene. This hypothesis is tested by evaluating the class activation maps (CAM) of a model with the required global average pooling + fully connected layer structure. We find that the CAM strongly highlights sources that produce large sound pressure values, such as buses, and faintly highlights objects associated with lower sound pressure, such as cars. The neural network with the lowest MSE was cross-validated with 6 folds, and the best model was evaluated on the test set. The best model attained a correlation of approximately 0.6 in three of the test videos and correlations of 0.272 and 0.207 in the other two test videos. The low correlation in the last two videos was associated with a traffic warden who constantly whistles, a characteristic not present in the training set. The overall correlation using the whole test set was 0.647. A correlation of 0.844 with a longer-term (1 minute) sound pressure level (Leq) estimate using all test videos indicates that estimation of longer-term sound pressure levels is less sensitive to sporadic noise in the dataset. |
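The evaluation described above uses Pearson correlation between predicted and measured levels, both per frame and after aggregating into 1-minute equivalent continuous levels (Leq). The sketch below shows those two computations under common assumptions (a fixed frame rate, non-overlapping 60 s windows, and the standard energy average Leq = 10·log10(mean(10^(L/10)))); the thesis's exact frame rate and windowing are not given in this record.

```python
# Sketch only: assumes per-frame SPL values in dB at a fixed frame rate and
# non-overlapping 1-minute windows; the thesis's exact aggregation is not
# described in this record.
import numpy as np

def pearson_correlation(pred_db, true_db):
    """Pearson correlation between predicted and measured levels."""
    return np.corrcoef(pred_db, true_db)[0, 1]

def leq(levels_db, frame_rate=30.0, window_s=60.0):
    """Equivalent continuous level per window: energy-average the per-frame
    levels, then convert back to dB."""
    n = int(frame_rate * window_s)
    blocks = [np.asarray(levels_db[i:i + n])
              for i in range(0, len(levels_db) - n + 1, n)]
    return np.array([10.0 * np.log10(np.mean(10.0 ** (b / 10.0))) for b in blocks])

# Per-frame correlation versus longer-term (1-minute Leq) correlation:
pred = np.random.uniform(60, 90, size=30 * 600)        # placeholder predictions (dB)
true = pred + np.random.normal(0, 3, size=pred.shape)  # placeholder measurements (dB)
print(pearson_correlation(pred, true))
print(pearson_correlation(leq(pred), leq(true)))
```

Averaging in the energy domain before correlating is what smooths out sporadic events, which is consistent with the higher correlation (0.844) reported for the 1-minute Leq estimates.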
publishDate |
2019 |
dc.date.none.fl_str_mv |
2019-06; 2021-04-05T01:56:49Z; 2023-12-21T03:07:35Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/masterThesis |
format |
masterThesis |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
http://hdl.handle.net/11422/14030 |
url |
http://hdl.handle.net/11422/14030 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
eu_rights_str_mv |
openAccess |
dc.publisher.none.fl_str_mv |
Universidade Federal do Rio de Janeiro; Brasil; Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia; Programa de Pós-Graduação em Engenharia Elétrica; UFRJ |
publisher.none.fl_str_mv |
Universidade Federal do Rio de Janeiro; Brasil; Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia; Programa de Pós-Graduação em Engenharia Elétrica; UFRJ |
dc.source.none.fl_str_mv |
reponame:Repositório Institucional da UFRJ; instname:Universidade Federal do Rio de Janeiro (UFRJ); instacron:UFRJ |
instname_str |
Universidade Federal do Rio de Janeiro (UFRJ) |
instacron_str |
UFRJ |
institution |
UFRJ |
reponame_str |
Repositório Institucional da UFRJ |
collection |
Repositório Institucional da UFRJ |
repository.name.fl_str_mv |
Repositório Institucional da UFRJ - Universidade Federal do Rio de Janeiro (UFRJ) |
repository.mail.fl_str_mv |
pantheon@sibi.ufrj.br |
_version_ |
1815456013826916352 |