Train me if you can: decentralized learning on the deep edge

Bibliographic details
Main author: Costa, Diogo André Veiga
Publication date: 2022
Other authors: Costa, Miguel Ângelo Peixoto; Pinto, Sandro
Document type: Article
Language: English
Source: Repositório Científico de Acesso Aberto de Portugal (Repositórios Científicos)
Full text: https://hdl.handle.net/1822/79831
Abstract: The end of Moore's Law, combined with data privacy concerns, is forcing machine learning (ML) to shift from the cloud to the deep edge. In next-generation ML systems, inference and part of the training process will be performed at the edge, while the cloud remains responsible for major updates. This computing paradigm, called federated learning (FL), relieves the cloud and network infrastructure while increasing data privacy. Recent advances have enabled the inference pass of quantized artificial neural networks (ANNs) on Arm Cortex-M and RISC-V microcontroller units (MCUs). Nevertheless, training remains confined to the cloud, imposing the transmission of high volumes of private data over a network and leading to unpredictable delays when ML applications attempt to adapt to adversarial environments. To fill this gap, we make the first attempt to evaluate the feasibility of ANN training on Arm Cortex-M MCUs. Among the available optimization algorithms, stochastic gradient descent (SGD) offers the best trade-off between accuracy, memory footprint, and latency. However, its original form and the variants available in the literature still do not fit the stringent requirements of Arm Cortex-M MCUs. We propose L-SGD, a lightweight implementation of SGD optimized for maximum speed and minimal memory footprint on this class of MCUs. We developed a floating-point version and another that operates over quantized weights. For a fully connected ANN trained on the MNIST dataset, L-SGD (float-32) is 4.20× faster than SGD while requiring only 2.80% of the memory, with negligible accuracy loss. Results also show that quantized training is still unfeasible for training an ANN from scratch, but it is a lightweight solution for performing minor model fixes and counteracting the fairness problem in typical FL systems.
Keywords: Federated learning; Machine learning; Artificial neural networks; Artificial intelligence; Machine learning algorithms; Intelligent systems; Internet of things; Arm Cortex-M; Science & Technology
Affiliation: Universidade do Minho
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
Citation: Costa, D.; Costa, M.; Pinto, S. Train Me If You Can: Decentralized Learning on the Deep Edge. Appl. Sci. 2022, 12, 4653. https://doi.org/10.3390/app12094653
ISSN: 2076-3417
DOI: 10.3390/app12094653
Article URL: https://www.mdpi.com/2076-3417/12/9/4653
Access: Open access
Funding: This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020. This work has also been supported by FCT within the PhD Scholarship Project Scope: SFRH/BD/146780/2019.
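
Note on the method (illustrative): the abstract centers on stochastic gradient descent, whose core operation is the weight update w ← w − η·∂L/∂w. The sketch below shows what a plain, unoptimized SGD update for one fully-connected layer might look like in C on a microcontroller; the function name, signature, and memory layout are assumptions made for illustration and are not the authors' L-SGD implementation.

    /* Illustrative sketch only: a vanilla SGD weight update for a single
       fully-connected layer, assuming the error term (delta) for this
       layer's outputs has already been computed by backpropagation. */
    #include <stddef.h>

    /* w: weights, row-major [out][in]; b: biases [out]; x: layer input [in];
       delta: dL/d(pre-activation) for each output [out]; lr: learning rate. */
    static void sgd_update_layer(float *w, float *b, const float *x,
                                 const float *delta, size_t in, size_t out,
                                 float lr)
    {
        for (size_t o = 0; o < out; o++) {
            for (size_t i = 0; i < in; i++) {
                /* dL/dw[o][i] = delta[o] * x[i] */
                w[o * in + i] -= lr * delta[o] * x[i];
            }
            /* dL/db[o] = delta[o] */
            b[o] -= lr * delta[o];
        }
    }

Updating the weights in place as they are read, with no optimizer state beyond the learning rate, is the kind of choice that keeps RAM usage low on Cortex-M-class devices, which is the constraint the paper targets.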