Dynamic hard pruning of Neural Networks at the edge of the internet

Valerio L.; Nardini F.M.; Passarella A.; Perego R.
2022

Abstract

Neural Networks (NN), although successfully applied to several Artificial Intelligence tasks, are often unnecessarily over-parametrized. In edge/fog computing, this can make their training prohibitive on resource-constrained devices, at odds with the current trend of decentralizing intelligence from remote data centres to local constrained devices. We therefore investigate the problem of training effective NN models on constrained devices with a fixed, potentially small, memory budget. We target techniques that are both resource-efficient and performance-effective while enabling significant network compression. Our Dynamic Hard Pruning (DynHP) technique incrementally prunes the network during training, identifying the neurons that contribute only marginally to the model accuracy. DynHP enables a tunable size reduction of the final neural network and reduces the NN memory occupancy during training. The freed memory is reused by a dynamic batch-sizing approach that counterbalances the accuracy degradation caused by the hard pruning strategy, improving its convergence and effectiveness. We assess the performance of DynHP through reproducible experiments on three public datasets, comparing it against reference competitors. Results show that DynHP compresses a NN up to 10 times without significant performance drops (up to 3.5% additional error w.r.t. the competitors), while reducing the training memory occupancy by up to 80%.
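
The abstract describes two cooperating mechanisms: hard pruning of low-contribution neurons during training, and dynamic batch sizing that reinvests the freed memory. The snippet below is a minimal, hypothetical PyTorch sketch of how such a pair of mechanisms could look; the importance score (L2 norm of a neuron's incoming weights), the function names, and the linear batch-growth rule are assumptions made for illustration, not the algorithm published in the paper.

```python
import torch
import torch.nn as nn

def hard_prune_neurons(layer: nn.Linear, keep_ratio: float) -> torch.Tensor:
    """Zero out the least-important output neurons of a linear layer, scored
    here by the L2 norm of their incoming weights (an assumed criterion).
    Returns the binary mask; re-apply it after every optimizer step so that
    pruned neurons stay removed ("hard" pruning)."""
    with torch.no_grad():
        importance = layer.weight.norm(dim=1)              # one score per output neuron
        k = max(1, int(keep_ratio * importance.numel()))   # how many neurons survive
        threshold = importance.topk(k).values.min()        # smallest surviving score
        mask = (importance >= threshold).float()
        layer.weight.mul_(mask.unsqueeze(1))               # prune whole rows (neurons)
        if layer.bias is not None:
            layer.bias.mul_(mask)
    return mask

def next_batch_size(base_batch: int, pruned_fraction: float) -> int:
    """Illustrative dynamic batch sizing: reinvest the memory freed by
    pruning into larger mini-batches (a simple linear rule, assumed here)."""
    return int(base_batch * (1.0 + pruned_fraction))
```

In a training loop, one would call hard_prune_neurons with a progressively smaller keep_ratio and feed the resulting pruned fraction into next_batch_size when building the next data loader, so the overall memory budget stays roughly constant.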
Istituto di informatica e telematica - IIT
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Artificial neural networks
Compression
Pruning
Resource-constrained devices
Files in this record:
  • prod_465627-doc_182875.pdf — Description: Dynamic hard pruning of Neural Networks at the edge of the internet; Type: Published version (PDF); Size: 1.6 MB; Format: Adobe PDF; restricted access (authorised users only).
  • prod_465627-doc_182912.pdf — Description: Preprint - Dynamic hard pruning of Neural Networks at the edge of the internet; Type: Published version (PDF); Size: 1.15 MB; Format: Adobe PDF; open access.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/445442
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science (ISI): 4