Scalable bio-inspired training of Deep Neural Networks with FastHebb

Lagani G.; Falchi F.; Gennaro C.; Amato G.
2024

Abstract

Recent work on sample-efficient training of Deep Neural Networks (DNNs) proposed a semi-supervised methodology based on biologically inspired Hebbian learning, combined with traditional backprop-based training. Promising results were achieved on various computer vision benchmarks in scenarios of scarce labeled data. However, current Hebbian learning solutions can hardly address large-scale scenarios due to their demanding computational cost. To tackle this limitation, in this contribution we investigate a novel solution, named FastHebb (FH), based on the reformulation of Hebbian learning rules in terms of matrix multiplications, which can be executed more efficiently on GPUs. Starting from the Soft-Winner-Takes-All (SWTA) and Hebbian Principal Component Analysis (HPCA) learning rules, we formulate their improved FH versions: SWTA-FH and HPCA-FH. We experimentally show that the proposed approach accelerates training by up to 70 times, allowing us to gracefully scale Hebbian learning experiments to large datasets and network architectures such as ImageNet and VGG.
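The core idea described in the abstract, recasting per-sample Hebbian updates as batched matrix multiplications, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the paper's actual FastHebb implementation: it assumes a common soft-competition form of the SWTA rule, dW_i = lr * r_i * (x - W_i) with r a softmax over neuron responses, and the Sanger-style form of HPCA, dW_i = lr * y_i * (x - sum_{j<=i} y_j W_j); the function names and hyperparameters are hypothetical.

import torch

def swta_fh_update(x, W, temperature=1.0, lr=0.01):
    # Batched SWTA-style Hebbian update, written purely as matrix products.
    # x: (B, D) input batch; W: (K, D) weights of K competing neurons.
    u = x @ W.t()                              # (B, K) neuron responses
    r = torch.softmax(u / temperature, dim=1)  # soft competition over neurons
    # Batch-summed rule dW_i = sum_b r_bi * (x_b - W_i) becomes
    # dW = R^T X - diag(colsum(R)) W: two GEMM-style operations.
    dW = r.t() @ x - r.sum(dim=0, keepdim=True).t() * W
    return W + lr * dW

def hpca_fh_update(x, W, lr=0.01):
    # Batched Sanger-style HPCA update: the cumulative reconstruction term
    # sum_{j<=i} y_j W_j becomes a lower-triangular mask on Y^T Y.
    y = x @ W.t()                 # (B, K) linear responses
    corr = torch.tril(y.t() @ y)  # (K, K), keep only contributions with j <= i
    dW = y.t() @ x - corr @ W
    return W + lr * dW

# Toy usage on random data:
torch.manual_seed(0)
x = torch.randn(128, 784)          # e.g. a batch of flattened 28x28 images
W = 0.01 * torch.randn(100, 784)   # 100 neurons
W = swta_fh_update(x, W)
W = hpca_fh_update(x, W)
print(W.shape)                     # torch.Size([100, 784])

Summing each rule over the batch is what removes the per-sample, per-neuron loops: every update reduces to a handful of dense matrix products, which is exactly the workload GPUs execute efficiently.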
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Hebbian learning
Deep learning
Neural networks
Biologically inspired
Files in this record:

FastHebb___Elsevier.pdf

Open access

Description: This is the Submitted version (preprint) of the following paper: Lagani G. et al., "Scalable bio-inspired training of Deep Neural Networks with FastHebb", 2024, submitted to "Neurocomputing". The final published version is available on the publisher's website: https://www.sciencedirect.com/science/article/pii/S0925231224006386?via=ihub.
Type: Pre-print document
License: Other license type
Size: 499.73 kB
Format: Adobe PDF
1-s2.0-S0925231224006386-main.pdf

Open access

Description: Scalable bio-inspired training of Deep Neural Networks with FastHebb
Type: Publisher's Version (PDF)
License: Creative Commons
Size: 1.68 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/500961
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0