Hebbian semi-supervised learning in a sample efficiency setting
Lagani G.; Falchi F.; Gennaro C.; Amato G.
2021
Abstract
We propose to address the issue of sample efficiency in Deep Convolutional Neural Networks (DCNNs) with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, and the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD). Since Hebbian learning is an unsupervised learning method, its potential lies in the possibility of training the internal layers of a DCNN without labels; only the final fully connected layer has to be trained with labeled examples. We performed experiments on various object recognition datasets, in different regimes of sample efficiency, comparing our semi-supervised approach (Hebbian for internal layers + SGD for the final fully connected layer) with end-to-end supervised backprop training, and with semi-supervised learning based on a Variational Auto-Encoder (VAE). The results show that, in regimes where the number of available labeled samples is low, our semi-supervised approach outperforms the other approaches in almost all cases.
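The abstract describes a two-stage scheme: unsupervised Hebbian pre-training of the internal layers on unlabeled data, followed by supervised SGD training of the final classification layer on the few labeled samples. The following is a minimal sketch of that scheme, not the authors' code: it assumes Oja's rule as the Hebbian update and a small fully connected network with illustrative layer sizes and learning rates, whereas the paper works with convolutional architectures.

```python
# Minimal sketch (not the authors' implementation): Hebbian pre-training of the
# hidden layers with Oja's rule, then SGD training of the final classifier only.
import torch
import torch.nn.functional as F

def oja_update(W, x, lr=0.01):
    """One batch of Oja's-rule Hebbian updates for a linear layer with weights W (out, in)."""
    y = x @ W.t()                                   # layer responses to the inputs
    dW = (y.t() @ x - (y ** 2).sum(0).unsqueeze(1) * W) / x.shape[0]
    return W + lr * dW

torch.manual_seed(0)
x_unlab = torch.randn(512, 784)                     # unlabeled inputs (e.g. flattened images)
x_lab = torch.randn(64, 784)                        # a small labeled subset
y_lab = torch.randint(0, 10, (64,))

# 1) Unsupervised Hebbian pre-training of the internal (hidden) layers
W1 = torch.randn(256, 784) * 0.01
W2 = torch.randn(128, 256) * 0.01
for _ in range(10):                                 # a few unsupervised passes
    W1 = oja_update(W1, x_unlab)
    W2 = oja_update(W2, torch.relu(x_unlab @ W1.t()))

# 2) Supervised SGD training of the final fully connected (classification) layer only
clf = torch.nn.Linear(128, 10)
opt = torch.optim.SGD(clf.parameters(), lr=0.1)
for _ in range(50):
    with torch.no_grad():                           # pre-trained hidden features stay frozen
        h = torch.relu(torch.relu(x_lab @ W1.t()) @ W2.t())
    loss = F.cross_entropy(clf(h), y_lab)
    opt.zero_grad()
    loss.backward()
    opt.step()
```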
| File | Description | Type | License | Size | Format | Access |
|---|---|---|---|---|---|---|
| prod_457534-doc_177561.pdf | Preprint - Hebbian semi-supervised learning in a sample efficiency setting | Pre-print | Creative Commons | 1.19 MB | Adobe PDF | Open access (View/Open) |
| prod_457534-doc_177734.pdf | Hebbian semi-supervised learning in a sample efficiency setting | Published version (PDF) | NON-PUBLIC - Private/restricted access | 2.2 MB | Adobe PDF | Authorized users only (View/Open, Request a copy) |
| Hebb_SmplEff___Elsevier-1.pdf | Author Accepted Manuscript (postprint) of: Lagani G. et al., "Hebbian semi-supervised learning in a sample efficiency setting", Neural Networks, Vol. 143, pp. 719-731, 2021. DOI: 10.1016/j.neunet.2021.08.003 | Post-print | No license declared (not attributable to products after 2023) | 1.13 MB | Adobe PDF | Open access (View/Open) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


