Dense Hebbian neural networks: a replica symmetric picture of unsupervised learning

Giannotti F.
2023

Abstract

We consider dense, associative neural networks trained without supervision and investigate their computational capabilities analytically, via statistical-mechanics tools, and numerically, via Monte Carlo simulations. In particular, we obtain a phase diagram that summarizes their performance as a function of the control parameters (e.g., quality and quantity of the training dataset, network storage, noise) and that is valid in the limit of large network size and structureless datasets. Moreover, we establish a bridge between the macroscopic observables standardly used in statistical mechanics and the loss functions typically used in machine learning. On the technical side, analytically we extend Guerra's interpolation to tackle the non-Gaussian distributions involved in the post-synaptic potentials, while computationally we insert Plefka's approximation into the Monte Carlo scheme to speed up the evaluation of the synaptic tensor; overall, this yields a novel and broad approach to investigating unsupervised learning in neural networks beyond the shallow limit.
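To make the setting concrete, the sketch below illustrates the kind of model the abstract describes: a dense P-spin network whose couplings are built, without supervision, from noisy examples via a Hebbian rule, with the neural dynamics sampled by Glauber (single-spin-flip) Monte Carlo. This is a minimal illustration under assumed conventions, not the authors' code: all names, the coupling normalization, and the parameter values are illustrative, and the paper's Plefka-accelerated scheme is not reproduced here. Instead, the sketch sidesteps storing the P-index synaptic tensor by writing the post-synaptic fields in terms of example overlaps.

```python
# Minimal sketch (assumed conventions, not the authors' code): unsupervised
# Hebbian storage in a dense P-spin network, sampled with Glauber Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)

N, K, M, P = 200, 5, 20, 4   # neurons, archetypes, examples per archetype, interaction order
r = 0.8                       # example quality: P(eta_i = xi_i) = (1 + r) / 2

# Ground-truth archetypes xi and noisy, unlabeled examples eta (the training set).
xi = rng.choice([-1, 1], size=(K, N))
flips = np.where(rng.random((K, M, N)) < (1 + r) / 2, 1, -1)
eta = xi[:, None, :] * flips            # eta[mu, a] = a-th noisy copy of archetype mu
examples = eta.reshape(K * M, N)        # unsupervised: a flat, unlabeled pool of examples

def field(sigma, i):
    """Post-synaptic potential h_i for the (assumed) dense Hebbian coupling
    J_{i1..iP} ~ N^{-(P-1)} sum_{mu,a} eta_{i1} ... eta_{iP}, computed via
    overlaps so the P-index tensor is never stored explicitly.
    Self-coupling terms (j = i) are included; they are O(1/N) and negligible."""
    m = examples @ sigma / N            # overlap of the state with each example
    return examples[:, i] @ (m ** (P - 1))

def glauber_sweep(sigma, beta):
    """One Monte Carlo sweep at inverse noise level beta."""
    for i in rng.permutation(N):
        h = field(sigma, i)
        sigma[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1
    return sigma

# Retrieval test: start from a corrupted archetype and let the dynamics relax.
sigma = xi[0] * np.where(rng.random(N) < 0.9, 1, -1)   # ~90% of spins aligned
for _ in range(20):
    sigma = glauber_sweep(sigma, beta=2.0)

print("Mattis magnetization with archetype 0:", xi[0] @ sigma / N)
```

With these (assumed) parameters the dynamics typically relaxes onto the first archetype, i.e., the final Mattis magnetization is close to 1: the overlap trick makes each field evaluation O(KMN) instead of the O(N^{P-1}) cost of contracting the full synaptic tensor, which is the practical bottleneck the paper's Plefka-based scheme addresses by other means.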
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Keywords: Spin glasses; Cost and loss functions; Hebbian learning
Files in this item:

File: prod_485979-doc_201503.pdf
Description: Dense Hebbian neural networks: a replica symmetric picture of unsupervised learning
Type: Editorial version (PDF)
License: NOT PUBLIC - Private/restricted access (authorized users only)
Size: 1.9 MB
Format: Adobe PDF

File: prod_485979-doc_201504.pdf
Description: Preprint - Dense Hebbian neural networks: a replica symmetric picture of unsupervised learning
Type: Pre-print document
License: Creative Commons (open access)
Size: 2.53 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/462971
Citations
  • PMC: ND
  • Scopus: 3
  • Web of Science: 2