
Deterministic design for neural network learning: An approach based on discrepancy

Cristiano Cervellera;Marco Muselli
2004

Abstract

The general problem of reconstructing an unknown function from a finite collection of samples is considered in the case where the position of each input vector in the training set is not fixed beforehand but is part of the learning process. In particular, the consistency of the empirical risk minimization (ERM) principle is analyzed when the points in the input space are generated by a purely deterministic algorithm (deterministic learning). When the output generation is not subject to noise, classical number-theoretic results involving discrepancy and variation allow a sufficient condition for the consistency of the ERM principle to be established. In addition, the adoption of low-discrepancy sequences yields a learning rate of O(1/L), with L being the size of the training set. An extension to the noisy case shows that the good properties of deterministic learning are preserved, provided the level of noise at the output is not too high. Simulation results confirm the validity of the proposed approach.
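The abstract does not specify a particular construction; as an illustrative sketch (not taken from the paper), the Halton sequence is a classical low-discrepancy sequence that could serve as the deterministic input-point generator the abstract refers to. The function names and the choice of prime bases below are illustrative assumptions.

```python
def van_der_corput(index, base):
    """index-th element of the van der Corput sequence in the given base,
    obtained by reflecting the base-b digits of index about the radix point."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_point(index, bases):
    """d-dimensional Halton point: one van der Corput coordinate per (prime) base."""
    return [van_der_corput(index, b) for b in bases]

# Deterministic design of L = 8 training inputs in [0, 1]^2, using bases 2 and 3.
L = 8
inputs = [halton_point(i, [2, 3]) for i in range(1, L + 1)]
```

Unlike i.i.d. random sampling, the points fill the unit cube with low discrepancy by construction, which is the property the paper's O(1/L) learning-rate result relies on.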
Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - IEIIT
Istituto di Studi sui Sistemi Intelligenti per l'Automazione - ISSIA - Sede Bari
Keywords: Deterministic learning; discrepancy; empirical risk minimization (ERM); learning rate; variation
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/146726
Citations
  • PMC: not available
  • Scopus: 48
  • Web of Science: 38