Low-Discrepancy Points for Deterministic Assignment of Hidden Weights in Extreme Learning Machines

Cervellera C.
2016

Abstract

The traditional extreme learning machine (ELM) approach assigns the hidden weight values at random, while the linear coefficients of the output layer are determined analytically. This brief presents an analysis based on the geometric properties of the sampling points used to assign the weight values, investigating the replacement of random generation with low-discrepancy sequences (LDSs). Such sequences form a family of sampling methods commonly employed for numerical integration, which cover multidimensional sets more efficiently than random sequences without requiring any computationally intensive procedure. In particular, we prove that the universal approximation property of the ELM is preserved when LDSs are employed, and show how an efficient covering positively affects convergence. Furthermore, since LDSs are generated deterministically, the results hold deterministically rather than only in probability. Simulation results confirm in practice the good theoretical properties obtained by combining the ELM with LDSs.
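
The abstract describes a simple training pipeline: hidden-layer weights and biases are taken from a low-discrepancy sequence rather than drawn at random, and the output-layer coefficients are then computed analytically by least squares. The sketch below illustrates that idea; it is not the code from the paper, and the Sobol generator, the sigmoid activation, the rescaling of the points to [-1, 1), and the network size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc  # SciPy >= 1.7 provides low-discrepancy generators

def elm_fit_lds(X, y, n_hidden=256):
    """Fit a single-hidden-layer ELM whose hidden weights and biases come
    from a Sobol low-discrepancy sequence instead of random draws."""
    n_samples, n_inputs = X.shape
    # One Sobol point per hidden unit, covering the (n_inputs + 1)-dimensional
    # unit cube of (weights, bias); rescaled from [0, 1) to [-1, 1)
    # (an illustrative choice, not prescribed by the paper).
    sobol = qmc.Sobol(d=n_inputs + 1, scramble=False)
    points = 2.0 * sobol.random(n_hidden) - 1.0
    W, b = points[:, :n_inputs], points[:, n_inputs]
    # Hidden-layer output matrix H with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    # Output-layer coefficients computed analytically by least squares.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

# Toy usage: approximate a smooth 2-D function on [-1, 1]^2.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
W, b, beta = elm_fit_lds(X, y)
print("training MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

The standard ELM baseline studied in the brief would differ only in how W and b are generated (random draws instead of Sobol points); the analytic least-squares solution for the output coefficients is unchanged.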
Istituto di Studi sui Sistemi Intelligenti per l'Automazione - ISSIA - Sede Bari
Istituto di iNgegneria del Mare - INM (ex INSEAN)
Keywords: Discrepancy; Extreme learning machines; Low-discrepancy sequences; Universal approximation
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/316802
Citations
  • PMC: ND
  • Scopus: 18
  • Web of Science: 15