Optimization-based learning with bounded error for feedforward neural networks
2002
Abstract
An optimization-based learning algorithm for feedforward neural networks is presented, in which the network weights are determined by minimizing a sliding-window cost. The algorithm is particularly well-suited for batch learning and allows one to deal with large data sets in a computationally efficient way. An analysis of its convergence and robustness properties is made. Simulation results confirm the effectiveness of the algorithm and its advantages over learning based on backpropagation and the extended Kalman filter.
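The abstract does not specify the optimization method; as a hedged illustration of the general idea, the sketch below fits the weights of a small feedforward network by taking gradient steps on a squared-error cost computed over a sliding window of the most recent samples. The network size, window length, target function, and plain gradient-descent optimizer are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_hidden, n_out):
    """Small random weights for a one-hidden-layer tanh network."""
    return {
        "W1": rng.normal(scale=0.5, size=(n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.5, size=(n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(w, X):
    """Network output for a batch of inputs X (rows are samples)."""
    H = np.tanh(X @ w["W1"].T + w["b1"])
    return H @ w["W2"].T + w["b2"], H

def window_cost_step(w, X_win, Y_win, lr=0.05):
    """One gradient step on the mean squared error over the current window."""
    Y_hat, H = forward(w, X_win)
    E = Y_hat - Y_win                       # residuals on the window
    n = X_win.shape[0]
    # Backpropagate the window cost J = (1/(2n)) * sum ||e||^2
    dW2 = E.T @ H / n
    db2 = E.mean(axis=0)
    dH = (E @ w["W2"]) * (1.0 - H**2)       # tanh derivative
    dW1 = dH.T @ X_win / n
    db1 = dH.mean(axis=0)
    w["W1"] -= lr * dW1; w["b1"] -= lr * db1
    w["W2"] -= lr * dW2; w["b2"] -= lr * db2
    return 0.5 * np.mean(np.sum(E**2, axis=1))

# Stream a data set through a sliding window of fixed length,
# updating the weights as the window advances over the data.
X = rng.uniform(-1, 1, size=(500, 1))
Y = np.sin(np.pi * X)                       # illustrative target function
w = init_weights(1, 10, 1)
window = 50
costs = []
for t in range(window, X.shape[0]):
    costs.append(window_cost_step(w, X[t - window:t], Y[t - window:t]))
```

Because each step minimizes the cost only over the current window, the per-step work is bounded by the window length rather than the full data-set size, which is the computational advantage the abstract alludes to for large data sets.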


