
Approximating networks, dynamic programming and stochastic approximation

Cervellera C;
2000

Abstract

Approximate solution of a general N-stage stochastic optimal control problem is considered. It is known that uniformly discretizing the state components when applying dynamic programming may cause the procedure to incur the 'curse of dimensionality'. Approximating networks, i.e., linear combinations of parametrized basis functions endowed with density properties in some normed linear spaces, are then defined and used in two approximate methods (examples of such networks are neural networks with one hidden layer and linear output activation functions, radial basis functions, etc.). The first method consists of approximating the optimal cost-to-go functions in dynamic programming (a technique known in the literature as 'neuro-dynamic programming'); the second method reduces the original functional optimization problem to a nonlinear programming one that is solved by means of stochastic approximation. Approximating networks of suitable types benefit from the property that, for some classes of regular functions, the number of parameters to be optimized and the number of samples needed to train the networks increase only linearly (or moderately) with the dimension of the functions' arguments. We argue that these properties may enable N-stage stochastic optimal control problems to be solved while often avoiding the curse of dimensionality. The two methods are tested and compared in an example involving a 10-dimensional state vector.
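
To make the notion of an approximating network concrete, the following is a minimal Python sketch (not the paper's code): a one-hidden-layer network with linear output activations, i.e., a linear combination of parametrized sigmoidal basis functions, fitted by least squares to sampled cost-to-go values in the spirit of the first (neuro-dynamic programming) method. All names, sizes, and the choice of fitting only the outer linear coefficients are illustrative assumptions, not details taken from the paper.

# Minimal sketch of an approximating network: a linear combination of
# parametrized sigmoidal basis functions (one hidden layer, linear output).
# Hypothetical example; sizes and fitting choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_network(input_dim, n_basis):
    """Random inner parameters and zero outer (linear) coefficients."""
    W = rng.normal(size=(n_basis, input_dim))   # inner weights of each basis function
    b = rng.normal(size=n_basis)                # inner biases
    c = np.zeros(n_basis)                       # outer linear coefficients
    return {"W": W, "b": b, "c": c}

def basis(net, x):
    """Sigmoidal basis functions evaluated at a state vector x."""
    return np.tanh(net["W"] @ x + net["b"])

def evaluate(net, x):
    """Network output: linear combination of the parametrized basis functions."""
    return net["c"] @ basis(net, x)

def fit_cost_to_go(net, states, targets):
    """Least-squares fit of the outer coefficients to sampled cost-to-go values
    (only the linear outer layer is optimized here, an assumption made for brevity)."""
    Phi = np.array([basis(net, x) for x in states])   # design matrix of basis values
    net["c"], *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return net

# Toy usage on a 10-dimensional state, echoing the example in the abstract.
dim = 10
net = make_network(dim, n_basis=50)
states = rng.normal(size=(200, dim))
targets = np.array([x @ x for x in states])           # placeholder cost samples
fit_cost_to_go(net, states, targets)
print(evaluate(net, states[0]), targets[0])

For the second method described in the abstract, the same kind of network would instead parametrize the control functions, and all parameters would be updated by stochastic gradient (stochastic approximation) steps; that part is omitted from this sketch.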

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/273397
Citations
  • Scopus: 4