
Functional Optimal Estimation Problems and Their Solution by Nonlinear Approximation Schemes

C. Cervellera
2007

Abstract

The design of state estimators for nonlinear dynamic systems affected by disturbances is addressed in a functional optimization framework. The estimator contains an innovation function that has to be chosen within a suitably defined class of functions so as to minimize a cost functional given by the worst-case ratio between the Lp norms of the estimation error and the disturbances. Since this entails an infinite-dimensional optimization problem that, under general hypotheses, cannot be solved analytically, an approximate solution is sought by minimizing the cost functional over linear combinations of simple "basis functions," represented by computational units with adjustable parameters. The parameters are selected by solving a constrained nonlinear programming problem, where the constraints are pointwise conditions that ensure the well-definedness of the functional and the existence of a solution. Penalty terms are introduced in the cost function to account for constraints imposed at points obtained by sampling the sets to which the trajectories of the state and of the estimation error belong. To ensure an efficient covering of these sets, low-discrepancy sampling techniques are exploited, which generate samples deterministically spread in a uniform way, without leaving regions of the space undersampled.
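The low-discrepancy sampling mentioned in the abstract can be illustrated with a Halton sequence, one classic deterministic construction based on radical inverses in coprime bases (the paper itself may rely on other families, such as Sobol' sequences or (t, m, s)-nets); this is a minimal illustrative sketch, not the authors' implementation:

```python
def radical_inverse(index, base):
    # Van der Corput radical inverse: reflect the base-`base` digits of
    # `index` about the radix point, yielding a value in [0, 1).
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_points(n, bases=(2, 3)):
    # First n points of a d-dimensional Halton sequence in [0, 1)^d,
    # one coprime base per coordinate (here d = 2 with bases 2 and 3).
    return [tuple(radical_inverse(i, b) for b in bases)
            for i in range(1, n + 1)]
```

Unlike pseudo-random sampling, consecutive points of such a sequence fill the unit hypercube with provably low discrepancy, which is what allows the pointwise constraints to cover the trajectory sets efficiently.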
Istituto di Studi sui Sistemi Intelligenti per l'Automazione - ISSIA - Sede Bari
Keywords: Optimal estimation; Infinite-dimensional optimization; Nonlinear approximation schemes; Nonlinear programming; Low-discrepancy sequences
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/24432
Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science (ISI): 7