Fast Stochastic MPC Implementation via Policy Learning
Martina Mammarella; Fabrizio Dabbene
2022
Abstract
Stochastic Model Predictive Control (MPC) has gained popularity thanks to its ability to overcome the conservativeness of robust approaches, at the expense of a higher computational demand. This is a critical issue especially for sampling-based methods. In this letter we propose a policy-learning MPC approach that aims to reduce the cost of solving stochastic optimization problems. The presented scheme relies on neural networks to identify a mapping between the current state of the system and the probabilistic constraints. This allows the sample complexity to be reduced to at most the dimension of the decision variable, significantly scaling down the computational burden of stochastic MPC approaches while preserving the same probabilistic guarantees. The efficacy of the proposed policy-learning MPC is demonstrated by means of a numerical example.
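To give a flavor of the idea described in the abstract, the following is a minimal, hypothetical sketch (not the authors' implementation) of the offline learning step: a small feed-forward network is trained to map the system state to a constraint-related quantity that a sampling-based stochastic MPC would otherwise recompute online. The state dimension, the network size, and the target mapping `beta(x)` here are all illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: states x in R^2 and a placeholder "target"
# constraint-tightening value beta(x), standing in for the quantity a
# sampling-based stochastic MPC solver would have computed offline.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
beta = 0.3 * np.linalg.norm(X, axis=1, keepdims=True)  # illustrative target

# One-hidden-layer network with tanh activation (sizes are arbitrary).
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    """Return hidden activations and predicted tightening beta_hat(x)."""
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

# Plain gradient descent on the mean-squared error.
lr = 0.05
losses = []
for _ in range(400):
    H, pred = forward(X)
    err = pred - beta
    losses.append(float(np.mean(err ** 2)))
    g_pred = 2.0 * err / len(X)          # dLoss/dpred
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    gH = (g_pred @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: initial {losses[0]:.4f}, final {losses[-1]:.4f}")
```

Once trained, evaluating the network online replaces repeated scenario sampling with a single forward pass, which is where the computational saving claimed in the abstract would come from.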