Optimal control of propagating fronts by using level set methods and neural approximations
M. Gaggero
2019
Abstract
We address the optimal control of level sets associated with the solution of normal flow equations. The problem consists in finding the velocity normal to the front described by a given level set so as to minimize a prescribed cost functional. First, the problem is shown to admit a solution in a suitable space of functions. Then, since it is in general difficult to solve analytically, an approximation scheme based on the extended Ritz method is proposed to find suboptimal solutions. Specifically, the control law is constrained to take on a neural structure that depends nonlinearly on a finite number of parameters to be tuned, i.e., the neural weights. The optimal weights are selected with two different approaches. The first employs classical line-search descent methods, while the second relies on a quasi-Newton optimization that can be regarded as neural learning based on the extended Kalman filter. Compared with line-search methods, the latter approach proves successful, with reduced computational effort and increased robustness against trapping in local minima, as confirmed by simulations in both two and three dimensions.
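As background for the normal flow equations discussed in the abstract, the sketch below evolves a two-dimensional front under the level set equation with a constant normal velocity, phi_{t+1} = phi_t - dt * v * |grad phi|. The grid size, time step, speed value, and central-difference scheme are illustrative assumptions for this sketch, not the discretization or the controlled velocity field used in the paper.

```python
import numpy as np

def normal_flow_step(phi, speed, dt, dx):
    """One explicit time step of the normal flow equation.

    phi: level set function on a uniform grid; its zero level set is the front.
    speed: scalar normal velocity (in the paper this is the control to be chosen).
    Central differences via np.gradient are used here for simplicity; an upwind
    scheme would be the standard choice for stability in practice.
    """
    gy, gx = np.gradient(phi, dx)
    grad_norm = np.sqrt(gx**2 + gy**2)
    return phi - dt * speed * grad_norm

# Signed distance to a circle of radius 0.3 centered at the origin.
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.3

# A constant outward speed v = 1 expands the front; after 20 steps of
# dt = 0.005 the zero level set sits near radius 0.4.
for _ in range(20):
    phi = normal_flow_step(phi, speed=1.0, dt=0.005, dx=x[1] - x[0])
```

Because the initial condition is a signed distance function (so |grad phi| is approximately 1 away from the center), the front simply translates outward at the prescribed speed; the optimal control problem in the paper replaces the constant speed with a velocity field shaped to minimize the cost functional.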