
Globally convergent modifications of particle swarm optimization for unconstrained optimization

Campana, Emilio Fortunato; Peri, Daniele
2011

Abstract

We focus on the solution of a class of unconstrained optimization problems in which the evaluation of the objective function is possibly costly and the use of exact algorithms may impose too large a computational burden. Several real applications in this class call for optimization methods in which the derivatives of the objective function are unavailable and/or the objective function must be treated as a 'black-box'. Many design optimization [15] and shape optimization [11, 26] problems belong to this class; moreover, derivatives computed with finite differences may be quite inaccurate. Here, expensive simulations provide information to the optimizer, so that each function evaluation could require up to several CPU-hours. On the other hand, for continuously differentiable functions the use of heuristics may yield inadequate and/or unsatisfactory results [17]. We consider here the evolutionary Particle Swarm Optimization (PSO) algorithm [12]. We introduce some globally convergent modifications of PSO, drawing our inspiration from [14], so that the sequences of points generated admit stationary limit points for the objective function. The latter result is established for a generalized PSO scheme, where suitable ranges of the parameters are identified in order to possibly avoid diverging trajectories for the particles [1]. © 2010 Nova Science Publishers, Inc. All rights reserved.
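For context, the globally convergent variants described above build on the standard PSO iteration, in which each particle's velocity is pulled toward its personal best position and the swarm's global best. A minimal illustrative sketch follows; the `pso` helper, its parameter values, and the inertia/attraction coefficients are illustrative choices, not the generalized scheme or parameter ranges analyzed in the chapter:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimize f over [bounds]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]          # each particle's best position
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia term plus stochastic attraction toward the
                # personal best and the global best.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

In this plain form the iteration carries no stationarity guarantee; the chapter's modifications constrain the parameters and inject line-search-type safeguards precisely so that limit points of the generated sequences are stationary.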
Istituto di iNgegneria del Mare - INM (ex INSEAN)
9781616685270
Derivative-free methods
Global optimization
Globally convergent methods
Particle swarm optimization
Unconstrained optimization
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/297368
Citations
  • PMC: ND
  • Scopus: 4
  • Web of Science: ND