Parallel trials versus single search in supervised learning
M. Muselli
1991
Abstract
The comparison between parallel trials and single search in supervised learning is approached by introducing an appropriate formalism based on the theory of random variables. This emphasizes the fundamental role played by the probability P(t) that an optimization algorithm converges in the interval [0, t]. The work is divided into two parts: in the first, some basic theorems are proved and the complexity of the general problem is reduced; examples of the behaviour of P(t) are then examined and three general classes of functions are analysed. In the second part, parallel trials and single search are compared for three optimization algorithms: pure random search, the grid method and the random walk.
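A minimal sketch of the kind of comparison the abstract describes, under assumptions not taken from the paper: the n parallel trials are independent, each receives an equal share t/n of the total time budget t, and P(t) is given an illustrative Weibull-like shape purely for the example. The function names p_single and p_parallel are hypothetical.

```python
import math

def p_single(t, lam=1.0, k=1.5):
    """Illustrative convergence probability P(t) with a Weibull-like shape.

    This shape is an assumption for the sketch, not the paper's P(t):
    k > 1 models an 'accelerating' search, k < 1 a 'decelerating' one,
    k = 1 the memoryless (exponential) case.
    """
    return 1.0 - math.exp(-(lam * t) ** k)

def p_parallel(t, n, lam=1.0, k=1.5):
    """Probability that at least one of n independent trials, each given
    an equal share t/n of the total budget t, converges within that share."""
    q_fail_one = 1.0 - p_single(t / n, lam, k)
    return 1.0 - q_fail_one ** n

if __name__ == "__main__":
    t, n = 2.0, 4
    for k in (0.5, 1.0, 1.5):  # decelerating, memoryless, accelerating P(t)
        print(f"k={k}: single={p_single(t, k=k):.3f}  "
              f"parallel({n} trials)={p_parallel(t, n, k=k):.3f}")
```

Under these illustrative assumptions, parallel trials win when P(t) rises steeply early on (k < 1), single search wins when convergence probability accumulates slowly at first (k > 1), and the two coincide in the memoryless case (k = 1); the paper's actual analysis and conditions may differ.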