Model transitions in descending FLVQ
Blonda P.; Pasquariello G.; Satalino
1998
Abstract
Fuzzy learning vector quantization (FLVQ), also known as the fuzzy Kohonen clustering network, was developed to improve the performance and usability of the on-line, hard-competitive Kohonen vector quantization and soft-competitive self-organizing map (SOM) algorithms. The effectiveness of FLVQ seems to depend on the range over which the weighting exponent m(t) is varied. In the first part of this work, extreme values of m(t) (1 and ∞, respectively) are employed to investigate the asymptotic behaviors of FLVQ. This analysis shows that when m(t) tends to either of its extremes, FLVQ degenerates into trivial vector quantization, which causes the centroids to collapse onto the grand mean of the input data set. No analytical criterion has been found to improve on the heuristic choice of the range of m(t). In the second part of the paper, two classification experiments on remotely sensed data are presented for FLVQ and SOM. In these experiments the two networks are connected in cascade to a supervised second stage based on the delta rule. The experimental results confirm that FLVQ performance can be greatly affected by the user's definition of the range of the weighting exponent. Moreover, FLVQ shows instability when its traditional termination criterion is applied. Empirical recommendations are proposed to enhance the robustness of FLVQ. Both the analytical and the experimental results reported here seem to indicate that the choice of the range of m(t) is still open to discussion, and that alternative clustering neural-network approaches should be developed to pursue, during training: 1) a monotone reduction of the neurons' learning rates; and 2) a monotone reduction of the overlap among the neurons' receptive fields.
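
To make the role of the weighting exponent concrete, the following is a minimal numerical sketch (Python/NumPy, not the authors' code) of a batch FLVQ-style update using fuzzy c-means memberships with exponent m. The function name, data, and parameter values are illustrative assumptions; the sketch only demonstrates the asymptotic effect described in the abstract, namely that as m grows large the memberships flatten toward 1/c and all centroids drift onto the grand mean of the data (trivial vector quantization), while for m close to 1 the update behaves nearly crisply.

```python
import numpy as np

def flvq_step(X, V, m):
    """One batch FLVQ-style centroid update with weighting exponent m > 1.

    X: (n, d) data, V: (c, d) current centroids.
    Hypothetical sketch for illustration, not the paper's implementation.
    """
    # Squared distances between every sample and every centroid: (n, c)
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    d2 = np.fmax(d2, 1e-12)                      # guard against division by zero
    # Fuzzy c-means memberships: u_ik = 1 / sum_j (d2_ik / d2_jk)^(1/(m-1))
    ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))
    u = 1.0 / ratio.sum(axis=2)                  # (n, c); each row sums to 1
    a = u ** m                                   # FLVQ learning rates alpha_ik
    # Each centroid becomes the alpha-weighted mean of the data
    return (a.T @ X) / a.sum(axis=0)[:, None]

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
V0 = rng.normal(0, 1, (2, 2))

for m in (1.1, 2.0, 50.0):
    V = V0.copy()
    for _ in range(30):
        V = flvq_step(X, V, m)
    print(f"m = {m:5.1f}  centroids:\n{V.round(2)}")
# For large m both centroids converge toward the grand mean of X,
# which is the "centroid collapse" discussed in the abstract.
print("grand mean:", X.mean(axis=0).round(2))
```

Running the sketch, m near 1 separates the two synthetic clusters, while m = 50 yields two nearly identical centroids at the data mean, matching the trivial-quantization behavior the paper analyzes at the extremes of m(t).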


