On the Use of an Ear Model and Multi-Layered Networks for Automatic Speech Recognition
Cosi P
1989
Abstract
The paper describes a speech coding system based on an ear model followed by a set of Multi-Layer Networks (MLNs). The MLNs are trained to recognize articulatory features such as place and manner of articulation. Experiments performed on 10 English vowels show a recognition rate higher than 95% for new speakers. When the features are used for recognition, comparable results are obtained for vowels and diphthongs not used in training and pronounced by new speakers. This suggests that MLNs suitably fed with the data computed by an ear model have good generalization capabilities over new speakers and new sounds.
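
The paper itself does not include code; purely as an illustrative sketch of the architecture the abstract describes (an ear-model front end producing frame-wise vectors, with separate Multi-Layer Networks classifying articulatory features), the Python listing below shows a minimal forward pass. The frame dimension, layer sizes, and place/manner inventories are assumptions for illustration, not values taken from the paper.

import numpy as np

# Hypothetical sketch: an ear-model front end yields a spectral vector per
# frame; two small feed-forward MLNs classify place and manner features.
# All dimensions and feature inventories below are illustrative assumptions.

PLACE_CLASSES = ["front", "central", "back"]   # assumed inventory
MANNER_CLASSES = ["high", "mid", "low"]        # assumed inventory

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLN:
    """A minimal one-hidden-layer feed-forward network (forward pass only)."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = sigmoid(x @ self.W1 + self.b1)
        return sigmoid(h @ self.W2 + self.b2)

rng = np.random.default_rng(0)
n_in = 40                                  # assumed ear-model frame size
place_net = MLN(n_in, 20, len(PLACE_CLASSES), rng)
manner_net = MLN(n_in, 20, len(MANNER_CLASSES), rng)

frame = rng.normal(size=n_in)              # stand-in for one ear-model frame
print("place:", PLACE_CLASSES[int(np.argmax(place_net.forward(frame)))])
print("manner:", MANNER_CLASSES[int(np.argmax(manner_net.forward(frame)))])

In the paper's setting the feature outputs would then feed a vowel/diphthong recognizer; training details are not reproduced here.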


