
Speaker Independent Phonetic Recognition Using Auditory Modelling and Recurrent Neural Networks

Cosi, P.
1994

Abstract

This paper describes two speaker-independent speech recognition experiments on the automatic discrimination of the Italian alphabet I-set and E-set, two very difficult Italian phonetic classes. The speech signal is analyzed by a recently developed joint synchrony/mean-rate auditory processing scheme, and a fully-connected feed-forward recurrent back-propagation (BP) network is used for the classification stage. The speaker-independent mean recognition rate was 65% for the I-set and 88% for the E-set, a rather satisfactory result given the difficulty of both tasks.
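The abstract does not detail the network topology or training setup, so as an illustrative sketch only: a classifier of the kind described could resemble an Elman-style recurrent network, consuming a sequence of auditory-model feature frames and emitting class probabilities for the letter set. All sizes, weights, and the `elman_forward` helper below are hypothetical, chosen purely for demonstration.

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out):
    """Forward pass of a minimal Elman-style recurrent classifier.

    x_seq : (T, n_in) sequence of feature frames (e.g. one per 10 ms).
    Returns softmax class probabilities computed from the final hidden state.
    """
    h = np.zeros(W_rec.shape[0])
    for x in x_seq:
        # Each hidden state mixes the current frame with the previous state,
        # giving the network its temporal (recurrent) context.
        h = np.tanh(W_in @ x + W_rec @ h)
    logits = W_out @ h
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

# Arbitrary illustrative sizes: 12 input features, 8 hidden units, 5 classes.
rng = np.random.default_rng(0)
n_in, n_hid, n_cls = 12, 8, 5
probs = elman_forward(rng.normal(size=(20, n_in)),
                      0.1 * rng.normal(size=(n_hid, n_in)),
                      0.1 * rng.normal(size=(n_hid, n_hid)),
                      0.1 * rng.normal(size=(n_cls, n_hid)))
print(probs.sum())  # probabilities over the letter classes sum to 1
```

In a BP-trained system the weights would of course be learned from labeled utterances rather than drawn at random; this sketch only shows the shape of the recurrent forward computation.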
Istituto di Scienze e Tecnologie della Cognizione - ISTC
ISBN: 978-3-540-19887-1
Keywords: Speaker Independent; Phonetic Recognition; Auditory Modelling; Recurrent Neural Networks
Files in this record: no files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/16043