Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties

Alessio Miaschi; Dominique Brunato; Felice Dell'Orletta; Giulia Venturi
2022

Abstract

In this paper, we present an in-depth investigation of the linguistic knowledge encoded by the transformer models currently available for the Italian language. In particular, we investigate how the complexity of two different architectures of probing models affects the performance of the Transformers in encoding a wide spectrum of linguistic features. Moreover, we explore how this implicit knowledge varies according to different textual genres and language varieties.
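As a rough illustration of the probing paradigm the abstract refers to (not the authors' actual experimental setup), a simple linear probe can be trained on frozen sentence representations to predict a linguistic feature; in the sketch below both the representations and the target feature are synthetic stand-ins for transformer embeddings and a feature such as sentence length:

```python
import numpy as np

# Hypothetical stand-in for frozen contextual sentence embeddings
# (in the paper these would come from Italian transformer models).
rng = np.random.default_rng(0)
n_sentences, dim = 200, 32
embeddings = rng.normal(size=(n_sentences, dim))

# Synthetic linguistic feature: assumed here to be a noisy linear
# function of the representation (a stand-in, e.g., for sentence length).
true_w = rng.normal(size=dim)
feature = embeddings @ true_w + rng.normal(scale=0.1, size=n_sentences)

def fit_probe(X, y, lam=1e-2):
    """Linear probe as closed-form ridge regression:
    w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Train/test split: probe performance on held-out sentences measures
# how much of the feature is linearly decodable from the representations.
X_tr, X_te = embeddings[:150], embeddings[150:]
y_tr, y_te = feature[:150], feature[150:]
w = fit_probe(X_tr, y_tr)
pred = X_te @ w

# R^2 score: closer to 1 means the feature is well encoded.
ss_res = np.sum((y_te - pred) ** 2)
ss_tot = np.sum((y_te - y_te.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"probe R^2: {r2:.3f}")
```

A higher-capacity probe (e.g., a multilayer perceptron in place of the ridge regressor) would correspond to the "more complex probing architecture" contrast the abstract describes, since it can extract non-linearly encoded information.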
Istituto di linguistica computazionale "Antonio Zampolli" - ILC
Keywords: Neural Language Models; Interpretability; Language Varieties
Files in this record:
  • prod_469733-doc_190324.pdf (open access)
    Description: Probing_Linguistic_Knowledge_in_Italian_Neural_Language_Models_across_Language_Varieties
    Type: Publisher's version (PDF)
    License: Creative Commons
    Size: 1.74 MB
    Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/443057
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: not available