Assessing BERT's ability to learn Italian syntax: a study on null-subject and agreement phenomena

Raffaele Guarasci;Stefano Silvestri;Giuseppe De Pietro;Massimo Esposito
2021

Abstract

The work presented in this paper investigates the ability of a BERT neural language model pretrained on Italian to encode syntactic dependency relationships in its layers, by approximating a dependency parse tree. To this end, a structural probe, i.e. a supervised model able to extract linguistic structures from a language model, has been trained on the contextual embeddings produced by the layers of BERT. An experimental assessment has been performed using an Italian version of the BERT-base model and a set of Italian datasets annotated with the Universal Dependencies formalism. The results, measured with standard dependency-parsing metrics, show that knowledge of Italian syntax is embedded in the central-upper layers of the BERT model, in agreement with what has been observed in the literature for English. In addition, the probe has also been used to experimentally evaluate the behaviour of the BERT model on two syntactic phenomena specific to Italian, namely null subjects and subject-verb agreement, where it outperforms a state-of-the-art Italian parser. These findings open a path towards new hybrid approaches that exploit the probe to compensate for the limits or weaknesses of parsers in analysing articulated constructions of Italian syntax, which are traditionally hard to parse.
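The structural probe mentioned in the abstract follows the general idea of learning a linear projection of contextual embeddings under which squared distances between word vectors approximate distances in the dependency parse tree. The sketch below illustrates only the distance computation of such a probe; the dimensions, the random matrix `B`, and the random embeddings are illustrative assumptions, not the trained parameters or data used in the paper.

```python
import numpy as np

def probe_distance(B, h_i, h_j):
    """Squared L2 distance between two word embeddings after the
    learned linear projection B. In a trained structural probe, this
    value approximates the tree distance between words i and j in
    the sentence's dependency parse."""
    diff = B @ (h_i - h_j)
    return float(diff @ diff)

# Toy setup (hypothetical sizes: BERT-base hidden size 768, probe
# rank 128). In practice B is trained by gradient descent so that
# probe_distance matches gold tree distances from a UD treebank.
rng = np.random.default_rng(0)
hidden, rank = 768, 128
B = rng.normal(size=(rank, hidden)) / np.sqrt(hidden)
h_i = rng.normal(size=hidden)
h_j = rng.normal(size=hidden)

d = probe_distance(B, h_i, h_j)  # non-negative scalar
```

Once trained, the pairwise probe distances for a sentence can be turned into a predicted parse tree (e.g. via a minimum spanning tree), which is how standard dependency-parsing metrics can then be computed against the gold annotation.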
Neural language model
Syntactic phenomena
Dependency parse tree
Structural probe

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/398750