
Methods, models and tools for improving the quality of textual annotations

M. T. Artese; I. Gagliardi
2022

Abstract

In multilingual textual archives, textual annotations, i.e. keywords associated with texts either manually or automatically, are a resource worth exploiting to improve user experience and to support navigation, search and visualization; tools for this exploitation therefore need to be studied and developed. The paper aims to define models and tools for handling textual annotations, in our case the keywords of a scientific library. Against the background of NLP, machine learning and deep learning approaches are presented that allow us, in both supervised and unsupervised ways, to increase the quality of the keywords. The different steps of the pipeline are addressed, and different solutions are analyzed, implemented, evaluated and compared, using statistical methods, machine learning and artificial neural networks as appropriate. Where possible, off-the-shelf solutions are also included in the comparison. The models are trained on different datasets, either already available or created ad hoc to share characteristics with the starting dataset. The results obtained are presented, discussed and compared with each other.
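
As a concrete illustration of the kind of keyword clean-up the abstract describes, the sketch below groups near-duplicate keywords by syntactic (string) similarity, one plausible step in such a pipeline. The similarity threshold, the sample keywords and the greedy grouping strategy are illustrative assumptions, not the authors' actual method; a semantic-relatedness step using word embedding models would follow the same pattern, replacing the string measure with cosine similarity between embedding vectors.

    # Minimal sketch (assumptions, not the paper's pipeline): merge near-duplicate
    # keywords by syntactic similarity using Python's standard library only.
    from difflib import SequenceMatcher

    def syntactic_similarity(a: str, b: str) -> float:
        """Character-level similarity in [0, 1] between two keywords."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def merge_keywords(keywords, threshold=0.85):
        """Greedily group keywords whose similarity exceeds the threshold,
        keeping the first variant seen as the canonical form."""
        canonical = []   # representative keyword of each group
        groups = {}      # canonical form -> list of variants
        for kw in keywords:
            match = next((c for c in canonical
                          if syntactic_similarity(kw, c) >= threshold), None)
            if match is None:
                canonical.append(kw)
                groups[kw] = [kw]
            else:
                groups[match].append(kw)
        return groups

    if __name__ == "__main__":
        # Illustrative sample keywords, chosen to show singular/plural variants.
        sample = ["neural networks", "neural network", "word embedding",
                  "word embeddings", "machine learning"]
        for canon, variants in merge_keywords(sample).items():
            print(canon, "<-", variants)
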
Istituto di Matematica Applicata e Tecnologie Informatiche - IMATI
multilingual datasets
word embedding models
sequence2sequence
LSTM
neural networks
machine learning algorithms
semantic relatedness
syntactic similarity
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/456759