Methods, models and tools for improving the quality of textual annotations
M. T. Artese; I. Gagliardi
2022
Abstract
In multilingual textual archives, textual annotations, that is, keywords associated with texts either manually or automatically, are worth exploiting to improve the user experience and to support navigation, search and visualization; tools for this exploitation therefore need to be studied and developed. This paper aims to define models and tools for handling textual annotations, in our case the keywords of a scientific library. Against the background of NLP, machine learning and deep learning approaches are presented that allow the quality of keywords to be improved in both supervised and unsupervised ways. The different steps of the pipeline are addressed, and alternative solutions are analyzed, implemented, evaluated and compared, using statistical methods, machine learning and artificial neural networks as appropriate; where possible, off-the-shelf solutions are also included in the comparison. The models are trained on datasets that are either already available or created ad hoc to share the characteristics of the starting dataset. The results obtained are presented, discussed and compared with each other.
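The abstract does not give implementation details, so the following is only a minimal sketch of the kind of unsupervised keyword clean-up it describes: detecting near-duplicate keywords (spelling or inflection variants) so they can be merged. The library choice (scikit-learn), the character n-gram representation and the similarity threshold are assumptions made for illustration, not the authors' method.

# Illustrative sketch: unsupervised detection of near-duplicate keywords.
# Library, features and threshold are assumptions, not taken from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

keywords = ["neural network", "neural networks", "machine learning",
            "machine-learning", "cultural heritage", "deep learning"]

# Character n-gram TF-IDF is robust to small spelling/inflection variants.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
vectors = vectorizer.fit_transform(keywords)
similarity = cosine_similarity(vectors)

# Report keyword pairs whose similarity exceeds an (assumed) threshold,
# as candidates for merging into a single canonical keyword.
THRESHOLD = 0.75
for i, kw in enumerate(keywords):
    matches = [keywords[j] for j in range(len(keywords))
               if j != i and similarity[i, j] >= THRESHOLD]
    if matches:
        print(f"{kw!r} ~ {matches}")

In a full pipeline of the kind the paper outlines, such unsupervised grouping would typically be one early step, followed by supervised models that validate or re-rank the candidate keywords.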