A Large Visual Question Answering Dataset for Cultural Heritage
Asprino L., Bulla L., Marinucci L., Presutti V.
2022
Abstract
Visual Question Answering (VQA) is gaining momentum for its ability to bridge Computer Vision and Natural Language Processing. VQA approaches mainly rely on Machine Learning algorithms that need to be trained on large annotated datasets. Once trained, a machine learning model is hardly portable to a different domain. This calls for agile methodologies for building large annotated datasets from existing resources. The cultural heritage domain represents both a natural application of this task and an extensive source of data for training and validating VQA models. To this end, by using data and models from ArCo, the knowledge graph of the Italian cultural heritage, we generated a large dataset for VQA in Italian and English. We describe the results and the lessons learned from our semi-automatic dataset generation process, and discuss the tools employed for data extraction and transformation.
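As an illustration of the kind of extraction step the abstract refers to, the sketch below queries the public ArCo SPARQL endpoint for labelled cultural properties, the sort of raw records from which question-answer pairs can be derived. This is a minimal example, not the authors' actual pipeline: the specific query, the class arco:HistoricOrArtisticProperty, and the use of SPARQLWrapper are assumptions for illustration only.

```python
# Minimal sketch: fetch labelled cultural-property records from ArCo.
# Not the paper's pipeline; the query and class choice are illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

# Public ArCo SPARQL endpoint.
endpoint = SPARQLWrapper("https://dati.beniculturali.it/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
PREFIX arco: <https://w3id.org/arco/ontology/arco/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?property ?label WHERE {
  ?property a arco:HistoricOrArtisticProperty ;
            rdfs:label ?label .
} LIMIT 10
""")

# Each binding pairs a cultural-property IRI with its label; such records
# could then be transformed into question-answer pairs for VQA.
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["property"]["value"], "-", binding["label"]["value"])
```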