
MAIA: a Benchmark for Multimodal AI Assessment

Davide Testa, Giovanni Bonetta, Raffaella Bernardi, Alessandro Bondielli, Alessandro Lenci, Alessio Miaschi, Lucia Passaro, Bernardo Magnini
2025

Abstract

We introduce MAIA (Multimodal AI Assessment), a multimodal dataset developed as a core component of a competence-oriented benchmark designed for fine-grained investigation of the reasoning abilities of Visual Language Models (VLMs) on videos. The MAIA benchmark is characterized by several distinctive features. First, to the best of our knowledge, MAIA is the first Italian-native benchmark addressing video understanding: videos were carefully selected to reflect Italian culture, and the language data (i.e., questions and reference answers) were produced by native Italian speakers. Second, MAIA explicitly includes twelve reasoning categories specifically designed to assess the reasoning abilities of VLMs on videos. Third, we structured the dataset to support two aligned tasks (i.e., statement verification and open-ended visual question answering) built on the same datapoints, thus allowing the assessment of VLM coherence across task formats. Finally, MAIA integrates state-of-the-art LLMs into the benchmark development process by design, leveraging their linguistic and reasoning capabilities both for data augmentation and for assessing and improving the overall quality of the data. In this paper we focus on the design principles and the data collection methodology, highlighting how MAIA provides a significant advancement with respect to other available datasets for VLM benchmarking. Data available at GitHub.
DC Field | Value | Language
dc.authority.orgunit Istituto di linguistica computazionale "Antonio Zampolli" - ILC en
dc.authority.people Davide Testa en
dc.authority.people Giovanni Bonetta en
dc.authority.people Raffaella Bernardi en
dc.authority.people Alessandro Bondielli en
dc.authority.people Alessandro Lenci en
dc.authority.people Alessio Miaschi en
dc.authority.people Lucia Passaro en
dc.authority.people Bernardo Magnini en
dc.collection.id.s 71c7200a-7c5f-4e83-8d57-d3d2ba88f40d *
dc.collection.name 04.01 Contributo in Atti di convegno *
dc.contributor.appartenenza Istituto di linguistica computazionale "Antonio Zampolli" - ILC *
dc.contributor.appartenenza.mi 918 *
dc.contributor.area Non assegn *
dc.date.accessioned 2026/03/03 16:58:10 -
dc.date.available 2026/03/03 16:58:10 -
dc.date.firstsubmission 2026/03/03 15:44:21 *
dc.date.issued 2025 -
dc.date.submission 2026/03/03 15:44:21 *
dc.description.abstracteng We introduce MAIA (Multimodal AI Assessment), a multimodal dataset developed as a core component of a competence-oriented benchmark designed for fine-grained investigation of the reasoning abilities of Visual Language Models (VLMs) on videos. The MAIA benchmark is characterized by several distinctive features. First, to the best of our knowledge, MAIA is the first Italian-native benchmark addressing video understanding: videos were carefully selected to reflect Italian culture, and the language data (i.e., questions and reference answers) were produced by native Italian speakers. Second, MAIA explicitly includes twelve reasoning categories specifically designed to assess the reasoning abilities of VLMs on videos. Third, we structured the dataset to support two aligned tasks (i.e., statement verification and open-ended visual question answering) built on the same datapoints, thus allowing the assessment of VLM coherence across task formats. Finally, MAIA integrates state-of-the-art LLMs into the benchmark development process by design, leveraging their linguistic and reasoning capabilities both for data augmentation and for assessing and improving the overall quality of the data. In this paper we focus on the design principles and the data collection methodology, highlighting how MAIA provides a significant advancement with respect to other available datasets for VLM benchmarking. Data available at GitHub. -
dc.description.allpeople Testa, Davide; Bonetta, Giovanni; Bernardi, Raffaella; Bondielli, Alessandro; Lenci, Alessandro; Miaschi, Alessio; Passaro, Lucia; Magnini, Bernardo -
dc.description.allpeopleoriginal Davide Testa, Giovanni Bonetta, Raffaella Bernardi, Alessandro Bondielli, Alessandro Lenci, Alessio Miaschi, Lucia Passaro, Bernardo Magnini en
dc.description.fulltext open en
dc.description.numberofauthors 8 -
dc.identifier.source manual *
dc.identifier.uri https://hdl.handle.net/20.500.14243/570744 -
dc.language.iso eng en
dc.relation.ispartofbook Proceedings of the Eleventh Italian Conference on Computational Linguistics (CLiC-it 2025) en
dc.subject.keywordseng multimodal, vllm, evaluation -
dc.subject.singlekeyword multimodal *
dc.subject.singlekeyword vllm *
dc.subject.singlekeyword evaluation *
dc.title MAIA: a Benchmark for Multimodal AI Assessment en
dc.type.driver info:eu-repo/semantics/conferenceObject -
dc.type.full 04 Contributo in convegno::04.01 Contributo in Atti di convegno it
dc.type.miur 273 -
iris.mediafilter.data 2026/03/04 02:52:23 *
iris.orcid.lastModifiedDate 2026/03/03 16:58:10 *
iris.orcid.lastModifiedMillisecond 1772553490576 *
iris.sitodocente.maxattempts 1 -
Appears in collections: 04.01 Contributo in Atti di convegno
Files in this item:
File: 2025.clicit-1.106.pdf
Access: open access
License: Creative Commons
Size: 6.6 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/570744