Improving query and assessment quality in text-based interactive video retrieval evaluation
Coccomini D.; Messina N.
2023
Abstract
Divergent task interpretations are highly undesirable in interactive video retrieval evaluations: when a participating team focuses, even partially, on the wrong goal, the evaluation results may become partially misleading. In this paper, we propose a process for refining known-item and open-set type queries and for preparing the assessors who judge the correctness of submissions to open-set queries. Our findings from recent years reveal that a proper methodology can lead to objective improvements in query quality and to subjective participant satisfaction with query clarity.

Files in this record:
File | Description | Type | Size | Format
---|---|---|---|---
prod_485364-doc_201346.pdf (authorised users only) | Improving query and assessment quality in text-based interactive video retrieval evaluation | Publisher's version (PDF) | 529.77 kB | Adobe PDF