Assessing the quality of opinion retrieval systems

Carlo Gaibisso
2010

Abstract

Due to the complexity of topical opinion retrieval systems, standard measures such as MAP or precision do not fully succeed in assessing their performance. In this paper we introduce an evaluation framework based on artificially defined opinion classifiers. Using Monte Carlo sampling, we perturb a relevance ranking by the outcomes of these classifiers and analyse how the opinion retrieval performance changes. In this way it is possible to separate the performance of an approach to opinion mining from that of the overall system and to clarify how relevance and opinion affect each other.
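The abstract describes the procedure only at a high level. A minimal sketch of how such a Monte Carlo perturbation could be set up is given below; everything in it (the classifier model parameterised by a single accuracy value, the filtering-based re-ranking, the function and variable names) is an illustrative assumption, not the authors' implementation.

```python
import random

def average_precision(ranking, qrels):
    """Average precision of a ranked list of doc ids against the set of
    documents that are both relevant and opinionated (the MAP component)."""
    hits, total = 0, 0.0
    for i, doc in enumerate(ranking, start=1):
        if doc in qrels:
            hits += 1
            total += hits / i
    return total / len(qrels) if qrels else 0.0

def artificial_classifier(is_opinionated, accuracy, rng):
    """Artificial opinion classifier: returns the true label with
    probability `accuracy`, the wrong label otherwise (illustrative model)."""
    return is_opinionated if rng.random() < accuracy else not is_opinionated

def perturbed_ap(relevance_ranking, opinion_labels, qrels,
                 accuracy, trials=1000, seed=0):
    """Monte Carlo estimate of opinion-retrieval AP when a relevance
    ranking is filtered by a noisy, artificially defined opinion classifier."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        kept = [d for d in relevance_ranking
                if artificial_classifier(opinion_labels[d], accuracy, rng)]
        scores.append(average_precision(kept, qrels))
    return sum(scores) / trials
```

Sweeping `accuracy` from 0.5 to 1.0 and tracking the resulting score shows how much of the end-to-end opinion retrieval performance is attributable to the opinion classifier rather than to the underlying relevance ranking, which is the kind of analysis the framework is intended to support.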
Istituto di Analisi dei Sistemi ed Informatica ''Antonio Ruberti'' - IASI

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/71255