Assessing the quality of opinion retrieval systems
Carlo Gaibisso
2010
Abstract
Due to the complexity of topical opinion retrieval systems, standard measures such as MAP or precision do not fully succeed in assessing their performance. In this paper we introduce an evaluation framework based on artificially defined opinion classifiers. Using Monte Carlo sampling, we perturb a relevance ranking by the outcomes of these classifiers and analyse how the opinion retrieval performance changes. In this way it is possible to separate the performance of an approach to opinion mining from that of the overall system, and to clarify how relevance and opinion affect each other.
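The perturbation idea described in the abstract can be sketched as follows: given a relevance ranking, an artificial opinion classifier with controllable quality filters the list, and MAP is averaged over Monte Carlo samples of the classifier's outcomes. This is a minimal illustrative sketch, not the paper's exact method; the `tpr`/`fpr` parameterisation of classifier quality and all function names are assumptions introduced here.

```python
import random

def average_precision(ranking, relevant):
    """Standard average precision of a ranked list of doc ids
    against a set of relevant doc ids."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def perturbed_map(ranking, relevant_opinionated, opinionated,
                  tpr, fpr, trials=1000, seed=0):
    """Monte Carlo estimate of MAP after perturbing a relevance ranking
    with an artificial opinion classifier.  The classifier keeps a truly
    opinionated document with probability `tpr` and a non-opinionated one
    with probability `fpr` (a hypothetical way to dial classifier quality)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        kept = [d for d in ranking
                if rng.random() < (tpr if d in opinionated else fpr)]
        total += average_precision(kept, relevant_opinionated)
    return total / trials
```

Sweeping `tpr` and `fpr` from a perfect classifier (1.0, 0.0) towards a random one then shows how opinion retrieval performance degrades as classifier quality drops, which is the kind of analysis the framework enables.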