
Noise and fluctuations can undermine the efficiency of Majority Rule in Group Evaluation problems

Daniele Vilone
2022

Abstract

Crowdsourcing is a mechanism by which groups of people execute a task by sharing ideas, effort and resources. Thanks to online technologies, crowdsourcing has become an increasingly widespread practice over the last decade in many diverse fields. One instance of such a process is the so-called "label aggregation problem": the evaluation of an item by a group of agents, where each agent gives its own judgment of the item. Starting from the individual evaluations of its members, how can the group produce its global assessment? In this work, by means of a game-theoretical, evolutionary approach, we show that in most cases the majority rule (the group evaluation is the evaluation given by the majority of its members) is still the best way to obtain a reliable group evaluation, even when the agents are not the best experts on the topic at stake; on the other hand, we also show that noise (i.e., fortuitous errors, misunderstandings, or any other source of non-deterministic outcomes) can undermine the efficiency of the procedure in non-trivial situations. Therefore, in order to make the process as reliable as possible, the presence of noise and its effects should be carefully taken into account.
Istituto di Scienze e Tecnologie della Cognizione - ISTC
Keywords: Crowdsourcing; Collective problem solving; Game Theory; Evolutionary simulations
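As a toy illustration of the label-aggregation setting described in the abstract (not the paper's actual game-theoretic model), the following Python sketch estimates how often majority rule recovers the correct binary label when every agent has the same individual competence and noise can flip the judgment an agent expresses. The parameters p_correct and noise, and the assumption of independent, identical agents, are illustrative simplifications.

import random

def group_vote(n_agents, p_correct, noise):
    """Simulate one group evaluation under majority rule.

    Each agent judges a binary item correctly with probability p_correct;
    with probability `noise` the expressed judgment is flipped (a fortuitous
    error, misunderstanding, or other non-deterministic outcome).
    Returns True if the majority of expressed judgments is correct.
    """
    correct_votes = 0
    for _ in range(n_agents):
        judgment = random.random() < p_correct   # agent's own assessment
        if random.random() < noise:              # noise flips what is expressed
            judgment = not judgment
        correct_votes += judgment
    return correct_votes > n_agents / 2

def majority_accuracy(n_agents, p_correct, noise, trials=10_000):
    """Estimate the probability that majority rule yields the correct label."""
    hits = sum(group_vote(n_agents, p_correct, noise) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Agents of modest competence, with increasing levels of noise
    for noise in (0.0, 0.2, 0.4):
        acc = majority_accuracy(n_agents=11, p_correct=0.6, noise=noise)
        print(f"noise={noise:.1f}  group accuracy = {acc:.3f}")

With no noise, a group of modestly competent agents is markedly more accurate than any single member (the Condorcet-jury effect the abstract alludes to); as the noise level grows, the group's advantage shrinks, illustrating why noise must be accounted for when relying on majority rule.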
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/439866