Trust Theory: A Socio-Cognitive and Computational Model
Castelfranchi C.; Falcone R.
2010
Abstract
The aim of this book, carried out in quite a user-friendly way, is clear from its title: to systematize a general theory of 'trust'; to provide an organic model of this very complex and dynamic phenomenon on cognitive, affective, and social (interactive and collective) levels. Why approach such a scientific project not only from the point of view of the Cognitive and Behavioral Sciences, but also from Artificial Intelligence (AI) and in particular from 'Agent' theory domains? Actually, trust for Information and Communication Technologies (ICT) is for us just an application, a technological domain. In particular, we have been working (with many other scholars) to promote and develop a tradition of studies about trust with Autonomous Agents and in Multi-Agent Systems (MAS). The reason is that we believe that an AI-oriented approach can provide - without reductionisms - good systematic and operational instruments for the explicit and well-defined representation of goals, beliefs, complex mental states (like expectations), and their dynamics, and also for modeling social action, mind, interaction, and networks. An AI approach, with its programmatic 'naiveté' (but being careful to avoid simplistic assumptions and reductions of trust to technical tricks - see Chapter 12), is also useful for revising the biasing and distorting 'traditions' that we find in the specific literature (philosophy, psychology, sociology, economics, etc.), which is one of the causes of the recognized 'babel' of trust notions and definitions (see below, Section 0.2). However, our 'tradition' of research at ISTC-CNR (Castelfranchi, Falcone, Conte, Lorini, Miceli, Paglieri, Paolucci, Pezzulo, Tummolini, and many collaborators like Poggi, De Rosis, Giardini, Piunti, Marzo, Calvi, Ulivieri, and several others) is a broader and Cognitive Science-oriented tradition: to systematically study the 'cognitive mediators of social action', that is, the mental representations supporting social behaviors and collective and institutional phenomena, such as cooperation, social functions, norms, power, and social emotions (admiration, envy, pity, shame, guilt, etc.). Thus, trust was an unavoidable and perfect subject: on the one hand, it is absolutely crucial for social interaction and for collective and institutional phenomena (and one should explain 'why'); on the other hand, it is a perfect example of a necessary cognitive 'mediator' of sociality, and of the integration of mind and interaction, of epistemic and motivational representations, of reasoning and affects. Our effort is in this tradition and frame (see below, Section 0.3).