Trust: Perspectives in Cognitive Science

Cristiano Castelfranchi
2020

Abstract

Cognitive science is not a unitary discipline but a cross-disciplinary research domain. Accordingly, there is no single accepted definition of trust in cognitive science, and we will draw on quite distinct literatures, from neuroscience to philosophy, from Artificial Intelligence (AI) and agent theories to psychology and sociology. Our paradigm is Socio-Cognitive AI, in particular agent and multi-agent modeling. On the one hand, we use formal modeling of AI architectures for a clear scientific characterization of cognitive representations and their processing, and we endow AI agents with cognitive and social minds. On the other hand, we use Multi-Agent Systems (MAS) for the experimental simulation of interaction and of emergent social phenomena. By arguing for the following claims, we focus on some of the most controversial issues in this domain: (a) trust does not involve a single, unitary mental state; (b) trust is an evaluation that implies a motivational aspect; (c) trust is a way to exploit ignorance; (d) trust is, and is used as, a signal; (e) trust cannot be reduced to reciprocity; (f) trust combines rationality and feeling; (g) trust relates not only to other persons but can also be applied to instruments, technologies, etc. The basic message of this chapter is that "trust" is a complex object of inquiry and must be treated as such: it deserves a non-reductive definition and modeling.
Istituto di Scienze e Tecnologie della Cognizione - ISTC
Keywords: Trust; Philosophy; Cognitive Modelling
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/405084