Selective agreement, not sycophancy: investigating opinion dynamics in LLM interactions

Cau E. (co-first author); Pansanella V. (co-first author); Rossetti G. (last author)
2025

Abstract

Understanding how opinions evolve is essential for addressing phenomena such as polarization, radicalization, and consensus formation. In this work, we investigate how language shapes opinion dynamics among Large Language Model (LLM) agents by simulating multi-round debates. Using our framework, we find that agent populations consistently converge toward agreement, not through sycophancy or blind conformity, but via a structured and asymmetric persuasion process. Agents are more likely to accept, and thus be persuaded by, opinions that are more agreeable relative to the discussion framing, revealing a directional bias in how opinions evolve. LLM agents selectively adopt peers' views, showing neither bounded confidence nor indiscriminate agreement. Moreover, agents frequently produce fallacious arguments and are significantly influenced by them: logical fallacies, especially those of relevance and credibility, play a measurable role in driving opinion change. These results not only uncover emergent behaviours in agents' dynamics, but also highlight the dual role of LLMs as both generators and victims of flawed reasoning, raising important considerations for their deployment in socially sensitive contexts.
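For context on the "bounded confidence" behaviour the abstract rules out: in classical opinion-dynamics models such as Deffuant's, agents compromise only when their opinions are already close. The sketch below is a minimal illustration of that baseline (it is not the paper's LLM-agent framework); the parameter names `epsilon` and `mu` follow the standard model, and the population size and round count are arbitrary choices for the toy run.

```python
import random

def deffuant_step(opinions, epsilon=0.2, mu=0.5):
    """One interaction of the classic Deffuant bounded-confidence model:
    two randomly chosen agents move toward each other only if their
    opinions differ by less than the confidence threshold epsilon."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        delta = mu * (opinions[j] - opinions[i])
        opinions[i] += delta  # agent i moves toward agent j
        opinions[j] -= delta  # agent j moves toward agent i
    return opinions

# Toy run: 100 agents with opinions drawn uniformly from [0, 1].
ops = [random.random() for _ in range(100)]
for _ in range(10_000):
    deffuant_step(ops)
```

Under this update rule, small epsilon yields several stable opinion clusters rather than global consensus; the paper's finding is that LLM agents do not exhibit this threshold behaviour, instead adopting peers' views selectively and asymmetrically.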
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Large language model
Opinion dynamics
Logical fallacies
Social simulations
Agent based model
Files in this record:

File: s13688-025-00579-1.pdf (open access)
Description: Selective agreement, not sycophancy: investigating opinion dynamics in LLM interactions
Type: Published version (PDF)
License: Creative Commons
Size: 2.29 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/20.500.14243/563118
Citations
  • PMC: n/a
  • Scopus: 3
  • Web of Science: 3