Personalised LLMs and the risks of the digital twin metaphor

Annoni, Marco (co-first author); Battisti, Davide (co-first author)

Abstract

Can an AI truly be your digital twin? Technology companies, startups, and even academic researchers increasingly claim so. From grief-bots that promise to let you talk with deceased loved ones to clinical tools designed to predict patients' treatment preferences, personalized Large Language Models are being marketed as faithful replications of individual identity, personality, and values. The digital twin label—borrowed from industrial engineering, where it describes computational models precisely mirroring physical systems—lends these claims an aura of scientific credibility. But is this metaphor appropriate, or is it dangerously misleading? This paper argues that applying the digital twin metaphor to personalized LLMs constitutes a systematic mischaracterization with serious ethical consequences. To this end, we first outline three plausible interpretations of the personalized digital twin (PDT) metaphor—behaviorist, representational, and phenomenal. We then show that current AI systems do not satisfy the conditions of any of these interpretations, rendering the PDT metaphor ultimately inappropriate. The consequences of this metaphorical overreach are far from abstract. When vulnerable individuals—grieving families, patients facing incapacity, people seeking psychological support—interact with systems marketed as preserving human essence, they risk forming attachments to sophisticated illusions. The metaphor fosters misplaced trust, distorts public understanding of AI capabilities, and shapes policy debates on false premises. As these technologies proliferate into healthcare, legal proceedings, and intimate relationships, metaphorical precision becomes an ethical imperative. We conclude by proposing alternative frameworks that honestly represent what these systems can and cannot do.
2026
Centro Interdipartimentale per l'Etica e l'Integrità nella Ricerca
Keywords: Digital Twin, Artificial Intelligence, Identity


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/570861
