
Second-Person Authenticity and the Mediating Role of AI: A Moral Challenge for Human-to-Human Relationships?

Battisti, Davide
2025

Abstract

The development of AI tools, such as large language models and speech emotion and facial expression recognition systems, has raised new ethical concerns about AI’s impact on human relationships. While much of the debate has focused on human-AI relationships, less attention has been devoted to another class of ethical issues, which arise when AI mediates human-to-human relationships. This paper opens the debate on these issues by analyzing the case of romantic relationships, particularly those in which one partner uses AI tools, such as ChatGPT, to resolve a conflict and apologize. After reviewing some possible (though non-exhaustive) explanations for the moral wrongness of using AI tools in such cases, I introduce the notion of second-person authenticity: a form of authenticity that is assessed by the other person in the relationship (e.g., a partner). I then argue that at least some actions within romantic relationships should respect a standard of authentic conduct, since the value of such actions depends on who actually performs them and not only on the quality of the outcome produced. Therefore, using AI tools in such circumstances may prevent agents from meeting this standard. I conclude by suggesting that the proposed theoretical framework could also apply to other human-to-human relationships, such as the doctor-patient relationship, when these are mediated by AI; I offer some preliminary reflections on such applications.
Keywords

AI-mediated communication
Authenticity
Ethics of AI in relationships
Human relationships
Romantic relationships
Doctor-patient relationships
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/552449
Warning: the displayed data have not been validated by the institution.

Citations
  • PubMed Central: not available
  • Scopus: 12
  • Web of Science: not available