
AI and Language: New Forms for Old Discriminations? A Case Study in Google Translate and Canva

Martina Mattiazzi
2025

Abstract

The development of artificial intelligence (AI) is one of the greatest technological revolutions in recent human history. AI technology is widely used in many fields, including education, where it is both studied as a discipline and used as a tool to overcome social barriers. As with any revolution, however, caution is needed: the growing use of these new computer systems also entails risks. One of them is the reinforcement of gender stereotypes and discrimination against women through linguistic output. Through an experimental analysis conducted on common AI-integrated apps (Google Translate and Canva), we investigate linguistic behaviours such as responses to command prompts. The results demonstrate the existence of gender biases in the AI's productions, in both textual and visual language. These biases are consequences of the structural inequalities present in society: it is not the technology that is sexist, but the datasets on which it is trained, which in turn draw on content produced by users and published on the internet. In a society based on democracy and equality, it is important to ensure that a technology as widespread as AI does not perpetuate existing stereotypes and does not become a new means of strengthening discrimination. From a linguistic perspective, this means paying attention to the textual and visual outputs the AI provides and scrutinising the datasets it has been trained on. Because of their central role in the education of new generations, schools and institutions should prepare students to view the phenomenon critically and provide them with the tools to counter it. This path could start by teaching students how AI works and the ethics of technology, and by using inclusive language in the educational context.
Istituto di Studi sul Mediterraneo - ISMed
Keywords: artificial intelligence, bias, Canva, Google Translate, inclusive language, linguistics, linguistic sexism, stereotypes.


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/533467