Fairness auditing, explanation and debiasing in linguistic data and language models

2023

Abstract

This research proposal is framed within the interdisciplinary exploration of the socio-cultural implications that AI exerts on individuals and groups. The focus is on contexts where models can amplify discrimination through algorithmic biases, e.g., in recommendation and ranking systems or abusive language detection classifiers, and on debiasing their automated decisions so that they become beneficial and just for everyone. To address these issues, the main objective of the proposed research project is to develop a framework for fairness auditing and debiasing of both classifiers and datasets, starting with, but not limited to, abusive language detection, and then broadening the approach to other NLP tasks. Ultimately, by questioning the effectiveness of adjusting and debiasing existing resources, the project aims to develop truly inclusive, fair, and explainable models by design.
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
English
Longo L.
xAI-2023 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings (LB-D-DC)
xAI-2023 - 1st World Conference on eXplainable Artificial Intelligence
241-248
https://ceur-ws.org/Vol-3554/
26-28/07/2023
Lisbon, Portugal
Responsible NLP
Explainability
Interpretability
Fairness
1
open
Marchiori Manerba, M
273
info:eu-repo/semantics/conferenceObject
04 Conference contribution::04.01 Contribution in conference proceedings
   Science and technology for the explanation of AI decision making
   XAI
   H2020
   834756
Files in this product:
File: prod_490206-doc_204217.pdf
Access: open access
Description: Fairness auditing, explanation and debiasing in linguistic data and language models
Type: Publisher's version (PDF)
Size: 1.24 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/452076
Citations
  • PubMed Central: not available
  • Scopus: 0
  • Web of Science: not available