Explainable, effective, and efficient learning-to-rank models using ILMART

Nardini F. M.; Perego R.; Veneri A.
2025

Abstract

Learning ranking models that are both explainable and effective is an emerging topic within the research area of explainable AI. Several Learning-to-Rank (LtR) algorithms have recently been proposed that build models that are simple to explain and, at the same time, almost as effective as their state-of-the-art, black-box counterparts. In this work, we propose Interpretable LambdaMART (ILMART), a novel framework with different strategies to constrain the state-of-the-art LtR algorithm LambdaMART to generate interpretable models, i.e., ensembles whose trees can use either single features (main effects) or a limited number of interacting features (interaction effects). ILMART enables a straightforward trade-off between model explainability and effectiveness by precisely tuning the number of main and interaction effects during the learning phase. We show that slightly increasing their number allows ILMART models to reach ranking performance on par with full-complexity LambdaMART models. Furthermore, reproducible experiments conducted on publicly available LtR datasets demonstrate that ILMART improves nDCG@10 by up to 10% over state-of-the-art competitors while preserving an explainable structure. Finally, we explore the relationship between model explainability and inference efficiency by introducing a novel and easy-to-implement scoring algorithm for ILMART ranking models, achieving a speedup over the baseline.
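The constraint described in the abstract, trees limited to a single feature (main effects) or to a few interacting features (interaction effects), can be approximated with off-the-shelf gradient boosting tools. The following is a minimal sketch, not the authors' released implementation: it assumes LightGBM's lambdarank objective together with its interaction_constraints parameter, and runs on synthetic toy data.

import numpy as np
import lightgbm as lgb

# Toy ranking data: 100 documents, 5 features, 10 queries of 10 documents each.
rng = np.random.default_rng(42)
X = rng.random((100, 5))
y = rng.integers(0, 5, size=100)   # graded relevance labels in {0, ..., 4}
group = [10] * 10                  # number of documents per query

# Main-effects stage: declaring every feature as its own singleton group means
# no two distinct features may co-occur along a tree branch, so each tree
# effectively models a single feature.
ranker = lgb.LGBMRanker(
    objective="lambdarank",
    n_estimators=50,
    interaction_constraints=[[i] for i in range(X.shape[1])],
)
ranker.fit(X, y, group=group)

# Interaction-effects stage (conceptual): further boosting rounds could use
# constraints that admit a few selected feature pairs, e.g. [[0, 3], [1, 4]],
# so each new tree combines at most the features within one chosen pair.

ILMART's actual strategies for choosing which main and interaction effects to admit are described in the paper; the sketch only shows how such per-tree feature constraints can be expressed.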
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Keywords: Explainable Boosting; Explainable Ranking; ILMART; LambdaMART
Files in this record:

3733232.pdf
Access: open access
Description: Explainable, Effective, and Efficient Learning-to-Rank Models Using ILMART
Type: Published version (PDF)
License: Creative Commons
Size: 7.54 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/556023
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 1