Distilled Neural Networks for Efficient Learning to Rank (Extended Abstract)

Nardini F. M.; Rulli C.; Trani S.
2024

Abstract

Recent studies in Learning to Rank (LtR) have shown the possibility of effectively distilling a neural network from an ensemble of regression trees. This fully enables the use of neural-based ranking models in the query processors of modern Web search engines. Nevertheless, on CPU, ensembles of regression trees outperform neural models in terms of both efficiency and effectiveness. In this paper, we propose a framework to design and train neural networks that outperform ensembles of regression trees. After distilling the networks from tree-based models, we exploit an efficiency-oriented pruning technique that sparsifies the most computationally intensive layers of the model. Moreover, we develop inference-time predictors, which help devise neural network architectures that match the desired efficiency requirements. Comprehensive experiments on two public learning-to-rank datasets show that the neural networks produced with our novel approach are competitive with tree-based ensembles in terms of the effectiveness-efficiency trade-off, providing up to a 4x inference-time speed-up without degrading ranking quality.
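The efficiency-oriented pruning the abstract describes, sparsifying the most computationally intensive layers, can be illustrated with a minimal magnitude-pruning sketch. This is an illustrative assumption, not the paper's exact procedure: the `magnitude_prune` helper, the sparsity level, and the layer shape below are all hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of entries with smallest magnitude.

    Illustrative sketch of layer sparsification: the real framework may use a
    different criterion or a structured sparsity pattern.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value acts as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Hypothetical hidden layer of a distilled neural ranker
rng = np.random.default_rng(0)
W = rng.normal(size=(512, 256))
W_sparse = magnitude_prune(W, sparsity=0.9)
print(f"zero fraction: {np.mean(W_sparse == 0.0):.3f}")
```

A sparse weight matrix lets the matrix multiplications that dominate inference cost on CPU be executed with sparse kernels, which is the source of the speed-up the paper targets.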
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
ISBN: 979-8-3503-1715-2
Keywords: Distillation; Efficiency; Learning to Rank; Matrix Multiplication; Neural Networks; Pruning
File in this record:
Distilled_Neural_Networks_for_Efficient_Learning_to_Rank_Extended_Abstract.pdf (authorized users only)

Type: Editorial Version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 133.36 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/525362
Citazioni
  • Scopus: 0