Discretizing continuous attributes in AdaBoost for text categorization
Sebastiani, F.
2003
Abstract
We focus on two recently proposed algorithms in the family of "boosting"-based learners for automated text classification, AdaBoost.MH and AdaBoost.MHKR. While the former is a realization of the well-known AdaBoost algorithm specifically aimed at multi-label text categorization, the latter is a generalization of the former based on the idea of learning a committee of classifier sub-committees. Both algorithms have been among the best performers in text categorization experiments so far. A problem in the use of both algorithms is that they require documents to be represented by binary vectors, indicating the presence or absence of terms in the document. As a consequence, these algorithms cannot take full advantage of the "weighted" representations (consisting of vectors of continuous attributes) that are customary in information retrieval tasks and that provide a much more significant rendition of a document's content than binary representations. In this paper we address the problem of exploiting the potential of weighted representations in the context of AdaBoost-like algorithms by discretizing the continuous attributes through the application of entropy-based discretization methods. We present experimental results on the Reuters-21578 text categorization collection, showing that for both algorithms the version with discretized continuous attributes outperforms the version with traditional binary representations.
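The abstract describes turning continuous term weights (e.g., tf-idf values) into a small number of discrete values via entropy-based discretization, so that AdaBoost-style learners restricted to discrete attributes can still exploit weighted representations. The sketch below is only an illustration of that general idea, not the paper's exact procedure: it finds, for a single term, the cut point on its continuous weights that maximizes information gain with respect to category membership, in the spirit of Fayyad-Irani-style supervised discretization. The function names, the single-split criterion, and the toy data are assumptions introduced here for illustration.

```python
# Illustrative sketch (not the paper's method): entropy-based discretization of
# one continuous term weight (e.g., a tf-idf value) with respect to binary
# category labels, using a single information-gain-maximizing cut point.
from collections import Counter
from math import log2


def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())


def best_entropy_cut(values, labels):
    """Return the cut point on `values` that maximizes information gain."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_cut = 0.0, None
    for i in range(1, len(pairs)):
        lo, hi = pairs[i - 1][0], pairs[i][0]
        if lo == hi:
            continue  # no boundary between identical weight values
        cut = (lo + hi) / 2.0
        left = [label for _, label in pairs[:i]]
        right = [label for _, label in pairs[i:]]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    return best_cut, best_gain


# Toy usage: tf-idf weights of one term across six training documents,
# with 1 = document belongs to the category, 0 = it does not.
weights = [0.00, 0.05, 0.10, 0.40, 0.55, 0.70]
in_category = [0, 0, 0, 1, 1, 1]
cut, gain = best_entropy_cut(weights, in_category)
print(f"cut point: {cut:.3f}, information gain: {gain:.3f}")
```

In this toy run the best cut falls between 0.10 and 0.40, splitting the term's weight range into a "low" and a "high" interval that a boosting weak hypothesis could then test much like a binary feature.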