Low-quality training data in information extraction
Sebastiani F
2015
Abstract
In the last five years there has been a flurry of work on information extraction, i.e., on algorithms capable of extracting, from informal and unstructured texts, mentions of concepts relevant to a given application. Most of this literature is about methods based on supervised learning, i.e., methods for training an information extraction system from manually annotated examples. While a lot of work has been devoted to devising learning methods that generate more and more accurate information extractors, no work has been devoted to investigating the effect of the quality of training data on the learning process. Low quality in training data often derives from the fact that the person who has annotated the data is different from the one against whose judgment the automatically annotated data must be evaluated. In this paper we test the impact of such data quality issues on the accuracy of information extraction systems as applied to the clinical domain. We do this by comparing the accuracy deriving from training data annotated by the authoritative coder (i.e., the one who has also annotated the test data, and by whose judgment we must abide), with the accuracy deriving from training data annotated by a different coder.

File: prod_327140-doc_99697.pdf (open access)
Description: Low-quality training data in information extraction
Type: Publisher's version (PDF)
Size: 166.51 kB
Format: Adobe PDF
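The comparison the abstract describes can be sketched in miniature: train one extractor on the authoritative coder's annotations and another on a different coder's annotations, then score both against a test set labeled by the authoritative coder. The toy token stream, labels, and the naive most-frequent-label learner below are all hypothetical stand-ins for illustration, not the authors' actual system or data.

```python
from collections import Counter, defaultdict

def train_extractor(tokens, labels):
    """Learn, for each token, its most frequent label in the training
    data -- a deliberately naive stand-in for a real IE learner."""
    votes = defaultdict(Counter)
    for tok, lab in zip(tokens, labels):
        votes[tok][lab] += 1
    return {tok: c.most_common(1)[0][0] for tok, c in votes.items()}

def accuracy(model, tokens, gold):
    """Fraction of tokens whose predicted label matches the gold label."""
    pred = [model.get(t, "O") for t in tokens]
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

# Hypothetical clinical-style tokens (illustrative only).
tokens = ["fever", "and", "cough", "noted", "fever", "resolved"]
# Labels by the authoritative coder (whose judgment also defines the test set).
auth_labels  = ["SYMPTOM", "O", "SYMPTOM", "O", "SYMPTOM", "O"]
# Labels by a different coder, who misses one mention.
other_labels = ["SYMPTOM", "O", "O",       "O", "SYMPTOM", "O"]

# Test set annotated by the authoritative coder.
test_tokens, test_gold = ["cough", "and", "fever"], ["SYMPTOM", "O", "SYMPTOM"]

model_auth  = train_extractor(tokens, auth_labels)
model_other = train_extractor(tokens, other_labels)
print(accuracy(model_auth,  test_tokens, test_gold))  # trained on authoritative labels
print(accuracy(model_other, test_tokens, test_gold))  # trained on the other coder's labels
```

In this toy run the extractor trained on the authoritative labels scores higher on the authoritative test set than the one trained on the divergent labels, which is the effect the paper measures on real clinical data.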
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.