Building automated survey coders via interactive machine learning

Esuli A; Moreo Fernandez AD; Sebastiani F
2019

Abstract

Software systems trained via machine learning to automatically classify open-ended answers (a.k.a. verbatims) are by now a reality. Still, their adoption in the survey coding industry has been less widespread than it might have been. Among the factors that have hindered a wider uptake of this technology are the effort involved in manually coding a sufficient amount of training data, the fact that small studies do not seem to justify this effort, and the fact that the process needs to be repeated from scratch whenever an entirely new coding task arises. In this article we argue for an approach to building verbatim classifiers that we call 'interactive learning', and that addresses all of the above problems. We show that, for the same amount of training effort, interactive learning delivers much better coding accuracy than standard 'non-interactive' learning. This is especially true when the amount of data we are willing to code manually is small, which makes the approach attractive for small-scale studies as well. Interactive learning also lends itself to reusing previously trained classifiers for new (albeit related) coding tasks, integrates better into the daily workflow of the survey specialist, and delivers a better overall user experience.
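The abstract describes the interactive-learning workflow only at a high level. Purely as an illustration, the sketch below shows what such a loop might look like if implemented as pool-based active learning with uncertainty sampling on top of scikit-learn; the function names (interactive_coding, ask_coder), the TF-IDF/logistic-regression pipeline, and the query strategy are assumptions for this example and are not taken from the paper.

```python
# Minimal, hypothetical sketch of an interactive learning loop for verbatim coding.
# Assumes pool-based active learning with uncertainty sampling; this is NOT the
# authors' actual system, only an illustration of the general idea.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def interactive_coding(verbatims, ask_coder, n_seed=10, n_rounds=20, batch_size=5):
    """Iteratively train a verbatim classifier, asking the human coder
    (ask_coder: text -> code) to label only the most uncertain answers.
    The seed set must contain at least two distinct codes for training to start."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(verbatims)

    labeled_idx = list(range(min(n_seed, len(verbatims))))   # seed set coded up front
    labels = [ask_coder(verbatims[i]) for i in labeled_idx]

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X[labeled_idx], labels)

        # Rank the still-unlabeled verbatims by prediction uncertainty
        # (lowest maximum posterior probability first).
        labeled = set(labeled_idx)
        unlabeled = [i for i in range(len(verbatims)) if i not in labeled]
        if not unlabeled:
            break
        confidence = clf.predict_proba(X[unlabeled]).max(axis=1)
        query = [unlabeled[j] for j in np.argsort(confidence)[:batch_size]]

        # The coder labels only the queried verbatims; the model is then retrained.
        for i in query:
            labeled_idx.append(i)
            labels.append(ask_coder(verbatims[i]))

    clf.fit(X[labeled_idx], labels)
    return clf, vectorizer
```

In this style of loop the manual-coding effort is spent where the current classifier is least certain, which is one plausible reading of why, for the same labelling budget, an interactive approach can outperform coding a randomly chosen training set.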
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Keywords: Machine learning; Text classification; Survey coding
Files in this item:
prod_401327-doc_139462.pdf — Published version (PDF), Adobe PDF, 782.75 kB, authorized users only (copy available on request)
prod_401327-doc_139935.pdf — Published version (PDF), Adobe PDF, 694.23 kB, open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14243/359366
Citations
  • Scopus: 5