
Fostering reflection through automatic feedback in MOOCs: a strategy leveraging on participants' Self-Regulated Learning skills

Donatella Persico;Flavio Manganello;Francesca Maria Dagnino;Marcello Passarelli;Francesca Pozzi;Andrea Ceregini;Giovanni Caruso
2019

Abstract

Formative assessment is regarded as one of the main challenges in MOOC research and practice. While MOOC participants need formative assessment and feedback to self-regulate their own learning, providing timely and personalised feedback to large cohorts of students poses issues of scalability and sustainability that require designers to work out technological solutions for producing effective feedback. This paper puts forward a proposal for a type of feedback that is particularly suited to assessing non-declarative knowledge, such as critical thinking about a complex subject. The proposed feedback relies on the power of comparison with peers' opinions and practice, and is complementary to the better-known technique of peer review, which requires some degree of synchronicity between participants. The feedback consists in visualising a table that places the answers and behaviours of an individual participant side by side with descriptive statistics of the analogous data for the whole cohort. Quantitative and qualitative data were collected to investigate the self-reported usefulness and potential of the feedback. While usefulness was statistically higher than the mid-point of the scale, no significant difference was found when the nature of the data (answers to surveys vs. actions carried out) was considered as an independent variable. A few suggestions on how to improve this feedback were also identified.
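The feedback mechanism described in the abstract, juxtaposing an individual's answers with cohort-level descriptive statistics, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the participant identifiers, Likert-scale items, and the choice of mean and median as descriptive statistics are all assumptions made for the example.

```python
from statistics import mean, median

# Hypothetical cohort data (assumed for illustration): each participant
# answered three Likert-scale survey items (1-5).
cohort = {
    "p01": [4, 3, 5],
    "p02": [2, 4, 4],
    "p03": [5, 5, 3],
    "p04": [3, 2, 4],
}

def feedback_table(participant_id, cohort):
    """Build rows pairing one participant's answer to each item with
    descriptive statistics (mean, median) of the whole cohort's answers."""
    rows = []
    n_items = len(next(iter(cohort.values())))
    for i in range(n_items):
        answers = [responses[i] for responses in cohort.values()]
        rows.append((f"item {i + 1}",
                     cohort[participant_id][i],
                     round(mean(answers), 2),
                     median(answers)))
    return rows

# Render the side-by-side comparison for one participant.
for item, own, cohort_mean, cohort_median in feedback_table("p01", cohort):
    print(f"{item}: you = {own}, cohort mean = {cohort_mean}, "
          f"median = {cohort_median}")
```

The comparison is asynchronous by construction: the cohort statistics can be recomputed at any time from whatever data has accumulated, so no coordination between participants is needed, unlike peer review.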
Istituto per le Tecnologie Didattiche - ITD - Sede Genova
ISBN: 978-84-09-14755-7
Keywords: automatic feedback
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/391425