Towards the Creation of a Ranking System for Online Universities: Quali-Quantitative Analysis of a Participatory Workshop
Manganello F; Passarelli M; Persico D; Pozzi
2018
Abstract
University ranking systems are being implemented by different organizations in an attempt to evaluate and compare Higher Education Institutions at a global level. Despite their increasingly widespread use, ranking systems are strongly criticized for their social and economic implications, as well as for limitations in their technical implementation. One of the most relevant limitations is that they do not consider the specific characteristics of online education. Despite the existence of benchmarking tools tailored to evaluating online programmes or courses, online universities run the risk that their position in most rankings misrepresents their actual quality relative to that of traditional universities. Thus, building a ranking system able to reflect the specific nature of online universities, so that they are not evaluated through unsuitable indicators devised for traditional universities, is a need that deserves to be addressed in order to protect quality in the online world. However, there are a number of challenging aspects to be considered in order to develop a ranking tool specifically designed for online universities. These mainly include, but are not limited to, the need to identify the most adequate criteria and indicators to reflect and measure the specificities of online universities. Starting from these premises, we focused on the definition of the main criteria to be considered when assessing and ranking online universities. To this end, we took a participatory approach, involving several stakeholders and informants in an attempt to reach the broader Higher Education Institutions community. This approach was implemented through a first phase, in which we collaboratively elaborated a preliminary set of criteria for online Higher Education Institutions, and a second phase, in which a two-round Delphi Study and a national workshop were run to refine, enrich and evaluate the initial set of criteria. In this paper, we present the approach adopted and the findings of the participatory workshop. We conducted the national participatory workshop with 38 participants from different backgrounds (including academic professors and researchers, educators, private organizations and institutional representatives). The workshop included a morning session devoted to a round-table discussion and an afternoon session consisting of a group work discussion-based activity. The round-table discussion was video-recorded and transcribed. In addition, researchers' field notes were collected during both sessions. Transcribed data and researchers' field notes from the round-table discussion were analysed following a thematic analysis approach, while the data generated within the group work discussion-based activity were statistically treated and then interpreted on the basis of researchers' field notes. Both sessions yielded significant feedback on the proposed list of criteria. The round-table discussion underlined the relationship between ranking systems, quality assurance measures, and accreditation systems, in most cases by identifying their different aims.
The main points that emerged from the discussion are recommendations for the technical implementation of any future ranking system, which should be: statistically robust, clearly defined (transparent) and as objective as possible; capable of catering for the needs of different audiences; able to consider quality at all levels, from the micro-level (Course) to the meso-level (Department) and the macro-level (Institution); and able to elicit reliable and accurate data from different sources. The group work discussion-based activity, in addition to producing a ranked list of the proposed criteria based on their (perceived) relative importance, underlined the difficulty of keeping some of the proposed criteria separate, and therefore suggested ways to merge them into broader categories. It also pointed out that the terms used to define the proposed criteria and parameters are in most cases open to a very wide range of interpretations, and that some criteria should be added in order to take specific system figures into account. Overall, the actions put in place so far have turned out to be quite effective in terms of the feedback collected. We have begun to develop, test and refine representative performance indicators of online education quality based on common criteria. The participatory approach allowed us to foster stakeholders' reflection on the peculiar nature of online universities and discussion towards the definition of criteria and indicators to be used to rank them. Among the main conclusions of this work, teaching, student support and student experience turned out to be more important than any other criterion; organization, teacher support, research, sustainability and technological infrastructure are middle-ground criteria; while reputation was deemed the least important criterion.
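The abstract reports that the group work data were "statistically treated" to yield a ranked list of criteria, but it does not specify the procedure. Purely as an illustrative sketch, and not the authors' actual method, the Python snippet below shows one plausible way per-group rankings of the criteria named above could be aggregated into an overall ordering by mean rank; the example rankings and the mean-rank (Borda-style) aggregation are assumptions introduced here for demonstration only.

```python
# Hypothetical illustration: aggregate per-group rankings of criteria into an
# overall ordering by mean rank. The rankings below are invented example data,
# NOT the workshop results.

from statistics import mean

CRITERIA = [
    "teaching", "student support", "student experience", "organization",
    "teacher support", "research", "sustainability",
    "technological infrastructure", "reputation",
]

# Each group orders all criteria from most (position 1) to least important.
group_rankings = [
    ["teaching", "student experience", "student support", "organization",
     "teacher support", "research", "technological infrastructure",
     "sustainability", "reputation"],
    ["student support", "teaching", "student experience", "teacher support",
     "organization", "sustainability", "research",
     "technological infrastructure", "reputation"],
]

def mean_rank(criterion: str) -> float:
    """Average 1-based position of a criterion across all group rankings."""
    return mean(ranking.index(criterion) + 1 for ranking in group_rankings)

# Lower mean rank = higher perceived importance.
for criterion in sorted(CRITERIA, key=mean_rank):
    print(f"{criterion}: mean rank {mean_rank(criterion):.1f}")
```

Under this assumed aggregation, criteria consistently placed near the top across groups (e.g., teaching and the student-related criteria) would surface first, mirroring the pattern of results described in the abstract.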


