Social Network Analysis of Learning: Application of Significance Tests to Massive Open Online Courses (MOOCs)

Cloud created by:

Rebecca Ferguson
15 May 2017

Steve Cayzer and Simon Coton

Abstract: Modern research into Massive Open Online Courses (MOOCs) using network analysis techniques is widening as the field grows. However, results are usually not validated with statistical significance tests, or the tests' assumptions are applied incorrectly. Here the use and application of different statistical significance tests, including parametric, non-parametric and permutation tests, is explored on two MOOCs hosted by the University of Bath, to ascertain how network cohesion, a measure of user connectedness, changes over the duration of a MOOC run. The analysis demonstrates that transitivity, a measure of connectedness between three or more participants, increases over the duration of a run; this has design implications that could reduce participant dropout rates in future MOOCs. The results for reciprocity, a measure of connectedness between two users, on the other hand, were inconclusive. Applying the significance tests established that Spearman's rank-order correlation coefficient is the most appropriate test for the given dataset, and showed that the choice of significance level and the incorrect application of test assumptions can produce misleading results. This study serves as a guide for future MOOC researchers navigating the complex environment of statistical significance tests as a way to understand and validate results, and it highlights the potential to reduce course dropout rates by encouraging transitive relationships between users early in a course. Future work could examine connectivity based on course topic, or the use of other forms of triadic relations.
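To make the abstract's measures concrete, the sketch below (not the authors' code) shows how reciprocity, transitivity, and a Spearman rank-order correlation could be computed with the `networkx` and `scipy` libraries. The reply network and the weekly cohesion values are invented for illustration only.

```python
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical discussion-reply network for one week of a MOOC:
# an edge u -> v means participant u replied to participant v.
G = nx.DiGraph()
G.add_edges_from([
    ("a", "b"), ("b", "a"),   # a reciprocated pair
    ("b", "c"), ("c", "a"),   # closes a triad a-b-c
    ("d", "a"),
])

# Reciprocity: the fraction of directed edges that have a reverse edge
# (connectedness between two users).
rec = nx.reciprocity(G)
print(rec)  # 2 of 5 edges are reciprocated -> 0.4

# Transitivity: the fraction of connected triples that close into
# triangles (connectedness among three or more participants),
# computed here on the undirected view of the network.
trans = nx.transitivity(G.to_undirected())
print(trans)  # 1 triangle over 5 connected triples -> 0.6

# Spearman's rank-order correlation between week number and a weekly
# cohesion measure, as one way to test for a monotonic trend over a run
# (illustrative numbers, not the paper's data).
weeks = [1, 2, 3, 4, 5]
weekly_transitivity = [0.10, 0.14, 0.13, 0.20, 0.25]
rho, p = spearmanr(weeks, weekly_transitivity)
print(rho, p)
```

Spearman's test is rank-based, so it makes no normality assumption about the cohesion values, which is one reason a non-parametric test can be preferable to a parametric one for this kind of data.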
