Automated Evaluation of Comments in a MOOC Discussion Forum
Cloud created by:
15 May 2017
The potential to gain actionable outcomes from the analysis of learner activity in Massive Open Online Courses (MOOCs) is attracting interest among educators looking to use data to improve teaching and learning in this environment. However, the value of learning analytics derived from quantitative web-interaction metrics has been questioned, as such metrics do not necessarily reflect learners' critical thinking – an essential component of collaborative learning. Research indicates that pedagogical content analysis methods have value in measuring critical discourse in small-scale, formal, online learning environments, but little research has been carried out on high-volume, informal MOOC forums. The challenge in this setting is to develop valid and reliable indicators that operate successfully at scale. Learning analytics research suggests that machine learning techniques can be effective in automatically identifying pedagogical activity, and have the potential to provide practical insights of use to educators.
In this presentation, I report on initial findings from a case study in which a machine learning approach to the pedagogical analysis of comments was developed and applied to MOOC discussion forums. The approach was derived from manually rating 1500 MOOC comments using established pedagogical content analysis methods, and from the associations established between these ratings and features drawn from linguistic and word-count analysis. A machine learning algorithm was trained on this data and applied to unrated comments in a 'live' MOOC, and the automatically generated ratings were shared with educators working on the MOOC. The educators were then interviewed about the utility of this approach. Results indicate that, although educators questioned the importance of monitoring critical thinking in the context of their MOOC, in practice they explicitly sought out learner comments that exhibited features associated with high-level engagement. While some educators were critical of individual rating decisions, in general they found the ratings reasonably accurate, and considered the feedback to have potential for assisting the management of high volumes of comment data.
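The train-then-apply workflow described above can be sketched in code. The abstract does not specify the features or the learning algorithm used in the study, so everything below is an illustrative assumption: toy word-count features (comment length, reasoning connectives, question marks) stand in for the linguistic analysis, a simple nearest-centroid classifier stands in for the machine learning algorithm, and the ratings and example comments are invented.

```python
from collections import Counter

# Hypothetical linguistic cues; the study's actual feature set is not
# given in the abstract.
REASONING_WORDS = {"because", "therefore", "however", "evidence"}

def features(comment):
    # Simple word-count features for one comment.
    words = comment.lower().split()
    return {
        "length": len(words),
        "reasoning": sum(w.strip(".,?!") in REASONING_WORDS for w in words),
        "questions": comment.count("?"),
    }

def train(rated_comments):
    # Nearest-centroid "model": average the feature vectors of the
    # comments sharing each manually assigned rating.
    sums, counts = {}, Counter()
    for text, rating in rated_comments:
        counts[rating] += 1
        acc = sums.setdefault(rating, Counter())
        for k, v in features(text).items():
            acc[k] += v
    return {r: {k: v / counts[r] for k, v in acc.items()}
            for r, acc in sums.items()}

def predict(model, comment):
    # Assign an unrated comment to the rating with the closest centroid
    # (squared Euclidean distance in feature space).
    f = features(comment)
    def dist(rating):
        c = model[rating]
        return sum((f[k] - c.get(k, 0)) ** 2 for k in f)
    return min(model, key=dist)

# Toy manually rated training set (ratings are illustrative, not the
# study's actual rating scale).
rated = [
    ("I agree", "low"),
    ("Nice course", "low"),
    ("I disagree because the evidence suggests otherwise, "
     "however the model may still apply?", "high"),
    ("Therefore the argument fails because the data contradict it", "high"),
]
model = train(rated)
print(predict(model, "This holds because the evidence is strong"))  # prints "high"
```

In the study itself the model was trained on 1500 manually rated comments and its outputs were surfaced to educators, rather than used as a stand-alone classifier; the sketch only shows the general shape of deriving ratings from rated examples and applying them to unseen comments.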