

Cloud created by:

Simon Cook
2 May 2014


Headings, emboldened key words and interesting points


The research looks at a range of issues: charging employers for access to student data; time spent on tasks; drop-out rates; ways of guarding against cheating in online tests; and cultural and technological barriers to participation. I have highlighted a range of research methods that I have seen used, for example: virtual ethnography; learning analytics; standardized tests (before-and-after design); semi-structured interviews; Item Response Theory, scale linking, and score equating (to produce 'fair and equitable test scores'); and online questionnaires. I did not have time to finish it off, but I hope this is useful material to discuss.




Is anything known about the educational impact of MOOCs, as distinct from their news impact?


Research Ethics in Emerging Forms of Online Learning: Issues Arising from a Hypothetical Study on a MOOC

Esposito, Antonella. Electronic Journal of e-Learning 10.3 (2012): 315-325.

Focuses on ethical issues arising in a hypothetical virtual ethnography study aiming to gain insight into participants' experience in an emergent context of networked learning.





Providers of Free MOOC's Now Charge Employers for Access to Student Data

Young, Jeffrey R. Chronicle of Higher Education (December 4, 2012).


MOOCs Need More Work; So Do CS Graduates

By: Guzdial, Mark; Adams, Joel C.

Communications of the ACM, Volume 57, Issue 1, Pages 18-19, January 2014


No, the course was not a success. Of course, the data are problematic: Many people have observed that MOOCs often have terrible retention rates, but is retention an accurate measure of success? We had 21,934 students enrolled, 14,771 of whom were active in the course. Our 26 lecture videos were viewed 95,631 times. Students submitted work for evaluation 2,942 times and completed 19,571 peer assessments (the means by which their writing was evaluated). However, only 238 students received a completion certificate, meaning that they completed all assignments and received satisfactory scores.
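Taken at face value, the figures quoted above can be turned into rates. A minimal sketch, using only the numbers reported in the excerpt:

```python
# Figures reported for the course in the excerpt above
enrolled = 21934    # students enrolled
active = 14771      # students active in the course
certified = 238     # students receiving a completion certificate

active_rate = active / enrolled
completion_of_active = certified / active
completion_of_enrolled = certified / enrolled

print(f"active:                  {active_rate:.1%}")        # 67.3%
print(f"certified (of active):   {completion_of_active:.1%}")   # 1.6%
print(f"certified (of enrolled): {completion_of_enrolled:.1%}") # 1.1%
```

A certificate rate of roughly 1.6% of active students illustrates why retention is such a contested measure of MOOC success.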




What research methods were used?

MOOCs and the funnel of participation

Doug Clow

The Open University, Walton Hall, Milton Keynes, United Kingdom

Massive Online Open Courses (MOOCs) are growing substantially in numbers, and also in interest from the educational community. MOOCs offer particular challenges for what is becoming accepted as mainstream practice in learning analytics. Partly for this reason, and partly because of the relative newness of MOOCs as a widespread phenomenon, there is not yet a substantial body of literature on the learning analytics of MOOCs. However, one clear finding is that drop-out/non-completion rates are substantially higher than in more traditional education.


This paper explores these issues, and introduces the metaphor of a 'funnel of participation' to reconceptualise the steep drop-off in activity, and the pattern of steeply unequal participation, which appear to be characteristic of MOOCs and similar learning environments. Empirical data to support this funnel of participation are presented from three online learning sites: iSpot (observations of nature), Cloudworks ('a place to share, find and discuss learning and teaching ideas and experiences'), and openED 2.0, a MOOC on business and management that ran from 2010 to 2012. Implications of the funnel for MOOCs, formal education, and learning analytics practice are discussed.


Correlating skill and improvement in 2 MOOCs with a student's time on tasks

Because MOOCs offer complete logs of student activities for each student there is hope that it may be possible to find out which activities are the most useful for learning. We start this quest by examining correlations between time spent on specific course resources and various measures of student performance: score on assessments, skill as defined by Item Response Theory, improvement in skill over the period of the course, and conceptual improvement as measured by a pre-post test.
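As a rough illustration of the approach described above (not the authors' actual analysis), correlating per-student time on task with a performance measure reduces to computing Pearson's r over the activity logs. A minimal sketch with hypothetical data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical logs: minutes on lecture videos vs. assessment score
minutes = [30, 45, 60, 90, 120]
scores = [55, 60, 62, 75, 80]

r = pearson(minutes, scores)
print(f"r = {r:.2f}")
```

The study's other measures (IRT-estimated skill, improvement over the course, pre-post conceptual gain) would replace the score column; the correlation machinery is the same.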

MOOCs Need More Work; So Do CS Graduates

By: Guzdial, Mark; Adams, Joel C.

Communications of the ACM, Volume 57, Issue 1, Pages 18-19, January 2014

In terms of empirical studies, Mike had an advantage that Karen did not: there are standardized tests for measuring the physics knowledge he was testing, and he used those tests before and after the course.
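A common way to summarize this kind of before-and-after standardized testing in physics education research is the normalized gain (Hake, 1998): the fraction of the possible improvement a student actually achieved. A minimal sketch, not drawn from the article itself:

```python
def normalized_gain(pre, post, max_score=100):
    """Normalized gain: improvement as a fraction of the improvement possible."""
    return (post - pre) / (max_score - pre)

# Hypothetical pre/post scores on a standardized concept inventory
g = normalized_gain(40, 70)
print(g)  # 0.5: the student gained half the available headroom
```

The attraction of the measure is that it lets courses with students at different starting levels be compared on a common scale.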


Patterns of Engagement in Connectivist MOOCs.

Source: Journal of Online Learning & Teaching, June 2013, Vol. 9, Issue 2, pp. 149-159.

Author(s): Milligan, Colin; Littlejohn, Allison; Margaryan, Anoush

Abstract: Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized - active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.

Research Ethics in Emerging Forms of Online Learning: Issues Arising from a Hypothetical Study on a MOOC

Esposito, Antonella. Electronic Journal of e-Learning 10.3 (2012): 315-325.

Focuses on ethical issues arising in a hypothetical virtual ethnography study aiming to gain insight into participants' experience in an emergent context of networked learning. Topics such as privacy concerns in a public online setting, the choice between overt and covert research, the researcher's role as observer or participant, and narrow or loosely defined applications of informed consent and anonymity are outlined, presenting a range of different options. The article shows that ethical decision-making is an iterative procedure and an integral part of the research design process.






Fair and Equitable Measurement of Student Learning in MOOCs: An Introduction to Item Response Theory, Scale Linking, and Score Equating 

AUTHORS J. Patrick Meyer, Ph.D. University of Virginia

Shi Zhu, Ph.D. University of Virginia

Massive open online courses (MOOCs) are playing an increasingly important role in higher education around the world, but despite their popularity, the measurement of student learning in these courses is hampered by cheating and other problems that lead to unfair evaluation of student learning. In this paper, we describe a framework for maintaining test security and preventing one form of cheating in online assessments. We also introduce readers to item response theory, scale linking, and score equating to demonstrate the way these methods can produce fair and equitable test scores.

Item analysis lies at the heart of evaluating the quality of tests developed through classical methods. Item difficulty and discrimination are two statistics in an item analysis. Item difficulty is the mean item score and item discrimination is the correlation between the item score and test score. These statistics allow instructors to identify problematic items such as those that are too easy or too difficult for students and items that are unrelated to the overall score. Instructors can then improve the measure by revising or eliminating poorly functioning items. An end goal of item analysis is to identify good items and maximize score reliability.

Although classical methods are widely used and easy to implement, they suffer from a number of limitations that are less evident to instructors. One limitation is that classical test theory applies to test scores, not item scores. Item difficulty and discrimination in the classical model are ad hoc statistics that guide test development. They are not parameters in the model. Through rules of thumb established through research and practice (see Allen & Yen, 1979), these statistics aid item selection and help optimize reliability. However, they do not quantify the contribution of an individual item to our understanding of the measured trait.


The Professors behind the MOOC Hype

Kolowich, Steve. Chronicle of Higher Education (March 18, 2013).

The largest-ever survey of professors who have taught MOOCs, or massive open online courses, shows that the process is time-consuming but, according to the instructors, often successful. Nearly half of the professors felt their online courses were as rigorous academically as the versions they taught in the classroom. The survey, conducted by The Chronicle, attempted to reach every professor who has taught a MOOC. The online questionnaire was sent to 184 professors in late February, and 103 of them responded.



What could be known about MOOCs?

MOOCs Need More Work; So Do CS Graduates

By: Guzdial, Mark; Adams, Joel C.

Communications of the ACM, Volume 57, Issue 1, Pages 18-19, January 2014

Our team is now investigating why so few students completed the course, but we have some hypotheses. For one thing, students who did not complete all three major assignments could not pass the course. Many struggled with technology, especially in the final assignment, in which they were asked to create a video presentation based on a personal philosophy or belief. Some students, for privacy and cultural reasons, chose not to complete that assignment, even when we changed the guidelines.


The Impact of MOOCs on Higher Education

Dennis, Marguerite. College and University 88.2 (2012): 24-30.


The author discusses the impact of MOOCs on the following: (1) accreditation agencies; (2) book publishers; (3) federal and state subsidies; (4) rating agencies; (5) Advanced Placement exams; (6) enrollment and retention managers; (7) branch campuses; (8) career counselors; (9) chief financial officers; (10) facilities managers; (11) IT managers; (12) students; (13) faculty; (14) venture capitalists; and (15) for-profit schools. The author concludes that MOOCs will not replace colleges and universities. They will supplement, not replace, traditional higher education.

Are the research methods being developed 'new'?





