Comparing Human and Computing Marking
Phil Butcher at CALRG09
Phil Butcher drew on a long history (back to 1971) of response matching used in the computer marking of free-text answers to questions, where answers would typically be about 20 words long. The system using this algorithm had evolved into OpenMark online marking, but it was restricted in some areas. A visit to Robert Harding in Cambridge showed the team an impressive system, but they also investigated an alternative from Intelligent Assessment Technologies (IAT). That proved more usable and has now been brought into OpenMark. Tests showed that the computer was consistently in greater agreement with the question author than the average of six human markers.

Training IAT did take a lot of effort, but the payback is high at the OU because of the large student numbers. The next comparison was between OpenMark's internal algorithm and a variation using regular expressions. The surprise was that the algorithmic manipulation of keywords (the old OpenMark algorithm) was just as effective as the computational linguistics approach. This means that effort is now going into coding extra OpenMark questions rather than into computational linguistics. The final stage is to try to automate the extraction of the algorithmic rules.
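To make the keyword-matching idea concrete, here is a minimal sketch of regular-expression-based marking of a short free-text answer. This is purely illustrative: the function name, the patterns, and the example question are all hypothetical, not OpenMark's actual published rules, which combine keyword manipulation in more sophisticated ways.

```python
import re

def mark_response(response, required_patterns, forbidden_patterns=()):
    """Accept a short free-text answer if every required pattern matches
    and no forbidden pattern does (case-insensitive keyword matching).
    A hypothetical sketch of keyword/regex marking, not OpenMark itself."""
    text = response.lower()
    # A forbidden pattern (e.g. a negation) rejects the answer outright.
    if any(re.search(p, text) for p in forbidden_patterns):
        return False
    # Otherwise every required keyword pattern must appear somewhere.
    return all(re.search(p, text) for p in required_patterns)

# Hypothetical question: "Why does a hot-air balloon rise?"
required = [r"\b(hot|warm|heated)\b", r"\bair\b", r"\b(less dense|lighter)\b"]
forbidden = [r"\bnot\b"]

print(mark_response("The heated air inside is less dense than the air outside",
                    required, forbidden))  # True
print(mark_response("The air inside is not less dense",
                    required, forbidden))  # False
```

Even a simple scheme like this hints at why hand-coding the rules takes effort per question, and why automating the extraction of such rules is an attractive final step.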