Project additional information: SRAFTE

Cloud created by:

IET Research
13 May 2011

Extra content


This report was commissioned by The Higher Education Academy (the Academy) in January 2010. The purpose of the project was to:

  • Consult the academic community about which references on assessment and feedback with technology enhancement are most useful to practitioners;
  • Synthesise the main points from these references;
  • Provide signposts for readers to the original documents for further study.

The target readers for this report are:

  • Academics who are using technology enhancement for assessment and feedback;
  • Learning technologists working with academic colleagues;
  • Managers of academic departments.

The report addresses the main questions this audience is likely to have about assessment and feedback with technology enhancement: what can I do? How can I do it? What evidence can I use to convince stakeholders?

The report is structured around the steps involved in the creation and use of technology-enhanced assessment and feedback. The majority of the references concern the first two stages in the process: deciding to use technology to achieve a specific goal, and the design of the assessment and feedback.


The brief was to identify which publications practitioners find particularly useful amongst the many materials and papers available on technological support and enhancement for assessment and feedback, and then to focus on those references that were evidence-based, widely available and peer-reviewed. Of the 142 references recommended, the report reviews the 124 that were accessible, including journal articles, reports, books and websites.

We asked practitioners for references that they would recommend to other practitioners, explaining that we were particularly interested in evidence-based publications. We anticipated that the evidence provided would be high-quality statistical analyses showing that actual change had taken place in learning, in performance, or at an organisational level. However, while all reviewed references were from reputable sources and the majority were peer-reviewed (67.7%), only a minority provided quantitative data (28.2%), and relatively few (18.5%) provided the experimental designs or statistical analyses needed to show that technology had delivered a measurable enhancement. Most references focused on the reaction of students and teachers to the use of technology for assessment and feedback. So, although it may be ideal to have high-quality evidence before implementing a new assessment approach, in the absence of this level of support the insights and advice of reputable authors are valued by other practitioners and their messages have impact.

The report provides a detailed introduction to and summary of references useful to practitioners on assessment and feedback with technology enhancement applicable to Higher (and Further) Education in a UK context.

Key messages

The practitioners we consulted recommended articles that demonstrate the effectiveness of technology-enhanced assessment and feedback, challenge assumptions about the use of technology-based methods or give clear support for specific learning designs.

The review of the surveys and case studies that were considered valuable showed that technology can enable HEIs to deliver the characteristics of assessment and feedback that make them effective, such as frequent, on-demand formative tests with tailored feedback. Use of technology can enhance teaching by making effective learning designs such as these possible. Without the use of technology, constraints such as time, cost, student numbers, and geographical or temporal distribution would make these learning designs impractical to deliver.

Here are just some of the messages that are supported by evidence in the recommended literature:

  • Black et al.’s work shows an effect size of between 0.34 and 0.46 for assessment for learning (Black 2003), see page 8;
  • Tutors can use technology-enhanced methods to implement effective learning designs that would not otherwise be possible because of factors such as time constraints, student numbers and geographical or temporal distribution, see page 9; 
  • Regular, effective online testing can encourage student learning and improve students’ performance in tests, see page 10;
  • Student retention and inclusion can be increased by using technology-enhanced methods. Exam anxiety can also be reduced, see page 17;
  • Using technology-based methods does not disadvantage women or older students, see page 21;
  • Automated marking can be more reliable than human markers, and there is no effect of medium between paper-based and computerised exams, see pages 21 and 22;
  • The success of assessment and feedback with technology-enhancement lies with the pedagogy rather than the technology itself; technology is an enabler, see page 26;
  • Technology-enhanced assessment is not restricted to simple questions with clear-cut right and wrong answers; much more sophisticated questions are being used as well, see page 28;
  • Modern technology can be matched to the learning characteristics of the contemporary learner, see page 28;
  • The design of appropriate and constructive feedback plays a vital role in the success of assessment, especially assessment for learning. The literature offers detailed guidance on designing effective feedback such as conditions, research-backed principles and a typology, as well as specific advice for the design of audio feedback and peer assessment, see page 32;
  • The literature identifies the characteristics of technologies that are useful for assessment and feedback, see page 40;
  • Taking a team approach to the creation of technology-enhanced assessment and feedback is valuable, because successful implementation requires skills in the application and use of the technology itself, as well as in learning and the subject content, see page 42;
  • Staff development and support are vital when introducing and developing assessment and feedback with technology-enhancement, see page 43;
  • Testing the assessment and feedback to ensure it is reliable and valid and piloting it with people who are similar to or understand the targeted students are important stages in the development process. A good reporting system can help academics see and analyse the results (including student answers) and will help refine the assessment and feedback, see page 45;
  • It is important to prepare students to take technology-enhanced assessments by letting them practise with similar assessments using the same equipment and methods, much as they would practise on past papers, see page 48;
  • The reports generated by many technology-enhanced assessment systems are very helpful in checking the reliability and validity of each test item and the test as a whole, see page 49.


The references that were recommended to us are clearly having an impact on current practice and are found valuable. We would welcome more high-quality statistical studies that offer evidence to support the lessons that practitioners have learned from experience. It would also be interesting to do further work on why these particular articles have an impact on practitioners.

Denise Whitelock
10:45 on 14 May 2011

[Embedded content added by IET Research]

Denise Whitelock
11:26am 14 May 2011

A comprehensive presentation of SRAFTE findings can be found at
