Criteria for Effective Learning Design: Review of models and methods
Cloud created by:
31 August 2011
Criteria sets or statements form the backbone of any benchmark or assessment methodology. The Capability Maturity Model, for example, has been used to enhance the management of software production projects: it is based on the idea of a staged method that promises to lead organisations from a position of immaturity to maturity, and it makes a series of statements that define each ‘level’ or ‘process perspective’. This approach has since attracted more widespread interest, with the Portfolio, Programme and Project Management Maturity Model or P3M3 (version 1: 2006; version 2: Sowden, 2010), the E-Learning Maturity Model (Marshall, 2006) and the Open Educational Practices Maturity Matrix (OPAL, 2011) broadly following it in principle and structure. (Also check out JISC's E-learning Benchmarking and Pathfinder Project and related blog.)
Of course, this is one particular representation and method for understanding the process of teaching and learning design and delivery. Each methodology will offer benefits, such as useful techniques or ideas for criteria that define valuable processes or practices. For example, the maturity model identifies seven very useful aspects of management processes and ideas for their constituent parts, and has the scope to develop quite fine-grained criteria/measures that promote greater clarity and transparency. Equally, each will have limitations - for example, some may consider the maturity model too time-consuming to implement, too simple a model of how complex systems evolve, or too prescriptive, focusing on process rather than practices.
Reflecting on our experiences may help us to combine approaches and guard against too narrow a view of Curriculum Design audit and review. For example, when I was working recently on a project seeking to establish measures for benchmarking good assessment practice (Cross & Whitelock, 2010), two potential issues with a broadly maturity-model approach were:
- Potential for a ‘false horizon’ – where a desire to achieve maturity in processes becomes the goal, rather than the aim of achieving the best, right or most effective practices, procedures or outcomes.
- Importance of taking into account multiple perspectives on a process. Basing a maturity benchmark on written-down processes or management reports could hide fragmented realities and perpetuate the ‘it’s-all-going-well’ narrative, so an approach would need to mitigate this. For example, we found that different stakeholders may have very different views about how well a process is working. By asking two or more questions relating to the same measure/indicator (about both process maturity and other things such as effectiveness, sufficiency, etc.) you can begin to triangulate perspectives.
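The triangulation idea in the second point can be sketched in code. The following is a minimal, illustrative example only: the indicator name, the stakeholder groups, the ratings, and the 0.75 divergence threshold are all invented assumptions, not data from the project. It simply shows how asking two questions about the same indicator (process maturity and effectiveness) and comparing the spread of answers across stakeholder groups can surface the disagreements that a single written-down process description would hide.

```python
from statistics import mean, pstdev

# Hypothetical 1-5 ratings for a single indicator ("assessment feedback
# process"), from two questions asked of three stakeholder groups.
# All values and the 0.75 threshold are illustrative assumptions.
ratings = {
    "process maturity": {"managers": 4, "tutors": 4, "students": 3},
    "effectiveness":    {"managers": 4, "tutors": 2, "students": 2},
}

for question, by_group in ratings.items():
    scores = list(by_group.values())
    spread = pstdev(scores)  # high spread = stakeholder groups disagree
    flag = "investigate" if spread > 0.75 else "consistent"
    print(f"{question}: mean={mean(scores):.1f}, spread={spread:.2f} -> {flag}")
```

In this made-up case the maturity ratings look consistent while the effectiveness ratings diverge sharply between managers and the other groups, which is exactly the kind of ‘it’s-all-going-well’ gap that triangulation is meant to expose.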
This cloud is the counterpart to a second which asks for your contributions to a bank of criteria for monitoring, managing and evaluating curriculum and learning design. We would encourage you to use that cloud to suggest specific criteria or measures and this cloud to discuss how these could be used - the pros and cons of particular methodologies (such as the maturity model).
We look forward to hearing your thoughts and reflections.