Tool: A Learning Design Support Environment (LDSE)

Cloud created by:

Juliette Culver
7 July 2009

This project builds on the findings of the London Pedagogical Planner project, which finished in 2008.

  • The LDSE project is funded under the EPSRC/ESRC TLRP programme; see the following link for more information
  • LDSE aims to respond to users' requirements
  • Scaffolds the design process
  • Aim is to focus on users' requirements to enable building on the work of others
  • Uses computational representations of a learning pattern to interpret the learning experience it offers
  • Uses theoretical criteria like the Conversational Framework

Extra content

Live blog of presentation at the Design Bash hosted by the OU July 2009

  • LDSE scaffolds the design process. Need to engage the whole academic workforce; can't just work it out and give it to them. The aim is to enable lecturers to become teaching designers, treating teaching as an experimental process, building on what other people have done, and publishing things they discover back to the community
  • Different ways of instantiating a sequence in a computationally understandable way. Interpret what experience a student has going through the sequence and evaluate it against theoretical criteria
  • One scenario for supporting learning design - types of decision made, e.g. select learning outcome, select learning design from found list, preview, test against theory, specify topic, select asset from found list, select evaluation criteria, inspect monitoring, review, run again, publish
  • LAMS does the evaluation criteria/monitoring. LDSE is looking at the first stages instead
  • Flash demo of LDSEeker - can search a list of learning outcomes; choosing one returns a number of sequences, e.g. in LAMS, Phoebe, GLOTool, which can be previewed in the tool and annotated
  • Conversational Framework - the most minimal account one could possibly have that represents what needs to go on in the learning process; possible to represent theories of learning in terms of how they map onto the Conversational Framework
  • iCoper - representation of a learning pattern; can do a deeper analysis of one particular part using the same format   http://www.icoper.org/
  • Can check against a checklist based on the Conversational Framework, e.g. does it motivate learners to a) access the teacher/expert concept, b) ask questions of the teacher or their peers, c) offer their own ideas to the teacher or peers, etc., for each link in the framework
  • Can represent links in the CF by different LAMS activities and therefore see possible gaps in the checklist
  • Being able to search a federated system of pattern repositories is coupled with trust; need a walled garden to search
  • Question: How do you evaluate if a sequence is good or bad? Answer: In terms of that checklist - ideally you need all of it. Or choose an approach and use only a couple of items as criteria, but it is much better if you have all of it. Evaluation is the extent of the match between the sequence and the checklist.
  • Question: What about grain level? You won't get the whole checklist if you look at a fine grain level. Not judgemental, just tells you what it is
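The checklist evaluation described above - mapping activities onto Conversational Framework links and reporting gaps and coverage - can be sketched as a simple set computation. This is a hypothetical illustration, not LDSE's actual implementation: the CF link names and the activity-to-link mapping are assumptions for the sake of the example.

```python
# Hypothetical sketch: evaluate a learning sequence against a checklist of
# Conversational Framework (CF) links. All names here are illustrative.

# Links in the CF that a sequence would ideally cover
CF_LINKS = {
    "access_expert_concept",
    "ask_questions",
    "offer_own_ideas",
    "practise_task",
    "receive_feedback",
}

# Assumed mapping from activity types (e.g. LAMS activities) to CF links
ACTIVITY_TO_LINKS = {
    "lecture": {"access_expert_concept"},
    "forum": {"ask_questions", "offer_own_ideas"},
    "quiz": {"practise_task", "receive_feedback"},
}

def evaluate(sequence):
    """Return (covered links, gaps, coverage ratio) for a sequence."""
    covered = set()
    for activity in sequence:
        covered |= ACTIVITY_TO_LINKS.get(activity, set())
    gaps = CF_LINKS - covered
    return covered, gaps, len(covered) / len(CF_LINKS)

# A lecture plus a forum covers 3 of the 5 links; the quiz-related
# links show up as gaps in the checklist
covered, gaps, score = evaluate(["lecture", "forum"])
print(sorted(gaps))
print(score)
```

As in the Q&A above, the output is descriptive rather than judgemental: a fine-grained sequence simply covers fewer links, and the gaps tell the designer what the sequence does and does not do.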

Juliette Culver
10:46 on 7 July 2009 (Edited 16:01 on 17 September 2009)
