Evaluating next generation technology enhanced learning
27 March 2011
ARV2011 Alpine Rendezvous
Evaluation Strand led by Giasemi Vavoula and Päivi Häkkinen.
STELLAR Grand Challenge 2: Orchestrating Learners
Key research questions related to this challenge include:
- What is the role of the teacher/more knowledgeable other in orchestrating learning and how does this relate to collaboration and the knowledge of students?
- What is the role of assessment and evaluation in learning and how can technology play a role?
- From the point of view of the learner what is the relationship between higher-order skills and learning of a particular knowledge domain and what is the role of technology in this respect?
- How can we identify the current learning trajectory of a person? Would it be beneficial to make them aware of trajectory switches?
What do we aim for when building collaborative learning environments?
Computer-supported collaborative learning (CSCL) is not a united field in terms of theory and methodology; therefore there are no straightforward guidelines for evaluation. There are several major positions or theoretical levels:
- Learning as an individual phenomenon – collaboration only valued to the extent that it results in learning outcomes for individual minds.
- Collaborative learning that can benefit an entire community of practice
- Intermediate positions may acknowledge that benefits accrue at group and individual levels in parallel
- The role of context – seen either as an independent variable that influences learning, or as intrinsic to and constituent of the learning process
Possible focus of analysis
- Participation activity
- Learning outcomes
- Level of discussion
- Reciprocity between different perspectives in discussions
- Sequences of interaction – knowledge-building episodes.
07:05 on 29 March 2011
Paivi, one of the two evaluation strand leaders, is interested in collaborative knowledge construction and activities.
She sees the main evaluation challenges as relating to basic processes and mechanisms of learning rather than to tools – for example, achieving high-quality collaboration and integrating in a meaningful way what is done individually.
Unit of analysis is key. Focus on learning and evaluation.
07:06 on 29 March 2011
‘Most reporting systems focus on how many horses get led to the water, and how difficult it was to get them there, but never quite get around to finding out whether the horses drank the water and stayed healthy.’
Patton, M. (1997) Utilisation-Focused Evaluation: the new century text. Sage.
What do we need to take into account when evaluating technology-enhanced learning?
- Know the learners
- Learning here and now – what learners do, think and feel on the spot
- Learning after and beyond – what learners retain, how and when they use it, how they transform it, how it transforms them
- Learning here but later – how learners change the learning, how knowledge evolves, how the community grows, how the community adapts
- Capture learning in context and across contexts – becomes more difficult as you move from formal settings to designed informal settings, to free-form informal settings, to everyday life. What is the context? Which learning needs, objectives, methods, activities and tools are important?
- Measure learning outcomes. This becomes more complex as you move from cognitive outcomes to issues such as how learning changes attitudes, emotions, values and beliefs
- Ethics. Becomes more complex as you move from informed consent to taking into account whose needs, objectives, outcomes and values are being considered. How are these negotiated? Whose agency, agenda or sponsorship is important?
- Technology. How does it affect the learning? Gets more complex as you move from usability to utility to appeal, trends and consumer culture.
- The bigger picture. How does this affect the learning institution? How does it affect practices? Moves from local impact to global impact.
- Changing the in/formality balance. Does the formal environment remain a formal environment?
- Capture and analyse learning in and across contexts
- Assess usability of technology
- Look to changes in the learner and in the learning process
- Consider organisational issues in the adoption of new learning practices
- Must span the lifecycle of the learning innovation that is being evaluated
07:24 on 29 March 2011
Giasemi's evaluation workshop plans
6 Challenges in technology enhanced learning:
Challenge 1 – capturing learning in and across contexts.
Challenge 2 – measuring learning outcomes
Challenge 3 – ethics
Challenge 4 – the T in TEL
Challenge 5 – seeing the bigger picture
Challenge 6 – changing the informal formal balance
Challenge 1 – Formal/informal settings: designed formal settings, informal settings, across settings. Location and space layout, social setting, learning needs and objectives, learning methods, activities and tools.
Challenge 2 – Extra challenge when technology is involved, as the interactions don’t only take place with the technology – there are also conversations with other learners etc. How can we be sure what is the result of the technology and what is the result of the learning experience as a whole?
Challenge 3 – Informed consent is fairly challenging. If we want people to use technology in their personal space and then report back, how can they be sure what will happen to their personal data? In terms of designing personal experience – whose needs are we designing for, and how are these needs negotiated? There is an ethical obligation to come in and look at this. Who drives the technology-enhanced learning experience? Whose technology is it? Whose agenda?
Challenge 4 – Usability, utility, appeal/trends/consumer culture – increasing complexity of measurement.
Challenge 5 – seeing the bigger picture
5 Precepts – Framework for evaluating TEL
- Capture and analyse learning across contexts
- Assess usability of technology and how it affects the learning experience
- Look beyond measurable cognitive gains to changes in the learner and the learning process and practice
- Consider organisational issues
- Span the lifecycle of the learning innovation
Two stages of data collection, followed by analysis:
- Collect stakeholder expectations
- Collect data on learning experiences
- Analyse the gaps between expectations and experience
What we’re going to do:
- Challenge the challenges
- Scrutinise the precepts
- Critique M3’s flexibility, adaptability, usefulness
- Hopefully identify one grand challenge to contribute to ARV
07:35 on 29 March 2011
Evaluation Grand Challenges:
Giasemi's group came up with the following evaluation grand challenges
- How do you identify learning success from data collected from everyday informal learning?
- Assessment is a fundamental grand challenge as we move into more open environments.
- Evaluation needs to be adaptive, agile to match what is happening at the design and development level.
- Evaluation objectives need to be aligned with the learning and the methods we use need to measure what matters not just what can be measured.
- Who owns the evaluation – who is it for and whose interests does it serve?
- Standardisation whilst at the same time allowing space for evaluation.
- We need to allocate adequate resources so that evaluation and assessment are meaningful.
10:11 on 29 March 2011 (Edited 10:22 on 29 March 2011)
Feedback from Päivi’s group
Activity for groups one and two: develop some guidelines for the Finnish government, which wants to introduce wider use of technology in schools, moving it into the mainstream for teaching and learning.
- Study the particular technologies used in schools
- Evaluate and understand how teachers and students are using the technology - personal use in social settings
- Introduce programmes to train teachers to use technology to support teaching and learning
- Understand types of learning in different settings
- Divide evaluation into two – technology (evaluate how students use content for learning) and design (how does it affect the learning experience; what are the influences of technology on pedagogy?)
- Quality of results produced by this approach vs those produced by a traditional approach. Evaluate interactions between learners, teachers and technology.
Proposed grand challenge
How can we carry out effective evaluation when the learner becomes the expert and may know more than the teacher?
The group spent a lot of time discussing the proposed framework and the challenge of evaluating individual and group networks. Reviewed some approaches to evaluation that have been high on the agenda of the UK government; considered the shift from public examination results towards an acceptance of looking at learning practices.
Proposed grand challenges
- How can we teach governments to ask better questions?
- How can we construct evaluations of TEL that capture complexities of interaction between policy, strategic leadership, teacher and student?
10:32 on 29 March 2011