Evaluation plan for OER module for teacher training

Cloud created by:

Ida Brandão
23 February 2013

Evaluation Plan

OER module for teacher training

Introduction and Background

This evaluation plan follows a procedure similar to that of the course «Inclusion and Accessing Technology», developed within the scope of a European project.

The partners agreed to adapt it to their national contexts and to validate it in different modalities, on a tutor-based and on a self-study approach. In either case, each partner will present a report with the results of the evaluation.

One of the partners has already validated the course with a group of 23 SEN teachers, belonging to a network of ICT Resources Centres for Special Needs, who acted as both participants and evaluators.

A pair of facilitators conducted the course on a peer basis, in dialogue with the participants.

This OER module will be added to the course and may also be delivered as a stand-alone learning unit. This issue will be discussed with the European partners, according to the following plan.

Purposes

The purpose of the evaluation is to improve the new module according to suggestions from the participants' point of view.

The module will undergo formative evaluation, as the previous course did. The participants will be SEN teachers who will act as both participants and evaluators.

Assessment of participants: the teachers had to create an e-portfolio and gather the assignments for each of the four topics. A technical forum was opened to discuss difficulties with tools (the use of some Web 2.0 tools was required), along with a thematic forum for each topic.

At the end, the participants who fulfilled all tasks received a certificate as validators of the course.

Audiences

Two groups of participants/evaluators will be involved.

  1. Teachers who have SEN pupils in their classes
  2. SEN teachers belonging to the ICT Resources Centres for Special Needs.

Decisions and Questions

Each European partner will decide in which modality (tutor-based or self-study) to run the course and/or just the OER module.

Each partner will decide which target teachers to involve.

Each partner will adapt the new module (translating/replacing resources, adapting activities).

Each partner will apply the agreed evaluation method and tools so that a final report can be produced and comparisons and conclusions drawn.

Methods

The participants will be informed of the purposes of the evaluation and of the need for their feedback throughout the course/learning unit and, in particular, at the end, through an evaluation discussion forum and an online questionnaire.

Sample

The participants from whom data will be collected for the evaluation are identified. The facilitators will also act as evaluators themselves.

Instruments

The former course was run on the Moodle platform, and the final topic focused on course evaluation, with a forum containing several threads of discussion.

An evaluation questionnaire was produced in Moodle and answered by each participant.

The same instruments may be used again, adapted where necessary.

Limitations

The evaluators will be teachers; it won't be possible to involve other professionals such as graphic designers, ICT experts, or researchers.

Logistics and time

The final evaluation report will integrate the national reports of each partner.

A report template will be agreed and followed.

An English version of the course/OER module is provided, and each partner will make the necessary adaptations to their own context.

The evaluation questionnaire will be provided as well and translated into each language.

The final report will have to be delivered by the end of 2013; therefore, each partner will have to plan and validate the course/OER module within that deadline.

Budget

Each partner has a European budget for the whole project, but no specific funds are allocated to course evaluation.


Contribute

Tom Reeves
10:46pm 23 February 2013


Ida, thank you for sharing your formative evaluation plan. It appears that you are building on a solid foundation of previous evaluation experience. I want to comment on two things. First, it is sometimes useful to use an evaluation matrix to illustrate the relationship between specific questions and the data collection methods used in the evaluation. On one axis of the matrix can be listed the questions that are to be addressed by the evaluation. On the other axis of the matrix can be listed all the data collection methods that are reliable, valid, and feasible for this particular evaluation. An example of such a matrix can be seen here: http://treeves.coe.uga.edu/edit8350/EM.html The matrix not only provides an overview of the evaluation methods in the plan. It also helps to ensure that each question is addressed by one or more data collection methods. Although it is not always possible in every evaluation, it is desirable to “triangulate” most questions with more than one evaluation method.
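To make the idea concrete, here is a minimal sketch of such a matrix, using hypothetical questions and methods rather than the actual ones from this plan (in Python, purely for illustration):

    # Illustrative only: hypothetical evaluation questions mapped to the
    # data collection methods intended to address them; not the actual
    # questions or methods from this project.
    evaluation_matrix = {
        "Is the module workload appropriate?": ["questionnaire", "discussion forum"],
        "Do the activities suit SEN teachers' practice?": ["discussion forum", "e-portfolio review"],
        "Are the Web 2.0 tools usable for first-time users?": ["technical forum"],
    }

    # Flag questions addressed by fewer than two methods (i.e. not triangulated).
    for question, methods in evaluation_matrix.items():
        status = "triangulated" if len(methods) >= 2 else "only one method"
        print(f"{question} -> {', '.join(methods)} ({status})")

Reading the matrix this way makes it easy to spot questions that rely on a single data source and might benefit from an additional method.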

Second, you wrote that a report template is to be developed and followed. This is a very good idea. One report format that I have used breaks evaluation reports into mini-reports that are no longer than two pages, each with four subsections. The first subsection can be an attention-grabbing headline that is intended to motivate the reader to focus on the issue or problem addressed in the specific subsection. The headline can be an abbreviated version of the results (e.g., “Use of the OER by SEN teachers shown to enhance learning significantly.”) or a quote from one of the evaluation participants (e.g., “After completing this module, I finally recognize the real potential of interactive learning online.”).

The second subsection contains the results or findings that are related to the issue highlighted in the headline. This section will often include tables or figures that summarize the findings. The data is generally reported in this section in as straightforward a manner as possible without subjective interpretation.

The third subsection presents a discussion or interpretation of the data. This subsection is meant to communicate the evaluator’s informed perspectives concerning the meaning of the data or results as delineated in the second subsection. This is usually the longest part of the two-page format. 

The fourth and last subsection of this two-page report format presents a bottom-line recommendation concerning the issue. The two-page format starts by getting the audience to focus on the issue (e.g., the OER is highly valued by the SEN teachers); next it provides the data (e.g., SEN teacher ratings of the module averaged 4.7 on a five point Likert scale); then it presents an interpretation or discussion (e.g., although the evidence clearly indicates that SEN teachers value this OER, they were able to make several recommendations for extending and enhancing the module); and finally it leads to a bottom-line recommendation. The purpose of the bottom-line is to recommend a logical next step for using the results in making decisions (e.g., the self-study version of the module should be translated into French, Spanish, Portuguese, German, and Italian without delay).

More information about this format for reporting an evaluation can be found here: http://treeves.coe.uga.edu/edit8350/ERS.html

Thank you for your valuable work, Ida.

Tiffany Crosby
11:04pm 23 February 2013


I'm not overly familiar with your design project, so that could be part of the reason for my question, but could you elaborate on what specifically the evaluators would need to provide feedback on? Will the teachers participating in the pilot complete a survey? What questions would be on the survey? I'm very interested in the types of questions people include in evaluations and whether that impacts the usefulness of the feedback from a design standpoint.

Tiffany

Ida Brandão
11:02am 25 February 2013 (Edited 11:10am 25 February 2013)


Prof. Reeves, 

I appreciate your support and suggestions, and I hope to clarify Tiffany's doubts.

Perhaps it would be useful to clarify that the course «Inclusion and Accessing Technology» was tested (in my country) by SEN teachers who belong to a national network of ICT Resources Centres for Special Needs. They assess pupils' needs for assistive technology and provide information and peer training to teachers who work with SEN pupils (at primary and secondary school levels).

As explained, the OER module is a development of that course, either as an added topic or as a stand-alone learning unit; this is still open for discussion. I may even consider testing it both ways in my country with different groups of teachers.

The evaluation tools we previously decided to apply were a final discussion forum addressing the following items, with future course releases in mind:

  • pre-conditions for attending an online course (we've decided to clarify pre-conditions, since for some participants it was their first experience of fully online learning)
  • face-to-face sessions on Moodle and Web 2.0 tools (some participants considered that it might be useful in the future to have one or two F2F sessions to address technical issues)
  • target participants' profile (embracing mainstream teachers, technical/therapeutic staff, parents of SEN pupils)
  • course duration and extent (participants found the duration of the course too short to explore, produce and comment on colleagues' work)
  • topics/contents review (possibly merging the first two topics, on inclusion policies and SEN measures)
  • activities/assignments review (possibly F2F sessions to train participants in free Web 2.0 tools, in the case of local courses)

Moodle questionnaire (most European partners agreed to use a translated version, with adaptations where needed):

  • Participants' profile - professional experience; special needs expertise
  • Course organization - learning objectives; workload; time organization
  • Participants' engagement - (active) participation; tasks fulfillment
  • Learning environment and methods
  • Learning resources (readings, links, bibliography...) 

We used mostly 5-point Likert scales and open comments in the questionnaire.
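For illustration only, here is a minimal sketch (in Python, with made-up item names and scores, not real data from the course) of how 5-point Likert answers exported from the questionnaire could be summarised outside Moodle:

    # Made-up 5-point Likert responses (1 = lowest, 5 = highest) for two
    # hypothetical questionnaire items; not real data from this evaluation.
    from statistics import mean
    from collections import Counter

    responses = {
        "Course organization: workload was adequate": [4, 5, 3, 4, 5, 4],
        "Learning resources were useful": [5, 5, 4, 3, 4, 5],
    }

    # Report the mean rating and the distribution of scores for each item.
    for item, scores in responses.items():
        print(f"{item}: mean = {mean(scores):.1f}, distribution = {dict(Counter(scores))}")

In practice Moodle produces these summaries automatically, but a small script like this can help when merging partners' exports into one report.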

Each European partner was free to choose:

  • the modality, tutor-based or self-study learning: in the first case, both formative assessment and a final evaluation will take place; in the second, only the final course evaluation questionnaire
  • the target teachers: SEN teachers or mainstream teachers with SEN pupils
  • how to adapt the English version of the course, made available in Moodle: some partners will revise the topics, and resources will be translated/adapted/replaced with national ones (none of the European partners has English as their mother tongue)
  • the learning environment: some will run the course in Moodle, while other partners will use their own institutional platforms

The final report is already structured, but it's quite flexible in order to accommodate the partners' different approaches and conclusions. For the moment, only one validation has been completed, and it has already been inserted in the report so that partners have an idea of our outcomes. It is quite extensive, since qualitative answers and open comments were privileged (we are dealing with small groups). The quantitative answers to the Likert scales were computed automatically by the Moodle questionnaire.

I don't know how much of the report template will remain at the end, owing to the flexibility agreed. 

I think the evaluation matrix is useful, though we have no big teams of developers/experts/designers doing the job. Each European partner is using very few resources, practically only the people directly involved in the project. The same people are playing different roles: course designers/developers, facilitators, observers.

In our case we had a pair of facilitators (including myself), and it was important to have a colleague's outside perspective and suggestions. In practical terms, the evaluation relies mostly on the participants/validators (regarded as peers).

It was important that participation in the course could also serve as an immersive learning experience. The assessment involved an e-portfolio for topic reflections and activities/tasks (participants were free to choose the tool; a negative experience with Mahara led some to turn to Wordpress, Weebly, etc.). The assignments were linked to reviews of institutional SEN publications and to the production of learning activities involving symbol communication and accessible materials. Different mind-mapping tools were used, as well as the JClic authoring tool, Book Builder (UDL), etc.

The results were highly satisfactory: participants were engaged, and 22 out of the 23 were certified, having completed all the tasks.

The critical issue was the duration of the course: participants required more time for each topic. In practical terms, two weeks were dedicated to each topic. Assignments required the use of certain tools (some participants were trying them for the first time), and comments on colleagues' work were also required.

My expectation is that some of the ICT Resources Centres for Special Needs might adapt the course and run it at a local level, involving local Teacher Training Centres to organize it and obtain official accreditation for the teachers/participants.

The other alternative would be to run the course in an open way, addressed to a mixture of participants (teachers, technical/therapeutic staff, parents of SEN children). This diversity of participants might bring different viewpoints, which could be more enriching.

Itana Gimenes
7:48pm 3 March 2013


Hi all,

Rich discussions; I have learned a lot from the comments.

Itana
