Contributions of the evaluation to the development of the product

• Alert the team(s) to what has been learnt from previous projects of a similar kind.
• Monitor progress on the multiple tasks of the programme.
• Provide feedback on the products as they develop.
• Trial products with users, thereby assisting the programme to meet user needs.

The evaluators’ contributions to ensuring excellence in a CIT programme need to include a direct input into the development of any CIT products. Normally, the first need is for the evaluators to contribute ideas from the wider educational research literature, much of which may be unknown to the programme team(s). For example, as an evaluator of a TLTP project developing courseware, one of my contributions was to review the literature on learning theory and feed this into the development process (Somekh 1996). Decisions that followed about the software development could then draw upon this knowledge, which was not previously available to the project team. To be effective, this kind of input needs to be an integral part of established trust in working relations, rather than a one-off input. Evaluation is, unavoidably, an intervention; so it makes sense to acknowledge this and ensure that it is a positive intervention. A good example of a recent European evaluation of a CIT programme that adopted this kind of approach is the evaluation of the Telematics for Teacher Training Programme (T3). The T3 Evaluation, which was directed by Wim Veen from the University of Technology, Delft, The Netherlands, used the Concerns-Based Adoption Model (CBAM) developed by Hall et al. (1975).

The foci of the evaluation effort were on:

• formative evaluation of the development and implementation of the new teaching practices using Telematics within the partner universities involved, and
• summative evaluation of outcomes and impact of the project as a whole and of the development of pedagogical approaches for Telematics learning environments.

(Davis et al. 2000)

Honest feedback is of enormous value to programme directors and teams. My own preference is for very open preliminary feedback within a confidential framework that ensures that the focus is upon learning rather than upon judgement. I often distinguish between a draft report that I feel will make a good preliminary discussion document with the team(s) and the report that ultimately I write for publication. They have different purposes and can be dealt with differently.

The evaluator often acts as a broker of judgements on the products through collecting a range of data, including stakeholders’ perceptions. They can observe usage, interview users, count frequency of use and issue questionnaires, as well as try the product out themselves as users. They have a particularly important role in feeding information back to the team(s), especially if the different stakeholders have different responses. Because of their relatively powerless position, students’ views may be given less credence without the mediation of an evaluator. Without the evaluator they might not even have been asked for an opinion, but their views may be the ones that are most critical to the eventual success of the uptake of the product.