Aims of the evaluation

• Identify and address the various purposes of the evaluation.
• Address both the formative and the summative aspects of the evaluation.
• Meet the needs of stakeholders.
• Generate both quantitative information and explanatory theories.

It is very important for the evaluators to begin by exploring the various purposes of the evaluation, to ensure that they are able to address any purposes that may have been intentionally or unintentionally concealed as well as stated purposes. House (2000, p. 80), for example, warns that important values issues may be hidden from sponsors and participants if the evaluation employs methods such as cost–benefit analysis without careful investigation of embedded assumptions. This process of clarification is important in identifying the partisan interests of different stakeholders so that they can be taken into account, while ensuring that the evaluation takes a non-partisan view. This might involve adopting what Datta (2000, p. 2) calls the role of ‘an advocate for pluralism’.

Evaluations are always politicised, sometimes strongly so. The development of policy for CIT in education is contested because it is costly and because it is often the source and focus of aspirations. Power is exercised at different levels. Evaluators need to make judgements about the extent to which different players have power, and monitor the shifts in the political context. Inevitably, therefore, evaluation is itself a political activity. It is better to acknowledge this than try to ignore it. Evaluation is an inherently moral activity that involves the evaluator in wielding power, however large or small, according to ethical principles (see the previous section).

Programmes developing CIT for education always have multiple stakeholders, including national or EU policy-makers, the programme team(s), commercial partners and users, such as teachers, students and local policy-makers. All these stakeholders have rights and all are likely to have different needs. It is a matter of democratic principle to address all their needs; but it is also a matter of professionalism and efficacy, since the stakeholders all have the potential to assist or to block the take-up of products. Clarifying the sponsors’ purposes enables evaluators to get some understanding of the time frame for decision-making, so that they are in the best possible position to influence future policy: for example, sometimes an interim report may have more impact if it is produced by a particular date. It also helps in deciding on the balance to be struck between the formative and summative purposes of the evaluation: for example, if the purpose of the evaluation from the sponsor’s point of view is mainly symbolic, there is little point in placing the main emphasis upon a detailed summative report; instead time can be better spent on producing more detailed formative feedback for the team(s).

Depending upon the purposes of the evaluation and the needs of the various stakeholders, the evaluators will need to make different judgements on the right balance between quantitative measurement and the development of qualitative understanding. Both are important in the evaluation of CIT programmes, which always involve stakeholders from technological or business backgrounds, who place a high value on statistical measures, as well as ‘users’ who fail to respond as positively as expected to CIT. Users’ problems often arise because they find CIT confusing or threatening, and understanding the reasons for this requires the exercise of careful judgement on the basis of qualitative data, for example from observations and interviews. There is a duty to provide both factual information about, and explanations for, progress/success that can provide the basis for future action. To ensure that the programme remains on track to achieve excellence, there is also a duty to include formative elements in the focus of the evaluation. Bhola (2000) gives an example of a form of ‘impact evaluation’ which addresses both of these needs by including three types of impact: ‘impact by design, impact by interaction and impact by emergence’. It requires the evaluators to use subtle, skilful, professional judgement in reaching conclusions that take a wide range of factors into consideration:

The evaluator will have to learn to listen and then to go beyond people’s utterances. The unsaid will have to be heard; the invisible will have to be seen; both shadows and foreshadows will have to be registered. The evaluator will then have to ‘hypothesize’ plausible connections between the initial intervention and the impact by emergence and, in John Dewey’s words, ‘seek appropriate warrant to assert the reality of impact that emerged in people’s lives’ (Bhola 2000, p. 165).
