The potential of assessment as a lever for pedagogical innovation with ICT

Despite the complexity of pedagogical practice, and indeed as a result of its interdependence with the regulatory frameworks of the national community, there is very strong evidence that innovations in pedagogy can be introduced rapidly if they are tied to changes in what is assessed. When I became a teacher in England in the 1970s it was a time of considerable curriculum reform. The Schools Council, a body that brought together teachers’ unions, policy-makers and representatives of local education authorities and universities, had a responsibility for reforming schools. New curricula were developed by university researchers, working closely with teachers, with funding from either the Schools Council (e.g. the History Project and the Humanities Curriculum Project) or charitable foundations (e.g. the Nuffield Science Projects, and the Ford Teaching Project), and were grounded in state-of-the-art disciplinary knowledge and theories of child development and learning. An example from the USA was Bruner’s Man: A Course of Study (MACOS) curriculum, in which elementary children were given artefacts, texts and films from the social and life sciences to investigate in order to ‘discover’ what it means to be human (e.g. documentary films about the kinship patterns of baboons, and the way of life of the Netsilik Eskimos). These new curricula incorporated innovatory pedagogical practices since curriculum was understood to be enacted in the process of teaching (Stenhouse 1975). There was a recognition that what was learnt by students might not be the same as what the teacher intended to teach, and the new curricula were linked to forms of assessment which would allow teaching to be more closely aligned with students’ social and cognitive needs. In England, to enable these reforms to be introduced, examination boards (at the time all linked with major universities) developed new syllabuses especially to assess the intended curriculum outcomes. For example, the Cambridge Board introduced the ‘Plain Texts’ English Literature syllabus, which permitted students to bring unmarked copies of the texts they had studied into the examination room, thereby shifting the focus of assessment towards critical response and interpretation rather than factual recall and the ability to quote from memory.

This strong link between what is assessed and the process of educational innovation is in line with the broad socio-cultural framework developed in Chapter 1. Human activity is object-oriented and performed in inter-relationship with others; so it is a necessary condition for changes in educational practice that educational purposes should be renewed in line with values recognised and formally sanctioned by all the phenomenal levels of the community. In education systems where public examinations play an important role in rewarding the achievements of students and teachers, changes in what they set out to assess provide a publicly recognised organising framework for innovation. Without fundamental changes to the aims, purposes and practices of assessment, pedagogic innovations are likely to be very seriously constrained. There is considerable evidence from research that this is the case for the innovation of ICT in education in England.

At the heart of policies for ICT in education in England there is a confusion of purposes. Interestingly, this can be seen to mirror the shifts in discourses surrounding computers in the home outlined in the previous section – is the computer there to assist children’s learning, to develop their ICT skills in preparation for work, or as a ‘playable tool’? McFarlane (2001) lists three discourses surrounding ICT in UK policy documents which assume, independently of one another, that it is: a set of skills or competences; a vehicle for teaching and learning; or an agent of transformative change. She points out that each has significant implications for assessing and accrediting learning outcomes. Moreover, as the national curriculum gives priority to the first of these, the teacher training national curriculum to the second, and policy documents emanating from the government to the third, it is clear that teachers and schools are placed in an impossible position in trying to respond to all three. In a brief review of research she identifies that the educational gains attributed to ICT are largely in terms of learning processes such as problem-solving capability and critical thinking skills, which ‘are surely desirable outcomes of the compulsory education phase’ but are not captured in regimes of assessment focused on measuring the acquisition of subject knowledge. Impacts of ICT use are, therefore, ‘indirect rather than direct effects on learning as measured through test performances’. The implication of McFarlane’s argument is that ICT is changing the nature and quality of students’ learning but that this is not currently being measured; and that teachers, as a result, may be discouraged from using ICT as a vehicle for teaching and learning across the curriculum since its use does not lead to rewards for students in terms of improved test scores. This effect is exacerbated by the status of ICT skills as a separately assessed national curriculum subject.

There is a small amount of evidence of the impact of ICT on students’ learning as measured by traditional methods, but this appears to be predominantly linked to students’ use from home of websites that contain materials closely tailored to improving test scores. The ImpaCT2 evaluation (Harrison et al. 2002) showed small but statistically significant gains in Science GCSE examinations for 16-year-olds and in national tests for English at age 11 and science at age 13, as well as positive indications which were not statistically significant in one or two other subjects at all three levels. However, as students reported relatively low levels of use of ICT in lessons (e.g. 30 per cent of 16-year-olds reported using it in English lessons in ‘some weeks’), it seems that these gains were the result of using ICT at home, including websites that provided self-assessment tests to help with revision. In Scotland, Livingston and Condie (2004) report positively on the evaluation of the SCHOLAR support materials for students preparing for Higher and Advanced Higher examinations at ages 16 and 17. SCHOLAR was a collaborative project between Heriot-Watt University and schools and FE colleges, with the aim of improving students’ attainment and encouraging greater take-up of university places. SCHOLAR materials included printed text booklets, interactive on-line materials, assessment materials, revision materials and an on-line discussion forum/noticeboard (the latter was hardly used). An analysis of examination results showed a degree of superior performance for students in the study sample. Students were very positive about the facilities, with more than 50 per cent reporting using the on-line materials at least 3–4 times per month. However, the majority of students’ use of SCHOLAR was at home, which probably explains why teachers underestimated the extent of their use.

Writing from the point of view of an examination board, Raikes and Harding (2003) make it clear that the need to ensure there is no discontinuity between years, so that standards can be compared, makes radical changes in assessment impossible. They discuss possibilities for computerising traditional examinations and propose a transitional period in which paper-based and computerised versions of traditional tests would be offered alongside one another. They are enthusiastic about the efficiency gains likely to result from computer-based tests, marking and record-keeping, but clearly regard transforming assessment to address new kinds of knowledge and learning as beyond the scope of the current system.

McCormick (2004) takes McFarlane’s article as a starting point for exploring the relationship between assessment and ICT, although his focus is on formative assessment, as part of the teaching and learning process, rather than on formal assessment of learning gains in national tests and examinations. His article provides insights into current practices and the impetus towards innovation in the two separate educational fields of ICT and assessment. He shows that most of the work that has looked at ways of using ICT for assessment ‘ignores developments in the field of assessment, particularly with regard to … what has now become known as the field of “assessment for learning” ’ (ibid., p. 117). Equally, those developing innovatory practices in this form of assessment have paid little attention to the facilities that ICT offers to support it, for example through the development of digital portfolios. Researchers in the two fields have tended to work discretely, and there has also been very little sharing of evidence between those developing ICT practices at HE/FE and school levels.

McCormick’s article is useful in identifying many of the tensions that teachers face in using ICT within the current regulatory frameworks for curriculum and assessment: for example, using ICT for an hour a week in a specialist suite makes it impossible to embed ICT in subject teaching; and teaching students who have access to the Internet, where they can easily cut and paste material, raises questions about ‘the nature of evidence of learning’ that teachers should look for and reward. McCormick ends by discussing possibilities for identifying and assessing ‘new outcomes’ which result from students using ICT. These may relate to students’ use of multimedia authoring tools ‘to allow them to externalise their thinking and to express their ideas through this media in ways that are not evident through conventional tests’ (ibid., p. 130). New understandings of learning as more distributed between collaborating partners and resulting from activities in which ICT tools act as extensions of learners and co-construct learners’ agency (Salomon 1993a), and major changes in the development and representation of knowledge, such as those discussed earlier in this chapter, pose difficult challenges for assessment. For example, if electronic communications and Internet use are an integral part of learners’ practices, how can it make sense to assess their performance without allowing them access to these tools? I would argue, rather, that students should be assessed on what they can achieve when working in the new ways they have developed to make use of the affordances of these tools, ways that have the potential to transform learning.