
Management Learning

http://mlq.sagepub.com/

Evaluation in management education: A visual approach to drawing out emotion in student learning

Jenna Ward and Harriet Shortt

Management Learning, published online 19 November 2012. DOI: 10.1177/1350507612459169

The online version of this article can be found at: http://mlq.sagepub.com/content/early/2012/09/24/1350507612459169

Published by: http://www.sagepublications.com




Article

Management Learning
0(0) 1–18
© The Author(s) 2012
Reprints and permission: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1350507612459169
mlq.sagepub.com

Evaluation in management education: A visual approach to drawing out emotion in student learning

Jenna Ward

De Montfort University, UK

Harriet Shortt

University of the West of England, UK

Abstract

This article introduces a confluent method of evaluation from the qualitative paradigm that encourages student feedback via a sensory route, namely, participant-produced drawings. Through a phenomenological qualitative inquiry carried out at a UK university where the use of participant-produced drawings was piloted, three areas for consideration with regard to enhancing the evaluation of undergraduate provision in management education were identified: (a) giving students space to emotionally respond to their learning, (b) acknowledging the temporal aspect of student learning and (c) offering students the opportunity to set and shape the evaluative agenda. Participant-produced drawing is offered as a method of evaluation that is appreciative of the cognitive-affective learning debate and the rapidly changing nature of higher education practice. We argue that this method provides rich evaluative data on the affective nature of learning that is not as easily explored by traditional, quantitative methods.

Keywords

Emotions, evaluation, visual methodology, drawing, affective learning

Introduction

This article introduces an alternative qualitative method of evaluation that encourages emotional and reflective responses from undergraduate students via a sensory route, namely, participant-produced drawings (Kearney and Hyle, 2004; Munoz et al., 2011; Vince and Broussine, 1996). Through a phenomenological qualitative inquiry carried out at a UK university where the use of participant-produced drawings was piloted in an evaluative context, three key areas for consideration with regard to enhancing the evaluation of undergraduate provision in management education

Corresponding author:

Jenna Ward, Leicester Business School, De Montfort University, The Gateway, Leicester LE1 9BH, UK. Email: jeward@dmu.ac.uk

were identified: (a) giving students space to emotionally respond to their learning, (b) acknowledging the temporal aspect of student learning and (c) offering students the opportunity to set and shape the evaluative agenda. Participant-produced drawings offer an additional and alternative method of enhancing student feedback by providing richer emotional responses to learning and management education experience than the quantitative questionnaire. We extend recently published contributions on the perceived importance of the affective nature of learning (Armstrong and Fukami, 2010; Sitzmann et al., 2010), by reflecting a ‘confluent learning’ (Castillo, 1974) approach within the context of undergraduate evaluations of management education.

Castillo’s (1974) advocacy of ‘confluent educational approaches’ is based on the fundamental assumption that ‘all learning should be a blending of the cognitive and affective domains . . . that allows students to develop their emotional abilities along with their intellectual abilities’ (p. viii). Shepherd (2004), along with others (including Elliott, 2008; Griffiths et al., 2005; Mazen, 2011; Vince, 2010), calls for further research into the impact of emotions on learning and a ‘requirement to focus on how students “feel” rather than how, or what, they “think”’ (Shepherd, 2004: 274). This is not to argue that learning is a solely emotional process and that cognitive elements should be ignored; instead, the focus, as Castillo (1974) highlights, should be on exploring the synergy between emotion and cognition in teaching and learning (Brown, 2000), a synergy, we argue, that should be better addressed during students’ evaluations of modules and courses.

Indicative of Goleman’s (1995) view that ‘students who are anxious, angry, or depressed don’t learn; people who are caught in these states do not take in information efficiently or deal with it well’ (p. 7) is the idea that emotional states affect the ability to learn. So why are emotions and emotional responses so often marginalised in the way students evaluate teaching and learning, and specifically, within the provision of undergraduate management education? This is not to say that attempts are not being made in existing evaluation techniques to explore the impact of emotions and affect on learning (Kraiger et al., 1993; Sitzmann et al., 2010), but as Brown (2000) points out, ‘it is as though the students have taken the close association between learning and emotion for granted, as something so obvious it does not need special comment’ (p. 283). We challenge this observation in light of our empirical research, where student emotions become the focus. We propose that students are very much aware of the relationship between emotions and their ability to learn and evaluate, but it may be business and management schools and their teaching staff who underestimate the extent to which ‘. . . the process and activity of learning is imbued with emotional resonance’ (Brown, 2000: 283). This article aims to centralise emotion using a visual approach to evaluation.

As rapid and evolutionary changes to learning and education take place on a global level, openness to exploring new methods through which undergraduate students’ voices are heard and heeded (and acknowledged as such), and which make students feel that they are central to institutional decision-making and service offerings, will be vital to a strategy ensuring positive feedback in an environment in which ‘students are consumers’ (Cuthbert, 2010; Modell, 2005). National Student Survey (NSS; Higher Education Funding Council for England [HEFCE]) and National Survey of Student Engagement (NSSE) scores, in the United Kingdom and the United States, respectively, play an important role in shaping public (and academic) perceptions of institutions. In the United States, many institutions use student evaluations of teaching to provide direct feedback on individual teaching staff and modules while making this information public to increase competition and transparency within the sector. The 2011 White Paper entitled ‘Students at the Heart of the System’ attempts to replicate this approach in the United Kingdom, as higher education (HE) institutions are put under escalating pressure to improve the ‘student experience’ (Department for Business Innovation and Skills, 2011) through improvements in


‘teaching, assessment, feedback and preparation for the world of work’ (p. 4). Furthermore,

some now view students as those ‘empowered’ to embody the role of ‘consumer’, with the HEFCE taking on a new role as ‘consumer champion’. Indeed, over the past decade, we have seen an increase in the view of students as ‘consumers’ of education (Modell, 2005; Popli, 2005), and academic institutions have been encouraged to consider all facets of their ‘service’ in terms of quality and delivery. We might speculate, then, that in light of the recent policy changes, the cuts and the domino effect of increased fees (Coughlan, 2011; Littlemore, 2011), we will arguably see an augmentation in student expectations. Whether students are considered consumers or not, with sizeable yearly fees, they (and fee-paying parents) will undoubtedly be questioning what they receive in return and be carefully evaluating the teaching and learning experience (Higher Education Policy Institute, 2011). In fact, new government changes in England are focused on making more evaluative data available to the public in easily accessible and understandable forms in a bid to ‘help students choose the best course for them and drive an improvement in the quality of teaching’ (Department for Business Innovation and Skills, 2011: 34). The discourse of ‘Quality Assurance’ (Quality Assurance Agency [QAA], 2010; Welsh and Dey, 2002) underpinning management education institutions, and HE more broadly, is arguably driving teaching professionals and management to gather a wealth of student feedback on our teaching, our courses and our institutions as a whole. From September 2012, a standardised presentation of these data will be compulsory on all English HE institution websites in the form of Key Information Sets (KIS). In an associated response to the White Paper’s (Department for Business Innovation and Skills, 2011) call to ‘present information more imaginatively’ (p. 30), we also argue that it should be gathered more imaginatively as we, along with Sitzmann et al. (2010), contest the ‘usefulness’ of current evaluation methods and the data they produce, since much appears to focus solely on cognitive learning.

Thus, as student voices, feelings and perceptions of institutional service offerings take increasing precedence with respect to recruitment, quality control, and perhaps most importantly funding, we need to develop ways of obtaining rich, detailed evaluative feedback that reliably reflects levels of ‘customer satisfaction’. As Smith and Smith (1994) comment, evaluating education is inherently problematic because ‘human beings, involved in a learning process, cannot be subjected to the kinds of clinical trials undertaken by scientific research . . . teaching and learning are complex activities’ (p. 527). Recognising emotion as an important part of learning requires it to be better addressed during evaluation.

Current approaches to evaluation in management education

Evaluation is an analytical process that is intrinsic to good teaching.

(Ramsden, 1992: 209)

Typically, business and management schools’ modules and courses are evaluated by students using questionnaires that are chiefly ‘formative and developmental’ (Hounsell, in Fry et al., 2003: 198), in that they are ‘intended to assist in change, development and improvement in teaching’ (Canon and Newble, 2000: 209) and are therefore ‘quality’-driven. Such approaches can be classified as belonging to the ‘scientific’ or ‘systems’ schools of thought as highlighted by Easterby-Smith (1988), as they focus on either pre- and post-course testing or measuring outcomes against predetermined ‘models’, ‘rankings’ or objectives (Elliott, 2008; Smith and Smith, 1994). For example, Elliott (2008) uses the model of ‘anxiety and learning’ proposed by Vince (1996) in order to structure her exploration of students’ responses to educational experiences

within a Master of Business Administration (MBA) programme. Nonetheless, Coffey and Gibbs (2001) note that some quantitative data from questionnaires may be used for ‘summative purposes’, such as part of a teacher’s own reflective practice (Race, 2007), or indeed, as ‘evidence of teaching competence for teacher accreditation by the Institute for Learning and Teaching’ (Coffey and Gibbs, 2001: 89).

Questionnaires themselves, such as those currently used in the NSS (United Kingdom) and NSSE (United States), are an attractive method of obtaining feedback from a large number of students and prove popular with course and programme leaders, often at the end of the academic year. Feedback gathered in this way is quick, and data can be ‘fed into institutional reviews and quality procedures’ (Race, 2007: 196), since the criteria and ratings are mostly set by the institutions themselves. In addition, the increased importance that has been placed on ‘student satisfaction survey’ outcomes in university rankings and reputation stakes has led some institutions to see course and mid-year programme feedback as opportunities to train and condition their cohorts into responding favourably when it comes to national surveys, such as the NSS or NSSE. For example, one UK institution has reversed the Likert scale traditionally used to provide midterm feedback to mirror the NSS format. Changes to the questions asked and information elicited have also come under direct scrutiny and revision in an attempt to mitigate the risks of performing badly on the NSS. These extreme measures are not only symptomatic of HE institutions’ views of feedback and evaluation but are also representative of how much time and care students are perceived to invest in the process. There seems, we argue, to have been a shift in the function of evaluation methods; they are no longer solely used to help teachers, schools and institutions improve the quality of the student experience as part of a cyclical process of continuous improvement, but evaluation has become synonymous with measuring achievement, symptomatic of the ‘authoritarianism of most student-educator relations’ (Perriton and Reynolds, 2004).

Recent developmental research in the arena of student evaluation of educational quality argues for a rigorous approach, with some stating that systems of evaluation are not explicit enough in terms of providing valuable data for improvement of teaching on specific courses (Palihawadana and Holmes, 1999). Despite recent contributions to the literature on how we approach evaluation, in practice, methods continue to be based on the quantitative paradigm with the use of questionnaires, surveys, statistical analysis and numerical ratings (Mmobuosi, 1985). The asking of ‘why’ or ‘how’ appears to have been relatively marginalised from the quantitative format, which still remains predominantly focused on the ‘measurable’ (Shevlin et al., 2000). Indeed, methods used to evaluate management learning, particularly in UK HE institutions, are positivistic.

Alternative methods of feedback that have been offered are student interviews, focus groups and student representation on formal departmental boards (Harvey, 1998; Mmobuosi, 1985). Offering an insight into how students feel, these more qualitative, phenomenological approaches allow teachers to seek feedback from students with more open-ended questions and form a way of ‘pooling thoughts and reactions’ (Hounsell, in Fry et al., 2003: 204). Although Race (2007) suggests such methods can be ‘costly in terms of time and effort’ and students may feel anxious since these methods are not anonymous like questionnaires (p. 204), he does advocate their usefulness, since group feedback is more likely to encourage students to ‘go beyond the agenda’ (p. 205) and to be forthcoming with their comments.

We argue that current evaluation methods (frequently those based in the quantitative paradigm) offer only a partial account of the learning experience and appear to marginalise the importance of affective dimensions of learning, such as motivation and student–teacher relationships. By rejecting the emphasis on positivist measurements, we explore an alternative view of evaluation, which focuses on description and complexity, providing students with the ‘space’ to both explore and


voice their own emotions and feelings, releasing them from institutional constraints and taboos

(including subordinate position within institutional hierarchy, implicit emphasis placed on performance and measurable outcomes, exclusion of emotion). We propose that participant-produced drawings help to synthesise cognition and emotion to reflect the complexity of learning in management education and provide rich data on emotional responses and experiences (Gibson, 1983; Parlett and Hamilton, 1972). As Weber and Mitchell (1995) state, ‘drawings offer a different kind of glimpse into human sense-making than written or spoken texts do, because they can express that which is not easily put into words; the ineffable, the elusive, the not-yet-thought-through, and the subconscious’ (p. 34).

Participant-produced drawing

Participant-produced drawing, as a method of inquiry, has its roots in the area of organisational research (Jensen et al., 2007; Kearney and Hyle, 2004; Munoz et al., 2011; Vince and Broussine, 1996; Ward, 2009). Such studies have helped managers and employees explore issues, such as organisational culture and change, and draw out details and experiences that may not otherwise be heard when using more traditional methods of inquiry. This method asks participants to project how they feel onto paper and draw images that help them to articulate thoughts, feelings and impressions. Participants then discuss their images and meanings either with the researcher alone or with their peers. This visual approach to understanding individuals’ feelings conceivably helps participants articulate the often ‘intangible’ parts of everyday life and encourages a different perspective on what is experienced. The underlying assumption of visual methods is that ‘non-rational forms of self-expression can elicit the non-verbal, tacit, emotional knowledge . . .’ (Jensen et al., 2007: 359).

Our reasons for using drawing as a visual method for evaluation are threefold: first, the drawing activity is used as an icebreaker for the group, thereby reducing the formality associated with the non-anonymous evaluation situation; second, the drawings themselves stimulate conversation and discussion, as Zuboff (1988) argues, ‘. . . these pictures functioned as a catalyst . . .’ (p. 141) in terms of sharing collective meanings, emotions and experiences; and third, images are said to bridge the gap between what is apparently private and subjective and the apparently collective and social (Samuels, 1993). Visual representations are argued to be effective in expressing feelings because they are products of emotions, relationships, instincts and conflicts (Samuels, 1985). Based on the assumption that ‘learning and its application depend, not on the direct influence of externally located objects, but on how the learner experiences and evaluates those objects’ (Mmobuosi, 1985: 263), we propose that participant-produced drawings are able to provide rich data and tap into emotions that are not easily captured by quantitative methods.

The discussion generated between the researcher and participant or as a group also encourages participants to reflect on what they have drawn and why; this process of critical reflection allows for tacit assumptions and knowledge to be discovered in disparate and consensual speech (Cunliffe and Easterby-Smith, 2004). In this process, the drawings themselves are often peripheral, as they have to be ‘explicitly placed in a range of contextually specific dialogues and must be regarded as an expression of context’ (Vince and Broussine, 1996: 9–10). It is the projected dialogue that is there to be analysed against the backdrop of the drawings themselves.

The participant-produced drawing has the potential to be a useful qualitative method of evaluating management learning, specifically undergraduate provision. It aims to directly acknowledge the synergy between emotion and cognition within the students’ evaluative experience and gather feedback that will specifically help staff engage with changes and improvements in their teaching

practice, while also making students (arguably, our ‘consumers’, as acknowledged previously) feel they are a central part of both educational and service development.

Method

This qualitative research study was based in the context of a large undergraduate final year compulsory ‘Organisational Behaviour’ course in a UK Business School. The course takes a critical approach to organisational theory and explores such topics as organisational learning, power and political behaviour and change. Both authors worked as seminar tutors on this course and facilitated three sessions in which the participant-produced drawing exercise was carried out by a total of 21 students. Due to time-tabling issues and other constraints, we adopted a method of ‘convenience sampling’. Convenience sampling ‘ . . . relies on available subjects – those who are close at hand or easily accessible’ (Berg, 2007: 43). This meant that our sample (n = 21) included male and female, home and overseas students, none of whom were involved more than once. Had this not been a pilot study, we would recommend implementation of the method to the entire cohort. However, we acknowledge that in this case, availability and interest were determining factors in the sample but not to the detriment of the study, which was carried out at the end of the year prior to the examination period.

As this is a grounded, emergent piece of research, we adapted the participant-produced drawing method, previously used as a method of inquiry in organisational and management research, into a qualitative method of confluent evaluation. We began with one emotive open-ended question:

‘How does it feel to be a student on [course name]?’ 1 This question equipped students with the confidence to respond in a personal way, as the focus of the question is upon them as an individual. In this respect, these kinds of questions help them move away from the cognitive responsibilities of having to respond via a predetermined ‘model’ or ‘ranking’ and encourage them to project how they feel on to the paper.

Researcher involvement is limited with this method, particularly with regard to making sense of what the students draw, though as can be heard in the data presented for discussion, there were occasions in which students benefited from encouragement and support to verbalise their feelings and meanings. Yet, beyond this ‘facilitation’, an important part of the process is an ability to relinquish control of the students and that which is being evaluated, meaning that the researcher must avoid projecting his or her own feelings or thought structures onto the images and not be defensive in an attempt to respond to the feedback comments. It is vital that students are allowed to explore their own emotions with regard to their learning, and have the freedom to do so, without having to defend their own affective responses.2 Certainly, we contend, this method offers a greater sense of ‘freedom’ than other quantitatively based evaluation methods, although we do acknowledge that this conceptual notion of ‘freedom’ remains bounded by the pre-existing power structures that imbue the relationship between teacher and student (Sinclair, 2007).

Procedure

Each student was given a sheet of A1 paper and coloured markers with which to respond to the following question in 15 minutes of allotted time:

Draw your individual response to the following question, ‘how does it feel to be a student on [course title]?’ Please try to avoid using words in your pictures. Remember, this is not about your drawing ability and you will get the opportunity to discuss your picture.


After 15 minutes, they were asked to sit back with the group and share their individual drawings

and feelings, while others were encouraged to ask questions, share their opinions and be part of the meaning making. What is initially seen as private and individual is made sense of in a shared and collective way (Samuels, 1993). Where students felt isolated or reluctant to contribute a feeling or response, we, as researchers, encouraged and supported them, thereby inspiring them to have the confidence to share and value their thoughts and feelings.

In a bid to reflect the cognitive and affective synergy, and to reflect the body of coherent evidence in the visual methods literature regarding participants giving meaning to their drawings (or photographs, or documentary films) in order to ‘frame their own experiences’ (Kearney and Hyle, 2004: 362; Shortt, 2010; Ward, 2009), we did not consider the images in isolation. To appreciate the requirement of ‘additional verbal interpretation by the participants themselves’ (Kearney and Hyle, 2004: 361), we audio-recorded the group discussion and made handwritten notes. Later, we re-familiarised ourselves with the data by transcribing all dialogue and notes from the sessions into a clear transcript. Given the multifaceted nature of visual analysis more broadly and that we privileged the meanings our students ascribed to their images (rather than making our own interpretations), we adopted an ‘open-ended process’ (Collier, 2001; Shortt, 2010; Van Leeuwen and Jewitt, 2001) by collaboratively applying a comparative thematic analysis that encouraged the interpretation of the images within the context of the associated dialogue. We then examined these emergent themes in relation to existing knowledge.

Thus, three considerations emerged from this analysis: (a) giving students space to emotionally respond to their learning, (b) acknowledging the temporal aspect of student learning, and (c) offering students the opportunity to set and shape the evaluative agenda. These are discussed in the following sections.

Discussion

We explore these areas of consideration by drawing on the literature in relation to current methods of evaluation, emotions and feelings in student learning and the value of participant-produced drawings. This discussion attempts to draw back and critically consider how participant-produced drawing offers an alternative type of response that puts emotion and cognition on a more equal plane.

Four drawings (Figures 1–4) are presented to help illustrate three areas of consideration for evaluation in undergraduate management learning. These four images were chosen specifically as they, and the dialogue they produced, were representative of the effectiveness of the confluent evaluation method and appeared to communicate the importance of certain ideas that are often marginalised by positivistic evaluation methods. In addition, due to issues of anonymity, some drawings and conversations were excluded as they referred to specific elements of the course that would have led to identification.

Giving students space to emotionally respond to their learning

Both the images and the associated dialogue show how students were able and open to valuing and articulating an emotional response in their evaluation. Emotionality was constructed and communicated through the use of ‘emotion words and emotion talk’ (Fineman, 2003: 16), such as ‘love’, ‘anger’, ‘depressing’, ‘frustrated’, ‘proud’, ‘blossoming’, ‘gruesome’ and ‘sad’. The students drew smiley and sad faces (Figures 1 and 2), bleeding bodies (Figure 3), rainbows (Figure 1), and


Figure 1. Rainbow.

Figure 2. Blind eyes.

blooming flowers (Figure 3), and these images allowed the students to communicate their emotional ‘states’ and their feelings about their teaching and learning experience. For example,

. . . I started off quite sad . . . because everything seemed so disjointed and separate . . . it didn’t seem to fit . . . in my head and it just kept going in different places . . . initially I felt like I just wasn’t getting it


Figure 3. Blossoming.

– everything was disjointed. But then . . . I don’t know what happened, it could be the rainbow actually . . . I learnt to see [the course] as a rainbow . . . then I got to the happy face! (Katrina, Figure 1)

Emotion and feeling are not necessarily clearly evident in Figures 3 and 4, though the verbal description and the dialogue it provoked give a highly emotional account of the students’ experience. This apparent ‘lack of emotion’ in the visual serves as a helpful reminder of the pertinent link between text and image. Qualitative evaluative methods require sensitivity towards the sense-making process (Weick, 1995) and reassert the importance of meaning-making being the province of the illustrator as individual and the group as a collective. From one objective reading, drawings may appear void of emotion, but to the illustrator, a different set of meanings may exist, and to them the image may be imbued with feeling and emotion decipherable only to them. In a similar vein, Kearney and Hyle (2004) note, ‘when the primary feelings and emotions were not vividly evident in the drawings the process of drawing itself seemed to prepare respondents to more easily share these as a part of their personal interpretations of their drawings’ (p. 372).


Figure 4. Pathway.

It is also evident in the data presented that ‘learning is an emotional experience as well as an intellectual one . . .’ (Brown, 2000: 275). Brown (2000) argues this close association between learning and emotion has been taken for granted by students as ‘something so obvious it does not need special comment’ (p. 286). Testimony to this can be heard in Gemma’s comments in relation to a recent revision session:

G: ‘She [tutor] felt relieved because we had something to say . . . if we said something wrong she wouldn’t have said anything . . . and I thought do you know, just **** off!’


Brian added his support by saying,

B: ‘[Tutor] sounds like she is dying inside . . . and a little bit of me died inside when she walked in!’

Gemma and Brian’s accounts of how they felt during that particular teaching and learning activity are clearly emotional, and they felt little inhibition with regard to articulating their feelings towards particular members of staff. Once again, this speaks to the literature that is concerned with links between motivation and the affective nature of learning. The very idea that a member of staff simply walking into a room can evoke such powerful emotions in a student arguably suggests that this method creates appropriate space conducive to exploring personal motivations and honest emotional responses (Nossiter and Biberman, 1990). It also acknowledges the power of a naturalistic (Easterby-Smith, 1988) and phenomenological approach to evaluation because it provides an insight into the way emotions and feelings impact the learning process, and thereby, arguably, gives a more detailed picture of the programme/module/course as a whole.

What is most important to note here is that although the response was highly emotional and the students were re-living their feelings, this method in no way led to an outpouring of emotion that became uncontrollable. In fact, this method enabled the students to reveal the emotional accounts and stories they wished to share. The use of participant-produced drawings in this context has added to other management learning and education research that suggests that drawings are an extremely fruitful way of tapping into the emotional lives of participants (Kearney and Hyle, 2004; Meyer, 1991; Munoz et al., 2011). For example, Kearney and Hyle (2004) state that their participants (employees in an educational institution) ‘noted that less . . . emotional information would have been reported had the drawing exercise not taken place’ (p. 376; italics added). Specifically, the visual methodology used here serves to further establish how important and rich emotional data may be accessed in management learning and education research. Previous studies using visual- and arts-based approaches argue that these methods uncover ‘more meaningful and honest verbal reports’ (Kearney and Hyle, 2004: 380), and here, we argue that they give students space to emotionally respond to their learning.

Acknowledging the temporal aspect of student learning

From many of the students’ images and subsequent narratives, the notion of a journey and/or growth in their learning became clear. This method led to students drawing timelines, seeds germinating into flowers (Figure 3) and paths through mountains (Figure 4), which helped them articulate and identify that their learning was a ‘progression’ from a ‘state’, usually at the start of the year, to their current ‘state’, now, at the end of the year, a theme that echoes Brown (2000) in what she called, ‘That was (me) then, and this is (me) now’ (p. 288) and is perhaps an indication that these particular students have undergone what is becoming known as ‘transformative learning’ (Bramming, 2007; Cunliffe, 2002, 2009; Mazen, 2011). Using this approach to evaluation, the temporal nature of learning, particularly in the cases of ‘transformative learning’, can be identified. A recent concern of the quality assurance literature (see Bramming, 2007; Horsburgh, 1999; Popli, 2005) notes the challenge, when gathering student feedback, in identifying where, when and how learning of this nature occurs; other methods of evaluation and self-assessment currently offer only a snapshot, thereby giving a static account. As Sitzmann et al. (2010) argue, cognitive-orientated methods are somewhat weaker since they do not take into consideration the impact of motivation on learning. For students, this confluent evaluation method provides a forum to articulate affective links between motivation and learning and where, when, how and why this occurs. Students talked about moving through feelings of confusion, followed by frustration and subsequently arriving at a state of understanding. This may be an attempt at self-assessment (Armstrong and Fukami, 2010; Sitzmann et al., 2010) of cognitive learning, but perhaps more persuasively, it is indicative of the temporal nature of the student experience and the impact on their perceived learning.

For example:

K: I think it’s quite satisfying . . . from being where I was and thinking what are all these things! . . . and, like, where I am now . . . I feel kind of good . . . I think I’ve got it! (Katrina; Figure 1)

Furthermore, with some encouragement from the group, Tony explained his feelings in relation to what he had drawn (Figure 3):

T: Mine is stupid.

Author 1: No it isn’t!

T: I don’t know what that thing is [points to bleeding figure]. Just sometimes throughout the year, some subjects were lost on me and I just wanted to give up. But that was just a joke so let’s ignore it.

Author 1: No, let’s not ignore it. What is all of this red?

T: It’s a pool of blood. [laughter from group] So it’s quite gruesome and sometimes I get fed-up and I hate [member of staff]. But then on a more positive light this is what I came up with. This is the beginning of the year, it is supposed to be a seed . . . but now I have blossomed! [laughter]. I have exploded out of that seed!

B: I think it is quite good because [member of staff] has planted a seed for a new way of thinking!

The students were able to discuss how they felt about the course in the past and the present, and the participant-produced drawing allowed them to view personal experiences retrospectively, to identify what interested and/or motivated them (or not), why and when.

Certainly, in terms of ‘student-centred learning’ (Cannon and Newble, 2000), we must be alert to where and when students’ learning is taking place, when it becomes challenging and where and when they might require support. This was made pertinent in Amelia’s description of her image (Figure 4):

A: So I start with a long way through. At the start there is a lot of people trying to get through the area – over the fire. Some decided to stop at the beginning because it was too hard. So they prefer holidays, staying near the sea . . . So they don’t go to the lectures. But then people train to go through the fire . . . go through ‘the way’ and there is a big hand stopping them. Because each time they go to a lecture they find it so depressing. And there is another hand trying to help them . . . and they are going to the top of the mountain together!

So far, the rich narrative accounts above highlight the importance of how students move through learning and see it retrospectively as a journey. What participant-produced drawings offer as a method of confluent evaluation is a conscious appreciation of the dynamic, shifting experience of learning (Vince, 2011). By acknowledging and embracing the temporal nature of learning, the ‘evaluator’ must also consider whether responses are ‘normal and represent the expected position at the point in the course or whether something unanticipated is being revealed’ (Gibson, 1983: 154). As Vince (2011) states, ‘learning as a dynamic process (of becoming) means that interpretation (in the midst of others who may be similarly confused or uncertain) becomes a resource for learning’ (p. 336). Democratic, participative methods, such as participant-produced drawings, help us move beyond static conceptualisations, may help to temper bias in student evaluations (McNatt, 2010) and enhance the student experience through shared meaning making.

Offering students the opportunity to set and shape the agenda

Such interpretation and contextualisation issues are perhaps made more complex by the varied topics of discussion the students were engaged in. This method seemed to release the students involved from the institutional constraints and taboos, allowing them to discuss what they wanted, what they felt was important and what they felt most strongly about. This raised a plethora of ‘unexpected’ issues, including how teachers use the students’ ideas; how it makes them feel when other academic members of staff attend lectures; how the core texts, seminars and lectures align; the variation in teaching quality across the module that led to the oversubscription of some classes; and the perceived attitude of the staff.

B: The lack of communication between the lecturers and the students . . . they don’t seem to explain . . . as well as they could.

S: They seem to laugh it off . . . well not laugh it off, but when people express their concerns over it, they don’t seem to care . . . just their general attitude . . . they just don’t seem to be . . . honest . . .

F: I don’t know if it’s just me, but they don’t seem to be very enthusiastic about what they are teaching.

It was clear that in this uninhibited forum, students’ comments would be received without judgement. There appeared to be a sense of ‘ownership’ with regards to the subject matter deliberated and debated by the students. Here, the students were not only ‘going beyond the agenda’ (Race, 2007: 205) in their group feedback, they were constructing their own.

When asked to reflect on the participant-produced drawing session itself and to comment on using this approach, the students said:

M: It’s useful . . . ’cos when we were sitting in the lecture, it was me, Steve and Tim and we just looked at each other and all ended up putting pretty much the same thing, but here you get different views from different people and it makes it . . . well, ‘multiple perspectives’! (Michael)

C: I always feel better when someone listens to our opinion . . . (Clive)

G: Talking about it is much better. It sparks other feelings . . . the other good thing is that other tutors have tried to justify why things have been that way . . . but this doesn’t! (Gemma)

F: I don’t think you can get your feelings and all your thoughts across by ticking a box . . . (Fiona)

B: It’s the best way to represent your emotions . . . draw a face. You can put a smile on it or a frown . . . (Brian)

Just as Michael notes above, a number of students identified participant-produced drawing as a good way of finding out about others’ opinions and reflections and promoting further thought. It appears that the issues and criteria discussed may not have been ‘planned’ or ‘intentionally’ addressed by the students but were emergent in nature and only became apparent due to the drawings produced at this time and the dialogue that followed. This does not, of course, make them any less important or relevant as criteria for evaluation; it only serves to highlight that, given time to reflect, students can delve deeper into issues that current criteria may not address. As Meyer (1991) notes, ‘visual instruments seem uniquely suited to situations where a researcher . . . prefers not to force informants into his or her cognitive framework . . .’ (p. 232).

This method of evaluation permits the students to compose and direct the criteria on which they assess their student experience. These findings direct our attention towards re-negotiating whose responsibility (Castillo, 1974) it is to set the criteria on which students evaluate their experiences. It also promotes reflection on the extent to which we can gather rich feedback from students that is valuable in a way that offers real development and improvement opportunities for teaching professionals and those involved in all forms of management learning and education.

Conclusions and implications for management learning and education

This article has raised three areas of consideration for the evaluation of undergraduate provision in management learning. First, this method of evaluation facilitates the projection of emotion and the articulation of feelings, thereby fostering critical reflection from the students themselves. Encouraging an ‘affective’ response to affective elements of learning appreciates and values other types of knowledge that are a very real part of management and business practice, yet are often perceived to be overlooked in favour of ‘detached contemplation’ (Chia and Holt, 2008) within our business and management schools. Extending this point, Gibson (1983) argues that ‘open evaluation sessions can also provide participants with a rich source of ideas about issues such as communication and inter-personal behaviour which are themselves central to the development of managerial effectiveness’ (p. 153).

Second, participant-produced drawings, as a confluent approach to evaluation, allow us to appreciate a temporal aspect to student learning and provide some indication as to whether transformative learning has taken place (Bramming, 2007; Mazen, 2011), highlighting what, why, when and where student learning does, or does not, take place. This method furthers our knowledge and understanding in the field of evaluation by moving beyond a static conceptualisation of the student experience, often captured only at the end of a course, and draws us towards an appreciation of the nuanced, temporal and dynamic nature of learning.

Third, it allows students to have a clear, uninterrupted, self-directed ‘voice’ that can be (literally) heard, where there is no ‘defence’ from teachers and where the students set the criteria for feedback. The students here did not ‘go beyond the agenda’ (Race, 2007) but instead created their own. Student-led criteria on which students themselves then comment are arguably entirely student-centred (and perhaps ‘consumer centred’) and offer us an ‘authentic’ account of what it is that warrants merit, change or improvement. This arguably helps us move towards a more ‘authentically democratic relationship’ (Elliott, 2008: 273) by releasing students from the institutional constraints imposed by evaluation techniques from the ‘scientific’ or ‘systems’ approaches imbued with traditional authoritarian power structures (Perriton and Reynolds, 2004), and provides a depth of feedback that is perhaps traditionally perceived as ‘taboo’.

Further reflections and directions for future research

There are a number of limitations to this confluent evaluation method. First, our convenience sampling method meant that the students who participated in this study had volunteered and, therefore, were perhaps receptive to experiencing a different approach to evaluation. However, it must be considered that such an unusual approach to feedback may, in some cases, lead to anxiety (Vince, 2010). Further research may be fruitful in exploring the impact of any anxiety that may arise from unconventional methods of teaching and any impact that learning may have on the usefulness of this method. Second, as is evident in the excerpts, some of the student comments are personal. For many members of staff, hearing these comments, expressed in such an emotive way, may be an uncomfortable experience. Further reflection and research may explore how we might share this ‘difficult’ feedback from students with the rest of the teaching team and our colleagues in a way that is constructive in shaping teaching, learning and the student experience. However, part of this method of evaluation is about embracing the affective elements of teaching and learning, and by definition, this requires us to be reflexive in our own practices.

Third, we acknowledge that current approaches to evaluation in undergraduate management education are appropriate and adept at collating and communicating large volumes of data in a timely manner and can be ‘fed into institutional reviews and quality procedures’ (Race, 2007: 196). In fact, one of the English HE system’s strengths is its ability to collect data on itself. However, critiques of this nature are reminiscent of Chia and Holt’s (2008) claim that business schools and management education have come to ‘privilege . . . detached contemplation over involved action’ (p. 471). As is the case with most qualitative methods, participant-produced drawings produce large volumes of rich data in complex and unstructured formats that are not conducive to high-volume processing and data management. Nonetheless, this does not negate the method’s usefulness, and it further adds depth to the portfolio of evaluative methods open to business and management schools.

Finally, drawing on Armstrong and Fukami’s (2010) observations relating to affective evaluation, we also appreciate the potential bias and contention that may occur as a result of student expectations of teaching and learning differing from academic scholarly pedagogy.

The factors that produce a student’s positive affective evaluation of a course are likely not the same factors that produce his or her learning. Simply put, we are not always happy while we are learning. (Armstrong and Fukami, 2010: 336)

This balance may be played out in institutions internationally, as we attempt to maintain academic integrity in our teaching, learning and research. Perhaps the real challenge here is that of interpretation. As is the case with all evaluative feedback, for it to be useful, it requires contextualisation and interpretation. The evaluator is required to reflect on the source and significance of varying perceptions of the same event and to determine whether responses are ‘normal’ at that point within the course (Gibson, 1983); however, by lifting the constraints of the positivistic approaches to reflect the complexity of learning, we are also adding to the complexity of translating what we hear into meaningful feedback for improving the learning experience. Such phenomenological, naturalist approaches are no longer conducive to ‘measuring outcomes’ (Easterby-Smith, 1981).

However, the current climate of change in which management education is operating re-emphasises the continuing contentious discourse around students as consumers (Cuthbert, 2010; Modell, 2005; Popli, 2005). The Higher Education Academy, along with Lord Browne’s report and the recent UK Government White Paper (Department for Business Innovation and Skills, 2011), highlights the need for HE institutions to develop a more student-centred approach to their service design and provision with respect to all aspects of the student experience. Following debates concerning cognitive-affective learning and self-assessment, we argue for similar issues to be reflected in methods of evaluation in a bid to improve feedback and development cycles. In addition, we would like to see further research exploring other ways in which this type of affective method could be used to explore other emotional and political issues within organisations more generally (Vince, 2011).

This research offers an additional and alternative method of evaluation in undergraduate provision that reflects a confluent, phenomenological approach to teaching, learning and evaluation, namely, participant-produced drawing: one that allows students to set their own agenda for discussion and feedback and makes them feel a central part of institutional decision-making. We argue that these factors will play an increasingly important role in the way student-consumers evaluate their overall satisfaction in light of the changes that are fast approaching.

Thus, we encourage researchers and teaching professionals within business and management schools to be prudent in their approaches to evaluation and feedback, taking into account the affective characteristics of the undergraduate student experience. We also encourage future research into other confluent evaluation approaches and, more specifically, into how the rich, qualitative data gathered via these methods can be collated, understood and used efficiently, so that such approaches can be just as effective as those based in the quantitative paradigm.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Acknowledgements