

Identifying Learning Opportunities in Online Collaboration: A variation theory approach

Anindito ADITOMO a,b, Peter REIMANN b

a Faculty of Psychology, The University of Surabaya, Indonesia
b CoCo Research Center, The University of Sydney, Australia
[email protected]

Abstract: The focus of this paper is on the issue of evaluating learning which occurs in the process of online collaboration. The goal is to demonstrate the utility of a theoretical framework (variation theory) for assessing the process of online collaboration. To this end, the discourse of six online groups collaborating through a wiki was examined. Variation theory made possible the observation that, although the groups differed markedly in their collaboration processes, they focused on the same aspects of the object of learning while ignoring one key aspect. Further analysis suggested that this missed learning opportunity may have led the students to develop a ‘misconception’ of the topic. These findings point to the value of variation theory as a framework for analysing online discourse to make claims about students’ learning.

Keywords: Variation theory, learning opportunity, online collaboration, wiki.

Introduction

The focus of this paper is on the issue of evaluating learning which occurs in the process of online collaboration. The central problem here concerns how teachers and researchers can assess or evaluate what students, having participated in an episode of online collaboration, have learned from their collaboration. Put differently, this is an issue of identifying learning opportunities afforded by a particular episode of online collaboration.

It is important for educators to assess, in one way or another, their students’ collaboration processes, because collaboration does not always proceed in an ideal manner. In group work, responsibility is distributed, and thus group members may be tempted to give less-than-optimal effort or “free ride” [19]. Issues related to social and academic status may prevent lower-status members from actively participating in the collaboration [10]. Even groups composed of high-achieving students may fail to solve problems, especially when their interaction is incoherent, as judged from how solution proposals are made and responded to [1].

The process of collaboration can of course be examined from many perspectives. The goal of this paper is to demonstrate the utility of a relatively new theoretical framework (variation theory [13]) for assessing the process of online collaboration. In particular, the study illustrates how variation theory can highlight features of the collaboration process to identify what individual students may have learned from their collaboration.

1. Previous studies

How is learning conceptualised and assessed based on online group discourse data in previous studies of online collaboration? A survey of the literature revealed two distinct approaches: one which leans on a broadly cognitive perspective, and another which is based on sociocultural theories of learning.



1.1 Learning as an internal, cognitive process

Some researchers attempt to make claims, based on the analysis of online group discourse, about the quality of students’ cognitive processing. For example, Hara et al. analysed cognitive and metacognitive skills as they were reflected in the online discourse to “better understand the mental processes involved in discussions” [9, page 123]. In work by Schellens et al., online discourse data was also used to infer individual cognitive processes (average knowledge construction scores for each individual student were used as the main dependent variable in their study) [20]. Other authors have also made similar, though often implicit, inferences about the quality of individual cognitive processes from online discourse data [2] [7] [25].

This kind of interpretation implies a view of learning that is in line with the framing assumption of cognitive constructivism. From this perspective, learners are seen as active information processors, and learning as an active process of constructing (that is, not passively acquiring) mental representations [14]. Collaboration discourse is seen as containing traces of individual students’ active knowledge construction. Consequently, researchers can look for evidence of certain verbal behaviours which are assumed to reflect learning.

This perspective is evident in the coding schemes used by studies of online collaboration, many of which include categories of verbal contributions assumed to reflect more or less active individual knowledge construction. For instance, Bloom’s taxonomy of cognitive skills is often used to categorise discourse into lower-order (e.g. recalling) or higher-order thinking (e.g. applying, evaluating, or synthesising concepts) [2] [21] [25]. Individual learning is inferred from the frequency of these favourable contributions: more contribution units (e.g. sentences, paragraphs, or messages) of these types mean more active knowledge construction going on in the students’ minds, and thus more learning.

1.2 Learning as participating in social practices

Some authors analyse collaboration discourse to make claims not about individual learning, but about the quality of the group discourse itself. Nussbaum et al.’s [16] study of “vee” diagrams to support argumentation in a wiki, for example, coded the students’ discussion notes in terms of the number of arguments and counter-arguments, compromises, and creative solutions. When discussing their results, Nussbaum et al. focus on the quality of the groups’ argumentation (i.e. they examined the data to make claims about group-level properties).

A similar interpretive stance on online group discourse underpins Garrison and colleagues’ influential community of inquiry model [6]. This model proposes three interrelated elements of a successful community of inquiry: cognitive presence, social presence, and teaching presence. Cognitive presence (the phase of critical inquiry that a discussion group is in) is regarded as the most basic element; social presence (emotional expression, open communication, and group cohesion) and teaching presence (design and facilitation) are positioned as conditions which make cognitive presence possible. What is relevant to our discussion here is that the phases of critical inquiry are seen as a group-level property. Whether a group is in the first (triggering event) or the last phase (resolution) of critical inquiry is a matter of group achievement. Thus, in some studies using this model [6] [15], collaboration process data is used to infer not individual learning, but group-level qualities of the discourse.

This shift of focus away from internal individual processes towards group-level properties and processes is broadly in line with cultural-historical or situative views of cognition and learning [8]. Cognition, according to this perspective, is accomplished socially through interaction among people and various cultural resources. Cognition becomes not the internal property of an individual, but an emergent property of the group or activity system. Hence, learning is necessarily situated in specific socio-cultural contexts [3]. At the individual level, this process involves becoming more attuned to the constraints and affordances of the material and conceptual resources which mediate the social practices of a community, as well as developing meaningful identities within that community [8].

Studies of online collaborative learning that are explicitly informed by the sociocultural or situative perspective are starting to emerge. If the code-and-count content analysis technique is used in these studies, it is usually complemented with qualitative analysis of discourse. One example is Stahl’s [23] analysis of group cognition in synchronous (text-based chat) collaboration in mathematics.

Stahl’s research program examines middle and high-school students solving mathematical problems using online chat environments. The prime interest of this research program is group cognition, which is “the group’s experience of intersubjectivity, common ground, and shared world” [23]. Central to Stahl’s approach to investigating group cognition is the notion of member methods: the ways in which people accomplish their social practices, or the ways they conduct and make sense of their social interaction. To examine member methods and group cognition, Stahl draws heavily on conversation analysis, which provides micro-analytic techniques for examining conversations and the ways talk participants locally produce the social order which gives coherence to (or enables) their interaction.

2. Theoretical framework

2.1 Limitations of previous approaches to assessing individual learning opportunities

It appears that the constructivist interpretation directly addresses the issue of what students have learned from an episode of online collaboration. However, while this interpretation does allow inferences about learning at the individual level, it gives no information about what the students might have learned about the topic or material under study. Studies informed by this perspective give no serious description of the topics, concepts, or materials which the students were learning about [9] [20] [25]. These studies expressed their findings in terms of the relative amount of learning-related processes reflected in the online discourse; they do not provide information on what students have learned about the subject matter under study. Yet such information would be pedagogically valuable: for instance, if analysis shows that certain aspects of a topic might have been misunderstood or neglected, the teacher can use this information in preparing subsequent discussions and tasks.

On the other hand, studies of online collaboration informed by sociocultural perspectives do not directly address individual-level learning possibilities. From this perspective, online group discourse is examined as a group or social accomplishment. Content analysis results are used to make claims about the quality of the group discourse (e.g. Group A’s discussion is at a more advanced phase of critical inquiry compared to Group B’s). When group discourse is examined more qualitatively, results can yield insights about the nature or quality of the group discourse (e.g. the nature of a group’s successful problem solving activity) and mechanisms of online interaction. These, however, speak little about what individual students might have taken from an episode of online collaboration.

Each perspective has yielded valuable findings and insights about online collaboration; however, neither addresses the issue of what individual students might have learned from an episode of collaboration. Consequently, an alternative approach to analysing learning is needed.

2.2 Variation theory as an alternative analytic approach

Variation theory has been productively applied to analyse teacher-centred classrooms, but has not been widely applied to peer-directed collaboration [2]. The theory is rooted in phenomenography, which holds that learning is always learning about something. Furthermore, as many phenomenographic studies have shown, every object (material or conceptual) can be understood and experienced in qualitatively different ways, and each ‘way of understanding’ an object involves the discernment (becoming aware) of certain features or aspects of that object [11]. Learning equates to becoming able to understand an object in a new way, which means becoming able to discern certain (previously unnoticed) aspects of the object.


The central tenet of variation theory is that to discern certain aspects of an object, a person needs to experience variation corresponding to those aspects [13]. The awareness of possible variation in certain aspects of an object is required in order to discern those aspects. Without variation, those aspects will be taken for granted and will not “appear” in a person’s focal awareness. Two ways in which dimensions of variation can be opened are contrast and generalisation [13]. Contrast is the juxtaposition of instances with non-instances. For example, to understand what the concept “apple” means, a child must experience apples and non-apples. Generalisation occurs through the generation of different instances of one aspect or object; generalisations bring into awareness the non-critical aspects of an object. By experiencing various instances of apples (of different sizes and colours), for example, a child will discern that size and colour are not critical aspects of the concept “apple”.

From this perspective, pedagogy can be described as the opening of dimensions of variation which enable students to discern certain aspects of an object. The collection of aspects which can be discerned during a teaching/learning process constitutes the space of possible learning afforded by that process. Variation theory studies of face-to-face classrooms have demonstrated that teachers enact the same intended object of learning differently. That is, teachers’ explanations open different dimensions of variation, enabling the discernment of different aspects of the object and thus affording the development of different kinds of understanding [18].

The issue is whether the process enables the discernment of the critical aspects needed to understand an object in the intended way. In some cases, a learning situation might fail to open dimensions of variation corresponding to important aspects. Variation theory studies have shown that when a critical aspect is not varied by the teacher, students are unlikely to discern that aspect, which greatly decreases their ability to develop the intended understanding [12][17].

3. Method

3.1 Participants

To explore the utility of variation theory for identifying individual learning opportunities, this study looked at six online groups (of 2 or 3 postgraduate students, 16 students in total) working on the same task in the same pedagogical context. The task was selected as an instance of online collaboration which centres on a joint artefact (see below), in which students had to critically compare and contrast individually produced artefacts to negotiate a new, joint artefact.

3.2 Context: the task and the object of learning

Students in this study had to learn about “cognitive task analysis” (CTA), an instructional design method informed by information processing theories of learning. The specific CTA approach studied was “knowledge mapping” (KM), a procedure for representing (in a node-and-link diagram) the conceptual and procedural knowledge components underlying a certain competence (Figure 1).

Figure 1 Example of a knowledge map (from http://coe.sdsu.edu/EDTEC544/Modules)

Specifically, students were asked to individually perform a CTA to identify conceptual and procedural knowledge underlying the ability “to find online journals from the university’s library website”. Then, in groups, they were to compare and evaluate each other’s individual knowledge maps, and build a better group map. This activity covered a two-week period.

The goal of the task was to understand KM as an instructional design approach or tool. To develop this understanding, students needed to discern both the structural and the representational aspects of KM. Structural aspects included the different node and link types of a KM. Discerning the representational aspect means understanding that knowledge mapping is a way of representing the procedural and conceptual knowledge underlying the ability to perform something (a complete explanation of the critical aspects of the topic, while central to the analysis performed in this study, is beyond the scope of this paper).
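To make the distinction concrete, a knowledge map of this kind can be thought of as a typed node-and-link graph. The following sketch (in Python; the node and link type names are illustrative assumptions, not the vocabulary of the actual course module) shows such a structure: the structural aspects correspond to the node and link types, while the representational aspect concerns what the nodes stand for (knowledge components underlying an ability, rather than steps a user performs).

```python
from dataclasses import dataclass, field

# Illustrative node and link types; the labels used in the actual
# course module may differ (assumption made for this sketch only).
NODE_TYPES = {"goal", "procedure", "concept"}
LINK_TYPES = {"requires", "part_of"}

@dataclass
class Node:
    label: str
    node_type: str   # structural aspect: what kind of node this is

@dataclass
class Link:
    source: str
    target: str
    link_type: str   # structural aspect: what kind of relation this is

@dataclass
class KnowledgeMap:
    nodes: dict = field(default_factory=dict)   # label -> Node
    links: list = field(default_factory=list)

    def add_node(self, label, node_type):
        assert node_type in NODE_TYPES
        self.nodes[label] = Node(label, node_type)

    def add_link(self, source, target, link_type):
        assert link_type in LINK_TYPES
        self.links.append(Link(source, target, link_type))

# Representational aspect: nodes stand for knowledge components
# underlying the ability, not for navigation steps a user performs.
km = KnowledgeMap()
km.add_node("Find an online journal", "goal")
km.add_node("Use the library search form", "procedure")
km.add_node("Database vs. catalogue", "concept")
km.add_link("Find an online journal", "Use the library search form", "requires")
km.add_link("Use the library search form", "Database vs. catalogue", "requires")
```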

3.3 Data analysis

The main form of data was the groups’ online discourse and actions (e.g. uploading pictures or files, creating new wiki pages). Data analysis followed studies of classroom lessons informed by variation theory [17][18].

The analysis process is similar to a general model of qualitative data analysis described by Cresswell [5]. The process starts with a preliminary examination of the course module and curriculum to identify the intended object of learning (i.e. what kind of understanding of the object of learning was to be developed through the activity, as presented above). This was followed by data retrieval and data organisation. The wiki data had to be organised into a temporally sequenced transcript, because the structure of a completed wiki page does not necessarily reflect the temporal sequence of its development. Each wiki contribution was copied into a table in Microsoft Word, along with author and time-stamp information.
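As a rough illustration of this organisation step, the sketch below (Python; the record fields `author`, `timestamp`, and `text`, the date formats, and the example entries are assumptions, not the study’s actual export format) sorts wiki contributions by time so that the transcript reflects the temporal development of the page rather than its final structure.

```python
from datetime import datetime
import csv

# Hypothetical revision records as they might be exported from a wiki;
# authors, dates, and texts are invented for illustration.
revisions = [
    {"author": "Ann", "timestamp": "2009-04-09 20:27", "text": "Peer evaluation of Robert's map ..."},
    {"author": "Robert", "timestamp": "2009-04-08 11:02", "text": "Uploaded initial knowledge map."},
]

def to_transcript(revisions):
    """Return the revisions sorted into a temporally sequenced transcript."""
    return sorted(
        revisions,
        key=lambda r: datetime.strptime(r["timestamp"], "%Y-%m-%d %H:%M"),
    )

# Write the sequenced transcript to a table (one row per contribution).
with open("group_transcript.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "author", "text"])
    writer.writeheader()
    for row in to_transcript(revisions):
        writer.writerow(row)
```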

The transcripts were then read and analysed over several rounds. In the first reading, the transcripts were annotated (describing the actions that were performed in each contribution). The number of words and wiki versions contributed by each student were also counted. In addition, this reading led to the identification of three broad contribution types: substantive (related to the object of learning), group coordination (related to who does what by when), and wiki structuring (actions performed to structure the wiki page, such as creating links, headings, or new pages).
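A simple tally of this first-round coding might look like the following sketch (Python; the contribution records and the codes attached to them are invented for illustration, and in the study itself the coding was done manually rather than automatically).

```python
from collections import Counter, defaultdict

# Manually coded contributions (invented examples). "type" holds the
# first-round code: substantive, coordination, or structuring.
contributions = [
    {"author": "Kelly", "type": "substantive", "text": "I think the arrows should point to the goal."},
    {"author": "Kelly", "type": "structuring", "text": "Created page 'Group map v2'."},
    {"author": "Mac", "type": "coordination", "text": "Can everyone post their map by Friday?"},
]

words_per_student = defaultdict(int)
types_per_student = defaultdict(Counter)

for c in contributions:
    words_per_student[c["author"]] += len(c["text"].split())
    types_per_student[c["author"]][c["type"]] += 1

for author in words_per_student:
    print(author, words_per_student[author], dict(types_per_student[author]))
```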

The second round of reading focused on producing descriptions of each group’s collaboration process, whereas the third reading focused on describing and identifying learning opportunities. A learning opportunity is identified when an aspect of the object of learning is discussed in a way which opens a dimension of variation corresponding to that aspect. Theoretically, a dimension of variation can be opened through four forms of variation: contrast, generalisation, separation, or fusion.
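One way to record the outcome of this third reading is as a mapping from each group to the set of critical aspects for which a dimension of variation was opened. The sketch below (Python; the codings shown are made up and do not reproduce the results later reported in Table 1) illustrates how missed aspects can then be identified for each group.

```python
# Critical aspects identified in the preliminary analysis, plus the
# representational aspect; the group codings below are hypothetical.
CRITICAL_ASPECTS = {
    "node types", "link types", "relations between nodes",
    "goal node description", "overall structure", "direction of flow",
    "representational aspect",
}

# Aspects for which a dimension of variation was opened, per group
# (invented for illustration only).
opportunities = {
    "Group 1": {"node types", "link types", "direction of flow"},
    "Group 2": {"goal node description"},
}

for group, discerned in opportunities.items():
    missed = CRITICAL_ASPECTS - discerned
    print(f"{group}: missed aspects -> {sorted(missed)}")
```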

The analysis was iterative in two ways. First, the preliminary and main analyses were recursive: although the preliminary analysis identified a number of important structural aspects of knowledge maps, further analysis of the data revealed that students also noticed aspects which were not explicitly explained by the course module. Second, the act of writing and presenting findings also produced insights which led to further analysis or re-readings of the data. For example, producing a narrative of each group’s collaboration process yielded new insights about learning opportunities.

3.4 Analytic rigor

An important criterion of the quality or credibility of qualitative research is the plausibility of the researcher’s interpretation of the data [22]. This criterion rests on the assumption that, although social reality is multifaceted and socially constructed, some interpretations and accounts of data, along with claims based on that data, are more plausible than others. Thus, the process of online collaboration can be interpreted in multiple ways, depending on the theoretical lens used by the researcher. However, once a particular theoretical lens is adopted, plausible and less plausible interpretations of the data can be distinguished. In the present study, this means that there are more or less plausible accounts of students’ learning opportunities, as defined by variation theory, which can be observed in their online discourse.


One way to enhance the plausibility of interpretations is to provide a transparent account of the analytic process [22]. In the present study, transparency was attempted by presenting “layers of interpretation” which go from raw data, to low-inference descriptions, to higher-inference descriptions, and finally to claims related to the study’s questions. The first two layers are presented in a table which contains the transcript (raw data) and the annotations generated from the first-round reading of the transcript. Higher-inference descriptions are given in two forms: 1) narrative descriptions of each group’s collaboration process (which include descriptions of their learning opportunities) and 2) a summary of the learning opportunities present in all six groups’ collaboration. Providing these layers of interpretation should allow readers to trace back the analytic process.

Another technique for enhancing interpretive plausibility in qualitative research is to bracket the researcher’s expectations [24]. In the present study, this was attempted by not setting specific hypotheses about the kinds of learning opportunities afforded by the task and online learning environment. In other words, there were no preferred expectations concerning the effectiveness of the task and online environment in facilitating the intended learning outcomes. This hopefully minimised the potential bias of favouring certain findings (e.g. identifying all important learning opportunities) over others (e.g. finding that the wiki medium hindered certain learning processes).

4. Results and discussion

There were obvious differences in the groups’ levels of activity. Figure 2 shows that some groups were more active in the wiki than others. The data also show that Groups 1, 3, 4, and 5 managed to build a joint map, whereas Groups 2 and 6 only performed a peer evaluation activity (and did not engage in any collaborative map building). Furthermore, Group 1 conducted three online synchronous meetings totalling almost 2.5 hours (not depicted in Figure 2).

Figure 2 Average (per student) word contributions in the wiki (bar values for Groups 1–6: 891, 268.3, 289, 824, 753.5, and 415 words)

Despite obvious differences in activity levels, the groups were similar in how they enacted the object of learning: in evaluating and building their maps, students focused solely on the structural aspects of KM. The discussions opened dimensions of variation corresponding to those structural aspects. Most groups (except Group 2) had the opportunity, through their collaboration, to discern the two structural aspects explained in the module: link types and node types. Most groups (again, except Group 2) also had the opportunity to discern the different contents that their map may contain. In addition, some groups discerned structural aspects not explained by the course module: description of the goal node, overall structure, and direction of flow (Table 1).

Missing, however, were opportunities to discern the representational aspect of KM. The students in this study were challenged with building maps which represented the knowledge underlying an ability to find an e-journal. However, they did not evaluate or make decisions about their maps in terms of how well the maps represented the knowledge components underlying that ability.



Table 1 Pattern of learning opportunities observed in each group’s collaboration process

Critical aspects of the topic     Dimensions of variation opened (Groups 1–6)
Node types                        X X X X X
Link types                        X X X X X
Relations between nodes           X X X
Goal node description             X X X X
Overall structure                 X X
Direction of flow                 X X

The absence of a critical dimension of variation suggests a discrepancy between the intended and the enacted objects of learning. In other words, the online collaboration might not have afforded the opportunity to “see” the object of learning as intended by the course or teacher. Indeed, the data suggests that a knowledge map was viewed or understood differently: not as an instructional design tool, but as a “visual navigation” tool. In some groups, the view of KMs as user guides was expressed explicitly. For example, in Group 4’s peer evaluation, Ann commented that Robert’s pre-collaboration map was:

A clear, colourful representation of one potential way to access and search the library's databases. (Ann, Group 4, April 09, 20:27)

In Group 2’s peer evaluation, Mac wrote that Kim’s pre-collaboration map was a clear instruction to access e-journals:

… anyone who would read it would understand the logic behind the map and be able to follow it as instruction as to how to gain access to the ejournals. Perhaps simply linking up the 'click' instructions more clearly with the next element would enable users to more easily identify when to click and what that action would do. (Mac, Group 2, April 14, 15:04)

Members of Groups 5 and 6 made similar comments. This suggests that some students might have viewed their maps as representing steps towards a goal (as opposed to knowledge components) and as functioning as a user guide (as opposed to an instructional design tool). In addition to such explicit comments, there are other indicators which suggest that (some) students held this, perhaps more intuitive, view of knowledge maps. This is evident in several groups’ discussions about the direction of arrows in their maps. In Group 1, Kelly initially built her map with arrows pointing towards the goal node. In Group 4, the students thought that the arrows in their map should point towards the goal node, as the following quotations demonstrate:

… it's also occured to me that the arrows are going the wrong way. … if the goal is to find a certain eJournal, that the arrows would be pointing toward the goal, and not away? Just a thought. (Ann, Group 4, April 14, 20:45, emphasis added)

Robert responded to Ann’s comment the next day:

… I too thought it strange to begin with the goal at the beginning of the flow sequence. In particular, I thought that it conflicted with the final node, where arguably the goal should be located. Perhaps the goal node should be positioned at the beginning and at the end of the flow sequence? (Robert, Group 4, April 15, 12:16)

Here Robert agreed that it is not logical to have the goal node at the beginning of a KM; instead, it should be located at the end, as the end result of a series of steps. Again, this reflects a conception of knowledge maps as representing not knowledge components, but steps to be followed for someone to find an online journal. Evidence of this “visual navigator” view can be found in the transcripts of all six groups.

5. Conclusions

This study has demonstrated that variation theory can be employed to reveal what was possible to learn with regard to the object of learning. The analysis also revealed an important gap between the intended and the enacted object of learning, and a potential misconception of that object.

If applied to the data in this study, an information processing approach which infers individual cognitive processes would likely find that members of Groups 1 and 4 engaged in somewhat higher-quality cognitive processes (certainly compared to members of Groups 2 and 6, who engaged in a much shorter collaboration and produced less argumentative and elaborative discourse). Although this observation is valuable, it obscures the finding that all groups, including the most active ones, did not have the opportunity to discern an important aspect of the object of learning.

Meanwhile, a sociocultural approach would likely yield fine-grained insights about the mechanisms by which the groups conducted their collaboration. This approach might, for instance, shed light on why Group 6 did not succeed in sustaining its collaboration. However, again, this would not produce information about students’ ways of understanding the object of learning.

Furthermore, the variation theory analytic approach may be easier for teachers to adopt in their practice [2]. This is mainly because variation theory focuses on the object of learning, that is, on the subject matter. In general, teachers are more familiar with their subject matter than with learning theories or analytic frameworks derived from those theories. The variation theory approach essentially requires teachers to identify the critical aspects of their subject matter, and to examine whether and how those aspects are discussed in each student group.

References

[1] Barron, B. (2003). When smart groups fail. The Journal of the Learning Sciences, 12(3), 307-359.

[2] Booth, S., & Hulten, M. (2003). Opening dimensions of variation: An empirical study of learning in a Web-based discussion. Instructional Science, 31, 65-86.

[3] Brown, J. S., Collins, A. M., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.

[4] Bullen, M. (1998). Participation and critical thinking in online university distance education. The Journal of Distance Education, 13(2).

[5] Cresswell, J. W. (2007). Qualitative Inquiry and Research Design: Choosing Among Five Approaches (2nd Ed.). Thousand Oaks: Sage Publications.

[6] Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

[7] Gilbert, P. K., & Dabbagh, N. (2005). How to structure online discussions for meaningful discourse: a case study. British Journal of Educational Technology, 36(1), 5-18.

[8] Greeno, J. G. (1997). On claims that answer the wrong questions. Educational Researcher, 26(1), 5-17.

[9] Hara, N., Bonk, C. J., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28, 115-152.

[10] Lloyd, P., & Cohen, E. G. (1999). Peer status in middle school: a natural treatment for unequal participation. Social Psychology of Education, 3, 193-216.

[11] Marton, F., & Booth, S. (1997). Learning and Awareness. New Jersey: Mahwah.

[12] Marton, F., & Pang, M. F. (2006). On some necessary conditions of learning. The Journal of the Learning Sciences, 15(2), 193-220.

[13] Marton, F., & Tsui, A. B. M. (Eds.). (2004). Classroom Discourse and the Space of Learning. New Jersey: Mahwah.

[14] Mayer, R. E. (1996). Learners as information processors: Legacies and limitations of educational psychology's second metaphor. Educational Psychologist, 31(3&4), 151-161.

[15] Newman, D. R., Johnson, C., Webb, B., & Cochrane, C. (1997). Evaluating the quality of learning in computer supported co-operative learning. Journal of the American Society for Information Science, 48(6), 484-495.

[16] Nussbaum, E. M., Winsor, D. L., Aqui, Y. M., & Poliquin, A. M. (2007). Putting the pieces together: Online argumentation vee diagrams enhance thinking during discussions. International Journal of Computer-Supported Collaborative Learning, 2(4), 479-500.

[17] Pang, M. F., & Marton, F. (2003). Beyond "lesson study": Comparing two ways of facilitating the grasp of some economic concepts. Instructional Science, 31, 175-194.

[18] Runesson, U. (1999). Teaching as constituting a space of variation. Paper presented at the EARLI Conference.

[19] Salomon, G., & Globerson, T. (1989). When teams do not function the way they ought to. International Journal of Educational Research, 13, 89-99.

[20] Schellens, T., & Valcke, M. (2006). Fostering knowledge construction in university students through asynchronous discussion groups. Computers & Education, 46, 349–370.

[21] Schrire, S. (2004). Interaction and cognition in asynchronous computer conferencing. Instructional Science, 32(6), 475-502.

[22] Seale, C. (2002). Quality issues in qualitative inquiry. Qualitative Social Work, 1(1), 97-100.

[23] Stahl, G. (2005). Group cognition in computer-assisted collaborative learning. Journal of Computer Assisted Learning, 21, 79-90.

[24] Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research. Qualitative Health Research, 11(4), 522-537.

[25] Zhu, E. (2006). Interaction and cognitive engagement: An analysis of four asynchronous online discussions. Instructional Science, 34, 451-480.

