
Journal of Education for Business

ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20

Asynchronous Knowledge Sharing and
Conversation Interaction Impact on Grade in an
Online Business Course
Kenneth David Strang
To cite this article: Kenneth David Strang (2011) Asynchronous Knowledge Sharing and
Conversation Interaction Impact on Grade in an Online Business Course, Journal of Education
for Business, 86:4, 223-233, DOI: 10.1080/08832323.2010.510153
To link to this article: http://dx.doi.org/10.1080/08832323.2010.510153

Published online: 21 Apr 2011.

Download by: [Universitas Maritim Raja Ali Haji]

Date: 11 January 2016, At: 22:17

JOURNAL OF EDUCATION FOR BUSINESS, 86: 223–233, 2011
Copyright © Taylor & Francis Group, LLC
ISSN: 0883-2323
DOI: 10.1080/08832323.2010.510153

Asynchronous Knowledge Sharing and Conversation
Interaction Impact on Grade in an Online Business
Course
Kenneth David Strang


APPC Market Research, Sydney, New South Wales, Australia; State University of New York, Plattsburgh, Plattsburgh,
New York; and University of Atlanta, Atlanta, Georgia, USA

Student knowledge sharing and conversation theory interactions were coded from asynchronous discussion forums to measure the effect of learning-oriented utterances on academic
performance. The sample was 3 terms of an online business course (in an accredited MBA program) at a U.S.-based university. Correlation, stepwise regression, and multiple least squares
regression were used to create a statistically significant model with 4 interaction factors that
captured 89% of adjusted variance effect on grade. Although factor multicollinearity was excessive, the model supported a hypothesis that more student interaction in all 4 discussion
forums predicted a higher grade. Certain types of asynchronous forums presented negative
factor coefficients, which implied too much interaction may be counterproductive (cognitive
load theory or the law of diminishing returns).
Keywords: academic performance, asynchronous conversation utterances, distance education,
e-learning, online business course, knowledge sharing, student interaction

There is a need to increase student interaction during online courses, promote critical thinking, and thereby improve learning—this is encouraged by regional and international accreditation bodies (www.aacsb.edu, www.detc.org,
www.efmd.org) and considered good online teaching practice (Costin & Hamilton, 2009; Grandzol, 2004). Kolb and
Kolb (2005) stressed the need to improve student interaction, which they asserted is “in contrast to the ‘transmission’
model on which much current educational practice is based”
(p. 198). Other researchers have advocated for more student
interaction to improve e-learning (Johnson & Aragon, 2003;
Strang, 2010b; Tatsis & Koleza, 2008).

More research is also needed. Online education practitioners continue to face credibility challenges. Despite the “no significant difference” literature comparing online and classroom effectiveness (Bata-Jones & Avery, 2004; Bernard, Abrami, Lou, Borokhovski, et al., 2004; Joint, 2003; McLaren, 2004; Olson & Wisher, 2002; Russell, 2002; Stacey & Rice, 2002; Strang, 2009a; Webb, Gill, & Poe, 2005), a meta-analysis of 232 comparative studies found that although there was no average difference in achievement between residential and distance education courses, the results demonstrated an unacceptable variance (Bernard, Abrami, Lou, Borokhovski, et al., 2004). Moreover, a

substantial number of [distance education] applications provide better achievement results, [. . .] and have higher retention rates than their classroom counterparts [. . . on] the other hand, a substantial number of [distance education] applications are far worse than classroom instruction. (Bernard, Abrami, Lou, Borokhovski, et al., 2004, p. 406)

Correspondence should be addressed to Kenneth David Strang, State University of New York, Plattsburgh, School of Business and Economics, Redcay Hall, 101 Broad Street, Plattsburgh, NY 12901, USA. E-mail: [email protected]

In particular, an online course study found students perceived a lack of interactivity between peers and faculty (Glenn, Jones, & Hoyt, 2003).
In contrast, other researchers contend self-directed learning is more effective for some adults (Brookfield, 1993;
Hiemstra & Brockett, 1994). If this were true for online
courses, student interaction would not be needed to improve
grades. Although it is acknowledged that some students prefer
a self-directed learning approach (Ponton, Derrick, & Carr,
2005) or favor an individualistic reflective-process learning
style (Strang, 2008, 2009a), it is argued that interaction is
needed in formal learning because effective group interaction is essential in most workplaces (Chien, 2004; Ellinger, 2004;
Kessels & Poell, 2004). Furthermore, online communication
and peer collaboration are becoming essential graduate skills
to teach for contemporary employment (Barrie & Ginns, 2007; Tsai, Hwang, Tseng, & Hwang, 2008). However, clear
empirical proof is needed to show that online student interaction improves (or decreases) learning outcome, which argues
against (or supports) the self-directed learning hypothesis.
Another research problem is that many studies of online courses do not assess the effectiveness of e-learning
performance, or they rely solely on student self-report perceptions (Bernard, Abrami, Lou, & Borokhovski, 2004).
A meta-analysis found most studies lacked systematic approaches to measure the effectiveness of e-learning interaction; the authors claimed researchers simply described dynamics observed online (Tallent-Runnels et al., 2006). They
complained that “studies point to student preferences, faculty
satisfaction, and student motivation as primary delivery system determinants [. . .] new research is needed that measures
impact on academic success and thinking skills” (Tallent-Runnels et al., 2006, p. 117). Consequently, the impact of online
student interaction on learning outcome needs to be examined and documented.
In this study I review the education psychology literature
to identify best practices for examining student interaction
in online courses and techniques to measure interactions.
Student interactions are captured from an intact student sample over several terms of the same online business course
at an accredited U.S.-based university. Quantitative statistical techniques are used to measure the impact of student
interactions on grade.

LITERATURE REVIEW
First, in terms of rationale for this study, a basic tenet of
adult learning is that interaction of some sort is needed, whether with peers, the materials, the professor, or the learning environment itself (Schunk, 2004). There are many relevant theories in the education psychology literature that can explain e-learning, yet the scope of this study is to examine the asynchronous interaction impact on performance.
Learning-focused online student interaction normally takes
place in asynchronous discussion forums (Czubaj, 2000; Illeris, 2003), but it is acknowledged that productive student
interaction can occur via synchronous (virtual) classrooms
(Strang, 2010b)—note that the university courses in this
study utilized only online discussion forums for student interactions.
E-Learning Using Knowledge Sharing and
Conversational Interaction
The next task is to propose best practices interaction theories
that promote e-learning in online courses. Knowledge sharing
and conversation theories have been posited to improve learn-

ing through online asynchronous student interaction (Brewer
& Brewer, 2010; Kienle, 2009; Mooij, 2009; Wise, Padmanabhana, & Duffy, 2009). The knowledge-sharing concept of
socialization–externalization–combination–internalization
(SECI) posits that team members learn by sharing tacit and
explicit knowledge through dialog interactions (Nonaka &
Konno, 1998). Nonaka, Toyama, and Konno (2001) argued
that knowledge-sharing interaction dialog is facilitated
(not hindered) by online technology. Brewer and Brewer (2010) emphasized the importance of knowledge sharing in business and as an e-learning subject.
In the SECI knowledge creation model, critical thinking
occurs through knowledge articulation and peer dialog interactions (Nonaka & Teece, 2001). Peer interactions allow mental models of personal best practices to be made explicit for sharing (Strang, 2010a). The SECI model has been cited in several
studies to demonstrate effective learning-focused online interaction (Konidari & Abernot, 2007; Strang, 2010b; Tatsis
& Koleza, 2006).
Pask, Kallikourdis, and Scott (1975) and Duncan (1995)
developed the conversation theory model to explain how student learning was influenced by verbal dialog and information technology. Pask (1975) presented conversation theory
in a way that applies knowledge sharing, in that students
learn relationships among concepts by teaching back. Teachback occurs when an individual interacts with a peer (using
dialog) about what he or she has learned. This is useful for
tacit knowledge sharing.
Other researchers, namely Baker, Jensen, and Kolb
(2002), leveraged knowledge sharing and conversation theory in experiential learning, suggesting a shared meaning
can be obtained by students through the “interplay of tacit
and explicit dimensions of knowledge” (p. 4). In their extension to experiential learning theory, they claimed tacit
knowledge and deep understanding can be effectively learned
through conversational dialogue: “we must both hear and listen” (Baker et al., p. 5). In experiential learning, the conversational space is opened to the extent that students develop

the ability to perform both activities (speaking and listening
interactions), using the tension between epistemological discourse and ontological recourse to drive the dialogue forward
(Baker et al.).
Although conversation theory predates conventional online course delivery, the principles have been widely and
recently advocated to improve e-learning in a number of empirical studies, as summarized by Clark and Mayer (2003).
Finally, applying knowledge-sharing and conversation theory
during online courses was found to be effective for e-learning
(Crow & Smith, 2005; Kosnik, 2001; Strang, 2010c; Tan,
2003).
Measuring Knowledge Sharing and
Conversation Utterances in Online Courses
Given that online student interactions that apply knowledge-sharing and conversation theories should improve e-learning, in this subsection I investigate how this improvement is
measured. Wertsch (1998) discussed the principles of knowledge sharing and conversation for learning (albeit without the
advantage of using online technology), but his point was that multiple ongoing dialog interactions were needed to overcome cultural differences or personal speech inflections. He
found a learning interaction “involves at least two voices:
the voice of the cultural tool [. . .] and the voice of the agent
producing utterances in a unique speech situation” (Wertsch,
p. 99). By his implication, student-to-student and student-to-professor utterances during an online course would contribute to knowledge sharing and learning even if a student
was rephrasing earlier dialogue.
A common approach to measure the online learning interaction impact on performance is through the use of psychometric tests. Rovai, Wighting, Baker, and Grooms (2009)
developed a survey instrument to assess “perceived cognitive, affective, and psychomotor learning” (p. 11), which
was based on Bloom’s popularized Taxonomy for Learning Cognitive-Affective-Psychomotor domains (Krathwohl,
Bloom, & Masia, 1964). In that study, interaction significance was quantified using student self-report measures, but performance was not assessed. However, a critical deficiency noted previously in the literature was the lack of objective indicators for e-learning effectiveness beyond student self-reports of satisfaction and perceptions, or merely evaluating academic outcomes (Tallent-Runnels et al., 2006). Thus,
based on this advice, it would be necessary to capture objective metrics of knowledge sharing and conversational dialog,
along with actual performance related to those factors. There
are several relevant studies mentioned subsequently that provide insight about measuring e-learning interactions.
In a study of online conversation theory, Sherry, Billig,
and Tavalin (2000) found that students’ interaction with one
another and with their professor improved learning outcomes
and satisfaction. Wise et al. (2009) found that learning could be improved by applying conversation theory and knowledge sharing principles during online courses. Brewer and
Brewer (2010) proposed a theoretical model that integrated
the cognitive domain of Bloom’s taxonomy with knowledge
sharing, and human resource management interaction typically needed in business organizations. A recent study of
conversation theory in online MBA courses concluded that
“knowledge articulation [dialogue] will allow [students] to
improve most of their remaining DQ [asynchronous discussion forum] deliverables, moderately improve their essay paper, and strongly improve their case study analysis” (Strang,
2010c, p. 105). Unfortunately, none of these studies specifically assessed the effect of students’ online asynchronous interactions on their academic performance.
Tatsis and Koleza (2008) measured student interaction
impact on performance by analyzing conversation utterances
(using social-interpersonal factors such as face-saving versus face-threatening expressions). They emphasized that all
actual dialog utterances should be captured (and relevant expressions coded) because any dialogue can impact learning.
In their model, typical speech interactions between the participants involved illocutionary utterances, which were direct
and conventional speech exchanges, as well as perlocutionary utterances, which were indirect and sometimes unpredictable but could still impact learning (Tatsis & Koleza,
2008). Professor-initiated utterances during class are usually
directed toward all students, so these are considered more useful for generic e-learning (benefiting all students). Thus, although all online utterances should be measured, they do not
necessarily have equal impact on e-learning outcome.
Clark and Mayer (2003) did not rely on student self-report opinions in their research either, but instead advocated “an evidence-based practice” (p. 2). They pointed out that online interaction or collaboration does not automatically improve learning results (it has to be properly structured).
Furthermore, they claimed too much student interaction (or
too much course material dissemination from the professor)
could produce cognitive overload, thus negatively impacting e-learning (Clark & Mayer). This justifies the need to capture the specific amount of
dialog utterances during e-learning, and in particular, those
that contribute toward formal deliverables. In this study, it is
posited that the preceding can be accomplished by assessing
the amount of relevant student–student and student–professor
conversation utterances, which take place in asynchronous
discussion forum topics designated as formal deliverables
(when trivial utterances such as “thank you,” “yes,” and so
on are excluded from the analysis).
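As a sketch only, the exclusion rule described above could be automated as a simple phrase filter. This is a hypothetical illustration (the study's actual coding was performed manually by the professors), and the trivial-phrase list and example post are assumptions:

```python
# Hypothetical sketch of the utterance-filtering rule: split a forum post into
# sentence-level phrases and drop trivial replies before counting.
import re

TRIVIAL = {"thank you", "thanks", "yes", "no", "ok", "okay", "agreed"}

def learning_oriented_phrases(post):
    """Split a post into sentence-level phrases and exclude trivial ones."""
    phrases = [p.strip() for p in re.split(r"[.!?]+", post) if p.strip()]
    return [p for p in phrases if p.lower() not in TRIVIAL]

post = "Thanks. The ERP model handles long sales cycles. Yes. We should compare SaaS options."
print(len(learning_oriented_phrases(post)))  # 2 substantive phrases remain
```

In practice a human coder still judges relevance to the learning objectives; the filter only removes the obviously trivial replies.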

Research Propositions and Hypothesis
In light of the previous discussion, I posited that the application of knowledge sharing and conversation theories
during online courses should improve academic performance. More specifically, higher amounts of online knowledge sharing and conversation between students as well as
student–professor interactions should result in proportionately higher marks, if all other factors are controlled as much
as possible.
Because this study concerns an online course that
uses asynchronous discussion forums to record formal
student–student and student–professor interactions, and
given that it is posited that knowledge sharing and conversation theory improve e-learning, I hypothesized that quantifying learning-oriented utterances would create an indicator that can predict academic performance, as conceptually
shown in Figure 1. In this study, the formal asynchronous
forums were general discussion, research, case study, and
project.
Because it has been asserted that learning takes place at
the individual level, and given that professor interactions are
normally intended to benefit all students, it is logical to measure only the student utterances in order to relate student interactions to their grade. The asynchronous interaction indicator should predict performance, as hypothesized subsequently.

FIGURE 1 Hypothetical model of asynchronous interaction impact on performance.
Hypothesis 1 (H1): Higher amounts of asynchronous knowledge sharing and conversation theory interactions during an online course, measured by student-initiated learning-oriented utterances, would result in higher student grades (individual level of analysis).
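Stated as a regression sketch (the symbols below are illustrative assumptions, not the article's notation), H1 implies a linear model in which the utterance counts from the four forums predict grade, with positive coefficients:

```latex
% Illustrative linear model implied by H1; GEN, RES, CASE, PROJ denote
% learning-oriented utterance counts in the four asynchronous forums.
Grade_i = \beta_0 + \beta_1\,GEN_i + \beta_2\,RES_i + \beta_3\,CASE_i
          + \beta_4\,PROJ_i + \varepsilon_i,
\qquad H_1\colon \beta_j > 0 \;\; (j = 1, \dots, 4)
```

Here i indexes students; H1, read strictly, predicts each coefficient is positive.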

METHOD
This was an ongoing project using action research (Zuber-Skerritt, 1993) to improve the effectiveness of online courses.
The present study used mixed methods (Creswell, 2003) to
transform qualitative data into quantitative indicators and to
test the hypothesis.
Subjects and Study Context
The sample consisted of 53 students who completed the same online business course within an accredited MBA degree program at a U.S.-based university (intact convenience group). This sample excluded two students who withdrew; all of the remaining 53 students completed the course and were given a final grade. In terms of demographics, 100% reported being employed full-time and 55% were
women.
The entrance requirements for the degree program were:
baccalaureate degree with a GPA of at least 2.5 on a 4.0 scale
(62.5%), three letters of recommendation indicating the candidate’s ability to pursue graduate study (at least one from
a professor or academic advisor if the student was presently
studying or completed a degree within the last three years),
acceptable English language skills, and submission of GRE
scores. The 4% of students who reported using English as a second language (or who were not U.S. residents or citizens) had already taken and passed the TOEFL (meeting the minimum 550 threshold). The GRE General Test score
means were the following: 660 for verbal reasoning, 670 for
quantitative reasoning, and 4 for analytical writing.
These students took the same 12-week course—Applied
E-business Management Information Systems (EMIS).
There were no repeat students (statistical nonreplacement).
This course was offered over three contiguous terms in the
same academic program year. No changes were made to any

aspect of the course during the sample frame. The course
was taught by the same core faculty teaching team: two instructors (full professors) and one teaching assistant. Usually
one instructor taught the online courses (this study) and the
other taught the residential mode, with both collaborating
throughout each term to ensure the courses were equal in
content, delivery, and assessment. The online courses were
delivered using Blackboard for the asynchronous discussion
forum components, the assignment submission, and grades.
Six asynchronous discussion forums were set up, with
the first two designated for chatting and course materials,
respectively. The last four constituted the formal deliverables: general discussions, research and analysis, case studies, and project report. All deliverables were in writing (text,
graphics, or numbers), to be posted into the designated discussion forum and there were no quizzes or exams. The
general discussion amounted to ongoing Socratic and conversational questions posted by the professor (to stimulate
discussion related to the learning objectives), which students
were required to answer, and build on each other’s submissions throughout the course. The research area contained specific topics the students had to research, cite (in APA style),
and compare and contrast with each other’s findings. The
case study contained two empirical problems that required
a best practices recommendation from students, which was
achieved through decomposition of the problems and formulation of proposed solutions. The project deliverable was a
group effort that required a charter, plan, and completion of
a proposed solution for a business problem of their choice
(related to the course materials, learning objectives, and approved by the professor).
Statistical Procedures and Measures
Descriptive estimates were first used to allow other researchers to assess the sample characteristics, as well as to
ensure factors and variables met the assumptions for the subsequent statistical procedures. The level of confidence was
set to 95% for all tests.
The factors of interest were the knowledge sharing and
conversation theory interactions that took place in the asynchronous forums designated for the four deliverables. The
first three forums (general discussion, research, and case
study) contained threaded questions, answers, and followup dialog in which contributions were made by all students.
The fourth area was structured identically, but was slightly different in that only the students in the same team posted comments in their own area (thus it was demarcated by group). Project teams were expected to carry out threaded discussions to document all aspects of their project. The project report was also counted.

TABLE 1
Example of Coding Asynchronous Knowledge Sharing Conversation Interactions

General discussion (count: S6 = 5)
S6: Actually proposed new system is part of decision-making process—I enumerated four types in my previous discussion on this topic: 1. Keep/customize existing system; 2. Custom-build new system; 3. Buy and implement new system covering all requirements; 4. Find SaaS system to deliver

Research analysis (counts: S5 = 7, S12 = 1)
S5: The article lists the CRM disadvantages while providing more capabilities and reliability, Sugar Suite loads slower than vTiger CRM and is not so easy to use. Problems may also arise if a user doesn’t lock the Installation after finishing it. Contrary to vTiger CRM, some of its add-ons are not free for installing and should be ordered additionally. Another disadvantage of SugarCRM is the very resource-consuming upgrade process. A SugarCRM upgrade can rarely be completed successfully on a shared server because the upgrade times out.
S12: pasting link for we can reference this later www.siteground.com/sugarcrm vtiger.htm

Case study (counts: S8 = 5, S9 = 1)
S8: I can give some context on this one—the idea is that the current ERP is built to handle enterprise sales with long sales cycles but the business is moving to a model where many smaller sales (potentially automated) will occur that requires a completely transaction-oriented model for reviewing leads and focusing a tight integration between sales and support (since customers will be more self-service)
S9: So you would be using the “ERP method” to convince the VP to buy into the system?

Project (counts: S1 = 3, S2 = 2)
S1: Let me put it this way—I know that to do this assignment for work I would do it as a PowerPoint—what I’m trying to understand is what constraints we have in delivering this project for the class—from the email, I sense we need “references” and we need it to be written in a formal language I assume APA.
S2: I would also tend towards a “shorter” and critical issues-oriented discussion, with lists and comparisons between options rather than a structured set of sections per se—so I simply would like to ensure that we are meeting expectations for the class because I’m definitely sensing that I cannot approach this project the way I would for a work situation.

Note. S = student.
The interactions were quantified by counting the learning-oriented utterance phrases made by students and tallying these counts for each student, within each of the four asynchronous deliverable forums. The scope for a learning-oriented phrase was a sentence or fragment that made a
comment, question, citation, or reflection about any topic
relevant to the course materials, learning objectives, or subject being discussed (social chatter and short trivial replies
were excluded). I made the initial decision to classify a phrase as a learning-oriented utterance, which was later re-reviewed by a colleague (the other professor teaching the residential mode class). After collegial debate we arrived at a consensus on whether each disputed phrase was learning-oriented (only 1% of the total phrases were debated in this way and changed from the original coding).
The project report deliverable was coded slightly differently because some of the utterances were in the form
of the written group report posted into the asynchronous
project forum (and not strictly interactions per se). This forum contained both individual student interactions (discussions similar to the other three forums) as well as the groupauthored project report. Professor messages were excluded
from counting, but student question and responses to the pro-

fessor were included if they met the previous criteria used
for the other three forums (relevant to course and objectives).
Because the report was considered an important indicator of
performance, the sentences in this report were treated as utterances, in a manner similar to how the other three forums
were coded. For the project report, a count was made of all
relevant sentences (which were treated as utterance phrases),
and that same count of utterances was allocated to each team
member, as all students in the group were expected and required to jointly author the report. Table 1 illustrates example
interaction coding results from the study data.
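The tallying scheme just described, including the project-report rule that credits every relevant report sentence to each team member, can be sketched as follows. The counts mirror Table 1, but the report sentence count and team membership are hypothetical:

```python
# Illustrative tallying of coded utterance counts per student per forum, with
# the project-report allocation rule (names and counts are hypothetical).
from collections import defaultdict

# Coded interaction counts, as in Table 1, e.g., ("S6", "general", 5)
coded = [("S6", "general", 5), ("S5", "research", 7), ("S12", "research", 1),
         ("S8", "case", 5), ("S9", "case", 1), ("S1", "project", 3), ("S2", "project", 2)]

tally = defaultdict(int)
for student, forum, count in coded:
    tally[(student, forum)] += count

# Project-report rule: every relevant report sentence is credited to each
# member of the authoring team (40 is a hypothetical sentence count).
report_sentences = 40
for member in ["S1", "S2"]:
    tally[(member, "project")] += report_sentences

print(tally[("S1", "project")])  # 3 discussion utterances + 40 report sentences = 43
```

Each `(student, forum)` total then becomes one observation in the four interaction factors analyzed below.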

RESULTS
First, descriptive estimates were calculated to display the
sample characteristics. Then, the hypothesis tests were conducted and evaluated using correlation and regression.
Exploratory Data Analysis
Table 2 lists the important descriptive estimates of the sample: means and standard deviations, along with kurtosis and skewness to indicate distribution normality (only one kurtosis value fell below −2, indicating an unusually flat peak, and the corresponding skewness was low; neither was cause for concern with this sample). A key indicator was the overall final grade mean of 0.75, that is, 75% (SD = 0.10), for all students (N = 53), which was not significantly different from previous academic year performance (for all terms) in this course, t(151) = 1.879, p = .381. From this it may be assumed that the course difficulty level was consistent.

TABLE 2
Descriptive Statistics of Online Course Sample (n = 53)

                   Asynchronous interactions (phrase utterances)
Variable           General    Research    Case study    Project    Final grade
M                  176        210         402           123        0.75
SE                 9.28       9.68        7.47          3.58       0.01
Median             151        154         424           114        0.73
SD                 67.59      70.50       54.38         26.05      0.10
Sample variance    4568.78    4970.77     2956.80       678.55     0.01
Kurtosis           −1.35      −2.04       −0.64         −0.65      −0.92
Skewness           0.09       0.02        −1.07         1.02       0.40
Minimum            87         131         306           97         0.59
Maximum            272        287         453           170        0.96
With respect to the online asynchronous utterances, the
case study forum had the most (M = 402, SD = 54); in fact, it had twice as many interactions as the next highest category, research dialog (M = 210, SD = 71). Even the student with the minimum interactions in the case study discussion was higher than the maximum of all other students in all other online asynchronous forums. This immediately suggests there may be an effect of case study interactions on the
academic outcome (simply due to the magnitude and lower
relative variance)—as the hypothesis was that more interactions would result in higher grades, this is one factor to
carefully analyze. General discussion was slightly lower (M
= 176, SD = 68) and project dialog was lower by a similar
amount (M = 123, SD = 26). Based on experience with this
course over the last few years, there is a normative trend
of more interaction with the case study and project deliverables, followed by research dialog, but usually the general
discussion is the forum with the lowest student interactions.
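The descriptive estimates reported in Table 2 can be sketched in a few lines, here using the sample (spreadsheet-style) skewness and excess-kurtosis estimators; the input counts below are hypothetical, not the study's data:

```python
# Minimal sketch of Table 2's descriptive estimates on hypothetical tallies,
# using sample skewness and sample excess kurtosis (spreadsheet-style formulas).
from statistics import mean, stdev

def describe(x):
    n, m, s = len(x), mean(x), stdev(x)
    skew = n / ((n - 1) * (n - 2)) * sum(((v - m) / s) ** 3 for v in x)
    kurt = (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3))
            * sum(((v - m) / s) ** 4 for v in x)
            - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
    return {"M": m, "SD": s, "variance": s ** 2, "skewness": skew, "kurtosis": kurt}

counts = [87, 120, 151, 160, 176, 198, 230, 255, 272]  # hypothetical general-forum tallies
est = describe(counts)
print(round(est["M"], 1), round(est["SD"], 1), round(est["kurtosis"], 2))
```

A kurtosis near or below −2 from such a function would flag the same flat-peaked distribution noted for the research forum above.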
Preliminary Hypothesis Testing Analysis
The Pearson Product Moment correlations of all factors and
the independent variable (final grade performance) are listed
in Table 3 (any coefficient beyond ±0.3 was generally considered significant). Obviously there was significant correlation
between certain discussion forums (research and case study
was 0.58, whereas research and general was 0.52 and research
and projects was 0.68). This doesn’t necessarily mean particular students were talkative in the asynchronous forums,
but instead this is likely a precursor of an underlying learning style dimension of the students, whereby high levels of
interaction in one deliverable forum would be expected from
the same students in other forums. The first reflection in such a situation might be to remove one of these factors from the model, but on the other hand, each forum served a different learning purpose.

TABLE 3
Correlations of Interaction Utterances and Performance in Sample

                          Asynchronous interactions (phrase utterances)
Pearson correlation       General    Research    Case study    Project
Research discussion        0.514
Case study discussion     −0.250      0.583
Project discussion        −0.029      0.680       0.391
Final grade               −0.326      0.054       0.397         0.179
The correlations with the final grade were of most interest
for testing the hypothesis that high interactions would relate
to high performance (at the individual student level of analysis). Case study interaction was positively correlated with
grade (0.40), which was earlier suspected due to the magnitude
of the utterances as compared with the other asynchronous
forums. This suggests that more online knowledge sharing
and conversation among students is moderately and positively related to their final grade.
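Pearson coefficients like those in Table 3 pair each student's utterance tally with that student's final grade. The sketch below shows the computation on five hypothetical data points (stand-ins for the n = 53 sample, not the study's data):

```python
# Sketch of a Pearson correlation between per-student utterance tallies and
# final grade; the five data points are hypothetical.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

general = [150, 176, 202, 120, 260]      # hypothetical general-forum tallies
grade = [0.80, 0.75, 0.70, 0.85, 0.62]   # final grades (as proportions)
print(round(pearson(general, grade), 2))  # negative, echoing the reported -0.33
```

The same function applied to the case study tallies would be expected to return a positive coefficient, mirroring the 0.40 reported above.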
The surprising result was the moderate negative correlation between general discussion forum interactions and final
grade (−0.33). Drawing on prior experience in this course,
the likely explanation is that because the general forum contained generic discussion (including questions), those students who read and understood the theories well would not
likely have had as many inquiries or dialog (in this topic), as
compared with those students that were having difficulty. Furthermore, general discussion was a topic that attracted panic
questions when students had skipped through their materials quickly (superficial learning), and then needed help with
several related subjects rather than with a specific deliverable (thus the choice of general forum). Lower interactions
in the general discussion related to students having a strong
understanding of all materials.
The lower correlation of project discussion with grade (0.18) can also be explained from teaching experience. Projects were team deliverables; therefore, more time was spent up front creating a plan and charter that clearly outlined the roles and responsibilities of each member. It is assumed that when team member duties were clearly laid out (taking advantage of the strengths and weaknesses of each member), the actual tasks required less discussion later on to create the report. By this reasoning, when teams did not have a solid plan, they required more ongoing discussion, which is less efficient (and competed for time with other concurrent work); therefore, more interaction in the project discussion forum signaled a lower performing team and a correspondingly lower final grade. Nonetheless, a certain level of interaction was expected, which positively correlated with the final grade. Interaction in the research discussion


ASYNCHRONOUS INTERACTION IMPACT
TABLE 4
Stepwise Regression Models of Interactions on Performance
Number    R2       Adj. R2    Δ Adj. R2    C-p      General    Research    Case    Project

1         0.107    0.089                   389.9    S
1         0.158    0.141      0.052        364.8                           S
3         0.215    0.167      0.026        340.8    S                      S       S
2         0.205    0.174      0.007        343.3               S           S
2         0.213    0.181      0.007        339.7                           S       S
3         0.245    0.198      0.017        326.1               S           S       S
4         0.902    0.894      0.696        5.0      S          S           S       S


Note. Values sorted by adjusted R2. "S" indicates factor entered into stepwise regression model.

did not significantly correlate with final grade, but as pointed
out previously, research instead correlated with general and
case study interactions (separately not simultaneously). This
is reasonable, as more peer dialog in either topic (case study
or general discussion) could trigger the need to undertake
and discuss additional research.
Hypothesis Testing Results
Given the correlations between the interaction factors (particularly research with case study and with general discussion), along with the low correlation of certain factors with grade, the next step was to confirm which of these factors should be included in the model to test their effect on grade. To accomplish this, stepwise multiple regression was used: factors were entered into the model one at a time, in all combinations, while recording their incremental ability to capture the variance on performance.
Table 4 lists the stepwise regression model combinations
(in each row), with the better models toward the bottom.
Each row shows the number of variables entered into the
model, the R2, adjusted R2, and delta (incremental amount of
adjusted R2 captured from the previous model, with negatives
meaning less), followed by the C-p statistic. An S in the right
columns of Table 4 indicates which factor was selected in
the stepwise regression (each row in Table 4 constitutes a
different model). The first model used one factor (general
discussion interactions) and captured 8.9% of the variance
impact on final grade. The third model used three interaction
factors (general discussion, case study, and project report),
capturing 16.7% of the variance effect on grade (which was
0.026 more than the second model that used only case study
as the factor).
A best-practices method for selecting significant predictors in a multiple regression model is to find the row where the C-p statistic is close to k + 1, where k is the number of factors entered (Levine, Stephan, Krehbiel, & Berenson, 2005). Adjusted R2 is an important estimate because it adjusts for the number of variables in the model. Applying this technique, the best combination of factors was the last row, with all four included in the model (C-p = 5.0 for four factors, capturing 89.4% of variance).

Another method of identifying the best factors in a regression model is to try all combinations of factors (using the best subsets procedure in the statistical software), sort the resulting matrix by adjusted R2 (ascending), and then use the C-p as a cutoff for any ties, following the logic of Levine et al. (2005) but instead comparing the delta (relative change) in adjusted R2 from model to model (Strang, 2009b). Using this technique, it is clear that the last row in Table 4 captures a very large amount of incremental factor variance (delta adjusted R2) effect on grade (.69) when all four factors were included, which corroborates the previous technique. The reason this cautionary step was taken (assessing delta adjusted R2) was the high correlation between general and the other factors and the low correlation of certain factors with grade. For example, when general was excluded from the model (second-to-last row in Table 4), the adjusted R2 was 19.8%, which was 0.017 more relative variance captured using only three factors; that would be a more parsimonious statistical model (Keppel & Wickens, 2004). However, the four-factor interaction model is clearly the best in terms of capturing combined (89%) and incremental (70%) variance on grade. Thus, the conclusion is that all four factors (general, research, case study, and project) should be in the model, as together they capture an adjusted R2 of 89.4% of variance on final grade. This result is very good; it is considered a large effect (Cohen, 1992).
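The best-subsets procedure behind Table 4 can be sketched with NumPy alone: fit ordinary least squares for every factor combination and compare adjusted R2. The data below are randomly generated stand-ins, not the study's sample:

```python
# Best-subsets sketch: fit OLS for every combination of the four factors
# and rank the models by adjusted R^2, mirroring Table 4's procedure.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n = 53                                           # matches the study's N
X = rng.normal(size=(n, 4))                      # 4 synthetic interaction factors
y = X @ np.array([0.5, 0.3, -0.4, 0.6]) + rng.normal(scale=0.5, size=n)
names = ["general", "research", "case", "project"]

def adj_r2(cols):
    """Adjusted R^2 of an OLS fit using the factor columns in `cols`."""
    A = np.column_stack([np.ones(n), X[:, cols]])  # add intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    k = len(cols)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# All 15 non-empty subsets, sorted ascending by adjusted R^2 (as in Table 4)
results = sorted(
    (adj_r2(list(c)), [names[i] for i in c])
    for size in range(1, 5) for c in combinations(range(4), size)
)
best = results[-1]   # subset with the highest adjusted R^2
```

Scanning the deltas between successive rows of such a sorted matrix is exactly the cautionary check described above.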
Now with the four significant interaction factors selected
from the stepwise multiple regression, the next step was to
calculate detailed estimates of the effects on academic performance (final grade). Least squares multiple regression was
used to test this complete regression model for significant
effect size (omnibus model test) and then to estimate the coefficients, t tests, p values, and other statistical benchmarks.
The results are presented in Table 5 (detailed estimates first,
then an omnibus test).
The first critical result in Table 5 is the omnibus test, which
is the analysis of variance (ANOVA) of the four factors in the
regression model to determine the grade effect significance.
In this situation the estimate was good (R2 = .902; adj. R2 = .894), F(4, 48) = 110.81, p = .000. This made it permissible
to interpret the detailed coefficient estimates. The omnibus


K. D. STRANG
TABLE 5
Regression of Asynchronous Forum Interactions on Course Performance

Predictor     Coefficient    SD       t         p       VIF    Hypothesis

Constant      −9.2900        0.540    −17.30    .000
Case study     0.0203        0.001     18.86    .000    162    Supported
General        0.0184        0.001     17.98    .000    228    Supported
Research      −0.0269        0.002    −18.38    .000    508    Supported
Projects       0.0351        0.001     18.38    .000    118    Supported


Note. Model (omnibus values): F(4, 48) = 110.81, p = .00; R2 = .902; Adj. R2 = .894. Durbin-Watson = 1.71.

test was a method triangulation for the stepwise regression, as both results were equal.
The interaction factors are listed in Table 5 as predictors (the constant is the intercept), followed by the coefficients, standard deviations, t-test estimates, and p values. The key estimates to examine are the t tests and p values. All t tests were considered significant following the rule of |t| ≥ 2 (Jöreskog, Sörbom, & Wallentin, 2006), and clearly all p values were zero (a good result that supported the hypothesis). From this, the hypothesis can be accepted: each of the four factors (used together in the model) is statistically significant, capturing a large variance effect on final grade. On the other hand, there are a few caveats to discuss, as identified by the other indicators.
Most importantly, the variance inflation factor (VIF) was calculated to detect undesirable statistical interaction among the independent factors by estimating how much coefficient variance was driven by multicollinearity. The most desirable VIF is 1, which means a predictor is orthogonal to the others in the matrix (no significant correlation), whereas a VIF higher than 10 indicates multicollinearity (Tamhane & Dunlop, 2000). Some statisticians recommend removing factors with a VIF greater than 5 (Snee, 1973); others suggest removing any greater than 3 (Carlson, Thorne, & Krehbiel, 2004). All factors had very large VIFs, ranging from 118 to 508 (Table 5). Based on the statistical literature, the model may be unreliable, as the independent factors are likely confounding one another when predicting grade.
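The VIF diagnostic itself is a set of auxiliary regressions: each factor is regressed on the remaining factors, and VIF_j = 1 / (1 − R_j²). A minimal sketch with synthetic data, where two columns are deliberately built to be nearly collinear so the inflated VIF is visible:

```python
# VIF via auxiliary regressions. Data are synthetic; `research` is constructed
# to be nearly collinear with `general` to demonstrate an inflated VIF.
import numpy as np

rng = np.random.default_rng(1)
n = 53
general = rng.normal(size=n)
research = 0.95 * general + rng.normal(scale=0.1, size=n)  # nearly collinear
case = rng.normal(size=n)
project = rng.normal(size=n)
X = np.column_stack([general, research, case, project])

def vif(X, j):
    """Variance inflation factor of column j: regress it on the other columns."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])  # intercept + other factors
    beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ beta
    r2 = 1 - (resid @ resid) / ((X[:, j] - X[:, j].mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

With this construction the two collinear columns show VIFs well above the common cutoff of 10, while the independent columns stay near 1, which is the pattern the cutoffs cited above are meant to flag.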
However, it is argued here (in educational psychology)
that these asynchronous discussions do overlap in the sense
that knowledge sharing conversation utterances in one particular forum may well improve student e-learning across several forums. For example, the comments and findings from
one topic could help with the learning and report writing in
another forum (e.g., research likely helps all other deliverables). In fact, this was statistically implied when the high
Pearson correlation was discovered between certain factors
(research with case study, project, and general discussions).
Furthermore, interaction in one forum could reduce the need to interact in another forum, as was likely the situation with projects: the lower interactions (and the logical sequencing of the project toward the end of the course) could signify that the learning curve had reached its zenith at that point, such that many questions had already been answered and much of the research was available to complete the project. Thus, in this study, the decision was made to retain all four factors despite the high VIF estimates.
Because the interaction factors (utterances in the four asynchronous discussion forums) were posted periodically over time, the dependent variable (final grade) was checked for autocorrelation using the Durbin-Watson (DW) d estimate. The acceptable benchmark for the DW d is a value close to 2 (Levine et al., 2005). In this sample the DW d estimate of 1.71 was acceptable, meaning no autocorrelation was detected.
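The DW d statistic is a simple ratio over the ordered residuals, d = Σ(e_t − e_{t−1})² / Σe_t², with values near 2 indicating no first-order autocorrelation. A sketch with illustrative residuals (not the study's):

```python
# Durbin-Watson d on a sequence of regression residuals.
# d ≈ 2 means no first-order autocorrelation; d → 0 positive, d → 4 negative.

def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

resid = [0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2]   # illustrative residuals
d = durbin_watson(resid)
```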
Finally, given the decision to accept this statistically significant four-factor model to estimate the effect of asynchronous forum interaction on final grade, the details of the regression can be evaluated. The regression equation for the model was: final grade = −9.29 + 0.0203 × Case study + 0.0184 × General − 0.0269 × Research + 0.0351 × Projects.
The coefficients in the previous equation can now be interpreted mathematically. A positive coefficient indicates that higher values of the factor result in higher values of the dependent variable (in this case the final grade); similarly, negative coefficients in a regression model decrease the dependent variable. The only negative coefficient in the model is for research, which signifies that lower amounts of utterances in the research forum, together with higher amounts of utterances in the other three forums, generally result in higher grades.
In this statistically significant regression model, the coefficients can be used as predictors to forecast grade for given levels of interaction in each of the asynchronous forums (although this induction applies only to this sample frame). The range of each factor shown in Table 2 should be maintained for the multiple regression formula to work correctly; thus, the minimum for research forum interactions is 131 and the maximum is 287. In keeping with this logic, to model a student with an average amount of knowledge sharing and conversation interaction (by entering the mean utterances observed for each factor listed in Table 2), the predicted grade is: −9.29 + (0.0203 × 402) + (0.0184 × 176) + (−0.0269 × 210) + (0.0351 × 123) = 0.7773. This corroborates the model


since this is equal to the grade mean (78%) reported in
Table 2.
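The worked prediction above can be reproduced directly from the published coefficients (the function name is mine; the coefficients and mean utterance counts are from Tables 2 and 5):

```python
# Fitted regression equation from Table 5, applied to the mean utterance
# counts reported from Table 2 (case study 402, general 176, research 210,
# project 123). The predicted grade reproduces the sample mean of ~78%.

def predict_grade(case, general, research, project):
    return (-9.29 + 0.0203 * case + 0.0184 * general
            - 0.0269 * research + 0.0351 * project)

mean_grade = predict_grade(402, 176, 210, 123)   # 0.7773, i.e., about 78%
```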
On the other hand, the model is not totally reliable for prediction, due to the excessive intercorrelation detected through different (triangulated) statistical methods. First, undesired factor intercorrelation was foreshadowed by the moderate positive Pearson correlation found between research and the other three factors (case study, projects, and general discussion interaction). Furthermore, general discussion had a negative correlation with grade, whereas the project and research factors had only small positive correlations with grade, leaving case study as the only factor with a moderate positive correlation to grade (this is important to note, as a better correlation model would have presented at least a weak positive correlation between each factor and the dependent variable, grade). Another warning sign of potential confounding was the huge incremental factor variance increase in the stepwise multiple regression (the delta adjusted R2 in the last row of Table 4 was .69); such a large increase when only one more factor was added (to form the final four-factor solution) indicates that hidden interaction was likely captured in the model. Finally, the excessive VIFs noted during the least squares multiple regression (Table 5) confirmed that these factors were confounding one another.
In light of the previous results, an experiment was conducted to test the stability of the model. Monte Carlo simulation was used, entering random combinations of utterances for each factor within the minimum and maximum thresholds for each factor from the descriptive statistics of Table 2. The simulation produced several instances of invalid outcomes (some simulated grades were negative or higher than 100%). Therefore, this model should still be considered conceptual; more research is needed.
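The stability experiment can be sketched as follows. Only the research range (131-287) is given in the text; the other min/max ranges below are assumptions for illustration, not the values from Table 2:

```python
# Monte Carlo stability check: draw random utterance counts within each
# factor's observed range and flag predicted grades outside [0, 1].
import random

random.seed(42)
RANGES = {                      # (min, max) utterances per factor
    "case": (300, 500),         # assumed for illustration
    "general": (100, 250),      # assumed for illustration
    "research": (131, 287),     # from the text
    "project": (80, 170),       # assumed for illustration
}

def predict(case, general, research, project):
    """Fitted four-factor equation from Table 5."""
    return (-9.29 + 0.0203 * case + 0.0184 * general
            - 0.0269 * research + 0.0351 * project)

invalid = 0
trials = 10_000
for _ in range(trials):
    draw = {k: random.uniform(*v) for k, v in RANGES.items()}
    g = predict(draw["case"], draw["general"], draw["research"], draw["project"])
    if not 0.0 <= g <= 1.0:     # a grade must lie between 0% and 100%
        invalid += 1
```

Under these assumed ranges the simulation produces many out-of-range grades, consistent with the paper's conclusion that the model is conceptual rather than predictive.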

DISCUSSION
Conclusions
Theoretically and empirically this study accomplished its objective: a statistically significant model was developed, but high factor correlations constrain the implications.
Overall, the study indicated that applied knowledge-sharing and conversation theory, as represented by higher levels of student asynchronous discussion forum interaction, improved academic performance in this online business course. Certainly it is possible that other factors could have caused the higher interactions and/or improved grades (e.g., better technology or higher student motivation), and thus more research is necessary.
At a more detailed level, in this study I hypothesized that if students applied higher amounts of knowledge-sharing and conversation theory in asynchronous discussion forums, e-learning would increase, resulting in a higher final grade.


The sample included three terms of the same online business course (N = 53), using the same syllabus, materials, context, professor, and technology. The interaction counts were calculated by coding learning-oriented utterances posted by students in each of the four official asynchronous discussion forums. Least squares regression produced a statistically significant four-factor model that supported the hypothesis (R2 = .902, adjusted R2 = .894), F(4, 48) = 110.81, p = .000. In this model, more student interaction in the project, case study, research, and general discussion forums led to a higher final grade.
Statistically, the interaction factors as a whole had a significant effect in capturing the variance on final grade (more interactions produced higher grades). However, the exact best-practices amount of interaction for a particular asynchronous discussion forum cannot be accurately predicted, due to the high multicollinearity detected between the four factors. At best it can be posited that higher interaction in the case study forum, moderate interaction in the general discussion and project forums, and lower interaction in the research forum (all in a relative sense) increase final grade (and vice versa). It is clear from the model that very low interaction in one or all four asynchronous discussion forums would predict a lower grade, yet a large number of interactions does not necessarily translate into a passing grade either. This is likely due to cognitive overload and the law of diminishing returns: students must rationalize the knowledge sharing and conversation interaction effort they expend toward e-learning.
As implied previously, there seemed to be different best-practices levels of student knowledge sharing and conversation interaction for particular asynchronous discussion forums. Case study had the highest amount of interaction and the largest positive correlation with grade (+0.40). General discussion and research interactions did not correlate strongly with grade, but (along with case study) they contributed to capturing variance on grade in the multiple regression model. The lower amount of project discussion forum interaction was rationalized to be due to the group mode of this fourth deliverable (the project report), whereby some team interaction was needed to form the charter and plan, but thereafter more offline effort was put into writing the coauthored report (not captured within the forum). The procedure used to codify the project forum interactions treated the report sentences as utterance phrases, such that each student in the team received equal credit for all report interactions, in addition to their individual postings in the project discussion forum. This accounted for a lower utterance count in the project forum, and thus a moderate impact on grade in terms of interaction.
Project interactions had the highest coefficient in the multiple regression model (+0.0351), suggesting that moderate but high-quality group interaction in the project asynchronous discussion (accompanied by an effective, succinct coauthored project report) increased final grade. The other three forums (case study, research, and general discussion) differed from the project forum in the sense that all interactions in the former were individual learning-oriented knowledge-sharing conversations (thus, most utterances were of better quality). Not surprisingly, higher interaction counts in the case study, research, and general dis
