
Journal of Education for Business

ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20

A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students

John A. Lawrence & Ram P. Singhania

To cite this article: John A. Lawrence & Ram P. Singhania (2004) A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students, Journal of Education for Business, 79:6, 333-338, DOI: 10.3200/JOEB.79.6.333-338

To link to this article: http://dx.doi.org/10.3200/JOEB.79.6.333-338

Published online: 07 Aug 2010.


A Study of Teaching and Testing Strategies for a Required Statistics Course for Undergraduate Business Students

JOHN A. LAWRENCE
RAM P. SINGHANIA
California State University
Fullerton, California

ABSTRACT. In this investigation of student performance in introductory business statistics classes, the authors performed two separate controlled studies to compare performance in (a) distance-learning versus traditionally delivered courses and (b) multiple-choice versus problem-solving tests. Results of the first study, based on the authors' several semesters of experience teaching the course in both distance-learning and traditional formats, show that the distance-learning students did not fare as well as those taking the course in the traditional format. The results of the second study, in which a common set of students took both multiple-choice and written exams in the same semester, showed no significant difference in performance.

Colleges and universities continually must try to improve the design, teaching methodologies, and testing strategies of their courses. The higher education system is being challenged to provide increased educational opportunities without increased budgets. Fortunately, advances in information technology have brought many innovative alternatives to both teaching and testing methods. Some of the most important factors have been the development, availability, and increasing popularity of the Internet. Advances in hardware technology, software programs, and presentation software are just some of the factors that have affected the delivery of university courses inside and outside of the classroom. In addition, according to a Department of Education study published in July 1997 in USA Today, an increasing nontraditional university population finds itself composed of "bargain-hunting, time-strapped shoppers who value convenience and flexibility over prestige" ("Tax breaks will make . . . ," p. 14A). The capabilities and quality of online courses, course costs, and convenience have led to a situation in which teaching and testing approaches in the same course vary significantly, not only from instructor to instructor but also from section to section taught by the same instructor through varying delivery modes.

In this article, we report on the results of a comparison of alternative methods of teaching and testing in a required statistics course for undergraduate business students at California State University, Fullerton (CSUF). This course is similar to undergraduate courses required at almost all AACSB-accredited business schools. At CSUF each semester, more than 1,000 students are enrolled in more than 25 different sections taught by more than a dozen full- and part-time faculty members. Although all instructors are required to use the same text, they can differ in their choice of delivery mode, emphasis on Excel approaches, extent of computer laboratory time, applied relationship between theory and practice and between hand calculation and computer use, and type of testing. With such a variety of instructional approaches, the school administrators, course coordinator, and even the individual instructors can become concerned about maintaining quality and consistency in instruction.
Traditional Versus Distance-Learning Performance
Driven by the widespread availability and increasing popularity of the Internet, as well as by intense competition for students, distance learning has become a popular delivery mode for all types of university courses and programs. According to the National Center for Education Statistics, by 1998 more than 44% of all higher education institutions offered distance-learning courses, an increase of more than one third compared with just 3 years earlier (Finn, 1998). Lawrence (2003) found that there were more than 1.3 million total enrollments in over 50,000 distance-learning course offerings. In June 2000, the National Education Association reported that more than 90% of its members were at institutions offering or considering offering distance-learning courses and that more than 10% of its members already had taught at least one course online. Now, virtually every major American university offers these courses.

Distance-learning courses reach a broader student audience and have the potential to address students' needs better at significantly lower costs. Further, as there is little evidence that any of the factors favoring the increased popularity of distance-learning courses will reverse course in the near future, it is relatively safe to conclude that nothing will alter the increased acceptance of these courses. Indeed, in June 2001 the National Governors Association, although citing the need to oversee the quality of such courses, nonetheless enthusiastically endorsed expanded distance-learning opportunities (Lawrence, 2003).
As distance learning gains an increasingly wider audience, many educators are concerned about how learning in distance-learning courses compares with that in traditional courses. Distance learning is still in its infancy, and there are numerous ways to deliver a distance-learning course (Phillips, 2001). Delivery platforms such as Blackboard and WebCT have narrowed the approaches to some extent, but there is still a plethora of distance-learning approaches on the landscape. At their annual meetings, professional educational organizations such as the Institute for Operations Research and the Management Sciences now regularly schedule sessions devoted to the teaching of statistics and other quantitative subjects via distance learning.
Despite such growing research interest in distance education, "there is a relative paucity of true, original research dedicated to explaining or predicting phenomena related to distance learning" (Phipps & Merisotis, 1999). Thus, although some researchers have presented arguments against distance learning, others have concluded that distance learning compares favorably with classroom instruction. Some have even argued that distance education does not merit the granting of degrees, whereas others have indicated that students undergoing distance education have learning outcomes comparable to, if not better than, those of students in traditional classroom settings.
In our first study, we compared student performance in introductory statistics courses delivered both by distance learning and in a traditional teaching style. We decided to look at results from many different perspectives. We compared test scores from all distance-learning and traditional students taking the courses. Then we compared the average final grades of students in both courses. Students who do not complete the course either withdraw early enough and receive a "W" or drop out late in the semester (usually because it is unlikely that they will receive a passing grade) and receive a WU (which is equivalent to an "F"). In a third comparison, we compared the average course grades of students who finished the course with the corresponding grades of students who received a WU. Finally, we compared the percentage of students who received a WU with the percentage who dropped the course (W or WU) for any reason.
We evaluated the following five hypotheses:

H1: Students who take the traditional course will have higher average test scores than distance-learning students who take the equivalent course.

H2: The average course grade of students who finish a traditional course will be higher than that of students who finish the equivalent distance-learning course.

H3: The average course grade of students who finish or receive a WU in a traditional course will be higher than that of students who finish or receive a WU in the equivalent distance-learning course.

H4: The percentage of students receiving a WU will be greater in the distance-learning course than in the equivalent traditional course.

H5: The percentage of students receiving a W or a WU will be greater in the distance-learning course than in the equivalent traditional course.
Method

The traditional course is taught in a computer laboratory through a combination of instructor-generated PowerPoint slides and traditional whiteboard lecturing approaches. The course is heavily Excel based. In the traditional course, students are given a small amount of class time in the laboratory to master Excel concepts.

There is little skimping on theory, but with the exception of a few very simple problems on exams that test whether students can perform hand calculations, the exams are computer based with large data sets. Emphasis is on problem formulation, manipulating data sets, reading and interpreting Excel-generated output, and drawing appropriate business conclusions. The course includes one Excel computer project, which typically has the student use Internet databases to track a stock over a specified period of time and use regression models to draw conclusions about the stock's performance compared with other financial indicators.
The distance-learning course tackles the same concepts in a similar manner. The only difference is that instead of receiving face-to-face instruction, students listen to approximately 40 prerecorded PowerPoint modules narrated by the instructor to simulate face-to-face lecture material. Each module lasts between 10 and 40 minutes and is replete with dynamic motion and narration. Students also may download nonaudio print versions of the slides and take notes during the narration. Embedded in these slides are the theory, applications, hand calculations, and the Excel approach to statistics problem solving. The only required meetings are for the exams, although periodic chat sessions and optional in-class review sessions are scheduled before each exam.
Because of the difficulty that some students had accessing the narrated PowerPoint lectures over the Internet, we provided distance-learning students with a CD containing the same lectures as those on the Internet. Students taking the course in the traditional manner had exactly the same access to the Web site, had the same project and homework, and took the same exams. Although they also could listen to the lectures over the Internet, they were not provided with the CD.
From spring 2001 through spring 2003, a 7-semester period that includes one summer session and one intersession, we offered the course four times in both formats, twice in a distance-learning format only, and once in the traditional format only. The fact that we offered the course in both traditional and distance-learning formats in 4 different semesters allowed for paired t-test comparisons. We discuss our results from these paired comparisons and the overall performance using all 11 sections (the 6 semesters in which we taught the course in a distance-learning format and the 5 semesters in which we taught it in the traditional format) using two-sample t tests with equal variances (the data passed the equal-variances tests).
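As a rough sketch of the two kinds of comparison just described, the fragment below uses Python with SciPy. The score values are hypothetical placeholders rather than the study's data, and the one-sided alternative is an assumption made to mirror the directional hypotheses (the article does not state whether its p values are one- or two-sided).

```python
# Illustrative sketch only: hypothetical per-section average test scores.
from scipy import stats

# Four semesters in which both formats ran (paired by semester).
traditional_paired = [78.2, 74.5, 76.9, 80.1]
distance_paired = [70.3, 69.8, 68.4, 74.7]

# Paired t test across the 4 common semesters (directional, as in H1).
t_p, p_p = stats.ttest_rel(traditional_paired, distance_paired,
                           alternative="greater")

# All 11 sections: 5 traditional, 6 distance-learning (hypothetical values).
traditional_all = traditional_paired + [77.4]
distance_all = distance_paired + [71.0, 69.2]

# The authors report that the data passed equal-variances tests;
# Levene's test is one common such check.
_, p_lev = stats.levene(traditional_all, distance_all)

# Two-sample t test with pooled (equal) variances over all 11 sections.
t_2, p_2 = stats.ttest_ind(traditional_all, distance_all,
                           equal_var=True, alternative="greater")

print(f"paired: t = {t_p:.3f}, p = {p_p:.4f}")
print(f"Levene equal-variance check: p = {p_lev:.4f}")
print(f"pooled two-sample: t = {t_2:.3f}, p = {p_2:.4f}")
```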
Results

We obtained the following results in our first study:
H1: There was strong evidence that the traditional students outperformed the distance-learning students on tests (for the paired data, p = .0018; when we considered all data, p = .0036). The average difference in test scores using the paired data was 6.557 with a margin of error of 4.011. When we used all data, the average difference was 6.001 with a margin of error of 4.276.
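As a point of reference, margins of error of this kind are conventionally computed from the t distribution; assuming the usual 95% level (the confidence level is not stated explicitly in the text), for n paired semester differences with mean \(\bar{d}\) and standard deviation \(s_d\),

\[
\text{margin of error} \;=\; t_{\alpha/2,\,n-1}\,\frac{s_d}{\sqrt{n}},
\qquad
\text{interval: } \bar{d} \,\pm\, t_{\alpha/2,\,n-1}\,\frac{s_d}{\sqrt{n}},
\]

so that the paired result above corresponds to an interval of 6.557 ± 4.011.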
H2: There was also strong evidence that the average grade of students completing the class given in the traditional format would be higher than the average grade of students completing the equivalent distance-learning course (for paired data for 4 semesters, p = .0097; when we included all 11 courses, p = .0130). When we used only the paired data, the average difference in the average grade (based on the usual 4-point scale) was .250 with a margin of error of .173; when we used all 11 courses, the average difference was .259 with a margin of error of .220.
H3: For those either completing the course or receiving a WU, there was moderate evidence based on paired semester data (p = .0785) that the average grade of those taking the traditional course was higher than the average grade of those taking the distance-learning course. There was significant evidence of this same result when we used all semester data (p = .0184). The average difference in the average grade (again based on the usual 4-point scale) was about .6 of a point with a margin of error of between .55 and 1.1.
H4: There was moderate evidence that the percentage of students who did not finish the class and who received a WU (rather than an F) was greater in the distance-learning courses than in the traditional ones (for paired data for 4 semesters, p = .0872; when we used all 11 courses, p = .0339). The average difference in these rates based on only the 4 paired semesters was 18.5%, but with a margin of error of ± 33.2%. When we used all 11 courses, the average difference between the two groups was 15.7% with a margin of error of 17.1%.
H5: There was moderate evidence based on the paired data that the percentage of students who did not finish the class and received a WU or a W (appropriately dropping the course before the university-sanctioned drop date) was greater in the distance-learning courses than in the traditional courses (p = .0551). The average difference based on only paired semesters was 24.4% with a margin of error of ± 34.6%. When we used all 11 courses, there was significant evidence of such a difference (p = .0240); the average difference between distance-learning and traditional courses was 19.5% with a margin of error of 19.3%.

Conclusions

According to the results of our first study, the distance-learning students did not fare as well as those taking the same course in a traditional format. We would hope to be able to conclude, however, that the gap in such performance differences narrowed over time. Trend timelines show that the difference in overall performance between the two groups narrowed over time, the average grade in distance-learning classes increased, and the percentage of students receiving Ws and/or WUs in distance-learning classes decreased. However, none of these trend estimates could be found to be statistically different from 0.

Possible Confounding Factors

Although this study ruled out instructor performance as a confounding factor, several other factors could have affected the outcome.

1. This distance-learning course differs from most other distance-learning statistics courses. It is not a short-answer, multiple-choice, Blackboard-style distance-learning course. Instead, lectures are recorded and we use dynamic PowerPoint slides to simulate a classroom lecture format. There are various link aids, including sample tests, answered homework, Excel files, and so forth, but the student must spend time listening to the recorded lectures and navigating the Web site. In short, the distance-learning student must exhibit more discipline in this type of distance-learning course than he or she would in others.
2. When we first offered the distance-learning course, the recorded lectures were accessed solely from the Web site. The first distance-learning class was given in the spring 2001 semester. At that time, many students had access to only 28.8K modems. Even with 56K modems, many students experienced severe buffering, causing problems with their ability to hear the lectures from home effectively. Those with this problem had to come to campus to listen to the lectures (over T1 lines), negating much of the benefit of taking a distance-learning class. After the first semester in which the course was given, we distributed CDs with the recorded lectures to the distance-learning students. Students taking the course in the traditional mode had access to the distance-learning recorded lectures but not to the CD. In the brief period since spring 2001, many more students have gained access to DSL and cable modems and even T1 lines at work.
3. Until the fall 2003 semester (that is, for the entire length of this study), we identified distance-learning Web courses in the class schedule only by a superscript symbol next to the course. Students then had to find the list of symbols to ascertain whether a particular course was a Web course. Consequently, about half of the students did not realize that they had signed up for a distance-learning course; this situation occurred even in the most recent offering of the distance-learning course. Furthermore, although there were perhaps 25 other sections of the course, demand for the course still exceeded the total number of seats in all sections. Hence many students took the distance-learning course even though they would have preferred to take a traditional course.
4. We administered the distance-learning course through the main campus at California State University, Fullerton. Although the course was nominally scheduled in the evenings (with the exception of the summer course), the distance-learning class had the same characteristics as the student population taking traditional courses at the main campus. With few exceptions, this population was composed of students in their early 20s. The traditional course in this study was given at California State University, Fullerton's South County branch campus (located at Mission Viejo until the fall 2002 semester, at which time it relocated to El Toro). Students in these courses usually are older (their approximate average age is 26) and more affluent than the main-campus students, and one could assume that they value education more. They typically have full-time jobs, and more of them tend to have families compared with students at the main campus. Although we do not have evidence demonstrating a difference between these two sets of students, these are factors to consider.
5. The distance-learning course is evolving constantly. The instructor is still on a steep learning curve, considering what does and does not work. More and better information is added to the Web site each semester.
6. In the last 2 years, many more students have been exposed to other Web courses. As students become more comfortable taking Web courses, their performance should improve.
Problem-Solving Versus Multiple-Choice Testing
In our second study, we compared student performance on traditional problem-solving tests with that on multiple-choice tests. A primary reason for making this comparison was an avalanche of student complaints about how hard the written tests were in this course. In fact, during the 5 most recent semesters in which we taught the statistics course, the most common negative comment by far on our student evaluations concerned the perceived difficulty of the tests. A second reason for making this comparison was a prevailing notion that somehow students can guess their way through multiple-choice tests (leading some to refer to them as multiple-guess tests). Although such a comparison has been done in the past, in this study the same students (taught by the same instructor) took both kinds of test, which eliminated the bias of instructor differences.
In this study, we were interested not only in comparing the overall differences between student performance on problem-solving tests and that on multiple-choice exams, but also in investigating whether there was a learning curve of improvement. We evaluated the following three hypotheses:

H6: Students will perform better on average on the first problem-solving exam than on the first multiple-choice exam.

H7: Students will perform better on average on the second multiple-choice exam than on the second problem-solving exam.

H8: The overall average test grade received on problem-solving tests will differ from that received on multiple-choice exams.
Method

During spring 2003, the statistics course that we used as the basis of this study was taught in the classroom, with an emphasis on mastering the concepts in a traditional manner. We used Excel to solve several problems and a portion of class time to interpret Excel printouts (particularly for hypothesis tests and regression analyses), but most problems were solved by hand and/or calculator. We gave classes and tests in a traditional classroom setting. Using this teaching paradigm, we gave both traditional and multiple-choice exams to the same students, allowing for a testing/learning outcomes comparison of these two approaches to test taking.
We graded the course primarily on four examinations. The first examination, which covered basic descriptive statistics, probability theory, and discrete probability distributions, was a traditional problem-solving test. The second examination, which covered continuous probability distributions, sampling distributions, estimation, and hypothesis testing, was a multiple-choice test with each question having four choices, including the correct answer. The third examination, which covered regression analyses, multiple regression, analysis of variance, and chi-square tests for multinomial distributions and contingency tables, was again a problem-solving test, whereas the comprehensive final examination was again a multiple-choice exam.
Students could use the listed answer choices for multiple-choice exam questions as clues to solving a problem correctly. However, this approach would have been successful only if the student had a basic understanding of the concepts; the student could not simply guess the correct answers. Thus, many of the wrong answers chosen were generated by students who did not know all the steps for solving the problem. All tests were similar in terms of complexity and the number of questions, and students were told in advance which type of test to expect.
Exactly 100 students completed this statistics course with its alternating exam pattern. This format allows for pairwise t-test comparisons. Using pairwise t tests, we compared the grades on the first problem-solving exam with those on the first multiple-choice exam, the grades on the second problem-solving exam with those on the second multiple-choice exam, and the overall average grade for the two problem-solving exams with the overall average grade for the two multiple-choice exams.
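A minimal sketch of these pairwise comparisons follows, with the same caveat as before: the exam scores below are synthetic stand-ins for the actual data, and the per-student pairing is what makes each comparison a paired t test.

```python
# Illustrative sketch (hypothetical scores, not the study's data): with all
# four exam scores recorded per student, each comparison is a paired t test
# across the 100 students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # students completing the course

# Columns: problem-solving 1, multiple-choice 1, problem-solving 2,
# multiple-choice 2 (synthetic values for illustration only).
scores = rng.normal(loc=[78, 72, 74, 78], scale=10, size=(n, 4))
ps1, mc1, ps2, mc2 = scores.T

# H6: first problem-solving vs. first multiple-choice exam.
print(stats.ttest_rel(ps1, mc1))
# H7: second multiple-choice vs. second problem-solving exam.
print(stats.ttest_rel(mc2, ps2))
# H8: average over both problem-solving exams vs. both multiple-choice exams.
print(stats.ttest_rel((ps1 + ps2) / 2, (mc1 + mc2) / 2))
```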
Results

We obtained the following results for the three hypotheses formulated for our second study:
H6: We found strong evidence (p = .0008) that the students, given no previous familiarity with the instructor's testing, fared significantly better on the first problem-solving exam than on the first multiple-choice exam. The average difference in test scores between the first problem-solving and the first multiple-choice exam was 6.20 points with a margin of error of ± 3.78 points.

H7: We found strong evidence (p = .0096) to conclude that the students, given the experience of one problem-solving and one multiple-choice exam, now scored significantly better on the second multiple-choice exam than on the second problem-solving exam. The average difference in test scores between the second multiple-choice exam and the second problem-solving exam was 4.17 points with a margin of error of ± 2.34 points.

H8: However, when we compared the combined test results for both problem-solving tests with the combined test results for both multiple-choice tests, we could find no significant difference in student performance (p = .3990). Based on the paired data, the average difference in test scores between problem-solving and multiple-choice exams was only 1.01 points, and the margin of error was ± 2.38 points.
Possible Confounding Factors

Although the experimental design of this study removed the instructor and the students as possible confounding factors, the following factors could have affected our results:

1. Although we took care to assign the same number of problems with similar complexity on all tests, there actually could have been some differences in the complexity of the tests. Student perception about the complexity, rather than the actual complexity, also could have been a confounding factor.

2. The subject matter in this course lends itself to varying degrees of difficulty. Some material on the first test actually may have been learned in previous college probability courses or even in high school. As the course progresses, there is less likelihood of the students having been exposed previously to the material.

3. As the course progresses, students become more familiar with the format and wording of the tests. This learning curve may have allowed the students to be more at ease when taking tests. On the other hand, students who performed poorly on early exams may have experienced increased tension instead and may have become "psyched out" when taking subsequent tests.

These are all factors worth exploring in future studies.
Student Perceptions Based on Student Evaluations

Even though many academics have raised concerns about using student evaluations to measure the performance and quality of instruction, research studies have found that such evaluations can be reliable and valid. In an exhaustive study, Cohen (1981) concluded that the overall correlation between instructor ratings and student achievement was .43 and that the overall correlation between course ratings and student achievement was .47. However, many studies also indicate that student evaluations of faculty members should not be compared across disciplines and course levels. Instructor evaluations also can be influenced by the faculty member's gender (Bachen, McLoughlin, & Garcia, 1999; Basow, 1995). Thus, we feel that an investigation of quality of instruction should include a comparison of student evaluations but that researchers should try to make sure that any bias is removed.
Overall instructor ratings in these studies were based on responses to a seven-question evaluation conducted at the end of the semester in which this study was conducted and at the end of the previous semester. Students used a 4-point scale to rate the instructor's ability to communicate, preparation for class, and willingness to help, along with the exams' coverage of the subject matter, the class and project assignments, and overall instructor effectiveness.
In the first study, both the traditional and the distance-learning statistics classes were delivered by the same instructor. This instructor consistently received extremely high student evaluations, much higher than department norms. In fact, his average student evaluation grade for the six distance-learning classes in this study was 3.42 (out of a possible 4), and his average student evaluation grade in the five traditional classes was 3.45. The students viewed the instructor equally in both formats; thus, instructor performance should not be viewed as a confounding factor. In fact, the p value for the paired comparison test for unequal average differences in student evaluations was .71, and the p value for the paired comparison test for differences in student evaluations for semesters in which the course was taught in both formats was .30. These values support the conjecture that instructor performance was not a confounding factor. In general, the correlation between student grade point average and the instructor's student evaluation average was only .42.
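As an illustration of that last computation, a correlation of this kind can be obtained directly from section-level averages; the values below are hypothetical stand-ins, and the section-level unit of analysis is an assumption (the text does not specify whether the correlation was computed per section or per student).

```python
# Illustrative sketch: hypothetical section-level averages, one entry per
# section (11 sections in the study), used to compute a correlation like
# the reported r = .42.
import numpy as np

mean_course_grade = np.array([2.9, 2.5, 2.7, 3.1, 2.6,
                              2.4, 2.8, 3.0, 2.3, 2.7, 2.9])
mean_evaluation = np.array([3.5, 3.3, 3.4, 3.6, 3.4,
                            3.2, 3.5, 3.5, 3.3, 3.4, 3.5])

r = np.corrcoef(mean_course_grade, mean_evaluation)[0, 1]
print(f"grade/evaluation correlation: r = {r:.2f}")
```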
In the second study, we compared the results from the student evaluations for the given semester with those from the student evaluations for the same course given by the same instructor in the previous semester. The courses were taught in exactly the same manner in both semesters, covered exactly the same material, were given at approximately the same times and days of the week, and were delivered in the same set of classrooms to the same types of students. The only significant differing factor was that in the latter semester the instructor gave two multiple-choice and two problem-solving exams, as opposed to only problem-solving exams, which he gave in the immediately preceding semester. The difference in the results was dramatic. A two-sample t test for the hypothesis that there was an increase in the overall student evaluation scores yielded strong evidence (p = 1.63 × 10^-18) that students preferred the instruction when multiple-choice tests were substituted for problem-solving exams. This result might lead one to infer that students feel that multiple-choice tests are easier to take and thus might allow them to be more relaxed about the class in general. Because the difficulty of written tests seems to be an overwhelming concern for students, one might conclude that offering multiple-choice tests may lead to a more positive learning experience (and higher evaluations for the instruction) in the classroom.


Overall Conclusions

We can draw several conclusions based on these two studies. The first is that distance-learning students do not fare as well as those taking the same course in the traditional format. Although the trend timeline appeared to show that the gap representing the difference in overall course performance between traditional and distance-learning students narrowed over time, with the average grade in distance-learning classes increasing and the percentage of students receiving Ws and/or WUs in distance-learning classes decreasing, none of these trend estimates could be found to be statistically different from 0. However, according to student evaluations, students' perceptions of distance-learning classes were as favorable as their perceptions of traditional classes.
The results of these studies also show that there were variations in the test scores and the average student performance from test to test according to the type of exam. There may be many theories about the variations in the two types of test (involving the complexity of the tests, students' familiarity with the format and language of the tests, their feeling tense or being psyched out, etc.), but the data for the combined samples showed that the test scores were not significantly different between the written and multiple-choice tests. However, according to the analysis of student evaluations, students seem to prefer a class in which at least some of the exams are multiple choice. One perception that both students and faculty members might share is that multiple-choice tests are somehow easier. This can be viewed positively, as students may feel more relaxed when taking multiple-choice tests and have more positive learning experiences in the classroom.

REFERENCES

Bachen, C. M., McLoughlin, M. M., & Garcia, S. (1999). Assessing the role of gender in college students' evaluations of faculty. Communication Education, 48(3), 193–210.

Basow, S. A. (1995). Student evaluations of college professors: When gender matters. Journal of Educational Psychology, 87(4), 656–665.

Cohen, P. A. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51(3), 281–309.

Finn, C. E., Jr. (1998). Today's academic market requires a new taxonomy of colleges. Chronicle of Higher Education, XLV(1).

Lawrence, J. A. (2003). A distance learning approach to teaching management science and statistics. International Transactions in Operational Research, 10, 1–13.

National Education Association. (2000). A survey of traditional and distance learning in higher education. Washington, DC: National Education Association.

Phillips, V. (2001). The Virtual University Gazette's FAQ on distance learning, accreditation, and other college degrees. Retrieved November 9, 2001, from http://www.geteducated.com/articles/dlfaq

Phipps, R., & Merisotis, J. (1999). What is the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: Institute for Higher Education Policy.

Tax breaks will make higher education more accessible. (1997, July 14). USA Today, p. 14A.