
Using Course Management
Systems to Enhance the Value of
Student Evaluations of Teaching
RICHARD L. OLIVER
ELISE POOKIE SAUTTER
New Mexico State University
Las Cruces, New Mexico

ABSTRACT. In this article, the authors propose a method of course management system (CMS) administration of student evaluations of teaching (SETs). The method provides a mechanism for offering a greater guarantee of anonymity to the student respondents. The authors report on a case study in which this guarantee was likely a significant factor contributing to the increase in response rates for online submissions. In addition, the results suggest that the method provides significant benefits for improving both the summative and formative value of SETs.

Quality assessment tools are an important element in recognizing, rewarding, and encouraging continuous improvement and innovation in higher education. Many researchers have investigated the advantages and disadvantages of student evaluations of teaching (SETs) as a method of assessing teaching (Cashin, 1995; Centra, 1993; Feldman, 1989; Marsh, 1984; McKeachie, 1987). Much of the debate surrounding the validity of SETs focuses on potential sources of bias that are often beyond the control of the instructor. These sources of bias include such factors as (a) personal attributes of the student, such as gender and expected course grade; (b) situational characteristics of the learning environment, such as the elective status of the course and the size of the class; and (c) genetic traits of the instructor, such as gender and attractiveness. Mixed research results regarding these and other variables continue to stimulate debate about the ultimate validity of SETs as effective tools for teaching assessment.

Much of the debate regarding the value of SETs likely stems from the intended use of feedback from the instruments. A considerable amount of research explores the comparative worth of SETs for summative versus formative purposes (Cashin & Downey, 1992; Hobson & Talbot, 2001). Summative
uses of SETs attempt to provide summary judgments about teaching performance that can be used to make decisions regarding promotion, tenure, and
annual performance reviews. Alternatively, formative feedback emphasizes
the collection of information useful for
the development and improvement of
teaching. Though researchers argue that
SETs can provide a valuable source of
formative feedback, many institutions
design and implement SETs in a way
that significantly weakens the formative
value of the feedback. In this article, we
explore the use of new learning technologies for improving the formative
value of SETs and the procedural efficiency and integrity of the process for
summative purposes.
In recent years, the advent of online learning technologies has increased interest in the administration of online teaching evaluations. Empirical researchers have explored the potential pros and cons of the online format for administration of student teaching evaluations. In general,
the research findings indicate that students prefer online administration of
SETs but that potential problems with
guarantees of anonymity and response
rates have limited faculty and/or institutional acceptance of these approaches
(Dommeyer, Baum, & Hanna, 2002;
Layne, DeCristoforo, & McGinty, 1999).
Other researchers have reported on differences between student responses to Web-based and paper versions of the same questionnaire (Layne et al., 1999; Olsen, Wygant, & Brown, 1999; Tomsic, Hendel, & Matross, 2000). Course management systems, such as WebCT, Blackboard, and eCollege, offer real advances in
improving the efficiency and overall
value derived from the administration of
student teaching evaluations online. In
this article, we provide a case study
detailing the use of the WebCT course
management system for departmental
administrations of the student teaching
evaluation process. We present empirical
evidence to demonstrate the comparative
value of this approach in enhancing the formative value of SET feedback and
offer suggestions to further enhance the
SET collection process.

Relative Value of Online
Administration of Teaching
Evaluations
A tremendous body of literature has
been published on the advantages and
disadvantages of using SETs for examining teaching effectiveness (Cashin,
1995; Centra, 1993; Feldman, 1989;
Marsh, 1984). In general, these researchers concluded that student evaluations represent one important element
of a more comprehensive and multimethod approach to the evaluation and
improvement of teaching. Given the
assumption that SETs remain a stable element of teaching assessment, researchers continue to examine how the
design and administration of SETs can
be affected to enhance the tool’s value
to faculty members. Most recently,
research regarding the comparative
value of online versus in-class administration of SETs has revealed some interesting results.
In general, recent research findings
suggest that online ratings offer significant advantages in terms of efficiency
(i.e., less waste of resources such as
paper, class time, processing time, and
costs), and students express higher levels
of satisfaction when the evaluation
process is conducted online. In addition,
preliminary investigations have indicated
that more students are willing to give
comments to open-ended questions on
the survey instruments when they are
administered online (Layne et al., 1999).
The possibility of increased qualitative
response deserves particular attention because faculty members perceive
greater value in students’ written
responses to open-ended questions than
in categorized responses to closed-ended
questions (Ory & Braskamp, 1981;
Tiberius, Sackin, & Cappe, 1987).
Online administration of SETs is not
without problems. In particular, the
findings of most studies have indicated
that response rates differ significantly
according to the method of administration. Baum, Chapman, Dommeyer, and
Hanna (2001) found that response rates
ranged from 32.8% for online responses
to 76.8% for in-class ones, and Layne et
al. (1999) found that they ranged from
47.8% for online responses to 60.6% for
in-class ones. In both cases, the
researchers have suggested that greater
institutional endorsement of the online
collection process and more convincing
guarantees of student anonymity could
be important remedies in correcting
response-rate discrepancies.
Using Course Management
Systems to Improve Online SET
Administration
Adoption rates for course management systems (CMS) have increased
dramatically on higher education campuses. According to the Campus Computing Project, more than one fifth of all
college courses are now taught using
course management systems (Green,
2001). WebCT, an early service provider
in the field, has reported on its commercial Web site (2003) that “(t)housands of institutions in over 80 countries are licensed to use WebCT” (p. 1). Given the
popularity of WebCT and other similar
CMSs, instructors can realize significant synergies by learning how they can use
such systems to improve online administration of SETs.
Instructors can easily use the survey
and quiz tool included in most CMSs to
construct and administer student surveys
within their own courses. Indeed,
instructors have reported that this is a
useful mechanism for administering
midsemester surveys for formative feedback (Austin & Austin, 2002). Unfortunately, survey administration within a
course cannot absolutely guarantee
anonymity of student respondents. As
Austin and Austin (2002) noted, “(t)he
instructor can see who took the survey,
but not what the person said unless there
is only one person who has answered the
survey.” Even though the possibility of
identification is small, the importance of
anonymity guarantees for online administration makes it imperative that the system be used in a way that guarantees student anonymity.
Departmental or college-wide administration of SETs can be constructed within the CMS so as to overcome many of the limitations previously
noted about online SET administration.
The approach involves creation of a
CMS “course” that is used solely for
collection of SETs. This approach,
which we describe in this article, improves the process by allowing for a
more common and centralized format
for administration. Though one can customize a survey to fit the needs of individual departments or instructors, the
mechanics and logistics of survey
administration can be standardized to
increase the ease and convenience of
administration for students. The standardization works to reinforce perceptions of institutional commitment to the
SET process and can improve response
rates owing to the reinforcement across
multiple classes or departments.
A Case Study Using WebCT for
Centralized SET Administration
We designed a case study to empirically investigate two research questions:
(a) Are student response rates (i.e., percentage of students completing the SET
instruments) negatively affected by
moving SET administration online
through a course management system?
and (b) Will students provide significantly more qualitative feedback (i.e.,
more comments in response to open-ended questions) when SETs are administered by means of a Web-based course
management system? Our case study,
conducted at a midsized (17,000 students on main campus) southwestern
state university, provided the basis for
an empirical analysis of SET administration using the WebCT course management system. Staff members in three disciplines in the College of Business Administration and Economics (accounting, business computing systems, and marketing) agreed to administer the teaching evaluations using the WebCT system. As a matter of policy, the university requires collection of SETs at
the conclusion of each fall and spring
semester. College policy also requires
that each SET survey contain at least 10
questions relevant to instructional effectiveness, including two mandatory questions regarding the overall quality of the
course and the overall quality of the
instructor. Specific selection of the
remaining items is dependent on departmental policy or the needs of the individual faculty members. The course and
instructor quality measures are used as
summative measures of performance in
annual performance reviews, whereas the remaining items are chosen for formative assessment and instructional
development purposes. In our case
study, the instructors in the accounting
and business computing systems disciplines used a common survey instrument with 10 items. In the case of the
marketing department, individual faculty
members selected as few as eight and as
many as 20 additional items to include
on individual class SETs. They selected
these items from a list of over 75 questions or ratings items commonly used to
evaluate teaching and approved by the
College Teaching Excellence Committee. The faculty members use the SET
results for instructional improvement,
and the administration uses the results as
one of a set of assessment metrics for
annual merit, promotion, and tenure
decisions.
Departmental administrative assistants, hereafter referred to as site
designers, created WebCT course sites
for each discipline area (accounting,
business computer systems, and marketing) and created separate survey instruments for each class offered in the discipline. The site designers named each
WebCT course site in such a way as to
ensure that students recognized the purpose and related content of each site
(e.g., a course titled Department of Marketing Class Evaluations contained surveys for all marketing classes taught
that term). All students taking one or
more classes in a given discipline were
granted access to the relevant site; the
site administrator used the “Selective
Release” function in the design of each
class SET survey to ensure that only the
students enrolled in a particular class
had access to the SET instrument for
that class.
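
To make the selective-release logic concrete, the following sketch (in Python, with entirely hypothetical class names, rosters, and function names; WebCT itself is configured through its Web interface, not through code) models the rule that each class's SET survey is visible only to the students enrolled in that class.

    # Illustrative sketch only -- not WebCT code. Class identifiers, rosters,
    # and the helper below are hypothetical.
    rosters = {
        "MKTG 303 Sec. 01": {"student_a", "student_b", "student_c"},
        "MKTG 310 Sec. 02": {"student_b", "student_d"},
    }

    def visible_surveys(student_id: str) -> list[str]:
        """Return the class SET surveys a student should see on the
        Department of Marketing Class Evaluations site."""
        return [class_id for class_id, roster in rosters.items()
                if student_id in roster]

    print(visible_surveys("student_b"))
    # ['MKTG 303 Sec. 01', 'MKTG 310 Sec. 02']
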
Roughly 1 month before the end of
the term, instructors distributed instructions for the new evaluation procedure
in both hard copy and e-mail formats.
Students were asked to verify that they
had access to the appropriate discipline
evaluation Web sites and that they saw
surveys listed for each class in which
they were enrolled. The site designer
did not release the surveys for completion by the students until the final week
of classes. The site designer assured the
students that the instructor had no
access to the evaluation Web sites, that

results of the survey would be provided
to the instructors only after final grades
were turned in, and that all responses
were anonymously received and reported
in aggregate to the instructors. Students
were free to complete the SET instruments on their own time, taking as much
time as they needed and completing them
at any location that provided access to
the Internet.
The survey function in WebCT guarantees that no names are tied to survey
reporting; at most, the designer (i.e.,
departmental assistant) can ascertain
whether a student has completed an
evaluation but cannot tie results to specific persons or students. In this case
study, the site designer managed the
opening and closing of the release period for the SETs and distributed summaries of the survey results to instructors after grades were recorded with the
registrar. The WebCT survey tool automatically calculated descriptive statistics for all closed-ended questions; the
“View” option in the “Detail” for the
survey provided a compressed listing of
the students’ written comments. The
feedback was then made immediately
available to the professors for consideration in design of the courses for the following semester.
Empirical Results From
Case Study
To more systematically address the
research questions, we compared our
results with SET response patterns from
the immediately preceding semester;
SETs during that term were administered
through the traditional in-class method.
We used departmental records from that
term to conduct a comparative analysis of
the effects of administration mode on
response rates. We performed a test of
proportions to examine whether there
was any significant difference between
the response rates of 49 in-class SET
administrations and 64 online SET
administrations. (The difference in the number of class sections was due to the college's greater use of large sections in the fall term.) There were no significant differences in the response rates based on disciplines (F statistic = 1.270, p = .285).
Although the response rate for the in-class administration of SETs was slightly higher (75.9%) than that for online
administration of SETs (70.1%), the difference was not statistically significant (F
test statistic = 2.659, p = .106).
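
For readers who wish to run a similar comparison, the sketch below shows a standard two-proportion z-test in Python. The enrollment and response counts are purely illustrative (the study reports only the aggregate rates of 75.9% and 70.1%), and the original analysis was reported as an F statistic rather than a z statistic.

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical counts; only the rates (75.9% vs. 70.1%) come from the study.
    responded = [190, 175]   # completed SETs: in-class, online
    eligible  = [250, 250]   # students asked to respond in each mode

    p1, p2 = responded[0] / eligible[0], responded[1] / eligible[1]
    pooled = sum(responded) / sum(eligible)
    se = sqrt(pooled * (1 - pooled) * (1 / eligible[0] + 1 / eligible[1]))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"z = {z:.3f}, p = {p_value:.3f}")
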
We addressed our second research
question by comparing student response
patterns to open-ended questions in the
in-class and online SET administration
formats. Faculty members in the marketing department agreed to allow the
departmental assistant to record quantitative information about student responses to open-ended questions in the
two formats (i.e., two consecutive
terms). The open-ended questions used
during each term were the same: One
solicited students’ comments on what
they particularly liked, and the other
solicited their comments on things that
they particularly disliked about a course
or instructor. The departmental assistant
used departmental copies of the SETs
from the previous term and the responses from the online administration to
record the necessary data. For each
semester’s classes, the assistant reported
the percentage of students in each class
who provided feedback to either of the
open-ended questions and the average
number of words given, per class, in
response to an open-ended question. On
average, 72% of the students provided
comments in response to open-ended
questions when the SET was administered in class, whereas 87% responded
when the SET was administered online;
this difference was marginally significant (F test statistic = 3.930, p = .058).
Interestingly, the most significant difference was in the amount of feedback
given in response to the open-ended
questions. The in-class administration
yielded results in which students gave
an average of 7.83 words of feedback
per question, whereas students in the
online administration gave nearly four
times as much feedback, with an average of 28.97 words provided per question (F test statistic = 13.944, p = .001).
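
The two measures used here, the percentage of students commenting and the average number of words per response, can be tallied from a class's anonymized responses as in the following sketch; the response text is hypothetical.

    # Hypothetical open-ended responses for one class (empty string = no comment).
    responses = [
        "Loved the group project and the guest speakers.",
        "",
        "Exams felt rushed; more practice problems would help.",
        "Great class.",
    ]

    commented = [r for r in responses if r.strip()]
    pct_commenting = 100 * len(commented) / len(responses)
    avg_words = sum(len(r.split()) for r in commented) / len(commented)

    print(f"{pct_commenting:.0f}% commented; "
          f"{avg_words:.2f} words per comment on average")
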
Discussion
Our results suggest that the proposed
method of CMS administration provides
significant benefits for improving the
summative value of SETs. First, the system provides a mechanism for offering a greater guarantee of anonymity to the student respondents. In our case study,
this guarantee was likely a significant
factor contributing to the increase in
response rates for online submissions.
The standardization of the format provided through a centralized collection
process also enhances institutional commitment to the process, which further
alleviates previous problems in SET collection procedures.
This method of administration also
provides significant benefits that
increase the formative value of SETs.
The procedure described in this article
allows the designer to customize
course surveys with relative ease
according to the individual needs of the
instructor. This flexibility is important
for improving formative feedback.
Specifically, Centra (1993) argued that
the formative value of SETs is predicated on the extent to which the evaluation process provides new information to the instructor, which he or she
perceives as feasible and consequential
in making meaningful improvements to
the learning environment. Similarly,
Fuhrman and Grasha (1983) suggested
that evaluations must be clearly
focused and deal with the individual
instructor’s personal goals for the
course. This requirement clearly necessitates that some flexibility be built
into the design process to accommodate diversity in instructional goals and
learning environments.
The increased willingness of students
to provide voluntary comments in an
online administration is likely attributable to the reduced time constraints associated with completion of the SETs.
Svinicki (2001) noted that students are
more likely to give constructive qualitative feedback when they are given adequate time to reflect on the questions and
the course environment, a condition that
is not characteristic of most in-class SET
administrations. The timing factor is also
relevant with regard to the expediency of
returning feedback to the instructor. In
this case study, the designer summarized
the feedback and returned it to the
instructors immediately after the semester was completed, a condition that
allows instructors to incorporate feedback into design of courses in the subsequent semester. Although this case study
focused on end-of-semester evaluations,
the process can be clearly improved
through the inclusion of midsemester
evaluations as well.
Last but not least, the ease of downloading the data into statistical analysis
frameworks provides the opportunity for
a more informed analysis of student
responses. As mentioned earlier, many
sources of bias in SET administration are
known to exist. Closer scrutiny of the
bias effects is possible when administrators or faculty members have the ability
to easily transfer SET data into other data
formats using the “Download” function
provided in most CMS survey tools.
Similarly, suggestions for improving the
formative value of open-ended feedback
can be more easily implemented when
the data are readily accessible for transfer into content analysis software programs or other qualitative data analysis
tools (Lewis, 2001).
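
As one possible workflow, the sketch below reads a file exported with such a Download function into Python for further analysis. The file name and column names are assumptions, because export layouts differ across CMS products.

    import csv
    from statistics import mean

    # Hypothetical export layout: one row per anonymous respondent, one column
    # per closed-ended item, plus a free-text comment column.
    with open("set_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    course_quality = [int(r["overall_course_quality"]) for r in rows]
    comments = [r["comments"] for r in rows if r["comments"].strip()]

    print(f"n = {len(rows)}; mean course quality = {mean(course_quality):.2f}")
    print(f"{len(comments)} written comments available for content analysis")
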
Conclusions
In this article, we examined a method
of using a course management system to
administer student evaluations of teaching. The method provides a greater
guarantee of anonymity to the student
respondents. We reported on a case
study in which this guarantee was likely
a significant factor contributing to the
increase in response rates for online
submissions. In addition, the results of
our case study suggest that the method
provides significant benefits for improving both the summative and formative
value of SETs.
REFERENCES
Austin, D., & Austin, J. (2002, April). Using Blackboard to survey students at midterm. Paper presented at the Seventh Annual Mid-South Instructional Technology Conference. Retrieved November 24, 2003, from http://www.mtsu.edu/~itconf/proceed02/91.html
Baum, P., Chapman, K., Dommeyer, C., & Hanna, R. (2001, June 14–17). On-line versus in-class student evaluations of faculty. Paper presented at the Hawaii Conference on Business, Honolulu, HI.
Cashin, W. E. (1995). Student ratings of teaching:
The research revisited. Manhattan, KS: Center

for Faculty Evaluation and Development,
Kansas State University.
Cashin, W. E., & Downey, R. G. (1992). Using
global student rating items for summative evaluation. Journal of Educational Psychology, 84,
563–572.
Centra, J. A. (1993). Reflective faculty evaluation:
Enhancing teaching and determining faculty
effectiveness. San Francisco: Jossey-Bass.
Dommeyer, C. J., Baum, P., & Hanna, R. W.
(2002). College students’ attitudes toward
methods of collecting teaching evaluations: Inclass versus on-line. Journal of Education for
Business, 78, 11–15.
Feldman, K. A. (1989). Instructional effectiveness
of college teachers as judged by teachers themselves, current and former students, colleagues,
administrators, and external (neutral) observers.
Research in Higher Education, 30, 137–174.
Fuhrman, B. S., & Grasha, A. F. (1983). A practical handbook for college teachers. New York:
Little, Brown.
Green, K. (2001). The 2001 National Survey of Information Technology in U.S. Higher Education. Retrieved April 3, 2004, from http://www.campuscomputing.net
Hobson, S. M., & Talbot, D. M. (2001). Understanding student evaluations. College Teaching,
49, 26–31.
Layne, B. H., DeCristoforo, J. R., & McGinty, D.
(1999). Electronic versus traditional student
ratings of instruction. Research in Higher Education, 40(2), 221–232.
Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25–32.
Marsh, H. W. (1984). Students’ evaluations of university teaching: Dimensionality, reliability,
validity, potential biases, and utility. Journal of
Educational Psychology, 76, 707–754.
McKeachie, W. J. (1987). Instructional evaluation:
Current issues and possible improvements.
Journal of Higher Education, 58, 344–350.
Olsen, D. R., Wygant, S. A., & Brown, B. L.
(1999, October). Entering the next millennium
with Web-based assessment: Considerations of
efficiency and reliability. Paper presented at the
Conference of the Rocky Mountain Association
of Institutional Research, Las Vegas, NV.
Ory, J. C., & Braskamp, L. A. (1981). Faculty perceptions of the quality and usefulness of three
types of evaluative information. Research in
Higher Education, 15, 271–282.
Svinicki, M. V. (2001). Encouraging your students to give feedback. New Directions for Teaching and Learning, 87, 17–24.
Tiberius, R. G., Sackin, H. D., & Cappe, L. (1987). A comparison of two methods for evaluating teaching. Studies in Higher Education, 12, 287–297.
Tomsic, M. L., Hendel, D. D., & Matross, R. P.
(2000). A World Wide Web response to student
satisfaction surveys: Comparisons using paper
and Internet formats. Paper presented at the
Annual Meeting of the Association for Institutional Research, Cincinnati, OH.
WebCT, Incorporated. (2003). Learning without limits: Flexible e-learning solutions for institutions across the educational spectrum. Retrieved April 1, 2004, from http://www.webct.com/service/ViewContent?contentID=17980017