Journal of Education for Business, 80(2), 86–92 (2004). DOI: 10.3200/JOEB.80.2.86-92

A Critical Incidents Approach to Outcomes Assessment
PETER BYCIO
JOYCE S. ALLEN
Xavier University
Cincinnati, Ohio

ABSTRACT. Standardized questionnaires often have been used to elicit student opinions on issues regarding accreditation by the Association to Advance Collegiate Schools of Business—International (AACSB). These questionnaires require students to rate a wide range of items that may or may not be important to them. In this study, the authors supplemented these questionnaires by asking graduating business students to evaluate their university experiences by describing aspects that were especially successful and factors that needed improvement. This open-format, “critical incidents” approach to outcomes assessment readily generated recollections of incidents useful in guiding program change. The incidents provided additional perspective on data obtained from the College Student Experiences Questionnaire.

The agencies responsible for accrediting programs in higher education (e.g., International Association for Management Education, AACSB [2001]) increasingly have been requiring evidence that universities are fulfilling their stated missions. When designing outcomes assessment programs to help meet the requirements of these agencies, universities face challenging decisions concerning the particular types of assessments to be used (DeMong, Lindgren, & Perry, 1994). Some methods, such as student–teacher portfolios (Seldin, 1997) and the Educational Testing Service (ETS) Major Field Test in Business (Allen & Bycio, 1997), are intended to directly assess specific aspects of student learning. Others, such as the College Student Experiences Questionnaire (CSEQ; Pace & Kuh, 1998), are designed to elicit student opinions concerning a wide range of aspects of their university experiences.

When the assessment option chosen relies on student opinions, a wide range of issues associated with the validity of self-reported data come into play. A review of the literature by Schwarz (1999), for example, revealed that self-reports can differ dramatically as a result of (a) question context, (b) question wording, (c) the response scale used, and (d) the order in which questions are asked.

The concerns raised by Schwarz (1999) have potential applicability to the interpretation of standardized questionnaires such as the 169-item CSEQ. Questionnaires of this type address a comprehensive range of issues, including the general university environment and estimates of educational gains. One can easily collect CSEQ data from large numbers of students and directly compare the results with those of other schools. On the other hand, the standardized aspect of the CSEQ implies that student responses must be interpreted within the questionnaire context.

The rater bias traditionally referred to as the halo effect is an example of a contextual issue (Hoyt, 2000, among others). Specifically, the interpretation of the CSEQ can be ambiguous because most of the items refer to “the college.” This makes it difficult to determine whether the results reflect the university as a whole, the business college, the particular business major, or some combination of all three.

One method of obtaining a sense of the relative prominence of the departments, colleges, and university from the student perspective is to supplement standardized questionnaire feedback with an open-response format. Schwarz (1999) noted the potential for dramatically different self-reported results from different response formats. For example, when parents were presented with a list and asked to rate “the most important thing for children to prepare them for life,” 61.5% picked “to think for themselves.” Yet, when asked the same question without a list of items to choose from, only 4.6% of parents identified a similar theme (Schuman & Presser, 1981, pp. 105–107). Findings such as these raise the following question: In the absence of a standard list of items, what would student feedback mean to universities?

While considering the potential benefits of open-response formats, we thought of a concept from the human resource management literature—critical incidents. Originally described by Flanagan (1954), critical incidents essentially are stories that reflect especially good or bad performance by an incumbent on a job. Ideally, they are specific, describing what the incumbent did, the context of the behavior, and its consequences. When large numbers of critical incidents are collected for a target job (e.g., college instructor), the information aids in the development of management tools including job descriptions, training materials, and performance-appraisal systems (Bownas & Bernardin, 1988).

There are many reasons to suspect that critical incidents can yield useful supplemental outcomes-assessment information. First, by definition, the feedback is critical. When examining a lengthy reference period, respondents likely will focus on significant, nonmundane events (Schwarz, 1999). Second, compared with standardized questions providing a wide range of cues, the critical-incidents approach is more likely to indicate aspects that immediately come to mind about the college experience. Finally, to the extent that students are able and willing to provide them, critical incidents should offer more detail compared with standardized questionnaire ratings. For example, the following critical incident would be applicable to our university environment:

    I often arrive at the university after 9:00 a.m. Many times there are no parking spots available. In one case, I had to park a mile or two from campus and did poorly on an exam because I was late to class.

If many incidents similar to this one emerged, the institution likely would conclude that parking was a student concern.

Although parking per se is of little
interest from an accreditation perspective, the institution using this approach
would look for an overall pattern of incidents in line with areas of university
focus. Importantly, whatever the specific
pattern of results, critical incidents
should reflect what the students are
thinking without prompting from more
structured formats.

Our purpose in this study was to examine the value added in asking business students who were near graduation to generate critical incidents based on their university experiences. Although our study was exploratory, we sought to examine at least three issues of interest. First, given the relatively open-ended assessment format, we wondered how many useful critical incidents the average student would provide. A second area of interest involved the relative percentages of incidents that would be related to the university, business college, and business major. Finally, a question of obvious interest concerns the degree to which the critical-incidents content would correspond to data obtained from the CSEQ. A comparison was possible because an independent university-level mailing of the CSEQ had been conducted.

Method

Sample

During the final regular class meeting, we asked students in the undergraduate and graduate business capstone courses during spring 1999 to provide critical incidents. We surveyed all five capstone undergraduate offerings for a potential sample size of 105. We used 7 of the 8 MBA capstone courses, for a potential sample of 189 (one MBA professor forgot to administer the form). Because the capstone was the final program requirement, the respondents typically were very near graduation.

Independently, the university mailed the CSEQ to all undergraduates who completed their degree program during the 1998–1999 academic year. The university’s potential sample of 122 business majors was slightly larger than our critical-incidents undergraduate group (n = 105) because it had included fall 1998 graduates. The university received 86 responses from business majors, for a 70% response rate.

Critical Incidents Form: Administration and Analysis

The critical incidents form first asks students to think back over their experiences regarding the courses and the advising and general services that they received. Then they are asked to provide (a) up to three examples of things that they believed that a department, the college, or the university did well (Part A) and (b) up to three examples of things that the same groups did poorly (Part B). To give students a better understanding of the level of information requested, we presented them with the critical-incident example pertaining to parking noted earlier. We administered the critical-incidents forms in the same way that we treated teaching evaluations: The instructor was not present while the students completed the forms, and a student was asked to collect the forms and return them to either the office of a department chair or to the dean.

One of us (the senior author of this article) read and counted the critical incidents. We then created content categories appropriate to the responses. Very few of the responses were ambiguous. We placed critical incidents that were relatively idiosyncratic or unclear in a “miscellaneous” category.

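For readers who want a concrete picture of this bookkeeping, the sketch below shows one way to tally categorized incidents into the kind of cell counts reported in Tables 2 and 3. It is only an illustration, not the authors’ actual tooling; the records and category names are hypothetical.

from collections import Counter

# Each record pairs an incident with the level it refers to and its
# valence; every value below is a hypothetical placeholder.
incidents = [
    ("college", "advising", "good"),
    ("college", "course availability", "needs improvement"),
    ("department", "professors", "good"),
    ("university", "parking", "needs improvement"),
]

# Tally incidents per (level, category, valence) cell, the layout
# used in Tables 2 and 3.
counts = Counter(incidents)
for (level, category, valence), n in sorted(counts.items()):
    print(f"{level:12s}{category:22s}{valence:20s}{n}")
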
We used t tests for dependent samples for statistical comparisons of the number of college, department, and university incidents provided. We performed analysis of variance to examine the possibility of differences by program (i.e., undergraduate and MBA).

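These analyses are easy to reproduce with standard statistical software. The following minimal sketch (not the authors’ code) runs paired t tests and a one-way ANOVA with SciPy, using made-up incident counts in place of the study data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-student incident counts for the 64 undergraduates.
# The same student contributes one count at each level, so the samples
# are dependent (paired).
college = rng.poisson(2.0, 64)
department = rng.poisson(1.3, 64)
university = rng.poisson(1.2, 64)

# Dependent-samples t tests, as used for the Table 1 comparisons.
t_cd, p_cd = stats.ttest_rel(college, department)
t_cu, p_cu = stats.ttest_rel(college, university)
print(f"college vs. department: t = {t_cd:.2f}, p = {p_cd:.3f}")
print(f"college vs. university: t = {t_cu:.2f}, p = {p_cu:.3f}")

# One-way ANOVA for program differences (undergraduate vs. MBA) in
# total incidents per student; the MBA counts are also hypothetical.
undergrad_total = college + department + university
mba_total = rng.poisson(4.6, 105)
f_stat, p_prog = stats.f_oneway(undergrad_total, mba_total)
print(f"program effect: F = {f_stat:.2f}, p = {p_prog:.3f}")
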
The CSEQ
As noted earlier, the CSEQ (Pace & Kuh, 1998) is a standardized survey that asks students about a wide range of issues. These include college activities (e.g., experiences with faculty members and campus facilities), the college environment (e.g., emphasis on the development of scholarly and intellectual qualities), and estimates of gains from the college experience (e.g., the ability to write clearly or the ability to learn on one’s own).

Critical Incidents and CSEQ Correspondence
Given that the collection of critical incidents was exploratory and that the CSEQ had been administered and analyzed independently by another unit of the university, we had little a priori idea of the degree to which these two forms of assessment would correspond. At a minimum, we intended to examine the CSEQ for items and dimensions aligned with meaningful clusters of critical incidents.


Results


Response Rate
At the undergraduate level, 64 of the originally registered 105 (61%) students completed the form. The response rate among MBAs was 55% (105 of 189 original registrants). These response rates reflect less-than-perfect class attendance as well as the tendency for some students to drop the course as the semester progressed.

Quantity and Distribution of Critical Incidents

Neither group of students had difficulty with the open-ended format. The average undergraduate respondent generated 4.42 critical incidents, for a total of 284. MBAs provided 486 incidents, an average of 4.62 per student. The overall ratio of positive-to-negative incidents was close to 50/50. In Table 1, we present data pertaining to the number of critical incidents generated for the college, department, and university.

As the data in Table 1 show, there was a tendency for both undergraduate (t = 2.39, p < .02) and master’s degree (t = 2.25, p < .03) students to report significantly more college-related incidents than department-related ones. Similarly, undergraduates (t = 2.98, p < .01) and master’s degree (t = 5.68, p < .01) respondents provided significantly more incidents that referred to the college than university-related incidents. Master’s degree students differed from undergraduates only in that they provided significantly more (t = 2.89, p < .01) department-related incidents compared with university-related ones.

TABLE 1. Number of Critical Incidents Generated per Student

                    College         Department      University
Group               M       SD      M       SD      M       SD

Undergraduate       1.98a   1.67    1.30b   1.52    1.16b   1.31
Graduate            2.09a   1.55    1.56b   1.63     .97c   1.23

Note. For undergraduates, n = 64; for graduates, n = 105. Means with different letters were significantly different at the p < .01 level.

Critical Incidents Content

In Table 2, we display the categories and frequency of responses relating to the business college and its departments, and in Table 3 we show similar information for the university as a whole. A broad range of issues were raised in the critical incidents, some expected (e.g., incidents related to professors and curriculum) and others more surprising (e.g., cases regarding the campus police and food services). It was interesting to note that the content areas were very similar for undergraduates and MBA students.

TABLE 2. Content and Quantity of College of Business and Department-Based Critical Incidents

                                          Undergraduate         Graduate
                                                 Needs                 Needs
Topic                                     Good   improvement   Good   improvement

College of business
  Advising                                 12     6             22     11
  CO-OP program                             5     4            N/A    N/A
  Curriculum
    Course availability                     2     7             27     20
    Outside speakers                        1     1              0      2
    Diversity of course content             4     5              7      7
    Field trips                             2     1              0      0
    Interrelationship of courses            0     1              5      8
    Learned/used computer technology        1     3              0      8
    Study-abroad program                    4     1            N/A    N/A
    Teamwork                                2     2              6      8
    Textbooks                               0     2              0      2
    Other                                   0     1              0      8
  Extracurricular activities                0     0              0      6
  Professors
    Adjuncts                                0     1              4      4
    Availability, caring, helpful          26     3             16      5
    Knowledgeable, real-world oriented     14     3             18      2
    Other                                   2     0              0      0
  Business building                         2     2              0      0
  Interactions with the dean                4     0              0      0
  Small class sizes                         0     0              3      0
  Student diversity                         0     3              1      2
  Miscellaneous                             0     0              6     12
  Overall program
  Total                                    81    46            115    105

Departments
  Accounting                                9     3             18      4
  Information systems                       1     3              5     12
  Economics and HR                          6     8             11      5
  Finance                                   7     3             23     10
  Management                                3     8             37     21
  Marketing                                16    16             12      6
  Total                                    42    41            106     58

TABLE 3. Content and Quantity of University-Based Critical Incidents

                                 Undergraduate         Graduate
                                        Needs                 Needs
Topic                            Good   improvement   Good   improvement

Advising (adult students)          1     0              0      0
Bookstore                          0     0              4      3
Bursar                             0     0              7      3
Career services                    0     9              0      8
Campus police                      2     2              0      0
Clubs                              2     0              0      0
Computer lab                       2     5              3      4
Food services                      0     2              0      0
Library                            0     2              8      2
Math/writing lab                   1     0              0      0
Parking                            1    10              0     25
Physical environment               2     0              5      6
Professors (nonbusiness)           1     4            N/A    N/A
Registration                       0     4             12     13
Sports center                      0     2              0      0
Student commuter issues            1     1              0      0
Student housing                    0     2            N/A    N/A
Textbooks                          0     2              0      0
Tuition                            0     1              0      3
Core curriculum                    7     3            N/A    N/A
Miscellaneous                      1     3              2      2
Total                             21    52             41     69

The pattern of critical-incidents frequencies in Table 2 shows that the undergraduate students perceived the business college as having a very clear strength. Specifically, 26 of the 64 students made reference to available, caring, and/or helpful faculty members in the critical incidents reported. Moreover, a relatively high number (14) referred to business professors as being knowledgeable and real-world oriented. Similarly, more than half of the positive department-specific incidents (23 of 42) referred to particular professors. There was less consensus regarding the areas needing improvement (see Table 2). Even the most frequently cited issue, course availability, was addressed in only 7 incidents.

Areas of strength were evident in the pattern of responses from master’s degree students, although these differed somewhat from the areas of strength noted by undergraduates. Our 105 MBAs regarded course availability (27) and advising (22) as the top strengths, whereas they perceived knowledge (18) and availability of faculty members (16) as secondary strengths. On the other hand, like the undergraduates, the MBAs referred to specific professors in almost half (50 of 106) of their positive department-specific incidents.

As with the undergraduates, there was less consensus among MBAs concerning areas needing improvement. The most frequently cited concern was course availability. On the surface, it struck us as odd that course availability would be considered both a strength (27) and a weakness (20) (see Table 2). However, this pattern revealed that our MBAs, more than 90% of whom were part-time students, appreciated the opportunity to take courses on evenings and weekends and during the summer. Still, like some of the undergraduates, they wanted even more choice regarding the specific days and times.

One aspect of our findings not reflected in Tables 2 or 3 relates to the level of detail and, in some cases, the colorful language provided by students in the incidents. On the positive side, there were stories about specific faculty members who stayed late in the evening or came in on a weekend to hold study sessions or to counsel individuals who were starting small businesses. On the other hand, there were occasional incidents in which students felt “blown off” by a professor or an administrator who, for example, barely looked at the course schedule proposed by a student during preregistration.

Many issues were more programmatic
than interpersonal. For example, more
than half of the positive critical incidents
regarding the management department at
the MBA level concerned the use of a
multifaceted global computer business
simulation. A similar percentage of negative stories concerned inappropriate
overlap among the management courses.

Correspondence Between the Critical Incidents and CSEQ Results
In Tables 4 and 5, we show the relationship between the critical incidents content and the CSEQ for college and university issues. We limited this comparison to the undergraduate sample because the university did not mail the CSEQ to MBA students.

The data in Table 4 show that the majority of critical incidents regarding the business college (90 of 127, 71%) were related, at least indirectly, to CSEQ content. The main areas lacking a CSEQ link were the co-op program and course availability, each of which yielded 9 incidents. In most cases, the links or lack thereof were quite clear.

Besides comparing content correspondence per se, we also compared the strength of CSEQ ratings with the relative percentages of positive and negative incidents in a given category. Although the CSEQ employs a mix of 4- and 7-point response scales, we converted all the findings to the 4-point metric for ease of comparison.

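The article does not state the conversion formula. A standard linear rescaling, assumed here purely for illustration, maps a mean on a 1–7 scale onto the 1–4 range:

def rescale(mean, old_low=1.0, old_high=7.0, new_low=1.0, new_high=4.0):
    """Linearly map a scale mean onto a new response-scale range."""
    fraction = (mean - old_low) / (old_high - old_low)   # relative position on old scale
    return new_low + fraction * (new_high - new_low)     # same position on new scale

# Under this assumption, a hypothetical 7-point mean of 4.72 lands at
# 2.86 on the 4-point metric, the kind of converted value shown in
# parentheses in Tables 4 and 5.
print(round(rescale(4.72), 2))  # 2.86
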
TABLE 4. Relationship of Critical Incidents to CSEQ Content: College of Business

                                          Undergraduate
                                                 Needs
Topic                                     Good   improvement   Parallels to CSEQ results                     Rating

Advising                                   12     6            Relations with administrative offices         (2.86)a
CO-OP program                               5     4
Curriculum
  Course availability                       2     7
  Outside speakers                          1     1
  Diversity of course content               4     5            Gain, information relevant to a career         3.15
  Field trips                               2     1
  Interrelationship of courses              0     1
  Learned/used computer technology          1     3            Emphasis on information literacy skills       (3.04)
  Study-abroad program                      4     1            Gain, knowledge of other parts of the world    2.23
  Teamwork                                  2     2            Gain, functioning as a team member             3.30
  Textbooks                                 0     2
  Other                                     0     1
Extracurricular activities                  0     0
Professors
  Adjuncts                                  0     1
  Availability, caring, helpful            26     3            Relationships with faculty members            (3.21)
  Knowledgeable, real-world oriented       14     3            Personal relevance, practical courses         (2.82)
  Other                                     2     0
Business building                           2     2
Interactions with the dean                  4     0
Small class sizes                           0     0
Student diversity                           0     3            Student acquaintances scale (10 items)         2.43
Miscellaneous                               0     0
Overall program                                                How well do you like the college?              3.27

a To facilitate comparison across items, we converted ratings shown in parentheses from 7-point to 4-point scales.

In some cases, high percentages of positive incidents (e.g., those referring to available, caring, and helpful professors) were associated with relatively high CSEQ ratings (e.g., 3.21 for relationships with faculty members). On the other hand, there were cases of poor agreement that reflected differences in the assessments.

For example, four students reported very positive study-abroad experiences, with the only negative being that more such opportunities should be offered, yet the CSEQ item reflecting student-rated gains in knowledge of other parts of the world was only 2.23. Thus, although the CSEQ data indicate overall relative weakness in this area, the critical-incidents responses revealed the very positive experiences of a small group. Analogous examples of the assessments yielding different information included the application and use of computer technology.

Among the university-related critical incidents, only 28% of the content (24 of 74 incidents) corresponded to items in the CSEQ (see Table 5). Parallels were lacking for concerns regarding parking, career services, campus police, and registration, among others. Also, much like the study-abroad example in the business college, CSEQ ratings for specific university services could be quite low (e.g., 1.69 for “used a campus learning lab” and 1.94 for clubs and organizations), even though some students found this category very memorable and valuable.

Discussion

Usefulness of Critical Incidents for Outcomes Assessment

Critical-incidents information has helped us focus both our college and department resources on issues that are especially important and memorable to students. For example, we have used clusters of incidents (good and bad) involving business college staff as a basis for feedback during performance evaluations. Another example involved the formation of a department committee in response to a critical-incidents cluster concerning undue overlap among MBA-level management courses. We subsequently made substantial adjustments to one of the courses, and the issue no longer surfaces in our assessments. These examples illustrate how critical-incident content can be used for continuous improvement.

Furthermore, we found that even small numbers of critical incidents yielded potentially useful assessment data. The clearest example related to the Study Abroad Program. This was important supplemental information, given the relatively low CSEQ scores for gain in knowledge of other parts of the world (2.23, or “some” on a 4-point scale). On an administrative level, without the critical incidents, one would be left guessing as to how to address this issue. In our case, we discovered that the business college clearly would benefit from a greater promotion of study abroad.

TABLE 5. Relationship of Critical Incidents to CSEQ Content: University

                                 Undergraduate
                                        Needs
Topic                            Good   improvement   Parallels to CSEQ results               Rating

Advising                          12     6            Relations with administrative offices   (2.86)a
Advising (adult students)          1     1
Bookstore                          0     0
Bursar                             0     0
Career services                    0     9
Campus police                      2     2
Clubs                              2     0            Used club organizations (5 items)        1.94
Computer lab                       2     5            Used computer info tech (9 items)        2.56
Food services                      0     2
Library                            0     2            Used library (8 items)                   2.18
Math/writing lab                   1     0            Used a campus learning lab               1.69
Parking                            1    10
Physical environment               2     0
Professors (nonbusiness)           1     4
Registration                       0     4
Sports center                      0     2            Used campus recreational facilities      2.42
Student commuter issues            1     1
Student housing                    0     2
Textbooks                          0     2
Tuition                            0     1
Core curriculum                    7     3            Gain, broad, general education           3.27
Miscellaneous                      1     3

a To facilitate comparison across items, we converted ratings shown in parentheses from 7-point to 4-point scales.

Although we were pleased with the level of detail yielded by the critical incidents, we encountered some unanticipated administrative issues associated with their use. For example, a small number of the incidents were highly personal and derogatory, which potentially raises confidentiality concerns. Another practical issue was the time required to read and categorize the critical incidents, even with the relatively small sample involved in this study. Thus, large institutions likely would need to employ a sampling strategy to make the use of critical incidents tenable. Of course, the capability to collect data efficiently from large numbers of students is a strength of standardized assessments such as the CSEQ.

Critical Incidents and CSEQ Correspondence

At first glance, some might be bothered by the relative lack of convergence between the critical-incidents content and the CSEQ results, especially those pertaining to the university as a whole. Perhaps the content validity of the CSEQ could be enhanced by adding items concerning course availability, career services, the registration process, and the hours that university services are available. Alternatively, institutions could take advantage of the CSEQ option of adding custom items. Nevertheless, responses to those findings that focus on the addition of CSEQ items miss the more fundamental issue of response format differences (Schwarz, 1999). For example, CSEQ responses to items related to the quality of relationships with administrative offices or faculty members cannot provide the same perspective as positive and negative incidents that name faculty and staff members individually. In some sense, the assessments are simply different, with each offering a relevant piece of the overall picture.

Generalizability of the Findings
Any determinations concerning the relationship of critical-incidents assessment to the CSEQ obviously must be made cautiously because of the relatively small sample sizes in this study. One might also wonder to what degree the critical-incident content is stable from year to year. Analysis of subsequent critical incidents revealed that the overall content framework presented in Tables 2 and 3 was virtually unchanged in later administrations. Moreover, the strong points have remained the same.

Future Research
Beyond generalizability issues, future researchers should address the effects of cueing in outcomes assessment. A format issue relates to our use of a negative incident as an example to aid the students in understanding the kind of responses requested. This leaves open the question of what impact a positive incident might have had on the responses.

Another interesting possibility for research has arisen from the recent AACSB effort to develop standardized surveys targeted to business programs per se (Detrick, 1999). Obviously, it would be of interest to examine the level of convergence obtained between business college critical incidents and AACSB data. However, we should note that the AACSB item content is comparable to the CSEQ in terms of its level of generality. Moreover, factors such as cost and student willingness to respond also will affect the choices that individual institutions ultimately make.

REFERENCES

AACSB. (2001). Standards for business accreditation. St. Louis, MO: Author.

Allen, J. S., & Bycio, P. (1997). An evaluation of the Educational Testing Service Major Field Achievement Test in Business. Journal of Accounting Education, 15, 503–514.

Bownas, D. A., & Bernardin, H. J. (1988). Critical incident technique. In S. Gael (Ed.), The job analysis handbook for business, industry and government (pp. 1120–1137). New York: Wiley.

DeMong, R. F., Lindgren, J. H., & Perry, S. E. (1994). Designing an assessment program for accounting. Issues in Accounting Education, 9, 11–27.

Detrick, G. (1999, February 28–March 2). AACSB/EBI assessment and continuous improvement tools. Outcome assessment: From strategy to program improvement. Clearwater Beach, FL.

Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327–358.

Hoyt, W. T. (2000). Rater bias in psychological research: When is it a problem and what can we do about it? Psychological Methods, 5, 64–86.

Pace, C. R., & Kuh, G. D. (1998). College Student Experiences Questionnaire (4th ed.). Bloomington, IN: Center for Postsecondary Research and Planning, Indiana University School of Education.

Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys. New York: Academic Press.

Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105.

Seldin, P. (1997). Improving college teaching. Bolton, MA: Anker.