
Journal of Education for Business

ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20

Assessing Learning Outcomes in Quantitative
Courses: Using Embedded Questions for Direct
Assessment
Barbara A. Price & Cindy H. Randall
To cite this article: Barbara A. Price & Cindy H. Randall (2008) Assessing Learning Outcomes in
Quantitative Courses: Using Embedded Questions for Direct Assessment, Journal of Education
for Business, 83:5, 288-294, DOI: 10.3200/JOEB.83.5.288-294
To link to this article: http://dx.doi.org/10.3200/JOEB.83.5.288-294

Published online: 07 Aug 2010.


Assessing Learning Outcomes in Quantitative Courses: Using Embedded Questions for Direct Assessment

BARBARA A. PRICE

CINDY H. RANDALL
GEORGIA SOUTHERN UNIVERSITY
STATESBORO, GEORGIA

ABSTRACT. Researchers can evaluate learning by using direct and indirect assessment. Although there are various ways to apply these approaches, two common techniques are pretests and posttests (direct assessment), in which students demonstrate mastery of topics or skills, and the use of knowledge surveys (indirect assessment). The present authors used these two techniques to demonstrate that student knowledge of course material increased significantly during the semester. Furthermore, the authors demonstrated that the indirect knowledge survey of perceived knowledge did not correlate with actual knowledge.

Keywords: assessment, learning outcomes, quantitative classes

Copyright © 2008 Heldref Publications

288 Journal of Education for Business

Accreditation helps institutions show that they are attaining an acceptable level of quality within their degree programs (Lidtke & Yaverbaum, 2003; Pare, 1998; Valacich, 2001). Also, accreditation ensures national consistency of programs, provides peer review and recognition from outside sources, and brings programs onto the radar screen of potential employers (Rubino, 2001). To meet accreditation standards, faculty and administrators are responsible for the continuous improvement of degree programs and the measurement and documentation of student performance (Eastman, Aller, & Superville, 2001). Many colleges and universities rely heavily on program assessment to comply with accreditation and state demands (Eastman et al.; Schwendau, 1995) and to guide curriculum (Abunawass, Lloyd, & Rudolf, 2004; Blaha & Murphy, 2001).
Assessment to determine whether degree programs are providing appropriate education to graduates has become a key component of most accreditation self-study report requirements and a preferred vehicle for accountability purposes (Earl & Torrance, 2000). Several accreditation boards now require that colleges set learning goals and then assess how well these goals are met (Jones & Price, 2002). Learning goals that reflect the skills, attitudes, and knowledge that students are expected to acquire as a result of their programs of study are broad and not easily measured. Objective outcomes are clear statements outlining what is expected from students. They can be observed, measured, and used as indicators of goals (Martell & Calderon, 2005).
Under the Association to Advance Collegiate Schools of Business International's (AACSB's) new standards (Betters-Reed, Chacko, & Marlina, 2003) and the Southern Association of Colleges and Schools' (SACS's) new standards (Commission on Colleges, 2006), business programs will have to set goals to address what skills, attributes, and knowledge they want their students to master and must then be able to demonstrate that their graduates have met these goals. Establishing and implementing a system under which these programs can prove that their graduates have met the established goals is necessary under these standards. Any such system will have to rely on the creation and measurement of course objectives to serve as indicators that goals are being met.
Two basic approaches to assess learning are indirect and direct. Indirect approaches gather opinions of the quality and quantity of learning that takes place (Martell & Calderon, 2005). Techniques for gathering data by using indirect assessment include focus groups, exit interviews, and surveys. One common type of survey is the knowledge survey (Nuhfer & Knipp, 2003). Knowledge surveys can cover the topics of an entire course (both skills and content knowledge) exhaustively. This coverage is accomplished through the use of a rating system in which students express their confidence in providing answers to problems or issues (Horan, 2004).
Using a knowledge survey, the student responds by selecting one of three choices: (a) "You feel confident that you can now answer the question sufficiently for graded test purposes"; (b) "You can now answer at least 50% of the question or you know precisely where you can quickly get the information and return (20 minutes or less) to provide a complete answer for graded purposes"; or (c) "You are not confident you could adequately answer the question for graded test purposes at this time" (Horan, 2004). This method of assessment allows students to consider complex problems and issues as well as course content knowledge (Nuhfer & Knipp, 2003).
In contrast, direct assessment requires that students demonstrate mastery of topics or skills by using actual work completed by the students. This requirement can be accomplished by using papers, presentations, speeches, graded assessment items, or pretests and posttests. Pretests and posttests are probably the most widely used form of evaluating how students have progressed during the semester (Outcome Assessment, 2003). This method surveys students at the beginning and end of a course. With standard pretests and posttests, students can complete the same quiz at the beginning and end of the course, and a grade can be computed to illustrate how much students learned. Critics believe this approach is limiting because time alone dictates the amount of material on which students can be tested (Nuhfer & Knipp, 2003). Proponents feel that these tests are specifically designed to coincide with the curriculum of the course and can focus on the missions, goals, and objectives of the department or university (Outcome Assessment, 2003).
Regardless of which of the direct methods is used, educators can measure the progress of students by using course-embedded assessment. Course-embedded assessment, a cutting-edge formalized assessment (Gerretson & Golson, 2005), requires that the products of students' work be evaluated by using those criteria and standards established in the course objectives. It tends to be informal but well organized (Treagust, Jacobowitz, Gallagher, & Parker, 2003). By embedding, the opportunities to assess progress made by students are integrated into regular instructional material and are indistinguishable from day-to-day classroom activities (Keenan-Takagi, 2000; Wilson & Sloane, 2000). The results are then shared with the faculty so that learning and curriculum can be improved. This technique is efficient and insightful (Martell & Calderon, 2005) and guarantees consistency within multiple sections of the same course by using the same outcomes and rubrics (Gerretson & Golson, 2005).
Hypotheses
The goal of the present study was to provide insight on the use of direct versus indirect techniques as means of assessing student learning, with the hope that these findings can be used as input to course improvement as well as assessment and accreditation self-studies. To accomplish this goal, we asked students at a university who were enrolled in Management 6330 during the 2004–2005 academic year to participate in a knowledge survey project including a pretest and posttest validity check. Management 6330, or Quantitative Methods for Business, is an introductory course in statistics and management science techniques required for students entering the MBA or MAcc degree programs who have either not acquired the knowledge from a BA degree program or have paused for some time since taking decision analysis courses. Using these students' scores, we compared pretest and posttest scores and knowledge survey scores on a question-by-question basis. Additionally, pretest and posttest and before-and-after knowledge survey scores were compared. Last, the class averages on both instruments were compared for the data gathered at the beginning and then at the end of the semester.

We studied the following hypotheses:

1. At the beginning of a course, students' perceived knowledge and actual knowledge are mutually independent.
2. At the end of a course, students' perceived knowledge and actual knowledge are related.
3. Students' perceived knowledge is significantly greater at the end of a course than at the beginning of a course.
4. Students' actual knowledge is significantly greater at the end of a course than at the beginning of a course.
5. Average perceived knowledge for students is significantly greater at the end of a course than at the beginning of a course.
6. Average actual knowledge for students is significantly greater at the end of a course than at the beginning of a course.
METHOD
During the 2004–2005 academic year, Dr. David W. Robinson conducted a knowledge survey trial at a university in the southeastern United States and invited all faculty members to participate. Those who chose to do so created a list of questions that comprehensively expressed the content of their classes. Then, Robinson (2004) used these questions to construct a knowledge survey instrument. One class whose professor chose to participate in the trial was Management 6330, Quantitative Methods for Business. This class is taught every semester.
During the fall 2004 and spring 2005 semesters, students enrolled in Management 6330 were participants in the knowledge survey project. As a participant in this project, each student completed a Web-based survey during the first class. The survey asked each student to indicate confidence in being able to answer questions on material that would be covered over the course of the semester. At the end of the semester, each student completed the same survey, providing a means to assess the learning that occurred over the semester. These surveys were administered via the Web and did not count in the student's course average. The faculty member teaching the class did not have access to the survey results until after the semester ended.
One problem with surveys in which students are asked if they have adequate knowledge without having to prove knowledge is that some students exhibit overconfidence (Nuhfer & Knipp, 2003). To overcome this problem, during the second night of class each student received the same pretest and actually solved the test problems. Another problem often encountered is that students fail to take the test seriously if no incentive is attached (THEC Performance Funding, 2003). In fall 2004, this activity did not count as part of the student's final grade; however, with an overall score of 70% or higher, the student could elect to exempt Management 6330. If the student remained in the course, this same test was administered at the end of the fall semester. The score on this exam accounted for 10% of the student's final class average.
In spring 2005, Management 6330 students again chose to participate in the assessment study. After the initial trial during the prior semester, the professor refined both the survey and the process. One change involved proof of competency for Management 6330. Instead of exempting the course with an overall passing grade (70 or above) on the pretest, the students had to score a 70 or higher in each of the six competency areas (descriptive/graphical analysis, probability, inference, decision analysis, linear programming, and quality control processes). The second change involved the posttest. Students in the fall semester complained about the number of tests facing them at the end of the course. In the spring, instead of giving a separate posttest that counted as part of the final exam, the professor embedded a random selection of pretest questions from each of the six competency areas into the final exam. These questions, which accounted for roughly half of the original pretest questions, were compared with the pretest score for assessment.
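The embedding step described above (drawing roughly half of the pretest questions, stratified across the six competency areas) can be sketched as follows. This is a minimal illustration, not the actual instrument: the question identifiers and pool sizes are invented assumptions.

```python
import random

# Hypothetical pools of pretest question IDs, grouped by the six
# competency areas named in the text.
pools = {
    "descriptive/graphical": ["Q1", "Q2", "Q3", "Q4"],
    "probability": ["Q5", "Q6", "Q7", "Q8"],
    "inference": ["Q9", "Q10", "Q11", "Q12"],
    "decision analysis": ["Q13", "Q14", "Q15", "Q16"],
    "linear programming": ["Q17", "Q18", "Q19", "Q20"],
    "quality control": ["Q21", "Q22", "Q23", "Q24"],
}

def embed_questions(pools, fraction=0.5, seed=None):
    """Randomly draw about `fraction` of each area's pretest questions
    for embedding in the final exam (stratified sampling)."""
    rng = random.Random(seed)
    selected = {}
    for area, questions in pools.items():
        k = max(1, round(len(questions) * fraction))
        selected[area] = sorted(rng.sample(questions, k))
    return selected

embedded = embed_questions(pools, seed=42)
# Every competency area contributes half of its questions (2 of 4 here),
# so the embedded set covers all six areas rather than a simple random half.
```

Stratifying by area, rather than sampling from the pooled questions, is what guarantees each competency area remains represented in the posttest comparison.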
RESULTS
Assessment of students in Management 6330 began on the first night of class. Although at the end of the fall semester class enrollment showed a total of 29 students, some enrolled late. Therefore, only 23 completed both the pretest and posttest knowledge survey instrument. Again in the spring, students enrolled late, and some did not complete the pretest knowledge survey instrument. Of the 25 students who finished the course, only 17 completed both the pretest and posttest knowledge survey instruments. Therefore, in the fall and spring semesters, 40 students completed both pretest and posttest knowledge survey instruments. A total of 54 students completed the pretest and posttest by solving problems.
Because we recorded student assessment of perceived knowledge by using ordinal data and per-question actual knowledge by using binary data (0 = incorrect, 1 = correct), nonparametric methods for statistical procedures were used to test five of the six hypotheses. Hypothesis 1 was addressed by using rank correlations in which Spearman's rho was calculated to test significance. The authors tested the following hypotheses:
H0: At the beginning of the semester, the measures of students' perceived knowledge and actual knowledge are mutually independent.
H1: At the beginning of the semester, a positive or negative relationship between the measures of students' perceived knowledge and actual knowledge exists.
Twenty-one of the 23 students who completed the pretest assessments for perceived and actual knowledge at the beginning of fall semester and 13 of the 17 who completed the pretest assessments for perceived and actual knowledge in the spring semester produced results showing no significant relationship between the two measures. Two students in the fall and 4 in the spring revealed a significant relationship between what they believed they knew and what they actually knew, 3 at the .05 level of significance and the others at the .10 level of significance (see Table 1).
The results indicated that at the beginning of the semester most students could not accurately assess their levels of existing knowledge. Of those assessed, 85% showed no significant relationship between their perceived knowledge and actual knowledge of the subject. In other words, at the beginning of the semester, the students were unable to determine the difference between perceived knowledge and actual knowledge. Therefore, H0 cannot be rejected. Hypothesis 1 is supported.
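The per-student rank correlation described above can be sketched as follows: for one student, each question contributes an ordinal confidence rating (1–3) and a binary correctness score, and Spearman's rho tests whether the two move together. The data below are invented for illustration; `scipy.stats.spearmanr` returns the coefficient and a two-sided p value.

```python
from scipy.stats import spearmanr

# One hypothetical student: per-question knowledge-survey confidence
# (ordinal, 1-3) and pretest correctness (0 = incorrect, 1 = correct).
confidence = [3, 2, 1, 3, 1, 2, 1, 3, 2, 1]
correct    = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]

rho, p = spearmanr(confidence, correct)
if p < 0.05:
    print(f"significant relationship (rho = {rho:.3f}, p = {p:.3f})")
else:
    print(f"no significant relationship (rho = {rho:.3f}, p = {p:.3f})")
```

Repeating this test for each of the 40 students reproduces the per-student counts reported in Tables 1 and 2.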
We also addressed Hypothesis 2 by using rank correlations in which Spearman's rho was calculated to test significance. We tested the following hypotheses:

H0: At the end of the semester, the measures of students' perceived knowledge and of their actual knowledge are mutually independent.
H2: At the end of the semester, a positive or negative relationship between the measures of students' perceived knowledge and of their actual knowledge exists.
Seventeen of the 23 students who completed the posttest assessments for perceived knowledge and actual knowledge during the fall and 12 of the 17 students who completed the posttest assessments for perceived knowledge and actual knowledge in the spring produced test results showing no significant relationship between the two measures. Only 6 students in the fall and 5 in the spring revealed a significant relationship between what they believed they knew and what they actually did know, 5 at the .01 level of significance, 4 at the .05 level of significance, and 2 at the .10 level of significance (see Table 2).
At the end of both semesters, most students were not accurate in their assessment of acquired knowledge. Although a slight improvement occurred, by the end of the semester most students were still unable to determine the difference between perceived knowledge and actual knowledge. Just over 72% of those assessed after they had completed the course showed no significant relationship between perceived knowledge and actual knowledge of the subject. Therefore, H0 cannot be rejected and Hypothesis 2 is not supported.
TABLE 1. Relationship of Perceived and Actual Knowledge at the Beginning of the Semester

Semester       Spearman's rho     p
Spring 2005    .190               .021
Fall 2004      .341               .034
Spring 2005    .196               .037
Spring 2005    .234               .081
Spring 2005    .258               .091
Fall 2004      .268               .099

TABLE 2. Relationship of Perceived and Actual Knowledge at the End of the Semester

Fall 2004 students             Spring 2005 students
Spearman's rho     p           Spearman's rho     p
.478               .002        .404               .010
.465               .003        .378               .016
.456               .004        .342               .031
.456               .004        .329               .038
.378               .018        .283               .077
.305               .059

Hypothesis 3 compared perceived knowledge at the beginning of the semester to perceived knowledge at the end of the semester. Because data from the knowledge survey were ordinal, with students responding to one of three choices, sign tests were used to test the differences between the pretest assessment and the posttest assessment. We tested the following hypotheses:

H0: At the end of the semester, the students' perceived knowledge is not greater than at the beginning.
H3: At the end of the semester, students' perceived knowledge is significantly greater than at the beginning.

We compared assessment results for 40 (23 fall and 17 spring) students. In all cases, students' perceived knowledge at the end of the semester was significantly greater at the .01 level of significance than their perceived knowledge at the beginning of the semester (see Figure 1). Analyses failed to support the null hypothesis (H0). Therefore, Hypothesis 3 was supported.

FIGURE 1. Perceived knowledge at the beginning and end of the fall semester. KSA = posttest for knowledge survey; KSB = pretest for knowledge survey; Q = question number. [Chart not reproduced: confidence index per question, Q1–Q44, pretest vs. posttest.]

Hypothesis 4 theorizes that students' actual knowledge at the end of the semester is significantly greater than at the beginning. Sign tests were used for this analysis. The following hypotheses were tested:

H0: The difference between actual knowledge at the end of the semester and actual knowledge at the beginning is not significant.
H4: At the end of the semester, students' actual knowledge is significantly greater than at the beginning.

Because this pretest assessment was administered on the second night of class and all members of the class were present, a total of 29 students in the fall and 25 in the spring took this pretest assessment. Of the 54 students assessed, 44 demonstrated that their actual knowledge improved significantly over the course of the semester (see Table 3). More than three fourths of those assessed (81.48%) gained a significant amount of knowledge of the subject over the course of the semester (see Figure 2). On the basis of these test results, we rejected the null hypothesis (H0). Hypothesis 4 was supported.

TABLE 3. Comparison of Actual Knowledge at the End and Beginning of the Semester

p                  Number of students whose actual      % of total number of
                   knowledge significantly increased    students evaluated
.01                27                                   50.00
.05                 9                                   16.67
.10                 8                                   14.81
Not significant    10                                   18.52

FIGURE 2. Actual knowledge at the beginning and end of the semester. Pre = pretest for actual knowledge; Post = posttest for actual knowledge; Q = question number. [Chart not reproduced: proportion correct per question, Q1–Q44, pretest vs. posttest.]
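The per-student sign tests used for Hypotheses 3 and 4 can be sketched as follows: pair each question's pretest and posttest rating, discard ties, and test whether increases outnumber decreases; under the null hypothesis the number of increases is Binomial(n, .5). The ratings below are invented for illustration; `scipy.stats.binomtest` performs the one-sided binomial test.

```python
from scipy.stats import binomtest

# One hypothetical student's knowledge-survey ratings (1-3) per question,
# before and after the course.
pre  = [1, 1, 2, 1, 3, 1, 2, 2, 1, 1, 2, 1]
post = [3, 2, 2, 3, 3, 2, 3, 3, 2, 1, 3, 3]

# Sign test: keep only non-tied pairs, then count increases.
diffs = [b - a for a, b in zip(pre, post) if b != a]
n_up, n = sum(d > 0 for d in diffs), len(diffs)

# One-sided test: did ratings increase more often than chance (p = .5)?
result = binomtest(n_up, n, p=0.5, alternative="greater")
print(f"{n_up} increases out of {n} changes, p = {result.pvalue:.4f}")
```

The same construction works for the binary actual-knowledge data of Hypothesis 4, since the sign test uses only the direction of each paired change, not its magnitude.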

Hypothesis 5 examined the difference between the average scores of pretests and those of posttests regarding perceived knowledge. This comparison was made using the Wilcoxon signed ranks test (Conover, 1971). The following hypotheses were tested:

H0: On the average, perceived knowledge does not appear to be greater at the end of the semester than perceived knowledge at the beginning of the semester.
H5: On the average, perceived knowledge appears to be significantly greater at the end of the semester than perceived knowledge at the beginning of the semester.
For 33 of the 40 students who completed the pretest and posttest knowledge surveys, average scores on perceived knowledge after the course was completed were higher than those before the course began. Average assessment scores of 6 students in the fall class were the same in the pretest and posttest results. Only 1 student (fall semester) had a lower score at the end of the course (see Figure 3). The Wilcoxon signed ranks test (Conover, 1971) indicated that the difference in pretest and posttest average assessment scores regarding perceived knowledge was significant at the .01 level in the fall and at the .00 level in the spring.

More than 80% of the students demonstrated a significantly greater degree of perceived knowledge of class material at the end of the semester. This does not support the null hypothesis (H0). Hypothesis 5 was supported.
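A hedged sketch of the Wilcoxon signed ranks comparison used for Hypothesis 5: each student contributes a pretest and a posttest average confidence score, and the test ranks the paired differences. The scores below are invented; `scipy.stats.wilcoxon` implements the test, here one-sided in the direction of the research hypothesis.

```python
from scipy.stats import wilcoxon

# Hypothetical per-student average confidence-index scores (1-3 scale)
# from the knowledge survey, before and after the course.
avg_pre  = [1.4, 1.8, 1.2, 2.0, 1.5, 1.1, 1.7, 1.9, 1.3, 1.6]
avg_post = [2.8, 2.9, 2.4, 2.7, 2.9, 2.2, 2.8, 3.0, 2.5, 2.6]

# One-sided Wilcoxon signed ranks test: posttest averages greater.
stat, p = wilcoxon(avg_post, avg_pre, alternative="greater")
print(f"W = {stat}, p = {p:.4f}")
```

The signed ranks test is appropriate here because the averages are paired by student and the underlying scale is ordinal, so a paired t test's normality assumption is avoided.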

Hypothesis 6 questioned the difference in the average actual knowledge gained over the course of the semester. For this assessment, questions were weighted on the basis of their difficulty, and results were at the ratio level. A paired t test was used. The hypotheses tested were the following:

H0: On average, actual knowledge does not appear to be greater at the end of the semester than actual knowledge at the beginning of the semester.
H6: On average, actual knowledge appears to be significantly greater at the end of the semester than actual knowledge at the beginning of the semester.

We tested students on course concepts at the beginning and the end of the semester. We compared the average test scores and found that the difference in the pretest and the posttest was significant at the .01 level in the fall and at the .00 level in the spring. On average, students demonstrated a significant gain in actual knowledge over the course of the semester (see Figure 4).

On the basis of the significant t-test results, we concluded that students did perform significantly better at the end of the semester. Therefore, the null hypothesis (H0) was rejected. Hypothesis 6 was supported.
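The paired t test for Hypothesis 6 compares each student's difficulty-weighted pretest and posttest scores. The scores below are invented for illustration; `scipy.stats.ttest_rel` with a one-sided alternative matches the directional hypothesis.

```python
from scipy.stats import ttest_rel

# Hypothetical difficulty-weighted test scores (0-100) for the same
# students at the beginning and end of the semester.
pretest  = [32, 45, 28, 51, 38, 44, 30, 55, 41, 36]
posttest = [74, 81, 63, 88, 70, 79, 66, 90, 77, 72]

# One-sided paired t test: posttest mean greater than pretest mean.
t, p = ttest_rel(posttest, pretest, alternative="greater")
print(f"t = {t:.2f}, p = {p:.5f}")
```

A paired (rather than independent-samples) test is required because each posttest score is matched to the same student's pretest score, which removes between-student variability from the comparison.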
DISCUSSION
Colleges and universities wishing to attain and maintain accreditation, demonstrate compliance with state and federal guidelines, and direct curriculum rely on the assessment of students. Assessment is one means of exhibiting that learning is taking place in the classroom. The assessments can be conducted in various ways; two common ways are through (a) the use of pretests and posttests in which students demonstrate mastery of topics or skills and (b) the use of knowledge surveys. In the present study, we used both assessment techniques to determine whether students were learning.
Assessment is a necessary tool with which schools can exhibit compliance with accreditation, state, and federal guidelines. It is not easy to implement, and it is time consuming. Once an assessment test has been created, it must be evaluated and fine-tuned each semester; however, the benefits more than offset the time and effort that assessment requires.

FIGURE 3. Average perceived knowledge at the beginning and end of the semester. Avg KA = average score for posttest, perceived knowledge; Avg KB = average score for pretest, perceived knowledge. [Chart not reproduced: average confidence index per student, pretest vs. posttest.]

FIGURE 4. Average actual knowledge at the beginning and end of the semester. [Charts not reproduced: histograms of the number of students per test-score interval, pretest and posttest.]
Posttest assessment can be used to revise course content so that areas in which students are weak can be emphasized. Similarly, pretest results can identify areas in which students have prior knowledge, and teachers can dedicate less class time to those topics. In short, both the teacher and the students can benefit from assessment. Faculty should embrace assessment as a means to enhance their courses and not view assessment as another hurdle on the road to compliance.
To successfully use these techniques for this study, we had to establish learning objectives for Management 6330, the course that we used for this research project. Questions or problems had to be created to focus on course topics and to enable students to demonstrate that these goals had been met. These activities were time consuming.
Through pretests and posttests, we assessed both perceived knowledge and actual knowledge of course material. These data were compared at the beginning and the end of the semester and were compared against each other. The levels of perceived knowledge and actual knowledge climbed significantly, both when testing data student by student and when examining the average amount learned. Students, however, were not able to accurately perceive their knowledge level.
Is it unusual that the students were not able to accurately perceive their knowledge level? This is a difficult, if not impossible, question to answer. However, Rogers (2006) noted, "as evidence of student learning, indirect methods are not as strong as direct measures because assumptions must be made about what exactly the self-report means." The results of our study indicate that self-reporting does not mean much. Rogers goes on to state that "it is important to remember that all assessment methods have their limitations and contain some bias." The inability of the students to identify their knowledge level implies that to accurately measure learning, direct measures should be employed.

NOTES

Barbara A. Price, PhD, is a professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has more than 50 publications in various professional journals and proceedings including the Decision Sciences Journal of Innovative Education, Journal of Education for Business, Inroads—the SIGCSE Bulletin, and Journal of Information Technology Education.

Cindy H. Randall is an assistant professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has published in numerous proceedings as well as in the International Journal of Research in Marketing, Journal of Marketing Theory and Practice, Marketing Management Journal, Journal of Transportation Management, and Inroads—the SIGCSE Bulletin.

Correspondence concerning this article should be addressed to Cindy H. Randall, Department of Finance and Quantitative Analysis, Georgia Southern University, Box 8151, COBA, Statesboro, GA 30460, USA. E-mail: [email protected]
REFERENCES

Abunawass, A., Lloyd, W., & Rudolf, E. (2004). COMPASS: A CS program assessment project. Proceedings, ITICSE, 36(3), 127–131.
Betters-Reed, B. L., Chacko, J. M., & Marlina, D. (2003). Assurance of learning: Small school strategies. Continuous improvement symposium, AACSB conferences and seminars. Retrieved November 3, 2006, from http://www.aacsb.edu/handouts/CIS03/cis03-prgm.asp
Blaha, K. D., & Murphy, L. C. (2001). Targeting assessment: How to hit the bull's eye. Journal of Computing in Small Colleges, 17(2), 106–115.
Commission on Colleges. (2006). Principles of accreditation: Foundation for quality enhancement by the Southern Association of Colleges and Schools (2002–2006 edition). Retrieved November 3, 2006, from http://www.sacscoc.org/pdf/PrinciplesOfAccreditation.PDF
Conover, W. J. (1971). Practical nonparametric statistics. New York: Wiley.
Earl, L., & Torrance, N. (2000). Embedding accountability and improvement into large-scale assessment: What difference does it make? Peabody Journal of Education, 75(4), 114–141.
Eastman, J. K., Aller, R. C., & Superville, C. L. (2001). Developing an MBA assessment program: Guidance from the literature and one program's experience. Retrieved November 10, 2006, from http://www.westga.edu/~bquest/2001/assess.html
Gerretson, H., & Golson, E. (2005). Synopsis of the use of course-embedded assessment in a medium sized public university's general education program. Journal of General Education, 54(2), 139–149.
Horan, S. (2004). Using knowledge surveys to direct the class. Retrieved November 3, 2006, from http://spacegrant.nmsu.edu/NMSU/2004/horan.pdf
Jones, L. G., & Price, A. L. (2002). Changes in computer science accreditation. Communications of the ACM, 45(8), 99–103.
Keenan-Takagi, K. (2000). Embedding assessment in choral teaching. Music Educators Journal, 86(4), 42–49.
Lidtke, D. K., & Yaverbaum, G. J. (2003). Developing accreditation for information systems education. IEEE, 5(1), 41–45.
Martell, K., & Calderon, T. (2005). Assessment of student learning in business schools: Best practice each step of the way. Vol. 1, No. 1. Tallahassee, FL: Association for Institutional Research.
Nuhfer, E., & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59–78.
Outcome Assessment. (2003). Office of the Provost at The University of Wisconsin–Madison. Retrieved November 10, 2006, from http://www.provost.wisc.edu/assessment/manual/manual12.html
Pare, M. A. (Ed.). (1998). Certification and accreditation programs directory: A descriptive guide to national voluntary certification and accreditation programs for professionals and institutions (2nd ed.). Farmington Hills, MI: Gale Group.
Robinson, D. W. (2004). The Georgia Southern knowledge survey FAQ. Retrieved July 1, 2004, from http://ogeechee.litphil.georgiasouthern.edu/nuncio/faq.php
Rogers, G. (2006). Assessment 101: Direct and indirect assessments: What are they good for? Retrieved May 8, 2008, from http://www.abet.org/Linked%20Documents-UPDATE/Newsletters/06-08-CM.pdf
Rubino, F. J. (2001). Survey highlights importance of accreditation for engineers. ASHRAE Insight, 16(7), 27–31.
Schwendau, M. (1995). College quality assessment: The double-edged sword. Tech Directions, 54(9), 30–32.
THEC Performance Funding. (2003). Pilot evaluation: Assessment of general education learning outcomes [Standard I.B. 2002-03]. Retrieved July 30, 2004, from http://www.state.tn.us/thec/2004web/division_pages/ppr_pages/pdfs/Policy/Gen%20Ed%20RSCC%20Pilot.pdf
Treagust, D. F., Jacobowitz, R., Gallagher, J. J., & Parker, J. (2003). Embed assessment in your teaching. Science Scope, 26(6), 36–39.
Valacich, J. (2001). Accreditation in the information academic discipline. Retrieved November 5, 2006, from http://www.aisnet.org/Curriculum/AIS_AcreditFinal.doc
Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181–208.