Journal of Education for Business, 83(5), 288–294 (May/June 2008). ISSN 0883-2323 (print), 1940-3356 (online). DOI: 10.3200/JOEB.83.5.288-294
Assessing Learning Outcomes in Quantitative Courses: Using Embedded Questions for Direct Assessment

BARBARA A. PRICE
CINDY H. RANDALL
GEORGIA SOUTHERN UNIVERSITY
STATESBORO, GEORGIA
ABSTRACT. Researchers can evaluate learning by using direct and indirect assessment. Although there are various ways to apply these approaches, two common techniques are pretests and posttests (direct assessment), in which students demonstrate mastery of topics or skills, and the use of knowledge surveys (indirect assessment). The present authors used these two techniques to demonstrate that student knowledge of course material increased significantly during the semester. Furthermore, the authors demonstrated that the indirect knowledge survey of perceived knowledge did not correlate with actual knowledge.

Keywords: assessment, learning outcomes, quantitative classes

Copyright © 2008 Heldref Publications
Accreditation helps institutions show that they are attaining an acceptable level of quality within their degree programs (Lidtke & Yaverbaum, 2003; Pare, 1998; Valacich, 2001). Also, accreditation ensures national consistency of programs, provides peer review and recognition from outside sources, and brings programs onto the radar screen of potential employers (Rubino, 2001). To meet accreditation standards, faculty and administrators are responsible for the continuous improvement of degree programs and the measurement and documentation of student performance (Eastman, Aller, & Superville, 2001). Many colleges and universities rely heavily on program assessment to comply with accreditation and state demands (Eastman et al.; Schwendau, 1995) and to guide curriculum (Abunawass, Lloyd, & Rudolf, 2004; Blaha & Murphy, 2001).
Assessment to determine whether degree programs are providing appropriate education to graduates has become a key component of most accreditation self-study report requirements and a vehicle that is preferred for accountability purposes (Earl & Torrance, 2000). Several accreditation boards now require that colleges set learning goals and then assess how well these goals are met (Jones & Price, 2002). Learning goals that reflect the skills, attitudes, and knowledge that students are expected to acquire as a result of their programs of study are broad and not easily measured. Objective outcomes are clear statements outlining what is expected from students. They can be observed, measured, and used as indicators of goals (Martell & Calderon, 2005).

Under the Association to Advance Collegiate Schools of Business International's (AACSB's) new standards (Betters-Reed, Chacko, & Marlina, 2003) and the Southern Association of Colleges and Schools' (SACS's) new standards (Commission on Colleges, 2006), business programs will have to set goals to address what skills, attributes, and knowledge they want their students to master and must then be able to demonstrate that their graduates have met these goals. Establishing and implementing a system under which these programs can prove that their graduates have met the established goals is necessary under these standards. Any such system will have to rely on the creation and measurement of course objectives to serve as indicators that goals are being met.
Two basic approaches to assess learning are indirect and direct. Indirect approaches gather opinions of the quality and quantity of learning that takes place (Martell & Calderon, 2005). Techniques for gathering data by using indirect assessment include focus groups, exit interviews, and surveys. One common type of survey is the knowledge survey (Nuhfer & Knipp, 2003). Knowledge surveys can cover the topics of an entire course—both skills and content knowledge—exhaustively. This coverage is accomplished through the use of a rating system in which students express their confidence in providing answers to problems or issues (Horan, 2004).
Using a knowledge survey, the student responded to one of three choices: (a) "You feel confident that you can now answer the question sufficiently for graded test purposes"; (b) "You can now answer at least 50% of the question or you know precisely where you can quickly get the information and return (20 minutes or less) to provide a complete answer for graded purposes"; or (c) "You are not confident you could adequately answer the question for graded test purposes at this time" (Horan, 2004). This method of assessment allows students to consider complex problems and issues as well as course content knowledge (Nuhfer & Knipp, 2003).
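As an illustration only, the three survey choices can be coded numerically to form the kind of confidence index plotted later in Figures 1 and 3; the 3/2/1 coding and the helper below are assumptions for this sketch, not part of the original study materials.

```python
# Minimal sketch (not the authors' code): one way to turn the three-choice
# knowledge-survey responses into a per-question confidence index.
from statistics import mean

RESPONSE_SCORE = {"a": 3, "b": 2, "c": 1}  # a = fully confident ... c = not confident

def confidence_index(responses_for_question):
    """Average coded confidence for one survey question across students."""
    return mean(RESPONSE_SCORE[r] for r in responses_for_question)

# Example: five students answering one question on the pretest survey.
print(confidence_index(["a", "b", "c", "b", "a"]))  # -> 2.2
```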
In contrast, direct assessment requires that students demonstrate mastery of topics or skills by using actual work completed by the students. This requirement can be accomplished by using papers, presentations, speeches, graded assessment items, or pretests and posttests. Pretests and posttests are probably the most widely used form of evaluating how students have progressed during the semester (Outcome Assessment, 2003). This method surveys students at the beginning and end of a course. With standard pretests and posttests, students can complete the same quiz at the beginning and end of the course, and a grade can be computed to illustrate how much students learned. Critics believe this approach is limiting because time alone dictates the amount of material on which students can be tested (Nuhfer & Knipp, 2003). Proponents feel that these tests are specifically designed to coincide with the curriculum of the course and can focus on the missions, goals, and objectives of the department or university (Outcome Assessment, 2003).
Regardless of which of the direct methods is used, educators can measure the progress of students by using course-embedded assessment. Course-embedded assessment, a cutting-edge formalized assessment (Gerretson & Golson, 2005), requires that the products of students' work be evaluated by using those criteria and standards established in the course objectives. It tends to be informal but well organized (Treagust, Jacobowitz, Gallagher, & Parker, 2003). By embedding, the opportunities to assess progress made by students are integrated into regular instructional material and are indistinguishable from day-to-day classroom activities (Keenan-Takagi, 2000; Wilson & Sloane, 2000). The results are then shared with the faculty so that learning and curriculum can be improved. This technique is efficient and insightful (Martell & Calderon, 2005) and guarantees consistency within multiple sections of the same course by using the same outcomes and rubrics (Gerretson & Golson, 2005).
Hypotheses
The goal of the present study was to provide insight on the use of direct versus indirect techniques as means of assessing student learning, with the hope that these findings can be used as input to course improvement as well as assessment and accreditation self-studies. To accomplish this goal, we asked students at a university who were enrolled in Management 6330 during the 2004–2005 academic year to participate in a knowledge survey project including a pretest and posttest validity check. Management 6330, or Quantitative Methods for Business, is an introductory course in statistics and management science techniques required for students entering the MBA or MAcc degree programs who have either not acquired the knowledge from a BA degree program or have paused for some time since taking decision analysis courses. Using these students' scores, we compared pretest and posttest scores and knowledge survey scores on a question-by-question basis. Additionally, pretest and posttest and before-and-after knowledge survey scores were compared. Last, the class averages on both instruments were compared for the data gathered at the beginning and then at the end of the semester.
We studied the following hypotheses:

1. At the beginning of a course, students' perceived knowledge and actual knowledge are mutually independent.
2. At the end of a course, students' perceived knowledge and actual knowledge are related.
3. Students' perceived knowledge is significantly greater at the end of a course than at the beginning of a course.
4. Students' actual knowledge is significantly greater at the end of a course than at the beginning of a course.
5. Average perceived knowledge for students is significantly greater at the end of a course than at the beginning of a course.
6. Average actual knowledge for students is significantly greater at the end of a course than at the beginning of a course.
METHOD
During the 2004–2005 academic year, Dr. David W. Robinson conducted a knowledge survey trial at a university in the southeastern United States and invited all faculty members to participate. Those who chose to do so created a list of questions that comprehensively expressed the content of their classes. Then, Robinson (2004) used these questions to construct a knowledge survey instrument. One class whose professor chose to participate in the trial was Management 6330, Quantitative Methods for Business. This class is taught every semester.
During the fall 2004 and spring 2005 semesters, students enrolled in Management 6330 were participants in the knowledge survey project. As a participant in this project, each student completed a Web-based survey during the first class. The survey asked each student to indicate confidence in being able to answer questions on material that would be covered over the course of the semester. At the end of the semester, each student completed the same survey, providing a means to assess the learning that occurred over the semester. These surveys were administered via the Web and did not count in the student's course average. The faculty member teaching the class did not have access to the survey results until after the semester ended.
One problem with surveys in which students are asked if they have adequate knowledge without having to prove knowledge is that some students exhibit overconfidence (Nuhfer & Knipp, 2003). To overcome this problem, during the second night of class each student received the same pretest and actually solved the test problems. Another problem often encountered is that students fail to take the test seriously if no incentive is attached (THEC Performance Funding, 2003). In fall 2004, this activity did not count as part of the student's final grade; however, with an overall score of 70% or higher, the student could elect to exempt Management 6330. If the student remained in the course, this same test was administered at the end of the fall semester. The score on this exam accounted for 10% of the student's final class average.
In spring 2005, Management 6330 students again chose to participate in the assessment study. After the initial trial during the prior semester, the professor refined both the survey and the process. One change involved proof of competency for Management 6330. Instead of exempting the course with an overall passing grade (70 or above) on the pretest, the students had to score a 70 or higher in each of the six competency areas (descriptive/graphical analysis, probability, inference, decision analysis, linear programming, and quality control processes). The second change involved the posttest. Students in the fall semester complained about the number of tests facing them at the end of the course. In the spring, instead of giving a separate posttest that counted as part of the final exam, the professor embedded a random selection of pretest questions from each of the six competency areas into the final exam. These questions, which accounted for roughly half of the original pretest questions, were compared with the pretest score for assessment.
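As an illustration of the spring exemption rule just described, the following minimal sketch (hypothetical helper names and scores, not the authors' code) checks that every competency area meets the 70 threshold before an exemption is allowed.

```python
# Minimal sketch (assumed logic, not the authors' code) of the spring 2005
# exemption rule: a student may exempt Management 6330 only if the pretest
# score is 70 or higher in every one of the six competency areas.
COMPETENCY_AREAS = [
    "descriptive/graphical analysis", "probability", "inference",
    "decision analysis", "linear programming", "quality control processes",
]

def may_exempt(area_scores: dict) -> bool:
    """True only if every competency area was scored at 70 or above."""
    return all(area_scores.get(area, 0) >= 70 for area in COMPETENCY_AREAS)

# Example: a strong overall average but one weak area -> no exemption.
scores = {"descriptive/graphical analysis": 92, "probability": 85, "inference": 64,
          "decision analysis": 78, "linear programming": 81,
          "quality control processes": 75}
print(may_exempt(scores))  # False
```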
RESULTS
Assessment of students in Management 6330 began on the first night of class. Although at the end of the fall semester class enrollment showed a total of 29 students, some enrolled late. Therefore, only 23 completed both the pretest and posttest knowledge survey instrument. Again in the spring, students enrolled late, and some did not complete the pretest knowledge survey instrument. Of the 25 students who finished the course, only 17 completed both the pretest and posttest knowledge survey instruments. Therefore, in the fall and spring semesters, 40 students completed both pretest and posttest knowledge survey instruments. A total of 54 students completed the pretest and posttest by solving problems.
Because we recorded student assessment of perceived knowledge by using ordinal data and per-question actual knowledge by using binary data (0 = incorrect, 1 = correct), nonparametric methods for statistical procedures were used to test five of the six hypotheses. Hypothesis 1 was addressed by using rank correlations in which Spearman's rho was calculated to test significance. The authors tested the following hypotheses:
H0: At the beginning of the semester, a positive or negative relationship between the measures of students' perceived knowledge and actual knowledge exists.

H1: At the beginning of the semester, the measures of students' perceived knowledge and actual knowledge are mutually independent.
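The per-student rank correlation described above can be sketched as follows; the data are made up, and the coding and scipy call are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the per-student procedure used for Hypotheses 1 and 2:
# Spearman's rho between a student's ordinal survey confidence (coded 1-3)
# and binary correctness (0/1) across the survey/test questions.
from scipy.stats import spearmanr

def perceived_vs_actual(confidence, correct):
    """Return (rho, p) for one student's question-by-question responses."""
    return spearmanr(confidence, correct)

# Example with made-up responses for ten questions.
confidence = [3, 2, 1, 3, 2, 2, 1, 3, 3, 1]   # survey choices coded 3/2/1
correct    = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]   # pretest item scored right/wrong
rho, p = perceived_vs_actual(confidence, correct)
print(round(rho, 3), round(p, 3))
```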
Twenty-one of the 23 students who completed the pretest assessments for perceived and actual knowledge at the beginning of fall semester and 13 of the 17 who completed the pretest assessments for perceived and actual knowledge in the spring semester produced results showing no significant relationship between the two measures. Two students in the fall and 4 in the spring revealed a significant relationship between what they believed they knew and what they actually knew, 3 at the .05 level of significance and the others at the .10 level of significance (see Table 1).
The results indicated that at the beginning of the semester most students could not accurately assess their levels of existing knowledge. Of those assessed, 85% showed no significant relationship between their perceived knowledge and actual knowledge of the subject. In other words, at the beginning of the semester, the students were unable to determine the difference between perceived knowledge and actual knowledge. Therefore, H0 cannot be rejected. Hypothesis 1 is supported.
We also addressed Hypothesis 2 by using rank correlations in which Spearman's rho was calculated to test significance. We tested the following hypotheses:
H0: At the end of the semester, the measures of students' perceived knowledge and of their actual knowledge are mutually independent.

H2: At the end of the semester, a positive or negative relationship between the measures of students' perceived knowledge and of their actual knowledge exists.
Seventeen of the 23 students who completed the posttest assessments for perceived knowledge and actual knowledge during the fall and 12 of the 17 students who completed the posttest assessments for perceived knowledge and actual knowledge in the spring produced test results showing no significant relationship between the two measures. Only 6 students in the fall and 5 in the spring revealed a significant relationship between what they believed they knew and what they actually did know, 5 at the .01 level of significance, 4 at the .05 level of significance, and 2 at the .10 level of significance (see Table 2).
At the end of both semesters, most students were not accurate in their assessment of acquired knowledge. Although a slight improvement occurred, by the end of the semester most students were still unable to determine the difference between perceived knowledge and actual knowledge. Just over 72% of those assessed after they had completed the course showed no significant relationship between perceived knowledge and actual knowledge of the subject. Therefore, H0 cannot be rejected and Hypothesis 2 is not supported.
TABLE 1. Relationship of Perceived and Actual Knowledge at the Beginning of the Semester
[Semester, Spearman's rho, and p for each of the six students whose pretest perceived knowledge and actual knowledge were significantly correlated (two in fall 2004, four in spring 2005); Spearman's rho values ranged from .190 to .341, and p values from .021 to .099.]

TABLE 2. Relationship of Perceived and Actual Knowledge at the End of the Semester

Fall 2004 students                  Spring 2005 students
Spearman's rho      p               Spearman's rho      p
.478                .002            .404                .010
.465                .003            .378                .016
.456                .004            .342                .031
.456                .004            .329                .038
.378                .018            .283                .077
.305                .059

Hypothesis 3 compared perceived knowledge at the beginning of the semester to perceived knowledge at the end of the semester. Because data from the knowledge survey were ordinal, with students responding to one of three choices, sign tests were used to test the differences between the pretest assessment and the posttest assessment. We tested the following hypotheses:

H0: At the end of the semester, the students' perceived knowledge is not greater than at the beginning.

H3: At the end of the semester, students' perceived knowledge is significantly greater than at the beginning.

We compared assessment results for 40 (23 fall and 17 spring) students. In all cases, students' perceived knowledge at the end of the semester was significantly greater, at the .01 level of significance, than their perceived knowledge at the beginning of the semester (see Figure 1). Analyses failed to support the null hypothesis (H0). Therefore, Hypothesis 3 was supported.
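The per-student sign tests used for Hypotheses 3 and 4 can be illustrated with a minimal sketch; the responses below are made up, and the binomial-test formulation is an assumption about how such a sign test might be computed, not the authors' procedure.

```python
# Minimal sketch of a per-student sign test: count questions rated (or scored)
# higher on the posttest than on the pretest and test the sign counts against
# a fair coin, one-sided (greater at the end of the semester).
from scipy.stats import binomtest

pre  = [1, 2, 1, 1, 2, 1, 3, 1, 2, 1, 1, 2]   # pretest confidence (or 0/1 correctness)
post = [3, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 3]   # posttest confidence (or 0/1 correctness)

diffs = [b - a for a, b in zip(pre, post) if b != a]   # ties are dropped
n_pos = sum(d > 0 for d in diffs)                      # questions that improved
result = binomtest(n_pos, n=len(diffs), p=0.5, alternative="greater")
print(n_pos, len(diffs), round(result.pvalue, 4))
```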
Hypothesis 4 theorizes that students' actual knowledge at the end of the semester is significantly greater than at the beginning. Sign tests were used for this analysis. The following hypotheses were tested:

H0: The difference between actual knowledge at the end of the semester and actual knowledge at the beginning is not significant.

H4: At the end of the semester, students' actual knowledge is significantly greater than at the beginning.

Because this pretest assessment was administered on the second night of class and all members of the class were present, a total of 29 students in the fall and 25 in the spring took this pretest assessment. Of the 54 students assessed, 44 demonstrated that their actual knowledge improved significantly over the course of the semester (see Table 3).

More than three fourths of those assessed (81.48%) gained a significant amount of knowledge of the subject over the course of the semester (see Figure 2). On the basis of these test results, we rejected the null hypothesis (H0). Hypothesis 4 was supported.

TABLE 3. Comparison of Actual Knowledge at the End and Beginning of the Semester

p                  Number of students whose actual       % of total number of
                   knowledge significantly increased     students evaluated
.01                27                                    50.00
.05                 9                                    16.67
.10                 8                                    14.81
Not significant    10                                    18.52

[FIGURE 1. Perceived knowledge at the beginning and end of the fall semester. KSA = posttest for knowledge survey; KSB = pretest for knowledge survey; Q = question number. (Confidence index plotted by question.)]

[FIGURE 2. Actual knowledge at the beginning and end of the semester. Pre = pretest for actual knowledge; Post = posttest for actual knowledge; Q = question number. (Correct/incorrect plotted by question.)]
Hypothesis 5 examined the difference between the average scores of pretests and those of posttests regarding perceived knowledge. This comparison was made using the Wilcoxon signed ranks test (Conover, 1971). The following hypotheses were tested:

H0: On the average, perceived knowledge does not appear to be greater at the end of the semester than perceived knowledge at the beginning of the semester.

H5: On the average, perceived knowledge appears to be significantly greater at the end of the semester than perceived knowledge at the beginning of the semester.
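A minimal sketch of such a Wilcoxon signed ranks comparison follows; the per-student averages are made up, and the one-sided scipy call is an assumption for illustration, not the authors' code.

```python
# Minimal sketch of the Wilcoxon signed ranks test used for Hypothesis 5:
# compare each student's average survey confidence before and after the
# course, one-sided (greater at the end of the semester).
from scipy.stats import wilcoxon

pre_avg  = [1.4, 1.8, 1.2, 2.0, 1.5, 1.7, 1.3, 1.9]   # average confidence, pretest survey
post_avg = [2.6, 2.9, 2.4, 2.8, 2.7, 2.5, 2.6, 3.0]   # average confidence, posttest survey

stat, p = wilcoxon(post_avg, pre_avg, alternative="greater")
print(stat, round(p, 4))
```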
For 33 of the 40 students who completed the pretest and posttest knowledge surveys, average scores on perceived knowledge after the course was completed were higher than those before the course began. Average assessment scores of 6 students in the fall class were the same in the pretest and posttest results. Only 1 student (fall semester) had a lower score at the end of the course (see Figure 3). The Wilcoxon signed ranks test (Conover, 1971) indicated that the difference in pretest and posttest average assessment scores regarding perceived knowledge was significant at the .01 level of significance in the fall and at the .00 level of significance in the spring.
More than 80% of the students demonstrated a significantly greater degree of perceived knowledge of class material at the end of the semester. This does not support the null hypothesis (H0). Hypothesis 5 was supported.
Hypothesis 6 questioned the difference in the average actual knowledge gained over the course of the semester. For this assessment, questions were weighted on the basis of their difficulty, and results were at the ratio level. A paired t test was used. The hypotheses tested were the following:

H0: On average, actual knowledge does not appear to be greater at the end of the semester than actual knowledge at the beginning of the semester.

H6: On average, actual knowledge appears to be significantly greater at the end of the semester than actual knowledge at the beginning of the semester.
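A minimal sketch of such a paired t test follows; the weighted scores are made up, and the one-sided scipy call is an assumption for illustration, not the authors' code.

```python
# Minimal sketch of the paired t test used for Hypothesis 6: compare each
# student's difficulty-weighted pretest and posttest scores, one-sided
# (greater at the end of the semester).
from scipy.stats import ttest_rel

pretest  = [32, 41, 28, 55, 47, 36, 60, 39, 44, 51]   # weighted pretest scores
posttest = [68, 74, 59, 81, 77, 70, 88, 66, 72, 80]   # weighted posttest scores

t_stat, p = ttest_rel(posttest, pretest, alternative="greater")
print(round(t_stat, 2), round(p, 6))
```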
We tested students on course concepts at the beginning and the end of the semester. We compared the average test scores and found that the difference in the pretest and the posttest was significant at the .01 level of significance in the fall and at the .00 level of significance in the spring. On average, students demonstrated a significant gain in actual knowledge over the course of the semester (see Figure 4).
On the basis of the significant t test results, we concluded that students did perform significantly better at the end of the semester. Therefore, the null hypothesis (H0) was rejected. Hypothesis 6 was supported.
DISCUSSION
Colleges and universities wishing to attain and maintain accreditation, demonstrate compliance with state and federal guidelines, and direct curriculum rely on the assessment of students. Assessment is one means of exhibiting that learning is taking place in the classroom. The assessments can be conducted in various ways; two common ways are through (a) the use of pretests and posttests in which students demonstrate mastery of topics or skills and (b) the use of knowledge surveys. In the present study, we used both assessment techniques to determine whether students were learning.
Assessment is a necessary tool with which schools can exhibit compliance with accreditation, state, and federal guidelines. It is not easy to implement, and it is time consuming. Once an assessment test has been created, it must be evaluated and fine-tuned each semester; however, the benefits more than offset the time and effort that assessment requires.

[FIGURE 3. Average perceived knowledge at the beginning and end of the semester. Avg KA = average score for posttest, perceived knowledge; Avg KB = average score for pretest, perceived knowledge; Q = question number. (Average confidence index plotted by student.)]

[FIGURE 4. Average actual knowledge at the beginning and end of the semester. (Panels: pretest actual knowledge and posttest actual knowledge; axes: test score ranges and number of students.)]
Posttest assessment can be used to revise course content so that areas in which students are weak can be emphasized. Similarly, pretest results can identify areas in which students have prior knowledge, and teachers can dedicate less class time to those topics. In short, both the teacher and the students can benefit from assessment. Faculty should embrace assessment as a means to enhance their course and not view assessment as another hurdle in the road to compliance.
To successfully use these techniques for this study, we had to establish learning objectives for Management 6330, the course that we used for this research project. Questions or problems had to be created to focus on course topics and to enable students to demonstrate that these goals had been met. These activities were time consuming.
Through pretests and posttests, we assessed both perceived knowledge and actual knowledge of course material. These data were compared at the beginning and the end of the semester and were compared against each other. The levels of perceived knowledge and actual knowledge climbed significantly both in testing data student by student and by examining the average amount learned. Students were not able to accurately perceive their knowledge level.
Is it unusual that the students were not able to accurately perceive their knowledge level? This is a difficult, if not impossible, question to answer. However, Rogers (2006) noted, "as evidence of student learning, indirect methods are not as strong as direct measures because assumptions must be made about what exactly the self-report means." The results of our study indicate that self-reporting does not mean much. Rogers goes on to state that "it is important to remember that all assessment methods have their limitations and contain some bias." The inability of the students to identify their knowledge level implies that to accurately measure learning, direct measures should be employed.
NOTES
Barbara A. Price, PhD, is a professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has more than 50 publications in various professional journals and proceedings including the Decision Sciences Journal of Innovative Education, Journal of Education for Business, Inroads—the SIGCSE Bulletin, and Journal of Information Technology Education.

Cindy H. Randall is an assistant professor of quantitative analysis in the College of Business Administration at Georgia Southern University. She has published in numerous proceedings as well as in the International Journal of Research in Marketing, Journal of Marketing Theory and Practice, Marketing Management Journal, Journal of Transportation Management, and Inroads—the SIGCSE Bulletin.

Correspondence concerning this article should be addressed to Cindy H. Randall, Department of Finance and Quantitative Analysis, Georgia Southern University, Box 8151, COBA, Statesboro, GA 30460, USA.

E-mail: [email protected]
REFERENCES
Abunawass, A., Lloyd, W., & Rudolf, E. (2004). COMPASS: A CS program assessment project. Proceedings, ITICSE, 36(3), 127–131.

Betters-Reed, B. L., Chacko, J. M., & Marlina, D. (2003). Assurance of learning: Small school strategies. Continuous improvement symposium, AACSB conferences and seminars. Retrieved November 3, 2006, from http://www.aacsb.edu/handouts/CIS03/cis03-prgm.asp

Blaha, K. D., & Murphy, L. C. (2001). Targeting assessment: How to hit the bull's eye. Journal of Computing in Small Colleges, 17(2), 106–115.

Commission on Colleges. (2006). Principles of accreditation: Foundation for quality enhancement by the Southern Association of Colleges and Schools (2002–2006 edition). Retrieved November 3, 2006, from http://www.sacscoc.org/pdf/PrinciplesOfAccreditation.PDF

Conover, W. J. (1971). Practical nonparametric statistics. New York: Wiley.

Earl, L., & Torrance, N. (2000). Embedding accountability and improvement into large-scale assessment: What difference does it make? Peabody Journal of Education, 75(4), 114–141.

Eastman, J. K., Aller, R. C., & Superville, C. L. (2001). Developing an MBA assessment program: Guidance from the literature and one program's experience. Retrieved November 10, 2006, from http://www.westga.edu/~bquest/2001/assess.html

Gerretson, H., & Golson, E. (2005). Synopsis of the use of course-embedded assessment in a medium sized public university's general education program. Journal of General Education, 54(2), 139–149.

Horan, S. (2004). Using knowledge surveys to direct the class. Retrieved November 3, 2006, from http://spacegrant.nmsu.edu/NMSU/2004/horan.pdf

Jones, L. G., & Price, A. L. (2002). Changes in computer science accreditation. Communications of the ACM, 45(8), 99–103.

Keenan-Takagi, K. (2000). Embedding assessment in choral teaching. Music Educators Journal, 86(4), 42–49.

Lidtke, D. K., & Yaverbaum, G. J. (2003). Developing accreditation for information systems education. IEEE, 5(1), 41–45.

Martell, K., & Calderon, T. (2005). Assessment of student learning in business schools: Best practice each step of the way (Vol. 1, No. 1). Tallahassee, FL: Association for Institutional Research.

Nuhfer, E., & Knipp, D. (2003). The knowledge survey: A tool for all reasons. To Improve the Academy, 21, 59–78.

Outcome Assessment. (2003). Office of the Provost at The University of Wisconsin–Madison. Retrieved November 10, 2006, from http://www.provost.wisc.edu/assessment/manual/manual12.html

Pare, M. A. (Ed.). (1998). Certification and accreditation programs directory: A descriptive guide to national voluntary certification and accreditation programs for professionals and institutions (2nd ed.). Farmington Hills, MI: Gale Group.

Robinson, D. W. (2004). The Georgia Southern knowledge survey FAQ. Retrieved July 1, 2004, from http://ogeechee.litphil.georgiasouthern.edu/nuncio/faq.php

Rogers, G. (2006). Assessment 101: Direct and indirect assessments: What are they good for? Retrieved May 8, 2008, from http://www.abet.org/Linked%20Documents-UPDATE/Newsletters/06-08-CM.pdf

Rubino, F. J. (2001). Survey highlights importance of accreditation for engineers. ASHRAE Insight, 16(7), 27–31.

Schwendau, M. (1995). College quality assessment: The double-edged sword. Tech Directions, 54(9), 30–32.

THEC Performance Funding. (2003). Pilot evaluation: Assessment of general education learning outcomes [Standard I.B. 2002-03]. Retrieved July 30, 2004, from http://www.state.tn.us/thec/2004web/division_pages/ppr_pages/pdfs/Policy/Gen%20Ed%20RSCC%20Pilot.pdf

Treagust, D. F., Jacobowitz, R., Gallagher, J. J., & Parker, J. (2003). Embed assessment in your teaching. Science Scope, 26(6), 36–39.

Valacich, J. (2001). Accreditation in the information academic discipline. Retrieved November 5, 2006, from http://www.aisnet.org/Curriculum/AIS_AcreditFinal.doc

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181–208.