JOURNAL OF EDUCATION FOR BUSINESS, 89: 156–164, 2014
Copyright © Taylor & Francis Group, LLC
ISSN: 0883-2323 print / 1940-3356 online
DOI: 10.1080/08832323.2013.800467

Seeking Empirical Validity in an Assurance
of Learning System
Sherry L. Avery, Rochell R. McWhorter, Roger Lirely, and H. Harold Doty


University of Texas at Tyler, Tyler, Texas, USA


Business schools have established measurement tools to support their assurance of learning (AoL) systems and to assess student achievement of learning objectives. However, business schools have not required that their tools be empirically validated, that is, shown to measure what they are intended to measure. The authors propose that confirmatory factor analysis (CFA) be used by business schools to evaluate AoL measurement systems, and they illustrate a CFA model used to evaluate the measurement tools at their college. The approach is in its initial steps, currently evaluating individual measurement tools, but the authors are working toward developing a system that can evaluate the entire AoL measurement system.
Keywords: AACSB, assessment, assurances of learning, confirmatory factor analysis

A decade ago, the Association to Advance Collegiate Schools
of Business (AACSB) International ratified new accreditation
requirements including the addition of assurance of learning (AoL) standards for continuous improvement (Martell,
2007). As part of this addition, schools seeking to earn or
maintain AACSB accreditation must develop a set of defined learning goals and subsequently collect relevant assessment data to determine direct educational achievement
(LeClair, 2012; Sampson & Betters-Reed, 2008). The establishment of the mission-driven assessment process requires
“well-documented systematic processes to develop, monitor, evaluate, and revise the substance and delivery of the curricula on learning” (Romero, 2008, p. 253).

With establishment of the 2003 AACSB standards, all
schools “must develop assessment tools that measure the
effectiveness of their curriculum” (Pesta & Scherer, 2011,
p. 164). As a response to this outcomes assessment mandate, a number of schools created models to depict and track
their assessment functions (Betters-Reed, Nitkin, & Sampson, 2008; Gardiner, Corbitt, & Adams 2010; Zocco, 2011).
However, the question arises as to the validity of these system models for measuring learning outcomes: does the model measure what it purports to measure, and do the learning experiences accomplish the learning goals outlined in the systems model? This question is important because once the validity of a measurement system is established, it provides confidence in a program and quality assurance in achieving the school's mission (Baker, Ni, & Van Wart, 2012).

Correspondence should be addressed to Sherry L. Avery, University of Texas at Tyler, College of Business and Technology, 3900 University Boulevard, Tyler, TX 75799, USA. E-mail: savery@uttyler.edu
The purpose of this article is to illustrate development of
an empirically based AoL system that may be used by other
business schools seeking accreditation. Relevant literature on this topic is examined next.

REVIEW OF LITERATURE
A search of the literature for an empirically validated AoL
system yielded results for research covering either the validation of AoL tools or validation of an AoL model. Each is
discussed in the following section.
Measures of Validity in AoL Assessment Tools
Measures of validity associated with AoL learning outcomes
were located in business literature by reviewing articles that
described locally developed assessment tools and externally
validated instruments. For instance, researchers developed
an assessment tool to explore students’ self-efficacy toward
service and civic participation. They utilized traditional scale development, confirmatory factor analysis (CFA), and simultaneous factor analysis in several populations to ensure the validity and reliability of their instrument to measure AoL criteria for ethics and social responsibility (Weber, Weber, Sleeper, & Schneider, 2004). Another tool offered was a

content-valid assessment exam created to measure business management knowledge (Pesta & Scherer, 2011).
Also, a matrix presented by Harper and Harder (2009)
depicts demonstrated abilities intersected with competency
clusters; the clusters were developed from literature describing valid research into “the kinds of knowledge and skills that
are known to be necessary for success as a practitioner in the
MIS field” (p. 492). However, no statistical measures of validity were provided. Additionally, instances of use of externally validated instruments such as the revised version of the
Defining Issues Test to assess ethical reasoning instruction
in undergraduate cost accounting (Wilhelm & Czyzewski,
2012) and use of the CAPSIM computer simulation to assess
business strategy integration (Garrett, Marques, & Dhiman,
2012) were found.
Measures of Validity in AoL Assessment Models
Various models have been offered for outcomes measurement as part of a processed-based approach for meeting
AoL standards (i.e., Beard, Schwieger, & Surendran, 2008;
Betters-Reed et al., 2008; Hess & Siciliano, 2007) but without statistical evidence or discussion of validity measures.
However, the search of literature found an article by Zocco
(2011) that presented a recursive model to address and document continuous and systematic improvement and discussed
validity issues surrounding the application of recursion to a
process such as AoL. Although helpful for looking at school improvement, the model does not measure the validity of the model itself. Therefore, the review of literature offered several tools and a model with validity calculations; however, no example of an empirically validated system was found.

CASE STUDY: ASSURANCES OF LEARNING
AT THE UNIVERSITY OF TEXAS AT TYLER
During the past five years, the College of Business at the
University of Texas at Tyler (hereafter referred to as college)
has conducted a complete redesign of its AACSB AoL system. To understand the rationale for this design change it
is important to explore several key drivers of this decision,
especially in light of the fact that our prior AoL system was cited as one of our best practices during our last maintenance of accreditation visit. At that visit, the college operated three different and largely unrelated assessment systems: one for
different and largely unrelated assessment systems: one for
the AACSB, one for The Association of Technology, Management and Applied Engineering, and one for the Southern
Association of Colleges and Schools (SACS). In some ways,
these independent assessment systems simplified accreditation reporting: each system was tailored to the specifics of
a single accrediting body and the data associated with one
system were not considered in collaboration with data collected for a different accrediting body. For example, AACSB
and the college’s assessment procedures were treated as completely independent. This approach simplified reporting, but



hindered integrating different assessment data in the larger
curriculum management process.
A second major contextual factor relevant to our AoL
process was feedback from our last AACSB visit that recommended revisions to the vision and mission statements for the
college and AACSB AoL processes. As part of that revision
process, the college clarified its mission and identified five
core values.
Based largely on these contextual factors, faculty determined we were at an ideal point to design a new single
integrated assessment model to meet the needs of each of
our accrediting bodies. Further, we determined that the new
system should be linked to the new mission by incorporating the core values as learning outcomes, and that we
should attempt to assess the validity of the system in terms
of both the theoretical model used to design the system and
the measurement model used to organize the data collection.
These additional steps would allow more confidence in the
evidence-based changes we were making in program structure and course curriculum. The full-scale implementation
began in the 2010–2011 school year; our model is more fully described next.
Faculty-Driven Process
AoL in the college is a faculty-driven process. Oversight of
this process is charged to the AoL committee, a committee
composed of a faculty chair, the undergraduate program director, the graduate programs coordinator, and four at-large
faculty members. The composition of the committee provides
cross-sectional representation of all disciplines and programs
in the college.
The committee works closely with our faculty to ensure
that each learning objective is measured periodically, at least
twice during each five-year period but generally more often. The faculty employ a variety of measurement strategies,
including major field tests, embedded test questions, case
analyses, observation of student presentations, activity logs,
simulations, and other class assignments or projects. Analyses of results guide the committee in its work with the faculty
to develop and implement appropriate actions to ensure curricula and pedagogy are managed in a manner enhancing
student learning and development. Figure 1 illustrates how the AoL assessment process operates in a continuous improvement mode.

FIGURE 1 Assurance of learning curriculum management process at the University of Texas at Tyler (color figure available online).
Conceptual Framework
The AoL system in the college is based on a set of shared core
values: professional proficiency, technological competence, global awareness, social responsibility, and ethical courage, as seen in Figure 2. These mission-based core values form
the framework for our comprehensive, empirically validated
AoL models for both the bachelor in business administration
program and the master of business administration program,
as well as other college programs that are outside the scope
of AACSB accreditation. AoL in the college has evolved


to the point where our current system is second-generation,
that is, it is the culmination of an assessment of the AoL system itself. Many of the best features of the prior system were retained, including assessment of discipline-based
knowledge, communication skills, and the use of quantitative
tools and business technology. The result of this process is
a value-based conceptual framework whose efficacy can be
tested empirically using confirmatory factor analysis. To our
knowledge, our college is the first AACSB-accredited program to design an empirically validated AoL system. Figure 2
depicts the conceptual framework of our AoL system for the bachelor in business administration program.

FIGURE 2 Conceptual framework of bachelor in business administration program at the University of Texas at Tyler (color figure available online).
METHOD
Data Collection
The faculty developed 10 learning objectives to support the
five learning goals of the college. A measurement tool was
designed for each objective, such as the Major Field Test or rubrics. Assessment was conducted within required core business courses that included students across all college of business majors. Students were generally juniors or seniors in one of the business majors. Results were then collected and compiled centrally in an administrative function
within the college.
Analysis Approach

Several of the measurement tools included a number of items
that collectively assessed the specific learning objective.
CFA was conducted to assess the empirical validity of the item measures and learning objectives. CFA was chosen for
assessment because it tests how well the measured variables
represent the constructs (Hair, Black, Babin, Anderson, &
Tatham, 2005).
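In CFA terms, each measurement tool posits a congeneric measurement model: every observed item is a linear function of the latent learning objective plus item-specific error. As a point of reference (this standard formulation is added here for clarity and is not reproduced from the article):

```latex
x_i = \lambda_i \xi + \delta_i, \qquad i = 1, \dots, p
```

where x_i is the ith item measure, ξ is the latent construct (the learning objective), λ_i is the factor loading, and δ_i is the measurement error. CFA evaluates how well the covariances implied by this structure reproduce the covariances actually observed among the items.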
The items that comprise each construct are identified prior to running the CFA. We then confirm or reject that the items properly reflect the construct, in this case the learning objective. CFA was conducted on six of the learning objectives.

TABLE 1
Outcomes

Outcome | Means of assessment | Empirical validity assessment
Professional Proficiency 1: Students demonstrate that they are knowledgeable about current business theory, concepts, methodology, terminology, and practices. | Major Field Test (MFT) | Confirmatory factor analysis
Professional Proficiency 2: Students can prepare a business document that is focused, well organized, and mechanically correct. | Rubric-assessed writing assignment | Confirmatory factor analysis
Professional Proficiency 3: Students are able to deliver a presentation that is focused, well organized, and includes appropriate verbal and nonverbal behaviors. | Rubric-assessed oral presentation assignment | Confirmatory factor analysis
Technological Competence 1: Students demonstrate understanding of information systems and their role in business enterprises. | Section V (Information Systems) subscore from MFT | None (single item)
Technological Competence 2: Students are able to use business software, data sources, and tools. | Rubric-assessed technology project | Sample size too small to run confirmatory factor analysis
Global Awareness 1: Students demonstrate awareness of global issues and perspectives. | Global Awareness Profile (GAP) test, standardized external exam | Confirmatory factor analysis
Global Awareness 2: Students are knowledgeable of global issues and perspectives that may impact business activities. | Rubric-assessed global business case | Confirmatory factor analysis
Social Responsibility: Students exhibit an understanding of social consequences of business activities. | Business case | 10 yes or no questions, cannot assess using confirmatory factor analysis
Ethical Courage 1: Students understand legal and ethical concepts. | MFT, Section VII (Legal and Social Environment) subscore | None (single item from standardized external exam)
Ethical Courage 2: Students make ethical decisions. | Ethics game assessment of decision making | Binary data, unable to assess using confirmatory factor analysis

We were unable to run CFA for the remaining learning objectives because they were measured by a single item, the sample size was too small, or the data were binary, negating the applicability of CFA. Table 1 details the learning objectives, measurement tools, and whether CFA was conducted. In the following section, we discuss the general approach used in the CFA analysis. We then follow with two examples of the CFA analysis; the first example is empirically valid, and the second is not. We used IBM SPSS Amos (version 21; IBM, Meadville, PA) to support the analysis.
For each of the CFA analyses, we followed a three-step
approach documented in many of the leading academic journals: (a) review of the raw data, (b) assessment of model
fit, and (c) assessment of construct validity. Prior to the CFA
analyses, we reviewed the data for sample size, outliers, missing data, and normality. We determined whether the sample size was adequate for the model based on suggested requirements that range from five to 20 observations per variable (Hair et al., 2005).
The existence of outliers, along with their potential impact on normality and the final results, was assessed at both the univariate and multivariate levels by reviewing the Mahalanobis distance (D2) calculation for each case; a case with a substantially different D2 value from the other cases is a potential outlier. Next, we identified the amount of missing data and assessed its potential impact on the analysis. Finally, we assessed normality by
reviewing both skewness and kurtosis at the univariate and
multivariate level. Values of zero represent a normal distribution. For skewness, less than three is acceptable (Chou &
Bentler, 1995; Kline, 2005). For kurtosis, Kline stated that less than 8 is reasonable, with greater than 10 indicating a problem and over 20 an extreme problem.
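These screening steps are straightforward to reproduce outside of commercial software. The sketch below, in Python with pandas, NumPy, and SciPy, is an illustration only (the authors used SPSS and AMOS, and the helper name `screen_data` is hypothetical); it applies the same checks: observations per variable, missing data, univariate skewness and kurtosis against the thresholds cited above, and squared Mahalanobis distances (D2) to flag potential multivariate outliers.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_data(df: pd.DataFrame) -> None:
    """Pre-CFA screening: sample size, missing data, normality, outliers."""
    n, k = df.shape
    print(f"Observations per variable: {n / k:.1f} (suggested range: 5-20)")
    print("Missing values per item:", df.isna().sum().to_dict())

    # Univariate normality: skewness < 3 and kurtosis < 8, per the
    # thresholds cited above (Chou & Bentler, 1995; Kline, 2005).
    print("Skewness:", df.skew().round(2).to_dict())
    print("Kurtosis:", df.kurtosis().round(2).to_dict())

    # Squared Mahalanobis distance D2 for each complete case; a case far
    # from the rest (relative to a chi-square with k df) is a potential outlier.
    x = df.dropna().to_numpy(dtype=float)
    centered = x - x.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    cutoff = stats.chi2.ppf(0.999, df=k)
    print("Potential outlier rows (D2 > cutoff):", np.flatnonzero(d2 > cutoff).tolist())
```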
We evaluated how well the data fit the measurement model
using AMOS software. We used maximum-likelihood estimation, which is a widely used approach and is fairly robust
to violations of normality and produces reliable results under many circumstances (Hair et al., 2005; Marsh, Balla, &
McDonald, 1988). First, we evaluated the chi-square statistic, which measures the difference between the observed and
estimated covariance matrices. It is the only statistic that has
a direct statistical significance and is used as the basis for
many other fit indices (Hair et al., 2005). Statistical significance in this case indicates that an error of approximation or
estimation exists. Many researchers question the validity of the chi-square statistic (Bentler, 1990), so if it is significant, additional indices should be used to evaluate overall model fit. The root mean square error of approximation


(RMSEA) is a standardized measure of the lack of fit of
the data to the model (Steiger, 1990). It is fairly robust in
terms of small sample size (i.e., 250 or less). Thresholds of
.05–.08 have been suggested, with Hu and Bentler (1999) recommending a cutoff close to .06. The Bentler-Bonett (1980) nonnormed fit index (NNFI) was used because it also works well with small sample sizes (Bedeian, 2007). Generally, .90 or better is considered adequate fit, with Hu and Bentler (1999) suggesting a threshold of .95 or better for good fit. The NNFI and comparative fit index (CFI) are incremental fit indices in that they assess model fit by reference to a baseline model (Bentler, 1990; Bentler & Bonett, 1980; Hu & Bentler, 1999). The NNFI and CFI generate values between 0 and 1, with .90 or greater representing adequate fit (Bedeian, 2007; Hu & Bentler, 1999).
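For reference, the standard definitional forms of these indices are shown below (added for clarity; they follow the sources cited above, with χ²_m and df_m referring to the tested model, χ²_b and df_b to the independence baseline model, and N to the sample size):

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_m - df_m,\, 0)}{df_m\,(N-1)}}, \qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_m - df_m,\, 0)}{\max(\chi^2_b - df_b,\, 0)}, \qquad
\mathrm{NNFI} = \frac{\chi^2_b/df_b - \chi^2_m/df_m}{\chi^2_b/df_b - 1}
```

The max(·, 0) terms in the RMSEA and CFI treat a model whose chi-square falls below its degrees of freedom as fitting perfectly; the NNFI is not bounded above by 1 in this way, which is why values slightly above 1 can occur.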
The final step was to assess construct validity, which is
the extent to which a set of measured items accurately reflect
the theoretical latent construct the items were designed to
measure (Hair et al., 2005). The standardized factor loadings
should be statistically significant and .5 or higher (Hair et al.,
2005). Convergent validity was assessed by calculating the average variance extracted (AVE) and construct reliability (CR). The average percentage of variance extracted among a set of construct items is a summary indicator of convergence. AVE of .5 or higher suggests adequate convergence; less than .5 indicates that, on average, more error remains in the items than variance explained by the latent construct (Fornell & Larcker, 1981). High CR indicates that the measures consistently represent the same latent construct. Values of .6 to .7 are acceptable, with .7 or higher indicating good reliability (Nunnally, 1978).
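Both statistics can be computed directly from the standardized loadings. The following minimal sketch (Python used for illustration; it is not the authors' tooling) applies the Fornell and Larcker (1981) definitions to the nine business knowledge loadings reported in Table 3 and reproduces the composite reliability (.853) and variance extracted (.409) reported in Table 2:

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    taking each item's error variance as 1 - loading^2."""
    lam = np.asarray(loadings)
    errors = 1.0 - lam ** 2
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum()))

# Business knowledge loadings from Table 3.
bk = [.590, .703, .644, .232, .723, .737, .545, .552, .837]
print(round(ave(bk), 3))                    # 0.409, matching Table 2
print(round(composite_reliability(bk), 3))  # 0.853, matching Table 2
```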
RESULTS
The purpose of this article is to illustrate the method we used
to assess empirical validity of our learning objectives to aid
other business schools in their AoL journey. It is not our goal
to suggest that our measurement tools or learning objectives
should be universally adopted. Therefore, for illustration of our process, we limit our discussion to the overall fit of our constructs and then discuss in detail two examples of how we used CFA: an example of a valid measure (business knowledge) and an example of a measure that requires some modifications (oral communication).
Table 2 documents the model fit indices for the CFA analyses performed for six of the learning objectives. The learning
objectives for business knowledge, written communication, and global awareness (context, region, and perspectives) are valid for construct reliability and model fit. Therefore, we are reasonably confident that these objectives adequately measure
the learning goals established by the college. The model fit
for oral communication was below suggested thresholds.
Business Knowledge
Students take the Educational Testing Service Major Field Test (MFT) for the bachelor's degree in business in a capstone class in their senior year. The MFT is a widely used
standardized exam for business students. The capstone class
is required for all business majors and the MFT is administered in all sections of the capstone class thus ensuring that all
business majors participate in the assessment prior to graduation. To ensure that students take the exam seriously and
give their best effort, their results are reflected in their course
grade. Two hundred and eighteen responses were obtained
from the exams administered in 2010–2011. Nine composite
scores from the exam are used to assess the overall business
knowledge of the student (see Table 3 for a listing of these
nine items). A review of the data found no missing data or
significant outliers. The ratio of responses to variables (218/9 ≈ 24) was well above the recommended range of 5–20 observations per variable. The kurtosis and skewness statistics were less than the recommended thresholds of 8 and 3, respectively, indicating only a slight departure from normality. Therefore, we were reasonably confident
in proceeding to the next phase of the analysis, evaluating
model fit by running a confirmatory factor analysis on the
item measures.
The chi-square was significant; however, the RMSEA was .077, below the recommended threshold of .08. The RMSEA is parsimonious in that it considers the impact of the number of variables in its calculation, so it is a better indicator of

TABLE 2
Fit Indices

Learning objective | Composite reliability | Variance extracted | χ2 | RMSEA | RMSR | NNFI | CFI | df
Business knowledge (n = 218) | .853 | .409 | 61.35* | .077 | 11.66 | .932 | .949 | 27
Written communication (n = 147) | .798 | .498 | 6.163* | .119 | .011 | .926 | .975 | 2
Oral communication (n = 161) | .62 | .246 | 120.24* | .148 | .033 | .693 | .770 | 27
Global awareness context (n = 151) | .85 | .397 | 19.65* | .089 | .336 | .958 | .975 | 22
Global awareness region (n = 151) | .89 | .562 | 8.183 | .000 | .192 | 1.003 | 1.000 | 4
Global awareness perspectives (n = 90) | .96 | .73 | 96.643* | .208 | .031 | .867 | .905 | 20

Note: RMSEA = root mean square error of approximation; RMSR = root mean square residual; NNFI = nonnormed fit index; CFI = comparative fit index.
*p < .05.

TABLE 3
Factor Loadings

Business knowledge | | Oral communication |
Indicator | Standardized loading | Indicator | Standardized loading
Accounting | .590** | Progression | .173*
Economics | .703** | Conclusion | –.017
Management | .644** | Projection | .916*
Quantitative analysis | .232* | Delivery | .455*
Finance | .723** | Eye contact | .276*
Marketing | .737** | Gestures | .246*
Law | .545** | Pace | .975**
Systems | .552** | Fillers | .104
International | .837** | |

*p < .05. **p < .001.

model fit than the chi-square. The NNFI and CFI were .932 and .949, respectively, well above the recommended threshold of .90. Overall, the model fit is acceptable.
In assessing construct validity of the items, we noted that all items were statistically significant; however, one item measure, quantitative analysis, fell below the recommended loading of .50. The composite reliability of .853 was well above the recommended threshold of .6, and the variance extracted of .409 was slightly below the recommended threshold of .50. Overall, there is evidence that the item measures adequately reflect the latent construct of business knowledge. However, further analysis is needed to determine the cause of the low factor loading of quantitative analysis. We provide Figure 3 as a visual representation of the CFA model for this construct.

FIGURE 3 Business knowledge confirmatory factor analysis model.
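For readers who wish to replicate this kind of analysis without AMOS, the one-factor model in Figure 3 can be expressed in lavaan-style syntax. The sketch below uses the open-source semopy package for Python as an alternative; the package choice, file name, and column names for the nine MFT subscores are assumptions for illustration, not the authors' setup.

```python
import pandas as pd
from semopy import Model, calc_stats

# One latent construct (business knowledge) measured by the nine MFT
# composite scores listed in Table 3 (lavaan-style measurement syntax).
DESC = ("BusinessKnowledge =~ accounting + economics + management"
        " + quantitative + finance + marketing + law + systems + international")

# Hypothetical data file: one row per student (n = 218), one column per subscore.
df = pd.read_csv("mft_scores.csv")

model = Model(DESC)
model.fit(df)  # maximum-likelihood estimation is the default objective

print(model.inspect(std_est=True))  # parameter estimates, incl. standardized loadings
print(calc_stats(model).T)          # chi-square, df, RMSEA, CFI, TLI, and more
```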
Oral Communication
The students' oral communication skills were measured by a rubric-assessed oral presentation assignment administered in the business communication course, which is part of the required core curriculum. The business communication professor completed the assessment for all sections of the class; there is only one business communication professor, thus ensuring consistency of the measurement process. The rubric comprises nine item measures (see Table 3 for a list of these items). Data were obtained from the 2010–2012 assessments, which resulted in 161 observations for the CFA of the
oral communication construct. A review of the data found
two missing observations for the item measure conclusion
and one missing observation for eye contact. Because the
impact of missing data was small, we used the mean imputation method for the missing observations. We also identified
one potential outlier; we deleted the case on a trial run and
found that it did not have a significant impact on normality
or the results. The sample size ratio (161 / 9 = 17.9) is in
the recommended threshold range of 5–20. The multivariate
kurtosis statistic of 23.393 was well above the recommended threshold of 8, which provides evidence of a departure from
normality. The univariate skewness statistics were below the
threshold of 3.
Because of the nonnormal distribution, we attempted to run an asymptotically distribution-free estimation method to conduct the CFA. Unfortunately, this resulted in an inadmissible solution because of the existence of a negative error variance. Therefore, we used the maximum-likelihood estimation technique, which often provides reasonable results with departures from normality. The χ2 was significant; however, the adjusted chi-square ratio (χ2/df) was 4.453, below the recommended threshold of 5. The RMSR was .033, well below the recommended threshold of .10, while the RMSEA was .145, above the recommended threshold of .08. The NNFI and CFI were .693 and .770, respectively, below the recommended threshold of .90. Our overall assessment is that the model fit is poor.
The RMSEA, NNFI, and CFI are impacted by the model complexity, which could be an indication that the number of variables in the model affected model fit. We provide Figure 4 as a visual representation of the CFA model for oral communication.

FIGURE 4 Oral communication confirmatory factor analysis model.
Two of the items were not statistically significant: conclusion and fillers. Only two of the nine items exceeded the recommended loading threshold of .50: projection and pace. The composite reliability of .62 met the recommended threshold of .6. The variance extracted of .246 was well below the recommended threshold of .50. Our conclusion is that the item measures do not accurately reflect the latent construct of oral communication.
DISCUSSION
The purpose of this article is to highlight an AACSB-accredited program as a case study of its design of an empirically validated AoL system and to demonstrate how empirical validity improved our AoL system. When appropriate, we used confirmatory factor analysis to validate the measurement instruments used to assess student achievement of
process used to assess the empirical validity of the learning
objectives. Finally we illustrated the process by discussing
the results of the validation process on two learning objectives.
For the business knowledge learning objective, we found
that both the model fit and construct reliability are valid.
However, we noted that the factor loading for quantitative analysis was much lower than those of the other item measures. In reviewing the raw scores, not surprisingly, we found that our students do not perform as well in the quantitative analysis topic when compared with the other topics covered by the MFT, indicating that even though we had a valid measure of business knowledge, our students need improvement in their quantitative skills. These results prompted us to examine the curriculum of the class where most of the quantitative methods are taught.
For the oral communication learning objective, we found
that the model fit was poor, construct reliability was low,
and many of the item measures from the rubric did not load. These results prompted us to examine the measurement tool used to assess achievement of the oral communication competency. Corrective action includes a review of the rubric and also of the process used to collect the data.

CONCLUSION AND LIMITATIONS
We found value in and therefore will continue to empirically
validate the AoL learning objectives using CFA. The validation process has increased the support of the AoL system by
faculty who understand and are trained in the research process. We have received positive feedback from the AACSB
and higher education associations on our validation process.
Most importantly, it provides confidence in the tools we are
using to measure student achievement of the learning objectives. Now that the process and the supporting models are
in place, it will be relatively simple to continue the validation process in order to continually improve. Because of the
method we used to capture the assessments, we are able to
use the same validation process for both AACSB and SACS
accreditations.


We are continually striving to improve our validation
process. One limitation is that we are unable to simultaneously assess the empirical validity of the entire model of
the learning objectives. Assessment is conducted by class, rather than by individual student across all classes. Therefore,
we do not have data for one student for all learning objectives.
To address this limitation, we are evaluating both in-house
and commercially developed databases to track student data
across classes and semesters. For example, the University
of Central Florida (UCF) developed an in-house database
for tracking of student data (Moskal, Ellis, & Keon, 2008).
Taskstream (www1.taskstream.com) is a commercially available database for tracking data.
REFERENCES
Baker, D. L., Ni, A. Y., & Van Wart, M. (2012). AACSB assurance of learning: Lessons learned in ethics module development. Business Education
Innovation Journal, 4, 19–27.
Beard, D., Schwieger, D., & Surendran, K. (2008). Integrating soft skills
assessment through university, college, and programmatic efforts at
an AACSB accredited institution. Journal of Information Systems, 19,
229–240.
Bedeian, A. G. (2007). Even if the tower is “Ivory,” it isn’t “White”: Understanding the consequences of faculty cynicism. Academy of Management
Learning & Education, 1, 9–32. doi:10.2307/40214514
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of
fit in the analysis of covariance structures. Psychological Bulletin, 88,
588–606.
Betters-Reed, B. L., Nitkin, M. R., & Sampson, S. D. (2008). An assurance of
learning success model: Toward closing the feedback loop. Organization
Management Journal, 5, 224–240.
Chou, C. P., & Bentler, P. M. (1995). Estimates and tests in structural
equation modeling. In R. Hoyle (Ed.), Structural equation modeling (pp.
37–59). Thousand Oaks, CA: Sage.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models
with unobservable variables and measurement error. Journal of Marketing
Research, 18, 39–50.
Gardiner, L. R., Corbitt, G., & Adams, S. J. (2010). Program assessment:
Getting to a practical how-to model. Journal of Education for Business,
85, 139–144. doi:10.1080/08832320903258576
Garrett, N., Marques, J., & Dhiman, S. (2012). Assessment of business
programs: A review of two models. Business Education & Accreditation,
4, 17–25.

Hair, J. F., Black, B., Babin, B., Anderson, R. E., & Tatham, R. L.
(2005). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Harper, J. S., & Harder, J. T. (2009). Assurance of learning in the MIS
program. Decision Sciences Journal of Innovative Education, 7, 489–504.
doi:10.1111/j.1540-4609.2009.00229.x
Hess, P. W., & Siciliano, J. (2007). A research-based approach to continuous
improvement in business education. Organization Management Journal,
4, 135–147.
Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance
structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.
Kline, R. B. (2005). Principles and practice of structural equation modeling
(2nd ed.). New York, NY: Guilford.
LeClair, D. (2012). Broadening our view of assurance of learning.
Retrieved from http://aacsbblogs.typepad.com/dataandresearch/2012/
02/broadening-our-view-of-assurance-of-learning.html
Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit
indices in confirmatory factor analysis: Effects of sample size. Psychological Bulletin, 103, 391–411.
Martell, K. (2007). Assurance of learning (AoL) methods just have
to be good enough. Journal of Education for Business, 82, 241–
243.
Moskal, P., Ellis, T., & Keon, T. (2008). Assessment in higher education
and the management of student-learning data. Academy of Management
Learning & Education, 2, 269–278. doi:10.2307/40214542
Nunnally, J. C. (1978) Psychometric theory (2nd ed.). New York, NY:
McGraw-Hill.
Pesta, B., & Scherer, R. (2011). The assurance of learning tool as predictor and criterion in business school admissions decisions: New use
for an old standard? Journal of Education for Business, 86, 163–170.
doi:10.1080/08832323.2010.492051
Romero, E. J. (2008). AACSB accreditation: Addressing faculty concerns.
Academy of Management Learning & Education, 7, 245–255.
Sampson, S. D., & Betters-Reed, B. L. (2008). Assurance of Learning
and outcomes assessment: A case study of assessment of a marketing
curriculum. Marketing Education Review, 18, 25–36.
Steiger, J. H. (1990). Structural model evaluation and modification: An
interval estimation approach. Multivariate Behavioral Research, 25, 173–
180.
Weber, P., Weber, J. E., Sleeper, B. J., & Schneider, K. C. (2004). Self-efficacy toward service, civic participation and the business student:
Scale development and validation. Journal of Business Ethics, 49, 359–
369.
Wilhelm, W. J., & Czyzewski, A. B. (2012). Ethical reasoning instruction
in undergraduate cost accounting: A non-intrusive approach. Academy of
Educational Leadership Journal, 16, 131–142.
Zocco, D. (2011). A recursive process model for AACSB assurance of
learning. Academy of Educational Leadership Journal, 15, 67–91.
