
INTERVIEW/KATHRYN MARTELL


“Assurance of Learning (AoL) Methods
Just Have to Be Good Enough”

BIO. Kathryn Martell (PhD, University of Maryland; BA, University of Chicago) is Associate Dean of the School of Business and Professor of Management at Montclair State University. Prior to 2002, she was Associate Dean for Academic Affairs at the School of Business at Southern Illinois University at Edwardsville (1999–2002). Since the new accreditation standards were passed in April 2003, Dr. Martell has worked closely with the AACSB to help schools meet the assurance of learning (AoL) standards. According to Dr. Martell, more than 700 faculty and administrators from 250 universities have attended the AACSB seminars on assessment of student learning that she facilitates. She is also a frequent speaker at AACSB national and regional conferences. She developed the content for AACSB's online assessment resource center (www.aacsb.edu/ARC) and edited (alongside Dr. Thomas Calderon) the newly released book published by the AACSB and the Association for Institutional Research (AIR), Assessment of Student Learning in Business Schools: Best Practices Each Step of the Way. Dr. Martell recently talked to the Journal of Education for Business's Anjoo Pokharel about program assessment issues in business education.

JEB: Business schools seem to be aware that accreditation standards set by the AACSB are closely tied to program assessment. What, in your view, are schools doing to meet this standard through program assessment (e.g., curriculum innovation)?
Martell: The AACSB standards are
divided into three sections, and one of
the sections is assurance of learning
(AoL), that is, assessment. Schools
must successfully meet all standards to
attain or maintain accreditation. The
AoL standards call for assessment of
student learning for each degree program in the business school. For each
degree program, learning goals must be articulated and evaluated through direct measures. The assessment data must then
be used to improve the curriculum. The
AACSB allowed for a transition period,
which is now over. Starting this year
(2007), schools are expected to have
their assessment plans fully implemented for each degree program.
With regard to curriculum management (now called AoL), there are two main differences from what was required under the previous set of standards. First, under the old standards, the curriculum was evaluated against a prescribed list of topics and skill areas, such as multicultural perspective or ethical reasoning. Second, the key form of documenting curriculum management pre-2003 was to indicate how business schools were teaching these topics, combined with survey data that reflected students' or alumni's perceptions of their learning.

Today, the curriculum must be aligned
with the learning goals that the faculty
establishes for each degree program.
The documentation to meet these standards must focus on students' demonstration of their learning, known as direct measures. This is a major
departure from what was required in the
past, and most business schools did not
have assessment programs that would
meet these specifications when the standards were passed in 2003. Meeting the
AoL standards has required a significant
effort from most business schools.

JEB: Do program assessment scores
affect obtaining or maintaining accreditation? This seems to put pressure on schools to mask problems with their programs. What is being done to prevent that?
Martell: The standards call for
schools to assess student learning and
use that assessment data to improve
their curriculum. At least initially,
schools will not be accountable for the
assessment results. Schools have the freedom to uncover a problem with their students' learning (that they cannot use quantitative methods, for example) as long as that finding is used to make changes in the curriculum. The point is not to evaluate one school's effectiveness against another's. The point is to systematically and routinely evaluate students' performance on learning goals that the faculty deem important and to make improvements, as needed, over time, to improve student learning.


JEB: How close are educators to
standardizing program assessment
methods? Can it work just like SAT,
GMAT, or ETS’s Major Field Achievement Test (MFAT) for Business? Why
or why not?
Martell: AACSB has no interest in
standardizing assessment. Just as the
faculty have the freedom to choose
learning goals that best fit their mission,
so do they have the flexibility to choose
methods—as long as they are direct
measures—to evaluate student learning.
JEB: Do program assessment scores influence school rankings, such as the one published in U.S. News and World Report?
Martell: Not directly.
JEB: Who sets guidelines or gives
training to the faculty to “close the
loop” after program assessment is complete? How are the faculty compensated
for their time and effort in planning,
designing, and implementing program
assessment methods? Or, is program
assessment part of their teaching
responsibility?
Martell: The standards call for
schools to use assessment data to
improve their curriculum, also known as
closing the loop. There is no prescribed
way to close the loop. Alternatives include modifying an existing required course or courses, adding a new course, modifying entrance requirements or a curriculum within a major, or faculty development. For example, let's say that your assessment results indicate that your students' writing skills are not up to the faculty standard. Here are some closing-the-loop options: a new writing course could be required; selected courses across the curriculum could be modified to include more writing; entrance requirements could be modified to include a writing skills test; if students from certain majors underperformed others in writing, that curriculum could be revised to include more writing; if transfer students underperformed native students in writing, an additional writing requirement could be added to the transfer requirements; and faculty development could focus on writing across the curriculum. The choice between these alternatives will
depend on the circumstances of the
business school, but some action is
required. If the faculty has identified a
particular learning goal, it is saying that
it is willing to take accountability, over
time, for students meeting an acceptable
standard for this goal. “We can’t be
expected to teach them how to write” is
not an acceptable response.
With regard to faculty compensation,
this depends on the implementation
model. My survey data indicates that
about half of the time the dean’s office
takes the lead in administering assessment, while in the other half, there is a
faculty member or committee that takes
the key role. Designing and administering an assessment program is not just "service"; it is too time consuming. Because the new standards were passed almost 4 years ago, there has been an increase in the use of release time (typically one course a semester) for a faculty member who is in charge of assessment, and the use of faculty stipends (typically $1,000–$1,500) for 10–20 hours of work in the summer to "do" the assessments, such as using a rubric to assess student writing assignments. With regard to implementing an assessment activity (assigning an individually written case analysis, for example, or including a new international module), in my opinion, this falls within a faculty member's normal teaching responsibility.
JEB: In your opinion, is it true that
faculty members often fail to understand certain nuances of program
assessment such as the importance of
direct and indirect assessment, or the
process of course-embedded assessment? How can schools encourage faculty to adopt the assessment process as a formal part of teaching and service?
Martell: Absolutely. Most business
faculty did not have education courses
included in their PhD curriculum, and
are initially overwhelmed with assessment methodology and language. Training is the solution to this problem and,
not surprisingly, hundreds of business

schools have sent faculty to assessment
seminars over the past 3 years. Faculty’s
main concerns about assessment include
the time involved, the use of results to
evaluate their own teaching, and a loss of freedom in the classroom. Although
assessment clearly does involve a time
commitment, most faculty members will
not be involved with the nuts and bolts
of assessment. Once faculty are made
aware of what assessment involves,
many realize it is far less time consuming than they imagined.
With regard to the second concern,
program assessment data should never
be used to assess individual faculty
members. Although I can understand
that concern, I have never seen program
assessment data used for that purpose.
Finally, assessment programs do not
require standardization across courses—everyone teaching from a common
syllabus, for example—but faculty
should not be surprised if they have to
do something different in their classes
as a result of assessment. The point of
assessment is to diagnose areas for
improvement in student learning in the
business curriculum. As problems are
diagnosed, they will need to be
addressed. That may mean that some courses place a greater emphasis on skill building or on reinforcing knowledge that students learned earlier
in their program. Other faculty members may need to include individually
prepared assignments in their courses
for assessment purposes.
It is hard to imagine a scenario where
an honest, comprehensive program
assessment will not reveal some areas of
needed curriculum improvement.
Schools can facilitate faculty’s involvement in assessment programs through
training, leadership, support, and evaluation. Over time, I would expect that
many schools would formally include
assessment activities as part of how faculty responsibility is defined and evaluated. Program assessment is a requirement for both AACSB and regional
accreditation; some states have requirements as well. To the extent that accreditation is important to a school—and it
is hard to imagine many situations in
which this would not be the case—it is
critical that the school develop supporting systems. Assessment must be faculty driven; therefore, faculty involvement is critical. To ensure that this critical
responsibility is fulfilled and rewarded,
assessment activities must become part
of formal evaluation processes.


JEB: What are the pitfalls of implementing program assessment methods?
In your opinion, how can they be overcome?
Martell: The first point to make
about AoL methods is that the emphasis
is on direct measures of student learning. Surveys can provide some useful
feedback about student satisfaction, but for AoL purposes you are better off using surveys strictly as a secondary measure. Another quick point to make about methods is that the student product used for the assessment has to be an individual product. You cannot use a team-written paper to evaluate writing skills or financial analysis skills, for example, because group projects typically represent the best students' work. Team presentations are acceptable for evaluating oral communication skills as long as students do not self-select. If they do, once again you're just evaluating the best students' work.
Different methods have different
trade-offs. Some require more faculty
resources, while others require more
financial resources. Some can be implemented quickly; others require more
development time. Standardized measures can be quickly implemented, but
can be relatively expensive and are not,
generally speaking, as tied to the
school’s curriculum as are homegrown
measures. Some methods, primarily
those that do not also fill a course
requirement for students, can present
motivational issues.
My advice with regard to methods is
not to strive for the same level of rigor
that is required for scholarly research.
Make an honest effort to assess the
learning goals that are important to the
school. How perfect does a measure have to be to provide the basis for a conclusion on whether students can write, conduct a statistical test, or download and analyze a financial spreadsheet?
Very often, faculty become so preoccupied with rigor and validity of the measures that their AoL efforts get stymied.
Just get started. You will revise as you
gain experience.
Other advice that I offer on methods is, wherever possible, (a) build on something you are already doing, (b) use the same method to gather data on two goals (for example, use a case to assess both writing and problem-solving skills), and (c) gather some descriptive data on the student as you are implementing the assessment that will help you analyze the data later. For example, it's always a good idea to ask students to indicate on the assessment their major or concentration, transfer status, and how many credits they've completed. Collecting descriptive data from the student saves a lot of time when you're trying to analyze the assessment results later.
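To make that point concrete, here is a minimal, hypothetical sketch (in Python, using pandas) of the kind of analysis that descriptive data enables; the column names, scores, and subgroups are invented for illustration and do not come from any actual assessment.

```python
import pandas as pd

# Hypothetical rubric results for a writing assessment (1-4 scale),
# recorded together with the descriptive fields suggested above.
scores = pd.DataFrame({
    "student_id":      [101, 102, 103, 104, 105, 106],
    "major":           ["Finance", "Marketing", "Finance",
                        "Accounting", "Marketing", "Accounting"],
    "transfer_status": ["native", "transfer", "transfer",
                        "native", "native", "transfer"],
    "writing_score":   [3.5, 2.0, 2.5, 3.0, 3.5, 2.0],
})

# Overall result for the learning goal.
print("Mean writing score:", round(scores["writing_score"].mean(), 2))

# Because major and transfer status were captured at assessment time,
# subgroup comparisons are a one-line grouping operation.
print(scores.groupby("transfer_status")["writing_score"].mean())
print(scores.groupby("major")["writing_score"].agg(["mean", "count"]))
```

If such a breakdown showed transfer students averaging noticeably lower, it would point directly at one of the closing-the-loop options described earlier, such as an additional writing requirement for transfer students.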
Finally, always, always, always
choose a method that is going to produce data that you can use. I have seen some psychometric tests (on leadership, multicultural affinity, moral reasoning, etc.) for which I have a difficult time imagining how the results might be used to improve the curriculum. My colleague, Doug Eder, says,
“A pig doesn’t get any fatter merely by
weighing it ... you just end up with a
very annoyed pig!” Don’t waste time,
money, or goodwill by using an assessment method that is not going to produce actionable data.
JEB: You have experience teaching
capstone courses. Do you think a capstone course should be required of all
majors to capture student growth and
maturity in their area of study? Why?
Martell: My PhD is in strategy, and
I’ve taught the capstone course my
entire academic career. Given my background, it’s probably not a surprise that
I think the capstone (strategy) course
plays a very important role in both the
undergraduate and MBA curriculum.
My work in assessment has only
strengthened this conviction. Many schools, including my own, are concluding from their AoL processes that retention of knowledge is a major problem for students. The capstone
course provides us with a venue for
reinforcing critical knowledge in addition to training students about competitive dynamics and how to integrate
across the curriculum. It is no wonder
that many schools, when they go
through their curriculum alignment

exercise, find that the capstone course
incorporates many of their program’s
learning goals.
JEB: You have recently edited a two-volume book on program assessment
published by AACSB/AIR. You must
have come across exciting ideas in the
areas of program assessment. Please
share some of the ideas that struck you
as brilliant.
Martell: I suppose I am a bit like a
movie reviewer when it comes to
assessment. Just as a movie reviewer
who watches hundreds of movies a year
tends to become enamored with a movie
that is different, so too do I get excited
about a unique approach to assessment.
The book on best practices provides many of these examples: Valparaiso's assessment center, Seton Hall's assessment panel, Cal State Fullerton's comprehensive approach to writing assessment, the use of business practitioners to evaluate presentation skills at Eastern Kentucky, and Texas Christian's selection processes. I have provided some other
examples (including a few from my own
school, Montclair State) in the article in
this journal issue (see p. 189), and still
more on the AACSB assessment resource center Web page, including the
Cal State Chico STEPs program, and a
terrific original measure that the faculty
at the College of Business at Sam Houston developed to assess critical thinking.
But, here’s my point: AoL doesn’t
have to be brilliant. It doesn’t even have
to be original. In fact, the assessment
community is very generous and collegial about sharing their methods. It just
has to be “good enough”—good enough
to help you diagnose problems with
your students’ learning. Don’t waste
your time on trying to be brilliant. An honest effort that leaves you with enough energy for the real task at hand (improving your students' learning) is the best approach.
NOTE
Correspondence concerning this interview should be addressed to Dr. Kathryn Martell, Associate Dean and Professor of Management, School of Business, Montclair State University, Montclair, NJ 07043.
E-mail: martellk@mail.montclair.edu
