Bagamery, B. D., Lasik, J. J., & Nixon, D. R. (2005). Determinants of success on the ETS Business Major Field Exam for students in an undergraduate multisite regional university business program. Journal of Education for Business, 81(1), 55-63. DOI: 10.3200/JOEB.81.1.55-64

Determinants of Success on
the ETS Business Major Field Exam for
Students in an Undergraduate Multisite
Regional University Business Program
BRUCE D. BAGAMERY
JOHN J. LASIK
DON R. NIXON

CENTRAL WASHINGTON UNIVERSITY
SEATTLE, WASHINGTON

ABSTRACT. Extending previous
studies, the authors examined a larger
set of variables to identify predictors of
student performance on the Educational Testing Service Major Field Exam in
Business, which has been shown to be
an externally valid measure of student
learning outcomes. Significant predictors include gender, whether students
took the SAT, and grades (using grade
point averages for either business core
and preadmission courses, or loadings
on four factors identified as general,
quantitative, accounting, and management). Site (on- or off-campus), age,
transfer status, and major (accounting
vs. business administration) were not
significant. In a further extension of
previous studies, the authors discovered significant interactions between
some predictors.


One continuing task of all collegiate
business programs is assessment.
How does a program confirm that it is
meeting its objectives? Mirchandani,
Lynch, and Hamilton (2001) approached
this problem from the point of view of
internal users (i.e., administrators and
faculty). They developed a paradigm in
which an educational institution applies
processes to inputs to obtain outputs.
Processes include all the experiences the
institution provides for the students, most
notably a variety of academic courses in
a curriculum. Inputs include all characteristics of the students who attend, such
as high school grade point average
(GPA), SAT scores, and age. Outputs
include various performance measures

such as students’ final GPAs, admission
to graduate programs, starting salaries, or
performance on standardized tests.
Mirchandani et al. (2001) employed
the Educational Testing Service’s (ETS)
Major Field Exam in Business as a primary output measure and investigated
what types of input and process variables had an effect on students’ scores
on the ETS exam. They found that SAT
scores, and occasionally some grade
factors, influenced ETS scores for those
students who took the SAT exam. Some
grade factors and transfer GPAs significantly influenced ETS scores for transfer students who did not take the SAT.
Our study extended the work of Mirchandani et al. (2001) in several ways.

We expanded the set of potential input
variables to include student age, transfer
status, and gender. We also expanded
potential process variables to include
on- or off-campus test sites, and choice
of major. Using a dummy variable

regression approach, we tested to see if
different input or process categories significantly affected students’ ETS exam
scores. We also extended prior studies
by testing for interactions between significant predictor variables.
Need for Assessment
of Learning
The end-of-major learning assessment program in the College of Business (COB) at Central Washington University attempts to answer the following
questions: (a) Do the graduates of our
business programs demonstrate a reasonable level of understanding of standard business school concepts? and (b)
How does the level of understanding of
our COB graduates compare to that of
other senior-level business students
nationwide?
The answer to these questions has
become increasingly important to both
external and internal stakeholders.
External stakeholders include government officials, university boards of
trustees, and accrediting agencies, such
as the Association to Advance Collegiate Schools of Business (AACSB)
International. These stakeholders use
assessment data to make judgments
about program quality and funding
decisions. Internal stakeholders, especially faculty and administrators, use
the data in their effort to improve the
effectiveness of the curriculum.
Literature Review
Educational assessment continues to
be a major topic on college campuses as
programs strive to achieve or maintain
accreditation. In a review, Allen
(2004) concluded that published tests
may not be very useful as a direct measure for program assessment unless
those tests are aligned with program
learning objectives. In addition, Firestone, Monfils, and Schorr (2004)

expressed concern about the possibility
that there will be pressure on instructors
to improve student scores on assessment
tests. This could lead to more teaching
of material found in test questions as
opposed to providing a broader range of
educational experiences, which might
not improve test scores.
In their article concerning marketing
curriculum content at AACSB-accredited schools of business, Barnett, Dascher,
and Nicholson (2004) reported that written and oral communications were the
most important core areas for marketing.
While published tests may be effective
at measuring subject matter knowledge,
they may not be sufficient to assess the
specific program objectives that are
most important for each of the various
departments in a college of business.
Pfeffer and Fong (2002) argued that
attainment of a business degree should be

related to the various measures of career
success. Graduates of business programs
should be better prepared for a career and
should be compensated at a higher level.
They found that, except for graduates
from prestigious or top-ranked programs,
research indicates that there is little correlation between having a business degree
and attaining economic success. They
even argued that what is being assessed
might be the quality of the student rather
than the quality of the program. They
believed that assessment methods should
be redirected toward meeting the needs of
business in order to enhance the career
prospects of business school graduates.
Educational Benchmarking, Inc. (EBI) took a different approach to the
assessment of business programs. They
have concentrated on the methods to use
in measuring learning outcomes.
Instead of direct measures of actual cognitive learning, they have developed a
method of measuring student self-perceived cognitive learning. They developed this approach from research that
reports a high degree of correlation
between direct and indirect measures of
student learning (Chesebro &
McCroskey, 2000; Richmond, Gorham,
& McCroskey, 1987). Students were
surveyed as to their satisfaction with all
aspects of their business school experience. Instead of looking at factual retention of data by students, the school can
concentrate on those areas that receive
low satisfaction scores.
A more traditional method of assessment of student learning involves standardized testing of general knowledge of
a subject area. ETS provides a series of
major field achievement tests, which are
easily administered, relatively inexpensive, and can be compared with results
from other institutions. There are limitations to the use of standardized tests, as

noted by Mirchandani et al. (2001). Two
of the most serious concerns are that performance on a standardized exam is best
predicted by performance on other standardized exams, and that exam performance may not be correlated with actual
knowledge. Even with the limitations,
Mirchandani et al. concluded that standardized tests are attractive vehicles for
program assessment for several reasons:
There is no need for administrators or
professors to create or validate them,
they are easy to administer, they are
graded as part of the cost, they provide
comparable data, and they allow testing
to begin immediately.
Mirchandani et al. (2001) conducted a
study at Rowan University in Glassboro,
New Jersey, where the ETS Major Field
Test in Business was administered to
graduating seniors as part of an assessment effort. They found that two types
of variables were related to performance
on the ETS exam: input variables (SAT
scores, transfer GPA, and gender), and

process variables (grades in quantitative
courses). They concluded that, because
of the nature of their institution, it was

not viable to increase the entrance
requirements to improve ETS exam
scores. Their institution decided instead
to improve their coverage of the material in the quantitative area (calculus,
accounting, finance, operations management, and management information systems) even though the process variable
effect sizes were smaller than those for
the input variables.
While overall GPA was significantly
related to both the input and process variables, the SAT score alone explained
most of the variation in the ETS results
for the subset of the students who took
the SAT. The conclusion that Mirchandani et al. (2001) reached was that overall GPA has strong internal validity and
provides a measure of student performance related to the curriculum of the
school. The ETS scores provided external
validity and enabled them to benchmark
against national norms.
In their article on the use of standardized assessment tests, Black and Duhon
(2003) reviewed how state governments
and institutional boards have increasingly
begun holding colleges and universities
accountable for expenditures. They
traced the evolution of the assessment
process from the initial emphasis on
structure and processes in the 1970s to
the current concentration on learning outcomes and continuous improvement. Of
particular interest were the changes in the
requirements of the AACSB accreditation standards, which focused on evidence of learning as opposed to intended
outcomes (AACSB, 2004). This shift in
emphasis has created the need for effective means of measuring and improving
student achievement. Standardized
assessment tests, either developed locally
or from a national testing service, can
provide a reliable and valid method for
institutions to measure the performance
of their students. While locally developed
tests can be tailored to specific requirements of a program, national tests enable
an institution to compare results for its
students with the results for students
from other institutions.
Black and Duhon (2003) administered the ETS Major Field Test in Business to 456 students enrolled in a
senior-level business core class. Their
sample excluded students who had at
least 6 hr of business core classes


remaining to be taken, and those students who had not taken the American
College Testing Program (ACT) exam,
resulting in a final sample of 297 students. To develop a predictive model of
determinants of ETS test performance,
they gathered additional information
about the students in the study. Other
studies (Allen & Bycio, 1997; Mirchandani et al., 2001) showed that several
variables were strongly related to ETS
exam scores. These variables were
GPAs (for business core, for economics,
accounting, and overall), ACT score,
gender, and major. In their study, Black
and Duhon added age as a proxy for
work experience.
Black and Duhon’s (2003) regression
model explained 58% of the variation in
ETS scores for their student sample. The
predictor variables in their final model
were business core GPA, composite ACT
score, age, and dummies for male gender
and management major. Their results
indicated that a one-point increase in
GPA was associated with a 7.49 increase
in ETS exam score, while a one-point
increase in ACT score was associated
with a 1.51 increase. Each additional
year of age implied an ETS test score .71
points higher. Controlling for the other
variables, they found that men can be
expected to score 3.79 points higher than
women, and management majors can be
expected to score 3.57 points lower than
nonmanagement majors.
Black and Duhon (2003, p. 94) concluded that their “regression coefficients provide credible, useful information. In addition, the high significance
levels of R2 and the independent-variable coefficients indicate that ETS
Major Field Test in Business scores
have criterion validity for our program.”
From these results from their own institution, they inferred that standardized
testing can be used as a method for evaluation and enhancement of programs at
other business schools as well. To assist
other schools, they listed 17 different
uses for the standardized test scores.
They classified these uses into either
learning outcomes information, which
is used for program and student assessment, or continuous improvement information, which is used for program and
student development, and provided several examples.

Use of the Educational Testing
Service Field Exam in Business
as an Assessment Tool
Following a review of several external examinations, the COB at Central
Washington University recently adopted
the ETS Major Field Examination in
Business. This exam has several noteworthy features. First, it is widely used
in business programs; thus, it provides
nationally normed data for a large number of institutions and programs. Second, the test is designed to provide
aggregate performance indicators in
eight key business areas: accounting,
economics, finance, international
issues, legal and social environment,
management, marketing, and quantitative business analysis.
The ETS exam was first administered
to all students enrolled in the Central
Washington University COB senior-level
capstone course (MGT 489, Strategic
Management) during the Fall 2002 quarter. Following the Fall 2002 pilot test, the
COB decided to administer the ETS
exam to every student who enrolls in
MGT 489, beginning Fall 2003.
METHOD
Description of the Sample
Our data set of 169 observations
included students who took the ETS
Business Major Field exam as a component of the college’s capstone Strategic
Management course during Fall 2002 or
Fall 2003. Students cannot register for
this course until they have completed all
the other required business core classes.
The data set comprised eight sections of
this course: four sections at the Central
Washington University main campus
location in Ellensburg, and two sections
each at two off-campus university centers
in the greater Seattle area at Lynnwood
and SeaTac.
The names and brief definitions of
the variables used in the study appear in
Table 1. Many variables are self-explanatory; however, some additional
information follows about some of the
variables: Site (or section) dummies are
included for the second section in
Ellensburg, ELL2, and for the two off-campus sites. The base section, ELL1,

does not need a dummy specification.
The university records two final grade
point averages for each student; GPA
includes only those courses taken at the
university, whereas GPA W TRANS
includes grades of courses at other
schools where students obtained credit
toward graduation at the university.
The preadmission course GPA
(PRAD_GPA) is the average grade in
seven courses prospective applicants
must take before they may apply for a
major in the COB. Similarly, the core
course GPA (CORE_GPA) is the average grade in the seven business core
classes. These 14 course grades are
some of the process variables used in
the study.
Some characteristics of the student
population are evident from the
descriptive statistics in Table 2. The
decimal values of the means of the
dummy variables correspond to percentage of students with that characteristic. For example, the site dummy
mean values of .280 for Lynnwood and
.220 for SeaTac indicate that 28% and
22% of the students, respectively,
attended at the two off-campus sites;
therefore, the remaining 50% were students at the main campus. The mean
student age was 26.79 years, 45% of the
students were over 24 years old, and
56% were men. Consistent with the
nature of a program accepting a large
number of transfer students, primarily
from community colleges, only 51 of
the 169 students (30%) took the SAT at
some earlier date.
Given several exceptional situations
at our university, there is not a strict
relationship between transfer student
status and whether or not the student has
taken the SAT. In other words, it is not
the case that every native student (one
who has taken classes only at our institution) took the SAT and that every
transfer student did not.
Several students took the ACT instead
of, or in addition to, the SAT, and some
native students were admitted having
taken neither exam. More specifically,
of 112 transfer students, 14 took the
SAT only, 2 took the ACT only, 2 took
both exams, and 94 took neither exam.
Of 57 native students, 30 took the SAT
only, 7 took the ACT only, 5 took both
exams, and 15 took neither exam.
TABLE 1. Variable Definitions for Students Who Took the Educational
Testing Service (ETS) Business Major Field Exam During 2002 or 2003
at a Regional University Undergraduate Business Program

                                                                     Significant predictor of ETS performance
Variable       Definition                                           Current study   Mirchandani, Lynch, & Hamilton (2001)

Input
  AGE          Student age in years at time of exam                 N               ?
  OVER24       Dummy for age over 24 (a)                            N               ?
  MALE         Dummy for male (b)                                   Y               Y
  TRANSF       Dummy for transfer student (c)                       N
  SAT          SAT test score                                                       Y
  TOOK_SAT     Dummy for took SAT (d)                               Y
Process
  ELL2         Dummy for site Ell2 (e)                              N
  LYN          Dummy for site Lyn (f)                               N
  SEA          Dummy for site Sea (g)                               N
  ACCTG        Dummy for Accounting major (h)                       N
  ACCT_I       Accounting I grade
  ACCT_II      Accounting II grade
  ECON_MIC     Microeconomics grade
  ECON_MAC     Macroeconomics grade
  STAT_I       Statistics I grade
  LAW          Business Law grade
  CALC         Calculus grade
  FIN          Finance grade
  MKT          Marketing grade
  MGT          Management grade
  STAT_II      Statistics II grade
  PROD         Production grade
  MIS          Management Information Systems grade
  STR_MGT      Strategic Management grade
  Fac1         General grade factor                                 Y
  Fac2         Quantitative grade factor                            Y               Y (Quantit.)
  Fac3         Accounting I grade factor                            Y               N (Qualit.)
  Fac4         Strategic Mgt grade factor                           Y               N (Econ.)
  PRAD_GPA     Grade Point Average in 7 preadmission courses (i)    Y
  CORE_GPA     Grade Point Average in 7 business core courses (j)   Y
Output
  ETS          ETS Business Major Field Test score
  GPA          Final GPA excluding transfer courses
  GPA W TRANS  Final GPA including transfer courses

Note. N = 169. Grades for all courses were recorded on a 4-point scale, with 4.0 indicating the
highest grade (A). (a) 1 = over 24. (b) 1 = male. (c) 1 = transfer. (d) 1 = took SAT, 0 = did not.
(e) 1 = site Ell2. (f) 1 = site Lyn. (g) 1 = site Sea. (h) 1 = Accounting, 0 = Business.
(i) ACCT_I, ACCT_II, ECON_MIC, ECON_MAC, STAT_I, LAW, CALC. (j) FIN, MKT, MGT, STAT_II, PROD, MIS, STR_MGT.

In some of the following analyses, we used
a dummy variable TOOK_SAT, which
indicated whether a student took the
SAT exam. If we instead used a different dummy TOOK_EITHER, indicating
that a student took either or both exams,
results were almost identical, so we did
not report the TOOK_EITHER results.

Adjusted Analysis

Before analyzing the data, we needed to adjust for missing values for a small share (< 5%) of the grades. Some grades were missing, for example, because of challenges in translating a course that a transfer student took at a previous institution into the university's course listings. Because grade distributions are frequently skewed rather than normally distributed, we followed the suggestion of McDermeit, Funk, and Dennis (1999) and replaced missing values not with the mean but with the median. Also, because grading differences may appear between sites and sections, each missing grade was replaced by its corresponding median from the same section. Unlike Mirchandani et al. (2001) in their similar study, we chose not to standardize the variables, in order to facilitate interpretation of the results.
RESULTS

Table 3 shows the correlations among
the 14 grades, the four GPAs, and ETS
Major Field Exam in Business score.
No grade or GPA was particularly highly correlated with ETS; the highest correlation, ETS exam score with
PRAD_GPA, was only .539. In addition, there was an interesting lack of
multicollinearity among the 14 grade
variables, with the highest correlation
being .633 between Accounting I and
Accounting II. Only 15 of the 91 correlations were above .5.
We used factor analysis to reduce the
number of independent grade variables.
Table 4 shows the results, including the
loadings of each grade variable on each
factor. Consistent with the recommendations of Raven (1994) and Korth
(1975), factor loadings below .400 were
suppressed. Twelve of the 14 grades
loaded on at least one factor, and only
three (Finance grade, Accounting II
grade, and Microeconomics grade)
loaded on two factors. Calculus grade
and Management Information Systems
(MIS) grade did not load on any factor,
consistent with their lowest correlations
with ETS of .164 and .216, respectively.
The four factors, Fac1 through Fac4,
can be (subjectively) identified as general, quantitative, accounting, and management factors, in that order. Together,
these four factors explained 50.833% of
the variance of all the grades.
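A rough sketch of this reduction step with scikit-learn follows. The authors report maximum-likelihood extraction with varimax rotation and Kaiser normalization; scikit-learn's FactorAnalysis only approximates this (its varimax implementation does not apply Kaiser normalization), so loadings would differ slightly. The grades data frame and its column names are assumptions carried over from the earlier sketch.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def grade_factor_loadings(grades: pd.DataFrame, n_factors: int = 4) -> pd.DataFrame:
    """Fit a four-factor model to the 14 course grades and return the
    rotated loading matrix with loadings below .400 suppressed."""
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
    fa.fit(grades.to_numpy())
    loadings = pd.DataFrame(
        fa.components_.T,  # rows = courses, columns = factors
        index=grades.columns,
        columns=[f"Fac{i + 1}" for i in range(n_factors)],
    )
    # Suppress small loadings, following Raven (1994) and Korth (1975)
    return loadings.where(loadings.abs() >= 0.400)

# Factor scores for use as regression predictors:
# scores = FactorAnalysis(n_components=4, rotation="varimax").fit_transform(grades.to_numpy())
```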
TABLE 2. Descriptive Statistics

Variable            Minimum    Maximum          M         SD

Course
  ACCT_I                1.7        4.0       3.120       .739
  ACCT_II               0.0        4.0       2.921       .771
  ECON_MIC              1.3        4.0       2.921       .757
  ECON_MAC              1.3        4.0       2.896       .704
  STAT_I                1.7        4.0       3.005       .721
  LAW                   1.7        4.0       3.053       .697
  CALC                  1.7        4.0       2.983       .705
  FIN                   0.7        4.0       2.543       .829
  MKT                   0.7        4.0       3.092       .693
  MGT                   1.7        4.0       3.041       .647
  STAT_II               0.7        4.0       2.980       .704
  PROD                  0.7        4.0       2.970       .722
  MIS                   1.3        4.0       3.276       .582
  STR_MGT               0.7        4.0       2.998       .716
Testing site
  ELL2                  0.0        1.0        .330       .472
  LYN                   0.0        1.0        .280       .452
  SEA                   0.0        1.0        .220       .415
Demographics
  AGE                  19.0       50.0      26.790      6.901
  OVER24                0.0        1.0        .450       .499
  MALE                  0.0        1.0        .560       .498
  ACCTG                 0.0        1.0        .330       .470
  TRANSFER              0.0        1.0        .660       .474
  TOOK_SAT              0.0        1.0        .300       .460
Test
  ETS                 120.0      187.0     156.460     14.234
  SAT                 720.0    1,360.0   1,033.920    137.027
Grade Point Average (GPA)
  GPA                 2.009        4.0       3.008       .467
  GPA_W_TR            2.146        4.0       3.031       .434
  PRAD_GPA              1.9        4.0       2.986       .517
  CORE_GPA            2.043        4.0       2.986       .471

Note. N = 169; n = 51 for SAT.

Having determined the factors that reflect the potential contributions that the individual course grades may make to performance on the ETS exam, we used regression analysis to determine which variables significantly helped explain ETS exam scores for our
sample of 169 students. In contrast to
Mirchandani et al. (2001), who ran six
separate regressions for native (male,
female, and combined) students and
transfer (male, female, and combined)
students and commented on the differences in results between equations, we
used dummy variables to assess all such differences within a single equation. The
dummy variable approach enabled us to
test hypotheses regarding significance
of various categorical input and process
variables that may result in differences
in performance.
First, it is illuminating to discuss
which variables did not have a significant effect on ETS scores. The off-campus centers enroll students older than

those on the main campus, and almost
all of those students are transfer students. We thought there might be differences in ETS scores between the main
campus and each off-campus site, but
none of the site dummies was significant. This reassuringly implies that
there are no significant performance differences between students on the main
campus and students at the off-campus
sites. The dummy for transfer students
was also not significant, consistent with
the results of Mirchandani et al. (2001),
who found no differences between
transfer students and native students.
Age also did not have a significant
effect on ETS performance, either if
actual age was used or just a categorical
dummy for students over age 24. In
addition, there was no significant difference between ETS performance of
accounting majors and business majors,
after controlling for the other predictor
variables.
So, what were the significant determinants of performance on the ETS
Business Major Field Exam? Table 5
displays results from linear regressions.
In the basic result equation (top of Table
5), 45.6% of the variability in ETS score
was explained by the grade factors, gender, and whether or not the student previously took the SAT. Coefficients on
all four grade factors identified in Table
4 were positive and significant. The
effect size of the first factor (general)
was about twice that of the other
three factors, consistent with
the results in Table 4, which shows that
8 of the 14 courses loaded strongly on
that factor. Additionally, the MALE
dummy was positive and significant;
with this sample of 169 students, being
a male student added about 8.5 points to
an ETS score, after controlling for
effects of the grade factors.
Another significant variable was the
TOOK_SAT dummy (see Table 5).
Having previously taken the SAT added
about 4.8 points to students’ ETS
scores. Sacks (1997) has shown that students who do well on one standardized
test, such as the SAT, tend to do better
on other standardized tests, such as the
ETS Major Field exams. The significance of this variable may indicate that
students who ventured to take the SAT
could be expected to do better on standardized tests in general than those students who did not take the exam.
Table 5 also shows the six basic predictors augmented by a set of four terms
testing for possible interactions between
gender and the four grade factors. These
interaction terms enabled us to assess
whether the effects of higher grade factor scores on ETS test performance
might be different for men and women.
A priori, we did not have any view as to
which of the four grade factors might
display significant interaction with gender, so Table 5 reports the results,
including all four interaction variables,
Fac1 × MALE through Fac4 × MALE.
The augmented regression explained
49.5% of the variability in ETS performance.

TABLE 3. Correlations Between Educational Testing Service (ETS) Business Major Field Exam, Course Grades, and Grade Point Averages (GPAs)

Variable            1     2     3     4     5     6     7     8     9    10    11    12    13    14    15    16    17    18    19
 1. ETS         1.000
 2. GPA          .452 1.000
 3. GPA W TRANS  .431  .918 1.000
 4. PRAD_GPA     .539  .712  .796 1.000
 5. CORE_GPA     .523  .869  .816  .730 1.000
 6. ACCT_I       .397  .460  .542  .762  .513 1.000
 7. ACCT_II      .405  .611  .663  .818  .580  .633 1.000
 8. ECON_MIC     .455  .596  .648  .748  .604  .485  .570 1.000
 9. ECON_MAC     .533  .553  .610  .763  .612  .516  .571  .588 1.000
10. STAT_I       .257  .486  .515  .612  .432  .308  .465  .339  .295 1.000
11. LAW          .465  .502  .535  .695  .509  .464  .459  .416  .534  .347 1.000
12. CALC         .164  .319  .433  .564  .372  .365  .334  .288  .291  .286  .260 1.000
13. FIN          .483  .714  .702  .667  .798  .446  .573  .533  .551  .331  .512  .367 1.000
14. MKT          .405  .547  .493  .446  .667  .329  .403  .402  .379  .233  .342  .116  .496 1.000
15. MGT          .327  .658  .608  .489  .705  .377  .427  .372  .367  .318  .422  .143  .506  .358 1.000
16. STAT_II      .245  .489  .502  .468  .602  .372  .336  .387  .329  .371  .210  .316  .397  .290  .270 1.000
17. PROD         .347  .653  .607  .582  .724  .474  .494  .492  .503  .212  .402  .308  .512  .539  .512  .229 1.000
18. MIS          .216  .548  .476  .392  .587  .166  .295  .408  .370  .299  .215  .191  .372  .248  .357  .292  .305 1.000
19. STR_MGT      .397  .468  .429  .350  .602  .209  .166  .235  .348  .271  .254  .268  .372  .181  .341  .339  .278  .302 1.000

Note. Correlations above .5 are boldfaced italic in the printed original.



TABLE 4. Factor Analysis Results for Particular Courses

Variable       Factor 1:      Factor 2:          Factor 3:        Factor 4:
               General (a)    Quantitative (b)   Accounting (c)   Management (d)

PROD              0.761
MKT               0.613
FIN               0.569          0.435
MGT               0.518
ECON_MAC          0.503
ECON_MIC          0.497          0.472
LAW               0.429
ACCT_II           0.471                             0.749
STAT_I                           0.566
STAT_II                          0.564
MIS
CALC
ACCT_I                                              0.915
STR_MGT                                                              0.416

Note. Extraction method: maximum likelihood; rotation method: varimax with Kaiser normalization; loadings of less than .4 were suppressed. (a) Individual % variance explained = 19.594; cumulative % variance explained = 19.594. (b) Individual = 13.589; cumulative = 33.182. (c) Individual = 10.067; cumulative = 43.249. (d) Individual = 7.584; cumulative = 50.833.

All four of the interaction terms were negative, implying that as grade factor scores increase, male performance on the ETS exam will increase
less than female performance. One of
the interaction terms, Fac1 × Male, was
highly significant. To facilitate interpretation of the interaction term, Equations
1a and 1b isolate the relationship
between ETS score and the general
grade factor, Fac1, keeping all other
predictor values the same, for women
and men, respectively:
(1a) Women: ETS = 149.78 + 10.27 Fac1 + . . .

(1b) Men: ETS = (149.78 + 8.90) + (10.27 - 5.76) Fac1 + . . . = 158.68 + 4.51 Fac1 + . . .

The intercept for men of 158.68 was
significantly higher than the 149.78
for women because the coefficient on the
MALE dummy was significantly different from
zero. Conversely, the slope for men of
4.51 was significantly lower than the
10.27 for women because the coefficient on the interaction
Fac1 × MALE was significantly different from zero.
The standard partial F test (Bowerman
& O’Connell, 2003) indicated that the set
of four interaction terms significantly (p
= .0208) contributed to the explanation of
ETS scores, over and above the contribution of the six predictors in the basic
model shown (Table 5). We also tested

for interactions between TOOK_SAT and
the four grade factors and for interactions
between gender and TOOK_SAT; no
other significant interactions appeared.
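A sketch of the augmented model and the partial F test follows, continuing the assumptions of the earlier sketches (the basic fit and the data frame df are reused); statsmodels' anova_lm performs the nested-model F comparison.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Augmented model: the basic predictors plus the four gender-by-factor interactions
augmented = smf.ols(
    "ETS ~ (Fac1 + Fac2 + Fac3 + Fac4) * MALE + TOOK_SAT", data=df
).fit()

# Partial F test: do the interaction terms jointly add explanatory power
# over the basic model? (The article reports p = .0208 for this comparison.)
print(sm.stats.anova_lm(basic, augmented))
```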
The bottom half of Table 5 replicates
the results shown in the top half, substituting two process GPA variables for the
four factors obtained in the factor analysis. The results are quite similar but not
as strong. The basic model explained
43.5% of the variability in ETS scores.
Both the preadmission course GPA and
the business core course GPA were significant, and they had similar effect
sizes.
The augmented model including
MALE × GPA interactions explained
46.6% of ETS score variability, and the
partial F test confirmed that the set of
two interaction terms significantly (p =
.0099) added to the explanatory capability of the model beyond that of the original four predictor variables.
Therefore, if we choose to define the
grade process variables as the four grade
factors, we can conclude that the augmented model in the second section of
Table 5 provides the best specification of
the predictors of performance on the
ETS Business Major Field Exam. If we
prefer to summarize the grade process
variables as the two GPAs from the
preadmission classes and the core classes, then the augmented model in the bottom section of Table 5 is superior.

DISCUSSION
The findings of this study were similar to those of Mirchandani et al. (2001).
We found dummy variables MALE and
TOOK_SAT to be highly correlated
with scores on the ETS Business Major
Field Exam. Unfortunately, scores for
the SAT were not available for transfer
students, so all we were able to record
was whether the student took the SAT.
As was the case at Rowan University, it
would be difficult for Central Washington University to significantly increase
the SAT scores of incoming students,
and it would be discriminatory to admit only men
in order to improve scores on the ETS
exam.
In the study by Mirchandani et al.
(2001), the quantitative factor, which
they labeled as a process variable, was
the most significant after the SAT scores
for the subset of students who had taken
the SAT. They reported that it explained
35% of the variance in ETS scores for
the Rowan students, as opposed to Factor 2 (quantitative) and Factor 3
(accounting) that, together, explained
only 24% of the variance in student
scores in the present study. Factor 1
(general) explained 19.6% of the variance, but was composed of a variety of
quantitative and qualitative courses. It is
interesting that grades for accounting I
alone explained 10% of the total variance. It would be relatively easy to
attempt to improve student learning in
this one course, as opposed to improving learning across a wide variety of
courses.
We were somewhat surprised that
student age, transfer status, and choice
of major (business vs. accounting) had
no significant effect on ETS Business
Major Field Test scores because we
had noted instances when these factors
had affected performance in the classroom. The fact that there were no significant differences between scores of
main campus and off-campus students
helped verify that the COB curriculum
is consistently delivered across all
locations.

TABLE 5. Regression Results of Dependent Variable ETS Business Major Field Exam Score

Grade factors, gender, and taking the SAT (a)
R2 = 45.6%; adjusted R2 = 43.6%; F(6, 162) = 22.66; SEE = 10.69

Predictor            β        SE         t        p      VIF
Constant         150.247    1.356    110.81    0.000
Fac1               6.909    1.014      6.81    0.000     1.1
Fac2               3.831    1.089      3.52    0.001     1.1
Fac3               3.180    0.852      3.73    0.000     1.0
Fac4               3.798    1.086      3.50    0.001     1.1
MALE               8.585    1.688      5.09    0.000     1.0
TOOK_SAT           4.769    1.889      2.52    0.013     1.1

Grade factors, gender, taking the SAT, and interactions (b)
R2 = 49.5%; adjusted R2 = 46.3%; F(10, 158) = 15.46; SEE = 10.44

Predictor            β        SE         t        p      VIF
Constant         149.777    1.335    112.19    0.000
Fac1              10.270    1.547      6.64    0.000     2.7
Fac2               3.498    1.654      2.12    0.036     2.6
Fac3               4.539    1.276      3.56    0.000     2.4
Fac4               4.842    1.597      3.03    0.003     2.5
MALE               8.900    1.652      5.39    0.000     1.0
TOOK_SAT           4.090    1.874      2.18    0.031     1.1
Fac1 × MALE       -5.759    2.023     -2.85    0.005     2.6
Fac2 × MALE       -0.059    2.181     -0.03    0.979     2.5
Fac3 × MALE       -2.075    1.692     -1.23    0.222     2.5
Fac4 × MALE       -2.030    2.102     -0.97    0.336     2.3

Grade averages, gender, and taking the SAT (c)
R2 = 43.5%; adjusted R2 = 42.17%; F(4, 164) = 31.58; SEE = 10.83

Predictor            β        SE         t        p      VIF
Constant          94.130    5.820     16.17    0.000
PRAD_GPA           9.328    2.402      3.88    0.000     2.2
CORE_GPA           9.537    2.671      3.57    0.000     2.3
MALE               8.417    1.710      4.92    0.000     1.0
TOOK_SAT           4.385    1.848      2.37    0.019     1.0

Grade averages, gender, taking the SAT, and interactions (d)
R2 = 46.6%; adjusted R2 = 44.7%; F(6, 162) = 23.60; SEE = 10.59

Predictor                β        SE         t        p      VIF
Constant             76.413    8.186      9.33    0.000
PRAD_GPA              9.337    3.558      2.62    0.010     5.1
CORE_GPA             15.342    3.917      3.92    0.000     5.1
MALE                 41.380   11.070      3.74    0.000    45.6
TOOK_SAT              3.797    1.819      2.09    0.038     1.1
PRAD_GPA × MALE      -0.298    4.680     -0.06    0.949    75.7
CORE_GPA × MALE     -10.679    5.204     -2.05    0.042    90.2

Note. ETS = Educational Testing Service; VIF = variance inflation factor; SEE = standard error of estimate.
(a) df for t = 162. (b) df for t = 158; partial F test for Fac1 × MALE through Fac4 × MALE interaction effects: F(6, 158) = 2.9817, p = .0208. (c) df for t = 164. (d) df for t = 162; partial F test for PRAD_GPA × MALE and CORE_GPA × MALE interaction effects: F(4, 162) = 4.7529, p = .0099.

In terms of taking steps for improvement, it would have been easier if this
analysis had identified certain deficient
areas, such as lower ETS scores for a
particular major or a particular site. It
would be easier to devote additional
resources to improving the scores of an
identified group, rather than having to
spread efforts across all of the business
core classes. The results do provide the
program with some internal validity for
grading practices. While grading standards are somewhat more rigorous in
some quantitative classes (apparently
finance in particular, as indicated in
Table 2), the qualitative areas of management, law, marketing, and macroeconomics were represented in Factor 1
(general). Only the MIS and calculus
courses did not load on any of the factors.
External validity for the program was
provided by a comparison with the
results from other universities. The student scores were at the 60th percentile
for the over 300 universities currently
using the ETS exam. Based on the initial results, the COB at Central Washington University will need to continue

to emphasize the learning objectives in
the business core classes. As additional
data are gathered, it may be possible to
identify specific areas for continuous
improvement of the business program.
One approach would be to require
seniors to take a review course with
standardized exams to reinforce learning in subjects that, in some cases, they
took over 3 years earlier. As the COB
identifies specific areas needing further
attention, they may be added to the topics covered in that review course.


This study could be extended in several ways. First, as each term passes,
additional observations can be added to
the data set, which would enhance the statistical power for re-examining predictors that were not significant in the current study,
such as student age, transfer status, and
major. Future studies could also expand
the set of possible process variables to
include students’ selected area of specialization (finance, marketing, management, etc.) within the business
administration area. A third, much more
challenging, but potentially valuable
expansion would attempt to identify
other output variables beyond the ETS
Business Major Field Exam that correlate with future student success.
REFERENCES
The Association to Advance Collegiate Schools of
Business (AACSB). (2004). Eligibility proce-

dures and standards for business accreditation.
Tampa, FL: Author.
Allen, J. S., & Bycio, P. (1997). An evaluation of
the Educational Testing Service Major Field
Achievement Test in Business. Journal of
Accounting Education, 15, 503–514.
Allen, M. J. (2004). Assessing academic programs
in higher education. Bolton, MA: Anker.
Barnett, S. T., Dascher, P. E., & Nicholson, C. Y.
(2004). Can school oversight adequately assess
department outcomes? A study of marketing
curriculum content. Journal of Education for
Business, 79, 157–162.
Black, H. T., & Duhon, D. L. (2003). Evaluating
and improving student achievement in business
programs: The effective use of standardized
assessment tests. Journal of Education for Business, 79, 90–98.
Bowerman, B. L., & O’Connell, R. T. (2003).
Business statistics in practice (3rd ed.). Boston:
McGraw-Hill Irwin.
Chesebro, J. L., & McCroskey, J. C. (2000). The
relationship between students’ reports of learning and their actual recall of lecture material: A
validity test. Communication Education, 49,
297–301.
Firestone, W. A., Monfils, L. F., & Schorr, R. Y.
(Eds.). (2004). The ambiguity of teaching to the
test: Standards, assessment, and educational
reform. Mahwah, NJ: Erlbaum Associates.

Korth, B. (1975). Rotations in exploratory factor
analysis. In D. J. Amick & H. J. Walberg,
(Eds.), Introductory multivariate analysis: For
educational, psychological and social
research (pp. 113–146). Berkeley, CA:
McCutchan.
McDermeit, M., Funk, R., & Dennis, M. (1999).
Data cleaning and replacement of missing values. Retrieved February 23, 2004, from
www.chestnut.org/LI/downloads/training
memos/missing data 1.pdf
Mirchandani, D., Lynch, R., & Hamilton, D.
(2001). Using the ETS Major Field Test in
Business: Implications for assessment. Journal
of Education for Business, 77, 51–56.
Pfeffer, J., & Fong, C. T. (2002). The end of business schools? Less success than meets the eye.
Academy of Management Learning and Education, 1, 78–95.
Raven, M. R. (1994). The application of
exploratory factor analysis in agricultural education research. Journal of Agricultural Education, 35, 9–14.
Richmond, V. P., Gorham, J., & McCroskey, J. C.
(1987). The relationship between selected
immediacy behaviors and cognitive learning. In
M. A. McLaughlin, (Ed.), Communication
yearbook 10 (pp. 574–590). Newbury Park,
CA: Sage.
Sacks, P. (1997). Standardized testing: Meritocracy’s crooked yardstick. Change, 29, 24–32.
