Journal of Education for Business
ISSN: 0883-2323 (Print) 1940-3356 (Online) Journal homepage: http://www.tandfonline.com/loi/vjeb20
A Multiattributes Approach for Ranking PhD
Programs
Frank R. Urbancic
To cite this article: Frank R. Urbancic (2008) A Multiattributes Approach for Ranking PhD
Programs, Journal of Education for Business, 83:6, 339-346, DOI: 10.3200/JOEB.83.6.339-346
To link to this article: http://dx.doi.org/10.3200/JOEB.83.6.339-346
Published online: 07 Aug 2010.
A Multiattributes Approach for Ranking PhD Programs

FRANK R. URBANCIC
UNIVERSITY OF SOUTH ALABAMA
MOBILE, ALABAMA
ABSTRACT. In its plan to combat the PhD shortage crisis, the Association to Advance Collegiate Schools of Business International (AACSB; 2003) has called for the development of PhD program rankings to serve as incentives for academic institutions to invest more in PhD programs, thereby counterbalancing the disproportionate influence of master of business administration (MBA) rankings on business schools. The author reports on the development of a unique multiattributes approach for objectively ranking PhD programs. The advantage of this approach is an inherently broader consideration for the indicators of quality and reputation of a program as measured by the accomplishments of its graduates. By combining multiple attributes into a ranking metric, this approach emphasizes research quality that is in line with the recommendation stated by the AACSB (2003). Also, because the multiattributes approach incorporates data that are readily available, PhD program rankings can be more efficiently updated annually.

Keywords: doctoral programs, PhD shortage, ranking metric, research reputation

Copyright © 2008 Heldref Publications
The PhD supply shortage for business education has been well documented in recent years. Initial attention was drawn to the problem by the Association to Advance Collegiate Schools of Business International (AACSB) in the landmark report Management Education at Risk in 2002. AACSB responded by creating the Doctoral Faculty Commission. The AACSB (2003) commission presented a comprehensive assessment of the crisis 1 year later and provided recommended actions for addressing the problem in its report Sustaining Scholarship in Business Schools. One of AACSB's key recommendations calls for the development of PhD program rankings. Unlike other business school programs, such as the master of business administration (MBA) program, there are few financial or reputation incentives for academic institutions to invest in PhD programs. According to AACSB (2003), the development of PhD program rankings should provide reputational incentives to stimulate added investments in the programs by business schools, thereby counterbalancing the disproportionate influence of MBA rankings on business schools.
Evidence suggests that rankings matter to prospective PhD students, especially during the early stages of their process of identifying a set of potential programs. In a survey of MBA students who indicated that they might enter a PhD program at some point in the future, Davis and McCarthy (2005) asked the students to rate the importance of factors in selecting programs to which they would apply. According to this survey, one of the most important factors is college ranking. Although readily available, the aforementioned college rankings focus primarily on the MBA programs of business colleges, and therefore a ranking of PhD programs for each one of the major disciplines of business would prove to be much more relevant to prospective students. There are previously published studies that rank PhD programs, but the studies were based only on a single attribute (e.g., a count of either the number of articles published or the number of citations to the published research of a program's graduates). The purpose of the present study is to propose a multiattributes approach for ranking PhD programs. The advantage inherent to this approach is a broader consideration of the indicators for quality and reputation of a program as measured by the accomplishments of its graduates. Because the AACSB (2003) explicitly emphasizes the role of research as a contributor to PhD program quality, the multiattributes approach that is presented in this study includes a ranking metric to recognize the importance of research.
In this study, we demonstrate the multiattributes approach for ranking PhD programs by an application to accounting. However, the same method can be applied to rank the PhD programs for each of the other primary disciplines of finance, management, and marketing. The remainder of this study is organized as follows: First, previous research related to ranking PhD programs is reviewed, and the attributes used in these studies are critically evaluated for suitability. In the second section, relevant attributes are identified and support for their inclusion is discussed. The third section presents the findings from application of the multiattributes approach. Finally, concluding comments on the significance of the findings are discussed.
RELATED RESEARCH
Previously published studies that rank PhD programs in accounting are based only on a single attribute. Where the studies vary is in the specific attribute chosen as the basis for ranking programs. To date, published rankings have been based on the application of an attribute chosen from one of the following: perceptions of program quality, number of published journal articles by graduates, number of published citations to the research of graduates, initial placement record of graduates, graduates' representation on editorial boards of academic journals, and the number of endowed positions held by graduates. A discussion of these attributes follows and includes consideration for their suitability to the purpose of ranking PhD programs.

The earliest PhD-ranking studies for accounting are the surveys of Carpenter, Crumbley, and Strawser (1974) and Estes (1970), which focused on perceptions of doctoral program quality. Both studies relied on a survey questionnaire but differed in their approach. Carpenter et al. provided a list of doctoral programs to 1,190 faculty members with a request for an assessment of perceived quality for each program based on a 4-point scale. Estes also provided a list of doctoral programs, but participants were asked to rank only the top programs from 1 to 10. These survey studies were soundly criticized by Morton (1975), Zeff and Rhode (1975), and Rhode and Zeff (1970), primarily for the lack of a consistent standard or defined criteria on which to evaluate quality but also for inherent problems of bias that significantly limit the usefulness of the survey approach as a suitable basis for ranking doctoral programs.
Another technique for ranking PhD programs is based on a count of the number of published journal articles by graduates. This approach serves as the basis for ranking in studies by Bazley and Nikolai (1975), Bublitz and Kee (1984), Hasselback and Reinstein (1995), Jacobs, Hartgraves, and Beard (1986), and Stevens and Stevens (1996). Differences among the rankings provided by the studies are the result of differences in choice of journals and the time periods examined. For example, Bublitz and Kee counted articles from the largest number of journals (69) but for the shortest period of time (5 years). Compared with the study by Bublitz and Kee, the studies by Bazley and Nikolai and by Jacobs et al. used longer time frames (7 and 13 years, respectively) but counted articles published in a very small group of journals (4 and 8 journals, respectively). Journal article count studies that are based on the longest time periods have been by Hasselback and Reinstein (1995), who examined 41 journals for a period of 15 years, and by Stevens and Stevens, who examined 40 journals for a period of 19 years. A key difference between the latter studies is that Hasselback and Reinstein adjusted their counts for coauthorships, whereas Stevens and Stevens counted a coauthored article as a whole article for each author regardless of the number of authors on the article.

The wide differences of opinion regarding how to identify an appropriate set of journals, how to choose the right time frame, and whether it is fitting to adjust for coauthorships all combine to limit the usefulness of article counts as a base for ranking PhD programs. For example, researchers might argue that a count should be based only on articles that are published in top-tier journals. However, a study by Smith (2004) provided empirical evidence that not all the articles in the top journals are top articles. Another weakness of article count as an attribute for ranking program quality is that the approach ignores important research contributions that are published as either books or monographs as opposed to journal articles. On the basis of a questionnaire survey of 2,135 accounting academicians, Heck and Huang (1986) identified the top 15 research monographs that have made the most significant contributions to the accounting literature. Yet these types of publications are not considered in PhD rankings that are based on article counts.
A third approach that researchers have used to rank PhD programs is based on the number of citations to the published research of a program's graduates. Frequency of citation is a measure that is considered by some to be as revealing of reputation for quality as any other approach. PhD programs in accounting are ranked on the basis of citation analysis by Brown and Gardner (1985), Gamble and O'Doherty (1985), and Sriram and Gopalakrishnan (1994). These studies yield different rankings primarily because of differences in the journals chosen for analysis. Gamble and O'Doherty analyzed the Accounting Review (AR) and Journal of Accounting Research (JAR), Brown and Gardner assessed the Journal of Accounting and Economics (JAE) and Accounting, Organizations and Society (AOS) in addition to AR and JAR, whereas Sriram and Gopalakrishnan analyzed six journals: AR, JAR, JAE, AOS, Auditing: A Journal of Practice and Theory, and Journal of Accounting, Auditing and Finance. Therefore, as is the case with article counts, the lack of agreement about the correct set of journals and the focus on journal articles to the exclusion of books and monographs raises questions about the suitability of the citation analysis approach for ranking PhD programs. Additional weaknesses inherent to citation analysis as discussed by Gamble and O'Doherty, Hasselback and Reinstein (1995), and Heck and Huang (1987) include the following: failure to distinguish among journals of different quality or class; counting both positive and negative citations as equals; and inability to differentiate citations that are biased in favor of popular authors, topics, or methodologies.
The initial placement of graduates represents a fourth approach to ranking PhD programs. According to Fogarty and Saftner (1993), the premise of this approach suggests that because candidates are hired on the basis of how they will appear to outside observers, the prestige of their doctoral program is central, whereas real credentials and underlying facts of the candidate go unexamined. Fogarty and Saftner used this approach to rank 68 programs for their placements from 1980–1989, and Stammerjohan and Hall (2002) studied placements to rank the graduates of 80 programs on the basis of initial placements from 1986–1990. In Fogarty and Saftner's study, prestige was measured on the basis of the percentage of a program's graduates who are initially placed in positions with doctoral-granting departments rather than non–doctoral-granting departments. In contrast, Stammerjohan and Hall recognized that the prestige of some non–doctoral-granting departments may actually exceed that of some less prestigious doctoral-granting departments, and for this reason they used a different basis for ranking PhD programs. In the study by Stammerjohan and Hall, the measures of graduate placement were along two lines: The first scale used results from a ranking of universities and colleges published in U.S. News and World Report: America's Best Colleges, and the second scale used previously published information (from Hasselback and Reinstein, 1995) on the research productivity of accounting departments. According to Stammerjohan and Hall, the prestige of a PhD program can be measured by graduates' placements with top-tier universities and by their placements with accounting departments that are recognized for above-average publication productivity. Although the latter study improved the method used by Fogarty and Saftner, questions remain concerning the suitability of initial placements as a basis for ranking PhD programs. For example, prestige may be offset by other factors that are excluded from these studies, such as a candidate's geographic location preference in the job search. Also, supply and demand characteristics can partially mitigate prestige structures so that initial placement characteristics are not stable over time. Indeed, the current severe shortage of faculties could cause changes in the hiring choices of higher quality departments, and such shifts are a function of the labor market rather than the quality of PhD programs.
The fifth approach used to rank PhD programs is based on a count of the number of journal editorial board memberships held by the graduates of a program. Editorial board representation, as discussed by Urbancic (2006), is often used to rank faculties in the areas of accounting, economics, finance, marketing, real estate, statistics, and transportation. However, Mittermaier (1991) extended the editorial board approach to develop a ranking of PhD programs based on the doctoral origins of editorial board members for accounting journals. A multidisciplinary study by Trieschmann, Dennis, Northcraft, and Niemi (2000) added validity and relevance to the use of editorial board memberships as a basis for an assessment of academic quality by demonstrating a positive correlation between the number of memberships held and business school rankings. Because it is imperative that journal editors endeavor to sustain and enhance journal reputation, Rynes (2006) stated that scholars with strong publication and citation records are the most obvious candidates to receive board invitations to leading journals. In effect, the editorial board approach encompasses both the article count and citation analysis methods for ranking PhD programs. The latest year for which Mittermaier (1991) obtained editorial board data was 1990, but since that time an additional seven PhD programs in accounting have been established, and therefore more recent information on memberships is necessary to provide a more current ranking of PhD programs.
The sixth approach used to rank PhD programs is a count of the number of named positions (endowed chairs, funded professorships, and fellowships) held by the graduates of a program. According to a study by Worthington, Waters, and Fields (1989), a doctoral program's ability to develop highly productive graduates can be measured by the number of doctoral graduates holding named positions. This approach has served as the basis for a ranking of accounting PhD programs in studies by Meier and Kamath (2005), Tang and Griffith (1997), and Worthington et al. (1989). But, except for the study by Worthington et al. (1989), the reported results are not sufficiently comprehensive in their program coverage. For example, there are more than 80 PhD programs in accounting in the United States, but Tang and Griffith presented a ranking for only 28 programs, although they indicated that there are at least 100 graduates of other PhD programs that also hold named positions. The study by Meier and Kamath improved on the work of Tang and Griffith by reporting rankings for 37 programs, but numerous programs represented by graduates holding 89 named positions remained unreported in their ranking. The study by Worthington et al. offered the most complete look at all the PhD programs in accounting with respect to named positions held by graduates, but the ranking is based on data collected in 1988, and since then several more PhD programs have been initiated, and an even greater number of named positions have been established. The number of named position holders is a relevant basis on which to rank PhD programs, but a more comprehensive and current compilation of information is called for.
METHOD
The review of previously published approaches used to rank PhD programs suggests that a rank based only on a single attribute does not sufficiently distinguish differences in quality. Therefore, an improvement in the ranking of PhD programs could be achieved by developing a multiattributes approach. An essential consideration in the development of this approach is explicit recognition of the emphasis placed by the AACSB (2003) on research as the primary determinant of PhD program quality and rankings. For this approach, three attributes are chosen to compose a ranking metric based on the doctoral origins of the following: research award winners, editorial board members for top journals, and named position holders.
The first component of the multiattributes ranking approach is the doctoral origins of research award winners. Although not in use by prior researchers to rank PhD programs, the power of national awards as a signal of leadership in research has been documented by Lee (1995). Using the history of the American Accounting Association (AAA) as an empirical foundation for analyzing the development of academic accounting research, Lee (1995) found that research awards exist on the same level as editorial board appointments in terms of their capacity to signify research elites among doctoral programs. Currently, the AAA provides national recognition in the form of seven awards, of which five are based on research: the Wildman Medal Award, Seminal Contributions to Accounting Literature Award, Notable Contributions to Accounting Literature Award, Outstanding Accounting Educator Award, and the Competitive Manuscript Award. The doctoral origins were identified for all winners of these awards and incorporated as part of the rankings for PhD programs. Research awards as a basis for ranking PhD programs have three advantages compared with counts of the number of articles published or research citations. First, both the Wildman Medal Award and the Notable Contributions to Accounting Literature Award more broadly consider significant books and monographs, as well as journal articles, in the recognition of research. Second, as previously discussed, a study by Smith (2004) provided empirical evidence that not all the articles in the top journals are top articles. By comparison, only research judged by the AAA as top is bestowed with national recognition. Third, the disadvantages of citation analysis as a basis for rankings are avoided because research that has garnered awards is most likely to be heavily cited research anyway.
The number of editorial board memberships held by the graduates of a PhD program constitutes a valid indicator of quality (Mittermaier, 1991). Because a strong record of publication is a prerequisite for selection to a board, it is reasonable that the number of memberships held implicitly includes "number of articles published" as a ranking metric, but without a need to confront the problem of whether to adjust for coauthorship credit. In relying on the number of editorial board memberships held as an indicator of quality, it is necessary to first identify an appropriate core set of journals. Studies that identify the most influential journals in academic accounting have been made by Bonner, Hesford, Van der Stede, and Young (2006) and Ballas and Theoharakis (2003). Both studies concluded that the top five journals in accounting are AOS, AR, JAE, JAR, and Contemporary Accounting Research (CAR). Therefore, in the present study the multiattributes ranking includes the doctoral origins for the editorial board members of these five journals based on the degree information published in Hasselback's (2006) Accounting Faculty Directory 2006–2007.
We also used the data provided by Hasselback's (2006) Accounting Faculty Directory 2006–2007 to identify named faculty position holders and their doctoral origins. In a manner similar to that of Meier and Kamath (2005), we interpreted named positions broadly to include endowed chairs, named professorships, and fellowships without regard to faculty rank. Validation for using the doctoral origins of named position holders as the third component for ranking PhD programs was provided by survey studies of named positions by Rezaee, Elmore, and Spiceland (2004) and by Tang, Forrest, and Leach (1990), because the results from both studies indicated that the most important criterion in the decision for an appointment to a named position is the record of published research productivity established by an individual. Respectively, these studies reported that universities seek scholars with outstanding or excellent publication records to fill named positions.
RESULTS
Information on the doctoral origins of research award winners, editorial board members for top journals, and named position holders for the graduates of 80 PhD programs is in Table 1. We excluded from Table 1 all PhD programs with fewer than 5 graduates (Duke, Florida International, Georgia Institute of Technology, Lehigh, Rensselaer, Rice, SUNY–Binghamton, and Vanderbilt) and doctoral programs in accounting that had been discontinued at three universities (American, St. Louis, and Santa Clara). Collectively, graduates of the 80 PhD programs in Table 1 have received 226 awards for outstanding research, hold 236 editorial board memberships for top journals, and hold 462 named faculty positions.
Comparisons among the programs presented in Table 1 reveal that it is a rare accomplishment for a program's graduates to excel in more than one of the three categories. Programs whose graduates have received 15 or more AAA awards for research include Berkeley, Chicago, Cornell, Illinois, Michigan, and Stanford, while 15 or more memberships to editorial boards are held by graduates from the programs of Chicago, Iowa, Michigan, Rochester, and Stanford. And the graduates from programs of Illinois, Indiana, Michigan, Ohio State, Pennsylvania State, and Texas at Austin hold 15 or more named faculty positions. At the other extreme are graduates from six programs who have not received a research award, who do not currently serve as members of a prominent editorial board, and who do not hold appointments to a named position. These programs are Cleveland State, Drexel, Memphis, Rutgers, Virginia Commonwealth, and Washington State. Awards for research have gone only to the graduates of 32 programs, whereas the editorial board appointments extend to only the graduates of 37 programs, and the named positions are held by graduates from 71 programs. This wide disparity in the achievements attained by graduates of PhD programs in accounting further underscores the importance of using a multiattributes approach to rank the programs.
The process for assigning the relative ranks to PhD programs in a multiattribute format is based on computing a combined score to represent research awards, editorial board memberships, and named positions. For example, information in Table 1 indicates that one research award winner, no board members, and 12 named position holders received their PhDs from Alabama. Therefore, a computed score for Alabama's PhD program would equal .0304 (or the sum of 1/226 + 0/236 + 12/462).
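The combined-score computation can be sketched in a few lines. This is a minimal illustration, not code from the study; the attribute totals are the ones reported above for the 80 programs.

```python
# Attribute totals reported in the study: 226 research awards, 236 editorial
# board memberships, and 462 named faculty positions across the 80 programs.
AWARD_TOTAL, BOARD_TOTAL, NAMED_TOTAL = 226, 236, 462

def combined_score(awards: int, boards: int, named: int) -> float:
    """A program's score is the sum of its shares of each attribute total."""
    return awards / AWARD_TOTAL + boards / BOARD_TOTAL + named / NAMED_TOTAL

# Alabama's counts from Table 1: 1 award, 0 board memberships, 12 named positions.
print(round(combined_score(1, 0, 12), 4))  # 0.0304, matching the example above
```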
TABLE 1. Doctoral Origins of Research Award Winners, Editorial Board Members, and Named Position Holders

PhD program                     Research        Editorial        Named
                                award winners   board members    position holders
Alabama                               1               0               12
Arizona                               1               8               12
Arizona State                         0               2               14
Arkansas                              0               0               13
Boston                                0               1                0
California–Berkeley                  22               8                9
California–Los Angeles                0               1                4
Carnegie Mellon                      10               6               13
Case Western Reserve                  0               0                3
Central Florida                       0               0                1
Chicago                              32              24               13
Cincinnati                            0               0                2
CUNY–Baruch                           0               0                2
Cleveland State                       0               0                0
Colorado                              0               0                2
Columbia                              0               3                4
Connecticut                           0               1                0
Cornell                              17              10                6
Drexel                                0               0                0
Florida                               3               2                8
Florida State                         0               0                6
George Washington                     0               0                1
Georgia                               1               0                8
Georgia State                         0               0                8
Harvard                               1               6                5
Houston                               0               1                4
Illinois                             23               7               22
Indiana                               1               1               15
Iowa                                  4              16                8
Kansas                                0               3                1
Kent State                            0               0                3
Kentucky                              0               0               10
Louisiana State                       2               0               11
Louisiana Tech                        0               0                4
Maryland                              0               0                4
Massachusetts                         0               1                1
Massachusetts Inst. of Tech.          2               3                1
Memphis                               0               0                0
Michigan                             15              30               19
Michigan State                       10               3               14
Minnesota                             5              10               14
Mississippi                           0               0                9
Mississippi State                     0               0                7
Missouri                              2               0               10
Nebraska                              0               0               10
New York                              0               2                4
North Carolina                        0               3               10
North Texas                           0               0                9
Northwestern                          1               5                8
Ohio State                           14               7               18
Oklahoma                              0               0                3
Oklahoma State                        1               0                8
Oregon                                3               2                3
Pennsylvania                          1               8                5
Pennsylvania State                    1               6               16
Pittsburgh                            0               5                2
Purdue                                0               0                1
Rochester                             6              15                8

(table continues)
Because there are approximately twice the total number of named position holders (462) as there are either awards (226) or editorial board members (236), the portion of the score that is weighted for named positions is in effect reduced by half. However, the aforementioned reduction is a justifiable outcome of differences inherent to the three attributes. In other words, a graduate from a given PhD program has either received a national AAA award for research or has not, and the graduate is either a member of a top journal editorial board or is not. But by comparison, appointment to a named position on the faculty does not in every instance carry the same distinctive significance as either an award or selection to an editorial board because named positions are known to range widely from just a relatively small annual stipend to a far more lucrative salary package. Findings from survey studies of endowed position holders by Bloom, Fuglister, and Meier (1996) and by Rezaee, Elmore, and Spiceland (2004) indicated extensive differences in the financial amounts provided to fund support for the positions, with corresponding differences in compensation for the position holders. Therefore, compared with AAA research awards and selection to a top journal editorial board, named position appointments signify relevant achievement but are not uniformly as strong an indication of the research emphasis in the PhD program of the appointee. And bear in mind that the emphasis on the role of "research as an important contributor to PhD program quality" is central to the AACSB (2003) call for ranking PhD programs (p. 34).
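The halved weight follows directly from the attribute totals; a quick numerical check (an illustration added here, not part of the study's own presentation):

```python
# Each award or board membership is one share of roughly 230, while each
# named position is one share of 462, so a named position moves the combined
# score about half as much as either of the other two attributes.
per_award = 1 / 226   # score contribution of one research award
per_board = 1 / 236   # score contribution of one board membership
per_named = 1 / 462   # score contribution of one named position

print(round(per_award / per_named, 2))  # one award is worth about 2.04 named positions
```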
Table 2 presents the rank for 80 PhD programs in accounting on the basis of the process for computing weighted scores as described in the preceding paragraph. According to these results, the top 10 programs are Chicago, Michigan, Illinois, Stanford, Berkeley, Cornell, Ohio State, Washington, Texas–Austin, and Rochester. Most of these top programs are also highly ranked in the previous studies, but by incorporating a ranking metric based on multiple attributes, the rankings can be extended to a greater number of programs than would normally be the case
if based only on a single attribute. However, the expansion of ranks to include more programs results in several tied scores. For determining the ranks to be assigned in Table 2, we resolved any tied scores between programs in favor of the program with fewer graduates through the year 2005, according to data provided by Hasselback's (2006) Accounting Faculty Directory 2006–2007. In all, there were 11 tied scores resolved on this basis, with only the programs of CUNY–Baruch and Temple remaining tied in the 66th position because both programs had an identical number of graduates (38). We emphasize that number of graduates is nothing more than an expedient condition for breaking ties and is not necessarily coincident with either a higher or lower rank. Some findings indicate that the larger programs do not automatically have an advantage in terms of rank. For example, Chicago and Stanford, with 74 and 72 graduates, respectively (according to Hasselback's 2006 Accounting Faculty Directory 2006–2007), rank significantly higher (1st and 4th) than Missouri (26th) and Arkansas (30th), with 183 and 168 graduates, respectively. Conversely, Oregon (28th) and Pittsburgh (31st), with 51 and 37 graduates, respectively, rank significantly lower than do Michigan (2nd) and Ohio State (7th), with 119 and 133 graduates, respectively. Although size of program does not coincide with rank, there are indications that the age of a program, as measured by the 1st year in which a degree was conferred per data in Hasselback's (2006) Accounting Faculty Directory 2006–2007, tends to align with rank in that the established programs rank higher than newer programs. For example, 17 of the 20 programs composing the top quartile conferred first degrees prior to 1968; the only exceptions are Arizona, Pennsylvania, and Rochester. On the other hand, all of the programs composing the fourth quartile conferred a first degree after 1968, except for Colorado, Utah, and Washington–St. Louis.

TABLE 1. (cont.)

PhD program                     Research        Editorial        Named
                                award winners   board members    position holders
Rutgers                               0               0                0
South Carolina                        0               0                5
South Florida                         0               0                2
Southern California                   1               1                6
Southern Illinois                     0               0                2
Stanford                             18              16               14
SUNY–Buffalo                          1               0                1
Syracuse                              0               0                3
Temple                                0               0                2
Tennessee                             0               0               13
Texas–Arlington                       0               0                2
Texas–Austin                         12               3               24
Texas A&M                             1               0                7
Texas Tech                            0               0                9
Tulane                                0               2                0
Utah                                  0               0                2
Virginia Commonwealth                 0               0                0
Virginia Poly. Inst.                  0               0                4
Washington                           11              10               14
Washington–St. Louis                  0               0                2
Washington State                      0               0                0
Wisconsin                             3               4               14
Total                               226             236              462
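The rank-assignment and tie-breaking procedure described above can be sketched as a sort on (score, graduates). This is a hypothetical illustration: the scores and the Chicago and Stanford graduate counts come from the study, but the Tennessee and Arkansas graduate counts below are placeholders, not reported figures.

```python
# (program, combined score from Table 2, graduates through 2005).
# Tennessee and Arkansas graduate counts are placeholders for illustration.
programs = [
    ("Chicago", .2714, 74),
    ("Stanford", .1777, 72),
    ("Tennessee", .0281, 100),   # placeholder graduate count
    ("Arkansas", .0281, 120),    # placeholder graduate count
]

# Rank by descending score; resolve tied scores in favor of the program with
# fewer graduates (programs share a rank only when graduate counts match too).
ranked = sorted(programs, key=lambda p: (-p[1], p[2]))
for position, (name, score, grads) in enumerate(ranked, start=1):
    print(position, name, score)
```

Because Python's sort is stable and the key is a tuple, the graduate count only matters when scores are exactly equal, which mirrors its role as an expedient tie-breaker rather than a ranking attribute.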
TABLE 2. PhD Program Rankings

Rank   Program                          Score
  1    Chicago                          .2714
  2    Michigan                         .2346
  3    Illinois                         .1790
  4    Stanford                         .1777
  5    California–Berkeley              .1507
  6    Cornell                          .1306
  7    Ohio State                       .1306
  8    Washington                       .1213
  9    Texas–Austin                     .1178
 10    Rochester                        .1074
 11    Iowa                             .1028
 12    Carnegie Mellon                  .0978
 13    Minnesota                        .0948
 14    Michigan State                   .0873
 15    Pennsylvania State               .0645
 16    Arizona                          .0643
 17    Wisconsin                        .0605
 18    Pennsylvania                     .0491
 19    Northwestern                     .0429
 20    Indiana                          .0411
 21    Harvard                          .0407
 22    Florida                          .0391
 23    Arizona State                    .0388
 24    North Carolina                   .0344
 25    Louisiana State                  .0327
 26    Missouri                         .0305
 27    Alabama                          .0304
 28    Oregon                           .0282
 29    Tennessee                        .0281
 30    Arkansas                         .0281
 31    Pittsburgh                       .0255
 32    Massachusetts Inst. of Tech.     .0237
 33    Oklahoma State                   .0217
 34    Georgia                          .0217
 35    Southern California              .0216
 36    Kentucky                         .0216
 37    Nebraska                         .0216
 38    Columbia                         .0214
 39    Texas A&M                        .0196
 40    Texas Tech                       .0195
 41    Mississippi                      .0195
 42    North Texas                      .0195
 43    Georgia State                    .0173
 44    New York                         .0171
 45    Mississippi State                .0152
 46    Kansas                           .0149
 47    Florida State                    .0130
 48    California–Los Angeles           .0129
 49    Houston                          .0129
 50    South Carolina                   .0108
 51    Maryland                         .0087
 52    Louisiana Tech                   .0087
 53    Virginia Poly. Inst.             .0087
 54    Tulane                           .0085
 55    SUNY–Buffalo                     .0066
 56    Case Western Reserve             .0065
 57    Syracuse                         .0065
 58    Oklahoma                         .0065
 59    Kent State                       .0065
 60    Massachusetts                    .0064
 61    South Florida                    .0043
 62    Southern Illinois                .0043
 63    Texas–Arlington                  .0043
 64    Utah                             .0043
 65    Washington–St. Louis             .0043
 66    CUNY–Baruch                      .0043
 66    Temple                           .0043
 68    Cincinnati                       .0043
 69    Colorado                         .0043
 70    Connecticut                      .0042
 71    Boston                           .0042
 72    Central Florida                  .0022
 73    Purdue                           .0022
 74    George Washington                .0022
 75    Cleveland State                  .0000
 76    Washington State                 .0000
 77    Drexel                           .0000
 78    Rutgers                          .0000
 79    Virginia Commonwealth            .0000
 80    Memphis                          .0000

DISCUSSION

When comparing PhD programs, the role and extent of emphasis on quality research is an essential characteristic that sets the programs apart from
each other. Such comparisons inevitably invite the development of a suitable approach to ranking PhD programs. Compared with the various approaches used as a basis in prior studies that ranked PhD programs, the method that we applied in this study is unique by virtue of its simultaneous combination of three attributes as a ranking metric of research quality. These attributes are research awards, editorial board memberships for top journals, and holders of named positions. In terms of their capacity to signal the research quality of a PhD program, the attributes encompass the more traditional measures of productivity accomplishment, such as counts of either the number of articles published or the number of citations to the published research of a program's graduates. Also, because the multiattributes approach incorporates data that are readily available, PhD program rankings can be more efficiently updated annually.
The development of a reliable and efficient means for ranking PhD programs is a worthwhile goal. Survey results from a study by Davis and McCarthy (2005) provided evidence that college rankings matter greatly to prospective PhD students, especially during the early stages of identifying a set of potential PhD programs. However, although readily available, the aforementioned college rankings focus primarily on the MBA programs of business colleges; a ranking of PhD programs for each of the major disciplines of business would therefore offer far more relevant information to prospective students. Also, AACSB (2003) emphasized that the development of PhD program rankings should provide reputational incentives to stimulate added investments in the programs by business schools, thereby counterbalancing the disproportionate influence of MBA rankings on business schools. The results from the present study show that a multiattributes approach to ranking PhD programs has the potential to achieve the objective set forth by AACSB.

A decision to pursue a PhD is a significant one, as is the choice of programs to apply to. Rankings are an important information source for comparing programs during the early stages of the search process, but there are additional factors that prospective students should consider before finalizing their decisions. Some of these considerations are admission requirements, curriculum, amounts of financial support offered, geographic preferences, and preferences between a small-town location and a larger city location.
NOTES

Dr. Frank R. Urbancic's research interests are financial reporting standards and business education.

Correspondence concerning this article should be addressed to Frank R. Urbancic, Department of Accounting, Mitchell College of Business, University of South Alabama, Mobile, AL 36688, USA.

E-mail: furbanci@usouthal.edu
REFERENCES

Association to Advance Collegiate Schools of Business. (2002). Management education at risk. St. Louis, MO: Author.

Association to Advance Collegiate Schools of Business. (2003). Sustaining scholarship in business schools. St. Louis, MO: Author.
Copyright © 2008 Heldref Publications
The PhD supply shortage for business education has been well documented in recent years. Initial attention was drawn to the problem by the Association to Advance Collegiate Schools of Business International (AACSB) in the landmark 2002 report Management Education at Risk. AACSB responded by creating the Doctoral Faculty Commission. The AACSB (2003) commission presented a comprehensive assessment of the crisis 1 year later and provided recommended actions for addressing the problem in its report Sustaining Scholarship in Business Schools. One of AACSB's key recommendations calls for the development of PhD program rankings. Unlike other business school programs, such as the master of business administration (MBA) program, there are few financial or reputational incentives for academic institutions to invest in PhD programs. According to AACSB (2003), the development of PhD program rankings should provide reputational incentives to stimulate added investments in the programs by business schools, thereby counterbalancing the disproportionate influence of MBA rankings on business schools.
Evidence suggests that rankings matter to prospective PhD students, especially during the early stages of identifying a set of potential programs. In a survey of MBA students who indicated that they might enter a PhD program at some point in the future, Davis and McCarthy (2005) asked the students to rate the importance of factors in selecting programs to which they would apply. According to this survey, one of the most important factors is college ranking. Although readily available, the aforementioned college rankings focus primarily on the MBA programs of business colleges; a ranking of PhD programs for each of the major disciplines of business would therefore prove much more relevant to prospective students. There are previously published studies that rank PhD programs, but those studies were based on only a single attribute (e.g., a count of either the number of articles published or the number of citations to the published research of a program's graduates). The purpose of the present study is to propose a multiattributes approach for ranking PhD programs. The advantage inherent to this approach is a broader consideration of the indicators of quality and reputation of a program as measured by the accomplishments of its graduates. Because the AACSB (2003) explicitly emphasizes the role of research as a contributor to PhD program quality, the multiattributes approach presented in this study includes a ranking metric that recognizes the importance of research.
In this study, we demonstrate the multiattributes approach for ranking PhD programs through an application to accounting. However, the same method can be applied to rank the PhD programs for each of the other primary disciplines of finance, management, and marketing. The remainder of this study is organized as follows: First, previous research related to ranking PhD programs is reviewed, and the attributes used in these studies are critically evaluated for suitability. In the second section, relevant attributes are identified and support for their inclusion is discussed. The third section presents the findings from application of the multiattributes approach. Finally, concluding comments on the significance of the findings are discussed.
RELATED RESEARCH
Previously published studies that rank PhD programs in accounting are based on only a single attribute. Where the studies vary is in the specific attribute chosen as the basis for ranking programs. To date, published rankings have been based on an attribute chosen from one of the following: perceptions of program quality, number of published journal articles by graduates, number of published citations to the research of graduates, initial placement record of graduates, graduates' representation on editorial boards of academic journals, and the number of endowed positions held by graduates. A discussion of these attributes follows, including consideration of their suitability for the purpose of ranking PhD programs.

The earliest PhD-ranking studies for accounting are the surveys of Carpenter, Crumbley, and Strawser (1974) and Estes (1970), which focused on perceptions of doctoral program quality. Both studies relied on a survey questionnaire but differed in their approach. Carpenter et al. provided a list of doctoral programs to 1,190 faculty members with a request for an assessment of perceived quality for each program based on a 4-point scale. Estes also provided a list of doctoral programs, but participants were asked to rank only the top programs from 1 to 10. These survey studies were soundly criticized by Morton (1975), Zeff and Rhode (1975), and Rhode and Zeff (1970), primarily for the lack of a consistent standard or defined criteria on which to evaluate quality, but also for inherent problems of bias that significantly limit the usefulness of the survey approach as a suitable basis for ranking doctoral programs.
Another technique for ranking PhD programs is based on a count of the number of published journal articles by graduates. This approach serves as the basis for ranking in studies by Bazley and Nikolai (1975), Bublitz and Kee (1984), Hasselback and Reinstein (1995), Jacobs, Hartgraves, and Beard (1986), and Stevens and Stevens (1996). Differences among the rankings provided by the studies are the result of differences in the choice of journals and the time periods examined. For example, Bublitz and Kee counted articles from the largest number of journals (69) but for the shortest period of time (5 years). Compared with the study by Bublitz and Kee, the studies by Bazley and Nikolai and by Jacobs et al. used longer time frames (7 and 13 years, respectively) but counted articles published in a very small group of journals (4 and 8 journals, respectively). The journal article count studies based on the longest time periods are those by Hasselback and Reinstein (1995), who examined 41 journals for a period of 15 years, and by Stevens and Stevens, who examined 40 journals for a period of 19 years. A key difference between the latter studies is that Hasselback and Reinstein adjusted their counts for coauthorships, whereas Stevens and Stevens counted a coauthored article as a whole article for each author, regardless of the number of authors on the article.

The wide differences of opinion regarding how to identify an appropriate set of journals, how to choose the right time frame, and whether it is fitting to adjust for coauthorships all combine to limit the usefulness of article counts as a basis for ranking PhD programs. For example, researchers might argue that a count should be based only on articles that are published in top-tier journals. However, a study by Smith (2004) provided empirical evidence that not all the articles in the top journals are top articles. Another weakness of article count as an attribute for ranking program quality is that the approach ignores important research contributions that are published as either books or monographs, as opposed to journal articles. On the basis of a questionnaire survey of 2,135 accounting academicians, Heck and Huang (1986) identified the top 15 research monographs that have made the most significant contributions to the accounting literature. Yet these types of publications are not considered in PhD rankings that are based on article counts.
A third approach that researchers have used to rank PhD programs is based on the number of citations to the published research of a program's graduates. Frequency of citation is a measure that some consider to be as revealing of reputation for quality as any other approach. PhD programs in accounting are ranked on the basis of citation analysis by Brown and Gardner (1985), Gamble and O'Doherty (1985), and Sriram and Gopalakrishnan (1994). These studies yield different rankings primarily because of differences in the journals chosen for analysis. Gamble and O'Doherty analyzed the Accounting Review (AR) and Journal of Accounting Research (JAR); Brown and Gardner assessed the Journal of Accounting and Economics (JAE) and Accounting, Organizations and Society (AOS) in addition to AR and JAR; and Sriram and Gopalakrishnan analyzed six journals: AR, JAR, JAE, AOS, Auditing: A Journal of Practice and Theory, and Journal of Accounting, Auditing and Finance. Therefore, as is the case with article counts, the lack of agreement about the correct set of journals and the focus on journal articles to the exclusion of books and monographs raise questions about the suitability of the citation analysis approach for ranking PhD programs. Additional weaknesses inherent to citation analysis, as discussed by Gamble and O'Doherty, Hasselback and Reinstein (1995), and Heck and Huang (1987), include the following: failure to distinguish among journals of different quality or class; counting both positive and negative citations as equals; and inability to differentiate citations that are biased in favor of popular authors, topics, or methodologies.
The initial placement of graduates represents a fourth approach to ranking PhD programs. According to Fogarty and Saftner (1993), the premise of this approach is that because candidates are hired on the basis of how they will appear to outside observers, the prestige of their doctoral program is central, whereas the real credentials and underlying facts of the candidate go unexamined. Fogarty and Saftner used this approach to rank 68 programs for their placements from 1980-1989, and Stammerjohan and Hall (2002) studied placements to rank the graduates of 80 programs on the basis of initial placements from 1986-1990. In Fogarty and Saftner's study, prestige was measured on the basis of the percentage of a program's graduates who were initially placed in positions with doctoral-granting departments rather than non-doctoral-granting departments. In contrast, Stammerjohan and Hall recognized that the prestige of some non-doctoral-granting departments may actually exceed that of some less prestigious doctoral-granting departments, and for this reason they used a different basis for ranking PhD programs. In the study by Stammerjohan and Hall, the measures of graduate placement were along two lines: The first scale used results from a ranking of universities and colleges published in U.S. News and World Report: America's Best Colleges, and the second scale used previously published information (from Hasselback and Reinstein, 1995) on the research productivity of accounting departments. According to Stammerjohan and Hall, the prestige of a PhD program can be measured by graduates' placements with top-tier universities and by their placements with accounting departments that are recognized for above-average publication productivity. Although the latter study improved on the method used by Fogarty and Saftner, questions remain concerning the suitability of initial placements as a basis for ranking PhD programs. For example, prestige may be offset by other factors that are excluded from these studies, such as a candidate's geographic location preference in the job search. Also, supply and demand characteristics can partially mitigate prestige structures, so that initial placement characteristics are not stable over time. Indeed, the current severe shortage of faculty could cause changes in the hiring choices of higher quality departments, and such shifts are a function of the labor market rather than the quality of PhD programs.
The fifth approach used to rank PhD programs is based on a count of the number of journal editorial board memberships held by the graduates of a program. Editorial board representation, as discussed by Urbancic (2006), is often used to rank faculties in the areas of accounting, economics, finance, marketing, real estate, statistics, and transportation. Mittermaier (1991) extended the editorial board approach to develop a ranking of PhD programs based on the doctoral origins of editorial board members for accounting journals. A multidisciplinary study by Trieschmann, Dennis, Northcraft, and Niemi (2000) added validity and relevance to the use of editorial board memberships as a basis for assessing academic quality by demonstrating a positive correlation between the number of memberships held and business school rankings. Because it is imperative that journal editors endeavor to sustain and enhance journal reputation, Rynes (2006) stated that scholars with strong publication and citation records are the most obvious candidates to receive board invitations from leading journals. In effect, the editorial board approach encompasses both the article count and citation analysis methods for ranking PhD programs. The latest year for which Mittermaier (1991) obtained editorial board data was 1990; since that time an additional seven PhD programs in accounting have been established, and therefore more recent information on memberships is necessary to provide a more current ranking of PhD programs.
The sixth approach used to rank PhD programs is a count of the number of named positions (endowed chairs, funded professorships, and fellowships) held by the graduates of a program. According to a study by Worthington, Waters, and Fields (1989), a doctoral program's ability to develop highly productive graduates can be measured by the number of doctoral graduates holding named positions. This approach has served as the basis for a ranking of accounting PhD programs in studies by Meier and Kamath (2005), Tang and Griffith (1997), and Worthington et al. (1989). But, except for the study by Worthington et al. (1989), the reported results are not sufficiently comprehensive in their program coverage. For example, there are more than 80 PhD programs in accounting in the United States, but Tang and Griffith presented a ranking for only 28 programs, although they indicated that there are at least 100 graduates of other PhD programs who also hold named positions. The study by Meier and Kamath improved on the work of Tang and Griffith by reporting rankings for 37 programs, but numerous programs represented by graduates holding 89 named positions remained unreported in their ranking. The study by Worthington et al. offered the most complete look at all the PhD programs in accounting with respect to named positions held by graduates, but the ranking is based on data collected in 1988; since then several more PhD programs have been initiated, and an even greater number of named positions have been established. The number of named position holders is a relevant basis on which to rank PhD programs, but a more comprehensive and current compilation of information is called for.
METHOD

The review of previously published approaches used to rank PhD programs suggests that a rank based on only a single attribute does not sufficiently distinguish differences in quality. Therefore, an improvement in the ranking of PhD programs could be achieved by developing a multiattributes approach. An essential consideration in the development of this approach is explicit recognition of the emphasis placed by the AACSB (2003) on research as the primary determinant of PhD program quality and rankings. For this approach, three attributes are chosen to compose a ranking metric based on the doctoral origins of the following: research award winners, editorial board members for top journals, and named position holders.
The first component of the multiattributes ranking approach is the doctoral origins of research award winners. Although not used by prior researchers to rank PhD programs, the power of national awards as a signal of leadership in research has been documented by Lee (1995). Using the history of the American Accounting Association (AAA) as an empirical foundation for analyzing the development of academic accounting research, Lee (1995) found that research awards exist on the same level as editorial board appointments in terms of their capacity to signify research elites among doctoral programs. Currently, the AAA provides national recognition in the form of seven awards, of which five are based on research: the Wildman Medal Award, Seminal Contributions to Accounting Literature Award, Notable Contributions to Accounting Literature Award, Outstanding Accounting Educator Award, and the Competitive Manuscript Award. The doctoral origins were identified for all winners of these awards and incorporated as part of the rankings for PhD programs. Research awards as a basis for ranking PhD programs have three advantages compared with counts of the number of articles published or research citations. First, both the Wildman Medal Award and the Notable Contributions to Accounting Literature Award more broadly consider significant books and monographs, as well as journal articles, in the recognition of research. Second, as previously discussed, a study by Smith (2004) provided empirical evidence that not all the articles in the top journals are top articles. By comparison, only research judged by the AAA as top is bestowed with national recognition. Third, the disadvantages of citation analysis as a basis for rankings are avoided, because research that has garnered awards is most likely to be heavily cited research anyway.
The number of editorial board memberships held by the graduates of a PhD program constitutes a valid indicator of quality (Mittermaier, 1991). Because a strong record of publication is a prerequisite for selection to a board, it is reasonable that the number of memberships held implicitly includes "number of articles published" as a ranking metric, but without the need to confront the problem of whether to adjust for coauthorship credit. In relying on the number of editorial board memberships held as an indicator of quality, it is necessary first to identify an appropriate core set of journals. Studies that identify the most influential journals in academic accounting have been made by Bonner, Hesford, Van der Stede, and Young (2006) and Ballas and Theoharakis (2003). Both studies concluded that the top five journals in accounting are AOS, AR, JAE, JAR, and Contemporary Accounting Research (CAR). Therefore, in the present study the multiattributes ranking includes the doctoral origins of the editorial board members of these five journals, based on the degree information published in Hasselback's (2006) Accounting Faculty Directory 2006-2007.
We also used the data provided by Hasselback's (2006) Accounting Faculty Directory 2006-2007 to identify named faculty position holders and their doctoral origins. In a manner similar to that of Meier and Kamath (2005), we interpreted named positions broadly to include endowed chairs, named professorships, and fellowships, without regard to faculty rank. Validation for using the doctoral origins of named position holders as the third component for ranking PhD programs was provided by survey studies of named positions by Rezaee, Elmore, and Spiceland (2004) and by Tang, Forrest, and Leach (1990), because the results from both studies indicated that the most important criterion in the decision for an appointment to a named position is the record of published research productivity established by an individual. Respectively, these studies reported that universities seek scholars with outstanding or excellent publication records to fill named positions.
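Mechanically, the doctoral-origin counts described in this section amount to a simple aggregation over directory records. The sketch below is hypothetical: the record layout and the field names ("name", "phd_from") are illustrative only, not the actual format of Hasselback's directory.

```python
# Hypothetical sketch: tally named-position holders by doctoral origin.
# The record layout is illustrative, not Hasselback's directory format.
from collections import Counter

named_position_holders = [
    {"name": "Holder A", "phd_from": "Alabama"},
    {"name": "Holder B", "phd_from": "Alabama"},
    {"name": "Holder C", "phd_from": "Chicago"},
]

# Count how many holders trace their PhD to each program.
origins = Counter(rec["phd_from"] for rec in named_position_holders)
print(origins["Alabama"])  # -> 2
```

The same tally, applied per program to award winners and editorial board members, yields the three attribute counts used in the ranking metric.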
RESULTS

Information on the doctoral origins of research award winners, editorial board members for top journals, and named position holders for the graduates of 80 PhD programs is in Table 1. We excluded from Table 1 all PhD programs with fewer than 5 graduates (Duke, Florida International, Georgia Institute of Technology, Lehigh, Rensselaer, Rice, SUNY–Binghamton, and Vanderbilt) and doctoral programs in accounting that had been discontinued at three universities (American, St. Louis, and Santa Clara). Collectively, graduates of the 80 PhD programs in Table 1 have received 226 awards for outstanding research, hold 236 editorial board memberships for top journals, and hold 462 named faculty positions.
Comparisons among the programs presented in Table 1 reveal that it is a rare accomplishment for a program's graduates to excel in more than one of the three categories. Programs whose graduates have received 15 or more AAA awards for research include Berkeley, Chicago, Cornell, Illinois, Michigan, and Stanford, whereas 15 or more memberships on editorial boards are held by graduates of the programs of Chicago, Iowa, Michigan, Rochester, and Stanford. And the graduates of the programs of Illinois, Indiana, Michigan, Ohio State, Pennsylvania State, and Texas at Austin hold 15 or more named faculty positions. At the other extreme are graduates from six programs who have not received a research award, who do not currently serve as members of a prominent editorial board, and who do not hold appointments to a named position. These programs are Cleveland State, Drexel, Memphis, Rutgers, Virginia Commonwealth, and Washington State. Awards for research have gone to the graduates of only 32 programs, whereas editorial board appointments extend to the graduates of only 37 programs, and named positions are held by graduates of 71 programs. This wide disparity in the achievements attained by graduates of PhD programs in accounting further underscores the importance of using a multiattributes approach to rank the programs.
The process for assigning relative ranks to PhD programs in a multiattributes format is based on computing a combined score to represent research awards, editorial board memberships, and named positions. For example, information in Table 1 indicates that one research award winner, no board members, and 12 named position holders received their PhDs from Alabama. Therefore, the computed score for Alabama's PhD program equals .0304 (or the sum of 1/226 + 0/236 + 12/462).
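The computation in the preceding paragraph can be sketched as follows. The function name and layout are ours; the column totals (226, 236, and 462) are the attribute totals reported for Table 1.

```python
# Combined score = each attribute count divided by its column total (Table 1).
AWARD_TOTAL, BOARD_TOTAL, NAMED_TOTAL = 226, 236, 462

def combined_score(awards: int, boards: int, named: int) -> float:
    """Sum of the three attribute proportions, as described in the text."""
    return awards / AWARD_TOTAL + boards / BOARD_TOTAL + named / NAMED_TOTAL

# Alabama: 1 award winner, 0 board members, 12 named-position holders.
print(round(combined_score(1, 0, 12), 4))  # -> 0.0304, the score reported for Alabama
```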
TABLE 1. Doctoral Origins of Research Award Winners, Editorial Board Members, and Named Position Holders

PhD program | Research award winners | Editorial board members | Named position holders
Alabama | 1 | 0 | 12
Arizona | 1 | 8 | 12
Arizona State | 0 | 2 | 14
Arkansas | 0 | 0 | 13
Boston | 0 | 1 | 0
California–Berkeley | 22 | 8 | 9
California–Los Angeles | 0 | 1 | 4
Carnegie Mellon | 10 | 6 | 13
Case Western Reserve | 0 | 0 | 3
Central Florida | 0 | 0 | 1
Chicago | 32 | 24 | 13
Cincinnati | 0 | 0 | 2
CUNY–Baruch | 0 | 0 | 2
Cleveland State | 0 | 0 | 0
Colorado | 0 | 0 | 2
Columbia | 0 | 3 | 4
Connecticut | 0 | 1 | 0
Cornell | 17 | 10 | 6
Drexel | 0 | 0 | 0
Florida | 3 | 2 | 8
Florida State | 0 | 0 | 6
George Washington | 0 | 0 | 1
Georgia | 1 | 0 | 8
Georgia State | 0 | 0 | 8
Harvard | 1 | 6 | 5
Houston | 0 | 1 | 4
Illinois | 23 | 7 | 22
Indiana | 1 | 1 | 15
Iowa | 4 | 16 | 8
Kansas | 0 | 3 | 1
Kent State | 0 | 0 | 3
Kentucky | 0 | 0 | 10
Louisiana State | 2 | 0 | 11
Louisiana Tech | 0 | 0 | 4
Maryland | 0 | 0 | 4
Massachusetts | 0 | 1 | 1
Massachusetts Inst. of Tech. | 2 | 3 | 1
Memphis | 0 | 0 | 0
Michigan | 15 | 30 | 19
Michigan State | 10 | 3 | 14
Minnesota | 5 | 10 | 14
Mississippi | 0 | 0 | 9
Mississippi State | 0 | 0 | 7
Missouri | 2 | 0 | 10
Nebraska | 0 | 0 | 10
New York | 0 | 2 | 4
North Carolina | 0 | 3 | 10
North Texas | 0 | 0 | 9
Northwestern | 1 | 5 | 8
Ohio State | 14 | 7 | 18
Oklahoma | 0 | 0 | 3
Oklahoma State | 1 | 0 | 8
Oregon | 3 | 2 | 3
Pennsylvania | 1 | 8 | 5
Pennsylvania State | 1 | 6 | 16
Pittsburgh | 0 | 5 | 2
Purdue | 0 | 0 | 1
Rochester | 6 | 15 | 8

(table continues)
Because there are approximately twice as many named position holders (462) as there are either awards (226) or editorial board members (236), the portion of the score that is weighted for named positions is in effect reduced by half. However, this reduction is a justifiable outcome of differences inherent to the three attributes. In other words, a graduate from a given PhD program has either received a national AAA award for research or has not, and the graduate is either a member of a top journal editorial board or is not. By comparison, appointment to a named position on the faculty does not in every instance carry the same distinctive significance as either an award or selection to an editorial board, because named positions are known to range widely from a relatively small annual stipend to a far more lucrative salary package. Findings from survey studies of endowed position holders by Bloom, Fuglister, and Meier (1996) and by Rezaee, Elmore, and Spiceland (2004) indicated extensive differences in the financial amounts provided to fund the positions, with corresponding differences in compensation for the position holders. Therefore, compared with AAA research awards and selection to a top journal editorial board, named position appointments signify relevant achievement but are not uniformly as strong an indication of the research emphasis in the PhD program of the appointee. And bear in mind that the emphasis on the role of "research as an important contributor to PhD program quality" is central to the AACSB (2003) call for ranking PhD programs (p. 34).
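The halving effect described above follows directly from the column totals. A quick check, using the attribute totals reported in the Results section:

```python
# Marginal contribution of one unit of each attribute to the combined score.
per_award = 1 / 226   # one additional research award
per_board = 1 / 236   # one additional editorial board membership
per_named = 1 / 462   # one additional named-position holder

# A single award moves a program's score about twice as much as a single
# named position; a board membership behaves almost identically to an award.
print(round(per_award / per_named, 2))  # -> 2.04
print(round(per_board / per_named, 2))  # -> 1.96
```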
Table 2 presents the ranks for 80 PhD programs in accounting on the basis of the process for computing weighted scores as described in the preceding paragraph. According to these results, the top 10 programs are Chicago, Michigan, Illinois, Stanford, Berkeley, Cornell, Ohio State, Washington, Texas–Austin, and Rochester. Most of these top programs are also highly ranked in the previous studies, but by incorporating a ranking metric based on multiple attributes, the rankings can be extended to a greater number of programs than would normally be the case
if based only on a single attribute. However, the expansion of ranks to include more programs results in several tied scores. For determining the ranks assigned in Table 2, we resolved any tied scores between programs in favor of the program with fewer graduates through the year 2005, according to data provided by Hasselback's (2006) Accounting Faculty Directory 2006-2007. In all, there were 11 tied scores resolved on this basis, with only the programs of CUNY–Baruch and Temple remaining tied in the 66th position, because both programs had an identical number of graduates (38). We emphasize that the number of graduates is nothing more than an expedient condition for breaking ties and is not necessarily coincident with either a higher or lower rank. Some findings indicate that the larger programs do not automatically have an advantage in terms of rank. For example, Chicago and Stanford, with 74 and 72 graduates, respectively (according to Hasselback's 2006 Accounting Faculty Directory 2006-2007), rank significantly higher (1st and 4th) than Missouri (26th) and Arkansas (30th), with 183 and 168 graduates, respectively. Conversely, Oregon (28th) and Pittsburgh (31st), with 51 and 37 graduates, respectively, rank significantly lower than do Michigan (2nd) and Ohio State (7th), with 119 and 133 graduates, respectively. Although size of program does not coincide with rank, there are indications that the age of a program, as measured by the first year in which a degree was conferred (per data in Hasselback's 2006 Accounting Faculty Directory 2006-2007), tends to align with rank in that established programs rank higher than newer programs. For example, 17 of the 20 programs composing the top quartile conferred first degrees prior to 1968; the only exceptions are Arizona, Pennsylvania, and Rochester. On the other hand, all of the programs composing the fourth quartile conferred a first degree after 1968, except for Colorado, Utah, and Washington–St. Louis.

TABLE 1. (cont.)

PhD program | Research award winners | Editorial board members | Named position holders
Rutgers | 0 | 0 | 0
South Carolina | 0 | 0 | 5
South Florida | 0 | 0 | 2
Southern California | 1 | 1 | 6
Southern Illinois | 0 | 0 | 2
Stanford | 18 | 16 | 14
SUNY–Buffalo | 1 | 0 | 1
Syracuse | 0 | 0 | 3
Temple | 0 | 0 | 2
Tennessee | 0 | 0 | 13
Texas–Arlington | 0 | 0 | 2
Texas–Austin | 12 | 3 | 24
Texas A&M | 1 | 0 | 7
Texas Tech | 0 | 0 | 9
Tulane | 0 | 2 | 0
Utah | 0 | 0 | 2
Virginia Commonwealth | 0 | 0 | 0
Virginia Poly. Inst. | 0 | 0 | 4
Washington | 11 | 10 | 14
Washington–St. Louis | 0 | 0 | 2
Washington State | 0 | 0 | 0
Wisconsin | 3 | 4 | 14
Total | 226 | 236 | 462
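The tie-breaking rule described in the Results section (fewer graduates wins; programs still tied share a rank, and the next rank is skipped) can be sketched as follows. The graduate counts here are hypothetical except for the 38 reported for CUNY–Baruch and Temple.

```python
# Competition-style ranking with the article's tie-break: higher score first,
# then fewer graduates; fully tied programs share a rank and the next is skipped.
programs = [
    # (program, combined score, graduates through 2005)
    ("Washington-St. Louis", 0.0043, 30),   # graduate count hypothetical
    ("Cincinnati", 0.0043, 43),             # graduate count hypothetical
    ("CUNY-Baruch", 0.0043, 38),
    ("Temple", 0.0043, 38),
]

# Sort by score (descending), breaking ties by number of graduates (ascending).
ordered = sorted(programs, key=lambda p: (-p[1], p[2]))

ranked = []
for i, (name, score, grads) in enumerate(ordered):
    if i > 0 and (score, grads) == (ordered[i - 1][1], ordered[i - 1][2]):
        ranked.append((ranked[-1][0], name))  # unresolved tie: share the previous rank
    else:
        ranked.append((i + 1, name))          # otherwise rank = list position
print(ranked)
# -> [(1, 'Washington-St. Louis'), (2, 'CUNY-Baruch'), (2, 'Temple'), (4, 'Cincinnati')]
```

The shared rank followed by a skipped one mirrors the 66, 66, 68 sequence in Table 2.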
DISCUSSION

When comparing PhD programs, the role and extent of emphasis on quality research is an essential characteristic that sets the programs apart from
TABLE 2. PhD Program Rankings

Rank | Program | Score
1 | Chicago | .2714
2 | Michigan | .2346
3 | Illinois | .1790
4 | Stanford | .1777
5 | California–Berkeley | .1507
6 | Cornell | .1306
7 | Ohio State | .1306
8 | Washington | .1213
9 | Texas–Austin | .1178
10 | Rochester | .1074
11 | Iowa | .1028
12 | Carnegie Mellon | .0978
13 | Minnesota | .0948
14 | Michigan State | .0873
15 | Pennsylvania State | .0645
16 | Arizona | .0643
17 | Wisconsin | .0605
18 | Pennsylvania | .0491
19 | Northwestern | .0429
20 | Indiana | .0411
21 | Harvard | .0407
22 | Florida | .0391
23 | Arizona State | .0388
24 | North Carolina | .0344
25 | Louisiana State | .0327
26 | Missouri | .0305
27 | Alabama | .0304
28 | Oregon | .0282
29 | Tennessee | .0281
30 | Arkansas | .0281
31 | Pittsburgh | .0255
32 | Massachusetts Inst. of Tech. | .0237
33 | Oklahoma State | .0217
34 | Georgia | .0217
35 | Southern California | .0216
36 | Kentucky | .0216
37 | Nebraska | .0216
38 | Columbia | .0214
39 | Texas A&M | .0196
40 | Texas Tech | .0195
41 | Mississippi | .0195
42 | North Texas | .0195
43 | Georgia State | .0173
44 | New York | .0171
45 | Mississippi State | .0152
46 | Kansas | .0149
47 | Florida State | .0130
48 | California–Los Angeles | .0129
49 | Houston | .0129
50 | South Carolina | .0108
51 | Maryland | .0087
52 | Louisiana Tech | .0087
53 | Virginia Poly. Inst. | .0087
54 | Tulane | .0085
55 | SUNY–Buffalo | .0066
56 | Case Western Reserve | .0065
57 | Syracuse | .0065
58 | Oklahoma | .0065
59 | Kent State | .0065
60 | Massachusetts | .0064
61 | South Florida | .0043
62 | Southern Illinois | .0043
63 | Texas–Arlington | .0043
64 | Utah | .0043
65 | Washington–St. Louis | .0043
66 | CUNY–Baruch | .0043
66 | Temple | .0043
68 | Cincinnati | .0043
69 | Colorado | .0043
70 | Connecticut | .0042
71 | Boston | .0042
72 | Central Florida | .0022
73 | Purdue | .0022
74 | George Washington | .0022
75 | Cleveland State | .0000
76 | Washington State | .0000
77 | Drexel | .0000
78 | Rutgers | .0000
79 | Virginia Commonwealth | .0000
80 | Memphis | .0000
each other. Such comparisons inevitably invite development of a suitable approach to ranking PhD programs. Compared with the various approaches used as a basis in prior studies that ranked PhD programs, the method applied in this study is unique by virtue of its simultaneous combination of three attributes as a ranking metric of research quality. These attributes are research awards, editorial board memberships for top journals, and holders of named positions. In terms of their capacity to signal the research quality of a PhD program, the attributes encompass the more traditional measures of productivity accomplishment, such as counts of either the number of articles published or the number of citations to the published research of a program's graduates. Also, because the multiattributes approach incorporates data that is readily available, PhD program rankings can be more efficiently updated annually.
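The combination of attributes described above can be sketched in a few lines of code. This is a minimal illustration only: the equal weights, the normalization by discipline-wide totals, and the function name are illustrative assumptions, not the article's exact formula.

```python
# Hypothetical sketch of a multiattributes ranking score. The weighting and
# normalization scheme here are assumptions for illustration, not the
# article's published method.

def multiattribute_score(awards, editorships, named_positions,
                         totals, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine three attribute counts for one program into a single score.

    Each count is expressed as a share of the discipline-wide total for
    that attribute, and the three shares are combined with the given
    weights. Higher scores indicate stronger graduate accomplishments.
    """
    counts = (awards, editorships, named_positions)
    return sum(w * (c / t) for w, c, t in zip(weights, counts, totals))

# Example: a program whose graduates hold 10 research awards, 12 editorial
# board seats on top journals, and 8 named positions, out of assumed
# discipline-wide totals of 200, 300, and 250, respectively.
totals = (200, 300, 250)
score = multiattribute_score(10, 12, 8, totals)
print(round(score, 4))  # → 0.0407
```

Because each input is a simple count drawn from published directories and journal mastheads, a score of this kind can be recomputed whenever the underlying data are refreshed, which is what makes annual updates practical.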
The development of a reliable and efficient means for ranking PhD programs is a worthwhile goal. Survey results from a study by Davis and McCarthy (2005) provided evidence that college rankings matter greatly to prospective PhD students, especially during the early stages of their process to identify a set of potential PhD programs. However, although readily available, the aforementioned college rankings focus primarily on the MBA programs of business colleges, and therefore a ranking of PhD programs for each one of the major disciplines of business would offer far more relevant information to prospective students. Also, AACSB (2003) emphasized that the development of PhD program rankings should provide reputational incentives to stimulate added investments in the programs by business schools, thereby counterbalancing the disproportionate influence of MBA rankings on business schools. The results from the present study show that a multiattributes approach to ranking PhD programs has the potential to successfully achieve the objective set forth by AACSB.
A decision to pursue a PhD is a significant one, as is the choice of programs to apply to. Rankings are an important information source for comparing programs during the early stages of the search process, but there are additional factors that prospective students should consider prior to finalizing their decisions. Some of these considerations are admission requirements, curriculum, amounts of financial support offered, geographic preferences, and preferences between a small-town location and a larger city location.
NOTES
Dr. Frank R. Urbancic's research interests are financial reporting standards and business education.
Correspondence concerning this article should be addressed to Frank R. Urbancic, Department of Accounting, Mitchell College of Business, University of South Alabama, Mobile, AL 36688, USA.
E-mail: furbanci@usouthal.edu
REFERENCES
Association to Advance Collegiate Schools of Business. (2002). Management education at risk. St. Louis, MO: Author.
Association to Advance Collegiate Schools of
Busi