JOURNAL OF EDUCATION FOR BUSINESS, 86: 148–154, 2011
Copyright © Taylor & Francis Group, LLC
ISSN: 0883-2323
DOI: 10.1080/08832323.2010.492049

Removing Size as a Determinant of Quality: A Per
Capita Approach to Ranking Doctoral Programs in
Finance
Roger McNeill White
University of Pittsburgh, Pittsburgh, Pennsylvania, USA


John Bryan White
U.S. Coast Guard Academy, New London, Connecticut, USA

Michael M. Barth
The Citadel, Charleston, South Carolina, USA

Correspondence should be addressed to John Bryan White, U.S. Coast Guard Academy, Management Department, 27 Mohegan Avenue, New London, CT 06320, USA. E-mail: john.b.white@uscga.edu

Rankings of finance doctoral programs generally fall into two categories: qualitative opinion surveys and quantitative analyses of research productivity. The consistency of these rankings suggests either that the best programs have the most productive faculty or that the university affiliations seen most often in publications are correlated with institutional quality, which biases the rankings toward larger programs. The authors introduce a per capita measure of research output to evaluate finance programs in a context that removes absolute size as a variable. The results indicate that smaller programs in the field are frequently overlooked in traditional rankings.
Keywords: ranking doctoral programs, ranking finance programs

Rankings of programs and institutions of higher education abound in both academic literature and the popular press. U.S. News and World Report and The Princeton Review annually publish thorough rankings of dozens of undergraduate and graduate programs, and multiple specialized rankings can be found for nearly any field of interest. Program directors are interested in a high ranking because a high ranking encourages better students to apply to their programs. University administrators analyze rankings closely because the prestige from one program's high ranking may spill over to the institution as a whole. Potential graduate students are keenly aware of the rankings of departments in their field, because a degree from a more prestigious institution usually translates to a more successful job search. Search committees also consider the school from which a candidate's degree comes when determining whom to interview. Even those without a professional interest will scan the rankings to see if they have bragging rights with respect to a neighbor or coworker.
Due to the variety of university programs in the United States, numerous ranking criteria and formulae exist and are used by various outlets. For instance, the U.S. News and World Report collegiate rankings look at quantitative factors, such as retention and graduation rates, faculty resources, student selectivity, and alumni giving. However, 25% of the ranking is based on a peer assessment, a highly qualitative measure (Morse & Flanigan, 2009).
For business schools, a critical measure of program quality is peer-reviewed journal publications, and numerous studies rank the productivity of business programs by the research output of their faculty. Until recently, however, ranking doctoral programs by the research productivity of their graduates had attracted curiously little academic interest. Most industries are evaluated by the quality of their product, not of the producer. Indeed, hiring committees are most interested in a graduate's research productivity; they want to know what to expect from a potential new hire. We seek to provide a more detailed evaluation of the per capita output of faculty and graduates, as opposed to their unadjusted contributions en masse. The correlation between the research activity of the faculty and the research activity of their graduates is also examined.

LITERATURE REVIEW
As previously stated, there are numerous rankings of business programs in general. The "Best Colleges and Universities" report published annually by U.S. News and World Report is perhaps the best known ranking of business programs; U.S. News and World Report has reported these rankings since 1990. However, that ranking is essentially a ranking of MBA programs, not a reflection of doctoral program quality. Roose and Andersen (1970) were among the early users of subjective surveys to evaluate doctoral programs. They surveyed economists from 130 departments, asking the respondents to classify departments as distinguished, good, or adequate. Siegfried's (1972) ranking of economics departments by published pages in major economics journals was highly correlated with the earlier peer rankings. Brooker and Shinoda (1976) applied the peer survey approach to the functional areas of business, which included finance programs. Two studies by Klemkosky and Tuttle found that ranking departments by faculty publications (1977a) and by the publications of finance doctoral graduates (1977b) mirrored the results of the peer surveys.
There have been several recent rankings of the research productivity of finance programs. Heck and Cooley (2008) reviewed the university affiliations (place of employment and source of degree) of the authors in the Journal of Finance (JF) over the last 60 years. Chan, Chen, and Lung (2007) used an expanded number of journal outlets (the top 15 finance journals) but limited the period studied to 1990–2004. Another study by Heck (2007) used four publication outlets, JF, the Journal of Financial and Quantitative Analysis (JFQA), the Journal of Financial Economics (JFE), and the Review of Financial Studies (RFS), from 1991 to 2005. That study also included a survey of finance department chairs or finance doctoral program directors on their opinions of the best programs. There is remarkable consistency among the departmental rankings from these studies. Each study ranked NYU at the top in research output, and the next four (alphabetically, as the order is not consistent) were Chicago, Harvard, Pennsylvania, and UCLA. Thus, the same schools top the rankings by research output, a generally accepted metric of graduate business faculty quality, regardless of which journals are included or which time period is used. The collective opinions of department chairs and program directors echo the research productivity results. Beyond the top five, the rankings become only slightly less consistent. For instance, 17 of the top 20 research-producing faculties are in the chairs' and directors' top 20. When publications are grouped
by the degree-granting institution, 18 of the top 20 institutions by graduate productivity are in the top 20 in the opinion of the department chairs and doctoral program directors.

Although the schools at the top of the list clearly have excellent faculty who are prodigious researchers, they also tend to have large finance faculties. Three of the top five research producers, NYU, Pennsylvania, and Harvard, have the three largest finance faculties, at 36, 33, and 30 members, respectively. Chicago is close behind, at number seven, with 22 faculty members. Thus, part of their research success is the result of the number of researchers at a single institution. A larger faculty with widely diverse interests and fields may be of interest to a prospective graduate student who is uncertain of his or her own interests, because it increases the likelihood that someone has expertise in the field the student ultimately selects. But the question remains whether the faculty members who share that interest are research productive. Using absolute research output as an indication of program quality is akin to claiming one firm's performance is superior to another's by comparing total sales or profits. For comparisons among firms of unequal size, finance professionals use financial ratios or entries from common-size financial statements to remove the influence of firm size when interpreting performance measures. The value of rankings by absolute research output is in part a reflection of exposure.
In this study we evaluated a similar performance measure of doctoral programs in finance, one that is independent of the influence of program size. A doctoral student learns the most from the faculty members with whom he or she is most involved: the individuals who read the student's assignments, compose and grade exams, lecture, and mentor the student on a daily basis, pushing the boundaries of finance and training the next generation of the profession. A graduate student can, at any point in time, be engaged by only a limited number of professors. Professors, likewise, are limited in the number of students they can teach and whose dissertations they may advise. Although a graduate student certainly has mind-broadening informal interactions with many members of the faculty, through scheduled paper presentations or random conversations in a social setting, the effects of these interactions are less significant than the structured contact of classes and supervised research. The professors with whom the student is not engaged during graduate school are much less relevant to the quality of training received. Once the faculty is beyond some critical mass large enough to staff a variety of fields within the discipline, the quantity of faculty plays very little role in the training of doctoral candidates. Rather, the quality of the individual faculty members with whom the graduate student studies is the critical factor in determining the educational quality of the program.

What is needed, therefore, is a better measure of the
quality of individual faculty. The average research output,
or per capita output, of an individual faculty member would
be a better measure of individual faculty quality than the
research output of the entire department. A per capita output measure removes the absolute faculty size as a factor in the rankings.
Likewise, the gauge of how effective a program is at producing productive researchers is not found in the absolute number of publications attributed to the authors' doctoral institutions. A program with many graduates should be expected to have more published articles from those graduates than a program with significantly fewer graduates. Publications per graduate of a particular institution is a much more instructive measure of the quality of a program's graduates.

This per capita approach should be of interest to two groups in particular: hiring committees and prospective graduate students. Hiring committees are much like major league scouts, combing applicants' curricula vitae for indications of skills and training that evolve into big-league talent. Athletic scouts have a great deal of data on which to base their recommendations: baseball has its runs batted in and batting average, basketball uses scoring averages and shooting percentages, and football running backs are evaluated on total yardage and yards per carry. Note that each of these sports has an absolute measure as well as an adjusted statistic. A basketball player with a 35-point scoring average who shoots 30% is much less attractive than one with a 30-point average who shoots 60%.
Unfortunately, hiring committees lack similar historical indicators when evaluating new PhDs in their search for individuals who will be good teachers and productive researchers. A new doctorate has limited teaching experience and often no manuscripts accepted for publication. The new doctorate's degree-granting institution is often the single most important source of expectations about the individual. The generally accepted thinking is that higher ranked programs accept more highly qualified applicants and graduate members of the profession who publish at a higher rate. Thus, a program's rank or perceived quality (e.g., being among the top programs, or a middle-tier program) is a critical input in the hiring decision.
However, hiring committees do not hire a program; they hire an individual. Therefore, rankings that are influenced by program size do not provide accurate information. A faculty may have produced 30 articles in the top four finance journals in the last 5 years, but if there are 30 members in the department, then their research output averages only one article per person over the 5-year period. The department as a whole is certainly well known because of its total output, but the individual output is only one top article over the 5-year period. A department of 10 that publishes 20 articles over the same period has an output of two articles per person. This department may be less well known because its name is seen on fewer articles. However, if research is an indicator of faculty quality, then this per capita approach is a better indicator than total output. A prospective graduate student looking for a research-active faculty would be better served by the smaller program in this example because of its higher per capita research output.
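
The arithmetic behind this comparison is easy to sketch in code (a minimal illustration in Python; the department labels and counts are the hypothetical ones from the example above, not data from the study):

```python
# Hypothetical departments from the example above. Totals alone favor
# department A; per capita output favors the smaller department B.
departments = {
    "A": {"articles": 30, "faculty": 30},  # widely seen: one article per person
    "B": {"articles": 20, "faculty": 10},  # less visible: two articles per person
}

for name, d in departments.items():
    per_capita = d["articles"] / d["faculty"]
    print(f"Department {name}: {d['articles']} articles / "
          f"{d['faculty']} faculty = {per_capita:.1f} per person")
# Department A: 30 articles / 30 faculty = 1.0 per person
# Department B: 20 articles / 10 faculty = 2.0 per person
```
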

Prospective graduate students know that successful research is the key to a successful academic career, so they are also keenly interested in the research productivity of a program's graduates. Graduates of large programs have a great number of publications in the top four finance journals; however, there is also a large number of graduates from those programs. The prospective graduate student should be more interested in how individual graduates of a program perform, not in the performance of a program's graduates en masse. Thus, a per capita approach to the publications of a program's graduates is much more indicative of future success than the total research output of a program's graduates.

METHOD
In the present study we extended Heck's (2007) analysis, examining authorships in the same four finance journals (JF, JFQA, JFE, and RFS) used in that study over the same 15-year period, 1991–2005. Rankings of this research output are made both by university affiliation at the time of publication and by the university that awarded the author's doctoral degree. The absolute number of articles is converted to a per capita output based on faculty size as of 2004, using the 2004–2005 Prentice Hall Guide to Finance Faculty, compiled by James R. Hasselback. Only tenured and tenure-track faculty were included in the study. Per capita rankings were compiled for both the professional affiliations and the degree-granting institutions of the authors. The results of the per capita rankings were then compared with the absolute rankings, as well as with the qualitative ranking of programs from the survey of finance department chairs or finance doctoral program directors.
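
The conversion and the resulting re-ranking can be sketched as follows; the four programs and counts hard-coded below are taken from Table 1 purely for illustration, and the snippet is an approximation of the computation under the stated definitions, not the authors' actual code:

```python
# Faculty publications (1991-2005, four journals) and 2004 faculty size
# for four programs, taken from Table 1.
programs = {
    "Chicago": {"pubs": 141, "faculty": 22},
    "NYU":     {"pubs": 226, "faculty": 36},
    "Cornell": {"pubs": 116, "faculty": 12},
    "Purdue":  {"pubs": 57,  "faculty": 9},
}

# Rank once by absolute output and once by per capita output.
by_total = sorted(programs, key=lambda p: programs[p]["pubs"], reverse=True)
by_per_capita = sorted(programs,
                       key=lambda p: programs[p]["pubs"] / programs[p]["faculty"],
                       reverse=True)
print("By total output:     ", by_total)        # NYU leads on raw counts
print("By per capita output:", by_per_capita)   # Cornell leads per person
```
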
It should also be noted that, in accordance with Heck's (2007) publication, only programs considered to be in the top 50 for each of his criteria (unadjusted publications by faculty, unadjusted publications by graduates, and rating of a program by other program directors) were evaluated. Thirty-nine schools make this list, and as with any ranking, some schools are at the bottom. However, given the stringent requirements even to be included in this list, all of the programs should be considered outstanding (Heck).
In evaluating the data according to the aforementioned criteria, the results should be interpreted with several assumptions in mind. The first assumption is that each academic listed in Hasselback's directory earned a PhD in finance. Although faculty members holding an MBA or JD were easily eliminated from consideration, many faculty members were no doubt wrongly included in this analysis. Economics PhDs often teach finance, and their inclusion certainly skewed the results in some manner. Many faculty members who would consider themselves professors of risk management or real estate were also wrongly included in this study. A number of institutions report faculty in such disciplines as professors of finance, and although their terminal degree may be in their respective field, such institutional reporting practices make classifying these academics very difficult.
Faculty per capita output was determined using the
number of faculty members at the end of the 15-year study
period. The implicit assumption was that faculty size does
not change dramatically over time. To the extent that a
finance faculty has added several new PhDs near the end
of the period under review, their per capita output would
probably be lower because of the increased denominator
and few (if any) publications from the new hires. Likewise,
having one extraordinarily productive researcher on a small
faculty will skew the average results upward, even though
the median output may be mediocre. Rankings based on
total output are also similarly affected by a single, highly
productive researcher. The implicit assumption is that
neither of these situations is common in the field. Faculty
size ranged from 8 to 36, with a median finance faculty of 17.
No attempt was made to adjust for where a faculty member was employed at the time an article was published. It is assumed that an article's prestige follows the author and does not reside with the author's institution at the time of publication. Custom seems to support this assumption, as articles are often referred to by the authors' names only, such as Fama, Fisher, Jensen, and Roll (1969), Black and Scholes (1973), or Modigliani and Miller (1958), with no mention of institutional affiliation.
Publications per graduate also carries implied assumptions. The principal assumption is that graduates of the top programs enter academia at roughly the same rate. If an institution places its graduates exclusively in industry, where publications in academic journals may not be as highly valued or encouraged, then that institution would rank poorly on the research activity of its graduates. Likewise, a preponderance of foreign students who return to their home countries after graduation and do not publish in English (the language of the four journals included in Heck's [2007] evaluation) would also impair a program's performance based on the per capita output of its graduates. While these shortcomings are acknowledged, they skew Heck's study in a similar fashion.
Finally, we acknowledge that the selection of four journals in finance and an analysis period of 1991–2005 are arbitrary and artificial constraints. University finance faculties most certainly changed during that period. However, these constraints were maintained in this study so that any results that differed from Heck's (2007) rankings could not be attributed to other factors. For the sake of continuity with Heck's study, we also did not distinguish between publications with multiple authors and those with single authors. If a paper had multiple authors, each author received equal and full credit for the publication.

RESULTS

Recall that the rankings of finance programs, whether by faculty research output, by the research output of the programs' doctoral graduates, or by the qualitative rating from finance department chairs and finance doctoral program directors, were remarkably consistent at the top. The top five (NYU, Chicago, Penn, Harvard, and UCLA) were the same, with only minor differences in order, in Chan et al.'s (2007) 15-year study using 21 journals, in Heck and Cooley's (2008) study using only articles in JF over the 1976–2005 period, and in Heck's (2007) study using four journals. Heck's survey of chairs included Stanford and MIT in the top five but excluded Harvard and NYU from that group. In total graduate publications, Chicago, Harvard, and UCLA maintained a spot in the top five, and they were joined by MIT and Stanford. Heck's results are reproduced in the first three columns of Table 1.
When faculty research output is evaluated on a per capita basis, the rankings vary dramatically. Cornell, which is 15th in the chairs' ranking, 11th in publications by graduates, and 6th in faculty publications, vaults to number one in per capita faculty output. UCLA and Chicago maintained a position in the top five on a per capita basis, and they were joined by Yale and Purdue. Table 2 reranks the programs from Table 1 and displays the top 10 based on per capita faculty output.

Purdue, ranked 28th in the chairs' survey, was fifth in per capita faculty publications. A quick glance at the table reveals that it did so with a faculty of nine. With such a small faculty, it is easy to see how its research productivity is overlooked when publications are evaluated en masse.
Evaluating finance doctoral programs on the basis of per capita graduate research brings several other surprises into the top 10. Chicago and MIT are the only programs from the chairs' top five that remain in the top five of research output per graduate. Rochester, which comes in at 14th in the chairs' ranking and 11th in total faculty publications, tops the ranking for output per graduate. Table 3 reranks the programs from Table 1 and displays the top 10 based on per capita doctoral graduate output. For both a hiring committee and a prospective graduate student, this ranking would seem to be of particular interest.
There seems to be a good deal of correlation between the research productivity of faculty and that of their graduates. The correlation coefficient between the number of faculty publications and the number of publications by graduates is 57%. When the relationship is examined on a per capita basis, the correlation is even higher: the correlation coefficient between per capita faculty publications and per capita graduate publications is 72.3%. This suggests that individual faculty research productivity is more important than total faculty productivity in predicting the potential research output of a new PhD in finance (see Table 3).
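
A sketch of how such a correlation coefficient can be computed follows; the four-program sample hard-coded here (values from Table 1) only makes the snippet runnable, whereas the reported figures of 57% and 72.3% are based on all programs in the study:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One entry per program, in the same order in both lists
# (Chicago, NYU, Cornell, Purdue; counts from Table 1).
faculty_pubs = [141, 226, 116, 57]
graduate_pubs = [564, 139, 128, 126]
print(f"r = {pearson(faculty_pubs, graduate_pubs):.3f}")
```
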


TABLE 1
Finance Program Ranking (Reported in Heck, 2007)

| School | Doctoral chair ranking | Graduate publications | Faculty publications | Graduates, 2004 | Faculty, 2004 | Publications per graduate | Publications per faculty |
|---|---|---|---|---|---|---|---|
| Chicago | 5 | 564 | 141 | 199 | 22 | 2.83 | 6.41 |
| Stanford | 4.984 | 217 | 58 | 123 | 17 | 1.76 | 3.41 |
| MIT | 4.919 | 304 | 66 | 93 | 15 | 3.27 | 4.40 |
| Pennsylvania | 4.902 | 184 | 153 | 142 | 33 | 1.30 | 4.64 |
| UCLA | 4.656 | 219 | 144 | 92 | 16 | 2.38 | 9.00 |
| UC Berkeley | 4.617 | 111 | 48 | 79 | 14 | 1.41 | 3.43 |
| Carnegie Mellon | 4.583 | 103 | 54 | 36 | 12 | 2.86 | 4.50 |
| Northwestern | 4.548 | 147 | 76 | 144 | 25 | 1.02 | 3.04 |
| Harvard | 4.516 | 252 | 164 | 119 | 30 | 2.12 | 5.47 |
| NYU | 4.516 | 139 | 226 | 109 | 36 | 1.28 | 6.28 |
| Yale | 4.45 | 121 | 57 | 37 | 8 | 3.27 | 7.13 |
| Duke | 4.361 | 62 | 116 | 26 | 21 | 2.38 | 5.52 |
| Columbia | 4.361 | 65 | 76 | 59 | 24 | 1.10 | 3.17 |
| Rochester | 4.311 | 181 | 57 | 52 | 12 | 3.48 | 4.75 |
| Cornell | 4.271 | 128 | 116 | 47 | 12 | 2.72 | 9.67 |
| Michigan | 4.082 | 96 | 89 | 81 | 18 | 1.19 | 4.94 |
| North Carolina | 3.847 | 76 | 59 | 123 | 20 | 0.62 | 2.95 |
| Ohio State | 3.817 | 142 | 71 | 117 | 19 | 1.21 | 3.74 |
| Texas | 3.78 | 94 | 77 | 95 | 19 | 0.99 | 4.05 |
| Maryland | 3.508 | 29 | 72 | 30 | 17 | 0.97 | 4.24 |
| Washington University | 3.5 | 30 | 34 | 41 | 14 | 0.73 | 2.43 |
| Illinois | 3.5 | 61 | 89 | 166 | 28 | 0.37 | 3.18 |
| Indiana | 3.467 | 58 | 26 | 114 | 22 | 0.51 | 1.18 |
| Boston College | 3.45 | 15 | 58 | 27 | 16 | 0.56 | 3.63 |
| University of Washington | 3.35 | 66 | 28 | 99 | 18 | 0.67 | 1.56 |
| Minnesota | 3.339 | 35 | 29 | 30 | 12 | 1.17 | 2.42 |
| Wisconsin | 3.339 | 59 | 47 | 109 | 17 | 0.54 | 2.76 |
| Purdue | 3.31 | 126 | 57 | 89 | 9 | 1.42 | 6.33 |
| Arizona State | 3.203 | 26 | 49 | 41 | 20 | 0.63 | 2.45 |
| Florida | 3.167 | 60 | 59 | 92 | 20 | 0.65 | 2.95 |
| Penn State | 3.052 | 13 | 36 | 64 | 15 | 0.20 | 2.40 |
| Utah | 2.949 | 32 | 51 | 36 | 12 | 0.89 | 4.25 |
| Iowa | 2.949 | 45 | 28 | 65 | 15 | 0.69 | 1.87 |
| Pittsburgh | 2.776 | 32 | 18 | 36 | 9 | 0.89 | 2.00 |
| Arizona | 2.763 | 12 | 21 | 42 | 12 | 0.29 | 1.75 |
| Michigan State | 2.746 | 16 | 41 | 129 | 17 | 0.12 | 2.41 |
| Georgia | 2.724 | 21 | 26 | 122 | 16 | 0.17 | 1.63 |
| Oregon | 2.7 | 21 | 22 | 42 | 10 | 0.50 | 2.20 |
| Virginia Tech | 2.474 | 21 | 23 | 52 | 20 | 0.40 | 1.15 |
| Virginia | 2.466 | 11 | 45 | 35 | 12 | 0.31 | 3.75 |

Note. The University of Pittsburgh was not included in Heck's (2007) study.

CONCLUSIONS AND SUGGESTIONS FOR FURTHER RESEARCH

We have gone to great lengths to describe why many smaller, quality doctoral programs in finance are often overlooked by traditional ranking schemes. Although academic excellence is not defined solely by the quantity of research, research output is certainly a universally recognized and quantifiable measure of success. No critic would deny that the four journals used in this analysis are preeminent in the field of finance. The per capita approach used in this study should be of great interest to prospective PhD candidates in finance and to hiring committees, if not to the entire discipline.

It is acknowledged that the period of study, 1991–2005, is somewhat dated and that including only four top journals in the definition of research is most certainly a narrow classification of finance scholarship. However, Heck's (2007) study provided two remarkably consistent and nearly universally accepted rankings of the quality of finance programs: a survey of finance program directors (or chairs) and a quantitative measure of publications. Extending that study with a per capita measure of research productivity required using the same time period and the same definition of research if the results were to permit any meaningful comparison with Heck's rankings. What could be inferred if rankings by total publications using four journals from 1991–2005 differed significantly from rankings of per capita research output using a more comprehensive list of journals over a more recent period?


TABLE 2
Doctoral Programs Ranked by Per Capita Faculty Publications

| School | Doctoral chair ranking | Graduate publications | Faculty publications | Graduates, 2004 | Faculty, 2004 | Publications per graduate | Publications per faculty |
|---|---|---|---|---|---|---|---|
| Cornell | 4.271 | 128 | 116 | 47 | 12 | 2.72 | 9.67 |
| UCLA | 4.656 | 219 | 144 | 92 | 16 | 2.38 | 9.00 |
| Yale | 4.45 | 121 | 57 | 37 | 8 | 3.27 | 7.13 |
| Chicago | 5 | 564 | 141 | 199 | 22 | 2.83 | 6.41 |
| Purdue | 3.31 | 126 | 57 | 89 | 9 | 1.42 | 6.33 |
| NYU | 4.516 | 139 | 226 | 109 | 36 | 1.28 | 6.28 |
| Duke | 4.361 | 62 | 116 | 26 | 21 | 2.38 | 5.52 |
| Harvard | 4.516 | 252 | 164 | 119 | 30 | 2.12 | 5.47 |
| Michigan | 4.082 | 96 | 89 | 81 | 18 | 1.19 | 4.94 |
| Rochester | 4.311 | 181 | 57 | 52 | 12 | 3.48 | 4.75 |

Using 15 finance journals from 2000–2009 would produce a more extensive and certainly more recent indication of research productivity. But the value of such a ranking would be diminished in the absence of a comparable survey ranking from program directors.
It should be noted that we evaluated only programs that qualified for Heck's (2007) ranking under several strict criteria, and the inclusion of a program on this list is quite an accomplishment in itself. Unfortunately, such restrictive entrance requirements for the sample result in the exclusion of several quality programs that should not be overlooked. For instance, the University of Pittsburgh's nine finance faculty members published 18 articles in the four journals used in Heck's study. This per capita output of 2.0 articles over the period under review would place the Pitt finance program at number 33. Pitt's 36 graduates published 32 qualifying articles, or 0.9 articles per capita, an output that would rank tied for 22nd if included in an evaluation of per capita publications by graduates. This situation also leads to the belief that other small programs would rank very highly in terms of producing productive graduates if they were included in the study.
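
This placement can be checked mechanically against the publications-per-graduate column of Table 1 (a sketch; the list below reproduces that column for the 39 programs in Heck's sample, excluding Pittsburgh's own row):

```python
# Publications per graduate for the 39 programs in Heck's (2007) sample,
# in Table 1 order, with Pittsburgh's own row excluded.
heck_pubs_per_grad = [
    2.83, 1.76, 3.27, 1.30, 2.38, 1.41, 2.86, 1.02, 2.12, 1.28,
    3.27, 2.38, 1.10, 3.48, 2.72, 1.19, 0.62, 1.21, 0.99, 0.97,
    0.73, 0.37, 0.51, 0.56, 0.67, 1.17, 0.54, 1.42, 0.63, 0.65,
    0.20, 0.89, 0.69, 0.29, 0.12, 0.17, 0.50, 0.40, 0.31,
]
pitt = round(32 / 36, 2)  # 0.89 at the table's two-decimal precision
rank = 1 + sum(1 for v in heck_pubs_per_grad if v > pitt)
print(f"Pitt would enter at rank {rank}")  # rank 22, tied with Utah's 0.89
```
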
This study’s value is more than merely inserting smaller
programs into an ordinal ranking of finance programs. Rather,
the evaluation of finance programs on per capita research output enables researchers to get a sense of which programs are
comparable when they are of different sizes. Hiring committees should find this information quite useful as they seek to
identify potential hires that meet the needs of their institution. This study would also be useful to prospective graduate
students as they seek to find a program that is a good match
to their aspirations. However, programs with low per capita
output should not be dismissed as a poor fit for the potential
student. Rather, the prospective student should investigate
as to whether there are productive faculty in the specific
field they intend to study. Likewise, an institution with low
research output from its graduates may send many of its
graduates abroad or to industry, where a productive career is
measured by a metric other that peer-reviewed journal output. While research continues to be a factor in promotion and
tenure decisions, the importance of teaching is increasing. A

TABLE 3
Doctoral Programs Ranked by Per Capita Graduate Publications

| School | Doctoral chair ranking | Graduate publications | Faculty publications | Graduates, 2004 | Faculty, 2004 | Publications per graduate | Publications per faculty |
|---|---|---|---|---|---|---|---|
| Rochester | 4.311 | 181 | 57 | 52 | 12 | 3.48 | 4.75 |
| Yale | 4.45 | 121 | 57 | 37 | 8 | 3.27 | 7.13 |
| MIT | 4.919 | 304 | 66 | 93 | 15 | 3.27 | 4.40 |
| Carnegie Mellon | 4.583 | 103 | 54 | 36 | 12 | 2.86 | 4.50 |
| Chicago | 5 | 564 | 141 | 199 | 22 | 2.83 | 6.41 |
| Cornell | 4.271 | 128 | 116 | 47 | 12 | 2.72 | 9.67 |
| Duke | 4.361 | 62 | 116 | 26 | 21 | 2.38 | 5.52 |
| UCLA | 4.656 | 219 | 144 | 92 | 16 | 2.38 | 9.00 |
| Harvard | 4.516 | 252 | 164 | 119 | 30 | 2.12 | 5.47 |
| Stanford | 4.984 | 217 | 58 | 123 | 17 | 1.76 | 3.41 |


A prospective graduate student who aspires to a career in academia would do well to evaluate the classroom training a program provides.
In conclusion, no ranking system is flawless or all-encompassing. At the very least, those considering entering a terminal degree program in finance now have a viable basis for quantifying the quality of PhD programs and faculties of different sizes. This work should also spur discussion among the upper echelons of finance, as it reveals that present perceptions of program quality depend more on the collective quantity of a program's exposure than on the individual research abilities of its graduates and faculty.
Further research in this area is certainly merited. Program
directors should continue to be surveyed regarding program
quality. However, these surveys should also request a list of
the top 10 (or 15 or 20) journals in finance. Subsequent studies emphasizing research productivity could then evaluate
programs based on this broader list of journals deemed most
significant by program directors. Per capita research productivity should continue to be measured in order to evaluate
smaller programs that are not included in the opinion survey
results.

REFERENCES
Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, 637–654.
Brooker, G., & Shinoda, P. (1976). Peer ratings of graduate programs in business. Journal of Business, 49, 240–251.
Chan, K. C., Chen, C. R., & Lung, P. P. (2007). One-and-a-half decades of global research output in finance: 1990–2004. Review of Quantitative Finance and Accounting, 28, 417–439.
Fama, E. F., Fisher, L., Jensen, M., & Roll, R. (1969). The adjustment of stock prices to new information. International Economic Review, 10, 1–21.
Hasselback, J. R. (Ed.). (2005). 2004–2005 Prentice Hall guide to finance faculty. Upper Saddle River, NJ: Prentice Hall.
Heck, J. L. (2007). Establishing a pecking order for finance academics: Ranking of U.S. finance doctoral programs. Review of Pacific Basin Financial Markets and Policies, 10, 479–490.
Heck, J. L., & Cooley, P. L. (2008). Sixty years of research leadership: Contributing authors and institutions to the Journal of Finance. Review of Quantitative Finance and Accounting, 31, 287–309.
Klemkosky, R. C., & Tuttle, D. L. (1977a). The institutional source and concentration of financial research. Journal of Finance, 32, 901–907.
Klemkosky, R. C., & Tuttle, D. L. (1977b). A ranking of doctoral programs by financial research of graduates. Journal of Financial and Quantitative Analysis, 12, 491–497.
Modigliani, F., & Miller, M. H. (1958). The cost of capital, corporation finance and the theory of investment. American Economic Review, 48, 261–297.
Morse, R., & Flanigan, S. (2009). How we calculate the rankings. U.S. News and World Report. Retrieved from http://www.usnews.com/articles/education/best-business-schools/2009/04/22/business-school-rankings-methodology.html
Roose, K. D., & Andersen, C. J. (1970). A rating of graduate programs. Washington, DC: American Council on Education.
Siegfried, J. J. (1972). The publishing of economic papers and its impact on graduate faculty ratings, 1960–1969. Journal of Economic Literature, 10, 31–49.
