
Business Schools’ Policies Regarding
Publications in Electronic Journals
GERALDINE E. HYNES
ROBERT H. STRETCHER
SAM HOUSTON STATE UNIVERSITY
HUNTSVILLE, TEXAS

ABSTRACT. Perhaps the most obvious
example of innovation in faculty performance is the adoption of new technologies
for research. Both administrators and faculty have expressed concern about the role that electronic publications play in their
research evaluation processes, particularly
in business schools, where scholarly publication is often emphasized over other activities. Yet there appears to be no empirical evidence on how electronic journals are evaluated relative to printed versions. Therefore, in this study, the
authors sought to determine how business
school deans regard the formats in which
their faculty is publishing.
Copyright © 2005 Heldref Publications

Evaluating Business Faculty
Research Performance

Over the years, much research has
been conducted on the topic of
business faculty evaluation, particularly
on the relative importance of teaching
and research productivity (Ehie & Karathanos, 1994). A national survey by
the Carnegie Foundation found that
45% of business faculty felt that straight
counts of publications are the chief indicator of research productivity at their
institutions (Boyer, 1990), which is
higher than the percentage of faculty
across all disciplines (Radhakrishna &
Jackson, 1993). Furthermore, 45% of
business faculty felt that the reputation
of the press or journal publishing the
research was unimportant for tenure
review (Boyer). Bures and Tong (1993)
surveyed 590 finance professors on the
evaluation systems used to measure faculty performance and found strikingly
similar results. Finance faculty affirmed
that the number of articles in professional journals was the factor most
affecting their performance evaluations.
The Association to Advance Collegiate Schools of Business International
(AACSB), the major accrediting body
of collegiate business programs, takes a different view of faculty evaluation.
Since 1991, AACSB has advocated
standards with a strong focus on the
institution’s mission. For example, if
an institution positions itself primarily

as a teaching institution, then teaching
performance should count more heavily during faculty evaluations than
scholarship activity. However, in several studies over the past 25 years,
business faculty and deans of AACSB-accredited schools have consistently
expressed the belief that publishing
record is counted more heavily than
teaching in faculty evaluations, regardless of the institution’s stated mission
(Bures & Tong, 1993; Ehie &
Karathanos, 1994; Lein & Merz, 1978;
Tong & Bures, 1987).
Are business faculty members—
whether in AACSB-accredited or non-AACSB schools—satisfied with their
evaluation systems? Unfortunately,
they are not. In the 1989 Carnegie Foundation study, over two thirds of
business faculty agreed that we need
better ways to evaluate scholarly performance (Boyer, 1990). Likewise,
approximately 36% of the respondents
in Bures and Tong’s (1993) survey
expressed dissatisfaction with their
current systems. This disturbing lack
of confidence in evaluation systems
demands that scholarship be more creatively assessed. As a first step, Boyer
(p. 35) urged that faculty assessment
criteria take into account “a broader
range of writing” and changing
social contexts. Boyer asserted that
“Standards must be flexible and creative…and innovation should be
rewarded, not restricted” (p. 80).


Evaluating Publications in
Electronic Journals
Perhaps the most obvious example of
innovation in faculty performance is the
adoption of new technologies for teaching and research (Bloedel, 2001; McInnis, 2002). Computer-mediated communication, in particular, is reconfiguring
the way in which knowledge is produced and disseminated (McInnis,
2002). The Internet is creating new
opportunities to publish research results
(Wilga, 2000). It is estimated that, of the
44,000 active scholarly journals (refereed and nonrefereed) listed in Ulrich’s
Periodicals Directory, over 41% now
offer full text or full content online (A.
Jerabek, personal communication, February 26, 2004) and the number is
steadily increasing. Reasons for migrating to electronic media are (a) other
periodicals are taking that route, (b) it is
less expensive to publish electronically
than in paper format, (c) it is less labor
intensive, (d) it allows for just-in-time
delivery, (e) it allows for greater diffusion of knowledge across disciplines, and (f) it allows for inclusion of articles
in other electronic indexes and bibliographies (B. Shwom, personal communication, February 27, 2004). Thus,
electronic publication may better serve
the purposes of the contemporary
researcher.
As the electronic journal (eJournal)
comes of age, new issues emerge. Both
administrators and faculty have expressed
concern about the role that eJournal publications play in their research evaluation
systems. Because scholarship activities
are increasingly heterogeneous, it is necessary to derive new standards by which
research productivity is judged (Marine,
2002), yet the literature provides no evidence that faculty evaluation systems are
keeping pace by developing and incorporating new criteria for electronic scholarship outlets.
Research Objectives
In this study, we sought to determine
how business school deans regard the
format of journals in which their faculty
is publishing. While scholarly journals
are increasingly migrating to electronic format, it is unclear how administrators
evaluate electronic publications compared to printed formats when conducting faculty performance appraisals. In
addition, we attempted to identify any
differences regarding preferred formats
of research publications along demographic factors such as institutional mission, size, and region.
METHOD
Population
We surveyed deans of U.S. business
schools that are members of the
AACSB. We mailed a questionnaire by
USPS to all 419 U.S. business school
deans included on the AACSB mailing
list of member institutions. One hundred seven usable surveys were
returned, for a 25.5% response rate.
Considering that most return rates for
USPS mail surveys hover around 10%, this level of response indicates strong
interest in this issue among deans.

Instrumentation

We developed the survey instrument for this study in consultation with our business school dean and associate dean. Two survey statisticians at our institution reviewed the instrument for comprehensiveness, possible bias, and statistical integrity. We made revisions according to their suggestions. The final version consisted of 24 forced-choice items. Items were grouped into three sections: (a) your business school's current policies, which contained items on the factors applied when rating the quality of a journal, including eJournals, conference proceedings, and abstracts; (b) your personal opinions regarding evaluation of faculty publications, which contained items on respondents' own views of journal quality, including eJournals, conference proceedings, and abstracts, as well as items on predicted changes to business school policies; and (c) demographic questions, which contained items on the university's size, accreditation, and Carnegie classification, along with similar items about the business school. Because of the limited accessibility and small size of our target population, deans of accredited business schools in U.S. universities and colleges, we felt that pilot testing was impractical. A copy of the survey instrument appears in Appendix A.

Data Collection and Analysis

The questionnaire was mailed to all U.S. deans of AACSB-member schools in May 2004. The researchers' own dean contributed a cover letter supporting the study (Appendix B), personally signing each copy. A return envelope was included in the packet of materials. We chose to use a paper instrument rather than a Web-based or e-mail survey in order to capture responses from any deans who do not favor computer-mediated communication.

We entered responses into a database and performed statistical analyses to identify patterns of results (frequencies, means, and percentages). Findings from sections 1 and 3 of the survey instrument (described above) are reported in the following section.
RESULTS
Journal Quality Ratings
An overwhelming 84.11% of business school deans said that their faculty
evaluation policies include criteria for
rating the quality of a journal in which
the faculty publishes. The business
schools that do not rate journal quality
fit a clear demographic profile: In general, these schools have fewer than
1,500 business students (73.3%), are in
the Southern Association (53.33%) or
North Central Association (26.7%)
accrediting region, and fall into the
Carnegie classification of Masters I
(73.3%).
The deans who responded that their
schools do rate journal quality were then
asked about the criteria that they apply.
These deans were asked to rate the
importance of various factors used to
determine journal quality on a scale of
1–5, with higher ratings indicating
greater importance. Table 1 shows the relative importance ratings for seven factors.
As Table 1 shows, three factors were clearly the most important criteria to deans when evaluating journal quality: the peer review process (83.33%), the journal's professional reputation (62.22%), and the journal's acceptance rate (54.44%).

TABLE 1. Respondents' Ratings of Journal Evaluation Factors (in percentages)

Level of       Acceptance   Cabell's   Sponsoring     Peer       Issue      No. of      Professional
importance        rate      listing    organization   review    frequency   citations    reputation

5                 54.44      30.00        13.33        83.33       2.22       28.89         62.22
4                 23.33      22.22        20.00        12.22       3.33       12.22         18.89
3                 14.44      17.78        22.22         2.22      17.78       15.56          4.44
2                  2.22      12.22        16.67         1.11      26.67       18.89          4.44
1                  2.22      14.44        21.11         1.11      43.33       20.00          7.78
No response        3.33       3.33         6.67         0.00       6.67        4.44          2.22

Note. 5 = very important and 1 = not important.

Respondents were also asked to rate
the importance of a journal’s format—
electronic versus hardcopy print—in the
journal evaluation process. Results are
shown in Table 2.
The findings in Table 2 indicate that
about two thirds of deans of business
schools that rate journals use format as
a filtering mechanism to some degree
when evaluating journals. While only
20% gave format a high level of importance (rating it “5” or “4” on the 5-point scale), it is definitely a consideration in most current business school
policies.
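The 20% figure above is simply the sum of the top two rating categories reported in Table 2 (8.89% plus 11.11%). A minimal sketch of that top-two-box calculation follows; the variable names are illustrative only.

    # Percentages taken from Table 2: ratings of journal format as an evaluation factor.
    format_importance = {"5": 8.89, "4": 11.11, "3": 22.22, "2": 15.56, "1": 35.56, "no response": 6.67}

    # Share of deans rating format as highly important ("5" or "4").
    top_two_box = format_importance["5"] + format_importance["4"]
    print(round(top_two_box, 2))  # 20.0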
TABLE 2. Respondents' Ratings of Journal Format as an Evaluation Factor (in percentages)

Level of         Journal format:
importance       electronic vs. print

5                       8.89
4                      11.11
3                      22.22
2                      15.56
1                      35.56
No response             6.67

Note. 5 = very important and 1 = not important.

Electronic Journal Quality Ratings

Business deans were also asked whether their schools consider eJournal publications to be valid intellectual contributions. While the overall response to
this item was positive, with 85.7% of all
respondents agreeing that eJournal publications are considered to be valid outlets for scholarship, we thought it would
be useful to examine more closely the
small group of deans (14.3%) who stated that their schools do not consider
eJournal publications to be valid intellectual contributions. These schools
share consistent demographic characteristics, as shown in Table 3.
1. They can be described as relatively
large academic units (73.4% have over
1,500 students).
2. They are in midsize institutions
(86.6% have between 5,000 and 25,000
students).
3. The majority of their institutions
are ranked Carnegie Research Extensive
(53.3%).
4. The universities are accredited by the Southern (40%) or North Central (26.7%) Regional Accrediting Association.
5. The business schools have been
AACSB International members for
more than 25 years (60%).
Returning to the 85.7% of business
schools that do recognize the legitimacy
of eJournals, the majority of these deans
also stated that eJournal quality was
evaluated at their schools (see Table 4).
As Table 4 shows, among business
schools that have rating criteria in place
for evaluating print journal quality, a
majority (57.78%) also have criteria in
place for rating eJournal quality,
although almost 27% of schools that
rate print journals do not rate eJournals.
Conversely, of the business schools that
do not rate print journals, 93.33% also
do not rate eJournals.
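A cross-tabulation like Table 4 could be produced along the following lines. This is only a sketch with illustrative column names and made-up responses, not the authors' actual analysis.

    import pandas as pd

    # Hypothetical yes/no responses to the two policy questions.
    df = pd.DataFrame({
        "rates_print_journals": ["yes", "yes", "no", "yes", "no", "yes"],
        "rates_ejournals":      ["yes", "no",  "no", "yes", "no", "yes"],
    })

    # Column percentages, as in Table 4: within each print-journal group,
    # the share of schools that do or do not rate eJournals.
    table = (pd.crosstab(df["rates_ejournals"], df["rates_print_journals"],
                         normalize="columns") * 100).round(2)
    print(table)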

Probing deeper, we asked the deans
to compare eJournals and print journals.
Table 5 shows that eJournals appear to
be generally accepted as equivalent to
print journals, especially at schools
where eJournals are rated. Only 18.69%
of all the respondents stated that their
schools weighted eJournals as inferior
to print journals. The perception of inferiority is stronger among the business
school deans who stated that their
schools do rate eJournal quality. On the
one hand, almost 30% in this group stated that eJournals are weighted as inferior to print journals. On the other hand,
two thirds of these deans (66.67%) stated that eJournals and print journals are
treated equivalently at their schools.
None of the deans responded that eJournals are weighted as superior to print
journals, whether or not their schools
rate the quality of eJournals.
Such negative attitudes toward electronic publications are apparently a
source of concern among business faculty. Over 80% of business school deans
in our study stated that less than 20% of
their faculty include electronic publications (journals, proceedings, or
abstracts) in their annual faculty activity
report.
DISCUSSION
Our preliminary findings indicate
that electronic publications are typically considered along with printed-paper
publications during business school
faculty research evaluations. However,
their status in the mix remains tenuous.
Business school policies vary widely
in their recognition of electronic outlets for faculty research publication

TABLE 3. Demographics of Business Schools That Do Not Count Electronic Journal Publications (n = 15)

Characteristic                      %

University enrollment
  < 5,000                          0.0
  5,000–12,000                    33.3
  12,000–25,000                   53.3
  > 25,000                        13.3
Carnegie classification
  Research extensive              53.3
  Research intensive              20.0
  Masters I                       20.0
  Masters II                       6.7
Regional accrediting agency
  Southern                        40.0
  North Central                   26.7
  New England                     20.0
  Northwest                        6.7
  Western                          0.0
Business school enrollment
  200–800                         13.3
  800–1,500                       13.3
  1,500–3,000                     26.7
  > 3,000                         46.7
Years in AACSB
  > 25                            60.0

Note. AACSB = The Association to Advance Collegiate Schools of Business.

and in their criteria for evaluating electronic media. A troubling 30% of
schools that evaluate journals during
faculty research reviews expressed
bias against publications in electronic
media—they are automatically considered inferior to publications in print
media. In short, discipline-wide agreement about the quality of electronically published research has not yet
emerged.
Analysis of our survey results is
incomplete at this time. This paper
reports only on current business school
faculty evaluation policies regarding
research published in print and electronic journals. Future reports will present
our findings about (a) business school
deans’ personal opinions and how they
compare to their institutions’ current
policies, and (b) the deans’ regard for
conference proceedings and abstracts
(print and electronic), as well as their
schools’ policies for counting conference proceedings and abstracts as
research publications. Demographic
patterns such as institutional size,
region, Carnegie classification, and
accreditation status will also be reported
for these results. Ultimately, we hope
that clearer impressions of the current
status and future trends in evaluating
electronic publications will emerge.
Conclusions and
Recommendations
Yolanda Moses (2001), president of the American Association for Higher Education, identified the role of technology as one of six important trends in colleges and universities. She foresaw new technologies as having a significant impact on the professoriate.
Our study focuses on one aspect of technology’s impact—the increasing number of faculty publications in eJournals.
It is fair to assume that the trend will
continue. However, our research indicates that administrators are not keeping
pace with their faculties’ migration to
electronic outlets for scholarship. Many
faculty performance evaluation systems
ignore or reflect a bias against publishing in eJournals.

The challenge to business school
deans is to rethink their traditional evaluation standards. Moses (2001) called
for universities to establish research
agendas that will answer such institutional questions. From this research,
best practices should develop. Mt.
Holyoke College, South Hadley, Massachusetts, provides a model of such
research work. Its advisory board
designed a set of guidelines for evaluating academic work in the digital age
(Wilga, 2000). Advisory board members suggest that administrators seek
outside advice and consultation on quality measures. Creating venues for periodic discussion of technology-related
issues may prove one of the most
important actions that can be taken.
In addition to calling on external
experts, Centra (1987, 1993) recommended including peer evaluation in
the faculty review process. Medlin,
Green, and Whitten (2001–2002) surveyed business schools and found that
peer review is an important component
of faculty evaluation in 54% of
AACSB-member schools. When used,
peer evaluation is rigorous, and both
deans and faculty perceive the results
as important.

TABLE 4. Business Schools That Rate or Do Not Rate Print Journals and Rate or Do Not Rate Electronic Journals (eJournals, in percentages)

                              Respondents who
Survey response          Rate print journals   Do not rate print journals

Rate eJournals                  57.78                     6.67
Do not rate eJournals           26.67                    93.33
No response                     15.56                     0.00
Total                          100.00                   100.00

TABLE 5. Respondents' Comparisons of Electronic Journals (eJournals) and Print Journals (in percentages)

                                                          All survey     Respondents who
Comparison option                                         respondents    rate eJournals

eJournals and print journals are treated equivalently        40.19           66.67
eJournals are weighted as inferior to print journals         18.69           29.63
eJournals are weighted as superior to print journals          0.00            0.00
No response                                                   41.12            3.70


Wergin (1999) concurred that peers
and outside reviewers are valuable
resources in the evaluation process. He
called for “decentralizing” faculty evaluation to the maximum extent possible,
so that research publication quality
reflects the academic department’s cultural values and mission. Following
Wergin’s and others’ advice (Hatch,
1997; Mills & Hyle, 1999; Waller,
2004), administrators should be cognizant of the culture of their academic
unit and seek changes in faculty evaluation tools that account for the culture.
Thus, in business and professional
schools where a significant degree of
heterogeneity exists among the scholarship activities of the faculty, these
changes should include evaluation
methods appropriate to publishing in a
range of media.
On a more immediate level, the Mt.
Holyoke guidelines suggest additional
practices. Evaluators should judge
scholarly work as it was designed
specifically for presentation in the
medium, rather than judging hard copy
substitutes for, say, Web pages or online
discussion forums. Senior faculty
should take responsibility for keeping
the department informed of new
changes in the norms for publication
and scholarly interaction. Lastly, they
suggest that junior faculty and prospective hires should be clearly informed
about how Web-based publications are
used in reappointment, tenure, and promotion evaluations.
In conclusion, the rapid pace of technological change makes it impossible for any single set of guidelines to cover all scholarship published in electronic media. Nevertheless, administrators are
encouraged to establish evaluation policies and practices that are relevant, credible, and fair. Those policies should state
standards for faculty uses of electronic
media as their research outlets, thus
ensuring the attraction and retention of
the best of the new breed of faculty. As
Boyer (1990) observed, “even the best of
our institutions must continuously
evolve. And to sustain the vitality of
higher education in our time, a new
vision of scholarship is required” (p. 81).
NOTE
Correspondence concerning this article should
be addressed to Dr. Geraldine E. Hynes, Department of General Business and Finance, College of
Business Administration, Sam Houston State University, Box 2056, Huntsville, Texas 77341. Email: hynes@shsu.edu
REFERENCES

Bloedel, J. R. (2001). Evaluating research productivity. The Research Mission of Public Universities. Retrieved June 20, 2004, from http://merrill.ku.edu/publications/2001whitepaper/bloedel.html
Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching.
Bures, A. L., & Tong, H. M. (1993). Assessing finance faculty evaluation systems: A national survey. Financial Practice & Education, 3, 141–145.
Centra, J. A. (1987). Formative and summative evaluation: Parody or paradox? In L. M. Aleamoni (Ed.), New directions for teaching and learning, no. 31: Techniques of evaluating and improving instruction (pp. 47–55). San Francisco: Jossey-Bass.
Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass.
Ehie, I. C., & Karathanos, D. (1994). Business faculty performance evaluation based on the new AACSB accreditation standards. Journal of Education for Business, 69, 257–263.
Hatch, M. J. (1997). Organization theory: Modern, symbolic, and postmodern perspectives. London: Oxford University Press.
Lein, D. D., & Merz, C. M. (1978). Faculty evaluation in schools of business: The impact of AACSB accreditation on promotion and tenure decisions. Collegiate News and Views, 31(2), 21–24.
Marine, R. J. (2002). A systems framework for evaluation of faculty Web-work. In C. L. Colbeck (Ed.), New directions for institutional research, no. 114: Evaluating faculty performance (pp. 63–71). San Francisco: Jossey-Bass.
McInnis, C. (2002). The impact of technology on faculty performance and its evaluation. In C. L. Colbeck (Ed.), New directions for institutional research, no. 114: Evaluating faculty performance (pp. 53–61). San Francisco: Jossey-Bass.
Medlin, B., Green, K., Jr., & Whitten, D. (2001–02). Peer evaluations at AACSB-accredited institutions. Academic Forum #19. Retrieved June 20, 2004, from http://www.hsu.edu/faculty/afo/2001-02/contents19.htm
Mills, M., & Hyle, A. (1999). Faculty evaluation: A prickly pear. Higher Education, 38, 351–371. Retrieved March 15, 2004, from EBSCOHost database.
Moses, Y. T. (2001). Scanning the environment: AAHE's president reports on trends in higher education. AAHE Bulletin. Retrieved June 20, 2004, from http://www.aahebulletin.com/public/archive/scanning.asp?pf=1
Radhakrishna, R. B., & Jackson, G. (1993). Agricultural and extension education department heads' perceptions of journals and importance of publishing. Journal of Agricultural Education, 34(4), 8–16.
Tong, H. M., & Bures, A. L. (1987). An empirical study of faculty evaluation systems: Business faculty perceptions. Journal of Education for Business, 62, 319–322.
Waller, S. C. (2004, May 3). Conflict in higher education faculty evaluation: An organizational perspective. Organizational Issues and Insights, New Foundations: Supporting the Reflective Educator. Retrieved June 20, 2004, from http://www.newfoundations.com/OrgHeader.html
Wergin, J. F. (1999, December). Evaluating department achievements: Consequences for the work of faculty. AAHE Bulletin. Retrieved June 20, 2004, from http://www.aahebulletin.com/public/archive/dec99f1.asp?pf=1
Wilga, D. (2000, May 1). Guidelines for evaluating faculty research, teaching and community service in the digital age. Retrieved June 20, 2004, from http://www.mtholyoke.edu/committees/facappoint/guidelines.shtml
