
Accounting, Organizations and Society 25 (2000) 511–525
www.elsevier.com/locate/aos

Reconsidering performance evaluative style
K. Vagneur a,*, M. Peiperl b

a PricewaterhouseCoopers (London)
b London Business School

Abstract
Hopwood, A. G. (1972), An empirical study of the role of accounting data in performance evaluation. Journal of Accounting Research, 10, 156–182, modeled "performance evaluative style" as the predictor of unintended effects from performance measurement control systems, stimulating one of the few areas of cumulative research in behavioral accounting. However, despite twenty-five years of empirical testing, this stream of research has failed to converge. This paper considers the validity issues created by evolution in the conceptualisation and specification of the relevant variables and classifies them by calculation type. Results of an empirical test designed to explore comparability between the variable types are reported. Finally, implications for interpreting prior research and for future research directions are considered. © 2000 Elsevier Science Ltd. All rights reserved.


Traditionally, management theory considers performance an outcome. Performance measurements are used as surrogates for performance outcomes, implicitly assuming measurement does not influence performance. Argyris (1952) challenged this practice by positing that performance measurement control systems influence organizational outcomes. Since then, a small but growing cross-disciplinary literature has explored these "unintended effects". Hopwood (1972, 1973) modeled subordinate perceptions of "performance evaluative style" as a predictor of various unintended behavioral outcomes such as job-related tension and dysfunctional decision making. He also argued that these behaviors could negatively affect long-term performance (Hopwood, 1973, p. 192). Since then, accumulated evidence on these models has been complex and has failed to converge. This paper aims to help in the development of the evaluative style concept by exploring some subtle differences in conceptualisation and method found in this research area.

* Corresponding author.


1. Evaluative style in the literature
Argyris (1952) and then Simon et al. (1954) explored the human side of formal measurement control systems. Both studies concluded that budgets and budgeting processes can be associated with important human relations problems. These included worker–management separation, cross-boundary conflict and job-related tension. This was a substantial departure from the mechanistic approach to performance measurement found in traditional management theory (e.g. Taylor, 1911; Chandler, 1962; Anthony, 1965). Subsequent research has tended to focus on Argyris' suggestion that it is the way formal measurement controls are used that stimulates organizational problems.

0361-3682/00/$ - see front matter © 2000 Elsevier Science Ltd. All rights reserved.
PII: S0361-3682(98)00002-6


Hopwood (1972, 1973) concentrated analysis on one independent construct to embody Argyris' concept of variation in use. He operationalized a four-level categorical variable to measure subordinate perceptions of the importance of unit budget results in their superior's evaluation of the respondent's performance. Hopwood found that perceptions of high budget importance in evaluation, a "budget constrained style", correlated with increased job-related tension, less favorable relations with peers, and unit size (1973, p. 170). He posited that high budget emphasis in performance evaluation would be associated with higher levels of data manipulation, distrust, rivalry and dysfunctional decision making vis-à-vis costs, customer service and innovation, and argued that these would negatively affect performance. Hopwood's evidence for manipulation and dysfunctional decision making was developed from interviews (n=20) and detailed analysis of accounts; it was not tested statistically on the larger survey sample (n=167). His argument was consistent with White (1961), who suggested that inter-departmental conflict was associated with certain cost allocation processes. However, the only performance variable Hopwood measured was budget performance. It had no significant association with evaluative style, although a subset of the sample (32%), respondents in departments with high variation in evaluative style scores, did reflect a very weak association (p=0.11).
1.1. The Otley–Hopwood debate
Otley (1978) sought to replicate Hopwood's study with some modification in variable specification and method. In particular, Otley chose a site with low between-unit interdependence, suggesting that unit budgets might not be an appropriate control device when interdependencies are high (which was the case in the Hopwood study). The Hopwood–Otley findings have attracted much debate and discussion, as well as some confusion. The principal difference between the two studies is that Hopwood found job-related tension to be predicted by a budget constrained evaluative style, while Otley did not. Otley reported that his subject managers "appeared to adapt, with little felt stress, to the budgetary system as they perceived it to be operated" (1978, p. 136). Otley highlighted the level of interdependence as a main factor behind this difference. He also suggested that the difference may have occurred because the respondents in his sample had profit center responsibility, while Hopwood's were cost center managers. Finally, Otley suggested that the accounting system in his sample may have provided more complete measures of performance than the budgetary system Hopwood described, and that the resulting performance outcomes may have influenced senior managers' future choice of evaluative style.
Although Otley found no significant linear association between performance evaluative style and job-related tension, he did find evidence of behavioral outcomes predicted by his evaluative style variable. For example, a budget constrained style had a positive association with trust in supervisor (r=0.19, p=0.05), a negative association with perceived ambiguity in evaluation (r=−0.22, p=0.03) and a very weak negative association with felt ambiguity of job (r=−0.14, p=0.10). Otley also found evidence of non-linear associations between behavioral variables and evaluative style (cf. Vagneur, 1994, 1995).
Otley tested unit budget performance on a subset of his sample (49%), respondents who had been in their positions long enough to be "able to influence" both the budgeting process and unit performance over one budget cycle. For this subset, the association between evaluative style and output budget performance was strong (r=0.51, p=0.002), although the subset was small (n=19). Other types of budget performance reflected little or no significant association with evaluative style.
1.1.1. Mixed results in the accumulated evidence
Otley (1980) proposed a contingency approach to reconcile the conflicting Hopwood–Otley results. Technology, organizational structure, environment and responsibility center interdependence were posited as possible explanations.
Research interest has followed Otley's lead and focused primarily on contingent variables. This has created a base of cumulative empirical evidence and a large number of potential model variables because of the many contingent relationships that have been proposed. Table 1 summarizes a sample of tests of association between evaluative style and performance, including tests of both main effects and interactions (the variable types are explained below). Overall, the results have been mixed (see Briers & Hirst, 1990; Vagneur, 1995; Otley et al., 1996 for in-depth discussions).

2. Evolution in variable conceptualization and specification

Subtle differences and ambiguities have emerged within this literature stream, beginning with the Otley–Hopwood conflict itself. That the stream of post-Hopwood empirical testing has failed to converge is due at least in part to changes in conceptualization and to operationalization of the relevant variables.
To operationalize performance evaluative style, Hopwood (1972) created a four-level categorical variable, based on whether a respondent had nominated each of two items (Table 2, column I, items 5 and 7) as being among the three most important criteria used by the supervisor in evaluating the individual's performance. Relative position within the top three did not affect the category score.
Otley (1978) modified both the question content and the calculation of Hopwood's variable (see Table 2, column II). The content changes accounted for differences in language and frame of reference that Otley observed in his organization relative to Hopwood's. Otley's modification to the calculation was to take account of the relative position of items 5 and 7 within the top three list, if

Table 1
Examples of evaluative style associations with performance

Type(a)  Study                        N    Association  Specification  Performance

Main effects
A        Hopwood (1972, 1973)        167   Ns          Unknown        Unit-budget
B        Otley (1978)                 19   Sig         Obj (1)        Unit-budget
D        Kenis (1979)                169   Sig         Ss (1)         Unit-budget
E        Govindarajan (1984)          58   Ns          Ss (12)        Unit-overall
D        Gupta (1987)                 58   Ns          Ss (12)        Unit-overall
E        Imoisili (1989)             102   Ns          Ss (7)         Individual

Interaction effects
Participation:
C        Brownell (1982)              38   Sig         Ss (9)         Individual
D        Brownell and Dunk (1991)     79   Sig         Ss (9)         Individual
Participation and task uncertainty:
D        Brownell and Dunk (1991)     79   Sig         Ss (9)         Individual
Manufacturing automation:
D        Dunk (1992)                  24   Sig         Ss (1)         Unit
Environmental complexity:
D        Brownell (1987)              56   Neg         Ss (9)         Individual
Function:
D        Brownell (1985)              40   Ns          Ss (9)         Individual
Strategy:
B        Govindarajan (1988)         121   Sig         Ss (10)        Unit
D        Gupta (1987)                 58   Sig         Ss (12)        Unit
Strategic "mission":
D        Gupta (1987)                 58   Ns          Ss (12)        Unit

Association: Neg, negative significant; Sig, significant; Ns, not significant.
Performance measure: Obj, objective measure; Ss, respondent self-rated; number of evaluation criteria used shown in parentheses.
(a) See Table 4 for the classification types.


both were nominated. This change resulted in five categories, by splitting one of Hopwood's categories into two.
Changes in the specification of evaluative style have continued as this literature stream has developed. In fact, at least some specification change has occurred in the majority of studies in the post-Hopwood literature (see Table 3).
The content of most evaluative style variables relates to respondent perceptions of budget use in performance evaluation (e.g. Hopwood, 1972, 1973; Otley, 1978; Brownell, 1982; Govindarajan, 1988), but the construct has been conceptualized in other ways. For example, Hirst (1983) designed a study around the perceptions of the relative importance of quantitative measures in performance evaluation. Another variation used questions on budget process, including participation, evaluation on variance, goal difficulty and punishment (Kenis, 1979). Govindarajan (1984) articulated the evaluative style concept as the extent to which subjective factors (vs formulae) were used to determine bonuses.
Each of these approaches has subtle differences in its conceptual base. Reconciliation of such differences would require consideration of both individual psychological responses to performance assessment and the nature of the systemic effects created by budgets and other formal and informal management control processes (e.g. reward, planning, training and information systems). This presents a significant opportunity for further research drawing on psychology, organizational behavior and behavioral accounting research. In the present study, however, our approach was not to reconcile the various approaches conceptually but to understand and test their operational differences.
2.1. Content evolution and content validity
Most evaluative style measures have been structured for respondents to rate or select from a list of alternative choices (question content) determined by the researcher. Those scores are then manipulated (a calculation) to form the variable. The research strategy developed by Hopwood (1972, 1973) and replicated by Otley (1978) was to select a sample from one company and to undertake extensive inductive field research within that company in order to develop the question content. This approach assumes that every field site will be different, and that it is necessary to identify the particular vocabulary of the site by inductive research in order to provide meaningful choices to respondents.

Table 2
Comparison of variable content differences

Item  I. Hopwood's content / II. Otley's equivalent / III. Used in this study
1.  How much effort I put into the job. / The effort I put into my job. / The effort I put into my job.
2.  My concern with quality. / My concern with quality. / My concern with quality.
3.  (none) / How much profit I make. / My contribution to company profits.
4.  My ability to handle my men. / The relationships I have established with my staff and men. / The relationships I have established with staff.
5.  My concern with costs. / How efficiently I run my unit. / How efficiently I run my unit.
6.  How well I get along with my boss. / How well I get on with group staff. / How well I get on with my superiors.
7.  Meeting the budget. / How well I meet my budget. / How well I meet my budget.
8.  (none) / (none) / Objective customer service ratings.
9.  My attitude to my work and company. / My attitude toward my work. / My attitude toward my work.
10. (none) / (none) / How well I develop a team.
11. How well I cooperate with colleagues. / (none) / (none)

Hopwood's respondents were asked:
1. "When your departmental supervisor is evaluating your performance how much importance do you think he attaches to the following items?" (A 5-point anchored Likert-type scale was provided to score each criterion.)
2. List in order the three most important of the criteria (a nomination ranking).


Other studies have used a conceptually different research strategy, employing deductive methods to assess the appropriateness of question content (e.g. Govindarajan, 1984, 1988; Imoisili, 1989; Harrison, 1992). Content validity rests on the issue of providing sufficient choice to measure variation within the sampling frame. For evaluative style, it would be dependent on the extent to which
Table 3
Evolution in evaluative style variable specification

A  Hopwood (1972/1973). Source: New.
   Categorical. Nomination to a rank order (top three) list of budget and cost concern from a list of seven alternatives (Table 2, column I). Relative rank order within the list did not affect the category scoring. The evaluative styles are:
   4. Budget constrained style: meeting the budget (item 7, Table 2, column I) but not efficiency (item 5) was nominated to the list of the three most important.
   3. Budget-profit style: both meeting the budget and efficiency were nominated to the top three.
   2. Profit conscious style: efficiency, but not meeting the budget, was nominated to the top three.
   1. Non-accounting style: neither meeting the budget nor efficiency was nominated to the top three.

B  Otley (1978). Source: Modified Hopwood.
   Ordinal. Nomination to a rank order list. Modified question content (see Table 2, column II). Relative position within the rank order of budget and efficiency used. This splits Hopwood's budget-profit style (number 3 above) into budget-profit (budget precedes efficiency) and profit-budget (efficiency ranked before budget).

   Kenis (1979). Source: New.
   Continuous. Summed 5-point Likert-type ratings on budgeting characteristics (e.g. evaluation on variance, goal difficulty, punishment).

C  Brownell (1982). Source: New.
   Binary. Nomination to rank order. Content unclear; modified either Otley or Hopwood. Assigns a value of 1 to Hopwood's categories 1 and 2 (above) and a 0 to categories 3 and 4.

   Hirst (1983). Source: New.
   Continuous. 5-point Likert-type ratings on five questions on quantitative measurement use in evaluation and reward.

   Govindarajan (1984). Source: New.
   Continuous. Raw decimal. Score of respondent perceptions of the percent that performance bonus is formula-based vs subjective-based.

   Gupta (1987). Source: Govindarajan (1984).
   Continuous. Raw decimal. Score of respondent perceptions of the percent that performance bonus is formula-based vs subjective-based.

   Imoisili (1989). Source: Hybrid.
   Continuous. 7-point Likert-type ratings. Hopwood content. Manipulation reported: "average of the raw scores to determine the styles".

D  Brownell (1985). Source: New.
   Continuous. 5-point Likert-type ratings. Content not reported; modified Hopwood or Otley. Sums ratings for budget and cost/revenue concern.

   Brownell and Hirst (1986). Source: Modified Hopwood.
   Categorical. Nomination to rank order. Modified Hopwood/Otley to ten items.

   Brownell (1987). Source: Brownell (1985).
   Continuous. 5-point Likert-type ratings. Same as Brownell (1985).

   Govindarajan (1988). Source: Hopwood.
   Categorical. Nomination to rank order. Same as Hopwood (1972, 1973).

   Brownell and Dunk (1991). Source: Modified Brownell (1985).
   Continuous. 7-point Likert-type ratings. Used Hopwood content. Calculated as Brownell (1985), sums ratings on budget and cost concern.

   Dunk (1992). Source: Modified Brownell (1985).
   Continuous. 5-point Likert-type ratings. Content unclear. Summed ratings for budget and cost concern.

E  Harrison (1992). Source: New.
   Continuous. 5-point Likert-type ratings. Content unreported; used the Brownell and Hirst (1986) instrument. Created a ratio of the sum of budget and cost concern ratings divided by the sum of ratings of the other items, to capture relative scores of accounting criteria to non-accounting criteria.

A letter (A-E) marks the study whose variable was used in the present study to represent that calculation type, as summarized in Table 4.


question content includes the most important elements in the performance evaluative environment. If important elements are missing, responses may reflect variation in less important alternatives, and thus may not adequately measure within-sample variation in evaluative style. Therefore, when the evaluative environment is different, question content may need to evolve as well. For example, if team development and objective customer service ratings are important criteria in performance evaluation, the Hopwood and Otley specifications (Table 2, columns I and II) might have lower content validity than would a larger set including these two items.
Thus the evolution of content sets would not necessarily reduce between-study comparability so long as the environment in each case has been adequately assessed to ensure that the content offers sufficient choice. Hopwood's content was developed from issues he identified in interviews and used language significant to the respondents (shop floor cost center managers at an American integrated steel producer). Otley modified Hopwood's content to reflect differences he observed during face-to-face interviews in his sample (profit center managers in Britain's nationalized coal industry). It thus seems reasonable to assume that the response sets of Hopwood and Otley's studies were sufficiently inclusive to have high content validity. Implicitly, by providing sufficient choice to respondents, they would also have high between-study comparability.
Some studies have introduced modified response sets without explanation (e.g. Brownell, 1982, 1985; Dunk, 1992). It must be assumed that these modifications were intended to provide improved content validity. However, whether this was accomplished by inductive or deductive means is unclear. Other researchers have adopted content unchanged from earlier studies. For example, Harrison (1993) adopted Brownell and Hirst's (1986) content unchanged, and Imoisili (1989) and Govindarajan (1988) adopted Hopwood's (1972, 1973) content without change. Hopwood's content was designed to capture the evaluative style environment of a 1960s industrial plant; as such, it may not be adequate for a sample of Fortune 500 general managers during the 1980s and 1990s. Unless the appropriateness of a content set has been empirically determined before data collection, its validity must, ex post, remain a matter of speculation.
2.2. Calculation evolution and validity
Like variable content, calculation methods have also changed in this literature stream. Hopwood (1972, 1973), Otley (1978) and Brownell (1982) all used nominations to a ranking list, producing somewhat different discrete variables (categorical, ordinal and binary, respectively). Brownell (1985), Harrison (1992) and others later used Likert-type ratings on a set of criteria and manipulated those rating scores. The use of continuous variables constituted a fundamentally different approach to measuring evaluative style.
The calculations which have been used in the evaluative style literature can be classified into five basic types: (A) categorical variables based on inclusion in a ranking nomination list, (B) ordinal (loosely, "continuous") variables based on relative rankings in a nomination list, (C) binary variables based on inclusion in a nomination to a ranking list, (D) ratings or arithmetic sums of ratings and (E) algebraic manipulations of ratings. Table 4 classifies evaluative style studies by calculation type (these types are also indicated in Tables 1 and 3).
The different specifications of evaluative style may have reduced between-study comparability, and therefore external validity. Because calculation validity could be directly assessed by empirical testing, we designed a study to explore the comparability of the five calculation types by exploring both their intercorrelations and their relationship to performance outcomes.

3. Method
Data for this study were collected from business
unit managers in 28 British-based business units of
20 international companies. The companies were
selected in the spirit of a theoretical sample to
provide a representative range of organizational
performance. This was based on publicly available
information (published accounts and newspaper
reports) on companies' success at accomplishing


Table 4
Classification of studies by calculation type

Type  Based on (manipulation to)               Studies
A     Nomination to ranking (categorical)      Hopwood (1972, 1973); Brownell and Hirst (1986)
B     Nomination relative rankings (ordinal)   Otley (1978); Govindarajan (1988)
C     Nominations (binary)                     Brownell (1982)
D     Ratings (summed/continuous)              Kenis (1979); Hirst (1983); Brownell (1985, 1987); Gupta (1987); Brownell and Dunk (1991); Dunk (1992)
E     Ratings (algebraic/continuous)           Govindarajan (1984); Imoisili (1989); Harrison (1992)

Bold denotes the variable used to represent each type in the present study.

targeted improvements. Companies were also selected to show a reasonable spread by industry, internal diversity (number of business areas represented), overall size (£ million to £63 billion in revenue; 110 to 120,000 employees), unit size (£4 million to £4.3 billion in revenue; 67 to 55,000 employees) and unit size relative to the total company (1 to 100%). Only one company which was approached declined to participate in the study.
Eighty-two managers (three to six from each company, all of whom had budget responsibility for functional or departmental areas within business units) were interviewed, and the evaluative style criteria they perceived as important were assessed. This research strategy was consistent with the approach undertaken by Hopwood and Otley, in that it sought to develop the relevant content criteria by inductive means. This approach is conceptually different from research which uses deductive development of the variable content. Managers were advised that the discussion was confidential and only aggregate data from multiple companies would be reported. Once all of the interviews were complete, a follow-up questionnaire was developed which provided the data to calculate a set of evaluative style variables. The questionnaire was distributed by mail to all eighty-two managers with an accompanying cover letter again assuring confidentiality of the data. A postage-paid return envelope was enclosed.
Of the 82 questionnaires distributed, 68 were returned. After checks for consistency between ratings and rankings, two subjects whose responses were highly inconsistent were excluded. The usable response rate was thus 80%. Because of the high response rate, no test was made to see whether those who did not respond represented a systematic sub-set (i.e. response bias). In addition, because respondents came from multiple companies, tests were made for effects from company and unit size, CEO and respondent time in position, and industry sector and market served by the business unit. None of these sampling variables reflected any systematic associations with the evaluative style variables (see Vagneur (1995) for further discussion).
Environmental uncertainty and economic conditions had been previously identified as affecting evaluative style (Govindarajan, 1984; Imoisili, 1989). The data for this study were collected just as the British economy began to emerge from its deepest post-war recession. All of the companies in the sample (no two of which competed in the same sector) had experienced the effects of the recession. The level of international competition was high, and cost and headcount reduction initiatives were under way in all of the units. Therefore variation in environmental uncertainty and economic conditions was not expected to influence the analysis.
3.1. Variable specification
3.1.1. Content
The follow-up questionnaire sent to respondents asked, "When your performance is evaluated, how much importance do you think is attached to each of the following". The ten criteria provided


(Table 2, column III) were based on data collected during the interviews. A fully anchored seven-point Likert-type scale (from "not at all" to "critically") was provided for respondents to rate each of the 10 items. A second question then asked respondents to list in rank order the three most important criteria from the content list. In order to ensure that the criteria were sufficiently inclusive, a third question asked, "If there are important factors which are missing from the list above, please make a list of the most important factors your superior uses in the assessment of your performance."
To the extent appropriate to the organizations under study, the criteria were designed to capture the essence of those used by Hopwood and Otley. One criterion (item 2) was unchanged by Otley (1978) from that used by Hopwood (1972). It was unchanged here as well. Two items were refined in order to improve relevance across the sample (items 4 and 6). Where there were differences between Otley and Hopwood, Otley's content was adopted (items 1, 5, 7, and 9). Team development and objective customer service ratings (items 8 and 10) were included because the interviews had disclosed that these were important evaluative criteria in some companies.
3.1.2. Calculation
Five variables were operationalized to reflect the calculation types (Table 4; see Table 3 for more specification detail). The variables were as follows:
Variable A, a four-level categorical variable based on nominations to a list of the three most important criteria; Hopwood's (1972, 1973) method was used.
Variable B, an ordinal ("continuous") variable based on relative rankings of the most important in a nomination list. Otley's (1978) method was selected.
Variable C, a binary variable based on nominations to a ranking list. Brownell's (1982) method was used. This assigned a value of 1 to Hopwood's budget constrained and budget-profit styles. Profit conscious and non-accounting styles were assigned a value of zero.1
Variable D, a "continuous" variable based on summed Likert-type ratings, using Brownell's (1985) approach. This technique summed the ratings of budget and efficiency, creating an absolute measure of the perceived importance of content items 5 and 7.
Variables E (1 and 2), "continuous" variables based on algebraic manipulations of ratings:
E1: Harrison's (1992) approach was selected. This method calculated the ratio of the sum of the absolute ratings of budget and efficiency (content items 5 and 7) to the sum of the other eight criteria. This provided a relative measure of the perceived importance of content items 5 and 7 vs the other items.
E2: The algebraic manipulation of variable E1 was modified by moving the score on item 3 (contribution to profits) from the denominator to the numerator of the ratio. This provided a relative measure of the perceived importance of all three criteria which might be considered financial measurement based. This addressed the potential for interpreting content item 3 as an accounting-based criterion.

1 Brownell (1982) used −1.
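As a concrete illustration of how these calculation types diverge, the variables above can be sketched in code. This is a minimal sketch, not the authors' instrument: function names are invented, ratings are assumed to be a dict keyed by the item numbers of Table 2, column III (item 3 = contribution to profits, item 5 = unit efficiency, item 7 = meeting the budget), and the numeric coding of variable B's five levels is an assumption.

```python
# Sketch of the five evaluative style calculations (types A, B, C, D, E1, E2)
# for a single respondent. `ratings` maps item number -> 7-point rating;
# `top3` is the ranked nomination list, most important first.

def style_A(top3):
    """Variable A: Hopwood's four-level categorical variable (rank ignored)."""
    budget, efficiency = 7 in top3, 5 in top3
    if budget and efficiency:
        return 3  # budget-profit style
    if budget:
        return 4  # budget constrained style
    if efficiency:
        return 2  # profit conscious style
    return 1      # non-accounting style

def style_B(top3):
    """Variable B: Otley's ordinal variant. When both items are nominated,
    their relative rank splits Hopwood's category 3; the five-level numeric
    coding used here is an assumption."""
    a = style_A(top3)
    if a != 3:
        return {4: 5, 2: 2, 1: 1}[a]
    # Budget ranked above efficiency -> budget-profit (coded higher here).
    return 4 if top3.index(7) < top3.index(5) else 3

def style_C(top3):
    """Variable C: Brownell's binary collapse (coding per Section 3.1.2)."""
    return 1 if style_A(top3) in (3, 4) else 0

def style_D(ratings):
    """Variable D: Brownell (1985), sum of budget and efficiency ratings."""
    return ratings[5] + ratings[7]

def style_E1(ratings):
    """Variable E1: Harrison (1992), budget + efficiency ratings relative
    to the sum of the other eight criteria."""
    accounting = ratings[5] + ratings[7]
    other = sum(v for k, v in ratings.items() if k not in (5, 7))
    return accounting / other

def style_E2(ratings):
    """Variable E2: as E1, with item 3 moved into the numerator."""
    financial = ratings[3] + ratings[5] + ratings[7]
    other = sum(v for k, v in ratings.items() if k not in (3, 5, 7))
    return financial / other
```

For example, a respondent who rates every item 5 except items 5 and 7 (rated 7) and nominates items 7, 5, 3 in that order scores A = 3, C = 1, D = 14 and E1 = 14/40 = 0.35, illustrating how the same responses yield categorical, binary, absolute and relative values.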
3.1.3. Performance
There is a lack of consensus as to what constitutes valid performance measurements (Steers, 1975). Therefore, three kinds of performance variables were included: objective, self-reported, and researcher-rated. All were standardized before being included in the analysis.
Objective (longitudinal measures):
Abnormal shareholder returns: Five-year average of data (from Datastream) reflecting the difference between company returns and a market index with the same beta.
Actual returns: Five-year average of accounting profit plus dividends.
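The abnormal-return measure above can be sketched as follows. This is an illustration only, not the paper's exact Datastream procedure: it assumes annual return series and a CAPM-style benchmark built from the company's beta, and all names and the risk-free-rate treatment are assumptions.

```python
# Illustrative sketch of the abnormal shareholder return measure: the
# average, over a five-year window, of the difference between the
# company's return and the return of a beta-matched market benchmark.
# A CAPM-style benchmark is assumed; the paper does not specify one.

def abnormal_return(company_returns, market_returns, beta, risk_free):
    """Mean annual abnormal return over the observation window."""
    diffs = [
        r_c - (rf + beta * (r_m - rf))  # actual minus beta-matched benchmark
        for r_c, r_m, rf in zip(company_returns, market_returns, risk_free)
    ]
    return sum(diffs) / len(diffs)
```

With beta = 1.0 the benchmark reduces to the market return itself, so the measure is simply the average excess return over the market.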
Self-reported (all on seven-point Likert-type scales):


Weighted strategic performance: A comprehensive set of strategy factors (sales growth,
market share, operating pro®t, new technology development, pro®t margin, budget performance, return on investment, new product
development, market development, operating
cash ¯ow, cost reduction, personnel development, public a€airs, and cooperation) weighted using the method of Steers (1975).
Budget performance (from the list above).
Sales growth (from the list above).
Researcher-rated
scale):

(seven-point

Likert-type

Consistency in objectives and priorities. Two
independent investigators scored within-unit
variation in interviewees' views on objectives
and priorities, following the method of
Machin and Tsai (1983). (All organizations in
the sample had improvement in consistency
or coordination as stated objectives.)

4. Results
Table 5 presents summary descriptive statistics for the content ratings. Profit, unit efficiency, and meeting the budget (items 3, 5 and 7) were the most frequent choices in the nomination rankings. Only one respondent failed to nominate at least one of these three. Except for responses on item 8 (customer service rating), means and standard deviations for the content items clustered in a small range (means from 5.0 to 5.8; standard deviations from 1.03 to 1.50). Factor analysis found all of the content items were independent, with items 1, 2 and 3 forming three independent factors that explained 61% of the variance.
Eighteen pairs of content ratings had significant correlations (Table 6). Contribution to profits (item 3) was correlated with budget performance (item 7) but not with efficiency of unit (item 5). Only seven of the significant correlation pairs were strong (r ≥ 0.40), and only one of these pairs involved any of the three most frequently selected content items (efficiency, item 5, with effort into job, item 1; r=0.41, p