
Information & Management 35 (1999) 203–216

Research

System usage behavior as a proxy for user satisfaction:
an empirical investigation
Charles E. Downing
Operations and Strategic Management Department, Wallace E. Carroll School of Management, Boston College, Chestnut Hill, MA 02167, USA
Tel.: +1-617-552-0435; fax: +1-617-552-0433; e-mail: downinch@bc.edu
Received 3 March 1998; accepted 12 October 1998

Abstract

Organizations are increasingly recognizing that user satisfaction with information systems is one of the most important determinants of the success of those systems. However, current satisfaction measures involve an intrusion into the users' worlds, and are frequently deemed too cumbersome to be justified financially and practically. This paper describes a methodology designed to solve this contemporary problem. Based on theory which suggests that behavioral observations can be used to measure satisfaction, system usage statistics from an information system were captured around the clock for 6 months to determine users' satisfaction with the system. A traditional satisfaction evaluation instrument, a validated survey, was applied in parallel to verify that the analysis of the behavioral data yielded similar results. The final results were analyzed statistically to demonstrate that behavioral analysis is a viable alternative to the survey in satisfaction measurement. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: System usage; User satisfaction; Measurement of user satisfaction; Survey; Behavioral data; Empirical study

1. Introduction
As information technology has become a dominant
presence in the business community, both academic
and practitioner literature have increasingly recognized the importance of users' satisfaction in the
success of information system (IS) applications [3,
10, 14]. One author states that user satisfaction is often
considered the most important factor in reviewing the
quality of an information system [19].
Given this recognition, several techniques for measuring users' IS satisfaction have subsequently emerged [1, 4, 13]. Methodologies ranging from the use of Likert scales to complex manipulations of the semantic differential have demonstrated impressive validity and reliability. Yet, business executives interviewed for this study readily admit to not applying these measures for anything but a one-shot academic study. Their retrospective assessment of why this is the case centers on one prevailing reason: their organizations find the process too cumbersome to be justified financially and practically.
Careful review of the literature reveals that while deviations in exact methodology and in the number and type of dimensions in the satisfaction equation are many, all the approaches share the underlying similarity of some sort of intrusion into the user's world [1, 4, 8, 9, 12, 13,

0378-7206/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved.
PII: S0378-7206(98)00090-1


15, 18]. In all cases reviewed for this research, the users were either asked to complete a questionnaire, interviewed, or observed directly (with the notable exception of [17]). In the current business climate of downsizing and belt-tightening, it is easy to understand why executives might not want their employees routinely subjected to such activity.

2. Conceptual framework and research
hypothesis
This research focuses on creating a methodology to
solve the management paradox of simultaneously
needing and desiring information system satisfaction
data and being unable or unwilling to constantly
survey users to get it. Due to current electronic
systems capabilities, capturing system usage data
requires little effort and cost. Thus, if such usage data
could serve as a proxy for user satisfaction, management would have an ongoing measure of satisfaction
which was also practical to obtain. Both theory [7] and
a recent path analysis [2] suggest that such a link
between computer system satisfaction and computer
system usage in fact exists.
Fishbein and Ajzen [7] reason that human intuition has always linked behavior and attitude, and carefully enumerate the evolution of scientific studies surrounding that linkage. They cite Nemeth's study [16], in

which the number of seconds a person spent talking to
another person was taken as a measure of liking, and
Wicker's research [21], which was able to identify
studies in which ``at least one attitudinal measure and
one overt behavioral measure toward the same object
[were] obtained for each subject'' ([21], p. 48). They
conclude by stating ``In the preceding discussion we have suggested that behavioral observations can be used to measure the person's attitude'' ([7], p. 357).
Concerning the procedures to follow to accomplish such measurement, they explain that investigators will need to empirically test the behavior–attitude relationship in an exploratory fashion, continuing until hypothesized relationships are demonstrated and duplicated.
Similarly, Baroudi et al. [2] present empirical evidence that system usage and user information satisfaction are linked. They carefully specify that ``user information satisfaction is an attitude toward the information system while system usage is a behavior'' ([2], p. 234), and conclude that ``...the study provides evidence that the user's satisfaction with the system will lead to greater system usage.'' While they debate the direction of this linkage (Satisfaction → Usage vs. Usage → Satisfaction), what matters for this study is that a linkage exists.
Thus, since information systems can easily capture usage behavior, behavior can be a predictor of attitude, and user information system satisfaction is an attitude, it follows that capturing usage behavior can assist management in determining user satisfaction. The research hypothesis is stated as follows:
Research hypothesis

H0: The analysis of usage data will exhibit no relationship to a survey when measuring user satisfaction with an information system, and therefore will not prove to be a valid alternative.

H1: Analyzing system usage data will prove to be a valid alternative to a survey in measuring user satisfaction with an information system.


3. Methodology
Current technological capabilities which allow for the easy and unobtrusive electronic observation of IS user behavior were utilized for this study. A secondary information system was installed to work `under' the primary system (the system with which the users interacted). The research goal was to use this `sub-' or `meta-'monitoring system to collect and analyze user behavior data, and then calculate a measure of user satisfaction. Finally, this measure needed to be shown to be similar to traditional and validated measures.

A field study was carried out in which the meta-monitoring system worked in the background of an interactive voice response system which served an organization's savings plan informational needs. The meta-monitoring system was installed to collect and analyze caller behavior data (`usage data') both before and after enhancements to the system were installed. The field research goal was to use time and space dimensions of these data to match the results of

the survey: to determine the level of user satisfaction with the primary system. The research model is shown in Fig. 1.

Fig. 1. Research model.
The system studied, named Savings Express, is a
12-line telephone interactive voice response system
(IVRS) responsible for providing 401(k) retirement
plan information to 10,252 internal employees. As is
the case with other IVRSs in this ®eld, customers can
use their touch-tone telephones to access personal
account or general plan information, request forms
and plan brochures, and make various personal
account changes (transfer account balances, initiate
withdrawals and loans, change contribution amounts,

etc.). Additionally, the IVRS allows customers to model unlimited `what if' scenarios of potential loans and projected plan account balances. As a system responsible for all of the input, processing, storage, and output needs of the company's Savings and Profit Sharing Plan, and differing from a `normal' end-user system only in its interface (telephone input and spoken response for output vs. the traditional keyboard and screen), this IVRS provided an excellent field information system for empirical examination of the research hypothesis. A graphical depiction of Savings Express appears in Fig. 2. Note that the components in Fig. 2 which have double underlines beneath them are enhancements to the system which were added midway through this study.

3.1. Measuring user satisfaction

3.1.1. First measure of user satisfaction – traditional survey
The traditional means of determining user
satisfaction is through the use of a survey. The goal
of this study is not to validate or test a new instrument,

but simply to apply the one which is most widely
accepted. The literature reveals many successful
vehicles for measuring user satisfaction [1, 13], but
in the realm of end-user computing the work of
Doll and Torkzadeh [4] remains the standard. Their
instrument was painstakingly developed and tested for
both reliability and validity. Methodological and conceptual issues about their instrument have been raised [6], but test–retest studies have further demonstrated the reliability and stability of the instrument [11, 20]. As such, the Doll and Torkzadeh measure of end-user satisfaction was adopted for this study.
This instrument measures end-user satisfaction across five components – content, accuracy, format, ease of use, and timeliness – using 12 questions with Likert-type scales. The instrument developed for this study followed these specifications, with the exception of the number of questions. Due to practical constraints associated with the company-sponsored survey, six questions were used to address the five components of satisfaction, as opposed to the Doll and Torkzadeh guideline of 12 questions. However, issues of reliability and validity arising from this difference in the number of questions have been addressed [5]. Fig. 3 shows the overall instrument structure, as well as the recommended instrument questions compared to the actual statements used. A copy of the instrument appears in Appendix A. Finally, for ease of presentation, the statements were given codes, and these codes appear in the rightmost column of Fig. 3.
While researchers have had impressive success in validating this instrument, practical concerns about its obtrusiveness and cost remain.
3.1.2. Second measure of user satisfaction – usage behavior
The methodology for this study involved surveying
users as described above, with the additional element
of recording precise details of their behavior. To



Fig. 2. Map of information system.

provide the usage data for this comparison, a meta-monitoring system under the IVRS was collecting detailed caller usage data, notably which touch tones were pressed and when. This collection took place 7 days a week, 24 hours a day, during a 6-month period.
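To make the nature of these usage data concrete, the following sketch shows one plausible shape for a captured call event. The field names are hypothetical assumptions (the paper says only that touch tones and their timing were recorded), and Python is used purely for illustration; the study's own code was written in Visual Basic.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CallEvent:
        """One captured touch-tone event (hypothetical record layout)."""
        caller_id: str       # identifier entered by the caller (e.g., the
                             # Social Security number, removed in Appendix B)
        timestamp: datetime  # when the key was pressed
        key_pressed: str     # '0'-'9', '*', or '#'
        section: str         # IVRS section active at the time, e.g.,
                             # 'General Plan Information Menu'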
If the meta-monitoring system analysis was to function similarly to the survey, it needed to address the same parameters which were used in the survey (the codes in Fig. 3). To address these parameters, an automated rule-set had to be constructed for the meta-monitoring system to follow for each parameter. The iterative process described by Fishbein and Ajzen [7] was employed to create these `rules' for the meta-monitoring system to follow to determine values for

each parameter. This process proceeded as follows: a group of experts was questioned, in Delphi fashion, concerning what usage behavior might indicate satisfaction for each parameter. Seven high-level managers, all familiar with information systems in general and with the system studied in this research in particular, were questioned. Based on their answers, acceptable ranges of behavioral equivalents to survey responses (`1' to `5') were created. The goal was to have time and space dimensions of callers' usage behavior (`time' – when, how often, and how long a caller called and remained in a certain section of Savings Express, and `space' – which sections and options that caller used) analyzed


Fig. 3. Instrument structure – the components of end-user computing satisfaction.

to create equivalent responses to the quality parameter statements. Take as an example the CONT1 quality parameter (the response to the statement ``The information helps me plan my finances.''). When examining the usage data, should the rules dictate that an average of one call per month by a caller to the General Plan Information Menu means that that caller gives a response of ``1 – strongly agree''? Does it need to be two or more calls? Does at least one call per quarter need to have been made by the caller to the Personal Fund Transfer Information section before that caller can be categorized as responding ``1 – strongly agree'' to the CONT1 parameter? The establishment of these rules, which the meta-monitoring system could use to determine the six necessary parameters, is incredibly subjective at best. To stabilize and quantify the process, its most subjective aspect, the brainstorming as to what the rules might be, was used only to establish ranges within which the rules could fit. In the example just mentioned, acceptable ranges for the CONT1 parameter might be as follows (note that these ranges were allowed to overlap during this phase, to allow flexibility during the construction of the final rules):


Response                 Range
Strongly agree – 1       Average total monthly calls to General Plan Information Menu and Personal Fund Transfer Information, > 2
Somewhat agree – 2       Average total monthly calls (to sections listed above), > 1 and < 3
Neutral – 3              Average total monthly calls, > 0.5 and < 2
Somewhat disagree – 4    Average total monthly calls, > 0 and < 1
Strongly disagree – 5    Average total monthly calls, < 0.5

In other words, for a caller to be considered to have
responded `2 - Somewhat agree,' his/her average total
monthly calls to the General Plan Information Menu
and Personal Fund Transfer Information section would
have to at least be larger than 1, and could not be 3 or
larger. Such a range implies that an average less than
or equal to 1 has to be considered a lower response
than a `2,' and an average greater than or equal to 3 can
only be viewed as a response of `1.'
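To illustrate how such ranges resolve into a classification once a precedence is fixed, the sketch below checks the categories in top-down order (first condition met wins, the same first-match-wins order later used for the final rules). It is a Python illustration only; the cutoffs are simply the lower bounds of the example ranges above, and the study's actual code was written in Visual Basic.

    def cont1_equivalent_response(avg_monthly_calls):
        # Top-down check: the first satisfied condition determines the
        # equivalent survey response. Cutoffs are illustrative values
        # drawn from the example ranges above.
        if avg_monthly_calls > 2:
            return 1  # Strongly agree
        if avg_monthly_calls > 1:
            return 2  # Somewhat agree
        if avg_monthly_calls > 0.5:
            return 3  # Neutral
        if avg_monthly_calls > 0:
            return 4  # Somewhat disagree
        return 5      # Strongly disagree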
After the somewhat lengthy and involved process of establishing the ranges, the final rules for each parameter were set. This rule-setting took place in a trial-and-error manner, with the trial-and-error bounds being the ranges explained above. The ranges were applied to the activity of 500 randomly selected callers. The population for the random selection was anyone who had called the system and entered a Social Security number from 1 April 1993 until 30 June 1993 (the time-frame equivalent of the first survey), and the random selection was performed similarly to the survey random selection. After the 500 Social Security numbers had been selected, the guiding force in the trial-and-error process was statistical comparison of the meta-monitoring results to the data from the first survey. Hypothesis testing was used, with the mean of a given parameter from the meta-monitoring system being tested against the mean of the similar parameter from the survey. The null hypothesis was that the means of the two parameters were equal, and the alternative hypothesis that they were not, at α = 0.10. If the test showed that the means were not equal, the meta-monitoring code was changed to the extent allowed by the ranges (in an attempt to move the means closer together), and a new test was run. The large amount of subjectivity in the process was therefore lessened by the width of the ranges, which came from the consensus of the expert panel. The meta-monitoring code was changed and tests re-run until an acceptable match was achieved, or until no further testing would be useful (the meta-monitoring code had reached the edge of one of its ranges and the hypothesis test still showed the means not equal). After this process of establishing rules using comparison with the first survey distribution, the rule-set was tested against the second survey distribution for validation. The meta-monitoring code used was written in the programming language Visual Basic.
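As a sketch of this calibration loop (under stated assumptions: a single numeric cutoff per rule, a fixed step size, and a plain large-sample z-test; the paper does not give the implementation details of its Visual Basic code), the procedure can be pictured as follows:

    import math

    def z_statistic(a, b):
        # Large-sample two-sample z-statistic for equality of means.
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        return (ma - mb) / math.sqrt(va / na + vb / nb)

    def calibrate(cutoff, lo, hi, survey_scores, usage_records, derive, step=0.1):
        # Nudge the rule cutoff within the expert-panel range [lo, hi] until
        # the derived mean matches the survey mean at alpha = 0.10
        # (|z| <= 1.65), or the edge of the range is reached.
        while lo <= cutoff <= hi:
            derived = [derive(record, cutoff) for record in usage_records]
            z = z_statistic(survey_scores, derived)
            if abs(z) <= 1.65:
                return cutoff, True           # acceptable match found
            # If the derived mean is too low (z > 0), raising the cutoff
            # pushes callers toward higher-numbered responses, and vice versa.
            cutoff += step if z > 0 else -step
        return cutoff, False                  # range edge reached; no match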

3.1.3. Data collection

Data collection took place from 1 April 1993 to 30 September 1993. Two distributions of 500 surveys were mailed to employees who had called the system, the first in late May and the second in late August. Recipients were asked to rate their agreement with the statements listed in Fig. 3 on a 1–5 scale (1 being `Strongly agree'). As mentioned, the meta-monitoring system was continuously collecting data throughout the survey distribution process. A graphical depiction of the data collection timeline appears in Fig. 4.


4. Results

4.1. The survey

As mentioned, survey recipients were asked to rate their agreement or disagreement with the statements in Fig. 3. A copy of the instrument appears in Appendix A. Response rates were 52.6% for the first survey and 56% for the second. Data for the two surveys appear in Table 1.
4.2. Usage behavior

The group consensus approach, tested against the first survey distribution, yielded the behavioral analysis rules which appear in Table 2. Note that all equivalent responses are checked in a top-down manner, and once a condition is met, no further checking is done; that is, if the conditions for `1 – Strongly agree' are met, the checking for that parameter is over.


Fig. 4. Data collection timeline.

Table 1
Survey summary data

Parameter   Total of individual ratings (each 1–5)   Number of valid responses (1–5)   Number of blanks   Number of 0s   Average rating (1–5)

First survey summary data
CONT1       466                                      252                                8                  6              1.85
FORM        376                                      250                               10                  6              1.50
ACC         336                                      240                                8                 18              1.40
EASY        334                                      256                                7                  3              1.30
CONT2       575                                      235                               14                 17              2.45
TIME        935                                      223                               13                 30              4.19

Second survey summary data
CONT1       490                                      270                                6                  4              1.81
FORM        445                                      270                                6                  4              1.65
ACC         382                                      257                               10                 13              1.49
EASY        375                                      271                                6                  3              1.38
CONT2       656                                      260                               10                 10              2.52
TIME        978                                      241                                9                 30              4.06

A sample of the raw meta-monitoring analysis output (with the Social Security numbers removed) which resulted from these rules appears in Appendix B. The results of the z-tests of these parameter results versus the survey parameter results for the second distribution appear in Table 3. Note that two-tailed tests were conducted, with the goal being to determine whether the means were equal, with the hypotheses being:

H0: µ1 − µ2 = 0
Ha: µ1 − µ2 ≠ 0

The rejection region is z < −zα/2 or z > zα/2, and with α = 0.10 and the large degrees of freedom the sample sizes afford, zα/2 ≈ 1.65. Note that variables with `_S' after them refer to the survey response mean of the parameter listed in front, and variables with `_M' after them refer to the meta-monitoring equivalent response means for these parameters. For example, ACC_S is the mean of the collection of survey responses to the statement ``The information is accurate'' and ACC_M is the mean of the collection of meta-monitoring equivalent responses to the same statement.
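The paper does not print the test statistic itself; for samples of this size, the standard large-sample form, presumably the one used, is

\[
z = \frac{\bar{x}_S - \bar{x}_M}{\sqrt{s_S^2/n_S + s_M^2/n_M}},
\qquad \text{reject } H_0 \text{ when } |z| > z_{\alpha/2} \approx 1.65 \quad (\alpha = 0.10).
\]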


Table 2
Final rule sets which translate usage data into satisfaction parameters

Final rules for CONT1 parameter (the response to the statement ``The information helps me plan my finances'')
  Non-respondent:         Sum = 0
  1 – Strongly agree:     Average ≥ 3.5
  2 – Somewhat agree:     Average ≥ 2
  3 – Neutral:            Average ≥ 1
  4 – Somewhat disagree:  Average ≥ 0.5
  5 – Strongly disagree:  Average < 0.5
  0 – Don't know:         –
  Calculations: Sum = (calls to Personal Account Information Menu for the quarter) + 3*(calls to General Plan Information Menu for the quarter) + 2*(calls to Personal Contribution section for the quarter) + 2*(calls to Personal Transfer section) + 2*(calls to the Personal Loan Information section) + 2*(calls to the Loan Modeling section). Average = Sum/(number of months in which calls were made).

Final rules for FORM parameter (the response to the statement ``The information helps me understand the Savings and Profit Sharing Plan and the features and options available to me.'')
  Non-respondent:         Suminf = 0 or Sumpers = 0
  1 – Strongly agree:     Ratio ≥ 0.5
  2 – Somewhat agree:     Ratio ≥ 0.3
  3 – Neutral:            Ratio ≥ 0.2
  4 – Somewhat disagree:  Ratio ≥ 0.15
  5 – Strongly disagree:  Ratio ≥ 0.1
  0 – Don't know:         Ratio < 0.1
  Calculations: Suminf = (total calls to General Plan Information Menu for the quarter) + (total calls to Savings Express Explanation section for the quarter). Sumpers = total calls to the Personal Account Information Menu for the quarter. Ratio = Suminf/Sumpers.

Final rules for CONT2 parameter (the response to the statement ``I would like to receive more information.'')
  Non-respondent:         Sumpinf ≤ 3
  1 – Strongly agree:     Ratio = 0 and (Staravg = 0 or Staravg > 120)
  2 – Somewhat agree:     Ratio ≤ 0.35 and (Staravg = 0 or Staravg > 90)
  3 – Neutral:            Ratio ≤ 0.5 and (Staravg = 0 or Staravg > 60)
  4 – Somewhat disagree:  Ratio ≤ 0.85 and Staravg > 30
  5 – Strongly disagree:  Ratio ≤ 1.5 and Staravg > 10
  0 – Don't know:         Ratio > 1.5
  Calculations: Sumpers = total calls to the Personal Account Information Menu for the quarter. Sumpin = total calls to the PIN Change section for the quarter. Ratio = Sumpin/Sumpers. Sumstar = total number of times the `star' key was pressed. Sumtostar = total seconds elapsed before pressing the star key, summed for each occurrence of pressing the star key. Staravg = Sumtostar/Sumstar.

Final rules for ACC parameter (the response to the statement ``The information is accurate'')
  Non-respondent:         Avg 1 = 0
  1 – Strongly agree:     Avg 1 = 1 or (Avg 1 = 2 and Avg 2 = 1)
  2 – Somewhat agree:     Avg 1 = 2 or (Avg 1 = 3 and Avg 2 = 1)
  3 – Neutral:            Avg 1 = 3 or (Avg 1 = 4 and Avg 2 = 1)
  4 – Somewhat disagree:  Avg 1 = 4 or (Avg 1 = 5 and Avg 2 = 1)
  5 – Strongly disagree:  Avg 1 = 5
  0 – Don't know:         Avg 1 > 5
  Calculations: Sum 1 = (maximum of total calls to any of the following modules for month #1: Account Balance section, Personal Contribution section, Personal Transfer section, Personal Withdrawal section, and Personal Loan Information section) + (the same maximum for month #2) + (the same maximum for month #3). Sum 2 = (second largest number of total calls to any of the same modules for month #1) + (the same for month #2) + (the same for month #3). Avg 1 = integer value of [Sum 1/(number of months in which calls were made)]. Avg 2 = integer value of [Sum 2/(number of months in which calls were made)].

Final rules for EASY parameter (the response to the statement ``I can quickly and easily obtain the information I need.'')
  Non-respondent:         Sumpers ≤ 4 and Sumtostar ≠ 0 and Sumend ≠ 0
  1 – Strongly agree:     Sumtostar = 0 or Sumstar = 0 or Staravg ≥ 360
  2 – Somewhat agree:     Staravg ≥ 240
  3 – Neutral:            Staravg ≥ 120
  4 – Somewhat disagree:  Staravg ≥ 60
  5 – Strongly disagree:  Staravg < 60
  0 – Don't know:         –
  Calculations: Sumpers = total calls to the Personal Account Information Menu for the quarter. Sumend = total calls to the End Call section for the quarter. Sumstar = total number of times the `star' key was pressed. Sumtostar = total seconds elapsed before pressing the star key, summed for each occurrence of pressing the star key. Staravg = Sumtostar/Sumstar.

Final rules for TIME parameter (the response to the statement ``The information is out-of-date.'')
  Non-respondent:         PersAvg ≤ 0.5
  1 – Strongly agree:     DaySum ≥ 4
  2 – Somewhat agree:     DaySum ≥ 3
  3 – Neutral:            DaySum ≥ 2
  4 – Somewhat disagree:  DaySum ≥ 1
  5 – Strongly disagree:  DaySum ≥ 0
  0 – Don't know:         –
  Calculations: Sumpers = total calls to the Personal Account Information Menu for the quarter. PersAvg = Sumpers/(number of months in which calls were made). DaySum = total number of calls for the quarter made on the 1st, 2nd, or 3rd day of a month.
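As a concrete rendering of the first of these rule sets, the sketch below implements the CONT1 rules from Table 2. Python is used for illustration (the study's code was Visual Basic), the dictionary keys are paraphrases of the section names, and the ≥ thresholds are as reconstructed above.

    def cont1_parameter(calls, months_with_calls):
        # `calls` maps an IVRS section to its total calls for the quarter.
        total = (calls.get('personal_account_menu', 0)
                 + 3 * calls.get('general_plan_menu', 0)
                 + 2 * calls.get('personal_contribution', 0)
                 + 2 * calls.get('personal_transfer', 0)
                 + 2 * calls.get('personal_loan_info', 0)
                 + 2 * calls.get('loan_modeling', 0))
        if total == 0:
            return None                  # non-respondent
        average = total / months_with_calls
        # Top-down checking: the first condition met ends the search.
        if average >= 3.5:
            return 1                     # Strongly agree
        if average >= 2:
            return 2                     # Somewhat agree
        if average >= 1:
            return 3                     # Neutral
        if average >= 0.5:
            return 4                     # Somewhat disagree
        return 5                         # Strongly disagree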

Table 3
Statistical justification of meta-monitoring parameter rules – comparison of resulting meta-monitoring parameter means (`_M') with survey parameter means (`_S')

Parameters            Null hypothesis       Alternative hypothesis   z-statistic   Result
CONT1_S vs. CONT1_M   CONT1_S = CONT1_M     CONT1_S ≠ CONT1_M         0.97         Fail to reject the hypothesis that CONT1_S = CONT1_M
FORM_S vs. FORM_M     FORM_S = FORM_M       FORM_S ≠ FORM_M           0.85         Fail to reject the hypothesis that FORM_S = FORM_M
CONT2_S vs. CONT2_M   CONT2_S = CONT2_M     CONT2_S ≠ CONT2_M        −1.69         Reject the hypothesis that CONT2_S = CONT2_M
ACC_S vs. ACC_M       ACC_S = ACC_M         ACC_S ≠ ACC_M             1.68         Reject the hypothesis that ACC_S = ACC_M
EASY_S vs. EASY_M     EASY_S = EASY_M       EASY_S ≠ EASY_M           1.41         Fail to reject the hypothesis that EASY_S = EASY_M
TIME_S vs. TIME_M     TIME_S = TIME_M       TIME_S ≠ TIME_M          −0.86         Fail to reject the hypothesis that TIME_S = TIME_M


Fig. 5. Response percentage histograms.

It is important to note that when the result is ``fail to
reject the hypothesis that the two means are equal,''
statistically this does not mean that it is `accepted' that
the means are equal.
Technically, the meta-monitoring parameters CONT2_M and ACC_M were shown to be unequal to their survey counterparts (even after the meta-monitoring rules for these parameters had been taken to the extreme side of their ranges). However, CONT2_S = CONT2_M was rejected because −1.69 < −1.65, and ACC_S = ACC_M was rejected because 1.68 > 1.65. As these hypotheses were only barely rejected, histograms of the percentage of responses were plotted to better examine the distributions of the responses; these histograms appear in Fig. 5. The distributions appear to be quite similar, and the null hypotheses were just on the fringe of rejection. Therefore, z-tests were run on the means of the above percentage distributions, and the results were z-statistics of 0.65 for CONT2 and −1.53 for ACC, both of which lead to a conclusion of ``fail to reject that the means are equal.'' On these grounds, it is the opinion of this researcher that the results from the meta-monitoring CONT2 and ACC parameters can still be considered useful.

A summary of the parameter means for the survey and the parameter means for the usage data collected and analyzed by the meta-monitoring system appears in Table 4. Numbers from both the first survey distribution comparison (`rule derivation') and the second survey distribution comparison (`rule validation') are included. When examining these means, it is important to note that the comparisons of interest to the study are between the meta-monitoring and survey parameter means; the non-movement of the means after the enhancements attracts attention, but the purpose of the study was to judge the meta-monitoring system's ability to mirror the survey.

5. Discussion and conclusions
This research has shown that system usage behavior, which is easily tracked and recorded, can be
analyzed to produce a measure of user satisfaction.
The rules established by the experts used for this study

Table 4
Survey and meta-monitoring satisfaction parameter mean comparison (possible mean values range from ``1 – strongly agree'' (extremely satisfied) to ``5 – strongly disagree'' (extremely dissatisfied))

                        Before enhancements                   After enhancements
Parameter               Survey mean   Meta-monitoring mean    Survey mean   Meta-monitoring mean
Cont1                   1.85          1.78                    1.82          1.89
Form                    1.50          1.44                    1.65          1.58
Cont2                   1.47          1.67                    1.51          1.47
Acc                     1.43          1.33                    1.43          1.36
Easy                    1.31          1.21                    1.38          1.27
Time (inverted range)   4.19          4.26                    4.06          4.23


allowed a meta-monitoring information system to analyze system usage data and arrive at a measure of user satisfaction which was similar to that found using a validated survey. The success of this approach in a single organizational test has exciting potential ramifications. If further testing of the methodology were to validate these results, business organizations could have an ongoing measure of user satisfaction at minimal relative expense. This continuous measure could be achieved with start-up energy and costs only slightly higher than those associated with current one-time measures of user satisfaction, and with nearly zero additional expense thereafter. Thus, managers could easily and confidently track user satisfaction with information systems in their organization, and appropriate action could be taken if satisfaction dropped off suddenly or gradually. Such tracking continues to take place with the system described in this study.

While the potential uses of such a system could be consequential, there are limitations in the research, and further testing and verification are needed. Specifically, the following activities are desirable:

1. install and study additional meta-monitoring systems, to solidify conclusions based on inductive reasoning;
2. install and study additional meta-monitoring systems, to observe the system's performance when user satisfaction increases or decreases significantly;
3. install and study additional meta-monitoring systems and control systems simultaneously, to achieve enhanced control of the results;
4. install and study different types of meta-monitoring systems, in particular systems which must be used (in contrast to the voluntary nature of the usage of this system);
5. study the meta-monitoring rule creation process in greater depth and with greater formality, with more experts being involved.
Appendix A
Survey instrument
HOW DO YOU LIKE SAVINGS EXPRESS?
Please answer the following questions based on your experience with the Savings and Profit Sharing Plan information you have received from Savings Express regarding your account balance, amounts available for loan or withdrawal, etc. since Savings Express started April 1st, 1993. Assume that questions are referring to the automated part of Savings Express (recorded voice) unless otherwise indicated.

Each statement is rated on the following scale: 1 – Strongly agree; 2 – Somewhat agree; 3 – Neutral; 4 – Somewhat disagree; 5 – Strongly disagree; 0 – Don't know.

The information helps me plan my finances.
The information helps me understand the Savings and Profit Sharing Plan and the features and options available to me.
Savings Express representatives are courteous and helpful.
I feel comfortable that the information which Savings Express conveys to me will be kept confidential.
The information is consistent.
The information is accurate.
I can quickly and easily obtain the information I need.
I would like to receive more information.
The information is out-of-date.
Overall, I am satisfied with the way the information is communicated to me.


PLEASE GIVE US YOUR PERSONAL PROFILE

What is your age?  1 – 25 or under; 2 – 26–35; 3 – 36–45; 4 – 46–55; 5 – 56 or older
What is your sex?  1 – Female; 2 – Male
Have you ever used an automated voice response system before?  1 – Yes; 2 – No

THANK YOU
Please return your survey in the pre-addressed, postage-paid envelope by June 15, 1993.


Appendix B

Sample of actual meta-monitoring system analysis

The output contains derived responses, for each randomly selected Social Security number, to the six quality parameters. Note that the Social Security numbers have been removed in the interest of confidentiality (Table 5).

Table 5
Meta-monitoring derived answers to the six quality parameters, from the 1 April 1993 to 30 June 1993 dump

CONT1   FORM   CONT2   ACC   EASY   TIME
2       –      –       1     –      4
3       –      –       1     –      5
3       –      –       1     1      5
3       1      –       1     –      –
1       1      –       1     –      5
2       –      –       1     –      5
2       2      –       2     1      4
3       1      –       1     –      4
1       1      –       1     1      5
2       –      –       1     –      4
3       –      –       1     –      4
1       –      1       2     1      3
1       1      –       1     –      5
2       –      –       1     –      5
2       –      –       –     –      5
1       1      –       1     –      4
1       –      –       1     –      4
...
2       –      –       2     –      5
1       –      –       1     –      5
1       1      –       1     1      5
3       1      –       1     –      –
1       –      1       3     1      1
1       –      –       1     –      5
1       1      –       1     –      4
1       1      –       1     –      5
3       –      –       2     1      5
2       1      1       1     1      4
ETC.

References

[1] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29(5), 1983, pp. 530–545.
[2] J.J. Baroudi, M.H. Olson, B. Ives, An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM 29(3), 1986, pp. 232–238.
[3] G. Cole, User acceptance key to success of voice processing, Computing Canada 18(13), 1992, pp. 39–40.
[4] W.J. Doll, G. Torkzadeh, The measurement of end-user computing satisfaction, MIS Quarterly 12(2), 1988, pp. 259–274.
[5] C.E. Downing, Rhetoric or reality? The professed satisfaction of older customers with information technology, Journal of End User Computing 9(1), 1997, pp. 15–27.
[6] J. Etezadi-Amoli, A. Farhoomand, On end-user computing satisfaction, MIS Quarterly 15(1), 1991, pp. 1–4.
[7] M. Fishbein, I. Ajzen, Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Boston, MA, 1975.


[8] M. Fleischer, J. Morrell, The use of office automation by managers: A survey, Information Management Review 4(1), 1988, pp. 1–13.
[9] L. Foster, D. Flynn, Management information technology: Its effects on organizational form and function, MIS Quarterly 8(4), 1984, pp. 229–235.
[10] C. Gallagher, Perceptions of the value of a management information system, Academy of Management Journal 17, 1974, pp. 46–55.
[11] A. Hendrickson, K. Glorfeld, T. Cronan, On the repeated test-retest reliability of the end-user computing satisfaction instrument: A comment, Decision Sciences 25(4), 1994, pp. 655–667.
[12] S. Hiltz, K. Johnson, User satisfaction with computer-mediated communication systems, Management Science 36(6), 1990, pp. 739–764.
[13] B. Ives, M. Olson, J. Baroudi, The measurement of user information satisfaction, Communications of the ACM 26(10), 1983, pp. 785–794.
[14] J. Maglitta, Anxious allies: 1995 CEO/CFO survey, Computerworld Special Report 12, 1995, pp. 5–9.
[15] Z. Millman, J. Hartwick, The impact of automated office systems on middle managers and their work, MIS Quarterly 11(4), 1987, pp. 479–490.
[16] C. Nemeth, Effects of free versus constrained behavior on attraction between people, Journal of Personality and Social Psychology 15, 1970, pp. 302–311.
[17] S. Sampson, Ramifications of monitoring service quality through passively solicited customer feedback, Decision Sciences 27(4), 1996, pp. 601–622.
[18] M. Sheldrick, Technology for the elderly, Electronic News 38(1912), 1992, p. 22.
[19] P. Tom, Managing Information as a Corporate Resource, HarperCollins, New York, 1991.
[20] G. Torkzadeh, W. Doll, Test-retest reliability of the end-user computing satisfaction instrument, Decision Sciences 22(1), 1991, pp. 26–38.
[21] A. Wicker, Attitudes vs. actions: The relationship of verbal and overt behavioral responses to attitude objects, Journal of Social Issues 25, 1969, pp. 41–78.

Charles E. Downing is Assistant Professor of Management Information Systems in the Wallace E. Carroll School of Management at Boston College. Professor Downing received his Ph.D. from Northwestern University in 1994, and has significant experience as an Information Technology consultant in the financial services industry. He continues to research and consult on topics such as measuring the effectiveness of management information systems, the implementation and management of Decision Support Systems, and telecommunications-assisted collaboration and communication. His articles have appeared in major journals, he has been quoted in The Boston Herald and other popular press venues, and he was a contributing author of the book Groupware: Collaborative Strategies for Corporate LANs and Intranets.