Table 11
Monotonicity violations (a)
Percentage of subjects changing choices for the questions listed (b)

Problem pair   Standard measure            9+ measure              Total
               P      U      N             P      U      N
7 and 3/13     5.7    4.0    90.3          5.7    4.0    90.3      176
9 and 3/13     35.7   4.4    59.9          33.0   3.3    63.7      182
8 and 2        25.0   2.2    72.8          23.9   0.0    76.1       92
10 and 2       20.6   6.5    72.9          13.0   3.3    83.7       92
Total          21.6   4.2    74.1          19.2   3.0    77.8      542

(a) Questions 3 and 13 were identical, so have been combined here. Each subject potentially provided six observations, and 542 of the 564 resulting comparisons were valid.
(b) P = predicted, U = unpredicted and N = neither.
it should be possible for a fraction of those people to violate event-wise monotonicity if an unsuitable choice rule is prompted. But even so, it should make economists uneasy that ceteris paribus improvements of the order of A$2–A$6 to the expected value of one gamble (all expected values were under A$20) should result in a choice switch away from that gamble some 4 percent of the time. It suggests that the haziness and uncertainty surrounding our preferences may be deeper than has hitherto been supposed.
4. Discussion and conclusion
4.1. Relevant literature

As in this paper, similarity theory (Leland, 1994) applies only when the utilities of the choice options are close; for other choices Expected Utility theory is assumed to hold. When the lotteries are similar, preferences are said to be unclear, and a simple series of comparisons is made in a particular order to produce a choice. Choice reversals are predicted as a by-product of gaps in the choice rule. Similarity theory assumes individuals apply a lexicographic semi-order, which is a limiting case of the compensatory additive-difference choice rule (see Suppes et al., 1989). Being lexicographic, and therefore non-compensatory, it involves fewer cognitive costs than the general case of the additive-difference model, while still generating some of the predictions of Generalised Expected Utility theories based on that model, such as regret theory.
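To fix ideas, the following is a minimal sketch of such a rule for simple one-outcome lotteries; the relative-difference similarity test, the 0.15 threshold and the payoff-first ordering are illustrative assumptions, not Leland's calibrated specification.

```python
# A sketch of a similarity-based lexicographic semi-order for binary
# lotteries of the form "win x with probability p, else zero".
# Threshold and dimension ordering are illustrative assumptions.

def similar(a: float, b: float, threshold: float = 0.15) -> bool:
    """Two magnitudes are 'similar' if their relative difference is small."""
    if max(a, b) == 0:
        return True
    return abs(a - b) / max(a, b) < threshold

def choose(lottery1: tuple, lottery2: tuple) -> str:
    """Inspect one dimension at a time; stop at the first dimension
    showing a noticeable (dissimilar) difference.
    Each lottery is (payoff, probability)."""
    (x1, p1), (x2, p2) = lottery1, lottery2
    # Stage 1: payoffs. A noticeable payoff advantage decides the choice
    # outright; no trade-off against probability is ever considered
    # (hence 'non-compensatory').
    if not similar(x1, x2):
        return "lottery1" if x1 > x2 else "lottery2"
    # Stage 2: probabilities, consulted only if payoffs are similar.
    if not similar(p1, p2):
        return "lottery1" if p1 > p2 else "lottery2"
    # Both dimensions similar: preference is unclear and the theory
    # leaves the choice arbitrary -- the gap where reversals arise.
    return "arbitrary"

print(choose((10.0, 0.30), (9.50, 0.32)))  # both similar -> "arbitrary"
print(choose((10.0, 0.30), (4.0, 0.35)))   # payoffs differ -> "lottery1"
```

Because the rule stops at the first noticeably different dimension, a payoff advantage is never traded off against a probability disadvantage; that is the source of both its cognitive cheapness and the prediction gaps noted below.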
But there are weaknesses; measuring the similarity of choice objects is not always easy. Buschena and Zilberman (1996) argue it is unclear how to apply the theory to lotteries with multiple non-zero outcomes. Also, its inability to confront trade-offs prevents it from making any prediction other than an arbitrary choice for some choice pairs, where arbitrariness may not otherwise seem to be required. Similarity theory's predictions when the presentational displays are altered have also not been formalised, as the theory is defined on lotteries.
But similarity theory has received some experimental support. It is probable that similarity
has a role to play in explaining the clarity of a subject's preference: the more similar the lotteries are perceived to be, the less likely is a confident choice, and the more likely is the risky gamble to be selected. But the correspondence is far from perfect; for example, the Expected Value rule can produce a clear preference even for very similar lotteries. Also, Butler and Loomes (1988) found that increasing the variance of one lottery in a pair-wise choice problem led to less confident choices, despite the options becoming less similar. Although no complete list of items likely to impact on preference clarity is offered here, likely candidates include the displays used to present the lotteries and their presentation within a display, the variance of the riskier lottery, and the magnitude of differences in expected value. This is a task for future research.
Butler and Loomes also report evidence of extensive haziness of preference, even for simple, well-defined choice problems. They suggest that individuals do not have cognitively cheap access to a clear preference ordering, even over relatively straightforward lotteries. Loomes (1998) presents further evidence that subjects' responses are constructed, rather than a reflection of well-defined underlying preferences. Loomes (1988) provides evidence that alternative procedures for eliciting preferences lead to different certainty-equivalent valuations of risky options. When asked to state their certainty equivalents for various simple lotteries, most individuals employed very coarse-grained rounding for those values. He found that one method of eliciting preferences, the 'iterative choice and valuation' method, led to notably less rounding of certainty equivalents, which in the current paper would imply relatively small Just Noticeable Differences, a clearer view of one's underlying disposition and hence fewer choice reversals. As heterogeneity in the extent of rounding used by subjects was also observed, different individuals may exhibit Expected Utility and Generalised Expected Utility patterns, and choice reversal rates, to different degrees.
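The mechanism linking coarse rounding to reversals can be illustrated with a toy simulation. Nothing below is taken from the papers cited; the grid sizes, the true certainty equivalents and the coin-flip tie-break are all assumptions for illustration.

```python
# Coarse rounding of certainty equivalents acts like a wide Just
# Noticeable Difference: options whose true CEs differ by less than
# the rounding grain become indistinguishable, so choices flip.
import random

random.seed(1)

def reported_ce(true_ce: float, grain: float) -> float:
    """Round a true certainty equivalent to the nearest grid point."""
    return round(true_ce / grain) * grain

def choose(ce_a: float, ce_b: float, grain: float) -> str:
    """Choose by reported CEs; a rounding tie means the options fall
    inside one JND, so the choice is effectively a coin flip."""
    ra, rb = reported_ce(ce_a, grain), reported_ce(ce_b, grain)
    if ra == rb:
        return random.choice(["A", "B"])
    return "A" if ra > rb else "B"

true_a, true_b = 11.8, 12.4   # true CEs differ by A$0.60; B is better
for grain in (0.5, 5.0):      # fine versus coarse rounding
    picks = [choose(true_a, true_b, grain) for _ in range(1000)]
    reversal_rate = picks.count("A") / len(picks)
    print(f"grain A${grain}: chose worse option {reversal_rate:.1%} of trials")
```

With a fine grain the better option is chosen every time; with a coarse grain the two options round to the same value and the worse option is picked on roughly half of all trials.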
I make the normative presumption that for a given expenditure of cognitive resources, the occurrence of choice reversals should be minimised. The more often the option with
the higher utility can be recognised and selected, the greater will be total utility. This is equivalent to prescribing the use of displays and elicitation methods that prompt the use of
the most suitable choice rules to improve the clarity of the preferences. Payne et al. (1993) made a related argument for choice in general, using a broader definition of accuracy: “...better decisions can be encouraged by designing displays that passively encourage more accurate strategies by making them easier to execute”.
Both Loomes et al. (1997) and Ballinger and Wilcox (1997) assume a family of stochastic models, each incorporating a core Generalised Expected Utility theory, which emerges as the deterministic special case as the stochastic element reduces to zero. Thus, regret theory could be a core theory just as easily as could Expected Utility theory. Choice reversals are then measured against these underlying core preference structures.
Loomes et al. divide the explanations of choice reversals in the literature into three classes, depending on the stage at which the randomness is introduced: a preference selection stage, a computation stage, and an action stage. They suggest viewing the Harless and Camerer (1994) model as one of 'white noise' located in the action stage, which we have here argued plays a limited role in explaining the extent and distribution of choice reversals. They then
argue that Hey and Orme (1994) assume the randomness occurs at the computation stage, where individuals are liable to processing errors. Loomes et al. prefer a random preference model, in which individuals possess a stochastic utility function rather than one single true utility function. That is, they locate the error in the initial preference-selection stage; once a utility function has been selected, there is no further error. Although the Harless and Camerer model can be appended to the other models, the remaining theories are more clearly rivals to each other.
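The difference between locating the randomness in the preference-selection stage and in the action stage can be made concrete with a small simulation; the power-utility core, the uniform parameter distribution and the tremble rate below are illustrative assumptions, not the specifications estimated in the papers cited.

```python
# Contrasting two stochastic-choice specifications with Expected
# Utility as the core theory. Functional form and parameters are
# illustrative assumptions only.
import random

random.seed(7)

def eu(lottery, r):
    """Expected utility of [(prob, payoff), ...] under u(x) = x**r."""
    return sum(p * x**r for p, x in lottery)

SAFE  = [(1.0, 10.0)]                 # A$10 for sure
RISKY = [(0.5, 25.0), (0.5, 0.0)]     # 50/50: A$25 or nothing

def random_preference_choice():
    """Random preference: a utility function (here, the curvature r) is
    drawn afresh each time; once drawn, the choice itself is error-free."""
    r = random.uniform(0.4, 1.0)
    return "risky" if eu(RISKY, r) > eu(SAFE, r) else "safe"

def tremble_choice(r=0.7, epsilon=0.05):
    """White noise at the action stage: one fixed utility function, but
    the intended choice is flipped with a constant probability."""
    intended = "risky" if eu(RISKY, r) > eu(SAFE, r) else "safe"
    if random.random() < epsilon:
        return "safe" if intended == "risky" else "risky"
    return intended

for label, rule in [("random preference", random_preference_choice),
                    ("tremble", tremble_choice)]:
    picks = [rule() for _ in range(10000)]
    print(f"{label}: chose risky {picks.count('risky')/len(picks):.1%}")
```

Under the random preference specification, variability appears only when the drawn utility functions disagree about the pair at hand, so reversals concentrate where the options are close in utility; the action-stage tremble flips choices at a constant rate regardless of the pair.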
By contrast, I have not argued that our 'underlying disposition' utility function is random, but that there is no uniquely defined utility function within a Just Noticeable Difference. It is the incompleteness of a stable utility function that leads to attempts at preference construction, and to any remaining choice errors. The results of the present experiment offer support for this explanation of errors. Encouragingly, event-splitting effects and regret effects also seem to be disproportionately concentrated in lottery pairs and displays where subjects are less sure of their preference. Although not all of the latter effects were statistically significant, a larger sample size might have made them so. These findings suggest that the Generalised Expected Utility theories' claims to represent core preference structures are questionable, making the theoretical results in Butler (1998) more pertinent.
5. Conclusions