Human Judgment and Decision Making with Imprecise Probabilities
Michael J. Smithson
1. Introduction
The study of human judgment under uncertainty has a history that is almost
contemporaneous with that of probability theories. This is not a coincidence.
From the outset, the idea of using probability to describe cognitive states
or aspects of subjective judgment has provoked debate, theory construction,
and empirical research. It is no exaggeration to say that probability theories
have exerted a strong prescriptive influence on the study of judgment and
decision making (see Gigerenzer 1994 [21] and
Smithson 1989 [41] for overviews).
In the modern era, proponents of the Subjective Expected Utility (SEU)
framework advocated a version of Bayesianism as the benchmark for rational
judgment and decision making, and this viewpoint dominated studies of human
judgment and decision making during the 50's and 60's. By the late 70's
and early 80's, some scholars had begun to question whether we should regard
deviations from probability theories as "irrational" (cf. Cohen 1981 [9],
Jungermann 1983 [29]), and attempts to develop descriptive
theories of decision making that retained as many features of SEU as possible
became a small-scale industry among decision scientists. However, most
of these critiques and alternatives have left certain Bayesian prescriptions
unquestioned. Two of these are directly relevant to the study of imprecise
probabilities:

Precision, i.e., the doctrine that uncertainty (and utility as well) may
be represented by a single number; and

Prior sample space knowledge, i.e., the assumption that all possible outcomes
or alternatives are known beforehand.
Accusations against Bayesians of overprecision and arbitrariness in their
priors date back to the mid-19th century, but empirical studies
of how people deal with imprecision were rare until the mid-1980's and
to this day there are almost no studies of how people cope with sample
space ignorance. In terms of reasonable combinations from Table 1, the
vast majority of empirical studies deal with situations where probabilities
are precise, outcomes are known, and the utilities of all outcomes are
precise (cell 1). A much smaller but growing literature concerns situations
with vague probabilities but known outcomes (usually with precise utilities;
cell 2). A still smaller set of studies deals with imprecise utilities
and/or partly known outcomes. Very few studies venture any further into
imprecision or ignorance than that (but see, for instance, Hogarth &
Kunreuther 1995 [28]).



                           Probabilities:
Outcomes:      Utilities:  Precise   Vague   Vacuous

Known          Precise        1        2        3
               Vague          4        5        6
               Vacuous        7        8        9

Partly known   Precise       10       11       12
               Vague         13       14       15
               Vacuous       16       17       18

Unknown        Vacuous        -        -       19

Table 1: Knowledge of Outcomes and Probabilities
Early attempts to develop descriptive as well as prescriptive frameworks
for decision making when probabilities are unknown (but not necessarily
when outcomes also are unknown) include Keynes (1921 [31])
and Knight (1921 [32]). These and other more recent
writings have created something of a confusion in concepts and terminology
that has yet to be resolved. In most psychological research, writers use
"ambiguity" to refer to imprecise probabilities. I will use that term here,
although I prefer Max Black's (1937 [3]) use of "vagueness"
for this purpose (in the classical tradition philosophers such as Black
and literary critics such as Empson (1930 [17])
use "ambiguity" to refer to multiple discrete possible interpretations
for a word or object).
Although Knight ([32] 219) had posed conundrums
for decision makers based on ambiguous probabilities, Ellsberg (1961 [16])
brought this matter to the attention of SEU advocates and the psychological
research community. His stated object was to investigate whether the distinction
between ambiguous and precise probabilities has any "behavioral significance."
His most persuasive example (the one most writers describe in connection
with "Ellsberg's Paradox") is as follows. Suppose we have an urn with 90
balls, of which 30 are Red and 60 are either Black or Yellow (but the proportions
of each are unknown). If asked to choose between gambles A and B as shown
in the upper part of Table 2 (i.e., betting on Red versus betting on Black),
most people prefer A.

             30              60
            Red     Black   Yellow
A          $100      $0       $0
B           $0      $100      $0

             30              60
            Red     Black   Yellow
C          $100      $0      $100
D           $0      $100     $100

Table 2: Ellsberg's Problem
However, when asked to choose between gambles C and D, most people prefer
D. People preferring A to B and D to C are violating one of the SEU axioms
(often called the "Sure-Thing Principle") because the (C,D) pair simply
adds $100 for drawing a Yellow ball to the (A,B) pair. If we prefer A to
B, we should also prefer C to D.
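The sure-thing violation can be checked arithmetically: under any assumed proportion of Black balls, the expected-payoff gap between A and B equals the gap between C and D, so no precise prior can justify preferring both A and D. A minimal sketch (payoffs from Table 2; the helper names are illustrative):

```python
# Ellsberg's urn: 90 balls, 30 Red, 60 Black-or-Yellow in unknown proportion.
# Payoffs per gamble for drawing (Red, Black, Yellow), from Table 2.
GAMBLES = {
    "A": (100, 0, 0),
    "B": (0, 100, 0),
    "C": (100, 0, 100),
    "D": (0, 100, 100),
}

def expected_value(gamble, p_black):
    """Expected payoff given an assumed proportion of Black balls."""
    p_red = 30 / 90
    p_yellow = 60 / 90 - p_black
    r, b, y = GAMBLES[gamble]
    return p_red * r + p_black * b + p_yellow * y

# For every admissible p_black, EV(A) - EV(B) equals EV(C) - EV(D):
# whatever makes A better than B also makes C better than D.
for p_black in [0.0, 1/6, 1/3, 1/2, 2/3]:
    gap_ab = expected_value("A", p_black) - expected_value("B", p_black)
    gap_cd = expected_value("C", p_black) - expected_value("D", p_black)
    assert abs(gap_ab - gap_cd) < 1e-9
```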
Thus, Ellsberg demonstrated that ambiguity has behavioral consequences
that violate the axioms of SEU. An obvious explanation for the A-D preference
pattern is that when probabilities are imprecise people adopt a pessimistic
stance towards the possible outcomes. The research literature on this phenomenon
calls it "ambiguity aversion", and the past 20 years have witnessed a marked
growth in the number of studies and attempts to incorporate it into modified
SEU frameworks. This literature is summarized in the sections on Ambiguity
aversion and Empirical studies of ambiguous probability
models. The review paper by Camerer & Weber (1992 [7])
provides an in-depth and thorough survey.
Sample space ignorance, on the other hand, has received only indirect
and unsystematic attention. Research during the late 70's indicated that
when alternatives are not explicitly represented in a decision tableau,
people tend to underestimate the probability of their occurrence. This
was termed the "Catch-All Underestimation Bias" (CAUB) by Fischhoff, Slovic,
& Lichtenstein (1978 [18]), and it spawned
several studies during the 80's and 90's. A recent framework that generalizes
this phenomenon along lines that are contrary to Walley's (1991 [45],
1996 [46]) Embedding and Representation Invariance
Principles is Support Theory (Tversky & Koehler 1994 [44]).
These and related accounts are described in the section on Research
related to sample space ignorance.
2. Ambiguity aversion
This review is necessarily less thoroughgoing and briefer than the excellent
survey by Camerer & Weber (1992 [7]). It does,
however, contain a few updates. As they observe, one kind of empirical
work on ambiguous probabilities has consisted of replications of Ellsberg's
original thought-experiments. Ambiguity aversion has been found not only
under the conditions set up by Ellsberg (cf. Table 3 in Camerer & Weber
1992 for a list of studies) and gambles with real payoffs, but also even
after people are exposed to written arguments persuading them not to indulge
in it (e.g., MacCrimmon 1968 [34], Slovic &
Tversky 1974 [40]). Some studies report less ambiguity
aversion for losses than gains (Cohen, Jaffray, & Said 1985 [10]
and Einhorn & Hogarth 1986 [15]), but at least
one has found no such difference (Kahn & Sarin 1988 [30]).
The major exceptions to ambiguity aversion appear to be ambiguity preference
at low probabilities under the prospect of gain and high probabilities
under the prospect of loss (Curley & Yates 1985 [11],
Einhorn & Hogarth 1985 [14], Kahn & Sarin
1988 [30], and Hogarth & Einhorn 1990 [27]).
However, as Heath & Tversky (1991 [25]) and
Boiney (1993 [4]) have pointed out, these findings
may be due to a 'regression' effect, whereby people infer that an ambiguous
probability has a skewed distribution, thereby biasing its mean upward
from the lower end or downward from the upper end of the [0,1] interval.
A number of writers have attempted psychological explanations of ambiguity
aversion. Many have taken their cue from Ellsberg's claim that this phenomenon
is both reasonable and independent of probability judgments per se. Such
a claim goes beyond the trite assertion that the imprecision of a probability
may vary independently of the magnitude of the probability. If people who
prefer to receive $10 for sure to a coin-toss for $20 or nothing, also
prefer the coin-toss to betting $20 on drawing a Red ball from an urn containing
an unknown number of Red and Black balls, then there is nothing that psychologically
distinguishes ambiguity aversion from risk aversion. Several investigators
have found only very low correlations between people's risk attitudes and
their attitudes towards ambiguity (Cohen, Jaffray, & Said 1985, Hogarth
& Einhorn 1990 [27], Schoemaker 1991 [38]),
which would seem to indicate that ambiguity aversion is a distinct phenomenon.
Moreover, Sherman (1974 [39]) found a modest correlation
between ambiguity aversion and a psychometric scale measuring "intolerance"
of ambiguity (developed by Budner 1962 [6]).
Nevertheless, some evidence indicates that ambiguity aversion operates
somewhat similarly to risk aversion. First, increased ambiguity may increase
perceived riskiness. Second, ambiguity seems to exhibit some framing effects
that are analogous to those found in precise probability judgments. Smithson
(1989 [41]) reports two relevant experiments: one
in which subjects showed a tendency to perceive an ambiguous risk couched
in terms of success as less risky than an equivalent risk couched in terms
of failure; and another in which they tended to rate a prospect couched
in terms of possibility as less restricting than one couched in terms of
necessity. Third, several investigators (Casey & Scholz 1991 [8],
Gonzalez-Vallejo et al. 1996 [24], Kuhn &
Budescu 1996 [33], and Smithson 1989) present evidence
that people do not find imprecise outcomes more or less important
than imprecise probabilities, and that they respond similarly to either
kind of imprecision.
Explanations for ambiguity aversion have foundered to some degree on
the shoals of definitions. Since various writers have used "ambiguity"
to mean rather different things, their explanations sometimes talk past
one another. Proposals fall mainly into two camps. One emphasizes the idea
that people respond to ambiguity on the basis of preferences. The other
claims that their responses are due to an impact that ambiguity has on
people's perceptions of likelihood and therefore risk. These explanations
are, of course, not incompatible.
The preferences camp comprises a number of competing explanations, whose
common thread is the supposedly negative consequences that people attribute
to ambiguous situations. Frisch & Baron (1988 [20]),
for instance, see ambiguity as a matter of missing information "that is
relevant and could be known", and therefore think ambiguity aversion arises
from generalizing a heuristic to avoid placing bets when one lacks information
that others might have. Einhorn & Hogarth (1985 [14]),
on the other hand, consider that disagreements or conflicting assessments
cause ambiguity by way of a decrement in source credibility. In a somewhat
similar vein, Curley, Yates, & Abrams (1986 [12])
argue that selfjustification accounts for ambiguity aversion; while Heath
& Tversky (1991 [25]) produce evidence suggesting
that ambiguity aversion disappears when people believe they have sufficient
knowledge or skill in the relevant domain. All of these explanations imply
that the decision maker would prefer to obtain more information or choose
the option about which they are best informed, all other things being equal
(Baron & Frisch 1994 [1] p. 280).
If that is all there is to explaining ambiguity aversion, then the particular
form the ambiguity takes or its underlying cause should not matter. But
perhaps it does. Studies underway by Smithson and his colleagues (Smithson,
in preparation [42]) strongly support the hypotheses
that people prefer consensual but ambiguous assessments to disagreeing
but precise ones, and that they regard agreeing but ambiguous sources as
more credible than disagreeing but precise ones. Thus, people prefer ambiguity
to conflict even when the one is informationally equivalent to the other,
possibly because they associate consensus among information sources with
source knowledgeability and credibility; i.e., conflict aversion.
Conflict aversion, attributions of competence, and concerns about accountability
or justification all point to the importance of taking into account the
social context in which risk information is provided.
Despite arguments presented by writers who favor a preference-based
account of ambiguity aversion (e.g., Winkler 1991 [47]),
researchers have also produced evidence that ambiguity influences perceptions
of likelihood or risk. Einhorn & Hogarth (1985 [14]),
provide one of the most convincing demonstrations of these effects and
propose a model to describe them. Briefly, they claim that when people
are given a probability that they believe is ambiguous, they use it as
an anchor and then adjust it upwards or downwards depending on whether
they are pessimistic or optimistic about the likelihood of the event concerned.
Moreover, the net adjustment depends on the magnitude of the anchoring
estimate; low probabilities tend to be adjusted upwards and high ones downwards.
This model and others are surveyed in the section on Empirical Studies
of Ambiguous Probability Models. The main implication arising from such
research is that a utility- or preference-based explanation of ambiguity
aversion is necessarily incomplete, which then opens up a debate over whether
ambiguity aversion is 'rational' or not.
3. Research related to sample space ignorance
As suggested in the Introduction, there has been little research directly
on what people do under sample space ignorance. A group of studies and
a framework that address partial sample space ignorance to some extent
includes research on the "Catch-All Underestimation Bias" (CAUB), first
studied by Fischhoff, Slovic, & Lichtenstein (1978 [18]),
and a descriptive framework called Support Theory, recently proposed by
Tversky & Koehler (1994 [44]).
Fischhoff et al. conducted experiments concerning people's assignments
of probabilities to possible causes of a given outcome (e.g., an automobile
that will not start), and found that those possible causes that were explicitly
listed received higher probabilities than those that were implicitly incorporated
into a "CatchAll" category of additional causes. At least three explanations
have been proposed for this effect:

Unlisted causes are not as available to a person's mental representation
of the situation, and therefore not rated as highly likely;

People may perceive ambiguity in a list that is incomplete and mentally
redefine some of the items on the list by adding other unlisted causes
to them (Hirt & Castellan 1988 [26]); and

A list that is incomplete may be perceived as lacking credibility, so people
inflate the probabilities of the explicitly listed causes (Dube-Rioux &
Russo 1988 [13]).
Russo & Kozlow (1994 [37]) conducted further
studies and found the most evidence for the unavailability explanation
and the least for the credibility explanation. However, Bonini & Caverni
(1995 [5]) provided evidence from the literature and
their own experiments that casts doubt on all three explanations. For instance,
they found that making the unlisted causes more available to people did
not decrease the CAUB.
In a similar vein, Support Theory (Tversky & Koehler 1994 [44]
and Rottenstreich & Tversky 1997 [36]) is a
framework that begins with the claim that people do not follow the extensional
logic of conventional probability theory. Instead, unpacking a compound
event into disjoint components tends to increase the perceived likelihood
of that event. An immediate implication is that unpacking an hypothesis
and/or repacking its complement will increase the judged likelihood of
that hypothesis. Moreover, while the sum of the subjective probabilities
of an hypothesis and its complement might sum to 1, finergrained partitions
of either will result in 'probabilities' whose sum exceeds 1. Support Theory
revives the Keynesian (1921 [31]) distinction between
the balance of evidence favoring a given proposition and the weight
or strength of evidence for that proposition.
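The subadditivity claim can be illustrated with Support Theory's basic formula, in which the judged probability of a focal hypothesis against an alternative is the focal support divided by total support. This is a sketch only; the support values below are invented for illustration:

```python
# Support Theory (Tversky & Koehler 1994): the judged probability of
# hypothesis A against alternative B is s(A) / (s(A) + s(B)), where s()
# is the "support" a description recruits. Values below are illustrative.
def judged_probability(s_focal, s_alternative):
    return s_focal / (s_focal + s_alternative)

# Packed hypothesis vs. its complement (e.g., one cause vs. all others).
s_packed, s_complement = 6.0, 4.0
p_packed = judged_probability(s_packed, s_complement)

# Unpacking the hypothesis into named components typically recruits more
# total support than the packed description (6.0 -> 8.0 here).
component_supports = (3.0, 3.0, 2.0)

# Judging each component against the same complement, the fine-grained
# 'probabilities' sum to more than the packed judgment.
p_components = [judged_probability(s, s_complement)
                for s in component_supports]
assert sum(p_components) > p_packed

# A hypothesis and its complement, both packed, still sum to 1.
assert abs(p_packed + judged_probability(s_complement, s_packed) - 1.0) < 1e-9
```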
Both the CAUB and Support Theory are important because they suggest
that people are sensitive to the sample space in which events are embedded.
If true, then people may violate both the Embedding and Representation
Invariance Principles. Moreover, it is possible that they could exhibit
a form of "ignorance aversion" akin to ambiguity aversion.
Clearly what is needed here are direct tests involving sample space
ignorance rather than merely ambiguous probabilities in situations where
the possible outcomes are fully specified. Smithson et al. (in preparation
[43]) are currently conducting studies to perform
such tests. Preliminary findings point toward the following assertions:

Most people exhibit "ignorance aversion" when choosing between bets involving
only partial ignorance with vacuous probabilities and bets involving sample
space ignorance. That is, they prefer partial ignorance when betting on
an event that has been named as a possible outcome, and prefer total ignorance
when betting on an event other than one that has been named as a possible
outcome.

However, many people endorse Walley's Vacuous Priors Principle (i.e., they
give a lower probability of 0 and an upper probability of 1 to any event
in the absence of any prior information).

Likewise, many people give 0 as a lower probability for an unobserved event
after having seen other events occur. Conversely, almost no one gives a
lower probability of 0 for an event that has already been observed.

Nevertheless, they do not adhere to the Representation Invariance Principle
in their upper probability assignments, giving greater upper probabilities
to "any new" event than to a specific named event.

Most people also violate the Representation Invariance Principle by rating
more 'plausible' unobserved events as more likely to occur than less 'plausible'
ones.

The nearest result to a CAUB effect is a tendency for the difference between
people's upper and lower probabilities for an unobserved event to be less
than that for an observed event. In other words, people tend to be less
imprecise about unobserved event probabilities, which suggests that they
may underestimate their likelihood.
4. Empirical studies of ambiguous probability models
Camerer & Weber (1992 [7]) provide a thorough
review of ambiguous probability models, so this section merely summarizes
their review and adds a few remarks pertaining to imprecise probabilities
and sample space ignorance. Camerer & Weber group these models into
three classes:

Models assuming a single second-order probability (SOP) distribution, which
effectively treat possible probabilities the way possible outcomes are
treated in SEU;

Models assuming sets of probabilities but not a unique SOP over these sets,
which then model preferences in terms of considerations based on all or
some of the possible probability distributions in the set; and

Models based on nonlinear weighting functions of unique probabilities or
nonadditive probabilities, in which the weighting function expresses ambiguity
aversion or preference.
The SOP models (e.g., Kahn & Sarin 1988 [30],
Becker & Sarin 1990 [2]) usually assume a well-specified
second-order distribution but permit nonlinear weighting functions and
relax a few other SEU assumptions. One of the most interesting examples
of this approach is one based on Rank-Dependent Expected Utility theory
(RDEU: cf. Quiggin 1993 [35]). Unlike earlier probability
weighting schemes, this one works with a (de)cumulative distribution function
(CDF) whose ordering of outcomes is determined by the decision maker's
preferential rank-ordering of them. Perhaps the most interesting prediction
by RDEU is that people overweight extremely good and bad events (i.e.,
the tails of the CDF), which contrasts with models such as Einhorn &
Hogarth's (1985 [14]), whose weightings are a function
of the magnitude of the anchoring probability rather than the utility of
the outcome. An intriguing corollary is that under sample space ignorance
people may give higher upper probabilities to an extremal unobserved event
than a nonextremal one. Studies are currently underway to investigate this
prediction.
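The tail-overweighting prediction follows from how RDEU derives decision weights from a weighting function applied to the decumulative distribution. A minimal sketch, assuming an inverse-S weighting function (one commonly used form, not the only possibility; function names are illustrative):

```python
# Rank-Dependent Expected Utility sketch. Outcomes are sorted from worst
# to best; decision weights come from differences of a weighting function
# w applied to decumulative probabilities, so a weight depends on an
# outcome's rank. An inverse-S shaped w overweights both tails.
def w_inverse_s(p, gamma=0.6):
    """Inverse-S probability weighting: overweights small decumulative probs."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rdeu(outcomes, probs, w=w_inverse_s, utility=lambda x: x):
    """RDEU value: sum over outcomes (worst to best) of rank-dependent
    decision weights times utility."""
    pairs = sorted(zip(outcomes, probs), key=lambda op: utility(op[0]))
    value = 0.0
    decum = 1.0  # probability of getting this outcome or better
    for x, p in pairs:
        decum_next = decum - p
        value += (w(decum) - w(decum_next)) * utility(x)
        decum = decum_next
    return value

# With the identity weighting, RDEU reduces to ordinary expected utility.
outcomes, probs = [0, 50, 100], [0.25, 0.5, 0.25]
assert abs(rdeu(outcomes, probs, w=lambda p: p) - 50.0) < 1e-9
# The inverse-S form overweights the best outcome (a tail of the CDF).
assert w_inverse_s(0.25) > 0.25
```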
Models based on sets of probabilities date back to Ellsberg (1961 [16])
and researchers in this group tend to focus on whether people use some
weighted averaging or minimax type of rule (e.g., Gilboa & Schmeidler
1989 [23]) for assessing the utility of each
alternative. These models are the most obviously compatible, in formal
terms, with imprecise probability theory. A common objection by researchers
to both this and the SOP approach is that they replace an unrealistic precision
about probability with unrealistic precision about probability bounds.
However, as Camerer & Weber ([7] p. 346) suggest,
people may not be uncomfortable giving precise bounds; and current studies
by Smithson et al. (in preparation [43]) indicate
that people are most comfortable and best calibrated when giving lower
and upper bounds and a 'best guess'.
The third group of models, namely those using nonadditive probabilities,
includes the most popular attempts to account for the apparent effects
of ambiguities on risk perception. Many of these are formal models that
are compatible with imprecise probability theory because they employ
special cases of imprecise probabilities (e.g., Gilboa 1989 [22]).
Others are more descriptively oriented models that have undergone some
empirical tests. A simple example is Einhorn and Hogarth's (1985 [14])
model. They begin with an anchoring probability P and then introduce
two parameters, q and b, in the following roles:
k_g = q(1 - P) and
k_s = qP^b,
where 0 < q < 1 and b > 0. According to the EH model, people adjust P
upwards by k_g and downward by k_s. So, a person's subjective probability
P_s is
P_s = P + k_g - k_s = P + q(1 - P - P^b).
Einhorn and Hogarth then estimate q and b from sample judgments of P_s,
when P is known.
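The model's anchoring-and-adjustment behavior is easy to compute directly. A minimal sketch of the P_s formula as given above (parameter values are illustrative):

```python
# The EH anchoring-and-adjustment model from the text:
# P_s = P + k_g - k_s = P + q * (1 - P - P**b),
# where q (0 < q < 1) is the perceived ambiguity around the anchor P and
# b (> 0) governs pessimism (b < 1) versus optimism (b > 1) about a reward.
def eh_subjective_probability(P, q, b):
    k_g = q * (1 - P)      # upward adjustment
    k_s = q * P**b         # downward adjustment
    return P + k_g - k_s

# As the text notes, low anchors tend to be adjusted upwards and high
# anchors downwards (illustrated here with b = 1, q = 0.3).
assert eh_subjective_probability(0.1, 0.3, 1.0) > 0.1
assert eh_subjective_probability(0.9, 0.3, 1.0) < 0.9
```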
One may recast the EH model in terms of upper and lower probabilities
if P is known and there are sufficient data to estimate q and b. Let P_-
and P^+ represent lower and upper probabilities, respectively. Put
P_- = P - qP,
P^+ = P + q(1 - P), and
P_s = λP_- + (1 - λ)P^+.
If we set λ = P^b, then we have the EH model. Of course, λ could have
other functional forms.
The q parameter is the degree of ambiguity
or latitude around P attributed by the person. b,
on the other hand, reflects the relative weighting of values smaller than
or larger than P. So it may be interpreted as a pessimism/optimism parameter.
If P is the chance of getting a reward, then b
< 1 would indicate pessimism, and b > 1 optimism.
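The claimed equivalence of the two formulations can be verified numerically: the convex combination of the bounds with λ = P^b reproduces the original P_s formula for any admissible parameters. A minimal sketch (function names are illustrative):

```python
# Check that the recast form matches the original EH model: with
# P_lo = P - q*P and P_hi = P + q*(1 - P), the convex combination
# lam*P_lo + (1 - lam)*P_hi with lam = P**b reproduces
# P_s = P + q*(1 - P - P**b) exactly.
def eh_original(P, q, b):
    return P + q * (1 - P - P**b)

def eh_recast(P, q, b):
    P_lo = P - q * P
    P_hi = P + q * (1 - P)
    lam = P**b                  # one choice of weighting; others possible
    return lam * P_lo + (1 - lam) * P_hi

for P in (0.1, 0.3, 0.5, 0.7, 0.9):
    for q in (0.1, 0.5, 0.9):
        for b in (0.25, 1.0, 4.0):
            assert abs(eh_original(P, q, b) - eh_recast(P, q, b)) < 1e-12
```

The identity follows because P_hi - P_lo = q regardless of P, so the combination equals P_hi - λq = P + q(1 - P - P^b).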
Fobian & ChristensenSzalanski (1994 [19])
applied the EH model to studying the effects of ambiguity on the likelihood
of a negotiated settlement to a liability dispute. They found that reducing
ambiguity actually decreased the likelihood of a negotiated settlement,
and that which party had greater ambiguity affected the likelihood of settlement
conditional on the perceived probability of a victory by the plaintiff.
Walley's (1996 [46]) Imprecise Dirichlet (ID)
model may serve as a normative benchmark in research of this kind, and
it provides a good case in point because of its simplicity and its ease
of comparison with human performance. There is a direct connection between
the EH and ID models. Given N observations of which n are occurrences of
the outcome concerned, if we set q = s/(N+s) for s > 0 and estimate P by
putting P = n/N, then P_s is a convex combination of Walley's lower and
upper probabilities. So under these conditions, a sufficient sample of
judgments of P_s enables estimates of s. Figure 1 shows Walleyan lower
and upper probabilities (for N = 7, s = 3) in a graph with 'optimistic'
(b = 4) and 'pessimistic' (b = 0.25) EH models.
Figure 1: EH Models with ID benchmarks
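The EH-ID connection can be checked numerically: substituting q = s/(N+s) and P = n/N into the EH bounds yields exactly the ID model's lower and upper probabilities n/(N+s) and (n+s)/(N+s). A minimal sketch, using the Figure 1 values (function names are illustrative):

```python
# Walley's Imprecise Dirichlet model: after N observations with n
# occurrences, the lower and upper probabilities of the outcome are
# n/(N+s) and (n+s)/(N+s) for a chosen s > 0.
def id_bounds(n, N, s):
    return n / (N + s), (n + s) / (N + s)

# EH bounds around an anchor P with ambiguity parameter q (from the text).
def eh_bounds(P, q):
    return P - q * P, P + q * (1 - P)

# Setting q = s/(N+s) and P = n/N makes the two pairs of bounds coincide.
N, s = 7, 3          # the values used for Figure 1
for n in range(0, N + 1):
    lo_id, hi_id = id_bounds(n, N, s)
    lo_eh, hi_eh = eh_bounds(P=n / N, q=s / (N + s))
    assert abs(lo_id - lo_eh) < 1e-12 and abs(hi_id - hi_eh) < 1e-12
```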
However, researchers are better off obtaining subjective judgments
of P_- and P^+ under the conditions specified above. We may obtain direct
estimates of s, and ascertain whether people update P_- and P^+ as if
they have constant s as N increases (preliminary evidence indicates that
many people may update overly cautiously, so that s increases with N).
We may also assess whether people are optimistic or pessimistic, independent
of their s-values. Setting P = n/N, since P_- = P - qP and P^+ = P + q(1 - P),
we have P_-/(1 - P^+) = P/(1 - P) = n/(N - n), so we may use the odds-like
ratio P_-/(1 - P^+) to assess calibration. Values greater than P/(1 - P)
indicate optimism and lower ones indicate pessimism. Finally, if we obtain
not only P_- and P^+ but also 'best guess' judgments, then we may model
P_s as a weighted combination of P_- and P^+ in the setting of the EH model
or, conversely, P_- and P^+ as a function of P_s and s.
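The calibration identity holds because the q terms cancel: P_- = P(1 - q) and 1 - P^+ = (1 - P)(1 - q), so the ratio is independent of how ambiguous the judge's bounds are. A minimal sketch (function names are illustrative):

```python
# The calibration check from the text: under EH-style bounds with P = n/N,
# the odds-like ratio P_lo / (1 - P_hi) collapses to the sample odds
# n/(N - n) for every value of q. Elicited bounds with a larger ratio
# would suggest optimism; a smaller ratio, pessimism.
def odds_ratio(P_lo, P_hi):
    return P_lo / (1 - P_hi)

N = 10
for n in range(1, N):            # 0 < n < N so both odds are finite
    P = n / N
    for q in (0.1, 0.4, 0.8):    # the ratio does not depend on q
        P_lo = P - q * P
        P_hi = P + q * (1 - P)
        assert abs(odds_ratio(P_lo, P_hi) - n / (N - n)) < 1e-9
```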
Despite the obvious interest from psychology and related disciplines
in empirical investigations and theories of imprecise probabilities, both
research and theory to date have been limited in crucial respects:

Almost no researchers have elicited lower and upper probabilities from
subjects, nor entertained the possibility that these might be differently
affected by factors influencing ambiguity aversion;

Few researchers have distinguished mere ambiguity from ignorance of possible
outcomes, so sample space ignorance effects remain largely unexplored;

Very little attention has been paid to the issue of how people (ought to)
update ambiguous probabilities on the basis of new sample information;
and

SEU and precise probabilities are still taken to be the benchmarks of rationality
by many empirical researchers.
Studies are underway to investigate the issues raised here, and there is
clearly considerable potential for productive dialog between proponents
of normative and/or descriptive frameworks, and empirical researchers.
References

Baron, J. & Frisch, D. (1994). Ambiguous probabilities
and the paradoxes of expected utility. In G. Wright and P. Ayton (Eds.),
Subjective Probability. Chichester, U.K.: Wiley, 273-294.

Becker, J. L. & Sarin, R. K. (1990). Economics of
ambiguity in probability. Working paper, UCLA Graduate School of Management,
May.

Black, M. (1937). Vagueness: an exercise in logical
analysis. Philosophy of Science. 4, 427-455.

Boiney, L. G. (1993). Effects of skewed probability
on decision making under ambiguity. Organizational Behavior and Human
Decision Processes. 56, 134-148.

Bonini, N. & Caverni, J.-P. (1995). The "catch-all
underestimation bias": Availability hypothesis vs. category redefinition
hypothesis. Current Psychology of Cognition. 14, 301-322.

Budner, S. (1962). Intolerance of ambiguity as a personality
variable. Journal of Personality. 30, 29-50.

Camerer, C. & Weber, M. (1992). Recent developments
in modeling preferences: Uncertainty and ambiguity. Journal of Risk
and Uncertainty. 5, 325-370.

Casey, J. T. & Scholz, J. T. (1991). Boundary effects
of vague risk information on taxpayer decisions. Organizational Behavior
and Human Decision Processes. 50, 360-394.

Cohen, L. J. (1981). Can human irrationality be experimentally
demonstrated? Behavioral and Brain Sciences. 4, 317-331.

Cohen, M., Jaffray, J. & Said, T. (1985). Individual
behavior under risk and under uncertainty: An experimental study. Theory
and Decision. 18, 203-228.

Curley, S. P. & Yates, F. J. (1985). The center
and range of the probability interval as factors affecting ambiguity preferences.
Organizational Behavior and Human Decision Processes. 36, 272-287.

Curley, S. P., Yates, F. J. & Abrams, R. A. (1986).
Psychological sources of ambiguity avoidance. Organizational Behavior
and Human Decision Processes. 38, 230-256.

Dube-Rioux, L. & Russo, J. E. (1988). An availability
bias in professional judgment. Journal of Behavioral Decision Making.
1, 223-237.

Einhorn, H. J. & Hogarth, R. M. (1985). Ambiguity
and uncertainty in probabilistic inference. Psychological Review.
92, 433-461.

Einhorn, H. J. & Hogarth, R. M. (1986). Decision
making under ambiguity. Journal of Business. 59, S225-S250.

Ellsberg, D. (1961). Risk, ambiguity, and the Savage
axioms. Quarterly Journal of Economics. 75, 643-669.

Empson, W. (1930, reprinted in 1995). Seven
Types of Ambiguity. London: Penguin.

Fischhoff, B., Slovic, P., & Lichtenstein, S.
(1978). Fault trees: Sensitivity of estimated failure probabilities to
problem representation. Journal of Experimental Psychology: Human Perception
and Performance. 4, 330-344.

Fobian, C. S. & Christensen-Szalanski, J. J. J.
(1994). Settling liability disputes: the effects of asymmetric levels of
ambiguity on negotiations. Organizational Behavior and Human Decision
Processes. 60, 108-138.

Frisch, D. & Baron, J. (1988). Ambiguity and rationality.
Journal of Behavioral Decision Making. 1, 149-157.

Gigerenzer, G. (1994). Why the distinction between
single-event probabilities and frequencies is important for psychology
(and vice versa). In G. Wright and P. Ayton (Eds.), Subjective Probability.
Chichester, U.K.: Wiley, 129-161.

Gilboa, I. (1989). Duality in nonadditive expected
utility theory. Annals of Operations Research. 19, 405-414.

Gilboa, I. & Schmeidler, D. (1989). Maxmin
expected utility with a non-unique prior. Journal of Mathematical Economics.
18, 141-153.

Gonzalez-Vallejo, C., Bonazzi, A. & Shapiro,
A. J. (1996). Effects of vague probabilities and of vague payoffs on preference:
A model comparison analysis. Journal of Mathematical Psychology.
40, 130-140.

Heath, C. & Tversky, A. (1991). Preference
and belief: Ambiguity and competence in choice under uncertainty. Journal
of Risk and Uncertainty. 4, 5-28.

Hirt, E. R. & Castellan, N. J. Jr. (1988). Probability
and category redefinition in the fault tree paradigm. Journal of Experimental
Psychology: Human Perception and Performance. 20, 17-32.

Hogarth, R. M. & Einhorn, H. J. (1990). Venture
theory: A model of decision weights. Management Science. 36, 780-803.

Hogarth, R. M. & Kunreuther, H. (1995). Decision
making under ignorance: arguing with yourself. Journal of Risk and Uncertainty.
10, 15-26.

Jungermann, H. (1983). The two camps on rationality.
In R. W. Scholz (Ed.), Decision Making Under Uncertainty. Amsterdam:
Elsevier.

Kahn, B. E. & Sarin, R. K. (1988). Modelling ambiguity
in decisions under uncertainty. Journal of Consumer Research. 15,
265-272.

Keynes, J. M. (1921). A Treatise on Probability.
London: Macmillan.

Knight, F. H. (1921). Risk, Uncertainty and Profit.
Boston: Houghton Mifflin.

Kuhn, K. M. & Budescu, D. V. (1996). The relative
importance of probabilities, outcomes, and vagueness in hazard risk decisions.
Organizational Behavior and Human Decision Processes. 68, 301-317.

MacCrimmon, K. R. (1968). Descriptive and normative
implications of the decision-theory postulates. In K. Borch and J. Mossin
(Eds.), Risk and Uncertainty. London: Macmillan.

Quiggin, J. (1993). Generalized Expected Utility:
The Rank-Dependent Model. Dordrecht: Kluwer.

Rottenstreich, Y. & Tversky, A. (1997). Unpacking,
repacking, and anchoring: Advances in support theory. Psychological
Review. 104, 406-415.

Russo, J. E. & Kozlow, K. (1994). Where is the
fault in fault trees? Journal of Experimental Psychology: Human Perception
and Performance. 20, 17-32.

Schoemaker, P. J. H. (1991). Choices involving uncertain
probabilities: Tests of generalized utility models. Journal of Economic
Behavior and Organization. 16, 295-317.

Sherman, R. (1974). The psychological difference
between ambiguity and risk. Quarterly Journal of Economics. 88,
166-169.

Slovic, P. & Tversky, A. (1974). Who accepts Savage's
axiom? Behavioral Science. 19, 368-373.

Smithson, M. (1989). Ignorance and Uncertainty:
Emerging Paradigms. New York: Springer-Verlag.

Smithson, M. (in preparation). Conflict Aversion.
Working paper, Division of Psychology, The Australian National University.

Smithson, M., Takemura, K. & Bartos, T. (in preparation).
Judgment under Outcome Ignorance. Working paper, Division of Psychology,
The Australian National University.

Tversky, A. & Koehler, D. J. (1994). Support theory:
a nonextensional representation of subjective probability. Psychological
Review. 101, 547-567.

Walley, P. (1991). Statistical Reasoning with Imprecise
Probabilities. London: Chapman and Hall.

Walley, P. (1996). Inferences from multinomial data:
Learning about a bag of marbles (with discussion). Journal of the Royal
Statistical Society, Series B. 58, 3-57.

Winkler, R. L. (1991). Ambiguity, probability, preference,
and decision analysis. Journal of Risk and Uncertainty. 4, 285-297.
Copyright © 1997 by Michael J. Smithson and the Imprecise
Probabilities Project.