Introduction to IP models (Inés Couso, Enrique Miranda)
We motivate the need for imprecise probability models and introduce the most important models from the literature:
coherent lower probabilities/previsions, 2-monotone capacities, belief functions, possibility/necessity measures,
and probability boxes. In addition, we discuss the representation of sets of desirable gambles and the connection
with preference relations, as well as the extension to larger domains. Finally, we review the most important concepts
of independence that can be used with imprecise probability models.
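As a concrete illustration of the first of these models, the following Python sketch (a made-up example: a three-element possibility space with three hypothetical extreme points) computes the lower and upper probabilities induced by a small credal set and checks conjugacy and the 2-monotonicity inequality.

```python
from itertools import combinations

# Hypothetical credal set on the space {0, 1, 2}, given by its extreme points.
CREDAL = [(0.5, 0.3, 0.2), (0.2, 0.5, 0.3), (0.3, 0.2, 0.5)]
OMEGA = frozenset({0, 1, 2})

def lower(A):
    """Lower probability of event A: minimum of P(A) over the credal set."""
    return min(sum(p[i] for i in A) for p in CREDAL)

def upper(A):
    """Upper probability of event A: maximum of P(A) over the credal set."""
    return max(sum(p[i] for i in A) for p in CREDAL)

events = [frozenset(s) for r in range(4) for s in combinations(OMEGA, r)]

# Conjugacy: upper(A) = 1 - lower(complement of A).
assert all(abs(upper(A) - (1 - lower(OMEGA - A))) < 1e-9 for A in events)

# 2-monotonicity: lower(A | B) + lower(A & B) >= lower(A) + lower(B).
assert all(lower(A | B) + lower(A & B) >= lower(A) + lower(B) - 1e-9
           for A in events for B in events)
```

For this particular credal set the lower probability happens to be 2-monotone; lower envelopes of arbitrary credal sets need not be, which is one reason the course treats 2-monotone capacities as a model in their own right.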
Engineering (Scott Ferson)
We are at a crossroads in our scientific appreciation of uncertainty. The traditional view is that there is only one
kind of uncertainty and that probability theory is its calculus. This view leads in practice to quantitative results
that are often misconstrued and demonstrably misleading. An emerging alternative view, however, entails a richer
mathematical concept of uncertainty and a broader framework for uncertainty analysis. The concept admits a kind of
uncertainty that is not handled by traditional Laplacian probability measures. The “engineering” day will discuss
this non-Laplacian view that different kinds of uncertainty must be propagated differently through simulations,
reliability and risk analyses, calculations for robust design, and other computations. The modern approach makes
practical solutions easier for engineering and physics-based models, and the inferences drawn from such models under
this view are more defensible. Topics include:
- aleatory versus epistemic uncertainty (variability v. incertitude);
- probability boxes to characterise imprecise random numbers;
- integrating available ancillary knowledge to improve estimates;
- confidence structures generalising Walley’s Imprecise Beta Model;
- why bounding probabilities is not always sufficient;
- handling dependence among input variables;
- sensitivity analysis;
- engineering design via backcalculation;
- spacecraft mission analysis and early design; and
- satellite conjunction analysis.
The selection of topics will depend on the available time and the interests of participants. We will use a convenient implementation in R of interval arithmetic
and probability bounds analysis to illustrate several numerical examples.
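For readers without the course's R software at hand, here is a minimal, language-agnostic sketch (in Python) of the interval arithmetic underlying probability bounds analysis; the `Interval` class and the example quantities are illustrative assumptions, not the course implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed real interval [lo, hi] representing incertitude about a value."""
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Product: take the extremes over all four endpoint combinations.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

# Made-up example: two imprecisely known inputs.
load = Interval(1.0, 2.0)
factor = Interval(-1.0, 3.0)
print(load + factor)   # Interval(lo=0.0, hi=5.0)
print(load * factor)   # Interval(lo=-2.0, hi=6.0)
```

Note that repeated occurrences of the same variable make naive interval evaluation over-conservative (the dependency problem), one motivation for the more refined propagation methods covered in the course.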
Decision making (Matthias Troffaes)
Since Abraham Wald, decisions have been at the center of classical statistical inference. As we will see, they also play
a central role in the interpretation of imprecise probability, and in the practice of inference under uncertainty when
using imprecise probabilities. In this course,
- we investigate the decision-theoretic foundations of imprecise probability and their link to standard Bayesian decision theory;
- we critically review some fundamental problems encountered when applying imprecise probability theory to decision making;
- we discuss the most popular decision criteria, and why you might use them or not;
- we briefly discuss simple imprecise simulation methods for applying these decision criteria (and more general inference problems); and finally,
- we demonstrate some algorithms that can be used to solve decision problems with imprecise probability in practice.
To achieve these goals, we will start with a simple example to highlight the typical issues one encounters when trying
to make decisions under severe uncertainty. In earlier lectures, you will have seen how lower and upper expectations
are useful as models for severe uncertainty, applicable for instance when only partial probability specifications are
available, and when we are worried about the consequences of implicit assumptions not reflected by data or expert opinion.
We explore how such lower and upper expectations naturally arise in decision theory, simply by allowing for indecision, or
incomparability between options. Next, we discuss a few specific decision criteria, again using our simple example as a
starting point: Gamma-maximin, interval dominance, maximality, and E-admissibility. For each criterion, we explore why you
might use it, how you can calculate it, and how algorithms and techniques seen in earlier lectures can be exploited most
effectively for the purpose of decision making. The focus of exercises will be on solving actual decision problems. For
some of the more advanced exercises, simulation will be used.
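As a preview, with entirely made-up numbers, the Python sketch below evaluates the four criteria named above on a toy problem: two states, a credal set consisting of all probability vectors with p(s1) in [0.3, 0.7], and three acts. Note how Gamma-maximin selects the "safe" constant act even though that act is not E-admissible.

```python
import numpy as np

# Toy example (made-up numbers): acts pay off in states s1 and s2.
acts = {"f": np.array([1.0, 0.0]),   # pays 1 in s1, 0 in s2
        "g": np.array([0.0, 1.0]),   # pays 0 in s1, 1 in s2
        "h": np.array([0.4, 0.4])}   # constant, "safe" payoff

# A grid over the one-parameter credal set p(s1) in [0.3, 0.7]; the endpoints
# suffice for the linear lower/upper expectations, but checking
# E-admissibility also needs interior points.
ps = [np.array([p1, 1.0 - p1]) for p1 in np.linspace(0.3, 0.7, 41)]

lower = {a: min(p @ x for p in ps) for a, x in acts.items()}
upper = {a: max(p @ x for p in ps) for a, x in acts.items()}

# Gamma-maximin: maximise the lower expectation.
gamma_maximin = max(lower, key=lower.get)

# Interval dominance: discard a when some b has lower[b] > upper[a].
id_set = sorted(a for a in acts
                if not any(lower[b] > upper[a] for b in acts if b != a))

# Maximality: discard a when some b has lower expectation of (b - a) > 0.
maximal = sorted(a for a in acts
                 if not any(min(p @ (acts[b] - acts[a]) for p in ps) > 0
                            for b in acts if b != a))

# E-admissibility (grid approximation): keep a if it maximises expected
# payoff for at least one probability in the credal set.
e_admissible = sorted({max(acts, key=lambda a: p @ acts[a]) for p in ps})

print(gamma_maximin, id_set, maximal, e_admissible)
# h is Gamma-maximin; all three acts survive interval dominance and
# maximality; only f and g are E-admissible.
```

The contrast is instructive: the criteria agree on precise models but can recommend genuinely different option sets under imprecision, which is exactly the tension the course examines.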
Machine learning (Cassio de Campos)
Machine learning methods have become an essential tool in many areas of theoretical
and applied research, as well as in numerous industrial applications. Probabilistic
models are among the most used machine learning techniques and have shown impressive
accuracy in tasks such as classification, regression and clustering. Such models
often employ sharp estimates of parameters in order to represent a probability distribution,
even if the precise elicitation might be unreliable because of limited expert knowledge,
or observations might be scarce, incomplete and/or noisy. Instead, one may resort to
sets of probability distributions to represent the available knowledge, adopting the theory of
imprecise probability. During this tutorial, we introduce the main techniques that are
designed to cope with uncertainty in a principled robust manner and to yield reliable
results for a collection of machine learning approaches ranging from robust statistical
testing to credal network classifiers and ideas for robust learning of deep models. We
discuss robust machine learning and sensitivity analysis, and ideas that can be used
to improve classification accuracy in many domains. We will explore different tasks in
astrophysics, biomedicine, image analysis and pattern recognition.
Belief functions (Sebastien Destercke)
In this lecture, we will present the most common elements of evidence theory (a.k.a. Dempster-Shafer theory),
whose basic building blocks are belief functions. After presenting those basic building blocks, we will focus
on different information treatment problems, from information fusion and conditioning to independence modelling.
We will take special care to connect the presented tools to those already introduced during the school,
pointing out their similarities and differences.
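As a taste of the information-fusion part of the lecture, here is a minimal Python sketch of Dempster's rule of combination and the belief and plausibility functions it induces; the two mass functions over the frame {a, b} are made-up examples.

```python
def dempster(m1, m2):
    """Combine two mass functions (dicts frozenset -> mass) by Dempster's rule."""
    combined = {}
    for A, wa in m1.items():
        for B, wb in m2.items():
            C = A & B
            if C:  # only non-empty intersections; empty ones form the conflict
                combined[C] = combined.get(C, 0.0) + wa * wb
    mass = sum(combined.values())  # = 1 - conflict; renormalise by it
    return {C: w / mass for C, w in combined.items()}

def bel(m, A):
    """Belief of A: total mass of focal sets included in A."""
    return sum(w for B, w in m.items() if B <= A)

def pl(m, A):
    """Plausibility of A: total mass of focal sets intersecting A."""
    return sum(w for B, w in m.items() if B & A)

# Two illustrative (made-up) sources of evidence over the frame {a, b}.
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
m12 = dempster(m1, m2)  # the conflict here is 0.6 * 0.5 = 0.3
```

After combination the focal sets {a}, {b} and {a, b} receive masses 3/7, 2/7 and 2/7 respectively, so bel({a}) = 3/7 and pl({a}) = 5/7, a gap that mirrors the lower/upper probability pairs seen earlier in the school.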