Improb: A Python library for lower previsions and decision making
Posted on April 14, 2014 by Matthias C.M. Troffaes
History
Improb started in 2008 as a fairly small library for solving simple toy examples involving natural extension. The idea was to support exact rational calculations for lower previsions, by means of Komei Fukuda’s linear programming package cddlib. The very first incarnation of the library simply supported calculating the (unconditional) natural extension from any finite collection of assessments, and checking for avoiding sure loss and coherence.
For about two years, not much happened with the code, until 2010.
At that time, Nathan Huntley and I were studying sequential decision problems, and we needed some way of testing our algorithms, so the library grew support for common decision rules such as maximality, Γ-maximin, and interval dominance. It provided us with a good platform for testing all kinds of properties, and for finding counterexamples with relative ease.
In that same year, the SIPTA summer school was to be held in Durham. I was looking for a way to demonstrate some simple decision rules on toy examples in my lecture, in order to teach the students how to apply the principles of decision making with imprecise probabilities to toy examples of their own, and perhaps even in their own research. It seemed obvious that improb could suit this purpose; however, the library was not really designed with “user friendliness” in mind. So, I spent about a month revamping the library, and in the end I did manage to use it for the school. In fact, the first public version of the library, 0.1.0, was released just before the summer school.
Much to my pleasure, some students actually managed to use it. Most notably, Erik Quaeghebeur provided valuable feedback, and even contributed to the project during the course of 2011, leading to the 0.1.1 release.
The Future of Improb?
Embarrassingly, the 0.1.1 release in 2011, almost three years ago, was also the last public release. In retrospect, there are perhaps three reasons for this.
First, although the code has been relatively untouched in the last two years, it is actually quite stable and appears to be free of bugs: it has been tested and worked with for a long time. (Therefore, it may still provide a good starting point for anyone wanting to play around with lower previsions and decision making.)
A second reason goes back to 2011, when Erik started murasyp, which has a much cleverer way of representing certain mathematical objects internally, and which is also more general in aim: murasyp allows arbitrary sets of desirable gambles, not just lower previsions.
Thirdly, working with multivariate spaces in improb is a bit of a pain. Extending the library to support, say, imprecise Bayesian networks, and more general multivariate statistical reasoning, is somewhat nontrivial.
The latter two concerns have led me to believe that I need to rethink the design of improb. Some work has happened in the development branch, in particular to find a good programming interface for multivariate work; however, these efforts are still not entirely satisfactory. Nevertheless, with the SIPTA summer school coming up again this summer in Montpellier, there will be renewed reason to spend some quality time on improb.
Anyway, given improb’s relative maturity, and its suitability for solving toy problems quickly and with an easy interface, an introductory tutorial about improb seems more than worth it, hence this blog post.
Installation
First, you will want to install the library. On Linux, assuming that you have Python 2.7 and the GMP development libraries (GMP is used for exact calculations with rationals), you can simply type the following from your favorite shell:
pip install pycddlib --user
pip install improb --user
(The --user option causes the installation to happen locally, without requiring administrator rights.) On Windows, you will need Python 2.7 as well as the following packages from PyPI (pick the 2.7 .exe installers): see https://pypi.python.org/pypi/pycddlib/ and https://pypi.python.org/pypi/improb/.
Basics
Let us get started and specify a simple lower prevision. From the Python prompt, type
from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=4)
Now, lpr
will be an (initially vacuous) lower prevision defined on a possibility space with four elements, namely Ω = {0, 1, 2, 3}. The LowPoly
class is the most general class in improb for lower previsions: it can represent any conditional lower prevision with finite domain. As just mentioned, initially, lpr
is vacuous. Let us verify that this is indeed the case:
x = [1, 4, 3, 2]
lpr.get_lower(x)
lpr.get_upper(x)
This code will calculate the lower and upper prevision of the gamble X defined by X(0) = 1, X(1) = 4, X(2) = 3, X(3) = 2, via natural extension of the provided assessments. Note that we denoted the gamble X by a lower case x
in Python; this is merely to follow [Python coding standards](http://legacy.python.org/dev/peps/pep-0008/). Because we have not specified any assessments, lpr
is vacuous, and will return 1 and 4 for the lower and upper prevision of X. As you can see, we simply used a list to specify the gamble: the first element of the list is the value of the gamble for the first element of the possibility space, and so on.^{1}
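As a sanity check, the vacuous bounds can be computed without improb at all: with no assessments, the natural extension of a gamble is simply its pointwise minimum and maximum over the possibility space. A pure-Python sketch:

```python
# With no assessments, the lower prevision of a gamble is its minimum
# over the possibility space, and the upper prevision is its maximum.
x = [1, 4, 3, 2]
lower, upper = min(x), max(x)
print(lower, upper)  # 1 4
```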
Natural Extension
Let us do something more interesting. Let Y(0) = Y(1) = 1 and Y(2) = Y(3) = 5. Suppose we know that the expectation of Y must lie between 2 and 3. How does this affect the bounds?
x = [1, 4, 3, 2]
y = [1, 1, 5, 5]
lpr.set_lower(y, 2)
lpr.set_upper(y, 3)
lpr.get_lower(x)
lpr.get_upper(x)
We now get 1.25 and 3.75 instead of 1 and 4 for the lower and upper prevision of X: our bounds on the expectation of Y have tightened our bounds on the expectation of X.
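Under the hood, natural extension is a linear program: minimise (or maximise) the expectation of X over all probability mass functions on Ω that satisfy the assessments on Y. The same numbers can be reproduced with a generic LP solver; the sketch below uses scipy (not a dependency of improb, and floating-point rather than exact rational arithmetic) purely for illustration:

```python
from scipy.optimize import linprog

x = [1, 4, 3, 2]  # gamble X
y = [1, 1, 5, 5]  # gamble Y, assessed with 2 <= E[Y] <= 3

# Decision variables: probability masses p0..p3 (nonnegative by default).
A_eq, b_eq = [[1, 1, 1, 1]], [1]  # masses sum to one
A_ub = [[-v for v in y], y]       # -E[Y] <= -2  and  E[Y] <= 3
b_ub = [-2, 3]

# Lower prevision: minimise E[X]; upper prevision: maximise E[X].
lower = linprog(c=x, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
upper = -linprog(c=[-v for v in x], A_ub=A_ub, b_ub=b_ub,
                 A_eq=A_eq, b_eq=b_eq).fun
print(lower, upper)  # approximately 1.25 and 3.75
```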
These kinds of calculations work with any number of gambles. Improb will also correct incoherent assessments, as expected from natural extension; for instance,
from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=3)
x = [0, 2, 3]
y = [0, 4, 6]
lpr.set_lower(x, 1)
lpr.set_lower(y, 3)
lpr.get_lower(x)
Because Y = 2X, the lower prevision of Y must be twice the lower prevision of X; so lpr.get_lower(x)
will return 1.5 rather than 1, reflecting the information embodied by the lower bound on the expectation of Y.
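The correction can be verified by hand: since Y = 2X pointwise, every expectation satisfies E[Y] = 2E[X], so the assessment E[Y] >= 3 forces E[X] >= 1.5, which overrides the weaker direct assessment E[X] >= 1. A quick check:

```python
x = [0, 2, 3]
y = [0, 4, 6]
# Y = 2X pointwise, hence E[Y] = 2 E[X] under every probability mass
# function; natural extension keeps the tightest implied bound.
assert all(yi == 2 * xi for xi, yi in zip(x, y))
corrected_lower = max(1.0, 3.0 / 2.0)  # direct bound vs. bound implied by Y
print(corrected_lower)  # 1.5
```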
In case of conflict, as in
from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=3)
x = [0, 2, 3]
y = [0, 4, 6]
lpr.set_upper(x, 1)
lpr.set_lower(y, 3)
lpr.get_lower(x)
then improb will raise a ValueError
. The conflict above arises from the fact that if the expectation of X is bounded above by 1 then the expectation of Y = 2X must be bounded above by 2: the lower bound of 3 can never be satisfied. Improb simply detected this situation for us, saving us from shooting ourselves in the foot.
There is much more to improb’s lower previsions, too much to cover in a short blog post. Suffice it to note that we can also work with conditional lower previsions (using the Walley–Pelessoni–Vicig algorithm), as well as alternative algorithms for calculating the natural extension in specific cases, for example via Möbius inversion, Choquet integration, ε-contamination, or even plain linear expectation. The documentation for improb.lowprev
has all the details.
Decision Making
As mentioned in the introduction of this post, improb has been used quite extensively in the study of decision making with imprecise probabilities. Consequently, improb has reasonably good support for the standard decision rules that are used in imprecise probability, namely Γ-maximin, Γ-maximax, Hurwicz, interval dominance, and maximality.^{2}
How does this work in practice? We start with specifying our lower prevision.
from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=3)
x = [1, 2, 3]
lpr.set_lower(x, 1.5)
lpr.set_upper(x, 2.5)
From our lower prevision, we create an optimality operator.
from improb.decision.opt import OptLowPrevMax
opt = OptLowPrevMax(lpr)
OptLowPrevMax
is the class for creating optimality operators according to maximality. There are similar classes for other optimality operators, and again, we refer to the documentation for details.
We can readily apply our optimality operator opt
to any set of gambles. For example,
gambles = [[1, 2, 3], [1.5, 1.4, 1.3]]
list(opt(gambles))
will report that the gamble [1, 2, 3]
is optimal, i.e. it dominates [1.5, 1.4, 1.3]
for the given lower prevision lpr
.
The object opt
that we created behaves like a function: it takes any list of gambles, and returns an iterator that iterates over all optimal gambles; for the sake of example we also call list
so the result gets printed to the console immediately.
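For concreteness, maximality can also be sketched from first principles: a gamble f is maximal in a set if no alternative g in the set has a strictly positive lower prevision of the difference g - f. The sketch below is not improb's own implementation; it rebuilds the credal set of the example above as a linear program using scipy, purely to illustrate the rule:

```python
from scipy.optimize import linprog

# Credal set for the example: probability mass functions p on {0, 1, 2}
# with 1.5 <= E[X] <= 2.5 for X = [1, 2, 3].
x = [1, 2, 3]
A_eq, b_eq = [[1, 1, 1]], [1]   # masses sum to one
A_ub = [[-v for v in x], x]     # -E[X] <= -1.5  and  E[X] <= 2.5
b_ub = [-1.5, 2.5]

def lower_prev(f):
    """Lower prevision of gamble f: minimise E[f] over the credal set."""
    return linprog(c=f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun

def maximal(gambles):
    """Keep f if no g in the set has lower_prev(g - f) > 0."""
    return [f for f in gambles
            if not any(lower_prev([gi - fi for gi, fi in zip(g, f)]) > 1e-9
                       for g in gambles)]

print(maximal([[1, 2, 3], [1.5, 1.4, 1.3]]))  # [[1, 2, 3]]
```

Here lower_prev([gi - fi ...]) for f = [1.5, 1.4, 1.3] and g = [1, 2, 3] comes out positive (0.05), so only [1, 2, 3] survives, matching what opt reports.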
Again, the above is just an initial taste, and there is much more to this in improb. Particularly worth mentioning is full support for decision trees and for solving sequential decision problems.
Concluding Remarks
I hope you find this interesting and useful! If you have any questions about improb, or if you run into any general technical issues, feel free to post a ticket on improb’s issue tracker.
About the author
Matthias Troffaes is senior lecturer in statistics at the Department of Mathematical Sciences, Durham University, UK. His research interests include the foundations of statistics and decision making under severe uncertainty, with applications to engineering, renewable energy, and environment.
Note from editor: the SIPTA software page provides a list of software for imprecise probabilities, including Improb.

In simple examples such as this one, this system works very effectively. For more complex possibility spaces, it is better to specify a gamble using a dictionary; see the documentation. ↩︎

Notably missing from this list are E-admissibility and info-gap. ↩︎