Improb: A Python library for lower previsions and decision making, by Matthias C.M. Troffaes


Improb started in 2008 as a fairly small library for solving simple toy examples involving natural extension. The idea was to support exact rational calculations for lower previsions, by means of Komei Fukuda’s linear programming package cddlib. The very first incarnation of the library simply supported calculating the (unconditional) natural extension from any finite collection of assessments, and checking for avoiding sure loss and coherence.

For about two years, until 2010, not much happened with the code.

At that time, Nathan Huntley and I were studying sequential decision problems, and we needed some way of testing our algorithms, so the library grew support for common decision rules such as maximality, Gamma-maximin, and interval dominance. It provided us with a good platform for testing all kinds of properties, and finding counterexamples with relative ease.

In that same year, the SIPTA summerschool was to be held in Durham. I was looking for a way to demonstrate some simple decision rules on toy examples in my summerschool lecture, in order to teach the students how to apply the principles of decision making with imprecise probabilities to toy examples of their own, and perhaps even in their own research. It seemed obvious then that improb could suit this purpose; however, the library was not really designed with “user friendliness” in mind. So, I spent about a month revamping the library, and in the end I did manage to use it for the school. In fact, the first public version of the library, 0.1.0, was released just before the summerschool.

Much to my pleasure, some students actually managed to use it. Most notably, Erik Quaeghebeur provided valuable feedback, and even contributed to the project during the course of 2011, leading to the 0.1.1 release.

The Future of Improb?

Embarrassingly, the 0.1.1 release in 2011, almost three years ago, was also the last public release. In retrospect, there are perhaps three reasons for this.

First, although the code has been relatively untouched in the last two years, it is actually quite stable and appears to be free of bugs. It has been tested and worked with for a long time. (Therefore, it may still provide a good starting point for anyone wanting to play around with lower previsions and decision making.)

A second reason goes back to 2011, when Erik started murasyp, which has a much cleverer way of representing certain mathematical objects internally, as well as being more general in aim: murasyp allows arbitrary sets of desirable gambles, not just lower previsions.

Thirdly, working with multivariate spaces in improb is a bit of a pain. Extending the library to support, say, imprecise Bayesian networks, and more general multivariate statistical reasoning, is somewhat non-trivial.

The latter two concerns have led me to believe that I need to rethink the design of improb. Some work has happened in the development branch, in particular to find a good programming interface for multivariate work; however, these efforts are still not entirely satisfactory. Nevertheless, with the SIPTA summerschool coming up again this summer in Montpellier, there will be renewed reason to spend some quality time on improb.

Anyway, given improb’s relative maturity, and its suitability for solving toy problems quickly and with an easy interface, an introductory tutorial about improb seems more than worth it, hence this blog post.


First, you will want to install the library. On Linux, assuming that you have Python 2.7 and the GMP development libraries (GMP is used for exact calculations with rationals), you can simply type the following from your favorite shell:

pip install pycddlib --user
pip install improb --user

(The --user option will cause the installation to happen locally without requiring administrator rights.) On Windows, you will need Python 2.7 as well as the pycddlib and improb packages from PyPI (pick the 2.7 .exe installers).


Let us get started and specify a simple lower prevision. From the Python prompt, type

from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=4)

Now, lpr will be an (initially vacuous) lower prevision defined on a possibility space with four elements, namely Ω = {0, 1, 2, 3}. The LowPoly class is the most general class in improb for lower previsions: it can represent any conditional lower prevision with finite domain. As just mentioned, initially, lpr is vacuous. Let us verify that this is indeed the case:

x = [1, 4, 3, 2]
lpr.get_lower(x)
lpr.get_upper(x)

This code will calculate the lower and upper prevision of the gamble X defined by X(0) = 1, X(1) = 4, X(2) = 3, X(3) = 2, via natural extension of the provided assessments. Note that we denoted the gamble X by a lower case x in Python; this is merely to follow Python coding standards. Because we have not specified any assessments, lpr is vacuous, and will return 1 and 4 for the lower and upper prevision of X. As you can see, we simply used a list to specify the gamble: the first element of the list is the value of the gamble for the first element of the possibility space, and so on. 1
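Indeed, for the vacuous lower prevision the natural extension reduces to the worst-case and best-case values of the gamble, so the numbers above can be sanity-checked in plain Python, independently of improb:

```python
x = [1, 4, 3, 2]

# vacuous natural extension: the lower prevision is the minimum value
# of the gamble, and the upper prevision is the maximum value
lower, upper = min(x), max(x)
print(lower, upper)  # 1 4
```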

Natural Extension

Let us do something more interesting. Let Y(0) = Y(1) = 1 and Y(2) = Y(3) = 5. Suppose we know that the expectation of Y must lie between 2 and 3. How does this affect the bounds?

x = [1, 4, 3, 2]
y = [1, 1, 5, 5]
lpr.set_lower(y, 2)
lpr.set_upper(y, 3)
lpr.get_lower(x)
lpr.get_upper(x)

We now get 1.25 and 3.75 instead of 1 and 4 for the lower and upper prevision of X: our bounds on the expectation of Y have tightened our bounds on the expectation of X.
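Under the hood, this natural extension amounts to a pair of linear programs over the credal set of probability mass functions compatible with the assessments. As an independent sanity check (not improb's own implementation, which works with exact rationals via cddlib), assuming scipy is available:

```python
from scipy.optimize import linprog

x = [1, 4, 3, 2]
y = [1, 1, 5, 5]

# credal set: mass functions p >= 0 with sum(p) = 1 and 2 <= E_p[Y] <= 3
A_ub = [[-v for v in y], y]   # encodes -E_p[Y] <= -2 and E_p[Y] <= 3
b_ub = [-2, 3]
A_eq = [[1, 1, 1, 1]]         # p sums to one
b_eq = [1]
bounds = [(0, None)] * 4

# lower prevision of X: minimize E_p[X]; upper: maximize, i.e. -min E_p[-X]
lower = linprog(x, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                bounds=bounds).fun
upper = -linprog([-v for v in x], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                 b_eq=b_eq, bounds=bounds).fun
print(lower, upper)  # approximately 1.25 and 3.75
```

The minimum of E_p[X] is attained by shifting as little mass as possible towards the high-Y outcomes, which reproduces exactly the 1.25 and 3.75 reported by improb.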

These kinds of calculations work with any number of gambles. Improb will correct incoherent assessments, as expected from natural extension; for instance, in

from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=3)
x = [0, 2, 3]
y = [0, 4, 6]
lpr.set_lower(x, 1)
lpr.set_lower(y, 3)
lpr.get_lower(x)

Because Y = 2X, the lower prevision of Y must be twice the lower prevision of X; so lpr.get_lower(x) will return 1.5 rather than 1, reflecting the information embodied by the lower bound on the expectation of Y.

In case of conflict, as in

from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=3)
x = [0, 2, 3]
y = [0, 4, 6]
lpr.set_upper(x, 1)
lpr.set_lower(y, 3)

then improb will raise a ValueError. The conflict above arises from the fact that if the expectation of X is bounded above by 1 then the expectation of Y = 2X must be bounded above by 2: the lower bound of 3 can never be satisfied. Improb simply detected this situation for us, saving us from shooting ourselves in the foot.

There is much more to improb’s lower previsions, too much to cover in a short blog post. Suffice it to note that we can also work with conditional lower previsions (using the Walley-Pelessoni-Vicig algorithm), as well as alternative algorithms for calculating the natural extension in specific cases, for example via Möbius inversion, Choquet integration, ε-contamination, or even plain linear expectation. The documentation for improb.lowprev has all the details.
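To give a flavour of one of these: for a 2-monotone lower probability, the natural extension of a gamble is its Choquet integral with respect to the lower probability. Here is a minimal sketch in plain Python (an illustration of the technique, not improb's API), using an ε-contaminated uniform distribution on four outcomes:

```python
from fractions import Fraction

def lower_prob(event, n=4, eps=Fraction(1, 10)):
    # epsilon-contamination of the uniform distribution:
    # nu(A) = (1 - eps) |A| / n for A != Omega, and nu(Omega) = 1
    if len(event) == n:
        return Fraction(1)
    return (1 - eps) * Fraction(len(event), n)

def choquet(gamble, lower_prob):
    # Choquet integral: minimum value of the gamble, plus the layered
    # contributions (x_(i) - x_(i-1)) * nu({omega : gamble(omega) >= x_(i)})
    space = range(len(gamble))
    values = sorted(set(gamble))
    result = Fraction(values[0])
    for prev, val in zip(values, values[1:]):
        level = [w for w in space if gamble[w] >= val]
        result += (val - prev) * lower_prob(level)
    return result

print(choquet([1, 4, 3, 2], lower_prob))  # 47/20, i.e. 2.35
```

The result agrees with the known closed form for ε-contamination, (1 − ε)E[X] + ε min X = 0.9 × 2.5 + 0.1 × 1 = 2.35.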

Decision Making

As mentioned in the introduction of this post, improb has been used quite extensively in the study of decision making with imprecise probabilities. Consequently, improb has reasonably good support for the standard decision rules that are used in imprecise probability, namely Γ-maximin, Γ-maximax, Hurwicz, interval dominance, and maximality. 2

How does this work in practice? We start with specifying our lower prevision.

from improb.lowprev.lowpoly import LowPoly
lpr = LowPoly(pspace=3)
x = [1, 2, 3]
lpr.set_lower(x, 1.5)
lpr.set_upper(x, 2.5)

From our lower prevision, we create an optimality operator.

from improb.decision.opt import OptLowPrevMax
opt = OptLowPrevMax(lpr)

OptLowPrevMax is the class for creating optimality operators according to maximality. There are similar classes for other optimality operators, and again, we refer to the documentation for details.

We can readily apply our optimality operator opt to any set of gambles. For example,

gambles = [[1, 2, 3], [1.5, 1.4, 1.3]]
list(opt(gambles))

will report that the gamble [1, 2, 3] is optimal, i.e. it dominates [1.5, 1.4, 1.3] for the given lower prevision lpr.

The object opt that we created behaves like a function: it takes any list of gambles, and returns an iterator that iterates over all optimal gambles; for the sake of example we also call list so the result gets printed to the console immediately.
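For intuition into what opt computes: under maximality, a gamble f is optimal when no alternative g in the set has a strictly positive lower prevision of g − f. A pairwise sketch of that criterion for the lower prevision above (using scipy for the linear programs rather than improb's exact machinery; the helper names are made up for illustration):

```python
from scipy.optimize import linprog

x = [1, 2, 3]
gambles = [[1, 2, 3], [1.5, 1.4, 1.3]]

def lower_prev(d):
    # natural extension: minimize E_p[d] over mass functions p with
    # p >= 0, sum(p) = 1, and 1.5 <= E_p[x] <= 2.5
    res = linprog(d,
                  A_ub=[[-v for v in x], x], b_ub=[-1.5, 2.5],
                  A_eq=[[1, 1, 1]], b_eq=[1],
                  bounds=[(0, None)] * 3)
    return res.fun

def maximal(gambles):
    # f is maximal iff no g in the set has lower_prev(g - f) > 0
    return [f for f in gambles
            if not any(lower_prev([gi - fi for gi, fi in zip(g, f)]) > 1e-9
                       for g in gambles)]

print(maximal(gambles))  # [[1, 2, 3]]
```

Here the difference [1, 2, 3] − [1.5, 1.4, 1.3] has lower prevision 0.05 > 0, so the second gamble is rejected and only [1, 2, 3] survives, matching opt.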

Again, the above is just an initial taste, and there is much more to this in improb. Worth mentioning in particular is full support for decision trees and solving sequential decision problems.

Concluding Remarks

I hope you find this interesting and useful! If you have any questions about improb, or if you run into any general technical issues, feel free to post a ticket on improb’s issue tracker.

  1. In simple examples such as this one, this system works very effectively. For more complex possibility spaces, it is better to specify a gamble using a dictionary; see the documentation.
  2. Notably missing from this list are E-admissibility and info-gap.
About the author

Matthias Troffaes is a senior lecturer in statistics at the Department of Mathematical Sciences, Durham University, UK. His research interests include the foundations of statistics and decision making under severe uncertainty, with applications to engineering, renewable energy, and the environment.

Software for credal classification, by Giorgio Corani

A classifier is a statistical model of the relationship between the attributes (features) of an object and its category (class). Classifiers are learned from a training set and later used on a test set to predict the class of a new object given its features. Credal classifiers extend traditional classifiers by allowing set-valued (or indeterminate) predictions of classes. The output set is typically larger when the data set is small or contains many missing values. Credal classifiers aim at producing reliable classifications even in conditions of poor information.
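To make the idea of set-valued prediction concrete, here is a minimal illustration (not JNCC2's actual code) using interval dominance: given lower and upper probabilities for each class, a class is excluded only when some other class's lower probability exceeds its upper probability.

```python
def credal_predict(intervals):
    # intervals: dict mapping each class to (lower, upper) probability bounds
    # keep every class whose upper probability is not beaten by the best
    # lower probability among the classes (interval dominance)
    best_lower = max(lo for lo, hi in intervals.values())
    return {c for c, (lo, hi) in intervals.items() if hi >= best_lower}

# tight intervals (plenty of data): a single class is returned
print(credal_predict({'a': (0.55, 0.65), 'b': (0.2, 0.3), 'c': (0.1, 0.2)}))
# wide intervals (scarce data): the prediction becomes indeterminate
print(credal_predict({'a': (0.3, 0.7), 'b': (0.2, 0.6), 'c': (0.05, 0.2)}))
```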

I am aware of only two software packages suitable for credal classification.


The first is JNCC2, authored by myself together with M. Zaffalon. It is implemented in Java and runs from the command line. The code is open source. It reads data from an ARFF file; this is the open format developed for WEKA (one of the most important open source data mining packages).

The downloadable zip file contains the source code and the user manual, with worked examples. Once a data set is provided, the software performs cross-validation, comparing the Naive Bayes and the Naive Credal Classifier. Continuous features are discretized using the algorithm of Fayyad and Irani (1993).

JNCC2 enables the conservative treatment of missing data. One can declare whether each attribute of the classification problem is subject to a MAR (missing at random) or to a non-MAR missingness process. See here for an introductory discussion of MAR and non-MAR missing data; here for a practical example of non-MAR data; and here for a theoretical discussion on how to perform conservative inference in the presence of non-MAR missing data.

JNCC2 reports the most traditional indicators of performance for credal classification; for instance, it measures the accuracy of the Naive Bayes on the instances on which the Naive Credal Classifier is determinate or indeterminate. It also computes other typical indicators of performance, such as the percentage of indeterminate classifications, the average number of classes returned when indeterminate, and so on. The results are written to a text file. The software has been published in the JMLR special track on open source software.


The weka-IP plugin was developed while preparing the classification chapter of the book Introduction to Imprecise Probabilities. I had the opportunity to spend some time in Granada with J. Abellan, A. Masegosa, and S. Moral. Their research group has extensive experience in credal classification based on decision trees.

We thus decided to make a number of credal classifiers available under the WEKA interface. Andres linked the code of WEKA with that of credal decision trees and JNCC2. He is the maintainer of the package.

Weka-IP contains the following credal classifiers:

  • credal decision trees (paper);
  • naïve credal classifier (paper);
  • lazy naïve credal classifier (paper);
  • credal model averaging (paper).

The last two algorithms were developed by me by extending the JNCC2 code base, but I had not previously released the code.

The credal classifiers are available in a separate folder of classifiers, which is not present in the standard WEKA interface.

Weka-IP also computes the utility-based metrics for scoring credal classifiers; in this respect it is more up-to-date than JNCC2.

I recommend using the Experimenter interface (manual) of Weka-IP. The Experimenter allows comparing the performance of different credal classifiers via statistical tests, based on the results of cross-validation. Moreover, it allows running several credal classifiers on multiple data sets.

Installation and usage details are discussed in the user manual. A few details require attention. Credal decision trees require removing missing data, which is done by the filter E_FilteredClassifier. Credal classifiers derived from JNCC2 require the features to be discrete, which is done by the filter E_Discretize.

For interface reasons, it is not possible to deal with non-MAR missing data.

Another advantage of Weka-IP is that one can exploit the powerful Weka functionalities for data pre-processing, such as feature selection.


Weka-IP makes it easy to get acquainted with different credal classifiers, to run them from a graphical user interface, and to statistically compare their performance. The code might be regarded as preliminary in some respects (see my previous comment on the need for filters), but its interface is generally very easy to use. If you plan to develop your own algorithm for credal classification, it might be a good idea to add it to the Weka-IP package. In this way, you could quickly compare your new algorithm with existing ones.

About the author

G. Corani is a senior researcher at the Imprecise Probability Group of IDSIA (Lugano, Switzerland). He obtained his PhD in Information Technology from the Politecnico di Milano in 2005. His research interests are mainly data mining and applied statistics. Most of his work with imprecise probability regards credal classification. He is the author of about 50 international publications.