On 13 May 2015, after four years of intensive research under the enthusiastic supervision of Gert de Cooman, I successfully defended my PhD Thesis, entitled “Credal Networks under Epistemic Irrelevance: Theory and Algorithms”. The jury was composed of Fabio Cozman, Enrique Miranda, Serafín Moral, Joris Walraevens, Dirk Aeyels, Dries Benoit, Jan Van Campenhout and Rik Van de Walle.

My dissertation presents a detailed study of credal networks under epistemic irrelevance [1], which are probabilistic graphical models that can compactly and intuitively represent the uncertainty that is associated with the key variables in some domain, and which can then be used to answer various domain-specific queries (compute inferences) that are of interest to the user. They share many of the nice features of Pearl’s celebrated Bayesian networks [2], but have the added advantage that they can represent uncertainty in a more flexible and realistic way.

Unlike credal networks under strong independence, which generalise Bayesian networks in a similar way, very little was known so far about credal networks under epistemic irrelevance and their properties, and efficient inference was possible only for specific inferences in networks with a tree structure [3]. The goal of my PhD was to change this, that is, to study the properties of credal networks under epistemic irrelevance, and to use these properties to develop efficient inference algorithms for them. If you don’t feel like reading through the more detailed summary that I am about to provide, let me just say that I did indeed achieve this goal: I have unravelled the theoretical properties of credal networks under epistemic irrelevance and I have developed multiple efficient exact inference algorithms for them.

**Modelling uncertainty**

Since a credal network under epistemic irrelevance is a special type of imprecise probability model, my dissertation starts with a general introduction to the theory of imprecise probabilities [4]. Simply put, whenever it is infeasible to reliably estimate a single probability, this theory allows for the use of a set of probabilities instead, each of whose elements is regarded as a candidate for some ideal ‘true’ probability. However, this simplified view is only one of the many ways to look at or interpret imprecise probabilities. In fact, uncertainty can be expressed without any reference to probabilities, using other imprecise-probabilistic frameworks such as sets of desirable gambles, lower previsions and sets of linear previsions. In the first part of my dissertation, I provide a detailed overview of these different frameworks and their interpretation, and discuss how they are connected to each other. I pay special attention to conditional models, which I regard as primitive concepts whose connection with unconditional models should be established by means of rationality criteria. The main advantage of the resulting so-called coherent conditional models is that they do not suffer from the traditional problems that arise when some of the conditioning events have probability zero. This is especially important in the context of imprecise probabilities, where probability zero cannot be ignored because it may easily happen that an event has lower probability zero but positive upper probability.
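The connection between sets of probabilities and lower previsions can be made concrete with a small numeric sketch. This example is my own illustration, not taken from the dissertation: a credal set on a three-element space, given by finitely many extreme points, together with the lower and upper previsions (expectations) it induces. Since expectation is linear in the probability mass function, its minimum and maximum over the convex hull are attained at the extreme points, so it suffices to scan them. The example also illustrates the probability-zero phenomenon mentioned above: an event can have lower probability zero but positive upper probability.

```python
def expectation(pmf, gamble):
    """Expectation of a gamble (a real-valued map on the possibility space)
    under a single probability mass function."""
    return sum(pmf[x] * gamble[x] for x in pmf)

def lower_prevision(credal_set, gamble):
    """Lower prevision of a gamble: the minimum expectation over the credal
    set.  For a credal set given by finitely many extreme points, the minimum
    of a linear function is attained at one of them."""
    return min(expectation(p, gamble) for p in credal_set)

def upper_prevision(credal_set, gamble):
    """Upper prevision, conjugate to the lower one."""
    return max(expectation(p, gamble) for p in credal_set)

# Two candidate pmfs on {a, b, c}: the ideal 'true' pmf is only known to lie
# in their convex hull.
credal_set = [
    {'a': 0.5, 'b': 0.3, 'c': 0.2},
    {'a': 0.0, 'b': 0.5, 'c': 0.5},
]

# Lower/upper probability of the event {a} is the lower/upper prevision of
# its indicator gamble: here {a} has lower probability zero but positive
# upper probability, so it cannot simply be ignored when conditioning.
indicator_a = {'a': 1.0, 'b': 0.0, 'c': 0.0}
print(lower_prevision(credal_set, indicator_a))  # 0.0
print(upper_prevision(credal_set, indicator_a))  # 0.5
```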

**Updating and conditioning**

Although my overview of imprecise probability theory contains new results that fill some gaps in the literature, its contribution mainly consists in bringing together results from various existing frameworks and connecting them to each other. The first real contribution of this dissertation is my discussion of updating, which is the act of changing a model based on the information that some event has occurred. In probability theory, it has become standard practice to do this by conditioning on that event using Bayes’s rule. Similarly, in an imprecise-probabilistic setting, updating is typically performed by applying an imprecise conditioning rule such as regular or natural extension. However, little argumentation is usually given as to why such an approach would make sense. I help address this problem by providing a firm philosophical justification for using natural and regular extension as updating rules. What makes this justification especially powerful is that I derive it directly in terms of sets of desirable gambles. In this way, I avoid making some of the unnecessarily strong assumptions that are traditionally adopted, such as the existence of an ideal ‘true’ probability mass function.
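To make the regular-extension updating rule a little more tangible, here is a minimal sketch of my own (not the dissertation's formulation), assuming a credal set given by finitely many extreme points: regular extension discards the mass functions that give the observed event probability zero and applies Bayes's rule to the remaining ones. Because the conditional expectation is a linear-fractional function of the mass function, its minimum over the credal set is attained at an extreme point, so scanning the surviving extreme points suffices.

```python
def regular_extension(credal_set, event, gamble):
    """Lower prevision of a gamble conditional on an event, by regular
    extension: discard the pmfs that give the event probability zero and
    apply Bayes's rule to the rest.  Assumes the upper probability of the
    event is positive, so at least one pmf survives."""
    values = []
    for p in credal_set:
        prob_event = sum(p[x] for x in event)
        if prob_event > 0:
            cond_exp = sum(p[x] * gamble[x] for x in event) / prob_event
            values.append(cond_exp)
    return min(values)

# Three candidate pmfs on {a, b, c}; the third gives the event {a, b}
# probability zero and is therefore ignored by regular extension.
credal_set = [
    {'a': 0.5, 'b': 0.3, 'c': 0.2},
    {'a': 0.2, 'b': 0.6, 'c': 0.2},
    {'a': 0.0, 'b': 0.0, 'c': 1.0},
]
event = {'a', 'b'}
gamble = {'a': 1.0, 'b': -1.0, 'c': 0.0}

print(regular_extension(credal_set, event, gamble))  # ≈ -0.5
```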

**Multivariate models**

In order to apply imprecise probabilities in a multivariate context, additional tools are needed, such as marginalisation, as well as ways of combining these tools with concepts such as conditioning and updating. All of this is well known and relatively easy in terms of probabilities, but it becomes more challenging for some of the other imprecise-probabilistic frameworks that I consider. My dissertation devotes a complete chapter to these issues. I gather the existing tools, add new ones whenever something is missing, and connect all of them with one another. The result is a complete and well-founded theory of multivariate imprecise probabilities that is, to the best of my knowledge, novel in its completeness, generality and consistency. Using this theory, I then formally introduce one of the most important concepts of my dissertation: epistemic irrelevance, which is an imprecise-probabilistic generalisation of independence that is asymmetric. I recall several existing definitions for this notion, argue why only one of them is really adequate, and compare epistemic irrelevance to other imprecise-probabilistic independence notions. Finally, I explain how structural assessments such as epistemic irrelevance can be combined with direct or local partial probability assessments to construct a multivariate uncertainty model.

**Credal networks under epistemic irrelevance**

The rest of my dissertation is concerned with one particular type of multivariate uncertainty model: the irrelevant natural extension of a credal network under epistemic irrelevance. The basic idea is very similar to that of a Bayesian network. The starting point is a collection of domain-specific variables that are connected by means of arrows that reflect how these variables depend on each other. The arrows form a Directed Acyclic Graph (DAG), which simply means that there are no directed cycles. The interpretation of the graph is that for any variable, conditional on its parents, its non-parent non-descendants are epistemically irrelevant. Each variable also has a given local imprecise-probabilistic model, conditional on the values of its parents in the graph. In combination with the assessments of epistemic irrelevance that correspond to the DAG, these local models form a credal network under epistemic irrelevance. The most conservative global uncertainty model that is compatible with all these assessments is called the irrelevant natural extension of the network. This concept was first introduced by Cozman [1], who defined it in terms of sets of probabilities under the simplifying assumption that all probabilities are strictly positive. I drop this positivity assumption and provide definitions in terms of three other frameworks as well: sets of desirable gambles, lower previsions and sets of linear previsions. These different definitions turn out to be closely related, which allows me to translate results that are proved in one framework to analogous results in other frameworks.
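The graphical notions involved in the interpretation above can be sketched in a few lines of code. This is my own illustration (the variable names and the example DAG are hypothetical, not the dissertation's): for each node of a DAG, we compute its parents and its set of non-parent non-descendants, which is the set judged epistemically irrelevant to that node conditional on its parents.

```python
def descendants(dag, node):
    """All nodes reachable from `node` along directed edges
    (children, grandchildren, ...)."""
    seen, stack = set(), list(dag[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(dag[n])
    return seen

def non_parent_non_descendants(dag, node):
    """The nodes that, conditional on the parents of `node`, are assessed to
    be epistemically irrelevant to `node`."""
    parents = {n for n in dag if node in dag[n]}
    return set(dag) - {node} - parents - descendants(dag, node)

# A small DAG given as an adjacency map (node -> set of children):
#   A -> C <- B,  C -> D
dag = {
    'A': {'C'},
    'B': {'C'},
    'C': {'D'},
    'D': set(),
}

# Conditional on its parent C, node D has non-parent non-descendants A and B.
print(non_parent_non_descendants(dag, 'D'))
```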

**Why I studied this type of credal network**

Credal networks under epistemic irrelevance are not the only imprecise-probabilistic generalisations of Bayesian networks. In fact, they are not even all that popular. Most authors prefer to consider a different type of credal networks, called credal networks under strong independence. I believe that the main reason for this lack of popularity of credal networks under epistemic irrelevance has been a profound lack of known theoretical properties. No doubt, this has severely inhibited the development of tractable inference algorithms. In fact, until now, only one inference algorithm was available, and even then, only for a particular type of inference and for networks whose DAG has a tree structure [3]. Nevertheless, due to the remarkable efficiency of this particular algorithm, which is linear in the size of the network, and because that same inference problem is NP-hard in credal networks under strong independence [5], credal networks under epistemic irrelevance are regarded as a promising alternative that requires—and deserves—further research [6, Section 10.6]. This further research is what my dissertation is all about.

**Theoretical properties**

One of the main contributions of my dissertation is a detailed study of the theoretical properties of the multivariate uncertainty model that corresponds to a credal network under epistemic irrelevance: the irrelevant natural extension. By focusing on the framework of sets of desirable gambles, I was able to derive some remarkable properties of this model, which I then managed to translate to other frameworks as well. A first important example is a fundamental separating hyperplane result that establishes a connection between the irrelevant natural extension of a complete network and that of its subnetworks. This result leads to various marginalisation, factorisation and external additivity properties. A second important result is that the irrelevant natural extension satisfies a collection of epistemic irrelevancies that is induced by AD-separation, an asymmetric adaptation of d-separation that is proved to satisfy all graphoid properties except symmetry. I also establish connections with the notions of independent natural extension and marginal extension and study the updated models that are obtained by applying regular extension to the irrelevant natural extension of a credal network.

**Inference algorithms**

In the final part of my dissertation, I show how the theoretical properties that I have proved can be used to develop efficient inference algorithms for credal networks under epistemic irrelevance. A first important contribution consists of two preprocessing techniques that can be used to simplify inference problems before the actual algorithm is applied. I explain how and when it is possible to translate an inference problem in a large network into a similar problem in a smaller network, and show how solving a conditional inference problem can be reduced to solving a series of unconditional ones. In a second set of results, I rephrase inference as a linear optimisation problem. As was already mentioned by Cozman [1], every unconditional inference can be computed by solving a linear program. However, in order to establish this result, he required a simplifying positivity assumption. I show that this positivity assumption is not needed; unconditional inferences can always be characterised as the solution of a linear program. For conditional inferences, multiple such linear programs need to be solved. Unfortunately, the size of these linear programs is exponential in the size of the network, so although this method is applicable in principle to any network, it is only tractable for small ones. For the specific case of a network that consists of two disconnected binary variables, I was able to solve the corresponding linear program symbolically. In this way, I obtained closed-form expressions for the extreme points of the independent natural extension of two binary models. Fortunately, the intractability of brute force linear programming methods can often be circumvented by developing other, more efficient and often recursive computational techniques. I illustrate this by means of a number of examples.
My most important algorithmic contribution, and the proverbial icing on the cake, is a collection of recursive algorithms that can efficiently compute various inferences in credal networks under epistemic irrelevance whose graphical structure is a recursively decomposable DAG, which is a new type of DAG that includes trees as a special case.
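One way to picture the reduction of a conditional inference problem to a series of unconditional ones is through the generalised Bayes rule. The sketch below is a generic textbook device of my own, not the dissertation's algorithm, and it assumes a credal set given by finitely many extreme points and a conditioning event with positive lower probability (so that the generalised Bayes rule coincides with regular extension): the conditional lower prevision of a gamble f is the unique value mu at which the unconditional lower prevision of the gamble that equals f − mu on the event and zero elsewhere crosses zero, and we locate that value by bisection, each step solving one unconditional problem.

```python
def lower(credal_set, gamble):
    """Unconditional lower prevision: the minimum expectation over the
    credal set's extreme points."""
    return min(sum(p[x] * gamble[x] for x in p) for p in credal_set)

def conditional_lower(credal_set, event, f, tol=1e-9):
    """Lower prevision of f conditional on `event`, obtained by reducing the
    conditional problem to a series of unconditional ones.  By the
    generalised Bayes rule (valid here because the event has positive lower
    probability), the answer is the value mu at which the unconditional
    lower prevision of I_event * (f - mu) crosses zero; the map
    mu -> lower(I_event * (f - mu)) is strictly decreasing, so bisection
    converges to that zero."""
    lo = min(f[x] for x in event)
    hi = max(f[x] for x in event)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        g = {x: (f[x] - mu if x in event else 0.0) for x in f}
        if lower(credal_set, g) >= 0:
            lo = mu
        else:
            hi = mu
    return (lo + hi) / 2

# Two candidate pmfs on {a, b, c}; the event {a, b} has lower probability
# 0.8 > 0, so the generalised Bayes rule applies.
credal_set = [
    {'a': 0.5, 'b': 0.3, 'c': 0.2},
    {'a': 0.2, 'b': 0.6, 'c': 0.2},
]
f = {'a': 1.0, 'b': -1.0, 'c': 0.0}

print(conditional_lower(credal_set, {'a', 'b'}, f))  # ≈ -0.5
```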

**Conclusion**

The main conclusion of my dissertation is that credal networks under epistemic irrelevance satisfy surprisingly many powerful theoretical properties, and that these properties can be exploited to develop efficient exact inference algorithms for large classes of inference problems that were previously presumed to be intractable. Since many of these inference problems are NP-hard in credal networks under strong independence, these results turn credal networks under epistemic irrelevance into a serious, practically feasible alternative type of credal network that should enable practitioners to solve real-life problems for which the corresponding necessary inferences were hitherto regarded as intractable.

**References**

[1] Fabio G. Cozman. Credal networks. Artificial Intelligence, 120(2):199–233, 2000.

[2] Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, San Mateo, 1988.

[3] Gert de Cooman, Filip Hermans, Alessandro Antonucci, and Marco Zaffalon. Epistemic irrelevance in credal nets: the case of imprecise Markov trees. International Journal of Approximate Reasoning, 51(9):1029–1052, 2010.

[4] Peter Walley. Statistical reasoning with imprecise probabilities. Chapman and Hall, London, 1991.

[5] Denis D. Mauá, Cassio P. de Campos, Alessio Benavoli, and Alessandro Antonucci. Probabilistic inference in credal networks: new complexity results. Journal of Artificial Intelligence Research, 50:603–637, 2014.

[6] Alessandro Antonucci, Cassio P. de Campos, and Marco Zaffalon. Probabilistic graphical models. In Thomas Augustin, Frank P. A. Coolen, Gert de Cooman, and Matthias C. M. Troffaes, editors, Introduction to Imprecise Probabilities, pages 207–229. John Wiley & Sons, Chichester, 2014.

Jasper De Bock is currently a Post-Doctoral Researcher of the Research Foundation – Flanders (FWO), working at the Electronics and Information Systems department of Ghent University. On 13 May 2015, he obtained his PhD in Mathematical Engineering at the same university.