ISIPTA'11

Richard Crossman, Joaquín Abellán, Thomas Augustin, Frank Coolen

Building Imprecise Classification Trees With Entropy Ranges


One method for building classification trees is to choose split variables by maximising expected entropy. This can be extended through the application of imprecise probability by replacing instances of expected entropy with the maximum possible expected entropy over credal sets of probability distributions. Such methods may not take full advantage of the opportunities offered by imprecise probability theory. In this paper, we change focus from maximum possible expected entropy to the full range of expected entropy. We then choose one or more potential split variables using an interval comparison method. This method is presented with specific reference to the case of ordinal data, and we present algorithms that maximise and minimise entropy within the credal sets of probability distributions which are generated by the NPI method for ordinal data.
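The abstract describes computing the full range of expected entropy over a credal set of probability distributions, rather than only its maximum. The paper's credal sets come from the NPI method for ordinal data, whose algorithms are given in the paper itself; as a minimal illustrative sketch we instead use the simpler imprecise Dirichlet model (IDM) as a stand-in credal set, where each probability lies in [n_k/(N+s), (n_k+s)/(N+s)]. The maximisation below follows the familiar flattening idea (repeatedly pour the free mass onto the smallest probabilities), and the minimisation uses the fact that entropy is concave, so its minimum is attained at an extreme point that assigns all extra mass to a single category. Function names and the choice of s are illustrative assumptions, not the paper's construction.

```python
import math

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability terms."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def entropy_interval(counts, s=1.0):
    """Entropy range [min H, max H] over the IDM credal set
    {p : n_k/(N+s) <= p_k <= (n_k+s)/(N+s)} -- a stand-in for the
    NPI-based credal sets used in the paper."""
    N = sum(counts)
    K = len(counts)

    # Minimum entropy: entropy is concave, so the minimum lies at an
    # extreme point; each extreme point gives all extra mass s to one
    # category, so we take the smallest entropy over those K vertices.
    min_h = min(
        entropy([(n + (s if i == k else 0)) / (N + s)
                 for i, n in enumerate(counts)])
        for k in range(K)
    )

    # Maximum entropy: start from the lower probabilities and repeatedly
    # raise the currently smallest ones, flattening the distribution
    # until the free mass s/(N+s) is exhausted.
    p = [n / (N + s) for n in counts]
    mass = s / (N + s)
    while mass > 1e-12:
        m = min(p)
        idx = [i for i, x in enumerate(p) if abs(x - m) < 1e-12]
        larger = [x for x in p if x > m + 1e-12]
        # Raise the minima to the next-smallest level, or spend all mass.
        target = min(larger) if larger else m + mass / len(idx)
        step = min((target - m) * len(idx), mass) / len(idx)
        for i in idx:
            p[i] += step
        mass -= step * len(idx)
    max_h = entropy(p)

    return min_h, max_h
```

For example, with ordinal counts `[6, 2, 0]` and `s = 1`, the maximisation lifts the empty third category to 1/9 before recomputing entropy, while the minimisation adds the extra mass to the most frequent category; a split variable would then be chosen by comparing such intervals rather than single maximum-entropy values.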


Imprecise probability, classification trees, nonparametric predictive inference

Download area

The paper is available in the following formats:

Plenary talk: file

Authors’ addresses

Richard Crossman

Joaquín Abellán
Dpto. Ciencias de la Computación
ETSI Informática
18071 Granada
Spain

Thomas Augustin
Department of Statistics
University of Munich
Ludwigstr. 33
D-80539 Munich
Germany

Frank Coolen
Department of Mathematical Sciences
Science Laboratories, South Road
Durham, DH1 3LE
United Kingdom

E-mail addresses

Richard Crossman
Joaquín Abellán
Thomas Augustin
Frank Coolen
