One method for building classification trees chooses split variables by maximising expected entropy. This can be extended to imprecise probability by replacing each expected entropy with the maximum possible expected entropy over a credal set of probability distributions. Such methods, however, may not take full advantage of the opportunities offered by imprecise probability theory. In this paper, we shift the focus from the maximum possible expected entropy to the full range of expected entropy, and choose one or more potential split variables using an interval comparison method. The method is presented with specific reference to ordinal data, and we give algorithms that maximise and minimise entropy within the credal sets of probability distributions generated by the NPI method for ordinal data.
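The idea of working with the full range of expected entropy can be illustrated with a small sketch. The Python code below assumes a credal set described by simple lower/upper probability bounds per category (a box-shaped approximation for illustration only, not the actual NPI-for-ordinal-data structure from the paper); all function names are hypothetical. Maximum entropy under box constraints is found by levelling the probabilities as far as the bounds allow, minimum entropy by pushing mass to a vertex of the credal set, and two variables' entropy intervals are then compared by interval dominance.

```python
import itertools
import math

def entropy(p):
    """Shannon entropy in bits, with the convention 0 * log 0 = 0."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def max_entropy(lower, upper):
    """Maximum entropy over {p : lower[i] <= p[i] <= upper[i], sum(p) = 1}.
    Assumes the constraints are feasible. The maximiser makes the
    probabilities as equal as possible: p[i] = clip(t, lower[i], upper[i])
    for a common level t, found here by bisection on t."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        t = (lo + hi) / 2
        s = sum(min(max(t, l), u) for l, u in zip(lower, upper))
        lo, hi = (t, hi) if s < 1 else (lo, t)
    t = (lo + hi) / 2
    p = [min(max(t, l), u) for l, u in zip(lower, upper)]
    s = sum(p)  # renormalise tiny numerical drift
    return entropy([x / s for x in p])

def min_entropy(lower, upper):
    """Minimum entropy over the same box-constrained credal set. Entropy is
    concave, so the minimum sits at a vertex, where at most one probability
    lies strictly between its bounds. For small numbers of categories, all
    vertices are reached by greedily filling categories to their upper
    bounds in some order, so we enumerate fill orders."""
    k = len(lower)
    best = float("inf")
    for order in itertools.permutations(range(k)):
        p = list(lower)
        rest = 1.0 - sum(p)
        for i in order:
            add = min(upper[i] - p[i], rest)
            p[i] += add
            rest -= add
        if rest < 1e-9:
            best = min(best, entropy(p))
    return best

def dominates(interval_a, interval_b):
    """Interval dominance: A's entropy interval lies entirely below B's."""
    return interval_a[1] < interval_b[0]
```

With bounds `lower = [0.1, 0.1, 0.1]` and `upper = [0.8, 0.8, 0.8]`, the entropy interval is roughly [0.92, 1.58] bits: the maximum is attained at the uniform distribution, the minimum by concentrating as much mass as possible on one category. When two candidate split variables have overlapping entropy intervals, neither dominates, and both may be retained as potential splits.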
The paper is available in the following formats:
Plenary talk: file
Department of Computer Science (Dpto. Ciencias de la Computación)
Department of Statistics
University of Munich
Department of Mathematical Sciences
Science Laboratories, South Road
Durham DH1 3LE