We are proud to announce that ISIPTA 2019 will feature keynote talks by Thomas Dietterich, Vladimir Vovk and Aharon Ben-Tal. Thomas is one of the founders of the field of Machine Learning. He will share his views on robust AI and shed some light on the role that imprecise probabilities have to play there. Vladimir (Volodya) developed game-theoretic probability, in cooperation with Glenn Shafer. He will present their latest book on the topic, emphasizing its connections with imprecise probabilities. Aharon is an authority on robust optimization. He will explain to us the basics of this field of research and its relevance to imprecise probabilities.
On the occasion of the 20-year anniversary of ISIPTA, the program will also feature four Update@ISIPTA talks by former presidents of SIPTA: Gert de Cooman, Teddy Seidenfeld, Fabio Cozman and Matthias Troffaes. They will discuss topics in imprecise probabilities close to their hearts, and offer some historical perspective to the younger members of our community, mixed with recent developments and challenges for the future.
Titles and abstracts of all seven invited talks are available below, together with short biographies of the speakers.
Aharon Ben-Tal
Robust optimization of uncertain optimization problems affected by ambiguous probability distributions
Wednesday 3 July, 16:30 – 17:30, Chair: Erik Quaeghebeur
Mathematical optimization problems traditionally model uncertainty via probability distributions. However, observable statistical data can often be explained by many strikingly different distributions. This “uncertainty about the uncertainty” poses a major challenge for optimization problems with uncertain parameters.
The emerging field of distributionally robust optimization (DRO) seeks to propose new optimization models whose solutions are robust in the sense that they are optimized against all distributions consistent with the given prior information. DRO models also offer a more realistic account of uncertainty and mitigate the post-decision disappointment characteristic of stochastic models. Moreover, in important cases where the original stochastic problems are computationally intractable, the corresponding DRO counterpart generates tractable optimization problems.
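The core idea can be illustrated with a toy computation (this is an illustrative sketch, not taken from the talk; all distributions, decisions and costs below are made up). A distributionally robust decision minimizes the worst-case expected cost over every distribution in the ambiguity set, rather than the expected cost under one fixed distribution:

```python
# Toy illustration of distributionally robust optimization over a finite
# ambiguity set. All numbers are hypothetical.

# Three candidate distributions over two scenarios, all assumed to be
# consistent with the observed statistical data.
ambiguity_set = [
    (0.5, 0.5),
    (0.3, 0.7),
    (0.6, 0.4),
]

# Cost of each decision in each scenario.
costs = {
    "a": (1.0, 3.0),  # cheap in scenario 1, expensive in scenario 2
    "b": (2.0, 2.0),  # constant cost across scenarios
}

def worst_case_expected_cost(decision):
    """Expected cost under the least favourable distribution in the set."""
    c1, c2 = costs[decision]
    return max(p1 * c1 + p2 * c2 for (p1, p2) in ambiguity_set)

# The DRO choice hedges against the worst plausible distribution,
# avoiding the post-decision disappointment of betting on a single one.
robust_choice = min(costs, key=worst_case_expected_cost)
print(robust_choice)
```

Here decision "a" looks better under some distributions, but its worst-case expected cost (2.4, under the distribution (0.3, 0.7)) exceeds that of "b" (2.0), so the robust model selects "b".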
Aharon Ben-Tal is a Professor of Operations Research and former head of the MINERVA Optimization Center at the Faculty of Industrial Engineering and Management at Technion - Israel Institute of Technology. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973 and has been a Visiting Professor at the University of Michigan, University of Copenhagen, Delft University of Technology, MIT, CWI Amsterdam, Columbia University and NYU. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, and convex and nonsmooth analysis. Recently his research has focused on optimization problems affected by uncertainty. Over the last 15 years, he has devoted much effort to engineering applications of optimization methodology and computational schemes; some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 130 papers in professional journals and co-authored three books; as of February 2019, these publications have more than 23,500 citations (Google Scholar). He has served on the editorial boards of all major Optimization/Operations Research journals and has given numerous plenary and keynote lectures at international conferences. Finally, he was awarded the EURO Gold Medal (the highest distinction in Operations Research within Europe), was granted the status of Distinguished Scientist by CWI, received the IBM Faculty Award, was named a Fellow of SIAM and of INFORMS, and received two lifetime achievement awards for his work on optimization, from INFORMS (the Khachiyan Prize) and from ORSIS.
Vladimir Vovk
Game-theoretic foundations for imprecise probabilities
Thursday 4 July, 16:30 – 17:30, Chair: Gert de Cooman
In this talk I will discuss the main topics of my forthcoming book with Glenn Shafer (“Game-theoretic foundations for probability and finance”, to be published by Wiley in May 2019), concentrating on parts that are most relevant for the communities working on imprecise probabilities. My plan is to review the game-theoretic foundations for imprecise probabilities and compare them with the much more standard measure-theoretic ones. I will argue that in some important contexts the former have significant advantages, allowing us to eliminate or weaken the required stochastic assumptions, design new prediction strategies, and make scientific modelling more realistic.
Vladimir Vovk is Professor of Computer Science at Royal Holloway, University of London. His current research interests include the foundations of probability and statistics, machine learning, and mathematical finance. His interest in the game-theoretic foundations of probability started in the early 1980s, when Andrei Kolmogorov charged him with investigating Kolmogorov’s finitary version of Richard von Mises’s notion of a collective. In the early 1990s Vovk became a student of the American Institute of Business and Economics, founded in Moscow by Edwin G. Dolan, an American economist of the Austrian school, and his wife; there he developed an interest in nonstochastic approaches to finance. In 2001 Glenn Shafer and Vovk published the first modern book-length exposition of game-theoretic probability, “Probability and finance: it’s only a game”, and since then they have written more than 50 working papers on the topic. Their new book, “Game-theoretic foundations for probability and finance” (2019), gives an up-to-date summary of game-theoretic probability, a field that they believe is still in its infancy.
Thomas Dietterich
Robust artificial intelligence and robust organizations
Friday 5 July, 16:30 – 17:30, Chair: Fabio Cozman
Many emerging AI applications involve high-risk settings where errors can lead to injury or death. How can we ensure that these applications are as safe as possible? This talk will argue that the answer is a combination of robust artificial intelligence and robust human organizations. Every application must be managed and operated by a human organization. We will begin by reviewing the area of High Reliability Organizations (HROs) and discuss the properties of human organizations that confer high reliability. From this we will articulate a research agenda for robust artificial intelligence that includes robust reasoning and decision making, anomaly detection, ensemble root cause analysis, situational awareness, human-machine improvisational problem solving, and detecting failures in the human organization.
Dr. Thomas G. Dietterich is Distinguished Professor (Emeritus) of Computer Science at Oregon State University and Chief Scientist of BigML, a machine learning startup company. As one of the founders of the field of machine learning, Dietterich has published more than 130 scientific papers. His research seeks methods for enabling AI systems to robustly deal with “unknown unknowns”. He also leads projects applying AI to biological conservation, the management of invasive species, and policies for controlling wildfire. He applies machine learning methods to automatically detect errors in big-data applications, including weather data collected by the Trans-African Hydro-Meteorological Observatory (TAHMO), a sustainable development project operating throughout sub-Saharan Africa. Dietterich is a Fellow of the American Association for the Advancement of Science, the Association for Computing Machinery, and the Association for the Advancement of Artificial Intelligence. He is a Past President of the Association for the Advancement of Artificial Intelligence, founding President of the International Machine Learning Society, and former Executive Editor of the journal Machine Learning.
Teddy Seidenfeld
Rates of incoherence and IP theory (again)
Wednesday 3 July, 12:00 – 12:30, Chair: Matthias Troffaes
In this presentation, I reprise a theme from our ISIPTA-99 paper, namely, how select IP models relate to book making against sets of incoherent previsions. The central idea in that work is to distinguish among different sets of incoherent previsions based on what we call “rates of incoherence”. That is, not all incoherent sets of previsions are equally incoherent!
As part of a new perspective on our old work, I discuss how to use what we call “robust” algorithms for updating (possibly) incoherent previsions so as to reduce their rate of incoherence. What makes these algorithms interesting is that the incoherent agent may apply them without being aware of the rate of incoherence in her/his previsions. In that sense, these “robust” algorithms display self-improving qualities that result even when the agent is unaware of her/his normative failings. One illustration is to use new “evidence” from mathematical computations to update incoherent previsions about elementary mathematical propositions.
Teddy Seidenfeld is the H.A. Simon University Professor of Philosophy and Statistics at Carnegie Mellon University, Pittsburgh PA, USA. He works on foundational issues in Probability, Statistical Inference, and Decision Theory. Often, his work relates to problems involving more than one decision maker, which accounts for his approach to and use of Imprecise Probability theory. Seidenfeld took his undergraduate degree at the University of Rochester (1969), where H.E. Kyburg, Jr. served as his Philosophy advisor, and where he studied Mathematics first with S. Tennenbaum and then with J.H.B. Kemperman. He took his doctoral degree in Philosophy at Columbia University (1975), where I. Levi served as his thesis advisor. Seidenfeld’s thesis was about R.A. Fisher’s contributions to Statistics, with a focus on fiducial inference. Each of Kyburg’s and Levi’s original theories remains an important influence on Seidenfeld’s thinking. Seidenfeld became a member of the Philosophy faculty at the University of Pittsburgh (1975), at Washington University (1981), and at Carnegie Mellon University (1985), where he helped to found the current Philosophy Department. For the past 40 years, Seidenfeld has had a continuing collaboration with J.B. Kadane and M.J. Schervish, emeritus Professors of the Statistics Department at CMU. A collection of their work from the 1990s is available in Rethinking the Foundations of Statistics (Cambridge University Press, 1999). Their collaboration has generated numerous papers about Imprecise Probability, with contributions to many of the biennial ISIPTA conferences. Seidenfeld served as the third president of SIPTA, from 2009 to 2013.
Gert de Cooman
Imprecision in stochastic processes – combining IP and GTP
Thursday 4 July, 11:30 – 12:00, Chair: Teddy Seidenfeld
The talk traces the origins of a growing body of work on dealing with imprecision and robustness in stochastic processes, and in particular Markov chains. It intends to show how combining ideas in the fields of imprecise probabilities and game-theoretic probability led to an approach that proved more productive and perhaps more successful than earlier attempts at solving this problem.
Gert de Cooman is Full Professor in Uncertainty Modelling and Systems Science at Ghent University, Belgium, where he heads FLip, the Foundations Lab for Imprecise Probabilities. He has been actively involved in the development of IP and has contributed to diverse areas within it, such as mathematical aspects of coherence and lower previsions (on which Matthias Troffaes and he wrote a monograph, Lower Previsions), models for independence, credal networks, imprecise Markov chains, game-theoretic probability, desirability, exchangeability, predictive inference, choice functions, and the connections between imprecision and randomness. He was one of the organisers of the first ISIPTA, a founding member of SIPTA, and its first president, for a period of four years.
Fabio Cozman
Credal networks and the like
Friday 5 July, 11:30 – 12:00, Chair: Gregory Wheeler
Bayesian networks and other graph-theoretical modeling languages appeared in artificial intelligence research during the 1980s; since then, several extensions of those tools have been proposed to cope with imprecision and indeterminacy in probability values. This talk examines what we know about these modeling languages and what their most promising extensions are.
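The kind of imprecision these extensions capture can be sketched in a few lines (an illustrative example with hypothetical numbers, not from the talk): instead of a single distribution per variable, a credal model keeps a set of candidate distributions, and an event's probability is only pinned down to the interval between its minimum and maximum over that set.

```python
# Minimal sketch of the credal-set idea: a set of candidate
# distributions yields lower and upper probabilities for each event.
# All distributions below are hypothetical.

# Credal set for a binary variable; each tuple is (P(rain), P(no rain)).
credal_set = [
    (0.2, 0.8),
    (0.3, 0.7),
    (0.45, 0.55),
]

def lower_probability(event_index):
    """Smallest probability assigned to the event by any candidate."""
    return min(p[event_index] for p in credal_set)

def upper_probability(event_index):
    """Largest probability assigned to the event by any candidate."""
    return max(p[event_index] for p in credal_set)

# P(rain) is only known to lie in an interval, not to equal a number.
print(lower_probability(0), upper_probability(0))
```

Here P(rain) is bounded by [0.2, 0.45]; credal networks propagate such interval-valued (more generally, set-valued) information through a graph structure, in the same way Bayesian networks propagate precise probabilities.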
Fabio Cozman is a Full Professor at Universidade de São Paulo, Brazil, where he works on probabilistic reasoning and machine learning, with a special interest in formalisms that extend probability theory. He received an Engineering degree from USP, Brazil, and a PhD in Robotics from Carnegie Mellon University, USA, and has served, among other activities, as Program and General Chair of the Conference on Uncertainty in Artificial Intelligence, Area Chair of the International Joint Conference on Artificial Intelligence, Associate Editor of the Artificial Intelligence Journal, Associate Editor of the Journal of Artificial Intelligence Research, and Associate Editor of the International Journal of Approximate Reasoning.
Matthias Troffaes
Decisions & algorithms: how to get your act together?
Saturday 6 July, 11:30 – 12:00, Chair: Enrique Miranda
Congratulations: you’ve modeled your severe uncertainty using lower previsions! But now what? You need to make a decision, or carry out some other form of complicated inference. In this talk, I will present some highlights from the history of decision making under severe uncertainty and ambiguity, from Pascal and de Condorcet all the way to Walley and beyond. I will discuss how these ideas developed into algorithms, and what challenges lie ahead for imprecise probability to become even more useful for applications in statistics, artificial intelligence, and machine learning.
After receiving his MSc degree in engineering (theoretical physics) in 2000 from Ghent University, Belgium, Matthias Troffaes joined the SYSTeMS research group at the same university as a doctoral researcher, pursuing research in imprecise probability theory under the guidance of Gert de Cooman and earning his PhD in April 2005. In July 2005, he went to Carnegie Mellon University as a Francqui Foundation Fellow of the Belgian American Educational Foundation, working as a post-doctoral researcher with Teddy Seidenfeld. In September 2006, he became a lecturer in statistics at the Department of Mathematical Sciences, Durham University, where he is currently Associate Professor (Reader) in Statistics. Dr Troffaes is an expert in the theory of imprecise probabilities. His main research interests concern the foundations of statistics and decision making under severe uncertainty, with applications to the environmental and engineering sciences. This includes sequential decision processes (backward induction and dynamic programming), uncertainty modelling (lower previsions, p-boxes, non-additive integration, linear programming), optimal control, bioinformatics, and expert elicitation.