Jukka Kohonen · Jukka Suomela

Lessons learned in the challenge: making predictions and scoring them

MLCW 2005 · 1st PASCAL Machine Learning Challenges Workshop, Southampton, UK, April 2005 · doi:10.1007/11736790_7



In this paper we present lessons learned in the Evaluating Predictive Uncertainty Challenge. We describe the methods we used in regression challenges, including our winning method for the Outaouais data set. We then turn our attention to the more general problem of scoring in probabilistic machine learning challenges. It is widely accepted that scoring rules should be proper in the sense that the true generative distribution has the best expected score; we note that while this is useful, it does not guarantee finding the best methods for practical machine learning tasks. We point out some problems in local scoring rules such as the negative logarithm of predictive density (NLPD), and illustrate with examples that many of these problems can be avoided by a distance-sensitive rule such as the continuous ranked probability score (CRPS).
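The contrast between a local scoring rule (NLPD) and a distance-sensitive one (CRPS) can be sketched numerically. The snippet below is illustrative only, not the paper's code: it evaluates both scores for a Gaussian predictive distribution N(mu, sigma²), using the standard closed-form CRPS expression for the Gaussian. Function names are chosen here for clarity.

```python
import math

def nlpd_gaussian(y, mu, sigma):
    """Negative log predictive density of observation y under N(mu, sigma^2)."""
    z = (y - mu) / sigma
    return 0.5 * z * z + math.log(sigma) + 0.5 * math.log(2 * math.pi)

def crps_gaussian(y, mu, sigma):
    """Continuous ranked probability score of y under N(mu, sigma^2),
    via the known closed form: sigma * (z*(2*Phi(z)-1) + 2*phi(z) - 1/sqrt(pi))."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# An outcome 10 standard deviations from the predictive mean:
# NLPD penalizes it quadratically (~50 nats), while CRPS grows
# only linearly in the distance (~9.4), illustrating why a
# distance-sensitive rule is more forgiving of rare tail events.
print(nlpd_gaussian(10.0, 0.0, 1.0))
print(crps_gaussian(10.0, 0.0, 1.0))
```

Both rules are proper for this family, yet they rank misses very differently: NLPD depends only on the density at the observed point, whereas CRPS integrates the squared difference between the predictive CDF and the outcome's step function over the whole real line.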


Joaquin Quiñonero-Candela, Ido Dagan, Bernardo Magnini, and Florence d’Alché-Buc (Eds.): Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11–13, 2005, Revised Selected Papers, volume 3944 of Lecture Notes in Artificial Intelligence, pages 95–116, Springer, Berlin, 2006

ISBN 978-3-540-33427-9


© Springer 2006

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.