A New AI Evaluation Cosmos: Ready to Play the Game?

  • José Hernández-Orallo Universitat Politècnica de València
  • Marco Baroni Facebook
  • Jordi Bieger Reykjavik University
  • Nader Chmait Monash University
  • David L. Dowe Monash University
  • Katja Hofmann Microsoft Research
  • Fernando Martínez-Plumed Universitat Politècnica de València
  • Claes Strannegård Chalmers University of Technology
  • Kristinn R. Thórisson Reykjavik University

Abstract

We report on a series of new platforms and events dealing with AI evaluation that may change the way in which AI systems are compared and their progress is measured. The introduction of a more diverse and challenging set of tasks in these platforms can feed AI research in the years to come, shaping the notion of success and the directions of the field. However, without meaningful structure and systematic guidelines for their organization and use, this playground of tasks and challenges may misdirect the field. Anticipating this issue, we also report on several initiatives and workshops that focus on analyzing the similarity and dependencies between tasks, their difficulty, and what capabilities they really measure, and, ultimately, on elaborating new concepts and tools that can arrange tasks and benchmarks into a meaningful taxonomy.

Author Biography

Marco Baroni, Facebook
Artificial Intelligence Research Laboratory

Published: 2017-10-02
Section: Reports