AI systems often depend on information provided by other agents, for example sensor data or crowdsourced human computation. Providing accurate and relevant data usually requires costly effort that agents may not always be willing to expend. It therefore becomes important both to verify the correctness of data and to provide incentives, so that agents who supply high-quality data are rewarded while those who do not are discouraged by low rewards.
We will cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews and predictions. We will survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the game-theoretic analysis with practical examples of applications in prediction platforms, community sensing and peer grading.
The tutorial addresses two basic types of information elicitation scenarios that are relevant to the AI audience.
In the first scenario, we study the elicitation of fully or partially verifiable information, as in forecasting or crowdsourcing with gold standard tasks. We show how to elicit predictions about a future event from multiple individuals, either explicitly by using strictly proper scoring rules, or implicitly by using a prediction market approach or a reputation-based approach.
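To illustrate the first setting, here is a minimal sketch of scoring a probabilistic forecast with a strictly proper scoring rule, using the quadratic (Brier) score as one standard choice; the function names and the rain example are illustrative assumptions, not material from the tutorial itself.

```python
# Illustrative sketch: the quadratic (Brier) scoring rule, a strictly
# proper scoring rule. An agent's expected score is uniquely maximized
# by reporting its true belief about the event.

def quadratic_score(report, outcome):
    """Score a probability vector `report` over discrete outcomes once
    the realized outcome (given by its index) is observed."""
    return 2 * report[outcome] - sum(p * p for p in report)

def expected_score(belief, report):
    """Expected score of `report` under the agent's true `belief`."""
    return sum(b * quadratic_score(report, i) for i, b in enumerate(belief))

# Hypothetical example: an agent believes rain has probability 0.7.
# Reporting the true belief beats exaggerating it:
truthful = (0.7, 0.3)
shaded = (0.9, 0.1)
assert expected_score(truthful, truthful) > expected_score(truthful, shaded)
```

Because the rule is strictly proper, any deviation from the true belief strictly lowers the expected score, which is what makes explicit elicitation incentive compatible.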
In the second scenario, we investigate how to elicit information that cannot be directly verified, such as product reviews or sensor measurements in crowd-sensing. We present the range of peer-consistency-based methods that have been developed in recent years, along with several examples of how they work in real scenarios.
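The common idea behind peer-consistency methods can be sketched as follows: with no ground truth available, an agent's report is scored by comparing it against the report of a randomly chosen peer who observed the same phenomenon. The code below shows the simplest such scheme, output agreement; the sensor names and reward values are assumptions for illustration, and this is not the full Peer Truth Serum mechanism covered in the tutorial.

```python
import random

def output_agreement_payments(reports, reward=1.0, rng=random):
    """Output agreement: pay each agent `reward` if and only if its
    report matches that of a randomly selected peer."""
    payments = {}
    agents = list(reports)
    for agent in agents:
        peers = [a for a in agents if a != agent]
        peer = rng.choice(peers)  # uniformly random peer
        payments[agent] = reward if reports[agent] == reports[peer] else 0.0
    return payments

# Hypothetical example: three sensors report a discretized pollution
# level for the same area; agreeing sensors tend to be rewarded.
reports = {"s1": "high", "s2": "high", "s3": "low"}
payments = output_agreement_payments(reports, rng=random.Random(0))
```

Output agreement rewards consensus, which can make uninformative "always report the popular answer" strategies profitable; the more refined peer-prediction mechanisms surveyed in the tutorial are designed to remove exactly this weakness.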
Throughout the tutorial, we present three types of material: specific mechanisms, game-theoretic analysis of these mechanisms, and experience with practical application. The tentative schedule of the tutorial is:
Boi Faltings is a full professor at the Swiss Federal Institute of Technology in Lausanne (EPFL). He has worked in AI since 1983 and has been one of the pioneers of mechanisms for truthful information elicitation, with his first work on the topic dating back to 2003. He has taught AI and multi-agent systems at EPFL for 29 years. He is a fellow of AAAI and ECCAI, and has served in various roles on the program committees of recent AAAI, IJCAI, and AAMAS conferences.
Goran Radanovic is a postdoctoral researcher at Harvard University. He received his PhD from the Swiss Federal Institute of Technology in Lausanne (EPFL) on the topic of mechanisms for information elicitation. His work has been published mainly at AI conferences, and he has helped teach a course on multi-agent systems since 2012.
The tutorial is based on the book: Game Theory for Data Science: Eliciting Truthful Information, by Boi Faltings and Goran Radanovic, Morgan & Claypool Publishers, 2017.