About
Gathering, incentivizing, and aggregating information from the crowd, as well as tapping vast sources of other data, has become the cornerstone of modern forecasting techniques. This workshop aims to bring together theoreticians, empiricists, and practitioners to discuss the elicitation and aggregation of information for prediction making, a theme that has been emerging in the EC community over the last several years. To foster inclusion across areas and to welcome newcomers, the format mixes evenly between high-level "overview" talks from invited speakers and contributed talks on recent work.
Schedule Overview
All events take place on Tuesday, June 27. Each session mixes invited talks with short contributed talks.
9:00am - 10:30am | Session 1 | Invited speaker: Eric Zitzewitz (Dartmouth)
10:30am - 11:00am | Coffee break
11:00am - 12:30pm | Session 2 | Invited speaker: Jon Kleinberg (Cornell)
12:30pm - 2:00pm | Lunch
2:00pm - 3:30pm | Session 3 | Invited speakers: Jens Witkowski (ETH Zurich) and John McCoy (MIT)
3:30pm - 4:00pm | Coffee break
4:00pm - 5:30pm | Session 4 | Invited speaker: Jenn Wortman Vaughan (Microsoft)
Detailed Schedule
9:00 - 10:30 | Session 1
9:00 | Opening remarks
9:05 | Eric Zitzewitz | Trump's Election and Stock Returns |
Prior to the 2016 election, stock markets reacted to election news as if Trump's election was expected to cause declines in global stocks, emerging market currencies, oil, and interest rates. When Trump actually won, initial market movements in the predicted direction were followed by a rally in US stocks, especially pronounced in stocks expected to do well under a President Trump. These so-called "Trump Trade" stocks continued to appreciate through Trump's inauguration, but between March and May 2017 those gains more than fully reversed. I discuss the extent to which "Trump Trade" stocks can be used as a supplement to prediction markets in tracking the President's political fortunes.
9:55 | Patrick Kane and Stephen Broomell | Detecting Systematic Errors in Forecasting with Kernel Smoothers: Evidence for Environmental Noise Propagation |
10:10 | Pranjal Awasthi, Avrim Blum, Nika Haghtalab, and Yishay Mansour | (Lightning talk) Efficient PAC Learning from the Crowd |
10:15 | Reshmaan Hussam, Natalia Rigol, and Ben Roth | Targeting High Ability Entrepreneurs Using Community Information: Mechanism Design in the Field |
10:30 - 11:00 | Coffee break
11:00 - 12:30 | Session 2 | |
11:00 | Jon Kleinberg | Human Decisions and Machine Predictions in Judicial Settings |
We compare human and algorithmic decision-making in a context with important policy implications: judicial decisions on bail. By law, such decisions hinge on a judge's prediction of what a defendant would do if released; it is thus a promising machine learning application because it is a concrete prediction task for which there is a large volume of data available. Yet comparing algorithms to judges in this setting proves complicated: the data are themselves generated by prior judge decisions, and we only observe crime outcomes for released defendants, not for those whom the judges detained. We develop a set of techniques for dealing with these challenges, and explore a set of further issues, including questions of algorithmic fairness and a set of analyses that focus on predicting judges' decisions as a way of gaining insight into their decision-making. This is joint work with Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan.
11:50 | Daniel Benjamin, David Mandel and Jonathan Kimmelman | Can Cancer Researchers Accurately Judge Whether Preclinical Reports Will Reproduce? |
12:05 | Itai Arieli, Yakov Babichenko, and Rann Smorodinsky | Forecast Aggregation |
12:20 | Discussion period
12:30 - 2:00 | Lunch
2:00 - 3:30 | Session 3 | |
2:00 | John McCoy | Crowd Wisdom and the Surprisingly Popular Answer |
We consider the problem of aggregating many people's judgments on a single question when no outside information about their competence is available. Standard methods for solving this problem select the most popular answer, after correcting for variations in confidence. We present an alternative method: elicit from respondents their predictions about the answers given by others, and select the answer that is more popular than people predict. We prove that under a standard model of Bayesian respondents with asymmetric information our new method strictly outperforms standard methods. We empirically validate our method in a variety of domains, including art valuation and medical diagnosis. I will conclude with a sampling of our ongoing work developing these methods, including an application to forecasting. This is joint work with Drazen Prelec and Sebastian Seung.
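To make the selection rule concrete, here is a minimal sketch in Python. The data layout is an assumption of ours, not the authors' code: `votes` holds each respondent's own answer, and `predictions` holds each respondent's estimate of the fraction of others choosing each answer. The rule picks the answer whose actual frequency most exceeds its average predicted frequency.

```python
# Minimal sketch of the "surprisingly popular" selection rule described
# above. Inputs are illustrative: `votes` holds each respondent's own
# answer; `predictions` holds each respondent's estimate of the share
# of others choosing each answer.
from collections import Counter

def surprisingly_popular(votes, predictions):
    """Return the answer whose actual popularity most exceeds
    its predicted popularity."""
    n = len(votes)
    actual = {a: c / n for a, c in Counter(votes).items()}
    answers = set(votes)
    # Average the respondents' predicted vote share for each answer.
    predicted = {
        a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        for a in answers
    }
    return max(answers, key=lambda a: actual[a] - predicted.get(a, 0.0))

# Toy example: most respondents say "yes", but they also predict that
# "yes" will be even more popular than it actually is, so "no" is the
# surprisingly popular answer.
votes = ["yes", "yes", "yes", "no", "no"]
predictions = [{"yes": 0.9, "no": 0.1}] * 3 + [{"yes": 0.7, "no": 0.3}] * 2
print(surprisingly_popular(votes, predictions))  # -> "no"
```

In the toy example, "no" wins despite receiving fewer votes, because respondents predicted it would be even rarer than it turned out to be.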
2:30 | Pavel Atanasov, Jens Witkowski, Lyle Ungar, Philip Tetlock, and Barbara Mellers | Small Steps to Prediction Accuracy |
2:45 | Jens Witkowski | Proper Proxy Scoring Rules |
Proper scoring rules can be used to incentivize a forecaster to truthfully report her private beliefs about the probabilities of future events and to evaluate the relative accuracy of forecasters. While standard scoring rules can score forecasts only once the associated events have been resolved, many applications would benefit from instant access to proper scores. We introduce proxy scoring rules, which generalize proper scoring rules and, given access to an appropriate proxy, allow for immediate scoring of probabilistic forecasts without access to event outcomes. In particular, we suggest a proxy-scoring generalization of the popular quadratic scoring rule and characterize its incentive and accuracy evaluation properties theoretically. Moreover, using a proxy that is computed from other forecasters' forecasts, we evaluate proxy scoring experimentally on data from a large real-world geopolitical forecasting tournament. Our results show that proxy scoring is competitive with proper scoring, especially when the number of questions is small. This is joint work with Pavel Atanasov, Lyle H. Ungar, and Andreas Krause.
3:15 | Discussion period
3:30 - 4:00 | Coffee break
4:00 - 5:30 | Session 4 | |
4:00 | Jenn Wortman Vaughan | Self-financed Wagering Mechanisms: What’s Been Done and What’s to Come |
Wagering mechanisms allow a principal to elicit the beliefs of a group of agents without taking on any risk. Each agent specifies a belief, her subjective estimate of the likelihood of a future event, along with a monetary budget or wager, the maximum amount that she is willing to lose. The agents' wagers are then collected by the principal and, after the truth is revealed, redistributed to the agents in such a way that agents with more accurate predictions are more highly rewarded. Since agents directly report their beliefs, the principal is able to leverage the wisdom of crowds to obtain an accurate consensus forecast for the event, for example by computing an average. In this talk, I'll begin by motivating and describing the class of Weighted-Score Wagering Mechanisms, the unique wagering mechanisms to simultaneously satisfy a set of desirable properties including strict budget balance and incentive compatibility. I'll then discuss two problems with these mechanisms, Pareto inefficiency (that is, low stakes) and the ability for certain agents to profit regardless of the final outcome, and will describe new wagering mechanisms that have been introduced to overcome these flaws. Along the way, I'll touch on connections between wagering mechanisms and other market mechanisms, including market scoring rules and parimutuel markets.
4:50 | Rahul Deb, Mallesh Pai, and Maher Said | Evaluating Strategic Forecasters |
5:05 | Xintong Wang | (Lightning talk) Market Making with Liquidity Adaptation via Learning Rate Tuning |
5:10 | Rupert Freeman, Sebastien Lahaie, and David Pennock | Crowdsourced Outcome Determination in Prediction Markets |
5:25 | Closing remarks |
Important Dates
- Submission Opens: 8 April 2017 (open now)
- Submission Deadline: 1 May 2017, 11:59:00 PM PDT
- Author Notification: 15 May 2017
- Workshop Date: 27 June 2017
Submission Instructions
We invite submissions on the following topics: forecasting, information elicitation, forecast aggregation, elicitation interfaces that encourage accuracy, mechanisms combining elicitation and aggregation, forecast evaluation, and related topics. We will favor submissions of broad potential interest to both theoretical and applied audiences, and those likely to spark interesting discussions and future work.
Submission page: https://easychair.org/conferences/?conf=ecfw2017
Submissions consist of two components:
- A short description (one to two paragraphs) of why the work is appropriate for this workshop and may be of interest to participants. Please add this description to the bottom of your abstract on the EasyChair submission form.
- The paper, in PDF format, describing the work and its contributions. There are no length or style requirements. Typical submissions are either manuscripts of recently completed work or brief 6-8 page descriptions of ongoing work and current results.
Submissions will be kept confidential. There will not be any published proceedings.
Accepted submissions will be invited for either poster or oral presentation.
Submission Deadline: 1 May 2017, 11:59:00 PM PDT