People accept as fact that the polling industry, particularly Democratic pollsters, failed epically in the 2016 and 2020 elections. The predicted “blue wave” never materialized in either election, even though most polls showed Democrats with an insurmountable lead. To many, the industry performed even worse in 2020 than in 2016, given the even larger gaps between predicted and actual votes. Many have asked what changes can be made to avoid a repeat in 2024.
However, the private polling industry is actually performing efficiently given the objectives of its market participants: those who respond to polls and those who produce them. It is failing only for the consumers of the forecasts it provides. But those consumers pay nothing for these services, and, as the saying goes, you get what you pay for.
The discussion of the pollsters’ failure is misguided because it assumes their only objective is to minimize bias, when they are also motivated to tilt fundraising and elections. After all, most pollsters are unofficially aligned with either the Republicans or the Democrats, and telling either side to report the truth is like telling a thief not to steal: a noble recommendation, but not one adhered to, given the incentives involved. Thus, a party may succeed in an election even while producing the more biased polls.
The confusion concerning the polling industry’s performance is due to a more general lack of understanding of how the “data markets” underlying polls work. Both government intervention into the private polling industry and self-regulation through bipartisan bodies will be doomed without an understanding of the incentives that drive these data markets.
Polls are driven by an implicit labor market in which both sides of the market often prefer bias. The supply side of this labor market consists of those sampled. Like any other workers, they are unlikely to show up for work and do a good job for free. Responding at all, or answering accurately, is not their primary goal; quickly getting out of the unpaid work of completing a survey often is. Pew Research finds dismal response rates of around 6% for polls, and others point to large errors among those who do respond, such as shy Trump voters. Higher response rates and smaller response errors are, by definition, central to better polls.
Pollsters make up the demand side of this labor market, but they too have other objectives to balance against accuracy. Turning out polls favoring their side, whether Democratic or Republican, boosts fundraising, as the bidding for future policies and positions goes to the perceived winner. Better polls for your party thus have a self-reinforcing effect: stronger campaign financing further improves your vote.
Given that accuracy is not the only goal on either side of this labor market, is it any surprise the market outcome is biased?
Those relying on polls complain about the bias and dread further inaccurate market outcomes in 2024. The problem is that consumers of forecasts, such as the electorate, pay nothing for the services they want improved. This is the largest obstacle to better polls: those who want more accurate polls do not pay for them.
Improved financial incentives for accuracy are both needed and feasible. To get better performance from survey respondents, economists have long recommended paying them to show up, and reducing their errors by monitoring a small share of them and financially rewarding accurate answers. After all, that is what the IRS does to reduce errors in income reporting. Predicting both turnout and vote choice makes polling harder, but both dimensions can be monitored if the sample is incentivized correctly. Indeed, we routinely reward performance in other labor markets, and we should do the same for survey respondents if we expect them to show up and do good work that does not benefit them personally.
Pay for performance seems to be needed on the demand side, among pollsters, too. They bear no negative financial consequence for biased polls that tilt in their favor. Claiming that reputational concerns will substitute for proper incentives flies in the face of their poor performance. Averaging polls is not the solution, as averaging mainly works if all polls are aiming at the truth, which they are not. An averaged poll may even be more biased, as it simply reflects the market shares and relative biases of the two parties’ polls. In both 2016 and 2020, Republican polls with small errors were closer to the truth than the average poll, which included the larger errors of Democratic polls. More generally, if one side’s error dominates the other’s, the average, midsized error may stray further from the truth than one side’s polls alone. Averaging is useful when small samples randomly deviate from the truth; averaging pollsters who systematically deviate from the truth may be harmful. Rather, contracts that reward a pollster for landing within a range of the truth and penalize misses outside that range would provide better discipline. Stronger financial incentives for accuracy must exist, or else incorrect polls that reflect the tradeoffs of both sample members and pollsters will prevail.
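The distinction between random sampling error and systematic house bias can be made concrete with a short simulation. All the numbers below (true vote share, sample sizes, house leans, market shares) are hypothetical, chosen only to illustrate the arithmetic, not real polling data:

```python
import random

def average_poll_error(true_share, sample_size, house_biases, seed=0):
    """Return the error of the averaged poll.  Each entry of
    house_biases is one pollster's systematic lean; zero means the
    pollster aims straight at the truth."""
    rng = random.Random(seed)
    estimates = []
    for bias in house_biases:
        p = true_share + bias  # respondents drawn with the house's lean
        yes = sum(rng.random() < p for _ in range(sample_size))
        estimates.append(yes / sample_size)
    return sum(estimates) / len(estimates) - true_share

# 20 unbiased polls of 800 people: averaging shrinks the purely
# random sampling error toward zero
err_unbiased = average_poll_error(0.50, 800, [0.0] * 20)

# 14 polls leaning +3 points and 6 leaning -1 point: the average
# settles near the market-share-weighted bias,
# 0.7 * (+3) + 0.3 * (-1) = +1.8 points, no matter how many polls
# are averaged
err_biased = average_poll_error(0.50, 800, [0.03] * 14 + [-0.01] * 6)
```

Averaging cancels the first kind of error but converges on the weighted house bias in the second case, which is why an average dominated by one side’s biased polls can stray further from the truth than the less biased side alone.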
Inaccurate forecasts due to bad incentives extend far beyond polling when political motives are present. For example, economic forecasting suffers from the same problem. Given the name of the person or organization predicting a future economic statistic or the economic impact of a given government program, it is too easy to predict how the forecast will land. Left-leaning economists predicted a stock market crash from a Trump win, and right-leaning economists predicted no deficit effects from his fiscal policies.
Predicting biases from the name of the forecaster is easier for academics, nonprofit groups, and government agencies, who do not sell their forecasts, than for for-profit forecasters, whose paying customers may demand accuracy. Just as in polling, there are no performance-based rewards for much of economic forecasting, so other professional or political objectives often seem to be balanced against accuracy. Across many other disciplines, similar poor incentives, rather than a lack of statistical skill, explain why experts so often miss the mark.
The 2020 election demonstrated, again, that accuracy is not the natural outcome of the data markets that underlie polling. Statistics is not the discipline to solve this problem, as it simply assumes market participants value only accuracy, which they do not. Rather, economics is the field better suited to advance financial innovations that align market incentives with the interests of those relying on the forecasts produced. Unless the industry adopts such innovations, the same story will play out in 2024. Recognizing and admitting the problem is the first of 12 steps to cure the industry’s addiction to biased polls and avoid an intervention by government bodies.
Tomas J. Philipson is a professor of public policy at the University of Chicago. He served as a member and acting chairman of the White House Council of Economic Advisers from 2017 to 2020.