Seth Weingram, Ph.D.
Senior Vice President, Director, Client Advisory
Heading into the 2016 elections, we were inundated by data: polls, predictive models, betting odds, and constant media commentary on the numbers. The surprising results have triggered a great deal of discussion about why most forecasts were off the mark and about the sources of overconfidence in their precision. These debates are not an indictment of quantitative forecasting methods in politics. Rather, they’re part of a healthy process of refining predictive models in a remarkably complicated application.
1. Test Model Assumptions, Not Just Predictive Accuracy
As other observers have pointed out, many election prediction methodologies depended on hidden and ultimately incorrect assumptions regarding turnout or voting tendencies in various demographics. It’s crucial to recognize the assumptions of a model and to understand forecast sensitivity to them. One can easily overfit a forecasting model based purely on historical predictive accuracy.
One must always consider the intuition and assumptions behind forecasting models. For example, some machine learning and black-box approaches don’t provide much insight into what’s driving their predictions, a material drawback that one must weigh against other benefits.
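To make forecast sensitivity concrete, consider a toy sketch that stresses a single hidden assumption, the turnout rate of one voter group. All figures below are invented for illustration and do not reflect any actual polling data.

```python
# Hypothetical sketch: how sensitive is a two-group vote-share forecast
# to an assumed turnout rate? All numbers are illustrative, not real data.

def forecast_share(support_a, support_b, pop_a, pop_b, turnout_a, turnout_b):
    """Projected vote share for a candidate given per-group support and turnout."""
    votes = support_a * pop_a * turnout_a + support_b * pop_b * turnout_b
    ballots = pop_a * turnout_a + pop_b * turnout_b
    return votes / ballots

# Baseline assumption: group B turns out at 55%.
base = forecast_share(0.42, 0.62, 1.0, 1.0, 0.65, 0.55)

# Stress the assumption: what if group B turnout is 65% instead?
stressed = forecast_share(0.42, 0.62, 1.0, 1.0, 0.65, 0.65)

print(f"baseline: {base:.3f}, stressed: {stressed:.3f}")
```

In this toy setup, a ten-point shift in one group’s turnout moves the projected vote share by nearly a full point, enough to flip a close race, even though the model’s historical accuracy would look identical under either assumption.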
2. Don’t Overestimate Input Data Quantity Or Quality
Political observers and the media now closely monitor real-time reactions and prediction markets in the search for clues to election sentiment. But we suspect that many were excessively reliant on these metrics without fully understanding their breadth or quality. For example, while prediction markets may seem responsive to news, some trade thinly, draw from narrow demographic and geographic trader populations, or limit wager amounts. And in an October paper on financial market perceptions of the U.S. election, professors Justin Wolfers and Eric Zitzewitz noted that pollsters were live-tweeting reactions based on tiny focus groups of as few as 30 people.1 Such narrowly sourced, sketchy information, reverberating across multiple media venues, may have created a misimpression of strong consensus among voters and markets.
Although we’re in an era characterized by data proliferation, new information often turns out to be redundant. Therefore, forecasters must assess the value of new data not just on a standalone basis, but in terms of whether it is additive relative to other sources already captured.
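One simple way to gauge whether a new data source is additive is to ask how much of it is already explained by the inputs one has. The sketch below uses entirely synthetic series: a "new" signal is constructed to be mostly redundant with an existing one, and a regression shows how little genuinely new variation it carries.

```python
# Illustrative sketch (all series are synthetic): before adding a new data
# source to a forecast, check how much of it is already explained by
# existing inputs. Here we regress the new signal on an existing one and
# measure the share of its variance left unexplained.

import random

random.seed(0)
existing = [random.gauss(0, 1) for _ in range(500)]
# A "new" signal that is mostly redundant: 0.9 * existing plus a little noise.
new = [0.9 * x + random.gauss(0, 0.3) for x in existing]

def ols_slope(x, y):
    """Slope of the least-squares line of y on x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

beta = ols_slope(existing, new)
residuals = [b - beta * a for a, b in zip(existing, new)]

# Share of the new signal's variance that is genuinely new information.
incremental = variance(residuals) / variance(new)
print(f"unexplained share of new signal: {incremental:.2f}")
```

With these synthetic inputs roughly 90% of the "new" signal is already captured by the existing one, which is exactly the kind of redundancy a forecaster should detect before treating an additional source as independent confirmation.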
3. Thoughtfully Categorize Your Data
In the political context, a poll-based forecast is driven by 1) the propensity of voters within each category to support a given candidate (i.e., a partisan score), and 2) each category’s predicted turnout (its electoral weight). Forecast error may creep in via either component. Regarding the first, it is critical to categorize subjects in terms of visible and verifiable characteristics that can be associated with the outcome variable of interest, in this case partisan voting preference.
A variety of factors make polling difficult, especially in large-scale elections: small samples, the decline of telephone landlines, and self-selection in online formats, to name a few. Many political predictions, therefore, try to model partisan tendencies associated with demographic categories, such as race, gender, ethnicity, income, and geography. In the 2016 presidential election, voting preferences in key demographic groups largely conformed to prior patterns, although Clinton won lower percentages of minority and younger voters than Obama did in 2012.
But some groups of voters prove difficult to model. One example would be the 12% of Trump voters who approved of Obama’s job performance, according to exit polls. Another, apparently crucial, example would be whites with no college education, who as a group voted 15-20% more Republican than in 2012. On a state-by-state basis, underestimation of Trump’s support in polls was highly correlated with the prevalence of this demographic.
Perhaps the preferences of this midwestern electorate could have been more accurately modeled based on alternative combinations of gender, racial, geographic, and educational characteristics.
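As a stylized illustration of why category granularity matters, the sketch below (all figures invented) pools two subgroups with different partisan scores into one category. A model that never defines the education split cannot react when turnout shifts within it, while the finer categorization captures the move.

```python
# Hypothetical illustration of category granularity. Numbers are invented.
# A pooled "white voters" category carries one partisan score; splitting by
# education reveals two subgroups with different preferences, so a turnout
# shift in one subgroup moves the vote in a way the pooled model cannot see.

# Subgroups with (population share, GOP support) -- illustrative only.
college = {"size": 0.45, "gop_support": 0.45}
no_college = {"size": 0.55, "gop_support": 0.62}

def gop_share(turnout_college, turnout_no_college):
    """GOP vote share among this electorate given subgroup turnout rates."""
    votes = (college["size"] * turnout_college * college["gop_support"]
             + no_college["size"] * turnout_no_college * no_college["gop_support"])
    ballots = (college["size"] * turnout_college
               + no_college["size"] * turnout_no_college)
    return votes / ballots

# A pooled category implicitly assumes uniform turnout across subgroups.
pooled = gop_share(0.60, 0.60)
# Actual scenario: non-college turnout surges relative to college turnout.
actual = gop_share(0.60, 0.70)

print(f"pooled estimate: {pooled:.3f}, with turnout shift: {actual:.3f}")
```

The difference between the two outputs is forecast error attributable purely to having defined the category too coarsely, not to any error in the subgroups’ measured preferences.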
4. Carefully Weight Your Inputs
Categories are important for classifying observations, but the weights on those categories are even more important. As applied to polling, voter turnout projections are the weights one applies to different categories of voters. After the 2012 election, Sean Trende of the website RealClearPolitics identified an estimated six million white voters, out of 2008’s electorate of nearly 130 million, who did not appear to participate. Most 2016 polls assumed that these voters again wouldn’t show up. But they turned out in large numbers. Combined with a greater propensity to vote Republican, this unexpected turnout more than compensated for Trump’s losses among highly educated voters.
Differences in turnout assumptions are an important source of variation between forecasts. While in 2016 most models seemed to assume that the electorate would resemble 2012’s or 2008’s, one notable exception was Nate Silver’s FiveThirtyEight.com, which ran simulations over a variety of turnout scenarios. The site forecast substantial uncertainty over the election due to the potential variability in state-by-state outcomes associated with small shifts in turnout. Consequently, FiveThirtyEight produced one of the lowest probabilities of a Clinton victory among data-driven forecasters: 71% on election morning. A good forecasting process puts considerable emphasis on the weightings and is aware of forecast sensitivity to their variation.
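A scenario-based approach of this general flavor can be sketched in a few lines. The states, margins, and volatilities below are entirely hypothetical, and the model is far simpler than any production election forecast; the point is only that a shared turnout shock, applied across correlated state outcomes, widens the distribution of simulated results.

```python
# Toy Monte Carlo over turnout scenarios. All states, electoral votes,
# margins, and volatilities are invented for illustration.

import random

random.seed(42)

# (electoral votes, mean Dem margin, state-specific margin std)
states = {
    "A": (20, 0.02, 0.03),
    "B": (16, 0.01, 0.03),
    "C": (10, -0.01, 0.03),
}
TOTAL_EV = sum(ev for ev, _, _ in states.values())  # 46 in this toy map

def simulate(n=20_000):
    """Fraction of simulations in which the Dem candidate wins a majority."""
    dem_wins = 0
    for _ in range(n):
        # Shared national turnout/swing shock: small shifts move all states
        # together, fattening the tails of the electoral-vote distribution.
        national = random.gauss(0, 0.02)
        ev = 0
        for votes, mean, std in states.values():
            if mean + national + random.gauss(0, std) > 0:
                ev += votes
        if ev > TOTAL_EV / 2:
            dem_wins += 1
    return dem_wins / n

win_prob = simulate()
print(f"simulated win probability: {win_prob:.2f}")
```

Setting the shared shock’s volatility to zero makes the states nearly independent and pushes the favorite’s win probability up, which is one mechanical reason models assuming uncorrelated state outcomes reported higher Clinton probabilities.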
5. Balance Model Reactivity And Robustness
On the night of the election, the New York Times’ Upshot model swung from less than a 10% probability of a Trump victory to over 90% in a span of two hours. The FiveThirtyEight model, in contrast, updated much more slowly, moving from 30% to 70% over the same time period. We attribute the difference to the Upshot model’s greater emphasis on polls and more recent data, while FiveThirtyEight’s polls-plus model tended to “shrink back” (in forecasting parlance) state-level projections to a set of prior beliefs that reflected economic conditions, past voting records, and other characteristics.
As well, the FiveThirtyEight model assumed that states’ voting profiles would be more highly correlated than the Upshot model did. In our view, the FiveThirtyEight approach was more accurate earlier in the campaign, never giving Trump less than a 10% chance of winning the election. A forecasting process must take noisy data into account, whether in investing or in politics. There are a few ways to do this. One can use priors about the data, based on the past, other samples, or beliefs about behavior. One can make sure the model is robust to changes in the data or slight variations in variables. And one can put bounds on results, or average different high-quality forecasts. Such approaches help mitigate the impact of noise and should improve longer-run accuracy.
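The "shrinking back" idea can be illustrated with a precision-weighted average of a noisy poll reading and a fundamentals-based prior. This is a textbook Bayesian blend of two normal estimates, not a description of FiveThirtyEight’s actual method, and all figures are invented.

```python
# Sketch of "shrinking back" a noisy poll toward a prior via a
# precision-weighted (inverse-variance) average. Figures are illustrative.

def shrink(poll_margin, poll_var, prior_margin, prior_var):
    """Posterior mean: weight each source by the inverse of its variance."""
    w_poll = 1.0 / poll_var
    w_prior = 1.0 / prior_var
    return (w_poll * poll_margin + w_prior * prior_margin) / (w_poll + w_prior)

# A single noisy state poll reads +6; a fundamentals-based prior says +1.
blended = shrink(poll_margin=0.06, poll_var=0.0016,   # poll std ~4 points
                 prior_margin=0.01, prior_var=0.0009)  # prior std ~3 points

print(f"shrunk estimate: {blended:+.3f}")
```

Because the prior here is more precise than the single poll, the blended estimate lands much closer to +1 than to +6; as more independent polls accumulate, the poll component’s effective variance shrinks and the estimate migrates toward the polling average, which is why such models react to data, just slowly.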
Conclusion: Quantitative Modelling Is An Ongoing Process
We don’t see pollsters, or for that matter, the wider public, abandoning data-driven analysis anytime soon. One of the greatest virtues of such approaches is that they allow for dispassionate, disciplined analysis of their failures. In coming months, pollsters will undoubtedly sift through the election results and review their approaches, including model specifications, data sources, and sample selection methods. This process must be done carefully, guarding against overreaction to an outcome that represents only a single data point. But we believe that it is the ability to scientifically analyze model performance that will drive their success in the long run.
1 Justin Wolfers and Eric Zitzewitz, “What Do Financial Markets Think of the 2016 Election?,” Brookings Institution, October 21, 2016.
General Legal Disclaimer
Acadian provides this material as a general overview of the firm, our processes and our investment capabilities. It has been provided for informational purposes only. It does not constitute or form part of any offer to issue or sell, or any solicitation of any offer to subscribe or to purchase, shares, units or other interests in investments that may be referred to herein and must not be construed as investment or financial product advice. Acadian has not considered any reader’s financial situation, objective or needs in providing the relevant information.
The value of investments may fall as well as rise and you may not get back your original investment. Past performance is not necessarily a guide to future performance or returns. While Acadian has taken all reasonable care to ensure that the information contained in this material is accurate at the time of its distribution, no representation or warranty, express or implied, is made as to the accuracy, reliability or completeness of such information.
This material contains privileged and confidential information and is intended only for the recipient/s. Any distribution, reproduction or other use of this presentation by recipients is strictly prohibited. If you are not the intended recipient and this presentation has been sent or passed on to you in error, please contact us immediately. Confidentiality and privilege are not lost by this presentation having been sent or passed on to you in error.
Acadian’s quantitative investment process is supported by extensive proprietary computer code. Acadian’s researchers, software developers, and IT teams follow structured design, development, testing, change control, and review processes during the development of its systems and their implementation within our investment process. These controls and their effectiveness are subject to regular internal reviews and to at least annual independent review by our SOC1 auditor. However, despite these extensive controls, it is possible that errors may occur in coding and within the investment process, as is the case with any complex software or data-driven model, and no guarantee or warranty can be provided that any quantitative investment model is completely free of errors. Any such errors could have a negative impact on investment results. We have in place control systems and processes which are intended to identify in a timely manner any such errors that would have a material impact on the investment process.
Acadian Asset Management LLC has wholly owned affiliates located in London, Singapore, Sydney, and Tokyo. Pursuant to the terms of service level agreements with each affiliate, employees of Acadian Asset Management LLC may provide certain services on behalf of each affiliate and employees of each affiliate may provide certain administrative services, including marketing and client service, on behalf of Acadian Asset Management LLC.
Acadian Asset Management LLC is registered as an investment adviser with the U.S. Securities and Exchange Commission. Registration of an investment adviser does not imply any level of skill or training.
Acadian Asset Management (Japan) is a Financial Instrument Operator (Discretionary Investment Management Business). Registration Number: Director-General Kanto Local Financial Bureau (Kinsho) Number 2814. Member of Japan Investment Advisers Association.
Acadian Asset Management (Singapore) Pte Ltd, (Registration Number: 199902125D) is licensed by the Monetary Authority of Singapore.
Acadian Asset Management (Australia) Limited (ABN 41 114 200 127) is the holder of Australian financial services license number 291872 (“AFSL”). Under the terms of its AFSL, Acadian Asset Management (Australia) Limited is limited to providing the financial services under its license to wholesale clients only. This marketing material is not to be provided to retail clients.
Acadian Asset Management (UK) Limited is authorized and regulated by the Financial Conduct Authority (‘the FCA’) and is a limited liability company incorporated in England and Wales with company number 05644066. Acadian Asset Management (UK) Limited will only make this material available to Professional Clients and Eligible Counterparties as defined by the FCA under the Markets in Financial Instruments Directive.