
538’s New Forecast Says The Election Is A Toss-Up

2024/6/11

FiveThirtyEight Politics



November the 5th is your birthday? Literally my birthday. Literally. Oh, man, you got upstaged this year. I get upstaged almost every year because my birthday is always the first week of November. I have canceled my birthday party on multiple occasions because of elections. Oh, man. Well, this year we'll do both. Okay, yeah, yeah, yeah, exactly. And by the way, not to upstage you, you just had a birthday. We were going to launch the model on my birthday, which felt like it could have been cool. Oh, my God, wouldn't that have been fun?

Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druk, and if you are listening to this podcast, it means that the 2024 presidential election forecast has launched.

It shows Biden and Trump with nearly even odds of winning the presidency. Specifically, right now, it shows Biden with a 52% chance of winning and Trump with a 48% chance. For the record, we're recording this around noon on Monday, June 10th, in case the forecast has changed between now and when you're hearing this.

We're going to dig into a lot of what's in the forecast, but if you'd like to check it out for yourself, head on over to FiveThirtyEight.com. You'll be able to see things like this: in 1,000 simulations of the election, Biden wins 517 of them, Trump wins 477 of them, and no one wins in six of them. In other words, in those six there's an Electoral College tie that would be decided by the House.

You can check out the likeliest tipping point states. Like right now, the likeliest is Pennsylvania. No surprises there. But the next most likely is North Carolina. So maybe some surprises there. And you can see that there is a 62% chance the election is decided by a smaller margin than the vote share for the third party candidates.

Let's get into all of it. And here with me to do that is the guy who built the forecast, Director of Data Analytics, Elliot Morris. Welcome, Elliot. How's it going? Hey, Galen. It's nice to see you in the flesh again. It is. We're here in person in the DC studio. We're wearing jackets. We're fancy here in DC. Shirts with collars. Totally. Trying to fit in to the vibe, you know?

So I think people might have a couple reactions to the odds currently being shown by the forecast, as I'm sure you can imagine. So the first might be that Trump leads in the average of national polls and in all the battleground states. So how on earth does Biden have a 52% chance of winning while Trump has a 48% chance?

So the first thing to say is that the difference between a 52 and a 48 is not all that much. In fact, it's just a difference of, like, 0.1 percentage points, because we're at the top of that distribution. So just having a little bit of a different prediction could flip the odds the other way. So don't be surprised if that happens very soon.
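As a quick illustration of that point (a sketch of my own, not 538's code): if final margins are spread around the expected margin, a shift of under a point is enough to move the win probability from 52% to 48%. The nine-point spread below is an assumption borrowed from the poll-movement figures discussed later in the episode.

```python
# A minimal sketch (my own illustration, not 538's actual model): when the
# expected margin sits near the peak of the outcome distribution, a sub-point
# shift in that margin is enough to flip a 52/48 probability.
from scipy.stats import norm

MARGIN_SD = 9.0  # assumed spread of final margins, per the ~9-point movement cited later

def win_probability(expected_margin_pts):
    """P(final margin > 0) if final margins are normal around the expectation."""
    return norm.cdf(expected_margin_pts / MARGIN_SD)

print(round(win_probability(+0.45), 2))  # ~0.52
print(round(win_probability(-0.45), 2))  # ~0.48: under a point of margin flips the odds
```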

But the forecast was different than I expected, certainly. And the difference comes down to the fundamentals. And we'll talk about how they're constructed. But the forecast model is just putting less weight on the polls, I think, than people expect because it's still so early, because there's a chance of a polling error.

If we just ran a polls-only version of the model to give you a direct comparison, we'd be closer to 58% Trump today. So we're at 48%. So that difference is just the fundamentals. And we're, of course, going to dig into those fundamentals and...

the polls. But maybe first I want to ask a more philosophical question, because I'm sure folks are going to tune in and click on the model and click refresh on the model. Hopefully they refresh it constantly. Yeah, constantly. Need those clicks, right? No, but seriously, for folks who, maybe after they get over the surprise that it's either closer than they expected or that, you know, Biden shows a 52% chance of winning the election,

you know, we have known for a while and said for a while that we expect this election to be close. So what does this model add to our understanding? You go to the page and you're like, oh, it's a coin flip. Okay, what does this add beyond that?

Well, I think that it provides us a written-down explanation for why we think it's close. And it's different for you and me, who I would say are more sophisticated readers of the polls. We know that things can change. We have these examples. You're so smart, and you look so smart today. I know. So do you. So do you.

We know, like, from our historical experience over the last eight years of elections, that polls can change a lot and they can be wrong. And so our mental model of the election is actually pretty close to the quantitative models, but that's not true for a lot of political journalists. I mean, I just think over the last six months, people have been putting a lot more weight on the polls, as evidenced by the model, than they probably should have. So having a way to offload our decision-making, the, like, mental switches we're flipping in our heads to figure out how much to trust these polls, into a mathematical model that can be a little bit more objective than we can be on any given day: that's the real value of it.

All right. So what exactly goes into the forecast, before we get into the nitty-gritty of the polls versus the fundamentals and the time horizon and all of that? Okay, this thing that spits out 52-48: what is it? What's going into it?

We can use two main sources of data to predict an election. You could use the polls, which are the best; pretty much at any time horizon, they're the best predictor, but they're not perfect. And if you don't have them in the states where you need them, then you need something else. So the something else we use is a mix of past election results at the state and the national level and a reading of the national economic and political environment.

We call these variables the fundamentals, in political science historically and in how websites like FiveThirtyEight have covered politics in the past. So that's stuff like: hey, is the economy growing? Is there an incumbent president on the ballot? Yes and yes.

How much is the economy growing? Was there a big recession recently? That sort of stuff. And all of that gets blended together in our final model, depending on how far away we are from the election, how much polling data we have in any given state, that sort of thing. Let's talk about the polls first. That's our comfort zone. How much volatility should folks expect in the polls between now and election day? Our model goes back to 1948 to calculate how much

the polls have changed between any given day and the final day, the final polling average. Historically, from 1948 to 2020, the polls moved by about seven percentage points on margin from July, about nine percentage points from June until Election Day, and about 12 percentage points over the whole course of the election. And that line just goes down as you get closer to the election. So we're sitting here expecting the polls to move by about nine percentage points in the average state.

Whoa, whoa, whoa, whoa, whoa, whoa, whoa, whoa. Okay, that is 75 years of elections. And we know that elections have gotten a lot closer, and that the polls have moved a bit less as polarization has set in and there are fewer persuadable voters to sort of swing back and forth. So how do we account for that? Because I know...

When considering favorability, for example, and how popular an incumbent president is, we sort of make a cutoff in 1996 and say from 1996 onward, the electorate has become more polarized.

And so we consider the data since 1996 more sort of seriously when thinking about favorability. Do we do that same thing for the volatility of polls? Yeah, and you picked the right year too. So our model considers elections after 1996, after the Republican Gingrich revolution as more polarized. And so the model, when it's updating, asks itself, like, have these polls been less volatile? So over the entire course of history, since 1948, nine points. That's how much things move between now and election day.

If you go back to 1980, it's still about nine points. Once you get to 2000, it's closer to four or five early on, but there's the same amount of movement later in the campaign. The baseline voter behavior is set in earlier, but people still change their minds because of conventions or something. So you do want to use the historical average error late in the campaign.

Oh, interesting. But you can make your model a little more certain early on. Okay, so today, these days, the polls in, say, September move just about as much as they would have moved in, like, 1960, whatever. Yeah, controlling for how many polls you have, the averages move about the same amount, because averages are more stable now, because we have more polls.
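A quick toy simulation makes that point about averages, under assumptions of my own (a fixed true margin and fixed per-poll noise); this is not the actual averaging method, just the statistical intuition:

```python
# A toy illustration (assumptions mine, not 538's method) of the point about
# polling averages: even if individual polls are as noisy as ever, an average
# built from more polls per day moves around less.
import numpy as np

rng = np.random.default_rng(0)
TRUE_MARGIN = 1.0   # hypothetical stable "true" margin, in points
POLL_NOISE = 3.0    # hypothetical per-poll sampling noise, in points

def daily_average_volatility(polls_per_day, days=200):
    """Std. dev. of a daily polling average over many simulated days."""
    polls = rng.normal(TRUE_MARGIN, POLL_NOISE, size=(days, polls_per_day))
    return polls.mean(axis=1).std()

for n in (1, 5, 20):
    print(f"{n:>2} polls/day -> average moves with sd ~{daily_average_volatility(n):.2f}")
# The volatility shrinks roughly like POLL_NOISE / sqrt(polls_per_day).
```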

There are a lot of situations in which people will say, okay, well, that's history, but these are the same candidates that were running last time. They have 100% name recognition. So is this time different? And that's a question that you can sort of pose to the forecast on many different planes, right?

I know forecasting is not about retrofitting to the last election or just what we know about elections today. It's also being open to uncertainty that we can't yet quantify, because new things happen in elections all the time. The unknown unknowns, Galen. The unknown unknowns. Yeah, you put it more succinctly than my little rant there. How much does the forecast consider history versus unknown unknowns? Let's take a step back. Like, as someone who's trying to predict something in the future, you have...

a prediction, a point prediction, and you have uncertainty. And that uncertainty, in statistical speak, we call an error term. That's, like, how much we think we might be wrong. Really early on, like today, we've said polls could move by nine points. That's a nine-point error term, right? I like to think about our current what-ifs as pushing us into the error term of the model, but we don't know ahead of time whether or not that's the right decision. We only know that after the fact.

In 2020 and 2016, it just so happens that it was the wrong decision to decrease the volatility of the model, controlling for all other factors. And so in those years, you don't get pushed into the error term. But the reason we do a forecast is to simulate what would happen if our historical rules are wrong. That's true for polling error. That's also true for, is this a more stable election? So...

The model will get more stable if opinion is more stable. Right now, it's hedging toward a historical guess. But I just want to be honest. I don't know if this is going to be a more predictable election. That's not the right empirical way to approach the forecast. But maybe things will be more stable and we'll end up somewhere with like a 70%, 80% probability for one of the candidates in September. And we'll look back and go, we were a little too uncertain. I'm okay with being a little too uncertain early on.

So just to clarify this a little bit, say that we zero out the time horizon. It's the day before the election, or it's midnight of Election Day, which I think is when we freeze the forecast historically. TBD. TBD. Okay, TBD. But—

Is it all polls? Is it just considering polls at that moment in time? Or do the fundamentals, which is to say economic indicators, favorability, you know, partisanship of states, et cetera, still carry weight?

Our model doesn't do any ad hoc weighting. So we don't, like, take an average of the polls, get a fundamentals prediction, and do some sort of weighted average of them together. We use Bayes' formula, Bayesian reasoning, in which we combine distributions. I'm getting wonky here. Here's the actual math. On Election Day in Pennsylvania, our tipping-point state: if the polls are exactly tied, then the amount of bias that could happen 95% of the time is about five percentage points. Usually it's less; we assume that polls are unbiased on average, but we're simulating bias. The fundamentals on Election Day have an error closer to 10 or 11 points. So we don't want to discard that data. We know historically it's useful, but it's less useful than the polls.
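To make that concrete, here is a minimal sketch of a precision-weighted (inverse-variance) combination of two normal estimates, assuming independence; the real model is far richer. The error figures follow what Elliot just cited, while the +2 fundamentals margin is my placeholder assumption:

```python
# A hedged sketch of the Bayesian combination Elliot describes; the real model
# is far richer, and the fundamentals margin below (+2 for the incumbent) is my
# assumption based on the "average economy" discussion later in the episode.
def combine_normals(mean_a, sd_a, mean_b, sd_b):
    """Precision-weighted (inverse-variance) combination of two normal estimates."""
    prec_a, prec_b = 1.0 / sd_a**2, 1.0 / sd_b**2
    mean = (mean_a * prec_a + mean_b * prec_b) / (prec_a + prec_b)
    sd = (prec_a + prec_b) ** -0.5
    return mean, sd

# Election-day Pennsylvania, per the episode: polls tied, with a ~5-point
# 95% bias band (sd ~2.55); fundamentals with a ~10.5-point 95% error (sd ~5.36).
polls_mean, polls_sd = 0.0, 5.0 / 1.96
fund_mean, fund_sd = 2.0, 10.5 / 1.96

mean, sd = combine_normals(polls_mean, polls_sd, fund_mean, fund_sd)
print(f"combined margin: {mean:+.2f} +/- {sd:.2f}")  # pulled only slightly off the polls
```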

So if Election Day were tomorrow, what do you think our forecast would show today? We can run a nowcast version of the model, where we don't simulate any future uncertainty, and it's closer to 80% Trump. There's a lot of uncertainty this early on, and this early on, historically speaking, fundamentals are better predictors of elections than they will be in November. So there's more weight on the fundamentals. Okay. So quite a bit of weight on those fundamentals. Let's talk about them. Yeah. The economy.

We just did a podcast in this feed about "It's the economy, stupid": the cultural significance of that phrase and also its actual quantitative significance. There are a lot of economic indicators that you can look at to get a sense of both the actual economy and how Americans are feeling about it. I think I've looked at some of them; it's somewhere close to, like, 40 different indicators. And you can look at absolute values, and you can also look at changes.

Where do you fall? What are the most important indicators for the fundamentals? Historically speaking, the best thing to do is to take a basket of indicators. Not 40. We use 10. We use very frequently updated data sets, monthly updated data sets on stuff like payrolls. Are there more jobs? Is income growing adjusted for inflation? Also, what's inflation at?

And we also use, to your point, consumer sentiment. So we get a reading not just of the objective economy, but of how people are processing it, bringing other information in. And again, historically speaking, the best thing to do to predict an election, to figure out how people are rationalizing about the economy, is to look at how the economy has changed, not just over the last year, but over the last two years of a president's term.

So you look at annual growth in these 10 indicators over the last two years of the president's term. For this year, that means we get a lot of the recent improvement in the economy. It's weighted very heavily. But we're not ignoring that prices have gone up,

or that people, even controlling for how good the economy looks in terms of payroll growth or something, are still pretty sour about the economy. So that all gets averaged together, gets squashed down, and we have a number between minus two and plus two for how good the economy is. Today, that number is zero. It's exactly average. So from that, you would expect an incumbent president to win by about two percentage points on margin.
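As a rough sketch of that blend-and-squash step: the indicators, standardization details, and the margin mapping below are illustrative assumptions of mine, not 538's code; the only anchor from the episode is that an index of zero implies roughly a two-point incumbent margin.

```python
# A simplified sketch of the indicator-blending idea (the indicators, weights,
# and the margin mapping here are illustrative assumptions, not 538's code).
import numpy as np

def economy_index(latest, hist_mean, hist_sd):
    """Average of z-scored indicators, squashed to roughly [-2, +2]."""
    z = (np.asarray(latest) - np.asarray(hist_mean)) / np.asarray(hist_sd)
    return float(np.clip(z.mean(), -2.0, 2.0))

def expected_incumbent_margin(index):
    # Per the episode, an exactly-average economy (index 0) maps to roughly a
    # +2-point incumbent margin; the 2-points-per-index slope is hypothetical.
    return 2.0 + 2.0 * index

# Three made-up indicators: payroll growth, real income growth, inflation (sign-flipped).
idx = economy_index(latest=[1.8, 0.5, -3.0], hist_mean=[1.8, 0.5, -3.0], hist_sd=[1.0, 1.0, 1.5])
print(idx, expected_incumbent_margin(idx))  # 0.0 -> +2.0 points: an average economy
```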

The fundamentals also include things like approval rating or an incumbency advantage, things like that, which, let's talk about. But if you had a fundamentals-only model...

If you had an economics-only fundamentals model, you'd expect the incumbent president to win by about 1.8 percentage points, about two. But what if it was fundamentals only, so also including the approval rating information and the incumbency information that we have today for Biden and Trump? For a fundamentals-only election forecast published by FiveThirtyEight, the top-line probability would be 55% Biden. Okay. So those fundamentals are doing a lot of work. That's sort of what gets us from, I mean, this is all, this is

fundamentally tied, right? So we're talking around the margins. But for people who are like, why is it Biden 52 instead of Trump 52 and Biden 48, or whatever? Well, it was Trump 52 two weeks ago. It moves around a lot. So... Speaking of which, the forecast goes all the way back to April. You can see that on April 1st, Biden had a 60% chance of winning the election. Now, I mean, I'm not that old, but I was around in April of 2024. And...

That was not the vibes in April. Yeah, that was not the vibes in April. Explain yourself, Elliot. I'll explain the model. So here's how the model is reasoning. The reason that Biden's odds have decreased, when perhaps we should have expected them to increase with better poll numbers over the last month, is that the model is putting more weight on the polls, which are on average worse for Biden than the fundamentals over that two-month time span.

So, all else equal, if we keep the fundamentals exactly where they are today, which is Biden plus, well, plus 3.5 if you include the other factors, but plus 1.7 if you're just looking at economics, and the polls, which are Biden minus one nationally, and you just run the election, no more data, let's just see how the confidence intervals converge, you end up with an election that looks better for Trump over time, as the model puts more weight on those polls that are worse for Biden. So that's why Biden starts off pretty good: our model is pretty hesitant about these polls in April, but as time goes on, it puts more weight on them, and his probability goes down.

Okay, so the other things included in the fundamentals, like we said, are approval rating, which...

doesn't seem all that good for Joe Biden. So if you look at our approval rating tracker, you see that Biden has the worst approval rating of any president at this moment in time except for Jimmy Carter and George H.W. Bush, who have something in common: they both lost reelection. So you would think that, okay, if approval rating is part of the fundamentals, that's probably not good for Biden. It's not good for Biden. I told you the economic fundamentals are exactly zero, exactly average. But

Biden's approval rating is closer to negative one standard deviation. It's worse than 86% of past presidents', adjusting for everything else. It is bad. Based on that indicator alone, you'd expect him to lose. So if we ran an approval-only model, it would probably be closer to, like, 75% or 80% Trump, and 20%, maybe 30%, Biden, considering the uncertainty. We do account for polarization, however.

Since 1996, and especially recently, voters put less weight on economic growth when they're deciding whom to vote for, because they're more sorted into their partisan camps. They view the economy through their partisan rose-colored glasses, right? They do the same thing when evaluating their president: oh, maybe Biden's been bad for me, but the other option is further away from me ideologically, not part of my team; I'm not going to vote for that guy even if the economy is not as good as I would want it to be. So that decreases the amount of negative input into the fundamentals from a bad approval rating.

This was the correct decision to make in 2020, when the fundamentals underestimated Trump for most of the year because of a low approval rating in a polarized time and a bad economy in a polarized time, COVID notwithstanding. Yeah, out of curiosity, if you ran the 2020 election through this forecast,

In June of 2020, do you know what it would have shown? Yeah, it would have been 75% Biden at this point. Interesting. Yeah. Okay. Which is, I think, about what we should expect. I mean, I published a model in 2020 that, I think, had too-small confidence intervals on the fundamentals. I think other people did models that had a little bit more uncertainty. So that 75% lands somewhere in the middle of that range.

That gets at an important question here, which is: when it comes to creating forecasts, should people look at this forecast as a codification, in some sense, of how you understand elections to work based on history and data? Or should people understand it as, like, this is the forecast that's written in the stars? Yeah, this is not handed down by God. I'm not the Oracle at Delphi. Okay. What I'm trying to do is really tell readers this:

This is how much weight you should put on the data. This is how much uncertainty you should have at any point. And the process of arriving at that "here's how much uncertainty" number comes with a lot of choices as a modeler. We choose to go back to 1948, like I'm saying, with our uncertainty, because that has worked in the past.

Someone else, I think, could make a reasonable argument that you should only go back to 1999 or whatever. I want more data. I think that that's probably wrong. I'm not certain it's wrong. And so that would change your forecast. Maybe you don't use consumer sentiment in your model. And that makes the fundamentals even better for Biden. But I think that would be bad. So I use consumer sentiment, right? Maybe you don't use approval rating, a political fundamental in the model. I think you're leaving information on the table.

But see what the model says, right? There are nearly unlimited variations. If we had unlimited computational time, we could run thousands of different election models and average them together. We could do a Bayesian model average of our Bayesian model. And I think that would be legit. We don't have unlimited time, and this is our best guess at what the average model should say, but you can disagree around it. That's the statistically sound way to do it. That's fine.

Yeah, we're describing the human part of all of this, in a sense, and you had to make a lot of choices over the past several months that you've been working on this. Like, can I just ask? Years, dude. How are you feeling? I'm so excited to finally release it to the world and to get a good night's sleep.

Amen. Amen. Well, it might sound like we're ending this podcast now, but we are not. We're just taking a break. We're going to come back and talk about some of the more fine-tuned details that folks can look into in the forecast model.


Let's talk about some of the other information that folks can find on the forecast page. First is the tipping-point state, which is a very sexy piece of data, and something we use a lot. Like, Wisconsin has been the tipping-point state for the past two elections. And what that means is, when you line up all of the states in terms of vote share for the winning candidate, it is the state that puts the winning candidate over the edge in the Electoral College: the 270th Electoral College vote.

Today, the forecast suggests that Pennsylvania is the likeliest tipping-point state. Interestingly enough, folks will see, if they go and look, that Texas right now has a better chance of being the tipping-point state than Wisconsin. In terms of the likeliest tipping-point states today, it goes Pennsylvania, North Carolina, Michigan, Georgia, Florida, Texas, Wisconsin.

So the reason that Texas is so much higher than you'd expect is because it has so many electoral votes and there's so much time remaining between now and the election.

The way we calculate the tipping point is to take all of our different simulations of how the election could go. And that takes into account both national movement in the polls and state-specific movement. And for the nerds out there: we think the correlation between the states is around 0.8. That's what the model is saying based on 2020. It might be higher or lower by November, but today it's around 0.82.

So most of the movement in the polls is national, but some is state-specific. So in our 20,000, or what have you, simulations of the election, in those simulations where Texas breaks off and moves more to the left than the nation as a whole, most of the time it becomes the tipping-point state, just because it has so many electoral votes: 40 of them.
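For the nerds, here is a hedged toy version of that correlated-simulation logic, not the real inputs: the state list and polling margins below are made up, while the roughly 0.8 between-state correlation and the nine-point swing come from this discussion.

```python
# A toy version of the simulation logic described here. The state list, polling
# margins, and exact covariance structure are illustrative assumptions; only the
# ~0.8 between-state correlation and the ~9-point swing come from the episode.
import numpy as np
from collections import Counter

states = np.array(["PA", "NC", "TX", "WI"])
ev = np.array([19, 16, 40, 10])             # 2024 electoral votes for these states
margins = np.array([0.0, -1.0, -8.0, 0.5])  # assumed Dem-minus-Rep polling margins

RHO, SWING_SD = 0.8, 9.0
n = len(states)
cov = SWING_SD**2 * (RHO * np.ones((n, n)) + (1 - RHO) * np.eye(n))

rng = np.random.default_rng(0)
sims = margins + rng.multivariate_normal(np.zeros(n), cov, size=20_000)

def tipping_point(sim):
    """State casting the decisive electoral vote, ordered by the winner's margin."""
    winner_margin = sim if ev[sim > 0].sum() * 2 > ev.sum() else -sim
    order = np.argsort(winner_margin)[::-1]          # winner's strongest state first
    cumulative = np.cumsum(ev[order])
    return states[order][np.searchsorted(cumulative, ev.sum() / 2)]

print(Counter(tipping_point(s) for s in sims))
# With big state-specific swings and 40 EVs, Texas can tip despite trailing badly.
```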

So something similar is happening with North Carolina. There's a lot of uncertainty, and it has a good number of electoral votes, more than most of the northern battlegrounds. So it also exerts more influence over the forecast right now than I think people would expect. But if you're asking me, not the model, then I think you should expect those values to converge more toward our expectations as we get closer to Election Day and there's less uncertainty about what the national environment and the ordering of the states will be in November. Yeah.

Okay, another sexy topic here is the Electoral College popular vote split. Ooh, so hot.

Of course, in 2016, Hillary Clinton won the popular vote. Donald Trump won the Electoral College. We've seen that in the past two elections, Republicans have had an advantage in the Electoral College. But prior to that, Obama had an advantage in the Electoral College against Romney.

If you look today at the national polling average, Trump leads by slightly over one percentage point. And if you look at places like Michigan, Wisconsin, and Pennsylvania, he also leads by just about a point or so. So in that sense, it seems like the split between the national popular vote and the Electoral College has narrowed a little bit. Is that what you see?

It's lower. It's definitely lower this year. Right now, the forecast is benchmarking close to a two-percentage-point difference between the popular vote over the whole country and the vote in the tipping-point state. Last time it was about 3.8 points, so that's been cut roughly in half. And as the forecast puts more weight on the polls, which show an even smaller gap, we might expect that to decrease significantly.

But that percentage point you cited, this 11% chance that Biden wins the popular vote but loses the Electoral College, I would expect that to increase as we lose that uncertainty about where the national environment is going to be in November. Most of the time when we rerun these models, assuming no drift in the national environment, et cetera, the map just looks a lot closer to 2020. And there was an Electoral College popular vote divide then of 3.8 points. So we should expect that to be the case. If it's not the case, it'll get overwritten. Yeah. One out of five times that Biden wins the popular vote, he's losing the Electoral College. That is...

Like, a high systemic risk of your party coalition not providing you the outcome that the voters want. And so even that 11%, I think, is not to be discounted. Yeah, and in fact, when it comes to how Democrats might be thinking about this forecast: they may see that and be concerned, or they may also look at it and say,

see, I've been telling you that the polls have been underestimating Biden or whatever. He has a better chance than you thought based on whatever polls are out there. Yeah, that's not what we're saying. Okay, so what are we saying? Yeah, what we're saying is that we don't know what the polling bias is going to be, but historically there is some amount of it. And if you simulate assuming no bias,

on average, but some amount of bias in each individual simulation, and you're also taking into account how the election might change, which is a reversion somewhat towards the fundamentals, then Biden's doing better. The forecast does not know anything that we can't mathematically test historically about potential bias this year in the surveys.

So to be specific, it doesn't know, for example, my partial belief that Biden's doing worse in the polls among young people because of, like, protest polling, or expressive responding, as the pollsters would call it: where people, young people especially, but Democrats in general, have a higher-than-average probability of saying they won't vote for the incumbent because they're dissatisfied with him,

but would come home somewhat by the time the election happens. This happened somewhat with Republicans in 2020, who early on,

were somewhat down on Trump because of COVID and the economy, but came home. And that year, we saw a polling bias in a similar direction, but for an entirely different reason, which is that Democrats weren't answering polls. But what you're saying is, you think that may be the case, but the forecast is not suggesting that at all. The forecast doesn't make any predictions about polling bias this year. It only asks the question: what could happen if the polls are wrong in some direction, by some amount, following the historical distribution of polling bias?

So if you want to tell me, also, by the way, polls are even worse now, and you have a good statistical, mathematical argument for that, you might say my model is underestimating the chance of a big error. Make that argument.

And we'll rerun the model and we'll see what it says. We can totally do that. But we're not going to make any pronouncements about the bias, because historically you don't want to guess average bias before the election. So we don't do it.

And to your point about history: in the past two presidential elections, there has been some high-profile error. The error was significantly larger in 2020 than it was in 2016, but the consequences of the error in 2016 sort of got the headlines, which maybe sowed some of the distrust in polling. I know you just said we don't, and basically can't, account for polling error in one direction or another because it is— Well, I can account for it, but I'm not going to make an average guess. It's unpredictable. It's going to be unbiased on average. Yeah. Is there anything that this forecast does

to specifically account for the things that happened in 2016 and 2020? So, there's no major revision in this model framework after 2016 and 2020. Every year we're running the model, it looks at how much error there's been historically in the polls and how correlated that error is between the states. So if a polling miss favors a Democrat in Wisconsin, it probably favors Democrats in Minnesota as well, a similar state. So that, like, fundamental approach to simulating polling error is the same. That's the one that gives you the best forecast, empirically speaking, over, you know, the life of post-war elections in America. But

The key thing is we do look at the historical amount of error when we're making forecasts on the future. So the amount of bias, potential bias we simulate in the polls today in the year of our Lord 2024 is higher than in 2016 because we have observed larger misses in 2016 and 2020. So it does take into account the higher than average amount of error over the last two election cycles, but it's not like

we are intentionally putting less weight on the polls or anything. It's just that the best predictor of the polling bias this year is the historical average. And we have now observed history in 2016 and 2020. Now is the part where this becomes a job interview. What does success look like for this forecast? The reason that I do this is probably different than the reason other people do this. We all have our own

reasons for modeling the outcome before the election. I do it because I don't think polls are perfect predictors of election results. Shocker. Shocker. On average, they are unbiased, but there's a lot of years where we end up in the error term. And I want to be able to tell readers before an election that

Here's how much uncertainty you ought to expect. I think this tool makes people smarter about anticipating outcomes for the election. Take 2022, when there was a lot of polling bias favoring Republicans in key Senate races, for example; not on average that year, but in key races.

If you just look at a polling average, you don't know that that's a possibility. You see a point prediction, Republicans winning Pennsylvania by 0.1 percentage points or something, and that could be your takeaway. But if you do a model where you simulate error across the whole country and how the election might change, then you end up with a distribution of realities that could come true, based on the uncertainty of how we measure the world. And I think that just makes us a lot smarter about election results. That's the whole point

of having this data and this computational capacity at our fingertips. So we might as well try to be a little smarter. But what I'm not doing is telling people I have the best possible prediction of every single election outcome ever.

Or that I can offer you laser-like predictive accuracy about the outcome. I don't think anyone can do that. So that's why we have a tool to take the uncertainty into account. Yeah, I mean, I led with the numbers, 52-48. But what folks will see right after those numbers is a histogram that charts all of the different possible outcomes when you run 1,000 simulations. And maybe that's... Well, we run 20,000 and we pick 1,000 of them for you randomly, so that you don't have to have 20,000 little dots on your page. So that's why. Right. So while the numbers are sexier, the histogram is really the whole point of this, which is

you should imagine that each of these little dots is a possible outcome, and not that, well, because Biden has a 52% chance, that means he's going to win and Trump is going to lose. Or that he's going to win 52% of the vote, right? Well, certainly not that. Certainly not that. What we're trying to say, the reason we're doing the model, and I think the histogram does a really good job of this, is: based off of how much polling volatility there has been at this point in the cycle historically, plus how biased the polls have been historically, but especially recently, you should expect, given that the polls could move or be wrong, this distribution of potential outcomes. That in very few scenarios, Biden might win 414 electoral votes. That usually he wins more than 270, but just barely. And that, you know, Trump could win 400 as well.

And as a point of clarification here, the forecast does predict the actual vote share for Biden and Trump. And right now, you know, the likeliest outcome per the forecast is Biden with 47% and Trump with 45%. And with plenty of uncertainty on that too. Yeah, people will see those uncertainty intervals for our vote projection. Yeah. So, Elliot, your work is not done here.

There are still going to be down-ballot races. And there'll be more that we add to this page, too. We have more graphs in the queue to add. We'll have a graphic of the Electoral College popular vote divide that the visualizers are cooking up as we speak. And there will be plenty of other...

pretty nerdy, I think, graphs to offer. Like, what if we assume polling bias is less? What would our model say then? I mean, this is getting really meta, but you can imagine a graph similar to the histogram where each ball is a different model you could run of the election. And those approach some distribution as well, giving you uncertainty about our uncertainty. It's just forecast eating forecast eating forecast. I think that's for a pretty niche audience, but I think it would be pretty cool and we could do it for journalistic reasons.

So have you started working on the Senate and House forecasts? Yes, obviously. And do you want to give us a little hint as to what they show or at least when they might come out? We haven't gotten that far yet. Okay. When they'll come out? Sometime before November 5th. Yeah. Okay.

Oh, wow. Okay. So not November 6th. It would be disastrous if it came out on November 6th. But it would be right. I mean, you know what? Screw it. I'm going to make my own forecast. It's coming out on November 6th. Actually, will we know on November 6th? The chance that it is close enough that there is a recount is 5%. And again, we're early, so...

That number might be low right now because there are a lot of simulations where one candidate is winning by a landslide. My prior, like, outside of the model, and I think your prior too, is that the election will be pretty close. Data speaking, that's probably not the way to go. But if you want to put that prior in the model, that 5% is probably closer to 10 or 15, if you assume a much tighter distribution of outcomes, which is what the model will do in, like, September. So there's also a really high chance that...

that a third-party candidate spoils the election, which adds a layer of uncertainty about vote recounting or assigning different ballots. Well, spoils the election, or wins a percentage that is larger than the margin between the two major-party candidates, which leaves us questioning whether...

We'll get to talk about this in another podcast, I'm sure. But technically speaking, it's a larger amount than the margin between the two candidates. We're going to have a lot to talk about. We should probably play a game of, like, Elliot versus the forecast: what does the forecast say that Elliot disagrees with? Well, also, there are going to be a lot of forecasts out there this cycle. There's already another one out, by Decision Desk. We can talk about some of the other forecasts that are out there and what they show. Yeah.

And we're going to be adding new tools and publishing down-ballot races. So this is not the last of our conversations, Elliot. It's only the first. Hang out in D.C. more often, Galen. I know. Thank you so much for welcoming me here. It's been really lovely. We had a whole parade just for you. I was flattered. It was lovely. There were lots of people out in the street, lots of bright colors. People seemed like they were having a really great time. And to know that it was all for me. It's all for you, buddy. Let's leave it there for now. Thank you, Elliot. Thanks, Galen.

My name is Galen Druk. Our producers are Shane McKeon and Cameron Chertavian, and our intern is Jayla Everett. Mike Claudio is on video. You can get in touch by emailing us at podcasts at 538.com. You can also, of course, tweet at us with questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcast Store or tell someone about us. Thanks for listening, and we will see you soon. Bye-bye.