Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druk. The 2024 presidential campaign is approaching its final crescendo. For the next five and a half weeks, Americans will be bombarded with news, political ads, and of course, polls.
We cover polls year-round here at FiveThirtyEight, but over the next month or so, pollsters will release some of the most high-profile polls they will ever conduct. They'll generate clicks and conversation and misinterpretation. If history is any indication, partisans will look at new polls and respond with either glee or despair, only for those emotions to reverse with the next round of polling.
Before we get into October and the news cycle gets even more frantic, we're going to spend an episode returning to some of the fundamentals of polling.
And we've assembled a list, the 10 elements of reading polls responsibly. If you're a longtime listener to this podcast, some of these tips will sound familiar, but I think hearing them distilled to the essentials will give you some added nuance. And if you're new to the show, well, first of all, welcome. This is an excellent place to start. It'll give you a sense of how we think about data and politics here and just how nerdy we are.
To do all of that, I am joined by senior elections analyst Nathaniel Rakich. Welcome to the podcast, Nathaniel. Hey, Galen. Thanks for having me. It's so great to have you, in fact, because you have crafted this guide, which folks can also read on the website. And I just want to start off. We'll get into the 10 tips in a second. But why did you think it was important to create this guide in the final stretch of the 2024 campaign?
We are in silly season for campaigns. And just like you, exactly like you said in the intro, people are going to overreact to every poll. And there are people out there who will run with something either, you know, on purpose to misinform or because they just kind of don't really know the nuances of polling. And we want to put out a guide for people to basically read polls in a level-headed way.
Not to get kind of too caught up in every single swing, and also to understand what they can't do in addition to the very real powers that they do have, which, of course, is kind of the whole raison d'être of 538. OK, so let's dive right in. The first tip of reading the polls responsibly is to check who conducted the poll.
We want to be sure that the poll is from a trusted, high-quality pollster. And we have pollster ratings on our website with hundreds of pollsters on there. The number one rated pollster today is The New York Times and Siena College. Nathaniel, how are these ratings calculated? And maybe more broadly, what makes a good pollster? For our purposes at FiveThirtyEight, we care a lot about accuracy.
The easiest way to see if a pollster is good and reliable is to look at how good and reliable they have been in the past. So a big component of the pollster ratings is how close that pollster has gotten to the election result in the last X elections, however many
farther back they go. And of course, the longer a track record of accuracy a pollster has, the higher their pollster rating. The other thing that goes into our pollster ratings is transparency. We have a set of 10 questions that we ask, basically: does the pollster do this? Does the pollster do that? Or do they disclose that they do this, and provide adequate answers about things like releasing crosstabs and all the groups that they weight by, things like that.
The higher a pollster's transparency score, the higher their pollster rating as well, because sometimes a pollster might only have a track record going back a couple of years, but we know empirically that pollsters that are more transparent tend to be more accurate over time. And so we can kind of use that as a proxy.
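For readers who like to see the idea in code, here is a minimal, purely illustrative sketch of blending an accuracy component and a transparency component into one rating. The function name, weights, and 0-100 scales are assumptions for demonstration, not FiveThirtyEight's actual formula.

```python
# Purely illustrative: combine historical accuracy and transparency into one score.
# The weights, scales, and cutoffs here are assumptions, not 538's real methodology.

def pollster_rating(avg_error_pts: float, n_polls: int, transparency: float) -> float:
    """Higher is better. avg_error_pts is historical polling error in points;
    transparency is assumed to already be on a 0-100 scale."""
    # Accuracy component: 0 points of error maps to 100, 10+ points of error maps to 0.
    accuracy = max(0.0, 100.0 - 10.0 * avg_error_pts)
    # With a short track record (few rated polls), lean more on transparency as a proxy.
    weight_on_accuracy = 0.7 * min(1.0, n_polls / 50)
    return weight_on_accuracy * accuracy + (1 - weight_on_accuracy) * transparency

# An established, transparent pollster with about 3.5 points of average error:
print(round(pollster_rating(avg_error_pts=3.5, n_polls=200, transparency=95), 1))
```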
Once upon a time, one of the components that we thought about when judging a pollster was their methodology. Do they use live humans contacting voters or registered voters by either landline phone or cell phone? Usually a mix of the two.
But recently, we've come to conclude that there actually isn't an advantage to doing that. And pollsters have iterated and included mixed methods, text messages, emails, you know, even sending snail mail. So when we're thinking about quality of pollster today, are there any rules to methodology?
Yeah. So as you mentioned, there was for a long time this idea that the gold standard of polling was calling people over the phone using live interviewers. But we found in our research that those types of polls actually aren't significantly more accurate anymore than online polls. And that's, of course, because phone polling has its own challenges as well, with low response rates. People have caller ID now, and they'll often screen those calls.
One thing to keep in mind is that I think there is a conception among people that polling is just, you know, somebody is making dinner and then the phone on the wall rings and they pick it up and answer a poll that way. It's not just landlines. Obviously, most people in the country use cell phones now instead of landlines, and basically all pollsters call cell phones now in addition to landlines. That's a common critique that I see of polls: oh, how can it be accurate when you're just calling landlines? That's not what they're doing.
All right, so a key part of tip number one is using FiveThirtyEight's database and looking at the accuracy of pollsters and their transparency over time. What about when, say, there's a new pollster on the scene and they don't have a rating from FiveThirtyEight?
Anytime there's a new pollster that starts releasing polls, we at FiveThirtyEight will reach out to kind of vet them and give them a series of questions that any valid pollster should be able to easily answer before we include them on our polls page. And so we include all valid pollsters on our polls page, but that doesn't necessarily mean that they are
high quality or that they're going to be accurate. So with new pollsters, or pollsters that have maybe been around for a few years but didn't have enough polls for us to give them a rating, we just recommend that, you know, you can listen to them, but take them with a grain of salt. You should put more trust, kind of more weight in your mental model, just as we do in our actual literal model, on those pollsters with established track records from highly accurate institutions like the New York Times and Siena College, like Marist College, those sorts of places.
All right, guideline number two for reading the polls responsibly is check who sponsored the poll. A lot of polls are sponsored by partisan groups, and we label those on our website with a little red or blue diamond. And that makes sense. Maybe we want to be skeptical of data that is being pushed out by groups with an agenda.
But at the same time, if I look at our pollster ratings, there are quite a few partisan pollsters who rank fairly well. So on the left, you have Data for Progress, whose mission is helping progressive causes. It's number 26 on our ratings. On the other side, you have Remington Research Group, which is a top Republican pollster ranked at number 30. That would imply that you shouldn't just outright ignore partisan sources of data. So what are we supposed to do when we see that a poll has some partisan affiliation?
Again, the important thing is just to kind of be aware of it and factor it in mentally. So we found at FiveThirtyEight that polls with partisan sponsors have historically been about four to five points better for that side than they should be.
You know, you'll see this all the time, right, is the campaign releases a poll that shows them leading by one point or something like that. When I see a poll like that, I'm like, oh, OK, this means you're actually probably losing and you're just releasing this one poll that shows you leading because you want to create a narrative that like you're competitive or that you're ahead. Right.
As you mentioned, it isn't necessarily because these pollsters are bad or inaccurate, although I think that the sponsors of the poll are trying to mislead, and, you know, they always have an agenda when they release these polls. But a lot of partisan pollsters will work privately with the campaign, right? And they'll provide them with a lot of data. And then the campaign will decide, oh, this one poll that we got that shows us leading, versus the four other polls that show us behind, we're going to release just this one.
And so you just have to bear in mind that you're maybe not seeing the complete picture with partisan polling. There are also some partisan pollsters that will, I think, maybe make favorable turnout assumptions when they're making their likely voter model. If you're a Democratic pollster, maybe you'll say, oh, the black share of the electorate is going to be higher than an objective analyst would think it is.
The important thing to bear in mind is that these pollsters aren't necessarily, yeah, they're not necessarily worse quality, but the sponsors, the people who pay for them, they're only giving you data that they want you to see, and they're probably using it to shape a narrative. And it's always good to ask yourself, why am I seeing this poll? What is the narrative they are trying to build? And how do we...
as a newsroom, process partisan polls when it comes to our averages? Do we dock them based on the average expectation of their partisan bias, or do we exclude them altogether? We include all polls regardless of their partisanship, but we do adjust for that partisanship. So a Republican pollster, you know, we would shave, you know, five points off the margin or whatever it is; the exact number can vary depending on what we found historically. Right, unless they prove to be perfectly accurate or unbiased
over time, right? So the starting assumption, if it's a brand new pollster, they get docked that full average. But if they prove themselves to be, you know, rigorous, transparent, accurate over time, then that expectation that they behave in a partisan way decreases and can decrease to zero.
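To make that adjustment concrete, here is a minimal sketch, assuming a flat historical sponsor bias of about 4.5 points that decays as a pollster builds a rated track record. The specific numbers, the function, and the decay rule are illustrative assumptions, not the exact parameters FiveThirtyEight uses.

```python
# Illustrative only: dock a partisan-sponsored poll's margin by an assumed
# historical bias that shrinks as the pollster proves itself over time.

HISTORICAL_SPONSOR_BIAS = 4.5  # points in the sponsor's favor (assumed)

def adjusted_margin(raw_margin: float, sponsor: str, n_rated_polls: int) -> float:
    """raw_margin is Democrat minus Republican, in points.
    sponsor is 'D', 'R', or '' for a nonpartisan sponsor."""
    if sponsor not in ("D", "R"):
        return raw_margin
    # A longer accurate track record earns back trust, so the expected lean shrinks.
    trust = min(1.0, n_rated_polls / 100)
    lean = HISTORICAL_SPONSOR_BIAS * (1 - trust)
    return raw_margin - lean if sponsor == "D" else raw_margin + lean

# A brand-new Democratic-sponsored poll showing D+1 gets treated more like R+3.5...
print(adjusted_margin(1.0, "D", n_rated_polls=0))
# ...while the same result from a well-established pollster keeps most of its lead.
print(adjusted_margin(1.0, "D", n_rated_polls=80))
```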
Right, exactly. Although it's worth making a distinction between the pollster and the sponsor, right? So like polls with partisan sponsorships are docked by X amount of points, depending on whether they're Republican or Democratic. But you can have a partisan sponsored poll that comes from a high quality pollster, and that poll can still get a lot of weight based on a track record of accuracy, like you said. All right. Tip number three is to pay attention to who was polled.
Sometimes it's useful to survey a particular demographic group or a swing state, but it's also important to remember that those results then don't represent the country as a whole. Another example of this, one that matters a lot this year, is the difference between a poll of registered voters versus one of likely voters. So Nathaniel, why does this distinction matter so much? The simple reason is that we know that not all registered voters or certainly not all adults are going to vote.
turnout is a thing. And even in presidential elections, generally only two-thirds of eligible voters will turn out. That population who votes, and especially in lower turnout elections, looks different, right? They're not representative of the country as a whole. And so that's why when we talk about, quote, likely voter models, which I realized I said earlier and didn't fully explain, but I'm here to explain it now, that's when a pollster
say, they call up registered voters based on a list of registered voters that they get from a state. Then they use different techniques, whether it's asking the voter directly, are you going to vote? Or looking at the voter's vote history and assessing whether, based on that, they are likely to turn out this year. They use those techniques to determine who is likely to vote in this election. And sometimes those numbers among just the likely voters are going to be different from the overall registered voters. And what's interesting is that historically, likely voter polls have tended to be more Republican than registered voter polls. But it seems like that has flipped these days, where Democrats now do better in likely voter polls because their coalition is now demographically more likely to vote, with college-educated people in particular, who tend to be among the most likely people to vote, becoming more and more Democratic. All right. I want to talk about the ever sexy margin of error. But first, a break.
Today's podcast is brought to you by Shopify. Ready to make the smartest choice for your business? Say hello to Shopify, the global commerce platform that makes selling a breeze.
Whether you're starting your online shop, opening your first physical store, or hitting a million orders, Shopify is your growth partner. Sell everywhere with Shopify's all-in-one e-commerce platform and in-person POS system. Turn browsers into buyers with Shopify's best converting checkout, 36% better than other platforms.
Effortlessly sell more with Shopify Magic, your AI-powered all-star. Did you know Shopify powers 10% of all e-commerce in the U.S. and supports global brands like Allbirds, Rothy's, and Brooklinen? Join millions of successful entrepreneurs across 175 countries, backed by Shopify's extensive support and help resources.
Because businesses that grow, grow with Shopify. Start your success story today. Sign up for a $1 per month trial period at shopify.com slash 538. That's the number, not the letters. Shopify.com slash 538.
Today's podcast is brought to you by GiveWell. You're a details person. You want to understand how things really work. So when you're giving to charity, you should look at GiveWell, an independent resource for rigorous, transparent research about great giving opportunities whose website will leave even the most detail-oriented reader busy.
GiveWell has now spent over 17 years researching charitable organizations and only directs funding to a few of the highest impact opportunities they've found. Over 100,000 donors have used GiveWell to donate more than $2 billion. Rigorous evidence suggests that these donations will save over 200,000 lives
and improve the lives of millions more. GiveWell wants as many donors as possible to make informed decisions about high-impact giving. You can find all their research and recommendations on their site for free, and you can make tax-deductible donations to their recommended funds or charities, and GiveWell doesn't take a cut.
Go to GiveWell.org to find out more or make a donation. Select podcast and enter 538 politics at checkout to make sure they know you heard about them from us. Again, that's GiveWell.org to donate or find out more.
All right, Nathaniel, for starters, have you ever seen those videos where you have, say, a physicist explain a concept to a child and then a high school student and then, say, a PhD candidate? No, I haven't. Okay, well, I'm going to ask you to do that nonetheless right now. Watch the video? No. Pretend that I am a child. Oh, okay, okay. Well, pretend that you're a child. I mean, maybe it doesn't take a lot of pretending. You said it, not me.
To start off here, can you explain the margin of error to me as if I were eight years old? Okay, so... You don't need to put on like a Midwestern accent. If you had all the money in the world, right, you would poll the entire population of the country or the entire population of a state, whoever you're trying to reach, right? And ask literally everybody, which is basically what an election is. We don't have...
all the money in the world. So what you do is you take a sample.
It turns out that if you just interview, say, a thousand people in a state, it actually does get pretty close to being representative of the state as a whole, or at least you have enough respondents that you can weight the poll, which basically means that if you get a sample that does not look representative demographically of the entire state, you can adjust it to give more weight to the people who didn't respond to the poll in as great numbers, so that the final composition of the poll looks like the composition of the electorate or whatever population you're trying to measure. This is perhaps too complicated for an eight-year-old. I was just about to say, I'm not really sure what eight-year-olds you know, but, you know, I think we'll stick with it. We'll stick with it. Okay. Sorry. Sorry. If any eight-year-olds are listening and are confused, I'm happy to take another crack at it. Just email me. Although ask your parents' permission first before using the internet. Yeah.
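Here is a toy sketch of that weighting step on a single variable, education, with made-up shares. Real pollsters weight on many variables at once, often through raking, so treat this as illustration only.

```python
# Toy post-stratification on one variable. All shares below are made-up numbers.

population_share = {"college": 0.35, "non_college": 0.65}  # assumed true electorate
sample_share     = {"college": 0.50, "non_college": 0.50}  # who actually answered

# Each respondent's weight = (share their group should be) / (share it actually is).
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # college respondents count ~0.7 each, non-college ~1.3 each

# Hypothetical unweighted candidate support within each group:
support = {"college": 0.55, "non_college": 0.45}
weighted_support = sum(population_share[g] * support[g] for g in support)
print(round(weighted_support, 3))  # 0.35 * 0.55 + 0.65 * 0.45 = 0.485
```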
Anyway, so you can get pretty close to representing the views of the entire state or country or whatever it is with a sample of, say, a thousand people. But you are not going to get 100 percent of the way there. Right. There is this thing called sampling error, which happens when your sample is just too Democratic or too Republican relative to the entity as a whole that you're trying to measure. And that is what margin of error is. Basically, it says, you know, if you have a poll that has a plus or minus four percent margin of error, it's basically saying that this poll could be off by four points in either direction. And that's just the nature of sampling and of polling. And when we go on this podcast all the time and say polls are pretty good but they can't be exact, that's exactly what we mean.
And so is it right to assume that the larger the sample, the more people that a pollster actually talks to, the smaller the margin of error will be? Yes, that is basically how margin of error is calculated based on the sample size. But it's also important to note that you have diminishing returns, right? So if a pollster only calls 100 people or surveys 100 people online, that's going to have a
big margin of error and going up from 100 people to 1,000 people is like that's a significant improvement. You're going to get down to a pretty reasonable margin of error there. If you go from 1,000 to 10,000, it doesn't cut into your margin of error all that much. And again, because pollsters don't have all the money in the world, they'll tend to stop at a reasonable sample size once they feel that they get something that is close enough margin of error wise. Just to really belabor the point here.
The margin of error, say it's three percentage points, applies on either side, which basically creates a window, you double it, of six percentage points in terms of where the actual result could turn out.
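For the mathematically inclined, the standard 95 percent margin of error for a simple random sample near a 50-50 split is roughly 1.96 times the square root of p(1-p)/n. A quick sketch of that formula shows both the either-side window and the diminishing returns described above; real polls run a bit wider because of weighting and design effects.

```python
# 95% margin of error for a simple random sample, worst case p = 0.5.
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Return the 95% margin of error in percentage points."""
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

for n in (100, 400, 1000, 4000, 10000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.1f} points")

# Roughly +/- 9.8 at n=100, +/- 3.1 at n=1,000, +/- 1.0 at n=10,000:
# each tenfold increase in sample size only shrinks the margin by sqrt(10), about 3.2x.
```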
Looking at this moment where we are right now, I think all seven battleground states are within the margin of error. So in theory, in all seven cases, the actual winner could differ from who the polls suggest is up or down today. Right, exactly. So to use a real world example, if you see a poll of Pennsylvania where Trump is leading Harris 46 to 45.
That will often be reported in the media as Trump leads Harris, but it's by one point. And with that margin of error, Trump's actual vote share could be as high as 49. It could be as low as 43. And Harris's could be as high as 48 and it could be as low as 42. And there's obviously a lot of overlap there. And so that's why it's better to think about when you see polls like that showing a one point quote unquote lead that you should think of that as virtually tied because again,
Again, very simply, it could just be a question of getting a sample that is too Democratic or too Republican, and that could reverse the actual lead. Sampling error is not the only source of error in polling, even though that's what we focus on when we talk about margin of error. What are the other sources of error?
There are things like non-response error. We probably ran into this in 2020, for example, where certain types of people, maybe Trump voters, maybe people who distrust institutions, are less likely to take a survey. You're just not able to get them in as great numbers. I mean, there are other things like kind of just simple temporal error. So we're here recording this conversation in September. The election isn't until November.
Things could change between now and then, and that's just not something that a poll can account for. We'll get to that in a later point. But yeah, that is actually a great point is that margin of error only refers to this idea of sampling error. And you'll see in a poll that, again, it has a three-point margin of error, a four-point margin of error. But actually, we find that empirically the average polling error in Senate races, for example, is actually closer to five points. And it's because of those other sources of error as well.
All right, tip number five is check how the poll was written. And we mean here, literally, what are the questions that they asked respondents? And question design can be very important in all of this. When things go wrong, when a question is poorly written, what kinds of results can that produce? What are the risks?
I'm thinking particularly of issue polls with regard to this, right, is that the way you word a question can basically bias respondents to answer a certain way. So I think a classic example of this is when you ask about Obamacare versus the Affordable Care Act, right, is that when you associate it with Barack Obama, who is a – I guess now he's a popular politician, but back in the day he wasn't.
You know, he's a politician who people have very strong feelings about on both sides of the aisle. It can create maybe more of like a split versus if you ask about the Affordable Care Act or even about like specific provisions, like a law containing, you know, protections for preexisting conditions and, you know, allowing people to stay on their parents' health insurance until they're 26 or something like that.
We talk about this on the podcast a bunch too on all sorts of things, abortion and government shutdown and things like that, is that what is the true state of public opinion about some of these things? And there's no right answer. There's no right or wrong way to write a poll, but it is important to read how
exactly how the pollster asked that question, because if you're going to take a poll saying that, you know, oh, Obamacare is super popular, and it's because they asked about it in a certain way that maybe deemphasized the unpopular parts and the association with Obama, you need to know that before you go around waving that as evidence. All right. Tip number six, compare polls only if they were conducted by the same pollster.
I think that's pretty self-explanatory what that means. Why? Basically, because kind of related to what I just said, question wording can matter. The way a survey is designed can matter in terms of the order of the questions or the mode in which they ask people. Online polls can turn out differently from phone polls, for example. Basically, all this type of stuff adds up.
Some pollsters, not all pollsters, but some pollsters will be consistently, for example, more Democratic or more Republican-leaning than the polling consensus, and we call these House effects at 538. Basically, you can't compare a poll from one pollster and a
a poll from another pollster because what if that one pollster has a Republican-leaning House effect and the other one has a Democratic-leaning House effect? It might look like the race has shifted, you know, say toward Kamala Harris because the Democratic-leaning pollster went in the field second, but it could just be because of the methodology. But if you're looking, say, only at polls from Quinnipiac University and you see that two weeks ago they said something and then
two weeks later, using the same question wording and the same methodology, they found a shift in the race, you can be a lot more confident that it's a genuine shift.
Number seven in polling tips. This is, I think, about as sexy as things get here. No, number nine is the sexy one. Don't be real, Galen. All right, all right, all right. Hold on to your seats, folks. Stay tuned, everybody. We're getting crazy here. Number seven, don't pay attention to outliers. Instead, look at the average of polls. Before I dive in with my defense of outlier polls... Oh. Why, Nathaniel?
That's spicy, Galen. Basically, there will often be a poll that comes out that says something different from the consensus. It'll get a lot of attention because when something is surprising, it gets a lot of attention.
And when that is then amplified in the media, that's bad because generally speaking, looking at the consensus of the polls is generally more accurate than looking at one individual poll. Because one individual poll can run into, like I said, a sample that is just too Democratic or too Republican. Every so often, it's going to happen. We didn't talk about it with the margin of error, but
to get a little bit more into the weeds, the margin of error is supposed to describe the range the poll should fall within 95% of the time. But that other 5% of the time, you're going to get a poll that falls outside the margin of error. And that's what's supposed to happen when you poll. If you were to poll 20 times, theoretically 19 times you would get a result that is within, you know, three points or four points or whatever the margin of error is of the, like, actual quote unquote truth on the ground. But that 20th time, you will find something that's, like, six points away or whatever. And that is an outlier, and that is a totally normal part of polling. And so I don't think you should disregard it, because then you start to, you know, cherry-pick and be like, oh, that's an outlier, so I'm just going to ignore it. But sometimes outliers can be
leading you in the right direction. But that's why we just tell everybody, throw it in the average. An outlier on this side will often cancel out an outlier on the other side, and it kind of all comes out in the wash.
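A tiny numerical illustration of that "comes out in the wash" point, with hypothetical margins (Democrat minus Republican): one stray result barely moves an average of many polls, but a run of similar results will.

```python
# Hypothetical poll margins (Dem minus Rep, in points) clustered around D+0.8.
margins = [1.0, 0.5, 2.0, -0.5, 1.5, 0.0, 1.0]
outlier = 7.0  # one surprising D+7 poll

avg_without = sum(margins) / len(margins)
avg_with = sum(margins + [outlier]) / (len(margins) + 1)
print(round(avg_without, 2), round(avg_with, 2))  # about 0.79 vs 1.56

# One outlier nudges the average by well under a point; only if the next several
# polls also land near +7 does the average keep moving, revealing a real shift.
```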
Well, a couple of things. First of all, I think the word consensus might be a little tricky. When we're talking about looking at the average of the polls, it doesn't mean that there is a polling consensus. The polls could be wildly differing in terms of where they think the race is. But historically speaking, we still know that averaging all those polls together will get us closer to the end result, even if there is a lot of disagreement. So there oftentimes won't be a consensus, but still looking at the average is key.
Now, when you do get that stray outlier poll, I think, you know, maybe beginners tread lightly here, but sometimes it can be indicative of movement that is happening in the race because something has changed. So after a debate or a scandal or some change that you might expect to shake up the race.
Or maybe a pollster hasn't been in the field for a while or we don't have a lot of new polls and then something new comes out that doesn't match the average. It could at times be a leading indicator. And we do have examples of this that
have become quite famous in the polling community. Ann Selzer here gets a mention, because she has published outlier polls in the run-up to, for example, the 2008 Iowa caucuses between Obama and Hillary Clinton, and in the final days of the 2020 election in Iowa, where others had shown a tied race, she showed Trump leading by about seven or eight, which ended up being the result in Iowa. And so sometimes...
when outliers are correct, how should we do justice for outliers? Hashtag justice for outliers. I think, yes, to your point, your initial point, is that I will take an outlier a lot more seriously, or at least I'll entertain it more, when it happens after a big debate. So, for example, you know, we're speaking shortly after this big scandal about Mark Robinson, the Republican candidate for governor in North Carolina, broke. If I see a new poll tomorrow that shows Democrats leading that race by 15,
That would be significantly outside the polling average, which has Democrats leading by nine right now. But it would be believable because of kind of the scandalous nature of those messages that came out.
But basically, I'll always want more information, right? If this truly is the beginning of a new trend, there will be more polls coming out after that that will confirm the trend. But if the next couple of polls, especially polls from high-quality pollsters, go back to the average or are closer to the average, then I think I'm more like, OK, maybe this was an outlier. But if they are closer to that new outlier, then, yeah, that starts to look like the beginning of a trend. And again, that's why we average polls: you have that line, and then maybe you get an outlier, and maybe it
budges the line a little bit, but then the line will continue depending on the next few polls. And if the next few polls are like the outlier, it'll continue to move the line in the direction of that outlier. And then you'll see the trend in our polling averages. And this is an important point here is that
outlier polls, just because we're saying maybe don't pay attention to them, pay attention to the average, does not mean we shouldn't include those outlier polls in the average. In fact, it is fundamental to getting accurate averages that outlier polls be both published by the pollsters who conduct them and incorporated into the averages by the people who are aggregating them. Because a problem that we can see in the industry is herding, where pollsters get a sense of where they think the election's at and
and they get shy about publishing polls that run counter to that conventional wisdom. And when that happens, we see that the actual average ends up being less accurate. And so it can be scary for pollsters. I'm sure Ann Selzer, you know, just before the 2020 election, just before the Iowa caucuses in 2008, felt a little nervous about the data she was about to publish, but publishing those
outlier results gets us closer to the answer. And to exactly that point, tip number eight: polls are generally accurate, but not perfect. What? Nathaniel, polls aren't perfect? Yeah. I mean, if you've been listening to the 538 Politics podcast for really any length of time, this will probably not be surprising to you. But so much of our mission is trying to thread this needle, right? Is that
We do believe that polls are the best tool for predicting elections because they do generally have a good track record, even in this era of low response rates via telephone and kind of innovative methods in polling. In the 2022 midterms, for example, the polls were the most accurate that they had been at any point over the last 20 years.
Polls are generally pretty good at giving you a general sense of the race, but they are not going to absolutely nail it. And this goes back to the margin of error stuff. If on election day Kamala Harris is trailing Donald Trump by one point in Georgia, then
that just doesn't tell you anything. I mean, it tells you maybe that Trump's chances are a little bit better than Harris's, but that error can go either way. It is within that margin of error. It is within that confidence interval. It is totally normal for the polls to be off by four to five points. That basically should be our default expectation going into the race. So that's why I really encourage people to think about the races that are toss-ups as toss-ups, or, you know, in the event that one of the candidates takes a huge lead, you can say, OK, Trump is significantly favored or whatever. But yeah.
There's one more existential question here. You know, the average error in presidential election polling nationally in the 21st century is a little over four points, or 4.3 points. For Senate elections in the general election, it's about five and a half points; for House general elections, it's six points; for presidential primaries, it's even more, closer to 10 points.
We are in an era where these things tend to be quite close. And so if we should expect the polls to be wrong by about four points nationally in a presidential election, and in some of the state-level polling we have regularly seen even larger errors than those four points, what are we even doing here?
Well, I think, first of all, I guess I have three answers to this. Who am I and why am I here? That's only two questions. To quote a famous VP candidate. Okay, let's see if I can remember all three. Okay, the first is that polls aren't only good for measuring elections, right? There are many important questions of public opinion.
I focused a lot on horse race polls throughout this podcast, and I want to take a step back and make sure that people understand that it's important to know in a democracy what Americans think about issues, about their politicians, and things like that. You know, a lot of those things are not 50-50 the way that our elections tend to be. Legalizing marijuana is extremely popular, for example, and polls are a good tool for that. Even if maybe we can't say that it's exactly 67% or whatever the latest poll is, we know that it's generally in the range of, like, 60 or 70%, right? So that's number one.
Number two is that we don't necessarily know that an election is a toss-up or tied until we do polls. We are in this era of extremely close presidential elections, and if there were absolutely no polls of this election, I would bet on it being very close, because the 2020 election was very close, because the 2016 election was very close, because the 2012 election was very close. But you don't really know until you go into the field, and maybe we will eventually reach a point where this level of polarization kind of burns off.
And, you know, maybe one party becomes dominant for a while or we go back to having these huge swings in the popular vote like we had in, you know, during points in the 20th century. And polls will tell us that. You know, you saw in, I don't know, take a past election, you know, 1992 or something, for example, like the fortunes of, you know, incumbent President George H.W. Bush varied widely throughout the lead up to that campaign. And the polls could tell us that.
And then I think point number three is, yes, I think I take your point. I think you're absolutely right. I think generally speaking, polls do get probably too much attention. Election polls probably get too much attention. I don't think it is, even though it's kind of, you know, 538's business model, I don't think it's healthy for us to be paying attention to the twists and turns of every poll and to be overanalyzing, you know, whether Harris is up by half a point or Trump is up by half a point. The overall takeaway really is the same.
I think what we'd say is, if you're going to do it, do it in an expert fashion. And that's sort of what we focus on doing. But we don't pretend that the amount that we weight polling and stuff like that in our coverage should be the same for every newsroom across America. Yeah, exactly. And I think that's important. Like, a lot of FiveThirtyEight's raison d'être is to explain polling and how polling works to people. And hopefully we're doing a good job with that. But it is also to cover the horse race and give people an idea of who will win, because
I think that is important and there's certainly a demand for it. But I think that there is so much more to covering a presidential or other type of election that is important regardless of the polls and also regardless of who wins. And I know that everybody wants to know who wins. I do too. Trust me. And we're doing the best we can. But there is going to be uncertainty in this era of close elections.
But then the other thing is that if you do have precise enough tools, which we at FiveThirtyEight feel that we do between our polling averages and our election model, you can identify maybe, you know, okay, both candidates have a chance of winning, but one candidate maybe is a little bit more likely to win than the other.
If a candidate is leading in a state by, you know, I've been saying a lot about like half a point or whatever, and that really is kind of a, you know, that's where we are right now as we are recording. But maybe on election day, it'll be, you know, a two point lead for Trump in Nevada, right? Like that is within the margin of error and it could go either way. But in that case, you'd also still clearly rather be Trump in Nevada, right? All right. Motoring along here. Number nine, don't try to unskew the polls. Why not? Yeah.
First of all, what do we mean by unskewing the polls, and why shouldn't people do it? So real heads will remember back in the 2012 election where this idea took off. There was a website, unskewedpolls.com. It was run by a supporter of Mitt Romney that basically tried to argue, oh, this poll has too many Democrats in it. So if you adjust the number of Democrats, it actually goes from an Obama two-point lead to a Romney three-point lead or whatever. And like, that's the real state of the race.
These days, especially when Biden was still in the race and he was losing in the polls, you'd see this kind of unskewing by Democrats, and they might say, oh, this poll doesn't have enough black voters, or the young voters in this poll are actually supporting Trump and that doesn't make sense, so we should throw out the poll, or, in the case of the black voters, raise the number of black voters to make it more realistic to what I think is going to be the actual composition of the electorate.
We just don't recommend those things for a couple of reasons. First is that pollsters are professionals. They are good at this. This is their job. They have an incentive to be right. Maybe partisan pollsters have an incentive to mislead, but generally speaking, most pollsters have an incentive to be right. And they are trying to put out the best data that they can. And if they think this is the composition of the electorate, then there's probably a good reason for that.
They have already weighted the poll to make sure that it is representative in terms of race and gender and education and age. And especially the types of people who tend to do this, who are more partisan and perhaps more amateurs, they just don't have the tools to be able to do a better job, even if the pollster is doing it imperfectly because nobody can be perfect because nobody can predict exactly what the electorate is going to be.
Yeah, and we should add here that since we are talking about crosstab diving and really going into the makeup of the electorate and how subgroups of the electorate feel based on these polls, the margins of error grow as the sample size shrinks within those crosstabs. So if you are looking at specifically voters between the ages of 18 and 29 or Latino voters,
The margin of error is not going to be the overall three percentage point margin of error of the poll. It is going to grow. And so, yes, if you look and you see something funny or funky going on, that's not a crazy outcome. That is to be expected. But over time, those will all average themselves out. And it does not mean that the margin of error of the poll overall is any different.
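Applying the same back-of-the-envelope margin-of-error formula from earlier to some hypothetical subgroup sizes shows just how much wider those crosstab error bars get:

```python
# Same 95% margin-of-error formula, applied to a topline and to smaller crosstabs.
# Subgroup sample sizes here are hypothetical.
import math

def moe(n: int, p: float = 0.5) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

print(f"Full sample (n=1000):   +/- {moe(1000):.1f} points")
print(f"Voters 18-29 (n=150):   +/- {moe(150):.1f} points")
print(f"Latino voters (n=100):  +/- {moe(100):.1f} points")

# Roughly +/- 3 points on the topline but +/- 8 to 10 on the subgroups, which is
# why a funky-looking crosstab is usually noise rather than news.
```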
It's also important to bear in mind that, like, we don't know, right, is that if you disregard crosstabs that look strange because they don't comport with history, if they end up being right, you're going to miss a potential realignment. And, you know, you basically want to keep an open mind about things like that.
Yeah, and I want to say here that we're talking about a kind of unskewing of the polls that was forged more or less in the Obama Romney era. More recently, because of the past two presidential elections that underestimated support for Trump, we've seen a new kind of sort of unskewing of the polls, which is to look at the average error of national presidential polls or in states as well.
take the current polling average and then just add or subtract based on what the error has been and say, okay, well, we know that the polls are going to be wrong. So we'll just take the whole average and shift it over three or four points because of how the polls have performed.
Now, there are folks who will do that for just one side. And this also became popular in 2022 with Democrats saying, well, this isn't fully encapsulating the enthusiasm because of Dobbs or whatever. And so we should overall shift things towards Democrats, et cetera, et cetera. That is different, though, from a mental exercise that, you know, nonpartisan newsrooms will do, which is they take their averages and they show you, hey, if there's a normal size polling error that underestimates Republicans, then
This is what the result could be. Hey, if there's a normal size polling error that underestimates Democrats, this is what the result could be. And that's just to give people some sort of frame of reference for the spectrum within which the results could land. Exactly. Yeah, I'm so glad you brought that up because, you know, I do hear all the time from people on different sides of the aisle. Right. 2016 and 2020, the polls underestimated Trump. Right. So they're going to do it again this year. Right.
And the answer is no, not necessarily. We don't know. If it were that simple, then, you know, we would know, right? We would, everybody would just be making that adjustment, including importantly pollsters. And this is a big part of the reason why we don't know, right? Is that every year pollsters iterate and try to do better. And so they are well aware that they missed in 2020 and 2016.
in a way that underestimated Republicans. And so they've tried to make improvements that will make that not be the case this year. So you can't necessarily just add that, because, you know, the pollsters are already kind of fixing their models accordingly.
There is polling error every year, but the direction changes year to year in what looks like a random, unpredictable pattern. So in 2016 and 2020, the polling error favored Democrats, right? Or rather, the polls were too Democratic. In 2012, the polls were actually too Republican. And it bounces around. And in 2022, there was actually basically no bias in the polls. There was some error, because that was a weird midterm year. People might remember there were some states where there was...
a red wave, some states where there was a blue wave, and the polls underestimated Democrats in some states and overestimated them in other states. So basically, yes, anybody who thinks that they can predict the direction of polling error in advance, if they are correct on November 6th or whenever we know the election result, it'll be because they got lucky. You have a 50-50 chance of guessing right, guys. All right. And that brings us back to tip number 10. Polls are snapshots, not predictions.
Of course, though, as we get closer to Election Day, they do become more predictive. And so we spend a lot of the summer and the early part of the campaign saying it's too early, it's too early, don't pay attention to the polls yet. And there are caveats there. You know, maybe as we become more set in our ways as an electorate, the polls can become more predictive earlier on, especially when you have really famous people running for office.
president. But as you already mentioned, these circumstances change, you know, trend lines don't always run in the same direction. We got a new candidate. But historically speaking, when do the polls become more predictive? Yeah, there's not, like, a point at which we can say, oh, you can listen to the polls now, right? It's a spectrum. The closer you get to Election Day, the better they are.
I think an important inflection point happens in, like, the spring, when the candidates are decided and the primaries are over. And, you know, this year you had Biden and Trump, both of whom were already very well known. And also it was pretty clear they were going to win their nominations.
But you have to go back to maybe 2004 or something, when Democrats didn't know they were going to nominate John Kerry. And so a poll in January asking about John Kerry, most Americans are going to be like, who? I don't know that guy. But by May, Americans had gotten to know John Kerry. He was the Democratic nominee. Democrats had kind of come together around him. And at that point, polls do tend to be more accurate. So that's generally when we'll start to cover general election polls.
But again, it gets better and better over time. Historically, polls conducted in early September will be less close to the final result than a poll conducted in late September, which will be less close to the final result than a poll conducted in mid-October, etc., etc.
All right, as you can probably tell from our conversation, polls can be fickle things, and especially so when we wrap so much of our emotion around them. Some parting thoughts here. Try to be dispassionate when looking at the polls. They are not personal, for the most part. The people behind them are usually trying to get to the bottom of something factual about American life or
politics. Really importantly, as you said, Nathaniel,
just because they can't tell us who's going to win an election doesn't mean they can't tell us a lot about our country. And oftentimes, maybe more frequently than people probably think watching the news, Americans agree on things. And when they do, and it's something like, you know, two-thirds or more of Americans who say X or Y or Z, that's something polls can tell us and something that in a democracy, it's important for politicians and the people in charge to pay attention to.
Thank you so much, Nathaniel. Thanks, Galen. My name is Galen Druk. Our producers are Shane McKeon and Cameron Trotavian, and our intern is Jayla Everett. You can get in touch by emailing us at podcasts at 538.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcast Store or tell someone about us. Thanks for listening, and we'll see you soon.