
How The Polls Did In 2024

2024/11/21

FiveThirtyEight Politics

Key Insights

Why did pollsters breathe sighs of relief after the 2024 election?

Polls were less error-prone in 2024 compared to 2016 and 2020, with state-level polling being the most accurate in at least 25 years.

What was the average polling miss in the 2024 election?

The average polling miss was 2.7 percentage points nationally and 2 percentage points in battleground states, significantly better than the 4 percentage point average over the past 25 years.

Why might pollsters have underestimated Trump's support in the 2024 election?

Pollsters might have faced non-response issues among Trump supporters, a problem that has persisted since 2016 and 2020.

Which polling method performed the best in 2024?

River sampling, which drives people to answer polls through social media ads, had the lowest bias and error.

Why did some pollsters choose to weight by past vote in 2024?

Weighting by past vote was seen as a way to ensure a proper mix of past Trump and Biden voters, though it's not clear if this method will be effective in future elections with different candidates.

What challenges do pollsters face with low response rates?

Low response rates force pollsters to model the electorate, making all polls essentially models rather than representative samples.

How did polls perform in measuring subgroup movements in the 2024 election?

Polls did a good job showing movement towards Trump among Black and Latino voters and among young voters, though on average they may have overestimated the movement among Black voters and underestimated it among Latino voters.

Are live phone polls still considered the gold standard?

While some pollsters still value live phone polls, the rise of new methods like river sampling suggests that the gold standard may be shifting to more innovative approaches.

What should people take away from the 2024 polling results?

All polls are now models due to low response rates, and there is no longer a single gold standard method. Instead, there are good pollsters who use effective modeling techniques.

Are pollsters optimistic about the future of gauging public opinion?

Yes, pollsters are optimistic as polls continue to provide valuable insights into public opinion and democratic processes, despite the challenges posed by low response rates.

Chapters

The podcast discusses the performance of polls in the 2024 election, comparing it to previous years and highlighting the accuracy of state-level polling.
  • Polls were less error-prone in 2024 compared to 2016 and 2020.
  • State-level polling was the most accurate it’s been in at least 25 years.
  • Polls underestimated Trump's support for the third consecutive election.

Shownotes Transcript

We all have plans in life, maybe to take a cross-country road trip or simply get through this workout without any back pain. Whether our plans are big, small, spontaneous, or years in the making, good health helps us accomplish them. At Banner Health, we're here to provide more than health care. Whatever you're planning, wherever you're going, we're here to help you get there. Banner Health. Exhale.

Ruth, how are you feeling? You know, I have walking pneumonia, as do both of my children, but I actually feel fine. I just sound like garbage, which is just the election talking. What you're telling me is that this is more of a walking pneumonia open than a cold open? Ha ha ha.

Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druke. Voters went into the 2024 election feeling like the stakes were high. Inflation, the border, abortion access, democratic norms were all on the ballot. And we know how that netted out. But 2024 was a high stakes election for another group of people.

Pollsters. After a high-profile, though not historically large, miss in 2016, and then a historically large miss in 2020, trust in polling has been on the decline. And it's no secret that low response rates to polls have become a real challenge. A big miss for a third presidential election in a row might have been something of a nail in the coffin for election polling.

But that was not to be. The polls, on average, had their most accurate year on record in 2024, a 2.7 percentage point miss nationally and just a 2 percentage point miss in the battleground states. The average polling miss over the past quarter century has been 4 percentage points, so a significant overperformance and finally, success.

But, of course, that isn't the end of the comeback story. For one, the polls have underestimated the same party three presidential elections in a row. Also, underneath those averages are very different methods of trying to get a read on the public. And the hurdle of reaching voters who are willing to take a survey has not gone away.

So what does this all mean for the project that is so near and dear to this podcast's heart, getting an accurate read on what the public thinks and wants in a democracy? Here with me to talk about it are two people who are giving this all a lot of thought, Director of Data Analytics, Elliot Morris. Welcome to the podcast, Elliot. Hey, Galen. And polling editor at The New York Times, Ruth Igielnik. Welcome, Ruth. Hello, hello, with all of my hoarse-voice glory. You know, we're still recovering from the election. That's right.

As listeners may know, Ruth, you are one of the people responsible for putting together the postmortem on how polling did in this election for the American Association of Public Opinion Research. Ruth, how is that report coming along? Well, let me just be clear. Elliot and I are both on that committee. No, no, Ruth has to write it all. Yeah, Ruth, how's it going? It's in the early stages. For people who don't know, there's a committee of

I want to say 10 to 12 of us in the polling industry across different parts of the polling community, whether it's nonpartisan pollsters, Democratic pollsters, Republican pollsters, public, private. And we're tasked every four years with looking at the quality of the polls. And I think

what you said in the intro is exactly right, right? That polls did historically decent this year. There were not big misses, but it is still the third year in a row that polls underestimated Trump's support by a couple of points. They pretty much accurately got Kamala Harris's share of the vote, but underestimated Trump by on average, I think one to two percentage points, depending on state and national and things like that. There's a lot of variation by

method, by mode, by different sort of methodological decisions, which is what we and the committee will be sort of digging into. But I think our overall sort of broad narrative is that polls did pretty well this year.

Is it clear from maybe what went right in 2024, what went wrong in 2016 and 2020? I mean, I know that we have theories, right? 2016 was education. Polarization blindsided us to a certain extent. 2020 was a combination of, Hey, it was during a pandemic. Also, uh,

But maybe there were still some challenges reaching Trump supporters that extended beyond the education divide. And then also part of the conclusion from 2020 was a big shrug. Like, we don't know 100 percent what is going on. Did this past election give us more information on that front, Elliot?

I think it will, once we have a chance to merge the polling data with voter data and who actually turned out to the polls. That'll help us figure out if there was any sort of non-response among the Trumpier part of the country. I'm not super optimistic about the industry's ability to figure out how to combat it,

if it is a non-response problem or some other problem underestimating the support of the Trumpier part of the ideological spectrum. At some level, it might just go away when he's not on the ballot and pollsters will be able to declare victory. It just seems like a very, very hard problem to solve if the smartest people measuring public opinion in America just can't get it perfect. And maybe we just shouldn't expect them to.

The fact that we didn't have this issue in 2022 and we did have it

very acutely in 2020 and a little less so in 2024, tells me it is sort of this uniquely Trumpy problem. We haven't figured out how to solve it. I think this year was kind of a fun, interesting year because there were so many different methods that it provides us this kind of nice experiment where we can look at how each of the different methods performed. But if we tried this scattershot of methods and nothing quite got at that Trumpier group,

That may be a challenge that's too big for us to complete. Well, Ruth, you perfectly predicted. Oh, you must be a pollster. You perfectly predicted my next question, which was: among the different methodologies, which seemed to do the best at gauging public opinion? And for some context here, we talked about the debate before the election between pollsters who were weighting by recalled vote, or past vote, and pollsters who weren't.

And weighting by recalled vote, just to provide some more context here, is asking voters who they voted for in the last election and then weighting accordingly so that you have the proper number of Trump voters and the proper number of Biden voters in your sample. And the challenge with that historically has been that people don't always remember who they voted for in the past election. How did this turn out? Does it seem like weighting by past vote was an effective way of gauging the public?
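To make those mechanics concrete, here is a minimal sketch of weighting by recalled vote in Python. Everything in it — the respondent records, the 2020 target shares, the field names — is a hypothetical illustration of the general technique, not any pollster's actual procedure.

```python
from collections import Counter

# Hypothetical raw sample: each respondent reports their recalled 2020 vote
# and their current 2024 preference. These records are invented for illustration.
sample = [
    {"recalled_2020": "Biden", "choice_2024": "Harris"},
    {"recalled_2020": "Biden", "choice_2024": "Harris"},
    {"recalled_2020": "Biden", "choice_2024": "Trump"},
    {"recalled_2020": "Trump", "choice_2024": "Trump"},
    {"recalled_2020": "Biden", "choice_2024": "Harris"},
    {"recalled_2020": "Trump", "choice_2024": "Trump"},
]

# Hypothetical population targets: the 2020 two-party result in the geography
# being polled (roughly even, for illustration).
targets = {"Biden": 0.52, "Trump": 0.48}

# Observed shares of recalled vote in the raw sample.
counts = Counter(r["recalled_2020"] for r in sample)
n = len(sample)
observed = {k: counts[k] / n for k in targets}

# Each respondent's weight is target share / observed share for their group,
# so the weighted sample reproduces the 2020 vote mix.
for r in sample:
    r["weight"] = targets[r["recalled_2020"]] / observed[r["recalled_2020"]]

# Weighted 2024 topline.
total_w = sum(r["weight"] for r in sample)
harris = sum(r["weight"] for r in sample if r["choice_2024"] == "Harris") / total_w
print(f"Weighted Harris share: {harris:.1%}")
```

The caveat raised above — that people misremember their past vote — would show up here as error in the recalled_2020 answers themselves, which no amount of reweighting can undo.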

So I will say that I think weighting by past vote — people who weighted by past vote were sort of validated by this election, in that it didn't harm them in the way that people were worried it would. And maybe that's something that's unique to this electorate, in that it was sort of similar-ish

to previous electorates. Or maybe it's the kind of thing that we just need to do for accuracy moving forward. But I will say, in terms of talking about how different methods did, I want to toss it over to Elliot, because anything I say would be based on the analysis that Elliot did, which was very good. Fair enough, Ruth.

Yeah, we've seen that the methods that are newer are doing better, and some of the stalwarts aren't doing as well. Though again, we're talking about differences in accuracy of half to one percentage point, so it's not earth-shattering differences here. The modes that seem to do better are...

a mode that we call river sampling, which drives people to answer public opinion polls on their phone or on a computer based on an advertisement they see on a webpage, or like if they're scrolling through Instagram, an ad might come up to do a poll. It's called river sampling. Those polls did the best in terms of having lowest bias and lowest error. So not only did they get

close to the result, even if they overestimated or underestimated someone on average, they just didn't have such a high bias towards Harris. And again, I'm talking about the difference between a one point bias and a two and a half point bias. The polls that did worse: online opt-in panels didn't do very well, and live phone polls did statistically worse — polls with only one component, that is, if your only component is calling people over the phone.
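The bias-versus-error distinction Elliot is drawing can be written out in a few lines: bias is the average signed miss (which way the polls leaned), while error is the average absolute miss (how far off they were regardless of direction). The numbers below are invented purely for illustration.

```python
# Hypothetical (poll margin, actual margin) pairs in percentage points,
# where positive values favor Harris. Invented for illustration only.
polls = [(+1.0, -1.5), (-3.0, -2.0), (+2.0, -0.5), (0.0, -1.0)]

signed_misses = [poll - actual for poll, actual in polls]

bias = sum(signed_misses) / len(signed_misses)            # which way the polls leaned
error = sum(abs(m) for m in signed_misses) / len(polls)   # how far off, on average

print(f"Average bias:  {bias:+.1f} points toward Harris")
print(f"Average error: {error:.1f} points")
```

A set of polls can have low bias and still carry meaningful error if the misses point in both directions, which is why the two numbers get reported separately.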

And what about weighting by past vote in particular? That we haven't been able to look into yet. I know that Courtney Kennedy at the Pew Research Center had a good interview in a Washington Post story recently in which she says that it seems to have helped. It's just that, anecdotally, the types of polls that did better are also the types that tend to weight by past vote. So I think she's right. But I don't think we really understand the causality yet.

It's interesting, I will say, like, as we at the New York Times/Siena College poll were thinking about, you know, the ongoing debate about whether or not to weight by past vote, we went back and re-weighted our previous surveys, and they would have been less accurate

in 2020 and other years if we had weighted by past vote. Now, that doesn't mean that it's not something that worked well this year, but just re-weighting our own surveys, it made them historically less accurate, which was really interesting. Yeah, Ruth, taking a decade-long view here, I do think we should slow our roll on weighting by past vote, because it's not crazy to think that an

election during which Biden is the incumbent president and Trump is running against, well, not Biden, but Biden's VP would be somewhat similar to the election that happened just four years before. The midterm polling that has also been very accurate during the Trump era, in 2018 and 2022, has not necessarily relied on

recalled vote. And it's been a pretty different electorate from who turns out in a Trump presidential election. So in 2028, if we have completely different candidates on the ballot and people decide from this election that weighting by past vote is the new sort of gold standard, we could potentially

by fighting the last battle, create new problems for ourselves in the industry. Yeah, I think that's right. I think one other challenge, and this will just take sort of us looking into it, is like,

A lot of the polls that did weight by past vote were also very concentrated on swing states, where the results were always close. And so the fact that they weighted by past vote, and therefore it sort of moved the margin closer, wasn't detrimental. But, like, one example is we did a poll in Florida. Our poll came out Trump plus 13. We got a lot of grief for it because that seemed like a kind of outlandish result. If we had weighted by past vote, it would have pushed it far from that.

The actual result in Florida was pretty close to that — I think it was Trump plus 12 or plus 13. So I think it also depends. We sort of need to do some digging into where it was effective and where it wasn't effective. Like you said, not just fight the last battle. But if this is something that's helping, we shouldn't be opposed to it. I just think there's some digging and sort of soul-searching that we need to do to determine how effective it was, and where.

You bring up another important topic from this election, which was herding. So one possibility was that because all of these pollsters were using the same methodology of weighting by past vote, the results all just looked really similar by dint of weighting by past vote. There was also the possibility that folks were herding. Can we say anything about that after the fact, Elliot?

Yeah, so I'm not convinced about the herding. When you're weighting your samples by something that is correlated with the outcome, like past vote or even like race and education and all the other things pollsters weight by, you should expect a lower standard deviation than if you didn't have to do any of those things, because you're conditioning your sample on something close to the outcome. There's a lot of pollsters out there that need to do the weighting, that need to do the processing, because the sample isn't representative from the jump of the thing they're trying to represent.
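Elliot's point — that weighting on something correlated with the outcome compresses the spread of poll results even when nobody is herding — can be seen in a quick simulation. The electorate, sample size, and loyalty rates below are toy assumptions, not estimates of the real 2024 electorate.

```python
import random
import statistics

random.seed(0)

# Toy electorate: each voter has a fixed past vote ("T" or "B") and a 2024 choice
# that is strongly correlated with it. Shares are invented for illustration.
def make_voter():
    past = "T" if random.random() < 0.49 else "B"
    if past == "T":
        choice = "Trump" if random.random() < 0.95 else "Harris"
    else:
        choice = "Harris" if random.random() < 0.93 else "Trump"
    return past, choice

electorate = [make_voter() for _ in range(200_000)]
target_t = sum(1 for p, _ in electorate if p == "T") / len(electorate)

def poll(weight_by_past_vote: bool, n=800):
    draw = random.sample(electorate, n)
    if weight_by_past_vote:
        share_t = sum(1 for p, _ in draw if p == "T") / n
        w = {"T": target_t / share_t, "B": (1 - target_t) / (1 - share_t)}
    else:
        w = {"T": 1.0, "B": 1.0}
    total = sum(w[p] for p, _ in draw)
    trump = sum(w[p] for p, c in draw if c == "Trump") / total
    return 100 * (2 * trump - 1)  # Trump margin, in points

for flag in (False, True):
    margins = [poll(flag) for _ in range(300)]
    print(f"weighted={flag}: sd of poll margins = {statistics.stdev(margins):.2f} pts")
```

The weighted polls cluster more tightly around the same answer not because anyone peeked at other polls, but because each sample is forced to match the known past-vote mix.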

they probably could do some more work on weighting. They might need to do even more than past vote to get things to be representative. Yeah, I mean, Carlos Odio was on the podcast last week, and he said something along the lines of: pollsters have become modelers in their own right. Because response rates are so low, you have to, in some ways, model the electorate that you expect to turn out, because you're having so much difficulty reaching people.

I'm curious, Ruth, if you think that's a fair assessment of the task that is facing pollsters today. And also somewhat related to that, how did pollsters do in terms of measuring movement amongst specific groups, which may be sort of where some of the weighting comes into play?

So Doug Rivers from YouGov has been beating this drum for a while that all polls are non-probability polls now because of low response rates. So we all have to do a good degree of modeling. And I think that he's not entirely wrong, right?

Like with a 1% response rate. Yeah, generously. It used to be, decades ago, that we had, you know, 30% response rates, and we felt like these were representative of the United States. As Elliot said, whether we do it on the front end with sample design, like we do at Times/Siena, or you're doing it at the back end with weighting, which

I mean, we also do. We are doing a good degree of modeling, a lot more than previous pollsters. And, you know, Ann Selzer, who has been the gold standard for many decades, even, you know, with the outlier she released this year, really did not take that approach — she weighted her sample very lightly. They didn't weight by education; they really do a light touch on their sample. And I think you really saw this year that that just doesn't work.

That's just not going to accurately capture things anymore. And so I think that was really sort of a way for us to see that that was no longer very effective at understanding the electorate with the response rates we get. In terms of subgroup movement, I think that the polls did a really good job of showing us a lot of this movement before the election, particularly looking at Black voters and Latino voters. There were a number of good oversamples and deep dives there.

that helped us understand that these groups were moving away from Democrats and towards Trump.

You know, that wasn't a surprise, or anybody who's been watching the polls closely — that shouldn't be a surprise. And also young people, particularly young men. You know, we saw this in polls over and over. And I think back to a little over a year ago, the Washington Post was the first to show really strong numbers for Trump with young voters. And they got a lot of grief for it, right? It seemed like a real outlier finding — talking about herding, they were definitely not.

That then proved to be true over, you know, another, you know, year of surveys from all different kinds of organizations that we saw this openness to Trump among young voters. So I think that,

But, you know, we can kind of talk a lot about, you know, precise point estimates and making sure the polls are as accurate as they can be. But when we're looking at sort of like broad trends and the story of the election, I think polls did a really good job of telling us about these subgroup movements, of telling us about broad sort of anger and distrust in the electorate, distrust of the Democratic Party, openness to the Republican Party. Like, we definitely saw a lot of that beforehand. Yeah.

Yeah, it seems like overall polls may have overestimated some movement amongst Black voters and underestimated some movement amongst Latino voters taken as an average. I think it's going to take a little while for us to... Color me a skeptic of exit polls, it's going to take a little while for us to know for sure. All the true heads know you have to wait for the voter verified surveys. Exactly. But you said something earlier, which gets at a much bigger point that we should emphasize in this conversation, which is...

Ann Selzer has been a gold standard pollster for a very long time. Her final poll showed Harris up three in a state that Trump won by 13 points. That's a 16 point miss there. And take that in combination with something that Elliott said earlier, which is that so far it looks like some of the most accurate pollsters were the folks who are trying brand new methods like advertising on social media to get people to click through.

Now, I have been at FiveThirtyEight long enough to have been around when we were very adamant about what the gold standard was, which is live phone polls. Then we took a step after a few years to say that actually there is no longer an advantage to

live phone polls, looking at the results that we got. Some of these new methods were also getting similarly accurate or inaccurate results. There wasn't much difference. Are we ready to say now, in 2024, that actually the gold standard is a worse way of doing things and that these new methods are actually going to be more accurate going forward?

I think I'm not ready to say that, which is to say there's an interesting correlation between the rise in some of these methods and the era — the polarized era of our politics and three elections in a row with Trump on the ballot — where I think what I've learned so much from,

you know, sort of pollsters who've been doing this for decades, is you want a method that may be different from election to election, but is a steady hand across decades. And what I'm interested to see with some of these newer methods, they've performed very well in the last few elections. I want to see that they're a steady hand across decades, that they're not overfitting to the particular moment in our politics or the particular people who are on the ballot.

Now that sounds skeptical, but I think it's like skeptical with love. Like I'm encouraged, I'm excited, but I want to know that this is the kind of thing that can do well in all different kinds of situations. I am among the dinosaurs and we still conduct live phone polls. I still see value in the way that we do it and I'm not quite ready to give it up.

And to add to what you're saying, Carlos Odio, who did say that they ended up weighting by past vote, also said that live phone polls were the best at reaching Latino voters. So polling today looks like a cornucopia of options to choose from. And getting it most accurate may mean pulling at different strands. We're close enough to Thanksgiving to use that metaphor.

Yeah, I will say, the wonderful Pew analysis of non-probability surveys found that younger voters and Latino voters in non-probability surveys were especially likely to say that they had licenses to operate a nuclear submarine.

It's very stuck in my head. Extremely silly. But the point they were making was that these were people who were either not real respondents or were sort of inattentive and not actually paying attention to the survey. And what wound up happening is you have this over-representation of something that is almost impossible — actually impossible — showing the low quality of Latino and younger respondents in non-probability surveys. So...

Even though on the whole they did well for some of these demographic subgroups, it's hard to know.

I'm going to present two axioms — and I'll put an asterisk on the word axiom — that I think people should take away from this election. It's something that Nate Cohn and Ruth have talked about and written about, something that I wrote about in my book two years ago. The first one being: all polls are models now. Just like Doug says, there's no probability samples anymore. There's no representative poll anymore, to use the statistical word representative. All polls are models. As a corollary, as a second thing,

If they're all models, then there's no gold standard poll anymore. Because you've acknowledged that there's a data generating process that's changing, that's getting harder. So I would say there's no gold standard poll anymore. I might say there's gold standard pollsters, but really there's good pollsters. There's a good way to approach the problem of research, of modeling public opinion data now. Unfortunately, I think that's kind of what we saw with Ann Selzer — just the old gold standard poll thing.

It hasn't really been working, in my opinion, for a while now, and we just couldn't get lucky enough to make it work again this year. Today's podcast is brought to you by Shopify. Ready to make the smartest choice for your business? Say hello to Shopify, the global commerce platform that makes selling a breeze.

Whether you're starting your online shop, opening your first physical store, or hitting a million orders, Shopify is your growth partner. Sell everywhere with Shopify's all-in-one e-commerce platform and in-person POS system. Turn browsers into buyers with Shopify's best converting checkout, 36% better than other platforms. Effortlessly sell more with Shopify Magic, your AI-powered all-star.

Did you know Shopify powers 10% of all e-commerce in the U.S. and supports global brands like Allbirds, Rothy's, and Brooklinen? Join millions of successful entrepreneurs across 175 countries, backed by Shopify's extensive support and help resources. Because businesses that grow, grow with Shopify. Start your success story today. Sign up for a $1 per month trial period at shopify.com slash 538.

That's shopify.com slash 538. Today's podcast is brought to you by GiveWell. You're a details person. You want to understand how things really work. So when you're giving to charity, you should look at GiveWell, an independent resource for rigorous, transparent research about great giving opportunities whose website will leave even the most detail-oriented reader stunned.

Busy. GiveWell has now spent over 17 years researching charitable organizations and only directs funding to a few of the highest impact opportunities they've found. Over 100,000 donors have used GiveWell to donate more than 2 billion dollars.

Rigorous evidence suggests that these donations will save over 200,000 lives and improve the lives of millions more. GiveWell wants as many donors as possible to make informed decisions about high-impact giving. You can find all their research and recommendations on their site for free, and you can make tax-deductible donations to their recommended funds or charities, and GiveWell doesn't take a cut.

Go to GiveWell.org to find out more or make a donation. Select podcast and enter 538 politics at checkout to make sure they know you heard about them from us. Again, that's GiveWell.org to donate or find out more.

Before we go, I do want to get to one more topic, which is that, you know, we've received some flak over the years about including some of these polling newcomers that you mentioned, Elliot, who have shown a rosier picture for Trump. I don't need to name names — Simon Rosenberg is one of the people who have

said that, you know, these Republican-leaning pollsters are flooding the zone and blah, blah, blah, blah, blah. Have we been vindicated? Can we stop talking about this now? Do we still need to talk about flooding the zone? Yeah, I do think we still need to talk about whether or not aggregation models are taking into account, like, frequency and house effects and stuff

properly. But I would say our more holistic approach to aggregation is super vindicated. Thinking back to the early 2010s, when non-probability research was just getting shit on left and right — it is replacement level at this point, on par with live phone polls, even though there were some problems in some of the lower quality opt-in online panels. There are sort of two different issues we're talking about. One is

opt-in polls, which generally performed well. And the other are these Republican-leaning polls that aren't representative of all opt-in polls and are a specific brand of poll that often uses opt-in methods.

We do the same thing at the New York Times, and I think many other responsible aggregators do, which is you include everything, but you account for house effects. You weight based on historical accuracy, methodological decisions, transparency, which downweights a lot of those polls. And so, I mean, it was interesting in 2022, this was a big problem. People were worried about this, like,

flooding the zone and red wave. And it almost had more to do with polls people were consuming out in the wild and not the averages that they saw that took those polls into account. We didn't see that problem as much this year, mostly because some of those same pollsters showed results that were more similar to what others were showing. But I think when you look at the averages, like smart aggregators are already taking those things into account and were in 2022.
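A stripped-down sketch of the aggregation idea Ruth describes: include every poll, but collapse each pollster to its own average so releasing more polls doesn't buy more influence, combine pollsters with quality weights, and read off each shop's house effect as its gap from the combined number. The pollster names, quality weights, and margins are all hypothetical, and this is a toy under those assumptions, not FiveThirtyEight's or The Times's actual model.

```python
from collections import defaultdict

# Hypothetical polls (margin in points, Harris-positive) and a hypothetical
# quality weight per pollster reflecting historical accuracy and transparency.
polls = [
    ("Pollster A", +1.0), ("Pollster A", +2.0),
    ("Pollster B", -3.0), ("Pollster B", -4.0), ("Pollster B", -2.0), ("Pollster B", -3.0),
    ("Pollster C", 0.0),
]
quality = {"Pollster A": 1.0, "Pollster B": 0.4, "Pollster C": 0.8}

# A naive average lets the most prolific pollster dominate.
naive = sum(m for _, m in polls) / len(polls)

# Collapse each pollster to its own mean, then combine pollster means with
# quality weights. A pollster's house effect is its gap from the combined average.
by_pollster = defaultdict(list)
for name, m in polls:
    by_pollster[name].append(m)
means = {name: sum(ms) / len(ms) for name, ms in by_pollster.items()}

avg = sum(quality[n] * mu for n, mu in means.items()) / sum(quality.values())
house = {n: mu - avg for n, mu in means.items()}

print(f"Naive average:    {naive:+.2f}")
print(f"Adjusted average: {avg:+.2f}")
print("House effects:", {n: round(h, 2) for n, h in house.items()})
```

In this setup, a prolific, lower-quality pollster moves the naive average a lot but the adjusted average only a little, which is the "flooding the zone" concern in miniature.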

I do want people to be careful, though, with updating their priors too hard. It's just one year. And if you take the view, and I take this view, that most polls should be subject to a similar or correlated directional non-response. Because if people aren't answering one mode of survey, they're probably not answering another mode of survey for whatever reasons that are intrinsic to them. And that's just an average statement. That's not true for every type of poll.

If you take that view, then you need a lot of elections to figure out if one poll method or pollster is better than a replacement level or just like the average survey.

I don't want people to take away from this that like all they need to do is read Trafalgar and Atlas Intel and Rasmussen reports and they're good because that wasn't true in 2018. It wasn't true in 2022. They do seem to have a knack, these pollsters, for not being pulled by that sort of gravitational non-response that's affecting other types of modes. And I

Honestly, if that keeps happening, they deserve credit for it and it's probably worth listening to some other feedback on what they're doing. But it's also possible that they're just getting hella lucky or that they figured out something that's going to work for two or three years when Trump's on the ballot and it's not going to work in the future. That is totally possible. Like pollsters are playing a long, long game here. So people shouldn't overreact.

And finally, are we optimistic about the future of the project of gauging what Americans think and want in a democracy?

Yes. Oh, yeah. Very optimistic. We continue to do a good job of understanding what Americans think about, care about, how they feel about democracy, how they feel. I mean, we didn't talk a lot on this about, you know, that we sort of accurately predicted that abortion was not the same major issue that some expected it to be. If you step back from the very specific use of polling to predict election outcomes and understand polling as a gauge on public opinion in a democracy,

I think we will hopefully continue to do this for many decades to come. It's the best way to understand what people think outside of your small bubble ecosystem of people who feel similarly to you. It's the best way to see how people across the political spectrum feel. And that hasn't changed.

Yeah, sometimes — I know nobody listening to this podcast — but sometimes I'll encounter folks out in the wild who are like, well, why all these polls? Why don't we just talk to people? Well, guess what? Polls are people. Polls are people. I'm going to leave it there. Thank you, Ruth and Elliot. Thank you. Thanks.

My name is Galen Druke. Our producers are Shane McKeon and Cameron Chertavian. You can get in touch by emailing us at podcasts at fivethirtyeight dot com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcast Store or tell someone about us. Thanks for listening, and we will see you soon.
