
What Do We Really Know About the Maternal-Mortality Crisis?

2024/8/6

Good on Paper

Chapters

The episode discusses the perceived rise in maternal mortality in the U.S. and how it was actually due to changes in measurement methods, not an actual increase in deaths.

Shownotes Transcript


In the two decades from 1999 to 2019, researchers found that maternal deaths in the U.S. had more than doubled. The finding capped years of concern that the U.S. was steadily becoming a deadlier place for pregnant women. These data filtered their way through academic journals, papers, and national statistics to newspapers and magazines.

I remember reading these stories myself, and as someone who wanted kids, becoming more and more afraid and confused. What was going on? How could things be getting so much worse every year when medical progress should be moving us forward? And then I started hearing that there were some concerns with the maternal mortality statistics, that the story might be more complicated than was commonly understood.

This is Good on Paper, a policy show that questions what we really know about popular narratives. I'm your host, Jerusalem Demsas, and I'm a staff writer here at The Atlantic. Today's guest is Saloni Dattani. She's a researcher at Our World in Data who has studied death certificates and cause-of-death data broadly, and kept getting questions about why the U.S. maternal mortality data looked so bad. Her research builds on the work of other skeptical scientists,

and found that the seeming rise in maternal mortality is actually the result of measurement changes. In short, things aren't getting deadlier for pregnant women. It's that we've gotten better at tracking what was already going on.

In 1994, the International Classification of Diseases recommended adding a pregnancy checkbox to national death certificates, to try to make sure we weren't undercounting maternal deaths. It succeeded, but it also ended up overcounting deaths from other causes. For instance, a study looked at Georgia, Louisiana, Michigan, and Ohio, four states that had adopted the checkbox, and found that more than a fifth

of the pregnancy deaths were false positives. The women hadn't even been pregnant. Correcting the record on these statistics doesn't change the fact that the U.S. needs to do more to promote women's health.

But when we're using shoddy facts to inform our understanding of the world or to inform policymaking, it can lead us down fruitless paths. And that's not in pregnant women's interests at all. On the surface, this is an episode about measurement error. But it's also one about how scientific narratives develop, both within the academy and when they reach the media and the general public.

And it's about how hard it is to communicate science to the public, even when you have the best of intentions. Saloni, welcome to the show. Thank you for having me on. Well, I'm excited to have you on because I feel like...

I'd heard whispers for a long time about concerns with the maternal mortality data, and I'd kind of seen tweets or someone had mentioned it at some economics conference I was at, but I never really looked into it. And then your article came out, and I was like, oh, wow, this is pretty definitive stuff. I need to look into it myself, which is why I wrote my own piece. So, yeah.

Yeah, I want to start at the beginning, though, for our audience, because you've done a bunch of research on death certificates and cause of death in the U.S. So can you just start us there? How do we determine cause of death in the U.S.? What does that process look like? When someone dies, there are different people who might certify their cause of death. And the death certificate, it includes a description of the specific things that led up to the person's death.

And you go back down that list to figure out what the underlying cause of death was. So, for example, someone might die from a gunshot wound, which eventually caused a cardiovascular event, like a heart attack or something like that. And the gunshot wound would be the underlying cause of death. So, yeah.

Once that field is filled in, all of that data then gets sent to the state to compile statistics for deaths across the state. And then it goes to the Centers for Disease Control and Prevention, the CDC, which collects the data nationally.

And they collect all of this data. They turn it into codes so that it can be interpreted by researchers in a standard way. And then it's reported internationally to the World Health Organization each year.
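The chain-of-events logic described above can be sketched in a few lines. This is a hypothetical illustration, not the real coding system: actual vital-statistics pipelines apply detailed ICD-10 selection rules, and the function and variable names here are invented for the example.

```python
# Hypothetical sketch: the death certificate lists a chain of events,
# from the immediate cause back to the event that started it all.
# The "underlying cause" used in statistics is the originating event.

def underlying_cause(chain):
    """Return the underlying cause of death: the last entry in the
    causal chain (the event that set everything else in motion).
    `chain` is ordered from immediate cause to originating event."""
    if not chain:
        raise ValueError("death certificate lists no causes")
    return chain[-1]

# The example from the episode: a gunshot wound eventually causes a
# heart attack. The heart attack is immediate; the gunshot is underlying.
chain = ["heart attack", "internal bleeding", "gunshot wound"]
print(underlying_cause(chain))  # -> gunshot wound
```

In real ICD coding the selection is more involved than "take the last entry," but the core idea is the same: one underlying cause per death, traced back through the certified chain.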

So that's like a really clear case, right? Like, someone gets shot, and even if you have pneumonia or cancer or something, everyone understands that the reason you died was that you were shot. But I remember during COVID that there were a lot of conversations around how to classify causes of death when people maybe had other reasons why they were really sick. So how do you decide what the cause of death is? How do they deal with that gray area?

So depending on the actual cause, sometimes the type of measurement might vary. So during COVID, for example, there were generally two different ways to determine whether a death was caused by COVID. You could either have COVID listed as the underlying cause of death by the doctor. Or sometimes they would look at just test results in the last month and try to count anyone who had died from COVID or

any disease that might have been exacerbated by COVID. And that's quite useful because it can be quite difficult to determine whether a specific condition was worsened by COVID or not. So you would have to have some consistent way of classifying deaths by a certain disease. And that's also what happens with maternal deaths. So...

In the past, we used to have the system where you would just look at what's listed as the underlying cause of death. And if it was pregnancy related, so for example, if it said preeclampsia or something else that is very obviously pregnancy related, only those would have counted as maternal deaths in the past.

The problem with that, though, was that there were many cases where pregnancy might have worsened a woman's underlying health conditions, like hypertension, diabetes, it could be AIDS. It could be various other conditions that she has that get worsened by the pregnancy. And those weren't being considered maternal deaths. Or sometimes they were by some doctors, but not by others.

And to make that process consistent, the International Classification of Diseases, which is the international system for determining what the cause of death is, decided that they should give additional guidance to countries on how to classify maternal deaths. And in the '90s, they gave this recommendation that we should count

any deaths that occurred during pregnancy or within six weeks of the end of pregnancy as maternal deaths. So this would allow for a kind of standard. But that means all deaths, even if, like, you were hit by a car or something?

Oh, sorry, no. So this would only include deaths that are caused by medical conditions. So anything that's not like an injury or an accident or homicide or suicide. So those would be excluded. But for any other cause of death, you would look at deaths during pregnancy or within the six weeks after.
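The rule just described is simple enough to write down directly. This is an illustrative sketch under the definitions given in the conversation, not the actual WHO or CDC implementation; the field names and the set of external causes are simplified stand-ins (real classification works on ICD cause codes).

```python
# Illustrative sketch of the checkbox-era rule: a death counts as
# maternal if it occurred during pregnancy or within six weeks
# (42 days) of the end of pregnancy, AND the underlying cause is a
# medical condition rather than an "external" cause.
# Field names and the cause list are simplified for illustration.

EXTERNAL_CAUSES = {"fall", "accident", "homicide", "suicide", "injury"}

def is_maternal_death(pregnant_or_within_42_days, underlying_cause):
    """Apply the standardized classification rule described above."""
    if not pregnant_or_within_42_days:
        return False  # outside the pregnancy / six-week window
    # External causes (things that happen to you) are excluded;
    # medical conditions are counted.
    return underlying_cause not in EXTERNAL_CAUSES

print(is_maternal_death(True, "preeclampsia"))   # -> True
print(is_maternal_death(True, "homicide"))       # -> False (external)
print(is_maternal_death(False, "preeclampsia"))  # -> False (no checkbox)
```

The point of the sketch is the two-part test: the checkbox establishes the time window, and the external-cause exclusion filters out injuries, accidents, homicides, and suicides.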

And then they asked countries to add a checkbox to their death certificate to tick off whether the woman had been pregnant at the time of death or within the six weeks before. And then they could do some further investigation to follow up on the specific cause of death and try to understand whether it was worsened by pregnancy. But this started out as a way to standardize the counting of

deaths from maternal causes. So the changes were that, you know, if something happens to a woman when she's pregnant that is related to a medical event, not including... So basically the only things that are excluded are, like, homicides or suicides? Or I guess, how did they classify the stuff that was, like, separate from...

maternal death because obviously there are things like suicides that can be very related to pregnancy. So when you look at causes of death, there are broadly two categories of causes of death. One are medical conditions and causes that are kind of considered natural events. And then there are others that are called external causes. So that includes specific things like falls, accidents, suicides, homicides, injuries,

Basically things that kind of happen to you rather than like diseases that worsen over time. Are those changes what led to sort of the narrative around the maternal mortality crisis? So like in 2019, we have a research finding that the U.S. records twice as many maternal deaths as in 1999. Is that because of these changes in measurement?

It eventually led to that rise. So what happened was the International Classification for Diseases recommended that countries add this checkbox so that they could identify maternal deaths that had been missed in the past. And different countries adopted it at different times, but they also didn't necessarily use it to kind of

compile their statistics. So some countries actually used the checkbox directly and said, you know, any woman who has this pregnancy checkbox ticked and doesn't have her death caused by an external cause, we'll classify that as a maternal death and send that to the World Health Organization. But other countries didn't do that. So they had the checkbox, but they didn't use that data for further investigation.

So the U.S. is an example where in 2003, the U.S. decided to adopt this checkbox, but this change happened in different states at different times. So different states in the U.S. have slightly different procedures for certifying a death and they have slightly different death certificates and procedures.

Between 2003 and 2017, each of the states eventually implemented this checkbox. And because that was done in a gradual fashion, you saw that, you know, some states would implement this checkbox, they would identify maternal deaths through this new measurement method.

They would then report it to the CDC and then it would go forward into international statistics and so on. But because different states were doing that at different times, it seemed like the overall rate continued to keep rising between that period.

So if you were just looking at the statistics that the CDC was reporting about national maternal mortality, you would just see this rise happening over time. But what's going on underneath is that states are updating how they're measuring maternal deaths. And so this seemingly natural rise throughout the 2000s is actually a function of different states, at different times, changing how they do measurement? That's right. Okay.

So if you look within states, you can see a very clear sudden rise in the rates of maternal mortality that they report. So before the change, it was relatively stable for a few years. And then just after the change, the rates on average doubled in states and then remained stable after that.

So what do we actually know about maternal mortality rates right now then like relative to the 90s? Are they stable? Is it hard to tell? Does it feel like there's an overcount, an undercount? What do we think is going on given these changes?

It's a little bit difficult to tell because of how much that measurement change has made an impact. So I think in the years since all of the states implemented the measurement change, there's still been a slight rise, especially during the pandemic. I think it's difficult to say whether the rise without the pandemic would be greater, if that makes sense. So it's hard to say whether there is a rising trend

besides the measurement change now. But we have seen a rise since then, and I think that's partly attributed to COVID infections and hospital capacity and so on.

One thing I want to ask you about, too, though, is that, you know, I think the way that a lot of people have heard about this crisis is the racial disparities, the gap between maternal deaths for Black Americans and for white Americans. That racial gap, I think, has been a really big part of this narrative. What do we know about that gap? Has it been affected by this measurement change?

So one thing that's clear from the research is that that gap was present before the change and continues to be present now. So there is a racial disparity regardless of the measurement change. And at the same time, the measurement change had a bigger impact on maternal deaths that were counted among Black women.

So I think part of the reason for that is that in the past, this determination of whether pregnancy was actually the cause of death was quite difficult, and maybe that was especially the case among Black women. So the measurement change meant that fewer of those deaths were being missed than in the past. But we do see this racial disparity regardless of the measurement change.

So someone listening to this right now might kind of just be like, so what? Who cares? I guess it's fine that people updated. But, you know, if people were concerned about women dying, women dying in pregnancy is bad regardless. And if we're measuring more of those deaths now, and it's getting people to pay attention to the problem, why is this a problem? Why is it a big deal? Why are we talking about it? So why did you look into this? What made you think this was important to correct?

I looked into this mainly because we would often get this question. So I work at Our World in Data, and we put together data sets on global issues. And this was a big one: I think maternal mortality is a really important issue that we have historically made a lot of progress on.

And it's quite alarming to see that these statistics suggested there was a sudden rise in the U.S. that seemed to be a reversal of this trend of progress. I think it's quite important for people to be able to trust that the data is really showing a rise if there is a rise. And so trying to dig into what the cause of that rise was, was the point of this piece.

But I think it's generally important for people to know, you know, how much progress are we actually making? Are our policies working or are there new problems or challenges emerging that we need to tackle now? This is an example where fortunately that doesn't seem to be the case. And it was the result of a measurement change that helped us recognize a problem that had already been there. Mm-hmm.

But I do think that in general, it can be very useful for people to dig into these statistics. Yeah. I mean, one thing that I thought was really important is that there's only one cause of death that someone gets counted under, right? Some people after my article asked me, like, you know, why are you trying to minimize this problem of maternal mortality? Even one woman dying is terrible, and we should try to prevent that. And I think that's obviously true. But I think what people often don't realize is that if you're saying, oh, we should really push as many things into the maternal mortality cause of death as possible, then

you're necessarily also taking away from other causes of death, right? So if a woman dies because of depression and she was also pregnant, it's a difficult question whether that's classified as maternal mortality or suicide, or whether something is classified as a result of

high blood pressure or a heart attack. All of these things are really important medical areas that deserve a lot of attention. People dying in these ways is terrible. And so one of the things I think it's important to keep in folks' minds is that you have to know what the actual numbers are, as best as we can approximate them, given that there are a lot of assumptions built into these models regardless. Otherwise, you're not going to be able to address what is maybe the largest cause of death for women, or a large cause of death for people in general.

That's right. So on a death certificate, you would have one underlying cause of death. Doctors can also list other conditions that they thought contributed to the death. But in statistics, we have this single cause of death for each person. And that means that, you know, if we're saying, you know,

Actually, maternal mortality is much higher or much lower. That is actually changing the way that we're classifying deaths. So we're moving deaths from a particular cause to another. And so it would mean, you know, changing how we're able to tackle other problems as well.

I also think it's more than that, because one thing that you were pointing out is that there was this progress that we were making on maternal mortality, both in the U.S. but also worldwide. And then we saw this sudden reversal in the U.S. I think part of the problem with that is that it's not just drawing attention to a problem that people should care about. Obviously people should care about maternal mortality, but

The argument the data was making, right, was that something has changed in the last 20 years to make women more likely to die in childbirth. And if you're a policymaker, the thing that happens is you start looking for solutions to what's going on now, and you start looking for what's changed in the last 20 years. But if the reality is actually that the number of women who are dying is pretty stable, that means there are chronic issues that we should still continue to be addressing, and that leads you down different paths.

That's right. So I think it can be really misleading to have like an incorrect picture of what's happening. So not just because, you know, we would misinterpret whether our policies are effective, but I think you'd also go down a route where you're wasting time on this problem when there are actually important insights to consider.

that you could learn from the new type of measurement. And there are important things that you could do research on to understand what the causes of these deaths were, and whether they're preventable in some other way. All right, time for a quick break. More with Saloni when we get back.


So, I mean, just looking at the numbers, the story is: we tried to deal with an undercount of maternal mortality, and now we're slightly overcounting. And now our research institutions, folks like Our World in Data, but also researchers at different scientific journals, have been putting out studies showing that we think we are now maybe overcounting maternal deaths. That doesn't necessarily sound like a problem for our media or our research institutions. Sometimes we make changes. Sometimes we overcorrect. We undercorrect. We learn. We try to do better.

But I think what made me really concerned as I did research for my article is just how long it took for us all to update, when it seems like we kind of had this knowledge for years. I found research from 2017 showing that there were concerns that this maternal mortality narrative was being driven by a misunderstanding of the data, and even

there was a blogger from 2010, an OB-GYN, arguing against the crisis narrative, also saying, like, we need to take a closer look at these numbers. So from your perspective, I mean, you're a researcher, you're kind of in these spaces a lot. Why did it take so long for this narrative to be questioned?

It's hard for me personally to understand. So currently, maternal mortality statistics get collected by the National Center for Health Statistics in the U.S., which is part of the CDC. And because of these measurement changes that were happening in different states at different times,

they decided not to publish national-level data in their own reporting. And they didn't explain that there was this measurement change going on, and didn't alert researchers to the problem until about 2017, when all the states had implemented the change and they could look at its impact.

I'm not really sure why that was the case. Sometimes researchers, I think, have this caution around, we don't want to say something that we don't know is true. Like, we think that this might have been the reason for the rise, but we should wait until all the data is available before we write about it. I think that might have been one issue. I think another part of it is that, you know,

sometimes the communication just doesn't actually reach the general audience. So there were some researchers who knew about the problem, but they didn't manage to communicate it effectively to the general public or the CDC didn't manage to communicate it effectively to researchers. And it's a real shame because it means, you know, a lot of research time is wasted looking at something that is artifactual and not focusing on

what we now know about the problems, and also about these deaths that were previously unreported. Yeah. And obviously, I think it's important to talk about this kind of first-order concern about research and policymaking, but there's also a second-order concern. I think there's a level at which it created kind of a culture of fear downstream. Like, if I was just a reader of these kinds of articles, seeing these reports and studies, and of course it's important to tell these stories of women in childbirth, but

Part of it feels like there's a problem with how scientists communicate risk to the public, or how the public even understands risk. Right? Like, even if the original research was completely correct, and there had been a doubling from 1999 to 2019, right?

That means 505 women were counted as dying of maternal causes in 1999, and a little over 1,200 were counted in 2019. For context, there were more than 3.7 million births in 2019. Now, again, it's tricky, because I don't want to minimize the sadness of anyone dying. But it is important to place it in that context. So when scientists communicate,

maternal deaths are doubling, right? In my brain, I'm thinking thousands and thousands of women are dying, and I'm at serious risk of dying if I choose to get pregnant and have a kid. When in reality, this is a really, really safe time to give birth in this country, and people should feel much safer than they have perhaps at any point in history.
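To put those counts in context, the per-birth risk can be computed directly from the figures quoted above. A quick back-of-the-envelope sketch (the counts are the approximate ones mentioned in the conversation; the 1999 births figure isn't given here, so only the 2019 rate is computed):

```python
# Back-of-the-envelope arithmetic with the figures quoted above.
deaths_2019 = 1_200       # "a little over 1,200" counted maternal deaths
births_2019 = 3_700_000   # "more than 3.7 million" births in 2019

# Maternal mortality is conventionally expressed per 100,000 live births.
rate_per_100k = deaths_2019 / births_2019 * 100_000
print(f"{rate_per_100k:.1f} maternal deaths per 100,000 births")  # -> 32.4

# Expressed as an individual risk: roughly 1 death per ~3,100 births.
print(f"about 1 in {births_2019 // deaths_2019:,} births")
```

The absolute risk framing (tens of deaths per 100,000 births) and the relative framing ("doubled") describe the same numbers but land very differently, which is the communication problem discussed here.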

I mean, how do you think about communicating this kind of thing to the public? How do you kind of make decisions about whether you're talking in percentages or you're talking in more colloquial terms? Are there things that you think are really important for scientific communicators to do when talking about risks?

That's a really good question. I think the way that I usually tackle it is by giving people all the information. So not just focusing on, you know, has something doubled, but also telling people what the rates actually are. I think both of them can be useful for different purposes. For example, a doubling, as we said, if that was actually the case, it's really important for policymakers to look into it, find out what the reason was for this sudden reversal of progress, and, you know, make changes. Right.

But at the same time, for the individual person, it's not exactly very helpful, because they don't have the context to know, you know, is this a risk for me personally? And should I be thinking about it during pregnancy? So, you know, different people need different kinds of information on this. But I also think that we can treat people

as adults who can understand these issues if we communicate them correctly: giving people enough information to know, okay, this has gotten worse over time, if that was the case, but also that the risks in general are at this level, and in general you should not be that afraid of dying during pregnancy. Yeah. I mean, it's hard, probably increasingly in the age of social media, to segment communication this way.

I can imagine, you know, there being different ways that someone puts out a press release, or briefs a congresswoman, or talks to a committee or an NGO that's working on this problem. There are just different ways that you would talk to those groups, right? But in public, it becomes very difficult to segment your audience. You can't go, okay, well, this part of my podcast is for the scientists, and this part of my podcast is for the people who want to get pregnant. That's very difficult to do. And it's something where, obviously, I think the media has a big role in how this narrative has spread.

But it's also hard given that so much of how we communicate about science is downstream of the academy itself. So, you know, how we're hearing things, if a study gets big within academic research circles, it usually takes a couple of years for it to filter into journalism and filter into the general knowledge base after that. That's right. And so correcting that also takes several years. Yeah, exactly. And one thing

that really struck me about what you said, too, is this need to, given all these different competing ways that narratives get understood or contextualized, really just give people the information. And so one part of my reporting for my article that really shocked me was a statement from Christopher M. Zahn, the interim CEO of the American College of Obstetricians and Gynecologists,

where he wrote, quote, reducing the U.S. maternal mortality crisis to overestimation is, quote, irresponsible and minimizes the many lives lost and the families that have been deeply affected. That makes sense, but, you know, the why was what really struck me. He says it's because it, quote, would be an unfortunate setback to see all the hard work of healthcare professionals, policymakers, patient advocates, and other stakeholders be undermined.

And rather than pointing out any major methodological flaw in the paper here, Zahn's statement is expressing the concern that it could undermine the goal of improving maternal health. And, you know, obviously that's laudable, but...

That is not usually how we expect scientific fact-finders to make claims. I understand that academics worry about how their work will be operationalized in the real world, but I think both of us would contest that this would undermine the goal of preventing maternal mortality. I think what's true will help people. But...

But secondly, I think there's this just dominant sense within the public health research space that you need to be thinking about how your work will be perceived. And is this something that you see a lot in the academic community? So with that quote in particular, like, I'm not sure what his kind of reasoning was. I think your explanation makes a lot of sense. I think the other issue here is that, like,

It was not solely about, you know, overestimation or underestimation. It was also that, previously, these deaths were going unreported. Now we have this new system, which captures some of those unreported deaths but also introduces some false positives. So it's a little bit complicated, and it's difficult to have a one-line summary of whether it's been overestimated or not.

So I understand that. But I also think the way that we're communicating this has just not been very clear to people. And it is just difficult to, you know, communicate all of this stuff at once and try to have a clear picture that people can take away. And, like, I think...

Partly because this narrative around the maternal mortality crisis has lasted so long, it's difficult for people to now make this argument without seeming like they're backtracking or, you know, saying that all these things that we've been working on are not useful. I wonder if that's part of the reason. I'm not actually sure.

Well, I think one of the things about this that struck me goes beyond this. I brought up COVID at the beginning of this conversation because I think that was another time when a lot of trust was broken between public health communicators and the general public. You saw this theory of: we should try to get the public to act the way we want, not give them the information that we have. And I thought masks were the clearest case of that. So you had the situation where, you know,

At the very beginning, there was a real push to preserve masks for first responders and for nurses and doctors. And in order to do so, they kind of said, don't worry about masks. You don't need masks. Masks aren't important. Just stay at home. And later, there was a real push to get everyone to wear masks. And it kept coming up over and over again. I remember this happening all the time, both in my real life and on the Internet: people would just say, you guys said masks didn't matter. Now, all of a sudden, they do. Why would we believe you? You don't know what you're talking about. So, like...

I feel like this is a broader issue than just the maternal mortality space. Yeah, I definitely agree. I think part of the reason is that people are trying to do multiple things at once. They're trying to, you know, explain things. Maybe they don't have...

a great understanding of how best to explain everything at once. So they just think, okay, what's the goal that I have, and what should I say so that people will, you know, follow these guidelines? And it's really tricky because you don't really... It's like...

If you say something that's inaccurate because you're trying to achieve a certain goal, you don't know if that same statement is going to affect other issues that are quite important later on. And just like the example that you gave, I think what's much more helpful is to just

be quite clear about what your understanding is. What are the uncertainties? What are these different metrics that you should think about? And how could people misinterpret the statistic? And just tell them what the problems with that misinterpretation are,

rather than kind of hiding it from them and then waiting for them to think this is a contradiction. What's going on? Why did you lie to me before? Yeah. One pushback I got on my article was from folks who I think are very sincerely concerned with the crisis of maternal mortality. They either work in this space or they themselves feel like they didn't feel safe to have children or they didn't feel safe in their pregnancy. And they're very concerned about this.

And if you're an activist, right, if your goal is to make this issue prominent in both the media discourse and amongst politicians and policymakers, it has become a really core cultural understanding. And that's largely due to the fact that there have been lots of articles about this rise in maternal mortality.

And so I guess, like, what do you say to groups or to people who say, like, you know, I think that it's really important that we not push back on this narrative, even if it's not exactly right? In this case, it's kind of a strange, I think, takeaway for people to have. I think partly because what this measurement change has actually shown is that the actual number of maternal deaths was much higher than we had known in the past. The case is not that it was rising, but that it was already much higher than we thought.

And so it's not exactly that, you know, it's minimizing the problem or the crisis. What this research shows is actually these deaths were occurring, they were going unreported before. Now we know a lot more about what specific causes of death they came from and we have a much better ability to try to prevent those deaths.

So I think that's kind of how I would see it. And I think it's strange for people to say this story shows that the crisis was overblown. What is true is that it hasn't actually had this underlying rise over time, which contradicts what I guess a lot of people have been saying.

What was the reaction to folks when your piece published? Like, how did people respond to you? For me, it was a lot of people were just surprised. They hadn't heard about this measurement issue before. They just didn't know that it was what the process actually was. There wasn't that much pushback. It was more just like, why wasn't this communicated to us before?

And I mean, I think that's part of the problem that we're having now, too. This is an issue within the media as well: there's an asymmetrical thing with corrections. People see this on Twitter all the time, like a false tweet that's inflammatory will get thousands and thousands of retweets and likes and responses.

But a correction saying, oh, actually that was a misquote or whatever, will not reach nearly the same number of people. And so in part, this feels like a story about not, you know,

I guess, misinformation or mistakes within the scientific community, but rather the way that narratives flourish in the media environment that we're in right now. I think that's right as well. If this was about some other topic and this hadn't been a national story for a long time, I don't think people would have cared very much if a measurement changed. And I think that might have been part of the reason that it just didn't get much traction before because people are like, OK, well, this is just some little issue that only technical experts should care about.

But in fact, it really is part of this: any of these statistics that we look at can be affected by how we collect the data, how we interpret the data. And looking at these can be really important. And communicating them along with the statistics that we're sharing with people is very important, I think, just in case there are these issues that crop up. Yeah. Yeah.

I also wrote this article where I was looking at the COVID economic catastrophes that weren't. There are a bunch of predictions about what would happen with, you know, women dropping out of the labor force en masse, like 30 to 40 million people being at risk of eviction, state and local debt crises happening.

And, you know, all of these predictions are coming and like they don't come to pass. And one of the big things that I pointed out at the time is that it felt like we were just swimming in data, right? There were just so many numbers, so many studies, so many preliminary, you know, charts that people were putting out about various things. And I wonder how you feel about this, but

It's almost like we have such an abundance of numbers that we can put to arguments that it feels like people have a lot more certainty in the things that they're saying now than they may have had before. And a lot of these numbers, like they're good. They're telling us something, but they shouldn't give you full certainty that you understand everything that's going on in the world. Right. But it does seem like it's created a level of certainty when people are making arguments. I don't know. Do you feel like that's happening, too?

Yeah, I completely agree. I think it's a general problem that we have with statistics and numbers that, you know, they sound a lot more empirical than just, you know, telling people how you think things have changed or something. And, you know, in some ways that's good. It's really important to have empirical data on problems. But at the same time, it means I think sometimes people just take these numbers for granted and don't look into what the process was by which the data was collected and so on.

And what's tricky with this is that if you're seeing statistics all the time, it's really difficult for each person to have the time to go in and look at where this data comes from and try to understand that. I think it's really important for there to be people who do that on a regular basis, who know generally about the field and who can interpret the data and explain that in a clear way.

But it's not something that we should really be expecting of a general audience. And so I think some of the stuff that we do at Our World in Data helps with that. But also there are various other writers and statisticians who I think should be working much more in this science communication area to kind of help people interpret these statistics. Yeah, I mean, this is why I love Our World in Data. So feel free to sponsor the Good on Paper.

podcast. So, well, always our final question, you know, what is an idea that you thought once was good on paper, but then it didn't end up panning out for you in the end? I have a really silly example of this, which is that a few years ago, just before the pandemic, I had moved into a new apartment and I hadn't actually properly looked at, you know, whether it had a washing machine. I hadn't properly checked some of the utilities, and

One of the problems that I then discovered was that the dial of the radiator in the apartment was broken. And so it was just permanently on the hottest setting.

Oh, God. And like the building would shut off the radiator during some of the summer months, but it was pretty much boiling for a few months every year. Where were you? Was this in London? This was in London. It just happened to be this very large building where there was only one small maintenance team and they just never got around to fixing it in my apartment. I'm going to be honest. I don't think that was good on paper even to begin with. I think it was bad on paper and then it didn't turn out well at all.

That's true. Yeah. Well, thank you so much for coming on the show, Saloni. This has been fantastic to talk with you. And we cannot wait to have you back. Thank you. Yeah, I really enjoyed the conversation. I hope you have a great week. Thank you.

Good on Paper is produced by Janae West. It was edited by Dave Shaw, fact-checked by Enna Alvarado, and engineered by Erica Huang. Our theme music is composed by Rob Smirciak. Claudina Bade is the executive producer of Atlantic Audio, and Andrea Valdez is our managing editor. And hey, if you like what you're hearing, please follow the show and leave us a rating and review. I'm Jerusalem Dempsis, and we'll see you next week.