Stephen Farnsworth is Professor and Director of the Center for Leadership and Media Studies at the University of Mary Washington in Virginia. He received a Ph.D. and an M.A. in government from Georgetown University, a B.A. in history from the University of Missouri-Kansas City, and a B.A. in government from Dartmouth College. Dr. Farnsworth, who worked for 10 years as a newspaper journalist before becoming a professor, has lectured widely on the news media, the presidency, and elections before US and international audiences. inFOCUS editor Shoshana Bryen spoke with him in late November.
inFOCUS: Every time you turn around, there’s a poll that says people think this, people think that. How much should we be invested, and specifically, how much should our government be invested, in the results of polling?
Farnsworth: Polling is a very important mechanism for understanding where people are on issues. One of the big differences between the way that the federal government operates compared to some of the states is that there isn’t an opportunity for the public to be heard at the highest level.
iF: So, this is their chance to say, “I want to tell you how I feel.”
Farnsworth: In California and some other states, you have an opportunity to express yourself by voting in a referendum. The United States federal government doesn’t offer those opportunities. So, when elected officials are wondering what a representative sample of Americans believes, polling is certainly a more useful mechanism than what comes across on Twitter or what they get in the mail.
iF: Should that incline members of Congress to look at polling primarily from their own districts or is national polling probably representative for them?
Farnsworth: Polls are not generally conducted at the district level; they tend to be conducted at the state or national level. But you can get some sense of what your constituents feel by looking at data for a survey question that goes beyond your district. Let’s say you represent a very Republican district. You can look at a national survey and see what Republicans say and get some sense of how that might compare to your own district that might be more or less Republican than the national average.
iF: How much difference does it make if the public that you poll isn’t well informed? People like to discuss their opinions, but they don’t always know things.
Farnsworth: That’s always a risk. One of the limitations of polling, of course, is that you ask about something that people are not well informed about and they may offer you an opinion anyhow. They may believe it, but it just may not be based on much in the way of evidence. One of the dangers to a healthy civic culture in America today is that it’s very easy for false information to be spread online and the consequences can be very, very damaging to the validity of public opinion. If you think about how Russian efforts in 2016 used social media to build up divisions on both sides of the partisan divide, you can see just how damaging that can be. Then of course, Americans are not above misleading other Americans for political advantage either.
iF: Do you worry about that increasing in the future, this tendency of people to get social media information or online information?
Farnsworth: I think it’s really important for people to appreciate that in this modern environment, they simply have to be their own editor. Once upon a time, we could count on the gatekeepers of traditional media, the folks deciding what’s going to be on the CBS Evening News or what’s going to be on the front page of The New York Times to minimize the amount of false information that came our way. But now, everybody is sort of equal in the conversation on a place like Facebook or Twitter and that means that we have to have a “BS detector” that is more refined than ever.
iF: This raises the question of the “salience,” or importance to the respondent. I remember a question asking about Israel. People were very favorably disposed, but then they were asked, “How important is the issue of Israel to you?” And Israel came in behind plumbing problems, number 27 or 46 in salience.
Farnsworth: There are polls that ask how important issues are. This is a standard question often employed around elections: “When you’re thinking about voting for this candidate, what’s the most important issue?” We just had legislative elections in Virginia, for example, and we found that gun control was a major driving force for people thinking about how they wanted to vote in November 2019. Over time, we can track the extent to which Republicans have benefited more or less from an emphasis on gun issues. Looking at the changing electorate of Virginia, we’re now at a point where gun control is at least as likely to help Democrats as Republicans.
One of the key things that we find consistently across the surveys is a conviction that domestic matters are far, far more important to most voters than international ones, and I think politicians tend to be pretty responsive to the anxieties, concerns, and preferences of voters. People in this country are concerned about economic security, the cost of healthcare, retirement, and education.
There is of course a concern about immigration, but that’s a bit further down the line compared to economic issues, which pretty consistently are the dominant concerns of American voters.
iF: Is that irrespective of the state of the economy at the moment?
Farnsworth: It’s actually pretty consistent. The rare cases where international concerns are more important than domestic ones are in moments of international crisis. In the immediate aftermath of 9/11 for example, international concerns were a much bigger deal than usual. At the time that the war in Iraq started, international concerns were front and center – but those pretty quickly recede and you go back to an environment where most voters care more about domestic matters than international ones.
iF: Two things: One, weighted surveys, and how much pollster bias there might be in weighting surveys a certain way. The other is non-response bias: what happens when you can’t find the people you want to poll?
Farnsworth: One of the big challenges that pollsters face right now – and this is dramatically increasing the cost of surveys – is getting people to pick up the phone. People get so many junk calls that there’s a very, very high level of letting it ring.
iF: I admit to being in that group.
Farnsworth: If you have to call three or four times as many people to get the number of survey respondents that you want, that’s going to be a significantly more expensive survey than it used to be. That’s a challenge, to be sure. But with various demographic checks, you do have a chance to see how closely the random sample you’ve developed matches the population.
For example, if you were surveying the country as a whole, you could look at the percentage of the population that has a college degree, or the percentage that is African American, or female; you know basically what those numbers should look like. You have to check whether your sample matches the norms of the group you’re looking at. The same is true for a state survey. We have, from census records, very precise indications of the age, racial, and educational diversity in a given state. You can use those and adjust the sample a little bit, not to manipulate the results, but rather to create a slightly closer approximation of the group of people being surveyed. So, an African American respondent in a survey of the state as a whole might effectively count as 1.1 persons, to give you a sample that overall looks more like the state or the country, depending on the kind of survey you’re doing.
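To make that arithmetic concrete, here is a minimal sketch, in Python, of the kind of demographic weighting Dr. Farnsworth describes. The census targets and sample shares below are hypothetical numbers chosen purely for illustration; they are not figures from any actual survey.

```python
# Minimal sketch of demographic (post-stratification) weighting.
# If a group makes up 20% of the population but only 18% of the raw sample,
# each respondent in that group effectively counts as 0.20 / 0.18, roughly 1.1 persons.

population_share = {"Group A": 0.20, "Group B": 0.80}   # hypothetical census targets
sample_share     = {"Group A": 0.18, "Group B": 0.82}   # hypothetical raw-sample shares

weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}

for group, w in weights.items():
    print(f"{group}: each respondent counts as {w:.2f} persons")
```

The point of the sketch is only the ratio: a group that is slightly under-represented in the raw sample is weighted slightly above 1, and an over-represented group slightly below 1, so the weighted sample approximates the known population mix.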
iF: So that’s what people mean when they talk about weighted surveys.
Farnsworth: Yes. They’re trying to create a more accurate picture of the underlying population. One thing you would never want to weight on in a survey, though, is party identification. Many states, Virginia among them, do not record party identification at registration, so any weighting target would be a wildly reckless guess. And even in states that do record it, party identification goes up and down depending on who the governor is, whether people want to participate in the other party’s primary, or a host of other factors.
Most pollsters are actually trying hard to produce as professional an output as possible.
Now obviously, there are individual groups that want certain outcomes and they will write questions that will increase or decrease the number of people answering, “Yes,” to that question based on the result that they want. There may be sampling techniques that are being employed by advocates of one issue or another to try to ramp the numbers up in a way that they find acceptable. This is another example of how we as citizens need a finely tuned BS meter.
iF: Let’s take a half-step back. First, has the number of hang-ups or non-responses on the telephone increased? Second, has that led pollsters to use mechanisms like texting, cell phone numbers, and opt-in surveys, and are those as accurate as the traditional mechanism of calling people on a landline? Because then you know where they live and you could say, “This is this demographic. This lives over here.”
Farnsworth: The norm now in survey research that uses random digit dialing is to weight more heavily toward cell phones than landlines. We did a Virginia statewide survey a couple of months ago, and we were 65-35 favoring cell phones over landlines. But one of the first things we ask in our surveys is whether the respondent is a current resident of Virginia; we may be calling a 703 area code, but that alone doesn’t tell us where the person lives now.
iF: Do you get a lot of people who have carried their cell phones somewhere else? And do you miss people who moved to Virginia from other states and don’t have a Virginia area code?
Farnsworth: We would have no way of knowing that somebody with, let’s say a 212 Area Code from New York City, might be living in Virginia, and we would never ever call a 212 number on the off chance that they happen to live in Virginia now. So, yes. We might be missing some people.
iF: I want to ask about the confidence people have in polls. One poll I saw indicated that only 36 percent of Republican respondents thought opinion polling was mostly or always accurate. Sixty percent of Democrats in the same poll said they believed it was mostly or always accurate. Maybe this goes to the question, “Who doesn’t want to answer polls?” If you already believe the polls are not necessarily reliable, maybe you don’t want to answer them.
Farnsworth: There are a lot of potential problems with who chooses to participate and chooses not to, but there are only so many things that are under our control. One of the things that you do find in survey work is that when the person is on the phone, they are often very interested in sharing their opinion, be they liberal or conservative. If they’re willing to participate in the survey, they tend to be all in.
iF: Does this relate to margin of error? We see polls that come with a two percent margin of error. What is margin of error measuring?
Farnsworth: Let me give you an example. Let’s say you have 1,000 pennies and you throw them up in the air, they land on the table, and you count how many of them come up heads. The most likely result would be 500 heads and 500 tails, but if it were 505 and 495, that wouldn’t strike you as particularly problematic. Now if all of a sudden you throw the coins up and you get 800 heads to 200 tails, you might say, “There might be something about these coins.” What effectively we do with a survey is try to come up with a sample that is a rough approximation of the public as a whole.
In a state with millions of voters, we will survey a thousand people, and through random digit dialing and double checks against demographic factors we already know about a given state (or, if it’s a national survey, the country), we try to get as close as we can, understanding that we can never be sure it’s a perfect match. So if you take that image of the coins, you can see what we’re trying to do. We say, “We think this survey would come out 500 heads and 500 tails, but it might be 505-495.” Something approximating that is the best that we can do. Again, that assumes this is a professional survey that’s trying to accurately reflect the interests of the public. If you wanted a survey that was disproportionately liberal, for example, call on Sunday morning. If you want a sample that is disproportionately older, call on a weekend evening.
iF: Do you ever worry that in a country of 320 million people, a thousand may not be a large enough sample?
Farnsworth: A thousand is the norm, and the plus or minus three percentage points is a reasonable approximation. It’s important to remember that when you hear a survey that says, “Elizabeth Warren is two points ahead of Joe Biden, so she’s taken the lead,” that’s not exactly what the survey tells you. What the survey tells you is that they’re both within the margin of error and that they’re very close. Our best guess is that maybe a candidate has gained a little ground or lost a little ground since the last survey, but unless there’s a really big move, it may be a statistical artifact. It may be noise.
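As a rough check on that “plus or minus three points” figure, the textbook margin-of-error formula for a simple random sample can be computed directly. The short Python sketch below uses the standard 95 percent confidence calculation with a worst-case 50/50 split; the sample sizes are illustrative, and it ignores design effects from weighting.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A thousand respondents lands close to the familiar plus-or-minus three points.
for n in (500, 1000, 2000):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f} percentage points")
```

Run as written, a sample of 1,000 comes out to roughly plus or minus 3.1 percentage points, which is why a two-point gap between candidates sits inside the margin of error.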
iF: You were saying, you have to be your own editor and you have to keep certain things in mind – margin of error certainly is one of those.
Farnsworth: I think a lot of people practicing journalism may not fully understand the extent to which these are approximations. Quite simply, a poll number should never be read as an exact political temperature. A thermometer can tell you exactly how hot or how cold it is at a given moment. Survey research can tell you that it’s hot rather than cold, but the exact degree reading – that’s asking too much of a thousand-person poll.
iF: That’s a great point. What about the contention by some people that accuracy is undermined by the fact that people may give you the answers they think you want? Not because a poll is biased, but if a candidate is way ahead in the polls, sometimes people don’t want to say, “Well, yeah, but I’m for the other guy.” Does a desire to be in the mainstream influence how people respond?
Farnsworth: Anything is possible, but people are quite willing to express their opinions these days, even unpopular ones. The assumption would be that most people answer honestly. And if someone were inclined to, say, lie to a pollster, would that person be interested in doing a survey in the first place?
iF: True. I’m finding this very interesting. What is push polling? I had never heard of that one before.
Farnsworth: Push polling tries to impact an election by asking questions to make people think twice about a candidate they might support. It might be an extra “Did you know that…” comment. Let’s say you’re working for the Democrats and you’re trying to convince an Evangelical voter to vote Democrat. In this hypothetical scenario, you might ask a person, “Does it matter to you whether a person violates his marriage vows?” And then you might ask, “Does it bother you when a politician promoting family values has been divorced?” And then you say, “Did you know that Donald Trump is on his third marriage?” “And did you know…”
In a hypothetical example like this, you would be asking questions designed to create an environment where the person would be, in this case, more hostile to voting for Donald Trump. That’s not a survey designed to elicit how a person feels about Donald Trump. It’s designed to manipulate how the person feels about Donald Trump.
iF: My last question is about accuracy. Taking the last couple of election cycles, maybe beginning with the 2008 presidential election and coming forward, how do you think pollsters are doing? I know there were some big misses, but there were also some very good forecasts.
Farnsworth: Overall, survey research has been pretty good at predicting outcomes, particularly when you’re looking at the volume of surveys being conducted. You can see that results from different methodologies and different sampling approaches actually tend to cluster. The most important thing to remember is that polls are snapshots of a moment in time.
If the election were held the same day as the survey, the survey is more likely to predict accurately. But if the survey takes place two weeks before the election, then intervening factors may mean that where people were two weeks earlier may not be where they’d be on election day. I think you absolutely saw that in terms of the final weeks before the 2016 presidential election.
When James Comey reopened an investigation into Hillary Clinton and her emails in late October 2016, it had a significant impact on the political narrative, I think.
iF: Did you see that reflected in polling?
Farnsworth: People who study media coverage saw significantly more critical treatment of Hillary Clinton in the news once the FBI investigation was reopened. But in some of the closer states there weren’t surveys conducted after that, and some states were already conducting early voting. At the state level, not all of those polls were conducted as close to the election as would have been optimal for figuring these things out.
But even if you’re conducting the survey the day before the election, or on election day itself, you can’t count on anything more than a rough approximation within a few percentage points. If you expect a survey to accurately predict a statewide election in which the margin between the two candidates is a few thousand or a few tens of thousands of votes, you’re probably going to be disappointed. The measurement tool is simply not that precise. You wouldn’t measure the length of an airplane with a yardstick. You would get a general idea of how long the plane is, but each time you put that yardstick down, how much error might there be between where the old one stopped and the new one started?
iF: If you had one thing to tell our readers as they watch the polls for the upcoming presidential year, what should they be most aware of?
Farnsworth: The key thing for every citizen, whether consuming news, polls, or news about polls, is to appreciate that these are approximations. The process is the best that survey research can do, but it’s still not a perfect match for where the voters are. There are certain things you always want to see in a poll: the exact question wording, when it was conducted, and how it was conducted. You should have answers to those questions before you take the results seriously.
iF: Great advice. I want to thank you on behalf of the Jewish Policy Center, our members, and inFOCUS readers.