Tuesday, July 28, 2009

How Does Polling Affect Political Campaigns?

Part 7 of our 10-part series: “21st Century Campaigning”

You’ll often hear politicians claim that they don’t pay attention to the polls - especially politicians who are behind in them. But that notion is typically just political rhetoric. Even if the candidate is ignoring the polls, his or her campaign staff and consultants do not.

Campaigning is both an art and a science: some aspects require inherent political savvy, talent, and creativity, while others must be measured empirically - a necessity for winning a race. Polling falls in the latter category. Watching and even commissioning polls helps a campaign track where it stands and where it needs to go.

But aren’t polls often inaccurate?

No poll is perfect, but political scientists, junkies, and campaigners alike put a lot of work into figuring out which polls are worth watching.

Nate Silver - the author of the popular blog FiveThirtyEight.com - is one such political junkie. During the 2008 presidential campaign he estimated Barack Obama’s and John McCain’s chances of winning using an in-depth system of poll analysis.

As he found in April of last year, all polls suffer not only from normal sampling error (usually below 4%) but also from what he calls “Pollster-Introduced Error” (or “PIE”) - error introduced by poor methodology. For example, how a question is worded can lead a respondent to answer differently than they would if it were put to them another way. In fact, there is an entire science behind that concept.
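That sub-4% sampling-error figure falls out of the standard margin-of-error formula at a 95% confidence level; here is a minimal sketch (the sample sizes shown are illustrative, not drawn from any particular poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling error at 95% confidence.

    p=0.5 maximizes the variance p*(1-p), so this is the
    largest margin a simple random sample of size n can have.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (600, 1000, 1500):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
```

A typical 600-person sample lands right at about 4%, which is why that figure shows up so often - and note that this is only the sampling error, before any PIE is added on top.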

Another problem is who is asking the question. A poll by Fox News, for example, is likely to show a conservative politician or position polling better than a New York Times survey would. This is because a pollster who identifies himself as calling from Fox News is more likely to have a liberal voter hang up on him, whereas a New York Times pollster is more likely to have a conservative voter hang up on him.

Similarly, a campaign’s internal surveys are likely to be more skewed towards its own candidate than an independent poll would be. Even when the campaign outsources the job to a polling firm - which it almost always does - the poll is likely to suffer from a larger PIE than an external survey.

Still, internal polls are important because they can gather data that an independent survey might not. Smaller races (in fact, almost anything smaller than a targeted Senate race) are particularly dependent on this additional data, because independent pollsters pay little attention to them.

Aren’t those internal surveys just “push-polls” though?

A campaign will track polls carefully to see how it’s doing - an important way to determine empirically whether its strategy is working. But to figure out what strategy to employ in the first place - specifically the candidate’s message - a survey is taken early on in the campaign to test the waters of the electorate.

These “message-testing” surveys are very different from push-polls: they are a legitimate tool for planning a campaign’s message, platform, and candidate image. Push-polls, by contrast, are intended to directly influence the respondent’s opinion under the guise of a message-testing survey.

The American Association for Public Opinion Research explains how a respondent can tell the difference:

Identifying Advocacy Calls Made Under the Guise of Research

Political telemarketing calls, when disguised as research, may sometimes be difficult to differentiate from a legitimate survey. Here are characteristics that will usually indicate to a respondent that the call is not a legitimate survey.

-One or only a few questions are asked, all about a single candidate or a single issue.

-The questions are uniformly strongly negative (or sometimes uniformly positive) descriptions of the candidate or issue.

-The organization conducting the calls is not named, or a phony name is used.

-Evasive answers are given in response to requests for more information about the survey.

In addition, the following characteristics will indicate to journalists, reporters, and survey professionals that a telephone call is not a legitimate survey.

-The number of people called is very large, sometimes many thousands.

-The calls are not based on a random sample.

-It is difficult to find out which organization conducted the interviews.

[Identifying]…Message Testing

… One way to tell is that message-testing surveys exhibit the characteristics of a legitimate survey, such as:

-At the beginning of the call, the interviewer clearly identifies the call center actually making the calls. (However, legitimate political polling firms will often choose not to identify the client who is sponsoring the research, be it a candidate or a political party, since that could bias the survey results.)

-The interview contains more than a few questions.

-The questions usually ask about more than one candidate or mention both sides of an issue.

-Questions, usually near the end of the interview, ask respondents to report demographic characteristics such as age, education level, and party identification.

-The survey is based on a random sample of voters.

-The number of respondents falls within the range of legitimate surveys, typically between 400 and 1500 interviews.

In 1996, citing the increasing practice of push-polling, the American Association of Political Consultants publicly condemned the tactic and required its members to abstain from it under the AAPC’s Code of Ethics.

But the concept of push-polling really became famous during the 2000 GOP presidential primaries, when the Bush campaign was widely accused of waging a push-poll against Sen. John McCain (R-AZ) in South Carolina. Callers reportedly asked respondents whether they were aware that - among other negative and untrue things - McCain was the father of an illegitimate black child. The incident was so appalling and became so famous that little of the practice has been seen since.

So how is the 21st Century changing the effect of polling on campaigns?

While modern opinion polls have been around since George Gallup correctly predicted FDR’s re-election in 1936, in the last few years they have become increasingly significant with the rise of bloggers.

One such blog is RealClearPolitics.com, which in 2000 began compiling polls of the close presidential contest between Gov. George W. Bush (R-TX) and Vice President Al Gore (D-TN). As time passed it would update the outlook of the election with the RCP Poll Average. It was a helpful way for the average political junkie to see where the candidates stood without worrying too much about the numerical differences between individual surveys.

Next in that tradition was FiveThirtyEight.com. Founded in 2008 by Silver - a former baseball statistician - it tracked the candidates’ standing even more precisely. 538 analyzed the polls by weighting them, tracking them state by state, and applying statistics that the typical American (and even most folks in campaign politics) wouldn’t understand. With the internet providing a platform for the blog, millions of Americans could follow Silver’s rolling predictions.
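To give a feel for what “weighting” polls means - and this is purely an illustrative toy, not Silver’s actual model - here is a sketch in which each poll’s weight grows with its sample size and decays with its age:

```python
def weighted_average(polls, half_life_days=14):
    """Toy weighted poll average.

    polls: list of (candidate_share_pct, sample_size, age_in_days).
    Weight = sqrt(sample_size) * exponential decay by age, so a big,
    fresh poll counts for more than a small, stale one.
    """
    total_weight = weighted_sum = 0.0
    for share, n, age in polls:
        weight = (n ** 0.5) * 0.5 ** (age / half_life_days)
        weighted_sum += weight * share
        total_weight += weight
    return weighted_sum / total_weight

# Hypothetical polls: 52% (n=1000, 2 days old), 49% (n=600, 10 days),
# 51% (n=800, 20 days). The newest, largest poll dominates the average.
polls = [(52.0, 1000, 2), (49.0, 600, 10), (51.0, 800, 20)]
print(round(weighted_average(polls), 1))
```

A real model would also adjust for each pollster’s house effects and historical accuracy, but even this simple version shows why a weighted average can beat staring at any single survey.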

So on Election Day, when - according to Silver - McCain had roughly a 1% chance of winning, it was really no surprise that Obama could declare victory that night. In fact, it was such a long-shot for McCain to win that the 538-reading Obama supporter had nothing to worry about.

And that’s exactly the point.

Campaigns have had to tell their supporters for years not to pay attention to the polls. Supporters who take the polls seriously will either figure their candidate has it in the bag (if he or she is ahead) or conclude that their candidate doesn’t have a chance.

Similarly, in 2004, many criticized the national news media when exit polls leaked midway through Election Day. The exit polls showed Sen. John Kerry ahead in the presidential race, which may have encouraged Bush supporters to get to their polling places before they closed - and Kerry supporters to stay home.

Luckily for Obama in 2008, most of his supporters remained skeptical of the polls - and of 538’s findings. Not only did they make sure to vote, but they carried on the campaign’s GOTV activities as if the election could come down to a single vote.

But with analysis as sophisticated and accurate as 538’s, such blogs pose a real problem for campaign workers. With the information so readily accessible in the internet age, it can go a long way towards influencing supporters: if your candidate’s scientifically measured chances of winning are very high (or, conversely, very low), why bother volunteering or giving money?

That will be the challenge for campaigns in the future.

“21st Century Campaigning” returns Friday with Voter ID!
