They are everywhere, trying to grab our attention. And they succeed. Public opinion polls claim to adapt statistical research methods to the measurement of beliefs. Scientific? Perhaps, but polling also operates with hidden goals because it is part of the marketplace.
In 2003 Retro Poll investigated how this works with a poll comparing knowledge and opinions before the invasion of Iraq. The poll found that the media-promoted government misinformation about Iraq’s possession of weapons of mass destruction conditioned public responses about going to war. Those who believed the hype that Iraq had WMDs and was linked to Al Qaeda terrorism favored war by 2:1, but 75 percent of people who could see through that charade opposed U.S. aggression.
Polls are like multiple-choice exams where the student is expected, through rote learning, to supply a conclusion based on memorized course material. The course material is the news that the media markets to the public.
Surprisingly, sometimes even the polling professionals are unaware of their role in this model. Polls usually (and subtly) limit the range of answers and ways of looking at any problem to what has been in the public’s eye through the corporate media inputs. Given a restricted range of information, opinion research promotes “obvious” opinion answers to a problem without the respondents’ awareness that their choices have been limited.
An ongoing discussion among members of the American Association for Public Opinion Research (AAPOR) reveals how polls may constrict options in any debate or discussion. Back in late 2003 Retro Poll first asked people’s views on impeachment. When the question was posted on the AAPOR List, some argued that impeachment was not a legitimate issue to ask about because no one in Congress or the media was discussing it.
Other AAPOR members criticized the question as “leading” because we asked people whether or not “misleading the public and Congress on weapons of mass destruction in Iraq” was grounds for impeachment of the President. These polling gurus did not want the factual presentation to go beyond the media parameters at a time when the media was only just beginning to expose the truth.
So the market-based approach to opinion research leads polling, in general, to reflect the restricted media discourse and to limit the public’s ways of responding—the range of choices. In other words, what you see is what you get—in the worst sense when what you see is incomplete. Or “garbage in, garbage out.”
Parenthetically, Retro Poll was actually surprised that, in two separate polls six months apart, more than 39 percent thought misleading Congress and the public on weapons of mass destruction was grounds for impeachment. If accurate, that reveals a very deep strain of anger at the regime’s deception as early as November 2003.
Yet the public is often puzzled by poll results. People just can’t ignore them, especially when they hit topics that are important to us. Reading the polls, some conclude that the public is a herd of passive, mindless sheep, uncritical thinkers easily misled by reactionary ideas (call this the “What Happened to Kansas?” camp). Others believe that the polling methods themselves are a fraud (“you can’t tell what millions of people are thinking by asking 568 or 1,000 people”). Both of these views are erroneous.
To the extent that public views are sheepish, that is often a product of the polling methods, which create this mirage by limiting the field of discussion and information. Of course many people do have poorly informed opinions, but polls tend to empower particular strains of misinformation. Think about it: this explains why so much money now goes into polling and why we hear, incessantly, so many poll reports.
In truth, polls do serve a very specific social function: they tend to disempower legitimate dissent by negating an analytical or fleshed-out discussion or understanding of political realities. They tend to highlight and encourage mindlessness in the poll respondents and inferential “punditry” in the poll audience reading summaries, much as reality TV and product marketing do.
This process is not driven by “Right-wing” ideology but by the behavioral psychology normally used in marketing to create audience needs and wants vicariously, by linking products to desirable outcomes like youthfulness, sexuality, and attractiveness. In polling, what is usually suggested is the safety of being part of an implied national consensus, which supports an ideology that is implicit rather than explicit.
Another thing to consider: in general, polls (even when they accurately reflect public opinion and disagree with those in power) have marginal impact on policy decisions, because there are few costs to policy makers in ignoring the numbers. If consistent public opinion mattered, Congress would not have voted repeatedly to outlaw abortion, the United States would be funding most birth control and HIV treatment worldwide, U.S. troops would no longer be in Iraq, everyone displaced by Katrina would have been helped to return to New Orleans, and we would all have a national health insurance card in our pockets.
These are things that people consistently support in polls. But public opinion matters only when it is backed by credible threats of ever more protracted and militant action. Even then it’s not the numbers that matter but the level of organization and resistance. So polls are less about informing policy makers than about putting the public under a magnifying glass and measuring how we respond to stimuli.
Even though opinion polls are often ignored by policy makers, their numbers (and funding) expand faster than the GDP, because polls serve to validate the cultural and ideological dominance of the corporate media and solidify the limited scope of alternatives presented in those media.
In this way, polls tend to moderate popular resistance, as does any virtual-reality frame that engages people’s attention and emotions. You may feel good when a poll shows people agree with you and cynical when you believe a poll shows people are taken in by propaganda, but in both cases you conclude that you have a better appreciation of something real, and in neither case are you impelled to action. Like viewers of reality TV, our relationship is voyeuristic and vicarious, and our participation is emotional and reflexively passive.
Pollsters and the media use statistical tests to appear to verify that opinion polls accurately reflect the opinions of the general population. The scientific basis is that truly random samples of fairly small size (e.g., 1,000) drawn from a very large population will reflect the larger population’s views in a high (greater than 95 percent) proportion of cases. However, because polls are not really random samples, the standard error based on the normal distribution does not apply. Even good scientific work in health care uses a human-made approximation of randomness, such as assigning every other patient through the door to a drug or placebo. Yet most polls today cannot come close to that standard. Among the problems:
1. Many people refuse to participate when contacted by random phone calls (sometimes more than 70 percent) and we never know if their views are the same or different from those who do participate.
2. A growing number of people have only cell phones and are not reached by standard methods; they skew younger.
3. The largest ethnic minorities in the United States (African-Americans and Latinos) consistently participate at lower rates than European Americans.
4. People who screen their calls and don’t answer the phone may differ in views from others.
5. Poor people, not to mention the homeless, are less likely to have phones or be reachable.
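To see how far these problems can push a result, here is a minimal sketch, with entirely invented response rates, of how differential nonresponse alone (problems 1 through 5 above) can shift a poll’s reported number well away from true public opinion:

```python
# Hypothetical illustration of nonresponse bias; all numbers are invented.
# Suppose the population truly splits 50/50 on a question, but people on
# one side answer the pollster's call at a higher rate than the other.

def observed_support(true_support, rate_yes, rate_no):
    """Share answering 'yes' among actual respondents, given different
    response rates for the 'yes' and 'no' groups."""
    yes_respondents = true_support * rate_yes
    no_respondents = (1 - true_support) * rate_no
    return yes_respondents / (yes_respondents + no_respondents)

# True opinion: 50% yes. 'Yes' group responds at 30%, 'no' group at 20%.
est = observed_support(0.50, 0.30, 0.20)
print(f"Poll would report {est:.0%} support")  # reports 60%, not 50%
```

The bias disappears only when both groups respond at identical rates, which is exactly the assumption pollsters cannot verify.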
That doesn’t mean that any given poll result fails to reflect what the larger population would say. It does mean that we can’t claim it reflects public views within a certain range of accuracy. As a result, when you hear on TV that a poll is accurate to plus or minus 3 percent, that’s a misrepresentation of the truth. Election exit polls are one exception: they choose respondents the same way medical researchers do, the responses are factual (e.g., whom did you vote for?), and a higher proportion of people agree to participate.
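For reference, the “plus or minus 3 percent” figure comes from the standard margin-of-error formula for a simple random sample. The sketch below shows the arithmetic (using the conventional z = 1.96 for 95 percent confidence); note that it is valid only under the truly-random-sample assumption that, as argued above, most polls do not meet:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of size n. Valid ONLY if the sample is truly random;
    p=0.5 gives the worst (widest) case."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # roughly +/-3.1%
print(f"n=568:  +/-{margin_of_error(568):.1%}")   # roughly +/-4.1%
```

This is where the familiar sample sizes of about 1,000 and the TV claim of “accurate to plus or minus 3 percent” come from; the formula says nothing about nonresponse or question framing.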
Still, the more important issue is that “what the general population believes” has actually been fixed before an opinion poll begins by the type of questions, the general context of disinformation, and the outlook of those who summarize and report the data.
An editorial in the liberal Washington Post of July 21, 2006 helps explain why support for Israel, for instance, is stronger in the United States than anywhere else in the world. The Post editorialized that a ceasefire in Lebanon would be problematic because it would give succor to the Hizbollah aggressor, mimicking Israel’s line while totally ignoring Israel’s massive invasion of Gaza, the death and destruction that preceded current events, and the fact that Israel’s attack destroyed Lebanon’s infrastructure and indiscriminately killed so many civilians. The facts have been distorted to fit the analysis.
Unfortunately, “What Happened to Kansas?” is not a “Red State” problem located in the Midwest. The problem is embedded in the market-driven approach to public opinion manipulation. Long ago, survey research was founded to ascertain people’s (and communities’) needs and aspirations, where consensus building can be a positive social function. Today, although survey research still plays that role, the big money is in opinion polling, which, like market research for products, is often fraught with hidden intent, bias and misrepresentation. Let the buyer beware.
Marc Sapir lives in Berkeley, practices medicine part time with Alameda County and directs Retro Poll (www.retropoll.org). Retro Poll seeks volunteers and donations for its upcoming September poll. Marc can be e-mailed at email@example.com.