The Danger of Unscientific Polling

So by now you’ve undoubtedly heard that Donald Trump’s campaign to become the next president of the United States reached a new level of crazy this week. It all started when, on December 7th, Trump published a press release on his website calling for a “total and complete shutdown of Muslims entering the United States until our country's representatives can figure out what is going on” (emphasis mine).

At the center of Trump’s anti-Muslim immigration proposal was a statistic he’s used repeatedly to justify his stance. This statistic, taken from a poll commissioned in June by a conservative think tank, stated that "25% of [Muslims living in the US] agreed that violence against Americans here in the United States is justified as a part of the global jihad". The poll has already been largely dismissed in the media as unscientific because the entity that commissioned it, an organization called the Center for Security Policy, is a hard-line right-wing organization founded by Frank Gaffney, a man regarded as “one of America's most notorious Islamophobes”.

I don’t actually want to talk about Trump's anti-immigration proposal, nor do I want to talk about the credibility (or lack thereof) of Frank Gaffney or the Center for Security Policy. What I do want to talk about is the company Gaffney’s organization worked with to produce the research. The source of polling data is an important issue in the world of market research and one that's central to this controversy. But before we get into it, let me take a moment to provide some quick background on the current state of quantitative market research.

The World of Market Research is Evolving

The quantitative research and polling industry has evolved a great deal over the last 10 years. Traditional data collection methods, such as door-to-door interviews or telephone surveys, have been largely disrupted by online and mobile-based panel services. What used to cost hundreds of thousands of dollars and take 12+ months of planning, programming, field work and analysis can now be done for a fraction of the cost and time.

When it comes to online and mobile quantitative research there are many different types of research providers you can work with. The most common type, known as a panel provider, essentially manages a database of people who have opted in to participating in research and whom you can survey. What distinguishes a good panel provider is, at a minimum, adherence to ESOMAR’s guidelines for online research. That is a given, as I personally wouldn’t work with a research partner who doesn’t adhere to those standards. But beyond regulation and standardization, what I typically look for in a panel provider is the detail of how they source, recruit and maintain people on their panel. Good panel providers will be transparent about their recruitment practices and they will have a wealth of information about the people who sit on their panel. They will also be able to tell you about the policies that govern how they maintain the panel, such as criteria for exclusion or what steps they take to prevent oversampling.

There are also many new types of research providers and services popping up that offer alternative approaches to reaching a research audience. Google, for example, recently launched a new service called Google Consumer Surveys (GCS) which offers both a survey builder and access to an audience. Google’s approach to audience recruitment is noteworthy as it’s a little different from a typical panel provider’s. They employ two approaches to linking researchers with willing participants. The first is the Google Opinion Rewards program, an Android app that lets you opt in to take quick surveys in exchange for credits that can be spent in the Google Play Store (which is essentially a panel). The other approach is through media partnerships. Here they’re doing something known as river sampling, which is the practice of sourcing respondents on the fly through pop-up ads and/or banners. In the case of GCS, they work with media sites and offer free access to content that would typically sit behind a paywall in exchange for the participant completing a quick survey.

GCS offers a good solution for conducting quick and cheap research. However, it’s important that you understand the limitations (and risks) of this type of sampling. For example, when using GCS’s approach I would want to know if my sample was split between river sampling and their rewards app. Furthermore, for the participants who were sourced from a media site I would certainly want to know which site they were recruited from. Sampling from FOX News vs The New York Times, for example, would undoubtedly affect your results (note, I don't actually know which media partners Google works with).

The point is that when you’re working with a research agency or vendor who provides access to an audience, you should know exactly how they source the participants and you’ll need to consider whether their approach will impact your results or how you interpret the results.

So what’s an unscientific poll anyway?

There’s no single way to define this, but Sheldon R. Gawiser (Ph.D.) and G. Evans Witt offer some guidance here. They've compiled a list of 20 questions that they recommend every journalist consider before publishing results from a poll. The 20 questions cover a range of criteria, such as the sample size, recruiting methodology, sampling error, etc. But you need not look beyond the first two to understand why Trump’s darling poll is beyond questionable. First, who paid for the poll and why was it done? And second, who actually did the poll?

Who paid for the poll and why was it done?

We know the answer to the first question already. The Center for Security Policy commissioned the study, and one quick look at their website will give you a pretty good idea about their motivations for conducting the research. Gaffney, the organization's founder, has a long history of pushing anti-Muslim views in the most conspiratorial ways possible. For example, in a 2010 column he claimed that the Missile Defense Agency logo “appears ominously to reflect a morphing of the Islamic crescent and star with the Obama campaign logo,” part of a “worrying pattern of official U.S. submission to Islam.” Need I say more?

Who did the poll?

Now what about the company that actually carried out the poll? This is where things get really interesting. The poll was run in June of this year by a Washington, D.C.-based firm known as the polling company, inc., which I’ll refer to from here on as TPC.

It should go without saying that, in order for a research firm to be effective at what they do, they need to approach every research problem objectively and without bias. After all, the goal of any researcher should be the pursuit and discovery of truth, and achieving this requires a commitment to rigor and an openness to any outcome, regardless of the hypothesis. Simply put, they need to be non-partisan.

So does TPC meet this standard? Let’s take a look. Their website currently lists two employees: the founder and president, Kellyanne Conway, and the company’s Director of Research, Kevin Quinley. Here are a few interesting facts about TPC, all of which I was able to pull directly from their site:

  • The company’s founder, Kellyanne Conway, is a Republican strategist
  • Prior to joining the company, the Director of Research, Quinley, worked for Carlyle Gregory Company, a [Republican] political consulting firm where he worked pretty much exclusively with Republican political candidates and lobbyists
  • Their client list is a who’s who of political candidates and organizations. Every single one of their 22 listed political clients is associated with the Republican party:
    • African American Republican Leadership Coalition (Republican)
    • Camden County Republican Party (Republican)
    • Cape May County Republican Party (Republican)
    • Cumberland County Republican Party (Republican)
    • Former Ohio Secretary of State Ken Blackwell (Republican)
    • Fred Thompson Presidential Campaign (Republican)
    • Gary Palmer for Congress (AL-6) (Republican)
    • Governor Mike Pence (IN) (Republican)
    • Lee Zeldin for Congress (NY-1) (Republican)
    • National Federation of Republican Women (Republican)
    • National Republican Congressional Committee (Republican)
    • National Republican Senatorial Committee (Republican)
    • Newt Gingrich Presidential Campaign (Republican)
    • Rep. Steve King (IA-4) (Republican)
    • Rep. Dave Weldon (FL-15) (Republican)
    • Rep. Marsha Blackburn (TN-7) (Republican)
    • Rep. Tim Huelskamp (KS-1) (Republican)
    • Rep. Michele Bachmann (MN-6) (Republican)
    • Rep. Jack Kingston (GA-1) (Republican)
    • Republican Jewish Coalition (Republican)
    • Rod Blum for Congress (IA-1) (Republican)
    • Todd Hiett for Oklahoma Corporation Commission (Republican)   

Now I’m not saying that being a Republican is wrong. I don’t live in the U.S., and if I did I certainly wouldn't vote for them. Regardless, what I am trying to point out here is that TPC, a company whose entire reason for existing should be the pursuit of objective truth, is clearly partisan toward one particular political institution. In my book this disqualifies them from being taken seriously as a market research firm, as they are incapable of approaching a research problem without ideological baggage.

What’s interesting is that shortly after the controversy ignited this week, TPC published a press release on their website in an apparent move to address the backlash suggesting the research was shoddy. The press release provides details about the methodology and also draws some comparisons to other similar research that was conducted using the same approach. In particular, they state that many other research providers “like SurveyMonkey & Harris Interactive that are relied upon and quoted extensively use the same methodology – an online, opt-in panel.”

Indeed, many companies use online panels. But both SurveyMonkey and Harris Interactive manage and maintain their own panels. You can go to their websites, learn about their panels, who is on them and how each company recruits. Both companies also have the ability to match national, regional or local demographics in their panels, meaning they can ensure the sample used for your study reflects the full population.
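
To illustrate what that demographic matching buys you, here is a minimal sketch (Python, with invented numbers) of post-stratification weighting, one common way a panel’s demographic skew can be corrected back toward census targets:

```python
# A minimal, illustrative sketch of post-stratification weighting.
# All numbers are invented; "A" and "B" stand in for any demographic groups.

population_share = {"A": 0.50, "B": 0.50}  # target proportions (e.g. from census data)
sample_share = {"A": 0.70, "B": 0.30}      # what the panel actually delivered

# Each group's weight = target share / achieved share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Suppose 40% of group A and 20% of group B answered "yes" to some question.
observed_yes = {"A": 0.40, "B": 0.20}

unweighted = sum(sample_share[g] * observed_yes[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * observed_yes[g] for g in sample_share)

print(f"Unweighted estimate: {unweighted:.1%}")  # 34.0% -- skewed toward over-sampled group A
print(f"Weighted estimate:   {weighted:.1%}")    # 30.0% -- matches the population mix
```

Panel providers that can match demographics typically either recruit to quota, so the raw sample already matches these targets, or apply weights like these before reporting a topline figure.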

It’s evident that TPC doesn’t have its own panel, which means they had to outsource their data collection and field work to a third-party panel provider, a common practice for research agencies. But I would be very interested in knowing which panel provider they used for this research, as the quality of panel providers varies significantly.

Also, opt-in online panels, which is what TPC used for the research, have their shortcomings. Here’s a great article which explains why opt-in panels can sometimes be flawed. The main issue is that, depending on how the company sources and recruits its participants, opt-in panels may be composed of people who are systematically different from the rest of the population. This can produce results that are “significantly less accurate than results from randomly- (i.e., probabilistically-) selected panels.”

To be clear, I’m not saying that opt-in online panels are a fundamentally flawed approach. I use them frequently in my own job to conduct market research for brands, and when it comes to measuring high-level concepts such as brand awareness or mindshare, opt-in panels are well suited to the task (again, assuming they are maintained by a reputable company). But when it comes to measuring complex concepts such as a particular group’s interpretation of their religion and how this might affect their behavior, a more comprehensive and rigorous approach is needed. In fact, I would go so far as to say that online panel based research cannot, on its own, effectively answer this research problem. This is an incredibly complex and difficult concept to quantify, and to do it both scientifically and meaningfully would require a multi-method approach which utilises a mix of qualitative and quantitative research methods.
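
To see why “systematically different from the rest of the population” is such a problem, here is a toy simulation (Python, with invented numbers) of an opt-in panel that over-recruits from one highly engaged subgroup. The bias it produces isn’t random noise that a margin of error would capture; it’s baked into who ends up on the panel:

```python
import random

random.seed(42)

# Invented numbers for illustration only.
TRUE_RATE = 0.05             # share of the real population holding some view
SUBGROUP_RATE = 0.30         # share within a highly engaged subgroup the panel over-recruits
SUBGROUP_PANEL_SHARE = 0.40  # fraction of panelists drawn from that subgroup

def draw_opt_in_sample(n):
    """Simulate n respondents from a panel that over-represents the engaged subgroup."""
    answers = []
    for _ in range(n):
        rate = SUBGROUP_RATE if random.random() < SUBGROUP_PANEL_SHARE else TRUE_RATE
        answers.append(random.random() < rate)
    return answers

sample = draw_opt_in_sample(600)  # same n as TPC's poll
estimate = sum(sample) / len(sample)

print(f"True population rate:  {TRUE_RATE:.0%}")
print(f"Opt-in panel estimate: {estimate:.0%}")  # roughly 15%, three times the true rate
```

Adding more respondents from the same panel doesn’t fix this; the estimate converges on the panel’s skew, not on the population’s truth.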

The sample size is another factor worth discussing, as it has been a focal point of much of the coverage surrounding this controversy. Determining an adequate sample size for a study is something that’s not well understood and often misinterpreted. It’s kind of like bounce rate for websites: everyone seems to know what it is, but few actually understand how it’s calculated and fewer still know how to effectively interpret it. A sample size is determined through what’s called power analysis, which can be used either to determine the minimum sample size required to carry out your study, or to determine the minimum effect size that is likely to be detected in your study given your sample size.

Pew estimates the Muslim population in the U.S. is just below 1% of the total population, roughly 3.2 million people. The sample size for TPC’s poll was n=600, or roughly 0.02% of the target population for the study. Was this sample size large enough? There are a few ways to answer that. One is to run the power calculation: given a desired confidence level, margin of error and population size, you can calculate the minimum sample size. Another is to look at the margin of error (if one is reported) to understand how far the poll’s figure might be from the answer you’d get if the entire population were surveyed. For example, take the reported statistic that 25% of respondents said violence is justified: if the margin of error were 5 points, we could conclude that somewhere between 20% (25-5) and 30% (25+5) of the population would have picked the same answer if everyone had been surveyed.
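
To make that arithmetic concrete, here is a rough sketch (Python) of the textbook formulas for a proportion’s margin of error and the minimum sample size needed for a target margin at 95% confidence. These formulas assume a simple random sample, which an opt-in panel is not, so any margin computed this way for TPC’s poll would flatter it:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random sample of size n,
    at ~95% confidence (z = 1.96). Valid only for probability samples."""
    return z * math.sqrt(p * (1 - p) / n)

def min_sample_size(margin, p=0.5, z=1.96):
    """Minimum n needed for a given margin of error at ~95% confidence,
    using the conservative assumption p = 0.5."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# If 25% of a truly random sample of 600 picked an answer, the margin of error
# would be roughly +/- 3.5 points...
print(f"MoE for p=0.25, n=600: +/- {margin_of_error(0.25, 600):.1%}")

# ...and about 385 respondents would be needed to guarantee a +/- 5-point margin.
print(f"n needed for +/- 5 points: {min_sample_size(0.05)}")
```

With a population of 3.2 million the finite-population correction is negligible, so the question was never really whether 600 is a big enough number; it’s whether those 600 were selected in a way that makes the formulas meaningful at all.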

TPC did not report a margin of error for this poll, which is suspect. But to be frank, even if they had reported the sampling error, the main issue comes back to how they sourced the data and whether TPC is capable of approaching this research objectively. In my opinion TPC fails at every level to be taken seriously as an unbiased market research agency. In fact, they’re not a research agency. They are a partisan political consulting firm that happens to do research.

The danger of unscientific polls

This is the danger of unscientific polling. It’s far too easy to produce data that creates an illusion of truth, which people like Frank Gaffney and Donald Trump then use to propagate a belief that simply isn't true. The problem is that people like Trump, and those who follow him, don’t care about scientific rigor or the issues that can arise from poorly designed research. They don’t question the integrity of the study; they accept the figures as they are because it validates an existing bias and belief system. To them, the truth doesn't matter. What matters is simply blind validation. The kind of validation that reinforces a shallow and insular view of the world.

You don’t need to be a statistician to understand what scientific polling is, or to be able to spot a potentially dubious statistic reported in the news. A good place to start is Sheldon R. Gawiser (Ph.D.) and G. Evans Witt’s 20 questions to ask when looking at poll results. I also highly recommend Statistics Done Wrong by Alex Reinhart and Wrong by David Freedman.

The bottom line is, don’t ever take a poll at face value without considering where it came from and how it was produced. It’s sad to say, but given the frequency at which polls are conducted and reported these days, they are likely more often wrong than right. It’s the responsibility of the research agency to ensure they have approached the research problem fairly and objectively, which, as we’ve seen, is not always the case. But more importantly, it’s your responsibility to scrutinize the data and where it came from before accepting or rejecting it as truth.