The Danger of Unscientific Polling

So by now you’ve undoubtedly heard that Donald Trump’s campaign to become the next president of the United States reached a new level of crazy this week. It all started when, on December 7th, Trump published a press release on his website calling for a “total and complete shutdown of Muslims entering the United States until our country's representatives can figure out what is going on” (emphasis mine).

At the center of Trump’s anti-Muslim immigration proposal was a statistic he’s used repeatedly to justify his stance. This statistic, taken from a poll commissioned in June by a conservative think tank, stated that "25% of [Muslims living in the US] agreed that violence against Americans here in the United States is justified as a part of the global jihad". The poll has already been largely dismissed in the media as unscientific because the entity that commissioned it, the Center for Security Policy, is a hard-line right-wing organization founded by Frank Gaffney, a man regarded as “one of America's most notorious Islamophobes”.

I don’t actually want to talk about Trump's anti-immigration proposal, nor do I want to talk about the credibility (or lack thereof) of Frank Gaffney or the Center for Security Policy. What I do want to talk about is the company Gaffney’s organization worked with to produce the research. The source of polling data is an essential issue in the world of market research and one that's central to this controversy. But before we get into it, let me take a moment to provide some quick background on the current state of quantitative market research.

The World of Market Research is Evolving

The quantitative research and polling industry has evolved a great deal over the last ten years. Traditional data collection methods, such as door-to-door canvassing or telephone surveys, have largely been disrupted by online and mobile-based panel services. What used to cost hundreds of thousands of dollars and take 12+ months of planning, programming, fieldwork and analysis can now be done for a fraction of the cost and time.

When it comes to online and mobile quantitative research, there are many different types of research providers you can work with. The most common type, a panel provider, essentially manages a database of people who have opted in to participate in research and whom you can survey. What distinguishes a good panel provider is, at a minimum, adherence to ESOMAR’s guidelines for online research. That is a given, as I personally wouldn’t work with a research partner who doesn’t adhere to those standards. But beyond regulation and standardization, what I typically look for in a panel provider comes down to the details of how they source, recruit and maintain people on their panel. Good panel providers will be transparent about their recruitment practices, and they will have a wealth of information about the people who sit on their panel. They will also be able to tell you about the policies that govern how they maintain the panel, such as criteria for exclusion or the steps they take to prevent oversampling.
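To make that last point concrete, here’s a minimal sketch of what a couple of panel-hygiene rules might look like in code. Everything here (the record fields, the thresholds, the helper itself) is hypothetical; it’s only meant to illustrate the kind of exclusion and oversampling policies a good provider should be able to describe to you.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Panelist:
    panelist_id: str
    last_completed: date       # date of most recent completed survey
    completes_last_90d: int    # surveys completed in the past 90 days
    flagged_duplicate: bool    # e.g. matched by device/IP fingerprinting

def eligible(p: Panelist, today: date,
             cooldown_days: int = 14, max_completes_90d: int = 6) -> bool:
    """Simple exclusion rules aimed at preventing oversampling."""
    if p.flagged_duplicate:
        return False
    if (today - p.last_completed) < timedelta(days=cooldown_days):
        return False
    if p.completes_last_90d >= max_completes_90d:
        return False
    return True

# A panelist who completed a survey yesterday is still in cooldown.
p = Panelist("abc123", date(2015, 12, 9), 2, False)
print(eligible(p, today=date(2015, 12, 10)))  # False
```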

There are also many new types of research providers/services popping up that provide alternative approaches to getting access to a research audience. Google, for example, recently launched a new service called Google Consumer Surveys (GCS), which offers both a survey builder and access to an audience. Google’s approach to audience recruitment is noteworthy, as it’s a little different from a typical panel provider’s. They employ two approaches to linking researchers with willing participants. The first is their Google Opinion Rewards program, an Android app that lets you opt in to take quick surveys in exchange for credits that can be used within the Google Play Store (which is essentially a panel). The other approach is through media partnerships, where they use what’s known as river sampling: the practice of sourcing respondents on the fly, typically through display banner ads. In the case of GCS, they work with media sites and offer free access to content that would typically sit behind a paywall in exchange for the participant completing a quick survey. GCS offers a good solution for conducting quick and cheap research.

However, it’s important that you understand the limitations (and risks) of this type of sampling. When using GCS, for example, I would want to know how my sample was split between river sampling and the rewards app. And for the participants sourced from a media site, I would certainly want to know which site they were recruited from: sampling from FOX News versus The New York Times would undoubtedly affect your results.

The point is that when you’re working with a research agency or vendor who provides access to an audience, you should know exactly how they source the participants, and you’ll need to consider whether their approach will impact your results or how you interpret them.
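As a rough illustration of what that scrutiny might look like, here’s a small sketch that breaks survey results down by recruitment source. The records and source labels are invented; the point is simply that a pooled topline figure can hide large differences between sources.

```python
from collections import Counter, defaultdict

# Hypothetical respondent records: (recruitment_source, answer)
responses = [
    ("rewards_app", "agree"), ("rewards_app", "disagree"),
    ("river:news_site_a", "agree"), ("river:news_site_a", "agree"),
    ("river:news_site_b", "disagree"), ("river:news_site_b", "disagree"),
]

# Tally answers separately for each recruitment source.
by_source = defaultdict(Counter)
for source, answer in responses:
    by_source[source][answer] += 1

# Compare each source's result to what the pooled topline would show.
for source, counts in sorted(by_source.items()):
    n = sum(counts.values())
    print(f"{source}: n={n}, agree={100 * counts['agree'] / n:.0f}%")
```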

So what’s an unscientific poll anyway?

There’s no one way to define this, but Sheldon R. Gawiser, Ph.D., and G. Evans Witt offer some guidance here. They've compiled a list of 20 questions that they recommend every journalist consider before publishing results from a poll. The 20 questions cover a range of criteria, such as the sample size, recruiting methodology and sampling error. But you need not look beyond the first two questions to understand why Trump’s darling poll is beyond questionable. First, who paid for the poll, and why was it done? And second, who actually did the poll?
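On sampling error specifically, the arithmetic is simple enough to sanity-check yourself. Here’s a quick sketch of the standard 95% margin-of-error calculation for a reported proportion. The sample size below is hypothetical, and note the caveat in the comment: the formula assumes a probability sample, which is exactly what an opt-in poll is not.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.
    Not strictly valid for opt-in samples, where selection is non-random."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. a reported 25% from a hypothetical sample of 600 respondents
print(f"±{margin_of_error(0.25, 600):.1%}")  # ±3.5%
```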

Who paid for the poll and why was it done?

We know the answer to the first question already. The Center for Security Policy commissioned the study, and one quick look at their website will give you a pretty good idea of their motivations for conducting the research. Gaffney, the organization's founder, has a long history of pushing anti-Muslim views in the most conspiratorial ways possible. For example, in a 2010 column he claimed that the Missile Defense Agency logo “appears ominously to reflect a morphing of the Islamic crescent and star with the Obama campaign logo,” part of a “worrying pattern of official U.S. submission to Islam.” Need I say more?

Who did the poll?

Now, what about the company that carried out the poll? This is where things get interesting. The poll was run in June of this year by a Washington, D.C.-based firm known as the polling company, inc., which I’ll refer to from here on as TPC.

It should go without saying that, in order for a research firm to be effective at what they do, they need to approach every research problem objectively and without bias. After all, the goal of any researcher should be the pursuit and discovery of truth, and achieving this requires a commitment to rigour and an openness to any outcome, regardless of your hypothesis. Simply put, they need to be non-partisan.

So does TPC meet this standard? Let’s take a look. Their website currently lists two employees: their founder and president, Kellyanne Conway, and the company’s Director of Research, Kevin Quinley. Here are a few interesting facts about TPC, all of which I was able to pull directly from their site:

  • The company’s founder, Kellyanne Conway, is a Republican strategist

  • Prior to joining the company, the Director of Research, Quinley, worked for Carlyle Gregory Company, a [Republican] political consulting firm where he worked pretty much exclusively with Republican political candidates and lobbyists

  • Their client list is a who’s who of political candidates and organizations. Every single one of their 22 listed political clients is associated with the Republican party:

    • African American Republican Leadership Coalition (Republican)

    • Camden County Republican Party (Republican)

    • Cape May County Republican Party (Republican)

    • Cumberland County Republican Party (Republican)

    • Former Ohio Secretary of State Ken Blackwell (Republican)

    • Fred Thompson Presidential Campaign (Republican)

    • Gary Palmer for Congress (AL-6) (Republican)

    • Governor Mike Pence (IN) (Republican)

    • Lee Zeldin for Congress (NY-1) (Republican)

    • National Federation of Republican Women (Republican)

    • National Republican Congressional Committee (Republican)

    • National Republican Senatorial Committee (Republican)

    • Newt Gingrich Presidential Campaign (Republican)

    • Rep. Steve King (IA-4) (Republican)

    • Rep. Dave Weldon (FL-15) (Republican)

    • Rep. Marsha Blackburn (TN-7) (Republican)

    • Rep. Tim Huelskamp (KS-1) (Republican)

    • Rep. Michele Bachmann (MN-6) (Republican)

    • Rep. Jack Kingston (GA-1) (Republican)

    • Republican Jewish Coalition (Republican)

    • Rod Blum for Congress (IA-1) (Republican)

    • Todd Hiett for Oklahoma Corporation Commission (Republican)

The highlights above are my own. What I am trying to point out here is that TPC, a company that brands itself as a research and polling firm, is clearly partisan to one particular political institution. In my book, this disqualifies them from being taken seriously as a market research firm, as they are incapable of approaching a research problem without ideological baggage.

What’s interesting is that shortly after the controversy ignited this week, TPC published a press release on their website in an apparent move to address the backlash suggesting the research was shoddy. The press release provides details about the methodology and draws comparisons to other research conducted using the same approach. In particular, they state that many other research providers “like SurveyMonkey & Harris Interactive that are relied upon and quoted extensively use the same methodology – an online, opt-in panel.”

Indeed, many companies use online panels. But both SurveyMonkey and Harris Interactive manage and maintain their own panels. You can go to their websites and learn about their panels and how they’re managed. Both companies can also match national, regional or local demographics within their panel, meaning they can ensure the sample used for your study reflects the full population.
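For a sense of what that demographic matching involves, here’s a toy post-stratification example that reweights a skewed sample against known population shares on a single variable. All of the numbers are invented, and real weighting schemes (e.g. raking across several variables at once) are considerably more involved.

```python
# Known population shares for one demographic variable (assumed figures)
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# A hypothetical panel sample that over-represents younger respondents
sample_counts = {"18-34": 500, "35-54": 300, "55+": 200}

n = sum(sample_counts.values())
weights = {group: population_shares[group] / (sample_counts[group] / n)
           for group in population_shares}

# Respondents in under-represented groups count for more than 1.
for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")  # 0.60, 1.17, 1.75
```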

It’s evident that TPC doesn’t have its own panel, so it would have had to outsource its data collection and fieldwork to a third-party panel provider, a common practice for research agencies. I would be very interested in knowing which panel provider they used for this research, as the quality of panel providers varies significantly.

Also, opt-in online panels, which TPC used for the research, have their shortcomings. Here’s a great article which explains why opt-in panels can sometimes be flawed. The main issue is that, depending on how the company sources and recruits its participants, opt-in panels may be composed of people who are systematically different from the rest of the population. This can produce results that are “significantly less accurate than results from randomly- (i.e. probabilistically-) selected panels.”
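To see how that systematic difference plays out, here’s a toy simulation. It assumes (with entirely invented numbers) two population segments that differ both in how they’d answer a question and in how likely they are to join an opt-in panel; the opt-in estimate lands well above the true figure.

```python
import random

random.seed(0)

# Two invented segments: (population share, P(agree), P(joins opt-in panel))
segments = [
    (0.2, 0.60, 0.30),  # a highly engaged minority
    (0.8, 0.10, 0.05),  # everyone else
]

# Simulate the population: each person gets an answer and a join decision.
population = []
for share, p_agree, p_optin in segments:
    for _ in range(int(share * 100_000)):
        population.append((random.random() < p_agree,
                           random.random() < p_optin))

true_rate = sum(agree for agree, _ in population) / len(population)
panel = [agree for agree, joined in population if joined]
panel_rate = sum(panel) / len(panel)

print(f"true agreement rate:   {true_rate:.1%}")   # ~20%
print(f"opt-in panel estimate: {panel_rate:.1%}")  # ~40%, badly inflated
```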

To be clear, I’m not saying that opt-in online panels are a fundamentally flawed approach. I use them frequently in my own job to conduct market research for brands, and when it comes to measuring high-level concepts such as brand awareness or mindshare, opt-in panels are well suited to the task (again, assuming a reputable company maintains them). But when measuring complex concepts, such as a population’s interpretation of their religion and how this might affect their behaviour, different methodologies may be needed, or a triangulation of methodologies. I would go so far as to say that online panel-based research cannot, on its own, effectively answer this research problem. This is an incredibly complex and challenging concept to quantify, and doing it in a scientific and meaningful way would require a multi-modal approach.

The danger of unscientific polls

One of the biggest problems with polling is that it can be weaponized, as it’s far too easy to produce data that creates an illusion of truth. The problem is that people like Trump, and those who follow him, don’t care about scientific rigour or the potential issues that can arise from poorly designed research. They don’t question the integrity of the study, and they accept the figures as they are because they validate an existing bias and belief system. To them, the truth doesn't matter. What matters is simply blind validation: the kind of validation that reinforces a shallow and insular view of the world.

You don’t need to be a statistician to understand what scientific polling is, or to spot a potentially dubious statistic reported in the news. A good place to start is Sheldon R. Gawiser, Ph.D., and G. Evans Witt’s 20 questions to ask when looking at poll results. I also highly recommend Statistics Done Wrong by Alex Reinhart and Wrong by David Freedman.

The bottom line is: don’t ever take a poll at face value without considering where it came from and how it was produced. It’s sad to say, but given the frequency with which polls are conducted and reported these days, they are likely wrong more often than they are right. It’s the responsibility of the research agency to ensure they have approached the research problem fairly and objectively, which, as we’ve seen, may not always be the case. But more importantly, it’s your responsibility to scrutinize the data and where it came from before accepting or rejecting it as truth.

Stephen Tracy

I'm a designer of things made with data, exploring the intersection of analytics and storytelling.

https://www.analythical.com