Tuesday 12 March 2013

On-line research has some major advantages. It is often less expensive than standard methods and also quicker to yield results. However, as currently practiced, it is fatally flawed.

"We're perpetuating a fraud," says Simon Chadwick, former head of NOP Research in the U.K. and now principal of Cambiar, a Phoenix consultancy.

Surveys tend to poll the same people over and over. In fact, a study done by ComScore Networks indicated that one-quarter of one percent of the population provides about one-third of all on-line responses. This means that instead of getting one vote, each of these respondents is getting the equivalent of 128 votes. We are getting the same people responding over and over again to earn points so they can win a toaster. Or, as Mr. Chadwick calls them, "professional respondents who go hunting for...dollars."
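
If you want to see where a figure like 128 votes comes from, here is a back-of-the-envelope sketch in Python. The inputs are an assumption on my part: "about one-third" is treated as 32% of responses, which is what reproduces the 128 exactly; ComScore's unrounded numbers aren't given here.

```python
# Rough check of the over-representation factor implied by the ComScore figures.
# Assumption: 0.25% of the online population supplies roughly 32% of all
# responses ("about one-third"); the exact inputs are inferred, not stated.

share_of_population = 0.0025   # one-quarter of one percent of people
share_of_responses = 0.32      # their share of all on-line responses

# If opinions were counted per person, each heavy responder would get one
# vote. Counted per response, their effective vote is inflated by this ratio:
votes_per_person = share_of_responses / share_of_population
print(f"Effective votes per heavy responder: {votes_per_person:.0f}")
# -> 128; reading "one-third" literally as 33.3% gives roughly 133 instead.
```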

What's so terrible about professional respondents, you might ask? Pulitzer Prize-winning New York Times science writer Natalie Angier says: "Nothing tarnishes the credibility of a sample like the desire to be sampled.... a good pollster will hound and re-hound the very people who least want to cooperate." So not only are these people ridiculously over-represented, they are the wrong people.

"It's like the hole in the ozone layer," said Shari Morwood, VP-worldwide market research at IBM, in an article in Advertising Age. "Everyone knows it's a growing problem. But they just ignore it and go on to the next project." Kim Dedeker, VP-consumer and market knowledge at P&G, describes an example in which online and mail surveys came up with diametrically opposed results. "If I only had the online result.... I would have taken a bad decision right to the top management," she said. In another case, two surveys conducted a week apart by the same online researcher yielded completely different recommendations.

Furthermore, most of these on-line researchers don't validate their samples. They don't know who is responding. It could be my daughter using my computer saying she's me. Or saying she's you, for that matter. And if all that weren't enough, many of them don't limit responses. I can log in as five different people and respond five different times. Or fifty. Or a hundred and twenty-eight.

Another lovely bit of hokum they perpetrate is the degree of confidence. They tell us that their results are accurate with a 95% degree of confidence. However, they never quite tell us what it is that they're confident about. Is it that, in general, a study with this many legitimate respondents will be statistically valid 95% of the time? Or is it that their interpretation of subjective data will be 95% accurate? (By the way, no one's interpretation of subjective data is 95% accurate.) Or is it something else? Let's give them the benefit of the doubt for a minute and say that their sample is legitimate (which is highly unlikely) and that they are brilliant people who can interpret data almost flawlessly. Let's take a look at what a 95% degree of confidence means under the best of circumstances.

Once again we'll turn to Ms. Angier, from her book The Canon. Here's an example she gives. You go for an HIV test. You test positive. The test is said to be 95% accurate. This means you have a 95% chance of having the HIV virus, right? Not even close. What it means is that 95% of the time, people who have the HIV virus will test positive. But it also means that 5% of the time, people who do not have the HIV virus will test positive.

Now let's say you live in a town with 100,000 people. Fortunately, the HIV virus is very rare and only appears in 1 person out of 350. So in your town of 100,000 people, this means that there will be about 285 people with the HIV virus (100,000 divided by 350). But if we tested all the people in your town, we would get about 5,000 positives (remember, 5% of the time people who do not have the virus will test positive) and almost all of these 5,000 positives would be false. In fact, when you do the math, after testing positive not only is there not a 95% chance you have the virus, there is about a 5% chance you have it. And an almost 95% chance you don't have the virus.* So much for a 95% level of confidence.

We advertising and marketing people are drowning in opinions and starving for facts. But we have to be very careful about distinguishing between the two. In the advertising world, research is no different from creative work. Some of it is very good, some of it is worthless and dangerous.

* To figure out the accuracy of the result, you divide the total number of true positives you'd expect from your sample (95% of 285, or about 271) by the total number of true and false positives (about 5,257), and you wind up with a probability of actually having the HIV virus of about 5.2%, not 95%. If you can't follow the math, and you don't trust me, don't worry. You can trust Ms. Angier; she has a Pulitzer Prize. All I have is interactive marketing communication.
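
For anyone who wants to run the footnote's arithmetic themselves, here is a minimal Python sketch of the same calculation. The one assumption it adds is reading "95% accurate" as a 95% true-positive rate paired with a 5% false-positive rate, which is how the example above uses the term.

```python
# Worked version of the footnote: what a positive result from a
# "95% accurate" test really means when the condition is rare.

town_population = 100_000
prevalence = 1 / 350          # 1 person in 350 actually has the virus
true_positive_rate = 0.95     # infected people who test positive
false_positive_rate = 0.05    # healthy people who wrongly test positive

infected = town_population * prevalence               # about 285 people
healthy = town_population - infected

true_positives = true_positive_rate * infected        # about 271
false_positives = false_positive_rate * healthy       # about 4,986
all_positives = true_positives + false_positives      # about 5,257

# Chance you are actually infected, given that you tested positive:
p_infected_given_positive = true_positives / all_positives
print(f"Chance a positive result is real: {p_infected_given_positive:.1%}")
# -> about 5.2%, not 95%
```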
