The Forum - On Line Opinion's article discussion area



Pressuring politicians and populist terrorism : Comments

By Geoff Alford, published 26/6/2008

Newspaper polls are often just political grandstanding and bring the market research and polling industry into disrepute.

Usual Suspect: awesome.
Posted by rstuart, Friday, 27 June 2008 9:45:55 AM
Monkey bones heh? I wondered how some proselytising Christians arrived at some of their conclusions.
I, on the other hand, have no such problems with the underpinning reasoning of statistical analysis.
Having said that, I am often uncomfortable with poll analysis of opinions, on the grounds that statistics work best with absolutes: numbers and known states, i.e. limited options (binary). With opinions, they are a dubious snapshot of an intangible.
As stated in the article, the construction of the questions is often imprecise, and therefore the answers require qualification or further explanation. Even the skill and variations of the questioners can influence or skew responses. All techniques rely to some degree on accurate information. There are infinite ways to skew the data input and therefore obtain a suspect result (the GIGO principle).
A pollster will argue that aggregation and other techniques will balance out the variations, and on a broad scale that's relatively true, but it relies on the sample size and the randomness of the sample. I'm often not convinced of either. The greater the chance of error, the bigger the sample needs to be. No one technique fits every case.
While there is some evidence that these techniques are accurate to a point, they often go wrong by overstating, or glossing over, the reasons for a given result.
A swing towards a political party on an issue may be for reasons beyond those tested for, or may depend on several factors.
If we add to that proprietary secrecy, cost compromises and the need for (saleable) simple answers over the more complex and therefore more accurate ones, we finish up with the stated "Yes, but…" answers, minus the "but".
Without proper qualification, the public usually receives overly simplified data, which is then processed through their personal competencies and biases.
In short, is it a science? Perhaps, but whether it is good science depends on the issues above.
Therefore it's a very dangerous way of determining policy.
Posted by examinator, Sunday, 29 June 2008 10:57:51 AM
I agree with the general thrust of the Examinator's post. However, "measurement validity" needs emphasis.

When you ask people questions in a survey, are you really measuring what you think you are measuring? This is a fundamental GIGO principle, as Examinator says, and statistics cannot resolve it. If you are measuring GIGO (rubbish), then doubling the sample simply means double GIGO (double rubbish)!

I give an example regarding "recall questions". If you ask grocery buyers how much canned dog food they buy in a random sample household-based survey, and project their answers to ABS household estimates, using a weighting factor (“Population Households” divided by “Sampled Households” ), the final estimate is close to what dog food manufacturers know they manufacture.

Most grocery buyers know how many dogs they have (1, 2 or 3), feed them daily, and know the size of the can (225 or 450 gms) and how many days there are in a week (7). So the weekly estimate is simply "number of dogs x number of cans per day x size of can x 7 days per week x household weighting factor".
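That projection can be sketched in a few lines of code — a minimal illustration only, with made-up figures for the household and the weighting factor (the 8,000,000 / 2,000 split below is hypothetical, not from the Roy Morgan work):

```python
# A minimal sketch (hypothetical figures) of the canned-dog-food projection:
# one household's weekly purchase, projected to the population via the
# weighting factor described above.

def weekly_canned_estimate(dogs, cans_per_day, can_grams, weight):
    """Weekly grams projected to the population.

    weight = population households / sampled households.
    """
    return dogs * cans_per_day * can_grams * 7 * weight

# Illustrative household: 2 dogs, 1 can per dog per day, 450 g cans,
# weighting factor 4,000 (e.g. 8,000,000 households / 2,000 sampled).
grams = weekly_canned_estimate(dogs=2, cans_per_day=1, can_grams=450, weight=4000)
print(f"{grams / 1_000_000:.1f} tonnes per week, projected")  # 25.2 tonnes
```

The point of the example is that every factor in the multiplication is something the respondent genuinely knows — which is exactly why the canned estimate lands close to the manufacturers' figures, and the "recall"-based dry-food estimate below does not.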

Ask the same questions in that same survey about “dry dog food”, and the final estimates are inflated 200-400%. Serving dry dog food involves shaking food pellets into the dog’s bowl when it runs low. Grocery buyers may think “I regularly feed my dog, and buy a kilo-packet of dry dog food every 1-2 weeks”. In reality, “recall” collapses time and regularises behaviour – hence the inflated estimate.

At the time, I was General Manager of the Roy Morgan Research Centre and we could compare survey “recall estimates” with independent and objective measures.

Hence, when social researchers report that “X% of people have sex twice a week”, or that “Y% of people experiment with drugs on a weekly basis”, you wonder whether those figures are grossly inflated – and yet they are being used to form social policy.

All survey research should carry a “health warning” – “this survey may be dangerous to policy development!”
Posted by geoffalford, Sunday, 29 June 2008 1:31:36 PM
Here is a second concern. Many researchers in academia and the private sector do NOT understand “statistical significance”. Tests of “statistical significance” merely tell you whether a survey result is likely due to chance or not. They cannot tell you whether it is important.

In a random sample survey of 2,000 people, suppose 53% of men “approve Rudd’s performance” and 47% of women. Given that sample size, that 6-point difference is “statistically significant”: it is unlikely to be due to chance – there is a small difference in the views of men and women. But is it important? Should the Labor Party undertake an advertising campaign, or develop new policies (involving large government expenditure) to appeal to women? The simple answer is that the difference is so small, it is hardly important. Of course, Labor analysts should compare elements of Rudd’s performance where women and men have different perceptions, but it does not mean that Rudd has a major problem with women. Yet, I fear that this is exactly how politicians react to polls.
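The significance-versus-importance distinction can be made concrete with a standard two-proportion z-test — a sketch only, assuming (since the post doesn't say) an even 1,000/1,000 split of men and women in the 2,000-person sample:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# 53% of 1,000 men vs 47% of 1,000 women approve (assumed 50/50 split
# of the 2,000-person sample in the text).
z, p = two_prop_z(0.53, 1000, 0.47, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
# z is about 2.7 and p well under 0.05 -- "significant", yet the gap
# is only 6 points. The test says nothing about whether that matters.
```

The test answers only one question — "could this gap be chance?" — and nothing about whether a 6-point gap warrants an advertising campaign.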

If I were Rudd’s advisor, I would be more concerned that only 50% of people approve Rudd's performance, and would be looking at the other 50% who do not approve! Judging whether a result is important is outside the realm of statistics – it is a judgment based on the size of the difference, or it may lead to a different insight altogether, as in my reading of that poll result.

Examinator is right to suggest that survey research is “a science”, but also to question whether it is “good science”.

What needs emphasis is that all sample-based research in the academic and private sectors should primarily focus on ensuring that “valid measures” are being undertaken. “Validity” is a necessary condition for sound scientific measurement. Statistical issues are only important for testing whether estimates or differences could be due to chance sampling variances. If your measures are “rubbish”, then you have lost the entire plot, and increasing the sample size or repeating the survey just means “more rubbish”!
Posted by geoffalford, Sunday, 29 June 2008 1:52:56 PM
Geoffalford,
Thanks for your response; I agree completely… I breed dogs and we make all their food (as do many breeders), so bought dog food is a moot point with us. That brand doesn’t have the penetration they would indicate. Trust me to be difficult.

I was taught at uni (albeit some time ago) that there is a critical threshold below which the sample size renders a survey unreliable, and that the bigger the sample, the greater the confidence in the result. In non-professional speak, the larger the sample, the greater the self-correcting factor. Consider the extremes: a sample of 1,000 might have a confidence level of, say, 96%, whereas an election (poll), if repeated, would give a confidence level of 99%. Therefore elections (nominally 100% polls) are more reliable than (sample) polls, as the surprises in election results indicate. Given all things are equal, wouldn’t a bigger sample be better than a small one?
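For what it's worth, the textbook answer to "wouldn't a bigger sample be better?" can be sketched with the standard margin-of-error formula for a simple random sample — noting, as both posters stress, that this assumes genuinely random sampling and valid questions, and no increase in n corrects for either being wrong:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample,
    at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000):
    print(f"n = {n:>6}: +/- {100 * margin_of_error(n):.1f} points")
# Shrinks with the square root of n: quadrupling the sample only
# halves the margin, which is why samples beyond a few thousand
# buy little extra precision.
```

So bigger is better, but with sharply diminishing returns — and a huge sample drawn badly (the two Shire surveys below may be examples) is still unreliable.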

My reason for asking is that two competing surveys in our Shire (140k population, perhaps 60k voters) produced two vastly different results. I put it down to poor question formation. Neither was close to the actual election result, other than in a vague trend.

The second reason comes from looking at the US election (optional voting and extraordinary State-based exclusions and systems). If only, say, 55% of those entitled to vote actually do so, how much confidence should an observer place in the election as being truly representative of the US population as a whole? (I can’t resist: it seems to me it is nigh on chaotic [perhaps they use chaos maths instead of statistical analysis] – a scientist’s nightmare.)

I’m also trying to ascertain the statistical confidence of optional versus proportional voting, there and here. Logic dictates proportional is “fairer”, but is it?
Now there’s some homework for you – or is there no quick answer I could use?
Thanks anyway.
Posted by examinator, Sunday, 29 June 2008 2:27:27 PM
Great article, and it highlights the perils of any government taking too much notice of every poll. As stated, polls have their place, but it is all in the construction of the questions.

Usual Suspect's Yes Minister post says it all. Reminded me of a survey I did a few years ago.

After giving birth to my second child I filled in a hospital survey about the service I received. My answers, in a nutshell, stated that the nurses were great – friendly, helpful and knowledgeable – but I expressed some concern over the shortage of nurses on the ward compared to four years earlier, when I had my first child. One of the nurses had even said the conditions were dreadful, and that when it was deemed "too quiet" in the maternity wing, nurses were dispatched to emergency despite having no training in the field.

About 2-3 months later I received a phone call at home. The woman on the end of the line, in a terse voice, asked if I had filled in a survey in ** month about nursing care. Yes I said. Were you happy with the nursing care? she asked (still in a terse voice).

Well yes, I replied, my answers are all on the sheet. The nurses were great and then I went on to add my concerns as outlined previously.

Were you happy with the service or not? came back the response.

I don't think you are really hearing me said I. So you were happy with the nurses? she repeated. Yes I said again. So I will put down that you were happy. She then proceeded to hang up.

Needless to say I was shellshocked, and you can bet your bottom dollar the additional concerns I raised were shelved in the Sir Humphrey Appleby survey results file, never to be seen by a policy maker again.

If I had had all my wits about me, and had not been suffering sleep deprivation from a new baby, I would have rung 'someone' to complain. I kick myself every time I think about it.
Posted by pelican, Sunday, 29 June 2008 7:18:29 PM