The Forum - On Line Opinion's article discussion area


Pressuring politicians and populist terrorism : Comments

By Geoff Alford, published 26/6/2008

Newspaper polls are often just political grandstanding and bring the market research and polling industry into disrepute.

Polls are nonsense - the modern equivalent of throwing monkey bones and searching through chicken entrails.
Posted by Mr. Right, Thursday, 26 June 2008 10:25:36 AM
Geoff Alford, you are evidently a professional with a passion for your work, and high ethical standards. Good on you. I hope many follow your example.

I take it you expect others in your profession to do exactly that, and you admonish them when they don't. Hear hear.

But expecting politicians to do the same is naive. They aren't pollsters. They don't care about designing finely balanced questions that illuminate the will of the people. It is, after all, the pollster's job to do that, not the politician's.

The politician's job is to try to drive public perceptions in a direction he thinks will lead us to the best outcome. We expect them to do it, and we know they will try by fair means or foul. For the most part we forgive them the foul bits, unless they get really on the nose. And when it comes to "getting on the nose", asking leading questions in a poll doesn't even come close. Really, I would be disappointed if they didn't attempt it on occasion to see whether they could get away with it. If they don't, they are simply not trying hard enough. Expressing outrage when they do is over the top. You are wasting your time.

On the other hand, publicising that your competitor leaked some poll results - well done. Keep 'em honest.
Posted by rstuart, Thursday, 26 June 2008 1:10:31 PM
Pollsters get it more right than wrong; where there is a large disparity you will always find voting fraud.

The Diebold voting machines are a case in point - it seems you can rig them simply by installing a program (who would have thunk it). Do the research: polling figures are a science (it is only when they try to cover the margin of fraud that they lose respect).

Much depends upon the honesty of the questions posed. You can get any result you like by asking push-polling questions, or through the order of the questions, but as a science polling gets at the truth - as long as the pollsters are polling for the truth. Often they aren't.

Hence politicians are making bad laws based on flawed polling: polling designed not to get at the truth, but to see the law (or act) passed. Much depends on what the questions are asking.

Take "marijuana" (it's a registered trademark owned by Bayer-corp). The plant has been vilified under the lie of "marijuana" - a label describing a poor product (a trade label), not the plant or the truth about the plant.

Wherever that word is inserted you get an automatic bias against the buzzword, based on the lies disseminated around the world via media-driven fear campaigns against a people and their medicine plant.

A plant that treats everything from high blood pressure to arthritis and eating disorders, and cures cancer; that is able to supply our oil, paper and fibre needs sustainably and perpetually; and that makes anything from plastic to food.

It would only need to be grown on ONE QUARTER of the land clear-felled LAST YEAR ALONE to completely replace the tree-based woodchipping industry (using only one quarter of the chemicals to create the end product).

Note that cotton alone uses half the WORLD'S supply of fertilisers AND pesticides for one non-food crop (hemp doesn't need any).
Posted by one under god, Thursday, 26 June 2008 1:40:20 PM
I will add my own post.

Where is the reply (or apology) from Nielsen or Michelle Grattan? Or is it too embarrassing? As I said, it was a front-page Age story, but the Age would not print my Letter to the Editor, giving me pitiful excuses.

Make your own judgments on the integrity of journalists and pollsters, and the fearless independence of Fairfax publications.

geoffalford
Posted by geoffalford, Friday, 27 June 2008 12:02:38 AM
Polling, well done and with checks and balances, can be a useful democratic tool. It is useful to know what the populace is thinking. That is, only if the polling is done professionally and reported professionally.

The case in point, petrol pricing, was indeed a crass one.

The problems with media reporting of polling are many and varied. Too often the precise question is not reported, and the lead-up questions are often not reported either. Then there is the blind need of many to side with the majority, so polling creates as much opinion as it measures.

Of course nobody wants to pay more for petrol - that includes me, and I believe high fuel prices are a good thing: full reflective costing of fuel is what is needed. There is a conflict between the hip pocket and what I know is good policy. So if I were polled on the issue with a simple question, I could not give a simple answer - in fact my simple yes/no answer would have to be a distortion.

Those with big money can afford to spend up big on market research and polling, knowing that they can use the information judiciously and cunningly to favour their commercial or political interest.

I would like to see strict media standards introduced on the reporting of polling, so that it is not able to be reported unethically.
Posted by gecko, Friday, 27 June 2008 8:40:43 AM
From Yes Minister...

Humphrey: You know what happens: nice young lady comes up to you. Obviously you want to create a good impression, you don't want to look a fool, do you? So she starts asking you some questions: " Mr. Woolley, are you worried about the number of young people without jobs?"

Bernard: Yes

Humphrey: "Are you worried about the rise in crime among teenagers?"

Bernard: Yes

Humphrey: "Do you think there is a lack of discipline in our Comprehensive schools?"

Bernard: Yes

Humphrey: "Do you think young people welcome some authority and leadership in their lives?"

Bernard: Yes

Humphrey: "Do you think they respond to a challenge?"

Bernard: Yes

Humphrey: "Would you be in favour of reintroducing National Service?"

Bernard: Oh...well, I suppose I might be.

Humphrey: "Yes or no?"

Bernard: Yes

Humphrey: Of course you would, Bernard. After all you told her you can't say no to that. So they don't mention the first five questions and they publish the last one.

Bernard: Is that really what they do?

Humphrey: Well, not the reputable ones no, but there aren't many of those. So alternatively the young lady can get the opposite result.

Bernard: How?

Humphrey: "Mr. Woolley, are you worried about the danger of war?"

Bernard: Yes

Humphrey: "Are you worried about the growth of armaments?"

Bernard: Yes

Humphrey: "Do you think there is a danger in giving young people guns and teaching them how to kill?"

Bernard: Yes

Humphrey: "Do you think it is wrong to force people to take up arms against their will?"

Bernard: Yes

Humphrey: "Would you oppose the reintroduction of National Service?"

Bernard: Yes

Humphrey: There you are, you see, Bernard. The perfect balanced sample.
Posted by Usual Suspect, Friday, 27 June 2008 9:36:04 AM
Usual Suspect: awesome.
Posted by rstuart, Friday, 27 June 2008 9:45:55 AM
Monkey bones, eh? I wondered how some proselytising Christians arrived at some of their conclusions.
I, on the other hand, have no such problems with the reasoning underpinning statistical analysis.
Having said that, I am often uncomfortable with poll analysis of opinions, on the grounds that statistics work best with absolutes, numbers and known states, i.e. limited options (binary). With opinions, they are a dubious snapshot of an intangible.
As stated in the article, the construction of the questions is often imprecise, and therefore answers require qualification or more explanation. Even the skill and variations of the questioners can influence or skew responses. All techniques to some degree rely on accurate information. There are infinite ways to skew data input and therefore obtain a suspect result (the GIGO principle).
A pollster will argue that aggregation and other techniques will balance out the variations, and on a broad scale that's relatively true, but it relies on the sample size and the randomness of the sample. I'm often not convinced of either. The greater the chance of error, the bigger the sample needs to be. No one technique fits all.
While there is some evidence that these techniques are accurate to a point, they often go wrong by overstating, or glossing over, the reasons for a given result.
A swing towards a political party on an issue may be for reasons beyond those tested for, or may depend on several factors.
If we add proprietary secrecy, cost compromises and the need for (saleable) simple answers over the more complex and therefore more accurate ones, we finish up with the stated "Yes, but" answers... without the "but".
Without proper qualification, the public usually receives oversimplified data that is then processed through their personal competencies and biases.
In short, is it a science? Perhaps, but whether it is good science depends on the issues above.
It is therefore a very dangerous way of determining policy.
Posted by examinator, Sunday, 29 June 2008 10:57:51 AM
I agree with the general thrust of the Examinator's post. However, "measurement validity" needs emphasis.

When you ask people questions in a survey, are you really measuring what you think you are measuring? This is a fundamental GIGO principle, as Examinator says, and statistics cannot resolve it. If you are measuring GIGO (rubbish), then doubling the sample simply means double GIGO (double rubbish)!

I give an example regarding "recall questions". If you ask grocery buyers how much canned dog food they buy in a random sample household-based survey, and project their answers to ABS household estimates, using a weighting factor (“Population Households” divided by “Sampled Households” ), the final estimate is close to what dog food manufacturers know they manufacture.

Most grocery buyers know how many dogs they have (1, 2 or 3), feed them daily, and know the size of the can (225 or 450 g) and how many days there are in a week (7). So the weekly estimate is simply "number of dogs x number of cans per day x size of can x 7 days per week x household weighting factor".

Ask the same questions in that same survey about “dry dog food”, and the final estimates are inflated 200-400%. Serving dry dog food involves shaking food pellets into the dog’s bowl when low. Grocery buyers may think “I regularly feed my dog, and buy a kilo-packet of dry dog food every 1-2 weeks”. In reality, “recall” collapses time and regularises behaviour – hence the inflated estimate.
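As a sketch of the projection Geoff describes (per-household consumption scaled up by a population/sample weighting factor), the calculation below uses entirely invented figures - the three sampled households and the 3,000-household population are illustrations, not data from any real survey:

```python
def weekly_grams(num_dogs, cans_per_day_per_dog, can_size_g):
    """Weekly grams of canned dog food consumed by one household."""
    return num_dogs * cans_per_day_per_dog * can_size_g * 7

# Three sampled households drawn from a notional 3,000-household population,
# giving a weighting factor of 3000 / 3 = 1000.
sampled = [
    (1, 1, 450),  # one dog on a 450 g can a day
    (2, 1, 225),  # two dogs on a 225 g can each a day
    (0, 0, 0),    # no dogs
]
weight = 3000 / 3

sample_total_g = sum(weekly_grams(*h) for h in sampled)
population_kg = sample_total_g * weight / 1000
print(f"Projected weekly consumption: {population_kg:.0f} kg")  # prints 6300 kg
```

Because the arithmetic is this mechanical, any systematic error in recall (as with the dry-food example) is scaled up by the weighting factor right along with the genuine signal.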

At the time, I was General Manager of the Roy Morgan Research Centre and we could compare survey “recall estimates” with independent and objective measures.

Hence, you wonder when social researchers report that “X% of people have sex twice a week”, or that “Y% of people experiment with drugs on a weekly basis”, that those figures may be grossly inflated, and yet are being used to form social policy.

All survey research should carry a “health warning” – “this survey may be dangerous to policy development!”
Posted by geoffalford, Sunday, 29 June 2008 1:31:36 PM
Here is a second concern. Many researchers in academia and the private sector do NOT understand “statistical significance”. Tests of “statistical significance” merely tell you whether a survey result is likely due to chance or not. They cannot tell you whether it is important.

In a random sample survey of 2,000 people, suppose 53% of men "approve Rudd's performance" and 47% of women. Given that sample size, the 6-point difference is "statistically significant": it is unlikely to be due to chance - there is a small difference in the views of men and women. But is it important? Should the Labor Party undertake an advertising campaign, or develop new policies (involving large government expenditure) to appeal to women? The simple answer is that the difference is so small, it is hardly important. Of course, Labor analysts should compare elements of Rudd's performance where women and men have different perceptions, but it does not mean that Rudd has a major problem with women. Yet I fear that this is exactly how politicians react to polls.
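For readers who want to check the arithmetic, a standard two-proportion z-test confirms the point (this is my choice of test for the sketch, not necessarily how any given pollster reports results, and the even 1,000/1,000 male/female split is an assumption):

```python
from math import erf, sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two independent sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_tailed_p(z):
    """Two-tailed p-value under the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 2,000 respondents, assumed split evenly: 53% of 1,000 men approve
# vs 47% of 1,000 women.
z = two_proportion_z(0.53, 1000, 0.47, 1000)
print(f"z = {z:.2f}, two-tailed p = {two_tailed_p(z):.4f}")
# Comfortably "significant" at the 5% level - yet the gap is still only
# 6 points, which is the whole point: significance is not importance.
```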

If I were Rudd’s advisor, I would be more concerned that only 50% of people approve Rudd's performance, and would be looking at the other 50% who do not approve! So judging whether a result is important or not is outside of the realm of statistics – it is a judgment based on the size of the difference, or it might lead to a different insight, such as in the example of how I would read such a poll result.

Examinator is right to suggest that survey research is "a science", but also to question whether it is good science.

What needs emphasis is that all sample-based research in the academic and private sectors should primarily focus on ensuring that “valid measures” are being undertaken. “Validity” is a necessary condition for sound scientific measurement. Statistical issues are only important for testing whether estimates or differences could be due to chance sampling variances. If your measures are “rubbish”, then you have lost the entire plot, and increasing the sample size or repeating the survey just means “more rubbish”!
Posted by geoffalford, Sunday, 29 June 2008 1:52:56 PM
Geoffalford,
Thanks for your response - I agree completely... I breed dogs and we make all their food (as do many breeders), so bought dog food is a moot point with us; a given brand wouldn't have the penetration its figures would indicate. Trust me to be difficult.

I was taught at uni (albeit some time ago) that there was a critical threshold below which sample size rendered a survey unreliable, and that the bigger the sample, the greater the confidence in the result. In non-professional speak, the larger the sample, the greater the self-correcting factor would be: (consider the extremes) a sample of 1,000 might have a confidence factor of, say, 96%, but an election, if repeated, would give a confidence factor of 99%. Therefore elections (nominally 100% polls) are more reliable than sample polls, as indicated by surprises in election results. Given all things are equal, wouldn't a bigger sample be better than a small one?

My reason for asking is that two competing surveys in our Shire (140k population, perhaps 60k voters) achieved two vastly different results. I put it down to poor question formation. Neither was close to the actual election result other than in a vague trend.

The second reason comes from looking at the US election (optional voting and extraordinary state-based exclusions and systems). If only, say, 55% of those entitled to vote actually do so, how much confidence should an observer place in the election as being truly representative of the US population as a whole? (I can't resist: it seems to me it is nigh on chaotic - perhaps they use chaos maths instead of statistical analysis - and a scientist's nightmare.)

I'm also trying to ascertain the statistical confidence of optional versus proportional voting, there and here. Logic dictates proportional is "fairer", but is it?
Now there's some homework for you - or is there no quick answer I could use?
Thanks anyway.
Posted by examinator, Sunday, 29 June 2008 2:27:27 PM
Great article, and it highlights the perils of any government taking too much notice of every poll. As stated, polls have their place, but it is all in the construction of the questions.

Usual Suspect's Yes Minister post says it all. Reminded me of a survey I did a few years ago.

After giving birth to my second child I filled in a hospital survey about the service I received. My answers, in a nutshell, stated the nurses were great - friendly, helpful and knowledgeable - but I expressed some concern over the shortage of nurses on the ward compared to four years earlier when I had my first child. One of the nurses had even remarked that conditions were dreadful, and that when it was deemed "too quiet" in the maternity wing, nurses were dispatched to emergency despite having no training in the field.

About 2-3 months later I received a phone call at home. The woman on the other end of the line, in a terse voice, asked if I had filled in a survey in ** month about nursing care. Yes, I said. Were you happy with the nursing care? she asked (still in a terse voice).

Well yes, I replied, my answers are all on the sheet. The nurses were great and then I went on to add my concerns as outlined previously.

Were you happy with the service or not? came back the response.

I don't think you are really hearing me, said I. So you were happy with the nurses? she repeated. Yes, I said again. So I will put down that you were happy. She then hung up.

Needless to say I was shellshocked, and you can bet your bottom dollar the additional concerns I raised were shelved in the Sir Humphrey Appleby survey-results file, never to be seen by a policy maker again.

If I had had all my wits about me, and had not been suffering sleep deprivation from a new baby, I would have rung 'someone' to complain. I kick myself every time I think about it.
Posted by pelican, Sunday, 29 June 2008 7:18:29 PM
Response to Examinator:

It depends on how you specify your objectives. If the objective is to identify the "most important issues facing people today" - assuming valid questions - we only need a random sample of 60-70 people to be 95% confident of having identified the important issues.

Illustration: define “most important” as any issue important to 20% of the population. To be safe, allow sampling error of +/- 10%. In practice, accept any issue nominated by 10% or more of the sample (some lesser issues may be identified, but unlikely to miss major issues). Then:

N = 1.96^2 x p x (100 - p) / SE^2, where p is the nominating percentage and SE is the allowed sampling error.

N = 3.84 x 20 x 80 / 100 = 62 people

Concerns about non-sampling bias (refusals, non-contacts) might warrant a sample of 200 people as a “safety blanket”.
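Geoff's formula runs exactly as written; the function name below is mine, and the second call is just a sanity check of the same formula at a different proportion and error margin:

```python
from math import ceil

def min_sample_size(p_pct, se_pct, z=1.96):
    """Minimum n to detect a proportion of p_pct% within +/- se_pct
    percentage points at ~95% confidence (z = 1.96)."""
    return ceil(z ** 2 * p_pct * (100 - p_pct) / se_pct ** 2)

# Geoff's worked example: an issue held by 20% of the population,
# allowing +/- 10 points of sampling error.
print(min_sample_size(20, 10))  # prints 62

# The familiar "+/- 5 points on a 50/50 split" case for comparison.
print(min_sample_size(50, 5))   # prints 385
```

Note that the worst case is p = 50%, since p x (100 - p) peaks there; that is why headline polls quote their margin of error against a 50/50 split.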

What your university should have taught you is that the bigger the sample, the greater the confidence in "the result". Or, if the result was derived from invalid questions, the bigger the sample, the greater the confidence in the same rubbish results. It is that simple.

“Elections” are “the outcome”, and sample-based polls are trying to predict that outcome. Hence, elections can produce surprises.

Regarding “a bigger sample being better than a smaller one?”, it depends. A bigger sample will give you more confidence in “the result”, as defined above. But why waste $100K on a survey poll, when $50K will adequately answer your questions?

If you have experienced two competing surveys with vastly different results, and the actual election result was also quite different, I would suspect something awry, if not incompetence.

Regarding optional voting systems, people select themselves to vote. We have no way of knowing whether the election result truly reflects the population’s views (a key argument for compulsory voting).

An exception is if 80% of people volunteered to vote, and they all voted the same way. But, you would need to know the context. Is it a referendum for independence in a newly freed country – OK. However, if you were in Zimbabwe, you might be suspicious.
Posted by geoffalford, Tuesday, 1 July 2008 10:00:05 AM
GeoffAlford
Like I said it was a long time ago.
Thank you for the information I appreciate it.
Regards Examinator
Posted by examinator, Thursday, 3 July 2008 5:09:11 PM
