Wednesday 3 August 2011

IE Users Told By World's Press That They're Stupid

Today The Guardian website posted this in Pass Notes:
http://www.guardian.co.uk/technology/2011/aug/02/are-internet-explorer-users-stupid

It relates to a study which claims to have found a relationship (though at best a correlation) between IQ and people’s choice of web browser. The inflammatory, eye-catching ‘results’ seem to indicate that Internet Explorer users are less intelligent.

They seem to have got the story from CBS and link to this:
http://www.cbsnews.com/8301-501465_162-20086362-501465.html

CBS at least link to the website of the people who claim to have carried out the study:
http://www.aptiquant.com/news/is-internet-explorer-for-the-dumb-a-new-study-suggests-exactly-that/

Leaving aside the sloppy journalism – reporters regurgitating each others’ work without validating a claim that would interest so many people – and not even dwelling on the shameless publicity stunt the study almost certainly is, there are problems within the study itself that undermine any possibility of genuine, statistically significant results. I left a comment on the CBS website that read as follows:

I’m a little rusty, but I have a BSc (Hons) in Psychology and I have many concerns with this study.

This is from their own website:
“A Vancouver based Psychometric Consulting company, AptiQuant, has released a report on a trial it conducted to measure the effects of cognitive ability on the choice of web browser.”

They’re speaking as if reporting someone else’s findings and that seems quite odd.

They claim they’re studying the variable of cognitive ability against the variable of browser choice, but they never establish that participants actually chose their browsers – some will have been taking the test at work, on whatever browser was installed there. Without establishing the factor of choice, the results are confounded and unusable.

Next, I can’t find any numbers in the pdf report to indicate how many users there were per web browser. For this type of study that info is essential.
Let’s say, for the sake of argument, that out of 101,326 participants, 101,306 use IE6–8 and the remaining 20 use IE9 or other web browsers. Now let’s say that, just by chance, half of those 20 have well above-average IQs. That kind of slant will skew the data completely: the sample for the other web browsers is too small and the data ‘not significant’. If the same thing happened at the other end – a handful of very low IQs mixed in among the IE6 users – we have another set of skewed data and a ‘type 1 error’
(http://en.wikipedia.org/wiki/Type_I_and_type_II_errors). Averages are meaningless without the number of users.
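The skew described above is easy to demonstrate. A minimal sketch (all numbers hypothetical, not taken from the report) showing how a tiny subgroup’s average is dominated by a few chance outliers:

```python
# Hypothetical illustration: a handful of lucky draws in a tiny group
# drags its average far from the population mean.
import random
import statistics

random.seed(42)

# 101,306 hypothetical IE users, IQ drawn from roughly N(100, 15)
ie_users = [random.gauss(100, 15) for _ in range(101_306)]

# Only 20 users of another browser; by chance, half of them happen
# to have well above-average IQs.
other_users = ([random.gauss(100, 15) for _ in range(10)]
               + [random.gauss(135, 5) for _ in range(10)])

print(f"IE mean IQ:    {statistics.mean(ie_users):.1f}")
print(f"Other mean IQ: {statistics.mean(other_users):.1f}")
# The large group's mean sits very close to 100; the tiny group's
# mean is pulled well above it by just ten chance outliers.
```

With over a hundred thousand draws, the big group’s average barely moves from 100, while the 20-person group lands wherever its handful of outliers puts it – which is why averages are meaningless without the group sizes.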

There’s no indication in the report of whether the results were ‘significant’ – a term which, properly used, means the results are unlikely to have arisen by chance alone, shown by a ‘P-value’. Where’s the P-value for this study? There are misleading uses of the word ‘significant’ in the Results section, e.g. “ranked significantly lower”, but this is not how genuine experiments use the word, and it creates (presumably on purpose) confusion about the results by giving the impression they’re statistically significant when the author in fact means ‘pronounced’.
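To make the point concrete, here is a minimal sketch of what a P-value actually is, using a permutation test on made-up IQ scores for two small hypothetical groups (none of these numbers come from the report):

```python
# The p-value is the probability of seeing a difference at least as
# large as the observed one if group membership had no relationship
# to IQ at all (the null hypothesis). All data here is invented.
import random
import statistics

random.seed(0)

group_a = [98, 101, 95, 104, 99, 102, 97, 100]   # hypothetical IQs
group_b = [103, 99, 106, 101, 98, 105, 102, 100]

observed = statistics.mean(group_b) - statistics.mean(group_a)

# Permutation test: shuffle the group labels many times and count how
# often chance alone produces a difference as large as the observed one.
pooled = group_a + group_b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[8:]) - statistics.mean(pooled[:8])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.3f}")
# By convention, p < 0.05 is called 'statistically significant'.
```

Without a reported P-value (and the group sizes behind it), a phrase like “ranked significantly lower” is rhetoric, not statistics.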

The Results section discusses the findings. That tips me off that this study either isn’t real or wasn’t done properly. It’s not for the Results section to convey the author’s opinion on the findings; that goes in the Discussion/Conclusion section(s). Students have it drummed into them not to talk about what they find until the correct part of the report. The Results section should contain bald statements of fact, and doing otherwise is either an amateurish mistake or deliberately misleading.

(Sorry for the lack of hyperlinks above. The script doesn't seem to be working today)

Edit, later the same day.
As suspected, it wasn't real:
http://www.bbc.co.uk/news/technology-14389430
But the point remains that a few simple things gave this away and should have been spotted by the original journalists – or by the journalists who borrowed from them. Or the hoaxers could have done a better job of it. Clearly a lot of effort went into this (for reasons not yet known), but schoolboy errors tripped them up.