[How ironic that he writes under the 'Bad Science' tag line. All the medical industry beliefs--mercury is safe in your mouth, masts are safe, MMR is safe, etc, etc. http://browse.guardian.co.uk/search?search=Ben%20Goldacre ]
http://www.guardian.co.uk/life/badscience/story/0,,1794440,00.html
Bad Science
Academics are as guilty as the media when it comes to publication bias
Ben Goldacre
Saturday June 10, 2006
The Guardian
When I am finally assassinated by an axe-wielding electrosensitive
homeopathic anti-vaccine campaigner - and that day surely cannot be far off
now - I should like to be remembered, primarily, for my childishness and
immaturity. Occasionally, however, I like to write about serious issues. And
I don't just mean the increase in mumps cases from 94 people in 1996 to
43,322 in 2005. No.
One thing we cover regularly in Bad Science is the way that only certain
stories get media coverage. Scares about mercury fillings get double page
spreads and Panorama documentaries; the follow-up research, suggesting they
are safe, is ignored. Unpublished research on the dangers of MMR gets
multiple headlines; published research suggesting it is safe somehow gets
missed. This all seems quite normal to us now.
Strangely, the very same thing happens in the academic scientific
literature, and you catch us right in the middle of doing almost nothing
about it. Publication bias is the phenomenon where positive trials are more
likely to get published than negative ones, and it can happen for a huge
number of reasons, sinister and otherwise.
Major academic journals aren't falling over themselves to publish studies
about new drugs that don't work. Likewise, researchers get round to writing
up ground-breaking discoveries before diligently documenting the bland,
negative findings, which sometimes sit forever in that third drawer down in
the filing cabinet in the corridor that nobody uses any more. But it gets
worse. If you do a trial for a drug company, they might - rarely - resort to
the crude tactic of simply making you sit on negative results which they
don't like, and over the past few years there have been numerous systematic
reviews showing that studies funded by the pharmaceutical industry are
several times more likely to show favourable results than studies funded by
independent sources. Most of this discrepancy will be down to cunning study
design - asking the right questions for your drug - but some will be owing
to Pinochet-style disappearings of unfavourable data.
And of course, just like in the mainstream media, profitable news can be
puffed and inflated. Trials are often spread across many locations, so if
the results are good, companies can publish different results, from
different centres, at different times, in different journals. Suddenly there
are lots of positive papers about their drug. Then, sometimes, results from
different centres can be combined in different permutations, so the data
from a single trial could get published in two different studies, twice
over: more good news!
This kind of tomfoolery is hard to spot unless you are looking for it, and
if you look hard you find more surprises. An elegant paper reviewing studies
of the drug Ondansetron showed not just that patients were double and treble
counted; more than that, when this double counting was removed from the
data, the apparent efficacy of the drug went down. Apparently the patients
who did better were more likely to be double counted. Interesting.
The first paper describing these shenanigans was published in 1959. That's 15
years before I was born. And there is a very simple and widely accepted solution:
a compulsory international trials register. Give every trial a number, so
that double counting is too embarrassingly obvious to bother with, so that
trials can't go missing in action, so that researchers can make sure they
don't needlessly duplicate, and much more. It's not a wildly popular idea
with drug companies.
Meanwhile the system is such a mess that almost nobody knows exactly what it
is. The US has its own register, but only for US trials, and specifically
not for clinical trials in the developing world (I leave you to imagine why
companies might do their trials in Africa). The EU has a sort of register,
but most people aren't allowed to look at it, for some reason. The Medical
Research Council has its own. Some companies have their own. Some research
charities do too. The best register is probably Current Controlled Trials,
and that's a completely voluntary one set up by some academics a few years
ago. I have a modest prize for the person with the longest list of different
clinical trial registers.
And why is this news? Because people have been calling for a compulsory
register for 20 years, and this month, after years of consulting, the World
Health Organisation proudly announced a voluntary code, and a directory of
other people's directories of clinical trials. If it's beyond the wit of
humankind to make a compulsory register for all clinical trials, then we
truly are lame.