Can We Trust the Research?
Dr. Sean Wheatley, PhD – Researcher and Trainer in Public Health / 25 November 2016
If you pick up today’s paper, it doesn’t matter which one, there will almost certainly be a dramatic health-related story. It seems like the media are constantly reporting a new miracle cure, or a household item that is killing us. Often these stories are based on scientific research, which in many people’s minds means they must be true. In this blog I’m going to discuss why that isn’t always the case, and why we could all perhaps do with being a little more suspicious when we’re reading or watching the news!
What’s the problem?
The only exposure most people have to science and research is through the mainstream media. This would be fine if its portrayal were a fair reflection of what was actually found. Sometimes the misrepresentation may be harmless (think “alien megastructure found” type stories), but there have been cases where the knock-on effects have been much more serious.
A prime example, and possibly the most famous and serious, of a research article and its subsequent media coverage having a profound effect is the case of the measles, mumps and rubella (MMR) vaccine. In the late nineties a research paper was published by Andrew Wakefield claiming there was a link between the MMR vaccine and autism. This paper has subsequently been discredited due to a number of serious flaws, including but not limited to major ethical issues, financial conflicts of interest, and falsification of parts of the paper (1).
It’s not the point of this article to discuss this in any detail (a quick Google search will provide that if you’re interested, and it’s also covered in “Bad Science” by Dr Ben Goldacre (2)); but the bottom line is, the research was completely bogus and has been fully retracted. The damage done by the story, however, remains: many people are unaware of the retraction, and a not-insignificant portion of people still believe vaccines are unsafe.
Granted, this is an extreme case, but it highlights the dangers very well. Unfortunately it is not uncommon for stories to cause concerns that have a knock-on effect on people’s lives and beliefs, and often this occurs in the absence of any strong evidence.
Overall it is easier to start a health scare than it is to debunk one.
Perhaps the media are not as forthcoming in putting retractions on the front pages as they are keen to cover the sensationalist headlines in the first place. There’s also a “no smoke without fire” mentality in some people, who will always be suspicious of a food/treatment/<insert essentially any other thing here> once they’ve been told to believe it is dangerous. The best way to avoid the potentially serious consequences of misreported research is to make sure studies are interpreted more cautiously and with some context, and to arm the public with the knowledge to consider whether or not what they’re being told is reliable.
Who is responsible?
It is a difficult job being a science reporter (I assume; I’ve never been one). Summarising complex stories in a short segment, sometimes no more than a sentence or two, is a challenge, and inevitably leads to some details being neglected. There’s also pressure on reporters, particularly at certain news outlets (mentioning no names), to provide a degree of sensationalism in order to entertain the reader rather than just inform them.
There is evidence, however, that much of this sensationalism is present before the reporters get their hands on a story. This evidence comes from peer-reviewed, published scientific research, no less, which you can either take on trust that I’m representing fairly, or check out for yourself by following the references at the end. Hopefully by the end of this blog you’ll be suspicious enough to choose the latter option. This research concluded that much of the exaggeration in media stories was already present in the press release from the institution where the research was carried out (3). This would lay the blame for any misrepresentation at the doors of press officers across the world.
An earlier study, however, found that the biggest factor affecting hyperbole within press releases was… the research papers themselves (4)! Specifically, it found that the exaggerations in the press releases were already there in the study abstracts: the short summaries of a research article that are often the only part of a study people actually read*. This suggests the real culprits may well be the researchers who carried out the original study.
Now, you could argue that a diligent science reporter should seek out the original article and verify for themselves whether the claims made in the press releases they’re handed are fair. The reality, however, is that with tight deadlines and large workloads this is often just not possible. Sometimes, though, it might still just be laziness!
The press officer is probably the innocent middle man/woman in this, as they will often (rightly or wrongly) not have the scientific background and training to appraise the details put in front of them. There is also pressure on them to promote the work of their institution, so they are unlikely to provide an objective buffer to any excess enthusiasm.
So what about the researcher? Well, their career may well depend on big-hitting, high-impact research. Although they may (sometimes) be the most important instigator of the over-hyped stories that reach the public, to an extent that is a direct product of the culture of the industry they work in. To answer the original question of who is to blame: I’ve got no idea, but something really needs to change!
Now, originally the plan here was to go over, within a single blog, the main issues with much of the science reported in the press, to help you work out whether the claims made are a fair reflection of what was actually found. Unfortunately I rambled on a little, and this article would be far too long if I included that here. I told you it was difficult being a science reporter and having to summarise things briefly!
* Beyond the risk of exaggeration, it’s also worth noting that the conclusions presented in an abstract aren’t necessarily accurate! For example, the conclusion in the abstract of a systematic review looking at fat consumption stated that reducing saturated fat intake improved cardiovascular outcomes, yet the results showed no reduction in the number of deaths (from cardiovascular conditions or otherwise) when saturated fat was reduced (5). The results also showed no reduction in the number of heart attacks, strokes or cases of coronary heart disease individually, despite the claimed 17% reduction in total risk.
This difference disappeared when only studies that actually achieved a difference in saturated fat consumption between the control and intervention groups were included. Your guess is as good as mine as to why you’d include studies where saturated fat consumption was essentially the same in each group when you’re trying to compare groups to see if there’s a difference!
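As an aside, it’s worth being clear what a figure like “a 17% reduction in total risk” actually means: it’s a relative risk, the event rate in the intervention group divided by the event rate in the control group. Here’s a minimal sketch of that calculation. The event counts below are invented purely for illustration; they are not taken from the review discussed above.

```python
# Hypothetical illustration of relative risk (RR) for a composite outcome.
# The event counts are made up for demonstration only; they are NOT the
# actual data from the Cochrane review referenced in the text.

def relative_risk(events_treated, n_treated, events_control, n_control):
    """Risk ratio: event rate in the intervention group over the control group."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    return risk_treated / risk_control

# Composite "any cardiovascular event" outcome (hypothetical numbers):
rr = relative_risk(events_treated=83, n_treated=1000,
                   events_control=100, n_control=1000)
print(f"RR = {rr:.2f}, i.e. a {1 - rr:.0%} relative risk reduction")
# With these invented counts: RR = 0.83, a 17% relative reduction.
```

Note that a composite outcome like this can show a reduction overall even when no single component (deaths, heart attacks, strokes) shows a statistically significant difference on its own, which is exactly why the headline figure deserves a second look.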
As with all our blogs and other work we’d love to hear your thoughts and feedback, so feel free to comment below, drop me an e-mail at firstname.lastname@example.org or tweet us/me at @XPERTHealth or @SWheatley88.
1. Godlee F, Smith J, Marcovitch H. Wakefield’s article linking MMR vaccine and autism was fraudulent. BMJ. 2011;342:c7452.
2. Goldacre B. Bad Science. London, UK: Harper Perennial; 2009.
3. Sumner P, Vivian-Griffiths S, Boivin J, Williams A, Venetis CA, Davies A, et al. The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ. 2014;349:g7015.
4. Yavchitz A, Boutron I, Bafeta A, Marroun I, Charles P, Mantz J, et al. Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study. PLoS Medicine. 2012;9(9):e1001308.
5. Hooper L, Summerbell CD, Thompson R, Sills D, Roberts FG, Moore HJ, et al. Reduced or modified dietary fat for preventing cardiovascular disease. Cochrane Database of Systematic Reviews. 2012;5:CD002137.