
How to Critique an Article
Dr. Sean Wheatley, PhD – Researcher and Trainer in Public Health / 24 October 2016
How to Critique an Article follows on from “Ask for Evidence Part 1”, which looked at why we shouldn’t always take the stories we read as gospel. This blog gives a crash course in what you can look for to help you decide what to pay attention to. What I will say before I start is that other people, far better thinkers and writers than I am, have covered this subject excellently elsewhere. If you’re interested, there is a brief-ish list of suggested reading at the end.
Ideally, if you were keen to assess any claims you come across, you would get hold of a copy of the research the story is based on. For a variety of reasons, however, this is often not possible, and those reasons are issues in themselves! The original source is often not referenced clearly enough to make finding the article easy, and a lot of research is hidden behind paywalls; it often costs more than £25 to read a single article.
There should hopefully be enough detail in the story to look for some of the things discussed below. If not, as a rule of thumb, I’d say take the report with a pretty big pinch of salt!
So what should I look for?
Some of the things that might be worth considering, in no particular order, are:
Surrogate markers or “hard” end-points.
It is often impractical to measure the most useful outcomes in a study. For example, if your study is looking at the effect of an intervention on cardiovascular disease (CVD) you might ultimately want to know the impact on people dying from heart attacks. This would need a really long follow-up period though (i.e. your study would need to go on for years). As a result, surrogate markers are often used. So in this example you might look at the influence of your intervention on blood pressure instead. Although this might give a useful indicator of risk, it is just a proxy for what we really want to know. It isn’t as useful as knowing the actual impact on the hard end-point we’re interested in, as not everyone with high blood pressure is going to have a fatal heart attack.
Absolute risk or relative risk.
A common tactic to make a finding look impressive is to report it as a relative risk. An example of this might be that eating <insert any food/drug here> raises your risk of having a heart attack by 100%. Now this might be true, and it sounds scary (or impressive if you flip it and a 100% risk reduction is reported). If we consider this in absolute terms, however, the result is often much less interesting. That is to say, if the original risk of having a heart attack was 1-in-a-million then that 100% relative risk increase means our new risk is still only 2-in-a-million. This new risk, which is the more useful figure practically speaking, would be nothing to lose sleep over!
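To see that arithmetic laid out, here is a minimal sketch in Python; the 1-in-a-million baseline and the 100% relative increase are just the hypothetical numbers from the example above, not figures from any real study.

```python
# Minimal sketch of relative vs absolute risk, using the made-up numbers above.

baseline_risk = 1 / 1_000_000        # original risk of a heart attack (1-in-a-million)
relative_increase = 1.00             # "raises your risk by 100%"

new_risk = baseline_risk * (1 + relative_increase)
absolute_increase = new_risk - baseline_risk

print(f"Relative increase: {relative_increase:.0%}")                      # 100%
print(f"Absolute risk:     {baseline_risk:.6f} -> {new_risk:.6f}")        # 0.000001 -> 0.000002
print(f"Absolute increase: {absolute_increase:.6f}")                      # still only 1-in-a-million
```

The relative figure (100%) sounds dramatic, but the absolute change is a single extra case per million people.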
Are the findings over-extended?
This is a common flaw in the classic news stories claiming medical breakthroughs and miracle cures, and a pet hate of mine. There are a number of stages in the development of new treatments (especially drugs), usually beginning with the use of the treatment/drug on cells in a dish in a lab (“in vitro”). Now success at this stage may give the green light for testing to move on, but it is a million miles away from proof that that particular drug is going to cure cancer. That doesn’t, however, stop certain media outlets making such exaggerated claims! The same principle applies to findings from treatments tested on other animals, such as mice, rats or zebrafish.
Success at this stage may be promising (and necessary, as you can’t just test things on humans without some sign of efficacy and safety; whether or not it’s okay to test things on animals is another of the many subjects I’m not going to touch with a barge pole), but it is still a long way short of the evidence from human trials that is needed before we should get carried away.
Is the study representative?
Leading on from the previous point, it is worth considering who the study was carried out on, even where success has been found in studies with humans. It might be that it was a really small study; the success of a treatment in a handful of people isn’t strong evidence that it will work for everyone. Or it may be that the study was in a very specific subset of people. For example, a study in elite male athletes won’t necessarily provide useful practical information for obese females.
What is the standard of evidence?
There will often be reports (or interviews related to pertinent stories) where the only source of information is the beliefs of an “expert”. Now the classification of evidence quality is often presented as a pyramid (see Figure 1). Irrespective of the qualifications or the standing of the individual, opinions are the bottom rung of this pyramid. So unless these opinions are supported by evidence from higher up the hierarchy, expert opinion should never be taken as proof of anything. Though obviously nobody pointed this out when the 2016 European Guidelines on CVD prevention were being drawn up, as “general agreement” was part of the definition for their highest class of recommendation (1)!

What type of study was it?
Another common issue (particularly in public health research) is regarding the interpretation of observational studies. Now by this we mean any study where we are looking at patterns in a group of people (sometimes a small, specific cohort but sometimes entire countries) rather than actively testing the effect of something through an experiment.
Although observational studies can be an important source of information, providing data we couldn’t possibly gather in a more controlled trial, their results are often grossly overstated. The reality is that there are usually a large number of very complex factors influencing each other in any group or setting, which makes it difficult (bordering on impossible) to isolate the effect of one thing on another. And even where we are confident an observed relationship is real, correlation does not equal causation.
That is to say, two things being related doesn’t mean one causes the other (brilliantly demonstrated here). These misinterpretations (or misrepresentations) often lead to grand claims of something or other causing a disease, usually with the logic “well they eat a lot of <insert ANY food here> and they also have a high rate of <insert ANY disease here>, therefore <that food> must cause <that disease>”. This flawed thinking provides much of the basis for why everything apparently causes and cures cancer (see Figure 2)!
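To illustrate the point, here is a minimal sketch in Python with entirely made-up numbers: two quantities that have no direct link at all end up correlated simply because both are driven by the same hidden factor (a confounder).

```python
# Minimal sketch (made-up data): a shared hidden factor makes two unrelated
# things correlate. Neither "food_intake" nor "disease_rate" causes the other;
# both simply track the same confounder.
import random

random.seed(1)

confounder = [random.gauss(0, 1) for _ in range(10_000)]
food_intake = [c + random.gauss(0, 1) for c in confounder]    # driven only by the confounder
disease_rate = [c + random.gauss(0, 1) for c in confounder]   # also driven only by the confounder

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
    sd_x = (sum((x - mean_x) ** 2 for x in xs) / n) ** 0.5
    sd_y = (sum((y - mean_y) ** 2 for y in ys) / n) ** 0.5
    return cov / (sd_x * sd_y)

# Prints a correlation of roughly 0.5, despite there being no causal link at all.
print(f"correlation: {correlation(food_intake, disease_rate):.2f}")
```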

Who is writing the cheques?
Now this shouldn’t be a problem, but unfortunately it is. A big one. Issues with conflicts of interest are rife, particularly in public health, because of the vested interests of food companies and the pharmaceutical industry. I’m not going to discuss specific cases, as that is another area of great controversy that we could debate forever. Though if I were, I’d probably highlight Coca-Cola’s sponsorship of the American Diabetes Association, amongst many other institutions, as a particularly troubling example; and in the interests of balance I’d also note PepsiCo’s financial support of the Juvenile Diabetes Research Foundation, amongst many other institutions, as an alternative. It is always important to consider who might be having an influence on the research you’re seeing, and what interests they may have in certain conclusions being reached.
Biases such as this are not restricted to big corporations. There may also be reason to doubt the intentions of individual researchers or research groups: if they have built a career around an idea, for example, they might be less likely to reach conclusions that count against it. This, and other issues regarding transparency (you may remember in the first blog, about X-PERT Heart, I said there are cases where elements of a study have been changed during the research so the conclusions better fit a certain idea), are less easy to spot. It is worth maintaining a degree of scepticism, and doing a bit of digging where something piques your interest; particularly for bold claims (extraordinary claims need extraordinary evidence!).
So what can be done to critique an article?
It is an unfortunate truth that elements of the research world are a little murky (depending on your level of cynicism you can decide whether this choice of words is a little unkind or a gross understatement). The more people who read blogs like this, or any of the items in the suggested reading, the more people who are (hopefully) armed to consider what they read more critically. People often assume that if something is in writing then it must be true. That isn’t the case, not even for peer-reviewed articles in scientific journals.
The best way to address the flawed reporting of science is for people to challenge it, and to that end the “Ask for Evidence” campaign (no affiliation with these blogs despite the shared name) has been set up. Its aim is exactly what it says on the tin: the campaign encourages people to ask for evidence when claims are made that they aren’t sure of, either directly to the source of the claim or via a handy form on the website. The more people who are inquisitive and dig a little deeper, the more likely it is that researchers, press officers and reporters will be more careful about the claims they make and the way they present their findings.
As with all our blogs and other work we’d love to hear your thoughts and feedback, so feel free to comment below or drop me an e-mail at sean.wheatley@xperthealth.org.uk.
Reference List
1. Piepoli, M. F., et al. (2016). “2016 European Guidelines on cardiovascular disease prevention in clinical practice: The Sixth Joint Task Force of the European Society of Cardiology and Other Societies on Cardiovascular Disease Prevention in Clinical Practice (constituted by representatives of 10 societies and by invited experts). Developed with the special contribution of the European Association for Cardiovascular Prevention & Rehabilitation (EACPR).” Eur Heart J 37(29): 2315-2381.
2. Schoenfeld, J. D. and J. P. Ioannidis (2013). “Is everything we eat associated with cancer? A systematic cookbook review.” Am J Clin Nutr 97(1): 127-134.
Suggested Reading
Goldacre, B. (2009). Bad Science. Harper Perennial, London, UK. (There’s also a Bad Science blog at www.badscience.net!)
Evans, I., Thornton, H., Chalmers, I. and Glasziou, P. (2011). Testing Treatments (2nd edition). Pinter & Martin Ltd, London, UK. (available as a free download at: http://www.testingtreatments.org/wp-content/uploads/2012/09/TT_2ndEd_English_17oct2011.pdf).
Evidently Cochrane “Understanding Evidence” blog, from the Cochrane Collaboration: http://www.evidentlycochrane.net/category/understanding-evidence/
www.students4bestevidence.net (This site includes a series of related blog posts. Although it’s specifically aimed at students it can be a great resource for anyone).
As well as providing tools for you to enquire about specific stories, the “Ask for Evidence” campaign website also has an excellent section dedicated to helping people understand evidence: http://askforevidence.org/help/evidence