Dr. Sean Wheatley, PhD – Researcher and Trainer in Public Health/25 July 2017
When it comes to managing our health we are surrounded by conflicting reports and advice. We hear stories of individuals improving their health by ignoring “traditional” advice in favour of “alternative” ideas. Typically these alternative ideas are sneered at by many, dismissed as meaningless anecdotes. But can an anecdote beat an analysis?
The role of research
Research is important, and that fact should not be forgotten. Without research we cannot develop an understanding of how things work, or examine whether A causes B. Without research we can’t fully explore whether an intervention can prevent, treat or cure certain conditions. Research is essential. But research is not perfect, and it cannot always (or even often!) tell us the whole story.
It is entirely correct that health guidelines are based on the best available evidence (as are the programmes we provide at X-PERT), but these guidelines can take years to be updated even in the face of new evidence. For the most part these guidelines also provide (through necessity) specific advice for broad populations, whether that be the general public or individuals with a specific condition or disease. This approach fails to consider differences between individuals, and the fact that one size doesn’t fit all when it comes to the effect of any diet or drug on a person’s health.
Once we start to consider the potential limitations of the available research, and these differences in how individuals respond to any given thing, the possible benefits of considering anecdotal evidence become apparent.
(Some of) The limitations of research
Essentially, all research is flawed in some way. I wrote about some of these flaws in a previous blog, but these limitations often include:
* having small sample sizes,
* not including people who are a good representation of the population the results are relevant for (an example of this might be trying to apply results from “healthy” young men to post-menopausal women with diabetes),
* failure to adequately consider or control for the impact of other factors which could influence the outcome,
* not being carried out on humans (e.g. studies on mice or even cells in a dish are often used as “evidence” for what people should and shouldn’t do).
There is no such thing as a perfect study. This is often through no fault of the researchers, but logistical constraints and the availability of funds and time limit what is possible.
There are other occasions where the influence of outside interests can have a bearing on the outcomes that are reached, or at least on the confidence the interested onlooker can have in those findings. Whether conscious or unconscious, deliberate or accidental, a large proportion of the available evidence has some form of cloud hanging over it. All in all, research can’t give us the clear, incontrovertible evidence we desire to answer the wide variety of health related questions we have.
Extension and exaggeration
Even in high quality research, where we can have a good degree of confidence that the findings are a true reflection of what is happening, there is a tendency for the importance of the outcomes to be overstated or exaggerated. This might be through the implication that one thing causes another, when the study has only showed that the two things in question are related to each other. By now most people have heard the phrase “correlation doesn’t equal causation” (i.e. just because A is related to B, it doesn’t mean that A causes B), but that still doesn’t stop people from wielding this associational evidence to support their claims.
The importance of an effect is often overstated too. The traditional method for assessing whether there is a true difference, or a true relationship, is to assess if the finding is “statistically significant”. This involves the use of some complicated stats test (I think they’re complicated anyway) to assess how likely it is that results at least as extreme as those observed would have turned up by chance if there were really no effect at all; if that likelihood is below 5%, the finding is declared “significant”. In reality this is a completely arbitrary cut-point that doesn’t provide any indication of whether the results have any meaningful real world effect. Better options include pre-defining how big a change/difference/relationship would actually be meaningful and judging your findings against this less-arbitrary definition (called “magnitude based inferences”), or using a “number needed to treat” analysis, which assesses how many people would have to undergo a treatment/intervention for any one of them to see a meaningful benefit. The lower the “number needed to treat”, the better the intervention.
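The “number needed to treat” calculation is simpler than it sounds: it’s just one divided by the absolute risk reduction. Here’s a rough sketch using made-up event rates (the numbers are purely illustrative, not from any real trial):

```python
import math

def number_needed_to_treat(control_event_rate, treatment_event_rate):
    """How many people must receive the treatment for one of them to benefit.

    NNT = 1 / absolute risk reduction, conventionally rounded up.
    """
    absolute_risk_reduction = control_event_rate - treatment_event_rate
    return math.ceil(1 / absolute_risk_reduction)

# Hypothetical trial: 20% of the control group have the event, vs 15% on treatment.
print(number_needed_to_treat(0.20, 0.15))  # 20 people treated to prevent one event
```

An NNT of 20 means nineteen out of twenty people treated see no benefit on this outcome, which is a far more tangible way of judging an intervention than a p-value.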
The assessment of risk is notorious for being abused too. An increase from 1 person in a thousand to 2 people in a thousand having a heart attack is a 100% increase if we are talking relative risks, but it is a 0.1% increase if we are talking absolute risks. The former is scarier and much more likely to seem important, but the latter is much more useful and a better reflection of what is actually going on.
Where studies show consistent findings with a large magnitude of effect, for example as there is for the link between smoking and lung cancer, then this forms a suitable basis for giving broad advice. In any other instance you might be justified in suggesting a certain treatment/intervention/diet, but you don’t necessarily have the strength of evidence to suggest that this is what everyone should be doing.
I previously alluded to the issue of providing guidance for everyone when we know that not everyone is even close to being the same. Who wants to be average anyway? This issue extends to the application of published research too. The top level of research is a meta-analysis. If performed well (and they aren’t always) a meta-analysis can provide us with the strongest indication of whether or not something does or doesn’t work, or does or doesn’t have an impact on something else. But in essence a meta-analysis provides us with an average impact of all the relevant, eligible studies that have tried to answer a question. Each of those studies uses an average of the effect on the participants to come up with their own conclusion. Therefore our meta-analysis outcome is an average of averages, and does not fully account for the variation in the outcomes of all of the individuals involved.
People are not average though. There is no “normal”. Some people will respond better to treatment A, and some to treatment B. So even though a meta-analysis might give a good idea of what will work for some people, or even most people, it can never tell us what will work for all people. It can give us a good indication of what our front-line treatment should perhaps be, but if that doesn’t work, why should an individual care about this average outcome when their own “anecdote” is all that really matters for them?
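A toy calculation (with invented numbers) shows how an average effect can mask this variation between individuals:

```python
# Invented per-person changes in some health marker after a treatment,
# where a negative change is an improvement.
changes = [-12, -10, -9, -8, 1, 2, 3, 4]

average_change = sum(changes) / len(changes)
responders = sum(1 for c in changes if c < 0)

print(f"Average change: {average_change:.3f}")         # -3.625: looks like a modest benefit
print(f"Improved: {responders} of {len(changes)}")     # but only half actually got better
```

On average the treatment “works”, yet half the people in this made-up sample saw no improvement at all. An average of averages, as produced by a meta-analysis, compounds exactly this problem.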
So can a story really beat a study?
Anecdotes are often written off as meaningless when it comes to evidence. But if something can work for one person, there is a chance it can work for someone else. So long as there is no clear reason to believe that the intervention/treatment will present an unacceptable risk of danger, then it can generally be considered as an option. Essentially an anecdote can be considered a proof of concept.
Someone once said that “the plural of anecdote is not data” (Google failed in telling me who that someone was), but when a growing number of anecdotes back up the possible efficacy of an approach this supports the notion that something can be beneficial. There is therefore no reason to wait (and that wait can be an extremely long one) for clinical research which, as we have seen, may well not fully settle any debate anyway.
Someone else (or maybe the same person, Google failed again) once said that “absence of evidence is not evidence of absence”. Just because there is not a published clinical trial “proving” something, that doesn’t mean that that something isn’t true. (Cynical side note: sometimes there are trials to support the anecdotes anyway, but as any study will have flaws it is easy to dismiss anything that doesn’t support your own views.)
So what’s the bottom line?
Published, peer-reviewed research provides the background for the advice we would generally be advised to follow, but it isn’t infallible and it doesn’t have all the answers. An anecdote, or in some cases hundreds of thousands of them, can therefore give an indication of a potential alternative, even in the absence of broad consensus amongst policy makers and healthcare professionals.
The definition of insanity, according to a quote popularly (though probably wrongly) attributed to Albert Einstein, is trying the same thing over and over and expecting a different outcome. So if what you’re doing isn’t working, then try something else.
As with all our blogs and other work we’d love to hear your thoughts and feedback, so feel free to leave a comment on our Facebook page, drop me an e-mail at firstname.lastname@example.org, or tweet us/me at @XPERTHealth or @SWheatley88.