Are Health Stories as Valuable as Studies?
Dr. Sean Wheatley, PhD – Researcher and Trainer in Public Health/25 July 2017
When it comes to managing our health we are surrounded by conflicting reports and advice. We hear stories of individuals improving their health by ignoring “traditional” advice in favour of “alternative” ideas. Typically these stories are sneered at by many and dismissed as meaningless anecdotes. But can an anecdote beat an analysis? In this blog we look at whether health stories are as valuable as studies.
The role of research
Research is important. Without research we cannot develop an understanding of how things work, or examine whether A causes B. Without research we can’t fully explore whether an intervention can prevent, treat or cure certain conditions. Research is essential. But research is not perfect, and it cannot always (or even often!) tell us the whole story.
It is entirely correct that health guidelines are based on the best available evidence (as are the programmes we provide at X-PERT), but these guidelines can take years to be updated, even in the face of new evidence. For the most part these guidelines also provide (through necessity) specific advice for broad populations, whether that be the general public or individuals with a specific condition or disease. This approach fails to consider differences between individuals, and the fact that one size doesn’t fit all when it comes to the effect of any diet or drug on a person’s health.
Once we start to consider the potential limitations of the available research, and these differences in how individuals respond to any given thing, the possible benefits of considering anecdotal evidence become apparent.
(Some of) The limitations of research
Essentially, all research is flawed in some way. These limitations often include:
Having small sample sizes.
Not including people who are a good representation of the population the results are relevant to (an example of this might be trying to apply results from “healthy” young men to post-menopausal women with diabetes).
Failure to adequately consider or control for the impact of other factors which could influence the outcome.
Not being carried out on humans. For example, studies on mice or even cells in a dish are often used as “evidence” for what people should and shouldn’t do.
There is no such thing as a perfect study. This is often through no fault of the researchers, but logistical constraints and the availability of funds and time limit what is possible.
There are other occasions where the influence of outside interests can have a bearing on the outcomes that are reached, or at least on the confidence the interested onlooker can have in those findings. Whether conscious or unconscious, deliberate or accidental, a large proportion of the available evidence has some form of cloud hanging over it. All in all, research can’t give us the clear, incontrovertible evidence we desire to answer the wide variety of health-related questions we have.
Extension and exaggeration
Even in high quality research, where we can have a good degree of confidence that the findings are a true reflection of what is happening, there is a tendency for the importance of the outcomes to be overextended or exaggerated. This might be through the implication that one thing causes another when the study has only shown that the two things in question are related to each other. By now most people have heard the phrase “correlation doesn’t equal causation”; i.e. just because A is related to B, it doesn’t mean that A causes B. That still doesn’t stop people from wielding this associational evidence to support their claims.
The importance of an effect is often overstated too.
The traditional method for assessing whether there is a true difference, or a true relationship, is to assess if the finding is “statistically significant”. This involves using a statistical test to estimate how likely it would be to see a result at least as large as the one observed purely by chance if there were no real effect; if that likelihood is below 5% (i.e. we can be more than 95% confident the result is not down to chance) the finding is declared significant.
In reality this is a completely arbitrary cut-point that doesn’t provide any indication of whether the results have any meaningful real-world effect. Better options include pre-defining how big a change/difference/relationship would actually be meaningful and judging your findings against this less arbitrary benchmark (an approach called “magnitude based inferences”), or using a “number needed to treat” analysis, which assesses how many people would have to undergo a treatment/intervention for any one of them to see a meaningful benefit. The lower the “number needed to treat”, the better the intervention (a rough worked example follows below).
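For readers who like to see the arithmetic, here is a minimal sketch of how a number needed to treat is usually calculated (one divided by the absolute risk reduction). The event rates used are made up purely for illustration:

```python
def number_needed_to_treat(control_event_rate, treatment_event_rate):
    """How many people must receive the treatment for one of them to benefit.

    Uses the standard definition: NNT = 1 / absolute risk reduction.
    """
    absolute_risk_reduction = control_event_rate - treatment_event_rate
    if absolute_risk_reduction <= 0:
        raise ValueError("No benefit over control in these figures")
    return 1 / absolute_risk_reduction

# Hypothetical figures: 10% of untreated people have the outcome vs 8% of
# treated people, so 50 people need to be treated for one of them to benefit.
print(number_needed_to_treat(0.10, 0.08))  # -> 50.0
```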
The assessment of risk is notorious for being abused too. An increase from 1 person in a thousand to 2 people in a thousand having a heart attack is a 100% increase if we are talking relative risks, but only a 0.1 percentage point increase (from 0.1% to 0.2%) if we are talking absolute risks. The former is scarier and much more likely to seem important; the latter is much more useful and a better reflection of what is actually going on.
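The same example worked through step by step, so the two numbers can be compared directly:

```python
# Heart attacks rise from 1 in 1,000 to 2 in 1,000 (the example above).
baseline_risk = 1 / 1000   # 0.1%
new_risk = 2 / 1000        # 0.2%

relative_increase = (new_risk - baseline_risk) / baseline_risk * 100  # % of the original risk
absolute_increase = (new_risk - baseline_risk) * 100                  # percentage points

print(f"Relative risk increase: {relative_increase:.0f}%")                   # -> 100%
print(f"Absolute risk increase: {absolute_increase:.1f} percentage points")  # -> 0.1
```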
Where studies show consistent findings with a large magnitude of effect, as there are for the link between smoking and lung cancer, this forms a suitable basis for giving broad advice. In any other instance you might be justified in suggesting a certain treatment/intervention/diet, but you don’t necessarily have the strength of evidence to suggest that this is what everyone should be doing.
Average people?
I previously alluded to the issue of providing guidance for everyone when we know that not everyone is even close to being the same. Who wants to be average anyway? This issue extends to the application of published research too. The top level of research is a meta-analysis. If performed well (and they aren’t always), a meta-analysis can provide us with the strongest indication of whether something works, or whether it has an impact on something else.
But in essence a meta-analysis provides us with an average impact of all the relevant, eligible studies that have tried to answer a question. Each of those studies uses an average of the effect on the participants to come up with their own conclusion. Therefore our meta-analysis outcome is an average of averages, and does not fully account for the variation in the outcomes of all of the individuals involved.
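A toy illustration of this “average of averages” point, using entirely made-up numbers (a real meta-analysis would also weight each study by its size and precision, which this deliberately simple sketch ignores):

```python
import statistics

# Hypothetical individual responses in two small trials (negative = improvement).
study_a = [-1.5, -1.2, -0.8, +0.3, +0.5]
study_b = [-0.9, -0.4, -0.2, +0.1, +0.6]

study_means = [statistics.mean(study_a), statistics.mean(study_b)]
pooled_average = statistics.mean(study_means)   # the "average of averages"

print(study_means)     # each study reports a modest average benefit
print(pooled_average)  # the pooled figure also looks mildly beneficial...
# ...yet several individuals in both studies got worse, which the single
# pooled number cannot show.
```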
People are not average though. There is no “normal”. Some people will respond better to treatment A, and some to treatment B. So even if a meta-analysis gives a good idea of what will work for some people, or even most people, it can never tell us what will work for all people. It can give us a good indication of what our front-line treatment should perhaps be. But if that doesn’t work, why should an individual care about this average outcome when their own “anecdote” is all that really matters to them?
So can a story really beat a study?
Anecdotes are often written off as meaningless when it comes to evidence. But if something can work for one person, there is a chance it can work for someone else. So long as there is no clear reason to believe that the intervention/treatment presents an unacceptable risk, it can generally be considered as an option. Essentially, an anecdote can be considered a proof of concept.
Someone once said the “plural of anecdotes is not data” (Google failed in telling me who that someone was), but when a growing number of anecdotes back up the possible efficacy of an approach this supports the notion that something can be beneficial. There is therefore no reason to wait for clinical research which, as we have seen, may well not fully settle any debate anyway.
Someone else (or maybe the same person, Google failed again) once said that “absence of evidence is not evidence of absence”. Just because there is not a published clinical trial “proving” something, that doesn’t mean that that something isn’t true. (Cynical side note: Sometimes there are trials to support the anecdotes anyway. But as any study will have flaws it is easy to dismiss anything that doesn’t support your own views).
So what’s the bottom line?
Published, peer-reviewed research provides the basis for the advice we would generally be expected to follow. But it isn’t infallible, and it doesn’t have all the answers. An anecdote, or in some cases hundreds of thousands of them, can therefore give an indication of a potential alternative, even in the absence of broad consensus amongst policy makers and healthcare professionals.
The definition of insanity, according to a quote commonly attributed to Albert Einstein, is trying the same thing over and over and expecting a different outcome. So if what you’re doing isn’t working, then try something else.
As with all our blogs and other work we’d love to hear your thoughts and feedback, so feel free to leave a comment on our Facebook page, drop me an e-mail at sean.wheatley@xperthealth.org.uk, or tweet us/me at @XPERTHealth or @SWheatley88.