Whether you are reading a social media post, a blog, a newspaper article, or a journal paper, it should always be read with a critical mind and never taken at face value.

In the world of medical and scientific journal articles, it is all too easy to be drawn in by an attractive title like “PRP superior to cortisone for lateral elbow pain: meta-analysis”, read only the abstract, and take the conclusion as gospel; after all, the title says it is a meta-analysis, so the paper must be solid, right?

It is true that the strength of an article lies in its design, with meta-analyses and systematic reviews at the top of the tree, followed by blinded randomised controlled trials, right down to case reports and editorials. Nevertheless, study design alone is not enough to truly trust an article’s findings.

One must consider the methodology used and how easily it can be reproduced, in addition to the strengths and weaknesses of the selection (inclusion/exclusion) criteria, including the outcome measures used. For example, if we are examining tendinopathy and the diagnosis is confirmed by physical examination alone, without ultrasound, how many patients in the study will actually have confirmed tendinopathy? Even if those patients have been selected carefully, they are still at risk of dropping out of the study, which can have disastrous effects on its outcome. This is because statistical power, which determines the number of participants required to detect a real effect, is vitally important: a study that loses too many participants may no longer be able to show a true difference.
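To see why dropouts matter, the standard normal-approximation formula for the sample size needed to compare two proportions can be sketched as below. The trial scenario (50% improvement on one treatment versus 65% on the other) is purely illustrative, not taken from any real study.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed PER GROUP to detect a difference
    between two proportions, using the normal-approximation formula.

    p1, p2 : expected success rates in each group (illustrative values)
    alpha  : two-sided significance level
    power  : desired probability of detecting a true difference
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical trial: 50% vs 65% improvement rates.
print(sample_size_two_proportions(0.5, 0.65))  # → 167 per group
```

If such a trial recruits exactly 167 per group and then loses 30 participants to dropout, it falls below the sample size the calculation demanded, and a genuine treatment effect may go undetected.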

Articles are also open to the risk of bias, confounders, errors, and chance, all of which can influence the outcomes of a study. It may be that the study has selected participants who are known non-responders or strong responders to the intervention being tested; this is called selection bias. A strong study design aims to mitigate the risk of these occurring; however, understanding their impact on a study is imperative when critically reviewing what you are reading. Additionally, articles often use statistical measurements, such as measures of association and measures of heterogeneity, including the odds ratio, hazard ratio, relative risk, P value, and I²; understanding these is crucial to interpreting the results.
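Two of those measures of association, relative risk and the odds ratio, can be computed directly from a 2×2 table of outcomes. A minimal sketch follows; the counts (30 of 100 improving on treatment versus 20 of 100 on control) are invented for illustration only.

```python
def two_by_two_measures(treated_events, treated_total, control_events, control_total):
    """Relative risk and odds ratio from a 2x2 outcome table.

    a = treated with the event, b = treated without it,
    c = controls with the event, d = controls without it.
    """
    a, b = treated_events, treated_total - treated_events
    c, d = control_events, control_total - control_events
    risk_treated = a / treated_total
    risk_control = c / control_total
    relative_risk = risk_treated / risk_control
    odds_ratio = (a / b) / (c / d)
    return relative_risk, odds_ratio

# Hypothetical trial: 30/100 improve on treatment, 20/100 on control.
rr, or_ = two_by_two_measures(30, 100, 20, 100)
print(round(rr, 2), round(or_, 2))  # → 1.5 1.71
```

Note that the two numbers differ: the odds ratio always sits further from 1 than the relative risk when the event is common, which is one reason an impressive-sounding odds ratio in an abstract deserves a careful look at the underlying rates.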

Lastly, when consideration has been given to all of these aspects, thought must then be given to the results and how they compare with other reputable research in that area. Are the results consistent with other research, and if not, why? What research has been used to support the findings, and is that research reputable and of high quality? Do the results add weight to the existing research and further support an outcome? Do they give rise to a rethink of the existing research? Do they establish a new, higher standard of intervention? Or did the study use a weak design, or were the results influenced by bias, confounders, errors, chance, a high dropout rate, or poor selection criteria, leading one to question the quality of the study’s findings?

Understanding what you are reading is extremely important in this day and age, when “authorities” are in endless supply. Regardless of what you are reading, always sit back and ask yourself questions about it: who is writing it, why are they writing it, how robust is the article, what quality of evidence supports it, and are there any parties who stand to benefit from the results?

For more interesting blog articles, check out the Shannon Clinic blog page.