Trial design
The high financial and carbon cost of designing a blinded trial is something researchers must consider when deciding whether to run one.
“Is it worth the carbon cost?” Because sometimes, Sydes explains, instead of going to the organisational, financial, environmental and ethical trouble of producing dummy pills and masking information from the various agents involved in the trial, “just changing your outcome measures to something more objective could have been done quite simply”.
“[Blind trials are] making tablets that don’t do anything, and which we may not need. We’re putting them, maybe, in non-recyclable packaging; we’re transporting them; we’re taking up pharmacy time; we’re storing them cold. All of the ways in which carbon may be spent, we may well be doing for a placebo when maybe we didn’t need to do it at all.”
Matthew Sydes
Sydes’ invocation of outcome measures is a reminder not only that trials can be designed without any blinding at all, but also that the placebo is not the only means of blinding. In some cases, the outcome assessors alone might be blinded, at a fraction of the cost of mass producing dummy drugs or undertaking sham procedures, and without the ethical and safety concerns that can accompany those methods. While the blinding of outcome assessors comes with its own set of logistical challenges (the data itself often gives the game away), Clarke is hopeful that we will “see more blinded outcome assessment […] and less blinding where it’s practically challenging”. He adds: “If we are trying to make trials more efficient and effective,
then one of the cost-cutting exercises could be to avoid that complication.” In one recent study, however, researchers at the University of Nottingham found that “in most cases, the insight that the statistician offers was deemed more important to delivery of a trial than the risk of bias they may introduce if unblinded”. Sydes echoes this conclusion, noting that there is a question over who the statistician doing those analyses should be. “Some people would say you need a different statistician designing the study to the one that does the interim analyses, but I worry that you do end up with the blind leading the blind if you’ve got […] an independent committee who aren’t invested in the trial,” he adds. “That seems like a recipe for disaster to me – I feel that somebody who knows the trial well should be involved in that process.”
Part of the appeal of the blinded trial is its flexibility and variability; but these are also the very things that make it problematic. Without standards and consistency, blinding can often end up being more trouble than it’s worth. For every advantage, there seems to be an obstacle or challenge that negates the benefit. In the end, both Clarke and Sydes agree that the question isn’t so much whether we should be blinding clinical trials as when. In future, Clarke hopes that designers and managers will harness blinding more thoughtfully, “using it where it is really needed and not just following the herd”. Similarly, Sydes is hopeful that “we will see less placebo-based trials in the future, that there will be other ways to protect treatment allocation, and that we will rationalise our way down from using [blinding] over-routinely, to using it in special instances”. Because, as the experts in the case of Franz Anton Mesmer were only too aware, no scientific theory deserves blind faith. ●
Clinical Trials Insight /
www.worldpharmaceuticals.net