Evidence
Vol. 14 No 2 | Winter 2012
Feature
Trials and tribulations
A/Prof Rosalie Grivell
BSc, BMBS, FRANZCOG, PhD, CMFM
Prof Jodie M Dodd
FRANZCOG, CMFM, PhD



The importance of testing interventions with randomised trials.

Evidence-based medicine is a phrase that was coined in the early 1990s1 and defined by David Sackett as ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’.2 However, the principal tenet underpinning evidence-based medicine can be identified much earlier. The French physician Pierre Charles Alexandre Louis led an initiative termed ‘médecine d’observation’, in which practitioners were encouraged not to rely on ‘speculation and theory about causes of disease nor…single experiences,’ but rather to make a ‘large series of observations and derive numerical summaries from which real truth about the actual treatment of patients will emerge.’3

According to the Oxford English Dictionary, ‘evidence’ simply refers to the available body of facts or information that indicates whether a belief or proposition is true or valid. In this context, therefore, virtually all clinicians use ‘evidence’ in making decisions about patient care, and perhaps we should be aspiring to a ‘better use of evidence in medicine’.4 There are numerous examples in contemporary obstetric practice where, as a specialty, we could have made ‘better’ use of the available evidence. Archie Cochrane, in his now well-known writings, awarded the ‘wooden spoon’ to obstetricians for having made the poorest use of randomised trials and having widely incorporated changes into clinical practice without appropriate evaluation5, resulting in the widespread adoption of some practices of uncertain benefit (for example, continuous fetal heart rate monitoring during labour) and the delayed introduction of others (for example, the use of antenatal corticosteroids prior to preterm birth).

Many national research bodies have published hierarchies of research evidence, ranked according to how well potential biases have been minimised (see Table 1), identifying systematic reviews of randomised trials, and randomised trials themselves, as the highest-quality evidence. The randomised trial represents a ‘gold standard’ methodology for assessing or comparing the effects of different treatments. However, randomised trials are not the only source of valid evidence, nor are they the most appropriate study design for answering every research question. Furthermore, the mere designation of a study as randomised does not imply that it is of high quality or free of methodological flaws. The following discussion considers both the strengths and limitations of randomised trials.

The singular advantage of randomised trials is the random allocation of participants to the treatment groups being evaluated, which ensures that both known and unknown participant factors that may influence treatment outcomes are distributed at random across treatment groups. The result is the creation of groups that are similar in their baseline demographic and prognostic variables. Because the treatment groups are comparable at the time of trial entry, any observed differences in outcomes are likely to reflect true differences between the treatment interventions, rather than differences between individual participants.
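As a toy illustration of this balancing property, the following sketch (invented numbers, not drawn from any trial) randomly splits a simulated population in two and shows that even a risk factor the investigators never measured ends up with a similar prevalence in each group:

```python
import random

rng = random.Random(1)  # seeded only so the example is reproducible

# Hypothetical population: each subject carries an unmeasured risk factor
# with a true prevalence of 30 per cent.
population = [{"risk_factor": rng.random() < 0.30} for _ in range(10_000)]

# Random allocation: shuffle, then split into two equal groups.
rng.shuffle(population)
group_a, group_b = population[:5_000], population[5_000:]

for name, group in (("A", group_a), ("B", group_b)):
    prevalence = sum(s["risk_factor"] for s in group) / len(group)
    print(f"Group {name}: risk factor prevalence = {prevalence:.3f}")
# Both prevalences land close to 0.30, despite the factor never being measured.
```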

A number of methodological steps are key to the process of random allocation, the generation of comparable treatment groups and, therefore, the minimisation of selection bias. The first is the method of sequence generation, which can broadly be divided into processes that are truly random and therefore carry a low risk of bias (for example, a computer-generated sequence or a random-number table), and those that are non-random, potentially subject to manipulation, and therefore carry a high risk of bias (for example, allocation by odd or even date, or by hospital record number). Allocation concealment describes the processes that prevent treatment allocation from being foreseen in advance, or changed after recruitment has occurred. Methods considered to have a low risk of bias include use of a central telephone or web-based randomisation service, or sequentially numbered, sealed, opaque envelopes. In contrast, predictable allocation to treatment groups based on alternation, day of the week of presentation or participant date of birth would all be considered at high risk of bias. Failure to maintain random allocation and allocation concealment has been shown to result in inflated estimates of treatment effects.6
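By way of illustration, a minimal sketch of low-risk-of-bias sequence generation is shown below (a computer-generated, permuted-block sequence; the function name and block size are our own choices for the example, not a standard):

```python
import random

def permuted_block_sequence(n_participants, block_size=4, arms=("A", "B"), seed=2012):
    """Computer-generated allocation sequence using permuted blocks.

    Each block contains an equal number of allocations to each arm,
    shuffled at random, so group sizes remain balanced as recruitment
    proceeds while the next allocation stays unpredictable.
    """
    rng = random.Random(seed)  # seeded here only so the example is reproducible
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * per_arm
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# In practice the sequence would be held by a central telephone or web-based
# service (or placed in sequentially numbered, sealed, opaque envelopes), so
# that recruiting clinicians cannot foresee the next allocation.
print(permuted_block_sequence(12))
```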

Table 1. NHMRC Evidence Hierarchy: designations of ‘levels of evidence’ for intervention studies. Adapted from: NHMRC levels of evidence and grades for recommendations, December 2009.

Level of evidence Description
I A systematic review of level II studies
II A randomised controlled trial
III-1 A pseudo-randomised controlled trial (i.e. alternate allocation or some other method)
III-2 A comparative study with concurrent controls:

  • Non-randomised, experimental trial
  • Cohort study
  • Case-control study
  • Interrupted time series with a control group
III-3 A comparative study without concurrent controls:

  • Historical control study
  • Two or more single-arm studies
  • Interrupted time series without a parallel control group
IV Case series with either post-test or pre-test/post-test outcomes


Blinding, or masking, refers to the steps taken to ensure that the allocated treatment group remains unknown to participants and caregivers, and often involves the administration of a placebo. While in many circumstances blinding of participants and their caregivers may not be possible, it is almost always possible to blind outcome assessors to the intervention received. The potential impact of masking varies with the outcome assessed, being particularly important in the evaluation of subjective measures (for example, the experience of pain), but relatively less so in the evaluation of more objective outcomes (for example, death). While studies are often described as ‘double-blind’, a more specific statement of who was blinded is preferable. Blinding attempts to reduce performance bias (systematic differences in the care provided to participants other than the intervention under investigation), again ensuring that any differences observed between the groups reflect differences in the treatment or intervention received.

The above methodological considerations reflect the internal validity of a randomised trial, while external validity refers to the extent to which trial findings can be generalised beyond the study environment to routine clinical practice. Consequently, generalisability is influenced by the similarity of the trial population to the broader population, the nature of the intervention (and, in particular, its relationship to current standards of care) and the outcomes reported.

Randomised trials often have clearly defined, rigorous inclusion and exclusion criteria, and while this may create a relatively homogeneous trial population, the challenge then lies in demonstrating that the circumstances and results are applicable to a wider clinical population. An alternative approach has been suggested7, in which the question is asked: ‘Are there any good reasons to believe that the research is not relevant…if there are not…the default position should be that the result should be regarded as applicable.’

Further confounding the issue of generalisability is the recognition that individuals who participate in randomised trials are inherently different from those who decline participation, often being of higher socioeconomic standing and educational attainment, both of which contribute to greater compliance or adherence to the intervention. The net effect is an overestimation of treatment effects relative to what might reasonably be achieved in the general clinical population.

Similar concerns are often raised in relation to the nature of trial interventions, reflecting the importance of engaging clinicians, researchers and other stakeholders during the process of trial development. If trial interventions deviate significantly from standard clinical practice, issues of clinical relevance arise, in addition to questions about whether the intervention can be replicated in the clinical setting.

The choice of primary outcome is critically linked to the estimation of sample size, which is determined by the incidence of the outcome in the control group, as well as by the size of the difference between treatment groups that the trial is designed to detect. As a general rule, the more frequent the outcome and the greater the anticipated difference between the intervention and control groups, the smaller the sample size required. In contrast, serious but rare clinical outcomes and more modest treatment effects require much larger sample sizes. The choice of primary outcome therefore often represents a compromise between what might be ideal and what is achievable. With declining maternal and perinatal mortality, researchers have focused on surrogate clinical endpoints and composite outcomes reflecting morbidity, both of which usually occur more frequently. The effect is to reduce a potential sample size of tens of thousands of women and their infants to a smaller sample size that is more feasible and achievable. While this may address a practical issue in the design and conduct of randomised trials, it may pose difficulties in clinical interpretation, particularly where components of the composite outcome vary in both severity and direction of effect.8,9 Furthermore, a surrogate outcome, which often represents a short-term measure, should correlate with and accurately predict the long-term or more serious outcome of interest.
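To make the arithmetic concrete, the following sketch applies the standard normal-approximation formula for comparing two proportions; the figures are illustrative only, not drawn from any particular trial:

```python
import math
from statistics import NormalDist

def per_arm_sample_size(p_control, p_intervention, alpha=0.05, power=0.80):
    """Approximate participants required per arm to detect a difference
    between two proportions (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
    effect = abs(p_control - p_intervention)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# A rare, serious outcome with a modest effect demands an enormous trial...
print(per_arm_sample_size(0.010, 0.008))  # roughly 35,000 per arm
# ...whereas a more frequent composite morbidity outcome is far more feasible.
print(per_arm_sample_size(0.20, 0.15))    # roughly 900 per arm
```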

Statistical analysis of randomised trials follows intention-to-treat principles, in which participants are analysed in the group to which they were allocated. Analysis in this way ensures that the effects of randomisation are maintained, preserving the random distribution of both known and unknown factors that may influence treatment outcomes across the treatment groups. In contrast, statistical analysis according to the intervention actually received essentially removes the effects of randomisation, introducing bias. From a trial design perspective, the challenge lies in ensuring that randomisation occurs as close as possible to the point of intervention delivery, maximising the chance that the intervention is received as allocated.
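The following toy example (invented records, for illustration only) shows how the two approaches differ: grouping by allocated arm preserves randomisation, while grouping by the intervention actually received does not:

```python
from collections import defaultdict

# Hypothetical trial records: the arm each participant was randomised to,
# the arm actually received (one participant crosses over), and the outcome.
participants = [
    {"allocated": "intervention", "received": "intervention", "event": False},
    {"allocated": "intervention", "received": "control",      "event": True},
    {"allocated": "control",      "received": "control",      "event": True},
    {"allocated": "control",      "received": "control",      "event": False},
]

def event_rates(records, group_key):
    """Event rate per group, grouping participants by the given key."""
    counts = defaultdict(lambda: [0, 0])  # group -> [events, total]
    for record in records:
        counts[record[group_key]][0] += record["event"]
        counts[record[group_key]][1] += 1
    return {group: events / total for group, (events, total) in counts.items()}

print(event_rates(participants, "allocated"))  # intention to treat: as randomised
print(event_rates(participants, "received"))   # 'as treated': randomisation broken
```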

It has been estimated that less than half of the approximately one million trials that have been conducted have been published,10 representing significant publication bias, with trials demonstrating positive treatment effects being more likely to be published in English-language journals. In an attempt to reduce publication bias and the selective reporting of trial results, prospective trial registration has been introduced, with an increasing number of journals requiring demonstration that a trial was registered prior to recruitment of the first participant.

The concept of a single randomised trial providing ‘the answer’ to a clinical question is somewhat of a fallacy, with most research raising and generating more questions than are answered. It has been stated that: ‘Evidence does not speak for itself – it requires interpretation in light of its original context (and) limitations…in order to inform the practical decisions of other (clinicians).’11 In view of this, clinicians require training to be ‘sceptical and discriminating’, to develop the skills required to make the best use of research evidence, and then generate positive changes in clinical practice to improve health outcomes.12

References

  1. Evidence-Based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420-2425.
  2. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71-72.
  3. Vandenbroucke JP. Evidence-based medicine and ‘médecine d’observation’. J Clin Epidemiol. 1996;49(12):1335-8.
  4. Liberati A, Vineis P. Introduction to the symposium: what evidence based medicine is and what it is not. J Med Ethics. 2004;30(2):120-1.
  5. Cochrane A. Effectiveness and efficiency. Random reflections on health services. The Nuffield Provincial Hospitals Trust; 1972.
  6. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408-12.
  7. Petticrew M, Chalmers I. Use of research evidence in practice. Lancet. 2011;378(9804):1696.
  8. Freemantle N, Calvert M, Wood J, et al. Composite outcomes in randomized trials: greater precision but with greater uncertainty? JAMA. 2003;289(19):2554-9.
  9. Tomlinson G, Detsky AS. Composite end points in randomized trials: there is no free lunch. JAMA. 2010;303(3):267-8.
  10. Dickersin K, Rennie D. Registering clinical trials. JAMA. 2003;290(4):516-23.
  11. Cook DA. Randomized controlled trials and meta-analysis in medical education: what role do they play? Med Teach. 2012 Apr 10. [Epub ahead of print]
  12. Gilbert R, Burls A, Glasziou P. Clinicians also need training in use of research evidence. Lancet. 2008;371(9611):472-3.
