Leichsenring, F., Abbass, A., Hilsenroth, M. J., Leweke, F., Luyten, P., … Steinert, C. (2016). Bias in research: Risk factors for non-replicability in psychotherapy and pharmacotherapy research. Psychological Medicine. doi:10.1017/S003329171600324X
An important feature of research is that it should be replicable. That is, another researcher should be able to obtain the same finding as the original study; this is a prerequisite for the validity of the conclusions. A recent estimate for cognitive and social psychology research is that only about 36% to 47% of studies are successfully replicated. Another study showed similarly low replicability of psychotherapy and pharmacotherapy research. Results that are neither replicable nor valid can lead to improper treatment recommendations. Leichsenring and colleagues review several research biases that affect the replicability of findings in psychotherapy and pharmacotherapy research, and they discuss how to limit these biases. Psychotherapy trials often involve an established treatment approach that is pitted against a comparison treatment in a head-to-head contest. Below I list some of the biases detailed by Leichsenring and colleagues that may affect the validity of psychotherapy trials.

First, in psychotherapy trials a large proportion of the differences in outcomes between a treatment and a comparison may be due to the researchers' allegiance to a particular therapy modality. This may be expressed unconsciously, for example by selecting outcome measures that are more sensitive to the effects of one type of treatment than another. The Beck Depression Inventory (BDI), for instance, is particularly sensitive to changes in cognitions, whereas the Hamilton Depression Rating Scale (HDRS) is particularly sensitive to physiological symptoms related to antidepressant medications. One way to deal with researcher allegiance effects is to include researchers and therapists who have an allegiance to each of the treatments under study.

Second, the integrity of the comparison treatment may be impaired. That is, the comparison treatment may not be carried out exactly as originally intended.
This could occur in pharmacological trials in which doses do not match clinical practice, or in psychotherapy trials in which therapists delivering the comparison treatment may be told to ignore key symptoms. To avoid this type of bias, it is important to properly train and supervise therapists and not to unduly constrain them by the study protocol.

Third, some studies make much of small effects that happen to be statistically significant. A common finding is that when two bona fide psychotherapies are compared, the differences between them tend to be small. Small differences, even if statistically significant, are often random, unimportant, and of little clinical significance. A related problem is that researchers sometimes use multiple outcome measures, find significant differences on only some of them, and report those as meaningful. Selectively emphasizing a small number of findings among a larger number of analyses capitalizes on chance variation, and such results are unlikely to replicate.
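To see why running many outcome measures invites spurious "significant" results, a brief back-of-the-envelope sketch helps. This is not from the article; it assumes a conventional 0.05 significance threshold, independent measures, and treatments that are in fact equivalent on every measure:

```python
# Probability of at least one false-positive "significant" result when
# testing several independent outcome measures at alpha = 0.05, assuming
# there is no true difference between the treatments on any measure.

def chance_of_false_positive(num_measures, alpha=0.05):
    """P(at least one test comes out significant by chance alone)."""
    return 1 - (1 - alpha) ** num_measures

for k in (1, 5, 10):
    print(f"{k:>2} outcome measures: {chance_of_false_positive(k):.0%}")
```

With a single measure the false-positive risk is 5%, but with ten independent measures the chance of at least one spuriously significant difference rises to about 40%, which is why a few cherry-picked findings from many analyses are unlikely to replicate.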
What should a clinician do when reading a comparative outcome study of psychotherapy? There are some technical red flags for research bias that require specialized knowledge (e.g., small sample sizes and their effect on reliability, over-interpretation of statistical significance in the context of small effects, and non-registration of a trial). But there are a few less technical things to look for. First, I suggest that clinicians focus primarily on meta-analyses rather than single research studies. Although not perfect, meta-analyses review a whole body of literature and are more likely to give a reliable estimate of the state of the research in a particular area. Second, clinicians should ask some important questions about the particular study: (a) are the results unusual (i.e., when comparing two bona fide treatments, is one "significantly" better, or are the results spectacular); (b) does the research team represent only one treatment orientation; and (c) do the researchers reduce the integrity of the comparison treatment in some way (e.g., by not training and supervising therapists properly, or by unreasonably limiting what therapists can do)?