Blog
The Psychotherapy Practice Research Network (PPRNet) blog began in 2013 in response to psychotherapy clinicians, researchers, and educators who expressed interest in receiving regular information about current practice-oriented psychotherapy research. It offers a monthly summary of two or three published psychotherapy research articles. Each summary is authored by Dr. Tasca and highlights practice implications of selected articles. Past blogs are available in the archives. This content is only available in English.
This month...

…I blog about therapist variables leading to poor outcomes, aspects of the therapeutic relationship and outcomes, and psychological therapies and patient quality of life.
Topics
- Adherence
- Alliance and Therapeutic Relationship
- Anxiety Disorders
- Attachment
- Attendance, Attrition, and Drop-Out
- Client Factors
- Client Preferences
- Cognitive Therapy (CT) and Cognitive-Behavioural Therapy (CBT)
- Combination Therapy
- Common Factors
- Cost-effectiveness
- Depression and Depressive Symptoms
- Efficacy of Treatments
- Empathy
- Feedback and Progress Monitoring
- Group Psychotherapy
- Illness and Medical Comorbidities
- Interpersonal Psychotherapy (IPT)
- Long-term Outcomes
- Medications/Pharmacotherapy
- Miscellaneous
- Neuroscience and Brain
- Outcomes and Deterioration
- Personality Disorders
- Placebo Effect
- Practice-Based Research and Practice Research Networks
- Psychodynamic Therapy (PDT)
- Resistance and Reactance
- Self-Reflection and Awareness
- Suicide and Crisis Intervention
- Termination
- Therapist Factors
- Training
- Transference and Countertransference
- Trauma and/or PTSD
- Treatment Length and Frequency
May 2015
Effects of Antidepressants in Treating Anxiety Disorders Are Overestimated
Roest, A.M., de Jonge, P., Williams, G.D., de Vries, Y.A., Schoevers, R.A., & Turner, E.H. (2015). Reporting bias in clinical trials investigating second-generation antidepressants in the treatment of anxiety disorders: A report of 2 meta-analyses. JAMA Psychiatry, doi:10.1001/jamapsychiatry.2015.15.
Previous research has shown that the effects of antidepressant medications for treating depression may be overestimated by as much as 35%. This occurs because of publication bias, which refers to the tendency among researchers and editors to prefer publishing positive findings, and also because of the occasional practice in the pharmaceutical industry of suppressing negative findings. In these meta-analyses, Roest and colleagues assessed publication bias in research on antidepressant medications for treating anxiety disorders (i.e., generalized anxiety disorder [GAD], panic disorder [PD], social anxiety disorder [SAD], posttraumatic stress disorder [PTSD], and obsessive-compulsive disorder [OCD]). Anxiety disorders are very common in the population, with an estimated 12-month prevalence of 12%. Second-generation antidepressants (i.e., selective serotonin reuptake inhibitors and serotonin-norepinephrine reuptake inhibitors) are the primary pharmacologic treatment for anxiety disorders. Roest and colleagues were also interested in outcome reporting bias, which refers to misreporting non-positive findings as if they were positive, and spin, which refers to interpreting non-positive results as beneficial. Positive findings are those in which the pharmacological agent significantly outperformed a placebo; non-positive findings are those in which it did not. Pharmaceutical companies in the US must register a trial with the Food and Drug Administration (FDA) before starting it if they wish to apply for US marketing approval. As a result, all medication trials and their findings, whether positive or non-positive, must be listed in the FDA register. Despite being listed with the FDA, however, not all trials and findings get published in peer-reviewed journals.
This is a problem because reviews and meta-analyses tend to focus only on published trials, and prescribing physicians tend to read only published trials and reviews. In their meta-analyses, Roest and colleagues compared findings from all FDA-registered medication trials to those published in peer-reviewed journals. Fifty-seven trials were registered with the FDA, but only 48 were published. Regarding publication bias, the proportion of studies with positive findings indicating efficacy of antidepressant medications was 72% among FDA-registered trials, whereas the proportion among trials published in a journal was 96%. Overall, trials were five times more likely to be published if they were positive than if they were non-positive. Regarding outcome reporting bias, 3 of 16 trials that were non-positive in the FDA review were reported as positive in journal publications. Regarding spin, an additional 3 of the 16 non-positive trials interpreted non-positive results as if they were positive. The effect size in the FDA data was g = 0.33, indicating a small average effect of the medications for anxiety disorders, whereas the effect size in published journals was g = 0.38, indicating a small to moderate effect. This represents a 15% overestimation of the effects of antidepressant medications for anxiety disorders in the published literature.
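The 15% figure follows directly from the two effect sizes reported above. A quick check of the arithmetic (the variable names here are mine, not from the article):

```python
# Relative inflation of the published effect size over the FDA-registry
# estimate, using the two values reported by Roest and colleagues.
fda_g = 0.33        # average effect size across all FDA-registered trials
published_g = 0.38  # average effect size across journal-published trials

# (0.38 - 0.33) / 0.33 is roughly 0.15, i.e. about a 15% overestimation
inflation = (published_g - fda_g) / fda_g
```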
Practice Implications
The effects of antidepressant medications for anxiety disorders appear to be overestimated by 15% in the published literature. This inflation is not as large as the 35% overestimation of the effects of antidepressant medications for depression. By contrast, as I reported in a previous PPRNet Blog, publication bias in psychotherapy trials is small and has little impact on the overall estimate of psychotherapy’s efficacy. Effect sizes for psychological interventions for anxiety disorders are moderate to large (g = 0.73). Combining medications and psychotherapy only modestly improves treatment efficacy, and medications may interfere with the efficacy of psychological interventions.
December 2013
Are The Parts as Good as The Whole?
Bell, E. C., Marcus, D. K., & Goodlad, J. K. (2013). Are the parts as good as the whole? A meta-analysis of component treatment studies. Journal of Consulting and Clinical Psychology, 81, 722-736.
Component studies (i.e., dismantling treatments or adding to existing treatments) may provide a method for identifying whether specific active ingredients in psychotherapy contribute to client outcomes. In a dismantling design, at least one element of the treatment is removed and the full treatment is compared to this dismantled version. In additive designs, a component is added to an existing treatment to examine whether the addition improves client outcomes. If the dismantled or added component is an active ingredient, then the condition with fewer components should yield less improvement. Among other things, results from dismantling or additive design studies can help clinicians make decisions about which components of treatments to add or remove for clients who are not responding. For example, Jacobson and colleagues (1996) conducted a dismantling study of cognitive-behavioral therapy (CBT) for depression. They compared: (1) the full package of CBT, (2) behavioral activation (BA) plus CBT modification of automatic thoughts, and (3) BA alone. This study failed to find differences among the three treatment conditions. These findings were interpreted to indicate that BA was as effective as CBT, and there followed an increased interest in behavioral treatments for depression. However, relying on a single study to influence practice is risky because single studies are often statistically underpowered and their results are not as reliable as the collective body of research. One way to evaluate the collective research is through meta-analysis, which allows one to assess an overall effect size in the available literature (see my November 2013 blog on why clinicians should rely on meta-analyses). In their meta-analysis, Bell and colleagues (2013) collected 66 component studies conducted between 1980 and 2010. For the dismantling studies, there were no significant differences between the full treatments and the dismantled treatments.
For the additive studies, the treatment with the added component yielded a small but significant effect at treatment completion and at follow-up. These effects were found only for the specific problems targeted by the treatment. Effects were smaller and non-significant for other outcomes such as quality of life.
Practice Implications
Psychotherapists are sometimes faced with a decision about whether to supplement current treatments with an added component, or whether to remove a component that may not be helping. Adding components to existing treatments leads to modestly improved outcomes, at least with regard to targeted symptoms. Removing components appears not to have an impact on outcomes. The findings of Bell and colleagues’ (2013) meta-analysis suggest that specific components or active ingredients of current treatments have a significant but small effect on outcomes. Some writers, such as Wampold, have argued that the small effects of specific components highlight the greater importance of common factors in psychotherapy (i.e., therapeutic alliance, client expectations, therapist empathy, etc.). This may be especially the case when it comes to improving a patient’s quality of life.
Author email: david.marcus@wsu.edu
November 2013
Researcher Allegiance in Psychotherapy Outcome Research
Munder, T., Brütsch, O., Leonhart, R., Gerger, H., & Barth, J. (2013). Researcher allegiance in psychotherapy outcome research: An overview of reviews. Clinical Psychology Review, 33, 501-511.
Although evidence for the efficacy of psychotherapy is largely uncontested, there remains debate about whether one type of treatment is more effective than another. This debate continues despite a recent American Psychological Association (APA) resolution on the equivalent efficacy of most systematic psychotherapy approaches. There are many aspects to this debate (e.g., some treatments are more researched than others and so appear to be better; symptom-focused measurements are more sensitive to change and so may favour one treatment over another; some treatments are more amenable to manualization and short-term application; etc.). One element of the debate that has received a lot of attention is researcher allegiance. Researcher allegiance refers to researchers preferring one treatment approach over another, and this preference may bias comparative outcome trials in favour of the preferred therapy. Researcher allegiance is measured by assessing primary researchers’ publication history or their self-declared preference for a particular therapy approach. Thirty meta-analyses have assessed researcher allegiance since the 1980s. These meta-analyses focused on different therapy types, client populations (adults, children), and research designs (randomized trials, naturalistic effectiveness studies). However, some meta-analyses have reported contradictory results for the researcher allegiance effect. This could be due to the different foci of the meta-analyses (i.e., different treatment approaches, patient populations, age groups, etc.), and also possibly due to the allegiance of those conducting the meta-analyses. Munder and colleagues (2013) conducted a mega-analysis of these meta-analyses. As the name implies, a mega-analysis aggregates the findings of available meta-analyses. Munder and colleagues found a significant moderate effect of researcher allegiance.
The researcher allegiance effect was consistent across patient populations, age groups, outcome measures, study designs, and years of publication.
Practice Implications
As the APA resolution indicates, psychotherapy is the informed and intentional application of clinical methods and interpersonal stances derived from established psychological principles. Evidence-based practice in psychotherapy is "the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences". The results of this mega-analysis undermine claims from some comparative outcome studies that one evidence-based psychotherapy is more effective than another.
Author email: tmunder@uni-kassel.de
May 2013
Are the Effects of Psychotherapy for Depression Overestimated?
Niemeyer, H., Musch, J., & Pietrowsky, R. (2013). Publication bias in meta-analyses of the efficacy of psychotherapeutic interventions for depression. Journal of Consulting and Clinical Psychology, 81, 58-74.
Meta-analyses are important ways of summarizing the effects of medical and psychological interventions by aggregating effect sizes across a large number of studies. (Don’t stop reading, I promise this won’t get too statistical). The aggregated effect size from a meta-analysis is more reliable than the findings of any individual study. That is why practice guidelines almost exclusively rely on meta-analyses when making practice recommendations (see for example the Resources tab on this web site). However, meta-analyses are only as good as the data (i.e., studies) that go into them (hence the old adage: “garbage in, garbage out”). For example, if the studies included in a meta-analysis are a biased representation of all studies, then the meta-analysis results will be unreliable, leading to misleading practice guidelines. One problem that leads to unreliable meta-analyses is called publication bias. Publication bias often refers to the tendency of peer-reviewed journals not to publish studies with non-significant results (e.g., a study showing a treatment is no better than a control condition). Publication bias may also refer to active suppression of data by researchers or industry. Suppression of research results may occur because an intervention’s effects were not supported by the data, or because the intervention was harmful to some study participants. In medical research, publication bias can have dire public health consequences (see this TED Talk). There is ample evidence that publication bias has led to a significant overestimation of the effects of antidepressant medications (see Turner et al. (2008), New England Journal of Medicine). Does publication bias exist in psychotherapy research, and if so, does this mean that psychotherapy is not as effective as we think? A recent study by Niemeyer and colleagues (2013) addressed this question with the most up-to-date research and statistical techniques.
They collected 31 data sets, each of which included 6 or more studies of psychotherapeutic interventions (including published and unpublished studies) for depression. The majority of interventions tested were cognitive-behavioral therapy, but interpersonal psychotherapy and brief psychodynamic therapy were also included. The authors applied sophisticated statistical techniques to assess whether publication bias existed. (Briefly, there are ways of assessing whether the distribution of effect sizes across studies falls in a predictable pattern called a “funnel plot”; specific significant deviations from this pattern indicate positive or negative publication bias). Niemeyer and colleagues found minimal evidence of publication bias in published research on psychotherapy for depression. This minimal bias had almost no impact on the size of the effect of psychotherapy for depression.
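One common way to quantify funnel-plot asymmetry is Egger’s regression test: regress each study’s standardized effect on its precision and check whether the intercept departs from zero. The sketch below illustrates that logic under my own simplified assumptions; it is not necessarily the exact procedure Niemeyer and colleagues used.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry.
# Each study's standardized effect (g / SE) is regressed on its precision
# (1 / SE); an intercept far from zero suggests small-study effects such
# as publication bias. Plain ordinary least squares, no external libraries.
def egger_intercept(effects, std_errors):
    ys = [g / se for g, se in zip(effects, std_errors)]  # standardized effects
    xs = [1.0 / se for se in std_errors]                 # precisions
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y - slope * mean_x  # OLS intercept
```

With no bias, small and large studies scatter around the same true effect and the intercept sits near zero; if small (high-SE) studies systematically report inflated effects, the intercept is pushed away from zero.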
Practice Implications
This is a very important result: despite a minor tendency toward selective publication of positive results, the efficacy of all reviewed psychotherapy interventions for depression remained substantial, even after correcting for publication bias. Niemeyer and colleagues’ findings demonstrate that publication bias alone cannot explain the considerable efficacy of psychotherapy for depression. Psychotherapeutic interventions can still be considered efficacious and recommended for the treatment of depression.
Author email address: helen.niemeyer@hhu.de
February 2013
Does Participating in Research Have a Negative Effect on Psychotherapy?
Town, J. M., Diener, M. J., Abbass, A., Leichsenring, F., Driessen, E., & Rabung, S. (2012). A meta-analysis of psychodynamic psychotherapy outcomes: Evaluating the effects of research-specific procedures. Psychotherapy, 49, 276-290.
One of the main reasons that some clinicians do not participate in research is that they argue that doing so will have a negative impact on the therapeutic relationship, the therapy process, and patient outcomes. Although I have heard this from clinicians of many theoretical orientations, this opinion is perhaps most strongly held by some colleagues with a psychodynamic orientation. I identify with psychodynamic theory and practice, so this opinion about research held by some of my colleagues has been very disconcerting to me. Up to now, the best I could say in defense of practice-based research on psychodynamic therapy was to talk about my own experiences, which have been highly positive and rewarding. A recent meta-analysis by Town and colleagues from Dalhousie University changes all that. (First, a note about meta-analysis. Meta-analysis is a statistical way of combining the effects of many studies, each of which has a number of participants, into a common metric called an effect size. By combining studies, the end result is more meaningful and more reliable than the results of any single study on its own.). The meta-analysis by Town and colleagues included 45 independent samples and over 1600 patients. Results indicated that psychodynamic treatments for a variety of disorders (e.g., depression, anxiety, personality disorders) showed a significant large positive treatment effect – this is not new. What is new is that, compared to conditions in which no research-specific protocols were introduced, conditions that did use research protocols were no different in terms of patient outcomes up to one year post-treatment. There was even a significant small positive effect of these research protocols on outcomes from post-treatment to one year post-treatment.
Research-specific protocols included video recordings of therapy sessions, therapists following treatment manuals, fidelity checks to make sure therapists were accurately delivering psychodynamic therapy, and psychometric measurements of processes and outcomes.
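The “common metric” idea behind a meta-analysis can be illustrated with the standard fixed-effect, inverse-variance pooling formula. This is a sketch under my own naming; real meta-analyses such as Town and colleagues’ typically use random-effects models and considerably more machinery.

```python
# Sketch of fixed-effect meta-analytic pooling: each study's effect size
# is weighted by the inverse of its variance, so larger, more precise
# studies contribute more to the pooled estimate.
def pooled_effect(effects, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)
```

For example, pooling two studies that both found g = 0.5 returns 0.5 regardless of their variances, while a precise study pulls the pooled estimate toward its own result when studies disagree.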
Practice Implications
Research protocols do not have a negative impact on psychodynamic therapy outcomes. Perhaps research protocols should be introduced into all therapies to improve longer term outcomes in addition to studying therapy procedures and processes that work.
Author email: joel.town@dal.ca