Blog
The Psychotherapy Practice Research Network (PPRNet) blog began in 2013 in response to psychotherapy clinicians, researchers, and educators who expressed interest in receiving regular information about current practice-oriented psychotherapy research. It offers a monthly summary of two or three published psychotherapy research articles. Each summary is authored by Dr. Tasca and highlights practice implications of selected articles. Past blogs are available in the archives. This content is only available in English.
This month, I blog about therapist empathy, psychotherapeutic treatment for borderline personality disorder, and research on psychological treatment of depression.
May 2013
Are the Effects of Psychotherapy for Depression Overestimated?
Niemeyer, H., Musch, J., & Pietrowsky, R. (2013). Publication bias in meta-analyses of the efficacy of psychotherapeutic interventions for depression. Journal of Consulting and Clinical Psychology, 81, 58-74.
Meta-analyses are important ways of summarizing the effects of medical and psychological interventions by aggregating effect sizes across a large number of studies. (Don’t stop reading, I promise this won’t get too statistical). The aggregated effect size from a meta-analysis is more reliable than the findings of any individual study. That is why practice guidelines almost exclusively rely on meta-analyses when making practice recommendations (see, for example, the Resources tab on this website). However, meta-analyses are only as good as the data (i.e., studies) that go into them (hence the old adage: “garbage in, garbage out”). For example, if the studies included in a meta-analysis are a biased representation of all studies, then the meta-analysis results will be unreliable, leading to misleading practice guidelines. One problem that leads to unreliable meta-analyses is called publication bias. Publication bias often refers to the tendency of peer-reviewed journals not to publish studies with non-significant results (e.g., a study showing a treatment is no better than a control condition). Publication bias may also refer to active suppression of data by researchers or industry. Suppression of research results may occur because an intervention’s effects were not supported by the data, or because the intervention was harmful to some study participants. In medical research, publication bias can have dire public health consequences (see this TED Talk). There is substantial evidence that publication bias has led to a significant over-estimation of the effects of antidepressant medications (see Turner et al. (2008), New England Journal of Medicine). Does publication bias exist in psychotherapy research, and if so, does this mean that psychotherapy is not as effective as we think? A recent study by Niemeyer and colleagues (2013) addressed this question with the most up-to-date research and statistical techniques.
They collected 31 data sets, each of which included 6 or more studies of psychotherapeutic interventions (including published and unpublished studies) for depression. The majority of interventions tested were cognitive-behavioural therapy, but interpersonal psychotherapy and brief psychodynamic therapy were also included. The authors applied sophisticated statistical techniques to assess whether publication bias existed. (Briefly, there are ways of assessing whether the distribution of effect sizes across studies falls into a predictable pattern called a “funnel plot” – specific significant deviations from this pattern indicate positive or negative publication bias). Niemeyer and colleagues found minimal evidence of publication bias in published research of psychotherapy for depression. This minimal bias had almost no impact on the size of the effect of psychotherapy for depression.
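For readers curious about how funnel-plot asymmetry can be detected, here is a minimal sketch in Python of one common approach, Egger's regression test (regressing each study's standardized effect on its precision; an intercept far from zero suggests small-study or publication bias). This is an illustrative toy on simulated data, not the actual analysis Niemeyer and colleagues ran, and the `egger_test` helper name is my own.

```python
import random

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry (illustrative).

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    An intercept near zero is consistent with a symmetric funnel plot
    (little evidence of publication bias); a large intercept suggests
    small-study effects such as selective publication.
    Returns the regression intercept.
    """
    y = [e / se for e, se in zip(effects, std_errors)]  # standardized effects
    x = [1.0 / se for se in std_errors]                 # precisions
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx                 # estimates the underlying pooled effect
    intercept = mean_y - slope * mean_x  # the asymmetry measure
    return intercept

# Simulate an unbiased literature: 30 studies, true effect 0.5,
# each observed with sampling noise proportional to its standard error.
random.seed(1)
ses = [0.05 + 0.3 * random.random() for _ in range(30)]
effects = [0.5 + random.gauss(0, se) for se in ses]

# With no publication bias built in, the intercept should sit near zero.
print(round(egger_test(effects, ses), 2))
```

In a biased literature (e.g., small studies with null results left unpublished), the small, imprecise studies with large effects are over-represented, which tilts the funnel and pushes the intercept away from zero — the kind of deviation the techniques in the article are designed to flag.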
Practice Implications
This is a very important result: despite a minor tendency toward selective publication of positive results, the efficacy of all reviewed psychotherapy interventions for depression remained substantial, even after correcting for publication bias. Niemeyer and colleagues’ findings demonstrate that publication bias alone cannot explain the considerable efficacy of psychotherapy for depression. Psychotherapeutic interventions can still be considered efficacious and recommended for the treatment of depression.
Author email address: helen.niemeyer@hhu.de