The Great Psychotherapy Debate: Since April 2015, I have been reviewing parts of The Great Psychotherapy Debate (Wampold & Imel, 2015) in the PPRNet Blog. This is the second edition of a landmark, and sometimes controversial, book that surveys the evidence for what makes psychotherapy work. You can view parts of the book in Google Books.
Wampold, B.E. & Imel, Z.E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work (2nd edition). New York: Routledge.
In this part of the chapter on efficacy, Wampold and Imel provide convincing evidence from numerous reviews of meta-analyses that the average effect size of psychotherapy across diverse treatments and patients is about d = .80. This is a reliable figure and is considered a “large” effect by commonly accepted standards. Put another way, the average psychotherapy patient is better off than 79% of untreated clients, psychotherapy accounts for 14% of the variance in outcomes, and for every three patients who receive psychotherapy, one will have a better outcome than if they had not received psychotherapy. In other words, psychotherapy is remarkably efficacious. These effect size estimates are mostly drawn from randomized clinical trials that are highly controlled (i.e., therapists are highly trained and supervised, patients are sometimes selected to have no co-morbid problems, treatment fidelity to a manual is closely monitored, etc.). Some argue that the context of these trials renders them artificial, and that findings from these trials reveal little about psychotherapy as practiced in the real world with complex patients.

How do findings from controlled clinical trials compare to everyday clinical practice? Wampold and Imel review the evidence from three areas of research: clinical representativeness, benchmarking, and comparisons to treatment as usual. With regard to clinical representativeness, a meta-analysis (k > 1,000 studies) coded studies for type of treatment setting, therapist characteristics, referral sources, use of manuals, client heterogeneity, etc. It found that the therapies most representative of typical practice had effects similar to those observed in highly controlled studies. With regard to benchmarking, a large study (N > 5,700 patients) compared treatment effects observed in naturalistic settings to clinical trial benchmarks. 
Benchmarks were defined as scores on an outcome measure (e.g., a depression scale) that are within 10% of the scores reported in clinical trial research. Treatment effects in naturalistic settings were equivalent to, and sometimes better than, the clinical trial benchmarks. Further, therapists in practice settings achieved the same outcomes in fewer sessions than therapists in clinical trials. With regard to comparisons to treatment as usual, a meta-analysis (k = 30 studies) of personality disorders examined studies that compared evidence-based treatments tested in clinical trials to treatment as usual. It found that evidence-based treatments were significantly more effective than treatment as usual, with moderate effect sizes. These results suggest that when it comes to personality disorders, the special training and supervision common in clinical trials might be beneficial.
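As a sanity check on the headline figures above, the three restatements of d = .80 (the 79th-percentile comparison, the 14% of outcome variance, and roughly one in three patients benefiting) follow from standard conversions of Cohen's d. A minimal sketch, assuming normally distributed outcomes and using the Kraemer–Kupfer conversion for number needed to treat (the specific formulas are my illustration, not taken from the book):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d = 0.80  # average psychotherapy effect size (Cohen's d) reported by Wampold and Imel

# Percentile of the average treated patient relative to untreated controls
percentile = phi(d)  # ~0.79, i.e., better off than ~79% of untreated clients

# Variance in outcomes accounted for: r^2, where r = d / sqrt(d^2 + 4)
r = d / math.sqrt(d**2 + 4)
variance_explained = r**2  # ~0.14, i.e., ~14% of outcome variance

# Number needed to treat (Kraemer & Kupfer conversion from d)
nnt = 1.0 / (2.0 * phi(d / math.sqrt(2.0)) - 1.0)  # ~2.3, commonly rounded to ~3

print(f"percentile: {percentile:.3f}")
print(f"variance explained: {variance_explained:.3f}")
print(f"NNT: {nnt:.2f}")
```

Note that different NNT conversions exist in the literature; the one sketched here yields a value between 2 and 3, consistent with the book's "one in three" framing.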
Wampold and Imel argue that psychotherapy as tested in clinical trials is remarkably effective: the average treated patient is better off than 79% of untreated controls. The evidence also suggests that psychotherapy practiced in clinical settings is effective, and probably as effective as psychotherapy tested in controlled clinical trials. Therapists who treat patients with personality disorders, however, may benefit from additional training and supervision to improve patient outcomes in everyday practice.