Just this week, an FDA advisory panel voted 10-1 against recommending MDMA-assisted therapy for the final phase of its approval. Among several specific complaints regarding the rigor of the research and the disclosure of adverse events, one issue emerged that strained the existing double-blind placebo-controlled experimental model to the breaking point:
Folks who were rolling could definitely tell. And those who weren’t could too.
It’s prompting a bunch of think pieces, including this one at The Atlantic on “How Psychedelics Are Challenging the Scientific Gold Standard.” It also saddened a bunch of advocates of trauma therapy who were really pinning their hopes on MDMA approval.
The culmination of a forty-year push from MAPS (the Multidisciplinary Association for Psychedelic Studies) appears stalled at the finish line (for now).
There are tons of issues involved here, from over-hyping any single treatment as a cure-all, to ideologically burdening an off-patent methamphetamine (that’s the MA in MDMA) with saving the world, to excusing the personality quirks and flaws of the movement’s leading figureheads, and all the rest.
But there’s a broader issue at stake here, which is potentially the collapse of the reductionist-materialist model of doing science altogether. Not only is it really hard to tease apart expectations from results, it’s even harder to separate the drug’s effects from the talk therapy that’s supposed to accompany this particular medicine (to say nothing of the real weirdness of ketamine, psilocybin, LSD and other compounds).
That’s what I’d like to take a look at today–the bigger question of how we can separate mind from matter, intention from prescription, and the medicine from the magic.
FROM SUGAR PILLS TO KITCHEN SINKS
It’s well known at this point that entire fields, ranging from the social sciences to behavioral psychology, are suffering what’s been termed “the Replication Crisis.” Groundbreaking, headline-grabbing study after study simply cannot be repeated by subsequent teams following the original protocols.
A recent bombshell paper in the journal Science took 100 peer-reviewed psychological studies and tried to replicate them. Only 39 percent passed.
Basically, if you heard about it from Malcolm Gladwell or a TED talk, it’s likely no longer true. From Marshmallow tests to Prison experiments, from Depletable Willpower to Wonder Woman poses, our favorite cocktail party tidbits lie in tatters all around us.
Some of it can be chalked up to overzealous researchers fudging their data, “p-hacking,” or cherry-picking results to bolster their findings (for a helpful primer on p-values, see here).
Some of it can be chalked up to professional jealousy, where subsequent replication efforts either willfully or carelessly tweak key conditions or assumptions that were essential for the success of the initial experiment.
And the rest can be chalked up to the simple fact that science is hard. And life is messier than headlines and grant proposals would suggest.
Virtually anything interesting enough to study--whether it’s pinpointing the effects of the latest diet, or finding out if listening to Mozart makes for smarter babies--doesn’t happen in a vacuum. It happens in our bodies and brains, hearts and minds, and very much in the midst of the rest of our lives. Correlation, we are always reminded, does not equal causation! (see the critics of Jonathan Haidt’s recent cell-phone and adolescent-anxiety research)
And yet we keep trying and trying to isolate and validate the Silver Bullet. The trouble is, there’s rarely a single bullet, pill or procedure that gets the job done.
When it comes to fixing, improving or healing, we’re less often sharpshooters with rifles than drunks with shotguns--spraying buckshot in the general direction of our target and praying that we’ll hit something helpful.
The gold standard of all research trials is the double-blind placebo-controlled study. The ‘placebo-controlled’ part means that subjects get split into groups--the first group gets the actual medicine or intervention you’re testing, the second group gets a placebo or sham treatment, and sometimes a third ‘control’ group gets absolutely nothing.
The ‘double-blind’ part means that neither the patients nor the researchers know who got what. That way, when the results come in, there’s no Clever Hans the Talking Horse effect, where a scientist could purposefully or accidentally lead the patient to certain outcomes.
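To make the mechanics concrete, here’s a minimal sketch in Python (with invented arm names--not from any actual trial protocol) of how blinded assignment works: every participant gets an opaque code, and the key matching codes to treatments stays sealed until all the outcomes are recorded.

```python
import random
import uuid

# Hypothetical arm names, invented for illustration.
ARMS = ["active_drug", "placebo", "no_treatment"]

def randomize(participants):
    """Assign each participant to an arm. Returns a blinded roster
    (codes only) and a separate unblinding key held by a third party."""
    roster, key = [], {}
    for person in participants:
        code = uuid.uuid4().hex[:8]  # opaque label: all patients and researchers ever see
        arm = random.choice(ARMS)    # simple randomization (real trials block and stratify)
        roster.append({"participant": person, "code": code})
        key[code] = arm              # stays sealed until all outcomes are recorded
    return roster, key

roster, key = randomize(["p01", "p02", "p03", "p04", "p05", "p06"])
print(roster)  # what clinicians see: codes, never arm labels
# `key` is only opened at analysis time, which is what keeps both sides blind.
```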
But something strange has been happening with those double-blind placebo studies in the past few decades.
Since 1996, the placebo effect--where a patient taking a sham treatment or drug experiences positive change--has shot up 18 percent.
Jeffrey Mogil of McGill University, who discovered the trend, says “the placebo response is growing bigger over time, [and] is the most interesting phenomenon in all of science...It’s at the precise interface of biology and psychology.”
Another leading expert on placebo research, Harvard’s Ted Kaptchuk, takes an almost anthropological view of the phenomenon. “[We’re] finding out what is it that’s usually not paid attention to in medicine — the intangible that we often forget when we rely on good drugs and procedures. The placebo effect is a surrogate marker for everything that surrounds a pill. And that includes rituals, symbols, doctor-patient encounters” [emphasis added].
While the numbers vary across disciplines and interventions, the placebo effect can account for anywhere from 15 to 78 percent of the overall healing impact.
That makes it quite tricky to isolate the impact of a single drug or a specific therapy. And it makes it nearly impossible to back any single variable out of a multi-variable equation, where a bunch of things have all combined to produce a positive effect.
Kaptchuk designed an ingenious study at Harvard to see what role the “rituals, symbols and doctor-patient encounters” played in healing. To do it, he set up a three-group study of sham acupuncture treatment for Irritable Bowel Syndrome--a notoriously difficult-to-treat chronic condition.
The first group received a fake acupuncture treatment (the doctor had needles present but only pretended to insert them all the way into the skin), administered by a cold and mostly uncommunicative physician.
The second group had the same sham treatment, but from a physician who specifically engaged their patients and empathized with the realities of living with the disease.
The third group got waitlisted and no love at all.
Even without any actual medical treatment, the patients who’d been helped by the supportive doctors felt better. “These results,” Kaptchuk wrote in the study, “indicate that such factors as warmth, empathy, duration of interaction, and the communication of positive expectation might indeed significantly affect clinical outcome.”
Now in this case, all Kaptchuk was trying to track was the cumulative role of non-medical interventions, aka ‘placebos.’ But say he’d been inspired to follow up and determine exactly which element of that healing ritual held the juju that made it work?
Were the healing effects influenced by the gender of the doctor?
Their personality?
Their clothing (street dress, business attire, white coats)?
The lighting (fluorescent, daylight, mood)?
Was there music playing? If so, what kind and at what volume?
Most studies never get to this level of granularity because they don’t have the time, the subjects or the funding to isolate every conceivable variable and check precisely which bit did what.
But even if they could, they’d be up against the irreducible vagueness of the placebo effect.
If each contributing factor to a solution accounts for 5 to 10 percent of the overall impact, but the placebo effect clocks in at 30 percent (or higher), you can never reliably single out and confirm the entourage effect of these subtle triggers. The smaller signals get lost in the noise of the overriding power of placebo.
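A quick toy simulation makes the point (every number here is invented for illustration): give a component a real 5-point benefit, let placebo responses swing by 15 points per person, and watch how rarely a modest trial flags it.

```python
import math
import random
import statistics

# All figures invented for illustration.
COMPONENT_EFFECT = 5.0                 # the small, real benefit we're hunting
PLACEBO_MEAN, PLACEBO_SD = 30.0, 15.0  # big, noisy placebo response
N_PER_ARM, N_SIMS = 30, 2000

def trial_detects_effect():
    """Run one simulated two-arm trial and apply a crude z-test."""
    control = [random.gauss(PLACEBO_MEAN, PLACEBO_SD) for _ in range(N_PER_ARM)]
    treated = [random.gauss(PLACEBO_MEAN + COMPONENT_EFFECT, PLACEBO_SD)
               for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / N_PER_ARM
                   + statistics.variance(treated) / N_PER_ARM)
    return diff / se > 1.96  # roughly p < .05

hits = sum(trial_detects_effect() for _ in range(N_SIMS))
print(f"Real 5-point effect detected in only ~{hits / N_SIMS:.0%} of trials")
```

With these made-up numbers, the genuinely helpful component reads as ‘significant’ only about a quarter of the time--in every other run, it drowns in the placebo noise.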
THE KITCHEN SINK METHOD
It’s not that the double-blind method is broken, it’s just that it’s limited, and not always the right tool for the job. It’s best for tracking linear incremental change, but it breaks down when asked to make leaps of faith or logic, or to track truth claims across different disciplines.
But there is another method we can add here, which does a much better job of tracking multivariable equations. Call it the Kitchen Sink Method (KSM).
In the KSM, we don’t try and isolate single factors at first. Instead, we do the exact opposite. We literally throw “everything but the kitchen sink” at the problem. We combine everything that has an evidence-based rationale for impact until we absolutely, positively get the result we’re looking for.
Then we confirm our baseline metrics for efficacy--getting a clear sense of the markers that correlate with the positive benefit we’re seeking.
(in the case of the MDMA trial, it might include the therapist and their modality, the music and visuals, the room, the seating, additional biofeedback or haptic devices, etc.)
So instead of working bottom-up, as most single-variable double-blind studies do--trying to isolate single factors (typically pharmacological or technological interventions)--we work top-down.
Say, for instance, that Kaptchuk ran a full-spectrum follow-up to the Harvard acupuncture study, but with MDMA therapy instead.
Only this time, he created an optimally warm and inviting experience: soothing music, immersive AR/VR, olfactory cues, vibrating sound beds, follow-up coaching, community support, and everything else he could think of to maximize the “rituals, symbols and doctor-patient encounters” for peak healing.
(Shamanism by any other name. Just swapping rattles and drums for stethoscopes and clipboards).
And let’s say this Kitchen Sink Method worked. Really well. Longtime sufferers of trauma experienced significant relief and healing (which is kind of the point of all this research in the first place).
With the desired outcome established and repeatable, he could then work backwards to figure out which elements were ‘nice to have’ vs. which ones were ‘have to have.’ He could back off one variable at a time until he observed an undesired drop-off in efficacy.
Then he could tune the sweet spot for an optimally shareable protocol, confident that he was including the full spectrum of treatment options without adding unnecessary bells and whistles (or cost and complexity).
He might, for instance, find that the MDMA itself only accounted for 15 percent of the impact, but the community support group and empathetic doctor were responsible for 60 percent of the healing.
He might find that lavender scents in the treatment room made a real difference, or that clinical disinfectants made things much worse. He might find that female doctors offered much better outcomes, or that matching the doctor’s gender to the patient’s worked best.
Or that listening to Alan Watts recordings was more helpful than Jordan Peterson’s.
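Mechanically, that backwards pass is a greedy ablation loop: start from the full kitchen-sink bundle, try dropping each piece, and keep a removal only if efficacy holds above your threshold. Here’s a toy sketch in Python using hypothetical percentages like the ones above (all invented for illustration--in a real study, every efficacy check would be its own trial arm):

```python
# Hypothetical contributions to overall healing (percent of impact) -- invented for illustration.
COMPONENTS = {
    "mdma": 15, "empathetic_doctor": 30, "community_support": 30,
    "music": 8, "vr_visuals": 5, "lavender_scent": 4, "haptic_bed": 3,
}

def efficacy(bundle):
    """Stand-in outcome measure: a simple sum of contributions.
    In a real study this would be a measured clinical endpoint."""
    return sum(COMPONENTS[c] for c in bundle)

def kitchen_sink_ablation(min_acceptable=80):
    """Start with everything, then strip components one at a time,
    keeping each removal only if efficacy stays above threshold."""
    bundle = set(COMPONENTS)
    for component in sorted(COMPONENTS, key=COMPONENTS.get):  # weakest effects first
        trimmed = bundle - {component}
        if efficacy(trimmed) >= min_acceptable:
            bundle = trimmed  # a 'nice to have' -- safe to drop
    return bundle

print(kitchen_sink_ablation())
# With these invented numbers, the loop sheds the scent, the haptics and the VR,
# keeping the MDMA, the music, the empathetic doctor, and the community support.
```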
Regardless, he’d have much more visibility and precision in his tweaking, because he was working backwards from a successful experiment. Rather than stumbling blindly from ground zero without a clear sense of where he was going or what exactly he was looking for, he could plant a flag in the high ground of a successful experiment and optimize from there.
That’s the Kitchen Sink Method in a nutshell. It can’t replace double-blind placebo-controlled studies for drug evaluations, but it might be a more helpful method for figuring out the nuances of how we heal and grow, in all of our baffling contradictions and complexity.
This is especially true when we move from seeking single-pointed solutions like a pill or a piece of high-tech wizardry to combined therapies that rely on a bunch of smaller effects that add up to big change–as the MDMA + Therapy model is attempting.
Any one of the interventions in isolation wouldn’t poke its head above the placebo waterline, but together they do something almost magical--they work.
That may not be the stuff of Big Pharma blockbuster drugs, or match the strict empiricism of FDA protocols, but it’s more or less how healing has always been done.
We’re just catching up with the science now.
So while this week’s ruling might be perceived as a setback for advocates of psychedelic therapies, in reality, it might be a breakthrough.
The scientific method, which has been so dedicated to atomizing bits (the component parts) from Its (the living wholes), might need to reverse its approach.
If we want to study wholeness and healing, we might be better off starting from the wholes and working backwards to the bits that are broken.
This is what one presidential candidate wrote about the FDA decision: "MDMA treatment for PTSD and other mental illnesses has a big drawback — a single treatment is effective. That means small profit potential compared to lifetime treatment with conventional pharmaceuticals. Maybe that’s why an FDA advisory committee shot it down. One of the committee members actually works for Johnson & Johnson.
It’s especially serious for veterans. Every hour, a veteran commits suicide. MDMA is a promising PTSD treatment, but Big Pharma hates it because it doesn't require daily pills for life."