30 Oct
The Use and Abuse of Cannabidiol Science
Mark Twain once said, “A lie can travel halfway around the world before truth even gets its boots on.” When it comes to reporting on cannabis science, in many ways we’re still traveling barefoot.
The recent revelations about the Sugar Research Foundation bribing Harvard researchers to mislead the public on the role of sugar in heart disease are a particularly egregious example of bad science, but outright deception is not the only way that bogus science makes its way into the public dialogue. Researcher bias and a desire for sensational discoveries can also lead to misinformation—especially when it comes to controversial topics like cannabis. Journalists often compound these distortions by hyping dubious science and ignoring research that calls into question conventional wisdom.
In assessing the veracity and significance of scientific reports, several questions are paramount: What data is being measured? How does the data relate to the phenomenon being studied? What conclusions are drawn from the data? Are the conclusions justified or do they distort or overstate what the data implies?
Consider, for example, a May 2014 article in Pharmacology, Biochemistry and Behavior examining the addictive potential of cannabinoids. The article begins by acknowledging the difficulties in getting rats to self-administer THC, marijuana’s major psychoactive component: “Because ∆9-tetrahydrocannabinol (THC) has been a false negative in rat intravenous self-administration procedures, the evaluation of the abuse potential of candidate cannabinoid medications has proven difficult…. Clarification of underlying factors responsible for the failure of THC to maintain self-administration in cannabinoid-trained rats is needed.”
Note the assumptions at play here. From the start it is assumed that THC is addictive, and contradictory evidence is rationalized to fit that assumption. The authors acknowledge that THC is not addictive in what they refer to as “the ‘gold standard’ in preclinical assessment of abuse liability,” and so one reasonable conclusion is that THC is not addictive. Alternatively, one can conclude that this “gold standard” model does not capture the complex nature of addiction to cannabinoids. Yet neither of these possibilities is mentioned.
Because THC wouldn’t cooperate, the authors utilized a different compound in an experiment that sought to replicate and extend an addiction model that had been successfully performed, but only in a few labs. (Repeating experiments across different lab groups is very important in scientific research.) This model involved the trained self-administration of WIN55,212-2, a synthetic cannabinoid, by rats. WIN55,212-2 (or WIN55, for short) is a potent activator of CB1 and CB2, the same cannabinoid receptors that THC stimulates. But unlike with THC, rats can be trained to self-administer WIN55, which is not derived from marijuana. “The effects of WIN may be more comparable to the frequently abused synthetic cannabinoids, often referred to as K2 and spice, than to marijuana,” University of Pittsburgh scientists reported in a different journal.
This is not to say that the authors of the WIN55 experiment intended to deceive or that their research is without merit. The authors do not step beyond their data to make claims about THC or cannabinoids more generally. But from the first sentences of this article, there is an inconsistency between the authors’ assumptions and the data presented.
Between 2008 and 2014, the National Institutes of Health spent $1.4 billion on marijuana research. Most of the money ($1.1 billion) was earmarked for abuse and addiction studies. Some of this research has yielded important insights into the endocannabinoid system and its pivotal role in health and disease. For the most part, however, therapeutic-oriented studies have gotten short shrift because of the narrow, politicized agenda of the National Institute on Drug Abuse (NIDA), which has impeded important areas of research.
Because the U.S. Drug Enforcement Administration considers cannabis and cannabinoids to be highly dangerous, studies are rarely performed on humans. Instead, animal models of disease are created, and then these animals are treated with cannabinoids (rarely cannabis itself). But a disease model is not the same as an actual disease, and data from animal studies are not always applicable to human experience. One inherent flaw in animal models is that animals don’t have the exact same proteins, anatomy and brains as people; thus a drug may not have the same effect on a human as it does on a mouse or rat.
For example, is precipitated withdrawal—which involves getting an animal addicted to a substance and then blocking the primary receptor at which that substance acts—really an accurate model of withdrawal in humans? Perhaps it makes sense when studying opiate withdrawal, as this technique is sometimes used in rehab clinics. But for cannabis such a withdrawal model may be irrelevant, if not altogether misleading.
The faults in models of disease often boil down to confounding variables. Confounding variables allow people to imagine a cause-and-effect where one does not exist. The classic example is that there is a very high correlation between ice cream sales and the drowning rate on a given day. Eating ice cream, of course, does not cause people to drown; the confounding variable is the weather. (On hot days, both of these increase. On cold days, they both decrease.)
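The ice cream and drowning example can be made concrete with a toy simulation (the numbers below are made up purely for illustration): both series are driven by temperature, so they correlate strongly even though neither causes the other, and the correlation vanishes once the confounder is held fixed.

```python
import random

random.seed(0)

# Simulate 365 days. Temperature drives BOTH ice cream sales and drownings;
# the two outcomes never influence each other directly.
temps = [random.uniform(0, 35) for _ in range(365)]            # daily highs, in C
ice_cream = [50 + 10 * t + random.gauss(0, 20) for t in temps]  # sales
drownings = [0.1 * t + random.gauss(0, 0.5) for t in temps]     # incidents

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residuals(ys, xs):
    """What remains of ys after removing its linear dependence on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]

# Raw correlation is strong despite zero causal connection...
print(pearson(ice_cream, drownings))

# ...but controlling for the confounder (temperature) makes it collapse
# toward zero: correlate the residuals after regressing each on temps.
print(pearson(residuals(ice_cream, temps), residuals(drownings, temps)))
```

Running this prints a large raw correlation and a near-zero partial correlation; the apparent ice-cream-causes-drowning link is entirely an artifact of the shared driver.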
Stress: A Confounding Variable
If we use animals to model human diseases, it is pertinent to ask if stress is a confounding variable: To what extent does the stress of being a lab animal affect the results of the experiment? Are average humans as stressed and oppressed by their environments?
It is well known that environment plays a major role in disease: nurture has an impact, as well as nature. A number of studies have documented altered gene-expression in sequestered lab animals. Other research has measured the significant impact of various environmental factors, such as changing the animals’ bedding daily, slightly increasing cage size, being with other animals, allowing the animals to exercise, etc. [4,5]. Scientists have also established that the endocannabinoid system is intimately involved in regulating the biological stress mechanism, as well as social behaviors and fear extinction [6,7,8]. Thus, it is reasonable to expect that even minor stressors would affect the response of the endocannabinoid system to cannabis.
Animal studies have implicated cannabinoids in both an increase and decrease in cognitive performance. Some researchers hypothesized that these conflicting results might be a reflection of different environmental stressors. Dr. Patrizia Campolongo and an international team of scientists examined the extent to which WIN55’s effects on memory and learning were influenced by the arousal state, or stress level, of the lab animals. Published in 2013 in Neuropsychopharmacology, this study indicated that the emotional condition of the rats is “a primary factor in determining the outcome of cannabinoid administration on learning and memory.” These findings drew attention to an important (and often overlooked) factor in interpreting endocannabinoid research: environmental stress and its impact on emotional state.
Confounding variables are inherent to scientific experimentation. The stress of being a lab animal is a significant variable that is rarely accounted for. Other confounding variables include studying only male mice, although female mice often react differently (this practice is becoming less common), and excluding multiracial individuals from clinical trials. Researchers hope that with enough experiments, the significant confounders are discovered. But it can take many years before hidden assumptions are rooted out, especially when the assumptions align with conventional wisdom. Delicate considerations such as these indicate why science is such a slow process. It takes many experiments—across different models of disease with different animals under different stressors, being handled by different experimenters—before a scientific consensus begins to form. In many ways, science is a field of disproving, wherein researchers whittle away at possibilities until an apparent relationship is left standing.
Uncontrolled and Poorly Controlled
While confounding variables can cause problems in research, simple statistical controls can also mislead interpretation.
Take research on cannabis and pregnancy, for example, where women who drink alcohol while pregnant are rarely excluded from studies. Instead of comparing pregnant cannabis smokers who don’t drink to other pregnant women who don’t drink, scientists may attempt to statistically control for the effect of alcohol on the fetus. A simple control would be to look at the impact of alcohol alone, the impact of alcohol and cannabis together, and filter out the effect of alcohol. But this assumes that cannabis and alcohol don’t interact. Some cannabinoids, in fact, exacerbate fetal alcohol syndrome.
So simple statistical controls will misrepresent the super-additive toxicity of cannabis with alcohol as the effect of cannabis alone. The interaction could be accounted for with certain statistical procedures, but the researchers would need to decide this before collecting data for such an analysis to remain valid.
(The reason for this super-additive effect of cannabis and alcohol is not established. But attempts to understand the combination may help explain why cannabinoids are generally toxic to cancer cells but not normal cells. Fetal cells share many properties with cancer cells: they are highly proliferative and able to differentiate into new cell types. Unlike cancer cells, however, fetal cells lack inflammation, and drinking alcohol induces it. This additional inflammation may be the switch that pushes cannabinoids from being cytoprotective to being cytotoxic.)
Masquerading as Science
Although experimentation is the only test of validity in science, researchers can be deceived by their own expectations when interpreting data. In some cases, sweeping claims are made because a minor result coincides with an author’s ingrained bias. Unfortunately, the tendency to overstate claims is not uncommon among scientists, who, despite their pretensions to objectivity and intellectual rigor, are not immune to cultural prejudices that are rampant in society as a whole, especially with respect to cannabis and drug war stereotypes.
“There is a lot of bullshit currently masquerading as science,” John Oliver declared in a recent TV commentary. Oliver wasn’t referring explicitly to cannabis research, but he could have been when he stated: “Too often, a small study with tentative findings gets blown out of proportion when it’s presented to us, the lay public.”
That’s exactly what happened when researchers at the University of British Columbia (UBC) in Vancouver published an article entitled “Δ9-Tetrahydrocannabinol decreases willingness to exert cognitive effort in male rats.” The upshot: THC makes rats lazy, and this finding validates what is “well known, anecdotally, that consumption of marijuana leads to a so-called ‘stoner phenotype,’ someone who is not super aggressive about pushing ahead in life, maybe not fulfilling their potential,” according to UBC associate professor Dr. Catharine Winstanley.
It’s a huge leap from blitzing a rat’s brain with pure THC to real-world consumption of whole plant cannabis by human beings. Cherry-picking evidence that fit their own preconceptions, the UBC team misconstrued the THC-addled rats’ lack of interest in food as proof that smoking marijuana turns people into unmotivated slackers. This slant was played up in a litany of credulous media reports after a UBC press release announced the first “scientific confirmation” that marijuana makes people lazy.
The many flaws in the UBC paper were deconstructed by Dr. Natasha Ryz. She cites peer-reviewed studies showing, for example, that high-dose single-molecule THC can decrease appetite and sugar craving, although low doses stimulate the munchies. This would explain why the rats weren’t motivated to eat more: They weren’t hungry! Ryz surmised that the anti-marijuana bias of the UBC researchers may have “caused them to interpret their findings in a way that leads to the unfair stigmatization of cannabis users.” [Read Ryz’s full critique here.]