Originally posted by Dr. Davis on 2015-10-27 on the Wheat Belly Blog; sourced from and currently found at the Infinite Health Blog.
Go ahead: Eat your meat
“Reduce your intake of cholesterol, fat, and saturated fat.”
“Use more polyunsaturated fats.”
“Move more and eat less.”
“Oats are heart healthy.”
“Follow a balanced diet.”
“Eat more healthy whole grains.”
Well, add yet another “proven” statement of purported nutritional fact to this sad list of nutritional blunders: “Red meat is a carcinogen,” as concluded by the International Agency for Research on Cancer, or IARC. Release of this analysis prompted the usual over-the-top headlines and exaggerations, such as the claim by NPR’s Alison Aubrey (a staunch defender of the dietary status quo) that “The conclusion puts processed meats in the same category of cancer risk as tobacco smoking and asbestos.”
By now, I hope that you have acquired a healthy skepticism about any piece of advice that originates from “official” providers, as well as the dramatic headlines that follow. Such misguided advice has, in the past, ignited huge growth in the processed food industry (e.g., corn oil, high-fructose corn syrup, pasta, low-cholesterol and low-fat products) and has made a major contribution to creating the worst epidemics of autoimmune disease, senile dementia, type 2 diabetes, and obesity ever witnessed in the history of the world: not reduced risk, but increased risk. A big part of the blundering is due to the fact that many in the nutritional world worship this false god of science: observational epidemiology. The crude observations generated via epidemiology, no matter how big the population studied, no matter how consistent, cannot be used to establish cause and effect. This is not my opinion; this is scientific fact.
Such wrongheaded cause-effect declarations are not unique to nutrition; similar mistakes have been made in healthcare, such as the widespread prescription of horse urine-sourced estrogens–“hormone replacement therapy,” or HRT, such as Premarin–because of apparent reductions in cardiovascular disease initially observed in epidemiological studies. Subsequent randomized, double-blind studies proved the apparent epidemiological benefits entirely untrue–HRT actually increased heart attack risk. Gary Taubes, author of Good Calories, Bad Calories, articulated this principle well in a New York Times article back in 2007:
“The catch with observational studies like the Nurses’ Health Study, no matter how well designed and how many tens of thousands of subjects they might include, is that they have a fundamental limitation. They can distinguish associations between two events — that women who take H.R.T. have less heart disease, for instance, than women who don’t. But they cannot inherently determine causation — the conclusion that one event causes the other; that H.R.T. protects against heart disease. As a result, observational studies can only provide what researchers call hypothesis-generating evidence — what a defense attorney would call circumstantial evidence.
“Testing these hypotheses in any definitive way requires a randomized-controlled trial — an experiment, not an observational study — and these clinical trials typically provide the flop to the flip-flop rhythm of medical wisdom. Until August 1998, the faith that H.R.T. prevented heart disease was based primarily on observational evidence, from the Nurses’ Health Study most prominently. Since then, the conventional wisdom has been based on clinical trials — first HERS, which tested H.R.T. against a placebo in 2,700 women with heart disease, and then the Women’s Health Initiative, which tested the therapy against a placebo in 16,500 healthy women. When the Women’s Health Initiative concluded in 2002 that H.R.T. caused far more harm than good, the lesson to be learned . . . was about the ‘disastrous inadequacy of lesser evidence’ for shaping medical and public-health policy.”
But only in nutrition have such observational epidemiological studies served as the basis for widespread nutritional policy, over and over again, despite the fact that such studies can rarely establish cause-effect associations. (The exception would be when the association is so powerful and consistent that it becomes virtually obvious and indisputable, as with smoking and its links to lung cancer and heart disease, with relative risk as much as 30-fold that of non-smokers, not the sorts of tiny percentages typically observed in nutritional studies.) As Taubes points out, such studies can only suggest an association, a hypothesis that needs to be proven by other means. Crafting nutritional policy based on observational epidemiological studies is therefore hazardous, as borne out by such advice as “cut your fat and eat more healthy whole grains.”
The small increase in colorectal cancer of about 17% seen in such observational epidemiological studies is meaningless: a small difference that can easily be attributed to confounding factors that accompany the primary behavior (eating red meat). The people who consume the most red meat also tend to lead somewhat different lifestyles: fewer vegetables, less fiber, more booze, etc. The data also do not distinguish factory farm-sourced meats, with their different fatty acid composition, greater potential for intermittent antibiotic content, and other factors, from other sources; all are lumped together.
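To see how a confounder alone can manufacture a relative risk of this size, here is a toy simulation. Every number in it is invented for illustration (it is not taken from the IARC data): red meat has zero causal effect in this model, yet a lifestyle confounder that is both riskier and more common among meat eaters produces an apparent excess risk in the crude comparison.

```python
import random

random.seed(0)

# Toy model: red meat consumption has NO effect on cancer risk here.
# A confounding lifestyle (low fiber, more alcohol) carries higher risk
# AND is more common among heavy red-meat eaters. All rates are invented.
N = 200_000
BASE_RISK = 0.04        # assumed baseline cancer risk
CONFOUNDER_RISK = 0.06  # assumed risk with the low-fiber/high-alcohol lifestyle

cases = {"meat": 0, "no_meat": 0}
totals = {"meat": 0, "no_meat": 0}

for _ in range(N):
    eats_meat = random.random() < 0.5
    # the confounding lifestyle travels with meat eating (60% vs 20%)
    confounded = random.random() < (0.6 if eats_meat else 0.2)
    risk = CONFOUNDER_RISK if confounded else BASE_RISK
    group = "meat" if eats_meat else "no_meat"
    totals[group] += 1
    cases[group] += random.random() < risk  # True counts as 1

rr = (cases["meat"] / totals["meat"]) / (cases["no_meat"] / totals["no_meat"])
print(f"apparent relative risk: {rr:.2f}")
```

With these made-up parameters the expected relative risk works out to roughly 1.18 (0.052 vs. 0.044), an excess near the 17% figure, even though meat itself does nothing in the model. An observational study cannot, by itself, tell this scenario apart from a genuine causal effect.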
This is the perennial Achilles’ heel of epidemiology: such studies, by design, cannot identify a cause, particularly when the connection is small or tenuous. In the majority of studies cited in the IARC Monograph, such as the 470,000-participant EPIC Study, much of the excess risk associated with red meat consumption was attenuated by fiber intake. (Unfortunately, no study has specifically examined the role of prebiotic fibers, rather than fiber in aggregate, in attenuating the purported risk from red meat. My prediction: all the excess risk that appears to come from red meat consumption disappears with cultivation of bowel flora through higher prebiotic fiber intake.) Interestingly, this EPIC Study, the largest ever performed on this question, did not show any increased risk of colorectal cancer with beef consumption, only pork.
An experiment to answer this question once and for all would be logistically very difficult: randomly assign people either to a group that eats red meat ad lib or to a matched (age, sex, other habits, socioeconomic status, etc.) group that eats no red meat, follow both for an extended period of, say, 5 to 10 years, and compare which group fares better. Thus, the crude, non-cause-effect-generating epidemiological observations are used to craft opinion and policy, but they should never have been used in this fashion in the first place. And none of us should be driven to take action by such a misleading pronouncement, nor by the dramatic headlines that misinformed, scientifically naive journalists like Ms. Aubrey broadcast.
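A back-of-the-envelope calculation shows just how large such a trial would need to be. The standard two-proportion sample-size formula is used below; the 1% ten-year incidence figure is my assumption for illustration, not a number from any of the studies discussed.

```python
# Sample size needed to detect a 17% relative increase in colorectal
# cancer with a randomized trial, using the standard two-proportion
# formula: n = (z_a + z_b)^2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2)^2
p1 = 0.01          # assumed 10-year incidence in the no-red-meat group
p2 = p1 * 1.17     # 17% relative increase in the red-meat group
z_alpha = 1.96     # 5% two-sided significance
z_beta = 0.84      # 80% power

n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)
print(f"participants needed per group: {n_per_group:,.0f}")
```

Under these assumptions the answer comes to roughly 58,000 participants per group, well over 100,000 people who would all have to adhere to their assigned diet for a decade. Small wonder the definitive trial has never been run.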
Dr. Peter Attia, a champion of clear thinking and science in nutrition, has written eloquently about the misinterpretations and false conclusions that are the rule in nutritional advice based on observational epidemiology:
“I trust by now you have a better understanding of why the ‘science’ of nutrition is so bankrupt. It is based on almost a complete reliance on these observational studies. Virtually every piece of nutritional dogma we suffer from today stems from – you guessed it – an observational study. Whether it’s Ancel Keys’ observations and correlations of saturated fat intake and heart disease in his famous Seven Countries Study, which ‘proved’ saturated fat is harmful or Denis Burkitt’s observation that people in Africa ate more fiber than people in England and had less colon cancer ‘proving’ that eating fiber is the key to preventing colon cancer, virtually all of the nutritional dogma we are exposed to has not actually been scientifically tested. Perhaps the most influential current example of observational epidemiology is the work of T. Colin Campbell, lead author of The China Study, which claims, ‘the science is clear’ and ‘the results are unmistakable.’ Really? Not if you define science the way scientists do. This doesn’t mean Colin Campbell is wrong (though I wholeheartedly believe he is wrong on about 75% of what he says based on current data). It means he has not done any real science to advance the discussion and hypotheses he espouses.”
Now go roast up a good steak or hamburger, but just have it with some asparagus and lentils.