The press bombards us with messages about what’s bad for us: obesity, drugs, too much alcohol, (maybe) coffee, sugar, cell phones, and—most recently—loneliness, which we were told was as damaging to our health as obesity and smoking. Occasionally, we catch a break and get told what’s good for us: alcohol (moderate consumption thereof), marriage, socializing, and, possibly, coffee. How are we supposed to know when to believe these claims, and when not to?
Thanks to the recently published recommendations of the US Dietary Guidelines Committee, we saw a Grande serving of “coffee-is-good-for-you” stories: “It’s official, Americans should drink more coffee,” said The Washington Post’s Wonkblog, reporting on the findings. “Consuming as many as five cups of coffee each day (400 mg) is tied to several health benefits, including a reduced risk of cardiovascular disease and type 2 diabetes.”
Great! Except, the paper reported, no one anywhere in the world is drinking close to that amount per day, according to data from the US Department of Agriculture—least of all Americans, who are averaging just one cup a day. But, as the Post noted, “it’s rare for the government agencies to ignore the panel’s advice, so it’s fair to expect a federal endorsement for drinking coffee—as much as five cups a day no less—to be just around the bend.”
And none too soon, according to a February report by The Los Angeles Times: “Drink up, coffee lovers: Neurologists say a healthy appetite for coffee may reduce your risk of developing multiple sclerosis.” In this case, a study found that Americans who drank at least four cups of coffee a day were “one-third less likely to develop” the disease than people who didn’t drink coffee at all. Oddly, Swedes had to chug six cups to get the same benefit.
Even better, the BBC reported in early March on new Korean research on the benefits of a three-to-five-cup-a-day habit: a lower likelihood of the early signs of heart disease.
All of this adds to the trove of coffee health reporting permanently brewing on the Internet: stories that promise the possibility of a lower risk of liver cirrhosis, Parkinson’s disease, skin cancer, suicide; stories that confirm a boost to intelligence, athleticism and happiness; stories about how drinking four cups of coffee per day increases your risk of early death… oh, wait.
To be fair, this particular study seems to be something of an outlier—or at least, you can easily find another study (and a gushing news story) claiming an opposite and reassuring effect. But the important question, posed by Stanford University’s John Ioannidis in an editorial for the BMJ in 2013, was whether any of these conclusions, good or bad, were truly meaningful, given fundamental limitations in the way these studies measured effects across populations.
Which is why it was heartening to read a recent piece by the Washington Post’s Ariana Eunjung Cha calling out the surge in coffee mania. All those recent reports on the positive effect of drinking coffee on heart attack, longevity, Parkinson’s disease, depression, and Type 2 diabetes were based on research whose design prevented the researchers themselves from concluding that coffee had any causal effect. In the case of the Korean study showing a lower prevalence of coronary atherosclerosis among coffee drinkers, “the scientists,” she said, “weren’t really able to control for other things, such as diet, exercise and environmental factors that could have affected the outcome.”
It’s a great piece in which the reporter appears to have actually read the studies—particularly the discussion section of the papers, where the researchers reflect on the limitations of their work, limitations that are often considerable. To sum up, as the Post’s headline put it: “Coffee may not be bad for you, but it’s not going to save your life.”
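The confounding problem these researchers acknowledge is easy to see in a toy simulation (the numbers below are entirely hypothetical and not drawn from any of the studies above): if some lifestyle factor independently raises both the odds of drinking coffee and the odds of a good heart outcome, a naive comparison of coffee drinkers to non-drinkers will show coffee “working” even when it has no effect at all.

```python
import random

random.seed(0)
n = 100_000

# Hypothetical confounder: a "healthy lifestyle" flag that raises both the
# chance of drinking coffee and the chance of a good heart outcome.
# Crucially, coffee itself has NO effect on the outcome in this simulation.
rows = []
for _ in range(n):
    healthy = random.random() < 0.5
    drinks_coffee = random.random() < (0.7 if healthy else 0.3)
    good_outcome = random.random() < (0.9 if healthy else 0.6)
    rows.append((healthy, drinks_coffee, good_outcome))

def rate(data, coffee):
    """Share of people with a good outcome, among those with the given coffee status."""
    group = [r for r in data if r[1] == coffee]
    return sum(r[2] for r in group) / len(group)

# Naive comparison: coffee drinkers look noticeably healthier...
print(f"coffee drinkers: {rate(rows, True):.3f}, non-drinkers: {rate(rows, False):.3f}")

# ...but stratifying on the confounder shows no coffee effect within each group.
for healthy in (True, False):
    stratum = [r for r in rows if r[0] == healthy]
    print(f"healthy={healthy}: coffee {rate(stratum, True):.3f}, "
          f"no coffee {rate(stratum, False):.3f}")
```

Stratifying on the confounder makes the apparent coffee benefit vanish; the trouble in the real studies is that diet, exercise, and environment could not be fully measured or controlled for, so this correction was unavailable.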
Please note that this is a forum for statisticians and mathematicians to critically evaluate the design and statistical methods used in studies. The subjects (products, procedures, treatments, etc.) of the studies being evaluated are neither endorsed nor rejected by Sense About Science USA. We encourage readers to use these articles as a starting point to discuss better study design and statistical analysis. While we strive for factual accuracy in these posts, they should not be considered journalistic works, but rather pieces of academic writing.