Go here and have a look over the last couple of days' posts.
Dr Ray points to the usual epidemiological bullshit that produces these rounds of yes it is, no it isn't health reporting in the media.
None of this crap should be mentioned by journalists. None of these so-called studies (or very few, anyway) are worth anything in scientific terms. Instead, most are simply number-crunching by computers looking for correlations in large data sets.
Within any large enough data set you will find correlations, but the often unanswered question is: are these correlations significant or not?
There are measures, such as relative risk, that try to give a sense of the robustness of a study's findings, but few journalists seem to be aware of them, and they certainly never think to ask questions about them.
The trouble is that so many of the epidemiological studies the media use as fodder, whether for health-shock stories or excessively optimistic ones, report relative risks of less than 3.0 and, at times, even below 2.0.
What this means is that a study's findings are weak and probably owe more to coincidence than to an actual signal of causation.
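For readers unfamiliar with the measure, relative risk is just the rate of an outcome in the exposed group divided by the rate in the unexposed group. Here is a minimal sketch of the arithmetic; the counts are entirely made up for illustration and do not come from any of the studies discussed.

```python
# Relative risk (risk ratio) from a 2x2 contingency table:
#   RR = [a / (a + b)] / [c / (c + d)]
# where a = exposed cases, b = exposed non-cases,
#       c = unexposed cases, d = unexposed non-cases.

def relative_risk(exposed_cases, exposed_total,
                  unexposed_cases, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical numbers: 30 cases among 1,000 exposed people versus
# 20 cases among 1,000 unexposed: RR = 0.03 / 0.02 = 1.5,
# well below the 2.0-3.0 range that should raise suspicion.
rr = relative_risk(30, 1000, 20, 1000)
print(rr)  # 1.5
```

A relative risk of 1.5 like the one above means the exposed group's rate is only half again the baseline rate, which is exactly the sort of weak signal that confounding or chance can easily produce.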
A former prominent medical journal editor once said he would not consider any epidemiological study worthy of publication unless it had a relative risk of at least 3.0 to 4.0.