Sandy from Junkfood Science begins: "Loving parents have a hard job. They want to protect their children from harm and make the best healthcare decisions for them, but with all of the health information and misinformation swirling around, it can seem impossible to know what to believe. One question for some parents is whether childhood immunizations are necessary anymore. With fewer children dying of childhood illnesses today, it can seem like the diseases are no longer serious and that the vaccines might be putting their children at needless risk."
Unfortunately there is a lot of uninformed and downright ignorant misinformation about childhood vaccination circulating these days, but some acknowledgement is due to sceptical parents.
They do want to do what is best for their children, and they are trying to inform themselves about the risks associated with vaccines.
And we do need to be up front and honest with them - there are levels of risk with any medication.
No medicine is without some possible side effects.
(Though unfortunately we also have irresponsible and scientifically illiterate conspiracy pushers, as seen with the claims of an MMR vaccine and autism link. Complete rubbish that has been conclusively shown to be untrue.)
But what these parents so often fail to do is try to understand the risks of not vaccinating their children.
So what are the risks of not vaccinating children relative to vaccinating them?
Research just published in the June issue of Pediatrics looks specifically at pertussis, or whooping cough.
The results speak for themselves:
They...look[ed] at every case of pertussis infection identified in children in the Kaiser Permanente of Colorado health plan over more than a decade, between 1996 and 2007. They randomly matched each case to four controls and looked at the children’s vaccination records. The differences were striking. Only 0.5% of the healthy children had not been vaccinated, compared to 12% of the children who had gotten sick with pertussis.
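Those two percentages allow a rough back-of-the-envelope odds-ratio calculation. This is just an illustrative sketch using the figures quoted above, not a reanalysis of the study's data (the study itself reports properly matched and adjusted estimates):

```python
# Back-of-the-envelope odds ratio from the two quoted percentages:
# 0.5% of healthy controls were unvaccinated vs 12% of pertussis cases.

def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

p_cases = 0.12      # proportion unvaccinated among children with pertussis
p_controls = 0.005  # proportion unvaccinated among healthy matched controls

odds_ratio = odds(p_cases) / odds(p_controls)
print(round(odds_ratio, 1))  # prints 27.1
```

In other words, on these raw figures alone, unvaccinated children turn up among the pertussis cases at odds roughly 27 times higher than among the healthy controls.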
You can read the whole thing by following the link at the top.
So parents need to balance their concern about small (or outright false) risks from the vaccines themselves against the increased risk that their unvaccinated child will catch a serious disease that may even kill him.
See also Lessons from the Vaccine-Autism Wars
I'd also urge anyone interested to follow her links in relation to "tenable correlations" and "relative risks". These are absolutely vital in sorting the wheat from the chaff when assessing media reports of studies claiming to have found some link or other.
Not all studies are equal. The number that are poorly conducted pseudo-scientific rubbish with weak findings, yet get saturation coverage in the media, is not only astounding, it is depressing.
Recent claims about red meat, alcohol and high GI diets causing cancer and other illnesses all fall into this category.
Claims are made about the health implications for the general population based upon samples that are not representative of the general population.
The "red meat gives you cancer" nonsense recently peddled by naive and credulous journalists was a particularly egregious example.
In this case, the sample was formed by those members of the American Association of Retired Persons, ages 50 to 71, who could be bothered to fill out and post back a questionnaire sent to them way back in 1995. (So no new research at all was done for this study. It simply involved feeding the previously collected information through a computer that looked for correlations - hence the importance of understanding the difference between tenable and untenable correlations.)
What this means is that they were using an unrepresentative sample of an unrepresentative sample!
Why this wasn't setting off alarm bells for people is beyond me. Though I suspect that in truth, nobody bothered to check.
Correlation does not equal causation. Seems simple enough, but not for many of these researchers. The number of findings based on untenable correlations is appalling.
If you search through any large enough data set, you will find correlations. Whether these correlations mean anything or are just the result of chance is another question entirely.
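The point is easy to demonstrate with a simulation. The sketch below (made-up random data, nothing to do with any real study) invents 100 purely random "food" variables and one purely random "disease" outcome, then counts how many of the foods appear "significantly" correlated with the disease by sheer chance:

```python
# Chance correlations in a large data set: random "foods" vs a random
# "disease" outcome. Any hits found here are pure noise by construction.
import math
import random

random.seed(1)
n_people, n_foods = 200, 100

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

disease = [random.random() for _ in range(n_people)]
foods = [[random.random() for _ in range(n_people)] for _ in range(n_foods)]

# |r| > 2/sqrt(n) is a rough rule of thumb for "significance" at the 5% level
threshold = 2 / math.sqrt(n_people)
hits = sum(1 for f in foods if abs(pearson(f, disease)) > threshold)
print(hits)  # typically a handful of the 100 random variables "correlate"
```

Run enough comparisons and a few "significant" links are guaranteed, which is exactly why an untenable correlation dredged from a big data set proves nothing on its own.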
All too often it is these simple correlations that form the basis of press releases claiming to have found a link between our food and disease.
Then we have press releases claiming such links when the study's own data says something different.
Again, the recent red meat study stands out for claims that were simply not supported by its own results.
Despite the headlines shrieking that high red meat consumption increased your risk of getting cancer, examination of its results said something else.
While the claimed rates for a number of cancers were higher for men who ate the most red meat than for those who ate the least, the difference between the two groups was only 1.4%. Now, I'm guessing (and yes, this is a guess, but I'm prepared to put money on it), but I suspect that this falls within the study's own margin of error, and thus statistically there is no real difference.
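Whether a 1.4-percentage-point gap is distinguishable from zero depends entirely on the group sizes and the margin of error they imply. Here is a sketch of the standard confidence-interval arithmetic using entirely hypothetical group sizes (not the study's own numbers):

```python
# 95% confidence interval for a difference of two proportions.
# The group sizes and rates below are HYPOTHETICAL, for illustration only.
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference between two proportions."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

# e.g. cancer rates of 22.2% vs 20.8% (a 1.4-point gap) in two groups
# of 5,000 people each -- invented numbers, not the study's
lo, hi = diff_ci(0.222, 5000, 0.208, 5000)
print(lo < 0 < hi)  # prints True: the interval straddles zero
```

With these made-up group sizes the interval straddles zero, meaning the gap could be chance; with much larger groups the same 1.4-point gap could be statistically solid. That is precisely why the raw percentage difference alone tells you nothing without the margin of error.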
However, when it came to women, those with the highest consumption of red meat had lower rates of cancer compared to those with the lowest!
A question arising from this is: why did the study's authors then issue a press release that claimed to have found a clear link between higher rates of red meat consumption and cancer, when their own results said the opposite? (Not that any of this study's results should be taken seriously I'd hasten to add. It is rubbish from beginning to end.)
Part of the answer here, I believe, is that health has replaced sex as the besetting concern of moral entrepreneurs.
They already "know" that red meat, alcohol, high GI diets, fat etc are "bad" and they need to be seen to be sending the "right" message.
This was obvious with the recent "no safe level of drinking for women" study conducted in the UK.
You can see the author's determination from the outset to find that alcohol is bad for women, even at moderate levels, and then her consternation at finding that her results don't support her belief.
So she reworks the data this way and that, trying to get the "right" result. In the end she fails.
Her own data clearly shows that fully 95% of women who drank had lower rates of cancer compared to those who didn't.
Only the top 5% heaviest drinkers had poorer health outcomes.
But what does she announce to the media? The "right" result, ie that there is no safe level of alcohol consumption for women.
Now, call me old fashioned, but I would have thought that the right result is what your data actually shows, not what you think it should show.