Earlier this week, yet another study was published that failed to find a link between thimerosal in vaccines and autism. This long-awaited study was commissioned by the CDC and is the first serious look at the relationship between thimerosal and autism in US-based populations since the controversial Verstraeten study in 2003.
The study looked at 256 children with autism, matched them against 762 typical children, and found that the amount of thimerosal the children were exposed to had no relationship to their chances of having autism. In simpler terms, more mercury did not mean more autism. In fact, some of the analyses showed that higher exposure to thimerosal actually decreased the chances of having autism.
Yes, you read that correctly. The study found that more mercury meant less autism.
I was planning on writing a serious review of the pros and cons of this study because I think it is an important one. But the more I dug into the study, the less sense it made to me. Sure, the study is well designed and appears to have been carried out meticulously. But the way the data were presented, and that nonsensical part of the conclusion, made me wonder.
The study's stated goal was to "examine relationships between prenatal and infant ethylmercury exposure from vaccines and/or immunoglobulin preparations and ASD and 2 ASD subcategories: autistic disorder (AD) and ASD with regression."
To do that they looked through the health records of three HMOs, found all of the kids born between 1994 and 1999 who had a diagnosis of autism, and attempted to get them to participate in the study. They started out with 802 children with autism but ended up including only 256 of them in the final results. These children were matched against 70,801 potential control children, with 762 being the final number of controls. The control children were reasonably well matched against the children with autism on a variety of factors such as birth year, gender, birth weight, parents' ages, and mother's education level, as well as other factors.
The cumulative exposure to thimerosal was then calculated for three time periods - birth to 1 month, birth to 7 months, and birth to 20 months. These exposures (along with a bunch of other data) were put into a variety of models and no relationship between the level of thimerosal exposure and autism was found.
But if you are paying attention, you might have noticed a problem. The goal of the study was to look for a relationship between thimerosal and autism - not to look at the relative risk of a higher thimerosal exposure. Yet it is that second question that the study actually addressed.
Let me put this another way. Suppose I told you that I was doing a study to evaluate whether smoking can cause cancer, but instead gave you a study that compared people who smoked one pack a day to those who smoked two packs a day. Then, when I found no increased risk of cancer between the two groups, I announced that my study showed no relationship between cancer and smoking. While it might be true that smoking more does not lead to a higher risk of cancer, that is not the same as saying that smoking at all doesn't increase the risk. When you compare cancer rates in smokers vs non-smokers, the association jumps out.
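The arithmetic behind that analogy can be made concrete. Here is a minimal sketch with made-up counts (the numbers are entirely hypothetical, chosen only to illustrate the point): when cancer rates are identical for one-pack and two-pack smokers but much lower for non-smokers, the dose-response comparison shows nothing while the smoker-vs-non-smoker comparison shows a large effect.

```python
# Hypothetical counts, invented purely to illustrate the argument:
# one-pack and two-pack smokers have identical cancer rates,
# but non-smokers have a much lower rate.
groups = {
    "non_smokers":     {"cancer": 10,  "healthy": 990},
    "one_pack_a_day":  {"cancer": 100, "healthy": 900},
    "two_packs_a_day": {"cancer": 100, "healthy": 900},
}

def odds(group):
    """Odds of cancer within a group: cases divided by non-cases."""
    return group["cancer"] / group["healthy"]

def odds_ratio(exposed, reference):
    """Odds ratio of the exposed group relative to the reference group."""
    return odds(groups[exposed]) / odds(groups[reference])

# Dose-response comparison (more smoking vs some smoking): no effect.
print(odds_ratio("two_packs_a_day", "one_pack_a_day"))  # -> 1.0

# Exposed vs unexposed comparison (any smoking vs none): large effect.
print(odds_ratio("one_pack_a_day", "non_smokers"))  # -> 11.0
```

Both odds ratios come from the same data; which one you compute determines which question you answer.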
Those of you who know the history of tobacco research will recognize that studies like this were actually used by industry to obscure the relationship between smoking and cancer. But I didn't want to think that the CDC would put out such a distorted study, so I looked at the actual numbers a bit closer.
There are two measures of risk in the study. The first is an odds ratio associated with an increase of 1 unit (1 mcg per 1 kg) of exposure. That measure is clearly looking at whether more thimerosal means more autism, not whether thimerosal has a relationship to autism.
The second measure was an odds ratio between a low and a high exposure group. The low exposure group is defined as (roughly) the children in the bottom 10%, while the high exposure group is the children in the top 10% - those at or above the 90th percentile. The question then becomes: what is the actual difference in exposures between these groups?
Are we talking about groups where the "low" exposure group got 5 shots versus 6 for the "high" exposure group, or groups where the low exposure group had essentially no exposure while the high exposure group had close to the maximum? The former comparison would be worthless, while the latter would be close to what we would want to see.
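To see how a percentile cut like this can behave, here is a hedged sketch. The exposure values below are invented (the study does not publish the individual-level distribution), chosen so that a chunk of children have zero exposure, as the study's reported minimums suggest; with a bottom-heavy distribution like that, the "low" group ends up being entirely zero-exposure children.

```python
import statistics

# Invented cumulative-exposure values in mcg -- the real distribution is
# not published, so this is only a sketch of how a percentile cut works.
# 15 children with no exposure, then progressively higher exposures.
exposures = [0.0] * 15 + [12.5] * 50 + [25.0] * 30 + [74.0] * 5

# Decile cut points; the first and last are the 10th and 90th percentiles,
# the kind of cut the study describes.
cuts = statistics.quantiles(exposures, n=10)
p10, p90 = cuts[0], cuts[-1]

low = [x for x in exposures if x <= p10]    # "low exposure" group
high = [x for x in exposures if x >= p90]   # "high exposure" group

print(p10, p90)   # where the cut points land (here: 0.0 and 25.0)
print(set(low))   # here: {0.0} -- the low group is all zero-exposure kids
```

Under these (assumed) numbers the low-vs-high comparison really is "no thimerosal vs some thimerosal" - but whether that holds for the actual study population depends on the distribution, which is exactly what the paper doesn't make clear.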
Here is where things got a little confusing.
The ranges between high and low are quoted in mcg per kg, but the data published for the groups are quoted in cumulative mcg. For the birth-to-1-month period the value is 4.08 mcg/kg, which for a newborn of roughly 4 kg translates into about 16 mcg of exposure - or about one shot. Then I looked at the exposure table: the range for the birth-to-1-month group was 0 to 74 with a mean of 2.7 for the autism group, and 0 to 100 with a mean of 2.35 for the control group. But how can the means for the groups be so much less than one shot (12.5 mcg), and why does the control group have a higher maximum than the autism group?
A similar pattern plays out for the birth-to-7-month and birth-to-20-month periods. The gap between the groups gets a little larger (about 125 mcg and 200 mcg, respectively), but the ranges and means still seem strange. All of the groups for all of the time frames have a minimum of zero exposure - so does that mean that the low exposure groups were made up primarily of children with no exposure to thimerosal at all?
The study doesn't really go into any detail about the composition of the high and low exposure groups. But then I remembered there was a reference to supplemental data, and I went looking for it. Typically the supplemental data contains the extra material that didn't fit in the study because of space constraints, and I thought it might shed some light on the subject.
Well, I found the supplemental data here and here - all 387 (!) pages of it. I have never seen a study with this much supplemental data. But even more interesting was where I found it: on the website of Abt Associates.
It turns out that the majority of the work for the study was done not by the CDC but by a private company. This is mentioned in passing in the study, but it doesn't really call out the fact that "one of the largest for-profit government and business research and consulting firms in the world" did the work for the study, nor that "in 2008, Abt Associates was ranked 20th among the top 50 U.S. market research firms by the American Marketing Association."
It is possible that the answer to my question is buried in the supplemental data, but I don't have the time (or desire) to read through almost 400 pages to find out. The results of the study are confusing enough on their own, but the sheer amount of data dumped along with it makes it that much harder to work out what question was actually answered or what the results actually mean.
I am beginning to suspect that was the point.