CR Conner asked (and another email by George E was similar):

**CRC: "In your response to Epiwonk's criticisms, you seem to have left out commenting on what I thought was the most important part: that it is not appropriate to test the means, and the mercury blood levels are not close to being normally distributed. Why?"**

GE: "After reading the response, I have one question. It wasn't clear how you responded to EpiWonk's assertion that an arithmetic mean on blood samples isn't appropriate" and "The other implied response is that there are alternate acceptable ways to slice the data that we can all argue about, but no matter which way you slice it, there is still a significant (?) difference between the control and autistic groups' blood levels. I believe that is what I read, but it is not crystal clear."

First, I want to say that Epiwonk has provided a service and made a substantive contribution, and we thank Epiwonk for it.

However, on this point we disagree with Epiwonk. Here is a clear-cut quote: "Logistic Regression requires that no assumptions about the distribution of the predictor variable(s) be made by the researcher. In other words, the predictor does not need to be normally distributed, linearly related, or have equal variances" (p. 314, Advanced Multivariate Statistical Methods, 3rd edition, Mertler and Vannatta, 2005). Logistic regression is the best method to use precisely because the data are NOT normally distributed. It is also the best approach when the relationship of the predictor to the criterion is expected to be, or might be, nonlinear. In fact, Epiwonk's own discussion of the data set, noting the non-normality of the blood mercury distribution as well as the differences between the distributions of the two groups, makes it clear that this was the right approach. The same point is discussed in Tabachnick and Fidell (1996).
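To make the point concrete (this is an illustration, not our actual analysis), here is a minimal sketch of binary logistic regression with a single continuous predictor, fit by gradient ascent on the log-likelihood. The predictor is deliberately drawn from a right-skewed distribution, and the generating intercept and slope (-1 and 1.5) are made-up values: nothing in the fitting procedure assumes the predictor is normally distributed.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, iters=5000):
    """Fit P(y = 1 | x) = sigmoid(b0 + b1*x) by gradient ascent
    on the log-likelihood. No distributional assumption on x."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(b0 + b1 * x)  # per-point gradient term
            g0 += err
            g1 += err * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(0)
# Right-skewed predictor (blood levels are typically right-skewed,
# not normal); the true model below is purely hypothetical.
xs = [random.expovariate(1.0) for _ in range(500)]
ys = [1 if random.random() < sigmoid(-1.0 + 1.5 * x) else 0 for x in xs]

b0, b1 = fit_logistic(xs, ys)
print(b0, b1)  # estimates should land near the generating values
```

The skewness of `xs` is irrelevant to the validity of the fit; the model only constrains how the *probability of the outcome* depends on the predictor.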

Technically, the type of logistic regression we did is binary logistic regression with a single continuous predictor. We are glad to agree with Epiwonk (as we have already acknowledged) that we should have reported odds ratios if we wanted to avoid confusion. Epiwonk provides a valuable discussion of several points, and this should be clear. We wish to call attention to the fact that there are definitely wrong ways to analyze and handle data, but there is also usually more than one way to analyze data correctly... and any correct approach shows a statistically significant effect in this data set. We think logistic regression is the best approach because it does not require the assumption that blood levels are normally distributed. The analyses that Epiwonk performs appear to be correct and are valuable contributions. Any result with a p value less than .05 is designated "statistically significant."
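Since we agree that odds ratios would have avoided confusion, here is a small illustration of how a logistic regression slope converts to one. The coefficient value below is made up; the conversion itself is standard: the slope is the change in log-odds per unit increase in the predictor, so exponentiating it gives the odds ratio.

```python
import math

def odds_ratio(b1, delta=1.0):
    """Odds ratio for a `delta`-unit increase in the predictor,
    given logistic-regression slope b1 on the log-odds scale."""
    return math.exp(b1 * delta)

b1 = 0.5  # hypothetical slope from a binary logistic regression
print(odds_ratio(b1))        # odds multiply by exp(0.5) per unit
print(odds_ratio(b1, 2.0))   # odds multiply by exp(1.0) per 2 units
```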

In a related matter, we considered transforming the data... outliers can be dealt with by removal or by transformation (this is beyond most of the web site posters, but clearly a few contributors are savvy). We thought that many non-stats people would be interested in the data set and results, and would be too suspicious of transforming the data (think about it: discussing the conclusions drawn from transformed data is tedious and would instill confusion, because results must be interpreted in terms of the transformed variables). It is perfectly acceptable to "deal with" outliers in more than one way; for example, one can either remove outliers or transform the data set (such as by taking logs). I do not think one would ever do both....
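As a toy illustration of those two options (all values invented, and the two-standard-deviation rule is just one simple trimming rule among many):

```python
import math

# Hypothetical blood-level values; 14.0 plays the outlier.
data = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7, 14.0]

def trim_outliers(xs, k=2.0):
    """Option 1: drop points more than k standard deviations
    from the mean (one illustrative rule among many)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [x for x in xs if abs(x - mean) <= k * sd]

# Option 2: keep every point but log-transform; the right tail is
# pulled in, at the cost of interpreting results in log units.
log_data = [math.log(x) for x in data]

print(trim_outliers(data))  # the 14.0 is removed
print(log_data)
```

Either option tames the outlier; the difference is that the transformed analysis must then be discussed on the log scale, which is exactly the communication cost mentioned above.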

So, to GE: yes, there are alternate acceptable ways to analyze the data. The stats performed by Epiwonk appear correct (with the exception of the artificial dichotomizing of the continuous variable, blood mercury level; I think Epiwonk was quite correct, and even witty, to sense a 'disturbance in the force' when this was done), and we note that the p values are less than the .05 magic (and arbitrary) bar. To me, logistic regression is the best approach, but there are other, alternate approaches that are probably equally acceptable. Any acceptable stats approach ends up illustrating that in this data set the effect is "statistically significant".
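To make the dichotomization point concrete, here is a toy example (the values and the cutoff are invented) of how splitting a continuous measure into two groups throws away within-group information:

```python
# Hypothetical continuous measurements and a hypothetical cutoff.
values = [0.3, 0.7, 1.1, 1.2, 1.3, 2.4, 3.8, 5.9]
cutoff = 1.25  # e.g. a sample median, chosen here for illustration

# Dichotomizing: every value collapses to "low" (0) or "high" (1).
groups = [1 if v > cutoff else 0 for v in values]
print(groups)  # -> [0, 0, 0, 0, 1, 1, 1, 1]

# Note that 1.2 and 1.3 are nearly identical yet land in different
# groups, while 1.3 and 5.9 land in the same group: the continuous
# variation inside each group is discarded.
```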