Originally posted by znsho
Many scientists insist that statistical analysis of more than two groups requires ANOVA and post-ANOVA tests.
I disagree. ANOVA imparts no information whatsoever except, perhaps, that there might be a difference somewhere. ANOVA does not show where differences might lie. More importantly, ANOVA might say 'there is no difference' when, in fact, a difference i ...[text shortened]... t, t-tests are the best method of statistical analysis no matter how many groups are involved.
In any situation with two or more factors in the experiment, a t-test will not give you the full story and will often mislead you as to the real locus of effects. In these situations, t-tests in the absence of an ANOVA main effect are unwarranted, as they can point to effects in one factor while completely ignoring the other factor.
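To make that concrete, here is a small sketch with invented numbers (the scores, the sex tags, and the imbalance are all hypothetical, chosen purely for illustration): a two-factor situation in which a t-test on one factor alone appears significant only because the second factor is confounded with it through an unbalanced design.

```python
# Illustrative sketch (made-up numbers): factor A is condition 1 vs
# condition 2; factor B is sex, with females scoring near 7 and males
# near 5. The design is unbalanced, so sex is confounded with condition.
from scipy.stats import ttest_ind

a1 = [7.1, 6.9, 7.0, 7.2, 6.8, 7.1, 6.9, 7.0,  # 8 females
      5.1, 4.9]                                 # 2 males
a2 = [7.0, 7.0,                                 # 2 females
      5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 4.9, 5.0]   # 8 males

# Collapsing over sex, a t-test "finds" an effect of condition...
t_collapsed, p_collapsed = ttest_ind(a1, a2)

# ...but comparing the females of each condition, there is no effect at all.
t_female, p_female = ttest_ind(a1[:8], a2[:2])

print(f"collapsed over sex: p = {p_collapsed:.4f}")  # significant
print(f"females only:       p = {p_female:.4f}")     # no effect
```

The apparent "condition effect" is entirely an artifact of the unequal sex ratios; a factorial ANOVA would attribute that variance to the sex factor instead.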
In a one-factor situation, if your experiment is well designed, you will be expecting some sort of consistency across your (say) 3 groups. If you do a t-test between groups 1 and 2 and ignore group 3, that can be misleading again. If, for example, group 3 is your control, and group 1 differs from group 2 but neither group differs from the control, then there really is no effect of interest. Your t-test will produce a misleading result.
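The scenario above can be sketched with invented data (the group values are hypothetical): groups 1 and 2 differ from each other, but neither differs from a noisier control whose mean sits between them, so the lone pairwise comparison overstates the finding.

```python
# Sketch with made-up numbers: a pairwise t-test between groups 1 and 2
# looks significant, but neither group differs from the control.
from scipy.stats import ttest_ind

group1  = [5.1, 5.3, 4.9, 5.2, 5.0]
group2  = [6.0, 6.2, 5.8, 6.1, 5.9]
control = [5.0, 6.5, 4.5, 6.8, 5.2]  # mean in between, larger spread

_, p_12 = ttest_ind(group1, group2)
_, p_1c = ttest_ind(group1, control)
_, p_2c = ttest_ind(group2, control)

print(f"group1 vs group2:  p = {p_12:.4f}")  # looks 'significant'
print(f"group1 vs control: p = {p_1c:.4f}")  # not significant
print(f"group2 vs control: p = {p_2c:.4f}")  # not significant
```

Looked at in isolation, the group 1 vs group 2 test would be reported as an effect; looked at against the control, there is nothing of interest.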
Further, to test for "inequality of means", which is the function of the F-test, you would need 3 separate t-tests. Since you are performing 3 t-tests, you must reduce alpha by a factor of 3 (0.05/3) to control the family-wise error rate. This correction reduces your chances of finding a significant effect. The proper procedure of first running the F-test and then the post-hoc group comparisons controls this error in a more principled way.
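That two-step procedure can be sketched as follows (group names and data are invented for illustration), using `scipy.stats.f_oneway` for the omnibus F-test and Bonferroni-corrected pairwise t-tests as a simple post-hoc follow-up:

```python
# Sketch of the ANOVA-first procedure: omnibus F-test, then pairwise
# t-tests at a Bonferroni-corrected alpha. Data are made up.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

groups = {
    "g1": [10.1, 10.3, 9.9, 10.2, 10.0],
    "g2": [12.0, 12.2, 11.8, 12.1, 11.9],
    "g3": [10.0, 10.2, 9.8, 10.1, 9.9],
}

# Step 1: omnibus F-test for any inequality of means.
F, p_anova = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p_anova:.2g}")

# Step 2: only if the omnibus test is significant, locate the difference
# with pairwise t-tests at alpha = 0.05 / (number of comparisons).
pairs = list(combinations(groups.items(), 2))
alpha = 0.05 / len(pairs)  # 0.05 / 3 for three groups
if p_anova < 0.05:
    for (n1, x1), (n2, x2) in pairs:
        _, p = ttest_ind(x1, x2)
        print(f"{n1} vs {n2}: p = {p:.2g} ({'sig' if p < alpha else 'ns'})")
```

Bonferroni is the bluntest possible correction here; dedicated post-hoc tests such as Tukey's HSD are usually preferred after a significant F-test, but the gating logic is the same.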
I hope this goes some way toward convincing you that there really is a place for ANOVA.