Some time ago, a multiple comparison procedure for comparing several treatments simultaneously with a control or standard treatment was introduced by the present author (Dunnett [1955]). The procedure was designed either to test the significance of the differences between each of the treatments and the control with a stated value 1 - P for the joint significance level, or to set confidence limits on the true values of the treatment differences from the control with a stated value P for the joint confidence coefficient. The procedure thus controls the experimentwise, rather than the per-comparison, error rate associated with the comparisons, in common with the multiple comparison procedures of Tukey [unpublished] and Scheffé [1953].

In the earlier paper, tables were provided enabling up to nine treatments to be compared with a control with joint confidence coefficient either .95 or .99. Tables for both one-sided and two-sided comparisons were given but, as explained in that paper, the two-sided values were inexact for the case of more than two comparisons, owing to an approximation which had to be made in the computations.

The main purpose of the present paper is to give exact tables for making two-sided comparisons. The necessary computations were done on a General Precision LGP-30 electronic computer, by a method described in section 3 below. The tables are given here as Tables II and III; these replace Tables 2a and 2b, respectively, of the previous paper. In addition to providing the exact values, a method is given for adjusting the tabulated values to cover the situation where the variance of the control mean is smaller than the variance of the treatment means, as occurs, for example, when a greater number of observations is allocated to the control than to any of the test treatments. Furthermore, the number of treatments which may be simultaneously compared with a control has been extended to twenty.
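To illustrate what the tabulated values represent: with k treatments and a control, each with n observations and a pooled error estimate s on a given number of degrees of freedom, the statistics T_i = (x̄_i - x̄_0)/(s√(2/n)) jointly follow a multivariate t distribution with pairwise correlation 1/2, and the two-sided table entry is the value d for which P(max_i |T_i| ≤ d) = 1 - P. The following is a minimal Monte Carlo sketch of that quantile, assuming equal sample sizes; it is not the exact computational method of section 3, and the function name and defaults are illustrative only.

```python
import numpy as np

def dunnett_two_sided_critical(k, df, alpha=0.05, n_sim=200_000, seed=0):
    """Monte Carlo estimate of the two-sided critical value d with
    P(max_i |T_i| <= d) = 1 - alpha, where T_i = (Z_i - Z_0) / (S * sqrt(2))
    for k treatments vs. a common control, equal sample sizes assumed.

    k     : number of treatments compared with the control
    df    : degrees of freedom of the pooled error estimate
    alpha : experimentwise (joint) error rate
    """
    rng = np.random.default_rng(seed)
    z0 = rng.standard_normal(n_sim)              # standardized control mean
    z = rng.standard_normal((n_sim, k))          # standardized treatment means
    s = np.sqrt(rng.chisquare(df, n_sim) / df)   # pooled s / sigma
    # Each (Z_i - Z_0) has variance 2; dividing by sqrt(2)*S gives the
    # multivariate t statistics with pairwise correlation 1/2.
    t = np.abs(z - z0[:, None]) / (np.sqrt(2.0) * s[:, None])
    return np.quantile(t.max(axis=1), 1.0 - alpha)
```

For k = 1 this reduces to the ordinary two-sided t critical value, and the estimate grows with k, reflecting the experimentwise (rather than per-comparison) error control described above.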