
Behavior Research Methods, Instruments, & Computers
1990, 22 (3), 271-282



METHODS & DESIGNS

Statistical power analysis for one-way analysis
of variance: A computer program
MICHAEL BORENSTEIN
Hillside Hospital, Long Island Jewish Medical Center, Glen Oaks, New York
and Biostatistical Programming Associates, Teaneck, New Jersey
JACOB COHEN
Biostatistical Programming Associates, Teaneck, New Jersey
and New York University, New York, New York
HANNAH R. ROTHSTEIN
Baruch College, City University of New York, New York, New York
SIMCHA POLLACK
Hillside Hospital, Long Island Jewish Medical Center, Glen Oaks, New York
Biostatistical Programming Associates, Teaneck, New Jersey
and St. John's University, Jamaica, New York
and
JOHN M. KANE
Hillside Hospital, Long Island Jewish Medical Center, Glen Oaks, New York

To facilitate the computation of statistical power for analysis of variance, Cohen developed the index of effect size f, defined as the SD between groups divided by the SD within groups. A microcomputer program for statistical power allows the user to compute the value of f in any of several ways: by specifying the mean and SD for every cell in the ANOVA; by specifying the mean value for the two extreme cells and the pattern of dispersion for the remaining cells; by estimating the proportion of variance in the dependent variable that will be explained by group membership; and/or with reference to conventions for small, medium, and large effects. The program will compute power for any single set of parameters; it will also allow the user to generate tables and graphs showing how power will vary as a function of effect size, sample size, and α.
The process of statistical inference carries with it the
potential for two types of error: A Type I error, which
is made when the treatment under study is actually not
effective but the researcher concludes that it is effective,
and a Type II error, which is made when the treatment
actually is effective but the researcher concludes that it
is not.
The probability that the study will result in a Type II error (β), or, expressed conversely, the probability that the study will yield significant results if the treatment is effective (the study's power), is a function of three factors: (1) sample size (the larger the sample size, the more confidence we have in the observed effect and so the higher the power); (2) α (the more liberal the criterion that will be accepted as valid, the higher the power); and (3) effect size (the larger the population effect, the more likely that the effect observed in the study will meet the criterion for statistical significance, and the higher the power).

This research was supported in part by the following grants: NIMH/SBIR 1-R43-MH-43083-01, NIMH/SBIR 1-R43-MH-43083-02, and NIMH MH-41960-02. The authors would also like to express their appreciation to the editor and reviewers for their comments on an earlier draft of this paper. Correspondence should be addressed to Michael Borenstein, Hillside Hospital, A Division of Long Island Jewish Medical Center, P.O.B. 38, Glen Oaks, NY 11004.
It follows that in order to compute power, the researcher must provide an estimate of the hypothesized effect size. Whereas the specification of an effect size is fairly direct for t tests (typically, the standardized mean difference is used, as in Dunlap, 1981), for correlations (the correlation coefficient itself serves as the effect size, as in Dunlap & Kemery, 1985), or for proportions (the proportion of positive cases in each group is reported, as in Dunlap, 1982), the specification of an effect size for analysis of variance or covariance is problematic, since the effect is a function of the dispersion among and within any number of groups.
One index of effect size that can be applied to analysis of variance is the noncentrality parameter φ (Odeh & Fox, 1975), and computer programs that compute power for
analysis of variance typically require the user to specify the effect size as some variant of this parameter (see Goldstein, 1989). This approach is less than ideal, however, since φ is determined not only by the effect size but also by the sample size. For the purpose of power analysis, it would be preferable to use an index that was determined solely by the magnitude of the effect, so that the researcher could develop an intuitive feel for the index, and could manipulate the effect size and the sample size independently of each other.
This need for a standardized measure of effect size is
addressed by the index f developed by Cohen (1977, 1988;
Borenstein & Cohen, 1988). In the present paper, we
describe this index and present a computer program that
enables the researcher to compute f, and, using f, to determine power for analysis of variance or covariance.
The f Index of Effect Size
To facilitate computation of power, Cohen (1977, 1988) suggested the use of an effect size index, f, defined as the ratio of the SD between groups to the SD within groups (i.e., SDB/SDW), which is not to be confused with the test statistic denoted by an uppercase F. In theory, this index can range in value from zero to an indefinitely high positive value, but in practice, it typically falls in the range from zero (indicating no effect; i.e., the null hypothesis) to 0.40 (representing an effect that is larger than is typically found in social science research). Since researchers are not generally familiar with f, it would be instructive to place this index in the context of other indices of effect size.
The effect size f in relation to effect size d. Cohen (1977, 1988) created the index d as a measure of effect size for the difference between two group means that are to be compared by a t test. This index is defined as the standardized mean difference (i.e., the absolute value of the mean difference divided by the standard deviation within a group). The index f may be seen as the extension of this concept to the case of multiple group means and analysis of variance, in that f is again the standardized measure of dispersion between groups; but in this case, dispersion among multiple groups is defined in terms of the assumed (population) standard deviation between groups rather than in terms of the (population) difference score.
When the analysis is limited to two groups, the researcher has the option of comparing the groups by t test and applying the effect size d, or of comparing the groups by analysis of variance and applying the effect size f. In this case, d and f are related by the function f = d/2. For example, if we posit means of 10 versus 15, and an SDW of 10, d would be equal to (15 - 10)/10, or 0.5. Equivalently, f would equal 2.5/10, or 0.25.
When there are more than two groups, the correspondence between d and f is more complex, since d is able to account for only two means, whereas f incorporates information about more than two means. In this case, d may be calculated on the basis of the single lowest and highest group means, adjusted to reflect the dispersion of the additional group means, and then used to derive a value for f. The dispersion of the group means may be described as matching one of three patterns, as follows: (1) The remaining groups fall at the midpoint between the two extreme groups; this serves to minimize the between-groups dispersion and yields the lowest value of f. (2) The remaining groups are spread evenly over the entire range between the two extremes; this results in somewhat more dispersion between groups than in the previous case, and a somewhat higher value for f. (3) The remaining groups fall at either extreme; this results in the highest possible dispersion between groups and the highest value of f, given the constraints imposed by the two extreme means and the SD. The precise function for calculating f in this manner is presented in Appendix A.
The effect size f in relation to η². An intuitive measure of effect size, η² is defined as the proportion of variance in the dependent variable that may be explained as a function of group membership. It is thus the ANOVA analogue to R², the value that would be cited in the case of multiple regression. The effect size f is related to η² by the function

f = √[η² / (1 - η²)],

or, equivalently, f² = η²/(1 - η²).

In the range of η² typically encountered (0.0-0.14), f² will be roughly comparable to η², and f will be roughly comparable to the correlation coefficient r.
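As a small illustration (again, not taken from the program itself), the conversion between η² and f follows directly from this relation:

def f_from_eta_squared(eta_sq):
    # f = sqrt(eta^2 / (1 - eta^2))
    return (eta_sq / (1.0 - eta_sq)) ** 0.5

def eta_squared_from_f(f):
    # Inverse relation: eta^2 = f^2 / (1 + f^2)
    return f * f / (1.0 + f * f)

print(round(f_from_eta_squared(0.0727), 2))   # 0.28, the values shown in Figure 5
print(round(eta_squared_from_f(0.28), 3))     # 0.073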

f in relation to φ. The index φ, which is used in other treatments of power (see, e.g., Odeh & Fox, 1975; Owen, 1962; Scheffé, 1959; Winer, 1971), standardizes the magnitude of the effect by the standard error of the sample mean and is thus (in part) a function of the size of each sample, n, while f is solely a descriptor of the population (Cohen, 1988). The relationship between f and φ is given by

f = φ/√n

or

φ = f√n.
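A minimal sketch of the same relationship, assuming n is the number of cases per cell (illustrative only):

import math

def phi_from_f(f, n_per_cell):
    # phi = f * sqrt(n), where n is the number of cases per cell
    return f * math.sqrt(n_per_cell)

def f_from_phi(phi, n_per_cell):
    # f = phi / sqrt(n)
    return phi / math.sqrt(n_per_cell)

# With f = 0.28 and n = 20 per cell, phi is about 1.25.
print(round(phi_from_f(0.28, 20), 2))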

Conventions for the effect size f. Cohen suggests that for research in the social sciences, a small effect would correspond to an f on the order of 0.10, a medium effect would correspond to an f on the order of 0.25, and a large effect would correspond to an f on the order of 0.40. These conventions also seem appropriate for the hypothetical example from the physical sciences that is presented below.
In the case of the small effect (f = .10), the between-groups dispersion (SD) is one tenth as large as the within-groups dispersion. Equivalently, about 1% of the variance in the dependent variable may be explained by group membership. For example, a researcher is planning to compare the total cholesterol levels of three groups whose diets are comparable except for the type of oil used in cooking. The researcher in this example anticipates that the group means will range over 6 units (197, 200, 203). The SDB is 2.5 units and the SDW is 25 units; f would be calculated as SDB/SDW (2.5/25), or 0.10. This magnitude of effect is displayed in the top segment of Figure 1.
A medium effect (f = .25) corresponds to the case in which the between-groups dispersion is one fourth as large as the within-groups dispersion. Equivalently, f = .25 indicates that some 6% of the variance in the dependent variable is explained by group membership. To extend the cholesterol example, the same researcher is thinking about comparing the cholesterol levels of three groups that report no exercise, minimal exercise, and moderate exercise, respectively. In this case, the researcher believes that the group means will vary over 16 units (192, 200, 208). The SDB is 6.5 units and the SDW is 25 units; f would be calculated as SDB/SDW (6.5/25), or 0.26. This effect is shown in the middle segment of Figure 1.
A large effect (f = .40) indicates that the SDB is four tenths as large as the SDW. Equivalently, it implies that 14% of the variance in the dependent variable may be explained by group membership.

Figure 1. This figure displays three hypothetical studies. The top segment represents a small effect size (f = .10); the middle segment represents a medium effect size (f = .25); and the bottom segment represents a large effect size (f = .40). In each case, the means of the three distributions are indicated by vertical lines.

In the running example, the researcher is thinking about comparing the cholesterol levels in three groups whose diets differ in more substantial ways (i.e., a group of people who avoid foods high in cholesterol, a group that eats these foods in moderation, and a group that eats them on a regular basis). The researcher believes that the mean cholesterol levels for these groups will range over 24 units (188, 200, 212). The SDB is 9.75 units and the SDW is 25 units; f would be calculated as SDB/SDW (9.75/25), or 0.39. This effect is shown in the bottom segment of Figure 1.
Use of the index f in analysis of covariance. The preceding discussion of f as an index for analysis of variance may be applied to analysis of covariance as well, except that in this case the dependent variable of interest is now adjusted to take account of the covariate. Concretely, the formula SDB/SDW now applies to the SDs of the adjusted means. Typically, the denominator will shrink (the adjusted SDW is equal to the original SDW multiplied by √(1 - rXY²), where X is the covariate and Y is the dependent variable), while the numerator will undergo no systematic change (and may increase). Therefore, the effective f for analysis of covariance will be greater than the f for the corresponding ANOVA. Equivalently, if f is conceptualized in relation to the effect index d, the SD used in computing d would be reduced, with a corresponding increase in d and f. Finally, if f is perceived in terms of the proportion of variance explained, the total variance to be explained is now reduced by the use of a covariate: The amount of variance explained by group membership is now seen as a proportion, not of the whole, but of the part that remains unexplained following the introduction of the covariate. A more detailed discussion of these points is provided in Cohen (1988).
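As a rough illustration of this point (a sketch under the assumption that the between-groups SD is unchanged by the adjustment; it is not a routine from the program):

import math

def f_adjusted_for_covariate(f, r_xy):
    # When a covariate correlates r_xy with the dependent variable, SDW
    # shrinks by sqrt(1 - r_xy^2), so f grows by the reciprocal factor
    # (assuming the between-groups SD of the adjusted means is unchanged).
    return f / math.sqrt(1.0 - r_xy ** 2)

# A covariate correlating .50 with the outcome raises f = 0.25 to about 0.29.
print(round(f_adjusted_for_covariate(0.25, 0.50), 2))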
In the balance of this paper, we describe a computer program that enables the user to compute power for analysis of variance or covariance, by specifying the effect size f with reference to any of the definitions discussed above.
Operation of the Program
Overview. The user is presented with a spreadsheet (Figure 2) and prompted to enter values for the total sample size (N), the number of cells in the one-way ANOVA (k), α, and the effect size f. The cursor keys allow the user to move freely between cells and repeatedly modify any value(s) without having to reenter the other values. After all values have been entered or modified in this way, the user presses <F9> and the program reports the value of power.
HELP screens for calculating f. While any of the required values may be entered directly, the user will typically require some assistance in determining an appropriate value for the index f. For this reason, the program incorporates various HELP screens that enable the user to compute f by entering the individual cell means and SDs; to derive a value for f based on its association with d or η²; or to assign a value based on the conventions of small, medium, and large effects. The operation of these various methods for calculating f is outlined here;


[Figure 2: the main screen of the program, with data-entry fields for Effect size f (0.280), Total N (80), Number of Groups (4), and Alpha (0.050), and a function-key menu for entering f directly, entering a value for each cell, entering the proportion of variance, entering the range of cells, viewing/modifying the value in context, and the general HELP screen.]

Figure 2. The user enters values for N, number of groups, and α. The effect size f may also be entered directly from this screen. Alternatively, the user may invoke one of the other screens and allow the program to compute f.

[Figure 3: the Method 1 HELP screen, showing four groups with means of 45, 50, 55, and 60, an SD of 20, and n = 20 in each cell (mean n per cell = 20, total N = 80); the computed value of f is 0.28.]

Figure 3. Method 1: The user specifies the mean, SD, and n for every cell, and the program computes the corresponding value of f.

the computational algorithms are presented in Appendix A.
Method 1: Exact calculation of f. By pressing <F3>, the user transfers control of the program to the HELP screen shown in Figure 3, and is asked to enter the mean, SD, and n for every cell in the ANOVA. When the user presses <F9>, the program calculates f by the formula SDB/SDW and transfers this value to the main program.
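The calculation in Method 1 is simply the ratio SDB/SDW computed from the specified cells. The following Python sketch (illustrative only; the program's exact routine is given in Appendix A) reproduces the value shown in Figure 3:

import math

def cohens_f(means, sds, ns):
    # Cohen's f = SDB / SDW, computed from cell means, SDs, and ns.
    # Cell sizes weight both the between- and within-group SDs.
    n_total = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / n_total
    sd_between = math.sqrt(sum(n * (m - grand_mean) ** 2
                               for n, m in zip(ns, means)) / n_total)
    sd_within = math.sqrt(sum(n * s ** 2 for n, s in zip(ns, sds)) / n_total)
    return sd_between / sd_within

# The cells shown in Figure 3: means 45, 50, 55, 60; SD 20; n = 20 per cell.
print(round(cohens_f([45, 50, 55, 60], [20] * 4, [20] * 4), 2))   # 0.28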
Method 2: Approximate calculation of f based on extreme cells and pattern of dispersion. In some cases, the user will find it difficult to specify a mean for each cell in the ANOVA, but will have a fairly accurate idea of the means for the extreme cells. By pressing <F4>, the user invokes the HELP screen shown in Figure 4, and is asked to enter the mean for the two extreme cells, the SD within cells, and the pattern of dispersion for the remaining cells (clustered at the center, evenly spaced, or clustered at the extremes). The user then presses <F9> to compute the approximate value of f and transfer this value to the main program.
Method 3: Calculation of f as a function of η². By pressing <F5>, the user invokes the screen shown in Figure 5. The program reports that the f value specified earlier would correspond to an η² of 0.07, indicating that some 7% of the variance in the dependent variable is accounted for by group membership. The user may accept this value and return to the main screen. Alternatively, the user may elect to type in a new value for η². In this case the program would compute the corresponding value of f, and then transfer this value to the main program.
Method 4: Specifying f in the context of small, medium, and large effect sizes. By pressing <F6>, the user invokes a screen in which the most recently specified value of f is displayed as a bar graph on which the points corresponding to small, medium, and large effects are highlighted (Figure 6). Thus, the user is able to satisfy himself or herself that the specifications entered by one of the other methods are consistent with the magnitude of effect that is typical in the given field of study. The user has the option of entering a new value for f on the basis of the displayed conventions, and then transferring this value to the main program.
Putting it all in perspective. Thus, the value of f may be entered directly into the main program or calculated via any of these HELP screens. The user who is able to specify a mean and an SD for every cell in the ANOVA would want to use the first method to calculate an exact value for f. The user who is not able to specify a value for every cell, but is able to estimate the value of the two extreme means, the SD within cells, and the pattern for the other cells, would use the second method to derive an estimate for f. The third method involves no assumptions about the means or SD of any cell. In this case, the user is required only to specify the proportion of variance in the dependent variable that will be explained by group membership. The final method requires simply that the user choose a value based on conventions.
These various HELP screens are linked to each other, so that the user may calculate and view the value of f in
any number of ways, to ensure that the value selected is appropriate from more than one perspective. For example, the user might calculate f initially by Method 1, specifying the mean, SD, and n for every cell as shown in Figure 3, and finding that f = 0.28. Alternatively, the user might calculate f initially by Method 2, providing the values for extreme means shown in Figure 4 and finding that f = .28. (Note that the values in these two figures describe the same population, and that both methods yield the same value for f.) Whether the initial estimate of f had been obtained by the first or the second method, the user might then proceed to invoke Method 3, and observe that an f value of 0.28 corresponds to an η² of 0.07, indicating that 7% of the variance in the dependent variable may be explained by group membership. Finally, the user might invoke Method 4, and note that the specified effect of f = .28 places this effect slightly above the value (0.25) adopted as a medium effect size (Figure 6). If these values are consistent with the user's expectations, he or she would proceed to compute power. Otherwise, the user would be free to modify the specified effect size at any point.
Program Options
Calculation of power for a single set of parameters. After providing values for sample size, number of groups, α, and the effect size f, the user may press <F9> to display the corresponding value of power.
Tables of power as a function of effect size and sample size. The user who is planning a study will typically want to know how power varies as modifications are made to the effect size (what happens if the population effect is actually smaller or larger than our projections?) and sample size (how much would power increase if the sample size were increased by N subjects?). The program allows the user to specify up to four values for f and up to 70 values for sample size (e.g., f varies from 0.10 by 0.10 to 0.40, and N varies from 20 by 2 to 150). The program will then create a table (Figure 7) that shows power as a function of these values. In this example, the status lines at the top of the screen indicate that the number of groups is constant at 4 and that α is constant at 0.05, while f and N are allowed to vary. If we assume an effect of f = .30 and want to work with power of 0.80, the study would require a total of 124 subjects divided among the four groups.
Graphs of power as a function of effect size and sample size. By pressing a function key, the user is able to transform this table into an on-screen graph (Figure 8). In this example, the four lines correspond to f values of 0.10, 0.20, 0.30, and 0.40, while the 70 columns represent sample sizes of 20, 22, ... 160. The cursor keys enable the user to highlight any of the 280 points graphed on the screen. As any point is highlighted, the precise values for that point are shown at the top of the screen. In this example, the status lines at the top of the screen indicate that the highlighted point corresponds to f = .30, N = 124, number of groups = 4, and α = .05. The status lines show that for this set of values power is reported as 0.80, which is consistent with the value shown in the tabular format.


[Figure 4: the Method 2 HELP screen, showing Number of Groups = 4, SD within a group = 20, mean for the lowest group = 45, mean for the highest group = 60, and dispersion pattern = 2 (spaced evenly); Patterns 1 (clustered) and 3 (extreme) are also illustrated. The computed value of f is 0.28.]

Figure 4. Method 2: The user specifies the mean for the lowest and the highest cells. In addition, the user reports the pattern of dispersion for the remaining cells. (Pattern 1 includes five overlapping points at the center, and Pattern 3 includes three overlapping points
at either extreme. On the two-dimensional screen, these overlapping points are displayed as adjacent to each other.) The program estimates the corresponding value of f.

[Figure 5: the Method 3 HELP screen, showing Proportion of Variance Explained (eta-squared) = .0727 and the corresponding value of f = 0.28.]

Figure 5. Method 3: This screen shows that the f value computed earlier would imply that η² is 0.07; that is, some 7% of the variance in the dependent variable may be explained by group membership. The user could elect to type in another value for η² at this point, and the program would modify the f value in response.

[Figure 6: the Method 4 HELP screen, showing the value f = 0.28 on a scale from 0.00 to 1.00 with the conventional small, medium, and large effect sizes marked.]

Figure 6. Method 4: This screen shows that the f value computed earlier would correspond to a medium effect size. The program allows the user to modify the value of f from this screen as well.

[Figure 7: an on-screen table of power as a function of total N (rows) and f (columns), with GROUPS = 4 and ALPHA = 0.050; the visible portion of the table is reproduced below.]

 N TOT   f = 0.100   f = 0.200   f = 0.300   f = 0.400
  110      0.120       0.389       0.747       0.949
  112      0.121       0.396       0.756       0.953
  114      0.123       0.403       0.764       0.957
  116      0.124       0.409       0.772       0.960
  118      0.126       0.416       0.780       0.963
  120      0.128       0.423       0.788       0.966
  122      0.129       0.429       0.795       0.968
  124      0.131       0.436       0.802       0.971
  126      0.132       0.443       0.809       0.973
  128      0.134       0.449       0.816       0.975
  130      0.136       0.455       0.823       0.977
  132      0.137       0.462       0.829       0.979

Figure 7. This table shows how power will vary as a function of sample size and effect size. The legend at the top indicates that the user has specified a study design with four groups and that α is set at 0.05.


[Figure 8: an on-screen line graph of power (0.00 to 1.00, vertical axis) against total N (horizontal axis, 20 to about 160) for the four values of f; the highlighted point, at f = 0.300 and N TOT = 124 with GROUPS = 4 and ALPHA = 0.050, shows POWER = 0.8024.]

Figure 8. This graph shows how power (on the vertical axis) will vary as a function of effect size (represented by the four lines) and
sample size (on the horizontal axis). The cursor keys are used to highlight any point on the graph, and precise values for that point are
presented at the top of the screen.

Interface to other programs. The program allows the
tables and graphs to be sent to the printer or to an ASCII
file which may then be input into other programs for additional manipulations.
Algorithms
Power is calculated by determining the upper tail area of the noncentral F distribution, corresponding to the alternate hypothesis, that exceeds the critical value for F under the null hypothesis. This approach relies on three algorithms. A subroutine adapted from the Odeh and Evans (1974) approximation of the inverse normal distribution function (see also Brophy, 1985) reports the value of z corresponding to α. This value, together with values for df and effect size, is sent to a subroutine that uses the Laubscher (1960) chi-square-based square-root algorithm to approximate the noncentral F distribution, and returns a value of z corresponding to power (see also Cohen & Nee, 1987, and Fowler, 1984, for a discussion of this algorithm's accuracy). This z value is then converted to an area under the normal curve, that is, to power, by means of equation 26.2.19 in Abramowitz and Stegun (1965). Additional details of the computational algorithms are given in Appendix B.
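To make the sequence concrete, the following Python sketch follows the same outline: it obtains the critical F under the null hypothesis, applies Laubscher's (1960) square-root approximation to the noncentral F, and converts the resulting z to an area. It is an illustration only, not the program's code, and it uses SciPy in place of the normal-distribution routines described above.

from scipy.stats import f as f_dist, norm

def anova_power(f_effect, n_total, k, alpha=0.05):
    # Approximate power of the one-way ANOVA F test for a given Cohen's f.
    # Illustrative sketch: the critical F and the normal CDF come from SciPy
    # rather than from the Appendix B routines; the noncentral F tail area
    # uses Laubscher's (1960) square-root approximation.
    df1 = k - 1                       # numerator df
    df2 = n_total - k                 # error df
    lam = f_effect ** 2 * n_total     # noncentrality parameter, lambda = f^2 * N
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    a = (2.0 * (df1 + lam) - (df1 + 2.0 * lam) / (df1 + lam)) ** 0.5
    b = ((2.0 * df2 - 1.0) * df1 * f_crit / df2) ** 0.5
    c = (df1 * f_crit / df2 + (df1 + 2.0 * lam) / (df1 + lam)) ** 0.5
    return norm.cdf((a - b) / c)

# Reproduces the worked example of Figures 7 and 8: f = 0.30, N = 124,
# 4 groups, and alpha = .05 give power of about 0.80.
print(round(anova_power(0.30, 124, 4), 2))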
Accuracy
To assess the accuracy of the program, values generated by the program were compared with the exact values reported by Tiku (1967), who used the method described by Tang (1938). This comparison was carried out for α = .01, .05; dfnum = 1, 3, 9 (corresponding to number of groups = 2, 4, 10); dferror = 8, 10, 20, 40, 120; and φ (used by Tiku) corresponding to 0.5, 1.0, 2.0, and 3.0. The comparisons for dferror = 20 are shown in Table 1. Across this range of parameters (and specifically, when dferror equals or exceeds 8), the algebraic error ranges from -0.018 to +0.016, with a mean of 0.001; the mean absolute error is 0.004.
The requirement that dferror (defined as the number of cases minus the number of groups) must equal or exceed 8 should have little impact on the utility of this program, since this requirement will be met by virtually all studies. For example, a dferror of 8 corresponds to a study with 2 groups and 10 cases (5 per cell); 4 groups and 12 cases (3 per cell); or 10 groups and 18 cases (a mean of 1.8 cases per cell). The user is cautioned, however, that the program is not intended for use in circumstances where the dferror is less than 8. Under these circumstances, the algebraic error was found to range from -0.092 to +0.047, with a mean of -0.006; the mean absolute error was 0.024.

Speed
Calculation of a single power value is virtually instantaneous on any model of IBM-compatible personal computer. A table with four columns and 70 rows (i.e., 280 cells) is computed and the graph displayed in about 12 sec on an IBM PS/2 Model 80 with a math coprocessor. Once


Table 1
Power Reported by the Program in Relation to Values Reported by Tiku

  α      F1   F2    φ     Power (Tiku)*   Groups†    N‡      f§     Power (Program)   Difference‖
  0.01    1   20   0.50      0.028           2       22    0.151        0.021           -0.007
  0.01    1   20   1.00      0.101           2       22    0.302        0.096           -0.005
  0.01    1   20   2.00      0.508           2       22    0.603        0.515            0.007
  0.01    1   20   3.00      0.904           2       22    0.905        0.909            0.005
  0.01    3   20   0.50      0.027           4       24    0.204        0.024           -0.003
  0.01    3   20   1.00      0.113           4       24    0.408        0.112           -0.001
  0.01    3   20   2.00      0.653           4       24    0.816        0.653            0.000
  0.01    3   20   3.00      0.979           4       24    1.225        0.980            0.001
  0.01    9   20   0.50      0.029          10       30    0.289        0.030            0.001
  0.01    9   20   1.00      0.159          10       30    0.577        0.159            0.000
  0.01    9   20   2.00      0.864          10       30    1.155        0.863           -0.001
  0.01    9   20   3.00      1.000          10       30    1.732        0.999           -0.001
  0.05    1   20   0.50      0.103           2       22    0.151        0.101           -0.002
  0.05    1   20   1.00      0.270           2       22    0.302        0.280            0.010
  0.05    1   20   2.00      0.768           2       22    0.603        0.783            0.016
  0.05    1   20   3.00      0.981           2       22    0.905        0.983            0.002
  0.05    3   20   0.50      0.104           4       24    0.204        0.104            0.000
  0.05    3   20   1.00      0.300           4       24    0.408        0.306            0.006
  0.05    3   20   2.00      0.874           4       24    0.816        0.877            0.003
  0.05    3   20   3.00      0.998           4       24    1.225        0.998            0.000
  0.05    9   20   0.50      0.114          10       30    0.289        0.116            0.002
  0.05    9   20   1.00      0.391          10       30    0.577        0.393            0.002
  0.05    9   20   2.00      0.974          10       30    1.155        0.975            0.001
  0.05    9   20   3.00      1.000          10       30    1.732        1.000            0.000

*Tiku (1967) reports β; the values shown here are 1 - β.  †Number of groups computed as dfnum + 1.  ‡Total number of cases computed as dfdenominator + number of groups.  §f is computed as φ/√(N/number of groups).  ‖Algebraic difference: program value minus Tiku's value.

... .5 THEN CUM = .5
IF TAILS = 2 THEN CUM = 1 - ALPHA/2
RESULT = Zfromprob(CUM, 1)
END FUNCTION

REAL FUNCTION: ZFROMPROB   ' IN CONCERT WITH ZFROMALPHA, RETURNS Z FOR ALPHA
' ADAPTED FROM ODEH AND EVANS (1974)
TAILS = 1
A11 = 4.53642210148E-05#
A12 = .0204231210245#
A13 = .342242088547#
A14 = .322232431088#
A15 = .0038560700634#
A16 = .10353775285#
A17 = .531103462366#
A18 = .588581570495#
A19 = .099348462606#
XP = P : IF TAILS = 2 THEN XP = P/2
R = XP : IF XP > .5 THEN R = 1 - XP
IF R < 1E-20 THEN Z = 10 : RESULT = Z : EXIT
Y = SQR(-2*LOG(R))
Z = Y - ((((A11*Y + A12)*Y + A13)*Y + 1)*Y + A14) / ((((A15*Y + A16)*Y + A17)*Y + A18)*Y + A19)
IF XP