Thursday, December 18, 2008

Issues with Power and Multiple Randomized Studies

In the discussion regarding CTRT (chemoradiation) for cervical cancer, we got onto the topic of power and how it might explain the fact that 1 of the 5 randomized studies comparing CTRT vs RT was negative.

Definitions: Power is the probability that a study will detect the difference specified in its statistical design, if that difference truly exists. More formally, beta is the probability of accepting the null hypothesis when it is in fact false; this is a "type 2 error." Power is defined as 1 - beta.

Most studies set this at 80% or in that neighborhood. Therefore, in a study with 80% power, the chance of failing to find a real difference is 20%.
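
To make this concrete, here is a minimal simulation sketch in Python. The response rates, arm size, and test are hypothetical numbers of my choosing, not taken from the cervical cancer trials. It repeatedly simulates a two-arm trial in which a real difference exists and counts how often the trial detects it at p < 0.05; that fraction estimates the power, and the remaining runs are type 2 errors.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical numbers, for illustration only: control response rate 50%,
# treatment response rate 65%, 170 patients per arm. With a two-sided
# alpha of 0.05 this works out to roughly 80% power.
p_control, p_treatment, n_per_arm = 0.50, 0.65, 170
n_simulations = 10_000
alpha = 0.05

rejections = 0
for _ in range(n_simulations):
    responders_control = rng.binomial(n_per_arm, p_control)
    responders_treatment = rng.binomial(n_per_arm, p_treatment)
    # 2x2 table: responders vs non-responders in each arm
    table = [[responders_control, n_per_arm - responders_control],
             [responders_treatment, n_per_arm - responders_treatment]]
    _, p_value, _, _ = stats.chi2_contingency(table, correction=False)
    if p_value < alpha:
        rejections += 1

# The fraction of simulated trials that detect the real difference
# estimates the power (1 - beta); the rest are type 2 errors.
print(f"Estimated power: {rejections / n_simulations:.2f}")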

When you have 5 independent studies addressing the same question, each with 80% power, the chance that at least one of them commits a type 2 error can be calculated as follows:

1 - (0.8)^5 ≈ 0.67 (this is most easily calculated by finding the chance that none of the studies makes the error, 0.8^5 ≈ 0.33, and subtracting that from one).
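
The same arithmetic in a short Python sketch, generalized to any number of trials (the function name is my own, for illustration only):

# Chance that at least one of n independent trials, each with the given
# power, misses a real difference (commits a type 2 error).
def prob_at_least_one_miss(power, n_trials):
    return 1 - power ** n_trials

print(f"{prob_at_least_one_miss(0.80, 5):.2f}")   # 0.67 -- about a 2 in 3 chance

With 10 such studies, the chance of at least one false negative rises to about 89%.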

So, from this thought experiment, we should not be surprised that one of these trials was negative; at least one negative trial was in fact the more likely outcome.
