🔸The power of a study is defined as “the ability of a study to detect an effect or association if one really exists in a wider population.”
🔸In clinical research, we conduct studies on a subset of the patient population because it is not possible to measure a characteristic in the entire population. Therefore, whenever a statistical inference is made from a sample, it is subject to some error.
🔸Investigators try to reduce systematic errors with an appropriate design so that only random errors remain.
🔸Possible random errors to be considered before making inferences about the population under study are type I and type II errors.
🔸To make a statistical inference, two hypotheses must be stated: the null hypothesis (there is no difference) and the alternative hypothesis (there is a difference).
🔸The probability of reaching a statistically significant result when in truth there is no difference, that is, of rejecting the null hypothesis when it should have been accepted, is denoted α, the probability of a type I error. It is analogous to a false positive result of a clinical test.
🔸The probability of not detecting a minimum clinically important difference when in truth there is a difference, that is, of accepting the null hypothesis when it should have been rejected, is denoted β, the probability of a type II error. It is analogous to a false negative result of a clinical test.
🔸Properly, investigators choose the values of α and β before gathering any data, so that these choices cannot be influenced by the study results.
🔸The typical value of α is 0.05, and the p value calculated from the data is compared with α to decide whether the result is statistically significant.
🔸The typical value of β is set at 0.2. The power of the study is its complement, 1 − β, and is commonly reported as a percentage. Studies are often designed so that the chance of detecting the minimum clinically important difference (MCID) is 80%, with a 20% (β = 0.2) chance of missing it. This power value is arbitrary, and higher power is preferable to limit the chance of applying false negative (type II error) results (a simulation illustrating these error rates follows this list).
🔸The belief is that the consequences of a false positive (type I error) claim are more serious than those of a false negative (type II error) claim, so investigators make more stringent efforts to prevent type I errors.
🔸At the planning stage of a research study, investigators calculate the minimum required sample size by fixing the chances of type I and type II errors, the expected strength of association (effect size), and the population variability. This is called “power analysis,” and its purpose is to establish what sample size is needed to ensure a given level of power (at least 80%) to detect a specified effect size (a worked sample size calculation follows this list).
🔸From this, one can see that for a study to have greater power (smaller β or fewer type II errors), a larger sample size is needed.
🔸Sample size, in turn, is dependent on the magnitude of effect, or effect size. If the effect size is small, larger numbers of participants are required for the differences to be detected.
🔸Determining the sample size, therefore, requires the MCID in effect size to be agreed upon by the investigators.
🔸It is important for readers to remember that the point of powering a study is not to find a statistically significant difference between groups, but rather to find clinically important or relevant differences.
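The simulation below is a minimal sketch, not part of the original summary, illustrating how α, β, and power behave in practice for a two-group comparison with a t test. The sample size of 64 per group and the assumed true difference of 0.5 standard deviations are illustrative values chosen so that power works out to roughly 80%.

```python
# Sketch: simulate many studies to estimate type I and type II error rates.
# All numeric choices (alpha, sample size, true difference) are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # accepted risk of a type I error (false positive)
n_per_group = 64      # participants per group (assumed)
true_diff = 0.5       # assumed true difference, in SD units
n_sims = 5000

false_positives = 0   # null is true, yet p < alpha
false_negatives = 0   # a real difference exists, yet p >= alpha

for _ in range(n_sims):
    # Null hypothesis true: both groups drawn from the same distribution.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

    # Alternative true: group means differ by true_diff.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(true_diff, 1, n_per_group)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        false_negatives += 1

print(f"Type I error rate (should be near {alpha}): {false_positives / n_sims:.3f}")
print(f"Type II error rate (beta):                  {false_negatives / n_sims:.3f}")
print(f"Power (1 - beta), roughly 0.8 here:         {1 - false_negatives / n_sims:.3f}")
```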
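A power analysis itself can be sketched in a few lines. The example below assumes a two-sample t test with α = 0.05, 80% power, and an MCID expressed as a standardized effect size (Cohen's d) of 0.5; these are illustrative choices, not values from the text. It uses the statsmodels library to solve for the per-group sample size, and shows how a smaller effect size drives the required sample size up.

```python
# Sketch of a power analysis: solve for the per-group sample size needed to
# detect an assumed effect size at alpha = 0.05 with 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# effect_size is Cohen's d; 0.5 is an assumed MCID in SD units.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")   # about 64

# A smaller assumed effect size demands many more participants.
n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80)
print(f"For a smaller effect (d = 0.2): {n_small:.1f} per group")  # about 394
```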
N.B.
The odds ratio is the ratio of the odds of an event occurring in an exposed group versus a non-exposed group. It is commonly used to report the strength of association between an exposure and an event. The larger the odds ratio, the more likely the event is to be found with exposure; an odds ratio below 1 means the event is less likely to be found with exposure.
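As a hypothetical worked example (the counts below are invented for illustration), the odds ratio can be computed directly from a 2 × 2 table of exposure versus event:

```python
# Sketch: odds ratio from a 2x2 table. Counts are made up for illustration.
#                  event   no event
exposed_counts   = [20,      80]
unexposed_counts = [10,      90]

odds_exposed   = exposed_counts[0] / exposed_counts[1]      # 20/80 = 0.25
odds_unexposed = unexposed_counts[0] / unexposed_counts[1]  # 10/90 ≈ 0.11

odds_ratio = odds_exposed / odds_unexposed
print(f"Odds ratio: {odds_ratio:.2f}")  # 2.25: the event is more likely with exposure
```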