Multiple comparisons, False positives

Multiple comparisons

In statistics, the multiple comparisons or multiple testing problem occurs when one considers a set of statistical inferences simultaneously. Errors in inference, including confidence intervals that fail to include their corresponding population parameters and hypothesis tests that incorrectly reject the null hypothesis, are more likely to occur when the set is considered as a whole. Several statistical techniques have been developed to prevent this, allowing significance levels for single and multiple comparisons to be compared directly. These techniques generally require a stronger level of evidence for an individual comparison to be deemed "significant", so as to compensate for the number of inferences being made.

http://en.wikipedia.org/wiki/Multiple_testing
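
To make this concrete, here is a minimal sketch of the simplest such technique, the Bonferroni correction, which tests each of m hypotheses at the stricter level α/m. The function name and the example p-values below are illustrative, not taken from the source:

```python
import numpy as np

def bonferroni_correct(p_values, alpha=0.05):
    """Bonferroni correction: test each of m hypotheses at alpha/m,
    so the chance of any false positive stays at or below alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    adjusted = np.minimum(p * m, 1.0)  # adjusted p-values, capped at 1
    reject = p < alpha / m             # per-test threshold is alpha/m
    return reject, adjusted

# Example: 10 raw p-values; only those below 0.05/10 = 0.005 survive.
raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
reject, adj = bonferroni_correct(raw)
print(reject)  # only the p-value 0.001 remains significant
```

Note how the correction demands stronger evidence per test: at α = 0.05, several of these p-values would be called significant on their own, but after dividing the threshold by the number of tests only one survives.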

False positives

A type I error, also known as an error of the first kind, an α error, or a false positive, is the error of rejecting a true null hypothesis (H0). An example would be a test showing that a woman is pregnant (H0: she is not) when in reality she is not, or a test telling a patient he is sick (H0: he is not) when in fact he is not. A type I error can be viewed as the error of excessive credulity. In terms of folk tales, an investigator committing a type I error is "crying wolf" (raising a false alarm) without a wolf in sight (H0: no wolf).

http://en.wikipedia.org/wiki/False_positives#Type_I_error
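
As a small illustration (the seed, sample size, and number of tests below are arbitrary choices for demonstration), the following sketch simulates many t-tests in which the null hypothesis is true by construction, so every rejection is a false positive; the observed rate comes out near the significance level α:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 10_000

# Every null hypothesis here is TRUE: samples come from N(0, 1),
# and we test H0: mean == 0. Any rejection is a false positive.
false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        false_positives += 1

# The observed false positive rate should be close to alpha (~5%).
print(f"false positive rate: {false_positives / n_tests:.3f}")
```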

False discovery rate

False discovery rate (FDR) control is a statistical method used in multiple hypothesis testing to correct for multiple comparisons. In a list of rejected hypotheses, FDR control bounds the expected proportion of incorrectly rejected null hypotheses (type I errors). It is less conservative than familywise error rate (FWER) control and therefore has greater power, at the cost of a higher likelihood of type I errors.

http://en.wikipedia.org/wiki/False_discovery_rate
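
A standard FDR-controlling method is the Benjamini-Hochberg step-up procedure. Below is a minimal hand-rolled sketch (the example p-values are the same illustrative ones used above; in practice one would typically reach for a library routine such as statsmodels' multipletests):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: controls the expected
    proportion of false discoveries among rejections at level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)          # indices of p-values, ascending
    ranked = p[order]
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    thresholds = (np.arange(1, m + 1) / m) * q
    below = np.nonzero(ranked <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size > 0:
        k = below[-1]                  # largest qualifying rank (0-based)
        reject[order[: k + 1]] = True  # reject hypotheses ranked 1..k
    return reject

# Same example p-values as before: BH at q = 0.05 rejects two
# hypotheses where Bonferroni rejected only one, illustrating the
# greater power that FDR control buys at the cost of more type I errors.
raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(raw))
```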

Original source: https://www.cnblogs.com/emanlee/p/2084537.html