In statistics, the false discovery rate (FDR) is used to address the multiple testing problem. For a test procedure, it is defined as the expected ratio of incorrectly rejected null hypotheses to the total number of rejected null hypotheses, i.e. the expected proportion of Type I errors among the “discoveries”. FDR control procedures are designed to keep the expected proportion of discoveries (rejected null hypotheses) that are false (incorrect rejections) below a chosen level.
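Written out in the standard notation (a restatement of the definition above, not additional material from the source): with V the number of incorrectly rejected null hypotheses and R the total number of rejections,

```latex
\mathrm{FDR}
  = \mathbb{E}\!\left[\frac{V}{\max(R,\,1)}\right]
  = \mathbb{E}\!\left[\frac{V}{R}\,\middle|\,R > 0\right] \Pr(R > 0),
```

where the max(R, 1) convention sets the ratio to zero when no hypothesis is rejected at all.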
When multiple hypotheses are tested, the alpha error accumulates: the more tests are carried out, the more likely it becomes that at least one true null hypothesis is rejected purely by chance, producing a “false alarm”. For this reason, the per-test significance level in a multiple testing setting must be stricter, and therefore lower, than in a single hypothesis test. FDR control procedures control Type I errors less rigorously than familywise error rate (FWER) control procedures such as the Bonferroni correction. As a result, FDR control procedures have greater statistical power, but at the cost of a larger expected number of false positives.
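To make the accumulation concrete (an illustrative calculation assuming m independent tests of true null hypotheses, each at level α):

```latex
\Pr(\text{at least one false rejection}) = 1 - (1 - \alpha)^{m},
```

so with α = 0.05 and m = 20 independent tests the probability of at least one false alarm is already about 1 - 0.95^20 ≈ 0.64. The Bonferroni correction restores the familywise level by testing each hypothesis at α/m instead.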
The Bonferroni correction counters this alpha error accumulation with a uniformly low significance level for every hypothesis (the overall level divided by the number of tests), which makes a “false alarm” unlikely but costs power. The FDR, by contrast, is a quality criterion that measures the correctness of the set of discoveries, i.e. of the rejected null hypotheses; used as a target quantity, it allows a balance between as few “false discoveries” as possible and as many correct detections as possible. The Benjamini-Hochberg procedure chooses the per-hypothesis significance threshold in such a way that the FDR does not exceed a chosen level; a sketch of this step-up rule follows below.
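A minimal Python sketch of the Benjamini-Hochberg step-up procedure (function and variable names are illustrative, not taken from the text): the p-values are sorted, the i-th smallest is compared against (i/m)·q, and all hypotheses up to the largest index passing that comparison are rejected.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of rejected null hypotheses at FDR level q.

    Step-up rule: find the largest k with p_(k) <= (k / m) * q and
    reject the hypotheses belonging to the k smallest p-values.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices of p-values, ascending
    thresholds = np.arange(1, m + 1) / m * q   # (i/m) * q for i = 1..m
    passed = p[order] <= thresholds            # sorted p-values that pass their threshold
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()        # largest passing position (0-based)
        reject[order[:k + 1]] = True           # reject everything up to and including it
    return reject

# Illustrative run with made-up p-values:
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.570]
print(benjamini_hochberg(pvals, q=0.05))       # rejects the two smallest p-values here
```

For real analyses, an established routine such as statsmodels' multipletests(pvals, alpha=q, method='fdr_bh') implements the same rule and additionally returns BH-adjusted p-values.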
---
The widespread use of the FDR has been driven by technological advances that allow many different variables to be measured simultaneously for each individual. As high-throughput technologies became commonplace, technological and financial constraints led researchers to work with relatively small sample sizes (e.g. few individuals tested) and a very large number of variables per sample (e.g. thousands of gene expression levels). Few of the measured variables in these datasets remained statistically significant after classical corrections for multiple comparisons. This led many scientific communities to abandon FWER control as well as unadjusted multiple hypothesis testing in favour of other ways to highlight and rank, in publications, those variables that show marked effects across individuals or treatments but would otherwise be dismissed as non-significant after the usual correction for multiple testing.

Multiple comparison procedures that control the FDR are adaptive and scalable: FDR control can behave very permissively (if the data justify it) or conservatively (approaching FWER control), depending on the number of hypotheses tested and the significance level.

The false coverage rate (FCR) is the analogue of the FDR for confidence intervals. The FCR is the average rate of false coverage, i.e. the expected proportion of the selected confidence intervals that do not cover their true parameter.
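As a compact restatement in the notation used for the FDR above (an assumption for illustration, not notation from the source): if R confidence intervals are selected and V of them fail to cover their true parameter, then

```latex
\mathrm{FCR} = \mathbb{E}\!\left[\frac{V}{\max(R,\,1)}\right],
```

which mirrors the FDR definition with “interval fails to cover its parameter” playing the role of “false rejection”.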