enow.com Web Search

Search results

  1. Multiple comparisons problem - Wikipedia

    en.wikipedia.org/wiki/Multiple_comparisons_problem

    Although the 30 samples were all simulated under the null, one of the resulting p-values is small enough to produce a false rejection at the typical level 0.05 in the absence of correction. Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery".
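
    The effect described here is easy to reproduce. Below is a minimal Python sketch (the simulation setup is an illustrative assumption, not code from the article) that runs 30 one-sample t-tests on data generated under a true null and counts uncorrected versus Bonferroni-corrected rejections at the 0.05 level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    m, n, alpha = 30, 50, 0.05

    # Draw 30 independent samples from N(0, 1) and test H0: mean = 0 for each.
    # Every null hypothesis is true, so any rejection is a false positive.
    pvals = np.array([stats.ttest_1samp(rng.normal(size=n), 0.0).pvalue
                      for _ in range(m)])

    print("smallest p-value:", pvals.min())
    print("uncorrected rejections at 0.05:", np.sum(pvals < alpha))
    # The chance of at least one false rejection is 1 - 0.95**30, roughly 0.79.
    print("Bonferroni-corrected rejections:", np.sum(pvals < alpha / m))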

  2. Student's t-test - Wikipedia

    en.wikipedia.org/wiki/Student's_t-test

    However, the sample size required for the sample means to converge to normality depends on the skewness of the distribution of the original data. The required sample size can vary from 30 to 100 or higher depending on the skewness. [23] [24] For non-normal data, the distribution of the sample variance may deviate substantially from a χ² distribution.
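
    As a quick illustration of that point (simulated data, not an example from the article), the Python sketch below estimates the skewness of the sampling distribution of the mean for an exponential parent distribution at several sample sizes; the residual skewness decays roughly as the parent skewness divided by the square root of n, which is why more skewed data need larger samples before the normality assumption behind the t-test is reasonable.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Sampling distribution of the mean for a skewed parent (exponential, skewness = 2):
    # the skewness of the mean shrinks like 2 / sqrt(n), so convergence to normality
    # is slower than for symmetric data.
    for n in (10, 30, 100):
        means = rng.exponential(scale=1.0, size=(20000, n)).mean(axis=1)
        print(f"n={n:3d}  skewness of the sample means: {stats.skew(means):.3f}")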

  3. Autocorrelation - Wikipedia

    en.wikipedia.org/wiki/Autocorrelation

    The simplest version of the test statistic from this auxiliary regression is TR², where T is the sample size and R² is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as χ² with k degrees of freedom.
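
    A minimal NumPy sketch of this kind of auxiliary-regression test (the simulated AR(1) data and the two-lag choice below are illustrative assumptions, not taken from the article): regress the OLS residuals on the original regressors plus k lagged residuals, form TR² from that auxiliary fit, and compare it to a χ² distribution with k degrees of freedom.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    T, k = 200, 2                      # sample size, number of residual lags tested

    # Simulated regression y = 1 + 0.5*x + e with AR(1) errors (rho = 0.6).
    x = rng.normal(size=T)
    e = np.zeros(T)
    for t in range(1, T):
        e[t] = 0.6 * e[t - 1] + rng.normal()
    y = 1.0 + 0.5 * x + e

    # Residuals from the original OLS regression.
    X = np.column_stack([np.ones(T), x])
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

    # Auxiliary regression of the residuals on X and k lagged residuals
    # (pre-sample residuals padded with zeros).
    lags = np.column_stack([np.r_[np.zeros(i), u[:-i]] for i in range(1, k + 1)])
    Z = np.column_stack([X, lags])
    fitted = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    R2 = 1 - np.sum((u - fitted) ** 2) / np.sum((u - u.mean()) ** 2)

    LM = T * R2                        # the TR^2 statistic
    print("TR^2 =", LM, "  p-value =", stats.chi2.sf(LM, df=k))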

  4. Bootstrapping (statistics) - Wikipedia

    en.wikipedia.org/wiki/Bootstrapping_(statistics)

    The bias-corrected and accelerated ... but with a different formula (note the inversion of the left and right quantiles): ... for a sample size n; this ...
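
    The snippet above is truncated, but for a practical starting point SciPy ships a BCa implementation; the sketch below (the skewed sample and the choice of the mean as the statistic are illustrative assumptions) asks scipy.stats.bootstrap for a 95% bias-corrected and accelerated interval.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    sample = rng.lognormal(size=60)    # a skewed sample for illustration

    # 95% BCa confidence interval for the mean ('BCa' is SciPy's default method).
    res = stats.bootstrap((sample,), np.mean, confidence_level=0.95, method='BCa')
    print(res.confidence_interval)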

  5. Holm–Bonferroni method - Wikipedia

    en.wikipedia.org/wiki/Holm–Bonferroni_method

    This is because p_(1) is the smallest p-value in each one of the intersection sub-families and the size of the sub-families is at most m, so the Bonferroni threshold for any such sub-family is at least α/m. The same rationale applies for H_(2).
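
    For reference, here is a short Python sketch of the Holm step-down rule this passage is reasoning about (the example p-values are made up): sort the p-values and reject H_(i) as long as p_(i) <= α/(m - i + 1), stopping at the first failure.

    import numpy as np

    def holm_bonferroni(pvals, alpha=0.05):
        """Step-down Holm procedure: reject H_(i) while p_(i) <= alpha / (m - i + 1)."""
        pvals = np.asarray(pvals)
        m = len(pvals)
        reject = np.zeros(m, dtype=bool)
        for i, idx in enumerate(np.argsort(pvals)):   # i = 0 corresponds to p_(1)
            if pvals[idx] <= alpha / (m - i):
                reject[idx] = True
            else:
                break                                 # first non-rejection stops the procedure
        return reject

    print(holm_bonferroni([0.001, 0.01, 0.03, 0.04]))   # -> [ True  True False False]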

  6. Šidák correction - Wikipedia

    en.wikipedia.org/wiki/Šidák_correction

    The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be α₁; then the probability that at least one of the tests is significant under this threshold is (1 - the probability that none of them are significant).
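
    Carrying that derivation one step further: under independence the probability that at least one of m tests is significant is 1 - (1 - α₁)^m, so setting this familywise error rate equal to a target α and solving gives α₁ = 1 - (1 - α)^(1/m). A small Python check with illustrative numbers:

    m, alpha = 10, 0.05

    # Per-test threshold that makes the familywise error rate exactly alpha
    # when the m tests are independent.
    alpha_1 = 1 - (1 - alpha) ** (1 / m)
    print("Sidak per-test threshold:", alpha_1)       # about 0.00512
    print("implied FWER:", 1 - (1 - alpha_1) ** m)    # recovers 0.05
    print("Bonferroni threshold, for comparison:", alpha / m)   # 0.005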

  7. Sampling fraction - Wikipedia

    en.wikipedia.org/wiki/Sampling_fraction

    In sampling theory, the sampling fraction is the ratio of sample size to population size or, in the context of stratified sampling, the ratio of the sample size to the size of the stratum. [1] The formula for the sampling fraction is f = n/N, where n is the sample size and N is the population size. A sampling fraction value close to 1 will occur if ...
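
    As a worked example with made-up numbers: a sample of 250 drawn from a population of 10,000 gives a sampling fraction of 0.025.

    n, N = 250, 10_000   # illustrative sample and population sizes
    f = n / N            # sampling fraction
    print(f)             # 0.025; values near 1 mean the sample covers most of the population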

  8. White test - Wikipedia

    en.wikipedia.org/wiki/White_test

    Conversely, a "large" R² (scaled by the sample size so that it follows the chi-squared distribution) counts against the hypothesis of homoskedasticity. An alternative to the White test is the Breusch–Pagan test, which is designed to detect only linear forms of heteroskedasticity. Under certain conditions and a ...
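
    A quick way to try both tests mentioned here is statsmodels (the simulated heteroskedastic data below is an illustrative assumption, not from the article): fit OLS, then pass the residuals and the design matrix to het_white and het_breuschpagan.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan, het_white

    rng = np.random.default_rng(4)
    n = 300
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(size=n) * (1 + np.abs(x))   # error variance grows with |x|

    X = sm.add_constant(x)
    resid = sm.OLS(y, X).fit().resid

    lm, lm_pval, fval, f_pval = het_white(resid, X)   # White: general forms of heteroskedasticity
    bp = het_breuschpagan(resid, X)                   # Breusch-Pagan: linear forms only
    print("White LM =", lm, "  p =", lm_pval)
    print("Breusch-Pagan LM =", bp[0], "  p =", bp[1])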