
Shapiro–Wilk test

The Shapiro–Wilk test is a test of normality. It was published in 1965 by Samuel Sanford Shapiro and Martin Wilk.

Theory

The Shapiro–Wilk test tests the null hypothesis that a sample x1, ..., xn came from a normally distributed population. The test statistic is

W = \frac{\left( \sum_{i=1}^{n} a_i x_{(i)} \right)^{2}}{\sum_{i=1}^{n} \left( x_i - \bar{x} \right)^{2}},

where

  • x_{(i)} (with parentheses enclosing the subscript index i) is the ith order statistic, i.e., the ith-smallest number in the sample (not to be confused with x_i, the ith observation in its original order).
  • \bar{x} = (x_1 + \cdots + x_n)/n is the sample mean.

The coefficients a_i are given by

(a_1, \dots, a_n) = \frac{m^{\mathsf{T}} V^{-1}}{C},

where C is the vector norm

C = \| V^{-1} m \| = \left( m^{\mathsf{T}} V^{-1} V^{-1} m \right)^{1/2}

and the vector

m = (m_1, \dots, m_n)^{\mathsf{T}}

is made of the expected values of the order statistics of independent and identically distributed random variables sampled from the standard normal distribution; finally, V is the covariance matrix of those normal order statistics.
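The expected order statistics m have no simple closed form for general n, but they can be estimated by simulation. The sketch below (variable names are illustrative, not from any particular implementation) draws many standard-normal samples, sorts each one, and averages columnwise:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 5, 100_000

# Draw many standard-normal samples of size n and sort each one;
# the column means estimate the expected order statistics m_1 <= ... <= m_n.
sims = np.sort(rng.standard_normal((reps, n)), axis=1)
m = sims.mean(axis=0)
```

By the symmetry of the standard normal distribution, m is antisymmetric about zero (m_i = -m_{n+1-i}), so the estimated values should increase monotonically and sum to approximately zero.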

There is no name for the distribution of W. The cutoff values for the statistic are calculated through Monte Carlo simulations.
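In practice the statistic and its p value are rarely computed by hand; SciPy, for instance, implements the test as scipy.stats.shapiro. A minimal sketch (the sample here is illustrative):

```python
import numpy as np
from scipy import stats

# Draw a sample from a normal population and test it for normality.
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=50)

result = stats.shapiro(sample)
W, p = result.statistic, result.pvalue
```

W always lies in (0, 1], with values near 1 indicating close agreement with normality.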

Interpretation

The null hypothesis of this test is that the population is normally distributed. If the p value is less than the chosen alpha level (e.g., .05), the null hypothesis is rejected: there is evidence that the data tested are not from a normally distributed population. If the p value is greater than the chosen alpha level, the null hypothesis cannot be rejected: the data are consistent with a normally distributed population.

Like most statistical significance tests, this test may detect even trivial departures from the null hypothesis if the sample size is sufficiently large (i.e., although there may be some statistically significant effect, it may be too small to be of any practical significance); thus, additional investigation of the effect size is typically advisable, e.g., a Q–Q plot in this case.
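The decision rule described above can be sketched with SciPy; the exponential sample below is a deliberately non-normal example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
skewed = rng.exponential(scale=1.0, size=200)  # strongly right-skewed, not normal

W, p = stats.shapiro(skewed)
alpha = 0.05
reject_normality = p < alpha  # expected: True, since exponential data are far from normal
```

With a sample this far from normal, W falls well below 1 and the p value is far below any conventional alpha level.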

Power analysis

In Monte Carlo simulations comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests, the Shapiro–Wilk test has the best power for a given significance level, followed closely by Anderson–Darling.

Approximation

Royston proposed an alternative method of calculating the coefficient vector, providing an algorithm that extends the maximum sample size from 50 to 2,000. This technique is used in several software packages, including GraphPad Prism, Stata, SPSS and SAS. Rahman and Govindarajulu extended the sample size further, up to 5,000.

See also

  • Anderson–Darling test
  • Cramér–von Mises criterion
  • D'Agostino's K-squared test
  • Kolmogorov–Smirnov test
  • Lilliefors test
  • Normal probability plot
  • Shapiro–Francia test

External links

  • Worked example using Excel
  • Algorithm AS R94 (Shapiro Wilk) FORTRAN code
  • Exploratory analysis using the Shapiro–Wilk normality test in R
  • Real Statistics Using Excel: the Shapiro-Wilk Expanded Test

Text submitted to CC-BY-SA license. Source: Shapiro–Wilk test by Wikipedia (Historical)