D'Agostino's K-squared test

In statistics, D'Agostino's K2 test, named for Ralph D'Agostino, is a goodness-of-fit measure of departure from normality; that is, the test aims to gauge the compatibility of given data with the null hypothesis that the data are a realization of independent, identically distributed Gaussian random variables. The test is based on transformations of the sample kurtosis and skewness, and it has power only against alternatives under which the distribution is skewed and/or kurtic.

Skewness and kurtosis

In the following, { xi } denotes a sample of n observations, g1 and g2 are the sample skewness and kurtosis, mj's are the j-th sample central moments, and [math]\displaystyle{ \bar{x} }[/math] is the sample mean. Frequently in the literature related to normality testing, the skewness and kurtosis are denoted as [math]\displaystyle{ \sqrt{\beta_1} }[/math] and β2 respectively. Such notation can be inconvenient since, for example, [math]\displaystyle{ \sqrt{\beta_1} }[/math] can be a negative quantity.

The sample skewness and kurtosis are defined as

[math]\displaystyle{ \begin{align} & g_1 = \frac{ m_3 }{ m_2^{3/2} } = \frac{\frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^3}{\left( \frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^2 \right)^{3/2}}\ , \\ & g_2 = \frac{ m_4 }{ m_2^{2} }-3 = \frac{\frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^4}{\left( \frac{1}{n} \sum_{i=1}^n \left( x_i - \bar{x} \right)^2 \right)^2} - 3\ . \end{align} }[/math]
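For illustration, these definitions translate directly into code. The following is a minimal Python sketch; the function name sample_skewness_kurtosis is illustrative, not taken from any particular library.

<syntaxhighlight lang="python">
import numpy as np

def sample_skewness_kurtosis(x):
    """Moment estimators g1 (skewness) and g2 (excess kurtosis), as defined above."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    m2 = np.mean(dev**2)   # second sample central moment
    m3 = np.mean(dev**3)   # third sample central moment
    m4 = np.mean(dev**4)   # fourth sample central moment
    return m3 / m2**1.5, m4 / m2**2 - 3.0
</syntaxhighlight>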

The quantities g1 and g2 consistently estimate the theoretical skewness and kurtosis of the distribution, respectively. Moreover, if the sample indeed comes from a normal population, then the exact finite-sample distributions of the skewness and kurtosis can themselves be analysed in terms of their means μ1, variances μ2, skewnesses γ1, and kurtoses γ2. This was done by (Pearson 1931), who derived the following expressions:[better source needed]

[math]\displaystyle{ \begin{align} & \mu_1(g_1) = 0, \\ & \mu_2(g_1) = \frac{ 6(n-2) }{ (n+1)(n+3) }, \\ & \gamma_1(g_1) \equiv \frac{\mu_3(g_1)}{\mu_2(g_1)^{3/2}} = 0, \\ & \gamma_2(g_1) \equiv \frac{\mu_4(g_1)}{\mu_2(g_1)^{2}}-3 = \frac{ 36(n-7)(n^2+2n-5) }{ (n-2)(n+5)(n+7)(n+9) }. \end{align} }[/math]

and

[math]\displaystyle{ \begin{align} & \mu_1(g_2) = - \frac{6}{n+1}, \\ & \mu_2(g_2) = \frac{ 24n(n-2)(n-3) }{ (n+1)^2(n+3)(n+5) }, \\ & \gamma_1(g_2) \equiv \frac{\mu_3(g_2)}{\mu_2(g_2)^{3/2}} = \frac{6(n^2-5n+2)}{(n+7)(n+9)} \sqrt{\frac{6(n+3)(n+5)}{n(n-2)(n-3)}}, \\ & \gamma_2(g_2) \equiv \frac{\mu_4(g_2)}{\mu_2(g_2)^{2}}-3 = \frac{ 36(15n^6-36n^5-628n^4+982n^3+5777n^2-6402n+900) }{ n(n-3)(n-2)(n+7)(n+9)(n+11)(n+13) }. \end{align} }[/math]

For example, a sample of size n = 1000 drawn from a normally distributed population can be expected to have a skewness of 0 (SD 0.08) and a kurtosis of 0 (SD 0.15), where SD indicates the standard deviation.[citation needed]
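These moment expressions are straightforward to evaluate numerically. The sketch below (with the hypothetical helper name pearson_moments) reproduces the standard deviations quoted above for n = 1000.

<syntaxhighlight lang="python">
def pearson_moments(n):
    """Exact null moments of g1 and g2 (Pearson 1931), per the expressions above."""
    return {
        "mu2_g1":    6.0*(n - 2) / ((n + 1)*(n + 3)),
        "gamma2_g1": 36.0*(n - 7)*(n**2 + 2*n - 5)
                     / ((n - 2)*(n + 5)*(n + 7)*(n + 9)),
        "mu1_g2":    -6.0 / (n + 1),
        "mu2_g2":    24.0*n*(n - 2)*(n - 3) / ((n + 1)**2*(n + 3)*(n + 5)),
        "gamma1_g2": 6.0*(n**2 - 5*n + 2) / ((n + 7)*(n + 9))
                     * (6.0*(n + 3)*(n + 5) / (n*(n - 2)*(n - 3)))**0.5,
    }

m = pearson_moments(1000)
print(m["mu2_g1"]**0.5, m["mu2_g2"]**0.5)   # ≈ 0.077 and 0.154, the SDs quoted above
</syntaxhighlight>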

Transformed sample skewness and kurtosis

The sample skewness g1 and kurtosis g2 are both asymptotically normal. However, the rate of their convergence to the limiting distribution is frustratingly slow, especially for g2. For example, even with n = 5000 observations the sample kurtosis g2 has skewness and kurtosis of approximately 0.3, which is not negligible. To remedy this situation, it has been suggested to transform the quantities g1 and g2 in a way that makes their distribution as close to standard normal as possible.

In particular, D'Agostino and Pearson suggested the following transformation for sample skewness:

[math]\displaystyle{ Z_1(g_1) = \delta \operatorname{asinh}\left( \frac{g_1}{\alpha\sqrt{\mu_2}} \right), }[/math]

where constants α and δ are computed as

[math]\displaystyle{ \begin{align} & W^2 = \sqrt{2\gamma_2 + 4} - 1, \\ & \delta = 1 / \sqrt{\ln W}, \\ & \alpha^2 = 2 / (W^2-1), \end{align} }[/math]

and where μ2 = μ2(g1) is the variance of g1 and γ2 = γ2(g1) is its kurtosis, as given by the expressions in the previous section.
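The transformation can then be coded directly from these formulas. A sketch, reusing the hypothetical pearson_moments helper from the previous section:

<syntaxhighlight lang="python">
import math

def z1_skewness(g1, n):
    """D'Agostino's transformed skewness; approximately N(0,1) under normality."""
    m = pearson_moments(n)
    mu2, gamma2 = m["mu2_g1"], m["gamma2_g1"]
    W2 = math.sqrt(2.0*gamma2 + 4.0) - 1.0
    delta = 1.0 / math.sqrt(0.5 * math.log(W2))   # ln W = (1/2) ln W^2
    alpha2 = 2.0 / (W2 - 1.0)
    return delta * math.asinh(g1 / math.sqrt(alpha2 * mu2))
</syntaxhighlight>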

Similarly, Anscombe and Glynn suggested a transformation for g2, which works reasonably well for sample sizes of 20 or greater:

[math]\displaystyle{ Z_2(g_2) = \sqrt{\frac{9A}{2}} \left\{1 - \frac{2}{9A} - \left(\frac{ 1-2/A }{ 1+\frac{g_2-\mu_1}{\sqrt{\mu_2}}\sqrt{2/(A-4)} }\right)^{\!1/3}\right\}, }[/math]

where

[math]\displaystyle{ A = 6 + \frac{8}{\gamma_1} \left( \frac{2}{\gamma_1} + \sqrt{1+4/\gamma_1^2}\right), }[/math]

and μ1 = μ1(g2), μ2 = μ2(g2), γ1 = γ1(g2) are the quantities computed by Pearson.
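A corresponding sketch for the kurtosis transformation follows (again reusing the hypothetical pearson_moments helper; the signed cube root is a defensive choice of ours, since the bracketed ratio can turn negative for extreme kurtosis values):

<syntaxhighlight lang="python">
def z2_kurtosis(g2, n):
    """Anscombe–Glynn transformed kurtosis; approximately N(0,1) for n >= 20."""
    m = pearson_moments(n)
    mu1, mu2, gamma1 = m["mu1_g2"], m["mu2_g2"], m["gamma1_g2"]
    A = 6.0 + 8.0/gamma1 * (2.0/gamma1 + math.sqrt(1.0 + 4.0/gamma1**2))
    t = (1.0 - 2.0/A) / (1.0 + (g2 - mu1)/math.sqrt(mu2) * math.sqrt(2.0/(A - 4.0)))
    cbrt = math.copysign(abs(t)**(1.0/3.0), t)   # signed cube root
    return math.sqrt(9.0*A/2.0) * (1.0 - 2.0/(9.0*A) - cbrt)
</syntaxhighlight>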

Omnibus K2 statistic

The statistics Z1 and Z2 can be combined to produce an omnibus test, able to detect deviations from normality due to either skewness or kurtosis (D'Agostino and Belanger):

[math]\displaystyle{ K^2 = Z_1(g_1)^2 + Z_2(g_2)^2\, }[/math]

If the null hypothesis of normality is true, then K2 is approximately χ2-distributed with 2 degrees of freedom.
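Putting the pieces together, a sketch of the full test, combining the helper functions from the previous sections, with the p-value taken from the upper tail of the χ2(2) distribution via SciPy:

<syntaxhighlight lang="python">
from scipy.stats import chi2

def k2_test(x):
    """Omnibus K^2 statistic and its approximate chi-squared(2) p-value."""
    g1, g2 = sample_skewness_kurtosis(x)
    n = len(x)
    K2 = z1_skewness(g1, n)**2 + z2_kurtosis(g2, n)**2
    return K2, chi2.sf(K2, df=2)   # sf = upper-tail probability
</syntaxhighlight>

For reference, SciPy's scipy.stats.normaltest implements this same skewness-plus-kurtosis test and can serve as a cross-check.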

Note that the statistics g1 and g2 are not independent, only uncorrelated. Therefore, their transforms Z1 and Z2 will also be dependent (Shenton and Bowman), rendering the validity of the χ2 approximation questionable. Simulations show that under the null hypothesis the K2 test statistic has the following characteristics:

                       expected value   standard deviation   95% quantile
  n = 20                    1.971             2.339              6.373
  n = 50                    2.017             2.308              6.339
  n = 100                   2.026             2.267              6.271
  n = 250                   2.012             2.174              6.129
  n = 500                   2.009             2.113              6.063
  n = 1000                  2.000             2.062              6.038
  χ2(2) distribution        2.000             2.000              5.991
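A simulation of this kind is easy to reproduce with the helpers sketched above; the replication count and seed below are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

def null_k2_summary(n, reps=100_000, seed=0):
    """Monte Carlo mean, SD and 95% quantile of K^2 under the null hypothesis."""
    rng = np.random.default_rng(seed)
    k2 = np.empty(reps)
    for i in range(reps):
        x = rng.standard_normal(n)
        g1, g2 = sample_skewness_kurtosis(x)
        k2[i] = z1_skewness(g1, n)**2 + z2_kurtosis(g2, n)**2
    return k2.mean(), k2.std(), np.quantile(k2, 0.95)
</syntaxhighlight>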
