Design matrix

In statistics and in particular in regression analysis, a design matrix, also known as a model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables for a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model.[1][2][3] It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables.

The design matrix contains data on the independent variables (also called explanatory variables) in a statistical model that is intended to explain observed data on a response variable (often called a dependent variable). The theory relating to such models uses the design matrix as input to some linear algebra: see for example linear regression. A notable feature of the design matrix is that it can represent a number of different experimental designs and statistical models, e.g., ANOVA, ANCOVA, and linear regression.

Definition

The design matrix is defined to be a matrix [math]\displaystyle{ X }[/math] such that [math]\displaystyle{ X_{ij} }[/math] (the entry in the ith row and jth column of [math]\displaystyle{ X }[/math]) represents the value of the jth variable associated with the ith object.

A regression model may be represented via matrix multiplication as

[math]\displaystyle{ y=X\beta+e, }[/math]

where X is the design matrix, [math]\displaystyle{ \beta }[/math] is a vector of the model's coefficients (one for each variable), [math]\displaystyle{ e }[/math] is a vector of random errors with mean zero, and y is the vector of observed responses, one for each object.

Size

The design matrix has dimension n-by-p, where n is the number of samples observed, and p is the number of variables (features) measured in all samples.[4][5]

In this representation, different rows typically represent different repetitions of an experiment, while columns represent different types of data (say, the results from particular probes). For example, suppose an experiment is run where 10 people are pulled off the street and asked 4 questions. The data matrix M would be a 10×4 matrix (meaning 10 rows and 4 columns). The datum in row i and column j of this matrix would be the answer of the ith person to the jth question.
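As a minimal sketch of this layout (using NumPy, with made-up survey answers), the following constructs such a 10×4 data matrix and indexes one entry:

```python
import numpy as np

# Hypothetical data: 10 respondents, 4 questions, answers on a 1-5 scale
rng = np.random.default_rng(0)
M = rng.integers(1, 6, size=(10, 4))

print(M.shape)   # (10, 4): n = 10 samples, p = 4 variables
print(M[2, 1])   # answer of person 3 to question 2 (NumPy indexing is zero-based)
```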

Examples

Arithmetic mean

The design matrix for an arithmetic mean is a column vector of ones.
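This can be checked numerically: fitting the model y = 1·μ + ε by ordinary least squares recovers the sample mean. A minimal sketch using NumPy, with made-up data:

```python
import numpy as np

y = np.array([2.0, 4.0, 6.0, 8.0])   # hypothetical observations
X = np.ones((len(y), 1))             # design matrix: a single column of ones

# Least-squares fit of y = X*mu + e; mu_hat equals the sample mean
mu_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(mu_hat[0], y.mean())           # both print 5.0
```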

Simple linear regression

This section gives an example of simple linear regression—that is, regression with only a single explanatory variable—with seven observations. The seven data points are {yi, xi}, for i = 1, 2, …, 7. The simple linear regression model is

[math]\displaystyle{ y_i = \beta_0 + \beta_1 x_i +\varepsilon_i, \, }[/math]

where [math]\displaystyle{ \beta_0 }[/math] is the y-intercept and [math]\displaystyle{ \beta_1 }[/math] is the slope of the regression line. This model can be represented in matrix form as

[math]\displaystyle{ \begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix}1 & x_1 \\1 & x_2 \\1 & x_3 \\1 & x_4 \\1 & x_5 \\1 & x_6 \\ 1 & x_7 \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix} }[/math]

where the first column of ones in the design matrix allows estimation of the y-intercept while the second column contains the x-values associated with the corresponding y-values. The 7×2 matrix whose columns are the ones and the x-values is the design matrix for this example.
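The following sketch builds this 7×2 design matrix and fits the model by ordinary least squares with NumPy; the x- and y-values are made up for illustration:

```python
import numpy as np

# Hypothetical data for the seven observations
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9])

# Design matrix: a column of ones (intercept) next to the x-values
X = np.column_stack([np.ones_like(x), x])    # shape (7, 2)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # [beta0_hat, beta1_hat]: estimated intercept and slope
```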

Multiple regression

This section contains an example of multiple regression with two covariates (explanatory variables): w and x. Again suppose that the data consist of seven observations, and that for each observed value to be predicted ([math]\displaystyle{ y_i }[/math]), values [math]\displaystyle{ w_i }[/math] and [math]\displaystyle{ x_i }[/math] of the two covariates are also observed. The model to be considered is

[math]\displaystyle{ y_i = \beta_0 + \beta_1 w_i + \beta_2 x_i + \varepsilon_i }[/math]

This model can be written in matrix terms as

[math]\displaystyle{ \begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix} 1 & w_1 & x_1 \\1 & w_2 & x_2 \\1 & w_3 & x_3 \\1 & w_4 & x_4 \\1 & w_5 & x_5 \\1 & w_6 & x_6 \\ 1& w_7 & x_7 \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix} }[/math]

Here the 7×3 matrix on the right side is the design matrix.
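Adding a covariate simply adds a column to the design matrix. A sketch analogous to the one above, with hypothetical values for w and x:

```python
import numpy as np

# Hypothetical values for the two covariates and the response
w = np.array([0.5, 1.1, 1.9, 2.4, 3.0, 3.8, 4.2])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([3.2, 5.1, 7.4, 8.9, 11.0, 13.3, 14.6])

# 7x3 design matrix: intercept column, then one column per covariate
X = np.column_stack([np.ones_like(w), w, x])

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # [beta0_hat, beta1_hat, beta2_hat]
```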

One-way ANOVA (cell means model)

This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group and the final two observations belonging to the third group. If the model to be fit is just the mean of each group, then the model is

[math]\displaystyle{ y_{ij} = \mu_i + \varepsilon_{ij} }[/math]

which can be written

[math]\displaystyle{ \begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix}1 & 0 & 0 \\1 &0 &0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix}\mu_1 \\ \mu_2 \\ \mu_3 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix} }[/math]

In this model [math]\displaystyle{ \mu_i }[/math] represents the mean of the [math]\displaystyle{ i }[/math]th group.
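Because each row of the design matrix contains a single 1 selecting that observation's group, least squares here simply computes each group's mean. A sketch with hypothetical responses:

```python
import numpy as np

# Group membership of the seven observations: 3 in group 1, 2 in group 2, 2 in group 3
groups = np.array([0, 0, 0, 1, 1, 2, 2])             # zero-based group labels
y = np.array([4.1, 3.9, 4.0, 6.2, 5.8, 9.1, 8.9])    # hypothetical responses

# Indicator (one-hot) design matrix: X[i, g] = 1 iff observation i is in group g
X = np.eye(3)[groups]                                 # shape (7, 3)

mu_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(mu_hat)                                         # estimated group means
print([y[groups == g].mean() for g in range(3)])      # identical values
```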

One-way ANOVA (offset from reference group)

The ANOVA model could be equivalently written as each group parameter [math]\displaystyle{ \tau_i }[/math] being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group, with the control group serving as the "reference". In this example, group 1 is chosen as the reference group. As such, the model to be fit is

[math]\displaystyle{ y_{ij} = \mu + \tau_i + \varepsilon_{ij} }[/math]

with the constraint that [math]\displaystyle{ \tau_1 }[/math] is zero.

[math]\displaystyle{ \begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{bmatrix} = \begin{bmatrix}1 &0 &0 \\1 &0 &0 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 0 & 1\end{bmatrix} \begin{bmatrix}\mu \\ \tau_2 \\ \tau_3 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \\ \varepsilon_7 \end{bmatrix} }[/math]

In this model [math]\displaystyle{ \mu }[/math] is the mean of the reference group and [math]\displaystyle{ \tau_i }[/math] is the difference between group [math]\displaystyle{ i }[/math] and the reference group. [math]\displaystyle{ \tau_1 }[/math] does not appear in the matrix because its difference from the reference group (itself) is necessarily zero.
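A sketch of this parameterization with the same hypothetical data as above: the intercept estimate equals the reference group's mean, and each τ estimate equals that group's mean minus the reference mean.

```python
import numpy as np

groups = np.array([0, 0, 0, 1, 1, 2, 2])
y = np.array([4.1, 3.9, 4.0, 6.2, 5.8, 9.1, 8.9])

# Intercept column plus indicator columns for groups 2 and 3 (group 1 is the reference)
X = np.column_stack([np.ones(7),
                     (groups == 1).astype(float),
                     (groups == 2).astype(float)])

mu_hat, tau2_hat, tau3_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(mu_hat)               # mean of the reference group (group 1)
print(tau2_hat, tau3_hat)   # offsets of groups 2 and 3 from the reference
```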

References

  1. Everitt, B. S. (2002). Cambridge Dictionary of Statistics (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 0-521-81099-X.
  2. Box, G. E. P.; Tiao, G. C. (1992). Bayesian Inference in Statistical Analysis. New York: John Wiley and Sons. Section 8.1.1. ISBN 0-471-57428-7.
  3. Timm, Neil H. (2007). Applied Multivariate Analysis. Springer Science & Business Media. p. 107. ISBN 978-0-387-22771-9. https://books.google.com/books?id=vtiyg6fnnskC&pg=PA107.
  4. Johnson, Richard A.; Wichern, Dean W. (2001). Applied Multivariate Statistical Analysis. Pearson. pp. 111–112. ISBN 0-13-187715-1.
  5. "Basic Concepts for Multivariate Statistics", p. 2. SAS Institute. https://support.sas.com/publishing/pubcat/chaps/56902.pdf.

Further reading

  • Verbeek, Albert (1984). "The Geometry of Model Selection in Regression". In Dijkstra, Theo K. (ed.). Misspecification Analysis. New York: Springer. pp. 20–36. ISBN 0-387-13893-5.