8.10 - Two-way MANOVA Additive Model and Assumptions

An additive model is developed for two-way multivariate analysis of variance (MANOVA), together with its assumptions.

Two-way MANOVA Additive Model

\(\underset{\mathbf{Y}_{ij}}{\underbrace{\left(\begin{array}{c}Y_{ij1}\\Y_{ij2}\\ \vdots \\ Y_{ijp}\end{array}\right)}} = \underset{\mathbf{\nu}}{\underbrace{\left(\begin{array}{c}\nu_1 \\ \nu_2 \\ \vdots \\ \nu_p \end{array}\right)}}+\underset{\mathbf{\alpha}_{i}}{\underbrace{\left(\begin{array}{c} \alpha_{i1} \\ \alpha_{i2} \\ \vdots \\ \alpha_{ip}\end{array}\right)}}+\underset{\mathbf{\beta}_{j}}{\underbrace{\left(\begin{array}{c}\beta_{j1} \\ \beta_{j2} \\ \vdots \\ \beta_{jp}\end{array}\right)}} + \underset{\mathbf{\epsilon}_{ij}}{\underbrace{\left(\begin{array}{c}\epsilon_{ij1} \\ \epsilon_{ij2} \\ \vdots \\ \epsilon_{ijp}\end{array}\right)}}\)

In this model:

  • \(\mathbf{Y}_{ij}\) is the p × 1 vector of observations for treatment i in block j;

This vector of observations is written as the sum of the following components:

  • \(\nu_{k}\) is the overall mean for variable k; these are collected into the overall mean vector \(\boldsymbol{\nu}\)
  • \(\alpha_{ik}\) is the effect of treatment i on variable k; these are collected into the treatment effect vector \(\boldsymbol{\alpha}_{i}\)
  • \(\beta_{jk}\) is the effect of block j on variable k; these are collected in the block effect vector \(\boldsymbol{\beta}_{j}\)
  • \(\varepsilon_{ijk}\) is the experimental error for treatment i, block j, and variable k; these are collected into the error vector \(\boldsymbol{\varepsilon}_{ij}\)
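The additive structure above can be sketched numerically. The following is a minimal simulation under hypothetical dimensions (a = 3 treatments, b = 4 blocks, p = 2 variables) and made-up effect values; the effect vectors are centered so they sum to zero, as in the usual parameterization.

```python
import numpy as np

# A minimal sketch of the additive model Y_ij = nu + alpha_i + beta_j + eps_ij,
# with hypothetical dimensions a = 3 treatments, b = 4 blocks, p = 2 variables.
rng = np.random.default_rng(0)
a, b, p = 3, 4, 2

nu = np.array([10.0, 20.0])                         # overall mean vector
alpha = np.array([[1., 0.], [-1., 2.], [0., -2.]])  # treatment effects (columns sum to zero)
beta = rng.normal(0, 1, size=(b, p))
beta -= beta.mean(axis=0)                           # center so block effects sum to zero
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])          # common covariance of the errors

# One error vector per treatment-block cell, drawn from N_p(0, Sigma)
eps = rng.multivariate_normal(np.zeros(p), Sigma, size=(a, b))
Y = nu + alpha[:, None, :] + beta[None, :, :] + eps  # observations, shape (a, b, p)
```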

Assumptions

The first four assumptions are the standard MANOVA assumptions; the fifth is an additional assumption required by the additive model.

  1. The error vectors \(\varepsilon_{ij}\) have zero population mean;
  2. The error vectors \(\varepsilon_{ij}\) have common variance-covariance matrix \(\Sigma\) (the usual assumption of a homogeneous variance-covariance matrix)
  3. The error vectors \(\varepsilon_{ij}\) are independently sampled;
  4. The error vectors \(\varepsilon_{ij}\) are sampled from a multivariate normal distribution;
  5. There is no block by treatment interaction. This means that the effect of the treatment is not affected by, or does not depend on the block.

We could define the treatment mean vector for treatment i such that:

\(\mu_i = \nu +\alpha_i\)

Here we could consider testing the null hypothesis that all of the treatment mean vectors are identical,

\(H_0\colon \boldsymbol{\mu_1 = \mu_2 = \dots = \mu_a}\)

or equivalently, the null hypothesis that there is no treatment effect:

\(H_0\colon \boldsymbol{\alpha_1 = \alpha_2 = \dots = \alpha_a = 0}\)

This is the same null hypothesis that we tested in the One-way MANOVA.

We would test this against the alternative hypothesis that there is a difference between at least one pair of treatments on at least one variable, or:

\(H_a\colon \mu_{ik} \ne \mu_{i'k}\) for at least one pair of treatments \(i \ne i'\) and at least one variable \(k\)

We will use standard dot notation to define mean vectors for treatments, mean vectors for blocks and a grand mean vector.

We define a mean vector for treatment i:

\(\mathbf{\bar{y}}_{i.} = \frac{1}{b}\sum_{j=1}^{b}\mathbf{Y}_{ij} = \left(\begin{array}{c}\bar{y}_{i.1}\\ \bar{y}_{i.2} \\ \vdots \\ \bar{y}_{i.p}\end{array}\right)\) = Sample mean vector for treatment i.

This vector collects the sample means for the ith treatment on each of the p variables; it is obtained by summing over the blocks and then dividing by the number of blocks b. The dot appears in the second subscript position, indicating that we sum over the second subscript, the position assigned to the blocks.

For example, \(\bar{y}_{i.k} = \frac{1}{b}\sum_{j=1}^{b}Y_{ijk}\) = Sample mean for variable k and treatment i.

Similarly, we define a mean vector for block j:

\(\mathbf{\bar{y}}_{.j} = \frac{1}{a}\sum_{i=1}^{a}\mathbf{Y}_{ij} = \left(\begin{array}{c}\bar{y}_{.j1}\\ \bar{y}_{.j2} \\ \vdots \\ \bar{y}_{.jp}\end{array}\right)\) = Sample mean vector for block j.

Here we sum over the treatments within each block, so the dot appears in the first subscript position. This vector collects the block means for each of the p variables.

For example, \(\bar{y}_{.jk} = \frac{1}{a}\sum_{i=1}^{a}Y_{ijk}\) = Sample mean for variable k and block j.

Finally, we define the Grand mean vector by summing all of the observation vectors over both the treatments and the blocks; hence the two dots:

\(\mathbf{\bar{y}}_{..} = \frac{1}{ab}\sum_{i=1}^{a}\sum_{j=1}^{b}\mathbf{Y}_{ij} = \left(\begin{array}{c}\bar{y}_{..1}\\ \bar{y}_{..2} \\ \vdots \\ \bar{y}_{..p}\end{array}\right)\) = Grand mean vector.

This involves dividing by a × b, the total number of observations.

For example, \(\bar{y}_{..k}=\frac{1}{ab}\sum_{i=1}^{a}\sum_{j=1}^{b}Y_{ijk}\) = Grand mean for variable k.
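With the observations stored as a three-dimensional array, all three dot-notation means are single axis reductions. A minimal sketch, assuming hypothetical data of shape (a, b, p):

```python
import numpy as np

# Hypothetical balanced data: a treatments x b blocks x p variables
rng = np.random.default_rng(1)
a, b, p = 3, 4, 2
Y = rng.normal(size=(a, b, p))   # Y[i, j] is the observation vector Y_ij

ybar_trt = Y.mean(axis=1)        # treatment means: average over blocks, shape (a, p)
ybar_blk = Y.mean(axis=0)        # block means: average over treatments, shape (b, p)
ybar_all = Y.mean(axis=(0, 1))   # grand mean: average over all cells, shape (p,)
```

In a balanced design the grand mean also equals the average of the treatment means (and of the block means), which makes a convenient sanity check.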

As before, we will define the Total Sum of Squares and Cross Products Matrix. This is the same definition that we used in the One-way MANOVA. It involves comparing the observation vectors for the individual subjects to the grand mean vector.

\(\mathbf{T = \sum_{i=1}^{a}\sum_{j=1}^{b}(Y_{ij}-\bar{y}_{..})(Y_{ij}-\bar{y}_{..})'}\)

Here, the \( \left(k, l \right)^{th}\) element of T is

\(\sum_{i=1}^{a}\sum_{j=1}^{b}(Y_{ijk}-\bar{y}_{..k})(Y_{ijl}-\bar{y}_{..l}).\)

  • For \( k = l \), this is the total sum of squares for variable k, and measures the total variation in variable k.
  • For \( k ≠ l \), this measures the association or dependency between variables k and l across all observations.
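The matrix T can be formed without explicit loops by flattening the treatment-by-block cells into rows. A sketch under the same hypothetical layout as above (a = 3, b = 4, p = 2):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, p = 3, 4, 2
Y = rng.normal(size=(a, b, p))   # hypothetical observations Y_ij
ybar_all = Y.mean(axis=(0, 1))   # grand mean vector

# T = sum over all (i, j) of the outer product of the centered observation vectors
D = (Y - ybar_all).reshape(-1, p)   # one centered row per observation vector
T = D.T @ D                         # p x p total SSCP matrix

# Diagonal entries T[k, k] are the total sums of squares for each variable;
# off-diagonal entries T[k, l] are the cross products measuring association.
```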

In this case the total sum of squares and cross products matrix may be partitioned into three component sum of squares and cross products matrices:

\begin{align} \mathbf{T} &= \underset{\mathbf{H}}{\underbrace{b\sum_{i=1}^{a}\mathbf{(\bar{y}_{i.}-\bar{y}_{..})(\bar{y}_{i.}-\bar{y}_{..})'}}}\\&+\underset{\mathbf{B}}{\underbrace{a\sum_{j=1}^{b}\mathbf{(\bar{y}_{.j}-\bar{y}_{..})(\bar{y}_{.j}-\bar{y}_{..})'}}}\\ &+\underset{\mathbf{E}}{\underbrace{\sum_{i=1}^{a}\sum_{j=1}^{b}\mathbf{(Y_{ij}-\bar{y}_{i.}-\bar{y}_{.j}+\bar{y}_{..})(Y_{ij}-\bar{y}_{i.}-\bar{y}_{.j}+\bar{y}_{..})'}}} \end{align}

As shown above:

  • H is the Treatment Sum of Squares and Cross Products matrix;
  • B is the Block Sum of Squares and Cross Products matrix;
  • E is the Error Sum of Squares and Cross Products matrix.

The \( \left(k, l \right)^{th}\) element of the Treatment Sum of Squares and Cross Products matrix H is

\(b\sum_{i=1}^{a}(\bar{y}_{i.k}-\bar{y}_{..k})(\bar{y}_{i.l}-\bar{y}_{..l})\)

  • If \( k = l \), this is the treatment sum of squares for variable k, and measures variation between treatments.
  • If \( k ≠ l \), this measures how variables k and l vary together across treatments.

The \( \left(k, l \right)^{th}\) element of the Block Sum of Squares and Cross Products matrix B is

\(a\sum_{j=1}^{b}(\bar{y}_{.jk}-\bar{y}_{..k})(\bar{y}_{.jl}-\bar{y}_{..l})\)

  • For \( k = l \), this is the block sum of squares for variable k, and measures variation among blocks.
  • For \( k ≠ l \), this measures how variables k and l vary together across blocks (not usually of much interest).

The \( \left(k, l \right)^{th}\) element of the Error Sum of Squares and Cross Products matrix E is

\(\sum_{i=1}^{a}\sum_{j=1}^{b}(Y_{ijk}-\bar{y}_{i.k}-\bar{y}_{.jk}+\bar{y}_{..k})(Y_{ijl}-\bar{y}_{i.l}-\bar{y}_{.jl}+\bar{y}_{..l})\)

  • For \( k = l \), this is the error sum of squares for variable k, and measures the variability of variable k within treatment and block combinations.
  • For \( k ≠ l \), this measures the association or dependence between variables k and l after you take into account treatment and block.
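The partition can be verified numerically by computing H, B, and E as defined above and checking that they sum to T. A sketch with the same hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, p = 3, 4, 2
Y = rng.normal(size=(a, b, p))   # hypothetical observations Y_ij

ybar_i = Y.mean(axis=1)          # treatment mean vectors, shape (a, p)
ybar_j = Y.mean(axis=0)          # block mean vectors, shape (b, p)
gbar = Y.mean(axis=(0, 1))       # grand mean vector, shape (p,)

H = b * (ybar_i - gbar).T @ (ybar_i - gbar)   # treatment SSCP matrix
B = a * (ybar_j - gbar).T @ (ybar_j - gbar)   # block SSCP matrix

# Residuals Y_ij - ybar_i. - ybar_.j + ybar_.. give the error SSCP matrix
R = (Y - ybar_i[:, None, :] - ybar_j[None, :, :] + gbar).reshape(-1, p)
E = R.T @ R

D = (Y - gbar).reshape(-1, p)
T = D.T @ D
assert np.allclose(T, H + B + E)  # the partition T = H + B + E holds exactly
```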
