# Three-Level Hierarchical Model with Across-Level Latent Variable Regression

We now consider a general xxM model for three-level data with observed and latent variables at multiple levels.

# Motivating Example

We may have multiple indicators of student reading achievement, teacher quality, and school resources. In this case, students are nested within teachers and teachers are nested within schools. We are interested in examining the effects of latent teacher quality and school resources on latent student achievement.

In this case, we have four observed indicators of student achievement at level-1, no observed variables at the teacher level, and three measures of school resources at level-3.

## Three Level Random Intercepts Model with Latent Regression

Simply put, these models are complex. It is easier to visualize them than to describe them in terms of equations or matrices. One thing to keep in mind is that xxM is intended to be flexible, so that a model can be specified in the most natural fashion. The following model can be specified in several equivalent ways; the simplest expression is presented here.

# Scalar Representation

## Within Level Equations

### Student submodel (Level-1)

The measurement model for the student achievement can be
presented as:

$y_{pijk}^1=\nu_p^1 + \lambda_p^{1,1} \times \eta_{1ijk}^1 + e_{pijk}^1$

where $$y_{pijk}^1$$ is the $$p^{th}$$ observed indicator for student $$i$$, nested within teacher $$j$$, nested within school $$k$$. The superscript 1 denotes the student level.

$\eta_{1ijk}^1 | \eta_{1jk}^2 \sim N(0,\psi_{1,1}^{1,1})$

$e_{pijk}^1 \sim N(0, \Theta^{1,1})$

The student model hypothesizes the following parameters:
1. $$(p-1)$$ factor loadings $$(\lambda_p^{1,1})$$, with the first factor loading fixed to 1.0 for scale identification.
2. A residual variance for each of the $$p$$ observed indicators $$(\theta_{p,p}^{1,1})$$.
3. A single latent residual variance $$(\psi_{1,1}^{1,1})$$. Note: the student-level latent variable is regressed on the teacher-level latent factor. As a result, $$(\psi_{1,1}^{1,1})$$ is a conditional or residual variance. This is discussed later.
4. A measurement intercept for each of the $$p$$ observed indicators $$(\nu_p^1)$$.

### Teacher submodel (Level-2)

The teacher level has a single latent variable with zero mean and unknown residual variance $$(\psi_{1,1}^{2,2}).$$

The superscript 2 is for the teacher level.

$\eta_{1jk}^2 | \eta_{1k}^3 \sim N(0,\psi_{1,1}^{2,2})$

### School submodel (Level-3)

The school level has two latent variables, each with a mean of zero and an unknown variance. The first is the school-level random intercept of student achievement; the second is the school-resource factor, measured by three school-level indicators. The school-level measurement model for the school-resource factor is:

$y_{pk}^3 = \nu_p^3 + \lambda_{2,p}^{3,3} \times \eta_{2k}^3 + e_{pk}^3$

$\eta_{2k}^3 \sim N(0,\psi_{2,2}^{3,3})$

$e_{pk}^3 \sim N(0,\Theta^{3,3})$

The school-level structural model, regressing the latent student-achievement intercept on the latent school-resource factor, is:

$\eta_{1k}^3 = \beta_{1,2}^{3,3} \times \eta_{2k}^3 + \xi_k^3$

$\xi_k^3 \sim N(0,\psi_{1,1}^{3,3})$

The structural model states that school level variability in student achievement is predicted by school resources. So far, our description has been limited to within-level models only. Latent variables representing random-intercepts for student achievement at the teacher and school levels were presented, but these have not yet been defined. Clearly, we need to link the latent student achievement factor
$$(\eta_{1ijk}^1)$$ with the corresponding teacher level intercept $$(\eta_{1jk}^2)$$. Similarly, we need to connect the school level intercept of student achievement to the student level achievement factor. There are many ways of specifying such links. Here we use a mediated effect approach. The effect of the school level intercept for student achievement on student level achievement is mediated by the teacher effect. In other words, we are envisioning regression among latent variables across levels.

## Between Level Equations

### Teacher To Student Effects

$\eta_{1ijk}^1 = \beta_{1,1}^{1,2} \times \eta_{1jk}^2 + \xi_{ijk}^1$

$\xi_{ijk}^1 \sim N(0,\psi_{1,1}^{1,1})$

Note:

1. The dependent variable is a level-1 latent variable (student achievement). The independent variable is a level-2 latent variable (teacher intercept of latent student achievement). This is reflected in the respective superscripts.
2. The superscript for the regression coefficient $$(\beta_{1,1}^{1,2})$$ indicates that the dependent variable is a level-1 variable and the independent variable is a level-2 variable.
3. The subscript for the regression coefficient is (1,1) meaning the first latent variable at level-1 is being regressed on the first latent variable at level-2. With single latent variables at both levels, this seems like overkill. However, with multiple variables, superscripts and subscripts become a necessary evil.
4. We return to the level-1 variance for the student achievement factor $$(\psi_{1,1}^{1,1})$$. This was incompletely specified in the student submodel.

### School To Teacher Effects

$\eta_{1jk}^2 = \beta_{1,1}^{2,3} \times \eta_{1k}^3 + \xi_{jk}^2$

$\xi_{jk}^2 \sim N(0,\psi_{1,1}^{2,2})$

Note:

1. The dependent variable is a level-2 latent variable (teacher intercept of latent student achievement). The independent variable is a level-3 latent variable (school intercept of latent student achievement). This is reflected in the respective superscripts.
2. The superscript for the regression coefficient $$(\beta_{1,1}^{2,3})$$ indicates that the dependent variable is a level-2 variable and the independent variable is a level-3 variable.
3. The subscript for the regression coefficient is (1, 1) meaning the first latent variable at level-2 is being regressed on the first latent variable at level-3. In this case, we have two latent variables at level-3. Hence, we could in principle have two latent regressions coefficients ($$\beta_{1,1}^{2,3}$$ & $$\beta_{1,2}^{2,3}$$). Subscripts make it clear which latent variables are involved.
4. We return to the level-2 variance for the teacher achievement intercept $$(\psi_{1,1}^{2,2})$$. This was incompletely specified in the teacher sub-model.
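Substituting the between-level equations into one another makes the mediated structure explicit:

$\eta_{1ijk}^1 = \beta_{1,1}^{1,2}\,\eta_{1jk}^2 + \xi_{ijk}^1 = \beta_{1,1}^{1,2}\left(\beta_{1,1}^{2,3}\,\eta_{1k}^3 + \xi_{jk}^2\right) + \xi_{ijk}^1$

Using the school-level structural equation $$\eta_{1k}^3 = \beta_{1,2}^{3,3}\,\eta_{2k}^3 + \xi_k^3$$ gives:

$\eta_{1ijk}^1 = \beta_{1,1}^{1,2}\,\beta_{1,1}^{2,3}\,\beta_{1,2}^{3,3}\,\eta_{2k}^3 + \beta_{1,1}^{1,2}\,\beta_{1,1}^{2,3}\,\xi_k^3 + \beta_{1,1}^{1,2}\,\xi_{jk}^2 + \xi_{ijk}^1$

In this model both linking coefficients are fixed to 1.0, so the effect of school resources on student achievement reduces to $$\beta_{1,2}^{3,3}$$, and the variance of $$\eta_{1ijk}^1$$ decomposes as $$\left(\beta_{1,2}^{3,3}\right)^2 \psi_{2,2}^{3,3} + \psi_{1,1}^{3,3} + \psi_{1,1}^{2,2} + \psi_{1,1}^{1,1}$$.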

# xxM Matrix equations

The scalar representation above is complex. The actual model in matrix form is simple:

$y^1=\nu^1 + \Lambda^{1,1} \times \eta^1 + e^1$

$y^3=\nu^3 + \Lambda^{3,3} \times \eta^3 + e^3$

$\eta^1=B^{1,2} \times \eta^2 + \xi^1$

$\eta^2=B^{2,3} \times \eta^3 + \xi^2$

$\eta^3=B^{3,3} \times \eta^3 + \xi^3$

$e^1 \sim N(0, \Theta^{1,1})$

$e^3 \sim N(0, \Theta^{3,3})$

$\xi^1 \sim N(0, \Psi^{1,1})$

$\xi^2 \sim N(0, \Psi^{2,2})$

$\xi^3 \sim N(0, \Psi^{3,3})$

Obviously, the actual structure of these matrices determines our model. The structure of each of these matrices is specified next.

# xxM Model Matrices

## Student submodel (Level-1)

$\Lambda_{pattern}^{1,1} = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \end{bmatrix}$
$\Lambda_{value}^{1,1} = \begin{bmatrix} 1.0 \\ 1.1 \\ 0.9 \\ 0.8 \end{bmatrix}$

The first factor-loading is fixed to 1.0. Hence, we need to fix the first parameter in the pattern matrix. The actual value at which the parameter is to be fixed is specified in the value matrix. In this case the first factor-loading is being fixed to a value of 1.0.
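In R, the pair of matrices can be constructed directly with `matrix()`. A minimal sketch (the object names are illustrative):

```r
# Pattern matrix: 0 = fixed, 1 = free. The first loading is fixed
# for scale identification; its fixed value lives in the value matrix.
lambda_pattern <- matrix(c(0, 1, 1, 1), nrow = 4, ncol = 1)

# Value matrix: the fixed value (1.0) for element (1,1),
# start values for the three free loadings.
lambda_value <- matrix(c(1.0, 1.1, 0.9, 0.8), nrow = 4, ncol = 1)
```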

### Observed Residual Covariance Matrix (Theta)

$\Theta_{pattern}^{1,1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$
$\Theta_{value}^{1,1} = \begin{bmatrix} 1.1 & 0 & 0 & 0 \\ 0 & 2.1 & 0 & 0 \\ 0 & 0 & 1.3 & 0 \\ 0 & 0 & 0 & 1.5 \end{bmatrix}$

The residual covariance matrix is diagonal, meaning we estimate only residual variances; residual covariances are all fixed to 0.0. Again, we use a pattern and a value matrix to fix all off-diagonal elements to 0.0.
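A diagonal pattern/value pair is conveniently built with `diag()`; a sketch with illustrative object names:

```r
# Pattern: free variances (1) on the diagonal, fixed zeros elsewhere.
theta_pattern <- diag(1, 4)

# Value: start values for the four residual variances;
# all off-diagonal covariances fixed at 0.0.
theta_value <- diag(c(1.1, 2.1, 1.3, 1.5))
```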

### Latent (Residual) Covariance Matrix (PSI)

$\psi_{pattern}^{1,1}=[1],\psi_{value}^{1,1}=[1.1]$

### Observed Variable Intercepts (nu)

$\nu_{pattern}^1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$
$\nu_{value}^1 = \begin{bmatrix} 1.1 \\ 2.1 \\ 1.3 \\ 0.71 \end{bmatrix}$

## Teacher submodel (Level-2)

### Latent (Residual) Covariance Matrix (PSI)

$\psi_{pattern}^{2,2}=[1],\psi_{value}^{2,2}=[0.05]$

## School submodel (Level-3)

$\Lambda_{pattern}^{3,3} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 1 \end{bmatrix}$
$\Lambda_{value}^{3,3} = \begin{bmatrix} 0.0 & 1.0 \\ 0.0 & 1.1 \\ 0.0 & 0.9 \end{bmatrix}$

There are three observed and two latent variables at level-3. Hence the factor-loading matrix is 3×2. The first latent variable is the school level random-intercept of the teacher-level random-intercept of student achievement. Clearly, the first latent variable cannot have school-level latent indicators. Hence, the first column is zero in both pattern and value matrices. The second latent variable is the school-resource factor measured by all three level-3 indicators. As always, the first factor loading is fixed to 1.0 to identify the latent measurement scale.
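The 3×2 structure can be assembled column by column; a sketch with illustrative object names:

```r
# Column 1: the random intercept of achievement has no school-level
# indicators, so every element is fixed at zero.
# Column 2: the school-resource factor, first loading fixed to 1.0.
lambda3_pattern <- cbind(c(0, 0, 0), c(0, 1, 1))
lambda3_value   <- cbind(c(0.0, 0.0, 0.0), c(1.0, 1.1, 0.9))
```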

### Observed Residual Covariance Matrix (Theta)

$\Theta_{pattern}^{3,3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
$\Theta_{value}^{3,3} = \begin{bmatrix} 1.1 & 0.0 & 0.0 \\ 0.0 & 2.1 & 0.0 \\ 0.0 & 0.0 & 1.3 \end{bmatrix}$

### Observed Variable Intercepts (nu)

$\nu_{pattern}^3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$
$\nu_{value}^3 = \begin{bmatrix} 1.1 \\ 2.1 \\ 0.7 \end{bmatrix}$

### Latent variable regression matrix (Beta)

$B_{pattern}^{3,3} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$
$B_{value}^{3,3} = \begin{bmatrix} 0.0 & 0.4 \\ 0.0 & 0.0 \end{bmatrix}$

There are two latent variables at level-3, and the first latent variable is regressed on the second. Hence, element (1, 2) is freely estimated. The other three elements are fixed to zero.

### Latent variable (Residual) Covariance matrix (PSI)

$\psi_{pattern}^{3,3} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$\psi_{value}^{3,3} = \begin{bmatrix} 0.3 & 0.0 \\ 0.0 & 0.7 \end{bmatrix}$

Like the theta matrix, the psi matrix is diagonal. $$\psi_{1,1}^{3,3}$$ is the variance of the first latent variable (the student-achievement random intercept) and represents the variance in the intercept factor unexplained by the school-resource factor. $$\psi_{2,2}^{3,3}$$ is the unconditional variance of the school-resource factor.

## Teacher To Student Effects (Level 2 to Level 1)

### Latent Variable Regression Matrix (Beta)

$B_{pattern}^{1,2} = [0], B_{value}^{1,2} = [1.0]$

This matrix links the teacher latent random-intercept variable with the student latent achievement variable. As indicated earlier, the value is fixed to 1.0. Note that the superscript has two elements: the first refers to the lower level (student) and the second to the higher level (teacher). This is always true for all linking matrices.

## School To Teacher Effects (Level 3 to Level 2)

### Latent Variable Regression Matrix (Beta)

$B_{pattern}^{2,3} = \begin{bmatrix} 0 & 0 \end{bmatrix}$
$B_{value}^{2,3} = \begin{bmatrix} 1.0 & 0.0 \end{bmatrix}$

This matrix links the school latent random-intercept of student achievement with the teacher latent intercept of achievement variable. There is a single latent variable at level-2 (teacher random-intercept of student achievement), but two latent variables at level-3 (school random-intercept of student achievement and school-resources). Only the school random-intercept of student achievement influences the teacher random intercept of student achievement. Hence, the first element is fixed to 1.0 and second element is fixed to 0.0.

# Model Matrices Summary

The following table provides a complete summary of parameter matrices:

Type, Matrix, Pattern:

level 1: $$\Theta$$
$\Theta^{1,1} = \begin{bmatrix} \theta_{1,1}^{1,1} \\ \theta_{2,1}^{1,1} & \theta_{2,2}^{1,1} \\ \theta_{3,1}^{1,1} & \theta_{3,2}^{1,1} & \theta_{3,3}^{1,1} \\ \theta_{4,1}^{1,1} & \theta_{4,2}^{1,1} & \theta_{4,3}^{1,1} & \theta_{4,4}^{1,1} \end{bmatrix}$
$\Theta^{1,1} = \begin{bmatrix} 1 \\ 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$
level 1: $$\nu$$
$\nu^{1} = \begin{bmatrix} \nu_1^1 \\ \nu_2^1 \\ \nu_3^1 \\ \nu_4^1 \end{bmatrix}$
$\nu^{1} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$
level 1: $$\Lambda$$
$\Lambda^{1,1} = \begin{bmatrix} \lambda_{1,1}^{1,1} \\ \lambda_{2,1}^{1,1} \\ \lambda_{3,1}^{1,1} \\ \lambda_{4,1}^{1,1} \end{bmatrix}$
$\Lambda^{1,1} = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \end{bmatrix}$
level 1: $$\Psi$$
$\Psi^{1,1} = [\psi_{1,1}^{1,1}]$
$\Psi^{1,1} = [1]$
level 2 to level 1: $$B$$
$B^{1,2} = \begin{bmatrix} \beta_{1,1}^{1,2} \end{bmatrix}$
$B^{1,2} = [0]$
level 2: $$\Psi$$
$\Psi^{2,2} = \begin{bmatrix} \psi_{1,1}^{2,2} \end{bmatrix}$
$\Psi^{2,2} = [1]$
level 3: $$\Theta$$
$\Theta^{3,3} = \begin{bmatrix} \theta_{1,1}^{3,3} & \\ \theta_{2,1}^{3,3} & \theta_{2,2}^{3,3} \\ \theta_{3,1}^{3,3} & \theta_{3,2}^{3,3} & \theta_{3,3}^{3,3} \end{bmatrix}$
$\Theta^{3,3} = \begin{bmatrix} 1 \\ 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}$
level 3: $$\nu$$
$\nu^{3} = \begin{bmatrix} \nu_1^3 \\ \nu_2^3 \\ \nu_3^3 \\ \end{bmatrix}$
$\nu^{3} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}$
level 3 to level 2: $$B$$
$B^{2,3} = \begin{bmatrix} \beta_{1,1}^{2,3} & \beta_{1,2}^{2,3} \end{bmatrix}$
$B^{2,3} = \begin{bmatrix} 0 & 0 \end{bmatrix}$
level 3: $$\Lambda$$
$\Lambda^{3,3} = \begin{bmatrix} 0 & \lambda_{1,2}^{3,3} \\ 0 & \lambda_{2,2}^{3,3} \\ 0 & \lambda_{3,2}^{3,3} \end{bmatrix}$
$\Lambda^{3,3} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 1 \end{bmatrix}$
level 3: $$B$$
$B^{3,3} = \begin{bmatrix} 0 & \beta_{1,2}^{3,3} \\ 0 & 0 \end{bmatrix}$
$B^{3,3} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$
level 3: $$\Psi$$
$\Psi^{3,3} = \begin{bmatrix} \psi_{1,1}^{3,3} & \\ \psi_{2,1}^{3,3} & \psi_{2,2}^{3,3} \end{bmatrix}$
$\Psi^{3,3} = \begin{bmatrix} 1 & \\ 0 & 1 \end{bmatrix}$

# Code Listing


## xxM

The complete listing of xxM code is as follows:

### Construct R-matrices

For each parameter matrix, construct three related matrices:

1. pattern matrix: indicates which parameters are free (1) and which are fixed (0).
2. value matrix: provides start values for free parameters and fixed values for fixed parameters.
3. label matrix: provides a user-friendly label for each parameter. The label matrix is optional.
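The optional label matrix is an ordinary character matrix with the same shape as the pattern matrix; a sketch (the labels and object name are illustrative):

```r
# One user-friendly name per parameter, matching the 4x1 shape of the
# student factor-loading pattern matrix.
lambda_label <- matrix(c("l1", "l2", "l3", "l4"), nrow = 4, ncol = 1)
```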

### Construct main model object

xxmModel() is used to declare level names. The function returns a model object that is passed as a parameter to subsequent statements.
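A minimal sketch for this example (the object name `threelevel` is illustrative; the argument name follows the xxM documentation):

```r
library(xxm)

# Declare the three levels; the returned model object is
# extended by subsequent xxm* calls.
threelevel <- xxmModel(levels = c("student", "teacher", "school"))
```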

### Add submodels to the model objects

For each declared level, xxmSubmodel() is invoked to add the corresponding submodel to the model object. The function adds three pieces of information:
1. parents: declares a list of parents of the current level.
2. variables: declares names of observed dependent (ys), observed independent (xs), and latent variables (etas) for the level.
3. data: an R data object for the current level.
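A hedged sketch of the call for the student level (the model object `threelevel`, the variable names, and the data frame `student_data` are illustrative; `xs` is omitted because this level has no observed predictors):

```r
# Add the student submodel: four achievement indicators, one latent
# achievement factor, teacher as the parent level.
threelevel <- xxmSubmodel(model = threelevel, level = "student",
                          parents = c("teacher"),
                          ys = c("y1", "y2", "y3", "y4"),
                          etas = c("achievement"),
                          data = student_data)
```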

### Add Within-level parameter matrices for each submodel

For each declared level, xxmWithinMatrix() is used to add within-level parameter matrices. For each parameter matrix, the function adds the three matrices constructed earlier: pattern, value, and label (optional).
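A sketch for the student factor-loading matrix (object names are illustrative; `lambda_pattern` and `lambda_value` are pattern/value matrices constructed earlier):

```r
# Add a within-level parameter matrix: the student-level loadings.
threelevel <- xxmWithinMatrix(model = threelevel, level = "student",
                              type = "lambda",
                              pattern = lambda_pattern,
                              value = lambda_value)
```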

### Add Across-level parameter matrices to the model

Pairs of levels that share a parent-child relationship are connected by regression relationships. xxmBetweenMatrix() is used to add the corresponding parameter matrices connecting the two levels.

• Level with the independent variable is the parent level.
• Level with the dependent variable is the child level.

For each parameter matrix, the function adds the three matrices constructed earlier:

• pattern
• value
• label (optional)
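A sketch for the teacher-to-student link (object names are illustrative); the single element of $$B^{1,2}$$ is fixed, with value 1.0:

```r
# Link teacher (parent, independent side) to student (child,
# dependent side): pattern 0 = fixed, value 1.0.
threelevel <- xxmBetweenMatrix(model = threelevel,
                               parent = "teacher", child = "student",
                               type = "beta",
                               pattern = matrix(0, 1, 1),
                               value = matrix(1, 1, 1))
```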

### Estimate model parameters

The estimation process is initiated by xxmRun(). If all goes well, a brief printed summary of results is produced.

### Estimate profile-likelihood confidence intervals

Once parameters are estimated, confidence intervals are estimated by invoking xxmCI(). Depending on the number of observations and the complexity of the dependence structure, xxmCI() may take a very long time. xxmCI() displays a table of parameter estimates and CIs.

### View results

A summary of results may be retrieved as an R list by a call to xxmSummary().

### Free model object

An xxM model object may hold a large amount of RAM outside of R's memory. This memory is released automatically when R's workspace is cleared by a call to rm(list=ls()) or at the end of the R session. Alternatively, xxmFree() may be called to release the memory.
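The estimation and post-processing steps above can be sketched as follows (assuming a model object named `threelevel`, which is illustrative):

```r
# Estimate, compute CIs, summarize, and free external memory.
threelevel <- xxmRun(model = threelevel)   # estimate parameters
xxmCI(model = threelevel)                  # profile-likelihood CIs (may be slow)
results <- xxmSummary(model = threelevel)  # estimates as an R list
xxmFree(model = threelevel)                # release memory held outside R
```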

For the current dataset, the parameter estimates are shown in a path diagram (not reproduced here). The mean structure is not illustrated in the diagram. The student and school $$\nu$$ matrices were estimated as:

$\nu^{1} = \begin{bmatrix} .526 \\ .571 \\ .592 \\ \end{bmatrix}$

and

$\nu^{3} = \begin{bmatrix} .129 \\ .144 \\ .081 \\ \end{bmatrix}$