Latent Growth Curve Model – Long Format

The model equations and matrices in the present example are identical to those in the previous example. However, two key features make the present model distinct: (1) level-1 observations represent repeated depression ratings nested within individuals, and (2) the level-1 predictor measures time since the study began (measurement occasion), leading to a latent growth curve (LGC) model. We have chosen to structure the data in a manner consistent with the mixed-effects modeling approach to LGC analysis, which also allows us to draw an explicit parallel between the present model and the more general random-slopes model (example 2), in which the level-1 predictor represents some other attribute of the level-1 unit. The data for the present example were drawn from the Reisby et al. (1977) study described in Hedeker’s (2004) introductory chapter on growth modeling. The outcome is ratings on the Hamilton depression rating scale (Hamilton, 1960). Depression scores were taken over a period of weeks: a baseline rating was taken at week 0, a second rating was taken after the subjects had spent a week on a placebo (week 1), and during the following four weeks (weeks 2-5) participants took an antidepressant drug.

Model Equations

Level-1

\[ HamD_{ij}= 1_{ij} \times \eta_{Intj} + week_{ij} \times \eta_{Slopej} + e_{ij} \]

Level-2

\[ \eta_{Intj} = \gamma_{00} + u_{0j} \] \[ \eta_{Slopej} = \gamma_{10} + u_{1j} \]

As in the previous model, the coefficients are latent variables and the predictors (\( 1_{ij} \) and \( week_{ij} \)) are fixed. The following path diagram has a one-to-one correspondence with the first- and second-level equations for a random-slopes model.

The above model is identical to the random-slopes model presented earlier:

\[ y_{ij}^1 = 1_{ij}^{1,2}  \times \eta_{1j}^2 + week_{ij}^{1,2} \times \eta_{2j}^2 + e_{ij}^1, \]

\[ \eta_{1j}^2 = \alpha_{1}^2 + \zeta_{1j}^2 \] \[ \eta_{2j}^2 = \alpha_{2}^2 + \zeta_{2j}^2 \]

Path diagram

[Figure: xxM path diagram for the latent growth curve model (lgc_long.xxmModel)]

xxM Model Matrices

Level-1: Within matrices

Residual Covariance Matrix

As before, we have a single dependent variable and hence a single level-1 residual variance parameter \( (\theta^{1,1}_{1,1}) \). The residual covariance (theta) matrix is therefore a (1×1) matrix:

\[
\Theta^{1,1}=
\begin{bmatrix}
\theta^{1,1}_{1,1} \\
\end{bmatrix}
\]

Level-2: Within Matrices

At level-2, we have two latent variables: intercept and slope. Hence, we have two latent means and a covariance matrix.

Latent Means

The latent variable mean matrix is a (2×1) matrix:
\[
\alpha^2=
\begin{bmatrix}
\alpha^2_1 \\
\alpha_2^2
\end{bmatrix}
\]

\( \alpha_1^2 \) is the mean of the intercept, and \( \alpha_2^2 \) is the mean of the slope parameter, or the average effect of \( week_{ij} \) on \( HamD_{ij} \), a measure of depression.

Latent Factor Covariance Matrix

The latent covariance matrix is a (2×2) matrix with two variances and a single covariance:
\[
\Psi^{2,2}=
\begin{bmatrix}
\psi^{2,2}_{1,1} \\
\psi^{2,2}_{2,1} & \psi^{2,2}_{2,2}
\end{bmatrix}
\]

\( \psi_{1,1}^{2,2} \) is the variance of the intercept factor, representing variability in the intercept of \( HamD_{ij} \) across persons, and \( \psi_{2,2}^{2,2} \) is the variance of the slope factor, representing between-persons variability in the effect of \( week_{ij} \) on \( HamD_{ij} \). Finally, \( \psi_{2,1}^{2,2} \) is the covariance between the intercept and slope factors.

Across level matrices: Person to Response

As described above, we need to capture the effect of level-2 intercept and slope factors on the level-1 dependent variable using a factor-loading matrix with fixed parameters.

Factor-loading matrix

The factor-loading matrix \( (\Lambda^{1,2}) \) has a single row and two columns (1×2):
\[
\Lambda^{1,2}=
\begin{bmatrix}
1.0 & week_{ij}
\end{bmatrix}
\]

The first column is fixed to 1.0, whereas the second column is fixed to the observation-specific value of \( week_{ij} \). Collectively, the within- and across-level matrices described here specify all of the parameters necessary to model individual differences in over-time trajectories of depression ratings across persons. Next, we provide xxM and SAS code for fitting LGC models.

Code Listing


xxM

As in the previous example, there is a single level-1 dependent variable and a single level-1 independent variable. The number of matrices is the same as before, and their dimensions are also identical:
  1. pattern matrix: We do not wish to estimate any factor-loadings; both are fixed (to 1.0 and to \( week_{ij} \), respectively). Hence, both elements of the pattern matrix are zero:\[ \Lambda^{pat}= \begin{bmatrix} 0.0 & 0.0 \end{bmatrix} \]
  2. value matrix: We want to fix the first factor-loading (intercept) to 1.0, and we use the value matrix to provide this fixed value. The second factor-loading does not have a single fixed value for every observation; instead, each observation \( (i) \) has its own value for that factor-loading \( (week_{ij}) \). Clearly, the value matrix cannot be used to provide individual-specific fixed values. Hence, the second element of the value matrix is left as 0.0; xxM ignores it internally.\[ \Lambda^{val}= \begin{bmatrix} 1.0 & 0.0 \end{bmatrix} \]
  3. label matrix: The job of fixing the second factor-loading is left to the label matrix. A label matrix assigns a label to each parameter within the matrix. Label matrices can be used to impose equality constraints across matrices: any two parameters with the same label are constrained to be equal. Label matrices are also used to specify that a particular parameter is to be fixed to data values. In this case, the first label is irrelevant, as that parameter has already been fixed to 1.0; we use the descriptive label \( l_{1,1} \) (something such as Justin would have worked as well). The second factor-loading is the one we are interested in: we want to fix it to the observation-specific values of the predictor \( (week_{ij}) \). This is accomplished by using a two-part label of the form levelName.predictorName. In this case, the predictor is a response-level variable, so the first part of the label is response; the second part is the actual predictor name, week.\[ \Lambda^{label}= \begin{bmatrix} l_{1,1} & response.week \end{bmatrix} \]
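In R, the three loading matrices described above might be constructed as follows (a sketch; the label for the first loading is arbitrary, and matrix orientation follows the (1×2) form given above):

```r
# (1 x 2) factor-loading matrices: intercept and slope loadings
lambda_pattern <- matrix(c(0, 0), 1, 2)             # both loadings fixed (no free parameters)
lambda_value   <- matrix(c(1, 0), 1, 2)             # intercept loading fixed to 1.0
lambda_label   <- matrix(c("l_1_1",                 # arbitrary label (loading already fixed)
                           "response.week"), 1, 2)  # bind slope loading to week values
```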

The complete listing of xxM code for the latent growth curve (long version) example follows:

Load xxM and data
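A minimal sketch of this step, assuming the level-1 and level-2 data are available as R data frames (the file names below are placeholders, not the actual tutorial files):

```r
library(xxm)  # assumes the xxM package is installed

# Hypothetical file names: one data set per level
response <- read.csv("lgc_response.csv")  # level-1: person id, week, hamd
person   <- read.csv("lgc_person.csv")    # level-2: one row per person
```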

Construct R-matrices

For each parameter matrix, construct three related matrices:

  1. pattern matrix: indicates which parameters are free and which are fixed.
  2. value matrix: provides start values for free parameters and fixed values for fixed parameters.
  3. label matrix: provides a user-friendly label for each parameter (optional).
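For example, the level-1 theta matrix and the level-2 psi and alpha matrices might be set up as follows (a sketch; all start values are illustrative, not taken from the tutorial):

```r
# Level-1 residual variance: (1 x 1), free
theta_pattern <- matrix(1, 1, 1)
theta_value   <- matrix(10, 1, 1)            # illustrative start value
theta_label   <- matrix("theta_1_1", 1, 1)   # optional label

# Level-2 latent covariance matrix: (2 x 2), all elements free
psi_pattern <- matrix(1, 2, 2)
psi_value   <- diag(c(10, 1))                # illustrative start values
psi_label   <- matrix(c("psi_1_1", "psi_2_1",
                        "psi_2_1", "psi_2_2"), 2, 2)

# Level-2 latent means: (2 x 1), free
alpha_pattern <- matrix(1, 2, 1)
alpha_value   <- matrix(c(20, -2), 2, 1)     # illustrative start values
alpha_label   <- matrix(c("alpha_1", "alpha_2"), 2, 1)
```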

Construct main model object

xxmModel() is used to declare level names. The function returns a model object that is passed as a parameter to subsequent statements.
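A sketch of this call, assuming the two levels are named response (level-1) and person (level-2):

```r
# Declare the levels of the model; returns a model object used by later calls
lgc <- xxmModel(levels = c("response", "person"))
```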

Add submodels to the model objects

For each declared level, xxmSubmodel() is invoked to add the corresponding submodel to the model object. The function adds three pieces of information:
1. parents: declares a list of parents of the current level.
2. variables: declares the names of observed dependent (ys), observed independent (xs), and latent (etas) variables for the level.
3. data: the R data object for the current level.
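A sketch of these calls, assuming a model object lgc returned by xxmModel(), data frames person and response, and illustrative variable names (the predictor week remains a column of the response data, referenced later via the label response.week):

```r
# Level-2 submodel: two latent variables (intercept and slope), no observed variables
lgc <- xxmSubmodel(model = lgc, level = "person",
                   parents = NULL, ys = NULL, xs = NULL,
                   etas = c("eta1", "eta2"),
                   data = person)

# Level-1 submodel: observed outcome hamd, with "person" as the parent level
lgc <- xxmSubmodel(model = lgc, level = "response",
                   parents = "person", ys = "hamd",
                   xs = NULL, etas = NULL,
                   data = response)
```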

Add Within-level parameter matrices for each submodel

For each declared level, xxmWithinMatrix() is used to add within-level parameter matrices. For each parameter matrix, the function adds the three matrices constructed earlier.
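A sketch of these calls, assuming the pattern, value, and label matrices constructed earlier:

```r
# Level-1 residual covariance (theta)
lgc <- xxmWithinMatrix(model = lgc, level = "response", type = "theta",
                       pattern = theta_pattern, value = theta_value,
                       label = theta_label)

# Level-2 latent covariances (psi) and latent means (alpha)
lgc <- xxmWithinMatrix(model = lgc, level = "person", type = "psi",
                       pattern = psi_pattern, value = psi_value,
                       label = psi_label)
lgc <- xxmWithinMatrix(model = lgc, level = "person", type = "alpha",
                       pattern = alpha_pattern, value = alpha_value,
                       label = alpha_label)
```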

Add Across-level parameter matrices to the model

Pairs of levels that share a parent-child relationship have regression relationships between them. xxmBetweenMatrix() is used to add the corresponding parameter matrices connecting the two levels.

  • Level with the independent variable is the parent level.
  • Level with the dependent variable is the child level.

For each parameter matrix, the function adds the three matrices constructed earlier:

  • pattern
  • value
  • label (optional)
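A sketch of this call for the factor-loading matrix, assuming the lambda matrices constructed earlier; person is the parent level (independent side) and response the child level (dependent side):

```r
# Loadings of the level-1 outcome on the level-2 intercept and slope factors;
# the "response.week" label fixes the slope loading to observation-specific week values
lgc <- xxmBetweenMatrix(model = lgc, parent = "person", child = "response",
                        type = "lambda",
                        pattern = lambda_pattern, value = lambda_value,
                        label = lambda_label)
```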

Estimate model parameters

The estimation process is initiated by xxmRun(). If all goes well, a brief printed summary of results is produced.
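Assuming the model object lgc built in the previous steps:

```r
# Estimate all free parameters; prints a brief summary on success
lgc <- xxmRun(lgc)
```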

Estimate profile-likelihood confidence intervals

Once parameters are estimated, confidence intervals are estimated by invoking xxmCI(). Depending on the number of observations and the complexity of the dependence structure, xxmCI() may take very long. xxmCI() displays a table of parameter estimates and CIs.
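Continuing the sketch with the estimated model object:

```r
# Profile-likelihood confidence intervals; may be slow for large models
lgc <- xxmCI(lgc)
```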

View results

A summary of results may be retrieved as an R list by a call to xxmSummary().
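Continuing the sketch:

```r
# Returns an R list containing fit information and parameter estimates
results <- xxmSummary(lgc)
```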

Free model object

An xxM model object may hog a large amount of RAM outside of R’s memory. This memory is released automatically when R’s workspace is cleared by a call to rm(list=ls()) or at the end of the R session. Alternatively, xxmFree() may be called to release the memory immediately.
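Continuing the sketch:

```r
# Release memory held outside of R's workspace
xxmFree(lgc)
```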

Proc Mixed

SAS code for a random-slopes model uses a CLASS statement to identify the level-2 units, in this case person. The MODEL statement estimates the fixed effects \( (\alpha) \). The RANDOM statement specifies that the level-1 intercepts and the effect of the level-1 predictor \( week_{ij} \) are allowed to vary across “subjects” (i.e., persons). The covariance matrix of the random effects (G) is freely estimated (specified by “type = UN”). The G matrix corresponds to the xxM \( \Psi \) matrix. Finally, like all regression models, Proc Mixed estimates the residual variance of the level-1 dependent variable \( (\theta_{1,1}) \) by default. The important thing to note is that there is a one-to-one correspondence between the parameters estimated by Proc Mixed and those of the SEM specification.
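The statements described above might look as follows (a sketch; the data set and variable names reisby, person, week, and hamd are illustrative, not the actual tutorial listing):

```sas
proc mixed data=reisby covtest;
  class person;
  model hamd = week / solution;      /* fixed effects: intercept and week slope */
  random intercept week
         / subject=person type=un g; /* G (psi): unstructured 2x2 covariance    */
run;
```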

For the current dataset, the parameter estimates are:

[Figure: parameter estimates for the latent growth curve model (lgc_long.results)]