Using R to run multilevel models

I'm learning how to run multilevel models in R.

I tried the one-way ANOVA with random effects, also known as the intercept-only (unconditional) model.

library(nlme)  # lme() is in the nlme package

fit <- lme(post_test ~ 1, random = ~1 | school, data = mySASData,
           control = list(opt = "optim"))
summary(fit)
anova(fit)
VarCorr(fit)

 

I ran the same model in SAS and got the same results, except that the degrees of freedom differed.

proc glimmix data=sashlm.core_2014_4_years;
class school;
model post_test=/solution ddfm=kr dist=normal link=identity;
random intercept /subject=school;
run;
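As a sketch of what this intercept-only model estimates, the following simulation (my own illustration; the variable names and true values are assumptions) recovers the between-school variance τ² and the within-school variance σ² using the classic ANOVA (method-of-moments) estimator for balanced data. Note that lme and PROC GLIMMIX estimate these components by (RE)ML rather than this formula:

```python
# A sketch of what the intercept-only model estimates: between-school
# variance tau^2 and within-school variance sigma^2. For balanced data,
# the classic ANOVA (method-of-moments) estimator recovers them as
#   sigma2_hat = MSW,   tau2_hat = (MSB - MSW) / n_per_school.
import random
import statistics

random.seed(2)
tau, sigma = 4.0, 8.0          # true SDs: between-school and within-school
n_schools, n_per = 200, 30

schools = []
for _ in range(n_schools):
    u = random.gauss(0, tau)   # school-specific random intercept u_j
    schools.append([50 + u + random.gauss(0, sigma) for _ in range(n_per)])

grand = statistics.mean(y for s in schools for y in s)
school_means = [statistics.mean(s) for s in schools]

# Mean squares between and within schools
msb = n_per * sum((m - grand) ** 2 for m in school_means) / (n_schools - 1)
msw = sum((y - m) ** 2 for s, m in zip(schools, school_means) for y in s) \
      / (n_schools * (n_per - 1))

sigma2_hat = msw                    # estimate of sigma^2 (true value 64)
tau2_hat = (msb - msw) / n_per      # estimate of tau^2  (true value 16)
print(round(tau2_hat, 1), round(sigma2_hat, 1))
```

With 200 schools of 30 students each, the estimates land close to the true values of 16 and 64.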

HLM: What happens if I enter level-2 variables at level 1?

The goal of this exercise is to find out what happens if I intentionally use a level-2 variable at level 1 in HLM.  I found that the coefficients and standard errors remain about the same; the only parameter that differed was the degrees of freedom, which was consistent with my expectation.

Using my old NELS dataset, I ran two different HLM models using Bryk and Raudenbush’s software (See model 1 and model 2 equations in the table below).

  • The URBAN as level-1 covariate model (I incorrectly entered the level-2 variable URBAN at level 1)
  • The URBAN as level-2 covariate model (I correctly entered the level-2 variable URBAN at level 2)

The outcome variable is the achievement composite (POSTTEST); students are level 1 and schools are level 2.  When expressed as mixed models, the two models are identical, which is why I expected most parameters to come out the same.

POSTTESTij = γ00 + γ10*URBANij + u0j + rij

The first model (MODEL 1; see below) included URBAN (an indicator that the student attends an urban school) as a level-1 predictor.  Of course this is a wrong specification, because urban is a school characteristic.  In the second model (MODEL 2), I used it at the expected level, which is level 2 (the school level).

These models look different, but again, when expressed as mixed models, they are identical.  As the third model (MODEL 3), I replicated the same HLM model using SAS PROC GLIMMIX, which requires that the equation be expressed as a mixed model.

Results showed that the coefficients and standard errors are more or less the same across the three models.  The only thing that differed was the degrees of freedom.

Conclusion: As long as variables enter the model as fixed effects, as done here, there is nothing magical about the HLM model.  The HLM software and SAS PROC GLIMMIX (with the option ddfm=kr) adjust the degrees of freedom, accounting for the fact that URBAN is a school-level variable and thus should not be awarded degrees of freedom based on the number of students.  Notice that under the correct specification (MODEL 2 and MODEL 3), the degrees of freedom for URBAN are close to the number of schools, not to the number of students.

Thanks for any comments you may have.

MODEL 1: URBAN as level-1 covariate

Level-1 Model

POSTTESTij = β0j + β1j*(URBANij) + rij

Level-2 Model

β0j = γ00 + u0j
β1j = γ10

Mixed Model

POSTTESTij = γ00 + γ10*URBANij + u0j + rij

MODEL 2: URBAN as level-2 covariate

Level-1 Model

POSTTESTij = β0j + rij

Level-2 Model

β0j = γ00 + γ01*(URBAN_LEj) + u0j

Mixed Model

POSTTESTij = γ00 + γ01*URBAN_LEj + u0j + rij

MODEL 3: SAS PROC GLIMMIX

proc glimmix data=kaz.level1;
class schoolID;
model posttest = urban /solution ddfm=kr dist=normal link=identity;
random schoolID;
run;

 

Results from Model 1 (URBAN as level-1 covariate)

Final estimation of fixed effects (with robust standard errors)

Fixed Effect            Coefficient   Std. Error   t-ratio   Approx. d.f.   p-value
For INTRCPT1, β0
    INTRCPT2, γ00       52.643432     0.526139     100.056   125            <0.001
For URBAN slope, β1
    INTRCPT2, γ10       -0.450022     1.157924     -0.389    692            0.698

Final estimation of variance components

Random Effect   Std. Deviation   Variance Component   d.f.   χ2          p-value
INTRCPT1, u0    3.76951          14.20923             125    292.48369   <0.001
level-1, r      8.39004          70.39271


 

Results from Model 2 (URBAN as level-2 covariate)

Final estimation of fixed effects (with robust standard errors)

Fixed Effect            Coefficient   Std. Error   t-ratio   Approx. d.f.   p-value
For INTRCPT1, β0
    INTRCPT2, γ00       52.643459     0.526140     100.056   124            <0.001
    URBAN_LE, γ01       -0.449983     1.157920     -0.389    124            0.698

Final estimation of variance components

Random Effect   Std. Deviation   Variance Component   d.f.   χ2          p-value
INTRCPT1, u0    3.76919          14.20678             124    292.48068   <0.001
level-1, r      8.39007          70.39334


 

Results from Model 3 (PROC GLIMMIX)

Solutions for Fixed Effects

Effect      Estimate   Std. Error   DF      t Value   Pr > |t|
Intercept   52.6434    0.5460       95.01   96.41     <.0001
urban       -0.4501    1.1095       132.4   -0.41     0.6856

Covariance Parameter Estimates

Cov Parm    Estimate   Std. Error
schoolID    14.2150    3.4610
Residual    70.3913    3.7455

 

Datasets:

www.nippondream.com/file/datafiles_HLM.zip


 

Random effects vs. Fixed Effects

Random effects:

Effects in this context refer to group outcome averages estimated by a regression model.  In regular regression models (OLS, logistic regression), these are often estimated as coefficients of a series of dummy variables representing group units (e.g., school A, school B, ..., each coded as 0 or 1).  These are fixed effects if estimated by a non-HLM model.  If I am part of a school whose test score average is 500 and that value is estimated as a fixed effect, it is determined solely by the information obtained from that school (I used the word solely to emphasize the point, but this may not be exactly correct, because predictors from the whole dataset helped derive coefficients that in turn adjust the average outcome).

HLM does something similar, but after the initial group averages are estimated, they are adjusted by the reliability of the group estimates (a process called Bayesian shrinkage).  The estimates are pulled towards the grand mean: a reliably measured group estimate stays close to the original fixed-effect value, while a less reliable estimate is pulled towards the grand mean.  The idea is to mix the grand mean and the group mean to prevent an unreliable group estimate from straying far from the mean.

Reliability of the group average is a function of (a) the number of subjects in the group and (b) the outcome variance.  I think the intraclass correlation may be part of the algorithm, but I will check.
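A minimal numeric sketch of this shrinkage, assuming the textbook empirical Bayes formula (the function name and all numbers below are my own illustration, not output from the HLM software): the reliability of group j is λj = τ² / (τ² + σ²/nj), and the adjusted mean is λj·(group mean) + (1 − λj)·(grand mean).

```python
# Empirical Bayes shrinkage of a group mean toward the grand mean.
# reliability lambda_j = tau2 / (tau2 + sigma2 / n_j); a reliable group
# (large n_j relative to the within-group variance) keeps most of its own mean.

def shrink(group_mean, n_j, grand_mean, tau2, sigma2):
    """Return the shrunken (reliability-weighted) group mean."""
    reliability = tau2 / (tau2 + sigma2 / n_j)   # between 0 and 1
    return reliability * group_mean + (1 - reliability) * grand_mean

grand = 500.0                     # grand mean of test scores (assumed)
tau2, sigma2 = 100.0, 6400.0      # between- and within-school variance (assumed)

big_school = shrink(550.0, n_j=400, grand_mean=grand, tau2=tau2, sigma2=sigma2)
small_school = shrink(550.0, n_j=4, grand_mean=grand, tau2=tau2, sigma2=sigma2)

# The large school stays near its observed mean of 550; the small school
# is pulled strongly toward the grand mean of 500.
print(round(big_school, 1), round(small_school, 1))
```

Both schools observed the same average of 550, but the small school's estimate is pulled almost all the way back to 500, exactly the "respect" issue described below.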

When to use HLM (Hierarchical Linear Modeling)

Not common: If your data are census data (everyone is in the dataset, like the US Census), you do not need to use HLM.  You do not even need statistical testing, because every estimate you get is a true population value.  (Note: a friend wrote me and said he disagrees.  He said that even if he had everyone's data, he would want to do statistical testing to compare males and females when the difference is very small.  Again, I disagree.)

Common: If your data are a sample and are hierarchically structured, so that errors are dependent (a violation of the independence assumption), you may consider HLM to alleviate the clustering problem.  Examples of hierarchically structured data are:

  • Students nested within schools (2-levels)
  • Repeated measures nested within subjects who are nested within schools (3-levels)

It is sometimes said that the motivation for using HLM must be whether the group units are a random sample of the population.  The argument claims that if, for example, the schools in the sample are a convenience sample, one cannot use HLM.  This is not exactly correct.  I state the following using RCT (randomized controlled trial) or QED (quasi-experimental design) impact studies as a context.

If the group units are randomly sampled, one can generalize the result of the impact analysis to the whole population.  If intervention program A was found effective (or not effective) and the sample was a random sample of the US population, this finding is generalizable.  If the impact estimation relied on a convenience sample, one cannot generalize it to the whole population.  Under an RCT, however, the random sample vs. convenience sample difference should not affect the internal validity of the impact estimate.

There is a tricky case.  HLM is inappropriate if the group units are, for lack of a better word, distinct groups with apparent identities, and as a researcher you are genuinely interested in the pure group estimates.  This is the case when the exact group estimates, derived as fixed effects rather than as random effects, are of interest.

For example, if the group units are US states, HLM is most likely inappropriate.  State-specific estimates should be interpreted as such and should not be treated as random effects.  Imagine the outcome of interest is income level and the state you live in had an average income of $50,000.  Just because your state had a small number of survey respondents (so the reliability of the estimate is lower and HLM would pull your average closer to the grand average), you do not want to see your state's average changed to look more like the national average.  Another example would be a study of 20 regional hospitals, where you should be interested in the fixed estimates of the hospital outcomes.

When HLM treats schools as random effects, we are treating the school units somewhat instrumentally (a bit rude thing to do :>) in order to obtain the best value for the intercept (= the grand average of the group-specific effects estimated as random effects).  So if you are a school, you may feel HLM is treating you without respect.  HLM will not respect your data if your sample size is small and the outcome variance in your school is large.  But HLM is respecting you in a different way: your data are unreliable, so let me just adjust them to be more normal, so you won't embarrass yourself.

PROC GLIMMIX error messages

While running PROC GLIMMIX on a large dataset with categorical variables explicitly treated as CLASS variables AND with a weight statement, I got the following error message:

WARNING: Obtaining minimum variance quadratic unbiased estimates as starting values for the covariance parameters failed.

The weight statement combined with the categorical variables turned out to be the cause of the problem: without the weight, the model produced results.

SAS support staff spent time trying to figure out what the issue was.  I sent them a dataset (though I replaced the variables with randomly generated values) and the exact syntax.  They determined that (thank you so much for going the extra mile to figure this out!):

"the computation of MIVQUE fails because the default singular value (1e-12) leads to the detection of singularity while sweeping of the mixed model equation"  -- May 12, 2016.

They proposed specifying the SINGULAR= option directly so that the computation can proceed:

proc glimmix data=example singular=1e-9;
weight psweight;
class race_cate level2 ;
model outcome= race_cate / s dist=normal link=identity ddfm=kr;
random intercept/subject=level2;
run;

****

Before hearing this final response from SAS, I had written the following, saying that I chose to use PROC MIXED because it produces results.

 

The SAS support person suggested that I use the PARMS statement after the RANDOM statement, feeding in the initial values for the variance-covariance parameters directly.  I did something like this:

PARMS (0.2) (0.4);

Then I get:

“ERROR: Values given in PARMS statement are not feasible.”

Out of curiosity, I used PROC MIXED to run the same model without PARMS but with a weight statement.  This time, the model produced results without encountering any errors.

PROC MIXED and PROC GLIMMIX produce identical results (or almost or essentially identical results) when the same models are specified.

I *think* my solution for now is to use PROC MIXED for this specific model.

The following two produced identical results.

proc mixed data=analysis noclprint ;
where random_number > 0.8;
weight psweight;
class level2 race_cate2 loc_cate;
model zposttest=
<I put variable names here -- including categorical variables, level2 race_cate2 loc_cate>
/ s ddfm=kr ;
random int/subject=level2;
ods output
ParameterEstimates=kaz5
Diffs=DIF_GROUP5
LSMeans=LS5
Tests3=jointtest;
run;

proc glimmix data=analysis noclprint ;
where random_number > 0.8;
weight psweight;
class level2 race_cate2 loc_cate;
model zposttest=
<I put variable names here -- including categorical variables, race_cate loc_cate>
/ s dist=normal link=identity ddfm=kr STDCOEF ;
lsmeans
race_cate2 loc_cate
/ ilink diff
at (
treat
Zpretest
male
LEP
SPECED
DISADV
)=(0 0 0 0 0 0);
random int/subject=level2;
*parms (.03 ) (.45) /* /hold=1,2*/ ;
*ods output
ParameterEstimates=kaz5
Diffs=DIF_GROUP5
LSMeans=LS5
Tests3=jointtest;
run;

Doing HLM and Time-series analysis at the same time using GLIMMIX

HLM (multilevel models) and econometric analyses (e.g., time-series analysis, ARIMA) are usually treated as different approaches, though both aim to deal with the data dependency problem.  They can be implemented in the same model via SAS PROC GLIMMIX.  However, I believe doing this is computationally demanding and the models may not converge.

http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_glimmix_sect020.htm

For detailed discussion of this topic, see p.28 of my HLM manual:

http://www.estat.us/sas/PROCMIXED.doc

Cluster effect: What does HLM solve?

When I first learned HLM (hierarchical linear modeling) in a graduate program in 1994/95, I struggled with the following expression:

Errors are correlated.

Up to that point, in Stat 101, correlation was about two columns of data (e.g., a math test score and a science test score).  Errors in the context of regression analysis are residuals from the model, and they are stored in one column.  I had conceptual difficulty understanding how values contained in one column (one variable) can be correlated.

When I later learned about geostatistics at a workshop, the model was supposed to correct for the data dependence caused by geographical proximity.  This time it was about how the temperature of town A, for example, is similar to that of an adjacent town B, so the observations are dependent on one another.

I also learned about the econometric approach of dealing with the fact that observations are correlated over time (my test score tomorrow depends on my test score today).

After hearing again and again about statisticians' attempts to correct for data dependence, I finally realized that data can be correlated within one column of data.  If you and someone else are from the same school, your outcome data are correlated.
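That realization can be sketched with a small simulation (all numbers are my own illustration): draw pairs of students who share a school effect, and their deviations from the overall mean are correlated, with the correlation equal to the intraclass correlation, ICC = τ² / (τ² + σ²).

```python
# "Correlated within one column": simulate pairs of students who share a
# school. Their outcomes are positively correlated, and the correlation
# equals the intraclass correlation ICC = tau^2 / (tau^2 + sigma^2).
import random

random.seed(1)
tau, sigma = 3.0, 4.0            # between-school and within-school SDs (assumed)
xs, ys = [], []
for _ in range(20000):
    u = random.gauss(0, tau)     # shared school effect
    xs.append(u + random.gauss(0, sigma))   # student 1 of the pair
    ys.append(u + random.gauss(0, sigma))   # student 2, same school

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
vx = sum((x - mx) ** 2 for x in xs) / len(xs)
vy = sum((y - my) ** 2 for y in ys) / len(ys)
icc_hat = cov / (vx * vy) ** 0.5

# Theoretical ICC = 9 / (9 + 16) = 0.36
print(round(icc_hat, 2))
```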

Traditional statistical modeling techniques, such as the OLS regression model, rely on the assumption that outcome data are uncorrelated (observation 1 and observation 2 are completely unrelated to one another).  If this assumption is violated, we can no longer trust the results of statistical tests.  In fact, in the presence of a data dependence problem, the results of statistical tests will be over-optimistic (too many statistically significant results).
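The over-optimism can be demonstrated with a small simulation (my own illustration, not based on the datasets above): assign a "treatment" at the school level with no true effect, then test it naively as if students were independent. The nominal 5% test rejects far more often than 5%.

```python
# Why ignoring clustering is over-optimistic: assign a "treatment" at the
# school level with NO true effect, then run a naive two-sample z-test that
# treats students as independent. The nominal 5% test rejects far too often.
import random
import statistics

random.seed(0)

def naive_false_positive(n_clusters=20, n_per=25, tau=1.0, sigma=1.0):
    treated, control = [], []
    for c in range(n_clusters):
        u = random.gauss(0, tau)                 # school effect
        group = treated if c < n_clusters // 2 else control
        group.extend(u + random.gauss(0, sigma) for _ in range(n_per))
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return abs(z) > 1.96                         # "significant" at 5%?

reps = 400
false_positive_rate = sum(naive_false_positive() for _ in range(reps)) / reps
print(false_positive_rate)   # far above the nominal 0.05
```

This matches the design-effect intuition: with cluster size m and intraclass correlation ρ, the naive variance is understated by roughly DEFF = 1 + (m − 1)ρ, so the test statistic is inflated.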

I also learned that using HLM is one thing you can do to improve the situation, but clustering by group may be just one of many dependence problems in your data.  Student test scores may also be related within friendship networks, and typically we do not have data on that membership.

In the same model, you can try to deal with group dependence (via HLM) or time dependence (via an ARIMA model, for example).  Doing both is not impossible, but testing these two at the same time is computationally challenging.  You will have to choose your battle and fix one thing at a time.