### Kenward-Roger Degrees of Freedom (SAS GLIMMIX)

I will edit this essay later.

Imagine I have an HLM model where level 1 is students and level 2 is schools.  I can enter teacher-level variables, but people who are used to the HLM software by SSI may wonder how the use of teacher-level variables is possible without turning the model into a 3-level model (level 1 students, level 2 teachers, level 3 schools).  This is because in the old HLM tradition (people who studied HLM using the HLM software in the 1990s), equations are written and processed separately by level, and data must be prepared by level (a student data file and a school data file).

If there are only student-level and school-level files, some HLM users may wonder how teacher-level variables can enter the model at all (and in which file they should be prepared: the student-level file or the school-level file?).

People who use SAS or other software programs (R, perhaps) will not find this puzzling, because they think in terms of one whole equation and say, "of course we can enter variables at any level."

The software they use adjusts the degrees of freedom to take into consideration that teacher-level variables take far fewer distinct values than the outcome variable.

The Kenward-Roger (K-R) option in PROC GLIMMIX does exactly this: it adjusts the degrees of freedom to account for the fact that group-level variables have far fewer possible values than the outcome variable.

The K-R degrees-of-freedom option seems most appropriate for multilevel modeling as applied in educational evaluation studies (where typically students are nested within schools).  The option kicks in only when at least one random coefficient is estimated (e.g., intercepts as random effects).

proc glimmix data=i3d;
  class school;
  model y = <predictor names here>
        / solution ddfm=kr dist=normal link=identity;
  random intercept / subject=school;
run;

After playing with this type of simulation (I will do a better, well-documented simulation in the future), I learned the following:

For student-level (level-1) variables, the degrees of freedom under the KR option are close to the number of students minus a small number (perhaps close to the number of predictors).  DF is larger when the variable carries more variance.  For binary variables, DF is largest when the proportion is .50, where the variance p(1-p) peaks (DF gets smaller as the proportion approaches 0 or 1).

For school-level (level-2) variables, DF is close to the number of schools minus a small number, again perhaps close to the number of predictors.  This too may be adjusted by the variance of the variable.

I created a fake teacher-level predictor by assigning two possible numeric values per school.  The DF for this variable was close to the number of teachers in the data (two per school) minus a small number.  I suspect this is also adjusted by the variance of the variable (which I did not test exactly).
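
For reference, here is a minimal sketch of the kind of check involved.  All names and settings (the dataset sim, the predictors stud_x, teacher_x, and school_x, and the sample sizes) are mine for illustration, not the original simulation:

/* Generate 40 schools x 2 teachers x 25 students, then fit the   */
/* two-level model and inspect the DF column of the Solutions     */
/* for Fixed Effects table, which shows the KR-adjusted DF.       */
data sim;
  call streaminit(2024);
  do school = 1 to 40;
    u        = rand('normal', 0, 1);    /* school random effect    */
    school_x = rand('normal', 0, 1);    /* school-level predictor  */
    do teacher = 1 to 2;
      teacher_x = rand('normal', 0, 1); /* teacher-level predictor */
      do stud = 1 to 25;
        stud_x = rand('normal', 0, 1);  /* student-level predictor */
        y = 0.3*stud_x + 0.3*teacher_x + 0.3*school_x + u
            + rand('normal', 0, 1);
        output;
      end;
    end;
  end;
run;

proc glimmix data=sim;
  class school;
  model y = stud_x teacher_x school_x
        / solution ddfm=kr dist=normal link=identity;
  random intercept / subject=school;
run;

If the pattern above holds, the DF reported for stud_x should be near 2,000 (the number of students), for teacher_x near 80, and for school_x near 40, each minus a small number.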

### PROC MIXED and PROC GLIMMIX produce identical results

PROC MIXED and PROC GLIMMIX produce identical results for the following linear model specification.  (Note that RANDOM school, with school in the CLASS statement, is equivalent to the RANDOM INTERCEPT / SUBJECT=school form used earlier.)

proc glimmix data=kaz.asdf;
  where IMPACT_ANALYSIS=1;
  class school;
  model y = x1 x2 x3
        / solution ddfm=kr dist=normal link=identity;
  random school;
run;

proc mixed data=kaz.asdf;
  where IMPACT_ANALYSIS=1;
  class school;
  model y = x1 x2 x3
        / solution ddfm=kr;
  random school;
run;
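
One hedged way to verify the match is to capture each procedure's fixed-effects table with ODS OUTPUT and compare them; the dataset names glmx_fix and mixd_fix are mine:

/* Add inside the PROC GLIMMIX step above:                        */
/*   ods output ParameterEstimates=glmx_fix;                      */
/* Add inside the PROC MIXED step above:                          */
/*   ods output SolutionF=mixd_fix;                               */
/* Then diff the two tables within a small rounding tolerance.    */
proc compare base=glmx_fix compare=mixd_fix criterion=1e-6;
  var Estimate StdErr DF tValue Probt;
run;

Note the ODS table names differ between the two procedures (ParameterEstimates in GLIMMIX, SolutionF in MIXED) even though the contents match.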

### Cluster effect: What does HLM solve?

When I first learned HLM (hierarchical linear modeling) in a graduate program in 1994/95, I struggled with the following expression:

Errors are correlated.

Up to that point, in Stat 101, correlation was about two columns of data (e.g., a math test score and a science test score).  Errors in the context of regression analysis are residuals from the model, and they are stored in one column.  I had conceptual difficulty understanding how values contained in one column (one variable) can be correlated.

Later, when I learned about geostatistics at a workshop, the model was supposed to correct for the data dependence caused by geographical proximity.  This time, it was about how the temperature of town A, for example, is similar to that of an adjacent town B, and thus observations are dependent on one another.

I also learned about the econometric approach to dealing with the fact that observations are correlated over time (my test score tomorrow depends on my test score today).

After hearing again and again about statisticians' attempts to correct for data dependence, correlation of data, and so on, I finally realized that data can be correlated within one column.  If you and someone else are from the same school, your outcome data are correlated.

Traditional statistical modeling techniques, such as the OLS regression model, rely on the assumption that outcome data are uncorrelated (observations 1 and 2 are completely unrelated to one another).  If this assumption is violated, we can no longer trust the results of statistical tests.  In fact, in the presence of data dependence, statistical tests will be over-optimistic (too many statistically significant results).
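
As a rough sketch of this over-optimism (all names and settings below are mine; a proper demonstration would loop over many replications):

/* Clustered data in which x truly has no effect on y.            */
data clus;
  call streaminit(42);
  do school = 1 to 30;
    u = rand('normal', 0, 1);          /* shared school effect     */
    x = rand('normal', 0, 1);          /* school-level predictor   */
    do stud = 1 to 50;
      y = u + rand('normal', 0, 1);    /* y ignores x entirely     */
      output;
    end;
  end;
run;

proc reg data=clus;                    /* OLS: ignores clustering  */
  model y = x;
run;
quit;

proc mixed data=clus;                  /* models the clustering    */
  class school;
  model y = x / solution ddfm=kr;
  random intercept / subject=school;
run;

Across repeated draws of data like this, the OLS test of x rejects at the .05 level far more often than 5 percent of the time, while the mixed model stays close to the nominal rate.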

I also learned that using HLM is one thing you can do to improve the situation, but group dependence may be just one of many dependence problems in the data.  Student test scores may also be related within friendship networks, and typically we do not have data on that membership.

In the same model, you can try to deal with group dependence (via HLM) and time dependence (via an ARIMA-type model, for example).  This is not impossible, but testing the two at the same time is computationally challenging; you will often have to choose your battle and fix one thing at a time.  A sketch of a combined model follows.
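
For illustration, here is a hedged sketch of what attempting both at once might look like in PROC MIXED; the dataset long and the variables student, wave, and x1 are assumptions, and an AR(1) residual structure stands in for the ARIMA idea:

/* G-side random school intercepts (group dependence) plus an     */
/* R-side AR(1) structure over repeated waves within students     */
/* (time dependence).  Models like this are exactly where         */
/* convergence becomes slow or fails.                             */
proc mixed data=long;
  class school student wave;
  model y = x1 / solution ddfm=kr;
  random intercept / subject=school;                   /* group dependence */
  repeated wave / subject=student(school) type=ar(1);  /* time dependence  */
run;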

### How to derive subgroup-adjusted outcome averages and conduct pairwise statistical tests

When you run a statistical interaction model (e.g., Y = TREATMENT + GENDER + TREATMENT*GENDER), you may also want to run a mathematically equivalent cell-means model:
Y = T_MALE + C_MALE + T_FEMALE + C_FEMALE (no intercept).
The four cell means carry the same information as the intercept, two main effects, and interaction of the first model.

In the code below, SUBGROUP is the CLASS variable that defines these four groups.
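
For concreteness, here is one way SUBGROUP might be constructed; the source variables TREATMENT (0/1) and GENDER ('M'/'F') are assumptions, not from the original data:

/* Build the four-level SUBGROUP variable from assumed            */
/* TREATMENT and GENDER variables.                                */
data asdf3;
  set asdf3;
  length SUBGROUP $8;
  if      TREATMENT = 1 and GENDER = 'M' then SUBGROUP = 'T_MALE';
  else if TREATMENT = 0 and GENDER = 'M' then SUBGROUP = 'C_MALE';
  else if TREATMENT = 1 and GENDER = 'F' then SUBGROUP = 'T_FEMALE';
  else if TREATMENT = 0 and GENDER = 'F' then SUBGROUP = 'C_FEMALE';
run;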

This is for a linear model (an HLM, because of the RANDOM statement).

proc glimmix data=asdf3 namelen=32;
  class CAMPUSID SUBGROUP;
  model y = x1 x2 x3 SUBGROUP
        / solution ddfm=kr dist=normal link=identity noint;
  random int / subject=CAMPUSID;
  lsmeans SUBGROUP / diff;   /* produces the pairwise Diffs table */
  ods output ModelInfo=x1var1 ParameterEstimates=x2var1 CovParms=x3var1
             nobs=x4var1 Diffs=DIF_RESULT1;
run;

This is for a logistic regression model.

proc glimmix data=asdf method=rspl;
  class CAMPUS_14 SUBGROUP;
  model y = x1 x2 x3 SUBGROUP
        / solution ddfm=kr dist=binary link=logit noint;
  /* random campus intercepts assumed, mirroring the linear model */
  random int / subject=CAMPUS_14;
  /* DIFF produces the pairwise tests; ILINK reports the adjusted */
  /* subgroup means on the probability scale                      */
  lsmeans SUBGROUP / diff ilink;
  ods output ModelInfo=x1var1 ParameterEstimates=x2var1 CovParms=x3var1
             Diffs=DIF_RESULT1 LSMeans=LS1;
run;

### RCT, power analysis, and mediation factor

Do Typical RCTs of Education Interventions Have Sufficient Statistical Power for Linking Impacts on Teacher Practice and Student Achievement Outcomes?

October 2009

Peter Z. Schochet

http://ies.ed.gov/ncee/pdf/20094065.pdf

### How to QC Rasch model results (Winstep output)

<under construction>

### Winsteps: Displacement measure

For a Rasch/Winsteps analysis that includes anchored items (item difficulty parameters fixed at a set of previously calibrated values, as opposed to being estimated fresh from the data), one can check Table 14.1 of the Winsteps output, which provides various diagnostic statistics, including the displacement measure.

http://www.winsteps.com/winman/displacement.htm

### What is vertically equated scores

In the educational evaluation field, we often have access to vertically equated scales.  By scales, I mean scores, measures, points.  Vertically equated scores in the context of education are ones that are comparable across grades, which means that you can pick a score from a 5th grader and a score from an 8th grader and consider them to be measuring the same construct, such as math ability, on the same scale.  I can state this, or elaborate the concept, in a couple of different ways.

• Vertically equated scales allow you to compare students of different grades on a common scale.
• If a 4th grader got a score of 50 and a 9th grader also got a score of 50, they have the same ability level.
• The 10-point difference among 5th graders (e.g., 50 versus 60) and the 10-point difference among 8th graders (60 versus 70) are considered equal.

Instead of providing a detailed methodological note, I'd like to use a metaphor to explain why equating is possible across different grades.

<Under construction>