— October 17th, 2018
I dropped my Samsung phone and broke it. I was planning to buy a new phone anyway, so I ordered an Essential phone on Amazon. The price was $340 plus tax, substantially cheaper than an iPhone or a Samsung phone (price range $800-$1,000). If you order directly from the Essential website, the price is $499 and the phone comes with accessories.
I have always bought my phones at a Verizon store, but my cousin assured me that buying a smartphone online is easy. My cousin also told me that Essential phones are made by Andy Rubin, the creator of the Android OS. The company's website was sharp-looking and Internet reviews were positive. One review said that Essential phones are compatible with my carrier, Verizon. To state my conclusion first: the phone now works on Verizon, though it took some fiddling.
When the Essential phone arrived the next day, I took the SIM card out of my old Samsung Galaxy S4. I had to pry the back panel open with a sharp tool (I used one from the iFixit driver kit). On the new phone, I pushed the SIM tray inward until it popped out.
The old SIM card was larger than the nano SIM the Essential PH-1 requires. Following an Internet discussion, I trimmed the plastic around the card to make it smaller. I didn't use the size template people recommend; I just used a pair of scissors. I cut it a bit too small, so I put Scotch tape on the back of the SIM card to hold it firmly in the tray. I didn't want the tape to touch the gold side of the card too much, but my understanding is that only the central portion of the gold contacts matters.
I thought hard about which side should face up, but the card fits into the tray only one way: one corner of both the card and the tray is cut diagonally, and they match in only one orientation.
The phone did not start working immediately. I took the SIM card out and reinserted it a couple of times. At one point the phone started receiving texts, and I was able to send texts as well, but the phone still did not fully work. It finally started working when I followed the Internet instruction to "Disable Enhanced 4G LTE Mode," an option buried somewhere in Settings.
proc psmatch data=psm region=cs;
   where &outcome ne .;
   class FLAG districtname SCHOOLNAME;
   psmodel FLAG(Treated="Y") = &exactvar &predictors;
   match method=greedy(k=1) /*(order=random)*/ exact=districtname stat=lps caliper=&caliper;
   output out(obs=match)=outgs lps=_Lps matchid=_matchID;
run;

proc sort data=outgs; by _matchID; run;
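For readers outside SAS, here is a minimal Python sketch of the idea behind `method=greedy(k=1)` with a caliper: greedy 1:1 nearest-neighbor matching on the linear propensity score (`stat=lps`), without replacement. The data and the helper name are made up, and PROC PSMATCH's actual implementation differs in details such as ordering and tie-breaking:

```python
def greedy_match(treated, controls, caliper):
    """Greedy 1:1 nearest-neighbor matching on the linear propensity
    score (logit of the propensity score), with a caliper.

    treated, controls: dicts of {unit_id: lps_value}.
    Returns (treated_id, control_id) pairs; each control is used at
    most once (matching without replacement).
    """
    available = dict(controls)          # controls not yet matched
    pairs = []
    for t_id, t_lps in treated.items():
        best_id, best_dist = None, caliper
        for c_id, c_lps in available.items():
            dist = abs(t_lps - c_lps)
            if dist <= best_dist:       # within caliper and closest so far
                best_id, best_dist = c_id, dist
        if best_id is not None:
            pairs.append((t_id, best_id))
            del available[best_id]      # without replacement
    return pairs

# made-up linear propensity scores
treated  = {"T1": 0.10, "T2": 0.55, "T3": 2.00}
controls = {"C1": 0.12, "C2": 0.50, "C3": 0.90}
pairs = greedy_match(treated, controls, caliper=0.25)
print(pairs)   # T3 has no control within the caliper and goes unmatched
```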
The result table of a regression model includes, among other things, a column of coefficients. The intercept, shown in the top cell of the coefficient column, may look mysterious and even arbitrary. The intercept is the predicted value for a subject whose values on all predictors in the model are 0. If the model includes gender as a predictor (coded 1 if male, else 0), the intercept is the average outcome value for female subjects. If the model includes gender and body weight, the intercept is the average outcome value for females with a body weight of zero. Nobody's weight is 0; the intercept in this case is nonsensical. An analyst who is not particularly interested in giving the intercept a substantive meaning can ignore it and safely interpret the rest of the coefficients.
Personally, I want every value in my result tables to have a substantive, interpretable meaning. As mentioned, with dummy variables (coded 1 or 0) in the model, the intercept already has one.
If the model includes continuous variables, however, I recommend centering those variables around their average value. If the variable in question is a test score ranging from 0 to 100 and the average score is 65, I would subtract 65 from each subject's score (a score of 60 becomes 60 − 65 = −5). In SAS, you can do:
proc standard data=abc out=abc2 mean=0;
   var testscore1;
run;
With centering, the intercept acquires a meaning: it is the predicted value for a subject whose test score is the average score. Centering does not affect the coefficients of the other variables in the model, or any other values obtained from the model.
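The claim that centering changes the intercept but nothing else can be checked with a few lines of Python, using the closed-form solution for simple regression (the numbers are made up):

```python
# Simple regression y = a + b*x fit by closed-form OLS, on raw and on
# mean-centered x: the slope is unchanged, and the centered intercept
# equals the mean of y (the prediction at the average x).
def ols(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

scores  = [60, 65, 70, 80]          # raw test scores (made-up data)
outcome = [10, 12, 15, 19]
a_raw, b_raw = ols(scores, outcome)

mean_score = sum(scores) / len(scores)
centered   = [s - mean_score for s in scores]
a_ctr, b_ctr = ols(centered, outcome)

print(b_raw == b_ctr)   # slope is unaffected by centering
print(a_ctr)            # equals mean(outcome) = 14.0
```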
You can also center a predictor's values and fix its standard deviation at 1. In SAS, you can do:
proc standard data=abc out=abc2 mean=0 std=1;
   var testscore1;
run;
The resulting value is called a "z-score." The z-score may be better known than the concept of centering; it is one specific type of centering, with a mean of zero (all values are centered around the average) and a standard deviation fixed at 1.
I typically apply z-scoring to a pretest variable whose scores are large numbers (e.g., 953, 405, etc.). Without this adjustment, the derived coefficient may be too small to read in the table (e.g., 0.00000014).
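To see the arithmetic outside SAS, here is a small Python sketch of z-scoring; the data are made up, and the sample-SD divisor (n − 1) matches PROC STANDARD's default VARDEF=DF:

```python
# Z-scoring: subtract the mean, divide by the standard deviation.
# The rescaled variable has mean 0 and SD 1, so a pretest measured in
# large units (e.g., 953, 405) yields a readably sized coefficient.
def zscore(values):
    n = len(values)
    mean = sum(values) / n
    # sample SD (divisor n - 1), as in PROC STANDARD's default VARDEF=DF
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

pretest = [953, 405, 610, 788]      # made-up large-scale pretest scores
z = zscore(pretest)
mean_z = sum(z) / len(z)
sd2_z  = sum(v * v for v in z) / (len(z) - 1)
print(round(mean_z, 6), round(sd2_z, 6))   # mean ~ 0, variance ~ 1
```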
P. 13 of the WWC standards document.
| Overall Attrition | Conservative Boundary | Liberal Boundary |
The following SAS code runs PROC MEANS by treatment group and then uses data steps to compute a pooled standard error and significance flags.
proc means data=both stackodsoutput n mean std min max stderr;
   class treat;
   ods output summary=kaz2;
run;

/* control-group columns */
data c; set kaz2;
   keep N_C MEAN_C StdDev_C MIN_C MAX_C StdErr_C Variable Label;
run;

/* treatment-group columns */
data t; set kaz2;
   keep N_T MEAN_T StdDev_T MIN_T MAX_T StdErr_T Variable;
run;

data both2;
   merge c t;
   POOLED_SE = sqrt( ( (StdDev_T*StdDev_T) / N_T ) + ( (StdDev_C*StdDev_C) / N_C ) );
   t_stat  = (MEAN_T - MEAN_C) / POOLED_SE;
   P_value = 2*(1 - probnorm(abs(t_stat)));   /* large-sample z test */
   length sig $3;
   *if P_value < 0.1   then sig="t";
   if P_value < 0.05  then sig="*";
   if P_value < 0.01  then sig="**";
   if P_value < 0.001 then sig="***";
   if P_value = .     then sig="";
run;
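The pooled-SE arithmetic from the data step, plus a large-sample z test for the p-value, can be sketched in Python; the summary statistics below are made up:

```python
import math

# Pooled standard error of a treatment-control mean difference and a
# two-tailed large-sample z test. A t distribution would be slightly
# more conservative in small samples.
def pooled_se(sd_t, n_t, sd_c, n_c):
    return math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)

def two_tailed_p(mean_t, mean_c, se):
    z = (mean_t - mean_c) / se
    # standard normal upper-tail probability via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# made-up group summaries
se = pooled_se(sd_t=10.0, n_t=100, sd_c=12.0, n_c=100)
p  = two_tailed_p(mean_t=54.0, mean_c=50.0, se=se)
print(round(se, 3), p < 0.05)
```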
When a characteristic is represented as a series of dummy variables (e.g., race groups, grade levels), I also want to know whether those dummies, taken together as a meaningful group, contribute to the model with statistical significance. The easiest way is to treat the variables as classification variables; you then get a joint statistical test in one of the result tables.
proc glimmix ...;
   class race grade_level;
In my applications, however, I almost always use the numeric versions of such variables, i.e., dummy variables coded 0 or 1. I like this approach because I can run PROC MEANS directly on them to create a descriptive statistics table.
The question is how to get joint statistical tests when all of my predictors are numerically coded and I therefore can't rely on the CLASS statement (shown in the syntax example above).
The GLIMMIX syntax below treats race groups and grade levels as numerically coded dummy variables (1 if yes, else 0).
The parameter estimate table will show a coefficient for each of the numeric variables; however, it won't tell me whether race groups, or grade levels, matter to the model as a group. For example, even when the coefficient for being Black is statistically significant, that only tells us how Black students differ from White students (the reference group in this example). It does not tell us whether race as a whole matters, i.e., whether the race dummies jointly make a statistically significant contribution to the model.
(Again, this can be done easily by using CLASS variables instead, as shown earlier; however, I like using numeric variables in my models.)
Contrast statements will do the trick.
proc glimmix data=usethis namelen=32;
   model Y = treat black hispanic other grade09 grade10 grade11 /
      solution ddfm=kr dist=&dist link=&link;
   output out=&outcome.gmxout residual=resid;
   random intercept / subject=groupunit;
   CONTRAST 'Joint F-test: race groups'  black 1, hispanic 1, other 1;
   CONTRAST 'Joint F-test: grade levels' grade09 1, grade10 1, grade11 1;
run;
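The joint test that a CONTRAST statement reports is equivalent to an F-test comparing the full model against a reduced model that drops the whole group of dummies. A Python sketch of that arithmetic, with made-up sums of squared errors:

```python
# F-test for dropping a group of q predictors:
# F = ((SSE_reduced - SSE_full) / q) / (SSE_full / df_error)
def joint_f(sse_reduced, sse_full, q, df_error):
    """q = number of dummies dropped; df_error = error df of the full model."""
    return ((sse_reduced - sse_full) / q) / (sse_full / df_error)

# e.g., dropping three race dummies (q = 3) from a model with
# 200 observations and 8 parameters (df_error = 192); SSE values made up
F = joint_f(sse_reduced=1300.0, sse_full=1200.0, q=3, df_error=192)
print(round(F, 2))
```

A large F (relative to an F distribution with q and df_error degrees of freedom) says the group of dummies jointly improves the model.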
The reason for using the restricted method (REML) is that the alternative, full maximum likelihood (ML), can produce biased covariance-parameter estimates (the level-2 variance in our application). The bias is worst when the number of group units is relatively small, so this is a real threat in our application.
proc glimmix data=asdf METHOD=RSPL;
   class CAMPUS_14 subgroup;
   model y = x1 x2 x3 subgroup
      / dist=binomial link=logit s ddfm=kr;
   lsmeans subgroup / ilink diff;
   ods output ModelInfo=x1var1 ParameterEstimates=x2var1 CovParms=x3var1;
run;
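The ML-versus-REML bias is easiest to see in the simplest possible case, estimating a plain variance: the ML estimator divides by n and is biased downward, while the REML-style estimator divides by n − 1 and is unbiased. A quick Python illustration with made-up data:

```python
# ML variance estimate: divide by n (biased downward, badly so when n
# is small). REML-style estimate: divide by n - 1 (unbiased). With few
# group units, the same downward bias hits the level-2 variance.
def var_ml(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def var_reml(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

sample = [4.0, 6.0, 8.0, 10.0]      # made-up data, n = 4
print(var_ml(sample), var_reml(sample))   # ML < REML
```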