Effects in this context refer to group outcome averages estimated by a regression model. In regular regression models (OLS, logistic regression), these are often estimated as coefficients on a series of dummy variables representing group units (e.g., school A, school B, ..., each coded as 0 or 1). These are fixed effects when estimated by a non-HLM model. If I am part of a school whose test score average is 500 and that value is estimated as a fixed effect, it is determined solely by the information obtained from that school (I used the word "solely" to emphasize the point, but this may not be exactly correct, since predictors from the whole data set help derive coefficients that in turn adjust the average outcome).
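A minimal sketch of the dummy-variable idea, using invented test scores for three hypothetical schools: with one dummy column per school and no intercept (cell-means coding), each fitted coefficient is exactly that school's raw average.

```python
import numpy as np

# Hypothetical test scores for three schools (data invented for illustration).
scores = {
    "A": np.array([480.0, 510.0, 500.0]),
    "B": np.array([520.0, 530.0]),
    "C": np.array([490.0, 495.0, 505.0, 500.0]),
}

y = np.concatenate(list(scores.values()))

# One dummy column per school, no intercept (cell-means coding).
X = np.zeros((len(y), len(scores)))
start = 0
for j, vals in enumerate(scores.values()):
    X[start:start + len(vals), j] = 1.0
    start += len(vals)

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# With dummy-only coding, each coefficient equals that school's raw mean.
for name, coef, vals in zip(scores, coefs, scores.values()):
    print(name, round(coef, 2), round(vals.mean(), 2))
```

With an intercept plus k-1 dummies instead, the coefficients become differences from the reference school, but the implied group averages are the same.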
HLM does something similar, but after the initial group averages are estimated, they are adjusted by the reliability of the group estimates (this process is called Bayesian shrinkage). They are adjusted such that the estimates are pulled toward the grand mean. A reliably measured group estimate will stay close to the original fixed-effect value; a less reliable estimate will be pulled toward the grand mean. The idea is to mix the grand mean and the group mean so that an unreliable group estimate does not stray far from the mean.
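The mixing can be sketched with the standard empirical-Bayes shrinkage formula (the variance components `tau2` and `sigma2` and the grand mean are assumed known here for illustration; in practice HLM software estimates them from the data):

```python
# Empirical-Bayes shrinkage sketch; tau2 (between-group variance),
# sigma2 (within-group variance), and the grand mean are assumed values.
tau2, sigma2 = 100.0, 900.0
grand_mean = 480.0

def shrunken_mean(group_mean, n):
    # Reliability of a group mean based on n subjects:
    #   lambda = tau2 / (tau2 + sigma2 / n)
    lam = tau2 / (tau2 + sigma2 / n)
    # Weighted mix: reliable groups keep their own mean,
    # unreliable groups get pulled toward the grand mean.
    return lam * group_mean + (1 - lam) * grand_mean

# A large school stays near its raw mean of 500;
# a small school is pulled toward the grand mean of 480.
print(shrunken_mean(500.0, n=200))  # lambda ~ 0.96, result ~ 499.1
print(shrunken_mean(500.0, n=5))    # lambda ~ 0.36, result ~ 487.1
```

Note that the shrunken estimate always lies between the group's own mean and the grand mean, and the smaller the group, the closer it sits to the grand mean.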
Reliability of the group average is a function of (a) the number of subjects in the group and (b) the outcome variance. I think the intraclass correlation may be part of the algorithm, but I will check.
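On that last point: under the usual variance-components setup, the reliability of a group mean can in fact be written in terms of the intraclass correlation, ICC = tau2 / (tau2 + sigma2). A quick numerical check of the equivalence, with the same assumed variance components as above:

```python
# Reliability of a group mean of size n, two algebraically equal forms.
# tau2 (between-group) and sigma2 (within-group) are assumed values.
tau2, sigma2 = 100.0, 900.0
icc = tau2 / (tau2 + sigma2)  # proportion of variance between groups: 0.1

for n in (5, 30, 200):
    rel_direct = n * tau2 / (n * tau2 + sigma2)          # tau2 / (tau2 + sigma2/n)
    rel_via_icc = n * icc / (1 + (n - 1) * icc)          # Spearman-Brown-style form
    print(n, round(rel_direct, 3), round(rel_via_icc, 3))
```

Both forms give the same number for every n, so the ICC really does enter the shrinkage weights, just reexpressed through the group size.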