Generalizability Theory

Sources

Primer on G-theory

In the classical true score model, we have

$X = T + E$

where:

  • $X$ is the observed score, $T$ is the true score, and $E$ is the error

  • $E[X] = T$: the measurement is unbiased

  • $Cov(T, E) = 0$: errors are uncorrelated with true scores

  • $Cov(E_1, E_2) = 0$: errors across test forms are uncorrelated

  • $Cov(E_1, T_2) = 0$: the error on one form of the test is uncorrelated with the true score on another form

So in variance decomposition form, we then have

$\sigma^2_X = \sigma^2_T + \sigma^2_E$

Note that here we are lumping all the different sources of error into a single term.

The reliability can then be defined as $\frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E} = \rho^2_{XT}$
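The decomposition above can be checked with a minimal simulation of the true score model; the parameter values below are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the classical true score model X = T + E
# (sigma_T and sigma_E are illustrative choices)
n = 100_000
sigma_T, sigma_E = 2.0, 1.0
T = rng.normal(0.0, sigma_T, n)   # true scores
E = rng.normal(0.0, sigma_E, n)   # errors, independent of T
X = T + E                         # observed scores

# Reliability: proportion of observed-score variance due to true scores
reliability = T.var() / X.var()
theoretical = sigma_T**2 / (sigma_T**2 + sigma_E**2)  # = 4 / 5 = 0.8
print(reliability, theoretical)
```

With a large sample, the empirical ratio of variances lands close to the theoretical reliability of 0.8.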

The goal of G-theory is to decompose the single error component of classical test theory (CTT) into multiple components.

Generalizability Theory

In G-theory, each source of error is called a facet. Given multiple sources of error (multiple facets), the idea of reliability is replaced by the idea of generalizability: instead of asking how accurately observed scores reflect the true score, we ask how accurately observed scores allow us to generalize about the behavior of an individual in a particular universe.
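A one-facet G-study can be sketched as follows: persons crossed with items, with the item facet and the residual estimated separately from expected mean squares. This is a hedged simulation sketch; the design sizes and variance components are hypothetical, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossed p x i design: 200 persons answer 50 items.
n_p, n_i = 200, 50
sigma_p, sigma_i, sigma_res = 1.0, 0.5, 0.8  # illustrative components

person = rng.normal(0, sigma_p, (n_p, 1))     # universe (true) scores
item = rng.normal(0, sigma_i, (1, n_i))       # item-difficulty facet
resid = rng.normal(0, sigma_res, (n_p, n_i))  # residual (pi interaction + error)
X = person + item + resid

# Estimate variance components from expected mean squares:
#   E(MS_p)   = sigma^2_res + n_i * sigma^2_p
#   E(MS_i)   = sigma^2_res + n_p * sigma^2_i
#   E(MS_res) = sigma^2_res
ms_p = n_i * X.mean(axis=1).var(ddof=1)
ms_i = n_p * X.mean(axis=0).var(ddof=1)
ss_res = ((X - X.mean(axis=1, keepdims=True)
             - X.mean(axis=0, keepdims=True) + X.mean()) ** 2).sum()
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

var_res = ms_res
var_p = (ms_p - ms_res) / n_i
var_i = (ms_i - ms_res) / n_p

# Generalizability coefficient for relative decisions over a k-item test:
# only the facets that affect rank ordering enter the error term.
k = n_i
Ephi = var_p / (var_p + var_res / k)
print(var_p, var_i, var_res, Ephi)
```

Where CTT would fold the item and residual components into one error term, the G-study keeps them separate, so one can ask, for example, how generalizability changes as the number of items `k` varies.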
