## Wednesday, September 26, 2012

### Item Response Theory Estimation

# Item response theory (IRT) is tricky in the sense that it requires estimating both the regressors (student abilities) and the parameters of the items.

# I have heard that in IRT this somewhat complicated task can be accomplished through a series of maximum likelihood estimations: first fix specific student abilities and estimate the item parameters, then estimate the student abilities given those item parameters, etc.

# Let's start by simulating some data.

nitem = 40

nstud = 100

# To begin with, let's start with a one-parameter IRT model.

# The probability of person j getting item i right is P(x_ij=1)=exp(theta_j-b_i)/(1+exp(theta_j-b_i))
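# A quick sanity check on the formula above (an aside, not part of the
# simulation): exp(x)/(1+exp(x)) is the logistic CDF, which R provides as
# plogis().  When ability equals difficulty (theta_j = b_i) the probability
# of a correct response is exactly .5.
plogis(0)                   # 0.5
exp(1.5)/(1+exp(1.5))       # manual formula...
plogis(1.5)                 # ...gives the same value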

# We will range our item difficulties from -4 to 4

vec.b = seq(-4,4,length.out=nitem)

# We will also range our student ability from -4 to 4

vec.theta = seq(-4,4,length.out=nstud)

# Each student will have a probability p of getting each question right.

# Let's first make matrices from theta and the b parameter sets.
b = t(matrix(vec.b,nrow=nitem,ncol=nstud))
theta = matrix(vec.theta,nrow=nstud,ncol=nitem)
# Giving a vector to the matrix command causes it to be recycled to fill out the values of the matrix.  The only trick is making sure it is read in the right direction: matrix() fills column by column.  Because the b values must be constant within each column (each column is an item), the b matrix has to be transposed, while the theta matrix, which varies by row (student), is already oriented correctly.
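# A tiny illustration of the fill order (an aside): matrix() recycles a short
# vector down each column in turn, so each column of this 3x2 matrix is 1,2,3,
# and transposing puts 1,2,3 across each row instead.
matrix(1:3, nrow=3, ncol=2)
t(matrix(1:3, nrow=3, ncol=2))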

# Each column is an item and each row is a different student.

# Now let's calculate the probability of any person answering any of the questions correctly.
p = exp(theta-b)/(1+exp(theta-b))

# The next graph requires the plotrix package.
require(plotrix)
stackpoly(t(p), col=gray(seq(0,1,length=nstud)),
          border=gray(seq(.3,1,length=nstud)-.2),
          xlab="b - Difficulty Parameter",
          ylab="Probability of Correct Response",
          main="Mapping of Probability of Correct Responses for Homogeneous Items")

# Each line represents the response probabilities of a single student.  All students have a lower probability of a correct response as the items get more difficult.

# Create a function that will draw a matrix of binomial responses
mat.binom <- function(n, p) {
  bin.mat <- p*0
  for (i in 1:nrow(p)) {
    for (ii in 1:ncol(p)) {
      # Draw a binomial realization for each student-item cell
      bin.mat[i,ii] <- rbinom(1, n, p[i,ii])
    }
  }
  return(bin.mat)
}
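# As an aside, rbinom() is itself vectorized over the probability argument,
# so the same matrix of draws can be sketched without loops.  (mat.binom.vec
# is a name introduced here for illustration; it is not used below.)
mat.binom.vec <- function(n, p) {
  matrix(rbinom(length(p), n, p), nrow=nrow(p), ncol=ncol(p))
}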

# Now let's generate the actual responses to the probabilities
y = mat.binom(1,p)

total.score <- apply(y,1,sum)

plot(vec.theta, total.score, xlab=~theta, ylab="Total Score",
     main="Total Score as a Function of Ability")

bhat <- rep(NA,nitem)
for (i in 1:nitem) {
  bhat[i] <- coef(glm(y[,i] ~ 1, family=binomial("logit")))[1]
}

plot(vec.b,-bhat, xlab= "b - Item Difficulty", ylab= "Estimated Difficulty")
# Actually our graph is looking pretty good.  The scale is off, but that is unimportant since latent trait scales are arbitrary anyway.
# To scale the b estimates I first shift and rescale them so the minimum is zero and the range is 1.  Then I multiply by minus the desired range (-8, which also flips the sign, since the intercepts move opposite to b) and add 4 so the values run from -4 to 4.
b.scaled <- (bhat-min(bhat))/(max(bhat)-min(bhat))*-8+4

plot(vec.b,b.scaled, xlab= "b - Item Difficulty", ylab= "Estimated Difficulty", main="Estimates of Item Difficulty")

fit.line = lm(b.scaled~vec.b)

abline(fit.line, col="red")
# Interestingly, nothing more is actually needed to estimate student ability: in the one-parameter Rasch model the number of items each student answered correctly is a sufficient statistic for that student's ability.

# Let's see it in action.

theta.hat <- rep(NA,nstud)
for (i in 1:nstud) {
  theta.hat[i] <- coef(glm(y[i,] ~ 1, family=binomial("logit")))[1]
}
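# One way to see the sufficiency claim above (an aside): an intercept-only
# logit fit to a student's responses is just the log-odds of that student's
# proportion correct, so the estimate depends on the data only through the
# total score.  qlogis() is R's log-odds function; theta.check is a name
# introduced here just for this comparison.
theta.check <- qlogis(apply(y, 1, mean))
# Apart from students who answered every item the same way (where the MLE is
# unbounded), theta.check matches theta.hat up to glm's convergence tolerance.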

plot(vec.theta, theta.hat , xlab= ~ theta, ylab= "Estimated Student Ability", main="Estimated Student Ability Unscaled")
# Again our graph is looking pretty good.  The scale is off, but that is unimportant since latent trait scales are arbitrary anyway.

# Rescale theta.hat to run from -4 to 4: multiply the unit-range values by 8 and subtract 4.
theta.scaled <- (theta.hat-min(theta.hat))/(max(theta.hat)-min(theta.hat))*8-4

plot(vec.theta, theta.scaled, xlab= ~ theta, ylab= "Estimated Student Ability", main="Estimated Student Ability Scaled")

fit.line = lm(theta.scaled ~vec.theta)

abline(fit.line, col="red")

# This shows us that even a simple estimation technique (two series of logit regressions) can be quite effective when we are estimating only a single parameter for each student and each item.
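# Finally, the alternating scheme described at the top (fix abilities,
# estimate item parameters, then the reverse, and repeat) can be sketched as
# below.  This is only an illustrative sketch: jml.sketch and its starting
# values are choices made here, not part of the post, and students with
# perfect or zero scores will produce divergent estimates and glm warnings.
jml.sketch <- function(y, reps=10) {
  # Crude starting abilities from slightly shrunken proportions correct
  theta <- qlogis((rowSums(y) + .5)/(ncol(y) + 1))
  for (r in 1:reps) {
    # Given theta, each item's intercept in an offset logit estimates -b_i
    b <- sapply(1:ncol(y), function(i)
      -coef(glm(y[, i] ~ 1, offset = theta, family = binomial("logit")))[1])
    b <- b - mean(b)  # pin down the arbitrary location of the scale
    # Given b, each student's intercept estimates theta_j
    theta <- sapply(1:nrow(y), function(j)
      coef(glm(y[j, ] ~ 1, offset = -b, family = binomial("logit")))[1])
  }
  list(theta = theta, b = b)
}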