Friday, November 22, 2013

Results of an Informal Survey of R users


# This post does some basic correlation analysis of responses
# to the survey I recently released through R Shiny at:
http://www.econometricsbysimulation.com/2013/11/Shiny-Survey-Tool.html
 
# I have saved the data from the survey after completing the survey myself.
# This data is incomplete both because the survey has kept running
# since I saved the results and because the ShinyApps.io server
# automatically resets its data every so often, so some survey
# results are lost.
# (Something to be remedied at a future time.)
 
# Let's get our data
Rsurvey <- read.csv(paste0(
  "https://raw.github.com/EconometricsBySimulation/",
  "Shiny-Demos/master/Survey/sample-results.csv"))
 
summary(Rsurvey)
# Looking at the data, we have 321 responses in total, though
# on average each question has closer to 250 usable responses
# to work with.
 
# The majority of respondents consider their knowledge of R
# to be either Advanced (119) or Moderate (92).
 
# The majority of users are aged 26-35 (121) or 36-45 (79).
 
# Respondents most frequently report reading R-bloggers
# daily (164) or weekly (75).
 
# Respondents most frequently report reading my blog
# never (149) or monthly (48).
 
# The self-reported technical knowledge of most users in a
# theoretical statistics/econometrics/psychometrics field was
# most frequently either moderate (103) or advanced (97).
 
# The favorite color of respondents was most frequently blue (114)
# or green (66).
 
# The vast majority of respondents were male (243) compared
# with only 22 females. Looks like R bloggers is not going
# to become a dating website in the near future.
 
# Of those who selected an area of research, the majority
# chose data analysis (150), followed by statistics.
 
# As for users' knowledge of Shiny, few had much at all, with
# Basic (118) and None (76) being the most frequent responses.
 
# Finally, as to the question "What is the airspeed velocity
# of an unladen swallow?", the majority of respondents chose
# the correct response to the Monty Python reference (129),
# while the next largest group indicated that they did not
# know (98).
 
# Well let's see if there is any correlation between particular
# outcomes of interest.
 
# Let's see if there is a correlation between knowledge of R
# and frequency of reading R bloggers.
knowledgeR <- rep(NA, nrow(Rsurvey))
 
knowledgeR[Rsurvey[,2]=="None"] <- 0
knowledgeR[Rsurvey[,2]=="Basic"] <- 1
knowledgeR[Rsurvey[,2]=="Moderate"] <- 2
knowledgeR[Rsurvey[,2]=="Advanced"] <- 3
knowledgeR[Rsurvey[,2]=="Expert"] <- 4
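 
# As an aside, the same recoding can be written more compactly with a
# factor (a sketch; it assumes column 2 holds the R-knowledge responses
# with exactly the labels used above, and anything else becomes NA):
knowledgeR.alt <- as.integer(factor(Rsurvey[,2],
  levels=c("None", "Basic", "Moderate", "Advanced", "Expert"))) - 1
# knowledgeR.alt runs from 0 (None) to 4 (Expert) and should match knowledgeR.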
 
frequency <- rep(NA, nrow(Rsurvey))
 
# Convert these rates to number of days per year
# reading R bloggers.
frequency[Rsurvey[,5]=="None"] <- 0
frequency[Rsurvey[,5]=="daily"] <- 360
frequency[Rsurvey[,5]=="weekly"] <- 50
frequency[Rsurvey[,5]=="monthly"] <- 12
 
cor(frequency, knowledgeR, use="pairwise.complete.obs")
# There seems to be a modest correlation between the number of days
# spent reading R-bloggers and self-assessed R expertise.
 
# Let's see if coldness or warmth along the color
# spectrum is a useful variable.
warmth <- rep(NA, nrow(Rsurvey))
 
warmth[Rsurvey[,8]=="blue"] <- 0
warmth[Rsurvey[,8]=="green"] <- 1
warmth[Rsurvey[,8]=="orange"] <- 2
warmth[Rsurvey[,8]=="red"] <- 3
 
cor(warmth, knowledgeR, use="pairwise.complete.obs")
# There is a slight negative correlation between
# the warmth of users' favorite color and self-reported
# knowledge of R.
 
# Finally, let's look at success on the Monty Python
# trivia question.
monte <- rep(NA, nrow(Rsurvey))
 
monte[Rsurvey[,12]=="I don't know!"] <- 0
monte[Rsurvey[,12]=="~50 MPH"] <- 0
monte[Rsurvey[,12]==
  "What do you mean? An African or European swallow?"] <- 1
 
cor(monte, knowledgeR, use="pairwise.complete.obs")
# We see a modest positive correlation between
# knowledge of R and being able to answer the Monty Python
# trivia question.
 
knowledgeStats <- rep(NA, nrow(Rsurvey))
 
knowledgeStats[Rsurvey[,6]=="None"] <- 0
knowledgeStats[Rsurvey[,6]=="Basic"] <- 1
knowledgeStats[Rsurvey[,6]=="Moderate"] <- 2
knowledgeStats[Rsurvey[,6]=="Advanced"] <- 3
knowledgeStats[Rsurvey[,6]=="Expert"] <- 4
 
cor(knowledgeStats, knowledgeR, use="pairwise.complete.obs")
# There seems to be a very strong correlation between
# self-reported knowledge of statistics and knowledge
# of R.
 
# In order to see the data points a little more clearly
# I will add some tiny noise to both knowledge sets
knowledgeStatsN <- knowledgeStats + .15*rnorm(nrow(Rsurvey))
knowledgeRN <- knowledgeR + .15*rnorm(nrow(Rsurvey))
 
plot(knowledgeRN, knowledgeStatsN, 
    main="Knowledge of Stats against that of R",
    xlab="R", ylab="Stats")
# We can see that the diagonal elements (2,2) and (3,3) occur with
# the greatest frequency.
 
  
summary(lm(knowledgeR~warmth+frequency+monte+knowledgeStats))
# Overall, we can see that knowledge of statistics
# strongly and positively predicts knowledge of R.
# Frequency of reading R-bloggers also seems to
# have an effect that is significant at nearly the
# 5% level.
 
# The coefficient on frequency is very small, but that is because
# the scale is quite large (from 0 to 360).  However,
# if someone were to start reading R-bloggers daily
# (assuming causality runs one way, from R-bloggers to R knowledge),
# then we would expect a change in R knowledge of:
.0006994*360
# 0.251784
 
# Not as large a predictor as knowledge of stats, but certainly
# there is some relationship.
 
# Of course, little about causal relationships can be inferred from
# this data.  We cannot expect reading R-bloggers to be independent
# of self-assessed knowledge of R any more than we can expect
# knowledge of statistics to be uncorrelated with knowledge of R,
# since many users learn R and statistics simultaneously.

Wednesday, November 20, 2013

Raising Statistical Standards Effect on Sample Size


The failure of mainstream research to consistently reproduce results has led many to look for faults in current methodologies.

One of the potential faults identified is that the current standards for statistical significance are too lenient: a standard rejection level of either .05 or .01 is not stringent enough.

Statistician Valen Johnson recently released an article in the Proceedings of the National Academy of Sciences which recommends a more appropriate standard of rejection of .005 or .001.

In this post I will attempt to examine how such a change could affect the required sample size for studies.

Before initiating an experimental study, a power analysis is often done.  When relatively simple methods are intended to be used, this can often be a relatively simple closed-form calculation.  More complex methods often require simulation to do a power analysis.
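To give a flavor of the simulation approach, here is a minimal sketch (the function name, the two-sided one-sample t-test, and all parameter values are illustrative assumptions, not tied to any particular design): guess an effect size and a noise level, simulate many datasets, and count how often the test rejects.

sim.power <- function(n, tau=1, SD=2, alpha=.05, reps=2000) {
  # Simulate 'reps' datasets with true effect tau and noise SD,
  # and record how often the t-test rejects the null of no effect.
  rejections <- replicate(reps,
    t.test(tau + rnorm(n, sd=SD), mu=0)$p.value < alpha)
  mean(rejections)  # estimated power
}
sim.power(n=20)     # estimated power for n = 20 under these assumptions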

The logic of a power analysis (as far as I know) usually goes something like this: suppose the possible effect size is tau and we know (from previous work) that the conditional distribution of outcomes has a standard deviation of SD.  How many subjects would be needed to reject the null at our intended level?

Referencing wikipedia:

With Pi being the desired power, tau the effect size, and alpha the rejection level, and assuming normality throughout (Phi is the standard normal CDF and z_q = Phi^(-1)(q)):

Pi(tau) = 1 - Phi(z_(1-alpha) - tau*N^.5/SD)

We want to solve for N:

# Phi^(-1)(1 - Pi) = z_(1-alpha) - tau*N^.5/SD

# tau*N^.5/SD = z_(1-alpha) - Phi^(-1)(1 - Pi) = z_(1-alpha) + z_Pi

# N = ((SD/tau)*(z_(1-alpha) + z_Pi))^2

In R, z_(1-alpha) = -qnorm(alpha) and z_Pi = qnorm(pi), so:

samp.power <- function(SD, tau, pi, alpha)
  ceiling(((SD/tau)*(qnorm(pi) - qnorm(alpha)))^2)
# ceiling() rounds the required sample size up to a whole number.

Let's see it in action!

Let's say we have an outcome which we know has SD=2 and we hope our effect will have a size of at least tau=1.  Following standard practice we require a detection rate (power) of 80% for our power analysis.  Let's see what happens when we vary our alpha level!

samp.power(SD=2, tau=1, pi=.8, alpha=.05)
# 25

samp.power(SD=2, tau=1, pi=.8, alpha=.01)
# 41

samp.power(SD=2, tau=1, pi=.8, alpha=.005)
# 47

samp.power(SD=2, tau=1, pi=.8, alpha=.001)
# 62

We can see that by increasing our rejection standard from .05 to .001 we increase the required sample size by roughly a factor of 2.5 (62/25), which is really not that bad.  Looking at a smaller effect size does not change this conclusion except by a squared scaling factor: if the effect is smaller by a factor of s, so that tau' = tau/s, then the required N grows by s^2.

# Looking at our equation for N
# N = ((SD/tau)*(qnorm(pi) - qnorm(alpha)))^2
# N = ((1/tau) * H)^2
# where H = SD*(qnorm(pi) - qnorm(alpha))

# Thus substituting in tau'
# N = ((1/(tau/s)) * H)^2 = s^2 * (H/tau)^2
 
# So if s = 10 then N(tau') is about N(tau)*100
samp.power(SD=2, tau=1/10, pi=.8, alpha=.001)
# 6184

This is not to make light of the proposed rejection standard and its larger sample sizes: increasing the sample size by roughly a factor of 2.5 could easily double the cost of a study.

However, being able to say that the chance of the outcome occurring randomly has fallen from 1 in 20 to 1 in 1000 might easily be worth the additional cost.

The nice thing about this new approach would be that it would still allow for less strong rejections even when the effect size is smaller than expected or when there is more noise in the sample than expected.

Well, that is what I have to say in support of the idea.  I also have some reservations.  If the cost of the stricter rejection standard really is a doubling of the cost of the study, then why not do two studies?  Assuming the outcomes of each study are independent and identically distributed, the likelihood of falsely rejecting the null at the 5% level in both studies is 1 out of 20^2, or 1 out of 400.
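To make the arithmetic explicit:

alpha <- .05
alpha^2   # 0.0025, i.e. 1 out of 400 when both independent studies must reject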

.25% (1/400) is not as good as .1%, but it is still pretty strong.
In a world where we do not know what really has an effect, it makes sense to run more small studies and to follow up with larger studies when we do find an effect.  In addition, there might be factors unique to individual studies which are, for whatever reason, unobservable and nonreproducible, driving the results.

Say the researchers introduced bias without intending to.  Scaling up the project might do nothing to remove or control that bias.  However, having two different studies run by different research groups is less likely to reintroduce the same bias.

Overall, I think it may be useful to introduce higher standards into social science research, especially for non-experimental data in which numerous researchers are looking at the data with different hypotheses.  If enough researchers look at the data from enough angles, it is improbable that there will not be at least a few who reject at the 5% level.  Imagine that you have 20 research teams each testing 50 different angles: that is 1,000 different draws.

Assuming the tests are independent, a 5% level would lead researchers to falsely reject 50 null hypotheses on average.  That is a lot of false rejections.  Choosing a level of .1%, however, would on average lead to only one false rejection.  That is a pretty appealing change for a conservative statistician.  However, once again, there will be many times in which we fail to reject the null even when our hypotheses are in fact true, because our data or effect sizes are insufficiently large.  Which would we rather have?
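The expected number of false rejections here is just the number of tests times the rejection rate (a quick check of the figures above):

tests <- 20 * 50   # 20 teams x 50 angles = 1000 tests
tests * .05        # about 50 false rejections expected at the 5% level
tests * .001       # about 1 false rejection expected at the .1% level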

Tuesday, November 19, 2013

A Survey Tool Designed Entirely in Shiny Surveying Users of R

I have written a very basic survey tool built entirely in the Shiny package of R.  I hope the tool is useful.  Modifying the survey for your own purposes is trivially easy (I hope).


I have not commented my code so it is pretty messy right now.  You can find the source on GitHub.

In order to run your own survey, all you need to do is edit the Qlist.csv file.  It contains one row for each question, with one column for the question text and additional columns for the potential answers (which populate the radio buttons).  Every question has "Prefer not to answer" selected as the default option.  It should be easy to modify this option as well, since you need only find where this text appears in the server.R code.
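As a rough illustration, a question file might be put together like this (a hypothetical sketch only; the column names and layout here are my own assumptions, so check the Qlist.csv in the GitHub repository for the format the app actually expects):

Qlist <- data.frame(
  Question = c("How would you rate your knowledge of R?",
               "How often do you read R-bloggers?"),
  Option1  = c("None",     "Never"),
  Option2  = c("Basic",    "Monthly"),
  Option3  = c("Moderate", "Weekly"),
  Option4  = c("Advanced", "Daily"))
write.csv(Qlist, "Qlist.csv", row.names=FALSE)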

It should be equally easy to change the introductory message and the attribution to me, though it would be nice if you left a link back to me somewhere.

I will comment the GitHub files as soon as I have a chance.  If people find this tool useful perhaps I will work on improving it.  Please tell me if you find it useful or have suggestions for future improvements!

Please note that the survey is not really up and running in a sustainable way: I was recently told that the ShinyApps.io server must sometimes be restarted, which restores all of the files to those present when the app was originally deployed.


http://econometricsbysimulation.shinyapps.io/Survey/

Sunday, November 17, 2013

Alpha testing shinyapps.io - first impressions

ShinyApps.io is a new server, currently in alpha testing, for hosting Shiny applications.  It is being designed by the RStudio team and provides some features distinct from those of the spark.rstudio server, which is intended primarily for pilot testing Shiny apps.  ShinyApps.io is intended for larger applications and, I am guessing, commercial applications in the long run.  Right now it only allows users into the alpha program by invitation, but they are accepting applicants for beta testing.

The way the user interacts with the system is extremely powerful.  With the standard shiny package it is easy to experiment with applications under development by simply navigating to the directory of interest with setwd() and then trying your app with the runApp() function from the shiny library.  The new service has its own functions, including the new and extremely powerful deployApp() function, which acts like runApp() except that it contacts the ShinyApps server, immediately sets up a connection, and deploys your app.  This makes the entire process of developing and deploying Shiny applications much easier.
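Roughly, the workflow looks like this (a sketch; the directory name is hypothetical, and I am assuming deployApp() is loaded from RStudio's shinyapps package once your account is set up):

library(shiny)
setwd("~/shiny-apps/my-app")  # hypothetical local app directory
runApp()                      # test the app locally

library(shinyapps)            # assumed package providing deployApp()
deployApp()                   # push the current app directory to ShinyApps.io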

I hope that this new service will further increase the user base of shiny! I think an R based interface for generating graphs will provide an invaluable tool for teaching and demonstrating empirical analysis methods.

I have been playing around with Shiny's animation-like functionality.  I have created an animation-style interface simulating a bouncing ball.

http://econometricsbysimulation.shinyapps.io/bounce/

GitHub Source

Friday, November 15, 2013

A Shiny App for Experimenting with Dynamic Programming

This post demonstrates the dynamics involved in a susceptible, infected, and recovering (SIR) model, an application of dynamic programming.  See my previous post for the model.  The Shiny ui and server code can be found on GitHub.

Because the infection model is dynamic, I find it particularly satisfying to be able to change parameters and instantaneously observe the changes in predicted outcomes.

This is a very simple model.  However, many interesting models that use this same basic structure are feasible.  A more involved, though fundamentally no more complex, model might consider a simulation in which there are multiple sub-populations with different contact rates and transmission rates.  How might an optimal intervention be positioned in order to minimize total population exposure?
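For reference, the underlying dynamics are easy to simulate directly.  Here is a minimal discrete-time SIR sketch (the parameter values are made up for illustration and are not those used in the app):

sir.sim <- function(beta=.3, gamma=.1, S0=.99, I0=.01, periods=100) {
  S <- I <- R <- numeric(periods)
  S[1] <- S0; I[1] <- I0; R[1] <- 0
  for (t in 2:periods) {
    infections <- beta * S[t-1] * I[t-1]  # new infections this period
    recoveries <- gamma * I[t-1]          # new recoveries this period
    S[t] <- S[t-1] - infections
    I[t] <- I[t-1] + infections - recoveries
    R[t] <- R[t-1] + recoveries
  }
  data.frame(t=1:periods, S=S, I=I, R=R)
}
out <- sir.sim()
matplot(out$t, as.matrix(out[,c("S","I","R")]), type="l", lty=1,
        xlab="Time", ylab="Population share")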

You can experiment with the app yourself at:

http://econometricsbysimulation.shinyapps.io/Dynamic-Pro

Monday, November 11, 2013

A Shiny App for Playing with OLS


http://spark.rstudio.com/fsmart/OLS-App/

Ordinary least squares continues to be the staple estimator for causal inference, for good reason.  In order to help new and veteran OLS users get a better sense of how it works, I have created a Shiny app that instantly returns coefficient estimates and prediction graphs through Shiny's easy-to-use interface controls.

The app only has a single x variable which is randomly drawn from a normal distribution with mean 2 and standard deviation specified by the user.  There is also an error term u which has mean 0 and standard deviation specified by the user.

The user also has control of how many observations to generate and how to generate the dependent variable y.
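Outside the app, the same sort of data generating process can be sketched in a few lines (the particular values below are just examples of what a user might choose):

n   <- 100                   # number of observations
sdx <- 1                     # user-chosen sd of x
sdu <- 2                     # user-chosen sd of the error
x   <- rnorm(n, mean=2, sd=sdx)
u   <- rnorm(n, mean=0, sd=sdu)
y   <- 1 + 2*x + u           # one hypothetical way to generate y
summary(lm(y ~ x))
plot(x, y); abline(lm(y ~ x))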

To play around with the app go to: econometricsbysimulation.shinyapps.io/OLS-App/

Source can be found on GitHub: github.com/EconometricsBySimulation/OLS-demo-App/

Monday, November 4, 2013

The Motivation for the Poisson Distribution

# The Poisson distribution has the interesting property that it
# models outcomes from events that are independent and equally
# likely to occur.  The distribution takes only one parameter, mu,
# which is equal to both the mean (the expected number of events)
# and the variance.
 
# This distribution, as with all distributions, is somewhat
# fascinating because it represents an approximation of a
# real-world phenomenon.
 
# Imagine you are trying to model mail delivery on Wednesdays.
 
# On average you receive 9 pieces of mail.  If the mail delivery
# system is well modeled by a Poisson distribution then
# the standard deviation of mail delivery should be 3,
# meaning on most days you should receive between 3 and 15 pieces
# of mail.
 
# What underlying physical phenomenon must exist for this to be
# possible?
 
# In order to aid this discussion we will think of the Poisson
# distribution as the limiting distribution of the sum of
# outcomes from a number of independent binary draws:
 
DrawsApprox <- function(mu, N) sum(rbinom(N,1,mu/N))
 
# The idea is that if we specify an expected number of outcomes mu
# and a number of draws N (with N > mu), then we can approximate a
# single Poisson draw by summing across the binary outcomes.
 
DrawsApprox(9,9)
# In this case, of course, the sum is always 9 and the variance is 0.
# In this scenario there are 9 letters which are always
# sent out every Wednesday.
 
# More interestingly:
DrawsApprox(9,18)
# In this case there are 18 letters that may be sent out.
# Any one of them is possible at a 50% rate.
 
# We want to know what the mean and variance are.
# Let us design a simple function to achieve this.
evar <- function(fun, draw=100, outc=NULL, ...) {
  for(i in 1:draw) outc <- c(outc, get(fun)(...))
  list(outc=outc, mean=mean(outc), var=var(outc))
}
 
evar("DrawsApprox", draw=10000, N=18, mu=9)
# I get a mean very close to 9, as we should hope,
# but interestingly the variance is less than five.
# This is less than that of the Poisson, which is 9.
 
# Let's see what happens if we double the number of
# potential letters going out, which will halve the
# probability of any particular letter being sent.
evar("DrawsApprox", draw=10000, N=36, mu=9)
# Now the variance is about 6.7
 
evar("DrawsApprox", draw=10000, N=72, mu=9)
# Now 7.7
 
evar("DrawsApprox", draw=10000, N=144, mu=9)
# 8.6
 
evar("DrawsApprox", draw=10000, N=288, mu=9)
# 8.65
 
# We can see that as the number of letters gets very large,
# the mean and variance of the number of letters approach
# the same number, 9.  I will never be able to choose a
# large enough number of letters that the variance exactly
# equals the mean.
 
# However, the didactic point of how the distribution is
# structured, and when it may be appropriate to use it, should be
# clear.  Poisson is a good fit when the likelihood of each
# individual outcome is equal, yet the number of possible
# outcomes is large (in principle I could receive 100 pieces
# of mail in a single day, though it would be very unlikely).
 
bigdraw <- evar("DrawsApprox", draw=10000, N=1000, mu=9)
summary(bigdraw$outc)
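 
# As a point of comparison (a quick check, not part of the derivation
# above), we can draw directly from a Poisson distribution with mu = 9
# and confirm that its mean and variance are both close to 9.
poisdraw <- rpois(10000, 9)
mean(poisdraw)
var(poisdraw)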
 

Sunday, November 3, 2013

Batch Variable Rename - Stata

* Using macros it is very easy to rename any number of variables in Stata.

* Imagine you have a number of variables.

clear
set obs 10

forv i=1/100 {
  gen var`=`i'^2' = rnormal()
}

* We can rename variables using a dash (-), which tells Stata to select
* a range of variables from the variable list.

foreach v of varlist var1-var100 {
  rename `v' `v'A
}

* We can also rename with the wildcards * and ?

foreach v of varlist var*4* {
  rename `v' four`v'
}


Saturday, November 2, 2013

Consumer's Choosing an Optimal Bundle - Utility Maximization

# The theoretical basis of classical consumer theory lies
# in utility maximization.  The idea is that consumers
# make consumption decisions by choosing the bundle
# of goods that maximizes individual utility.
# This theory persists despite the hypothesis being largely
# unsupported by reproducible results indicating the
# superiority of any one utility function over all others.
 
# In this simulation I set up an easy framework for the 
# user to simulate the decision of the consumer as a
# function of the utility function, price of goods,
# and consumer budget.
 
 
# Uof is the function that takes a choice of
# x and y and calculates the corresponding z as
# well as expected utility.
 
Uof <- function(XY) {
  # x cannot be less than 0 or more than all of the
  # budget expended on x
  x <- min(max(XY[1],0), b/px)
  # y cannot be less than 0 or more than the remaining
  # budget left after x expenditures 
  y <- min(max(XY[2],0), (b - x*px)/py)
 
  # z is purchased with whatever portion of the budget 
  # remains
  z <- (b - x*px - y*py)/pz
 
  # Display the quantity of x,y, and z chosen.
  print(paste0("x:", x, " y:", y, " z:", z))
 
  # Since optim minimizes a function I am making
  # the returned value equal to negative utility.  
  -utility(x,y,z)
}
 
# I have defined a few different potential utility
# functions.
 
cobb.douglas2  <- function(x,y,z) x^.3*y^.3
cobb.douglas3  <- function(x,y,z) x^.3*y^.3*z^.4
leontief3      <- function(x,y,z) min(x,y,z)
linear.concave <- function(x,y,z) x^.3 + y^.2 + z^.5
mixed          <- function(x,y,z) min(x, y)^.5*z^.5
additive       <- function(x,y,z) 2*x+3*y+z
 
# Choose the utility function to maximize
utility <- cobb.douglas3
 
# Choose the prices
px <- 1
py <- 1
pz <- 1
 
# Choose the total budget
b <- 100
 
# Let's see how much utility we get out of setting
# x=1 and y=1
Uof(c(1,1))
 
# The following command will maximize the utility
# subject to the choice of the utility function, prices,
# and budget.
optim(c(1,1), Uof, method="BFGS")
 
# In general it is better not to rely on computational
# methods like this when you can instead solve the utility
# maximization problem mathematically in closed form,
# since a closed-form solution shows exactly how a change
# in one parameter of the model leads to a change in the
# quantity demanded of each type of good.
 
# Of course you could do something fairly simple along
# these lines in R as well.
 
# For instance, define vectors:
 
# Once again choose what utility function
utility <- linear.concave
 
xv  <- NULL
yv  <- NULL
 
pxv <- seq(.25,10,.25)
 
for (px in pxv) {
  res <- optim(c(1,1), Uof, method="BFGS")
  xv <- c(xv, max(res$par[1],0))
  yv <- c(yv, max(res$par[2],0))
}
 
plot(xv, pxv, type="l", main="Demand for X(px)",
     xlab="Quantity of X", ylab="Price of X")
 
# The map of the demand function for X 
 
  
 
plot(yv, pxv, type="l", main="Demand for Y(px)",
     xlab="Quantity of Y", ylab="Price of X") 
 
  
# The map of the demand function for y as a function
# of the price of x. It is a little erratic probably
# because of lack of precision in the optimization
# algorithm.
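 
# One way to reduce that noise (a sketch, not a definitive fix) is to
# give the optimizer box bounds so it searches only over non-negative
# bundles, rather than relying entirely on the clipping inside Uof:
res <- optim(c(1,1), Uof, method="L-BFGS-B",
             lower=c(0,0), upper=c(b/px, b/py))
res$par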

Friday, November 1, 2013

Efficiency Balanced Information Criterion for Item Selection

# Han (2012), in the paper "An Efficiency Balanced Information Criterion
# for Item Selection in Computerized Adaptive Testing", proposes a method
# of evaluating potential items based on expected item information
# relative to the item's maximum potential information.
 
# This method favors selecting items with lower a values early on,
# when there is greater uncertainty in the test, but favors selection
# of items with higher a parameters as the test progresses.
 
# This small bit of code demonstrates how such a procedure rescales
# item information.
 
# First we will define a few functions that we will use to construct our scale.
 
# Birnbaum's result approximates the theta which maximizes the information
# function at a specific a, b, and c parameter level:
tmax <- function(a,b,c,D=1.7)
  b + 1/(D*a)*log((1+sqrt(1+8*c))/2)
 
# For example:
tmax(a=2,b=2,c=.2)
 
# This is the item information function for a 3PL (3 parameter logistic)
iinfo <- function(theta,a,b,c,D=1.7)
  ((D*a)^2*(1-c))/((c+exp(D*a*(theta-b)))*
                 (1+exp(-D*a*(theta-b)))^2)
 
iinfo(theta=0,a=1,b=0,c=.1)
 
# Now we define a function which approximates the integration of function 
# "fun" from start to end.
integ <- function(start,end, step, fun, ...) {
  x <- seq(start+step/2,end-step/2,step)
  sum(get(fun)(x, ...)*step)
}
# As the step size goes to zero the integ function approaches true
# integration.  Of course that would mean infinitely many calculations,
# which would be impossible for any computer.  Thus a larger step size
# gives a worse approximation but uses less machine time.
 
# For example
a <- function(x,y) x^y
 
# Let's see
integ(0,2,.00001, "a", y=0)
integ(0,2,.00001, "a", y=1)
# Looking good.
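 
# As a cross-check (not part of the original procedure), base R's
# integrate() gives essentially the same answers without our having to
# choose a step size:
integrate(a, 0, 2, y=0)  # about 2
integrate(a, 0, 2, y=1)  # about 2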
 
# This is the big function that we are interested in:
IE <- function(thetahat,SEE,a,b,c,D=1.7,step=.001) {
  # thetahat is the current estimate of ability
  # SEE is the current standard error of the estimate
  # step is the step size used to approximate the integral
 
  # We calculate the item information at the current thetahat
  ii <- iinfo(thetahat,a=a,b=b,c=c,D=D)
  # Now we calculate the "max" theta value for the item.
  thetamax <- tmax(a=a,b=b,c=c,D=D)
  # Now the max information for that item.
  maxI <- iinfo(thetamax,a=a,b=b,c=c,D=D)
  # The efficient information as defined by Han at the
  # current theta is:
  ie <- ii/maxI
 
  # einfo is the expected information for a particular
  # item, integrated across the range thetahat - 2*SEE to
  # thetahat + 2*SEE.
  einfo <- integ(thetahat-SEE*2, 
               thetahat+SEE*2, 
               step=step, 
               "iinfo",
               a=a,b=b,c=c,D=D)
 
  # Finally we can rescale the expected item information
  # by the maxI to find the expected item efficiency.
  eie <- einfo/maxI
 
  # This provides a list of returned values.
  list(eie=eie, 
       ii=ii,
       ie=ie,
       maxI=maxI, 
       thetamax=thetamax, 
       einfo=einfo)
}
 
test <- IE(0,1,a=1,b=0,c=.1,step=.001)
test
 
# Let's see this criterion in action:
theta <- seq(-3,3,.1)
 
# Make a list of returns
returns <- names(test) 
 
for(v in returns) assign(v,NULL)
 
# Let's create one last function that returns a list of 
# mappings for each of the ability levels.
 
mapping <- function(theta=seq(-3,3,.1), SEE=.5,a=1,b=0,c=.1,step=.001) {
  I1 <- list()
  for(i in 1:length(theta)) {
    res <- IE(thetahat=theta[i],SEE=SEE,a=a,b=b,c=c,step=step)
    for(v in returns) I1[[v]][i] <- res[[v]]
  }
  I1
}
 
# Now let's imagine five different items
I1 <- mapping(a=.5 , b=-1.5, c=.3, SEE=.5)
I2 <- mapping(a=1  , b=-1  , c=.3, SEE=.5)
I3 <- mapping(a=1.7, b=0   , c=.3, SEE=.5)
I4 <- mapping(a=1  , b=1   , c=.3, SEE=.5)
I5 <- mapping(a=1.5, b=1.5 , c=.3, SEE=.5)
 
plot(theta , I3$ii, type="n",
     main="Item Information at ThetaHat
     SEE=.5",
     xlab="ThetaHat", ylab="Information")
lines(theta, I1$ii, lwd=2, col="red")
lines(theta, I2$ii, lwd=2, col="blue")
lines(theta, I3$ii, lwd=2, col="green")
lines(theta, I4$ii, lwd=2, col="purple")
lines(theta, I5$ii, lwd=2, col="black")
# We can see that some items have much more information
# than other items, such that they would almost never
# be selected.  Item 4, for instance, is almost never expected
# to yield higher information than the others.
 
# If we are less sure of our theta estimate we may instead
# calculate our expected information.
plot(theta , I3$einfo, type="n",
     main="Expected Item Information at ThetaHat
     SEE=.5",
     xlab="ThetaHat", ylab="Information")
lines(theta, I1$einfo, lwd=2, col="red")
lines(theta, I2$einfo, lwd=2, col="blue")
lines(theta, I3$einfo, lwd=2, col="green")
lines(theta, I4$einfo, lwd=2, col="purple")
lines(theta, I5$einfo, lwd=2, col="black")
# In general this basically makes the peaks less extreme but
# does not generally favor our items with lower a values.
 
# If we want to see how our expected efficiency item
# information criterion behaves, we can look at that as well.
# However, before we do that, imagine first each of these
# information functions divided by its peak value.
plot(c(0,theta) , c(0,I1$eie), type="n",
     main="Expected Efficiency Item Information at ThetaHat
     SEE=.5",
     xlab="ThetaHat", ylab="Information")
lines(theta, I1$eie, lwd=2, col="red")
lines(theta, I2$eie, lwd=2, col="blue")
lines(theta, I3$eie, lwd=2, col="green")
lines(theta, I4$eie, lwd=2, col="purple")
lines(theta, I5$eie, lwd=2, col="black")
# Now we can see that item 1 (red) and 4 (purple) are favored by 
# this algorithm, though by standard item maximization or by 
# expected item maximization they would almost never have been
# chosen. 
# The author suggests that summing the efficiency information
# and the expected information might yield a good solution.
plot(c(0,theta) , c(0,I3$eie+I3$einfo), type="n",
     main="Expected Efficiency Item Information at ThetaHat
     SEE=.5",
     xlab="ThetaHat", ylab="Information")
lines(theta, I1$eie+I1$einfo, lwd=2, col="red")
lines(theta, I2$eie+I2$einfo, lwd=2, col="blue")
lines(theta, I3$eie+I3$einfo, lwd=2, col="green")
lines(theta, I4$eie+I4$einfo, lwd=2, col="purple")
lines(theta, I5$eie+I5$einfo, lwd=2, col="black")
# The argument is that as SEE gets small the information begins
# to look much more like that of item information, which is
# appropriate for later in the test.
I1 <- mapping(a=.5 , b=-1.5, c=.3, SEE=.15)
I2 <- mapping(a=1  , b=-1  , c=.3, SEE=.15)
I3 <- mapping(a=1.7, b=0   , c=.3, SEE=.15)
I4 <- mapping(a=1  , b=1   , c=.3, SEE=.15)
I5 <- mapping(a=1.5, b=1.5 , c=.3, SEE=.15)
 
plot(c(0,theta) , c(0,I3$eie), type="n",
     main="Expected Efficiency Item Information at ThetaHat
     SEE=.15",
     xlab="ThetaHat", ylab="Information")
lines(theta, I1$eie, lwd=2, col="red")
lines(theta, I2$eie, lwd=2, col="blue")
lines(theta, I3$eie, lwd=2, col="green")
lines(theta, I4$eie, lwd=2, col="purple")
lines(theta, I5$eie, lwd=2, col="black")
# Item 1 (red) and item 4 (purple) are still relatively favored by
# this criterion, though by standard item information maximization
# they would almost never have been chosen.
# The author suggests that summing the efficiency information
# and the expected information might yield a good solution.
plot(c(0,theta) , c(0,I3$eie+I3$einfo), type="n",
     main="Expected Efficiency Item Information at ThetaHat
     SEE=.15",
     xlab="ThetaHat", ylab="Information")
lines(theta, I1$eie+I1$einfo, lwd=2, col="red")
lines(theta, I2$eie+I2$einfo, lwd=2, col="blue")
lines(theta, I3$eie+I3$einfo, lwd=2, col="green")
lines(theta, I4$eie+I4$einfo, lwd=2, col="purple")
lines(theta, I5$eie+I5$einfo, lwd=2, col="black")
# We can see that item 1 is still favored though we expected
# it to give us very little information.  Overall, the
# method seems interesting but not yet ideal.