Publication Date

4-2012

Comments

Paper presented at the Annual Conference of the American Educational Research Association (AERA), Vancouver, 13-17 April 2012.

Abstract

This paper is concerned with the empirical validation of the competency rubric described in another paper presented at the same conference: Turner, Ross (April 2012). Some drivers of test item difficulty in mathematics. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), Vancouver, 13-17 April 2012. http://research.acer.edu.au/pisa/4/

Using items developed for the PISA 2012 survey, and data collected as part of an extensive field trial of the PISA tasks conducted during 2011 in some 67 countries, the authors use multidimensional Rasch modelling and latent regression to examine the following three questions:

1. What is the level of agreement among raters when they apply the competency rubric?
2. Does each of the competencies capture different dimensions of cognitive complexity in the tasks?
3. To what extent do ratings of the cognitive complexity account for (predict) the difficulty of the tasks for students?
