Start Date

23-8-2022 1:45 PM

End Date

23-8-2022 2:45 PM

Subjects

Formative evaluation, Interrater reliability, Measures, Moderation (Assessment), Monitoring (Assessment), Reliability, Student assessment, Written tests, Teacher evaluation of student performance, Primary secondary education

Abstract

Assessment is an integral component of effective teaching, and a teacher’s professional judgement influences all routine aspects of their work. In the last 20 years, there has been considerable work internationally to support teachers in using assessment to improve student learning. However, there is a pressing issue that prevents teacher professional judgement from being exploited to its full potential. The issue relates to teacher assessments in the context of extended performances such as essays, and arises from the complexity of obtaining reliable or consistent teacher assessments of students’ work. Literature published in the United States, England and Australia details evidence of low reliability and bias in teacher assessments. As a result, despite policymakers’ willingness to consider making greater use of teachers’ judgements in summative assessment, and thus to provide for greater parity of esteem between teacher assessment and standardised testing, few gains have been made. While low reliability of scoring is a pressing issue in contexts where the data are used for summative purposes, it is also an issue for formative assessment. Inaccurate assessment necessarily impedes the effectiveness of any follow-up activity, and hence the effectiveness of formative assessment. In this session, Dr Sandy Heldsinger and Dr Stephen Humphry will share their research on writing assessment and explain how it has led to the development of an innovative assessment process that provides the advantages of rubrics, comparative judgements and automated marking with few of the disadvantages.

Place of Publication

Melbourne, Australia

Publisher

Australian Council for Educational Research

ISBN

978-1-74286-685-7

DOI

https://doi.org/10.37517/978-1-74286-685-7-1


Title

An innovative method for teachers to formatively assess writing online