Monday 25 November 2013

Teachers’ Use & Misuse of Formative Assessments

A key problem with the use of formative assessments arises after the design and implementation phase. Formative assessments are often viewed as objects rather than as a process by which student achievement and understanding can be improved through the use of assessment information (Good, 2011). According to Good, the phrase “formative use of assessment information” is more appropriate than the simple term “formative assessment,” largely because it places the emphasis on the important aspect of the assessments: the use of the information rather than the instruments themselves. However, this move from assessment data to data use is often the most difficult step to manage in the classroom. More specifically, once a diagnostic or formative assessment has been administered, teachers are often unsure how to interpret and act upon the data (Dixon & Haigh, 2009).


According to Heritage, Kim, Vendlinski, and Herman (2009), teachers find it more difficult to plan instructional changes based on assessment results than to perform other tasks, such as using student responses to gauge student understanding. This difficulty can result in poor use of the information these assessment instruments provide. As Poon and Lung (2009) observe, “[T]eachers do not understand their students’ learning process well, and hence their teaching skills and methodology do not match the needs of these students” (p. 58). Goertz et al. (2009) likewise found that the instructional change teachers most often made in response to formative assessment results was deciding which topics to reteach, with little variation in approach and little targeting of specific conceptual misunderstandings. While this approach does respond to the data generated by formative assessments, it often leaves much of the information those assessments could provide unused.


Moreover, the threshold at which teachers chose to respond with instructional change varied from school to school and even from teacher to teacher (Goertz et al., 2009). For example, one teacher may use a classroom success rate of 80 percent as the threshold for re-teaching while another may use 60 percent, so the performance level that triggers instructional change differs from teacher to teacher. Heritage et al. (2009) found that the interaction between teachers’ pedagogical knowledge, knowledge of mathematical principles, and mathematical tasks produced the largest error variability in teachers’ knowledge of appropriate formative assessment use. This suggests that the teachers with the most knowledge of the mathematical principles and tasks represented by the assessment knew best how to use the formative assessment instruments to inform their instructional practices.


Finally, teachers weighed various factors when deciding how to alter instruction. For example, when making instructional decisions, teachers often considered their own knowledge of individual students, how students performed compared to classmates, and their own perceptions of what students found challenging (Goertz et al., 2009). In addition, teachers in Goertz et al.’s study were not surprised by the results of the interim assessments: “they mentioned that the interim assessments largely confirmed what they already knew about student learning in mathematics” (p. 5). However, some teachers did follow up with individual students in order to alter future instruction. These findings support those of Slavin et al. (2009) and suggest a mechanism for why so much of teachers’ instructional success is related to the choices teachers make in their approach to teaching.


While these findings may appear obvious (it makes sense that teachers who understand mathematics best would use formative assessments best, and that teachers take their knowledge of their students into account), they carry important implications for introducing formative assessment practices into schools. First, in schools where teachers do not have a strong understanding of mathematical principles or of the assessments themselves, merely introducing formative assessments is unlikely to produce positive changes in classroom pedagogy. There are also implications for program design. The types of assessment instruments introduced should match teachers’ content knowledge and pedagogical sophistication; instruments that require less mathematical sophistication to use well should be introduced first, scaffolding teachers toward the more complicated formative assessment tools described previously. Further, the more intimately teachers are involved in designing the formative assessment instruments, the more likely they are to understand the purpose of those assessments and, thus, to use them appropriately and effectively. Finally, the more input teachers have in creating the instruments, the more directly they can tailor them to local priorities and to their own knowledge of their students.


Ginsburg (2009) argues that a main challenge in mathematics education is providing professional development opportunities focused on assessment. Goertz et al. (2009) found that teachers who assessed conceptual understanding were more likely to respond with instructional change and to incorporate more varied instructional methods, such as using arrays for multiplication or relating the steps used in two-digit subtraction to those needed to complete a three-digit subtraction problem. Given this observed relationship, fostering these behaviors could be a focus of professional development: “Professional development for teachers should focus as well on teacher content knowledge, developing teachers’ instructional repertoires, and capacity to assess students’ mathematical learning” (Goertz et al., 2009, p. 9). Furthermore, teachers and principals in Volante, Drake, and Beckett’s (2010) study reported that professional learning communities (PLCs) gave them opportunities to discuss samples of student work with other practitioners and to work toward consistent measurement. PLCs could thus be a useful structure for providing these professional development opportunities and for linking the assessments to the instructional practices that will address their findings.
