Monday 25 November 2013

Integrating Assessment Results with Curriculum and Instructional Change

It is imperative that mathematics curricula be designed to incorporate results from a variety of assessments (Goertz et al., 2009). Ginsburg (2009) states that the foundation of formative assessment is its capability to provide information that teachers can use to make instructional decisions. Popham (2008) categorizes the changes that can result from the intentional integration of formative assessments:

• teacher’s instructional change (the teacher adjusts instruction based on assessment results)
• students’ learning tactic change (students use results to adjust their own procedures)
• classroom climate change (expectations for learning are changed across the entire classroom)
• school-wide change (through professional development or teacher learning communities, the school adopts a common formative assessment).


Formative assessment test questions are not always written in a way that allows for analysis of students’ procedural and conceptual understanding of mathematics (Goertz et al., 2009). For example, multiple-choice tests often contain distractors (wrong answers) designed to flag common errors; in a problem asking for the area of a circle, distractors may correspond to an incorrect area formula, a computational error, or a calculator input error. However, an individual distractor may reflect multiple errors, making it difficult for teachers to determine where the student went wrong. In addition, the pattern of correct and incorrect answers could be used to identify specific misunderstandings and, at the same time, increase the reliability of the assessments (Shepard, Flexer, Hiebert, Marion, Mayfield, & Weston, 2005; Yun, 2005).
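
To make the distinction concrete, consider a hypothetical item (the specific numbers here are my illustration, not drawn from Goertz et al.) asking for the area of a circle with radius 4 cm. A well-designed distractor set isolates one error per option:

    A = πr² = π(4)² = 16π ≈ 50.3 cm²    (correct)
    A = 2πr = 8π ≈ 25.1 cm²             (incorrect formula: circumference used for area)
    A = (πr)² = 16π² ≈ 157.9 cm²        (computational error: π squared along with r)
    A = 42π ≈ 131.9 cm²                 (calculator input error: “42” entered instead of 4²)

A distractor that combined two of these errors at once would leave the teacher unable to tell which mistake the student actually made.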


As discussed previously, formative test results are often difficult to interpret; even Piaget believed that he could not interpret the results of a standardized test because of the method of administration (Ginsburg, 2009). As a result, teachers often interpret student errors differently, which leads to differences in how they respond to results (Goertz et al., 2009). In Goertz et al.’s study, explanations of a single student error ranged from procedural to conceptual. For example, on a question requiring a student to add two fractions, some teachers diagnosed a procedural error, in which the student failed to find the common denominator, while others diagnosed a conceptual error, in which the student failed to understand that the denominator indicates how many parts make up the whole. These differences in interpretation matter because each explanation calls for a different pedagogical response.
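
A hypothetical response of the kind Goertz et al. describe makes the ambiguity visible (the specific fractions are my illustration):

    Correct:      1/2 + 1/3 = 3/6 + 2/6 = 5/6
    Student work: 1/2 + 1/3 = 2/5    (numerators and denominators added separately)

Read procedurally, the student skipped the common-denominator step; read conceptually, the student does not understand what the denominator represents. The written answer alone cannot distinguish the two.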


These findings suggest that the design of formative assessments should clearly reflect their intended use, so that the number and types of plausible explanations for an incorrect response are narrowed, either through the design of the assessment tool itself or through additional inquiry intended to differentially diagnose the reasons for the error. Further, the literature suggests creating professional communities in which teachers can discuss their differing interpretations and come to a consensus about how to address them. In addition, constructs, formats, and any supplemental components should align with state or district standards, and instructional strategies should align with the curriculum’s approach (Goertz et al., 2009). The broader principle underlying Goertz et al.’s work is that an assessment should be used for a single purpose; tests intended for formative use may therefore need to be paired with other instruments, such as a summative unit test or project, to serve evaluative and predictive purposes (Goertz et al., 2009).


