Monday, 25 November 2013

Formative Assessment

Diagnostic assessments can be used formatively, but they are somewhat distinct from true formative assessments in that they should not be administered often during a school year, to avoid test–retest improvement through item familiarity. The National Mathematics Advisory Panel (2008) defines formative assessment as “the ongoing monitoring of student learning to inform instruction…[and] is generally considered a hallmark of effective instruction in any discipline” (p. 46). Formative assessment, by its nature, is intended to improve instruction rather than to measure achievement or readiness, and it should be thought of as a process rather than as a set of individual instruments (Good, 2011).


Formative assessments can be informal or formal. Formal formative assessments are prepared instruments, while informal formative assessments are typically spontaneous questions asked in class to check for student understanding (Ginsburg, 2009; McIntosh, 1997). Both approaches are useful, but in different ways. Formal formative assessments are what most people think of when the topic is raised: student quizzes, district benchmarks, or assessments created for a specific purpose.


The prepared nature of these assessments is both a strength and a weakness. On the plus side, because these assessments are prepared for particular purposes, they can be directly and thoughtfully linked to particular learning or curriculum theories. In addition, the interpretations of the data gathered from these assessments can be fixed ahead of time. For example, students who get questions 1 and 2 incorrect by selecting answer choices a) and c) can be quickly identified as making the same error in reducing fractions. On the down side, however, the highly structured nature of formal formative assessments lacks the real-time spontaneity found in informal assessments, which may limit the amount of feedback a student provides. For example, a quiz on solving two-step algebraic equations may reveal procedural misunderstandings (such as subtracting a variable from both sides instead of adding it to both sides) or operational errors (computational mistakes). In contrast, informal formative assessments, such as classroom discourse, allow a teacher to question the student the moment a misunderstanding or error is detected. For example, a teacher could ask, “Why do we add this to both sides of the equation?” or “Can you explain why we did this step?” These follow-up responses provide teachers with more nuanced information that can be used to understand why a procedural or operational error was made, not simply whether such an error was made.
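
To make the distinction between procedural and operational errors concrete, the following is a small worked illustration written as a minimal LaTeX sketch. The equation and the specific wrong answers are illustrative assumptions, not examples drawn from the cited studies.

% Hypothetical worked example contrasting a correct solution of a two-step
% equation with a procedural error and an operational error.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Correct solution:
\begin{align*}
2x + 6 &= 14 \\
2x     &= 8  && \text{subtract 6 from both sides} \\
x      &= 4  && \text{divide both sides by 2}
\end{align*}

Procedural error (the wrong operation is applied to both sides):
\begin{align*}
2x + 6 &= 14 \\
2x     &= 20 && \text{6 is added to the right side instead of subtracted} \\
x      &= 10
\end{align*}

Operational error (the right procedure with an arithmetic slip):
\begin{align*}
2x + 6 &= 14 \\
2x     &= 9  && \text{arithmetic slip: $14 - 6$ computed as 9} \\
x      &= 4.5
\end{align*}

\end{document}

A score sheet alone would show only that both flawed attempts are wrong; it is the written work, or a follow-up question during discourse, that distinguishes the procedural misunderstanding from the computational slip.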


While informal formative assessments can be prepared ahead of time, the interpretation of responses and the follow-up questions generally occur in the course of a dynamic classroom session. Thus, these informal assessments can occur many times in a single session and can be tailored to the issues that come up on a day-to-day basis. However, real-time interpretation relies heavily on the mathematical knowledge and skills of the teacher to select appropriate questions, follow up thoughtfully, diagnose quickly, and make meaningful modifications in that session or in subsequent lessons.


This distinction between formal and informal assessment is useful from a functional perspective, yet it is less useful as a pedagogical categorization because it encompasses so many different forms of assessment. Piaget (1976) provided a more useful framework for teachers, categorizing formative assessments into three groups based on their form: observation, test, and clinical interview (as cited in Ginsburg, 2009).


Observation-based formative assessments are intended to reveal information about “natural behavior” (Ginsburg, 2009, p. 112). This could include a conversation between two children about which number is ‘larger.’ Natural behavior may reveal informal or casual language use, or everyday interactions between two students, that might differ if the students were required to answer a question or solve a problem in front of a teacher or the class. However, Ginsburg argues that observations are highly theoretical and can be difficult to carry out in large classroom settings, and thus may have limited utility for teachers trying to improve student performance. Task or test forms of formative assessment are pre-determined questions or projects given to some or all students that assess accuracy and problem-solving strategy; they are analogous to the formal assessments described previously. These instruments can come in the form of worksheets, pop quizzes, mathematics journals, discourse, and student demonstrations.


Worksheets and pop quizzes can contain a number of questions that (like the diagnostic tests described earlier) assess cognition through error and skill analysis. Student mathematics journals and student discourse about problems are additional, non-test-based formative assessment tools that allow students to directly express areas of concern, confusion, and feelings toward instructional strategies. In addition, class discussions can help identify gaps in student understanding, whether students volunteer to speak or the teacher calls on specific students to answer questions. Student demonstrations allow students to solve and explain problems in front of the class; through this form, teachers can gain insight into students’ computational skills as well as their conceptual understanding via the student-generated explanations. These brief formative assessments can be useful and reliable sources of information for checking student understanding, but they require a great deal of teacher expertise to capitalize on the information (Phelan, Kang, Niemi, Vendlinski, & Choi, 2009).


Additionally, instant forms of formative assessment, including electronic clickers, index cards, and individual whiteboards (teachers ask questions and students answer by holding up their whiteboards), allow teachers to immediately re-teach topics where conceptual or computational errors exist (Crumrine & Demers, 2007). Because task or test forms of formative assessment may not capture cognitive processes, clinical interviews can be conducted (Piaget, 1976, as cited in Ginsburg, 2009). An adaptation of the clinical interview appropriate for the mathematics classroom would begin with an observation of the student performing a pre-chosen task. The interviewer proposes a hypothesis about the behavior, assigns new tasks, and then asks a series of questions that prompt explanations of how the student is behaving or thinking. The interview should be student-centered, and questions should be constructed in real time (Piaget, 1976). Effective clinical interviews are based on strong theory, hypotheses, and evidence (Piaget, 1976). Although interviews can provide more insight into student thinking than observations or tests, they are dependent on human skill and may not be reliable (Ginsburg, 2009).


Formative assessments can reveal information about a student’s performance, thinking, knowledge, learning potential, affect, and motivation (Ginsburg, 2009). These assessments, when part of a structured process, may lead to significant increases in academic achievement (Black & Wiliam, 2009; Davis & McGowen, 2007). Black and Wiliam (1998) found that the use of formative assessments had an effect size between 0.4 and 0.7 standard deviation units and that, across 250 studies, areas of increased achievement all had the use of formative assessments in common. Effect sizes greater than 0.4 are considered moderate to strong.
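
For readers less familiar with the metric, an “effect size in standard deviation units” typically refers to a standardized mean difference. The formulation below (commonly called Cohen’s d) is offered only as background; it is a standard definition, not a formula given in the sources cited above.

% Standardized mean difference (Cohen's d): the usual meaning of an effect
% size reported in standard deviation units. Background only; not a formula
% taken from the cited sources.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
d = \frac{\bar{X}_{\text{treated}} - \bar{X}_{\text{comparison}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]
\end{document}

On this scale, an effect of 0.4 to 0.7 means that the average student in the group receiving formative-assessment-informed instruction scores roughly 0.4 to 0.7 pooled standard deviations above the comparison-group mean.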


Effective instructional change based on formative assessment results can have multiple effects. First, these assessments can benefit the current cohort of students through instructional improvement tailored to their specific needs. Second, these instructional improvements remain available for future cohorts if their formative assessments reveal similar conceptual misconceptions or computational errors (Davis & McGowen, 2007). Black and Wiliam (2009) argue that using formative assessments must be an ongoing, iterative process because there is always room for improving the formative assessments as a guide to alter instruction and curriculum.


Formative assessments are growing in importance; they are critical in revealing student knowledge, motivation, and thinking; and they have been part of various educational reforms over the past decade. Formative assessments can be both formal and informal, and some are more appropriate for particular types of curricula. Although information exists on the types and uses of formative assessments, difficulties and misunderstandings persist regarding how to interpret their results and what instructional steps should be taken following that interpretation. From formative assessments, teachers may be able to see which topics to re-teach, yet they may not have a clear understanding of how to alter instructional strategies to re-teach those topics. One way to address these issues is through a common-error approach, which is most effective within a collaborative teacher setting where teachers come to a consensus about explanations for those errors and about how to interpret the data generated. Understanding common errors in algebra could help teachers develop, interpret, and react to formative assessments. To help build a connection between common algebra errors, formative assessments, and instructional practice, this literature review aims to move toward a better understanding of how teachers can develop formative assessments that address common errors in algebra, how they can respond post-assessment to clarify misunderstandings, the importance of collaboration, and the possible role that professional learning communities can play in this overall process.
