Friday 29 November 2013

The Role of Mathematics Teachers towards Enhancing Students’ Participation in Classroom Activities

Large-scale comparative international and national surveys continue to show poor performance of students in Mathematics. Given such consistently poor performance, much research has sought to identify students’ in-school and out-of-school experiences that influence achievement and related outcomes, especially those that are alterable or partly alterable by educators and could be manipulated by policy makers. Research in western countries has shifted attention away from school-level factors to the learning environment of the classroom (Saburoh and Shyoichi 1984). In fact, all factors that contribute to educational outcomes exist in one way or another in classrooms that differ in terms of learning environments. These classroom environments have unique effects on pupils’ learning, independent of factors operating at the school and individual levels (Richard 1994).


The influence of classroom activities on students’ achievement is two to three times greater than that of school-level factors. Classroom teaching is a nearly universal activity designed to help students learn. It is the process that brings the curriculum into contact with students and through which educational goals are to be achieved. The quality of classroom teaching is a key to improving students’ learning (Brown et al. 2003). Although setting standards for content and performance is an important first step, merely doing so and holding teachers accountable will not improve students’ learning (Anderson and Brophy 1998). Accordingly, particular attention should be paid to the actual process of teaching. Although a number of studies of classroom activities provide the critical link between students’ achievement data and teacher practices at the classroom level, this link is unfortunately lacking in most national education surveys (Smith 1987).


Teaching and assessment are rarely studied at a national level, yet education policy is often discussed nationally. It is important to know what aspects of teaching and assessment contribute significantly to achievement so that national discussions of classroom practices focus on the typical experiences of students (Richard 2003). Accordingly, research is needed to answer the questions raised about the role of Mathematics teachers in enhancing students’ practices related to instructional activities and the classroom assessment environment.


Research findings suggest that several classroom instructional activities are associated with achievement, and that the ways in which instructional activities are presented in the classroom context affect student achievement (Sewell 1984; Anderson and Brophy 1998; Cooper 1998). Moreover, Sommer (1999) found that quality of instruction influences achievement at the class level.


Instructional activities in class include variables that describe aspects of classroom instruction, such as quality of teaching, teaching style, and opportunity to learn. The teaching context is established through preconceptions held by the teacher about the process of learning and how it might be facilitated (Mouly 1982). Perceptions of the learning process at various levels of constructivism inform different teaching practices, which in turn lead to modification of students’ perceptions of the learning environment. Quality of teaching has been found to be a significant predictor of student achievement even after controlling for the effects of student characteristics (Sommer 1999). In contrast, Cavas (2002) found that quality of teaching did not have a statistically significant effect on achievement at the classroom level. An important part of any instructional setting is the teaching style.


Research results suggest that teaching style exerts effects on student achievement that are independent of students’ characteristics (Smith 1987). The premise that “one teaching style fits all”, which is attributed to a teacher-centered teaching style, is not working for an increasingly diverse student population. Problems occur when teaching styles conflict with students’ learning styles, often resulting in limited learning or no learning at all. Haladyna and Shaughnessy (1983) offer learner-centeredness as a model for responding to classroom challenges because of its viability for meeting diverse needs. Both teaching styles (teacher-centered and learner-centered) recognize the student as a key factor in improving student achievement. The teacher-centered style places control for learning in the hands of the teacher, who decides what students will learn and how. In the learner-centered style, the teacher uses his or her expertise in content knowledge to help learners make connections, and provides a variety of instructional methods and techniques for helping learners construct their learning and develop a system for applying knowledge and theory (Brown et al. 2003).


Cooper (1998) found that students learn more in classes where they spend most of their time being taught or supervised by teachers rather than working on their own. One of the main factors related to achievement scores is opportunity to learn, which refers to the amount of time students are given to learn the curriculum. The extent of students’ opportunity to learn content bears directly and decisively on student achievement (Saburoh and Shyoichi 1984). In addition, homework is seen as a contribution towards students’ learning, extending the curriculum beyond the classroom, and it can be conceived as one facet of opportunity to learn in the sense that home assignments offer students the opportunity to continue school work after regular school hours. Anderson and Brophy argued that through homework assignments teachers could ensure that students extended their learning time beyond school hours (Anderson and Brophy 1998).


Homework could be considered a proxy measure for the degree to which teachers academically challenged, or “pressed”, their students. Beyond whether students do homework in Mathematics, the amount, type, and efficiency of that homework may also be important. Research on the amount of homework assigned by teachers has produced contradictory findings on achievement. For instance, Baumer showed that the frequency of homework assignments had a positive effect on achievement gains (Baumer 2002). In a review of studies, Cooper observed a positive linear relationship between hours per week spent on homework (5 to 10 hours) and achievement; Cooper also reported that the average correlation between time spent on homework and achievement was 0.21 (Cooper 1998). The assignment of appropriate homework can stimulate independent engagement in learning tasks. According to Gerades, textbook-based homework was associated with higher achievement (Gerades 1991).


Working on textbook problems and on projects, in contrast, was associated with lower test scores. Regular review of student homework can provide insight into student progress and sources of problems. A clear message needs to be conveyed to students that the responsibility to do homework is the same as the responsibility to work in class.


The classroom assessment environment has been defined as the context created for learners by several aspects of teachers’ use of formative and summative evaluations of their work, and assessment should, as far as possible, be integral to the normal teaching and learning programme. For instance, testing should be considered as an opportunity to learn (Anderson and Brophy 1998). In addition, when teachers know how students are progressing and where they are having trouble, they can use this information to make necessary instructional adjustments, such as offering more opportunities for practice (Smith 1987).


Feedback is required because students need information about their accomplishments in order to grow and progress (Gerades 1991). Feedback related to assessment outcomes helps learners become aware of any gaps that exist between their desired goal and their current knowledge, understanding, and skills, and guides them through the actions necessary to achieve the goal (Richard 1994).


Thus, a study of this topic is considered of vital importance for the following reasons:

1. It sheds light on the important domains of students’ participation in classroom activities that teachers of Mathematics use in their classrooms.

2. It sheds light on the role of the Mathematics teacher in enhancing students’ participation in classroom activities.

3. It helps decision-makers in the Ministry of Education understand the reality of students’ participation in classroom activities.

4. The results of this study will contribute solutions for how to enhance students’ participation in classroom activities.

Student Interaction in the Math Classroom: Stealing Ideas or Building Understanding

“It was the third math class of the year. My Grade 7 students were unusually eager. We were looking for patterns in a strategic list of solutions generated from a number game. As one student described a complex pattern in the sequence, a second student shouted: ‘She stole my idea!’ At that point, I knew my work was cut out for me. How could I possibly move this group of competitive students from believing that math was an individual sport where power lies in the hoarding of information and ‘getting the answer first’, to understanding the exponential power of mathematical thinking when it is shared and built collectively?”

Excerpted from a teacher’s journal

Research tells us that student interaction – through classroom discussion and other forms of interactive participation – is foundational to deep understanding and related student achievement. But implementing discussion in the mathematics classroom has been found to be challenging. In the math reform literature, learning math is viewed as a social endeavour. In this model, the math classroom functions as a community where thinking, talking, agreeing, and disagreeing are encouraged. The teacher provides students with powerful math problems to solve together and students are expected to justify and explain their solutions. The primary goal is to extend one’s own thinking as well as that of others. Powerful problems are problems that allow for a range of solutions, or a range of problem-solving strategies. Math problems are powerful when they take students beyond the singular goal of computational mastery into more complex math thinking. Research has firmly established that higher-order questions are correlated with increased student achievement, particularly for conceptual understanding. The benefits increase further when students share their reasoning with one another. Reform-based practices that emphasize student interaction improve both problem-solving and conceptual understanding without the loss of computational mastery. Why then does the traditional mathematics teaching model, focused on basic computational procedures with little facilitation of student discourse, continue to be the common instructional approach in many elementary schools?


Math teachers face a number of challenges in facilitating high-quality student interaction, or “math-talk”. The biggest is the complexity of trying to teach mathematics in ways they did not experience as students. Discomfort for some with their own level of math content knowledge and lack of sustained professional development opportunities also make teachers reluctant to adopt math-talk strategies. Further, the complex negotiation of math-talk in the classroom requires facilitation skills and heightened attention to classroom dynamics. The teacher must model math-talk so that students understand the norms of interaction in the math classroom, encourage students to justify their solutions and build on one another’s ideas, and finally step aside as students take increasing responsibility for sustaining and enriching interactions. Time is another challenge. In the face of curricular demands, the time required for facilitated interaction has been identified by teachers as an inhibitor to implementing math-talk. However, the research also tells us that despite these challenges, teachers have devised some particularly effective strategies for facilitating math-talk.


In an extensive study examining math classroom activity, student interaction was one of ten essential characteristics of effective mathematics teaching. However, left to their own devices, students will not necessarily engage in high-quality math-talk. The teacher plays an important role. According to this same study, three main activities of Ontario teachers who successfully facilitated math-talk were:

 1. The teacher assigned tasks that required students to work together to develop joint solutions and problem-solving strategies. 
2. The teacher provided instruction on and modeled expected behaviours focusing on group skills, shared leadership, and effective math communication. 
3. The teacher urged students to explain and compare their solutions and solution strategies with peers. Students were encouraged to be both supportive and challenging with peers. 

Other research has identified two more important roles:

4. The teacher knew when to intervene and when to let the conversation continue even if it was erroneous. 
5. Students were evaluated on their math-talk.


Five Strategies for Encouraging High-Quality Student Interaction

 1. The use of rich math tasks 

The quality of math tasks is of primary importance. When a task has multiple solutions and/or permits multiple solution strategies, students have increased opportunities to explain and justify their reasoning. If a task involves a simple operation and a single solution, there will be little or no opportunity to engage students.

2. Justification of solutions

Encouraging productive argumentation and justification in class discussions leads to greater student understanding. In a study of four teachers using the same lesson, Kazemi and Stipek found that there were significant differences in the quality of math-talk from class to class. Two of the four classes demonstrated evidence of deeper mathematical inquiry. In these two classes, the teachers explicitly asked students to justify their strategies mathematically and not merely recount procedures.

3. Students questioning one another

Getting students to ask each other good questions is a very powerful strategy. For example, King found that giving students prompt cards, with a range of higher-order questions, led to greater student achievement. The prompts were question stems such as “how are ... and ... similar?” Students applied current content to the questions (e.g., “how are squares and parallelograms similar?”). The students retained more when they used prompt cards than when they spent the same amount of time discussing content in small groups without prompts.

4. Use of wait time

Asking questions that call for higher-level thinking is not particularly helpful if students are not also given sufficient time to do the related thinking. Those teachers who increase the amount of time they give students to respond, allowing even three seconds instead of the usual one, have found that students give more detailed answers expressed with greater confidence. With increased wait time, combined with higher-level questions, student attitudes towards learning improve.

 5. Use of guidelines for math-talk

In a district-wide Grade 6 study, teachers were provided with professional development (PD) in mathematics content and pedagogical models for facilitating student interaction. The results on EQAO mathematics assessments, in year-over-year comparisons before and after the PD opportunity, indicated a substantial increase in student achievement, while the reading and writing scores remained consistent. In this project, guidelines for whole-class math-talk were modeled with teachers in active PD sessions and were subsequently implemented by participating teachers. A year later, some teachers were observed using the guidelines, which were still posted in their classrooms. These guidelines (see sidebar) help teachers and students engage in high-quality interaction leading to richer mathematical thinking, and deeper understanding of concepts and related applications.


In sum ... 

Let’s return to the concern raised in the opening vignette, where shared or similar solutions and strategies are described as the “stealing” of ideas. In order to move beyond this competitive and isolating approach which has had limited success, students must be encouraged to work, think, and talk together while engaging in powerful mathematics tasks. Clearly, the teacher plays a pivotal role in shaping the learning environment. By providing students with a framework for interaction, students can be guided effectively towards working as a learning community in which sharing math power extends understanding and leads to higher levels of achievement.


Tuesday 26 November 2013

Situative perspective on cognition

In this article we set out to consider what the situative perspective on cognition (that knowing and learning are situated in physical and social contexts, social in nature, and distributed across persons and tools) might offer those of us seeking to understand and improve teacher learning. As we pointed out earlier, these ideas are not entirely new. The fundamental issues about what it means to know and learn addressed by the situative perspective have engaged scholars for a long time. Almost a century ago, Thorndike and Dewey debated the nature of transfer and the connections between what people learn in school and their lives outside of school. These issues, in various forms, have continued to occupy the attention of psychologists and educational psychologists ever since (Greeno et al., 1996).


Labaree (1998) argued in an article that this sort of continual revisiting of fundamental issues is endemic to the field of education. Unlike the hard sciences, whose hallmark is replicable, agreed-upon knowledge, education and other soft knowledge fields deal with the inherent unpredictability of human action and values. As a result, the quest for knowledge about education and learning leaves scholars feeling as though they are perpetually struggling to move ahead but getting nowhere. If Sisyphus were a scholar, his field would be education. At the end of long and distinguished careers, senior educational researchers are likely to find that they are still working on the same questions that confronted them at the beginning. And the new generation of researchers they have trained will be taking up these questions as well, reconstructing the very foundations of the field over which their mentors labored during their entire careers.


Questions about the nature of knowing and the processes of learning have not been matters only for academic debate. Teacher educators have long struggled to define what teachers should know (e.g., Carter, 1990; Holmes Group, 1986; National Board for Professional Teaching Standards, 1991) and to create environments that support meaningful teacher learning (e.g., Howey & Zimpher, 1996; Sykes & Bird, 1992).
These struggles have played out in ongoing attempts to teach pre-service teachers important principles of learning, teaching, and curriculum in ways that connect to and inform their work in classrooms. They have resulted in solutions ranging from teaching carefully specified behavioral competencies believed to be central to effective teaching (e.g., Rosenshine & Stevens, 1986) to building teacher education programs around immersion in public school classrooms (e.g., Holmes Group, 1990; Stallings & Kowalski, 1990).


Given the enduring nature of these questions and the debates surrounding them, what is to be gained by considering teacher knowledge and teacher learning from a situative perspective? Can this perspective help us think about teaching and teacher learning more productively? We believe it can: the language and conceptual tools of social, situated, and distributed cognition provide powerful lenses for examining teaching, teacher learning, and the practices of teacher education (both pre-service and in-service) in new ways. For example, these ideas about cognition have helped us, in our own work, to see more clearly the strengths and limitations of various practices and settings for teacher learning. But this clarity comes only when we look closely at these concepts and their nuances. By starting with the assumption that all knowledge is situated in contexts, we were able to provide support for the general argument that teachers' learning should be grounded in some aspect of their teaching practice. Only by pushing beyond this general idea, however, to examine more closely the question of where to situate teachers' learning, were we able to identify specific advantages and limitations of the various contexts within which teachers' learning might be meaningfully situated: their own classrooms, group settings where participants' teaching is the focus of discussion, and settings emphasizing teachers' learning of subject matter. Similarly, ideas about the social and distributed nature of cognition help us think in new ways about the role of technological tools in creating new types of discourse communities for teachers, including unresolved issues regarding the guidance and support needed to ensure that conversations within these communities are educationally meaningful and worthwhile.


We close with two issues that warrant further consideration. First, it is important to recognize that the situative perspective entails a fundamental redefinition of learning and knowing. It is easy to misinterpret scholars in the situative camp as arguing that transfer is impossible, that meaningful learning takes place only in the very contexts in which the new ideas will be used (e.g., Anderson et al., 1996; Reder & Klatzky, 1994). The situative perspective is not an argument against transfer, however, but an attempt to recast the relationship between what people know and the settings in which they know it, between the knower and the known (Greeno, 1997). The educational community (and our society at large) has typically considered knowledge to be something that persons have and can take from one setting to another.


When a person demonstrates some knowledge or skill in one setting but not another (e.g., successfully introducing a concept such as negative numbers to one's peers in a micro-teaching situation, but having difficulty teaching the same concept to children in a classroom mathematics lesson), a common view is that the person has the appropriate knowledge but is not able to access that knowledge in the new setting. This view is consistent with the educational approach prevalent in teacher education as well as K-12 classrooms of teaching general knowledge, often in abstract forms, and then teaching students to apply that knowledge in multiple settings. Ball (1997), in contrast, has written about the impossibility of teachers determining what their students really know (and the imperative to try in spite of this impossibility). An insight demonstrated by a student during a small-group discussion "disappears" when the student tries to explain it to the whole class. A student "demonstrates mastery" of odd and even numbers on a standardized test yet is unable to give a convincing explanation of the difference between even and odd. Based on this "now you see it, now you don't" pattern, Ball argued that the contexts in which students learn and in which we assess what they know are inextricable aspects of their knowledge. In other words, learning and knowing are situated.


A parallel argument can be made for teacher learning. As teacher educators we have tended to think about how to facilitate teachers' learning of general principles, and then how to help them apply this knowledge in the classroom. From the situative perspective, what appear to be general principles are actually intertwined collections of more specific patterns that hold across a variety of situations. In this vein, some scholars have argued that some, if not most, of teachers' knowledge is situated within the contexts of classrooms and teaching (Carter, 1990; Carter & Doyle, 1989; Leinhardt, 1988). Carter and Doyle, for example, suggested
that much of expert teachers' knowledge is event-structured or episodic. This professional knowledge is developed in context, stored together with characteristic features of the classrooms and activities, organized around the tasks that teachers accomplish in classroom settings, and accessed for use in similar situations.


It is this sort of thinking in new ways about what and how teachers know that the situative perspective affords. Rather than negating the idea of transfer, the situative perspective helps us redefine it. These ideas about the relationships among knowing, learning, and settings lead to the second issue: the role that researchers play in the process of learning to teach. As researchers we inherently become a part of, and help to shape, the settings in which we study teachers' learning. In examining her own work with children, Ball (1997) found it was impossible to determine how, and the extent to which, the understandings and insights expressed by children during interactions with her were supported by her implicit (unconscious) guiding and structuring. She argued that teachers' sincere desire to help students and to believe that students have learned may lead them to "ask leading questions, fill in where students leave space, and hear more than what is being said because they so hope for student learning" (p. 800). Ball suggested that this unavoidable influence means we must recast the question of what children "really know," asking instead what they can do and how they think in particular contexts. Further, in addressing these questions, teachers must consider how their interactions affect their assessments of what students know.


Similarly, as researchers trying to understand what teachers know and how they learn, we must be particularly attentive to the support and guidance that we provide. In the heyday of behaviorist perspectives, process-product researchers worked hard to avoid this issue by making their observations of teachers' behaviors as objective as possible; the goal of the observer was to be a "fly on the wall," recording what transpired but not influencing it. With the shift to cognitive perspectives, many of the efforts to study teachers'
thinking and decision making maintained this goal of detached objectivity. Researchers working within the interpretive tradition and, more recently, those who hold a situative perspective, remind us that we are inevitably part of the contexts in which we seek to understand teachers' knowing and learning. Rather than pretending to be objective observers, we must be careful to consider our role in influencing and shaping the phenomena we study. This issue is obvious when individuals take on multiple roles of researchers, teachers, and teachers of teachers; it is equally important, though often more subtle, for projects in which researchers assume a non-participatory role.


As Labaree suggested, we will not resolve these issues concerning the relationships between knowing and context and between researcher and research context once and for all. Like Sisyphus, we will push these boulders up the hill again and again. But for now, the situative perspective can provide important conceptual tools for exploring these complex relationships, and for taking them into consideration as we design, enact, and study programs to facilitate teacher learning.

Where Should Teachers' Learning Be Situated?

Teacher educators have long struggled with how to create learning experiences powerful enough to transform teachers' classroom practice. Teachers, both experienced and novice, often complain that learning experiences outside the classroom are too removed from the day-to-day work of teaching to have a meaningful impact. At first glance, the idea that teachers' knowledge is situated in classroom practice lends support to this complaint, seeming to imply that most or all learning experiences for teachers should take place in actual classrooms. But the situative perspective holds that all knowledge is (by definition) situated. The question is not whether knowledge and learning are situated, but in what contexts they are situated. For some purposes, in fact, situating learning experiences for teachers outside of the classroom may be important, indeed essential, for powerful learning. The situative perspective thus focuses researchers' attention
on how various settings for teachers' learning give rise to different kinds of knowing. We examine here some of the approaches that researchers and teacher educators have taken to help teachers learn and change in powerful ways, focusing on the kinds of knowing each approach addresses. We begin by considering professional development experiences for practicing teachers.


Learning Experiences for Practicing Teachers

One approach to staff development is to ground teachers' learning experiences in their own practice by conducting activities at school sites, with a large component taking place in individual teachers' classrooms. The University of Colorado Assessment Project (Borko, Mayfield, Marion, Flexer, & Cumbo, 1997; Shepard et al., 1996) provides an example of this approach. The project's purpose was to help teachers design and implement classroom-based performance assessments compatible with their instructional goals in mathematics and literacy. As one component, a member of the research/staff development team worked with children in the classrooms of some participating teachers, observed their mathematical activities, and then shared her insights about their mathematical understandings with the teachers. Teachers reported that these conversations helped them to understand what to look for when observing students and to incorporate classroom-based observations of student performances into their assessment practices (Borko et al., 1997).


Another approach is to have teachers bring experiences from their classrooms to staff development activities, for example through ongoing workshops focused on instructional practices. In the UC Assessment Project (Borko et al., 1997), one particularly effective approach to situating learning occurred when members of the staff development/research team introduced materials and activities in a workshop session, the teachers attempted to enact these ideas in their classrooms, and the group discussed their experiences in a subsequent workshop session. Richardson and Anders's (1994) practical argument approach to staff development provides another example. These researchers structured discussions with participating elementary teachers to examine their practical arguments (the rationales, empirical support, and situational contexts that served as the basis for their instructional actions), often using videotapes of the teachers' classrooms as springboards for discussion. These approaches offer some obvious strengths when viewed from a situative perspective. The learning of teachers is intertwined with their ongoing practice, making it likely that what they learn will indeed influence and support their teaching practice in meaningful ways. But there are also some problems. One is the issue of scalability: Having researchers or staff developers spend significant amounts of time working alongside teachers is not practical on a widespread basis, at least not given the current social and economic structure of our schools. A second problem is that, even if it were possible in a practical sense to ground much of teachers' learning in their ongoing classroom practice, there are arguments for not always doing so.


If the goal is to help teachers think in new ways, for example, it may be important to have them experience learning in different settings. The situative perspective helps us see that much of what we do and think is intertwined with the particular contexts in which we act. The classroom is a powerful environment for shaping and constraining how practicing teachers think and act. Many of their patterns of thought and action have become automatic, resistant to reflection or change. Engaging in learning experiences away from this setting may be necessary to help teachers "break set", to experience things in new ways. For example, pervading many current educational reform documents is the argument that "school" versions of mathematics, science, literature, and other subject matters are limited, that they overemphasize routine, rote aspects of the subject over the more powerful and generative aspects of the discipline. Students and teachers, reformers argue, need opportunities to think of mathematics or science or writing in new ways. It may be difficult, however, for teachers to experience these disciplines in new ways in the context of their own classrooms; the pull of the existing classroom environment and culture is simply too strong. Teachers may need the opportunity to experience these and other content domains in a new and different context.


Some professional development projects have addressed this concern by providing intensive learning experiences through summer workshops housed in sites other than school buildings. Such workshops free teachers from the constraints of their own classroom situations and afford them the luxury of exploring ideas without worrying about what they are going to do tomorrow. A key goal of one such project was for teachers to experience the learning of mathematics in new ways. The Cognitively Guided Instruction (CGI) project (Carpenter, Fennema, Peterson, Chiang, & Loef, 1989) also included a summer institute, during which teachers were introduced to research-based ideas about children's learning of addition and subtraction through a variety of experiences situated primarily in children's mathematics activities. In both projects, participants' beliefs and knowledge about teaching and learning mathematics shifted toward a perspective grounded in children's mathematical thinking.


Although settings away from the classroom can provide valuable opportunities for teachers to learn to think in new ways, the process of integrating ideas and practices learned outside the classroom into one's ongoing instructional program is rarely simple or straightforward. Thus we must consider whether, and under what conditions, teachers' out-of-classroom learning, however powerful, will be incorporated into their classroom practice. There is some evidence that staff development programs can successfully address this issue by systematically incorporating multiple contexts for teacher learning. One promising model for the use of multiple contexts combines summer workshops that introduce theoretical and research-based ideas with ongoing support during the year as teachers attempt to integrate these ideas into their instructional programs. One such intensive program, in addition to providing opportunities for teachers to participate in mathematics learning activities, engaged them in creating similar instructional sequences for their own students. Throughout
the following school year, staff members provided feedback, demonstration teaching, and opportunities for reflection during weekly visits to the teachers' classrooms, as well as workshops for further exploring issues related to mathematics, learning, and teaching. This combination of experiences helped the teachers to develop different conceptions of mathematics and deeper understandings of mathematical learning and teaching, and to incorporate strategies such as group problem solving, use of manipulatives, and non-routine problems into their mathematics instruction. 


The CGI project provided a similar combination of experiences for some of its participants (Fennema et al., 1996; Franke, Carpenter, Fennema, Ansell, & Behrend, 1998). In addition to the summer workshops, these participants received support during the school year from a CGI staff member and a mentor teacher that included observing in the teacher's classroom and discussing the children's mathematical thinking, planning lessons together, and assessing children together. At the end of a 4-year period, most teachers had shifted from a view of teaching as demonstrating procedures and telling children how to think to one that stresses helping children develop their mathematical knowledge through creating learning environments, posing problems, questioning children about their problem solutions, and using children's thinking to guide instructional decisions. These two projects thus used a series of settings to introduce teachers to new ideas and practices and to support the integration of these learnings into classroom practice.


We have described in this section a variety of ways to situate experienced teachers' learning, ranging from staff developers working alongside teachers in their own classrooms; to teachers bringing problems, issues, and examples from their classrooms to group discussions; to summer workshops focused on the teachers' own learning of subject matter. Research on these projects suggests that the most appropriate staff development site depends on the specific goals for teachers' learning. For example, summer workshops appear to be particularly powerful settings for teachers to develop new relationships to subject matter and new insights about individual students' learning. Experiences situated in the teachers' own classrooms may be better suited to facilitating teachers' enactment of specific instructional practices. And, it may be that a combination of approaches, situated in a variety of contexts, holds the best promise for fostering powerful, multidimensional changes in teachers' thinking and practices. Further research is needed to better understand the complex dynamics of these multifaceted approaches to teacher learning.


Learning Experiences for Prospective Teachers

The argument for providing in-service teachers with multiple learning settings in and out of classrooms has its counterpart in pre-service teacher education. In this case, the recommendation is to situate experiences in both the university and K-12 classrooms. Unlike experienced teachers, however, pre-service teachers do not have their own classrooms in which to situate learning activities and have limited teaching experiences from which to draw in discussions of pedagogical issues. Traditionally, teacher educators have relied upon student teaching and field experiences in K-12 classrooms as sites for learning.


In some situations, these classroom experiences are carefully combined with university course experiences to provide coordinated opportunities for pre-service teachers to learn new ideas and practices, as well as to reflect and receive feedback on their teaching. Wolf, for example, required pre-service teachers enrolled in her children's literature course to conduct a "reader response case study" with a young child (Wolf, Carey, & Mieras, 1996; Wolf, Mieras, & Carey, 1996). Each teacher read with a child on a weekly basis, kept detailed field notes of the reading sessions, and wrote a final paper on the child's response to literature and her or his own growth as a teacher of children's literature. The pre-service teachers' conceptions of literary response shifted toward an increased emphasis on interpretation over comprehension. They also came to hold higher expectations for children's capacity to interpret text and richer understandings of their roles as teachers of literature. Wolf and colleagues concluded that situating the pre-service teachers' learning simultaneously in university and field based experiences was crucial to the success of the course.


As they explained, "Much of the necessary work to guide and support pre-service teachers' growing understandings of literary response can be accomplished in university class settings that emphasize subject matter knowledge.... Still, subject matter knowledge is only a part of the necessary training for pre-service teachers. To arrive at a more complete understanding of children's literary response, pre-service teachers must be involved with children, moving from the more distanced study of children in articles and books to the here and now of working with real children.... Thus, a university course infusion of new research ideas with multiple, though sometimes hypothetical, examples must be balanced with authentic, literary interaction with children, if we expect to see pre-service teachers shift from limited comprehension-based expectations to broader interpretive possibilities for literary discussion" (Wolf et al., 1996, p. 134).


Thus, thoughtfully combining university- and field-based experiences can lead to learning that can be difficult to accomplish in either setting alone. These approaches draw, at least implicitly, on an assumption of apprenticeship in an existing environment: that important learning to teach takes place as novices experience actual classrooms alongside experienced teachers. A concern, however, is that K-12 classrooms embodying the kinds of teaching advocated by university teacher education programs may not be available. Without such classrooms, the apprenticeship model breaks down. As Sykes and Bird (1992) cautioned, "Finally, the situated cognition perspective draws on the image of apprenticeship in a guild or a professional community as a powerful form of learning. But this image requires a stable, satisfactory practice that the novice can join. If the aim of teacher education is a reformed practice that is not readily available, and if there is no reinforcing culture to support such practice, then the basic imagery of apprenticeship seems to break down. Teachers' knowledge is situated, but this truism creates a puzzle for reform.


Through what activities and situations do teachers learn new practices that may not be routinely reinforced in the work setting?" (p. 501). An important question facing researchers and teacher educators is whether experiences can be designed that maintain the situatedness of practicum and student teaching while avoiding the "pull" of the traditional school culture. To address this question, we will need to understand better the influence of school-based experiences on prospective teachers' ideas and practices.


Case-Based Learning Experiences for Teachers

Teachers' learning experiences in university classrooms typically entail reading about and discussing ideas; their learning experiences in K-12 classrooms usually involve actually engaging in the activities of teaching. Case-based teaching provides another approach for creating meaningful settings for teacher learning (Doyle, 1990; Leinhardt, 1990; Merseth, 1996; Sykes & Bird, 1992). Rather than putting teachers in particular classroom settings, cases provide vicarious encounters with those settings. This experience of the setting may afford reflection and critical analysis that is not possible when acting in the setting.


Some proponents suggest that cases have several advantages over other activities used in pre-service and in-service teacher education. As with actual classroom experiences, they allow teachers to explore the richness and complexity of genuine pedagogical problems. Cases, however, provide shared experiences for teachers to examine together, using multiple perspectives and frameworks (Feltovich, Spiro, & Coulson, 1997; Spiro, Coulson, Feltovich, & Anderson, 1988).


They also afford the teacher educator more control over the situations and issues that teachers encounter, and the opportunity to prepare in advance for discussion and other activities in which the case materials are used (Sykes & Bird, 1992). For preservice programs, cases avoid the problem of placing prospective teachers in settings that do not embody the kinds of teaching advocated by university teacher educators. Although all cases limit the information provided, they vary in the richness or complexity of classroom life portrayed. Some media, such as videotape, can convey more of the complexity of classroom events than written cases. Interactive multimedia cases and hypermedia environments have the potential to provide even richer sets of materials documenting classroom teaching and learning. Lampert and Ball (1998), for example, developed a hypermedia learning environment that combines videotapes of classroom mathematics lessons, instructional materials, teacher journals, student notebooks, students' work, and teacher and student interviews, as well as tools for browsing, annotating, and constructing arguments. The non-linearity of such hypermedia systems, the ability to visit and revisit various sources of information quickly and easily, and the ability to build and store flexible and multiple links among various pieces of information, allow users to consider multiple perspectives on an event simultaneously (Feltovich et al., 1997; Spiro et al., 1988). Further, the extensiveness of the databases and ease of searching them enable teachers to define and explore problems of their own choosing (Merseth & Lacey, 1993). Like traditional cases, these multimedia and hypermedia materials provide a shared context for the exploration of pedagogical problems. They can come much closer, however, to mirroring the complexity of the problem space in which teachers work.


Despite vocal advocates and an increased use of cases in recent years, there is much to learn about their effectiveness as instructional tools. Commenting on this "imbalance between promise and empirical data," Merseth (1996) noted, "the myriad claims for the use of cases and case methods far exceed the volume and quality of research specific to cases and case methods in teacher education" (p. 738). Questions for research include differences in what is learned from the rich and open-ended experiences provided by hypermedia cases versus more structured and focused written and videotaped cases, as well as comparisons of cases and case methods with other instructional materials and approaches. In addressing these questions, it will be important to understand and take into account the variety of purposes and uses of case-based pedagogy. We may learn, for example, that considerable limiting of complexity is desirable for some purposes, such as illustrating particular teaching concepts or strategies. For other purposes, such as reflecting the confluence of the many constraints on a teacher's problem solving, complex open-ended case materials may be important.

Monday 25 November 2013

Diagnostic tests

Diagnostic Tests: Response Analysis

Diagnostic tests assess prior knowledge and skills and come in two forms: response analyses and cognitive diagnostic assessments. Response analyses provide information on mastery and understanding and allow instructors to alter instruction to address students’ misunderstandings. Skills analyses can inform instructors of areas of difficulty when creating review activities, while error analyses may provide information to help plan re-teaching activities (Ketterlin-Geller & Yovanoff, 2009). Quizzes on computational facts such as decimal and fraction conversion can provide insight into which skills each student has mastered or partially mastered and which skills should be reviewed as a class. However, skills analyses do not reveal why the student did not answer the question correctly; therefore, error analyses are necessary for further information (Ketterlin-Geller & Yovanoff). For example, test questions on fraction conversions reveal information about computational skills and strategies.
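To make the distinction concrete, the sketch below layers a skills analysis and an error analysis over the same set of quiz responses: correct answers mark skills as mastered, while each anticipated wrong answer is mapped to the misconception it suggests. This is a minimal illustration, not a procedure from Ketterlin-Geller and Yovanoff; the item names, answer key, and error labels are hypothetical.

```python
# Minimal sketch of a response analysis for a fraction/decimal conversion quiz.
# Item names, the answer key, and the error labels are hypothetical examples.

ANSWER_KEY = {
    "q1_three_fourths_as_decimal": {
        "correct": "0.75",
        "errors": {"0.34": "treats numerator and denominator as decimal digits"},
    },
    "q2_zero_point_two_as_fraction": {
        "correct": "1/5",
        "errors": {"1/2": "ignores the place value of the decimal digit"},
    },
}

def analyze_responses(student_answers):
    """Skills analysis: which items were mastered. Error analysis: which misconceptions appear."""
    mastered, suspected_errors = [], []
    for item, answer in student_answers.items():
        key = ANSWER_KEY[item]
        if answer == key["correct"]:
            mastered.append(item)
        else:
            # Look up the misconception associated with this particular wrong answer.
            suspected_errors.append((item, key["errors"].get(answer, "unclassified error")))
    return mastered, suspected_errors

mastered, errors = analyze_responses({
    "q1_three_fourths_as_decimal": "0.34",
    "q2_zero_point_two_as_fraction": "1/5",
})
print("Mastered:", mastered)            # skills analysis
print("Needs re-teaching:", errors)     # error analysis
```

A skills analysis alone would only report that q1 was missed; the distractor-to-misconception mapping is what turns the same response data into an error analysis a teacher can plan re-teaching around.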


Diagnostic Tests: Cognitive Diagnostics

Cognitive diagnostic assessments target specific cognitive processes and are used to design remedial programs or additional assistance (Ketterlin-Geller & Yovanoff, 2009). Ketterlin-Geller and Yovanoff offer a sample cognitive diagnostic matrix, where each response item is attached to one or more cognitive attribute. The matrix includes information on which test items the student answers correctly and incorrectly, providing information on possible patterns in cognitive gaps.
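A rough sketch of how such an item-by-attribute matrix might be used follows. The items, attributes, and gap-finding heuristic are invented for illustration; they are not the matrix given by Ketterlin-Geller and Yovanoff.

```python
# Sketch of a cognitive diagnostic matrix: each item is tagged with the cognitive
# attributes it requires. Items, attributes, and the heuristic are illustrative only.

ITEM_ATTRIBUTES = {
    "item1": {"equivalent_fractions"},
    "item2": {"equivalent_fractions", "decimal_place_value"},
    "item3": {"decimal_place_value"},
}

def likely_gaps(item_scores):
    """Attributes required by missed items but never demonstrated on a correct item."""
    demonstrated, required_by_missed = set(), set()
    for item, correct in item_scores.items():
        (demonstrated if correct else required_by_missed).update(ITEM_ATTRIBUTES[item])
    return required_by_missed - demonstrated

# Correct on item1 but wrong on items 2 and 3 points to decimal place value as the gap.
print(likely_gaps({"item1": True, "item2": False, "item3": False}))
# -> {'decimal_place_value'}
```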


These types of assessments can be critical to school and teacher planning since they can provide important information that will allow teachers to plan and sequence their curriculum (or group students) in ways that match the particular strengths and weaknesses for specific groups. This allows teachers (or groups of teachers) to customize their approach or prepare lessons that address particular issues that are likely to vary from group to group. In addition, with enough information over time, teachers can isolate issues and topics that are sources of problems year after year, thus allowing for robust planning and research to address these chronic problems of misunderstanding.

Formative Assessment

Diagnostic assessments can be used formatively, but are somewhat distinct from true formative assessments in that they should not be administered often during a school year to avoid test–retest improvement through item familiarity. The National Mathematics Advisory Panel (2008) defines formative assessment as “the ongoing monitoring of student learning to inform instruction…[and] is generally considered a hallmark of effective instruction in any discipline” (p. 46). Formative assessment, by nature, is intended for instructional improvement and not to measure achievement or readiness and should be thought of as a process and not as individual instruments (Good, 2011).


Formative assessments can be informal or formal. Formal formative assessments are prepared instruments while informal formative assessments are typically spontaneous questions asked in class to check for student understanding (Ginsburg, 2009; McIntosh, 1997). Both approaches can be useful but are useful in different ways. Formal formative assessments are what most people think about when the topic is raised. These can be student quizzes, district benchmarks, or assessments created for a specific purpose.


The prepared nature of these assessments is both a strength and a weakness. On the plus side, since these assessments are prepared for particular purposes, they can be directly and thoughtfully linked to particular learning or curriculum theories. In addition, the interpretations of the data gathered from these assessments can be fixed ahead of time. For example, students who get questions 1 and 2 incorrect by choosing choices a) and c) can be quickly identified as making the same error in reducing fractions. However, on the down side, the highly structured nature of formal formative assessments lacks the real-time spontaneity that can be found in informal assessments. This can be a problem because it may limit the amount of feedback from a student. For example, a quiz on solving two-step algebraic equations may reveal procedural misunderstanding (such as subtracting a variable from both sides instead of adding it to both sides) or operational errors (making computational mistakes). However, informal formative assessments, such as discourse, allow a teacher to instantly ask the student questions when a misunderstanding or error is assessed. For example, a teacher could ask, “Why do we add this to both sides of the equation?” or “Can you explain why we did this step?” This additional information provides teachers with more nuanced information that can be used to understand why a procedural or operational error was made, not simply whether such an error was made.
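The fragment below sketches what such a fixed, pre-registered interpretation might look like for the reducing-fractions case mentioned above. The question numbers, answer choices, and error label are hypothetical; the point is only that the rule is written down before the quiz is given.

```python
# Sketch of a pre-registered interpretation rule for a formal formative quiz:
# choosing option (a) on question 1 and option (c) on question 2 is taken to signal
# the same misconception about reducing fractions. All details are hypothetical.

REDUCING_FRACTIONS_PATTERN = {1: "a", 2: "c"}

def flag_students(responses):
    """Return students whose answer pattern matches the pre-specified error."""
    return [
        student
        for student, answers in responses.items()
        if all(answers.get(q) == choice for q, choice in REDUCING_FRACTIONS_PATTERN.items())
    ]

print(flag_students({
    "student_A": {1: "a", 2: "c"},   # matches the pattern: likely reducing-fractions error
    "student_B": {1: "b", 2: "c"},   # does not match
}))
# -> ['student_A']
```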


While informal formative assessments can be prepared ahead of time, the interpretation of the responses and follow-up questions generally occur in the course of a dynamic classroom session. Thus, these informal assessments can occur many times in a course session, and can be tailored to the issues that come up on a day-to-day basis. However, the real-time interpretation relies very heavily on the mathematical knowledge and skills of the teacher to select appropriate questions, follow-up thoughtfully, diagnose quickly, and make meaningful modifications in that course session or in subsequent lessons.


This distinction between formal and informal assessment is useful from a functional perspective, yet it is less useful as a pedagogical categorization since it encompasses so many different forms of assessment. In 1976, Piaget provided a more useful framework for teachers. He categorized formative assessments into three groups based on their form: observation, test, and clinical interview (as cited by Ginsburg, 2009).


Observation-based formative assessments intend to reveal information on “natural behavior” (Ginsburg, 2009, p. 112). This could include a conversation between two children about which number is ‘larger.’ Natural behavior may reveal informal or casual language use or everyday interactions between two students that may differ if the students were required to answer a question or solve a problem in front of a teacher or classroom. However, Ginsburg argues that observations are highly theoretical and can be difficult in large classroom settings and, thus, may have limited utility for teachers trying to improve student performance. Task or test forms of formative assessments are pre-determined questions or projects given to some or all students that assess accuracy and problem-solving strategy and are analogous to the formal assessments described previously. These types of formative assessment instruments can come in the form of worksheets, pop quizzes, mathematics journals, discourse, and student demonstrations.


Worksheets and pop quizzes can contain a number of questions that (like the diagnostic tests described earlier) can assess cognition through error and skill analysis. Student mathematics journals and student discourse about problems are additional formative assessment tools that allow students to directly express areas of concern and confusion and feelings toward instructional strategies and are not test-based. In addition, class discussions can help identify gaps in student understanding by allowing students to volunteer to speak or allowing the teacher to choose specific students to answer questions. Student demonstrations allow students to solve and explain problems in front of the class. Through this form, teachers can gain insight into student computational skills as well as student conceptual understanding through the student generated explanations. These brief formative assessments can be useful and reliable sources of information to check for student understanding but require a great deal of expertise developed by the teachers to capitalize on the information (Phelan, Kang, Niemi, Vendlinski, & Choi, 2009).


Additionally, instant forms of formative assessments, including the use of electronic clickers, index cards, and individual whiteboards (where teachers can ask questions and students can answer by holding up whiteboards), allow teachers to instantly re-teach topics where conceptual or computational errors exist (Crumrine & Demers, 2007). Because task or test forms of formative assessments may not capture cognitive processes, clinical interviews can be conducted (Piaget, 1976, as cited by Ginsburg, 2009). An adaptation of clinical interviews appropriate in the mathematics education setting would begin with an observation of the student performing a pre-chosen task. The interviewer proposes a hypothesis about the behavior, assigns new tasks, then asks a series of questions that prompt answers to how the student is behaving or thinking. The interview should be student-centered and questions should be constructed in real time (Piaget). Effective clinical interviews are based on strong theory, hypotheses, and evidence (Piaget). Although interviews can provide more insight into student thinking than observations or tests, they are dependent on human skill and may not be reliable (Ginsburg, 2009).


Formative assessments can reveal information about a student’s performance, thinking, knowledge, learning potential, affect, and motivation (Ginsburg, 2009). These assessments, when part of a structured process, may lead to significant increases in academic achievement (Black & Wiliam, 2009; Davis & McGowen, 2007). Black and Wiliam (1998) found that the use of formative assessments had an effect size between 0.4 and 0.7 standard deviation units (effect sizes greater than 0.4 are considered moderate to strong) and that, across 250 studies, areas of increased achievement all had the use of formative assessments in common.
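For readers unfamiliar with the metric, an effect size reported in standard deviation units is usually a standardized mean difference such as Cohen's d; one standard form (a general definition, not a formula taken from Black and Wiliam) is:

```latex
d = \frac{\bar{x}_{\text{treated}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On this scale, a gain of 0.4 to 0.7 means the average student in the formative-assessment condition scores roughly half a standard deviation above the average student in the comparison condition.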


Effective instructional change based on formative assessment results can have multiple effects. First, these assessments can benefit the current cohort of students through instructional improvement tailored to their specific needs. Second, these instructional improvements remain available for future cohorts if their formative assessments reveal similar conceptual misconceptions or computational errors (Davis & McGowen, 2007). Black and Wiliam (2009) argue that using formative assessments must be an ongoing, iterative process because there is always room for improving the formative assessments as a guide to altering instruction and curriculum.


Formative assessments are growing in importance; are critical in revealing student knowledge, motivation, and thinking; and have been part of various educational reforms in the past decade. Formative assessments can be both formal and informal, and some are more appropriate for particular types of curricula. Although information exists on the types and uses of formative assessments, difficulties and misunderstandings persist regarding how to interpret their results and what instructional steps should be taken following that interpretation. From formative assessments, teachers may be able to see which topics to re-teach, yet they may not have a clear understanding of how to alter instructional strategies in order to re-teach those topics. One way to address these issues is through a common-error approach, which is used most effectively within a collaborative teacher setting in which teachers reach consensus about explanations for those errors and about how to interpret the data generated. Understanding common errors in algebra could help teachers develop, interpret, and react to formative assessments. To help build a connection between common algebra errors, formative assessments, and instructional practice, this literature review aims to move toward a better understanding of how teachers can develop formative assessments to address common errors in algebra, how they can respond post-assessment to clarify misunderstandings, the importance of collaboration, and the possible role that professional learning communities can play in this overall process.

Common Errors in Algebra

In the late 1970s, Hendrik Radatz called for action models that would help teachers integrate diagnostic teaching with findings from educational and social psychology, claiming that an “analysis of individual differences in the absence of a consideration of the content of mathematics instruction can seldom give the teacher practical help for individualizing instruction or providing therapy for difficulties in learning a specific task” (Radatz, 1979, p. 164). Societal and curricular differences make this connection difficult and, thus, instructors should consider other factors such as the teacher, the curriculum, the environment, and their interactions. Given these multiple forces involved in the learning of mathematics, Radatz notes that errors “in the learning of mathematics are the result of very complex processes. A sharp separation of the possible causes of a given error is often quite difficult because there is such a close interaction among causes” (Radatz, 1979, p. 164).


In order to simplify this set of complex causes, mathematical errors have been classified into five areas: language errors; difficulty with spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations and rigidity of thinking; and incorrect application of rules and strategies (Radatz, 1979). Common mistakes and misconceptions in algebra can be rooted in the meaning of symbols (letters), the shift from numerical data or language representation to variables or parameters with functional rules or patterns, and the recognition and use of structure (Kieran, 1989).
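
As a purely illustrative sketch (not drawn from Radatz or Kieran), these five categories can be turned into a simple coding scheme that a teacher or collaborative team might use to tag responses from a formative assessment and tally which kinds of errors dominate; the names and data below are hypothetical:

from enum import Enum
from collections import Counter

class ErrorType(Enum):
    LANGUAGE = "language"            # reading or academic-language difficulty
    SPATIAL = "spatial"              # difficulty with spatial information
    PREREQUISITE = "prerequisite"    # deficient mastery of prior skills, facts, concepts
    RIGIDITY = "rigidity"            # incorrect associations or rigid thinking
    PROCEDURE = "procedure"          # incorrect application of rules and strategies

def summarize_errors(tagged_responses):
    """Tally how often each error type was assigned across (student, item, error) records."""
    counts = Counter(error for _student, _item, error in tagged_responses)
    return {error.value: counts.get(error, 0) for error in ErrorType}

# Hypothetical tagging of three responses to the same quiz item:
responses = [
    ("s01", "q3", ErrorType.PREREQUISITE),
    ("s02", "q3", ErrorType.PROCEDURE),
    ("s03", "q3", ErrorType.PROCEDURE),
]
print(summarize_errors(responses))
# {'language': 0, 'spatial': 0, 'prerequisite': 1, 'rigidity': 0, 'procedure': 2}

Even a tally this simple can make the dominant error category for an item visible at a glance, which is the kind of information the formative approaches discussed above are meant to surface.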


Language Errors

Language errors can have multiple sources, including gaps in knowledge for English Language Learners (ELL) and English as a Second Language (ESL) students as well as gaps in academic language knowledge, and they affect students of all backgrounds when working on word problems. Students may lack the reading comprehension skills required to interpret the information needed to solve a problem, or they may have difficulty understanding the academic language a problem requires. Prompts, word banks, and fill-in-the-blank questions may be used to help students answer open-ended questions. For example, a prompt with a fill-in-the-blank could be used when asking a student to distinguish similarities and differences between polygons: “Squares and rectangles both have ___ sides but are different because _______________.” Word banks can be used when defining properties of angles. For example, the words “acute,” “obtuse,” “vertical,” “equal,” and “not equal” can be included with other terms in a word bank to help students fill in the following sentences:

An angle that is less than 90 degrees is ________. (acute)
An angle that is greater than 90 degrees is ________. (obtuse)
________ angles are formed when two lines intersect and have ______ measurements. (vertical, equal)


Spatial Information Errors

Difficulties in processing spatial information can also cause errors. Poon and Leung (2009) found a strong correlation between spatial ability and algebraic ability. When problems are represented using icons and visuals, mathematics assessments assume that students can think spatially. For example, students may make errors on questions involving Venn diagrams because they do not understand that the lines represent boundaries and may therefore ignore them. “Perceptual analysis and synthesis often make greater demands on the pupil than does the mathematical problem itself” (Radatz, 1979, p. 165). Without considering a lack of spatial ability as a possible cause of incorrect responses, teachers may invest a great deal of time and energy presenting new material that does not address the root cause of the problem.


Poor Prerequisite Skills/Flexibility/Procedural Errors

When a student does not possess the prerequisite skills, facts, and concepts necessary to solve a problem, he or she will not be able to solve the problem correctly. For example, if a student does not know how to combine like terms, he or she may have difficulty solving multistep equations that involve combining like terms. Difficulties due to incorrect associations or rigidity of thinking are also common areas of error in mathematics: “Inadequate flexibility in decoding and encoding new information often means that experience with similar problems will lead to habitual rigidity of thinking” (Radatz, 1979, p. 167). Further, students make procedural errors when they incorrectly apply mathematical rules and strategies. Rushed solutions and carelessness can also cause errors; interviews revealed that errors in simplifying expressions were caused by carelessness and could be fixed with improved working habits (Poon & Leung, 2009). In addition, many students do not solve problems in a linear fashion. In fact, when reaching a point of difficulty in a problem, many students go back and change their translation of the problem to avoid the difficulty (VanLehn, 1988, as cited by Sebrechts, Enright, Bennett, & Martin, 1996).


The ability to use assessments to reduce common algebra errors may, in turn, increase understanding and build prerequisite skills, leading to a stronger understanding of more advanced topics for students and teachers alike. The use of open-ended quiz or test items allows teachers to see all of a student’s work rather than just an answer, as in the case of multiple-choice questions. However, teachers who use quizzes or tests with the multiple-choice questions provided in textbooks and other curricula could discuss in professional learning communities (PLCs) what potential errors could have led a student to choose a given multiple-choice answer, whether procedural, conceptual, spatial, language-based, or random. From there, teachers may be able to see patterns that arise across classes and share ideas on how to approach re-teaching. Teachers in a PLC setting could share and discuss common errors that have surfaced in their classrooms and the strategies that have helped to address those errors.

Integrating Assessment Results with Curriculum and Instructional Change

It is imperative that mathematics curricula be designed to incorporate results from a variety of assessments (Goertz et al., 2009). Ginsburg (2009) states that the foundation of formative assessment is its capability to provide information that teachers can use to make instructional decisions. Popham (2008) categorizes the possible changes that can occur from the intentional integration of formative assessments:

• teacher’s instructional change (teacher adjusts instruction based on assessment results)
• students’ learning tactic change (students use results to adjust their own procedures)
• classroom climate change (entire classroom expectations for learning are changed)
• school-wide change (through professional development or teacher learning communities, the school adopts a common formative assessment).


Formative assessment test questions are not always written in a way that allows for analysis of mathematical procedural and conceptual understanding (Goertz et al., 2009). For example, multiple-choice tests often contain distractors (wrong answers) designed to help assess common errors (e.g., in a problem asking for the area of a circle, distractors may include answers that reflect an incorrect area formula, a computational error, or a calculator input error). However, individual distractors may reflect multiple possible errors, making it difficult for teachers to determine where the student made the mistake. In addition, the pattern of correct and incorrect answers can be used to look for specific misunderstandings and, at the same time, to increase the reliability of the assessments (Shepard, Flexer, Hiebert, Marion, Mayfield, & Weston, 2005; Yun, 2005).
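
To illustrate how such answer patterns might be examined in practice (a minimal sketch under assumed data formats, not a procedure taken from Goertz et al. or Shepard et al.), a teacher or PLC could record, for one item, which error each distractor was written to catch and then tally the class’s choices; the item, key, and responses below are invented:

from collections import Counter

# Hypothetical item: "What is the area of a circle with radius 3?"
# Each distractor is keyed to the error it was designed to detect; note that a
# single distractor may still be reachable through more than one mistake.
ITEM_KEY = {
    "A": "correct (9*pi)",
    "B": "used circumference formula (6*pi)",
    "C": "squared the diameter (36*pi)",
    "D": "dropped pi (9)",
}

def tally_item(responses, key):
    """Count how many students chose each option, labeled with the error it maps to."""
    counts = Counter(responses)
    return {choice: (meaning, counts.get(choice, 0)) for choice, meaning in key.items()}

class_answers = ["A", "B", "B", "A", "C", "B", "D", "A"]
for choice, (meaning, n) in tally_item(class_answers, ITEM_KEY).items():
    print(f"{choice}: {n} student(s) -> {meaning}")

A tally like this shows, for instance, that several students appear to be confusing area with circumference, a far more actionable finding than the raw percentage correct.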


As discussed previously, formative test results are often difficult to interpret; even Piaget believed that he could not interpret the results of a standardized test because of the method of administration (Ginsburg, 2009). As a result, teachers often interpret student errors differently, leading to differences in how they respond to the results (Goertz et al., 2009). In Goertz et al.’s study, responses to a student’s error varied from procedural to conceptual explanations. For example, with regard to a question requiring a student to add two fractions, some teachers diagnosed a procedural error in which the student failed to find the common denominator, while others diagnosed a conceptual error in which the student failed to understand that the denominator indicates how many parts make up the whole. These differences in interpretation are important because each explanation would require a different pedagogical approach to address it.


These findings suggest that the design of formative assessments should clearly reflect their intended use, so that the number and types of possible explanations for incorrect responses can be narrowed through the design of the assessment tool or through additional inquiry intended to differentially diagnose the reasons for an incorrect response. Further, the literature suggests that professional communities could be created for teachers to discuss specific differences in interpretation and come to a consensus about how to address them. In addition, constructs, format, and any supplemental components should align with state or district standards, and instructional strategies should align with the curriculum’s approach (Goertz et al., 2009). The broader principle underlying Goertz et al.’s work is that an assessment should be used for a single purpose; thus, tests intended for formative use may need to be complemented by other instruments, such as a summative unit test or project, for evaluative and predictive purposes (Goertz et al.).



Teachers’ Use & Misuse of Formative Assessments

A key problem with the use of formative assessments occurs after the design and implementation phase. Formative assessments are often viewed as an object, rather than a process by which student achievement and understanding can be improved through the use of assessment information (Good, 2011). According to Good, the phrase formative use of assessment information is more appropriate than the simple term formative assessment, largely because it places the emphasis on the important aspect of the assessments—the use of the information vs. the instruments themselves. However, this move from assessment data to data use is often the most difficult to manage in the classroom. More specifically, once a diagnostic or formative assessment has been administered, teachers are often unsure how to interpret and act upon the data (Dixon & Haigh, 2009).


According to Heritage, Kim, Vendlinski, and Herman (2009), teachers find it more difficult to make instructional changes from assessment results than to perform other tasks, including using student responses to assess student understanding. This difficulty can result in poor utilization of the information provided by these assessment instruments. As Poon and Leung (2009) observe, “[T]eachers do not understand their students’ learning process well, and hence their teaching skills and methodology do not match the needs of these students” (p. 58). Goertz et al. (2009) also found that the type of instructional change teachers generally made in response to formative assessment results was deciding which topics to re-teach, with very little deviation in approach or targeting of specific conceptual misunderstandings. This approach, while responsive to the data generated by formative assessments, often did not utilize the full range of information available from those assessments.


Moreover, the threshold at which teachers chose to respond with instructional change varied from school to school and even from teacher to teacher (Goertz et al., 2009). For example, one teacher may use a classroom success rate of 80% as the threshold for re-teaching while another uses 60%, producing differences from teacher to teacher regarding what level of performance requires instructional change. Heritage et al. (2009) found that the interaction between teachers’ pedagogical knowledge, knowledge of mathematical principles, and mathematical tasks produced the largest error variability in teachers’ knowledge of appropriate formative assessment use. This suggests that teachers with the most knowledge of the mathematical principles and tasks represented by the assessment knew best how to use the formative assessment instruments to inform their instructional practices.
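
To make the threshold issue concrete (an illustrative sketch only; the thresholds, topic names, and scores below are invented, not data from Goertz et al.), a per-topic class success rate can be computed and compared against whatever re-teach threshold a teacher has adopted:

def topics_to_reteach(scores_by_topic, threshold=0.80):
    """Return topics whose class success rate falls below the chosen re-teach threshold.

    scores_by_topic maps each topic to a list of 0/1 item scores across the class.
    """
    flagged = {}
    for topic, scores in scores_by_topic.items():
        rate = sum(scores) / len(scores) if scores else 0.0
        if rate < threshold:
            flagged[topic] = round(rate, 2)
    return flagged

# Invented class results for two topics (1 = correct, 0 = incorrect):
results = {
    "combining like terms": [1, 1, 0, 1, 1, 0, 1, 1],      # 75% correct
    "distributive property": [1, 1, 1, 1, 1, 1, 0, 1],     # 87.5% correct
}
print(topics_to_reteach(results, threshold=0.80))  # flags 'combining like terms'
print(topics_to_reteach(results, threshold=0.60))  # flags nothing

The same class data thus trigger re-teaching under one teacher’s threshold and not under another’s, which is exactly the inconsistency described above.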


Finally, teachers were affected by various factors when deciding how to alter instruction. For example, teachers often considered their own knowledge of individual students, how students performed compared with classmates, and their own perceptions of what students found challenging when they made instructional decisions (Goertz et al., 2009). In addition, teachers in Goertz et al.’s study were not surprised by the results of the interim assessments and “they mentioned that the interim assessments largely confirmed what they already knew about student learning in mathematics” (p. 5). However, some teachers did follow up with individual students in order to alter future instruction. These findings support those in Slavin et al. (2009) and suggest a potential mechanism for why so much of teachers’ instructional success was related to the choices teachers made in their approach to teaching.


While these findings may appear obvious (it makes sense that teachers who understand mathematics best would use formative assessments best and that teachers take their knowledge of their students into account), they carry important implications for introducing formative assessment practices into schools. In schools where teachers do not have a strong understanding of mathematical principles or of the assessments themselves, the mere introduction of formative assessments is less likely to produce positive changes in classroom pedagogy. There are also implications for program design. First, the types of assessment instruments introduced should be consistent with the level of knowledge and pedagogical sophistication of the teachers; that is, formative instruments that require less mathematical sophistication to use appropriately should be introduced where needed in order to scaffold teachers toward the appropriate use of the more complicated formative assessment tools described previously. Second, the more intimately teachers are involved in the design of the formative assessment instruments, the more likely they are to understand the purpose of those assessments and, thus, to use them appropriately and effectively. Finally, the more input teachers have in the creation of the formative assessment instruments, the more directly they can tailor them to reflect local priorities and their knowledge of their students.


Ginsburg (2009) argues that a main challenge in mathematics education is providing professional development opportunities on assessment. Goertz et al. (2009) argue that teachers who assessed conceptual understanding were more likely to respond with instructional change and to incorporate more varied instructional methods, such as using arrays for multiplication or relating the steps used in two-digit subtraction to those necessary to complete a three-digit subtraction problem. Given this observed relationship, fostering these types of behaviors could be a topic for professional development: “Professional development for teachers should focus as well on teacher content knowledge, developing teachers’ instructional repertoires, and capacity to assess students’ mathematical learning” (Goertz et al., p. 9). Furthermore, teachers and principals in Volante, Drake, and Beckett’s (2010) study reported that professional learning communities (PLCs) provided the opportunity to discuss samples of student work with other practitioners and to work toward consistent measurement. Thus, PLCs could be a useful structure for providing these professional development opportunities and for linking assessments to the instructional practices that will address their findings.

ASSESSMENT AS A TOOL TO UNDERSTAND STUDENTS’ MATHEMATICAL LEARNING

The reformed curriculum suggested that every instructional activity is an assessment opportunity for teachers and a learning opportunity for students. The reform movement emphasized classroom assessment as a means of gathering information with which teachers can inform their subsequent instruction. Assessment that is integral to instruction contributes significantly to all students’ mathematics learning.


The new vision of assessment suggested that knowing how these assessment processes take place should become a focus of teacher education programs. The problem-posing tasks referred to in the study were tasks designed by teachers that require students to generate one or more word problems. The professional standards suggested that teachers could use task selection and analysis as foci for thinking about instruction and assessment. According to De Lange (1995), a task that is open with respect to students’ processes and solutions is a way of stimulating students’ high-quality thinking. Training teachers in designing and using assessment tasks has also been proposed as a means of improving the quality of assessments (Clarke, 1996).


However, the design of open-ended tasks is complex and challenging work for teachers who are used to traditional tests. The tasks involved in the study were therefore treated as an informal way of assessing what and how individual students learned from everyday lessons. The tasks were not prepared prior to instruction; rather, teachers generated them from the activities in which students engaged during everyday lessons. The mathematics content covered in the textbooks formed one dimension of the study’s assessment framework.


The reformed curriculum calls for an increased emphasis on teachers’ responsibility for the quality of the tasks in which students engage. High-quality tasks should help students clarify their thinking and develop deeper understanding through the processes of formulating problems, communicating, and reasoning (MET, 2000). Thus, these cognitive processes formed the other dimension of the assessment framework. The tasks teachers designed in the study were intended to assess students’ problem posing, communicating, and reasoning. Owing to space limitations, this paper is primarily concerned with the problem-posing tasks. Problem posing is recognized as an important component of mathematical thinking (Kilpatrick, 1987). More recently, there has been an increased emphasis on giving students opportunities for problem posing in the mathematics classroom (English & Halford, 1995; Stoyanova, 1998). This research has shown that instructional activities in which students generate problems can improve problem-solving ability and attitudes toward mathematics (Winograd, 1991). Nevertheless, such reform requires, first, a commitment to creating an environment in which problem posing is a natural part of mathematics learning. Second, it requires that teachers work out strategies for helping students pose meaningful and enticing problems. Thus, there is a need to support teachers whose students engage in problem-posing activities with a collaborative team. This can be achieved by establishing an assessment team whose members support one another through dialogue on critical assessment issues related to instruction.


Problem posing involves generating new problems and reformulating given problems (Silver, 1994). Generating new problems focuses not on the solution but on the creation of a new problem. The quality of the problems students generate depends on the given tasks (Leung & Silver, 1997). Research on problem posing has paid increasing attention to the effect of problem posing on students’ mathematical ability and to the effect of task format on problem posing (Leung & Silver, 1997). In that work, problem-posing tasks in which situations were presented in story form were created by researchers rather than by classroom teachers. Moreover, there is little research on teachers’ responsibility for the variety and quality of problem-posing tasks. The present study investigated the ways in which teachers created tasks that asked students to generate problems from a contrived situation.


For teachers, the problem-posing tasks allowed them to gain insight into the ways students construct mathematical understanding and served as a useful assessment tool. Because the tasks were incorporated into everyday instruction, decisions about task appropriateness were often related to students’ communication of their thinking or to the problem-solving strategies students displayed in the classroom. The mathematics concepts to be taught at a grade level became a basic element in designing assessment tasks integrated into instruction. Other decisions about the appropriateness of a task were related to the teaching events students encountered in everyday lessons.



Sunday 24 November 2013

Classroom assessment as an essential aspect of effective teaching

Recent years have seen increased research on classroom assessment as an essential aspect of effective teaching and learning (Bryant and Driscoll, 1998; McMillan, Myran and Workman, 2002; Stiggins, 2002). It is becoming more and more evident that classroom assessment is an integral component of the teaching and learning process (Gipps, 1990; Black and Wiliam, 1998). The National Council of Teachers of Mathematics [NCTM] (2000) regards assessment as a tool for learning mathematics and contends that effective mathematics teaching requires understanding what students know and need to know. According to Roberts, Gerace, Mestre and Leonard (2000), assessment informs the teacher about what students think and about how they think. Classroom assessment helps teachers to establish what students already know and what they need to learn. Ampiah, Hart, Nkhata and Nyirenda (2003) contend that a teacher needs to know what children are and are not able to do if he or she is to plan effectively.


Research has revealed that most students perceive mathematics as a difficult subject with no meaning in real life (Countryman, 1992; Sobel & Maletsky, 1999; Van de Walle, 2001). This perception begins to develop at the elementary school level, where students find the subject very abstract and heavily reliant on algorithms they fail to understand, and it continues through middle school, high school, and college. By the time students get to high school, they have lost interest in mathematics and cannot explain some of the operations they perform (Countryman, 1992). According to Countryman (1992), the rules and procedures of school mathematics make little or no sense to many students: they memorize examples, follow instructions, do their homework, and take tests, but they cannot say what their answers mean. Research studies in both education and cognitive psychology have reported weaknesses in the way mathematics is taught. The most serious weakness is the psychological assumption about how mathematics is learned, which is based on “stimulus-response” theory (Althouse, 1994; Cathcart, Pothier, Vance & Bezuk, 2001; Sheffield & Cruikshank, 2000). The “stimulus-response” theory states that learning occurs when a “bond” is established between some stimulus and a person’s response to it (Cathcart, Pothier, Vance & Bezuk, 2001).


Cathcart et al. (2001) go further to say that, in this scenario, drill becomes a major component of the instructional process because the more often a correct response is made to a stimulus, the more established the bond becomes. Under this theory, children are given lengthy and often complex problems, particularly computations, in the belief that the exercises will strengthen the mind. Schools and teachers need to realize that great philosophers, psychologists, scientists, mathematicians and many others created knowledge through investigation and experimentation (Baroody & Coslick, 1998; Phillips, 2000). They came to understand cause and effect through curiosity and investigation, and they were free to study nature and phenomena as they existed. Today, learning mathematics often seems to mean repeating operations that were already done by other people and taking examinations that follow the same pattern (Brooks & Brooks, 1999).


The constructivist view is different from the positivist view and, therefore, calls for different  teaching approaches (Baroody & Coslick, 1998; Cathcart, et al., 2001; von Glasersfeld, 1995). The constructivist view takes the position that children construct their own understanding of mathematical ideas by means of mental activities or through interaction with the physical world (Cathcart, et al., 2001). The assertion that children should construct their own mathematical knowledge is not to suggest that mathematics teachers should sit back and wait for this to happen. Rather, teachers must create the learning environment for students and then actively monitor the students through various classroom assessment methods as they engage in an investigation. The other role of the teacher should be to provide the students with experiences that will enable them to establish links and relationships. Teachers can only do this if they are able to monitor the learning process and are able to know what sort of support the learners need at a particular point.


The main hypothesis of constructivism is that knowledge is not passively received from an outside source but is actively constructed by the individual learner (Brooks and Brooks, 1999; von Glasersfeld, 1995). Within this hypothesis lies the crucial role of the teacher. Today many psychologists and educators believe that children construct their own knowledge as they interact with their environment (Brooks and Brooks, 1999; Cathcart, et al., 2001; Hatfield, Edwards, Bitter & Morrow, 2000; von Glasersfeld, 1995). Unfortunately, classrooms do not seem to reflect this thinking. Some teachers continue to teach the way they themselves were taught, perhaps because human beings naturally look back and assume that the past offered the best approach. If children construct knowledge rather than passively receive it, they must be offered opportunities to act on their environment, physically and mentally, to use methods of learning that are meaningful to them, and to become aware of and solve their own problems (Althouse, 1994). Althouse is in agreement with Baroody and Coslick (1998), who suggest that teaching mathematics is essentially a process of translating mathematics into a form children can comprehend; it is a matter of providing experiences that will enable children to discover relationships and construct meaning. Students should be helped to see the importance of mathematics not through rote learning but by investigating and relating mathematics to real-life situations. Giving students dozens and dozens of problems to solve does not help them to understand mathematics; if anything, it frustrates them even more. The more they do things they cannot understand or explain, the more frustrated they become.


The way teachers perceive assessment may influence the way they teach and assess their students (Assessment Reform Group, 1999; Fennema and Romberg, 1999). The study reported here therefore set out to investigate teachers’ perceptions of classroom assessment in mathematics and their current classroom assessment practices. Specifically, the study sought to understand the methods and tools teachers use to assess their students. The researcher studied closely how classroom assessment was being carried out in the classroom by focusing on the strategies and tools the teachers used to assess the learners.


Classroom assessment is one of the tools teachers can use to inform their teaching and the learning of their students. Unfortunately, the purpose of classroom assessment in most schools seems to be confused and, therefore, not supporting learning (Ainscow, 1988; Stiggins, 2002; Swan, 1993). The term assessment in some schools means testing and grading (Stiggins, 2002).


Researchers have attempted to investigate teachers’ perceptions of assessment in many different ways (Chester & Quilter, 1998). Chester and Quilter believed that studying teachers’ perceptions of assessment is important because it provides an indication of how different forms of assessment are being used or misused and of what could be done to improve the situation. More critical still is the fact that perceptions affect behavior (Atweh, Bleicker & Cooper, 1998; Calderhead, 1996; Cillessen & Lafontana, 2002). A study conducted by Chester and Quilter (1998) on in-service teachers’ perceptions of classroom assessment, standardized testing, and alternative methods concluded that teachers’ perceptions of classroom assessment affected their classroom assessment practices. Teachers who attached less value to classroom assessment used standardized tests most of the time in their classrooms.


Chester and Quilter went further to say that teachers with negative experiences in classroom assessment and
standardized testing are least likely to see the value in various forms of assessment for their classroom. They
recommended, therefore, that in-service training should focus on helping teachers see the value of assessment
methods rather than “how to” do assessment. A study conducted by Green (1992) on pre-service teachers with measurement training revealed that the pre-service teachers tended to believe that standardized tests address important educational outcomes and that classroom tests are less useful. In the same study, in-service teachers believed that standardized tests are important, but not to the degree that pre-service teachers did. A case study of one science teacher conducted by Bielenberg (1993) showed that the teacher’s beliefs about science defined how she conducted her science classes.


Diene (1993) conducted a study to understand teacher change, considering the classroom practices and beliefs of four teachers. The findings suggested that teachers’ beliefs and practices were embedded within and tied to broader contexts, including personal and social factors and prior ideas about particular aspects of teaching.