Thursday 4 September 2014

Knowing What You Know: Assessment Competence

Learning is any process that enables a change in a person’s capacity to understand themselves and the world around them. Assessment is any feedback that enables a learner to know when there has been a change in their capacity to understand themselves and the world around them. Learning and assessment thus occur in the course of daily life as individuals interact with their environments, with each other, and as they internalize new knowledge and new ways of understanding. Assessment is integral to learning, whether provided in a formal classroom setting or any other context.


But learning is also an intentional process, and competences for “learning to learn” are considered vital for effective lifelong learning.1 Learning-to-learn involves awareness of how and why an individual acquires and processes different types of knowledge (meta-cognition), and of the learning methods, environments and habits she or he finds most effective (self-regulation) (Eurydice, 2002). Assessment is integral to all forms of learning, whether the apparently effortless learning that occurs through interactions in daily life or intentional, deliberate learning.


Formal assessments are based on quantitative and/or qualitative measurements. For novice learners, assessments are usually based on progress toward specific goals with clear criteria. At higher levels, learners may be working individually or collectively to solve problems with no clearly defined outcome, and assessment is part of this exploratory process. Feedback is an essential element of assessment for improving learning. Feedback may be provided in the course of an interactive discussion, in writing, in the context of a game, or through signals in the learner’s environment. It is most effective when it is timely and at an appropriate level of detail to suggest next steps (Black and Wiliam, 1998). In the case of goal-directed learning, feedback needs to be congruent with the learning aims. When learning is exploratory and open-ended, conventional and familiar indicators may or may not convey adequate feedback; where they do not, inventing new indicators and becoming sensitive to their meaning become key parts of improving assessment.


As already noted, the context and technologies of learning and assessment are changing rapidly. Part of this change calls into question how we define and achieve the competences for learning to learn. Indeed, the term “competence”, which refers to the ability to mobilize both cognitive and non-cognitive resources in unfamiliar contexts, seems particularly apt here. Learning, in both open and closed systems, requires learners to define what, how, with whom and why they learn; they and their collaborators, mentors and teachers need to extend and deepen their knowledge of effective assessment processes, and their ability to interpret and respond to feedback. Today’s changing context calls for a deeper understanding of the factors (rational and non-rational) that enhance the ability of learners in all situations and environments to identify and track changes in their “capacity to understand themselves and the world around them”. This is assessment competence.


New technologies are changing learning and assessment – and raising learners’ expectations that they will have a say in what and how they learn. They may also support new approaches to learning and education:


• Social media facilitate information sharing, user-generated content, and collaboration in virtual communities. Increasingly, individuals expect the opportunity to participate in knowledge creation and to share their own assessments through blogs, wikis, feedback and rating services. Those with special interests may engage in collective learning and problem solving at a crowd-sourced scale. In the education world, Open Educational Resources allow educators to share and shape learning content, and to provide input on its effectiveness.


• Tracking programmes and electronic portfolios allow individuals to monitor their performance toward goals (criterion-referenced assessments), to compare their progress against their own prior performances (ipsative-referenced assessments), or to compare it against the progress of other users (norm-referenced assessments). For example, Baumeister and Tierney (2011) note that several online programmes that help individuals to track exercise, sleep and savings habits can be very effective in supporting goal achievement. The research suggests that individuals who use tracking programmes to focus on how much further they have to go rather than on progress already made (Fishbach and Koo, 2010), and who share their progress with others, are more likely to succeed in reaching their goals.


• Video games both entertain players and provide rapid feedback on progress. Many games step up challenges as players develop their skills. Educationalists are now working with game developers to create games focused on building specific competences. For example, Popović describes how novice learners in biochemistry have engaged in collective game play to address specific puzzles and problems within the discipline, and in the process have found solutions that have eluded experts. Delacruz and colleagues (2010) have found that effectively-designed games can tap into mathematical thinking and can be used for summative assessments of learning, and potentially, as more data on players’ thinking processes are gathered through game play, as a tool for formative assessment.


• Test developers are also focusing on how ICT-based assessments may support the integration of large-scale, external summative assessments and classroom-based formative assessments. Currently, there are a number of technical barriers to this kind of integration. Data gathered in large-scale assessments do not provide the level of detail needed to diagnose individual student learning needs, nor are they delivered in a timely enough manner to have an impact on the learning of students tested. There are also challenges related to creating reliable measures of higher-order skills, such as problem solving and collaboration.


• Learning Analytics, as discussed by Siemens in the following section, and by Shum in his work on Social Learning Analytics, take these ambitions a step further. In a previous paper, Siemens and Long (2011) noted that social network analysis tracking learners’ online behaviour could potentially provide detailed information (“big data”) on learning processes and real-time assessment of progress, along with suggestions for next steps. Importantly, learning analytics may also support “intelligent curriculum” and “adaptive content”, allowing deep personalisation of learning. Shum and Crick (forthcoming 2012) propose learning analytics as a tool to assess learners’ dispositions, values, attitudes and skills.

These different technologies can facilitate assessment by and for learners. At the same time, they demand that both educators and learners develop deeper and more extensive competences for assessment.


“Knowledge cannot be handled like a ready-made tool that can be used without studying its nature. Knowing about knowledge should figure as a primary requirement to prepare the mind to confront the constant threat of error and illusion that parasitize the human mind. It is a question of arming minds in the vital combat for lucidity” (Morin, 1999, p. 1). Knowing what you know, as Morin suggests, requires an understanding of the cultural, intellectual and cognitive properties and processes that shape knowledge, and of the extent to which everything we know is subject to error, illusion and the emergence of novelty. Assessment competences require an awareness of these vulnerabilities, and strategies to strengthen the capacity to define knowledge as it emerges from learning processes. The following elements are proposed as a framework for the development of assessment competences, with the hope of becoming better able to grasp emergence and to guard, as much as possible, against error and illusion.


1) Recognising the potential and limits of new and emerging assessment tools

Tools, whether high-tech or as simple as a checklist or rubric, are an essential element of assessment. Many new and emerging technologies support greater user engagement in assessment processes, provide the timely and detailed data needed to diagnose learning needs and connect users to relevant resources, build on effective tracking and monitoring strategies to support persistence toward goals, and scaffold learning challenges. Yet there are also some caveats to keep in mind. The quality of new and emerging ICT-based tools depends largely on the quality of their design as well as the underlying algorithms. High-quality tools will require significant investments in measurement technologies that take into account how people actually learn. Indeed, while cognitive scientists have made a great deal of progress in understanding the processes of learning in different subject domains over the last two decades, progress in measurement technologies has been much slower. Within the learning sciences, there is deeper understanding of how learners move from novice to expert, develop typical misconceptions, create effective learning environments, and use self-assessment and meta-cognitive monitoring (Bransford et al., 1999; Pellegrino et al., 1999). Emerging technologies, including learning analytics, aim to take these advances in our understanding of learning as a cognitive and social process into account. However, efforts to link technologies to effective learning and assessment are at an early stage. Ensuring the validity and reliability of assessment tools calls for careful testing and innovative experimentation.

2) Making assessment criteria explicit

While technologies such as blogs, social media and rating services allow anyone to express an opinion on any number of subjects, the criteria underlying assessments are not always made explicit. Both educators and learners need to be able to clearly identify the criteria they are using to make judgments. For example, learners writing blogs need to learn to outline carefully constructed arguments and to describe the basis on which they have developed an opinion or made a judgment – a skill emphasized in traditional schooling, but not always applied to the Internet. As consumers of information, learners also need to be able to identify the implicit basis for an evaluator’s assessment when the criteria are not clear, and to appraise the validity of those judgments. Are ratings and assessments measuring what they purport to measure? How do others’ assessments relate to the learner’s own values and priorities, and can online ratings inform their choices?

3) Strengthening meta-cognition and self-regulation

Meta-cognition (thinking about thinking) and self-regulation (self-control) are both central to effective learning and assessment. This includes the way we construct our identities as learners, awareness of effective learning approaches and strategies, and how cognition affects perception and judgment, including the potential for bias and error. Self-regulation places the emphasis on the process of learning rather than the outcome. Assessment may focus on self-monitoring and the development of strategies to support persistence and effort. Here, the kinds of tracking programmes mentioned above that support self-assessment – whether against one’s own prior performances, against clear goals, or against the performance of peers (respectively ipsative-, criterion- or norm-referenced assessments) – may support the development of effective self-regulation.
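To make these three frames of reference concrete, the following is a minimal Python sketch, using invented data and hypothetical function names, of how a simple tracking programme might report criterion-, ipsative- and norm-referenced feedback on the same logged scores.

# Minimal sketch (hypothetical names and data): one learner's weekly exercise minutes,
# assessed against an explicit goal (criterion-referenced), against the learner's own
# prior week (ipsative-referenced), and against other users (norm-referenced).

from statistics import mean

def criterion_feedback(current, goal):
    # Criterion-referenced: progress toward a clear target.
    return f"{current}/{goal} minutes ({current / goal:.0%} of goal)"

def ipsative_feedback(current, previous):
    # Ipsative-referenced: change relative to one's own prior performance.
    return f"{current - previous:+d} minutes versus last week"

def norm_feedback(current, peer_scores):
    # Norm-referenced: standing relative to other users.
    ahead_of = sum(score < current for score in peer_scores)
    return f"ahead of {ahead_of} of {len(peer_scores)} peers (peer mean {mean(peer_scores):.0f})"

weekly_minutes = [90, 110, 150]          # the learner's logged weeks (invented)
peer_minutes = [60, 120, 135, 180, 95]   # other users' latest week (invented)
goal = 200

current, previous = weekly_minutes[-1], weekly_minutes[-2]
print("Criterion:", criterion_feedback(current, goal))
print("Ipsative: ", ipsative_feedback(current, previous))
print("Norm:     ", norm_feedback(current, peer_minutes))

The same logged value yields three different messages, which is the point: the frame of reference, not the data alone, determines what the feedback tells the learner.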
Meta-cognition is also vital in understanding the quality of learning, whether toward specific goals or in exploring new ideas. Awareness of the potential for bias and error in assessments is an important aspect of meta-cognition. Kahneman (2011) notes that, for the most part, we rely upon impressions and feelings and are confident in our intuitive judgments. Such judgments and actions may be appropriate most of the time. However, cognition can also induce people to make errors and biased judgments on a systematic basis. While individuals are capable of sophisticated thinking, it requires a much greater level of mental effort, and the normal human tendency is to revert to intuitive approaches. Awareness of these weaknesses, and of the circumstances in which they may be particularly dangerous, is a first step toward more effective assessment. More advanced assessment competences mean that individuals are better able to regulate the use of cognitive and intuitive approaches, and to recognize situations in which error and bias might be more likely.

4) Judging the quality of information gathered in the assessment process

The quality of information gathered in assessment processes is vital for learning, whether outcomes are known or yet to be discovered. In the case of learning aimed at specific goals or outcomes, assessments of learner attainment are effective only if they are both valid and reliable. Validity means that an assessment measures what it purports to measure, while reliability means that the assessment can be repeated and produce consistent results. Validity and reliability are important whether referring to large-scale, high-stakes assessments, classroom questioning, or self- and peer-assessment. The quality of information gathered in assessment is also vital for learning that involves a radical re-framing of ideas to open new ways of thinking and acting and to find novel solutions and tools to address problems. The learning outcome, in these cases, is unknown and assessment is more about testing ideas. It may involve “simple tinkering” or the development of predictive models to systematically test and verify hypotheses. Whether learning outcomes are known or not, both educators and learners need to develop an understanding of, and appreciation for, the quality of information gathered in different assessment processes. This includes an understanding of how different measurement models mediate that information, and recognition that no single assessment can capture all the information necessary; multiple measures over time are needed for a more complete picture. Assessment competences should thus include an understanding of the strengths and weaknesses of different approaches to measurement, and of how they complement each other.
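As a rough numerical illustration, using invented scores rather than data from any study cited here, reliability is often summarised through the consistency of repeated measurements: for example, test-retest reliability can be estimated as the correlation between two administrations of the same assessment.

# Invented scores for six learners sitting the same short test twice, two weeks apart.
from statistics import correlation  # available in Python 3.10+

first_sitting = [12, 15, 9, 18, 14, 11]
second_sitting = [13, 14, 10, 17, 15, 10]

r = correlation(first_sitting, second_sitting)  # Pearson's r between the two sittings
print(f"test-retest reliability estimate (Pearson r) = {r:.2f}")  # values near 1.0 suggest consistent results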

5) Inquiry

Both novice and expert learners may engage in inquiry as a form of deep learning. Inquiry, however, requires strong competences to ask meaningful questions and to pursue lines of investigation. Questions, themselves a form of assessment, provide a means to develop understanding and to identify areas where thinking is unclear. For learners at “higher” levels of expertise, the outcomes of an inquiry may not be known in advance. Learners, collaborators and educators therefore need to develop competences to assess learning when information is incomplete and the answers are not clear. They need to set goals for the different steps of the inquiry process, to develop criteria to gauge the quality of learning, and to devise ways to assess it. Here, the capacity to harness new learning analytics tools to mine “big data” and support inquiry holds particular promise.

6) Sense-making

Individuals, whether working alone or collaboratively, need to make sense of information – to weigh its value and to place it in context. Sense-making competences include the ability to sort through information when there is too much of it, to recognize when there is too little, to understand the ways in which individuals and groups shape information based on their own frames of reference, and, similarly, the ways in which assessment technologies filter information. Sense-making may also require a reassessment of beliefs and prior knowledge in order to arrive at new and deeper understandings and to create new meaning. Indeed, this kind of learning may be considered “transformative” and can have a profound impact on development (Mezirow, 2011). The willingness to learn and to re-learn, to assess and to re-assess, to experiment and to reflect creates the basis for wisdom (Miller et al., 2008).

7) Identifying next steps for learning and progress

When the aim of assessment is to improve future learning, the information gathered in assessment processes should be linked to next steps for learning. The capacity to diagnose learning needs, to identify appropriate strategies and resources for further learning, and to seek further input from mentors, peers or the broader environment is thus an essential part of the learning and assessment process. Indeed, it is a core competence for learning-to-learn.



Assessment has always been an important competence for learning to learn, but it is increasingly central as new socio-economic contexts and new tools both require and enable individuals to take charge of their own learning. The elements above represent a first effort at re-defining and deepening assessment competences as part of Morin’s ongoing “combat for lucidity” and knowledge in an emergent world.


Ultimately, assessment competences are about how each individual constructs her or his identity as a learner and as a person. Thus, the standards that guide a learner’s development, the ability to assess learning when there are no set standards, the capacity to re-assess and re-frame ideas and prior beliefs, and the ability to define next steps are all vital elements along the path to lifelong learning.


