Thursday, 14 November 2019

Analytics That Help Real-Time Student Learning

Measuring student achievement at every moment along the academic journey

How Do You Know What Students Are Really Learning?
When presenting to a large number of students, how can an instructor identify who is struggling to understand the content? When seeking ways to improve teaching, how can instructors identify where and how to focus their efforts? When looking to increase student retention, how can a higher education institution find the most relevant data and apply it for the greatest impact?

These questions are receiving more attention as colleges and universities look to improve student performance and teaching quality through better use of learning technologies and data analytics. It all starts in the classroom, where limited instructional support tools have made it difficult to obtain real-time, actionable insights into student learning. Student information systems, learning management systems and other sources of student data don’t track the progress or roadblocks in the learning process. Without the right solutions, instructors can’t easily track the factors that contribute to a student’s learning progress, such as attending class, participating in group discussions and asking questions. And even when these tools are available, they may provide information only after the
class ends. Knowing at every moment which students are on track — and, more importantly, which students are struggling — is vital information for improving student retention.

Useful Data In and Out of Class
What instructors need to improve student learning is real-time, easy-to-use data that is available throughout the class session. With the right technologies, the instructor can view learning analytics, ask discussion questions, and generate quizzes and polls that gauge student understanding. By seeing what students need now, the instructor can adjust teaching immediately, when it can have the highest impact.

Of course, a student’s learning doesn’t only happen in the classroom, and analytics shouldn’t stop there. A holistic view of a student’s progress is possible when an instructor uses a video capture system to record lectures and learning modules for student review outside of class time. Because these videos are stored in the cloud for Web access, it’s easy for the system to collect objective data on how students are using these resources. For example, repeated views of a particular section in a lecture recording may indicate a topic that needs more explanation. Students, in turn, benefit from personalized learning tools that allow them to customize their own study guides, ask questions anonymously and review measurements of their activities.
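To make the idea of repeated views of a particular section concrete, here is a minimal sketch that bins hypothetical playback events into fixed-length segments and surfaces the most re-watched ones. The event format and segment length are assumptions for illustration, not the interface of any particular capture system.

    from collections import Counter

    SEGMENT_SECONDS = 30  # bin width for "sections" of a recording (assumption)

    def rewatch_hotspots(view_events, top_n=3):
        """Return the most re-watched segments of a lecture recording.

        view_events: iterable of (student_id, start_sec, end_sec) playback
        intervals. This schema is hypothetical; real capture systems expose
        their own event formats.
        """
        counts = Counter()
        for _student, start, end in view_events:
            for segment in range(int(start) // SEGMENT_SECONDS,
                                 int(end) // SEGMENT_SECONDS + 1):
                counts[segment] += 1
        # The segments viewed most often are candidates for topics that
        # need more explanation in class.
        return [(seg * SEGMENT_SECONDS, n) for seg, n in counts.most_common(top_n)]

    # Example: three students, two of whom replay the interval around 300 s.
    events = [("s1", 0, 600), ("s2", 280, 330), ("s2", 290, 320), ("s3", 295, 325)]
    print(rewatch_hotspots(events))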

Guiding Improvements Across the Complete Academic Experience
When used correctly, active learning solutions, lecture capture and other tools, together with the data they generate, will deliver benefits to instructors, students and the institution.

For Course Instructors
Learning analytics provide in-depth guidance to improve teaching practices in multiple ways.
Real-time insights about student learning. In-the-moment analytics show an instructor what adjustments are needed to meet the learning needs of the class as a whole and to deliver targeted help to struggling students.
Customizable measurements. Instructors gain more useful information when data can be tailored to focus on what’s important to their teaching — whether it be test scores, classroom participation, completion of online assignments or other factors (a simple sketch of such a weighting follows this list).
Continuous feedback. Data about classroom and online activity supplements the feedback an instructor obtains from in-class polls, questions and discussions, helping to keep course materials and teaching techniques fresh and relevant.
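As a small illustration of what customizable measurements could look like in practice, the sketch below lets an instructor weight the factors that matter to their course. The metric names and weights are invented for the example.

    # Each instructor weights the factors they care about; names and weights
    # here are illustrative only, not a prescribed set of measurements.
    COURSE_METRIC_WEIGHTS = {
        "quiz_score": 0.4,
        "class_participation": 0.3,
        "online_assignments": 0.3,
    }

    def engagement_index(student_metrics, weights=COURSE_METRIC_WEIGHTS):
        """Weighted 0-1 index over per-student metrics already scaled to 0-1."""
        return sum(weights[k] * student_metrics.get(k, 0.0) for k in weights)

    print(engagement_index({"quiz_score": 0.8, "class_participation": 0.5,
                            "online_assignments": 0.9}))  # 0.74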

For Students
Analytics help students become more aware of their learning needs and encourage improvement in academic performance.
More effective learning support. With analytics, students can monitor their progress, identify when they need support from the teaching staff or advisors, better prepare for exams and understand how to increase their academic achievement.
More learning success. Comprehensive, easily available learning data guides students throughout their college journey. Early alerts about lagging performance allow instructors and advisors to help students complete a course and stay on track for graduation.
More investment value. With more visibility into their own progress, students can take action to improve their studies and ensure they are getting the most value from their education investment.

For the Institution
Colleges and universities that combine active learning solutions with data analytics will gain several academic advantages.
Improved teaching and learning. Better information about what’s working for instruction, both in and out of the classroom, means the institution can continuously improve academic quality. Course content and teaching techniques can be adapted more readily to changes in student needs, instructional technologies and formats for course delivery.
Increased student retention. The formula is simple: Happy students are students who stay. Analytics identify the early interventions that motivate students to stay in challenging courses. When students are successful in one class, they are more likely to be successful in others — and more satisfied with their overall college experience.
Substantiated value and differentiation. It’s hard for students and parents to accept rising tuition levels without the assurance of a good learning experience and a clear path to on-time graduation. Demonstrating the institution’s commitment to new pedagogical approaches is a key factor for substantiating the educational value delivered to students. Adopting new learning technologies and providing access to student learning indicators not only yields a high return on investment for the institution in the form of better learning outcomes, but also differentiates the college or university when recruiting students.
Integration with strategic initiatives. Analytics available from active learning solutions can be integrated with data from learning management, student information and other systems to create a full view of student performance levels and needs. This integrated perspective also enables the institution to better target strategic initiatives for improving academics and student services.

Here are some recommendations for the successful implementation of learning analytics:

Provide Real Evidence: Learning analytics offer instructors the opportunity to explore student behaviors and the impact on learning outcomes. Instructors may be willing to try new pedagogies and tools if shown robust evidence the change results in learning and engagement improvements.
Keep it Anonymous: For administrators, reporting data in an aggregated and anonymous form will keep the focus on high-level insights for improving overall teaching quality. This approach avoids the issues that could arise from a perceived punitive review of data for individual instructors and courses.
Instructor Support Groups: Starting and supporting a discussion group will help instructors learn how to best apply learning analytics within the context of their courses and instructional goals.
Incentives: Offering institutional incentives can motivate instructors to use analytics data in the classroom. For example, one educational organisation offers a Learning Fellows program in which instructors who adopt learning analytics are eligible to receive additional research funding. The program also gives participants time and opportunity to explore analytics, and conveys the institution’s interest in supporting pedagogical change and growth.
Access: Require that vendors make all data collected from students on campus available to the institution in a form it can use to blend with other data. This opens opportunities for research as well as entrepreneurial endeavors by instructors and students.

Launching Learning Analytics on Your Campus
There are many point applications available to collect data, but the majority of them do not offer comprehensive learning analytics to identify trends and support an institution’s key initiatives. An effective learning analytics platform for a school or university is created from three core technologies (a rough sketch of how their data streams can be combined follows the list):
  • A lecture capture system and cloud-based learning tools that allow students to access course content before and after classroom sessions, and help instructors easily manage their recorded lectures, teaching videos and other materials.
  • Active learning solutions, like in-class polling and quizzes, for use by instructors and students in the classroom to improve teaching and learning immediately.
  • Analytics tools for data tracking, analysis and reporting of student progress.
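One way to picture how these three technologies feed a single analytics layer is as a shared event stream. The sketch below is a hypothetical, minimal schema and metric, not a description of any vendor's platform.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class LearningEvent:
        """Hypothetical common record emitted by capture, polling and LMS tools."""
        student_id: str
        course_id: str
        source: str      # e.g. "lecture_capture", "in_class_poll", "lms"
        event_type: str  # e.g. "video_view", "poll_answer", "assignment_submit"
        timestamp: datetime
        detail: dict     # source-specific payload (segment watched, answer given, ...)

    def correct_poll_rate(events, student_id):
        """Share of in-class poll answers a student got right (illustrative metric)."""
        answers = [e for e in events
                   if e.student_id == student_id and e.event_type == "poll_answer"]
        if not answers:
            return None
        return sum(e.detail.get("correct", False) for e in answers) / len(answers)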
I recommend that a learning analytics initiative start with a pilot project involving a small number of courses. The project should be small enough to be easily manageable, yet large enough to identify appropriate plans for broader implementation and future scalability. A key factor in the success of a pilot project is finding instructors who are willing to be early adopters and are excited about using the latest tools for teaching innovation. If you help these pilot instructors succeed, they will become evangelists who will encourage adoption by other instructors. Listening to instructors’ feedback will also reduce any perception that the institution is implementing technology for its own sake.

Schools and universities are becoming data-driven organizations and are learning how to collect, store and access all the data that’s available today. But analytics shouldn’t stop there. As an institution, we want to be looking at other measures we can pursue in the future to give us better insight into the performance of our academics and services.

Yet the focus of learning analytics should rightly remain at the critical point of interaction between instructor and student in the classroom. As one instructor put it: “I’m excited that by the second week of a course I can identify students who may be in trouble and I can take steps right away to keep them engaged. That outreach is good for the students, for my class as a whole and for the institution.”

Friday, 8 November 2019

AIED - Artificial Intelligence and/in Education

"Wenn wir an die Zukunft der Welt denken, so meinen wir immer den Ort, wo sie sein wird, wenn sie so weiter läuft, wie wir sie jetzt laufen sehen, und denken nicht, daß sie nicht gerade läuft, sondern in einer Kurve, und ihre Richtung sich konstant ändert."

"When we think of the world’s future, we always mean the destination it will reach if it keeps going in the direction we can see it going in now; it does not occur to us that its path is not a straight line but a curve, constantly changing direction."
                                                                            Wittgenstein (1980), pp. 3 / 3e


If, as some anthropologically-minded archæologists would claim, the present is the key to the past, then perhaps the future is the key to the present? In this paper I assume the converse: that the present and the past are keys to the future, for the case of research in the field of Artificial Intelligence and/in Education (henceforth abbreviated to "AIED").

Any view of what objectives a research field may achieve in the future must be based on a view of the nature of the field in question, up to the present day. I characterise the past, the present and the near future of AIED research in terms of a combination of different roles played by models of educational processes, namely: models as scientific tools, models as components of educational artefacts, and models as bases for design of educational artefacts. It should be noted that the views expressed here are not those of an objective historian of science, but rather of a researcher engaged in the field that is being discussed. In that case, description, prediction and prescription coincide to a certain extent.

One could say that there are basically three sorts of argumentative texts: those that argue (mostly) in favour of a particular view, those that argue (mostly) against one and those that attempt to weigh pro and contra arguments in the balance (the conventional form of academic discourse). This text falls (mainly) into the first category, and so no claim to exhaustivity is made in citing research that could constitute a rebuttal to the views argued for here. In the context of the special issue in which this text appears, I can only hope that some readers will be willing to supply counter-arguments and that a synthesis could emerge from any ensuing debate.

As with any field of scientific research, AIED involves elaborating theories and models with respect to a specific experimental field, in relation to the production of artefacts. What characterises a particular field is the nature of each of these elements and of the relations that are established between them: what types of theories are elaborated? what counts as a model? what is the experimental field studied? how close are the links between theories, models and artefacts? 

With respect to other research in the field of education, one of the specificities of AIED research lies in the different roles that models can play. A significant part of AIED research can be seen as the use of computers to model aspects of educational situations that themselves involve the use of computers as educational artefacts, some of which may incorporate computational models. By an educational situation I mean a situation that is designed in some way so that a specific form and content of learning will occur; by "educational process" I do not only mean the processes of learning and teaching, but also the larger scale processes by which social situations that are intended to enable teaching and learning to occur are designed.

There are thus three main roles for models of educational processes in AIED research, as follows:

1. Model as scientific tool. 
A model — computational or other — is used as a means for understanding and predicting some aspect of an educational situation. For example, a computational model is developed in order to understand how the "self-explanation" effect works (VanLehn, Jones & Chi, 1992). This is often termed cognitive modelling (or simulation), although, as I discuss below, the term "cognitive" can have several interpretations.

2. Model as component. 
A computational model, corresponding to some aspect of the teaching or learning process, is used as a component of an educational artefact. For example, a computational/cognitive model of student problem solving is integrated into a computer-based learning environment as a student model. This enables the system to adapt its tutorial interventions to the learner's knowledge and skills. Alternatively, the model-component can be developed on the basis of existing AI techniques, and refined by empirical evaluation.

3. Model as basis for design. 
A model of an educational process, with its attendant theory, forms the basis for design of a computer tool for education. For example, a model of task-oriented dialogue forms the basis of design and implementation of tools for computer-mediated communication between learners and teachers in a computer supported collaborative learning environment (e.g. Baker & Lund, 1997). In this case, a computational model is not directly transposed into a system component.

Although researchers often attempt to establish a close relationship between 1 and 2 — e.g. cognitive-computational models of student problem-solving becoming student models in Intelligent Tutoring Systems (henceforth, "ITS(s)") — there is no necessary relation between the two, since it may be that the most effective functional component (in an engineering sense) of an educational artefact does not operate in a way that models human cognition.

These three possibilities are not, of course, mutually exclusive: most often, a given AIED research programme contains elements of each, to a greater or lesser degree. For example, one part of an educational system may be based on study of students' conceptions, and other parts may be based on using existing computer science techniques. However, it is not always possible to do this in a way that simultaneously satisfies the requirements of each type of use of models, i.e. to produce a satisfactory scientific model that is an effective tutoring system component and that leads to an artefact that is genuinely useful in education. I believe that all three of these possibilities are valid and useful, provided that they are pursued in specific ways that are coherent with the researcher’s goals.

Before moving on to a discussion of the future of AIED research in terms of these three roles of models, I need to say something about what a model is. Across different sciences, many different types of abstract constructions count as models — for example, descriptive, explanatory, analytic, qualitative, quantitative, symbolic, analogue, or other models. Without entering into an extended discussion in the philosophy of science, it is possible, and useful here, to identify a small number of quite general characteristics of models.

Firstly — and classically — the function of a model is to predict the existence or future incidence of some set of phenomena, in a determinate experimental field. For example, models of stock exchange transactions should predict changes in financial indices; models of the weather should predict the weather tomorrow; a model of cooperative problem-solving should predict what forms of cooperation can exist (see below), and ideally what interactive learning mechanisms they trigger; a student model should predict the evolution of a student's knowledge states; and so on.
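As one concrete and deliberately simple example of a student model that predicts the evolution of a student's knowledge states, the sketch below implements a standard Bayesian Knowledge Tracing update. The parameter values are placeholders, and nothing here is tied to any particular system discussed in this text.

    def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
        """One Bayesian Knowledge Tracing step.

        p_know : prior probability that the student has mastered the skill
        correct: whether the latest answer was correct
        slip, guess, learn: standard BKT parameters (placeholder values)
        Returns the estimated probability of mastery after this observation.
        """
        if correct:
            evidence = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
        else:
            evidence = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
        # Allow for learning between practice opportunities:
        return evidence + (1 - evidence) * learn

    p = 0.3
    for answer in [True, True, False, True]:
        p = bkt_update(p, answer)
        print(round(p, 3))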

Secondly, a further and equally important function of a model is to enable elaboration or refinement of the theory on which it is based, by rendering explicit its commitments on epistemological (what can be known and how?) and ontological (what is claimed to exist?) planes. It is generally accepted that there should be a link between the epistemology and the ontology: one should not posit the existence of entities without saying something about how they can be known. Such a relation between model and theory can lead to explanation of phenomena. A theory is not at all the same thing as a model; it consists of a set of quite general assumptions and laws — e.g. the views according to which human cognition is complex symbolic information processing, or that knowledge is a relation between societal subjects and the socially constituted material world — that are not themselves intended to be directly (in)validated (for that, the theory must engender a model). Theories are foundational elements of paradigms, along with shared problems and methods (Kuhn, 1962).

Thirdly, a model necessarily involves abstraction from phenomena, selection of objects and events, in its corresponding experimental field; it necessarily takes some phenomena into account but not others. It is not relevant to criticise a model as such by claiming that it does not take all phenomena into account, but one can criticise its degree of coverage of an experimental field. The modelling process itself involves complex matching processes during which objects and events are selected and structured so as to correspond to the model, within the constraints of its syntax. Tiberghien (1994) has termed this process one of establishing a meaning, or a semantics, for the model, in relation to its experimental field.

This is where artefacts enter the picture. All research fields necessarily comprise aspects that are more or less close to the production and/or use of artefacts, in the sense of either 'applications' of theories or models, use of artefacts or instruments as experimental tools, sometimes on a large scale, or the study of artefacts themselves and their use, each of which can be a source of new research problems. Even highly theoretical work in mathematics, or descriptive work in botany, that is carried out as "pure research", may, perhaps decades later, find an unanticipated application via, for example, other domains such as physics or medical research. I do not believe that unidirectional 'application' exists: the relation between artefact, theory and model is always complex and multidirectional. Whilst it is clear that any field needs both theory and a close relation with the production of artefacts, it seems to me that one of the defining characteristics of AIED research is that it is closer to the theoretical end of the spectrum.

There is nothing intrinsically wrong in that: for example, physics has for a long time comprised both theoretical and experimental branches. On that analogy, AIED research would be theoretically-oriented educational science, or even "Learning Science", that adopts a modelling approach.

Despite this variety of roles and types of models, I think that AIED as a field nevertheless still largely operates with a somewhat restricted view of what models are — i.e. symbolic and computational information-processing models. Whilst this view has been important in defining the field as such up to the present, I do not think that it is fruitful or realistic as a unique ‘model’ for what the field currently is and will become. Other types of models of educational processes, that are not necessarily cognitive (in the above sense) nor computational in nature, can, and will I think/hope play an important role in AIED research.

I have sketched a personal and prospective view of AIED research that turns on three possible roles for models: as scientific tools, as components of computational educational artefacts, and as bases for design of such artefacts.

In terms of the first role, my view is that AIED research, over the past three decades, has already mapped out a vast space of phenomena to be studied. We do not need to extend the space of phenomena, but rather to extend the range of theoretical tools from those available in cognitive science, and to adopt a wider (yet more strict) notion of what is and what is not a model. Specifically, and in terms of how I defined models themselves, I claimed that there is no a priori reason why interesting models should not be developed, that extend the notion of ’cognition’ to embrace action and perception, as embedded in artefacts and social relations.

AIED research should and will, I think, open out to a greater extent than is currently the case, into cognitive science, considered in the widest sense of the term. The role of a model, as scientific tool, is to help us to explain, to develop theory, and to predict. As such, any model abstracts from reality. Failure to take a particular phenomenon into account does not invalidate a model, it just restricts its usefulness.

In terms of the second role — models as components — I claimed that individualising ITS are not currently adapted to existing educational practices, largely because of, on a micro-level, problems associated with failing to take teachers, and other social actors, into account. Either we must adapt the components and the artefacts, or else change educational systems; and no doubt, most researchers aim for some realistic combination of both. Depending on the culture concerned, there may be a greater or lesser difference between the timescales of institutional and technological change. I proposed that ITS will, in the near future, be most appropriate for social situations that are less norm-based than most state education systems. 

Within such educational situations, intelligent information search for learners using the Web, rather than intelligent explanation generation, will come to the forefront in the near future, depending on the type of learning task involved. Intelligent explanation generation, and help systems in general, may turn out to be more important for teachers rather than for learners, in, for example, distributed learning communities. Models as intelligent components of educational artefacts have, I think, an important role to play in the near future; it is simply that their uses may not be in the situations that AIED researchers originally thought.

Finally, once we remember that (of course) models are not, by their nature, necessarily computational, this opens up a wide range of possible ways in which theories and models can form the bases of design of educational artefacts. What is required is that the specific nature of the relations between theory, model and design of artefacts be made as explicit as possible, as legitimate objects of scientific discussion and as means of generalising findings towards redesign.

Personally, I believe that theories and models will find their most effective application in design of collaborative distributed educational technologies.

I conclude with some brief remarks on the unity and future of AIED research, as a field. Given all the possible evolutions of AIED research that I have sketched, isn't there a strong possibility that AIED could dissipate into educational research and/or that part of cognitive science that is concerned with learning and teaching? Perhaps, and after all, why not? But I do not think so, and for the following reasons. 

In terms of the particular view of AIED research I have outlined above, what makes a piece of research AIED research is, quite simply, that it has something innovative to say about all three of the possible roles of models, with a greater or lesser emphasis being put on each. Concretely, this means that the research in question proposes a specific, explicit and coherent set of relations between: (1) a theory, (2) a model, (3) an experimental field of educational phenomena, (4) computational-educational artefacts, whose use is part of (3), and (5) an educational design process. It is not enough to propose a model of an educational phenomenon; the research must also describe how the model relates to theory, how it is relevant to the study or design of artefacts for teaching and learning, and how that design might proceed. This means that AIED research is very complex, and very difficult to carry out.

I think that those constraints will continue to be sufficient for distinguishing a specific field or area of research, whether it is called AIED or something else.

Tuesday, 13 August 2019

Potentials of Learning Analytics - Amit Bahl

The term Learning Analytics has emerged to describe the process of understanding learning behaviour from the data gathered from interactions between learners and content. The term can be defined as the measurement, collection, analysis and reporting of information about learners and their contexts for the purposes of understanding and optimizing learning. Another simple definition states that “learning analytics is about collecting traces that learners leave behind and using those traces to improve learning”. A number of authors have considered the importance and impact of learning analytics in the future of education. In their view, the field of learning analytics is the confluence of knowledge drawn from related disciplines such as educational psychology, learning sciences, machine learning, data mining and human-computer interaction (HCI).

Many studies have reported the positive contributions of learning analytics. The encouraging results confirm that, if properly used, learning analytics can help instructors to identify learning gaps, implement intervention strategies, increase students’ engagement and improve learning outcomes. A search of an abstract and citation database of peer-reviewed literature identified case studies reporting empirical findings on the application of learning analytics in higher education. A total of 43 studies were selected for in-depth analysis to discover the objectives, approaches and major outcomes of the studies. The analysis identifies six aspects of the education process that learning analytics can support: (i) improving student retention, (ii) supporting informed decision making, (iii) increasing cost-effectiveness, (iv) understanding students’ learning behavior, (v) arranging personalized assistance to students, and (vi) providing timely feedback and intervention. These aspects are not to be considered as separate entities; they are inextricably linked.

(i) Improving student retention 

In educational settings, detecting early warning signs for students who are struggling with their studies can be an advantage for instructors. The issues and problems that students face may vary from social and emotional issues to academic matters or other factors that may lead them to give up their studies. Those students can be provided with remedial instruction to overcome some of the problems. For example, Star and Collette (2010) report that, by knowing the circumstances and understanding the causes, instructors can increase their interaction with students to provide personal interventions. As a result, the students showed better academic performance and retention rates increased significantly. In a similar study, Sclater et al (2016) describe how increased interaction with students promotes a sense of belonging to the learner community and learning motivation. It was found that in the process the students’ attrition rate dropped from 18% to 12%.
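As a rough sketch of what an early-warning signal could look like, the rule below flags students whose recent activity drops well below their own earlier baseline. The field and threshold are assumptions for illustration, not a validated at-risk model and not the method of the studies cited above.

    def at_risk(weekly_logins, drop_ratio=0.5, min_history=3):
        """Flag a student whose latest weekly activity falls far below their baseline.

        weekly_logins: list of weekly login counts, oldest first (illustrative
        proxy for engagement; a real model would combine several indicators).
        """
        if len(weekly_logins) < min_history + 1:
            return False  # not enough history to establish a baseline
        baseline = sum(weekly_logins[:-1]) / (len(weekly_logins) - 1)
        return baseline > 0 and weekly_logins[-1] < drop_ratio * baseline

    print(at_risk([5, 6, 5, 1]))  # True: activity collapsed in the latest week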

(ii) Supporting informed decision making 

The results from learning analytics can also be used to support informed decision making. A study by Toetenel and Rienties (2016) at the Open University in the UK involved analyzing the learning designs of 157 courses taken by over 60,000 students and identifying the common pedagogical patterns among the courses. The authors suggest that educators should take note of activity types and workload when designing a course, and that such information will be useful in decision making about specific learning designs. However, the authors conclude that further studies are needed to find out whether particular learning design decisions result in better student outcomes.

(iii) Increasing cost-effectiveness 

With funding cuts and rising expenditure, cost-effectiveness has become a key indicator of sustainability in the education sector. One of the effective ways is to take advantage of learning management systems that not only deliver the course materials but also keep track of learners’ activities. Instructors can analyze the activities and report progress to students and other stakeholders in a cost-effective manner. As Sclater et al (2016) note, after conducting the analysis, notifications were automatically generated and sent to students and their parents about the students’ performance.

(iv) Understanding students’ learning behavior 

To better understand students’ learning behavior, instructors can explore the data collected from learning management systems and social media networks. Instructors can examine the relationships between students’ utilization of resources, learning patterns and preferences, and learning outcomes. This approach was adopted by Gewerc et al (2014), who examined collaboration and social networking in a subject within an education degree course. The study analyzes the intensity and relevance of students’ contributions in the collaborative framework by using social network analysis and information extraction. The authors concluded that the findings help to understand more clearly how students behave during the course.
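To give a flavour of the social network analysis mentioned above, the sketch below builds a small directed graph of who replied to whom and ranks students by how many replies their posts attract. The reply log is invented, and this is an illustration of the general technique rather than the method used by Gewerc et al (2014).

    import networkx as nx

    # Hypothetical forum reply log: (author_of_reply, author_replied_to)
    replies = [("ana", "ben"), ("carla", "ben"), ("ben", "ana"),
               ("dmitri", "ben"), ("carla", "ana")]

    G = nx.DiGraph()
    G.add_edges_from(replies)

    # Students whose posts attract the most replies sit at the centre of the
    # collaboration network; low-centrality students may be peripheral.
    print(sorted(nx.in_degree_centrality(G).items(), key=lambda kv: -kv[1]))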

(v) Arranging personalized assistance to students 

Given the advantages of the data mining techniques and algorithms used in business and manufacturing, learning analytics has emerged as educational data mining of students and the courses they study. An investigation into the application of such techniques in the education domain was conducted by Karkhanis and Dumbre (2015) to discover insightful information about students and their interaction with a course. They report that after analyzing the students’ study results, demographics and social data, instructors are able to identify who needs assistance most and provide individual counselling.

(vi) Providing timely feedback and intervention 

Providing feedback to students is an important role of teachers in any educational setting. This process enables students to learn from their actions and can have a significant impact on learners’ motivation. The quality and timeliness of feedback are crucial in the learning process. Using learning analytics, teachers can identify students who are in need of assistance and provide appropriate intervention to those specific students. Dodge et al (2015) report that interventions through emails to students work best, and found that such an approach has an impact on student achievement.

As the amount of data collected from the teaching-learning process increases, the potential benefits of learning analytics can be far-reaching for all stakeholders in education, including students, teachers, leaders and policy makers. It is my firm belief that, if properly leveraged, learning analytics can be an indispensable tool to narrow the achievement gap, increase student success and improve the quality of education in the digital era.

Thursday, 1 August 2019

Forgetting: A Tool For Learning

Goal-Directed Forgetting
People often view forgetting as an error in an otherwise functional memory system; that is, forgetting appears to be a nuisance in our daily activities. Yet forgetting is adaptive in many circumstances. For example, if you park your car in the same lot at work each day, you must inhibit the memory of where you parked yesterday (and every day before that!) to find your car today.

Goal-directed forgetting refers to situations in which forgetting serves some implicit or explicit personal need. In recent years research has supported the notion that mechanisms of inhibition—analogous to those proposed in many areas of lower-level cognition, such as vision—play an important role in goal-directed forgetting. Researchers have developed and utilized a variety of experimental paradigms to investigate phenomena that exemplify goal-directed forgetting, including directed forgetting and retrieval-induced forgetting.

Directed forgetting
Forgetting is often viewed as an uncontrollable, undesirable failure of memory. Yet it is possible to experimentally induce forgetting in an individual that can lead to unexpected benefits. One such paradigm is known as “directed forgetting.” In the typical list-based directed forgetting paradigm, a participant will study two lists of words, and is notified after each list whether or not it will be tested later on. If a list is tested after the learner was notified that it would not be tested, the learner will show weaker recall for that list, compared to a baseline condition in which all lists are expected to be tested, demonstrating the costs of directed forgetting. Interestingly, it is commonly found that recall of any list that was expected to be tested will be greater than that of the baseline condition, demonstrating the unexpected benefits of directed forgetting.

Another common paradigm for directed forgetting is the item-based method, in which participants are told after each word whether or not it will be tested. A similar pattern of results is observed, in which recall rates for the to-be-forgotten words are depressed, while recall rates for the to-be-remembered words are increased. However, the mechanisms by which item-method directed forgetting occurs are purported to be different than the mechanisms by which list-method directed forgetting operates.

In addition to studying the basic phenomenon of directed forgetting, efforts in the lab are currently underway to further investigate the effects of list-based directed forgetting using different materials and different paradigms. For example, does the pattern of results extend beyond simple word lists to more educationally relevant materials, such as text passages or videos? What happens to the pattern of results when information between the two lists is related? In addition, we are investigating whether directed forgetting applies to other learning paradigms, such as induction learning.

Retrieval-induced forgetting
Memory cues, whether categories, positions in space, scents, or the name of a place, are often linked to many items in memory. For example, the category FRUIT is linked to dozens of exemplars, such as ORANGE, BANANA, MANGO, KIWI, and so on. When forced to select from memory a single item associated to a cue (e.g., FRUIT: OR____), what happens to other items associated to that general, organizing cue? Using the retrieval-practice paradigm, we and other researchers have demonstrated that access to those associates is reduced. Retrieval-induced forgetting, or the impaired access to non-retrieved items that share a cue with retrieved items, occurs only when those associates compete during the retrieval attempt (e.g., access to BANANA is reduced because it interferes with retrieval of ORANGE, but MANGO is unaffected because it is too weak of an exemplar to interfere). Researchers argue for retrieval-induced forgetting as an example of goal-directed forgetting because it is thought to be the result of inhibitory processes that help facilitate the retrieval of the target by reducing access to competitors. In this way, retrieval-induced forgetting is an adaptive aspect of a functional memory system.

In recent years, researchers have explored this phenomenon in a variety of ways. For example, it has been found that items that suffer from retrieval-induced forgetting benefit more from relearning than control items. Researchers have also demonstrated that retrieval success is not a necessary condition for retrieval-induced forgetting to occur. That is, when participants are prompted to retrieve with cues that have no possible answer (FRUIT: WO____, rather than the standard FRUIT: OR_____), access to competing items (BANANA) is impaired, as demonstrated on a final recall test. Furthermore, researchers are currently exploring the impact of variations in the type of cue support provided for retrieval attempts (FRUIT: OR_____; FISH: ____ORE; WEAPONS: DAGG_____). Research efforts in this domain currently rest on testing various assumptions of theoretical accounts of retrieval-induced forgetting.


Tuesday, 28 May 2019

THE ROLE OF DATA-DRIVEN FEEDBACK IN LEARNING - LEARNING ANALYTICS

Discussions about feedback frequently take place within a framing of assessment and student achievement (Black & Wiliam, 1998; Boud, 2000). In this context, the primary role of feedback is to help the student address any perceived deficits as identified through the completion of an assessment item. Ironically, assessment scores and student achievement data have also become tools for driving political priorities and agendas, and are also used as indicators in quality assurance requirements. 

Assessment in essence is a two-edged sword used to foster learning as well as a tool for measuring quality assurance and establishing competitive rankings (Wiliam, Lee, Harrison, & Black, 2004). While acknowledging the importance of assessment for quality assurance, we focus specifically on the value of feedback often associated with formative assessment or simply as a component of student completion of set learning tasks. Thus, this article explores how student trace data can be exploited to facilitate the transformation of the essence of assessment practices by focusing on feedback mechanisms. With such a purpose, I highlight and discuss current approaches to the creation and delivery of data-enhanced feedback as exemplified through the vast body of research in learning analytics and educational data mining (LA/EDM).

Although there is no unified definition of feedback in educational contexts, several comprehensive analyses of its effects on learning have been undertaken (e.g., Evans, 2013; Hattie & Timperley, 2007; Kluger & DeNisi, 1996). In sum, strong empirical evidence indicates that feedback is one of the most powerful factors influencing student learning (Hattie, 2008). The majority of studies have concluded that the provision of feedback has a positive impact on academic performance. However, the overall effect size varies and, in certain cases, a negative impact has been noted.

For instance, a meta-analysis by Kluger and DeNisi (1996) demonstrated that poorly applied feedback, characterized by an inadequate level of detail or a lack of relevance of the provided information, could have a negative effect on student performance. In this case, the authors distinguished between three levels of the locus of the learner’s attention in feedback: the task, the motivation, and the meta-task level. All three are equally important and can vary gradually in focus. Additionally, Shute (2008) classified feedback in relation to its complexity, and analyzed factors affecting the provision of feedback such as its potential for negative impact, the connection with goal orientation, motivation, the presence of scaffolding mechanisms, timing, and different learner achievement levels. Shute noted that to maximize impact, any feedback provided in response to a learner’s action should be non-evaluative, supportive, timely, and specific.

Early models relating feedback to learning largely aimed to identify the types of information provided
to the student. Essentially, these studies sought to characterize the effect that different types of information can have on student learning (Kulhavy & Stock, 1989). Initial conceptualizations of feedback were driven by the differences in learning science theorisations of how the gap between the actual and desired state of the learner can be bridged (cf. the historical reviews in Kluger & DeNisi, 1996; Mory, 2004). According to Mory (2004), contemporary models build upon pre-existing paradigms by viewing feedback in the context of self-regulated learning (SRL), i.e., a style of engaging with tasks in which students exercise a suite of powerful skills (Butler & Winne, 1995). These skills, namely setting goals, thinking about strategies, selecting the right strategies, and monitoring the effects of these strategies on progress towards the goals, are all associated with student achievement (Butler & Winne, 1995; Pintrich, 1999; Zimmerman, 1990). As part of their theoretical synthesis between feedback and self-regulated learning, Butler and Winne (1995, p. 248) embedded two feedback loops into their model. The first loop is contained within the so-called cognitive system and refers to the capacity of individuals to monitor their internal knowledge and beliefs, goals, tactics, and strategies and change them as required by the learning scenario. The second loop occurs when the product resulting from a student engaging with a task is measured, prompting the creation of external feedback relayed back to the student; for example, an assessment score, or an instructor commenting upon the completion of a task.

Hattie and Timperley (2007) have provided one of the most influential studies on feedback and its impact on achievement. The authors’ conceptual analysis was underpinned by a definition of feedback as the information provided by an agent regarding the performance or understanding of a student. The authors proposed a model of feedback articulated around the concept that any feedback should aim to reduce the discrepancy between a student’s current understanding and their desired learning goal. As such, feedback can be framed around three questions: where am I going, how am I going, and where to next? Hattie and Timperley (2007) proposed that each of these questions should be applied to four different levels: learning task, learning process, self-regulation, and self. The learning task level refers to the elements of a simple task; for example, notifying the student if an answer is correct or incorrect. The learning process refers to general learning objectives, including various tasks at different times. The self-regulation level refers to the capacity of reflecting on the learning goals, choosing the right strategy, and monitoring the progress towards those goals. Finally, the self level refers to abstract personality traits that may not be related to the learning experience. 

The process and regulation levels are argued to be the most effective in terms of promoting deep learning and mastery of tasks. Feedback at the task level is effective only as a supplement to the previous two levels; feedback at the self level has been shown to be the least effective. These three questions and four levels of feedback provide the right setting to connect feedback with other aspects such as timing, positive vs. negative messages (also referred to as polarity), and the consequences of including feedback as part of an assessment instrument. These aspects have been shown to have an interdependent effect that can be positive or negative (Nicol & Macfarlane-Dick, 2006).
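As a loose illustration of how the three questions and four levels might be operationalized when generating data-driven feedback messages, the sketch below maps (question, level) pairs to templates. The wording and structure are invented for the example and are not part of Hattie and Timperley's model itself.

    # Illustrative templates only; a real system would tailor these to course data.
    FEEDBACK_TEMPLATES = {
        ("how_am_i_going", "task"):
            "Your answer to question {item} was {verdict}.",
        ("how_am_i_going", "process"):
            "Across this unit you solved {pct}% of problems of this type.",
        ("where_to_next", "self_regulation"):
            "Before the next quiz, try predicting your score and compare it afterwards.",
        ("where_am_i_going", "process"):
            "This unit's goal is to apply {concept} to unfamiliar problems.",
    }

    def render(question, level, **data):
        """Fill a feedback template for a given question and level, if one exists."""
        template = FEEDBACK_TEMPLATES.get((question, level))
        return template.format(**data) if template else None

    print(render("how_am_i_going", "process", pct=72))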

In reviewing established feedback models, Boud and Molloy (2013) argued that they are at times based on unrealistic assumptions about students and the educational setting. Commonly, due to resource constraints, the proposed feedback models, or at least the mechanisms for generating non-evaluative, supportive, timely, and specific feedback for each student, are impractical or at least not sustainable in contemporary educational scenarios. At this juncture, LA/EDM work can play a significant role in moving feedback from an irregular and unidirectional state to an active dialogue between agents.

The first initiatives using vast amounts of data to improve aspects of learning can be traced to areas such as adaptive hypermedia (Brusilovsky, 1996; Kobsa, 2007), intelligent tutoring systems (ITSs) (Corbett, Koedinger, & Anderson, 1997; Graesser, Conley, & Olney, 2012), and academic analytics (Baepler & Murdoch, 2010; Campbell, DeBlois, & Oblinger, 2007; Goldstein & Katz, 2005). Much of this research has taken place within the LA/EDM research communities, which share a common interest in data-intensive approaches to the study of educational settings, with the purpose of advancing educational practice (Baker & Inventado, 2014). While these communities have many similarities, there are some acknowledged differences between LA and EDM (Baker & Siemens, 2014). For example, EDM has a more reductionist focus on automated methods for discovery, as opposed to LA’s human-led explorations situated within holistic systems. Baker and Inventado (2014) noted that the main differences between LA and EDM are not so much in the preferred methodologies, but in the focus, research questions, and eventual use of models.

When considering LA/EDM through the lens of feedback, the research approaches differ in relation to the direction and recipient of feedback. For instance, LA initiatives generally provide feedback aimed towards developing the student in the learning process (e.g., self-regulation, goal setting, motivation, strategies, and tactics). In contrast, EDM initiatives tend to focus on the provision of feedback to address changes in the learning environment (e.g., providing hints that modify a task, recommending heuristics that populate the environment with the relevant resources, et cetera).

It is important to note that these generalizations are not a hard categorization between the communities, but rather an observed trend in LA/EDM work that reflects their disciplinary backgrounds and interests. The following section further unpacks the work in both the EDM and LA communities related to the provision of feedback to aid student learning.

Thursday, 27 September 2018

Mind: Consortium of agents

You know that everything you think and do is thought and done by you. But what's a "you"? What kinds of smaller entities cooperate inside your mind to do your work? To start to see how minds are like a consortium of agents, try this: pick up a cup of tea!

Your GRASPING agents want to keep hold of the cup.
Your BALANCING agents want to keep the tea from spilling out.
Your THIRST agents want you to drink the tea.
Your MOVING agents want to get the cup to your lips.

Yet none of these consume your mind as you roam about the room talking to your friends. You scarcely think at all about Balance; Balance has no concern with Grasp; Grasp has no interest in Thirst; and Thirst is not involved with your social problems. Why not? Because they can depend on one another. If each does its own little job, the really big job will get done by all of them together: drinking tea.
How many processes are going on to keep that teacup level in your grasp? There must be at least a hundred of them, just to shape your wrist and palm and hand. Another thousand muscle systems must work to manage all the moving bones and joints that make your body walk around. And to keep everything in balance, each of those processes has to communicate with some of the others. What if you stumble and start to fall? Then many other processes quickly try to get things straight. Some of them are concerned with how you lean and where you place your feet. Others are occupied with what to do about the tea: you wouldn't want to burn your own hand, but neither would you want to scald someone else. You need ways to make quick decisions.

All this happens while you talk, and none of it appears to need much thought. But when you come to think of it, neither does your talk itself. What kinds of agents choose your words so that you can express the things you mean? How do those words get arranged into phrases and sentences, each connected to the next? What agencies inside your mind keep track of all the things you've said, and also whom you've said them to? How foolish it can make you feel when you repeat yourself, unless you're sure your audience is new.

We're always doing several things at once, like planning and walking and talking, and this all seems so natural that we take it for granted. But these processes actually involve more machinery than anyone can understand all at once. So, in the next few articles of this series, we'll focus on just one ordinary activity: making things with children's building blocks. First we'll break this process into smaller parts, and then we'll see how each of them relates to all the other parts.

In doing this, we'll try to imitate how Galileo and Newton learned so much by studying the simplest kinds of pendulums and weights, mirrors and prisms. Our study of how to build with blocks will be like focusing a microscope on the simplest objects we can find, to open up a great and unexpected universe. It is the same reason why so many biologists today devote more attention to tiny germs and viruses than to magnificent lions and tigers. For me and a whole generation of students, the world of work with children's blocks has been the prism and the pendulum for studying intelligence.

In science, one can learn the most by studying what seems the least.

Wednesday, 5 September 2018

THE MIND AND THE BRAIN


It was never supposed [the poet Imlac said] that cogitation is
inherent in matter, or that every particle is a thinking being. Yet if
any part of matter be devoid of thought, what part can we suppose
to think? Matter can differ from matter only in form, bulk,
density, motion and direction of motion: to which of these,
however varied or combined, can consciousness be annexed? To be
round or square, to be solid or fluid, to be great or little, to be
moved slowly or swiftly one way or another, are modes of material
existence, all equally alien from the nature of cogitation. If matter
be once without thought, it can only be made to think by some new
modification, but all the modifications which it can admit are
equally unconnected with cogitative powers.
                                                                                                   -Samuel Johnson
How could solid-seeming brains support such ghostly things as thoughts? This question troubled many thinkers of the past. The world of thoughts and the world of things appeared to be too far apart to interact in any way. So long as thoughts seemed so utterly different from everything else, there seemed to be no place to start.

A few centuries ago it seemed equally impossible to explain Life, because living things appeared to be so different from anything else. Plants seemed to grow from nothing. Animals could move and learn. Both could reproduce themselves, while nothing else could do such things. But then that awesome gap began to close. Every living thing was found to be composed of smaller cells, and cells turned out to be composed of complex but comprehensible chemicals.

Soon it was found that plants did not create any substance at all but simply extracted most of their material from gases in the air. Mysteriously pulsing hearts turned out to be no more than mechanical pumps, composed of networks of muscle cells. But it was not until the present century that John von Neumann showed theoretically how cell-machines could reproduce while, almost independently, James Watson and Francis Crick discovered how each cell actually makes copies of its own hereditary code. No longer does an educated person have to seek any special, vital force to animate each living thing.

Similarly, a century ago, we had essentially no way to start to explain how thinking works. Then psychologists like Sigmund Freud and Jean Piaget produced their theories about child development. Somewhat later, on the mechanical side, mathematicians like Kurt Gödel and Alan Turing began to reveal the hitherto unknown range of what machines could be made to do. These two streams of thought began to merge only in the 1940s, when Warren McCulloch and Walter Pitts began to show how machines might be made to see, reason, and remember.

Research in the modern science of Artificial Intelligence started only in the 1950's, stimulated by the invention of modern computers. This inspired a flood of new ideas about how machines could do what only minds had done previously.

Most people still believe that no machine could ever be conscious, or feel ambition, jealousy, or humor, or have any other mental life-experience. To be sure, we are still far from being able to create machines that do all the things people do. But this only means that we need better theories about how thinking works. This series of articles will show how the tiny machines that we'll call "agents of the mind" could be the long-sought "particles" that those theories need.