Thursday, 14 November 2019

Analytics That Help Real-Time Student Learning

Measuring student achievement at every moment along the academic journey

How Do You Know What Students Are Really Learning?
When presenting to a large number of students, how can an instructor identify who is struggling to understand the content? When seeking to improve teaching, how can instructors identify where and how to focus their efforts? When looking to increase student retention, how can a higher education institution find the most relevant data and apply it for the greatest impact?

These questions are receiving more attention as colleges and universities look to improve student performance and teaching quality through better use of learning technologies and data analytics. It all starts in the classroom, where limited instructional support tools have made it difficult to obtain real-time, actionable insights into student learning. Student information systems, learning management systems and other sources of student data don’t track progress or roadblocks in the learning process. Without the right solutions, instructors can’t easily track the factors that contribute to a student’s learning progress, such as attending class, participating in group discussions and asking questions. And even when these tools are available, they may provide information only after the class ends. Knowing at every moment which students are on track — and, more importantly, which students are struggling — is vital for improving student retention.

Useful Data In and Out of Class
What instructors need to improve student learning is real-time, easy-to-use data that is available throughout the class session. With the right technologies, the instructor can view learning analytics, ask discussion questions, and generate quizzes and polls that gauge student understanding. By seeing what students need now, the instructor can adjust teaching immediately, when it can have the highest impact.
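To make this concrete, here is a minimal sketch, in Python, of the kind of in-the-moment check such tools perform; the function name, response format and 70% threshold are illustrative assumptions, not any vendor's API. It tallies responses to a single poll question and flags the topic for immediate re-teaching when too few students answer correctly.

from collections import Counter

def poll_summary(responses, correct_option, flag_below=0.7):
    """Summarize one in-class poll question.

    responses: option labels submitted by students, e.g. ["A", "C", ...]
    correct_option: the option the instructor marked as correct
    flag_below: share of correct answers below which the topic is flagged
    """
    counts = Counter(responses)
    total = sum(counts.values())
    correct_share = counts[correct_option] / total if total else 0.0
    return {
        "distribution": dict(counts),
        "correct_share": round(correct_share, 2),
        "reteach": correct_share < flag_below,
    }

# Example: 40 students answer a concept-check question mid-lecture.
responses = ["A"] * 22 + ["B"] * 10 + ["C"] * 8
print(poll_summary(responses, correct_option="A"))
# {'distribution': {'A': 22, 'B': 10, 'C': 8}, 'correct_share': 0.55, 'reteach': True}

A result like this tells the instructor, while the class is still in the room, that barely half the students have grasped the concept and that a second explanation is worth the time.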

Of course, a student’s learning doesn’t happen only in the classroom, and analytics shouldn’t stop there. A holistic view of a student’s progress becomes possible when an instructor uses a video capture system to record lectures and learning modules for student review outside of class time. Because these videos are stored in the cloud for web access, it’s easy for the system to collect objective data on how students use these resources. For example, repeated views of a particular section in a lecture recording may indicate a topic that needs more explanation. Students, in turn, benefit from personalized learning tools that allow them to customize their own study guides, ask questions anonymously and review measurements of their activities.
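As an illustration of how such a signal might be computed (the interval format and one-minute bucket size are assumptions for the sketch, not a product feature), the following Python function buckets playback logs into fixed-length segments and surfaces the most re-watched ones:

def rewatch_hotspots(view_events, segment_seconds=60, top_n=3):
    """Rank fixed-length segments of a recording by how often they were played.

    view_events: (start_sec, end_sec) intervals that students actually played.
    Returns the top_n segments as (segment_start_sec, play_count) pairs.
    """
    counts = {}
    for start, end in view_events:
        for seg in range(int(start // segment_seconds), int(end // segment_seconds) + 1):
            counts[seg] = counts.get(seg, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(seg * segment_seconds, n) for seg, n in ranked[:top_n]]

# Example: three students separately replay the passage around minute 12.
events = [(700, 760), (705, 770), (710, 765), (0, 60)]
print(rewatch_hotspots(events))  # [(660, 3), (720, 3), (0, 1)]

Segments that accumulate plays far above the average point the instructor at material that may deserve another pass in class.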

Guiding Improvements Across the Complete Academic Experience
When used correctly, active learning solutions, lecture capture and other tools, together with the data they generate, deliver benefits to instructors, students and the institution.

For Course Instructors
Learning analytics provide in-depth guidance to improve teaching practices in multiple ways.
  • Real-time insights about student learning. In-the-moment analytics show an instructor what adjustments are needed to meet the learning needs of the class as a whole and to deliver targeted help to struggling students.
  • Customizable measurements. Instructors gain more useful information when data can be tailored to focus on what matters for their teaching — whether it be test scores, classroom participation, completion of online assignments or other factors (a minimal sketch of such a composite measure follows this list).
  • Continuous feedback. Data about classroom and online activity supplements the feedback an instructor obtains from in-class polls, questions and discussions, helping to keep course materials and teaching techniques fresh and relevant.
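One way to picture a customizable measurement is as a weighted combination of whatever normalized signals an instructor cares about. A minimal Python sketch, where the signal names and weights are hypothetical choices rather than a prescribed scheme:

def engagement_score(signals, weights):
    """Combine normalized signals (each in 0..1) into one weighted score.

    Instructors choose the weights to match what matters in their course;
    a signal missing for a student simply contributes nothing.
    """
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return score / sum(weights.values())

# One instructor weights participation heavily; another might favor quizzes.
weights = {"quiz_average": 0.4, "class_participation": 0.4, "assignments_done": 0.2}
student = {"quiz_average": 0.82, "class_participation": 0.55, "assignments_done": 1.0}
print(round(engagement_score(student, weights), 2))  # 0.75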

For Students
Analytics help students become more aware of their learning needs and encourage improvement in academic performance.
  • More effective learning support. With analytics, students can monitor their progress, identify when they need support from teaching staff or advisors, better prepare for exams and understand how to increase their academic achievement.
  • More learning success. Comprehensive, easily available learning data guides students throughout their college journey. Early alerts about lagging performance allow instructors and advisors to help students complete a course and stay on track for graduation (a simple alert rule is sketched after this list).
  • More investment value. With more visibility into their own progress, students can take action to improve their studies and ensure they are getting the most value from their education investment.
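An early-alert rule does not have to be elaborate to be useful. The sketch below (thresholds and data shapes are assumptions for illustration) flags any student whose average over their most recent graded items falls below a floor:

def early_alerts(scores_by_student, floor=0.6, window=3):
    """Flag students whose recent average drops below a floor.

    scores_by_student: {student_id: [score, ...]} with scores in 0..1;
    only the last `window` scores are considered, so older grades fade out.
    """
    flagged = []
    for student, scores in scores_by_student.items():
        recent = scores[-window:]
        if recent and sum(recent) / len(recent) < floor:
            flagged.append(student)
    return flagged

grades = {"s001": [0.9, 0.85, 0.8], "s002": [0.7, 0.5, 0.45], "s003": [0.55]}
print(early_alerts(grades))  # ['s002', 's003'] -- candidates for outreach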

For the Institution
Colleges and universities that combine active learning solutions with data analytics will gain several academic advantages.
  • Improved teaching and learning. Better information about what’s working for instruction, both in and out of the classroom, means the institution can continuously improve academic quality. Course content and teaching techniques can be adapted more readily to changes in student needs, instructional technologies and formats for course delivery.
  • Increased student retention. The formula is simple: happy students are students who stay. Analytics identify the early interventions that motivate students to stay in challenging courses. When students are successful in one class, they are more likely to be successful in others — and more satisfied with their overall college experience.
  • Substantiated value and differentiation. It’s hard for students and parents to accept rising tuition without the assurance of a good learning experience and a clear path to on-time graduation. Demonstrating the institution’s commitment to new pedagogical approaches is a key factor in substantiating the educational value delivered to students. Adopting new learning technologies and providing access to student learning indicators not only yields a high return on investment in the form of better learning outcomes, but also differentiates the college or university when recruiting students.
  • Integration with strategic initiatives. Analytics from active learning solutions can be integrated with data from learning management, student information and other systems to create a full view of student performance levels and needs. This integrated perspective also enables the institution to better target strategic initiatives for improving academics and student services.
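In practice, this integration often comes down to joining extracts from each system on a shared student identifier. A minimal sketch using pandas, with hypothetical column names standing in for real exports:

import pandas as pd

# Hypothetical extracts from an active learning tool, the LMS and the
# student information system, each keyed by the same student_id.
polls = pd.DataFrame({"student_id": [1, 2, 3], "poll_correct_share": [0.9, 0.4, 0.7]})
lms = pd.DataFrame({"student_id": [1, 2, 3], "logins_last_week": [5, 1, 3]})
sis = pd.DataFrame({"student_id": [1, 2, 3], "credits_attempted": [15, 12, 15]})

# One row per student across all three systems.
merged = polls.merge(lms, on="student_id").merge(sis, on="student_id")
print(merged)

Once the records sit in one table, the institution can ask questions no single system could answer, such as whether low in-class participation precedes a drop in LMS activity.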

Recommendations for Successfully Implementing Learning Analytics

Provide Real Evidence: Learning analytics offer instructors the opportunity to explore student behaviors and their impact on learning outcomes. Instructors may be willing to try new pedagogies and tools if shown robust evidence that the change improves learning and engagement.
Keep it Anonymous: For administrators, reporting data in an aggregated and anonymous form will keep the focus on high-level insights for improving overall teaching quality. This approach avoids the issues that could arise from a perceived punitive review of data for individual instructors and courses.
Instructor Support Groups: Starting and supporting a discussion group helps instructors learn how best to apply learning analytics within the context of their courses and instructional goals.
Incentives: Offering institutional incentives can motivate instructors to use analytics data in the classroom. For example, one educational organization offers a Learning Fellows program, in which instructors who adopt learning analytics are eligible for additional research funding. The program also gives participants time and opportunity to explore analytics, and conveys the institution’s interest in supporting pedagogical change and growth.
Access: Require that vendors make all data collected from students on campus available to the institution in a form it can blend with other data. This opens opportunities for research as well as entrepreneurial endeavors by instructors and students.

Launching Learning Analytics on Your Campus
There are many point applications available to collect data, but the majority of them do not offer comprehensive learning analytics to identify trends and support an institution’s key initiatives. An effective learning analytics platform for a school or university is created from three core technologies:
  • A lecture capture system and cloud-based learning tools that allow students to access course content before and after classroom sessions, and help instructors easily manage their recorded lectures, teaching videos and other materials.
  • Active learning solutions, like in-class polling and quizzes, for use by instructors and students in the classroom to improve teaching and learning immediately.
  • Analytics tools for data tracking, analysis and reporting of student progress.
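One way to picture how these three technologies feed a single analytics layer is through a shared event record that all of them emit. The schema below is a hypothetical sketch, not a standard:

from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class LearningEvent:
    """One record in a shared analytics store, whatever its source."""
    source: str        # "lecture_capture", "polling" or "lms"
    student_id: str
    event_type: str    # e.g. "video_view", "poll_answer", "assignment_submit"
    timestamp: datetime
    payload: Dict[str, Any] = field(default_factory=dict)

events = [
    LearningEvent("lecture_capture", "s001", "video_view",
                  datetime(2019, 11, 14, 20, 5), {"start_sec": 700, "end_sec": 760}),
    LearningEvent("polling", "s001", "poll_answer",
                  datetime(2019, 11, 14, 10, 15), {"question": "q3", "option": "A"}),
]
# The analytics tools (the third technology) then query this one store
# instead of three disconnected systems.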
I recommend that a learning analytics initiative start with a pilot project involving a small number of courses. The project should be small enough to be easily manageable, yet large enough to identify appropriate plans for broader implementation and future scalability. A key factor in the success of a pilot project is finding instructors who are willing to be early adopters and are excited about using the latest tools for teaching innovation. If you help these pilot instructors succeed, they will become evangelists who will encourage adoption by other instructors. Listening to instructors’ feedback will also reduce any perception that the institution is implementing technology for its own sake.

Schools and universities are becoming data-driven organizations and are learning how to collect, store and access all the data that’s available today. But analytics shouldn’t stop there. Institutions should keep looking for other measures that can give better insight into the performance of their academics and services.

Yet the focus of learning analytics should rightly remain at the critical point of interaction between instructor and student in the classroom. As one instructor put it: “I’m excited that by the second week of a course I can identify students who may be in trouble and take steps right away to keep them engaged. That outreach is good for the students, for my class as a whole and for the institution.”

Friday, 8 November 2019

AIED - Artificial Intelligence and/in Education

"Wenn wir an die Zukunft der Welt denken, so meinen wir immer den Ort, wo sie sein wird, wenn sie so weiter läuft, wie wir sie jetzt laufen sehen, und denken nicht, daß sie nicht gerade läuft, sondern in einer Kurve, und ihre Richtung sich konstant ändert."

"When we think of the world’s future, we always mean the destination it will reach if it keeps going in the direction we can see it going in now; it does not occur to us that its path is not a straight line but a curve, constantly changing direction."
                                                                            Wittgenstein (1980), pp. 3 / 3e


If, as some anthropologically-minded archæologists would claim, the present is the key to the past, then perhaps the future is the key to the present? In this paper I assume the converse: that the present and the past are keys to the future, in the case of research in the field of Artificial Intelligence and/in Education (henceforth abbreviated to "AIED").

Any view of what objectives a research field may achieve in the future must be based on a view of the nature of the field in question, up to the present day. I characterise the past, the present and the near future of AIED research in terms of a combination of different roles played by models of educational processes, namely: models as scientific tools, models as components of educational artefacts, and models as bases for design of educational artefacts. It should be noted that the views expressed here are not those of an objective historian of science, but rather of a researcher engaged in the field that is being discussed. In that case, description, prediction and prescription coincide to a certain extent.

One could say that there are basically three sorts of argumentative texts: those that argue (mostly) in favour of a particular view, those that argue (mostly) against one, and those that attempt to weigh pro and contra arguments in the balance (the conventional form of academic discourse). This text falls (mainly) into the first category, and so no claim to exhaustiveness is made in citing research that could constitute a rebuttal of the views argued for here. In the context of the special issue in which this text appears, I can only hope that some readers will be willing to supply counter-arguments and that a synthesis could emerge from any ensuing debate.

As with any field of scientific research, AIED involves elaborating theories and models with respect to a specific experimental field, in relation to the production of artefacts. What characterises a particular field is the nature of each of these elements and of the relations that are established between them: what types of theories are elaborated? what counts as a model? what is the experimental field studied? how close are the links between theories, models and artefacts? 

With respect to other research in the field of education, one of the specificities of AIED research lies in the different roles that models can play. A significant part of AIED research can be seen as the use of computers to model aspects of educational situations that themselves involve the use of computers as educational artefacts, some of which may incorporate computational models. By an educational situation I mean a situation that is designed in some way so that a specific form and content of learning will occur; by "educational process" I do not only mean the processes of learning and teaching, but also the larger scale processes by which social situations that are intended to enable teaching and learning to occur are designed.

There are thus three main roles for models of educational processes in AIED research, as follows:

1. Model as scientific tool. 
A model — computational or other — is used as a means for understanding and predicting some aspect of an educational situation. For example, a computational model is developed in order to understand how the "self-explanation" effect works (VanLehn, Jones & Chi, 1992). This is often termed cognitive modelling (or simulation), although, as I discuss below, the term "cognitive" can have several interpretations.

2. Model as component. 
A computational model, corresponding to some aspect of the teaching or learning process, is used as a component of an educational artefact. For example, a computational/cognitive model of student problem solving is integrated into a computer-based learning environment as a student model. This enables the system to adapt its tutorial interventions to the learner's knowledge and skills. Alternatively, the model-component can be developed on the basis of existing AI techniques, and refined by empirical evaluation.

3. Model as basis for design. 
A model of an educational process, with its attendant theory, forms the basis for the design of a computer tool for education. For example, a model of task-oriented dialogue forms the basis of the design and implementation of tools for computer-mediated communication between learners and teachers in a computer-supported collaborative learning environment (e.g. Baker & Lund, 1997). In this case, a computational model is not directly transposed into a system component.

Although researchers often attempt to establish a close relationship between 1 and 2 — e.g. cognitive-computational models of student problem-solving becoming student models in Intelligent Tutoring Systems (henceforth, "ITS(s)") — there is no necessary relation between the two, since it may be that the most effective functional component (in an engineering sense) of an educational artefact does not operate in a way that models human cognition.
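To make the "model as component" idea concrete: a classic student-model component maintains, per skill, the probability that the learner has mastered it, and updates that estimate after each observed answer. Below is a minimal Python sketch of Bayesian knowledge tracing, a widely used technique of this kind; the parameter values are illustrative, not drawn from any particular system:

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian knowledge tracing step for a single skill.

    p_known: prior probability that the student knows the skill.
    correct: whether the observed answer was right.
    Returns the updated probability of mastery after this observation.
    """
    if correct:
        evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        posterior = p_known * (1 - p_slip) / evidence
    else:
        evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
        posterior = p_known * p_slip / evidence
    # Allow for the chance of learning the skill on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
print(round(p, 2))  # ~0.94: the mastery estimate rises with correct answers

Note that nothing in this component claims to model the mechanisms of human cognition; it is simply an effective engineering device for adapting tutorial interventions, which is precisely the distinction at issue here.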

These three possibilities are not, of course, mutually exclusive: most often, a given AIED research programme contains elements of each, to a greater or lesser degree. For example, one part of an educational system may be based on study of students' conceptions, and other parts may be based on using existing computer science techniques. However, it is not always possible to do this in a way that simultaneously satisfies the requirements of each type of use of models, i.e. to produce a satisfactory scientific model that is an effective tutoring-system component and that leads to an artefact that is genuinely useful in education. I believe that all three of these possibilities are valid and useful, provided that they are pursued in specific ways that are coherent with the researcher’s goals.

Before moving on to a discussion of the future of AIED research in terms of these three roles of models, I need to say something about what a model is. Across different sciences, many different types of abstract constructions count as models — for example, descriptive, explanatory, analytic, qualitative, quantitative, symbolic, analogue, or other models. Without entering into an extended discussion in the philosophy of science, it is possible, and useful here, to identify a small number of quite general characteristics of models.

Firstly — and classically —, the function of a model is to predict the existence or future incidence of some set of phenomena, in a determinate experimental field. For example, models of stock exchange transactions should predict changes in financial indices; models of the weather should predict the weather tomorrow; a model of cooperative problem-solving should predict what forms of cooperation can exist (see below), and ideally what interactive learning mechanisms they trigger; a student model should predict the evolution of a student's knowledge states; and so on.

Secondly, a further and equally important function of a model is to enable elaboration or refinement of the theory on which it is based, by rendering explicit its commitments on the epistemological (what can be known, and how?) and ontological (what is claimed to exist?) planes. It is generally accepted that there should be a link between the epistemology and the ontology: one should not posit the existence of entities without saying something about how they can be known. Such a relation between model and theory can lead to the explanation of phenomena. A theory is not at all the same thing as a model: it consists of a set of quite general assumptions and laws — e.g. the views according to which human cognition is complex symbolic information processing, or that knowledge is a relation between societal subjects and the socially constituted material world — that are not themselves intended to be directly (in)validated (for that, the theory must engender a model). Theories are foundational elements of paradigms, along with shared problems and methods (Kuhn, 1962).

Thirdly, a model necessarily involves abstraction from phenomena, selection of objects and events, in its corresponding experimental field; it necessarily takes some phenomena into account but not others. It is not relevant to criticise a model as such by claiming that it does not take all phenomena into account, but one can criticise its degree of coverage of an experimental field. The modelling process itself involves complex matching processes during which objects and events are selected and structured so as to correspond to the model, within the constraints of its syntax. Tiberghien (1994) has termed this process one of establishing a meaning, or a semantics, for the model, in relation to its experimental field.

This is where artefacts enter the picture. All research fields necessarily comprise aspects that are more or less close to the production and/or use of artefacts, in the sense either of 'applications' of theories or models, of the use of artefacts or instruments as experimental tools (sometimes on a large scale), or of the study of artefacts themselves and their use, each of which can be a source of new research problems. Even highly theoretical work in mathematics, or descriptive work in botany, that is carried out as "pure research", may, perhaps decades later, find an unanticipated application via, for example, other domains such as physics or medical research. I do not believe that unidirectional 'application' exists: the relation between artefact, theory and model is always complex and multidirectional. Whilst it is clear that any field needs both theory and a close relation with the production of artefacts, it seems to me that one of the defining characteristics of AIED research is that it is closer to the theoretical end of the spectrum.

There is nothing intrinsically wrong in that: for example, physics has for a long time comprised both theoretical and experimental branches. On that analogy, AIED research would be theoretically-oriented educational science, or even "Learning Science", that adopts a modelling approach.

Despite this variety of roles and types of models, I think that AIED as a field nevertheless still largely operates with a somewhat restricted view of what models are, i.e. symbolic and computational information-processing models. Whilst this view has been important in defining the field up to the present, I do not think that it is fruitful or realistic as a unique ‘model’ for what the field currently is and will become. Other types of models of educational processes, that are neither necessarily cognitive (in the above sense) nor computational in nature, can, and I think and hope will, play an important role in AIED research.

I have sketched a personal and prospective view of AIED research that turns on three possible roles for models: as scientific tools, as components of computational educational artefacts, and as bases for design of such artefacts.

In terms of the first role, my view is that AIED research, over the past three decades, has already mapped out a vast space of phenomena to be studied. We do not need to extend the space of phenomena, but rather to extend the range of theoretical tools from those available in cognitive science, and to adopt a wider (yet stricter) notion of what is and what is not a model. Specifically, and in terms of how I defined models themselves, I claimed that there is no a priori reason why interesting models should not be developed that extend the notion of 'cognition' to embrace action and perception, as embedded in artefacts and social relations.

AIED research should and will, I think, open out to a greater extent than is currently the case, into cognitive science, considered in the widest sense of the term. The role of a model, as scientific tool, is to help us to explain, to develop theory, and to predict. As such, any model abstracts from reality. Failure to take a particular phenomenon into account does not invalidate a model, it just restricts its usefulness.

In terms of the second role — models as components — I claimed that individualising ITSs are not currently well adapted to existing educational practices, largely because, at a micro-level, of problems associated with failing to take teachers, and other social actors, into account. Either we must adapt the components and the artefacts, or else change educational systems; no doubt most researchers aim for some realistic combination of both. Depending on the culture concerned, there may be a greater or lesser difference between the timescales of institutional and technological change. I proposed that ITSs will, in the near future, be most appropriate for social situations that are less norm-based than most state education systems.

Within such educational situations, intelligent information search for learners using the Web, rather than intelligent explanation generation, will come to the forefront in the near future, depending on the type of learning task involved. Intelligent explanation generation, and help systems in general, may turn out to be more important for teachers rather than for learners, in, for example, distributed learning communities. Models as intelligent components of educational artefacts have, I think, an important role to play in the near future; it is simply that their uses may not be in the situations that AIED researchers originally thought.

Finally, once we remember that (of course) models are not, by their nature, necessarily computational, this opens up a wide range of possible ways in which theories and models can form the bases of design of educational artefacts. What is required is that the specific nature of the relations between theory, model and design of artefacts be made as explicit as possible, as legitimate objects of scientific discussion and as means of generalising findings towards redesign.

Personally, I believe that theories and models will find their most effective application in design of collaborative distributed educational technologies.

I conclude with some brief remarks on the unity and future of AIED research, as a field. Given all the possible evolutions of AIED research that I have sketched, isn't there a strong possibility that AIED could dissipate into educational research and/or that part of cognitive science that is concerned with learning and teaching? Perhaps, and after all, why not? But I do not think so, and for the following reasons. 

In terms of the particular view of AIED research I have outlined above, what makes a piece of research AIED research is, quite simply, that it has something innovative to say about all three of the possible roles of models, with greater or lesser emphasis being put on each. Concretely, this means that the research in question proposes a specific, explicit and coherent set of relations between: (1) a theory, (2) a model, (3) an experimental field of educational phenomena, (4) computational-educational artefacts, whose use is part of (3), and (5) an educational design process. It is not enough to propose a model of an educational phenomenon; the research must also describe how the model relates to theory, how it is relevant to the study or design of artefacts for teaching and learning, and how that design might proceed. This means that AIED research is very complex, and very difficult to carry out.

I think that those constraints will continue to be sufficient for distinguishing a specific field or area of research, whether it is called AIED or something else.