Saturday, 13 August 2016

Skill and Skill Learning - Machine Learning Perspective

Human skill is the ability to apply past knowledge and experience in performing a given task. Skill is gained incrementally through learning and practice. Acquiring, representing, modeling, and transferring human skill or knowledge has been a core objective for more than two decades in the fields of artificial intelligence, robotics, and intelligent control. The problem is not only important to the theory of machine intelligence but also essential in practice for developing intelligent robotic systems. Skill learning is challenging because we lack a suitable mathematical model of human skill. Consider skill as a mapping from stimuli onto responses: a human associates responses with stimuli, actions with scenarios, labels with patterns, and effects with causes. Once a human finds such a mapping, he has, intuitively, gained a skill. Therefore, if we treat the “stimuli” as input and the “responses” as output, the skill can be viewed as a control system. This “control system” has the following characteristics:
• It is nonlinear; that is, there is no linear relationship between the stimuli and responses.
• It is time-variant; that is, the skill depends upon environmental conditions that vary from time to time.
• It is non-deterministic; that is, the skill is inherently stochastic and can only be measured in a statistical sense. For example, even the most skillful artist cannot draw identical lines without the aid of a ruler.
• It is generalizable; that is, it can be generalized through a learning process.
• It is decomposable; that is, it can be decomposed into a number of low-level subsystems.

The challenge of skill learning arises not only from the inherent nature of skill described above, but also from the difficulty of understanding the learning process and of transferring human skill to robots. Consider the following:
• A human learns a skill through an incrementally improving process. It is difficult to describe exactly and quantitatively how information is processed and control actions are selected during such a process.
• A human possesses a variety of sensory organs such as eyes and ears, but a robot has limited sensors. This implies that not all human skills can be transferred to robots.
• For a robot, the environment and sensing are subject to noise and uncertainty.
These characteristics make it difficult to describe human skill with general mathematical models or traditional AI methods.

Skill learning has been studied in different disciplines of science and engineering, with different emphases and under different names. The idea of learning control is based on the observation that learning machines operating in a “playback control mode” repeat their motions over and over in cycles. The research on learning control has been reviewed: for a repeatable task operated over a fixed duration, the system input and response are stored on each trial, and the learning controller computes a new input in a way that guarantees the performance error will be reduced on the next trial. Under some assumptions, P-, PI-, and PD-type learning laws have been implemented. This approach is grounded in control theory, but the problem certainly extends beyond that domain. Given the characteristics discussed above, it is clearly insufficient to approach such a comprehensive problem from a control-theory point of view alone. The concept of task-level learning can be found in related studies.
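
To make the trial-to-trial update concrete, here is a minimal sketch of a P-type learning law, u_{k+1}(t) = u_k(t) + Kp·e_k(t), applied to a toy memoryless plant. The plant, gain, and trial count are illustrative assumptions, not details taken from the reviewed work.

```python
import numpy as np

def p_type_ilc(plant, u0, y_ref, gain=0.5, trials=20):
    """P-type iterative learning control: after each complete trial of a
    repeatable fixed-duration task, correct the stored input in proportion
    to the recorded tracking error, u_{k+1}(t) = u_k(t) + gain * e_k(t)."""
    u = u0.copy()
    for _ in range(trials):
        y = plant(u)           # run one trial and record the response
        e = y_ref - y          # tracking error over the fixed duration
        u = u + gain * e       # P-type learning law
    return u

# Toy memoryless "plant" standing in for one trial of the controlled system.
t = np.linspace(0.0, 1.0, 100)
y_ref = 0.5 * np.sin(2 * np.pi * t)                 # desired response
u_final = p_type_ilc(np.tanh, np.zeros_like(t), y_ref)
print("max tracking error:", np.abs(y_ref - np.tanh(u_final)).max())
```

With a modest gain, the stored input is refined so that the recorded response tracks the reference a little better on each trial, which is exactly the guarantee the learning controller aims for.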

The basic idea is that a given task can be viewed as an input/output system driven by an input vector and responding with an output vector. There is a mapping which maps task commands onto task performance. In order to select the appropriate commands to achieve a desired task performance, an inverse task mapping is needed. Task-level learning has been studied in great detail for "trajectory learning", which derives an optimal trajectory through learning, and it has been successful in some simple cases. For more complicated cases, which are the realistic ones in practice, the inverse task mapping is too difficult to obtain. Both learning control and task-level learning emphasize achieving a certain goal by practice, and pay no attention to modeling and learning the skill itself. From a different angle, a research group at MIT has been working on representing human skill. The pattern recognition method and the process dynamics model method were used to represent the control behavior of human experts in a deburring process. In the pattern recognition approach, an IF-THEN relationship, IF (signal pattern) THEN (control action), was used to represent human skill.

The human skill pattern model is non-parametric, and a large database is needed to characterize the task features. The idea of the process dynamics model method is to correlate human motion with the task process state, in order to find out how humans change their movements and tool-holding compliance in relation to the task process characteristics. The problem with this approach is that human skill cannot always be represented by an explicit process dynamics model; if there is no such model, or if the model is incorrect, the method is not feasible. Considerable research effort has also been directed toward learning control architectures using connectionist models, or neural networks. Neural network (NN) approaches are interesting because of their learning capacity. Most of the learning methods studied by connectionists are parameter estimation methods. In order to describe the input/output behavior of a dynamic system, an NN is trained using input/output data, on the assumption that the nonlinear static map generated by the NN can adequately represent the system behavior for certain applications. Although NNs have been successfully applied to various tasks, their behavior is difficult to analyze and interpret mathematically. The performance of an NN approach usually depends strongly on the architecture; however, it is hard to modify the architecture to improve the performance.
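
As a minimal illustration of that assumption, the sketch below fits a small multilayer perceptron to synthetic stimulus/response pairs standing in for recorded skill data. The data-generating function and network sizes are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stimulus/response pairs standing in for recorded skill data.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))                           # stimuli
y = np.sin(3.0 * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=500)   # responses

# A small MLP fit to the data as a static approximation of the nonlinear map.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)
print("fit quality (R^2 on training data):", round(net.score(X, y), 3))
```

The trained network is exactly the kind of nonlinear static map described above: useful as a black-box approximation, but hard to analyze or to improve by reasoning about its architecture.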

Another issue is real-time learning, i.e., dynamically updating the model to achieve the most likely performance. In a real-time setting, we need to compute the frequencies of occurrence of the new data and add them to the model; the procedure is the same as that used to cope with multiple independent sequences. In this study, we have shown the fundamental theory and method that are needed for real-time learning, along with preliminary experiments. However, various issues in real-time learning have not been discussed extensively. For example, what happens if the measured data fed into the learning process represents poor skill, i.e., unskilled performance? Using the current method, the model will be updated to best match the performance of the operator, not to best represent good skill. This is because we have a criterion for judging the model of the skill, but no criterion for judging the skill itself. In other words, it is possible to become more unskilled through real-time learning. This is a common problem in other recognition fields such as speech recognition. One way to minimize the problem is to ensure that the data fed in always represents good performance, but this again requires a criterion describing how good the skill is. We will look at this issue in the future.
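
The bookkeeping this implies can be sketched for a discrete-observation model as follows: keep frequency counts rather than normalized probabilities, so that a newly measured sequence is folded in exactly as one more independent training sequence. A full HMM re-estimation would accumulate expected counts from the forward-backward pass; the decoded state sequences assumed here are a simplification.

```python
import numpy as np

class CountingSkillModel:
    """Frequency-count bookkeeping for a discrete-observation skill model.
    New measurements update the counts, and probabilities are recovered by
    re-normalizing, the same procedure used for multiple independent
    training sequences."""
    def __init__(self, n_states, n_symbols):
        self.trans = np.ones((n_states, n_states))  # Laplace-smoothed counts
        self.emit = np.ones((n_states, n_symbols))

    def add_sequence(self, states, symbols):
        # states/symbols: aligned index sequences from one new measured run
        for s, s_next in zip(states[:-1], states[1:]):
            self.trans[s, s_next] += 1
        for s, o in zip(states, symbols):
            self.emit[s, o] += 1

    def transition_matrix(self):
        return self.trans / self.trans.sum(axis=1, keepdims=True)

m = CountingSkillModel(n_states=2, n_symbols=3)
m.add_sequence(states=[0, 0, 1, 1], symbols=[0, 1, 2, 2])  # one new run
print(m.transition_matrix())
```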

In this article I presented a method for human skill learning using a hidden Markov model (HMM). The HMM is a powerful parametric model that is well suited to characterizing the two stochastic processes involved in skill learning: the measurable action process and the immeasurable mental states. Based on the “most likely performance” criterion, we can select the best action sequence from all previously measured action data by modeling the skill as an HMM. This selection process can be updated in real time by feeding in new action data and updating the HMM, so that the system learns through the selection process itself.
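
A minimal sketch of this pipeline, using the hmmlearn library (an assumed dependency, not one named in the original work) and synthetic demonstrations in place of measured action data:

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency: pip install hmmlearn

# Synthetic demonstrations: each row is one measured action sample
# (e.g., positions over time); five runs of the same task.
rng = np.random.default_rng(1)
demos = [rng.normal(size=(80, 3)).cumsum(axis=0) for _ in range(5)]
X = np.concatenate(demos)
lengths = [len(d) for d in demos]

# Hidden states stand in for the unobservable mental states; Gaussian
# emissions stand in for the measurable action process.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(X, lengths)

# "Most likely performance" criterion: keep the run the trained model
# assigns the highest per-sample log-likelihood.
scores = [model.score(d) / len(d) for d in demos]
print("selected demonstration:", int(np.argmax(scores)))
```

Refitting the model as new demonstrations arrive, and re-ranking the stored runs, gives the real-time selection loop described above.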

The method provides a feasible way to abstract human skill into a parametric model that is easily updated with new measurements. Beyond tele-robotics, it should prove useful in various applications in the education space, such as human action recognition in man-machine interfaces, coordination in anthropomorphic master robot control, feedback learning in systems with uncertainty and time-varying dynamics, and pilot skill learning for unmanned helicopters. By selecting different units for the measured data in different problems, the basic idea is applicable to a variety of skill learning problems.



Learning Trajectories

Wood, Bruner, and Ross (1976) and Bruner (1986) developed the concept of leading children’s learning forward through “scaffolding”. This involves the teacher providing a pedagogical trajectory to support children’s movement into new territories. In articulating Bruner’s notion of guided participation, Rogoff (1991) argued that the teacher’s main role is to “build bridges from children’s current understanding to reach new understanding through processes inherent in communication” (p. 351). Later, Bruner (1996) drew on Vygotsky’s notion of the “zone of proximal development” (Vygotsky, 1978, p. 86) when he further defined scaffolding as a logical structuring of ideas to be understood in an order that leads children to develop further and faster than they would on their own. A variety of images for teachers’ roles in scaffolding learning have been presented. 

In describing quality teaching, Wood (1991, p. 109) used the term “leading by following”, noting that the most effective scaffolding draws on the interests and understandings of the child. Cobb and McClain (1999) described an instructional sequence that follows a conjectured learning trajectory that “culminates with the mathematical ideas that constitute our overall instructional intent” (p. 24). Hiebert et al. (1997) used the term “residue” to describe the knowledge that children gain from teaching that may be used as a basis for further planning of sequences of tasks aimed at the development of further particular residues over time. Scardamalia, Bereiter, McLean, Swallow, and Woodruff (1989) portrayed learning trajectories as social phenomena, with teachers employing scaffolding to create more general pathways of potential development of mathematical concepts and procedures. Lerman (1998) discussed the teachers’ roles in setting up loci of development—social interactions with mutual appropriation by teachers and students. Simon (1995) demonstrated how the continually changing knowledge of the teacher creates change in expectations of how students might learn a specific idea. 

Simon (1995) described a hypothetical learning trajectory as made up of three components: the learning goal that determines the desired direction of teaching and learning, the activities to be undertaken by the teacher and students, and a hypothetical cognitive process, “a prediction of how the students’ thinking and understanding will evolve in the context of the learning activities” (p. 136). In his words: A hypothetical learning trajectory provides the teacher with a rationale for choosing a particular instructional design; thus, I (as a teacher) make my design decisions based on my best guess of how learning might proceed. This can be seen in the thinking and planning that preceded my instructional interventions … as well as the spontaneous decisions that I make in response to students’ thinking. (pp. 135-136) Simon used the word “hypothetical” to suggest that all three parts of the trajectory are likely to be somewhat flexible, with teachers changing the learning goals and adapting aspects of planned activities in response to (a) their perceptions of students’ levels of understanding and (b) their on-going evaluations of students’ performance of classroom tasks. Thus actual learning trajectories cannot be known in advance.

In discussion of Simon’s paper, Steffe and D’Ambrosio (1995) described teachers’ working hypotheses of what students could learn as being determined by the teacher as she interprets the schemes and operations available to the student’s actions in solving different tasks in the context of interactive mathematical communication. The anticipation is based on the teacher’s knowledge of other students’ ways of operating, on the teacher’s knowledge of the particular mathematics of that student, and on results of the teacher’s interactions with that student. (p. 154) In this discussion, Steffe and D’Ambrosio raised an important question, one that is at the heart of this paper: “How does a teacher modify a task that fails to activate certain schemes?” (p. 155). This is a question that we return to below. Throughout these varied discussions about scaffolding of learning via the creation of specific learning trajectories, the general picture is one of a teacher planning to create a context in which the class will follow one learning trajectory.

The concepts of “remedial work” and “extension work”, as well as teachers’ everyday experience of some students being more successful than others, suggest that actual learning trajectories are likely to take different shapes for different students. “Ability” grouping, setting, or streaming presents a further model, with teachers aiming to lead groups or whole classes of students to different learning goals. However, the literature on the negative effects of such grouping in primary and lower secondary schools is extensive (see for example Boaler, 1997; Gamoran, 1992; Ireson, Hallam, Hack, Clark, & Plewis, 2002; Mousley, 1998; Zevenbergen, 2003).

Negative effects of differentiated learning expectations can include, for some students, lowered teacher expectations, self- and peer-expectations, self-concepts, opportunities for positive modelling and mentoring, and motivation. In fact, this solution to diversity has the potential to exacerbate disadvantage through self-fulfilling prophecy effects (Brophy, 1983). In our experience, teachers who use groups in this way are aware of these potential effects but also want to set achievable goals for all students. They realise that most classes have pupils with sufficiently divergent needs that any one task may not be appropriate for all. Clearly there is a need to research forms of pedagogy that may help teachers to adapt classroom tasks to the needs of the range of individual pupils in their classes. Thus it is important to research how teachers may modify tasks that fail to enable some students to meet specific learning goals.

 “Clearly … the trajectories followed by those who learn will be extremely diverse and may not be predictable” (Lave & Wenger, 1991) 
In choosing to focus on learning trajectories, we embrace a metaphor that, for all its appeal, implies that learning unfolds following a predictable, sequenced path. Everyone knows it is not that simple; researchers and educators alike acknowledge the complexity of learning. As Simon (1995) emphasized, learning trajectories are essentially provisional. We can think of them as the provisional creation of teachers who are deliberating about how to support students’ learning and we can think of them as the provisional creation of researchers attempting to understand students’ learning and to represent it in a way that is useful for teachers, curriculum designers, and test makers.

I firmly believe that a critical part of our mission as researchers is to produce something that is of use to the field and serves as a resource for teachers and curriculum designers to optimize student learning. No doubt this includes creating, testing, and refining empirically based representations of students’ learning for teachers to use in professional decision-making and, further, investigating ways to support teachers’ decision-making without stripping teachers of the agency needed to hypothesize learning trajectories for individual children as they teach. This focus would add a layer of complexity to our research on learning and invite us to think seriously about how to support teachers to incorporate knowledge of children’s learning into their purposeful decision-making about instruction.

Further, I suggest we consider, in the end, “Whose responsibility is it to construct learning trajectories?” (Steffe, 2004, p. 130). If we researchers can figure out how to supply teachers with knowledge frameworks and formative assessment tools to facilitate their work, teachers will be able to exercise this responsibility with increasing skill, professionalism, and effectiveness. Because of the growing popularity of learning trajectories in education circles, it is worth thinking hard about the role of learning trajectory representations in teaching, and in particular, whether a learning trajectory can exist meaningfully apart from the relationship between a teacher and a student at a specific time and place. Simon’s (1995) perspective on teaching and learning suggests not. As the field moves forward with research on learning trajectories and strives for coherence in learning across the grades, I would like to remain mindful of both the affordances and constraints this particular type of representation offers for teachers and students alike.


EXPANSIVE LEARNING AND ACTIVITY THEORY


Another social learning model, expounded in a rather profound, dialectical, and somewhat philosophical way, is Yrjö Engeström’s expansive learning theory (Engestrom, 1987). Viewing psychology as being “at the limits of cognitivism” (ch. 2, p. 1), Engestrom took upon himself the challenge to construct a “coherent theoretical [instrument] for grasping and bringing about processes where ‘circumstances are changed by men and the educator himself is educated’” (ch. 2, p. 8). Although the following summary of his theory is rather brief, a more detailed treatment can be found in Learning by Expanding (Engestrom, 1987), the publication in which the theory was first introduced, or in one of Engestrom’s more recent articles (such as Engestrom, 2000a; 2001; 2009; 2010).


It should be noted that Engestrom’s target was not merely a theory of learning but something much more comprehensive, i.e., “a viable root model of human activity” (Engestrom, 1987, ch. 2, p. 8). To guide him toward this objective, he set for himself some rather stringent initial criteria: (a) “activity must be pictured in its simplest, genetically original structural form, as the smallest unit that still preserves the essential unity and quality behind any complex activity” (ch. 2, p. 8); (b) “activity must be analyzable in its dynamics and transformations [and] in its evolution and historical change…no static or eternal models” (ch. 2, p. 8); (c) “activity must be analyzable as a contextual or ecological phenomenon [concentrating] on systemic relations between the individual and the outside world” (ch. 2, p. 8); and (d) “activity must be analyzable as culturally mediated phenomenon [sic]…no dyadic organism-environment models will suffice [he insisted upon a triadic structure of human activity]” (ch. 2, p. 8).
To find his theoretical starting point, Engestrom identified three previous lines of research that met his initial requirements (Engestrom, 1987, ch. 2, p. 9):
  1. Theorizing on signs – consisting of research beginning with the triadic relationship of object, mental interpretant, and sign by C. S. Peirce, one of the founders of semiotics, down through Karl Popper, who posited a conception of three worlds (physical, mental states, and contents of thought)
  2. The genesis of intersubjectivity – the continuity studies of infant communication and language development, founded by G. H. Mead
  3. The cultural-historical school of psychology – consisting of ideas that began with Vygotsky and reached maturity with Leont’ev
The first line of research, theorizing on signs, he rejected as a model because it “narrows human activity down to individual intellectual understanding [and provides] little cues for grasping how material culture is created in joint activity” (Engestrom, 1987, ch. 2, p. 15). The second—though it includes the social, interactive, symbol-mediated construction of reality—he also rejected, because its construction “is still conceived of as construction-for-the-mind, not as practical material construction” (ch. 2, p. 22). The third, he accepted as a starting point, because it “gives birth to the concept of activity based on material production, mediated by technical and psychological tools as well as other human beings” (ch. 2, p. 32). On this premise he erected what he referred to as the third generation of cultural-historical activity theory, starting with Vygotsky’s “famous triangular model in which the conditioned direct connection between stimulus (S) and response (R) was transcended by ‘a complex mediated act’…commonly expressed as the triad of a subject, object, and mediating artifact” (Engestrom, 2001, p. 134). This common expression of Vygotsky’s model is referred to by Engestrom as the first generation of activity theory (Engestrom, 1999, pp. 1-3; 2001, p. 134).
Engestrom considered the insertion of mediating cultural artifacts into human action to be revolutionary, providing a way to bind the individual to his culture and society to the individual:
The insertion of cultural artifacts into human actions was revolutionary in that the basic unit of analysis now overcame the split between the Cartesian individual and the untouchable societal structure. The individual could no longer be understood without his or her cultural means; and the society could no longer be understood without the agency of individuals who use and produce artifacts. This meant that objects ceased to be just raw material for the formation of logical operations in the subject as they were for Piaget. Objects became the cultural entities and the object-orientedness of action became the key to understanding human psyche….The concept of activity took the paradigm a huge step forward in that it turned the focus on complex interrelations between the individual subject and his or her community. (Engestrom, 2001, p. 134)
For Engestrom there was still one important limitation of Vygotsky’s model; it focused on the individual. Engestrom overcame this by drawing on Leont’ev’s famous example of the primeval collective hunt, which “showed how historically evolving division of labor has brought about the crucial differentiation between an individual action and a collective activity” (Engestrom, 1999, “Three Generations of Activity Theory”, para. 3). Beginning with a “general mode of biological adaptation as the animal form of activity may be depicted” (Engestrom, 1987, ch. 2, p. 33), Engestrom applied Leont’ev’s ideas to complete a “derivation…[by] genetic analysis” (ch. 2, p. 33) and demonstrate evolutionary “ruptures” in the three sides of the biological adaptation triangle. Individual survival is ruptured by the emerging use of tools. Social life is ruptured by collective traditions, rituals, and rules. And collective survival is ruptured by division of labor.
Through further derivations in line with Leont’ev’s differentiation of the individual action and the collective activity, he took “what used to be separate ruptures or emerging mediators” (Engestrom, 1987, ch. 2, p. 35) and converted them to “unified determining factors” (ch. 2, p. 35), thus completing a graphical representation of what he referred to as the second generation of activity theory (1999, pp. 1-3; 2001, p. 134). This model accounted not only for individual actions, but for collective activity of a community.
Note that in the second generation model what used to be biological adaptive activity has been transformed into consumption and placed in subordinate relation to three dominant aspects of human activity: (a) production, (b) distribution, and (c) exchange (Engestrom, 1987, ch. 2, p. 36). Marx (Marx, 1973, p. 89 as cited in Engestrom, 1987) explained the relationship between these three dominant aspects of human activity and the individual aspect of consumption as follows:
Production creates the objects which correspond to the given needs; distribution divides them up according to social laws; exchange further parcels out the already divided shares in accord with individual needs; and finally, in consumption, the product steps outside this social movement and becomes a direct object and servant of individual need, and satisfies it in being consumed. Thus production appears to be the point of departure, consumption as the conclusion, distribution and exchange as the middle (…). (ch. 2, p. 36)
Two examples of how this model might be instantiated in the representation and analysis of a specific activity are given in Engestrom (2000a, p. 962). The first example represents the subject, in this case a physician, engaged in the activity of reviewing patient records prior to meeting with the patient. The object of this activity is the patient records. The expected outcome is an understanding of the patient’s history and the purpose of the visit. Notice that the interaction between the subject (the physician) and the object (the patient records) is mediated by the physician’s medical knowledge, a tool which he leverages to interpret the records and formulate an understanding of the patient’s general health condition. Continuing the scenario, the second example represents the activity of examining and diagnosing the patient, in which the patient becomes the object and his preliminary assessment the intended outcome.
Building on the second generation triangular model of human activity, Engestrom described “the minimal model for the third generation of activity theory” (Engestrom, 2001, p. 136) as requiring at least two interacting activity systems. An example of this model can be found in Engestrom (2010, p. 6). This example depicts the activity of a home healthcare worker engaged in completing a list of routine tasks while visiting the client’s home, and this in relation to the client’s activity of “maintaining a meaningful and dignified life at home while struggling with threats such as loneliness, loss of physical mobility and the ability to act independently, and memory problems commonly known as dementia” (p. 6). This model of two activity systems in relation to one another is the minimal model for third generation activity theory.
Engestrom (2001) summarized five principles of his revised activity theory as follows:
1. Prime unit of analysis: “A collective, artifact-mediated and object-oriented activity system, seen in its network relations to other activity systems, is taken as the prime unit of analysis” (p. 136).
2. Multi-voicedness: “An activity system is always a community of multiple points of view, traditions and interests” (p. 136).
3. Historicity: “Activity systems take shape and get transformed over lengthy periods of time. Their problems and potentials can only be understood against their own history” (p. 136).
4. Contradictions: Contradictions play a central role as “sources of change and development…[They] are historically accumulating structural tensions within and between activity systems” (p. 137).
5. Possibility of expansive transformations: “An expansive transformation is accomplished when the object and motive of the activity are reconceptualized to embrace a radically wider horizon of possibilities than in the previous mode of activity” (p. 137).

Expansive learning theory is different from all the other theories previously reviewed in three significant ways. First, it is concerned with the learning of new forms of activity as they are created, rather than with the mastery of putatively stable, well-defined, existing knowledge and skill:
Standard theories of learning are focused on processes where a subject (traditionally an individual, more recently possibly an organization) acquires some identifiable knowledge or skills in such a way that a corresponding, relatively lasting change in the behavior of the subject may be observed. It is a self-evident presupposition that the knowledge or skill to be acquired is itself stable and reasonably well defined. There is a competent ‘teacher’ who knows what is to be learned.
The problem is that much of the most intriguing kinds of learning in work organizations violates this presupposition. People and organizations are all the time learning something that is not stable, not even defined or understood ahead of time. In important transformations of our personal lives and organizational practices, we must learn new forms of activity which are not yet there. They are literally learned as they are being created. There is no competent teacher. Standard learning theories have little to offer if one wants to understand these processes. (Engestrom, 2001, pp. 137-138)
Engestrom voiced a rather strong view against a notion of learning “limited to processes of acquisition of skills, knowledge and behaviors, already mastered and codified by educational institutions” (Engestrom, 2000b, p. 526), arguing that such a perspective makes learning irrelevant to the discovery and implementation of novel solutions:
If our notion of learning is limited to processes of acquisition of skills, knowledge, and behaviors already mastered and codified by educational institutions and other accepted representatives of cultural heritage, then finding and implementing future-oriented novel solutions to pressing societal problems has little to do with learning.
I have proposed that a historically new form of learning, namely expansive learning of cultural patterns of activity that are not yet there, is emerging and needs to be understood (Engestrom, 1987). (p. 526)
He further argued that the traditional view of learning is a perpetuated relic of the enlightenment era, and called for a shift of focus toward emergent learning processes from below as a necessary alternative in order for education to maintain relevance:
Give people facts, open their minds, and eventually they will realize what the world should become….I would call this an enlightenment view of learning. Learning is a fairly simple matter of acquiring, accepting, and putting together deeper, more valid facts about the world. Of course, this tacitly presupposes that there are teachers around who already know the facts and the needed course of development. Inner contradictions, self-movement, and agency from below are all but excluded. It is a paternalistic conception of learning that assumes a fixed, Olympian point of view high above, where the truth is plain to see. (Engestrom, 2000b, p. 530)

If education is to remain relevant, educators need to study carefully these changes and build on their internal contradictions and emergent learning processes from below, rather than continue preaching the right answers from above. (Engestrom, 2000b, pp. 533-534)
Second, expansive learning theory is concerned with collective transformation, rather than individual learning. Although changes in the collective are initiated by individuals within the community, the transformation itself is a change in the collective system:
The object of expansive learning activity is the entire activity system in which the learners are engaged. Expansive learning activity produces culturally new patterns of activity. Expansive learning at work produces new forms of work activity. (Engestrom, 2001, p. 139)
Although change originates with individual participants in the collective, the effective change takes place in collective activity system as a whole:
Human collective activity systems move through relatively long cycles of qualitative transformations. As the inner contradictions of an activity system are aggravated, some individual participants begin to question and deviate from its established norms. In some cases, this escalates into collaborative envisioning and a deliberate collective change effort from below. (Engestrom, 2000b, p. 526)
In fact, in his original presentation of expansive learning theory, Engestrom reformulated Vygotsky’s conception of the zone of proximal development (Engestrom, 1987, ch. 3, p. 27) in terms of collective activities. Although he indicated at the time that the reformulation was provisional, he still uses the same definition:
Vygotsky’s concept of zone of proximal development is another important root of the theory of expansive learning. Vygotsky (1978, p. 86) defined the zone as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.” In Learning by Expanding, Vygotsky’s individually oriented concept was redefined to deal with learning and development at the level of collective activities:

“It is the distance between the present everyday actions of the individuals and the historically new form of the societal activity that can be collectively generated as a solution to the double bind potentially embedded in the everyday actions.” (Engestrom, 1987, p. 174)

In effect, the zone of proximal development was redefined as the space for expansive transition from actions to activity (Engestrom, 2000). (Engestrom, 2010, p. 4)
Third, expansive learning theory focuses on horizontal development rather than vertical. Although it acknowledges a vertical dimension, it emphasizes a focus on the horizontal dimension:
We habitually tend to depict learning and development as vertical processes, aimed at elevating humans upward, to higher levels of competence. Rather than merely denounce this view as an outdated relic of enlightenment, I suggest that we focus on constructing a complementary perspective, namely that of horizontal or sideways learning and development. Both dimensions are involved in expansion. (Engestrom, 2000b, p. 533)
The impetus for change in expansive learning theory is attributed to inner contradictions from within an activity or between two activities:
Contradictions are not just inevitable features of activity. They are “the principle of its self-movement and (…) the form in which the development is cast” (Ilyenkov 1977, 330). This means that new qualitative stages and forms of activity emerge as solutions to the contradictions of the preceding stage or form. This in turn takes place in the form of ‘invisible breakthroughs’. (Engestrom, 1987, ch. 2, p. 45)
Engestrom developed the concept of the contradiction by leveraging Bateson’s description of inner contradictions, which Bateson referred to as the double bind (Engestrom, 1987, ch. 3, p. 4). He, of course, reformulated Bateson’s individual dilemma in terms of a social one:
The type of development we are concerned with here—expansive generation of new activity structures—requires above all an instinctive or conscious mastery of double binds. Double bind may now be reformulated as a social, societally essential dilemma which cannot be resolved through separate individual actions alone—but in which joint co-operative actions can push a historically new form of activity into emergence. (Engestrom, 1987, ch. 3, p. 20).
Engestrom described four levels of contradictions which may appear in the human activity system (Engestrom, 1987, ch. 2, pp. 43-45):
Level 1: Primary inner contradiction (double nature) within each constituent component of the central activity.
Level 2: Secondary contradictions between the constituents of the central activity.
Level 3: Tertiary contradiction between the objective/motive of the dominant form of the central activity and the object/motive of a culturally more advanced form of the central activity.
Level 4: Quaternary contradictions between the central activity and its neighbor activities. (ch. 2, p. 44)
In concert with his redefined zone of proximal development, Engestrom identified the collective generation of solutions to the potential double bind (i.e., learning) as occurring in long cycles of qualitative transformations, driven by inner contradictions of the activity system that cause individual participants to question established norms:
Human collective activity systems move through relatively long cycles of qualitative transformations. As the inner contradictions of an activity system are aggravated, some individual participants begin to question and deviate from its established norms. In some cases, this escalates into collaborative envisioning and a deliberate collective change effort from below. (Engestrom, 2000b, p. 526)

As described in Engestrom (2001, p. 152), the seven steps in the cycle are (a) primary contradiction, (b) secondary contradiction, (c) modeling the new situation, (d) new model, (e) implementing the new model, (f) quaternary contradictions and realignment with neighbors, and (g) consolidating the new practice. Later, Engestrom (2010, p. 8) presented the same seven steps with simpler names that highlight the major activity at each step. The revised labels are (a) questioning, (b) analysis, (c) modeling the new solution, (d) examining and testing the new model, (e) implementing the new model, (f) reflecting on the process, and (g) consolidating and generalizing the new practice. Repeated iterations of these seven steps form an “expansive cycle or spiral” (p. 7) and facilitate the ascension of the activity patterns from the abstract to the concrete. This ascension is characterized by the following description:
This is a method of grasping the essence of an object by tracing and reproducing theoretically the logic of its development, of its historical formation through the emergence and resolution of its inner contradictions. A new theoretical idea or concept is initially produced in the form of an abstract, simple explanatory relationship, a ‘germ cell’. This initial abstraction is step-by-step enriched and transformed into a concrete system of multiple, constantly developing manifestations. In learning activity, the initial simple idea is transformed into a complex object, into a new form of practice. (Engestrom, 2010, p. 5)
Through the process of the cycle, the object and motive of the activity are reconceptualized to allow for greater possibility and flexibility than the previous pattern of activity:
An expansive transformation is accomplished when the object and motive of the activity are reconceptualized to embrace a radically wider horizon of possibilities than in the previous mode of the activity. A full cycle of expansive transformation may be understood as a collective journey through the zone of proximal development of the activity. (Engestrom, 2000b, p. 526; Engestrom, 2001, p. 137)
The steps of the cyclical model are, of course, a heuristic device comprising an ideal sequence that Engestrom explains is likely never followed exactly:
The process of expansive learning should be understood as construction and resolution of successively evolving contradictions….The cycle of expansive learning is not a universal formula of phases or stages. In fact, one probably never finds a concrete collective learning process which would cleanly follow the ideal-typical model. The model is a heuristic conceptual device derived from the logic of ascending from the abstract to the concrete. (Engestrom, 2010, p. 7)


Saturday, 16 July 2016

Pokémon Go is the future of education…

Five reasons why Pokémon Go is the future of education…
1. It’s popular.
2. It’s fun.
3. It’s on phones and kids like their phones, so education of the future will have to be on phones.
4. It utilizes augmented reality, which is better than reality because as Jane McGonigal tells us, “reality is broken,” so if we can fix reality by augmenting it, we should.
5. Disruptive technology is coming for education, and if previous disruptive technologies such as MOOCs, adaptive software, Instagram, Uber, Snapchat, Twitter, badges, Candy Crush, the Kardashians, microcredentials, Comet Hale-Bopp, and so on haven’t managed to disrupt education, then surely Pokémon Go will because something has to eventually.
I downloaded and played Pokémon Go long enough to understand its appeal, after which I deleted the app because I could tell its presence on my phone was incompatible with my goal of finishing a pedagogy model this summer. Even though I'm too old to feel Pokémon nostalgia, I could still sense the game's potential for becoming all-consuming.
Even in record-setting heat I’ve seen kids roaming my neighborhood, provisioned with extra hydration and external chargers for their smart phones. When I jogged by the local (Pokémon) “gym” at 6:15 in the morning, three people were already there, engrossed in combat.
I do not mean to harsh the buzz of Pokémon Go fans. It looks legitimately fun, and the salutary benefits of getting people off their couches and into the world (even in this unbelievable heat) are all to the good. Downtown was lousy with players talking, coaching, socializing. If I weren't under this self-imposed deadline, I might've joined in.
The cooperative, public nature of the game is genuinely exciting. There are reports of shy people feeling emboldened to interact in the context of the game, and of people struggling with depression finding that the game motivates them to get out into the world. We should be interested in studying this power.
But I can also already hear the ed-tech machine grinding away trying to spin this phenomenon into disruption gold.
Because of this we should keep some things in mind.
I do believe the game has taken off because it shares something important with good education, emphasizing process over product: it is actively fun to play. But while the gameplay is cooperative, it is also a competition, one that carries all the usual pressures and incentives to prioritize “winning” over playing, product over process. I am reminded of the Tamagotchi craze of the '90s: hearing a colleague's bag chirping during a meeting and, after she had satisfied the device's needs, her explanation that her daughter's school had banned the toy, so she had promised to keep the virtual creature alive during the day.
There is probably some parent out there already, not enjoying the game in tandem with their child, but capturing Pokémon and leveling up on their kid's behalf.
We should also notice that Pokémon Go replicates some of the divides already present in education. Players of means, with more access to time, data, and money (for power-ups and lures), will do better than those without. There is already a Pokémon Go gap between those who pay for their upgrades and those who have to earn each level the old-fashioned way, by hurling virtual balls at virtual monsters.
Even after a week, I imagine there are players who are priced out of competing at the most coveted gyms, who are defeated by a system where others have access to more resources.
Others have raised concerns about the amount and kind of data that players are asked to relinquish for the opportunity to play the game.
Technology is a tool, and I hope and trust that some smart people are thinking about how the popularity of this particular tool can be translated into the educational realm.
But let’s not mistake the tool for the thing itself. Education is the thing. Too often in our rush for a technological solution to current problems, we redefine the thing into something technology can handle. Technology can’t handle the complicated, but meaningful stuff, so we flatten, we standardize.
This is why we have people working on software that can grade essays. Software will never be able to truly respond to writing as humans do, so we have to train the humans to write essays that satisfy the limits of the algorithm.
But writing that is read only by algorithms isn’t writing, so why are we messing around with that stuff?
Progress? Disruption?
Education doesn’t happen to students, but inside them. I can’t remember where I heard that, but it’s true.
We don't need education to look more like Pokémon Go just because it's popular. What makes Pokémon Go an enjoyable game may have nothing to do with making education compelling.
The magic (if you want to call it that) of Pokémon Go isn’t the technology, but what the technology unlocks inside the person using it. If Pokémon Go is meant to inform the work of educators, let’s focus on that, rather than the technological tool itself.

Saturday, 9 July 2016

Learning Analytics

Defining Learning Analytics 
“Learning analytics refers to the interpretation of a wide range of data produced by and gathered on behalf of students in order to assess academic progress, predict future performance, and spot potential issues. Data are collected from explicit student actions, such as completing assignments and taking exams, and from tacit actions, including online social interactions, extracurricular activities, posts on discussion forums, and other activities that are not directly assessed as part of the student’s educational progress. Analysis models that process and display the data assist faculty members and school personnel in interpretation. The goal of learning analytics is to enable teachers and schools to tailor educational opportunities to each student’s level of need and ability.” “Learning analytics need not simply focus on student performance. It might be used as well to assess curricula, programs, and institutions. It could contribute to existing assessment efforts on a campus, helping provide a deeper analysis, or it might be used to transform pedagogy in a more radical manner. It might also be used by students themselves, creating opportunities for holistic synthesis across both formal and informal learning activities.” 

Learning analytics is emerging as a defined area of research and application and is related to academic analytics, action analytics, and predictive analytics. Learning analytics emphasizes measurement and data collection as activities that institutions need to undertake and understand, and focuses on the analysis and reporting of the data. Unlike educational data mining, learning analytics does not generally address the development of new computational methods for data analysis but instead addresses the application of known methods and models to answer important questions that affect student learning and organizational learning systems.

The goal of learning analytics is to enable teachers and schools to tailor educational opportunities to each student’s level of need and ability. Unlike educational data mining, which emphasizes system-generated and automated responses to students, learning analytics enables humans to tailor responses, such as by adapting instructional content, intervening with at-risk students, and providing feedback. Learning analytics draws on a broader array of academic disciplines than educational data mining, incorporating concepts and techniques from information science and sociology, in addition to computer science, statistics, psychology, and the learning sciences. Unlike educational data mining, learning analytics generally does not emphasize reducing learning into components but instead seeks to understand entire systems and to support human decision making.

Technical methods used in learning analytics are varied and draw from those used in educational data mining. Additionally, learning analytics may employ: 
• Social network analysis (e.g., analysis of student-to-student and student-to-teacher relationships and interactions to identify disconnected students, influencers, etc.; a minimal sketch follows this list) and 
• Social or “attention” metadata to determine what a user is engaged with. As with educational data mining, providing a visual representation of analytics is critical to generate actionable analyses; information is often represented as “dashboards” that show data in an easily digestible form. 
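
As an illustration of the first bullet, the sketch below builds an interaction graph from hypothetical forum reply data with networkx and reads off isolated students and the most connected participant; all names and edges are invented.

```python
import networkx as nx

# Hypothetical forum interactions: (student who replied, student replied to).
interactions = [
    ("ana", "ben"), ("ben", "carla"), ("carla", "ana"),
    ("dev", "ben"), ("carla", "dev"),
]
enrolled = {"ana", "ben", "carla", "dev", "eli"}  # eli never posted

g = nx.Graph()
g.add_nodes_from(enrolled)
g.add_edges_from(interactions)

# Disconnected students: enrolled but isolated in the interaction graph.
print("disconnected:", list(nx.isolates(g)))

# Influencers: the highest-centrality participants.
centrality = nx.degree_centrality(g)
print("most connected:", max(centrality, key=centrality.get))
```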

A key application of learning analytics is monitoring and predicting students’ learning performance and spotting potential issues early, so that interventions can be provided for students at risk of failing a course or program of study. Several learning analytics models have been developed to identify student risk level in real time and so increase students’ likelihood of success. Educational institutions have shown increased interest in learning analytics as they face calls for more transparency and greater scrutiny of their student recruitment and retention practices. Data mining of student behavior in online courses has revealed differences between successful and unsuccessful students (as measured by final course grades) on such variables as level of participation in discussion boards, number of emails sent, and number of quizzes completed. Analytics based on these student behavior variables can be used in feedback loops to provide more fluid and flexible curricula and to support immediate course alterations (e.g., re-sequencing of examples, exercises, and self-assessments) based on analyses of real-time learning data.
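
A minimal sketch of such a model, assuming hypothetical per-student counts of the behavior variables just mentioned, could use a plain logistic regression to flag students whose predicted probability of passing falls below an intervention threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-student behavior counts from an online course:
# [discussion posts, emails sent, quizzes completed]; 1 = passed the course.
X = np.array([[12, 5, 8], [2, 1, 3], [9, 4, 7], [1, 0, 2],
              [15, 6, 9], [3, 2, 2], [8, 3, 6], [0, 1, 1]])
passed = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, passed)

# Early-warning usage: flag students whose predicted probability of
# passing falls below a chosen intervention threshold.
new_students = np.array([[1, 1, 2], [10, 4, 8]])
p_pass = model.predict_proba(new_students)[:, 1]
print("flag for intervention:", p_pass < 0.5)
```

A production early-warning system would add validation, fairness checks, and far richer features, but the feedback loop is the same: behavior data in, a risk estimate out, and a human decides on the intervention.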
In summary, learning analytics systems apply models to answer such questions as: 
• When are students ready to move on to the next topic? 
• When are students falling behind in a course? 
• When is a student at risk for not completing a course? 
• What grade is a student likely to get without intervention? 
• What is the best next course for a given student? 
• Should a student be referred to a counselor for help? 

Learning Analytics Applications 

Educational data mining and learning analytics research are beginning to answer increasingly complex questions about what a student knows and whether a student is engaged. For example, questions may concern what a short-term boost in performance in reading a word says about overall learning of that word, and whether gaze-tracking machinery can learn to detect student engagement. Researchers have experimented with new techniques for model building and also with new kinds of learning system data that have shown promise for predicting student outcomes. 

The application areas were discerned from the review of the published and gray literature and were used to frame the interviews with industry experts. These areas represent the broad categories in which data mining and analytics can be applied to online activity, especially as it relates to learning online. This is in contrast to the more general areas for big data use, such as health care, manufacturing, and retail. These application areas are 
(1) modeling of user knowledge, user behavior, and user experience; 
(2) user profiling; 
(3) modeling of key concepts in a domain and modeling a domain’s knowledge components; and 
(4) trend analysis. 

Another application area concerns how analytics are used to adapt or personalize the user’s experience. Each of these application areas draws on different sources of data and answers a different set of questions.

New technology start-ups founded on big data (e.g., Knewton, Desire2Learn) are optimistic about applying data mining and analytics—user and domain modeling and trend analysis—to adapt their online learning systems to offer users a personalized experience. Companies that “own” personal data (e.g., Yahoo!, Google, LinkedIn, Facebook) have supported open-source developments of big data software (e.g., Apache Foundation’s Hadoop) and encourage collective learning through public gatherings of developers to train them on the use of these tools (called hackdays or hackathons). The big data community is, in general, more tolerant of public trial-and-error efforts as they push data mining and analytics technology to maturity.

There are challenges in implementing data mining and learning analytics within K–20 settings. Experts pose a range of implementation considerations and potential barriers to adopting educational data mining and learning analytics, including technical challenges, institutional capacity, and legal and ethical issues. Successful application of educational data mining and learning analytics will not come without effort, cost, and a change in educational culture toward more frequent use of data in decision making. What is the gap between the big data applications in the commerce, social, and service sectors and K–20 education? Given that learning analytics practices have been applied primarily in higher education thus far, the time to full adoption may be longer in other educational settings, such as K–12 institutions.

Education institutions pioneering the use of data mining and learning analytics are starting to see a payoff in improved learning and student retention. As described above, student data can help educators both track academic progress and understand which instructional practices are effective, and students can examine their own assessment data to identify their strengths and weaknesses and set learning goals for themselves. The recommendations are that K–12 schools should have a clear strategy for developing a data-driven culture and a concentrated focus on building the infrastructure required to aggregate and visualize data trends in timely and meaningful ways, with privacy and ethical considerations built in from the beginning. The vision that data can be used by educators to drive instructional improvement and by students to help monitor their own learning is not new. However, the feasibility of implementing a data-driven approach to learning is greater now: students learning online generate more detailed learning micro-data; tools for data mining and analytics are newly available; there is more awareness of how these data and tools can be used for product improvement and in commercial applications; and there is growing evidence of their practical application and utility in K–12 and higher education. There is also substantial evidence of effectiveness in other areas, such as energy and health care.



Personalized Learning Scenarios

Online consumer experiences provide strong evidence that computer scientists are developing methods to exploit user activity data and adapt accordingly. Consider the experience a consumer has when using a movie app to choose a movie. Members can browse the app’s offerings by category (e.g., comedy) or search by a specific actor, director, or title. On choosing a movie, the member can see a brief description of it and compare its average rating by app users with that of other films in the same category. After watching a film, the member is asked to provide a simple rating of how much he or she enjoyed it. The next time the member returns to the app, his or her browsing, watching, and rating activity data are used as a basis for recommending more films. The more a person uses the app, the more the app learns about his or her preferences and the more accurate the predicted enjoyment becomes. But that is not all the data that are used. Because many other members are browsing, watching, and rating the same movies, the app’s recommendation algorithm is able to group members based on their activity data. Once members are matched, activities by some group members can be used to recommend movies to other group members. Such customization is not unique to a movie app, of course. Companies such as Amazon, Overstock, and Pandora keep track of users’ online activities and provide personalized recommendations in a similar way.
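
A toy version of the grouping idea is user-based collaborative filtering: predict a member's rating of an unseen movie from the ratings of members whose activity looks similar. The ratings matrix below is invented, and real recommenders are far more elaborate, but the mechanism is the same.

```python
import numpy as np

# Hypothetical ratings matrix: rows are members, columns are movies,
# 0 marks "not yet rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict(R, user, item):
    """Weight other members' ratings of this item by their cosine
    similarity to the target member's rating vector."""
    raters = np.where(R[:, item] > 0)[0]
    norms = np.linalg.norm(R[raters], axis=1) * np.linalg.norm(R[user])
    sims = R[raters] @ R[user] / norms
    return sims @ R[raters, item] / sims.sum()

print("predicted rating:", round(predict(R, user=0, item=2), 2))
```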

Education is getting very close to a time when personalization will become commonplace in learning. Imagine an introductory biology course. The instructor is responsible for supporting student learning, but her role has changed to one of designing, orchestrating, and supporting learning experiences rather than “telling.” Working within whatever parameters are set by the institution within which the course is offered, the instructor elaborates and communicates the course’s learning objectives and identifies resources and experiences through which those learning goals can be attained. Rather than requiring all students to listen to the same lectures and complete the same homework in the same sequence and at the same pace, the instructor points students toward a rich set of resources, some of which are online, and some of which are provided within classrooms and laboratories. Thus, students learn the required material by building and following their own learning maps. 

Suppose a student has reached a place where the next unit is population genetics. In an online learning system, the student’s dashboard shows a set of 20 different population genetics learning resources, including lectures by a master teacher, sophisticated video productions emphasizing visual images related to the genetics concepts, interactive population genetics simulation games, an online collaborative group project, and combinations of text and practice exercises. Each resource comes with a rating of how much of the population genetics portion of the learning map it covers, the size and range of learning gains attained by students who have used it in the past, and student ratings of the resource for ease and enjoyment of use. These ratings are derived from past activities of all students, such as “like” indicators, assessment results, and correlations between student activity and assessment results. 

The student chooses a resource to work with, and his or her interactions with it are used to continuously update the system's model of how much he or she knows about population genetics. After the student has worked with the resource, the dashboard shows updated ratings for each population genetics learning resource; these ratings indicate how much of the unit content the student has not yet mastered is covered by each resource. At any time, the student may choose to take an online practice assessment for the population genetics unit. Student responses to this assessment give the system, and the student, an even better idea of what he or she has already mastered, how helpful different resources have been in achieving that mastery, and what still needs to be addressed. The teacher and the institution have access to the online learning data, which they can use to certify the student's accomplishments. This scenario shows the possibility of leveraging data to improve student performance. Another example of using data to "sense" student learning and engagement is described in the sidebar on the moment of learning, which illustrates how detailed behavior data can pinpoint cognitive events. The increased ability to use data in these ways is due in part to developments in several fields of computer science and statistics. To support the understanding of what kinds of analyses are possible, the next section defines educational data mining, learning analytics, and visual data analytics, and describes the techniques they use to answer questions relevant to teaching and learning.
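The kind of continuous model updating described here is often implemented with Bayesian knowledge tracing (BKT). The following is a minimal sketch of the standard BKT update; the parameter values and the response sequence are illustrative assumptions, not calibrated figures.

# Minimal Bayesian Knowledge Tracing (BKT) sketch: one common way an online
# system updates its estimate that a student has mastered a skill after each
# observed response. Parameter values below are illustrative assumptions.
P_INIT = 0.3    # prior probability the skill is already mastered
P_LEARN = 0.2   # probability of learning the skill on each practice step
P_SLIP = 0.1    # probability of answering wrong despite mastery
P_GUESS = 0.25  # probability of answering right without mastery

def bkt_update(p_mastery, correct):
    """Posterior P(mastery) after one observed answer, then apply learning."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for answer in [True, True, False, True]:  # a student's practice responses
    p = bkt_update(p, answer)
    print(f"estimated P(mastery) = {p:.3f}")

Each observed answer shifts the mastery estimate up or down, which is what allows the dashboard to re-rate the remaining resources after every interaction.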

                Capturing the Moment of Learning by Tracking Game Players’ Behaviors 
The Wheeling Jesuit University's Cyber-enabled Teaching and Learning through Game-based, Metaphor-Enhanced Learning Objects (CyGaMEs) project was successful in measuring learning using assessments embedded in games. CyGaMEs quantifies game-play activity to track timed progress toward the game's goal and uses this progress as a measure of player learning. CyGaMEs also captures a self-report of the game player's engagement or flow, i.e., feelings of skill and challenge, as these feelings vary throughout game play. In addition to timed progress and self-reported engagement, CyGaMEs captures the behaviors the player uses during play. Reese et al. (in press) showed that these behavior data exposed a prototypical "moment of learning" that was confirmed by the timed progress report. Research using the flow data to determine how user experience interacts with learning is ongoing.
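As a toy illustration of the general idea, rather than the CyGaMEs project's actual method, a "moment of learning" might be located as the point where a player's rate of timed progress jumps; the data and the split criterion below are invented for illustration.

# Toy sketch: flag a candidate "moment of learning" as the time step that
# best splits a player's timed-progress series into slow-then-fast segments.
def moment_of_learning(progress):
    """Return the index whose two-segment mean-rate difference is largest."""
    rates = [b - a for a, b in zip(progress, progress[1:])]
    best_t, best_gap = None, 0.0
    for t in range(1, len(rates)):
        before = sum(rates[:t]) / t
        after = sum(rates[t:]) / (len(rates) - t)
        if after - before > best_gap:
            best_t, best_gap = t, after - before
    return best_t

# Progress toward the game goal at equal time intervals (toy data).
progress = [0.0, 0.05, 0.08, 0.10, 0.30, 0.55, 0.80, 1.00]
print(moment_of_learning(progress))  # step where the progress rate jumps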

Friday, 8 July 2016

Neuromythologies in Education

Neuromythologies in education: VAK learning styles, multiple intelligences, the 10% usage theory, and left- and right-brained thinking…
                   
Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain.

Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, left- and right-brained thinking, VAK learning styles and multiple intelligences.

Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny.

Main argument: The main elements of the argument are as follows. We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate left- and right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits.

Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.



Introduction
Neuromythologies are those popular accounts of brain functioning which often appear within so-called 'brain-based' educational applications. They can be categorised into neuromyths where more is better: 'If we can get more of the brain to "light up", then learning will improve...', and neuromyths where specificity is better: 'If we concentrate teaching on the "lit-up" brain areas then learning will improve...'. Prominent examples of the former include: the 10% myth, that we only use 10% of our brain; multiple intelligences; and Brain Gym. Prominent examples of the latter include: left- and right-brained thinking; VAK (visual, auditory and kinaesthetic) learning styles; and water as brain food. Characteristically, the evidential basis of these schemes does not lie in cognitive neuroscience, but rather with their various enthusiastic promoters; in fact, sometimes the scientific evidence flatly contradicts the brain-based claims.

The assumption here is that educational practices which claim to be concomitant with the workings of the brain should, in fact, be so, at least to the extent that the scientific jury can ever be conclusive (Blakemore and Frith 2005). A counter-argument might be posed that the ultimate criterion is pragmatic, not evidential: if it works in the classroom, who cares if it seems scientifically untenable? For this author, basing education on scientific evidence is the hallmark of sound professional practice, and should be encouraged within the educational profession wherever possible. The counter-argument only serves to undermine the professionalism of teachers, and so should be resisted.

This is not to say that there is not a glimmer of truth embedded within various neuromyths. Usually their origins do lie in valid scientific research; it is just that the extrapolations go well beyond the data, especially in the transfer out of the laboratory and into the classroom (Howard-Jones 2007). For example, there is plenty of evidence that cognitive function benefits from cardiovascular fitness; hence, general exercise is good for the brain in general (Blakemore and Frith 2005). But this does not mean that pressing particular spots on one's body, as per Brain Gym, will enhance the activation of particular areas in the brain. As another example, there are undoubtedly individual differences in perceptual acuities which are modality based, including visual, auditory and kinaesthetic sensations (although smell and taste are more notable), but this does not mean that learning is restricted to, or even necessarily associated with, one's superior sense. All of us have areas of ability in which we perform better than others, especially as we grow older and spend more time on one rather than another. Consequently, a school curriculum which offers multiple opportunities is commendable, but this does not necessarily depend on there being multiple intelligences within each child which fortuitously map on to the various areas of the curriculum. General cognitive ability could just as well play an important role in learning outcomes across the board. The generation of such neuromythologies, and the possible reasons for their widespread acceptance, has become a matter for investigation itself.
In particular, the phenomenon of their widespread and largely uncritical acceptance in education raises several questions: why has this happened, and what might this suggest about the capacity of the education profession to engage in professional reflection on complex scientific evidence?

And one cannot help but wonder about the extent to which political pressure for endless improvement in standardised test scores, publicised via school league tables, drives teachers to adopt a one-size-fits-all, brain-based life-raft when their daily classroom experience is replete with children's individual differences. To gather some data about these issues, Pickering and Howard-Jones (2007) surveyed nearly 200 teachers who were either attending one of two education-and-brain conferences in the UK (one brain-based, the other academic) or contributing to an OECD website internationally. All respondents were enthusiastic about the prospects of neuroscience informing teaching practice, particularly for pedagogy, but less so for curriculum design. Moreover, despite a prevailing ethos of pragmatism (notably among the brain-based conference attendees), it was generally conceded that the role of neuroscientists was to be professionally informative rather than prescriptive. This, in turn, points to the critical necessity for a mutually comprehensible language with which neuroscientists and educators can engage in a genuine interdisciplinary dialogue.

The American Nobel Laureate physicist Richard Feynman, in one of his more famous graduation addresses at Caltech, warned his audience of young science graduates about 'cargo cult science' (Feynman 1974). His point was that, while it might accord with 'human nature' to engage in wishful thinking, good scientists have to learn not to fool themselves. Feynman's warning could well be applied to the myriad 'brain-based' strategies that pervade current educational thinking. Whereas it is commonly stated in such schemes that the brain is the most complex object in the universe (although how this could possibly be verified remains unexplained), this assumption is then completely ignored in proposing a pedagogy based on the simplest of analyses: e.g., in the brain there are two hemispheres, left and right; therefore there are two kinds of thinking, of-the-left-brain and of-the-right-brain; and therefore there are only two kinds of teaching necessary, for-the-left-brain and for-the-right-brain. Not a very exciting universe where the most complex object has only two states! And not, fortunately, the universe in which we exist, where the complexity of the human brain has been the focus of intense investigation for over a century, but particularly over the past two decades, thanks to the invention of neuroimaging technologies.

The resulting neuroimages – brains with brightly coloured areas – are disarmingly simple, and seem to fit with a common-sense view of the brain as having localised specialist functions which enable us to do the various things we do. But such apparent simplicity is generated out of considerable complexity. In functional magnetic resonance imaging (fMRI), for example, the images are the end result of many years' work on understanding the quantum mechanics of nuclear magnetic resonance phenomena, the development of the engineering of superconducting magnets, the application of inverse fast Fourier transforms to large data sets and the refinement of high-speed computing hardware and software to analyse large data sets across multiple parameters. The neuroimaging picture is undoubtedly worth the proverbial thousand words, but the scientist's words can be quite different from those of the layperson. A crucial point that most of the media overlook, or ignore, is that neuroimaging data are statistical.


The coloured blobs on brain maps representing areas of significant activation (the so-called 'lighting up') are like the peaks of sub-oceanic mountains which rise above sea level: in neuroimaging, how much or how little activation (the 'sea level') is revealed is determined by the researcher's choice of a suitable statistical threshold.
In fact, the most challenging aspect of most neuroimaging experimental design is to determine suitable control conditions to highlight a particular area of experimental interest and thus avoid showing how most of the brain is involved in most cognitive tasks.
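A small simulation makes the 'sea level' point concrete: the very same statistical map yields very different pictures of 'activation' depending on the chosen threshold. The data here are simulated, not real neuroimaging results.

# Sketch of the "sea level" point: the same statistical brain map looks
# sparse or busy depending on the researcher's threshold. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
z_map = rng.normal(0, 1, size=(8, 8))      # voxel-wise z-statistics
z_map[2:4, 2:4] += 4.0                     # a genuinely activated region

for threshold in (1.96, 3.0, 4.0):
    active = z_map > threshold
    print(f"z > {threshold}: {active.sum()} of {active.size} voxels 'light up'")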
So, in a classroom it would be quite silly to think that only a small portion of pupils' brains are involved in a task just because a small area of brain activity was reported in a neuroimaging study of a similar task (Geake 2006). Neuroscience is a laboratory-based endeavour. Even with the best of intentions, extrapolations from the lab to the classroom need to be made with considerable caution (Howard-Jones 2007). As Nobel Laureate Charles Sherrington (1938, 181) warned in Oxford some 70 years ago: 'To suppose the roof-brain consists of point to point centres identified each with a particular item of intelligent concrete behaviour is a scheme over simplified and to be abandoned.' In other words, we have to be very wary of oversimplifications of the neuro-level of description when seeking applications at the cognitive or behavioural levels. The central characteristic of brain function which generates its complexity is neural functional interconnectivity. There are common brain functions for all acts of intelligence, especially those involved in school learning (Geake in press). These interconnected brain functions (and implicated brain areas) include:
·         Working memory (lateral frontal cortex).
·         Long-term memory (hippocampus and other cortical areas).
·         Decision-making (orbitofrontal cortex).
·         Emotional mediation (limbic subcortex and associated frontal areas).
·         Sequencing of symbolic representation (fusiform gyrus and temporal lobes).
·         Conceptual interrelationships (parietal lobe).
·         Conceptual and motor rehearsal (cerebellum).
This parallel interconnected functioning is occurring all the time our brains are alive. Importantly, these neural contributions to intelligence are necessary for all school subjects, and all other aspects of cognition. Creative thinking would not be possible without our extensive neural interconnectivity (Geake and Dobson 2005). Moreover, there are no individual modules in the brain which correspond directly to the school curriculum (Geake 2006). Cerebral interconnectivity is necessary for all domain-specific learning, from music to maths to history to French as a second language. Neuromyths typically ignore such interconnectivity in their pursuit of simplicity. Steve Mithen (2005) argues that it was a characteristic of the Neanderthal brain that it was not well interconnected. This could explain the curious stasis of Neanderthal culture over several hundred thousand years, and the even more curious fact that Neanderthal culture was rapidly out-competed by our physically less robust Cro-Magnon forebears, whose brains, Mithen argues, had evolved to become well interconnected.

Multiple intelligences
Highly evolved cerebral interconnectedness has implications for any brain-based justification of the widely promoted model of multiple intelligences (MI). Gardner (1993) divided human cognitive abilities into seven intelligences: logic-mathematics, verbal, interpersonal, spatial, music, movement and intrapersonal. Some 2500 years earlier, Plato recommended that a balanced curriculum have the following six subjects: logic, rhetoric, arithmetic, geometry-astronomy, music and dance-physical. For philosopher-kings, additionally, meditation was recommended. Clearly MI is nothing new: Gardner has just recycled Plato. But although such a curriculum scheme is long-standing, it doesn’t mean that our brains think about these areas completely independently from one another. Each MI requires sensory information processing, memory, language, and so on. Rather, this just demonstrates Sherrington’s point that the way the brain goes about dividing its labours is quite separate from how we see such divisions on the outside, so to speak. In other words, there are no multiple intelligences, but rather, it is argued, multiple applications of the same multifaceted intelligence.
Whereas undoubtedly there are large individual differences in subject-specific abilities, the evidence which conflicts with a multiple intelligences interpretation of brain function is that these subject-specific abilities are positively correlated, as shown by Carroll (1993) in his large meta-analysis. Such a pervasive correlation between different abilities is conceptualised as general intelligence, g. The existence of g not only suggests that the same brain modules are likely to be involved in many different abilities, but that their functional connectivity is of paramount importance. In fact, the main thrust of research in cognitive neuroscience in the next decade will be the mapping of functional connectivity, that is, how functional modules transfer information, anatomically, bio-chemically, bio-electrically, rhythmically, synchronistically, and so on. A recent study along these lines sought evidence for neural correlates of general intelligence – i.e., where and how does the brain generate measures of general intelligence? Duncan et al. (2000) found a common brain involvement, in the frontal cortex of adult subjects, on both spatial and verbal IQ tests. A further meta-analysis of 20 neuroimaging studies involving language, logic, mathematics and memory showed that the same frontal cortical areas were involved (Duncan 2001). It seems unlikely that these intelligences are independent if the same part of the brain is common to all. This point is elaborated in a recent critique of MI (Waterhouse 2006, 213): 'The human brain is unlikely to function via Gardner's multiple intelligences. Taken together the evidence for the inter-correlations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner's intelligences could operate "via a different set of neural mechanisms" [as Gardner claims].' To explain how those same pathways support high-level general intelligence across so many different cognitive areas, Duncan (2001, 824) suggested that 'neurons in selected frontal regions adapt their properties to code information of relevance to current behaviour, pruning away . . . all that is currently task-irrelevant.'
So, underlying our specific abilities is adaptive brain functioning. In support of this idea of an adapting brain, Dehaene and his colleagues have proposed a dynamic model of brain functioning in which these frontal adaptive neurons coordinate the myriad inputs from our perceptual modules from all over the brain, and continually assess the relative importance of these inputs such that from time to time, a thought becomes conscious; it literally ‘comes to mind’ (Dehaene, Kerszberg, and Changeux 1998). It could be predicted, then, that deliberate attempts to restrict intelligence within classrooms according to MI theory would not promote children’s learning, and it could be noted in passing that one of the ‘independent consultants’ who advocates brain-based learning strategies acknowledges teachers’ frustration with the lack of long-term impact of applying MI theory (Beere 2006).
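Returning to Carroll's meta-analytic point above, a toy simulation shows how positively correlated subject scores give rise to a single dominant factor, g. The data-generating assumptions here (factor loading, noise level, sample size) are invented for illustration, not estimates from real psychometric data.

# Sketch of Carroll's point: when subject-specific scores are positively
# correlated, a single general factor (g) captures much of the variance.
import numpy as np

rng = np.random.default_rng(1)
n = 500
g = rng.normal(0, 1, n)                       # latent general ability
subjects = ["maths", "verbal", "spatial", "music"]
# Each observed score = shared g plus subject-specific noise.
scores = np.column_stack([0.7 * g + 0.5 * rng.normal(0, 1, n) for _ in subjects])

corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, largest first
print("positive inter-correlations:\n", corr.round(2))
print("variance explained by first factor:", round(eigvals[0] / len(subjects), 2))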

10% Usage Theory
None of the above implies that g is all that there is to intelligence – quite the opposite. With its population age-norming, IQ might be a convenient surrogate for intelligence in the laboratory, but not even the most resolute empiricist would claim that IQ captures all of the variance in cognitive abilities. Rather, intelligence in all its manifestations illustrates the underlying dynamic complexity of its generative neural processes, with emphasis on ‘dynamic’. There is overwhelming evidence that the brain is perpetually busy, and that even when any of our brain cells are not involved in processing some information, they still fire randomly. As an organ which has evolved not to know what is going to happen next, such constant activity keeps our brain in a state of readiness. Consequently, the neuromyth that ‘We only use 10% of our brains’ could not be more in error.
The absurdity has been pointed out by Beyerstein (2004): evolution does not produce excess, much less 90% excess. In the millions of studies of the brain, no one has ever found an unused portion! It is unfortunate that teachers are constantly subjected to such pervasive nonsense about the brain, so it is worth pausing to investigate the various sources of the 10% myth (Nyhus and Sobel 2003). It seems to have begun with an Italian neurosurgeon c.1890 who removed scoops of the brains of psychiatric patients to see if there were any differences in their reported behaviours. The myth received an unexpected boost c.1920 during a radio interview with Albert Einstein, when the physicist used the 10% figure to implore us to think more. The myth received its widest circulation before the Second World War, when some American advertisers of home-help manuals re-invented the 10% figure in order to convince customers that they were not very smart. Odd, then, that it has been so enthusiastically adopted by wishful-thinking educationists at the end of the twentieth century. It would be nice if the brains of our students had all this spare educable capacity. To be sure, the plasticity of young (and even older) brains should never be underestimated. But what plasticity requires is a dynamically engaged brain, with all neurons firing. To put it bluntly, if you are only using 10% of your brain, then you are in a vegetative state so close to death that you should hope (not that you could) that your relatives will pull the plug on the life support machine!


Left- and right-brained thinking
 Another pervasive example of over-simplification has been the misinterpretation of laterality studies to produce so-called ‘left- and right-brained thinking’.
Historically, the original studies were of split-brain patients: patients who had the major communication tract between the two brain hemispheres, the corpus callosum, surgically severed in an attempt to reduce life-threatening epilepsy. It was found that the separate hemispheres of these patients could separately process different types of information, but only the left hemisphere's processing was reported by the patients. Unfortunately, the caveat that the researchers who carried out these studies back in the 1970s did emphasise – i.e., that these patients had abnormal brains – was largely ignored. For normal people, as Singh and O'Boyle (2004, 671) point out: 'the brain does not consist of two hemispheres operating in isolation. In fact, the different cognitive specialties of the LH and RH are so well integrated that they seldom cause significant processing conflicts . . . hemispheric specialisation . . . consists of a dynamic interactive partnership between the two.' Creative thinking, in particular, requires the interaction of both hemispheric specialists; neither one can operate in isolation from the other: 'Since the right hemisphere and the left hemisphere are massively interconnected (through the corpus callosum), it is not only possible, but also highly likely, that the creative person can iterate back and forth between these specialized modes to arrive at a practical solution to a real problem. If the right hemisphere were somehow disconnected from the left and confined to its own specialized thinking modes, it might be relegated to only "soft" fantasy solutions, pipe dreams or weird ideas that would be difficult, if not impossible, to fully implement in the real world. The left brain helps keep the right brain on track.' (Herrmann 1998, http://www.sciam.com) This, then, has important implications for the misguided 'right-brain' promotion of creative thinking in the school classroom. Goswami (2004) draws attention to a recent OECD report in which left brain/right brain learning is identified as the most troubling of several neuromyths – a sort of anti-intellectual virus which spreads among lay people as misinformation about what neuroscience can offer education.
This is not to say that there isn't abundant good evidence that much brain functioning is modular, and that many higher cognitive functions, such as language production, are critically reliant on modules which are usually found in one or other hemisphere, such as Broca's Area (BA), usually found in the left frontal cortex. But there are notable differences between individuals as to where these modules are located. In about 5% of right-handed males, BA is found in the right frontal cortex, and in a higher proportion of females, the principal function of BA, language production, is found in both the left and right frontal cortices. In left-handed people, only 60% have BA functions on the left, with the rest having their language production involving frontal areas on both sides or on the right (Kolb and Wishaw 1990). An implication of this for neuroscience research is that practically all subjects in neuroimaging studies are screened for extreme right-handedness – it is a way of maximising the probability that the group map has contributions from all subjects (that is, their functional modules involved in the study will be in much the same place in the different individuals' brains).
Consequently, with a nice circularity, the data which show that language production is on the left come almost exclusively from subjects who have been chosen precisely because their language production areas are on the left. Thus the left- and right-brain thinking myth seems to have arisen from misapplying lab studies which show that the semantic system is left-lateralised (language information processing in the left hemisphere; graphic and emotional information processing in the right hemisphere), while ignoring several important caveats. First, the left-lateralisation is in fact a statistically significant bias, not an absolute. Even in left-lateralised individuals, language processing does stimulate some right hemisphere activation. Second, the subjects for such studies are extremely right-handed. As language researchers are at pains to point out: 'It is dangerous to suppose that language processing only occurs in the left hemisphere of all people' (Thierry, Giraud, and Price 2003, 506). The largest interconnection to transmit information in the brain is the corpus callosum, the thick band of fibres which connects the two hemispheres. It seems that the left and right sides of our brains cannot help but pass all information between them. In fact, there is some evidence that constrictions in the corpus callosum could be predictive of deficiencies in reading abilities (Fine 2005), which obviously could not occur if language processing were an exclusively left hemisphere activity. It would be neat if all cognitive functioning were simply lateralised, and towards such a schema some commentators have suggested that perhaps there are stylistic differences between left and right hemispheric functions, with the left mediating detail while the holistic right focuses on the bigger picture. For example, using EEG to describe the time course of activations identified by fMRI, Jung-Beeman et al. (2004) found that the insight or 'aha' moment of problem solution elicits increased neural activity in the right hemisphere's temporal lobe. Jung-Beeman et al. (2004) suggest that this right hemisphere function facilitates a coarse-level integration of information from distant relational sources, in contrast to the finer-level information processing characteristic of its left hemisphere homologue. However, researchers in music cognition disagree (Peretz 2003). Even regarding the left hemisphere (metaphorically if not literally) as a verbal processor, music, as non-verbal information par excellence, is not exclusively processed in the right hemisphere, but in both hemispheres (Peretz 2003). Moreover, neuroimaging studies have shown that the location and extent of the various areas of the brain involved with music perception and production shift and grow with musical experience (Parsons 2003).
In fact, there is a strong evolutionary argument that music plays a crucial role in promoting the growth of the inter-module connections which underpin cognitive development in infants and young children (Cross 1999). Consequently, for the many reasons noted above, leading neuroscientists have been calling on the neuroscience community to shift their interpretative focus of brain function from modularisation to interaction. As Hellige (2000, 206) pleads: ‘Having learned so much about hemispheric differences . . . it is now time to put the brain back together again.’ Or as Walsh and Pascual-Leone (2003, 206) summarise: ‘Human brain function and behaviour seem best explained on the basis of functional connectivity between brain structures rather than on the basis of localization of a given function to a specific brain structure.’

VAK Learning styles
This emphasis on connectedness rather than separateness of brain functions has important implications for education (Geake 2004).
The multi-sensory pedagogies which experienced teachers know to be effective are supported by fMRI research. The work of Calvert, Campbell and Brammer (2000) on imaging brain sites of cross-modal binding in human subjects seems relevant. Bimodal processing of congruent information has a supra-additive effect (e.g., simultaneously seeing and hearing the same information works better than first just seeing and then hearing it). These findings are consistent with observed behaviour. Much good pedagogy in the early years of schooling is based on coincident bimodal information processing, especially sight and sound, or sight and speech, as demonstrated by every early-years teacher pointing to the words of the story as she reads them aloud.

However, such 'natural' pedagogy is threatened by the promulgation of learning styles. The notion that individual differences in academic abilities can be partly attributed to individual learning styles has considerable intuitive appeal, if we are to judge by the number of learning style models or inventories that have been devised – 170 at the last count, and rising (Coffield et al. 2004). The myriad ways in which approaches to learning can seemingly be partitioned, labelled and measured seems to know no bounds. The disappointing outcome of all of this endeavour is that, overall, the evidence consistently shows that modifying a teaching approach to cater for differences in learning styles does not result in any improvement in learning outcomes (Coffield et al. 2004).

Despite the lack of positive evidence, the education community has been swamped by claims for a learning style model based on the sensory modalities: visual, auditory and kinaesthetic (VAK) (Dunn, Dunn and Price 1984). The idea is that children can be tested to ascertain which is their dominant learning style, V, A or K, and then taught accordingly. Some schools have even gone so far as to label children with V, A and K shirts, presumably because these purported differences are not otherwise obvious in the classroom. The implicit assumption here is that information gained through one sensory modality is processed in the brain, and learned, independently from information gained through another sensory modality. There is plenty of evidence from a plethora of cross-modal investigations as to why such an assumption is wrong. What is possibly more insidious is that focusing on one sensory modality flies in the face of the brain's natural interconnectivity. VAK might, if it has any effect at all, actually be harming the academic prospects of the children so inflicted.

A simple demonstration of the ineffectiveness of VAK as a model of cognition comes from asking 5-year-olds to distinguish different-sized groups of dots, where the groups are too large for counting (Gilmore, McCarthy, and Spelke 2007). So long as the group sizes are not almost equal, young children can do this quite reliably.
Now, what happens when one group is replaced by as many sounds played too rapidly for counting? There is no change in accuracy! Going from a V versus V version of the task to a V versus A version makes no difference to task performance. The reason is that input modalities in the brain are interlinked: visual with auditory; visual with motor; motor with auditory; visual with taste; and so on.
There are well-adapted evolutionary reasons for this. Out on the savannah as a pre-hominid hunter-gatherer, coordinating sight and sound made all the difference between detecting dinner and being dinner. As Sherrington (1938, 217) noted: 'The naive observer would have expected evolution in its course to have supplied us with more various sense organs for ampler perception of the world . . . Not new senses but better liaison between the old senses is what the developing nervous system has in this respect stood for.' To emphasise the cross-modal nature of sensory experience, Kayser (2007) writes that 'the brain sees with its ears and touch, and hears with its eyes.'

Moreover, as primates, we are predominantly processors of visual information. This is true even for congenitally blind children, who instantiate Braille not in the kinaesthetic areas of their brains, but in those parts of their visual cortices that sighted children dedicate to learning written language. Moreover, unsighted people create the same mental spatial maps of their physical reality as sighted people do (Kriegseis et al. in press). Obviously the information blind people use to create spatial maps comes from auditory and tactile inputs, but it gets used as though it were visual. Similarly, people who lose their hearing and then receive a cochlear implant find that they are suddenly much more dependent on visual speech, such as cues for segmentation and formants, to conduct conversation (Thomas and Pilling in press).

Wright (2007) points out just how interconnected our daily neural processes must be. Eating engages not just taste, but smell, tactile (inside the mouth), auditory and visual sensations. Learning a language, and the practice of it, requires the coordinated use of visual, auditory and kinaesthetic modalities, in addition to memory, emotion, will, thinking and imagination: 'To an anatomist this implies the need for an immense number of neural connections between many parts of the brain. In particular, there must be numerous links between the primary auditory cortex (in the temporal lobe), the primary proprioceptive-tactile cortex (in the parietal lobe) and the primary visual cortex (in the occipital lobe). There is indeed such a neural concourse, in the parieto-temporo-occipital "association" cortex in each cerebral hemisphere.' (Wright 2007, 275)

Input information is abstracted to be processed and learnt, mostly unconsciously, through the brain's interconnectivity (Dehaene, Kerszberg, and Changeux 1998). Actually, we don't even create sensory perception in our sensory cortices: 'For a long time it was thought that the primary sensory areas are the substrate of our perception . . . these zones simply generate representational maps of the sensorial information . . . although these respond to stimuli, they are not responsible for . . . perceptions . . . Perceptual experience occurs in certain zones of the frontal lobes [where] neurons combine sensory information with memory information.' (Trujillo 2006, M9)

Literally following a VAK regime in real classrooms would lead to all sorts of ridiculous paradoxes: what does a teacher do with the V and K 'learners' in a music lesson, the A and K 'learners' in an art lesson, or the V and A 'learners' in a craft practical lesson? The images of blindfolds and corks in mouths are all too reminiscent of Tommy, the rock opera by The Who. As Sharp, Byrne and Bowker (in press) elaborate, VAK trivialises the complexity of learning, and in doing so threatens the professionality of educators.
Fortunately, many teachers have not been taken in. Ironically, VAK has become, in the hands of practitioners, a recipe for a mixed-modality pedagogy where lessons have explicit presentations of material in V, A and K modes.

Teachers quickly observed that their pupils' so-called learning styles were not stable: the expressions of V-, A- and K-ness varied with the demands of the lessons, as they should (Geake 2006). As with other learning-style inventories, research has shown that there is no improvement in learning outcomes with VAK beyond the effect of teacher enthusiasm, where 'attempts to focus on learning styles were wasted effort' (Kratzig and Arbuthnott 2006). We might speculate in passing on why VAK and other 'learning styles' seem so attractive. I wonder whether two aspects of folk psychology (that we seem to learn differently from each other, and that we have five senses) have combined to create a folk neuroscience in which the workings of our brains directly reflect our folk psychology. Of course, if our brains were that simple, we wouldn't be here today!