Saturday 16 July 2016

Pokémon Go is the future of education…

Five reasons why Pokémon Go is the future of education…
1. It’s popular.
2. It’s fun.
3. It’s on phones and kids like their phones, so education of the future will have to be on phones.
4. It utilizes augmented reality, which is better than reality because, as Jane McGonigal tells us, “reality is broken,” so if we can fix reality by augmenting it, we should.
5. Disruptive technology is coming for education, and if previous disruptive technologies such as MOOCs, adaptive software, Instagram, Uber, Snapchat, Twitter, badges, Candy Crush, the Kardashians, microcredentials, Comet Hale-Bopp, and so on haven’t managed to disrupt education, then surely Pokémon Go will because something has to eventually.
I downloaded and played Pokémon Go long enough to sufficiently understand its appeal, after which I deleted the app because I could tell its existence on my phone was incompatible with my goal of finishing a pedagogy model this summer. I’m even too old to experience Pokémon nostalgia and still could sense its potential for becoming all-consuming.
Even in record-setting heat I’ve seen kids roaming my neighborhood, provisioned with extra hydration and external chargers for their smartphones. When I jogged by the local (Pokémon) “gym” at 6:15 in the morning, three people were already there, engrossed in combat.
I do not mean to harsh the buzz of Pokémon Go fans. It looks legitimately fun, and the salutary benefits of getting people off their couches and into the world (even in this unbelievable heat) are all to the good. Downtown was lousy with players talking, coaching, socializing. If I weren’t under this self-imposed deadline, I might’ve joined in.
The cooperative, public nature of the game is genuinely exciting. There are reports of shy people feeling emboldened to interact in the context of the game, and of people struggling with depression finding that the game motivates them to get out in the world. We should be interested in studying this power.
But I can also already hear the ed-tech machine grinding away trying to spin this phenomenon into disruption gold.
Because of this we should keep some things in mind.
I do believe the game has taken off because it shares something important with good education: it emphasizes process over product – it is actively fun to play. But while the gameplay is cooperative, it is also a competition, which carries all the usual pressures and incentives to prioritize “winning” over playing, product over process. I am reminded of the Tamagotchi craze of the ’90s: hearing a colleague’s bag chirping during a meeting, and, after she satisfied the device’s need, her explaining that her daughter’s school had banned the toy, so she had promised to keep the virtual creature alive during the day.
There is probably some parent out there already who is not enjoying the game in tandem with their child but capturing Pokémon and leveling up on their kid’s behalf.
We should also notice that Pokémon Go replicates some of the divides already present in education. Players of means, with more access to time, data, and money (for power-ups and lures), will do better than those without. There is already a Pokémon Go gap between those who pay for their upgrades and those who have to earn each level the old-fashioned way, by hurling virtual balls at virtual monsters.
Even after a week, I imagine there are players who are priced out of competing at the most coveted gyms, who are defeated by a system where others have access to more resources.
Others have raised concerns about the amount and kind of data that players are asked to relinquish for the opportunity to play the game.
Technology is a tool, and I hope and trust that some smart people are thinking about how the popularity of this particular tool can be translated into the educational realm.
But let’s not mistake the tool for the thing itself. Education is the thing. Too often in our rush for a technological solution to current problems, we redefine the thing into something technology can handle. Technology can’t handle the complicated but meaningful stuff, so we flatten, we standardize.
This is why we have people working on software that can grade essays. Software will never be able to truly respond to writing as humans do, so we have to train the humans to write essays that satisfy the limits of the algorithm.
But writing that is read only by algorithms isn’t writing, so why are we messing around with that stuff?
Progress? Disruption?
Education doesn’t happen to students, but inside them. I can’t remember where I heard that, but it’s true.
We don’t need education to look more like Pokémon Go just because it’s popular. What makes Pokémon Go an enjoyable game may have nothing to do with making education compelling.
The magic (if you want to call it that) of Pokémon Go isn’t the technology, but what the technology unlocks inside the person using it. If Pokémon Go is meant to inform the work of educators, let’s focus on that, rather than the technological tool itself.

Saturday 9 July 2016

Learning Analytics

Defining Learning Analytics 
“Learning analytics refers to the interpretation of a wide range of data produced by and gathered on behalf of students in order to assess academic progress, predict future performance, and spot potential issues. Data are collected from explicit student actions, such as completing assignments and taking exams, and from tacit actions, including online social interactions, extracurricular activities, posts on discussion forums, and other activities that are not directly assessed as part of the student’s educational progress. Analysis models that process and display the data assist faculty members and school personnel in interpretation. The goal of learning analytics is to enable teachers and schools to tailor educational opportunities to each student’s level of need and ability.”

“Learning analytics need not simply focus on student performance. It might be used as well to assess curricula, programs, and institutions. It could contribute to existing assessment efforts on a campus, helping provide a deeper analysis, or it might be used to transform pedagogy in a more radical manner. It might also be used by students themselves, creating opportunities for holistic synthesis across both formal and informal learning activities.”

Learning analytics is becoming defined as an area of research and application and is related to academic analytics, action analytics, and predictive analytics. Learning analytics emphasizes measurement and data collection as activities that institutions need to undertake and understand, and focuses on the analysis and reporting of the data. Unlike educational data mining, learning analytics does not generally address the development of new computational methods for data analysis but instead addresses the application of known methods and models to answer important questions that affect student learning and organizational learning systems. 

The goal of learning analytics is to enable teachers and schools to tailor educational opportunities to each student’s level of need and ability. Unlike educational data mining, which emphasizes system-generated and automated responses to students, learning analytics enables human tailoring of responses, such as adapting instructional content, intervening with at-risk students, and providing feedback. Learning analytics draws on a broader array of academic disciplines than educational data mining, incorporating concepts and techniques from information science and sociology, in addition to computer science, statistics, psychology, and the learning sciences. Unlike educational data mining, learning analytics generally does not emphasize reducing learning into components but instead seeks to understand entire systems and to support human decision making. 

Technical methods used in learning analytics are varied and draw from those used in educational data mining. Additionally, learning analytics may employ: 
• Social network analysis (e.g., analysis of student-to-student and student-to-teacher relationships and interactions to identify disconnected students, influencers, etc.) and 
• Social or “attention” metadata to determine what a user is engaged with. 

As with educational data mining, providing a visual representation of analytics is critical to generating actionable analyses; information is often presented in “dashboards” that show data in an easily digestible form. 
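As a toy illustration of the social network analysis mentioned above, simple degree counts over an interaction log are enough to surface disconnected students and likely influencers. The roster and interactions here are invented for illustration; a real analysis would use richer graph metrics.

```python
# Sketch: spotting disconnected students and "influencers" from interaction
# data. The roster and interaction log are invented for illustration.
from collections import defaultdict

roster = ["ana", "ben", "carla", "dev", "ella"]
# each pair = one recorded student-to-student interaction (e.g., a forum reply)
interactions = [("ana", "ben"), ("ben", "carla"), ("ana", "carla"), ("ben", "dev")]

degree = defaultdict(int)
for a, b in interactions:
    degree[a] += 1
    degree[b] += 1

disconnected = [s for s in roster if degree[s] == 0]  # no recorded interactions
influencer = max(roster, key=lambda s: degree[s])     # most-connected student

print(disconnected)
print(influencer)
```

A library such as NetworkX would provide the same idea (isolates, centrality) at scale, but the underlying question is just who is connected to whom, and how much.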

A key application of learning analytics is monitoring and predicting students’ learning performance, spotting potential issues early so that interventions can be provided to students at risk of failing a course or program of study. Several learning analytics models have been developed to identify student risk level in real time to increase the students’ likelihood of success. Educational institutions have shown increased interest in learning analytics as they face calls for more transparency and greater scrutiny of their student recruitment and retention practices. Data mining of student behavior in online courses has revealed differences between successful and unsuccessful students (as measured by final course grades) in terms of such variables as level of participation in discussion boards, number of emails sent, and number of quizzes completed. Analytics based on these student behavior variables can be used in feedback loops to provide more fluid and flexible curricula and to support immediate course alterations (e.g., sequencing of examples, exercises, and self-assessments) based on analyses of real-time learning data. 
In summary, learning analytics systems apply models to answer such questions as: 
• When are students ready to move on to the next topic? 
• When are students falling behind in a course? 
• When is a student at risk for not completing a course? 
• What grade is a student likely to get without intervention? 
• What is the best next course for a given student? 
• Should a student be referred to a counselor for help? 
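A minimal sketch of the kind of early-warning model these questions imply might score risk from the behavior variables named earlier (discussion posts, emails sent, quizzes completed). The weights below are illustrative placeholders, not fitted coefficients; a production model would be trained on historical course data.

```python
import math

# Sketch: a toy at-risk score from the behavior variables described above.
# The weights are illustrative placeholders, not fitted model coefficients.
def risk_of_failure(posts, emails, quizzes):
    # higher activity -> lower risk; logistic squashing keeps the score in (0, 1)
    score = 1.5 - 0.15 * posts - 0.05 * emails - 0.30 * quizzes
    return 1.0 / (1.0 + math.exp(-score))

active = risk_of_failure(posts=12, emails=8, quizzes=5)
quiet = risk_of_failure(posts=1, emails=0, quizzes=1)
print(active < 0.5 < quiet)  # the disengaged student is flagged; the active one is not
```

The point of the sketch is the feedback loop: as soon as the score crosses a threshold, a human (advisor, instructor) can intervene, which is exactly the "human tailoring" that distinguishes learning analytics from fully automated responses.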

Learning Analytics Applications 

Educational data mining and learning analytics research are beginning to answer increasingly complex questions about what a student knows and whether a student is engaged. For example, questions may concern what a short-term boost in performance in reading a word says about overall learning of that word, and whether gaze-tracking machinery can learn to detect student engagement. Researchers have experimented with new techniques for model building and also with new kinds of learning system data that have shown promise for predicting student outcomes. 

The application areas were discerned from the review of the published and gray literature and were used to frame the interviews with industry experts. These areas represent the broad categories in which data mining and analytics can be applied to online activity, especially as it relates to learning online. This is in contrast to the more general areas for big data use, such as health care, manufacturing, and retail. These application areas are:
(1) modeling of user knowledge, user behavior, and user experience; 
(2) user profiling; 
(3) modeling of key concepts in a domain and modeling a domain’s knowledge components; and 
(4) trend analysis. 

Another application area concerns how analytics are used to adapt to or personalize the user’s experience. Each of these application areas draws on different sources of data and answers a different set of questions, and the data sources that have been used thus far in these applications vary accordingly.

New technology start-ups founded on big data (e.g., Knewton, Desire2Learn) are optimistic about applying data mining and analytics—user and domain modeling and trend analysis—to adapt their online learning systems to offer users a personalized experience. Companies that “own” personal data (e.g., Yahoo!, Google, LinkedIn, Facebook) have supported open-source developments of big data software (e.g., Apache Foundation’s Hadoop) and encourage collective learning through public gatherings of developers to train them on the use of these tools (called hackdays or hackathons). The big data community is, in general, more tolerant of public trial-and-error efforts as they push data mining and analytics technology to maturity.

There are challenges in implementing data mining and learning analytics within K–20 settings. Experts pose a range of implementation considerations and potential barriers to adopting educational data mining and learning analytics, including technical challenges, institutional capacity, and legal and ethical issues. Successful application of educational data mining and learning analytics will not come without effort, cost, and a change in educational culture toward more frequent use of data to make decisions. What is the gap between the big data applications in the commerce, social, and service sectors and K–20 education? Given that learning analytics practices have been applied primarily in higher education thus far, the time to full adoption may be longer in other educational settings, such as K–12 institutions.

Education institutions pioneering the use of data mining and learning analytics are starting to see a payoff in improved learning and student retention. As described above, student data can help educators both track academic progress and understand which instructional practices are effective, and students can examine their own assessment data to identify their strengths and weaknesses and set learning goals for themselves. Recommendations from this guide are that K–12 schools should have a clear strategy for developing a data-driven culture and a concentrated focus on building the infrastructure required to aggregate and visualize data trends in timely and meaningful ways – a strategy that builds in privacy and ethical considerations from the beginning. The vision that data can be used by educators to drive instructional improvement and by students to help monitor their own learning is not new. However, the feasibility of implementing a data-driven approach to learning is greater with the more detailed learning micro-data generated when students learn online, with newly available tools for data mining and analytics, with more awareness of how these data and tools can be used for product improvement and in commercial applications, and with growing evidence of their practical application and utility in K–12 and higher education. There is also substantial evidence of effectiveness in other areas, such as energy and health care.



Personalized Learning Scenarios

Online consumer experiences provide strong evidence that computer scientists are developing methods to exploit user activity data and adapt accordingly. Consider the experience a consumer has when using a movie app to choose a movie. Members can browse the app’s offerings by category (e.g., Comedy) or search by a specific actor, director, or title. On choosing a movie, the member can see a brief description of it and compare its average rating by app users with that of other films in the same category. After watching a film, the member is asked to provide a simple rating of how much he or she enjoyed it. The next time the member returns to the app, his or her browsing, watching, and rating activity data are used as a basis for recommending more films. The more a person uses the app, the more the app learns about his or her preferences and the more accurate the predicted enjoyment. But that is not all the data that are used. Because many other members are browsing, watching, and rating the same movies, the app’s recommendation algorithm is able to group members based on their activity data. Once members are matched, activities by some group members can be used to recommend movies to other group members. Such customization is not unique to movie apps, of course. Companies such as Amazon, Overstock, and Pandora keep track of users’ online activities and provide personalized recommendations in a similar way. 
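The grouping-and-recommending logic described above can be sketched in a few lines. The members, titles, and ratings here are invented, and the similarity measure is deliberately crude; real systems use far more robust techniques (matrix factorization, implicit feedback, and so on).

```python
# Sketch: grouping members by rating agreement and recommending what a
# member's nearest neighbor enjoyed. All names and ratings (1-5) are invented.
ratings = {
    "meg":  {"heat": 5, "clue": 4, "up": 1},
    "joel": {"heat": 5, "clue": 5, "big": 4},
    "kim":  {"up": 5, "big": 2, "clue": 1},
}

def similarity(a, b):
    # agreement on co-rated titles: negated sum of rating gaps (0 = identical)
    shared = set(ratings[a]) & set(ratings[b])
    return -sum(abs(ratings[a][m] - ratings[b][m]) for m in shared)

def recommend(member):
    # match the member to the most similar other member...
    neighbor = max((m for m in ratings if m != member),
                   key=lambda m: similarity(member, m))
    # ...then suggest the neighbor's top-rated title the member hasn't seen
    unseen = {m: r for m, r in ratings[neighbor].items() if m not in ratings[member]}
    return max(unseen, key=unseen.get)

print(recommend("meg"))
```

Here "meg" agrees closely with "joel", so she is recommended the title joel rated highly that she has not yet watched; the same activity-matching idea underlies the learning-resource recommendations discussed below.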

Education is getting very close to a time when personalisation will become commonplace in learning. Imagine an introductory biology course. The instructor is responsible for supporting student learning, but her role has changed to one of designing, orchestrating, and supporting learning experiences rather than “telling.” Working within whatever parameters are set by the institution within which the course is offered, the instructor elaborates and communicates the course’s learning objectives and identifies resources and experiences through which those learning goals can be attained. Rather than requiring all students to listen to the same lectures and complete the same homework in the same sequence and at the same pace, the instructor points students toward a rich set of resources, some of which are online, and some of which are provided within classrooms and laboratories. Thus, students learn the required material by building and following their own learning maps. 

Suppose a student has reached a place where the next unit is population genetics. In an online learning system, the student’s dashboard shows a set of 20 different population genetics learning resources, including lectures by a master teacher, sophisticated video productions emphasizing visual images related to the genetics concepts, interactive population genetics simulation games, an online collaborative group project, and combinations of text and practice exercises. Each resource comes with a rating of how much of the population genetics portion of the learning map it covers, the size and range of learning gains attained by students who have used it in the past, and student ratings of the resource for ease and enjoyment of use. These ratings are derived from past activities of all students, such as “like” indicators, assessment results, and correlations between student activity and assessment results. 

The student chooses a resource to work with, and his or her interactions with it are used to continuously update the system’s model of how much he or she knows about population genetics. After the student has worked with the resource, the dashboard shows updated ratings for each population genetics learning resource; these ratings indicate how much of the unit content the student has not yet mastered is covered by each resource. At any time, the student may choose to take an online practice assessment for the population genetics unit. Student responses to this assessment give the system—and the student—an even better idea of what he or she has already mastered, how helpful different resources have been in achieving that mastery, and what still needs to be addressed. The teacher and the institution have access to the online learning data, which they can use to certify the student’s accomplishments. 

This scenario shows the possibility of leveraging data for improving student performance; another example of data use for “sensing” student learning and engagement is described in the sidebar on the moment of learning and illustrates how using detailed behavior data can pinpoint cognitive events. The increased ability to use data in these ways is due in part to developments in several fields of computer science and statistics. To support the understanding of what kinds of analyses are possible, the next section defines educational data mining, learning analytics, and visual data analytics, and describes the techniques they use to answer questions relevant to teaching and learning. 
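The dashboard behavior in this scenario, re-rating each resource by how much of the student's remaining gap it covers, can be sketched as simple set arithmetic. The concept names and coverage sets are invented for illustration; a real system would infer mastery probabilistically from interaction data rather than as a binary set.

```python
# Sketch: rating each resource by the fraction of the student's still-unmastered
# unit content it covers. Concepts and coverage sets are invented placeholders.
resources = {
    "lecture":    {"hardy-weinberg", "drift", "selection"},
    "simulation": {"drift", "migration"},
    "exercises":  {"hardy-weinberg", "selection"},
}
unit = {"hardy-weinberg", "drift", "selection", "migration"}
mastered = set()

def coverage_ratings():
    remaining = unit - mastered
    # fraction of the student's remaining gap that each resource addresses
    return {name: len(c & remaining) / len(remaining) for name, c in resources.items()}

print(coverage_ratings()["lecture"])      # covers 3 of 4 unmastered concepts: 0.75
mastered.update({"hardy-weinberg", "selection"})  # result of a practice assessment
print(coverage_ratings()["simulation"])   # now covers everything left: 1.0
```

After the practice assessment, the exercises resource drops to a rating of zero while the simulation rises to the top, which is exactly the "updated ratings" behavior the scenario describes.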

                Capturing the Moment of Learning by Tracking Game Players’ Behaviors 
The Wheeling Jesuit University’s Cyber-enabled Teaching and Learning through Game-based, Metaphor-Enhanced Learning Objects (CyGaMEs) project was successful in measuring learning using assessments embedded in games. CyGaMEs quantifies game play activity to track timed progress toward the game’s goal and uses this progress as a measure of player learning. CyGaMEs also captures a self-report of the game player’s engagement or flow, i.e., feelings of skill and challenge, as these feelings vary throughout game play. In addition to timed progress and self-reported engagement, CyGaMEs captures the behaviors the player uses during play. Reese et al. (in press) showed that these behavior data exposed a prototypical “moment of learning” that was confirmed by the timed progress report. Research using the flow data to determine how user experience interacts with learning is ongoing. 

Friday 8 July 2016

Neuromythologies in Education

Neuromythologies in education – VAK Learning Styles, Multiple Intelligences, the 10% usage theory and left/right-brained thinking…
                   
Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain.

Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, left- and right-brained thinking, VAK learning styles and multiple intelligences.

Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny.

Main argument: The main elements of the argument are as follows. We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate left- and right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits.

Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.



Introduction
Neuromythologies are those popular accounts of brain functioning, which often appear within so-called ‘brain-based’ educational applications. They could be categorised into neuromyths where more is better: ‘If we can get more of the brain to ‘‘light up’’, then learning will improve . . .’, and neuromyths where specificity is better: ‘If we concentrate teaching on the ‘‘lit-up’’ brain areas then learning will improve . . .’. Prominent examples of neuromythologies of the former include: the 10% myth, that we only use 10% of our brain; multiple intelligences; and Brain Gym. Prominent examples of neuromythologies of the latter include: left- and right-brained thinking; VAK (visual, auditory and kinaesthetic) learning styles; and water as brain food. Characteristically, the evidential basis of these schemes does not lie in cognitive neuroscience, but rather with the various enthusiastic promoters; in fact, sometimes the scientific evidence flatly contradicts the brain-based claims. The assumption here is that educational practices which claim to be concomitant with the workings of the brain should, in fact, be so, at least to the extent that the scientific jury can ever be conclusive (Blakemore and Frith 2005). A counter-argument might be posed that the ultimate criterion is pragmatic, not evidential, and if it works in the classroom who cares if it seems scientifically untenable. For this author, basing education on scientific evidence is the hallmark of sound professional practice, and should be encouraged within the educational profession wherever possible. The counter-argument only serves to undermine the professionalism of teachers, and so should be resisted. This is not to say that there is not a glimmer of truth embedded within various neuromyths. Usually their origins do lie in valid scientific research; it is just that the extrapolations go well beyond the data, especially in transfer out of the laboratory and into the classroom (Howard-Jones 2007). 
For example, there is plenty of evidence that cognitive function benefits from cardiovascular fitness; hence, general exercise is good for the brain in general (Blakemore and Frith 2005). But this does not mean that pressing particular spots on one’s body, as per Brain Gym, will enhance the activation of particular areas in the brain. As another example, there are undoubtedly individual differences in perceptual acuities which are modality based, and include visual, auditory and kinaesthetic sensations (although smell and taste are more notable), but this does not mean that learning is restricted to, or even necessarily associated with, one’s superior sense. All of us have areas of ability in which we perform better than others, especially as we grow older and spend more time on one rather than another. Consequently, a school curriculum which offers multiple opportunities is commendable, but this does not necessarily depend on there being multiple intelligences within each child which fortuitously map on to the various areas of curriculum. General cognitive ability could just as well play an important role in learning outcomes across the board. The generation of such neuromythologies, and possible reasons for their widespread acceptance, has become a matter for investigation itself. In particular, the phenomenon of their widespread and largely uncritical acceptance in education raises several questions: why has this happened? And what might this suggest about the capacity of the education profession to engage in professional reflection on complex scientific evidence?

And one cannot help but wonder about the extent to which political pressure for endless improvement in standardised test scores, publicised via school league tables, drives teachers to adopt a one-size-fits-all, brain-based life-raft when their daily classroom experience is replete with children’s individual differences. To gather some data about these issues, Pickering and Howard-Jones (2007) surveyed nearly 200 teachers either attending an education and brain conference in the UK (one brain based, the other academic) or contributing to an OECD website internationally. All respondents were enthusiastic about the prospects of neuroscience informing teaching practice, particularly for pedagogy, but less so for curriculum design. Moreover, despite a prevailing ethos of pragmatism (notably with the brain-based conference attendees), it was generally conceded that the role of neuroscientists was to be professionally informative rather than prescriptive. This, in turn, points to the critical necessity for a mutually comprehensible language with which neuroscientists and educators can engage in a genuine interdisciplinary dialogue. The American Nobel Laureate physicist Richard Feynman, in one of his more famous graduation addresses at Caltech, warned his audience of young science graduates about ‘cargo cult science’ (Feynman 1974). His point was that, while it might accord with ‘human nature’ to engage in wishful thinking, good scientists have to learn not to fool themselves. Feynman’s warning could well be applied to the myriad ‘brain-based’ strategies that pervade current educational thinking. 
Whereas it is commonly stated in such schemes that the brain is the most complex object in the universe (although how this could possibly be verified remains unexplained), this assumption is then completely ignored in proposing a pedagogy based on the simplest of analyses – e.g., in the brain there are two hemispheres, left and right, therefore there are two kinds of thinking: of-the-left brain and of-the-right-brain, and therefore there are only two kinds of teaching necessary: for-the-left-brain and for-the-right-brain. Not a very exciting universe where the most complex object has only two states! And not, fortunately, the universe in which we exist, where the complexity of the human brain has been the focus of intense investigation for over a century, but particularly over the past two decades, thanks to the invention of neuroimaging technologies. The resulting neuroimages – brains with brightly coloured areas – are disarmingly simple, and seem to fit with a common sense view of the brain as having localised specialist functions which enable us to do the various things we do. But such apparent simplicity is generated out of considerable complexity. In functional magnetic resonance imaging (fMRI), for example, the images are the end-result of many years’ work on understanding the quantum mechanics of nuclear magnetic resonance phenomena, the development of the engineering of superconducting magnets, the application of inverse fast Fourier transforms to large data sets and the refinement of high-speed computing hardware and software to analyse large data sets across multiple parameters. The neuroimaging picture is undoubtedly worth the proverbial thousand words, but the scientist’s words can be quite different from those of the layperson. A crucial point that most of the media overlook, or ignore, is that neuroimaging data are statistical.


The coloured blobs on brain maps representing areas of significant activation (so-called ‘lighting up’) are like the peaks of sub-oceanic mountains rising above sea level; in neuroimaging, how much or how little activation to reveal (the ‘sea level’) is determined by the researcher in setting a suitable statistical threshold.
In fact, the most challenging aspect of most neuroimaging experimental design is to determine suitable control conditions to highlight a particular area of experimental interest and thus avoid showing how most of the brain is involved in most cognitive tasks.
So, in a classroom it would be quite silly to think that only a small portion of pupils’ brains are involved in a task, just because a small area of brain activity was reported in a neuroimaging study of a similar task (Geake 2006). Neuroscience is a laboratory-based endeavour. Even with the best of intentions, extrapolations from the lab to the classroom need to be made with considerable caution (Howard-Jones 2007). As Nobel Laureate Charles Sherrington (1938, 181) warned in Oxford some 70 years ago: ‘To suppose the roof-brain consists of point to point centres identified each with a particular item of intelligent concrete behaviour is a scheme over simplified and to be abandoned.’ In other words, we have to be very wary of oversimplifications of the neuro-level of description in seeking applications at the cognitive or behavioural levels. The central characteristic of brain function which generates its complexity is neural functional interconnectivity. There are common brain functions for all acts of intelligence, especially those involved in school learning (Geake in press). These interconnected brain functions (and implicated brain areas) include:
·         Working memory (lateral frontal cortex).
·         Long-term memory (hippocampus and other cortical areas).
·         Decision-making (orbitofrontal cortex).
·         Emotional mediation (limbic subcortex and associated frontal areas).
·         Sequencing of symbolic representation (fusiform gyrus and temporal lobes).
·         Conceptual interrelationships (parietal lobe).
·         Conceptual and motor rehearsal (cerebellum).
This parallel interconnected functioning is occurring all the time our brains are alive. Importantly, these neural contributions to intelligence are necessary for all school subjects, and all other aspects of cognition. Creative thinking would not be possible without our extensive neural interconnectivity (Geake and Dobson 2005). Moreover, there are no individual modules in the brain which correspond directly to the school curriculum (Geake 2006). Cerebral interconnectivity is necessary for all domain-specific learning, from music to maths to history to French as a second language. Neuromyths typically ignore such interconnectivity in their pursuit of simplicity. Steve Mithen (2005) argues that it was a characteristic of the Neanderthal brain that it was not well interconnected. This could explain the curious stasis of Neanderthal culture over several hundred thousand years, and the even more curious fact that Neanderthal culture was rapidly out-competed by our physically less robust Cro-Magnon forebears, whose brains, Mithen argues, had evolved to become well interconnected.

Multiple intelligences
Highly evolved cerebral interconnectedness has implications for any brain-based justification of the widely promoted model of multiple intelligences (MI). Gardner (1993) divided human cognitive abilities into seven intelligences: logic-mathematics, verbal, interpersonal, spatial, music, movement and intrapersonal. Some 2500 years earlier, Plato recommended that a balanced curriculum have the following six subjects: logic, rhetoric, arithmetic, geometry-astronomy, music and dance-physical. For philosopher-kings, additionally, meditation was recommended. Clearly MI is nothing new: Gardner has just recycled Plato. But although such a curriculum scheme is long-standing, it doesn’t mean that our brains think about these areas completely independently from one another. Each MI requires sensory information processing, memory, language, and so on. Rather, this just demonstrates Sherrington’s point that the way the brain goes about dividing its labours is quite separate from how we see such divisions on the outside, so to speak. In other words, there are no multiple intelligences, but rather, it is argued, multiple applications of the same multifaceted intelligence.
Whereas undoubtedly there are large individual differences in subject-specific abilities, the evidence which conflicts with a multiple intelligences interpretation of brain function is that these subject-specific abilities are positively correlated, as shown by Carroll (1993) in his large meta-analysis. Such a pervasive correlation between different abilities is conceptualised as general intelligence, g. The existence of g not only suggests that the same brain modules are likely to be involved in many different abilities, but that their functional connectivity is of paramount importance. In fact, the main thrust of research in cognitive neuroscience in the next decade will be the mapping of functional connectivity, that is, how functional modules transfer information, anatomically, biochemically, bioelectrically, rhythmically, synchronistically, and so on. A recent study along these lines sought evidence for neural correlates of general intelligence – i.e., where and how does the brain generate measures of general intelligence? Duncan et al. (2000) found a common brain involvement, in the frontal cortex of adult subjects, on both spatial and verbal IQ tests. A further meta-analysis of 20 neuroimaging studies involving language, logic, mathematics and memory showed that the same frontal cortical areas were involved (Duncan 2001). It seems unlikely that these intelligences are independent if the same part of the brain is common to all. This point is elaborated in a recent critique of MI (Waterhouse 2006, 213). The human brain is unlikely to function via Gardner’s multiple intelligences.
Taken together the evidence for the inter-correlations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping ‘what is it?’ and ‘where is it?’ neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner’s intelligences could operate ‘via a different set of neural mechanisms’ [as Gardner claims].

To explain how those same pathways support high-level general intelligence across so many different cognitive areas, Duncan (2001, 824) suggested that: ‘neurons in selected frontal regions adapt their properties to code information of relevance to current behaviour, pruning away . . . all that is currently task-irrelevant.’
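The positive manifold that Carroll’s meta-analysis reports can be illustrated with a toy simulation (a sketch, not Carroll’s actual method: the shared-factor model, pupil numbers and noise levels are assumptions for illustration). When every test score is driven partly by a common factor, all pairs of tests correlate positively, and the first principal component of the correlation matrix captures the shared variance conventionally labelled g.

```python
import numpy as np

# Toy simulation: each pupil's score on every test is driven partly by a
# shared latent factor ("g") plus test-specific noise.
rng = np.random.default_rng(2)
n_pupils, n_tests = 1000, 6
g = rng.normal(0.0, 1.0, n_pupils)
scores = g[:, None] + rng.normal(0.0, 1.0, (n_pupils, n_tests))

# The positive manifold: every pair of tests correlates positively.
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tests, dtype=bool)]
all_positive = bool((off_diag > 0).all())

# The largest eigenvalue of the correlation matrix (the first principal
# component) measures the share of variance attributable to the common
# factor -- the statistical signature of g.
eigvals = np.linalg.eigvalsh(corr)
g_share = float(eigvals[-1] / eigvals.sum())
```

In this simulation the first component accounts for well over an equal share of the variance, which is the pattern – though not, of course, the neural mechanism – that the g literature describes.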
So, underlying our specific abilities is adaptive brain functioning. In support of this idea of an adapting brain, Dehaene and his colleagues have proposed a dynamic model of brain functioning in which these frontal adaptive neurons coordinate the myriad inputs from our perceptual modules from all over the brain, and continually assess the relative importance of these inputs such that from time to time, a thought becomes conscious; it literally ‘comes to mind’ (Dehaene, Kerszberg, and Changeux 1998). It could be predicted, then, that deliberate attempts to restrict intelligence within classrooms according to MI theory would not promote children’s learning, and it could be noted in passing that one of the ‘independent consultants’ who advocates brain-based learning strategies acknowledges teachers’ frustration with the lack of long-term impact of applying MI theory (Beere 2006).

10% Usage Theory
None of the above implies that g is all that there is to intelligence – quite the opposite. With its population age-norming, IQ might be a convenient surrogate for intelligence in the laboratory, but not even the most resolute empiricist would claim that IQ captures all of the variance in cognitive abilities. Rather, intelligence in all its manifestations illustrates the underlying dynamic complexity of its generative neural processes, with emphasis on ‘dynamic’. There is overwhelming evidence that the brain is perpetually busy, and that even when any of our brain cells are not involved in processing some information, they still fire randomly. As an organ which has evolved not to know what is going to happen next, such constant activity keeps our brain in a state of readiness. Consequently, the neuromyth that ‘We only use 10% of our brains’ could not be more in error.
The absurdity has been pointed out by Beyerstein (2004): evolution does not produce excess, much less 90% excess. In the millions of studies of the brain, no one has ever found an unused portion of the brain! It is unfortunate that teachers are constantly subjected to such pervasive nonsense about the brain, so it is worth pausing to investigate the various sources of the 10% myth (Nyhus and Sobel 2003). It seems to have begun with an Italian neuro-surgeon c.1890 who removed scoops of brains of psychiatric patients to see if there were any differences in their reported behaviours. The myth received an unexpected boost c.1920 during a radio interview with Albert Einstein, when the physicist used the 10% figure to implore us to think more. The myth received its widest circulation before the Second World War when some American advertisers of home-help manuals re-invented the 10% figure in order to convince customers that they were not very smart. Odd, then, that it has been so enthusiastically adopted by wishful-thinking educationists at the end of the twentieth century. It would be nice if the brains of our students had all this spare educable capacity. To be sure, the plasticity of young (and even older) brains should never be underestimated. But what plasticity requires is a dynamically engaged brain, with all neurons firing. To put it bluntly, if you are only using 10% of your brain, then you are in a vegetative state so close to death that you should hope (not that you could) that your relatives will pull out the plug of the life support machine!


Left- and right-brained thinking
Another pervasive example of over-simplification has been the misinterpretation of laterality studies to produce so-called ‘left- and right-brained thinking’.
Historically, the original studies were of split-brain patients: patients who had the major communication tract between the two brain hemispheres, the corpus callosum, surgically severed in an attempt to reduce life-threatening epilepsy. It was found that the separate hemispheres of these patients could separately process different types of information, but only the left hemisphere processing was reported by the patients. Unfortunately, the caveat that the researchers who carried out these studies back in the 1970s did emphasise – i.e., that these patients had abnormal brains – was largely ignored. For normal people, as Singh and O’Boyle (2004, 671) point out:

the brain does not consist of two hemispheres operating in isolation. In fact, the different cognitive specialties of the LH and RH are so well integrated that they seldom cause significant processing conflicts . . . hemispheric specialisation . . . consists of a dynamic interactive partnership between the two.

Creative thinking, in particular, requires the interaction of both hemispheric specialists; neither one can operate in isolation from the other:

Since the right hemisphere and the left hemisphere are massively interconnected (through the corpus callosum), it is not only possible, but also highly likely, that the creative person can iterate back and forth between these specialized modes to arrive at a practical solution to a real problem. If the right hemisphere were somehow disconnected from the left and confined to its own specialized thinking modes, it might be relegated to only ‘soft’ fantasy solutions, pipe dreams or weird ideas that would be difficult, if not impossible, to fully implement in the real world. The left brain helps keep the right brain on track. (Herrmann 1998, http://www.sciam.com)

This, then, has important implications for the misguided ‘right-brain’ promotion of creative thinking in the school classroom.
Goswami (2004) draws attention to a recent OECD report in which left brain/right brain learning is the most troubling of several neuromyths – a sort of anti-intellectual virus which spreads among lay people as misinformation about what neuroscience can offer education.
This is not to say that there isn’t abundant good evidence that much brain functioning is modular, and that many higher cognitive functions, such as language production, are critically reliant on modules which are usually found in one or other hemisphere, such as Broca’s Area (BA), usually found in the left frontal cortex. But there are notable differences between individuals as to where these modules are located. In about 5% of right-handed males, BA is found in the right frontal cortex, and in a higher proportion of females, the principal function of BA, language production, is found in both the left and right frontal cortices. In left-handed people, only 60% have BA functions on the left, with the rest having their language production involving frontal areas on both sides or on the right (Kolb and Whishaw 1990). An implication of this for neuroscience research is that practically all subjects in neuroimaging studies are screened for extreme right-handedness – it is a way of maximising the probability that the group map has contributions from all subjects (that is, their functional modules involved in the study will be in much the same place in the different individuals’ brains).
Consequently, with a nice circularity, the data which show that language production is on the left come almost exclusively from subjects who’ve been chosen to have their language production areas on the left. Thus the left- and right-brain thinking myth seems to have arisen from misapplying lab studies which show that the semantic system is left-lateralised (language information processing in the left hemisphere; graphic and emotional information processing in the right hemisphere) by ignoring several important caveats. First, the left-lateralisation is in fact a statistically significant bias, not an absolute. Even in left-lateralised individuals, language processing does stimulate some right hemisphere activation. Second, the subjects for such studies are extremely right-handed. As language researchers are at pains to point out: ‘It is dangerous to suppose that language processing only occurs in the left hemisphere of all people’ (Thierry, Giraud, and Price 2003, 506). The largest interconnection to transmit information in the brain is the corpus callosum, the thick band of fibres which connects the two hemispheres. It seems that the left and right sides of our brains cannot help but pass all information between them. In fact, there is some evidence that constrictions in the corpus callosum could be predictive of deficiencies in reading abilities (Fine 2005), which obviously could not occur if language processing were an exclusively left hemisphere activity. It would be neat if all cognitive functioning were simply lateralised, and towards such a schema some commentators have suggested that perhaps there are stylistic differences between left and right hemispheric functions, with the left mediating detail, while the holistic right focuses on the bigger picture. For example, using EEG to describe the time course of activations identified by fMRI, Jung-Beeman et al. (2004) found that the insight or ‘aha’ moment of problem solution elicits increased neural activity in the right hemisphere’s temporal lobe. Jung-Beeman et al. (2004) suggest that this right hemisphere function facilitates a coarse-level integration of information from distant relational sources, in contrast to the finer-level information processing characteristic of its left hemisphere homologue. However, researchers in music cognition disagree (Peretz 2003). Even regarding the left hemisphere (metaphorically if not literally) as a verbal processor, music, as non-verbal information par excellence, is not exclusively processed in the right, but in both hemispheres (Peretz 2003). Moreover, neuroimaging studies have shown that the location and extent of various areas of the brain involved with music perception and production shift and grow with musical experience (Parsons 2003).
In fact, there is a strong evolutionary argument that music plays a crucial role in promoting the growth of the inter-module connections which underpin cognitive development in infants and young children (Cross 1999). Consequently, for the many reasons noted above, leading neuroscientists have been calling on the neuroscience community to shift their interpretative focus of brain function from modularisation to interaction. As Hellige (2000, 206) pleads: ‘Having learned so much about hemispheric differences . . . it is now time to put the brain back together again.’ Or as Walsh and Pascual-Leone (2003, 206) summarise: ‘Human brain function and behaviour seem best explained on the basis of functional connectivity between brain structures rather than on the basis of localization of a given function to a specific brain structure.’

VAK Learning styles
This emphasis on connectedness rather than separateness of brain functions has important implications for education (Geake 2004).
The multi-sensory pedagogies, which experienced teachers know to be effective, are supported by fMRI research. The work of Calvert, Campbell and Brammer (2000), on imaging brain sites of cross-modal binding in human subjects, seems relevant. Bimodal processing of congruent information has a supra-additive effect (e.g., simultaneously seeing and hearing the same information works better than first just seeing and then hearing it). These findings are consistent with observed behaviour. Much good pedagogy in the early years of schooling is based on coincident bimodal information processing, especially sight and sound, or sight and speech, as demonstrated by every early years teacher pointing to the words of the story as she reads them aloud. However, such ‘natural’ pedagogy is threatened by the promulgation of learning styles. The notion that individual differences in academic abilities can be partly attributed to individual learning styles has considerable intuitive appeal if we are to judge by the number of learning style models or inventories that have been devised – 170 at the last count, and rising (Coffield et al. 2004). The myriad ways that approaches to learning can seem to be partitioned, labelled and measured seem to know no bounds. The disappointing outcome of all of this endeavour is that, overall, the evidence consistently shows that modifying a teaching approach to cater for differences in learning styles does not result in any improvement in learning outcomes (Coffield et al. 2004). Despite the lack of positive evidence, the education community has been swamped by claims for a learning style model based on the sensory modalities: visual, auditory and kinaesthetic (VAK) (Dunn, Dunn and Price 1984). The idea is that children can be tested to ascertain which is their dominant learning style, V, A or K, and then taught accordingly.
Some schools have even gone so far as to label children with V, A and K shirts, presumably because these purported differences are no longer obvious in the classroom. The implicit assumption here is that the information gained through one sensory modality is processed in the brain to be learned independently from information gained through another sensory modality. There is plenty of evidence from a plethora of cross-modal investigations as to why such an assumption is wrong. What is possibly more insidious is that focusing on one sensory modality flies in the face of the brain’s natural interconnectivity. VAK might, if it has any effect at all, be actually harming the academic prospects of the children so afflicted. A simple demonstration of the ineffectiveness of VAK as a model of cognition comes from asking 5-year-olds to distinguish different sized groups of dots where the groups are too large for counting (Gilmore, McCarthy, and Spelke 2007). So long as the group sizes are not almost equal, young children can do this quite reliably.
Now, what happens when one group is replaced by as many sounds played too rapidly for counting? There is no change in accuracy! Going from a V versus V version of the task to a V versus A version makes no difference to task performance. The reason is that input modalities in the brain are interlinked: visual with auditory; visual with motor; motor with auditory; visual with taste; and so on.
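This modality-independence is what one would expect if both dots and sounds map onto a shared internal magnitude representation. A toy model makes the point (a sketch, not the Gilmore et al. procedure: the log-Gaussian representation and the noise level are illustrative assumptions): because both inputs feed one noisy magnitude, accuracy depends only on the ratio of the two quantities, not on the modality labels.

```python
import numpy as np

# Toy model of approximate numerosity comparison: each quantity, whether
# seen (dots) or heard (beeps), is assumed to map onto the same noisy
# internal magnitude (a Gaussian on a log scale, i.e. Weber's law).
rng = np.random.default_rng(3)
NOISE = 0.2  # internal (Weber) noise, identical for both modalities

def compare(n1, n2, trials=20000):
    """Proportion of trials on which the larger quantity is judged larger."""
    m1 = rng.normal(np.log(n1), NOISE, trials)
    m2 = rng.normal(np.log(n2), NOISE, trials)
    return float((m2 > m1).mean())

# Accuracy tracks the ratio of the quantities: a 1:2 comparison is easy,
# a 7:8 comparison is hard, and since the model has only one shared
# representation, swapping "dots vs dots" for "dots vs sounds" changes
# nothing.
easy = compare(8, 16)   # 1:2 ratio
hard = compare(14, 16)  # 7:8 ratio
```

The point of the sketch is that a single cross-modal magnitude system reproduces the behavioural finding; a strictly modality-segregated (VAK-style) architecture would predict a cost when the comparison crosses modalities.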
There are well-adapted evolutionary reasons for this. Out on the savannah as a pre-hominid hunter-gatherer, coordinating sight and sound makes all the difference between detecting dinner and being dinner. As Sherrington (1938, 217) noted:

The naive observer would have expected evolution in its course to have supplied us with more various sense organs for ampler perception of the world . . . Not new senses but better liaison between the old senses is what the developing nervous system has in this respect stood for.

To emphasise the cross-modal nature of sensory experience, Kayser (2007) writes that: ‘the brain sees with its ears and touch, and hears with its eyes.’ Moreover, as primates, we are predominantly processors of visual information. This is true even for congenitally blind children who instantiate Braille not in the kinaesthetic areas of their brains, but in those parts of their visual cortices that sighted children dedicate to learning written language. Moreover, unsighted people create the same mental spatial maps of their physical reality as sighted people do (Kriegseis et al. in press). Obviously the information to create spatial maps by blind people comes from auditory and tactile inputs, but it gets used as though it were visual. Similarly, people who after losing their hearing get a cochlear implant find that they are suddenly much more dependent on visual speech, such as cues for segmentation and formants, to conduct conversation (Thomas and Pilling in press). Wright (2007) points out just how interconnected our daily neural processes must be. Eating does not engage just taste, but smell, tactile (inside the mouth), auditory and visual sensations. Learning a language, and the practice of it, requires the coordinated use of visual, auditory and kinaesthetic modalities, in addition to memory, emotion, will, thinking and imagination:

To an anatomist this implies the need for an immense number of neural connections between many parts of the brain. In particular, there must be numerous links between the primary auditory cortex (in the temporal lobe), the primary proprioceptive-tactile cortex (in the parietal lobe) and the primary visual cortex (in the occipital lobe). There is indeed such a neural concourse, in the parieto-temporo-occipital ‘association’ cortex in each cerebral hemisphere. (Wright 2007, 275)

Input information is abstracted to be processed and learnt, mostly unconsciously, through the brain’s interconnectivity (Dehaene, Kerszberg, and Changeux 1998). Actually, we don’t even create sensory perception in our sensory cortices:

For a long time it was thought that the primary sensory areas are the substrate of our perception. . . . these zones simply generate representational maps of the sensorial information . . . although these respond to stimuli, they are not responsible for . . . perceptions . . . Perceptual experience occurs in certain zones of the frontal lobes [where] neurons combine sensory information with memory information. (Trujillo 2006, M9)

Literally following a VAK regime in real classrooms would lead to all sorts of ridiculous paradoxes: what does a teacher do with the V and K ‘learners’ in a music lesson, the A and K ‘learners’ in an art lesson, or the V and A ‘learners’ in a craft practical lesson? The images of blindfolds and corks in mouths are all too reminiscent of Tommy, the rock opera by The Who. As Sharp, Byrne and Bowker (in press) elaborate, VAK trivialises the complexity of learning, and in doing so, threatens the professionality of educators. Fortunately, many teachers have not been taken in. Ironically, VAK has become, in the hands of practitioners, a recipe for a mixed-modality pedagogy where lessons have explicit presentations of material in V, A and K modes.

Teachers quickly observed that their pupils’ so-called learning styles were not stable, that the expressions of V-, A- and K-ness varied with the demands of the lessons, as they should (Geake 2006). As with other learning-style inventories, research has shown that there is no improvement of learning outcomes with VAK above teacher enthusiasm, where ‘attempts to focus on learning styles were wasted effort’ (Kratzig and Arbuthnott 2006). We might speculate in passing: why do VAK and other ‘learning styles’ seem so attractive? I wonder if two aspects of folk psychology (that we seem to learn differently from each other, and that we have five senses) have created a folk neuroscience: that the working of our brains directly reflects our folk psychology. Of course, if our brains were that simple, we wouldn’t be here today!