Neuromythologies in education: VAK learning styles, multiple intelligences, 10% usage theory and left- and right-brained thinking
Background: Many popular educational programmes claim to be
‘brain-based’, despite pleas from the neuroscience community that these
neuromyths do not have a basis in scientific evidence about the brain.
Purpose: The main aim of this paper is to examine several of the most popular
neuromyths in the light of the relevant neuroscientific and educational
evidence. Examples of neuromyths include: 10% brain usage, left- and
right-brained thinking, VAK learning styles and multiple intelligences.
Sources of evidence: The basis for the argument put forward includes a literature
review of relevant cognitive neuroscientific studies, often involving
neuroimaging, together with several comprehensive education reviews of the
brain-based approaches under scrutiny.
Main argument: The main elements of the argument are as follows. We use
most of our brains most of the time, not some restricted 10% brain usage. This
is because our brains are densely interconnected, and we exploit this
interconnectivity to enable our primitively evolved primate brains to live in
our complex modern human world. Although brain imaging delineates areas of
higher (and lower) activation in response to particular tasks, thinking
involves coordinated interconnectivity from both sides of the brain, not
separate left- and right-brained thinking. High intelligence requires higher
levels of inter-hemispheric and other connected activity. The brain’s
interconnectivity includes the senses, especially vision and hearing. We do not
learn by one sense alone, hence VAK learning styles do not reflect how our
brains actually learn, nor the individual differences we observe in classrooms.
Neuroimaging studies do not support multiple intelligences; in fact, the
opposite is true. Through the activity of its frontal cortices, among other
areas, the human brain seems to operate with general intelligence, applied to
multiple areas of endeavour. Studies of educational effectiveness of applying
any of these ideas in the classroom have failed to find any educational
benefits.
Conclusions: The main conclusions arising from the argument are that
teachers should seek independent scientific validation before adopting
brain-based products in their classrooms. A more sceptical approach to
educational panaceas could contribute to an enhanced professionalism of the
field.
Introduction
Neuromythologies are popular accounts of brain functioning which often appear within so-called ‘brain-based’ educational applications. They can be categorised into neuromyths where more is better: ‘If we can get more of the brain to “light up”, then learning will improve . . .’, and neuromyths where specificity is better: ‘If we concentrate teaching on the “lit-up” brain areas then learning will improve . . .’. Prominent
examples of neuromythologies of the former include: the 10% myth, that we only
use 10% of our brain; multiple intelligences; and Brain Gym. Prominent examples
of neuromythologies of the latter include: left- and right-brained thinking; VAK
(visual, auditory and kinaesthetic) learning styles; and water as brain food.
Characteristically, the evidential basis of these schemes does not lie in
cognitive neuroscience, but rather with the various enthusiastic promoters; in
fact, sometimes the scientific evidence flatly contradicts the brain-based
claims. The assumption here is that educational practices which claim to be
concomitant with the workings of the brain should, in fact, be so, at least to
the extent that the scientific jury can ever be conclusive (Blakemore and Frith
2005). A counter-argument might be posed that the ultimate criterion is
pragmatic, not evidential, and if it works in the classroom who cares if it
seems scientifically untenable. For this author, basing education on scientific
evidence is the hallmark of sound professional practice, and should be
encouraged within the educational profession wherever possible. The
counter-argument only serves to undermine the professionalism of teachers, and
so should be resisted. This is not to say that there is not a glimmer of truth
embedded within various neuromyths. Usually their origins do lie in valid
scientific research; it is just that the extrapolations go well beyond the
data, especially in transfer out of the laboratory and into the classroom
(Howard-Jones 2007). For example, there is plenty of evidence that cognitive
function benefits from cardiovascular fitness; hence, general exercise is good
for the brain in general (Blakemore and Frith 2005). But this does not mean
that pressing particular spots on one’s body, as per Brain Gym, will enhance
the activation of particular areas in the brain. As another example, there are
undoubtedly individual differences in perceptual acuities which are modality
based, and include visual, auditory and kinaesthetic sensations (although smell
and taste are more notable), but this does not mean that learning is restricted
to, or even necessarily associated with, one’s superior sense. All of us have
areas of ability in which we perform better than others, especially as we grow
older and spend more time on one rather than another. Consequently, a school
curriculum which offers multiple opportunities is commendable, but this does
not necessarily depend on there being multiple intelligences within each child
which fortuitously map on to the various areas of curriculum. General cognitive
ability could just as well play an important role in learning outcomes across
the board. The generation of such neuromythologies and possible reasons for
their widespread acceptance has become a matter for investigation itself. In
particular, the phenomenon of their widespread and largely uncritical
acceptance in education raises several questions: why has this happened, and what might it suggest about the capacity of the education profession to engage in professional reflection on complex scientific evidence?
And one cannot help but wonder about the extent to which
political pressure for endless improvement in standardised test scores,
publicised via school league tables, drives teachers to adopt a
one-size-fits-all, brain-based life-raft when their daily classroom experience
is replete with children’s individual differences. To gather some data about
these issues, Pickering and Howard-Jones (2007) surveyed nearly 200 teachers who were either attending one of two education-and-the-brain conferences in the UK (one brain-based, the other academic) or contributing to an OECD website internationally. All
respondents were enthusiastic about the prospects of neuroscience informing
teaching practice, particularly for pedagogy, but less so for curriculum
design. Moreover, despite a prevailing ethos of pragmatism (notably with the
brain-based conference attendees), it was generally conceded that the role of
neuroscientists was to be professionally informative rather than prescriptive.
This, in turn, points to the critical necessity for a mutually comprehensible
language with which neuroscientists and educators can engage in a genuine
interdisciplinary dialogue. The American Nobel Laureate physicist Richard
Feynman, in one of his more famous graduation addresses at Caltech, warned his
audience of young science graduates about ‘cargo cult science’ (Feynman 1974).
His point was that, while it might accord with ‘human nature’ to engage in
wishful thinking, good scientists have to learn not to fool themselves. Feynman’s
warning could well be applied to the myriad ‘brain-based’ strategies that
pervade current educational thinking. Whereas it is commonly stated in such
schemes that the brain is the most complex object in the universe (although how
this could possibly be verified remains unexplained), this assumption is then
completely ignored in proposing a pedagogy based on the simplest of analyses –
e.g., in the brain there are two hemispheres, left and right; therefore there are two kinds of thinking, of-the-left-brain and of-the-right-brain; and therefore there are only two kinds of teaching necessary, for-the-left-brain and for-the-right-brain. Not a very exciting universe where the most complex
object has only two states! And not, fortunately, the universe in which we
exist, where the complexity of the human brain has been the focus of intense
investigation for over a century, but particularly over the past two decades,
thanks to the invention of neuroimaging technologies. The resulting neuroimages
– brains with brightly coloured areas – are disarmingly simple, and seem to fit
with a common sense view of the brain as having localised specialist functions
which enable us to do the various things we do. But such apparent simplicity is
generated out of considerable complexity. In functional magnetic resonance
imaging (fMRI), for example, the images are the end-result of many years’ work
on understanding the quantum mechanics of nuclear magnetic resonance phenomena,
the development of the engineering of superconducting magnets, the application
of inverse fast Fourier transforms to large data sets and the refinement of
high-speed computing hardware and software to analyse large data sets across
multiple parameters. The neuroimaging picture is undoubtedly worth the proverbial
thousand words, but the scientist’s words can be quite different from those of
the layperson. A crucial point that most of the media overlook, or ignore, is
that neuroimaging data are statistical.
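The inverse Fourier transform step in that reconstruction chain can be illustrated with a toy round trip. This is a minimal NumPy sketch with random data standing in for a real MR slice and fully sampled k-space; actual scanner pipelines add coil combination, filtering and motion correction on top.

```python
import numpy as np

# Toy stand-in for MR image reconstruction: the scanner samples spatial
# frequencies ("k-space"); an inverse fast Fourier transform turns those
# samples back into an image.
rng = np.random.default_rng(1)
image = rng.random((64, 64))                 # pretend this is the true slice

k_space = np.fft.fft2(image)                 # what the scanner effectively measures
reconstructed = np.fft.ifft2(k_space).real   # inverse FFT recovers the slice

print(np.allclose(image, reconstructed))     # prints True: lossless round trip
```

The round trip is exact here only because k-space is fully and noiselessly sampled; the complexity the text describes lies in everything real scanners must do around this core transform.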
The coloured blobs on brain maps representing areas of significant activation (so-called ‘lighting up’) are like the peaks of sub-oceanic mountains which rise above sea level; in neuroimaging, how much or how little activation is revealed (where ‘sea level’ sits) is determined by the researcher in setting a suitable statistical threshold.
In fact, the most challenging aspect of most neuroimaging
experimental design is to determine suitable control conditions to highlight a
particular area of experimental interest and thus avoid showing how most of the
brain is involved in most cognitive tasks.
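The ‘sea level’ point can be sketched in a few lines. This is a toy statistical map, not real fMRI data, and the threshold values are arbitrary choices of the kind a researcher would make.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "statistical map": one z-score per voxel in a small 2D slice.
# Most voxels carry weak, noisy task-related signal; one patch is strong.
z_map = rng.normal(loc=0.5, scale=1.0, size=(32, 32))
z_map[10:14, 10:14] += 4.0  # a strongly task-related patch

# The researcher chooses the "sea level": only voxels whose z-score
# exceeds the threshold appear as coloured blobs in the published image.
for z_threshold in (1.0, 2.3, 3.1):
    share = np.mean(z_map > z_threshold)
    print(f"threshold z > {z_threshold}: {share:.0%} of voxels 'light up'")
```

Lowering the threshold floods the map with activation; raising it leaves only the peaks. The same data can therefore support either a ‘most of the brain is involved’ picture or a ‘small specialist area’ picture, depending on where sea level is set.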
So, in a classroom it would be quite silly to think that only a small portion of pupils’ brains is involved in a task, just because a small area of brain activity was reported in a neuroimaging study of a similar task
(Geake 2006). Neuroscience is a laboratory-based endeavour. Even with the best
of intentions, extrapolations from the lab to the classroom need to be made
with considerable caution (Howard-Jones 2007). As Nobel Laureate Charles
Sherrington (1938, 181) warned in Oxford some 70 years ago: ‘To suppose the
roof-brain consists of point to point centres identified each with a particular
item of intelligent concrete behaviour is a scheme over simplified and to be
abandoned.’ In other words, we have to be very wary of oversimplifications of
the neuro-level of description in seeking applications at the cognitive or
behavioural levels. The central characteristic of brain function which
generates its complexity is neural functional interconnectivity. There are
common brain functions for all acts of intelligence,
especially those involved in school learning (Geake in press). These
interconnected brain functions (and implicated brain areas) include:
· Working memory (lateral frontal cortex).
· Long-term memory (hippocampus and other cortical areas).
· Decision-making (orbitofrontal cortex).
· Emotional mediation (limbic subcortex and associated frontal areas).
· Sequencing of symbolic representation (fusiform gyrus and temporal lobes).
· Conceptual interrelationships (parietal lobe).
· Conceptual and motor rehearsal (cerebellum).
This parallel interconnected functioning is occurring all the time our brains are alive.
Importantly, these neural contributions to intelligence are necessary for all
school subjects, and all other aspects of cognition. Creative thinking would
not be possible without our extensive neural interconnectivity (Geake and
Dobson 2005). Moreover, there are no individual modules in the brain which
correspond directly to the school curriculum (Geake 2006). Cerebral interconnectivity
is necessary for all domain-specific learning, from music to maths to history
to French as a second language. Neuromyths typically ignore such
interconnectivity in their pursuit of simplicity. Steve Mithen (2005) argues
that it was a characteristic of the Neanderthal brain that it was not well
interconnected. This could explain the curious stasis of Neanderthal culture
over several hundred thousand years, and the even more curious fact that
Neanderthal culture was rapidly out-competed by our physically less robust
Cro-Magnon forebears, whose brains, Mithen argues, had evolved to become well
interconnected.
Multiple intelligences
Highly evolved cerebral interconnectedness has implications
for any brain-based justification of the widely promoted model of multiple
intelligences (MI). Gardner (1993) divided human cognitive abilities into seven
intelligences: logic-mathematics, verbal, interpersonal, spatial, music,
movement and intrapersonal. Some 2500 years earlier, Plato recommended that a
balanced curriculum have the following six subjects: logic, rhetoric,
arithmetic, geometry-astronomy, music and dance-physical. For
philosopher-kings, additionally, meditation was recommended. Clearly MI is
nothing new: Gardner has just recycled Plato. But although such a curriculum
scheme is long-standing, it doesn’t mean that our brains think about these
areas completely independently from one another. Each MI requires sensory
information processing, memory, language, and so on. Rather, this just
demonstrates Sherrington’s point that the way the brain goes about dividing its
labours is quite separate from how we see such divisions on the outside, so to
speak. In other words, there are no multiple intelligences, but rather, it is
argued, multiple applications of the same multifaceted intelligence.
Whereas undoubtedly there are large individual differences in
subject-specific abilities, the evidence which conflicts with a multiple
intelligences interpretation of brain function is that these subject-specific
abilities are positively correlated, as shown by Carroll (1993) in his large
meta-analysis. Such a pervasive correlation between different abilities is
conceptualised as general intelligence, g. The existence of g not only suggests
that the same brain modules are likely to be involved in many different
abilities, but that their functional connectivity is of paramount importance.
In fact, the main thrust of research in cognitive neuroscience in the next
decade will be the mapping of functional connectivity, that is, how functional modules transfer information, anatomically, bio-chemically,
bioelectrically, rhythmically, synchronistically, and so on. A recent study
along these lines sought evidence for neural correlates of general intelligence
– i.e., where and how does the brain generate measures of general intelligence?
Duncan et al. (2000) found a common brain involvement, in the frontal cortex of
adult subjects, on both spatial and verbal IQ tests. A further meta-analysis of
20 neuroimaging studies involving language, logic, mathematics and memory
showed that the same frontal cortical areas were involved (Duncan 2001). It
seems unlikely that these intelligences are independent if the same part of the
brain is common to all. This point is elaborated in a recent critique of MI (Waterhouse 2006, 213), which concludes that the human brain is unlikely to function via Gardner’s multiple intelligences: ‘Taken together the evidence for the inter-correlations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping “what is it?” and “where is it?” neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner’s intelligences could operate “via a different set of neural mechanisms” [as Gardner claims].’ To explain how those same
pathways support high-level general intelligence across so many different
cognitive areas, Duncan (2001, 824) suggested that: ‘neurons in selected
frontal regions adapt their properties to code information of relevance to
current behaviour, pruning away . . . all that is currently task-irrelevant.’
So, underlying our specific abilities is adaptive brain
functioning. In support of this idea of an adapting brain, Dehaene and his
colleagues have proposed a dynamic model of brain functioning in which these
frontal adaptive neurons coordinate the myriad inputs from our perceptual
modules from all over the brain, and continually assess the relative importance
of these inputs such that from time to time, a thought becomes conscious; it
literally ‘comes to mind’ (Dehaene, Kerszberg, and Changeux 1998). It could be
predicted, then, that deliberate attempts to restrict intelligence within classrooms
according to MI theory would not promote children’s learning, and it could be
noted in passing that one of the ‘independent consultants’ who advocates
brain-based learning strategies acknowledges teachers’ frustration with the
lack of long-term impact of applying MI theory (Beere 2006).
10% Usage Theory
None of the above implies that g is all that there is to
intelligence – quite the opposite. With its population age-norming, IQ might be
a convenient surrogate for intelligence in the laboratory, but not even the
most resolute empiricist would claim that IQ captures all of the variance in
cognitive abilities. Rather, intelligence in all its manifestations illustrates
the underlying dynamic complexity of its generative neural processes, with
emphasis on ‘dynamic’. There is overwhelming evidence that the brain is
perpetually busy, and that even when any of our brain cells are not involved in
processing some information, they still fire randomly. As an organ which has
evolved not to know what is going to happen next, such constant activity keeps
our brain in a state of readiness. Consequently, the neuromyth that ‘We only
use 10% of our brains’ could not be more in error.
The absurdity has been pointed out by Beyerstein (2004):
evolution does not produce excess, much less 90% excess. In the millions of
studies of the brain, no one has ever found an unused portion of the brain! It
is unfortunate that teachers are constantly subjected to such pervasive
nonsense about the brain, so it is worth pausing to investigate the various
sources of the 10% myth (Nyhus and Sobel 2003). It
seems to have begun with an Italian neuro-surgeon c.1890 who removed scoops of
brains of psychiatric patients to see if there were any differences in their
reported behaviours. The myth received an unexpected boost c.1920 during a
radio interview with Albert Einstein, when the physicist used the 10% figure to
implore us to think more. The myth received its widest circulation before the
Second World War when some American advertisers of home-help manuals
re-invented the 10% figure in order to convince customers that they were not
very smart. Odd, then, that it has been so enthusiastically adopted by
wishful-thinking educationists at the end of the twentieth century. It would be
nice if the brains of our students had all this spare educable capacity. To be
sure, the plasticity of young (and even older) brains should never be
underestimated. But what plasticity requires is a dynamically engaged brain,
with all neurons firing. To put it bluntly, if you are only using 10% of your
brain, then you are in a vegetative state so close to death that you should
hope (not that you could) that your relatives will pull out the plug of the
life support machine!
Left- and right-brained thinking
Another pervasive
example of over-simplification has been the misinterpretation of laterality
studies to produce so-called ‘left- and right-brained thinking’.
Historically, the original studies were of split-brain
patients: patients who had the major communication tract between the two brain
hemispheres, the corpus callosum, surgically severed in an attempt to reduce
life-threatening epilepsy. It was found that the separate hemispheres of these
patients could separately process different types of information, but only the
left hemisphere processing was reported by the patients. Unfortunately, the
caveat that the researchers who carried out these studies back in the 1970s did
emphasise – i.e., that these patients had abnormal brains – was largely
ignored. For normal people, as Singh and O’Boyle (2004, 671) point out: ‘the brain does not consist of two hemispheres operating in isolation. In fact, the different cognitive specialties of the LH and RH are so well integrated that they seldom cause significant processing conflicts . . . hemispheric specialisation . . . consists of a dynamic interactive partnership between the two.’ Creative thinking, in particular, requires the interaction of both hemispheric specialists; neither can operate in isolation from the other:
‘Since the right hemisphere and the left hemisphere are massively interconnected (through the corpus callosum), it is not only possible, but also highly likely, that the creative person can iterate back and forth between these specialized modes to arrive at a practical solution to a real problem. If the right hemisphere were somehow disconnected from the left and confined to its own specialized thinking modes, it might be relegated to only “soft” fantasy solutions, pipe dreams or weird ideas that would be difficult, if not impossible, to fully implement in the real world. The left brain helps keep the right brain on track.’ (Herrmann 1998, http://www.sciam.com) This, then, has
important implications for the misguided ‘right-brain’ promotion of creative
thinking in the school classroom. Goswami (2004) draws attention to a recent
OECD report in which left brain/right brain learning is the most troubling of
several neuromyths – a sort of anti-intellectual virus which spreads among lay
people as misinformation about what neuroscience can offer education.
This is not to say that there isn’t abundant good evidence
that much brain functioning is modular, and that many higher cognitive
functions, such as language production, are critically reliant on modules which
are usually found in one or other hemisphere, such as Broca’s Area (BA),
usually found in the left frontal cortex. But there are notable
differences between individuals as to where these modules are located. In about
5% of right-handed males, BA is found in the right frontal cortex, and in a higher proportion of females the principal function of BA, language production, is found in both the left and right frontal cortices. In left-handed people, only
60% have BA functions on the left, with the rest having their language
production involving frontal areas on both sides or on the right (Kolb and
Whishaw 1990). An implication of this for neuroscience research is that
practically all subjects in neuroimaging studies are screened for extreme
right-handedness – it is a way of maximising the probability that the group map
has contributions from all subjects (that is, their functional modules involved
in the study will be in much the same place in different individuals’ brains).
Consequently, with a nice circularity, the data which show that language production is on the left come almost exclusively from subjects who have been chosen to have their language production areas on the left. Thus
the left- and right-brain thinking myth seems to have arisen from misapplying
lab studies which show that the semantic system is left-lateralised (language
information processing in the left hemisphere; graphic and emotional
information processing in the right hemisphere) by ignoring several important caveats.
First, the left-lateralisation is in fact a statistically significant bias, not
an absolute. Even in left-lateralised individuals, language processing does
stimulate some right hemisphere activation. Second, the subjects for such
studies are extremely right-handed. As language researchers are at pains to
point out: ‘It is dangerous to suppose that language processing only occurs in
the left hemisphere of all people’ (Thierry, Giraud, and Price 2003, 506). The
largest interconnection to transmit information in the brain is the corpus
callosum, the thick band of fibres which connects the two hemispheres. It seems
that the left and right sides of our brains cannot help but pass all
information between them. In fact, there is some evidence that constrictions in
the corpus callosum could be predictive of deficiencies in reading abilities
(Fine 2005), which obviously could not occur if language processing was an
exclusively left hemisphere activity. It would be neat if all cognitive
functioning was simply lateralised, and towards such a schema some commentators
have suggested that perhaps there are stylistic differences between left and
right hemispheric functions, with the left mediating detail, while the holistic
right focuses on the bigger picture. For example, using EEG to describe the
time course of activations identified by fMRI, Jung-Beeman et al. (2004) found
that the insight or ‘aha’ moment of problem solution elicits increased neural
activity in the right hemisphere’s temporal lobe. Jung-Beeman et al. (2004)
suggest that this right hemisphere function facilitates a coarse-level
integration of information from distant relational sources, in contrast to the
finer-level information processing characteristic of its left hemisphere
homologue. However, researchers in music cognition disagree (Peretz 2003). Even
regarding the left hemisphere (metaphorically if not literally) as a verbal
processor, music, as non-verbal information par excellence, is not exclusively
processed in the right, but in both hemispheres (Peretz 2003). Moreover,
neuroimaging studies have shown that the location and extent of various areas
of the brain involved with music perception and production shift and grow with
musical experience (Parsons 2003).
In fact, there is a strong evolutionary argument that music
plays a crucial role in promoting the growth of the inter-module connections
which underpin cognitive development in infants and young children (Cross
1999). Consequently, for the many reasons noted above, leading neuroscientists
have been calling on the neuroscience community to shift their interpretative
focus of brain function from modularisation to interaction. As Hellige (2000,
206) pleads: ‘Having learned so much about hemispheric differences . . . it is
now time to put the brain back together again.’ Or as Walsh and Pascual-Leone
(2003, 206) summarise: ‘Human brain function and behaviour seem best explained
on the basis of functional connectivity between brain structures rather than on
the basis of localization of a given function to a specific brain structure.’
VAK Learning styles
This emphasis on connectedness rather than separateness of
brain functions has important implications for education (Geake 2004).
The multi-sensory pedagogies, which experienced teachers know
to be effective, are supported by fMRI research. The work of Calvert, Campbell
and Brammer (2000), on imaging brain sites of cross-modal binding in human
subjects, seems relevant. Bimodal processing of congruent information has a
supra-additive effect (e.g., simultaneously seeing and hearing the same
information works better than first just seeing and then hearing it). These
findings are consistent with observed behaviour. Much good pedagogy in the
early years of schooling is based on coincident bimodal information processing,
especially sight and sound, or sight and speech, as demonstrated by every early
years teacher pointing to the words of the story as she reads them aloud.
However, such ‘natural’ pedagogy is threatened by the promulgation of learning
styles. The notion that individual differences in academic abilities can be
partly attributed to individual learning styles has considerable intuitive
appeal if we are to judge by the number of learning style models or inventories
that have been devised – 170 at the last count, and rising (Coffield et al.
2004). The myriad ways that approaches to learning can be partitioned, labelled and measured seem to know no bounds. The disappointing outcome of all
of this endeavour is that, overall, the evidence consistently shows that
modifying a teaching approach to cater for differences in learning styles does
not result in any improvement in learning outcomes (Coffield et al. 2004).
Despite the lack of positive evidence, the education community has been swamped
by claims for a learning style model based on the sensory modalities: visual,
auditory and kinaesthetic (VAK) (Dunn, Dunn and Price 1984). The idea is that
children can be tested to ascertain which is their dominant learning style, V,
A or K, and then taught accordingly. Some schools have even gone so far as to
label children with V, A and K shirts, presumably because these purported
differences are no longer obvious in the classroom. The implicit assumption
here is that the information gained through one sensory modality is processed
in the brain to be learned independently from information gained through
another sensory modality. There is plenty of evidence from a plethora of
cross-modal investigations as to why such an assumption is wrong. What is
possibly more insidious is that focusing on one sensory modality flies in the
face of the brain’s natural interconnectivity. VAK might, if it has any effect
at all, be actually harming the academic prospects of the children so afflicted. A simple demonstration of the ineffectiveness of VAK as a model of
cognition comes from asking 5-year-olds to distinguish different sized groups
of dots where the groups are too large for counting (Gilmore, McCarthy, and
Spelke 2007). So long as the group sizes are not almost equal, young children
can do this quite reliably.
Now, what happens when one group of dots is replaced by the same number of sounds, played too rapidly for counting? There is no change in accuracy! Going
from a V versus V version of the task to a V versus A version makes no
difference to task performance. The reason is that input modalities in the
brain are interlinked: visual with auditory; visual with motor; motor with
auditory; visual with taste; and so on.
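This ratio-dependent, modality-independent comparison is usually modelled with Weber's law. The sketch below is a standard approximate-number-system model with a hypothetical Weber fraction, not the analysis from Gilmore et al.; its point is that predicted accuracy depends only on the ratio of the two quantities, so swapping dots for beeps changes nothing.

```python
import math

# Hypothetical Weber fraction, assumed shared by visual and auditory input.
W = 0.2

def p_correct(n1: int, n2: int, w: float = W) -> float:
    """Probability of picking the larger set under a standard ANS model:
    internal magnitudes are Gaussian with spread proportional to n."""
    spread = w * math.sqrt(n1 * n1 + n2 * n2)
    z = abs(n1 - n2) / spread
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Same ratio -> same predicted accuracy, whether both sets are visual
# (dots vs dots) or cross-modal (dots vs beeps).
print(round(p_correct(16, 8), 2))    # easy 2:1 ratio
print(round(p_correct(16, 14), 2))   # hard 8:7 ratio
```

Because only the ratio enters the model, 16 vs 8 and 32 vs 16 yield identical predictions, matching the finding that performance survives the switch from a V-versus-V to a V-versus-A version of the task.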
There are well-adapted evolutionary reasons for this. Out on
the savannah as a pre-hominid hunter-gatherer, coordinating sight and sound
makes all the difference between detecting dinner and being dinner. As
Sherrington (1938, 217) noted: ‘The naive observer would have expected evolution in its course to have supplied us with more various sense organs for ampler perception of the world . . . Not new senses but better liaison between the old senses is what the developing nervous system has in this respect stood for.’ To
emphasise the cross-modal nature of sensory experience, Kayser (2007) writes
that: ‘the brain sees with its ears and touch, and hears with its eyes.’
Moreover, as primates, we are predominantly processors of visual information.
This is true even for congenitally blind children who instantiate Braille not in
the kinaesthetic areas of their brains, but in those parts of their visual
cortices that sighted children dedicate to learning written language. Moreover,
unsighted people create the same mental spatial maps of their physical reality
as sighted people do (Kriegseis et al. in press). Obviously the information to
create spatial maps by blind people comes from auditory and tactile inputs, but
it gets used as though it were visual. Similarly, people who, after losing their hearing, receive a cochlear implant find that they are suddenly much more dependent on visual speech cues, such as those for segmentation and formants, to conduct conversation (Thomas and Pilling in press). Wright (2007) points out just how
interconnected our daily neural processes must be. Eating does not engage just
taste, but smell, tactile (inside the mouth), auditory and visual sensations.
Learning a language, and the practice of it, requires the coordinated use of
visual, auditory and kinaesthetic modalities, in addition to memory, emotion,
will, thinking and imagination: ‘To an anatomist this implies the need for an immense number of neural connections between many parts of the brain. In particular, there must be numerous links between the primary auditory cortex (in the temporal lobe), the primary proprioceptive-tactile cortex (in the parietal lobe) and the primary visual cortex (in the occipital lobe). There is indeed such a neural concourse, in the parieto-temporo-occipital “association” cortex in each cerebral hemisphere.’ (Wright 2007, 275) Input information is
abstracted to be processed and learnt, mostly unconsciously, through the
brain’s interconnectivity (Dehaene, Kerszberg, and Changeux 1998). Actually, we
don’t even create sensory perception in our sensory cortices: ‘For a long time it was thought that the primary sensory areas are the substrate of our perception. . . . these zones simply generate representational maps of the sensorial information . . . although these respond to stimuli, they are not responsible for . . . perceptions . . . Perceptual experience occurs in certain zones of the frontal lobes [where] neurons combine sensory information with memory information.’ (Trujillo 2006, M9) Literally following a VAK regime in
real classrooms would lead to all sorts of ridiculous paradoxes: what does a teacher do with the V and K ‘learners’ in a music lesson, the A and K ‘learners’ in an art lesson, or the V and A ‘learners’ in a craft practical lesson? The images of blindfolds and corks in mouths are all too reminiscent of
Tommy, the rock opera by The Who. As Sharp, Byrne and Bowker (in press)
elaborate, VAK trivialises the complexity of learning, and in doing so,
threatens the professionality of educators. Fortunately, many teachers have not
been taken in. Ironically, VAK has become, in the hands of practitioners, a
recipe for a mixed-modality pedagogy where lessons have explicit presentations
of material in V, A and K modes.
Teachers quickly observed that their pupils’ so-called
learning styles were not stable, that the expressions of V-, A- and K-ness
varied with the demands of the lessons, as they should (Geake 2006). As with
other learning-style inventories, research has shown that there is no
improvement of learning outcomes with VAK above teacher enthusiasm, where
‘attempts to focus on learning styles were wasted effort’ (Krätzig and Arbuthnott 2006). We might speculate in passing: why do VAK and other ‘learning styles’ seem so attractive? Perhaps two aspects of folk psychology, that we seem to learn differently from each other and that we have five senses, have combined to create a folk neuroscience in which the working of our brains directly reflects our folk psychology. Of course, if our brains were that simple, we wouldn’t be here today!