The critics of learning styles can be divided into two main camps. First, there are those who accept the basic assumptions of the discipline (e.g. the positivist methodology and the individualistic approach), but who nevertheless claim that certain models, or certain features within a particular model, do not meet the criteria of that discipline. A second group of critics, however, adopts an altogether more oppositional stand: it does not accept the basic premises on which this body of research, its theories, findings and implications for teaching have been built. As all the other sections of this report are devoted to a rigorous examination of 13 models of learning styles within the parameters set by the discipline itself, this sub-section will briefly explain the central objections raised by those hostile to the learning styles camp, who mutter at conferences in the informal breaks between presentations, who confide their reservations in private, but who rarely publish their disagreement. We wish to bring this semi-public critique out into the open.
The opponents, who are mainly those who espouse qualitative rather than quantitative research methods, dispute the objectivity of the test scores derived from the instruments. They argue, for example, that the learning style theorists claim to ‘measure’ the learning preferences of students. But these ‘measurements’ are derived from the subjective judgement which students make about themselves in response to the test items when they ‘report on themselves’. These are not objective measurements to be compared with, say, those which can be made of the height or weight of students, and yet the statistics treat both sets of measures as if they were identical. In other words, no matter how sophisticated the subsequent statistical treatments of these subjective scores are, they rest on shaky and insecure foundations. No wonder, say the sceptics, that learning style researchers, even within the criteria laid down by their discipline, have difficulty establishing reliability, never mind validity. Respondents are also encouraged to give the first answer which occurs to them. But the first response may not be the most accurate and is unlikely to be the most considered; evidence is needed to back the contention that the first response is always the one with which psychologists and practitioners should work.
The detractors also have reservations about some test items and cannot take others seriously. They point, for example, to item 65 in Vermunt’s ILS, which reads: ‘The only aim of my studies is to enrich myself.’ The problem may be one of translation from the Dutch, but in English the item could refer to either intellectual or financial enrichment and is therefore ambiguous. Or they single out the item in Entwistle’s ASSIST which reads: ‘When I look back, I sometimes wonder why I ever decided to come here.’ Doesn’t everyone think this at some stage in an undergraduate course? Others quote from the Dunn, Dunn and Price PEPS instrument, the final item of which is ‘I often wear a sweater or jacket indoors’. The answers from middle-class aesthetes in London, who keep their heating low to save energy, are treated in exactly the same way as those from the poor in Surgut in Siberia, who need to wear both sweaters and jackets indoors to keep themselves from freezing to death. What, ask the critics, has this got to do with learning, and what sense does it make to ignore the socio-economic, cultural and even geographic context of the learner?
Those who simply wish to send up the Dunn, Dunn and Price LSI for 6–18 year olds reveal that it contains such items as: ‘I like to do things with adults’; ‘I like to feel what I learn inside of me’; and ‘It is easy for me to remember what I learn when I feel it inside me.’ It is no surprise that some psychologists argue that criticism should not be directed at individual items and that one or two poor items out of 100 do not vitiate the whole instrument. Our response is that if even a few items are risible, then the instrument as a whole may come to be treated with scorn.
Other opponents object to the commercialisation of some of the leading tests, whose authors, when rebutting criticism, are protecting more than their academic reputations. Rita Dunn, for example, insists that it is easy to implement her 22-element model, but also that it is necessary to be trained by her and her husband in a New York hotel. The training course in July 2003 cost $950 per person and lasted for 7 days, at a further outlay of $1384 for accommodation. The cost of training all 400,000 teachers in England in the Dunn methodology would clearly be expensive for the government, but lucrative for the Dunns.
Some opponents question what they judge to be the unjustified prominence now accorded to learning styles by many practitioners. Surely, these academics argue, learning styles are only one of a host of influences on learning and are unlikely to be the most significant? They go further by requesting an answer to a question which they pose in the terms used by the learning style developers, namely: ‘What percentage of the variance in test scores is attributable to learning styles?’ The only direct answer to that question which we have found in the literature comes from Furnham, Jackson and Miller (1999), who studied the relationship between, on the one hand, personality (Eysenck’s Personality Inventory) and learning style (Honey and Mumford’s LSQ) and, on the other, ratings of the actual performance and development potential of more than 200 telephone sales staff: ‘the percentage of variance explained by personality and learning styles together was only about 8%’ (1999, 1120). The critics suggest that it is perhaps time the learning style experts paid some attention to the factors responsible for the other 92%.
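For readers unfamiliar with the statistical shorthand, ‘percentage of variance explained’ conventionally refers to the coefficient of determination obtained when an outcome (here, the performance ratings) is regressed on the predictor scores. The formula below is a general statement of that definition, not a reconstruction of Furnham, Jackson and Miller’s own model:

\[
R^2 \;=\; 1 - \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}
\]

where $y_i$ is the observed rating for respondent $i$, $\hat{y}_i$ the rating predicted from the personality and learning style scores, and $\bar{y}$ the mean rating. On this reading, an $R^2$ of about 0.08 corresponds to the ‘about 8%’ quoted above, leaving roughly 92% of the variation in performance unaccounted for by personality and learning style combined.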
Others seek to disparage the achievements of research into learning styles by belittling what they call the rather simple conclusions which emanate from the increasingly elaborate statistical treatment of the test scores. Their argument can be summarised and presented as follows:
For more than 40 years, hundreds of thousands of students, managers and employees have filled in learning style inventories, their scores have been subjected to factor analyses of increasing complexity, and numerous learning styles have been identified; and what are the conclusions that stem from such intensive labour? We are informed that the same teaching method does not work for all learners, that learners learn in different ways and that teachers should employ a variety of methods of teaching and assessment. Comenius knew that and more in seventeenth-century Prague, and he did not need a series of large research grants to help him find it out.
This is, of course, high-flying hyperbole, but we leave readers to judge the accuracy of this assessment.