Entwistle and his colleagues have spent almost 30 years refining their inventories to improve their validity and reliability and to arrive at items with reasonable predictive validity. They acknowledge the tendency for detailed, continuous refinement to make technical constructs less credible and harder for researchers outside educational psychology to use. They have therefore supplemented their analysis of approaches to learning with data from qualitative studies to explore the consistency and variability of learning approaches within specific contexts (see McCune and Entwistle 2000; Entwistle and Walker 2000). In this respect, their methodology and the data their studies have produced offer a rich, authentic account of learning in higher education.
However, one feature of a positivist methodology, which aims for precise measures of psychometric traits, is that items proliferate in the attempt to capture the nuances of approaches to learning. There are other limitations to quantitative measures of approaches to learning. For example, apparently robust classifications of meaning and reproduction orientations in a questionnaire prove less valid when interviews are used with the same students. Richardson (1997) argued that the interviews by Marton and Säljö show deep and surface approaches either as different categories or forms of understanding, or as a single bipolar dimension along which individuals may vary. In contrast, questionnaires operationalise these approaches as separate scales that turn out to be essentially orthogonal to each other; a student may therefore score high or low on both. According to Richardson, this difference highlights the need for researchers to differentiate between methods that aim to reveal average and general dispositions within a group and those that aim to explain the subtlety of individuals’ actions and motives.
Despite attempts to reflect the complexity of environmental factors affecting students’ approaches to learning and studying, the model does not discuss the impact of broader factors such as class, race and gender. Although the model takes some account of intensifying political and institutional pressures in higher education, such as quality assurance and funding, sociological influences on participation and attitudes to learning are not encompassed by Entwistle’s model.
There is also confusion over the theoretical basis for constructs in the ASI and ASSIST, and over their subsequent interpretation in external evaluations. The constructs derive from two contrasting research traditions: information processing in cognitive psychology, and qualitative interpretation of students’ approaches to learning. Outside the work of Entwistle and his colleagues, a proliferation of instruments and scales based on the original measure (the ASI) has led to the merging of constructs from both research traditions. Unless there is discussion of the original traditions from which the constructs came, the result is a growing lack of theoretical clarity in the field as a whole (Biggs 1993). Entwistle and his colleagues have themselves warned of this problem and provided an overview of the conceptions of learning, their history within the ‘approaches to learning’ model and how different inventories, such as those of Entwistle and Vermunt, relate to each other (Entwistle and McCune 2003).
There are a number of strengths in Entwistle’s work. For example, he has shown that ecological validity is essential to prevent a tendency to label and stereotype students when psychological theory is translated into the practice of non-specialists. The issue of ecological validity illuminates an important point for our review as a whole, namely that the expertise and knowledge of non-specialists are both context-specific and idiosyncratic, and this affects their ability to evaluate claims and ideas about a particular model of learning styles. High ecological validity makes a model or instrument much more accessible to non-specialists. Entwistle’s work has also aimed to simplify the diverse and sometimes contradictory factors in students’ approaches to studying and learning, and to offer a theoretical rationale for them. He has attempted to reconcile ideas about the stability of learning styles with the idea that approaches are idiosyncratic, fluctuating and affected by complex learning environments. His work highlights the need for researchers to relate analysis and theoretical constructs to the everyday experience of teachers and students, and to make their constructs accessible (see also Laurillard 1979).
These features, and the high output of work by Entwistle and his colleagues, have made the model credible with practitioners and staff developers within UK higher education. It has provided a model of learning with which academics who wish to be good teachers can engage; such a model is absent from teacher training for the further and adult education sectors, and for work-based trainers, where there is no influential theory of learning that could improve professional understanding and skills. Nevertheless, it is perhaps worth reiterating Haggis’s warning (2003) that the model runs the risk of becoming a rigid framework that excludes social models of learning.
Finally, although Entwistle and his colleagues argue that researchers need to build up case studies by observing students studying and interviewing them about their approaches, it is not clear how far ASSIST is usable by university lecturers. Entwistle’s concern to safeguard ideas about learning approaches from oversimplification in general use might be a reason for this. Nevertheless, notions such as ‘deep’, ‘surface’ and ‘strategic’ approaches to learning are now part of the everyday vocabulary of many teachers and the wealth of books on teaching techniques that draw directly on many of the concepts reviewed here is testimony to Entwistle’s continuing influence on pedagogy in higher education. To use a term coined by Entwistle himself, the model has proved to be ‘pedagogically fertile’ in generating new ideas about teaching and learning in higher education.
Strengths
1. Model aims to encompass approaches to learning, study strategies, intellectual development, skills and attitudes in higher education.
2. Assesses study/learning orientations, approaches to study and preferences for course organisation and instruction.
3. Internal and external evaluations suggest satisfactory reliability and internal consistency.
4. Extensive testing of construct validity by the authors; the validity of deep, surface and strategic approaches has been confirmed by external analysis.
5. Teachers and learners can share ideas about effective and ineffective strategies for learning. Course teams and managers can use approaches as a basis for redesigning instruction and assessment. Model can inform the redesign of learning milieux within departments and courses.
6. Has been influential in training courses and staff development in universities.
Weaknesses
1. The complexity of the developing model and its instruments makes them difficult for non-specialists to access.
2. There are dangers if the model is used by teachers without in-depth understanding of its underlying implications.
3. Many of the sub-scales are less reliable than the main scales. Test–retest reliability has not been demonstrated.
4. Construct and predictive validity have been challenged by external studies. There is an unquestioned preference for deep approaches, yet strategic and even surface approaches may be effective in some contexts. Relationships between approaches and attainment are rather weak.
5. The scope for manoeuvre in course design is variable outside the relative autonomy of higher education, especially in relation to assessment regimes. There is a large gap between using the instrument and transforming the pedagogic environment. As the terms ‘deep’ and ‘surface’ become popular, they become attached to individuals rather than behaviours, contrary to the authors’ intentions.
6. Not tested directly as a basis for pedagogical interventions.
A potentially useful model and instrument for some post-16 contexts beyond higher education, where it has already been successful, but significant development and testing will be needed.