Thursday 5 September 2013

The Cognitive Styles Analysis (CSA)

Riding (1991a, 1991b, 1998a, 1998b) has developed a computerised assessment method called the Cognitive Styles Analysis (CSA). This is not a self-report measure, but presents cognitive tasks in such a way that it is not evident to the participant exactly what is being measured. The test items in the CSA for the holist-analytic dimension are all visual, and scoring is based on a comparison of speed of response (not accuracy) on a matching task (holist preference) and on an embedded figures task (analytic preference). The items for the verbal-imagery dimension are all verbal and are based on relative speed of response in categorising items as similar by virtue of their conceptual meaning (verbal preference) or their colour (imagery preference). The literacy demand of the verbal test is not high, as only single words are involved, but this has not been formally assessed. The instrument is suitable for use by adults and has been used in research studies with pupils as young as 9 years.
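How the two ratios are actually computed has never been fully published, but the logic can be sketched. The snippet below is a hypothetical reconstruction in Python: it assumes the score is simply the median response time on one subtest divided by the median on the complementary subtest, with invented timings; which subtest forms the numerator, and where the style bands fall, are Riding's unpublished decisions and are not shown here.

    from statistics import median

    def style_ratio(times_a, times_b):
        """Illustrative CSA-type score: median response time on one subtest
        divided by the median on the complementary subtest.
        (Hypothetical reconstruction -- the actual algorithm is unpublished.)"""
        return median(times_a) / median(times_b)

    # Holist-analytic dimension: matching task vs embedded figures task.
    matching_rt = [1.2, 1.4, 1.1, 1.3]      # seconds per item (invented)
    embedded_rt = [2.0, 1.8, 2.2, 1.9]
    print(f"holist-analytic ratio: {style_ratio(matching_rt, embedded_rt):.2f}")

    # Verbal-imagery dimension: conceptual judgements vs colour judgements.
    conceptual_rt = [1.5, 1.6, 1.4, 1.5]
    colour_rt     = [1.1, 1.0, 1.2, 1.1]
    print(f"verbal-imagery ratio:  {style_ratio(conceptual_rt, colour_rt):.2f}")

Because the score is a pure speed ratio, accuracy never enters it: a participant who responds quickly and carelessly on one subtest shifts their apparent style.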


No information about the reliability of the CSA has been published by Riding. Using a sample of 50 undergraduates, Peterson, Deary and Austin (2003a) report that the short-term test–retest reliability of the CSA verbal-imagery dimension is very low and not statistically significant (r=0.27), while that of the holist-analytic dimension is also unsatisfactory in psychometric terms (r=0.53, p<0.001). With 38 students who were retested on the CSA after 12 days, Redmond, Mullally and Parkinson (2002) reported a negative test–retest correlation for the verbal-imagery dimension (r=–0.21) and a result of r=0.56 for the holist-analytic dimension. These studies provide the only evidence of reliability to date, despite more than a decade of research with the instrument. Riding’s criticisms (2003a) of Peterson, Deary and Austin’s study have been more than adequately answered by that study’s authors (2003b).


As adequate test reliability has not been established, it is impossible to evaluate properly the many published studies in which construct, concurrent or predictive validity have been addressed. Riding (2003b) takes issue with this point, claiming that a test can be valid without being reliable. Yet he offers no reasons for suggesting that the CSA is valid when first administered, but not on later occasions. He claims that the CSA asks people to do simple cognitive tasks in a relaxed manner, so ensuring that they use their natural or ‘default’ styles. A counter-argument might be that people are often less relaxed in a new test situation, when they do not know how difficult the tasks will be. The unreliability of the CSA may be one of the reasons why correlations of the holist-analytic and verbal-imagery ratios with other measures have often been close to zero. Examples include Riding and Wigley’s (1997) study of the relationship between cognitive style and personality in FE students; the study by Sadler-Smith, Allinson and Hayes (2000) of the relationship between the holist-analytic dimension of the CSA and the intuition-analysis dimension of Allinson and Hayes’ Cognitive Style Index (CSI); and Sadler-Smith and Riding’s (1999) use of cognitive style to predict learning outcomes.
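The attenuation argument can be made explicit. In classical test theory, the observable correlation between two measures is capped at the square root of the product of their reliabilities, whatever the true relationship between the underlying constructs. A minimal worked example in Python, taking the test–retest figures reported above as stand-in reliability estimates and assuming, purely for illustration, a criterion measured with reliability 0.80:

    # Classical attenuation: observed r <= sqrt(r_xx * r_yy),
    # where r_xx and r_yy are the reliabilities of the two measures.
    def max_observable_r(rel_x, rel_y):
        return (rel_x * rel_y) ** 0.5

    # Verbal-imagery ratio (retest r = 0.27) vs a 0.80-reliable criterion:
    print(round(max_observable_r(0.27, 0.80), 2))   # 0.46

    # Holist-analytic ratio (retest r = 0.53) vs the same criterion:
    print(round(max_observable_r(0.53, 0.80), 2))   # 0.65

These figures are ceilings, reached only if the constructs overlap perfectly; with any realistic degree of true overlap, observed correlations near zero are exactly what unreliability of this order predicts.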


Despite the appeal of simplicity, there are unresolved conceptual issues with Riding’s model and serious problems with its accompanying test, the CSA. Riding and Cheema (1991) argue that their holist-analytic dimension can be identified under different descriptors in many other typologies. However, being relatively quick at recognising a rectangle hidden in a set of superimposed outlines is not necessarily linked with valuing conceptual or verbal accuracy and detail, being a deep learner or having a preference for convergent or step-wise reasoning. Analysis can mean different things at perceptual and conceptual levels and in different domains, such as the cognitive and the affective. In his taxonomy of educational objectives, Bloom (1956) views analysis as a simpler process than synthesis (which bears some resemblance to holistic thinking). Riding takes a rather different view, seeing holists as field-dependent and impulsive, unwilling to engage in complex analytical tasks. Another point of difference is that where Riding places analysis and synthesis as polar opposites, Bloom sees them as interdependent processes. We simply do not know enough about the interaction and interdependence of analytic and holistic thinking in different contexts to claim that they are opposites.


There are also conceptual problems with the verbaliser-imager dimension. Few tasks in everyday life make exclusive demands on either verbal or non-verbal processing, which are more often interdependent or integrated aspects of thinking. While there is convincing evidence from factor-analytic studies of cognitive ability for individual differences in broad and specific verbal and spatial abilities (eg Carroll, 1993), this does not prove that people who are very competent verbally (or spatially) tend consistently to avoid other forms of thinking. Further problems arise over the extent to which styles are fixed. Riding’s definition of cognitive styles refers to both preferred and habitual processes, but he sees ‘default’ cognitive styles as incapable of modification. Here he differs from other researchers such as Vermunt (1996) and Antonietti (1999), both of whom emphasise the role of metacognition and of metacognitive training in modifying learning styles. For Riding, metacognition includes an awareness of cognitive styles and facilitates the development of a repertoire of learning strategies (not styles). Riding seems to consider the ‘default’ position constant rather than variable, and he has not designed studies to look at the extent to which learners are capable of moving up and down cognitive style dimensions in accordance with task demands and motivation. Although he cautions against the dangers of labelling learners, he does not avoid this in his own writing.


Turning now to the CSA instrument, there are problems in basing the assessment of cognitive style on only one or two tasks and in using an exclusively verbal or non-verbal form of presentation for each dimension. The onus must be on the test constructor to show that consistent results are obtainable with different types of task and with both verbal and non-verbal presentation. There are also serious problems in basing the assessment on a ratio measure, as two sources of unreliability are present instead of one (the simulation below makes this concrete). It is possible that the conceptual issues raised above can be resolved, and that the construct validity of Riding’s model of cognitive styles may eventually prove more robust than the reliability of the CSA would suggest. As Riding and Cheema (1991) argue, similar dimensions or categories do appear in many other typologies. However, as things stand, our impression is that Riding has cast his net too wide and has not succeeded in arriving at a classification of learning styles that is consistent across tasks, consistent across levels of task difficulty and complexity, and independent of motivational and situational factors.
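The toy model below (all parameter values invented for illustration) assumes that speed on each subtest combines a general-speed component, shared across subtests and the very thing the ratio is designed to cancel, with a small style-specific component, and that each administration adds fresh measurement error. Under those assumptions each subtest retests respectably on its own, while the ratio of the two does not:

    import random

    random.seed(1)
    N = 10_000

    # Stable traits: shared general speed plus a small subtest-specific part.
    g      = [random.gauss(3.0, 0.4)  for _ in range(N)]   # general speed
    spec_a = [random.gauss(0.0, 0.15) for _ in range(N)]   # subtest A specific
    spec_b = [random.gauss(0.0, 0.15) for _ in range(N)]   # subtest B specific

    def administer(spec):
        """One test occasion: trait plus fresh measurement error."""
        return [gi + si + random.gauss(0.0, 0.25) for gi, si in zip(g, spec)]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    a1, a2 = administer(spec_a), administer(spec_a)   # subtest A, two occasions
    b1, b2 = administer(spec_b), administer(spec_b)   # subtest B, two occasions

    print("subtest A retest r:", round(pearson(a1, a2), 2))   # ~0.75
    print("subtest B retest r:", round(pearson(b1, b2), 2))   # ~0.75
    print("ratio A/B retest r:", round(pearson(
        [x / y for x, y in zip(a1, b1)],
        [x / y for x, y in zip(a2, b2)]), 2))                 # ~0.3

The shared true variance cancels out of the ratio while the two error terms remain, which is the classic difference-score problem and offers one plausible account of the retest figures reported earlier.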

1 comment:

  1. Interesting to see my conclusions are being supported by other researchers many years later. I still find it incredible that Riding never investigated the psychometric properties of the CSA. To claim that a test can be valid and not reliable is quite absurd.
