Posted on 2017-02-17, 02:54. Authored by Jacobs, Kate Erin.
Traditional performance-based measures of cognitive ability provide information essential to the development of effective interventions aimed at ameliorating presenting referral concerns. While recent advances in cognitive theory afforded by the Cattell-Horn-Carroll (CHC) model mean that we now have a greater understanding of the abilities that contribute to achievement in specific domains, deciding which of the numerous CHC abilities should form the focus of an assessment can be a daunting task. Economies could be gained if diagnostic hypotheses formed from intake information could be pretested, allowing for selectively constructed cognitive assessments that most effectively and efficiently address referral concerns. The notion of obtaining valid self-reports of cognitive ability was therefore revisited as a method for screening which abilities most critically require formal assessment. The previous lack of substantive, valid ability models was considered one significant reason why earlier attempts at obtaining valid self-reports of cognitive functioning had been problematic. Consequently, this research took advantage of recent theoretical advances by basing the development of a pilot measure on three extensively validated and defined abilities from CHC theory: Fluid reasoning (Gf), Comprehension-knowledge (Gc), and Visual processing (Gv).
This research began with a meta-analytic review of 40 studies that investigated the validity of self-reports of cognitive ability, providing suggestions regarding the measurement conditions most conducive to valid assessment of cognitive ability by self-report. By applying these meta-analytic insights in combination with the theoretical framework of CHC theory, the Self-Report Measure of Cognitive Abilities (SRMCA) was developed and its content validity established. A priori expectations of a three-factor solution were supported in Study 1 (N = 230) using exploratory factor analysis, and replicated in Study 2 (N = 214) using confirmatory factor analysis. The external validity of the developed measure was also investigated in Study 2 by obtaining performance measures of Gf, Gc, and Gv, as well as a measure of self-deceptive enhancement (SDE), a form of socially desirable responding. Additionally, participants provided single-item self-estimates of Gf, Gc, and Gv, the validity of which was compared with that of the SRMCA subscales.
Multitrait-multimethod analysis found that, while single-item self-estimates displayed on average greater convergent validity (attributed to the differing instructions between this measure and the SRMCA), the SRMCA demonstrated greater discriminant validity, which was attributed to its multi-item format. Furthermore, the effects of SDE were found to depend on the type of cognitive ability being self-evaluated, rather than the type of measure used, with self-ratings of Gc appearing immune. The finding that participants could effectively recognise differences between distinct cognitive abilities when completing the SRMCA supports the importance of using a strong theoretical model that contains clear and well-validated factors when developing a self-report measure of cognitive ability. Results also highlighted the importance of considering the type of ability being evaluated, in addition to the response format used.
This dissertation provides renewed optimism regarding the potential to develop a self-report measure of cognitive ability that validly predicts traditional performance measures of intelligence. It is recommended that future research give due consideration to established psychometric principles, as well as the application of robust theoretical models.
Awards: Winner of the Mollie Holman Doctoral Medal for Excellence, Faculty of Education, 2013.