Scale validation in applied linguistics: Methodological trends and challenges
Scale validation plays a critical role in ensuring the reliability and applicability of measurement instruments in applied linguistics research. This review examines scale validation studies published in Research Methods in Applied Linguistics, with the aim of identifying methodological trends and areas in need of refinement. Fourteen studies were analyzed with attention to their validation frameworks, statistical techniques, and approaches to reliability and external validation. Findings reveal a growing use of Exploratory Structural Equation Modeling (ESEM) and bifactor modeling to support construct validation of multidimensional scales, while Confirmatory Factor Analysis (CFA) remains prevalent for theory-driven, unidimensional constructs. However, external validation—particularly predictive validity and independent-sample cross-validation—remains limited, reducing the generalizability of many validated instruments in domains such as language assessment and second language acquisition. Although reliability assessment is evolving through the use of Rasch modeling and Generalizability Theory, Cronbach’s alpha continues to dominate, despite its known limitations with complex, multidimensional constructs. Given the practical constraints faced by researchers, the review advocates a flexible, goal-aligned approach to validation that emphasizes foundational steps such as construct conceptualization, pre-validation, and structural modeling. Enhancing predictive validity, incorporating independent-sample cross-validation, and improving methodological transparency can further strengthen the rigor and practical relevance of scale development. While the scope is limited to a single journal, this review offers a roadmap for improving validation practices in applied linguistics and contributes to more robust research in language assessment, teacher education, and second language acquisition.
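As a brief illustration of the reliability index discussed above, the following is a minimal sketch (not drawn from the reviewed studies; the data are hypothetical) of how Cronbach's alpha is computed from an item-response matrix, using the standard formula based on item variances and total-score variance:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 2))  # → 0.93
```

Because the formula depends only on inter-item covariation relative to total-score variance, a single alpha value can mask a multidimensional structure, which is one reason the review notes its limitations for complex constructs relative to approaches such as Rasch modeling or Generalizability Theory.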