Investigating measurement invariance in computer-based personality testing: The impact of using anchor items on effect size indices

Iris J. L. Egberink, Rob R. Meijer, Jorge N. Tendeiro

Research output: Contribution to journal › Article › Academic › peer-review

5 Citations (Scopus)
167 Downloads (Pure)

Abstract

A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often reported only in terms of statistical significance, and researchers have proposed different methods to empirically select anchor items. It is unclear, however, how many anchor items should be selected and which method provides the best results with empirical data. In the present study, we examined the impact of using different numbers of anchor items on effect size indices when investigating measurement invariance in a personality questionnaire across two different assessment situations. Results suggested that the effect size indices were not influenced by the number of anchor items used. The values were comparable across different numbers of anchor items and were small, which indicates that the effect of differential functioning at the item and test level is very small, if not negligible. We discuss the practical implications and the use of anchor items and effect size indices in practice.
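To make the kind of effect size index referred to in the abstract concrete, the following is a minimal sketch (not the authors' code) of item- and test-level signed differences under a dichotomous 2PL IRT model. All item parameters, the focal-group ability sample, and the function names are hypothetical; the original study concerned a polytomous personality questionnaire, so this only illustrates the general idea of comparing expected scores under reference- versus focal-group parameter estimates.

import numpy as np

def p_endorse(theta, a, b):
    # 2PL probability of endorsing an item given trait level theta
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item parameters estimated in each group; in practice these
# come from an IRT calibration in which anchor items link the group metrics.
a_ref = np.array([1.2, 0.9, 1.5]); b_ref = np.array([-0.3, 0.1, 0.6])
a_foc = np.array([1.1, 0.9, 1.5]); b_foc = np.array([ 0.2, 0.1, 0.6])

# Hypothetical trait estimates for the focal group
rng = np.random.default_rng(0)
theta_focal = rng.normal(0.0, 1.0, size=1000)

# Item-level signed difference: mean difference in expected item scores for
# the focal group when scored with focal- versus reference-group parameters.
item_diff = np.array([
    np.mean(p_endorse(theta_focal, af, bf) - p_endorse(theta_focal, ar, br))
    for af, bf, ar, br in zip(a_foc, b_foc, a_ref, b_ref)
])

# Test-level signed difference: sum of the item-level differences.
test_diff = item_diff.sum()
print("Item-level signed differences:", np.round(item_diff, 3))
print("Test-level signed difference:", round(test_diff, 3))

Values near zero at both levels would correspond to the "very small, if not negligible" differential functioning reported in the abstract.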
Original language: English
Pages (from-to): 126-145
Number of pages: 20
Journal: Educational and Psychological Measurement
Volume: 75
Issue number: 1
Early online date: 3-Feb-2014
DOIs
Publication status: Published - Feb-2015

Keywords

  • LIKELIHOOD RATIO TEST
  • ITEM FUNCTIONING DETECTION
  • RESPONSE THEORY
  • SELECTION
  • SCALES

