Quality of courses evaluated by 'predictions' rather than opinions: Fewer respondents needed for similar results

    Research output: Contribution to journal › Article › Academic › Peer-review

    6 Citations (Scopus)

    Abstract

    Background: A well-known problem with student surveys is low response rates. Experience with predicting electoral outcomes, which requires much smaller sample sizes, inspired us to adopt a similar approach to course evaluation. We expected that asking respondents to estimate the average opinions of their peers would require fewer respondents for comparable outcomes than asking for their own opinions.

    Methods: Two course evaluation studies were performed among successive first-year medical student cohorts (N=380 and N=450, respectively). In Study 1, half the cohort gave their opinions on nine questions, while the other half predicted the average outcomes; a prize was offered for the three best predictions (motivational remedy). In Study 2, half the cohort gave opinions, a quarter made predictions without a prize, and a quarter made predictions with the previous year's results as prior knowledge (cognitive remedy). The number of respondents required for stable outcomes was determined by an iterative process. Differences between the numbers of respondents required and between average scores were analysed with ANOVA.

    Results: In both studies, the prediction conditions required significantly fewer respondents (p

    Conclusion: Problems with response rates can be reduced by asking respondents to predict evaluation outcomes rather than to give their own opinions.

    Original language: English
    Pages (from-to): 851-856
    Number of pages: 6
    Journal: Medical Teacher
    Volume: 32
    Issue number: 10
    DOIs
    Publication status: Published - 2010

    Keywords

    • Curriculum
    • Education, Medical
    • Humans
    • Netherlands
    • Program Evaluation
    • Quality Control
    • Questionnaires
    • Sample Size
    • Students, Medical
