Randomize it

by Dane Christian Joseph

Published in Journal of Effective Teaching in Higher Education by University of North Carolina Wilmington - The Centers for Teaching Excellence and Faculty Leadership.

2019, pp. 80-92

Abstract

Multiple-choice testing is a staple within the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers utilize a variety of validated and heuristic-driven item-writing guidelines. One such guideline that has been given recent attention is to randomize the position of the correct answer throughout the entire answer key. Doing this theoretically limits the number of correct guesses that test-takers can make and thus reduces the amount of construct-irrelevant variance in test score interpretations. This study empirically tested the strategy of randomizing the answer key. Specifically, a factorial ANOVA was conducted to examine differences in General Biology classroom multiple-choice test scores by the interaction between the method for varying the correct answer's position and student ability. Although no statistically significant differences were found, the paper argues that the guideline is nevertheless ethically substantiated.
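The guideline and analysis the abstract describes can be made concrete with a short sketch. This is not code from the article: the helper randomize_answer_key, the column names (score, method, ability), and the toy data are hypothetical, shown only to illustrate randomizing the correct answer's position and the kind of two-factor ANOVA with an interaction term that the abstract mentions.

```python
# Illustrative sketch only -- not the author's code or data.
import random

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols


def randomize_answer_key(items, seed=None):
    """Place each item's correct answer at a uniformly random position
    among its response options, returning the shuffled options and key."""
    rng = random.Random(seed)
    randomized = []
    for stem, correct, distractors in items:
        options = [correct] + list(distractors)
        rng.shuffle(options)                     # random correct-answer position
        key = "ABCDE"[options.index(correct)]    # letter of the correct option
        randomized.append({"stem": stem, "options": options, "key": key})
    return randomized


# Hypothetical per-student scores, the method used to vary the correct
# answer's position, and an ability grouping; a 2 x 2 factorial ANOVA
# with an interaction term, in the spirit of the abstract's design.
df = pd.DataFrame({
    "score":   [78, 82, 91, 67, 88, 74, 85, 90],
    "method":  ["randomized", "randomized", "fixed", "fixed"] * 2,
    "ability": ["high", "low"] * 4,
})
model = ols("score ~ C(method) * C(ability)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```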

Archived Files and Locations

application/pdf, 175.5 kB
jethe.org (publisher)
web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2019-04-17
Journal Metadata
Open Access Publication
In DOAJ
Not in Keepers Registry
ISSN-L: 2578-7608