Randomize it
by Dane Christian Joseph
2019, pp. 80-92
Abstract
Multiple-choice testing is a staple of the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers draw on a variety of validated and heuristic-driven item-writing guidelines. One guideline that has received recent attention is to randomize the position of the correct answer throughout the entire answer key. Doing so theoretically limits the number of correct guesses that test-takers can make and thus reduces the amount of construct-irrelevant variance in test-score interpretations. This study empirically tested the strategy of randomizing the answer key. Specifically, a factorial ANOVA was conducted to examine differences in General Biology classroom multiple-choice test scores by the interaction of the method for varying the correct answer's position and student ability. Although no statistically significant differences were found, the paper argues that the guideline is nevertheless ethically substantiated.
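The guideline the study tests, shuffling each item's options so the correct answer's position is uniformly random across the key, can be sketched in a few lines of Python. The item format, field names, and the convention that the correct option is listed first are illustrative assumptions, not the paper's actual materials.

```python
import random

def randomize_answer_key(questions, seed=None):
    """Shuffle each question's options so the correct option's position
    varies randomly across the answer key.

    Assumed convention: each question is a dict with a "stem" and an
    "options" list whose FIRST entry is the correct answer. Returns new
    dicts recording the shuffled options and the correct option's index.
    """
    rng = random.Random(seed)  # seedable for reproducible test forms
    randomized = []
    for q in questions:
        options = list(q["options"])
        correct = options[0]          # remember the correct option
        rng.shuffle(options)          # randomize its position
        randomized.append({
            "stem": q["stem"],
            "options": options,
            "answer_index": options.index(correct),
        })
    return randomized

# Hypothetical three-item bank; the correct option is listed first.
bank = [
    {"stem": "Q1", "options": ["mitochondrion", "ribosome", "nucleus", "vacuole"]},
    {"stem": "Q2", "options": ["osmosis", "diffusion", "active transport", "lysis"]},
    {"stem": "Q3", "options": ["ATP", "ADP", "NADH", "FADH2"]},
]
key = [item["answer_index"] for item in randomize_answer_key(bank, seed=42)]
```

In contrast to heuristics such as balancing the key by hand or always avoiding a given position, per-item shuffling gives a test-wise guesser no positional pattern to exploit.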
Archived Files and Locations
application/pdf, 175.5 kB — jethe.org (publisher); web.archive.org (webarchive)
Journal article; stage: published; date: 2019-04-17