This post originally appeared in the now defunct Central Texas Instructional Design blog on this date.
Back to distracters on multiple-choice assessments.
Distracters are simply the incorrect options on a multiple-choice assessment. To be useful, a distracter must be plausible and compelling, even seductive, according to the University of St. Thomas Academic Support (n.d.). Distracters should be able to seduce learners who are uncertain of the correct answer into making an incorrect choice. At the same time, a good distracter must be thoroughly wrong, and that question of wrongness sparks the liveliest debates over whether a question is useful.
My rule of thumb is that if the experts argue over a distracter or question, learners will, too. I would not use any question that causes such arguments on an assessment, especially not a high-stakes assessment where a learner’s performance rating or job is on the line. There are plenty of opportunities to use these questions in the learning event. Arguable questions make excellent discussion points in face-to-face classes. You can even find creative ways to use them in online modules. Using them on an assessment only calls the validity of the assessment into question.
Assuming that a distracter is inarguably wrong, what makes it seductive? Let’s examine some example questions to identify their traits. The first example comes from the written portion of the test I took to obtain my Texas driver’s license.
I still remember this question after all these years (I won’t say how many here) because it embarrassed me by making me laugh out loud while taking the test. The test writer probably intended to introduce a little levity with that last distracter. It worked, but a test is not the place for humor. Assuming I didn’t know that the sign in question marked the edge of the pavement and wasn’t familiar with the placard placed on slow-moving vehicles, the humorous distracter improved my chances of guessing correctly by about 17 percentage points. It simply was not a plausible distracter.
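To spell out the arithmetic behind that figure (assuming the question offered three answer choices, which a roughly 17-point improvement implies):

```latex
P(\text{blind guess}) = \tfrac{1}{3} \approx 33\%,
\qquad
P(\text{guess with the joke ruled out}) = \tfrac{1}{2} = 50\%,
\qquad
\Delta = \tfrac{1}{2} - \tfrac{1}{3} = \tfrac{1}{6} \approx 17\ \text{percentage points}.
```

In general, ruling out one implausible distracter raises a pure guess from 1/n to 1/(n−1), so the fewer options a question has, the more a throwaway distracter gives away.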
You can find plausible distracters during the needs analysis or gap analysis. Corporate training usually addresses some performance gap or seeks to change a behavior. The best distracters come from what you are trying to teach people not to do. Here are a few examples of what I mean:
- If a number of people doing a job engage in behaviors they should avoid—such as interrupting a customer—those common misbehaviors are natural, plausible distracters on questions asking for the correct behavior.
- Similarly, if policies change, the old policy (which was once the correct answer) provides a plausible distracter.
- Applications can also provide plausible distracters. If an application presents a drop-down menu of choices that depend on the situation, any of the choices that are not appropriate for the situation described in the stem make excellent distracters.
- Common sense also provides plausible distracters. Last month I mentioned an application that used color coding in a non-intuitive manner. In this case, choices listed in red were to be offered to customers when green or yellow choices were not appropriate, but employees never offered their customers red choices. If that client had not been willing or able to change their color coding, “Never mention this to a customer” would have been a compelling and plausible distracter to a question about the meaning of red choices in the application.
Here is an example of a question developed for one of my clients. The question passed all reviews but was not selected for the assessment. Some of the context needed to answer it, namely the product covered in the training, is missing, but you can still see what makes the distracters compelling.
What does it mean when the LED in the Wireless switch is flashing blue?
- The system has connected to a weak signal source.
- The system is communicating with a Bluetooth signal source.
- The system is communicating with a strong signal source.
- The system is searching for a signal.
Let’s review each of the options as if they were all distracters.
- The first is plausible, if not compelling, because a weak signal source can be sporadic. The learner might interpret the blinking light as the system connecting to and disconnecting from the source.
- The second is plausible because the learner might think that the blue LED indicates Bluetooth. Blinking also indicates traffic on some network adapters.
- The third is probably the least plausible of the options. It relies only on the assumption that the blinking light indicates traffic.
- The fourth is plausible because many network adapters have two LEDs: one that indicates a connection when solid and one that indicates traffic. In this case, the assumption is that the blue LED is the one that indicates connection rather than traffic.
You probably noticed that all the examples are in the cognitive domain. They assess what a learner knows. Multiple-choice assessments are particularly suited to the cognitive domain, but they are not so applicable to other domains. For those domains, we need other types of assessment.
To sum up, I like to say that anyone can tell a really bad question. Only your learners can tell a good question, and then only if you have their performance data. I’ll talk about that soon.
References
- University of St. Thomas Academic Support. (n.d.). Multiple choice strategies for psychology tests. Retrieved May 15, 2008.