Making Multiple Choice work

For sins in my past, I’ve been thinking about assessments a bit lately. One of the biggest problems is finding solutions that are meaningful yet easy to implement. You can ask learners to develop meaningful artifacts, but getting those assessed at scale is problematic. Mostly, auto-marked assessment is used for trivial knowledge checks. Can we do better?

To be fair, there are more and more approaches (largely machine-learning powered) that can do a good job of assessing complex artifacts, e.g. writing. If you can create good examples, they can do a decent job of learning to evaluate how well a learner has approximated them. However, those tools aren’t ubiquitous. What is ubiquitous are the typical variations on multiple choice: drag and drop, image clicks, etc. The question is, can we use these to do good things?
