Multiple-choice (MC) assessments inhibit long-term learning. As a rule, they may work for summative assessment, but not for formative assessment. Furthermore, in mathematics, since guessing can yield correct answers, false positives cloud the software's picture of what a student actually knows. As a result, students in online classes must solve more MC problems, which usually decelerates learning. One of the secrets of the better adaptive math software, such as ALEKS and Carnegie Tutor, is that they employ constructed-response questions, which require only a few items to establish mastery. They allow a student to show mastery faster than most multiple-choice programs, which struggle to compensate for false positives, especially since quick guessing is a common strategy among low-achieving students.

MC programs also generate unexpected student responses because they are too ready to help. XLPrep, and possibly Study Island, are examples of MC software that may frustrate student learning because their responses to errors and successes don't demand that each student actually want instruction at that moment and struggle through it! While at first counterintuitive, pretty instruction is merely a time-killer for many low-achieving students. ALEKS and SmartMath offer simple explanations only after a student, somewhat reluctantly, asks for them, knowing that he or she will have to read to understand. This may also be a problem with the constructed-response software iPass: its detailed instruction is beautiful, but students' eyes wander during the videos because they are forced to watch.
However, multiple-choice software doesn't have to suffer from the conventional maladies. In particular, SmartMath from Encyclopaedia Britannica (USA) and Planetii (Hong Kong) demands that students answer 30 MC questions in a row without error! Random guessing is automatic test failure, not just a missed question. Students can earn stars, which act as limited insurance against immediate failure, by doing many problems correctly during earlier practice sessions. Adapting to a rapid train of MC questions seems to turn the problem of inhibited long-term learning on its head. Instead of letting distractors interfere with their learning, students quickly seek them out and discard them in the search for a correct answer. Distractors seemingly concentrate thinking rather than dissipate it. Finally, the active decision of how many stars to earn, from zero to six, involves students responsibly in their own learning: each star forgives one miss, so a passing test ranges from 30 out of 30 to 30 out of 36 questions (a missed question generates an equally difficult replacement). They also have to decide when to take the test, because earning six stars takes a long time. This engagement and active self-assessment are the secret behind SmartMath, and they allow the usual issues with multiple choice to evaporate.
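For readers curious about the arithmetic, here is a minimal sketch in Python of how the star insurance plays out. This is my own assumed model of the mechanics described above, not SmartMath's actual code: a pass requires 30 correct answers, each star forgives one miss, and a miss spawns an equally difficult replacement question.

```python
import random

def take_test(p_correct: float, stars: int) -> tuple[bool, int]:
    """Simulate one SmartMath-style mastery test (assumed mechanics).

    p_correct: the student's chance of answering any one question correctly.
    stars: banked stars (0-6), each forgiving a single miss.
    Returns (passed, questions_asked); a passing run is 30 correct
    out of anywhere from 30 to 36 questions asked.
    """
    correct, asked, insurance = 0, 0, stars
    while correct < 30:
        asked += 1
        if random.random() < p_correct:
            correct += 1          # question answered correctly
        elif insurance > 0:
            insurance -= 1        # spend a star; a replacement question follows
        else:
            return False, asked   # no stars left: immediate failure
    return True, asked

# Example: a careful student (90% accuracy) who banked three stars
passed, n = take_test(p_correct=0.9, stars=3)
print(f"passed={passed} after {n} questions")
```

Run a few thousand of these simulated tests and the trade-off students weigh becomes obvious: extra stars sharply raise the odds of passing, but each one costs real practice time up front.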
After semester-long classroom trials of many different brands of software over the last four years, two programs have worked well together for full-class coverage: SmartMath and ALEKS. The results of a quick assessment place students into SmartMath or ALEKS. While SmartMath has a rapid-mastery option, students can take a Billy Madison approach: they start at Level One, which is a tough first grade, and try to spend only one to two weeks per level (six levels total)! With cute, spinning avatars jumping for joy at correct answers, students march through the curriculum, feeling their way through the inherent difficulties. As a subtle extra motivator, stressing that SmartMath was originally developed in Hong Kong raises the importance of each individual's success. Students realize that they are being evaluated from an international perspective. Success matters more.