Can Multiple Choice Tests Measure Higher-level Thinking?

October 15, 2014

By Lynn Bartels

Multiple choice questions are used widely in higher education, particularly for large classes where it would be difficult to grade a large number of constructed response tests (e.g., short answer, essays). Despite their widespread use, multiple choice tests are not without their critics.

Testing the Levels of Bloom’s Taxonomy

One of the chief criticisms is that multiple choice questions don’t measure higher-level thinking (Frederiksen, 1984). The cognitive thinking skills of Bloom’s taxonomy may be combined into three levels: Recall, Interpretation, and Problem-Solving (Waller, 2008). Level 1 (Recall) covers the lower-order thinking skills of knowledge and comprehension. Multiple choice test questions tap the recall level easily. For example:

Recall Question

“In operant conditioning, when you remove something aversive, which type of conditioning has occurred?”

a.  Aversive Reinforcement

b.  Negative Reinforcement

c.  Punishment

d.  Classical Conditioning


The recall question above could be rewritten to measure Level 2 (Interpretation) skills, which include Application and Analysis. These questions often present an example or scenario and require students to apply their knowledge to analyze the situation.

Interpretation Question

Katherine’s mother is constantly nagging her to clean her room. Much to Katherine’s relief, her mother quit nagging once Katherine cleaned her room. Which type of conditioning has Katherine experienced?

a.  Aversive Reinforcement

b.  Negative Reinforcement

c.  Punishment

d.  Classical Conditioning


Research generally shows a relationship between performance on multiple choice and constructed response questions at Levels 1 and 2. One study (Hancock, 1994) compared performance on the two types of questions (multiple choice and constructed response) written to measure four levels of cognition (knowledge, comprehension, application, and analysis). The author found that within each skill level, students’ scores were correlated, suggesting that the two types of questions measure similar constructs. Students who perform well on multiple choice questions will generally also perform well on constructed response questions.

The highest level of thinking is Level 3 (Problem-Solving), which includes Synthesis and Evaluation. Since this level involves divergent thinking and creating a unique product, these skills are generally considered best measured with constructed response questions.

Yet some test developers have found ways to tap higher-level thinking with multiple choice questions. Linda Suskie (from our recent Midweek Mentor video “How Can I Improve the Effectiveness of My Multiple Choice Questions?”) recommends interpretive exercises in which students are given a chart, graph, or reading passage and asked to use their analysis and evaluation skills to answer questions about the provided information. She also recommends a more sophisticated matching-type question. (Her examples are available in our on-demand video collection in the supplemental information handout.)

Which of the three levels of questions is best? If you ask students, they generally prefer Level 1 definitional questions because they are easier to answer. At higher levels, students need to recall the information correctly and then use it, making the questions more difficult. The most important thing to remember is that your test questions should match your objectives: write test questions that assess knowledge at the level you expect your students to demonstrate. For example, if your objective is for students to apply knowledge, then your test questions should be application-type questions.

Here are a few more suggestions for using multiple choice questions effectively:

Avoid test questions, problems, or exercises students have already seen

To elicit higher-level thinking, avoid using examples, problems, or scenarios from class or the book. Answering a familiar question involves remembering the answer rather than processing new information (Mueller, 2014; Suskie, 2009). When test items include examples already presented in class, they measure students’ ability to memorize the answer rather than to think it through.

Combine constructed response and multiple choice test questions

Including constructed response items along with multiple choice questions may change how students study. For example, when students expect constructed response test questions, they may study more deeply, expecting that they will need to retrieve the answer from memory rather than merely recognize it. One study found that students were less anxious about taking multiple choice tests, which may lead to less studying (Balch, 2007). Students may study harder when they expect some constructed response questions on an exam.

The Future of Multiple Choice Tests

Technology is changing testing in higher education in at least two important ways. First, there is a proliferation of knowledge easily available to students through the internet, cell phones, and laptops. Students can “Google” facts and information, which may make recall less important in the future. It may still be important for students to develop a base knowledge of terms and concepts that they understand readily, but a lot of information can be looked up rather than stored in memory. This change in knowledge availability may lead us to rethink some of our learning objectives, which in turn should drive our testing methods, and to focus learning and assessment increasingly on higher-level thinking skills.

Second, more courses are being offered in online formats. Non-proctored online testing creates huge opportunities for cheating. There are many strategies to help minimize cheating; one is to ask higher-level thinking questions that go beyond information that can easily be looked up in a textbook. Another is to use alternative methods of assessment (see Wayne Nelson’s Pros and Cons of Online Testing blog). In the future, we will likely see reduced reliance on recall-based multiple choice questions and increased use of higher-level test questions and other testing formats.

References

Balch, W. R. (2007). Effects of test expectation on multiple-choice performance and subjective ratings. Teaching of Psychology, 34(4), 219-225.

Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39(3), 193-202.

Hancock, G. R. (1994). Cognitive complexity and the comparability of multiple-choice and constructed-response test formats. The Journal of Experimental Education, 62(2), 143-157.

Mueller, J. (2014). Authentic assessment toolbox. Retrieved from http://jfmueller.faculty.noctrl.edu/toolbox/tests/morethanfacts.htm

Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco: Jossey-Bass.

Waller, K. V. (2008). Writing instructional objectives. National Accrediting Agency for Clinical Laboratory Sciences. Retrieved from http://www.naacls.org/PDFviewer.asp?mainUrl=/docs/announcement/writing-objectives.pdf
