Any episode of Who Wants to be a Millionaire? illustrates a basic problem with multiple-choice and true/false question formats: there is always a fixed probability that a student who picks the correct answer is just guessing.

Experienced students can narrow the choices down using clues from the options, but even with only two choices, there is a 50% likelihood that the student will pick the correct answer by chance. When students choose “correct” answers, how can we evaluate how much they really know?
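The scale of the guessing problem is easy to quantify. Here is a minimal sketch in Python (the quiz sizes below are illustrative, not drawn from any real exam):

```python
def expected_correct_by_chance(n_questions, n_options):
    """Expected number of correct answers from pure guessing:
    each question is answered correctly with probability 1/n_options."""
    return n_questions / n_options

# On a 20-question, 4-option quiz, a pure guesser expects 5 correct
# answers; if test-taking savvy narrows every question to 2 plausible
# options, the expectation doubles to 10.
print(expected_correct_by_chance(20, 4))  # 5.0
print(expected_correct_by_chance(20, 2))  # 10.0
```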

One common solution—using short-answer and open-ended questions—is only partially successful. Consider this excerpt from Richard LaVoie’s F.A.T. (Frustration, Anxiety and Tension) City Workshop, which uses typical “comprehension” questions. First, students read this passage:

Last Serney, Flingledope and Pribin were in the Nerd-link treppering gloopy caples and cleaming burly greps. Suddenly a ditty strezzle boofed into Flingledope’s tresk. Pribin glaped and glaped. “Oh, Flingledope,” he chifed, “that ditty strezzle is tuning in your grep!”

LaVoie then asks participants:

  • When did this take place?
  • Who was with Flingledope?
  • They were treppering something; what were they treppering?
  • Then a strezzle showed up; what kind of strezzle? What did it do?
  • Pribin was no help; what did he do?


After only a little schooling, students can answer all these questions “correctly,” yet they are completely unable to recount the story in their own words, because they understood nothing.

We want students to know the answers to our learning assessment questions, but we also want them to understand why the correct answer is correct.

Using classroom response systems such as Top Hat during lectures can provide more opportunities both to encourage and assess deeper understanding. Shapiro and others (2017)1 suggest that merely asking questions about underlying concepts does not guarantee student mastery. So, we need challenges in class that help students link the content of the question with the deeper concepts.

The first strategy: ask the question again

This is the Let’s Make a Deal strategy: students respond to a content question, and we review the distribution of responses with the class (without revealing the correct answer). Then, if fewer than 70% of responses are correct, we present additional information in class, identify one of the incorrect answers as wrong, and ask students to pick again from the remaining choices.
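The decision rule behind this strategy can be sketched in a few lines of Python. This is only an illustration of the procedure described above: the 70% threshold comes from the text, while the data format and function names are hypothetical (they are not part of Top Hat’s API):

```python
def needs_second_round(responses, correct, threshold=0.70):
    """Return True if fewer than `threshold` of responses picked the
    correct choice, i.e. we should teach more and poll again."""
    return responses[correct] < threshold

def eliminate(responses, choice):
    """Drop one (instructor-chosen) incorrect option before re-polling."""
    return {c: share for c, share in responses.items() if c != choice}

# First-round results from the Clinton DNA question discussed below:
first_poll = {"A": 0.10, "B": 0.15, "C": 0.20, "D": 0.55}
if needs_second_round(first_poll, correct="B"):
    # The instructor chose to reveal C as incorrect before round two.
    remaining = eliminate(first_poll, "C")
    print(sorted(remaining))  # ['A', 'B', 'D']
```

Note that which incorrect answer to eliminate is a pedagogical choice, not an automatic one; here the instructor removes the option that best sets up the teaching point.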

Here’s an example of this strategy (based on a news report), from a lesson on the role of DNA in the inheritance of features from generation to generation. The first attempt at this question produced these answers from students:

Clinton DNA (first poll)
Question: In the 2008 Presidential campaign, a reporter remarked that Hillary Clinton could not concede graciously because it “wasn’t in the Clinton DNA” to do so. Is this an accurate assessment?

  A. No, DNA has nothing to do with temperament and personality. (10%)
  B. No, where would she get Clinton DNA? (15%)
  C. Yes, DNA has everything to do with temperament and personality. (20%)
  D. Somewhat; DNA has something to do with temperament and personality. (55%)

In this case, most students were hedging their bets with the least specific answer (answer D). Most of the answers were incorrect, so before the second round we identified choice C as incorrect and polled again.

Clinton DNA (second poll)
Question: In the 2008 Presidential campaign, a reporter remarked that Hillary Clinton could not concede graciously because it “wasn’t in the Clinton DNA” to do so. Is this an accurate assessment?

  A. No, DNA has nothing to do with temperament and personality. (31%)
  B. No, where would she get Clinton DNA? (24%)
  C. Yes, DNA has everything to do with temperament and personality. (2%)
  D. Somewhat; DNA has something to do with temperament and personality. (43%)

In the second poll, most students switched away from answer C, but the bet-hedging answer D was still the most common choice. And, even though D is true, the important concept in this lesson was the role of DNA in inheritance of traits across generations, not how DNA accounts for those traits.

Since Hillary Clinton has no Clinton ancestors, answer B was the correct choice. This result allows instructors to revisit how DNA links generations by inheritance. Luckily, we had several married women in the class, whom we asked whether we would find their husbands’ DNA if their own DNA were tested; the general response was “Gawd! I hope not!”

“Tell me Why”

The second strategy is based on the learning cycle model described by Lawson (1995)2. The goal is for students who have mastered descriptive (content) information to account for underlying causal relationships that explain the correct answers. The first question is a simple choice of content options, but the second asks students to connect the previous answer to the underlying reason why the answer is correct.

BLS1
Question: Which of these is not a function performed by the brainstem?

  A. Patterned motor activity (such as walking) (84%)
  B. Activating the gag reflex (13%)
  C. Regulating ventilation rates (3%)
  D. Regulating cardiac functions (0%)

With 84% of students answering correctly (choice A), we proceeded to the reason why this answer is correct.

BLS2
Question: Why is the brainstem called a basic-life-support (BLS) structure?

  A. It integrates conscious control of vital functions (20%)
  B. It controls somatic reflexes (0%)
  C. It controls vital functions and some visceral reflexes (80%)

In this case, 80% of students were able to connect the specific actions performed by the brainstem in BLS1 with the choice in the follow-up question about what they all have in common: the control of vital functions in the cardiovascular, respiratory, and digestive systems, and reflexes involving gagging, vomiting and swallowing. All of these preserve life at the most basic level.

Segmenting your results

In the previous example, 84% of students answered the content question correctly, and 80% answered the follow-up correctly. But you should determine whether the students who answered the second question correctly all answered the first one correctly… or whether some of that 80% just made a lucky guess. Top Hat’s segmented results reveal the connection between the answers students chose in one question and their answers in a follow-up.

Here are the results for the BLS questions. The segmented responses are reported as percentages of the whole class, and they include only students who responded to both BLS questions.

BLS2 (total and segmented responses)
Question: Why is the brainstem called a basic-life-support (BLS) structure?

  A. It integrates conscious control of vital functions (20% total)
     by BLS1 answer: Patterns 15.0%, Gag 3.3%, Ventilation 1.7%
  C. It controls vital functions and some visceral reflexes (80% total)
     by BLS1 answer: Patterns 75.0%, Gag 6.7%, Ventilation 1.7%

The segmented response results show that even though 84% of students answered the first question correctly, only 75% of students answered both questions correctly, and more than half of the students who answered BLS1 incorrectly picked the correct answer in BLS2. What is more, 15% of the class chose correctly in BLS1 but answered BLS2 incorrectly, which means they did not understand the concept that explains the connections among the correct and incorrect answers.
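Under the hood, segmenting is just a cross-tabulation of paired responses. Here is a minimal sketch, assuming we can export one (BLS1 answer, BLS2 answer) pair per student; Top Hat computes this internally, and the input format and class roster below are hypothetical:

```python
from collections import Counter

def segment(pairs):
    """Cross-tabulate paired responses.

    pairs -- one (first_answer, second_answer) tuple per student who
             answered both questions.
    Returns each answer combination as a fraction of all paired students.
    """
    counts = Counter(pairs)
    n = len(pairs)
    return {combo: count / n for combo, count in counts.items()}

# A synthetic class of 20 students (not the article's actual data).
# BLS1's correct answer is A; BLS2's correct answer is C.
pairs = ([("A", "C")] * 15    # correct on both questions
         + [("A", "A")] * 3   # right content, wrong concept
         + [("B", "C")]       # wrong content, lucky concept guess
         + [("B", "A")])      # wrong on both
result = segment(pairs)
print(result[("A", "C")])  # 0.75 -- answered both questions correctly
```

Summing the fractions for each combination recovers exactly the kind of breakdown shown in the segmented table: the students correct on both, the lucky guessers, and the students with content knowledge but no conceptual grasp.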

These examples hold two main messages for instructors. First, the results of content questions alone tell us a limited amount about our students’ learning; follow-up questions let us dig more deeply to assess student success. Second, we can use the segmented response function in Top Hat to identify students who can match their correct responses to content questions with correct responses to causal or explanatory questions. Both give us a deeper understanding of our students’ performance and success.

References

  1. Shapiro AM, Sims-Knight J, O’Reily GV, Capaldo P, Pedlow T, Gordon L, Monteiro K. 2017. Clickers can promote fact retention but impede conceptual understanding: The effect of the interaction between clicker use and pedagogy on learning. Computers and Education 111:44‒59. Available from http://dx.doi.org/10.1016/j.compedu.2017.03.017.
  2. Lawson AE. 1995. Science Teaching and the Development of Thinking. Belmont (CA): Wadsworth.
