How to ask hard questions
Clients often ask us how to make their Blank Slate questions more difficult. It’s not that they want to stump people for no reason; rather, they want to challenge people to think more deeply about the information they need to do their jobs.
Whether based in fact or intuition, clients’ interest in asking tough questions is smart and scientifically sound. Research from cognitive science has shown that people don’t learn much from answering easy questions. Instead, people form strong and resilient long-term memories when they have to answer difficult questions [1, 2], which often means getting them wrong a few times and learning the correct answer through trial and error.
Blank Slate is uniquely positioned to offer advice on how to create challenging questions. On any given day, Blank Slate users collectively answer thousands of questions. Multiply that by several years, and you get a massive database full of information about the qualities of questions that are easy versus difficult to answer.
To determine what makes a question hard, we pulled from our database the 1,000 easiest questions and the 1,000 hardest questions that our clients have ever asked. Their average accuracy scores were 97.8% and 76.2%, respectively. All of these questions had been answered at least 100 times each, ensuring that we had a good read on how easy or hard each one was. Then we used a large language model to analyze the differences between the easy and hard questions, both in terms of question formatting and content.
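The selection step described above can be sketched in a few lines of code. This is an illustrative example only: the field names (`accuracy`, `times_answered`) and the in-memory list of dicts are stand-ins, not Blank Slate’s actual schema or database.

```python
def split_easy_hard(questions, n=1000, min_answers=100):
    """Return (easiest, hardest) questions by average accuracy.

    Only questions answered at least `min_answers` times are
    eligible, so each accuracy estimate rests on enough data.
    """
    eligible = [q for q in questions if q["times_answered"] >= min_answers]
    # Rank from highest accuracy (easiest) to lowest (hardest).
    ranked = sorted(eligible, key=lambda q: q["accuracy"], reverse=True)
    return ranked[:n], ranked[-n:]

# Toy data standing in for the real question database.
sample = [
    {"id": 1, "accuracy": 0.98, "times_answered": 500},
    {"id": 2, "accuracy": 0.76, "times_answered": 300},
    {"id": 3, "accuracy": 0.95, "times_answered": 50},   # excluded: too few answers
    {"id": 4, "accuracy": 0.88, "times_answered": 120},
]

easy, hard = split_easy_hard(sample, n=1)
```

With the toy data, the easiest eligible question is id 1 and the hardest is id 2; question 3 is excluded despite its high accuracy because it was answered too few times.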
Formatting a hard question
We found that easy questions were shorter and used question formats such as true/false, multiple choice, or simple fill-in-the-blank requiring a short answer. Example questions included, “What year was the Battle of Bunker Hill?” and “True or False: Seizures can be a symptom of epilepsy.”
Hard questions were longer and featured more complex formatting. Some questions prompted people to select multiple correct answers from a list, select the steps for a procedure in the correct order, or type out a multiple-word answer. Examples included, “While an officer is transporting a prisoner from one location to another, in what circumstances may he/she become involved in other enforcement activity?” and “Place the seven steps for using a tourniquet in the correct order.”
Creating the content for a hard question
Easy questions emphasized broad, factual knowledge. These questions were more suitable for assessing basic understanding of rules, laws, and definitions. Easy questions also had simpler, more straightforward language. Questions like, “What are two types of strokes?” or “Who designed the Massachusetts State House?” presented clear, single-fact inquiries that didn’t require much interpretation.
Hard questions focused heavily on procedural knowledge and situational judgment, demanding deeper application of principles and critical thinking about real-world scenarios. These questions often placed the responder in an active decision-making role. For example, questions for law-enforcement officers included “What should officers consider before entering a building during a search?” and “During a ride-along, officers will notify the rider that… (select all).” These questions require an officer to understand and apply knowledge of procedures.
Hard questions also used more technical and specialized terminology. For example, “What is the significance of asymmetric pupillary responses in an unconscious patient?” assumes that the responder already knows some field-specific terms. Though difficult, these questions can be helpful for exposing knowledge gaps in technical language.
Here’s a quick-reference table summing up the results of our question analysis.

| | Easy questions | Hard questions |
|---|---|---|
| Length | Shorter | Longer |
| Format | True/false, multiple choice, simple fill-in-the-blank | Select all that apply, order the steps, multi-word typed answers |
| Content | Broad factual knowledge: rules, laws, definitions | Procedural knowledge and situational judgment |
| Language | Simple and straightforward | Technical, specialized terminology |
Taking the guesswork out of creating questions
Guided by insights from our extensive database and analytics, Blank Slate works with clients to design questions that challenge their teams and maximize long-term memory for critical information. Whether your workforce relies on procedural knowledge, situational judgment, technical expertise, or all of the above, our platform provides the most efficient and effective way to retain it all.
References
- Pyc, M. A., & Rawson, K. A. (2009). Testing the retrieval effort hypothesis: Does greater difficulty correctly recalling information lead to higher levels of memory? Journal of Memory and Language, 60(4), 437–447. https://doi.org/10.1016/j.jml.2009.01.004
- Carpenter, S. K., & DeLosh, E. L. (2006). Impoverished cue support enhances subsequent retention: Support for the elaborative retrieval explanation of the testing effect. Memory & Cognition, 34(2), 268–276. https://doi.org/10.3758/BF03193405
_____
Amy Smith, PhD
Chief Scientific Officer, Blank Slate Technologies