In the hospital, doctors routinely diagnose and treat patients within minutes of meeting them. Out of the thousands of possible diseases and medications, clinicians are somehow able to zero in on the correct illness and the right treatment almost immediately. It’s like some doctors just know what’s happening. This is possible because years of training and experience produce a finely tuned diagnostic mind. Many clinicians use intuition and mental shortcuts called heuristics to guide their way to a clinical conclusion. But what happens when these shortcuts go wrong?
It is estimated that humans spend 95% of their time using intuitive thinking, which lets us get through the day using as little mental energy as possible. Researchers call this “Type 1” thinking. Think back to the last time you drove to work or made yourself breakfast: in both situations, your brain was on autopilot, using accumulated knowledge and experience to guide you without much deliberate thought. I rarely remember my drives to work anymore, let alone what I had for breakfast. In contrast, Type 2 thinking is analytical: when we engage it, we use data to critically analyze problems. One estimate suggests that 75% of errors in internal medicine are cognitive in origin, possibly a result of Type 1 thinking.
Type 1 thinking is not always a bad thing, though. Common things are common, and relying on intuition and experience can save time and money when a clinician encounters a problem they have seen many times before. Errors in reasoning are also inevitable, even with Type 2 thinking, because we are human. Using one form of thinking or the other is not a sign of intelligence or competence. Doctors who work on “autopilot” are not bad doctors; in fact, they are often some of the most experienced. But relying on this kind of cognition becomes a problem when a patient has something rare or something the clinician has never seen before. Those problems require critical analysis, i.e. Type 2 thinking. The pitfalls that sometimes get clinicians into trouble are termed “cognitive biases”, and many of them are well documented in the literature.
An example of a common cognitive bias is “availability bias”. Imagine a physician who recently missed an early diagnosis of necrotizing fasciitis, a serious bacterial infection that often results in amputation. Because of this error, the patient lost a larger portion of their limb. Because this hypothetical doctor is a good doctor, they will remember the mistake and order more biopsies on future patients, whether those patients need them or not. This is availability bias: a recent, easily recalled (available) case disproportionately influences current clinical reasoning. It is harmful because it subjects patients to unnecessary procedures along with their associated side effects and costs: necrotizing fasciitis is rare compared with conditions like cellulitis, and biopsies carry their own risks (such as infection).
Below is a list of other common cognitive biases from O’Sullivan and Schofield:

Even though awareness of these biases in clinical medicine has increased, there is a general lack of research on the subject. This is partly because people are typically (and ironically) unaware of their own cognitive biases, a phenomenon called the “blind-spot bias”, which makes finding solutions difficult. Additionally, it may be hard to convince clinicians that the way they think is a source of error in the first place. Several tactics have been proposed to deal with these issues, however.
One such tactic is “bias teaching sessions”. These sessions are designed to raise doctors’ awareness that biases exist and of how they affect decision making. Although such sessions are relatively common (this is in fact how I learned about cognitive biases), they appear relatively ineffective at improving clinical decision making: in several interventional studies, teaching sessions produced only small, statistically non-significant improvements, and other data show mixed efficacy, suggesting that educational sessions are low yield at best.
Slowing down during clinical reasoning may be another way to mitigate bias. Many studies report broadly positive results for this approach, including better clinical outcomes. These results are not universal, however; sometimes slowing down just means longer analysis with no improvement in diagnostic accuracy, which translates into slower, more costly care without better outcomes. It is currently unclear why some studies find a significant improvement while others do not. On the whole, slowing down may be a simple debiasing technique for avoiding the trap of “Type 1” thinking in the clinic.
Another solution may be metacognition: thinking about one’s own thinking, recognizing where one’s reasoning tends to fall short, and consciously compensating for those weak spots. One example is forcing yourself to ask why you have chosen a specific treatment plan, or to look for alternative scenarios. In medical school, we are taught to list at least five differential diagnoses, even when the diagnosis seems obvious. Another way to understand your own cognitive limitations is to estimate your confidence: sometimes our diagnostic confidence is low, but we only realize it when we deliberately ask ourselves.
Checklists have also been shown to be a helpful debiasing strategy, and they are already used extensively in industry and in surgery. One of my favorite books on medicine is “The Checklist Manifesto” by the Harvard surgeon Atul Gawande, in which he explains how simple checklists can prevent avoidable but common mistakes in the operating room. Even though checklists are simple tools, they are quite effective.
In conclusion, cognitive biases can have a huge impact on clinical reasoning and decision making. Experienced doctors are inclined to slip into what researchers call “Type 1” thinking, a low-effort mode of analysis that relies heavily on pattern recognition and heuristics. Although Type 1 thinking isn’t always negative, it may be worth developing techniques that promote Type 2 analysis instead. Several methods exist, but to date research on the subject is limited, partly because it is inherently difficult to study how individuals think, and partly because doctors may be unaware of their own biases (the blind-spot bias). Despite these challenges, future research in this area and the development of new debiasing techniques may be critical to reducing diagnostic mistakes in medicine.
No content on this site should ever be used as a substitute for direct medical advice from your doctor or other qualified clinician; please see the disclaimer page.
This article is a synopsis of “Cognitive Bias in Clinical Medicine” (O’Sullivan and Schofield 2018). All data are from the original article.
O’Sullivan ED, Schofield SJ. Cognitive bias in clinical medicine. J R Coll Physicians Edinb. 2018 Sep;48(3):225-232. doi: 10.4997/JRCPE.2018.306. PMID: 30191910.