Risk and Uncertainty in Leadership: 3-Dimensional Thinking
Revisiting the concept of risk & uncertainty in terms of Impact × Probability × Predictability • Introduction to Black Swan theory
In 1921, a fellow named Frank Knight published a book entitled Risk, Uncertainty and Profit. The book was apparently influential; fast-forward one century and people still use the terms “Knightian risks” to describe quantifiable risks and “Knightian uncertainty” for unquantifiable risks.
Knightian risks are known and computable. For example, most revolvers hold 5-10 rounds (bullets) at a time, so a game of Russian Roulette with one round loaded comes with a 1-in-5 to 1-in-10 chance (between 10% and 20%) of “winning” the bullet. If you know which gun you’re using, you know your odds.
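If you like to see the math spelled out, here’s a minimal sketch of that computability in Python (the chamber counts are just the typical range mentioned above):

```python
# Knightian risk: with a known revolver, the odds are computable.
# One round loaded into a cylinder with `chambers` chambers, spun at random.
def chance_of_bullet(chambers: int, rounds_loaded: int = 1) -> float:
    """Probability that the hammer falls on a loaded chamber."""
    return rounds_loaded / chambers

for chambers in (5, 6, 10):  # typical revolver cylinder sizes
    print(f"{chambers}-chamber revolver: {chance_of_bullet(chambers):.0%} chance of the bullet")
```

Known gun, known odds – that’s what makes it a Knightian risk.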
Contrast that with Knightian uncertainty, which is unknown and impervious to math. Imagine playing Russian Roulette, except the revolver’s cylinder is magically sealed and you can’t see inside. Is there one chamber inside the cylinder? Thousands of chambers? How many of those chambers are loaded with lethal firepower?
No one knows ¯\_(ツ)_/¯
You’ll only find Knightian risks (math-able) in artificial environments, such as casinos, lotteries, games, and other situations where humans make and enforce clear rules. Take chess, for instance: the board is visible to everyone (i.e. no secrets except for your opponent’s thoughts) and there is a mathematically optimal way to play the game. Another example is a boxing match. Each round lasts no more than two or three minutes and it’s illegal to headbutt your opponent in the crotch, tase them before delivering the knockout punch, etc.
In real-life leadership, however, most decisions that matter are in the realm of Knightian uncertainty (unmath-able)[1]. There’s no way to compute the likelihood of a competitor developing a secret new product that usurps your market dominance. You can’t cry foul to a referee when an unprecedented flood washes away a town that was supposed to be outside of the 500-year flood zone. You could send war elephants into battle to terrorize your enemies, only for the opposing army to counterattack by lighting pigs on fire and terrorizing your elephants into trampling your own soldiers (this is not hyperbole – this actually happened in antiquity). An environment without clear rules is unpredictable.
In other words, leaders must learn to expect the unexpected. That requires a better grasp of the predictable vs. the unpredictable.
Risk & Uncertainty in Terms of Impact × Probability × Predictability
Since Knightian uncertainty is unmath-able, it’s untame-able and unpredictable. Let’s add a third axis, Predictability, to the Impact × Probability framework we introduced in Risk and Uncertainty in Leadership: 2-Dimensional Thinking. You can see the new predictability axis popping out of your screen in this illustration:

Wow, this is getting complicated. When we had two dimensions, we only had four quadrants. After adding the third dimension, we now have eight octants. Fortunately, we’ll only cover the high-impact (upper) half of this 3-D framework. We already wrote about low-impact risks and uncertainty in the previous article (regarding cost-effective solutions, anti-fragile learning, and avoiding overreactions). The third axis of predictability doesn’t add anything worth mentioning to low-impact events.
We wrote in the previous article that human brains are terribad at probabilistic thinking. Turns out…
We’re really bad at predictions, too
“Wait a minute,” you say. “The 10-day weather forecast is very accurate. And if my phone slips out of my hand, I’m pretty sure (i.e. I predict with high certainty) that it will fall to the ground rather than fly straight up into outer space. Why would this rando on the internet claim that I’m bad at predicting?”
That’s because fundamental sciences like math and physics are relatively free of the randomness that throws off prediction accuracy. Because you understand gravity, you can predict the trajectory of your airborne smartphone with high certainty. Using high-school physics, you can even predict the velocity of your $800 ballistic smartphone at the precise instant you need to buy a new one[2]. Meteorology, one level removed from the physics and chemistry on which it depends, is subject to greater randomness. Meteorological forecasts are less accurate than predictions based on Newton’s laws of motion, but short-term weather predictions are accurate enough for planning your weekend activities.
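For the curious, that high-school prediction fits in a few lines. A minimal sketch, assuming a ~1.5-meter drop from hand to floor (our made-up number, not a measurement):

```python
import math

g = 9.81           # gravitational acceleration in m/s^2
drop_height = 1.5  # meters from hand to floor -- an assumed value for illustration

# Kinematics for an object falling from rest: v = sqrt(2 * g * h)
impact_velocity = math.sqrt(2 * g * drop_height)
print(f"Your phone hits the floor at {impact_velocity:.1f} m/s")  # ~5.4 m/s
```

No randomness worth mentioning, so the prediction is nearly perfect – a luxury leaders rarely get.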
The further you venture from math and physics, the more your prediction accuracy degrades from increasing randomness. Look at the 3-D framework again – predictability goes down as you venture deeper into the Fog of Uncertainty. By the time you reach the “soft sciences” like sociology, economics, and political science, predictions have become nearly useless. Consider the early work of Philip Tetlock, who asked experts from a variety of fields to predict the outcomes of political, economic, and military events 3-5 years into the future. Tetlock then evaluated how accurate these high-status, highly-credentialed people were at predicting the future. These “experts” turned out to forecast only slightly better than dart-throwing monkeys. In fact, the more famous the expert, the worse their predictions compared to peers with less star power[3].
Leaders have the murkiest crystal balls, yet they need the most clarity
Leadership falls squarely in the soft sciences camp, if you can deign to call leadership a “science” at all. If you believe that:
Applied chemistry is biology,
Applied biology is psychology,
Applied psychology is sociology,
Applied sociology is political science and economics,
Then leadership is politics, economics, and sociology applied to wielding power and influence more effectively. By this point, leadership is as far removed from math and physics as, say, the art of seduction. That means leadership is subject to extreme randomness, which complicates the task of predicting which decisions will lead to the best outcomes.
Worse still, people often react to a leader’s predictions, throwing off the analysis which led to those predictions in the first place. You can see this happening instantly when someone makes a self-defeating claim (See 7 Characteristics To Always Show, Never Tell and 7 MORE Characteristics to Always Show, Never Tell). People change their behavior when you study them; things don’t.
Physicists and mathematicians have the privilege of dealing with clean and elegant solutions that either work perfectly or not at all. Leaders, on the other hand, must often decide whether turd A or turd B is less crap-tastic and implement the one that stinks less. A Chinese mathematician calculated pi (π) to 7-digit precision in 480 AD, and those calculations have remained valid for more than 1,500 years. A leader’s calculations can fall apart at any moment. It could take seconds, like during a failed negotiation or a military ambush. It could take decades, like Jack Welch’s General Electric imploding after his retirement. Every hiring decision is a prediction that the selected candidate will aid the organization in its mission; every firing decision is a failed prediction.
I can calculate the motion of heavenly bodies, but not the madness of the people.
– Isaac Newton, quoted (apocryphally) after losing £20,000 in the South Sea Bubble of 1720. That’s about £3.5 million ($4.4 million) after adjusting for 2025 inflation.
In other words: leaders are stricken with “prediction blindness.” We must make far-reaching, expensive decisions whilst blinded by the Fog of Uncertainty. When we look up, we see the pointy end of the Sword of Damocles dangling over our heads. We need the clarity of foresight more than most, yet we’re cursed to have it the least.
Swan Dive Into the Fallibility of Science

Leaders engage in all sorts of bizarre superstitious rituals to fight prediction blindness. Historically, they’ve read tea leaves, performed rain dances, and sliced beating hearts out of sacrificial victims. Things have improved only a little bit since the Dark Ages. We forecast oil prices, interest rates, election results, and revenue streams 3, 5, 10, 30 years into the future…and we’re bad at it. We use personality tests (DISC assessment, MBTI, True Colors, etc.), which are basically horoscopes repackaged into professional wrapping paper. We pay big bucks to management consultants who always conclude that our problems can only be solved with more management consulting services.
The form of superstitious rituals has changed, but the function has not. Leaders continue to partake in theatrical rituals to A) reduce their own anxiety by satisfying the bias for action over inaction, B) appease observers by making the decision appear less arbitrary than it really is, C) find a convenient scapegoat to blame later if a decision produces bad results, or D) all of the above.
If you’re from a STEM field, you might be tempted to believe that the scientific leaps we’ve made since the Renaissance will banish the darkness of superstition. Let’s see what happens when we faithfully use the scientific method taught in grade school:
Form a hypothesis (“I suspect that all swans are white”).
Go out and test the hypothesis by repeatedly collecting data (“I counted 7,384 swans, all of which were either white or drab juveniles in the process of growing white plumage”).
Form a conclusion (“I conclude with 100.0% confidence that all swans are white”).
For thousands of years, a black swan was a ludicrous concept – just like a flying pig or a pink elephant today. Then came the 1600s, when Dutch sailors observed actual black swans (Cygnus atratus) living in the newly-discovered continent of Australia. The conclusion that all swans are white, supported by millions of observations spanning thousands of years, was invalidated by a single observation of a black swan.
This asymmetry is known more generally as the problem of induction. You can collect a boatload of data to support a conclusion, but you can still miss a low-probability, high-impact outlier lurking deep within the Fog of Uncertainty. The classic example is a farm animal which, through repeated observations, learns that humans are walking food dispensers. With each passing day, a turkey trained in statistical inference calculates (with increasing confidence) that the arrival of a human will result in a free meal. The turkey’s confidence reaches its highest point on the Wednesday before Thanksgiving – the fateful day the turkey makes a one-way trip to the slaughterhouse.
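To make the turkey’s doomed statistics concrete, here’s a sketch using Laplace’s rule of succession – one reasonable way to model the bird’s growing confidence, though the parable doesn’t prescribe any particular formula:

```python
# The turkey's inductive confidence that "human arrives -> free meal,"
# modeled with Laplace's rule of succession: (successes + 1) / (trials + 2).
def confidence_after(feedings: int) -> float:
    return (feedings + 1) / (feedings + 2)

for day in (1, 10, 100, 1000):
    print(f"Day {day:>4}: {confidence_after(day):.1%} confident that humans bring food")
# Day 1000: 99.9% confident -- peak confidence, right before Thanksgiving
```

The model is flawless right up until the moment it matters most.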
Anatomy of a Black Swan (The Concept, Not the Critter)
In his book The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb lays out three criteria that must be met for something to qualify as a Black Swan event:
It must be improbable.
It must be impactful.
It must be unpredictable before it happens. Afterwards, human nature makes us believe that the event was more predictable and explicable than it really was.
Here’s where Black Swan events fit into our 3-D Impact × Probability × Predictability framework:

Under Taleb’s definition, Black Swan events are High Impact × Low Probability × Low Predictability. Note that a high-probability event not happening is also a Black Swan. For example, let’s say you bought a new smartphone to replace the one you lost to gravity. The moment you tear the packaging off your new device, it flies straight up into the stratosphere like a giant middle finger in defiance of gravity.
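If it helps to see the three axes as a data structure, here’s a minimal sketch (the 0.5 cutoffs are arbitrary placeholders we chose for illustration – Taleb doesn’t define numeric thresholds):

```python
from dataclasses import dataclass

@dataclass
class Event:
    impact: float          # 0 = trivial, 1 = life-changing (subjective!)
    probability: float     # your expected likelihood: 0 = never, 1 = certain
    predictability: float  # 0 = deep in the Fog of Uncertainty, 1 = clockwork

def is_black_swan(event: Event) -> bool:
    """High Impact x Low Probability x Low Predictability."""
    return (event.impact > 0.5
            and event.probability < 0.5
            and event.predictability < 0.5)

pandemic = Event(impact=0.95, probability=0.02, predictability=0.1)
print(is_black_swan(pandemic))  # True -- at least from the general public's perspective
```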
Examples of Black Swan events in leadership
An anodyne coworker with no criminal history (not even a parking ticket) goes on a murderous rampage. Afterwards, the media will showcase numerous interviews with people who “had no idea” or “can’t believe such a gentle person was capable of atrocious acts.”
You lose a high-volume customer (or a rainmaker with a loyal client base) to a competitor, causing a double-digit percentage decline in revenue.
An opponent’s blunder allows you to seize territory, market share, or a promotion for your own gain.
Mass layoffs in careers traditionally regarded as “stable” or “safe” (e.g. civil servants, tenured professors, jobs protected by unions).
A profitable company (adored by Wall Street and featured on the cover of Forbes) implodes upon the revelation of an Enron-style accounting scandal.
Discovery of a secret that allows your company to monopolize a market before competitors can put their pants on.
A competitor unveils a revolutionary product before your organization can put its pants on.
Being sued by a longtime friend or business partner, instantly pushing you out of a collaborative relationship and into the Straits of Conflicting Interests.
Recruiting someone with rare skills through an unconventional channel (your conventional HR systems will filter out anyone weird – including anyone weirdly talented).
Betrayals are a special case of Black Swan events. It’ll feel more personal (thus, more painful) when the same hand that feeds you is the one that wrings your neck. But not all Black Swans have negative outcomes. Our list included several examples of High Positive Impact × Low Probability × Low Predictability.
I see a black swan, you see a white duck
The probability of any event (i.e. its frequency of occurrence) is an objective fact. Probability can and should be measured by machines because human brains are poorly equipped for probabilistic thinking. It’s the opposite for the event’s impact and predictability – those depend entirely on the person observing the event. Perspective is non-negotiable for measuring impact and predictability; a life-changing Black Swan event for one person could be a trivial non-swan (a duck?) for someone else.
Take the impact axis, for instance. In the ~10 minutes it took for you to read this far down, two dozen people (on average) have died from a road injury somewhere in the world[4]. Imagine kissing your partner goodbye in the morning, but they never make it home in the evening because they were killed in a traffic accident during their commute – a devastating Black Swan event. But you and I are oblivious to the 3,300 people (on average) who die every day in road accidents around the world. These deaths are statistics, not tragedies. We feel no impact, so these 3,300 daily deaths aren’t Black Swan events from our perspective.
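The arithmetic behind those numbers, assuming roughly 1.2 million road deaths per year (in line with the Our World in Data figure cited in footnote [4]):

```python
annual_road_deaths = 1_200_000        # rough 2021 global estimate, per Our World in Data
per_day = annual_road_deaths / 365    # ~3,300 deaths per day
per_ten_minutes = per_day / (24 * 6)  # 144 ten-minute windows per day

print(f"{per_day:,.0f} per day, {per_ten_minutes:.0f} per ten minutes")  # ~3,288 and ~23
```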
Now consider the predictability axis.
The sighting of a black swan (the bird) was a shock for the Dutch sailors, but not for the Aboriginal Australians who had been living there for millennia.
The COVID-19 pandemic was a Black Swan event for the general public, but not for epidemiologists who studied past pandemics.
The turkey’s visit to the slaughterhouse was a Black Swan event for the bird, but not for the butcher who was just collecting another paycheck.
The massive earthquake that will wreck the U.S. city of Seattle will be a Black Swan for most people. To geologists who study the Cascadia subduction zone, it’s an inevitability – the question is not “if” but “when.”
The indigenous people of Jamaica couldn’t predict the lunar eclipse of March 1504. Christopher Columbus knew it was coming, and he used that knowledge to intimidate the islanders into providing him with free food.
Nassim Taleb put it succinctly:
The Black Swan is a sucker’s problem. It occurs relative to your expectation.
Here’s a generous serving of irony for you
Knowing about a Black Swan’s existence makes it more predictable, and thus less Black Swan-y. A true Black Swan is a shadowy creature birthed from the Inconceivabilia of the Unknown Abyss. Once you’re aware of it, a Black Swan becomes a Gray Swan. It changes from Inconceivabilia to Mystery (see How to Discover Secrets in Leadership Land, Part 1). It progresses from “unknown unknown” to “known unknown.”
The irony in writing this article is that, by giving you numerous examples of Black Swans in leadership, we’ve downgraded them into Gray Swans. The Black Swans that we haven’t planted into your consciousness are the events that will help you or hurt you the most.
How science turns us into suckers
Earlier, we covered the scientific method’s glaring flaw: the “problem of induction” where 1,000 observations won’t prepare you for the Black Swan that appears on the 1,001st. What happens when we incorporate that fateful data point into our understanding of the world? Do we learn that we can’t predict nearly as well as we believed? Do we learn that the most consequential events are the inconceivable Black Swans that we haven’t witnessed, still lurking within the Unknown Abyss?
No.
By faithfully following the scientific method, we fixate on what’s known and visible (knowledge) while ignoring the contents of the Unknown Abyss (anti-knowledge). Gobsmacked by a recent Black Swan, our minds scramble to re-establish order by spinning a coherent (but not necessarily true) narrative. Soon enough, we’re telling ourselves “I should’ve known better” or even worse, “I knew it all along!” Taleb called this retrospective distortion: believing that a Black Swan event was more predictable than it actually was, but only in hindsight.
Following one high-impact event, we build science-based models to predict the next one (but those models are usually wrong because it’s difficult to predict with a sample size of one). This obsession with science-ing the hell out of a single observation creates a mental block; it’s like tunnel vision for one’s thoughts. It makes us less prepared for the next Black Swan, which will creep up on us from behind while we’re hunched over our crystal balls.
We (the authors) are suckers for the same errors in judgment that we’re railing against. The 9/11 terrorist attacks should’ve taught us that the next major terrorist attack will be a Black Swan event that we can’t even imagine, let alone prepare for. We should be looking down at our feet, where terrorists could poison the water supply and claim 10× or 100× the number of 9/11 victims. Instead, we look up at the gleaming steel-and-glass skyscrapers and shiver at the thought of another plane crashing into them.
Our Black Swan blindness also applies to positive impacts. After learning about successful people, our instinct is to imitate them. In many cases, that’s the exact opposite of what we should do. We study Hannibal, Steve Jobs, and Einstein because they did unprecedented things. The next brilliant commander will not cross the Alps. The next talented entrepreneur will not create beautiful consumer electronics. The next ingenious scientist will not discover E = mc². Instead, they will do something unprecedented, something inconceivable, something unpredictable – essentially, a Black Swan.
It took us many years of copying past innovators (and failing to replicate their success) to realize that we were science-ing this all wrong. We were learning everything about those pioneers except for the things that mattered.
Leadership in an Unpredictable World
Around the time Frank Knight developed his ideas of Knightian risk and Knightian uncertainty, another fellow named Karl Popper was hard at work solving the problem of induction. Popper believed scientific theories could be disproven but never verified. In Popper’s worldview, there are two types of scientific theories:
Theories known to be wrong because they’ve been disproven through rigorous testing, and
Theories not yet known to be wrong because they haven’t been disproven yet.
This is the opposite of the grade-school scientific method, where you collect data until you conclude that you’ve proven your hypothesis. Popper’s idea, called “falsifiability,” is both powerful and weird. It’s powerful because you’ll never know if all swans are white, but you can disprove the notion with a single piece of evidence: a black swan. It’s weird because it means that the theory of gravity can never be definitively proven. On the flip side, you can disprove gravity if you let go of your smartphone and it falls straight up into the sky, forcing you to buy yet another one.
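Popper’s asymmetry translates almost directly into code. A toy sketch of our own (not anything Popper wrote, obviously):

```python
# A Popperian hypothesis is never proven -- only "not yet disproven."
class Hypothesis:
    def __init__(self, claim: str):
        self.claim = claim
        self.falsified = False
        self.confirmations = 0  # supporting evidence accumulates but proves nothing

    def observe(self, consistent_with_claim: bool) -> None:
        if consistent_with_claim:
            self.confirmations += 1  # 7,384 white swans later, still unproven
        else:
            self.falsified = True    # one black swan settles it forever

swans = Hypothesis("All swans are white")
for _ in range(7_384):
    swans.observe(True)
print(swans.falsified)  # False -- merely "not yet known to be wrong"
swans.observe(False)    # a Dutch sailor reaches Australia
print(swans.falsified)  # True -- a single observation is enough
```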
This asymmetry between proof and disproof is simple and elegant. Applying the Popperian worldview to leadership means there are two types of leaders:
Leaders who have experienced a career failure and are now in the Career Swamp or Silent Graveyard
Leaders who have not failed…yet.
Just as no scientific theory is ever “right,” no leader can ever be considered definitively “successful.” Is your CEO a “successful leader” or a failure waiting to happen? After all, Executive Mountain is a volcano known for its High Impact × Low Probability × Low Predictability eruptions.
Escaping the trap of false certainty
When we conclude that all swans are white and that pigs will never fly, our minds snap shut. We become resistant to further analysis. We’ve fallen for the trap of false certainty. The truest mark of a sucker is to believe that Black Swan events only happen to other people. When the Black Swan surprises you from behind, it feels…unreal. Unfair. It’s humiliating and dehumanizing to become a statistic!
We like applying Popper’s approach to leadership because it frees us from the trap. The world is no longer divided between right and wrong; there is only wrong and “less wrong.” Instead of trying to be right all the time to impress our bosses and influence our subordinates, we can spend our energy being less wrong.
Only the paranoid survive
Harry from the legal department is funny and charming. Once promoted into the Middle Management Foothills, however, he becomes a “kiss up, kick down” bosshole who takes credit for his subordinates’ work and blames them for his mistakes. That’s the problem of induction applied to an organization: 1,000 observations of psychopathic charm will not prepare you for the emergence of the Machiavellian Black Swan.
But the Popperian approach to leadership will take you to some dark places. You’d have to assume that everyone is either:
Someone known to be a psychopath (their empathy has been disproven), or
Someone whose empathy has not been disproven yet.
You could do this with all sorts of undesirable behavior: a murderer, an embezzler, a freeloader…the list is inexhaustible. We previously wrote about the value of mistrust in How to Prevent Problems with a Pre-Mortem Analysis. And since constant paranoia is emotionally draining, we also covered trust and mistrust in Living With Uncertainty – How Leaders Can Manage Emotions.
If you must predict, stick with the “hard” sciences as closely as possible
The hard sciences tend to deal with objects, molecules, and abstractions. The soft sciences tend to deal with humans, whose free will adds tremendous randomness to any model and confounds the prediction attempts of the “experts.”
Psychology seems to be the softest field in which we have any predictive power at all, and every psychological prediction is weak. You can use psychological tricks in your next negotiation, but it won’t guarantee success like a Jedi mind trick would. At best, you’d nudge the other party toward a slightly more favorable outcome.
Check back next week for Risk and Uncertainty in Leadership: 4-Dimensional Thinking. We’ll spend the final article in the risk & uncertainty series on how to make better decisions under uncertainty.
This is post #9 in the Leadership Land Consistency Challenge, Phase I. We’re building better writing habits by publishing weekly between 12/20/24 – 3/7/25, instead of once every someday.
1. Some of you might work entirely in human-controlled fields, like a nonprofit organization that stays solvent by applying for government grants handed out under clearly-defined rules. These decisions are both impactful and computable, but they’re an oddity in leadership.
2. Even physics isn’t immune to randomness and uncertainty. After Newton was (probably not) bonked in the head by an apple in 1666, classical mechanics was all we had for about two centuries. After the 1800s, physicists had to add quantum mechanics for teeny-tiny things, general relativity for ginormassive things, and special relativity for super-duper fast things.
3. Tetlock later pushed the idea of “superforecasting,” claiming to have figured out a way to predict more accurately than a dart-throwing monkey. Nassim Taleb, who cited Tetlock’s early work in The Black Swan, has become a vocal critic of Tetlock’s newer work. Taleb’s armor-piercing question is: “if superforecasting is so good at predicting, why haven’t the practitioners become rich from scrying their crystal balls?” We’re with Taleb on this one.
4. Rough estimate based on the 2021 death rate statistics from Our World in Data.