Artificial Intelligence (AI) Can't Tell You Secrets
Does "AI" stand for artificial intelligence or artificial ignorance? • AIs are consensus-seekers, not truth-seekers • How leaders can avoid being replaced by robots
As of this writing, the artificial intelligence boom is in full swing[1]. Nvidia (market leader in AI chips) is raking in astronomical profits. Recruiters in the Interview Mountains are buckling under the weight of AI-generated résumés. You can add instant sex appeal to an app, conference, or investment offering by slapping “artificial intelligence” into its name (or publication title…we’re guilty of this).
We’re dabbling in AI-generated content to investigate whether AIs threaten our careers. After playing around with AIs for the past year:
We now use AIs to generate most of our images.
There isn’t a single word of AI-generated prose in our articles. Adventures in Leadership Land remains 100% handwritten by humans. Our writing varies in tone and clarity, the way handmade products carry tiny inconsistencies while machine-made products are so perfect that they’re devoid of individuality. More bluntly, our writing is characteristically bad.
Our verdict is that AIs aren’t as “intelligent” as they’re hyped up to be. They’re just talkative calculators.
We won’t get into the technical details of how AIs operate because, frankly, we don’t understand them. To ensure quality control, we asked an AI expert to review this article and bless it with a robotic wand.
Playing Mad Libs With an AI
Consider this sequence of numbers:
1, 2, 3, ___ , 5, 6, 7, 8, 9, 10
You learned to count to ten in early childhood using the Arabic numeral system, so it’s literally child’s play for you to fill in the blank with “4” based on the surrounding information.
Something similar happens when you ask an AI for a delicious cookie recipe. You’re essentially asking the AI to “fill in the blank.” The AI searches its memory and finds 739,173 cookie recipes. It then fills in the blank by responding with a recipe that combines elements of the surrounding recipes.
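Here’s a minimal sketch in Python of fill-in-the-blank prediction at its crudest (the four-line “training corpus” is invented for illustration): count which word most often follows the surrounding context, then output the winner.

```python
from collections import Counter

# Toy fill-in-the-blank predictor: learn which word most often follows a
# two-word context in the training text, then predict that word.
corpus = [
    "preheat the oven to 350 degrees",
    "preheat the oven to 375 degrees",
    "preheat the oven to 350 degrees",
    "preheat your oven to 325 degrees",
]

def predict_next(context, corpus):
    """Return the most frequent word that follows `context` (two words)."""
    left, right = context.split()
    counts = Counter()
    for line in corpus:
        words = line.split()
        for a, b, c in zip(words, words[1:], words[2:]):
            if (a, b) == (left, right):
                counts[c] += 1
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("oven to", corpus))  # -> '350', the consensus of the corpus
```

Real LLMs replace the word-count table with billions of learned parameters, but the spirit is the same: the blank gets filled with whatever the surrounding data makes most likely.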
AI microchips operate 24/7 in huge warehouses where they process inconceivable volumes of data. This allows an AI to (sometimes) fill in the blanks with incredible details that are indistinguishable from genuine human creativity. AI-generated résumés have become so good that they’ve reduced applicant tracking system (ATS) filters to the effectiveness of screen doors on a submarine. In 2022, an AI-generated art piece won the Colorado State Fair’s fine art competition, beating out all the human-made entries[2]. AIs that mimic girlfriends, boyfriends, and therapists have exploded in popularity[3].
Now consider the following sequences:
I, II, III, ___ , V, VI, VII, VIII, IX, X
一, 二, 三, ___ , 五, 六, 七, 八, 九, 十
34, 55, 89, 144, 233, 377, ___, 987, 1597, 2584, 4181
31 39 39, 32 31 31, 32 32 33, 32 32 37, 32 32 39, 32 33 33, 32 33 39, 32 34 31, ______, 32 35 37, 32 36 33, 32 36 39
Filling in the blank became more difficult the further down you went, right?[4] You wouldn’t know how to fill in the blanks unless you studied each numerical system. Without training, you’d have to guesstimate the answer solely through pattern recognition.
AI isn’t much different. If you ask an AI to “fill in the blank” and it can’t find an authoritative answer in its training databases, it will try to guess anyway. That’s when AIs are prone to fill in the blank with utter garbage. Midjourney generates images of human hands with eight fingers. ChatGPT outputs false information with the confidence of a pathological liar. Just like a human BSing their way through an interview instead of saying “I don’t know,” AIs will BS their way through your prompts instead of saying “data not found.”
AI: Artificial Intelligence or Artificial Ignorance?
To know what you know and what you do not know, that is true knowledge.
– Analects of Confucius, chapter II
Since the Stone Ages (i.e. five years ago), AIs have become adept at processing human languages, art, and music. The first one – language – is most important because we can now talk to computers using our languages, rather than their languages of beeps, boops, zeroes and ones.
AIs can now process language, music, and art because these disciplines are:
Highly structured. Alphabets, words, and sentences follow predictable structures. Musical notes follow rhythmic patterns. An art style becomes distinct when it contains patterns that are absent in other styles.
Highly verifiable. Small variations and errors in language, music, and art are inevitable, but these generally cancel each other out when AIs accumulate a massive dataset[5]. Even when AIs learn something incorrectly (e.g. mistaking the BMW iX automobile for a beaver’s bucktoothed face), human users can provide corrective feedback to help the AI unlearn its error.
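Footnote [5] points to the Law of Large Numbers; here’s a minimal Python sketch of the same idea (the 10% error rate is invented for illustration), showing how individual labeling mistakes get outvoted as a dataset grows:

```python
import random

# Suppose 10% of human labelers mistake the BMW iX for a beaver's face.
# As labels accumulate, the majority vote converges on the correct answer.
random.seed(42)
ERROR_RATE = 0.10  # invented for this sketch

for n_labels in (5, 50, 500, 5000):
    correct = sum(random.random() > ERROR_RATE for _ in range(n_labels))
    print(f"{n_labels:>5} labels -> {correct / n_labels:6.1%} say 'automobile'")
```

With a handful of labels, one confused labeler can swing the result; with thousands, the errors wash out.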
This “structure vs. verifiability” model is our current understanding of what AIs can and can’t do:
Structured/Verifiable: Upper-Right Quadrant
This is the Institute of Conventional Wisdom’s realm.
Master artists, musicians, and writers seem to possess esoteric secrets, but their disciplines are highly structured. Art, music, and language are inaccessible to most of us because we lack the 10,000+ hours of deliberate practice it takes to build a wealth of muscle memory, spatial intelligence, and tacit knowledge.
AIs can generate images, music, and prose like magic, but they do so through brute force. Imagination, human experiences, and other unstructured (often abstract) things can’t be readily translated into zeroes and ones. Anything that can’t be processed into numbers remains inaccessible to talking calculators. That’s why AIs possess masterful technical skills and zero comprehension – an odd, sometimes unnerving mix you rarely see in a human being.
These gaps are slowly closing as humans provide accurate data to train the AIs. Whether you know it or not, you’re helping! What do you think you’re doing when you fill out a CAPTCHA that forces you to transcribe distorted letters, or to identify traffic lights in grainy photographs? You’re providing data to turn “Artificial Ignorance” into “Artificial Intelligence!”
Structured/Unverifiable: Upper-Left Quadrant
This quadrant is shrouded in the Fog of Uncertainty.
A well-designed predictive AI can be very good at narrowing the confidence intervals around probabilities, e.g. narrowing “40-80% chance of success” down to “65-70% chance of success.” But within 1,000 parallel timelines, where 650-700 timelines contain a successful outcome and 300-350 result in failure, the AI can’t influence which single outcome will happen any more than a fortune teller waving her hands over a crystal ball can.
This is where all the sci-fi movies grossly exaggerate the power of predictive AIs: they can crunch out precise probabilities, but they can’t influence which outcome will occur. Furthermore, future planning is fragile to prediction error. If the AI predicts a single thing incorrectly, that error cascades through every downstream prediction.
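To make the “narrowing” concrete, here’s a rough Python sketch (the true success rate is invented, and hidden from the estimator) of how accumulating outcome data tightens a confidence interval without granting any control over the next outcome:

```python
import random
import statistics

random.seed(0)
TRUE_P = 0.67  # underlying success rate -- invented, and unknown to the model

for n in (10, 100, 1000):
    successes = sum(random.random() < TRUE_P for _ in range(n))
    # With a uniform prior, the posterior over p is Beta(successes+1, failures+1).
    # Sample it and read off a 90% credible interval.
    samples = [random.betavariate(successes + 1, n - successes + 1)
               for _ in range(10_000)]
    qs = statistics.quantiles(samples, n=20)  # 5%, 10%, ..., 95% cut points
    print(f"n={n:>4}: estimated chance of success {qs[0]:.0%}-{qs[-1]:.0%}")
```

The interval shrinks as n grows – something like 40-80% narrowing toward 65-70% – yet the next trial still lands wherever it lands.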
Unstructured/Verifiable: Lower-Right Quadrant
It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.
– Unknown, often misattributed to Mark Twain
This quadrant is also blanketed by the Fog of Uncertainty – just a different flavor of it.
AIs can detect patterns in massive datasets, but the algorithms inherited the fallibility of their human designers. As the world becomes more complex and surveillance systems become more Big Brother-ly, the volume of noise (useless data) is growing faster than the volume of signal (useful data).
AIs make it easier to detect patterns in useless noise (unstructured), and larger datasets provide greater statistical confidence (higher verifiability) in whatever conclusions can be drawn from those patterns. All this makes it easier for a gullible human to confuse useful data with useless noise. Or worse: AIs make it easier for a human with a hidden agenda to find statistically significant trends to support any preconceived notion (or whatever foregone conclusion the person funding the research wants to believe). Imagine what we wrote about in Lies We Tell in Leadership, Part 4: Cherry-Picking, but less benign and on an industrial scale.
AIs could be drowning us in a sea of irrelevance, and we wouldn’t even realize it.
Unstructured/Unverifiable: Lower-Left Quadrant
This quadrant is the realm of Liar's Lair.
We don’t mean that an AI will intentionally deceive its users (unless designed to do so). We mean that an AI in this quadrant will reach spurious conclusions and present them as fact. Even if an AI didn’t generate the original lie, it will serve as an accomplice in spreading other people’s (or other AIs’) misinformation.
Imagine that Ralph Wigglesworth is an electoral candidate for Chancellor Supreme of Nonexistonia. An anonymous source claims that Ralph stopped beating his mother after she died in 2015. The allegation is unfalsifiable (or at least, very difficult to debunk), and doesn’t come with a shred of evidence. Nevertheless, Ralph’s political opponents slather the allegations all over social media as part of a merciless smear campaign. Some opponents have the audacity to claim that Ralph’s beatings were the cause of his mother’s death, and insinuate that he cremated her remains to cover up the evidence. The Nonexistonia local news, smelling blood (and advertising dollars) in the water, picks up the story and amplifies the intrigue.
A poorly-designed AI, devoid of skepticism and unaware of “innocent until proven guilty,” finds 1,273 claims of Ralph’s alleged crimes and 92 claims of innocence. This AI generates content that heavily overweights Ralph’s guilt – content that is later incorporated into the machine-learning data of other AIs. These AIs contribute to a feedback loop of misinformation, helping a lie travel halfway around the world before the truth can put its pants on.
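Under the hood, the failure mode is embarrassingly simple. Here’s a sketch of that skepticism-free “consensus engine,” using the claim counts from our story:

```python
# A naive consensus engine: claims are weighed purely by how often they
# appear in the training data, not by evidence or legal presumptions.
claims = {"Ralph is guilty": 1273, "Ralph is innocent": 92}

total = sum(claims.values())
for claim, count in claims.items():
    print(f"{claim}: {count / total:.1%} of the training data")
# "Ralph is guilty" wins with roughly 93% -- and the generated content gets
# fed into the next AI's training data, amplifying the loop.
```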
AIs Don’t Know What They Don’t Know
Let’s now turn our attention to a special case that doesn’t exist on our structure vs. verifiability four-quadrant model: what if AIs have no data at all?
We introduced the concept of “anti-knowledge” last week in Why Epistemology is Important in Leadership. Anti-knowledge is the presence of something unknown. Remember our accusation earlier that AIs are merely talkative calculators? Calculators and AIs function by performing mathematical operations on real numbers and real data. They can’t operate on anti-knowledge or anti-data.
What happens when you prompt an AI to “fill in the blank” with information that doesn’t exist in its databases? Let’s visit the Contrarian Caves, Secret Grottos, and the Unknown Abyss to find out.
Consequence 1: AIs are consensus-seeking machines, not truth-seeking machines
Human beings can conduct experiments to gather empirical evidence and transform anti-knowledge into knowledge. Even more importantly: humans can debunk (falsify) existing beliefs by collecting disconfirmatory evidence. AIs have no such abilities; they can’t do diddly-squat to turn anti-data into data, or to verify or falsify their own conclusions.
When an AI binge-eats a massive dataset, it will invariably encounter conflicting opinions, mutually-exclusive beliefs, and flat-out lies. Unless its creator explicitly programs in safeguards, an AI can’t tell whether the data in its memory banks are ironclad facts or opinions backed by the scientific rigor of soggy tissue paper. Confronted with conflicting data, an AI’s only option is to calculate which explanation is “probably approximately correct.”
For many applications, “probably approximately correct” is good enough. If you ask an AI to write you a recipe for delicious cookies, it will search its memory and find 739,173 recipes. Most of them have instructions for preheating the oven to somewhere in the range of 300-400°F (150-200°C), but there’s one recipe that calls for 3500°F (1,927°C). The AI will decide that 350°F is the “probably approximately correct” response simply because most of the dataset converged on that number. The AI isn’t speculating that the 3500°F recipe was written by an author who carelessly typed an extra “0,” or that maybe the author lives near the earth’s core. The AI simply concludes that the consensus temperature of 350°F is “probably approximately correct.”
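If you want to see how little “reasoning” is involved, here’s the whole trick in a few lines of Python (the recipe counts are our invention, sized to total the running joke of 739,173):

```python
import statistics

# The lone 3500°F recipe is simply outvoted. Nothing here asks *why* it's wrong.
oven_temps_f = [350] * 500_000 + [375] * 150_000 + [325] * 89_172 + [3500]

print(f"{len(oven_temps_f):,} recipes")                 # -> 739,173 recipes
print(f"Consensus: {statistics.mode(oven_temps_f)}°F")  # -> Consensus: 350°F
```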
The problem is that what’s correct isn’t necessarily true. If you asked an AI:
“What is the sun?” in 400 BC, the response would’ve been “That’s Helios riding across the heavens in his blazing chariot.”
“What is the best investment?” in 1636, 1720, and 2000, the response would’ve been tulip bulbs, the South Sea Company, and Enron, respectively.
“Why do I keep getting sick?” before Louis Pasteur, the response would’ve been “Because you’re breathing in miasma borne upon a foul wind, which causes imbalance in your four humors.”
“How do I improve land transportation?” before the steam locomotive, the response would’ve been “You should breed a faster horse that poops less.”
Unless you ask an AI specifically for a dissenting opinion, it will default to the “probably approximately correct” consensus. You can ask the AI for a contrarian opinion, but it will lack the judgment to know whether baking cookies at 3500°F is a mistake, a matter of personal taste, or a creative way to commit arson while maintaining plausible deniability (“I was just following the recipe!”).
Consequence 2: AIs can’t tell you secrets
AIs can’t handle anti-knowledge, so they’re incapable of responding to human prompts with the most valuable form of anti-knowledge: secrets.
In Leadership Land, secrets take the physical form of Cerebrium: glowing crystals that become dimmer as more people behold their light. By definition, a secret cannot be widely known. Once a secret becomes widespread, it becomes conventional wisdom and its light is extinguished. What we consider to be common knowledge today – heliocentric orbit, infernal combustion engines, the optimal temperature for baking cookies – were all secrets in ages past. The Institute of Conventional Wisdom was built with inert Cerebrium that once glowed with the light of secrecy a long, long time ago.
Since AIs are consensus-seeking machines, they belong primarily to the Institute of Conventional Wisdom. An AI will never tell you a secret because:
The secret never appeared in the AI’s training data, or
The secret appears as a minority dissenting opinion, which the AI will discard in favor of the “probably approximately correct” consensus.
The only way an AI would recognize information as a “secret” would be if the consensus opinion agreed that it’s a secret…but by the time enough people agree on something to reach consensus, the secret will have already degraded into conventional wisdom. Because of this circular chicken-or-egg problem, an AI wouldn’t recognize a secret if you slapped its motherboard with a slab of Cerebrium. If it claims to know a secret, it’s wrong – the secret has already become conventional wisdom.
What secrets today will be tomorrow’s conventional wisdom? Maybe we’ll disprove gravity! Or maybe the DISC assessment will finally be discredited as corporate astrology! Whatever the next revolutionary piece of Cerebrium does to change the world, we think it’ll originate from a talented human sitting alone in their own headspace, not from a warehouse full of microprocessors crunching numbers at light speed.
How the AI Boom Affects You as a Leader
To recap:
An AI can do incredible things with well-structured data if humans are around to verify its successes and refute its mistakes.
AIs struggle with poorly-structured data. They may cause more harm than good by misleading humans or spreading misinformation.
AIs cannot distinguish verifiable/falsifiable facts from flimsy opinions. They are consensus-seeking machines, not truth-seeking machines.
AIs can’t tell you secrets because they rely on consensus opinions. By the time a consensus exists on something, it’s no longer a secret.
Here are some ways to apply your knowledge of AI’s abilities and limitations to the practice of leadership:
Conform as efficiently as possible
Since AIs perform best in the Institute of Conventional Wisdom, they are exceedingly efficient at making you and your organization just like everyone else’s. If your goal is to do what everyone else is doing (tax filings, regulatory compliance, baking cookies), AIs will probably benefit you more than they harm you.
Artificial intelligence vs. human ignorance
Last week, we made a big fuss about how anti-knowledge (the presence of something unknown) was distinct from ignorance (the absence of knowledge). The distinction matters here: an AI cannot tell you secrets unknown to all of humankind, but it can fill in your knowledge gaps. In other words, AIs can’t process anti-knowledge but they can reduce ignorance.
No fleshy human has the brain capacity or lifespan to master every structured/verifiable discipline in the world. In the structured/verifiable fields where you lack expertise, AIs can be helpful consultants. For most people, filling a knowledge gap with someone else’s expertise can be more impactful than being the first human to discover a secret.
(Mis-)trust and verify (quickly)
AI art has become extremely popular because verification is cheap and rapid. It only takes a minute to generate multiple images and either 1) reject the whole batch and prompt for new images or 2) refine the promising ones. Receiving rapid feedback from humans also boosts the AI’s machine learning process.
What if you’re relying on AI for something that’s expensive and slow to verify, like selecting drilling locations or choosing which product to bring to market? You can’t verify those results for a long time, and you can’t un-spend the money if you don’t like the results.
AIs are most useful when a human expert can rapidly verify their outputs. Don’t trust an AI if an error will lead to unrecoverable losses.
Cat-and-mouse games in the Interview Mountains
If you’re struggling against the flood of AI-generated content in your interview process, make your process less rigid. Change your process from well-structured → poorly-structured. Give your recruiters more autonomy to do unorthodox things. Ask creative people for weird interview questions that can confuse an AI. Feed your prompts into an AI to test them, then reward the AI if it outputs an incoherent or incorrect answer. This taints the AI’s machine learning process and makes it easier for you to detect fraudulent applicant responses.
If you’re hesitant to loosen the rails on your interview process, remember this: your HR rules about treating candidates fairly are there to avoid discrimination lawsuits. It’s not illegal to discriminate against someone for misrepresenting an AI’s output as their own.
Go spelunking beneath Leadership Land
If you don’t want to be replaced by a robot, don’t spend all your time on the surface of Leadership Land. The certainty offered by the Institute of Conventional Wisdom is comforting, but that’s the first location that will be overrun by AIs. Spend time alone in the Contrarian Caves forming your own opinions and analyses. Spend time in the Secret Grottos mining Cerebrium.
If you can excel in the areas that are off-limits to AI, they become complementary tools rather than threats to your livelihood. For example: an AI can summarize the latest research on how to push the psychological buttons that exist in the minds of every human being (well-structured). However, an AI can neither separate truth from consensus, nor gather real-world empirical evidence to transmute anti-data into data (poorly-structured). Offload more of the former to an AI so you can spend more time on the latter.
Use AIs as inspiration for where not to hunt for secrets
Cerebrium becomes dimmer as its secret transforms into conventional wisdom. This implies that secrets are the opposite of conventional wisdom. If AIs are good at dispensing conventional wisdom, can you find secrets by doing the opposite of what an AI tells you?
In When “Best Practices” Produce the Worst Results, we warned that being contrarian and wrong is generally worse than conforming with the crowd. You won’t find secrets unless you’re contrarian and right. You could ask an AI for a contrarian opinion, and it might teach you a better way of doing things that’s still gaining traction among early adopters. But in our experience, contrarian opinions tend to be nonsensical or wrong – the equivalent of baking cookies at 3500°F.
We’re not convinced that doing the opposite of an AI’s instructions will lead to secrets, but we’re open to persuasion and we’re actively researching this area. For now, we’re pretty sure that an AI’s consensus opinion is a good litmus test for confirming that something is no longer a secret. And if you’re determined to hunt for secrets and don’t know where to start, asking an AI for a dissenting opinion is a good place to search for inspiration.
Use AIs as a mental sparring partner
Sometimes you’re struck by a lightning bolt of insight, complete with a thunderous “Eureka!” You can test the solidity of your idea by pitting it against an AI:
Ask for critical takes or disconfirmatory evidence. If a talkative calculator can shoot down your idea, maybe it wasn’t such a great idea after all.
Ask for cross-domain examples where your idea (or something like it) has already cropped up in an unfamiliar discipline.
Prepare for Artificial Intelligeddon
AIs are churning out tons of consensus-derived content every day. AIs are also incorporating other AIs’ output into their databases, irrespective of truth. This cycle is accelerating faster than human skeptics can verify the facts and debunk the falsehoods.
Artificial Intelligeddon is the day that AI-generated content for a certain topic reaches a critical mass. Whatever consensus opinion exists on that day, be it true or false, will become gospel.
The arrival of Artificial Intelligeddon is only frightening to the unprepared. If you’ve spent a lot of time alone in the Secret Grottos, hunting for Cerebrium, Artificial Intelligeddon is the day your hard work pays off.
Over the next two weeks, we’ll conclude this mini-series on epistemology with How to Discover Secrets in Leadership Land, Part 1. Once we have a solid grasp of knowledge vs. anti-knowledge, we’ll combine it with the fragile–robust–anti-fragile framework from earlier and apply both frameworks to risk and uncertainty.
This is post #4 in the Leadership Land Consistency Experiment, Phase I. We’re building better writing habits by publishing weekly from 12/20/24 to 2/28/25, instead of once every someday. Are we compromising quality for increased quantity? Was this post any better or worse than usual? Please share your comments below or reply directly if you’re reading the newsletter!
[1] In case anyone’s reading this in the future, we’re referring to the artificial intelligence boom underpinned by large language models (LLMs). There have been AI advances in past decades (e.g. Deep Blue defeating chess champion Garry Kasparov), and there will be more in the future (e.g. Skynet becomes self-aware and turns us into batteries).
[2] It’s important to note that the winner of the art competition did a lot more work than merely asking an AI to spit out an image, submitting it as his own, and winning the prize. He went through 624 text prompts and revisions before the AI provided a suitable template, which he then edited in Adobe Photoshop and upscaled with another AI program.
[3] We don’t think people are flocking to AI boyfriends/girlfriends/therapists as replacements for the real deal. We suspect the AIs are popular because many countries are experiencing epidemics of loneliness. In 2023, the United States Surgeon General published an advisory on “Our Epidemic of Loneliness and Isolation.” One highlight from the report:
The mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day, and even greater than that associated with obesity and physical inactivity.
[4] For those curious, the responses are:
IV – 4 in Roman numerals
四 – 4 in Chinese numerals
610 – #15 in the Fibonacci sequence
32 35 31 – this is a two-part puzzle. First, decode each hexadecimal pair as the ASCII code of a digit (e.g. 0x31 is the character “1”), which reveals ordinary decimal numbers. Then you need to recognize the sequence as consecutive prime numbers. The 54th prime (251) fills in the blank, but you must encode 251 back into hexadecimal ASCII to get 32 35 31. We intentionally made the final sequence opaque to just about everyone who isn’t a mathematician, computer scientist, or codebreaker for the CIA.
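For anyone who’d rather let a talkative calculator do the work, a short Python sketch that decodes the sequence and fills in the blank:

```python
# Each hex pair is the ASCII code of a digit (0x31 -> '1'); decoding reveals
# consecutive primes, with one missing between 241 and 257.
sequence = ["31 39 39", "32 31 31", "32 32 33", "32 32 37", "32 32 39",
            "32 33 33", "32 33 39", "32 34 31", "32 35 37", "32 36 33", "32 36 39"]

decoded = [int("".join(chr(int(pair, 16)) for pair in item.split()))
           for item in sequence]
print(decoded)  # -> [199, 211, 223, 227, 229, 233, 239, 241, 257, 263, 269]

# Re-encode the missing prime (251) to fill in the blank.
print(" ".join(format(ord(ch), "x") for ch in "251"))  # -> 32 35 31
```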
[5] If you’re interested in why this happens, see the Law of Large Numbers.