What generative AI does to what leaders know about their own organization
Edward Geist showed in 2023, in Fog-of-War Machines (Chapter 5 of Deterrence under Uncertainty: Artificial Intelligence and Nuclear Warfare, Oxford University Press, pp. 169–188), how states can use AI as a weapon of obfuscation. Toby Stuart showed in HBR’s April 2026 piece The Future Is Shrouded in an AI Fog that AI has produced a different kind of fog, one that hangs over markets and careers. Both look outward. What happens inside has barely been described.
Fog of war, before AI
The term goes back to Carl von Clausewitz's On War (1832). Three-quarters of what decisions rest on, he wrote, lies “in a fog of greater or lesser uncertainty.” Management discourse has used the phrase ever since to describe what leadership structurally cannot know: what employees actually work on, where friction sits, what gets said in the hallway.
The classical tools are imperfect: 1:1s, skip-levels, standups, management by walking around. Employees filter; status updates are political texts. But these tools have a property that is rarely named: known bias profiles. When a senior colleague reports that everything is on track, her manager knows she is structurally over-optimistic, because he has known her for years. The information is filtered. The filters are known.
Before broad AI adoption, organizations lived with fog. Nobody expected full visibility. The tools were calibrated for the invisible, not for its dissolution.
The promise and the early findings
In October 2023, Mark Purdy and A. Mark Williams set out the optimistic version in HBR: How AI Can Help Leaders Make Better Decisions Under Pressure. AI as co-pilot, sounding board, decision helper. Data provenance is named as a residual risk, in a subordinate clause.
Two years later, what was a subordinate clause had become the structural problem. In September 2025, a research group from BetterUp Labs and the Stanford Social Media Lab delivered the quantitative counter-finding: AI-Generated “Workslop” Is Destroying Productivity. Forty percent of the 1,150 U.S. employees they surveyed had received AI-generated output in the past month that looked good but produced rework. One hour and 56 minutes of extra effort per instance. $186 per person per month. Over $9 million per year for an organization of 10,000.
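A quick back-of-envelope check, on an assumption of mine that the piece does not spell out, namely that the $186 monthly cost applies to roughly the 40 percent of employees who receive workslop:

$$0.40 \times 10{,}000 \times \$186 \times 12 \approx \$8.9 \text{ million per year,}$$

which is in the same range as the reported “over $9 million”; the exact figure depends on the prevalence the authors actually used.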
Stuart added the strategic version in April 2026. Geist had the military one in 2023. What none of them describes is what happens inside an organization that uses AI day to day.
Five old phenomena that AI scales
Nothing that follows is new. All five phenomena existed before AI. What has changed is scaling. Generative AI lets individuals parallelize tasks. With parallelization, the standards, heuristics, and filters each person brings to their work scale too. The effect runs both ways. Whoever works carefully multiplies their impact. Whoever passes assumptions on multiplies those too.
Swayam Bagaria has already named the underlying idea: in The Fog of AI (Harvard Divinity Bulletin, Autumn/Winter 2025), he describes an “uncertain informational provenance” that AI casts around the concepts of human life. What follows is a translation of that observation into the operational.
Provenance fog
The receiver does not know whose judgment lies behind a piece of output. Did the senior colleague think it through herself? Did she ask AI and write up its synthesis? Did she forward AI output unchanged? Three radically different levels of trust. Same output. No visible provenance.
With AI’s help, I drafted a conference talk in a domain I have worked in for years. The language was professional, the structure clean. An analyst on the team read it and said, after a short look, that the premise was wrong. If I had sent the draft to someone outside the domain, it would probably have passed. Not because it argued well, but because it sounded good.
Ghostwriting and staff drafts existed before. The attribution was conventionalized. AI suspends that convention. Any output can have any provenance.
Verification fog
“I checked it” has multiple meanings now. A marketing colleague tells a tech lead that the company website blocks AI crawlers. The source: a screenshot from an AI conversation. The tech lead reviews the robots.txt, compares it against the logs, walks through the configuration. Half an hour later the result is in: the website is configured correctly. Three people are in the loop, and a hallucination has traveled through the team as a critical issue.
The damage is not primarily the time. It is the small erosion of credibility that every future “AI found …” message now carries with it.
Shadow-skill fog
In every organization today, parallel AI workflows are running, built privately by individual employees. Prompt libraries, skills, custom agents. Most of them are documented nowhere.
A tech lead spends several weeks building a pipeline-review skill: three systems, three heuristics, one briefing. The output is consistent. The heuristics live as a Markdown file on his laptop. When he leaves the organization, institutional knowledge whose existence nobody knew about leaves with him.
Tribal knowledge in individual heads is old. What has changed is the threshold. Casting one’s judgment into a reproducible workflow used to take effort. Today it is a matter of hours. Whoever works carefully multiplies their impact. Whoever encodes assumptions without reflection cements them. IBM and the Cloud Security Alliance frame Shadow AI as a compliance and security question. The skill aspect is still under-discussed there.
Confidence fog
Language models sound especially confident when they are wrong. A 2026 MIT study measured 34 percent more high-confidence words such as “definitely” or “without a doubt” when an answer was hallucinated than when it was correct. Research at Carnegie Mellon's Dietrich College has confirmed the pattern across multiple model generations.
Rhetorically skilled managers have always existed. The skill was acquired through experience. Today that polish comes built into language models, available in every output regardless of the depth of the person behind it. The heuristic “whoever sounds convincing probably knows what they are talking about” held for a long time. It does not anymore.
Drift fog
Auto-summaries and auto-tags write themselves into CRM records without anyone deciding these outputs should exist. A typical pattern: the auto-summary of an important customer interaction misrepresents the priorities. A week later, the same in another account. No one knows how many false summaries have already been cemented across the systems.
Jonathan Rosenthal and Neal Zuckerman (Decision-Making by Consensus Doesn’t Work in the AI Era, HBR April 2026) coined a related term: Success Theater. They mean the reports that middle managers curate. Drift fog is the upstream machine stage. It is not managers who curate the picture but defaults, before any human is in the loop.
What is actually new
Before AI, organizations lived with fog. They built tools that assumed exactly that condition. 1:1s work because a manager knows her direct reports’ bias profiles. Skip-levels work because two hierarchy levels lay different filters on the same reality. Standups work because “yesterday I did X” is a statement about human work whose reproducibility is known.
Information asymmetry was never a flaw in organizations. It was the tool itself. That a manager knows how her people think, knows their optimism corrections, can read the small distortions in a status update: these are not side effects of classical management. They are its functional mechanisms.
Generative AI inverts these mechanisms. It produces output without a bias profile, because the model has none anyone could know. It scales packaging faster than packaging can be learned. It distributes provenance across so many hands that origin is, in practice, no longer reconstructable.
The old tools worked because managers knew how their people think. With AI, that bias knowledge disappears. What remains can be described as an Accidental Fog-of-War Machine: not a designed machine, but a byproduct of daily AI use.
Between Geist and Stuart
Geist ends with a catch: the deceiver can never be sure the opponent does not see through the deception. Stuart ends with a question he does not answer: whether institutions will be redesigned for opacity before all kinds of investment seize up.
The layer in between is harder than either. With a deliberate opponent, there would be a strategy. With a market phenomenon, there would be a hedging logic. With the emergent fog inside one's own house, there is only the necessity to understand the phenomenon before falling for an easy answer.
The tools of the pre-AI organization were made for filtered information with known provenance. The tools of the post-AI organization will have to be made for unfiltered information without provenance. What they will look like, no one yet knows.
AI does not make us see less. It makes us see differently. What used to be known, because it could be read from people’s bias profiles, is no longer known. What used to be invisible, because it sat in the shadow of hierarchy, is now visible through auto-summaries and daily briefings. Seeing and not-seeing have shifted, not expanded.

