The global financial crisis. European sovereign debt contagion. The cybersecurity wave. In each case, the data was available. The models existed. The professionals were qualified. And yet the institutions that should have seen it coming were caught flat-footed.
The common explanation is that these were "black swans" — unpredictable by definition. But that's not quite right. In each case, someone saw it coming. Someone connected the dots. Someone raised the flag. The question isn't whether the risk was visible. The question is why the people paid to see it didn't.
The answer lies not in the quality of their models, but in the way they attend to the world.
The Divided Brain
Psychiatrist and philosopher Iain McGilchrist has spent decades studying brain hemisphere asymmetry — not the pop-psychology clichés about "creative vs. analytical," but something far more fundamental about how we apprehend reality.
The left hemisphere, McGilchrist argues, is about re-presenting the world: narrow, focused attention on what we know how to name, measure, and model. It prefers the explicit, the procedural, the rule-based. It builds models and optimizes for utility. It is uncomfortable with ambiguity.
The right hemisphere is about being on the lookout: broad, vigilant attention to context, to the whole, to what might be new or threatening. It tolerates ambiguity. It perceives relationships and uses metaphor. It is open to the unknown.
| Left Brain — "Re-presenting" | Right Brain — "On the lookout" |
|---|---|
| Narrow, focused attention | Broad, vigilant attention |
| Materialist and reductive | Holistic and nuanced |
| Propositional, procedural, rules-based | Perceptive and participative |
| Model-making and utility | Openness and flexibility |
| Prefers what it knows | Tolerates ambiguity and the unknown |
Both modes are essential. The problem arises when one dominates — when the re-presenting mode crowds out the lookout mode, when the model becomes more real than the world it represents.
Modern enterprise risk management is almost entirely a left-brain enterprise. Risk registers, rating scales, and heat maps focus on what we know how to name, measure, and model. Useful — but they crowd out the right-brain capacity: broad, vigilant attention that catches what's emerging before it fits the framework.
A World of Uncertainty and Ambiguity
The Oxford Scenario Planning Approach uses the acronym TUNA — Turbulence, Uncertainty, Novelty, and Ambiguity — to describe conditions under which left-brain frameworks fail. Historical data doesn't predict novel situations. Procedural rules don't capture ambiguous contexts. Narrow attention misses the emerging threat at the periphery.
These conditions aren't an occasional disruption. They're the permanent operating environment. Digital and physical domains are now integrated. Supply chains span continents. A submarine cable cut in the South China Sea affects operations in Pennsylvania. An earthquake in India's National Capital Region cascades through your outsourced development team. A drought stresses the grid that powers your cloud provider.
The outside world — geopolitical, macroeconomic, climatic, infrastructural — is no longer external to your risk model. It is your risk model. And left-brain frameworks weren't built to see it.
"Our knowledge of the way things work, in society and in nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty."
— Kenneth Arrow, Nobel laureate in Economics
Wachovia: A Case Study in Hemispheric Failure
In 2007, the quantitative models at most financial institutions showed acceptable counterparty exposure. The historical data supported the assumptions. The regulatory boxes were checked.
But the left-brain view missed what mattered: the catastrophic balance sheet implications if mortgage assumptions failed at scale. This wasn't a question of counterparty credit limits or VaR calculations. It was a question of institutional survival — whether the capital base could absorb losses of that magnitude, whether liquidity would evaporate, whether the bank would exist in twelve months.
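To make the limitation concrete, here is a minimal, purely illustrative sketch of historical Value-at-Risk, the kind of backward-looking metric the institutions relied on. All numbers are hypothetical; the point is structural: a model calibrated only on benign history literally cannot see a loss larger than anything in that history until it has already happened.

```python
# Illustrative sketch only (not from any institution's actual model).
# Historical 1-day VaR: the loss threshold exceeded on the worst
# (1 - confidence) fraction of past trading days.
import random

random.seed(42)

# Ten years (~2520 trading days) of benign hypothetical daily returns,
# the only history this model has ever observed.
benign_history = [random.gauss(0.0005, 0.01) for _ in range(2520)]

def historical_var(returns, confidence=0.99):
    """Loss exceeded on only (1 - confidence) of the observed days."""
    losses = sorted(-r for r in returns)      # ascending: worst loss last
    index = int(confidence * len(losses))     # e.g. the 99th percentile
    return losses[min(index, len(losses) - 1)]

var_99 = historical_var(benign_history)
print(f"99% 1-day VaR from benign history: {var_99:.2%}")

# A regime change absent from the sample, say an 8% single-day loss,
# sits far outside anything the model can report as plausible.
crisis_day = -0.08
print(f"Crisis-day loss {crisis_day:.2%} vs. model VaR {var_99:.2%}")
```

The sketch shows a roughly 2% VaR estimate looking "acceptable" right up until a loss several times larger arrives. That is the left-brain failure mode in miniature: the answer is precise, defensible, and derived entirely from a past that no longer applies.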
Wachovia's market capitalization peaked at roughly $90 billion in April 2006. The Golden West acquisition — $25.5 billion for a California mortgage lender at the top of the market — was a left-brain decision: the models supported it, the deal metrics worked, the growth trajectory looked good. What the models didn't show was the world outside the model: the rot in mortgage origination, the interconnected fragility of the securitization chain, the way a housing correction could cascade into existential crisis.
The tells were there for anyone watching with right-brain attention. In May 2008, Wachovia appointed a new general counsel — one who specialized in protecting boards of directors from litigation. A month later, the CEO resigned. The CFO followed. These weren't model outputs. They were pattern signals: leadership preparing for the worst while the official risk metrics still showed green.
By September 2008, a "silent run" following the Washington Mutual collapse triggered the endgame. Wells Fargo acquired what remained for $11.5 billion.
The data to foresee this was available. But seeing it required right-brain attention: broad vigilance, tolerance for uncomfortable implications, willingness to read the signals that don't fit in a spreadsheet.
What Right-Brained Risk Requires
This isn't about replacing quantitative analysis with intuition. It's about weaving the two modes together — using left-brain precision where it applies, while maintaining right-brain vigilance for what lies outside the frame.
It requires asking different questions:
- To what do we attend? Are we focused only on what we already know, or scanning for what we don't?
- What do we perceive? Are we seeing the model, or the world the model represents?
- What narratives do we create or take in? Are we telling ourselves a story that makes us comfortable?
- How do we respond? Do we have the institutional courage to act on uncomfortable truths?
It requires different capabilities: cross-disciplinary education that connects risk to geopolitics, infrastructure, and systems thinking. Scenario planning that takes uncomfortable possibilities seriously. Reporting that tells a story rather than presenting a data dump.
And it requires awareness. Awareness of which hemisphere we favor, individually and institutionally. Awareness that the mode of attention shapes what we can see. Awareness that in conditions of uncertainty and ambiguity, the lookout function is not optional.
The Bottom Line
The left brain is necessary but not sufficient. Quantitative models are tools, not answers. Regulatory frameworks are floors, not ceilings. Risk registers capture the known, not the emerging.
If you want an ERM program that actually protects the enterprise — that would have told you about Wachovia before the acquisition, about European sovereign contagion before the crisis, about cyber exposure before the breach wave — you need to make room for right-brain risk.
That means getting the basics right first: governance, documentation, process, the foundational ERM infrastructure that earns credibility with regulators and boards. But it also means building the capacity to see what the basics can't capture — the emerging, the novel, the ambiguous, the outside world pressing in.
Uncertainty and ambiguity are here to stay. The question is whether your risk function is equipped to see what's emerging.