Black swan theory
The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The term is based on an ancient saying that presumed black swans did not exist – a saying that became reinterpreted to teach a different lesson after black swans were discovered in the wild.
The theory was developed by Nassim Nicholas Taleb to explain:
- The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.
- The non-computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities).
- The psychological biases that blind people, both individually and collectively, to uncertainty and to a rare event's massive role in historical affairs.
Unlike the earlier and broader "black swan problem" in philosophy (i.e. the problem of induction), Taleb's "black swan theory" refers only to unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences. More technically, in the scientific monograph Silent Risk, Taleb mathematically defines the black swan problem as "stemming from the use of degenerate metaprobability".
The phrase "black swan" derives from a Latin expression; its oldest known occurrence is from the 2nd-century Roman poet Juvenal's characterization of something being "rara avis in terris nigroque simillima cygno" ("a rare bird in the lands and very much like a black swan"). When the phrase was coined, the black swan was presumed not to exist. The importance of the metaphor lies in its analogy to the fragility of any system of thought. A set of conclusions is potentially undone once any of its fundamental postulates is disproved. In this case, the observation of a single black swan would be the undoing of the logic of any system of thought, as well as any reasoning that followed from that underlying logic.
Juvenal's phrase was a common expression in 16th-century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white, because all historical records of swans reported that they had white feathers. In that context, a black swan was impossible, or at least nonexistent.
However, in 1697, Dutch explorers led by Willem de Vlamingh became the first Europeans to see black swans, in Western Australia. The term subsequently metamorphosed to connote the idea that a perceived impossibility might later be disproven. Taleb notes that in the 19th century, John Stuart Mill used the black swan logical fallacy as a new term to identify falsification.
Black swan events were discussed by Nassim Nicholas Taleb in his 2001 book Fooled by Randomness, which concerned financial events. His 2007 book The Black Swan extended the metaphor to events outside of financial markets. Taleb regards almost all major scientific discoveries, historical events, and artistic accomplishments as "black swans"—undirected and unpredicted. He gives the rise of the Internet, the personal computer, World War I, the dissolution of the Soviet Union, and the September 11, 2001 attacks as examples of black swan events.
What we call here a Black Swan (and capitalize it) is an event with the following three attributes.
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme 'impact'. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.
Based on the author's criteria:
- The event is a surprise (to the observer).
- The event has a major effect.
- After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals.
The practical aim of Taleb's book is not to attempt to predict events which are unpredictable, but to build robustness against negative events while still exploiting positive events. Taleb contends that banks and trading firms are very vulnerable to hazardous black swan events and are exposed to unpredictable losses. On the subject of business, and quantitative finance in particular, Taleb critiques the widespread use of the normal distribution model in financial engineering, calling it a "Great Intellectual Fraud". Taleb elaborates on the robustness concept as a central topic of his later book, Antifragile: Things That Gain From Disorder.
Taleb states that a black swan event depends on the observer. For example, what may be a black swan surprise for a turkey is not a black swan surprise to its butcher; hence the objective should be to "avoid being the turkey" by identifying areas of vulnerability in order to "turn the Black Swans white".
Taleb's black swan is different from the earlier philosophical versions of the problem, specifically in epistemology, as it concerns a phenomenon with specific empirical and statistical properties which he calls "the fourth quadrant".
Taleb's problem is about epistemic limitations in some parts of the areas covered in decision making. These limitations are twofold: philosophical (mathematical) and empirical (known human epistemic biases). The philosophical problem is about the decrease in knowledge when it comes to rare events, as these are not visible in past samples and therefore require a strong a priori belief, or an extrapolating theory; accordingly, predictions of events depend more and more on theories as their probability becomes small. In the fourth quadrant, knowledge is uncertain and consequences are large, requiring more robustness.
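Taleb's "fourth quadrant" essay classifies decision problems along two axes: whether the underlying randomness is thin-tailed or fat-tailed, and whether the payoffs are simple or complex. A minimal sketch of that classification follows; the function name and the wording of the labels are paraphrases for illustration, not Taleb's own notation.

```python
def quadrant(fat_tailed: bool, complex_payoff: bool) -> str:
    """Classify a decision problem into Taleb's four quadrants.

    Paraphrased from the 2008 Edge essay "The Fourth Quadrant":
    statistics is reliable only when tails are thin or payoffs simple.
    """
    if not fat_tailed and not complex_payoff:
        return "First quadrant: thin tails, simple payoffs - statistics reliable"
    if not fat_tailed:
        return "Second quadrant: thin tails, complex payoffs - still tractable"
    if not complex_payoff:
        return "Third quadrant: fat tails, simple payoffs - manageable"
    return "Fourth quadrant: fat tails, complex payoffs - limits of statistics"

# The black swan domain: fat-tailed randomness combined with complex payoffs.
print(quadrant(fat_tailed=True, complex_payoff=True))
```

The point of the sketch is only that the fourth quadrant is the lone cell where both uncertainty about the distribution and sensitivity of the payoff combine, which is where Taleb argues robustness must replace prediction.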
According to Taleb, thinkers who came before him who dealt with the notion of the improbable, such as Hume, Mill, and Popper, focused on the problem of induction in logic, specifically, that of drawing general conclusions from specific observations. The central and unique attribute of Taleb's black swan event is that it is high-profile. His claim is that almost all consequential events in history come from the unexpected, yet humans later convince themselves that these events are explainable in hindsight.
One problem, labeled the ludic fallacy by Taleb, is the belief that the unstructured randomness found in life resembles the structured randomness found in games. This stems from the assumption that the unexpected may be predicted by extrapolating from variations in statistics based on past observations, especially when these statistics are presumed to represent samples from a normal distribution. These concerns are often highly relevant in financial markets, where major players sometimes assume normal distributions when using value at risk models, although market returns typically have fat tail distributions.
Taleb said "I don't particularly care about the usual. If you want to get an idea of a friend's temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on an ordinary day? Can we understand health without considering wild diseases and epidemics? Indeed the normal is often irrelevant. Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the 'normal,' particularly with 'bell curve' methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud."
More generally, decision theory, which is based on a fixed universe or a model of possible outcomes, ignores and minimizes the effect of events that are "outside the model". For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987), but might not model the breakdown of markets following the 9/11 attacks. A fixed model considers the "known unknowns", but ignores the "unknown unknowns", made famous by a statement of Donald Rumsfeld. The term "unknown unknowns" appeared in a 1982 New Yorker article on the aerospace industry, which cites the example of metal fatigue, the cause of crashes in Comet airliners in the 1950s.
Taleb notes that other distributions, such as fractal, power-law, or scalable distributions, cannot be used with the same precision, but are often more descriptive, and that awareness of them might help to temper expectations.
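The gap between the bell curve and a fat-tailed alternative can be made concrete with a small calculation. The sketch below, using only the standard library, compares the probability of a "five-sigma" event under a standard normal distribution with the same event under an illustrative Pareto-type power law; the tail exponent and scale are arbitrary choices for demonstration, not parameters Taleb specifies.

```python
import math

def normal_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def power_law_tail(x: float, alpha: float = 3.0, x_min: float = 1.0) -> float:
    """P(X > x) for a Pareto-type power law with tail exponent alpha.

    alpha=3 and x_min=1 are illustrative choices, not estimates from data.
    """
    return (x / x_min) ** (-alpha)

z = 5.0  # a "five-sigma" move
p_normal = normal_tail(z)
p_fat = power_law_tail(z)
print(f"normal tail    P(X > 5): {p_normal:.2e}")
print(f"power-law tail P(X > 5): {p_fat:.2e}")
print(f"ratio: {p_fat / p_normal:.0f}x")
```

Under the normal model the five-sigma event is vanishingly rare, while the power law assigns it a probability orders of magnitude larger. This is the sense in which, on Taleb's account, bell-curve methods "make us confident that we have tamed uncertainty" while ignoring exactly the deviations that matter.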
Beyond this, he emphasizes that many events simply are without precedent, undercutting the basis of this type of reasoning altogether.
Taleb also argues for the use of counterfactual reasoning when considering risk.
- Bad beat – Term in poker
- Currency crisis
- Falsifiability, also known as Black swan fallacy – The possibility of a statement to be proven wrong by observation
- Butterfly effect – Idea that small causes can have large effects
- Deus ex machina – Apparently contrived plot device
- Domino effect
- Dragon king theory – Event that is both extremely large in impact and of unique origins
- Elephant in the room – Obvious major problem that no-one mentions
- Extreme risk
- Global catastrophic risk – Hypothetical future event that has the potential to damage human well-being on a global scale
- Hindsight bias
- Holy grail distribution – Probability distribution with a positive mean and a right fat tail
- Kurtosis risk – Term in decision theory
- List of cognitive biases
- Miracle – An event not explicable by natural or scientific laws
- Normal Accidents
- Normalcy bias
- Outside Context Problem
- Popper's solution to the black swan problem in science – The possibility of a statement to be proven wrong by observation
- Quasi-empiricism in mathematics
- Rare events
- Tail risk
- Taleb distribution – Probability distribution with a positive mean and a left fat tail: an expected small positive payoff with a small chance of serious losses
- Technological singularity – The hypothesis of an eventual runaway technological growth
- The Long Tail
- There are known knowns – Saying associated with the US invasion of Iraq
- Wild card (foresight)
- Taleb, Nassim Nicholas (2010). The Black Swan: the impact of the highly improbable (2nd ed.). London: Penguin. ISBN 978-0-14103459-1. Retrieved 23 May 2012.
- Taleb, Nassim Nicholas (2015), Doing Statistics Under Fat Tails: The Program, retrieved 20 January 2016
- Puhvel, Jaan (Summer 1984). "The Origin of Etruscan tusna ("Swan")". The American Journal of Philology. Johns Hopkins University Press. 105 (2): 209–212. doi:10.2307/294875. JSTOR 294875.
- Taleb, Nassim Nicholas. "Opacity". Fooled by randomness. Retrieved 20 January 2016.
- "Black Swan Unique to Western Australia", Parliament, AU: Curriculum, archived from the original on 13 September 2009.
- Hammond, Peter (October 2009), "Adapting to the entirely unpredictable: black swans, fat tails, aberrant events, and hubristic models", WERI Bulletin, UK: Warwick (1), retrieved 20 January 2016
- Taleb, Nassim Nicholas (22 April 2007). "The Black Swan: Chapter 1: The Impact of the Highly Improbable". The New York Times. Retrieved 20 January 2016.
- Taleb, Nassim Nicholas (7 April 2009), Ten Principles for a Black Swan Robust World (PDF), Fooled by randomness, retrieved 20 January 2016
- Webb, Allen (December 2008). "Taking improbable events seriously: An interview with the author of The Black Swan (Corporate Finance)" (PDF). McKinsey Quarterly. McKinsey. p. 3. Archived from the original (Interview; PDF) on 7 September 2012. Retrieved 23 May 2012.
Taleb: In fact, I tried in The Black Swan to turn a lot of black swans white! That’s why I kept going on and on against financial theories, financial-risk managers, and people who do quantitative finance.
- Taleb, Nassim Nicholas (September 2008), The Fourth Quadrant: A Map of the Limits of Statistics, Third Culture, The Edge Foundation, retrieved 23 May 2012
- Taleb, Nassim Nicholas (April 2007). The Black Swan: The Impact of the Highly Improbable (1st ed.). London: Penguin. p. 400. ISBN 1-84614045-5. Retrieved 23 May 2012.
- Trevir Nath, "Fat Tail Risk: What It Means and Why You Should Be Aware Of It", NASDAQ, 2015
- DoD News Briefing – Secretary Rumsfeld and Gen. Myer, February 12, 2002 11:30 AM EDT Archived 3 September 2014 at the Wayback Machine
- Newhouse, J. (14 June 1982), "A reporter at large: a sporty game: i-betting the company", The New Yorker, pp. 48–105
- Gelman, Andrew (April 2007). "Nassim Taleb's "The Black Swan"". Statistical Modeling, Causal Inference, and Social Science. Columbia University. Retrieved 23 May 2012.
- Gangahar, Anuj (16 April 2008). "Market Risk: Mispriced risk tests market faith in a prized formula". The Financial Times. New York. Archived from the original on 20 April 2008. Retrieved 23 May 2012.
- McGee, Suzanne (5 December 2012), Black Swan Stocks Could Make Your Portfolio a Turkey, Fiscal Times, CNBC, retrieved 20 January 2016.