The psychologist James Shanteau undertook the task of finding out which disciplines have experts and which have none. Note the confirmation problem here: if you want to prove that there are no experts, then you will be able to find a profession in which experts are useless. And you can prove the opposite just as well. But there is a regularity: there are professions where experts play a role, and others where there is no evidence of skills. Which are which?
Experts who tend to be experts: livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, mathematicians (when they deal with mathematical problems, not empirical ones), accountants, grain inspectors, photo interpreters, insurance analysts (dealing with bell curve–style statistics).
Experts who tend to be … not experts: stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, counselors, personnel selectors, intelligence analysts (the CIA’s record, in spite of its costs, is pitiful, unless one takes into account some great dose of invisible prevention). I would add these results from my own examination of the literature: economists, financial forecasters, finance professors, political scientists, “risk experts,” Bank for International Settlements staff, august members of the International Association of Financial Engineers, and personal financial advisers.
Simply, things that move, and therefore require knowledge, do not usually have experts, while things that don’t move seem to have some experts. In other words, professions that deal with the future and base their studies on the nonrepeatable past have an expert problem (with the exception of the weather and businesses involving short-term physical processes, not socioeconomic ones). I am not saying that no one who deals with the future provides any valuable information (as I pointed out earlier, newspapers can predict theater opening hours rather well), but rather that those who provide no tangible added value are generally dealing with the future.
Another way to see it is that things that move are often Black Swan–prone. Experts are narrowly focused persons who need to “tunnel.” In situations where tunneling is safe, because Black Swans are not consequential, the expert will do well.
[…] The problem with prediction is a little more subtle. It comes mainly from the fact that we are living in Extremistan, not Mediocristan. Our predictors may be good at predicting the ordinary, but not the irregular, and this is where they ultimately fail. All you need to do is miss one interest-rate move, from 6 percent to 1 percent in a longer-term projection (what happened between 2000 and 2001) to have all your subsequent forecasts rendered completely ineffectual in correcting your cumulative track record. What matters is not how often you are right, but how large your cumulative errors are.
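The asymmetry above can be made concrete with a toy calculation. This is a hypothetical illustration (not from the book): a forecaster who is exactly right in 99 out of 100 periods, but misses one large move of the kind described, ends up with a track record whose cumulative error comes entirely from that single miss.

```python
# Hypothetical illustration: hit rate vs. cumulative error.
# 100 forecasting periods: the forecaster nails 99 small moves exactly...
errors = [0.0] * 99
# ...but misses one large move (say, rates forecast to stay at 6% fall to 1%).
errors.append(6.0 - 1.0)

hit_rate = sum(1 for e in errors if e == 0.0) / len(errors)
cumulative_error = sum(abs(e) for e in errors)

print(f"right {hit_rate:.0%} of the time")      # right 99% of the time
print(f"cumulative error: {cumulative_error}")  # cumulative error: 5.0
```

A scoring rule that counts only how often the forecaster is right awards a near-perfect score; a rule that sums the size of the errors is dominated by the one irregular event.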
And these cumulative errors depend largely on the big surprises, the big opportunities. Not only do economic, financial, and political predictors miss them, but they are quite ashamed to say anything outlandish to their clients—and yet events, it turns out, are almost always outlandish. Furthermore, as we will see in the next section, economic forecasters tend to fall closer to one another than to the resulting outcome. Nobody wants to be off the wall.
Since my testing has been informal, for commercial and entertainment purposes, for my own consumption and not formatted for publishing, I will use the more formal results of other researchers who did the dog work of dealing with the tedium of the publishing process. I am surprised that so little introspection has been done to check on the usefulness of these professions. There are a few—but not many—formal tests in three domains: security analysis, political science, and economics. We will no doubt have more in a few years. Or perhaps not—the authors of such papers might become stigmatized by their colleagues. Out of close to a million papers published in politics, finance, and economics, there have been only a small number of checks on the predictive quality of such knowledge.
The long tail’s contribution is not yet numerical; it is still confined to the Web and its small-scale online commerce. But consider how the long tail could affect the future of culture, information, and political life. It could free us from the dominant political parties, from the academic system, from the clusters of the press—anything that is currently in the hands of ossified, conceited, and self-serving authority. The long tail will help foster cognitive diversity. One highlight of the year 2006 was to find in my mailbox a draft manuscript of a book called Cognitive Diversity: How Our Individual Differences Produce Collective Benefits, by Scott Page. Page examines the effects of cognitive diversity on problem solving and shows how variability in views and methods acts like an engine for tinkering. It works like evolution. By subverting the big structures we also get rid of the Platonified one way of doing things—in the end, the bottom-up theory-free empiricist should prevail.
Unbeknownst to me, 1987 was not the first time the idea of the Gaussian was shown to be lunacy. Mandelbrot proposed the scalable to the economics establishment around 1960, and showed them how the Gaussian curve did not fit prices then. But after they got over their excitement, they realized that they would have to relearn their trade. One of the influential economists of the day, the late Paul Cootner, wrote, “Mandelbrot, like Prime Minister Churchill before him, promised us not utopia, but blood, sweat, toil, and tears. If he is right, almost all our statistical tools are obsolete [or] meaningless.” I propose two corrections to Cootner’s statement. First, I would replace almost all with all. Second, I disagree with the blood and sweat business. I find Mandelbrot’s randomness considerably easier to understand than the conventional statistics. If you come fresh to the business, do not rely on the old theoretical tools, and do not have a high expectation of certainty.
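Why the Gaussian "did not fit prices" can be seen in its tails. The following sketch (my illustration, not Mandelbrot's analysis) computes the two-tailed probability that a Gaussian variable moves more than k standard deviations: the probabilities collapse so fast that the multi-sigma daily moves actually observed in markets are assigned essentially zero chance.

```python
import math

def gaussian_tail(k: float) -> float:
    """Two-tailed probability P(|Z| > k) for a standard Gaussian Z,
    via the complementary error function: erfc(k / sqrt(2))."""
    return math.erfc(k / math.sqrt(2))

# Under the Gaussian, large deviations become absurdly improbable:
for k in (3, 5, 10):
    print(f"P(|move| > {k} sigma) = {gaussian_tail(k):.2e}")
```

A 3-sigma move has probability around 0.27 percent, a plausible rare day; a 10-sigma move comes out near 10^-23, which a Gaussian world should never witness. Yet price series contain such moves, which is the mismatch the scalable (fat-tailed) description was meant to capture.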
I am most often irritated by those who attack the bishop but somehow fall for the securities analyst. Using the confirmation bias, these people will tell you that religion was horrible for mankind by counting deaths from the Inquisition and various religious wars. But they will not show you how many people were killed by nationalism, social science, and political theory under Stalinism or during the Vietnam War. Even priests don’t go to bishops when they feel ill: their first stop is the doctor’s. But we stop by the offices of many pseudo-scientists and “experts” without alternative. We no longer believe in papal infallibility; we seem to believe in the infallibility of the Nobel, though […]
The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb