Finding New Leaders: Lessons from Michael Lewis & The Undoing Project
Michael Lewis can be counted on to write a book that engages anyone, even or especially someone with little or no interest in the subject. Not sure baseball is even still played? Find yourself enthralled by MONEYBALL and Billy Beane’s innovative way of assembling a winning major league team.
Working with a hypnotist to help you exorcise the memory of your underwater mortgage from 2009? Read THE BIG SHORT and wonder why you never learned more from your introductory college course in finance.
So a book devoted to the friendship of two Israeli psychologists and their heretical thinking about probabilities, statistics and judgment, titled THE UNDOING PROJECT, might well lose out to reruns of “The Simpsons.” That is, unless the author is Michael Lewis and he reports on, among other things, how simple mathematical models actually make better decisions than the trained professional experts who help construct those models.
And what, pray tell, does any of this have to do with the search for and recruitment of leadership of academic, research or medical organizations? Two points of relevance.
Small Numbers and Simple Models
Daniel Kahneman and Amos Tversky are the psychologists about whom Lewis writes in THE UNDOING PROJECT. The first product of their collaboration “teased out the implications of a single mental error that people commonly made – even when those people were trained statisticians.”
People mistook even a very small part of a thing for the whole. . . . Even people trained in statistics and probability failed to intuit how much more variable a small sample could be than the general population – and that the smaller the sample, the lower the likelihood that it would mirror the broader population. They assumed that the sample would correct itself until it mirrored the population from which it was drawn. In very large populations, the law of large numbers did indeed guarantee this result. If you flipped a coin a thousand times, you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times. [But] “[p]eople’s intuitions about random sampling appear to satisfy the law of small numbers, which asserts that the law of large numbers applies to small numbers as well,” Danny and Amos wrote.
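The coin-flip claim in the passage above is easy to check by simulation. The sketch below, a minimal illustration rather than anything from the book, measures how far the observed fraction of heads typically strays from one half as the number of flips grows; the function name and trial count are arbitrary choices.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def mean_abs_deviation(flips_per_trial, trials=10_000):
    """Average absolute gap between the observed heads fraction and 0.5."""
    total = 0.0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips_per_trial))
        total += abs(heads / flips_per_trial - 0.5)
    return total / trials

# Smaller samples wander much further from the true 50/50 split.
for n in (10, 100, 1000):
    print(n, round(mean_abs_deviation(n), 3))
```

The printed deviations shrink steadily as the sample grows, which is exactly the law of large numbers at work; the "law of small numbers" is the mistaken intuition that ten flips should behave like a thousand.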
A second topic is one Kahneman and Tversky built on by drawing from work by Paul Hoffman and Lew Goldberg of the Oregon Research Institute. Seeking to understand better how experts make judgments, the researchers studied radiologists and how they determined from a stomach x-ray whether a patient had cancer. The doctors agreed on seven key signs and their combinations as the “cues” by which they arrived at a diagnosis, and they described “their thought processes as subtle and complicated and difficult to model.”
As Lewis recounts, the researchers “began by creating, as a starting point, a very simple algorithm, in which the likelihood that an ulcer was malignant depended on the seven factors the doctors had mentioned, equally weighted.”
The researchers then asked the doctors to judge the probability of cancer in ninety-six different individual stomach ulcers, on a seven-point scale from “definitely malignant” to “definitely benign.” Without telling the doctors what they were up to, they showed them each ulcer twice, mixing up the duplicates randomly in the pile so the doctors wouldn’t notice they were being asked to diagnose the exact same ulcer they had already diagnosed.
The results, according to Goldberg, were “generally terrifying.” The simple model “starting point” turned out to be quite good at predicting the radiologists’ diagnoses. The doctors’ thinking processes might be complex and subtle, but the model nevertheless mimicked those processes quite well.
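The equally weighted “starting point” model Lewis describes can be sketched in a few lines. Lewis does not list the seven signs, so the cue names and the 0/1 encoding below are illustrative assumptions, not the radiologists’ actual criteria; only the structure, an unweighted average of the agreed-upon cues, comes from the account above.

```python
def malignancy_score(case):
    """Equally weighted model: the plain average of seven 0/1 cue ratings.

    Each cue is 1 if the sign is present on the x-ray, 0 if absent
    (a hypothetical encoding). Returns a score in [0, 1];
    higher means more likely malignant.
    """
    cues = [case[f"cue_{i}"] for i in range(1, 8)]
    return sum(cues) / len(cues)

# A made-up case in which four of the seven signs are present.
example = {f"cue_{i}": v for i, v in enumerate([1, 1, 0, 1, 0, 0, 1], start=1)}
print(round(malignancy_score(example), 3))  # 4/7 -> 0.571
```

The striking point is the design choice itself: no weights were fitted and no interactions were modeled, yet this crude average tracked the experts’ supposedly subtle judgments closely.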
Just as surprising was the extent to which the doctors’ diagnoses differed from one another: “they were all over the map.”
What’s more, doctors apparently could not even agree with themselves since “when presented with duplicates of the same ulcer, every doctor had contradicted himself and rendered more than one diagnosis. . .”
This study was repeated, but this time the subjects were clinical psychologists and psychiatrists who – after providing the “cues” they considered when deciding whether or not to release a patient from a psychiatric hospital – reviewed cases and provided their decisions.
Once again, the experts were all over the map. Even more bizarrely, those with the least training (graduate students) were just as accurate as the fully trained ones (paid pros) in their predictions about what any given psychiatric patient would get up to if you let him out the door. Experience appeared to be of little value in judging, say, whether a person was at risk of committing suicide. Or, as Goldberg put it, “Accuracy on this task was not associated with the amount of professional experience of the judge.”
Implications for the Search for Leadership
In one sense, a search process is inherently vulnerable to the law of small numbers: after all, the task is one of winnowing a group of candidates down to those whose records exemplify the attributes of successful leaders and the specific culture of a given organization. By virtue of the small number of finalists usually considered – anywhere from three to five – the “sample” is bound to vary substantially from the population of possible candidates. Indeed, that is the purpose of a search: to find the atypical person, the individual whose achievements and attributes at once set her apart, match her to the distinctive needs of the times and the institution, and yet represent the pool of promising leaders.
Still, the law of small numbers serves notice to search committees and their organizations. The small number of finalists almost guarantees significant variation from the population of candidates, so the process by which the finalists are culled ought to be conducted with great care and an eye toward, depending on the circumstances of the particular search and the needs of the organization, seeking very similar or very different finalists.
And what implications may be drawn from studies of expertise and judgment? For one thing, there appears to be a case for continuing to compose search committees of diverse members drawn from an array of disciplines and professions rather than of specialists alone, with members who bring little, some, and much experience.
Next, the collective “wisdom” of search committees seems to warrant respect. If clinicians’ judgments vary even when the situation is one they have already judged, a group of what are, after all, mostly “amateurs” at selecting leaders is not likely to do much damage in its recommendations of whom to select for leadership roles.
Finally, the track records of search firms in identifying candidates who are ultimately selected for leadership positions are important indicators of the value a firm can bring to a search, especially when those candidates remain in their posts for extended periods after appointment.