Irrational Expectations

How statistical thinking can lead us to better decisions

Predictive analytics is emerging as a practical discipline, one that can help business leaders make better decisions. In a world whose complexity in many respects has moved beyond our cognitive abilities, numbers are nothing to fear. In fact, they may be your greatest ally.

Behind door #1…

Readers of a certain age who grew up watching U.S. television fondly remember — well let’s just say remember — the game show Let’s Make a Deal, hosted by Monty Hall. At a key point in each episode, a contestant was asked to guess which of three doors – #1, #2, or #3 – concealed a valuable prize (say, a wood-paneled station wagon). To build suspense, Monty would first open one of the two doors that the contestant didn’t choose. Pointing out that the door he opened had not concealed the prize, Monty would offer the contestant the option to change his or her guess.

Suppose you are a contestant and you have guessed that the car is behind door #1. Monty then opens door #3, revealing not the car but a barnyard goat. Monty then gives you a choice: you can stay with your original guess, or change your guess to door #2. What should you do? Should you switch to door #2, or stick with door #1?

If your answer is that it does not matter, you are not alone. Most people intuit that after Monty revealed the goat behind door #3, the probabilities of the car being behind doors #1, #2, and #3 go from 1/3-1/3-1/3 to 1/2-1/2-0.  Therefore Monty’s opening door #3 should have no bearing on your subsequent decision about whether to switch from door #1 to door #2.

But this common answer turns out to be wrong: if you switch from door #1 to door #2, you double your chance of winning. Few people—these authors included—can correctly solve problems like this in real time, and most people have difficulty with such problems even given unlimited time. In fact, when this problem was popularized by Marilyn vos Savant in Parade magazine in 1990, thousands of readers, hundreds of whom were mathematicians, wrote back to Parade, chiding vos Savant for publishing the wrong answer. Even the eminent mathematician Paul Erdős reportedly pondered the Monty Hall problem on his deathbed.1

The Monty Hall problem is significant beyond being a mere brain teaser. In all areas of business, people must constantly process information in real time to arrive at decisions. To avoid becoming overwhelmed, decision-makers inevitably rely on intuition, heuristics and other mental short-cuts when weighing various factors. Unfortunately, as the Monty Hall problem so vividly illustrates, our unaided intuitions are quite capable of leading us astray.

This happens much more regularly and severely than people realize. Recent advances in cognitive science and behavioral economics have taught us that in a surprising array of domains, human decision-makers need statistical tools just as badly as nearsighted people need eyeglasses. The human mind simply did not evolve to make the kinds of decisions we are called on to make every day in business settings. Statistical analyses and predictive models are the necessary correctives. The business implications of these insights are considerable.

Ecce Homo

Until recently, much of the economic theory underpinning business practice has paid little heed to the sorts of cognitive limitations exemplified by the Monty Hall problem. Indeed a central tenet of much modern economic theory is the assumption of rational expectations. This is the notion that people’s guesses, or expectations, about the future are – on average – the best ones possible because they take into account all available information. Of course individual people make incorrect guesses all the time. But according to rational expectations, their guesses should diverge from the truth in random ways that average out to zero.

To the extent economic actors are rational, it should be impossible to profit from relevant information that is widely available: somebody else would have already used it to make a profit. Hence the old joke about the Chicago economics professor who refused to pick up the $20 bill on the sidewalk on the grounds that if it were real, someone else would already have picked it up. In a phrase, the doctrine of rational expectations implies that markets are efficient.

In their new book Nudge, University of Chicago behavioral economist Richard Thaler and law professor Cass Sunstein attempt to debunk the assumption of rational expectations as a distracting myth:

Whether or not they have ever studied economics, many people seem at least implicitly committed to the idea of homo economicus, or economic man—the notion that each of us thinks and chooses unfailingly well, and thus fits within the textbook picture of human beings offered by economists.

If you look at economics textbooks, you will learn that homo economicus can think like Albert Einstein, store as much memory as IBM’s Big Blue, and exercise the willpower of Mahatma Gandhi. Really. But the folks that we know are not like that. Real people have trouble with long division if they don’t have a calculator, sometimes forget their spouse’s birthday, and have a hangover on New Year’s Day. They are not homo economicus; they are homo sapiens.2

When considering the efficiency of various markets, it is useful to remember that these markets are run by fallible homo sapiens, not idealized homo economicus. This insight was not lost on Monty Hall and the producers of Let’s Make a Deal.

A new ballgame

This is the theme – either implicit or explicit – underlying a spate of recent books about the growing ubiquity of analytic and predictive modeling applications in fields such as business, law, medicine, education and even professional sports. In his 2003 book Moneyball, Michael Lewis described a new way to think about managing the business of Major League Baseball. Lewis related how the Oakland A’s general manager Billy Beane was able to take his cash-strapped team to the top of the American League through the use of statistical analysis.3

Beane’s problem was that wealthier teams such as the New York Yankees, with many multiples of the A’s salary budget, could outbid the A’s when scouting for new talent. Beane addressed this problem with a crucial insight: baseball scouts often use flawed reasoning and fallible “gut feelings” or “professional judgment” when selecting baseball players. Beane realized that by using a more objective approach, he could identify excellent players ignored by the richer teams and lure them to the A’s at bargain salaries.

As recounted by Lewis, Beane’s insight was born of personal experience. As a high school player, Beane was singled out by the scouts as a future baseball star. The scouts’ judgments were based mostly on appearances and intuitions and hardly at all on baseball statistics. When the scouts evaluated him, they saw a fit athlete, a fast runner and a strong batter: somebody who simply looked the part of a good baseball player. They didn’t have to think hard about the statistics; it was simply obvious to them that Beane had the makings of a top baseball player.

But the scouts turned out to be wrong. Billy Beane failed to thrive as a big league player and ultimately quit to become a scout for the A’s. By the time he made general manager he was determined not to repeat the mistakes of the scouts who had singled him out in high school.  Beane turned to the writings of the baseball statistician Bill James in order to take a more scientific approach to evaluating baseball players. For example, James had constructed a formula that could predict the number of runs a hitter is expected to create as a function of his on-base percentage. Taking his cue from James, Beane hired Paul DePodesta to statistically analyze players’ performances. One of Beane’s and DePodesta’s findings was that college baseball players went on to perform better than high school recruits. Based on this finding, Beane decided to let the richer teams spend their time recruiting players out of high school while he and DePodesta used their statistical analyses to select the excellent college players ignored by the scouts.
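James’s actual formulas are more elaborate, but the spirit of the approach, fitting a simple equation to observed statistics and then predicting with it, can be sketched in a few lines. The on-base percentages and run totals below are invented for illustration and are not Bill James’s numbers:

```python
# An illustrative sketch only -- these are not Bill James's actual numbers
# or formula.  We fit a straight line predicting runs created from
# on-base percentage (OBP) by ordinary least squares, using made-up data.
obp  = [0.290, 0.310, 0.330, 0.350, 0.370, 0.400]   # hypothetical OBPs
runs = [55, 62, 71, 78, 88, 101]                    # hypothetical runs created

n = len(obp)
mean_x = sum(obp) / n
mean_y = sum(runs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(obp, runs))
         / sum((x - mean_x) ** 2 for x in obp))
intercept = mean_y - slope * mean_x

def predict(x):
    """Predicted runs created for a given on-base percentage."""
    return intercept + slope * x

print(f"predicted runs created at OBP .360: {predict(0.360):.1f}")
```

Once fitted, such an equation rates every player on the same objective scale, which is precisely what the scouts’ impressions could not do.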

In short, Beane realized that the market for baseball players was inefficient because it was dominated by scouts making decisions based on intuition rather than objective, data-driven analyses. To borrow a phrase from the medical profession, you might say he took an “evidence-based” approach to player selection. Because of this, Beane was able to, as Lewis put it, “run circles around taller piles of cash.”4

Analyzing analytics

Why do Bill James’ simple formulas predict things that baseball scouts can’t predict even after years of experience?  In an insightful review of Moneyball, Sunstein and Thaler discuss the clues that Lewis offers. They write:

Why do professional baseball executives, many of whom have spent their lives in the game, make so many colossal mistakes? They are paid well, and they are specialists. They have every incentive to evaluate talent correctly. So why do they blunder? In an intriguing passage, Lewis offers three clues. First, those who played the game seem to overgeneralize from personal experience: “People always thought their own experience was typical when it wasn’t.” Second, the professionals were unduly affected by how a player had performed most recently, even though recent performance is not always a good guide. Third, people were biased by what they saw, or thought they saw, with their own eyes. This is a real problem, because the human mind plays tricks, and because there is “a lot you couldn’t see when you watched a baseball game.”5

Sunstein and Thaler then point out that Lewis is describing a central finding in cognitive psychology:  people tend to use what is known as the “availability heuristic” when making judgments:

As Daniel Kahneman and Amos Tversky have shown, people often assess the probability of an event by asking whether relevant examples are cognitively “available”.  Thus, people are likely to think that more words, on a random page, end with the letters “ing” than have “n” as their next to last letter – even though a moment’s reflection will show that this could not possibly be the case.6

Perhaps this is also why so many people get the Monty Hall problem wrong.  It’s easy to think of cases where “complete ignorance” means “equiprobability”:  tossing a coin, rolling a die, or spinning a roulette wheel. Many of Monty Hall’s contestants – and Marilyn vos Savant’s readers – were perhaps led astray by a false analogy between tossing a coin and the door #1-door #2 decision.
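Kahneman and Tversky’s “ing” example is easy to verify mechanically: every word ending in “ing” automatically has “n” as its next-to-last letter, so the first set is a subset of the second. A few lines of code (over a word list of our own choosing) make this concrete:

```python
# Every word ending in "ing" necessarily has "n" as its next-to-last
# letter, so the second set can never be smaller.  The sample word
# list below is ours, chosen only for illustration.
words = ["running", "thinking", "ring", "and", "mind", "lean",
         "king", "wind", "sing", "hand", "blind", "evening"]

ends_ing       = [w for w in words if w.endswith("ing")]
n_next_to_last = [w for w in words if len(w) >= 2 and w[-2] == "n"]

assert set(ends_ing) <= set(n_next_to_last)   # the subset relation always holds
print(len(ends_ing), len(n_next_to_last))
```

The “ing” words simply come to mind more readily, which is the availability heuristic at work.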

As Sunstein and Thaler point out, the problem is not that professionals are foolish or uneducated; it is that they are human. Out of necessity, they rely on fallible intuitions, mental heuristics, and tribal wisdom when processing information to make decisions. The problem, as behavioral economics teaches us, is that such systematic biases in human cognition as anchoring, the availability heuristic, and herd behavior can prevent markets from becoming more efficient. As Sunstein and Thaler write, “even when the stakes are high, rational behavior does not always emerge. It takes time and effort to switch from simple intuitions to careful assessments of evidence.”7

Viewed in this light, Moneyball therefore constitutes a case study in behavioral economics:  despite vast quantities of data available and money at stake, pre-Billy Beane scouts relied on fallible mental heuristics and rules of thumb to make very large decisions. A die-hard believer in efficient markets might bemoan such departures from the theoretical ideal. But a pragmatist can view non-rational and inefficient markets as business opportunities.

FROM MONEYBALL TO WORKFORCE INTELLIGENCE

The story of how Billy Beane used analytics to identify undervalued baseball players has far-reaching implications for many industries. The most obvious parallel is the war for talent. Baseball is not the only domain where the stakes are high when it comes to attracting and retaining talented employees. Consider the following facts: Most companies must devote anywhere between 40 and 70 percent of their operating expenses to compensation, benefits and other employee-related expenses.8 In many domains, a rule-of-thumb estimate of the cost of replacing an employee is 1.5 times that employee’s salary. Finally, the business press is replete with warnings that as the population ages, the competition to attract and retain talented workers will intensify. Yet most large organizations still make their hiring decisions using a highly labor-intensive approach that often centers on subjective evaluations of candidates’ interview performances. Potentially useful sources of data are ignored, and the data that are considered are weighed in subjective and inconsistent ways.

Therefore, while the market for talent has grown increasingly competitive, it has not necessarily grown more efficient. This creates opportunities for far-sighted organizations similar to the opportunity that Billy Beane saw 10 years ago. Moneyball can be read as an early example of what we call “workforce intelligence”: the use of analytics to bridge the gap that often exists between workforce-related data sources and the business issues to which they should be applied.

For example, predictive models are being built to help HR managers make better hiring decisions. In outline, this involves constructing a database of a company’s current and previous employees in which high-performing employees are flagged. A predictive model is then built by optimally combining a set of leading indicators – predictive variables – of high performance. The model is built and validated on past data, then used to rate the applications of incoming job candidates.

In this way, the model serves as a “scoring engine” used to triage resumes on the fly. HR personnel can then focus on evaluating those candidates that the model identifies as potential top performers. The model doesn’t usurp the decision-making process, but can help anchor the decision in a predictively optimal combination of inputs rather than in purely subjective judgments.

Similarly, models are being built to predict which employees are most likely to voluntarily resign. For example, it is obvious that living far from the office, frequently working weekends, and working for a problematic or poorly rated manager are all risk factors indicative of a valued employee’s likelihood to quit. But unlike a human decision-maker, a predictive model has the ability to optimally combine these and many other factors to efficiently estimate the employee’s relative likelihood of leaving. And unlike the human decision-maker, the predictive model will arrive at the same answer before and after lunch, takes virtually no time to draw conclusions, and is not affected by prejudices, pre-conceived ideas or cognitive biases.
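A minimal sketch of such a scoring engine might look like the following. The risk factors, weights, and intercept here are hypothetical; in practice they would be estimated from a company’s own historical employee data, for example by logistic regression:

```python
# A toy attrition "scoring engine".  The factors, weights, and intercept
# are hypothetical placeholders; real values would be fitted to a
# company's historical employee data (e.g., via logistic regression).
import math

WEIGHTS = {
    "commute_miles":   0.03,   # per mile of commute
    "weekends_worked": 0.10,   # per weekend worked last quarter
    "manager_rating": -0.50,   # per rating point (higher = better manager)
}
INTERCEPT = -1.0

def attrition_score(employee):
    """Return an estimated probability of voluntary resignation."""
    z = INTERCEPT + sum(WEIGHTS[k] * employee[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))        # logistic link maps score to (0, 1)

candidate = {"commute_miles": 40, "weekends_worked": 6, "manager_rating": 2}
print(f"estimated attrition risk: {attrition_score(candidate):.2f}")
```

The point is not the particular factors but the mechanism: every employee is scored by the same consistent combination of inputs, before and after lunch alike.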

In short, predictive models can help us better approximate the ideally rational homo economicus. As Billy Beane demonstrated, these tools can transform the process of selecting and managing talent into a key component of an organization’s core strategy.

Enter the super crunchers

The strategic potential of workforce intelligence is the most obvious lesson of Moneyball, but the full implications of the book are even broader. In our work as consultants providing services that involve multi-disciplinary data mining and predictive modeling, we have helped decision-makers in their efforts to perform analyses and build predictive algorithms in a wide variety of domains. The parallel of our work to Moneyball has not been lost on us, and we have even recommended the book to many of our HR and non-HR clients.  For example, the chief underwriter of a major U.S. insurance company used the Moneyball story to motivate his colleagues to embrace the underwriting predictive model that we were in the process of helping them build.

Ian Ayres, a law and economics professor at Yale, picks up on this theme in his recent book Super Crunchers.  By discussing applications of predictive analytics in a number of disparate domains, Ayres continues where Lewis, Sunstein and Thaler leave off. “Super Crunching” is Ayres’ umbrella term for the various types of data mining, predictive modeling, and econometric, statistical, or actuarial analyses that can be used to guide human decisions.9 The sheer breadth of the examples Ayres discusses is compelling, and comports well with our own experience applying predictive analytics to a variety of domains.

Many of Ayres’ examples are valuable in that they encourage one to think creatively about new ways in which predictive analytics can be applied. Everyone knows that credit scores outperform loan officers at assessing mortgage default risk. But consider these examples:  Ayres’ colleague, the Princeton economist Orly Ashenfelter, has built regression models that have proven more effective than wine critics at identifying excellent vintages of Bordeaux wine. Neural net models have been built to predict movie box office returns using features of the movies’ scripts. Karl Rove has repeatedly used consumer segmentation and target marketing techniques to win elections by strategically contacting swing voters.

Another compelling example comes from Malcolm Gladwell’s book Blink. Cook County Hospital has a representation of a decision tree algorithm on a wall of its emergency room.  The decision tree is used to triage patients complaining of chest pain based on their likelihood of suffering heart attacks. Despite – or perhaps because of – its simplicity, the decision algorithm outperforms doctors relying solely on their intuitions and professional judgment in the heat of the moment.  (Incidentally, the story actually undercuts Gladwell’s own thesis that highly intuitive snap judgments and “thinking without thinking” are more reliable than deliberative reasoning.)10
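The actual Cook County rule, developed by the cardiologist Lee Goldman, is more detailed than we can reproduce here, but a simplified sketch in its spirit shows how little machinery such a tree requires. The thresholds and category labels below are ours, for illustration only:

```python
# A simplified, illustrative sketch in the spirit of the chest-pain
# triage tree Gladwell describes.  This is NOT the actual clinical
# algorithm; thresholds and labels are invented for illustration.

def triage(ecg_ischemia: bool, unstable_angina: bool,
           fluid_in_lungs: bool, systolic_bp_under_100: bool) -> str:
    """Classify a chest-pain patient by heart-attack risk."""
    if ecg_ischemia:
        return "high risk: intensive care"
    risk_factors = sum([unstable_angina, fluid_in_lungs, systolic_bp_under_100])
    if risk_factors >= 2:
        return "intermediate risk: monitored bed"
    if risk_factors == 1:
        return "low risk: short-stay observation"
    return "very low risk: routine follow-up"

print(triage(False, True, False, False))
```

A handful of yes/no questions, consistently applied, is enough to outperform unaided judgment made under pressure.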

We can add several examples, from our own work, to Ayres’ list of cases in which analytics are used to make better decisions.

  • Insurance underwriting and pricing:  We and our colleagues have helped hundreds of underwriters build multivariate scoring models to better select and price insurance risks. These models find gaps in traditional risk assessment methodologies and thereby provide novel ways for insurers to better distinguish between seemingly similar or identical risks. Unlike the “experts versus equations” tenor of Moneyball and Super Crunchers, our goal has never been to replace human decision-makers, but rather to help them develop a tool that enables them to make better decisions. Just as analytical methods outperform traditional methods of scouting baseball players, we have seen that underwriters consistently do a better job of selecting risks with predictive models in hand. The implications of these observations have proven very valuable to insurance companies that have adopted analytical methods.
  • Talent management:  We have helped employers use psychometric data to better predict employee performance.  In one study, we found that employees with certain combinations of behavioral traits had twice the chance of being promoted, whereas employees lacking a different combination of traits had virtually no chance of being promoted.  Workforce intelligence findings such as this are useful when making hiring and talent management decisions.
  • Medical malpractice prediction:  We have helped physicians – both general practitioners and specialists – build models to better predict whether they are more likely to be sued for malpractice. We have found that, as with the talent management example above, behavioral as well as other factors are predictive. Predictive models using psychometric data could be used to selectively reach out to physicians and ultimately lower the incidence of medical malpractice suits.

  • Member retention:  We have helped hospitals build models to better predict which Medicaid beneficiaries are at highest risk of dis-enrolling from their Medicaid health plan.  The resulting scores provided the hospitals’ outreach personnel with a tool for better focusing their retention efforts.
  • Consumer business:  We have helped companies use analytics to better understand their customers and sales patterns. While it is true that some companies make extensive use of their data to segment, target and cross-sell to their customers, we have found that many others use their data only to generate business metrics and fairly stale management reports. The situation is to a surprising degree similar to what we have found in the emerging field of workforce intelligence: the data exist but are not being used to refine decisions rooted in intuition and mental heuristics.  Analytics and predictive models can therefore be brought to bear to exploit the resulting market inefficiencies.
  • Mortgage triage:  We are assisting mortgage lenders to use predictive modeling to better identify potentially troubled loans before borrowers fall behind on their payments or default. In these tumultuous times, traditional reactive and subjective loan management methods are proving unsatisfactory. We are helping to bring predictive analytics to bear for mortgage lenders to design proactive loan and credit-line portfolio management strategies. Loans can be saved – and mortgagees can be kept in their homes – by strategically offering mitigation strategies before borrowers default.
  • Claims and medical case management:  Workers’ compensation and disability cases have traditionally been managed primarily as medical events: a case worker helps an injured worker return to his or her job through a prescribed medical treatment process. Rarely has the portfolio of cases been managed analytically, with early identification of those cases that are likely to become high-severity or long-duration. We have helped companies build models that combine medical, biographic, demographic and psychographic information to better predict which cases are more likely to exceed industry standard norms for severity and duration. With this improved case management tool, workers can be helped to return to work more efficiently and abusers of the system can be more easily identified.
  • Medical insurance customer relationship: We have been working with health insurance contact centers to combine medically based analytics with reengineered business processes to better identify those medical insurance customers who have complex treatment issues and experience difficulty navigating the treatment approval and claims handling process. By segmenting these customers and directing their needs to a specially trained servicing team, outbound calls are being placed to customers to offer upfront assistance and guidance for streamlining their treatment and claims management. This combination of advanced analytics and improved customer experience has resulted in higher levels of customer satisfaction and reduced medical and claims expense.

As Ayres points out, the surprising ability of equations to help experts has been the subject of psychological research for over fifty years. The subject originated in 1954 with the publication of the psychologist Paul Meehl’s book Clinical Versus Statistical Prediction.11 Meehl’s “disturbing little book”, as he called it, documented some 20 empirical studies comparing the predictions of human experts with those of simple actuarial models. The types of predictions ranged from how well schizophrenic patients would respond to electroshock to how well prisoners would respond to parole. Meehl concluded that in none of these studies did the human experts outperform the actuarial models.12

Near the end of his life, surveying the field he initiated in the fifties, Meehl wrote:

There is no controversy in social science which shows such a large body of quantitatively diverse studies coming out so uniformly in the same direction as this one.  When you are pushing over 100 investigations, predicting everything from the outcome of football games to the diagnosis of liver disease, and when you can hardly come up with half a dozen studies showing even a weak tendency in favor of the clinician, it is time to draw a practical conclusion.13

Ayres quotes two other cognitive psychologists who put the matter even more starkly:  “Human judges are not merely worse than optimal regression equations; they are worse than almost any regression equation.”14 The business implications of this statement are huge.
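The research behind that stark statement includes Robyn Dawes’s work on “improper” linear models: equations that simply standardize each predictor and weight them all equally, yet still tend to beat expert judgment. A toy sketch, with invented applicant data, shows how little such a model demands:

```python
# A sketch of a Dawes-style "improper" linear model: standardize each
# predictor and give every one equal weight.  The applicants and
# variables below are invented for illustration.
def zscores(xs):
    """Standardize a list of numbers to mean 0 and unit variance."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

# Hypothetical predictors for four job applicants (higher = better)
gpa        = [3.9, 2.8, 3.4, 3.1]
test_score = [88, 71, 90, 65]
experience = [2, 5, 3, 1]

cols = [zscores(col) for col in (gpa, test_score, experience)]
scores = [sum(col[i] for col in cols) for i in range(len(gpa))]
best = scores.index(max(scores))
print(f"top-ranked applicant: #{best}")
```

No optimization, no fitted coefficients: just consistency. That even this crude device competes with expert judgment is the heart of the fifty-year literature Meehl began.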

Cognitive scientists such as Meehl, Kahneman, and Tversky help us understand that the increasing ubiquity of predictive analytics is in large measure due to fundamental limitations in human cognition. Thus there is good reason to expect that analytical methods will continue to gain traction in an ever widening field of endeavors.

Behavioral economists such as Richard Thaler and Dan Ariely15 help us better appreciate the strategic implications for businesses, hospitals, governments and other organizations. A recipe for success is to use analytics to exploit market inefficiencies resulting from intuitionist decision-making.

The revolution in predictive analytics is already with us; cognitive science teaches us why this is so; and behavioral economics teaches us that we can use this insight to exploit market inefficiencies. Unfortunately, building and executing predictive analytics strategies is not as easy as picking up a $20 bill on a sidewalk. Predictive modeling is a highly complex and multi-disciplinary undertaking.  This is a major reason predictive analytics isn’t even more ubiquitous than it already is. On the plus side, many opportunities remain.

Synthesizing analytics

Predictive analytics is now coming into its own both because of the findings of cognitive science and behavioral economics and also because of a recent and rapid proliferation of huge databases, cheap computing power, and advances in data visualization, applied statistics and machine learning techniques. Any business process that calls on human decision-makers to repeatedly weigh multiple factors to arrive at decisions could, more likely than not, be improved through predictive analytics. Furthermore, if these decisions are central to a company’s core strategy (such as underwriting for an insurance company, or hiring dependably friendly and motivated workers for a restaurant chain), much more is at stake than improvements in business process efficiencies. Analytics and predictive models can help companies win by exploiting market inefficiencies.

But basing competitive strategies on analytics is by no means ubiquitous. We have often been surprised by the modest extent to which analytics is embraced even at prestigious companies. One probable reason: analytics is difficult and often misunderstood. Aside from the obvious entry barrier of mastering advanced “super crunching” techniques, analytics is multi-disciplinary in ways that are not always made clear in academic or journalistic discussions. Indeed, modeling itself is often not the dominant phase of an end-to-end predictive modeling project. A full-blown predictive analytics project calls on a broad range of skills that includes business strategy, subject-matter experience and knowledge, project management, knowledge of statistics and machine learning techniques, programming, technological and business implementation, and organizational change management. Furthermore, all of these ingredients must be leavened with the tacit knowledge of experienced predictive modelers who know what complexities to avoid and how to attack the complexities that remain. Full-blown predictive analytics projects are therefore team efforts that require time, investment, and a multi-disciplinary range of skills.

In our experience, this important fact is often overlooked by companies that intend to embark on analytics or predictive modeling projects. Perhaps influenced by breezy journalistic accounts of analytics, managers often underestimate the array of resources and level of investment needed to pull off an end-to-end modeling project. For example, IT managers often think that point-and-click analytics tools can do the job; statisticians often revel in the purely technical aspects of the project, while downplaying other project phases and required skills; actuaries often view modeling projects as pure-play actuarial projects; and business analysts often think that spreadsheet-based analyses suffice.

Executive sponsorship can do an end-run around this “blind men encountering the elephant” problem by ensuring that appropriate investments are made up front in the multi-disciplinary array of skills needed.

Finally, enlightened executive sponsorship is also needed to ensure that the fruits of analytics projects are embraced by the larger organization. This can be its own challenge. Some organizations face initial indifference or even hostility to analytics-based strategy. This was certainly the situation Billy Beane faced when he undertook the difficult process of transforming the Oakland A’s into an analytically oriented team. As Lewis describes, instilling culture change was a major – and difficult – part of Beane’s job. Other organizations, such as insurance companies or large retailers, will be more naturally inclined to embrace analytics-based strategies. In such organizations, the challenge is to ensure that the ultimate users are engaged in the design and construction of the analytical tools that they must eventually embrace.  In either scenario, executive sponsorship is needed to ensure that the analytics project becomes more than a back-office technical exercise.

In Competing On Analytics, Thomas Davenport and Jeanne Harris discuss this theme and relate that one well-known CEO kept a sign on his desk quoting W. Edwards Deming’s famous aphorism, “In God we trust; all others bring data.”16 We find it rational to expect that this CEO will find himself in good company in the coming years.

Postscript

Still haunted by the Monty Hall problem? Here is a way to see the answer. First let us clarify the rules of the game: Monty knows which of the doors conceals the car; he must open a door with a goat; and if the car really is behind the door you chose (#1), he will open one of the other two doors (#2 or #3) with equal probability. Clearly at the beginning of the game, it was equally likely that the car was behind doors #1, #2, and #3. In particular, it follows that the probability of the car being behind door #1 was 1/3 and the probability of it being behind either door #2 or door #3 was 2/3. This remains true even after Monty opened door #3 and revealed not the car but a goat. But now the probability that door #3 conceals the car is zero. Therefore all of the 2/3 probability is shifted to door #2.
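Readers who remain unconvinced can let a computer play the game. The short simulation below (the function name and trial count are ours) follows the rules just stated and tallies how often staying and switching win:

```python
import random

def play(switch, trials=100_000):
    """Play Let's Make a Deal many times; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)            # door hiding the car
        pick = random.randrange(3)           # contestant's initial guess
        # Monty opens a goat door other than the contestant's pick;
        # if he has two choices, he picks between them at random
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # move to the one door that is neither picked nor opened
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

Over many trials the staying strategy wins about a third of the time and the switching strategy about two thirds, just as the argument above predicts.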

An online interactive feature from the New York Times17 enables you to perform a computer simulation yourself. Also, one of Kevin Spacey’s MIT students tries to explain the Monty Hall problem in the film 21. But it is unlikely that his explanation would receive full marks at the real-life MIT.

Endnotes

  1. John Kay, Financial Times, August 16, 2005 <http://www.johnkay.com/society/401>.
  2. Richard H. Thaler and Cass R. Sunstein, Nudge:  Improving Decisions about Health, Wealth, and Happiness (Yale University Press, 2008) pp. 6-7
  3. Michael Lewis, Moneyball:  The Art of Winning an Unfair Game (W. W. Norton & Company, 2003)
  4. Ibid, p.122
  5. Richard H. Thaler and Cass R. Sunstein, “Who’s on First,” The New Republic, September 1, 2003 <http://www.law.uchicago.edu/news/susntein/2003/moneyball.html>.
  6. Ibid
  7. Ibid
  8. John Houston and Russ Clarke “Moving Beyond ‘Data Rich, Knowledge Poor’ in Human Resources,” Deloitte Consulting, 2008. <http://www.deloitte.com/dtt/cda/doc/content/us_consulting_hc_workforceintelligence_211008.pdf>.
  9. Ian Ayres, Super Crunchers: Why Thinking-by-Numbers Is the New Way to Be Smart (Bantam Books, 2007), p. 10
  10. Malcolm Gladwell, Blink: The Power of Thinking without Thinking (Little Brown and Co, 2005) p. 134
  11. Paul Meehl, Clinical Versus Statistical Prediction:  A Theoretical Analysis and a Review of the Evidence (University of Minnesota Press, 1954)
  12. Michael A. Bishop and J. D. Trout, “50 Years of Successful Predictive Modeling Should be Enough:  Lessons for Philosophy of Science,” Philosophy of Science 69, September 2002, p. S198
  13. Paul Meehl, “Causes and Effects of My Disturbing Little Book,” Journal of Personality Assessment 50, pp. 370-375
  14. Richard Nisbett and Lee Ross, Human Inference:  Strategies and Shortcomings of Social Judgment (Prentice-Hall, 1980), p. 141
  15. Dan Ariely, Predictably Irrational:  The Hidden Forces that Shape Our Decisions (HarperCollins 2008)
  16. Thomas H. Davenport and Jeanne G. Harris, Competing on Analytics: The New Science of Winning (Harvard Business School Press, 2006) p.30
  17. Interactive feature, The New York Times, April 8, 2008  <http://www.nytimes.com/2008/04/08/science/08monty.html#>.

About The Authors

James Guszcza

James Guszcza, PhD, FCAS, MAAA is a senior manager with Deloitte Consulting LLP and the predictive analytics leader of the Advanced Quantitative Solutions service line.

John Lucker

John Lucker is a principal with Deloitte Consulting LLP and the co-leader of the Advanced Quantitative Solutions service line (data mining, predictive modeling and advanced analytics-related services).

Cover Image by Sterling Hundley