Research shows that people tend to be overly risk averse when weighing the potential success or failure of a decision. This tendency is compounded when we consider the vast number of decisions being made across an organization. For various reasons, both individuals and groups are often cautious at the expense of their long-term success. From an analytics practitioner’s point of view, this misalignment presents opportunities to improve outcomes through the strategic use of data.
Before we discuss these opportunities, it is important to understand the psychological causes of risk aversion. Why is it that people avoid taking risks? One reason can be demonstrated through a betting game. Imagine that you have the opportunity to bet on a single coin flip, and you get to call heads or tails. If your choice comes up, you win $150. If the coin lands on the other side, you lose $100. Will you play the game?
When asked this question, most people decline the offer, despite its positive expected value (though their responses can change depending on the probabilities and magnitudes of the winning and losing amounts). The reasoning behind that choice is that the pain of losing is greater than the pleasure of winning. Most of us intuitively know that playing many rounds of the game virtually ensures a positive outcome, yet a single round seems risky. This type of perception has been thoroughly studied by behavioral economists and psychologists like Daniel Kahneman, who discusses it in his book Thinking, Fast and Slow.
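The intuition that many rounds virtually ensure a positive outcome is easy to check with a quick simulation. The sketch below (illustrative only; the payoff amounts come from the bet described above, and the trial counts are arbitrary) estimates the probability of ending ahead after one round versus one hundred rounds:

```python
import random

def play_rounds(n_rounds, win=150, loss=-100, trials=10_000, seed=42):
    """Simulate the coin-flip bet and estimate the chance of ending ahead."""
    rng = random.Random(seed)
    ahead = 0
    for _ in range(trials):
        total = sum(win if rng.random() < 0.5 else loss
                    for _ in range(n_rounds))
        if total > 0:
            ahead += 1
    return ahead / trials

# Expected value per round: 0.5 * 150 + 0.5 * (-100) = $25,
# yet a single round is still a coin flip on being ahead.
print(play_rounds(1))    # ~0.5
print(play_rounds(100))  # ~0.97: many rounds virtually guarantee a profit
```

A single round leaves you behind about half the time; a hundred rounds almost never do, even though each round is identical. That gap between the one-shot view and the portfolio view is exactly what narrow framing obscures.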
Kahneman explains the phenomenon as a combination of two factors. One is “loss aversion,” the tendency to treat a loss as more salient than an equivalent gain. The other is “narrow framing,” or evaluating a decision in isolation rather than as one of many. Narrow framing makes sense if there is a lot riding on the outcome of a single decision; the problem is that people treat many decisions as disproportionately momentous. It is better to think broadly and consider each decision as one item in a large portfolio. If we want to maximize success overall and have the sum of our actions reflect our appetite for risk, we should not let ourselves be too bothered by any single outcome.
Loss aversion is related to other ideas that Kahneman studied, which boil down to people’s propensity to overpay for greater certainty. Indeed, the statistician Andrew Gelman has referred to some descriptions of loss aversion as “uncertainty aversion.” Uncertainty aversion can come into play even in situations without potential for direct loss; people also consider the opportunity cost of change. Whether we are thinking in terms of dollars or some other metric, a choice could lead to a loss, breaking even, or a profit; but we are also concerned with how that result compares to alternative courses of action.
To illustrate, let’s consider two potential projects, A and B. Choice A has a modest investment cost and a good chance for a moderate return. Choice B has a greater cost and slightly more uncertain results, though there is greater upside potential. Which one should we choose? Again, the specifics matter, and if the failure of Choice B would be catastrophic, it is not a reasonable option. However, even in the absence of a disastrous potential outcome, many of us are biased toward the safer option, despite the fact that it may be inferior from the “decision portfolio” point of view. This does not mean that lottery ticket-type bets are good strategic choices; an outcome with a very high reward but very low probability is not useful unless it is cheap and repeatable at scale. We need to stay in the realm of reasonably likely possibilities.
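Putting hypothetical numbers on A and B makes the bias easier to see. In the sketch below, all payoffs and probabilities are invented for illustration (net returns in thousands of dollars): A is the safe choice, B the riskier one with greater upside.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, net payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def std_dev(outcomes):
    """Standard deviation of the payoff distribution."""
    mu = expected_value(outcomes)
    return sum(p * (x - mu) ** 2 for p, x in outcomes) ** 0.5

# Hypothetical net payoffs (after investment cost), illustrative numbers only.
project_a = [(0.80, 15), (0.45 + 0.20 - 0.45, -10)]  # safe: likely modest gain
project_b = [(0.55, 70), (0.45, -30)]                # risky: bigger swing both ways

print(expected_value(project_a), std_dev(project_a))  # EV 10, sd 10
print(expected_value(project_b), std_dev(project_b))  # EV 25, sd ~49.7
```

Under these made-up numbers, B has two and a half times the expected value of A but far more spread. Viewed as one decision among many in a portfolio, B is the better bet; viewed in isolation, its higher standard deviation makes it feel like the one to avoid.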
A third option in our hypothetical example might be to choose neither A nor B – perhaps each of the choices seems too risky (which might be true!). Inaction is often the easiest path, and sure enough, “status quo bias” is another well-known cognitive phenomenon.
Our decisions can hurt us individually in the long run, but there is also a collective cost to extreme risk aversion. Corporations can fall prey to these same cognitive biases. Although Kahneman argued that playing it safe can be irrational on an individual level, risk aversion is rational for a person in an organization that incentivizes the “safe option.” Corporate cultures often influence their employees to do things the way they have always been done (a collective demonstration of the status quo bias). Why take more risk than necessary when the downside could be losing your job?
This mentality is particularly evident in sports, where ample game data is readily available and the public can witness coaching decisions and their outcomes. Coaches have traditionally maintained job security by making strategic decisions “by the book,” using the accepted judgment that has supposedly passed the test of time. The problem lies in the fact that the data doesn’t always support the orthodoxy. Even in the post-Moneyball era, you have to look below the professional leagues to find coaches who are truly innovative in their data-based strategies; on professional teams, only coaches with long track records of success are given enough rope to take a chance without the risk of losing their jobs if their strategies fail in small samples. The same type of attitude surely exists in other industries, and two prime reasons for this are the underuse of data and insufficient incentives for appropriate levels of risk taking.
Why is it difficult to avoid these biases toward overly conservative decisions? Our contrived coin flip example with known probabilities and discrete outcomes should be simple to evaluate, yet even that bet leads to sub-optimal behavior. When we consider real-world decisions, it is much trickier. Thankfully, one of the purposes of analytics is to take better advantage of data in order to improve decision making. Let’s consider some ways to do this.
One way to better equip decision makers is to help them more accurately judge the probability of the potential outcomes of resource allocation decisions. Imagine a scenario where a person or team is tasked with selecting a set of items on which to take some action, such as sales leads to pursue, or insurance claims to investigate. Perhaps the existing methodology for making those selections is based on a combination of business rules and intuition. Regardless of how items have been selected in the past, each has an outcome of success or failure (such as making a sale or confirming your suspicion of fraud), aggregating to some overall measure(s) of success. Using historical data containing the attributes and outcomes of past selections, we could create a predictive model to identify the cases most likely to be successful. At a minimum, this allows us to examine a case with a particular profile and get a data-backed estimate of its probability of success.
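Before building a full predictive model, the simplest data-backed estimate is the empirical success rate for each case profile seen in the historical data. The sketch below is a minimal illustration; the profile fields (industry, lead source) and the records are hypothetical:

```python
from collections import defaultdict

def success_rates(history):
    """Estimate P(success) per profile from (profile, outcome) records.

    history: iterable of (profile, succeeded) pairs, where profile is any
    hashable summary of a case's attributes and succeeded is a bool.
    """
    wins = defaultdict(int)
    totals = defaultdict(int)
    for profile, succeeded in history:
        totals[profile] += 1
        wins[profile] += int(succeeded)
    return {p: wins[p] / totals[p] for p in totals}

# Hypothetical sales-lead history: (industry, lead source) -> closed or not.
history = [
    (("retail", "referral"), True), (("retail", "referral"), True),
    (("retail", "cold"), False),    (("retail", "cold"), True),
    (("tech", "referral"), True),   (("tech", "cold"), False),
]
rates = success_rates(history)
print(rates[("retail", "referral")])  # 1.0
print(rates[("retail", "cold")])      # 0.5
```

A real predictive model generalizes this idea: instead of requiring an exact profile match, it learns how each attribute contributes to the probability of success, so it can score cases it has never seen before.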
There are some caveats to this modeling approach. Because the model is trained on previously selected records, it will be biased by the old case-selection process. We don’t know the outcomes of the cases that were not pursued, and the old process may have been systematically missing some desirable portion of them. There are ways to combat this “reinforcement bias,” but they require a long-term plan: the company must select some cases that the model does not recommend, pursue them, record the outcomes, and incorporate that new data into future iterations of the model. A second caveat is that a single model ignores the magnitude of success; the situation could benefit from a two-stage modeling approach, with one model predicting the likelihood of success (a sale, a confirmed fraud) and a second predicting the size of the successful outcomes.
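The long-term plan for combating reinforcement bias can be sketched as a simple selection rule: spend most of the budget on the model’s top-scored cases, but reserve a small fraction for cases the model would not recommend, so their outcomes can feed future training data. This is one possible implementation, not a prescribed method; the function names, budget, and exploration fraction are all assumptions.

```python
import random

def select_cases(scored_cases, budget, explore_frac=0.1, seed=0):
    """Pick `budget` cases: mostly the model's top picks, plus a small
    random sample of non-recommended cases whose outcomes will feed
    future model iterations (combating reinforcement bias).

    scored_cases: list of (case_id, model_score) pairs.
    """
    rng = random.Random(seed)
    ranked = sorted(scored_cases, key=lambda c: c[1], reverse=True)
    n_explore = int(budget * explore_frac)
    # Top-scored cases fill most of the budget...
    exploit = [cid for cid, _ in ranked[:budget - n_explore]]
    # ...and a random sample of the rest fills the exploration slots.
    rest = [cid for cid, _ in ranked[budget - n_explore:]]
    explore = rng.sample(rest, min(n_explore, len(rest)))
    return exploit, explore

rng = random.Random(1)
cases = [(f"case{i}", rng.random()) for i in range(20)]
exploit, explore = select_cases(cases, budget=10, explore_frac=0.2)
print(len(exploit), len(explore))  # 8 2
```

The exploration fraction is a business decision: it trades a small amount of short-term performance for data that keeps the model honest over time.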
When applying a predictive model, one way to mitigate the risk inherent in change is to partially operationalize the model rather than completely doing away with the old approach. Implementation of the model in a pilot mode could consist of randomly splitting the cases and then applying the model to one portion (the treatment group) and the old selection process to the other (the control group). This is beneficial because it allows for proper measurement of the model’s effectiveness – does the model really improve results compared to the baseline rate of the old method? If the model leads to inferior performance, the downside is limited because you have not yet fully committed to the model or abandoned old processes.
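The pilot-mode split described above can be sketched in a few lines. This is a minimal illustration under assumed parameters (a 50/50 split of 1,000 cases); a real pilot would also test whether the measured lift is statistically significant.

```python
import random

def pilot_split(case_ids, treatment_frac=0.5, seed=0):
    """Randomly assign cases to the model (treatment group) or the
    old selection process (control group) for a pilot comparison."""
    rng = random.Random(seed)
    ids = list(case_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * treatment_frac)
    return ids[:cut], ids[cut:]

def success_rate(outcomes):
    """outcomes: list of booleans (case succeeded or not)."""
    return sum(outcomes) / len(outcomes)

treatment, control = pilot_split(range(1000))
print(len(treatment), len(control))  # 500 500

# After the pilot runs, the comparison is a difference in success rates:
# lift = success_rate(treatment_outcomes) - success_rate(control_outcomes)
```

Random assignment is what makes the comparison fair: any difference between the groups can then be attributed to the model rather than to how the cases were divided.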
How can an organization avoid the tendency to make too many safe decisions because they are easier to defend? Organizational culture is difficult to change, but starting with deliberate, targeted steps can get the ball rolling. Managers should find ways to encourage their teams to take smart risks. People need to know that rewards are available for taking appropriate risks, and that they will not be punished for individual decisions that turn out poorly in the short term but make sense in the bigger picture. In analyzing results, it is critical to make efforts to separate controllable and uncontrollable factors, and assign credit or blame based on the former. Think about the decision-making process and not only the outcomes. Augmenting human decisions with predictive models is a good way to codify that process.
More generally, organizations should be striving to try new things, learn from mistakes, share those learnings for the benefit of the team, and move on. You must make a habit of re-evaluating processes and asking, “Why do we do it this way?” Don’t make inaction the default action, and don’t take the safe option just because it feels more comfortable in that particular instance. Think broadly and remember that just because something works, that does not automatically mean it is optimal. Look for ways to measure baseline success against new approaches.
Making these changes has the potential to ripple out as people get more comfortable with productive risk taking. Superior, data-driven processes can become the new norms against which the next potential ideas are pitted. In this environment, individuals are more empowered and the organization is better positioned through improved risk management. Incorporating analytics-based thinking in as many areas as possible provides the opportunity for innovation while also ensuring that the organization’s overall risk preferences are reflected in individual choices.
Ryan McGibony’s work as a Data Scientist at Elder Research has included network analysis and graph databases, contract fraud modeling, anomaly detection, and facility valuation. Previously, he conducted and analyzed custom marketing research for corporate and non-profit clients at a full-service research firm, incorporating customer segmentation and predictive modeling techniques. Ryan also spent two years in Mongolia as a Peace Corps volunteer in the Community Economic Development program. Throughout his career, Ryan has enjoyed working with a wide variety of clients, taking care to understand their needs, and finding solutions to their problems.