Decisions, decisions, …
So we’re beginning a new project, and it has come time to decide what we’re going to build. In many ways, we are spoiled for choice. There are many ways in which the existing software could be improved. Do we pay down some technical debt, include a new feature or support new hardware?
If it were me, the choice would be clear: pay down technical debt and increase your production capacity. That way, you can ship customer-visible features more quickly. Alas, it is not me who gets to decide. That privilege falls to a combination of Management and Marketing.
When it comes to making decisions, these people fall headlong into a type of decision paralysis driven by loss aversion. Given everything we could do, and the limited time and resources we have to do it, getting to a decision seems torturous.
As an example, say we have sufficient resources to take on 5 features from our backlog of 15. The relative priorities of all 15 tasks will be analysed while various stakeholders horse-trade. Meanwhile, we lose sight of the fact that we can only do 5, no matter how the other 10 are prioritised.
I see this as an example of loss aversion at work. Rather than choose to do (or not do) something, people tend to flip-flop on priorities and end up in analysis paralysis. A nice example of this is documented in Ariely (2009) via CodingHorror:
Ariely and Shin conducted an experiment on MIT students. They devised a computer game which offered players three doors: Red, Blue, and Green. You started with 100 clicks. You clicked to enter a room. Once in a room, each click netted you between 1 and 10 cents. You could also switch rooms (at the cost of a click). The rooms were programmed to provide different levels of rewards (there was variation within each room’s payoffs, but it was pretty easy to tell which one provided the best payout).
Players tended to try all three rooms, figure out which one had the highest payout, and then spend all their time there. (These are MIT students we’re talking about). Then, however, Ariely introduced a new wrinkle: Any door left unvisited for 12 clicks would disappear forever. With each click, the unclicked doors shrank by 1/12th.
Now, players jumped from door to door, trying to keep their options open. They made 15% less money; in fact, by choosing any of the doors and sticking with it, they could have made more money.
Ariely increased the cost of opening a door to 3 cents; no change: players still seemed compelled to keep their options open. Ariely told participants the exact monetary payoff of each door; no change. Ariely allowed participants as many practice runs as they wanted before the actual experiment; no change. Ariely changed the rules so that any door could be “reincarnated” with a single click; no change.
Players just couldn’t tolerate the idea of the loss, and so they did whatever was necessary to prevent their doors from closing, even though disappearance had no real consequences and could be easily reversed. We feel compelled to preserve options, even at great expense, even when it doesn’t make sense.
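The cost of option-preserving behaviour is easy to see in a toy simulation of the door game described above. This is only a sketch: the 100-click budget, the switching cost, and the 12-click disappearance rule come from the text, but the per-door payout ranges and the exact "keep every door alive" strategy are my assumptions, not the original experiment's parameters.

```python
import random

# Assumed per-door payout ranges in cents; only the 1-10 cent
# overall range is from the text, the per-door split is invented.
PAYOUTS = {"red": (1, 4), "blue": (3, 7), "green": (6, 10)}
TOTAL_CLICKS = 100
DISAPPEAR_AFTER = 12  # clicks of neglect before a door vanishes


def play(strategy):
    """Run one game with the given strategy; return total earnings."""
    random.seed(42)  # same payout stream for both strategies
    doors = list(PAYOUTS)
    neglect = {d: 0 for d in doors}
    room, earnings, clicks = "green", 0, TOTAL_CLICKS
    while clicks > 0:
        target = strategy(room, doors, neglect)
        if target != room:
            room = target                 # switching costs a click
        else:
            lo, hi = PAYOUTS[room]
            earnings += random.randint(lo, hi)  # clicking pays out
        clicks -= 1
        for d in doors:
            neglect[d] = 0 if d == room else neglect[d] + 1
        doors = [d for d in doors if neglect[d] < DISAPPEAR_AFTER]
    return earnings


def stick(room, doors, neglect):
    # Ignore vanishing doors; stay in the best room.
    return room


def preserve(room, doors, neglect):
    # Dash to whichever surviving door is closest to vanishing.
    urgent = max(doors, key=lambda d: neglect[d])
    return urgent if neglect[urgent] >= DISAPPEAR_AFTER - 2 else room


print("stick:", play(stick), "preserve:", play(preserve))
```

Under these assumptions, the sticking strategy comes out well ahead: the option-preserver spends clicks shuttling between rooms and earning in the worse ones, echoing the roughly 15% penalty Ariely observed.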
If anybody knows how to avoid this behaviour, I’d love to know. How do I encourage those with choices to make, to make choices?
- Ariely, D. (2009) Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper