Choice Architecture: Don’t be Evil

HBR’s recent article, “Leaders as Decision Architects,” is a detailed investigation of how business managers can improve employee choices by adjusting the framework, context and process in which those decisions are made. The authors draw on two of my favorite books in behavioral economics, “Thinking, Fast and Slow” and “Nudge.” There’s no doubt that in the right hands these techniques could lead to more productive, healthier and wealthier employees. However, while the ends are laudable, some of the means are not consistent with a modern view of human dignity and integrity.

The article begins with a roadmap for identifying when behavioral economics can improve decision making:

  1. Understand the systematic errors in decision making that can occur.
  2. Determine whether behavioral issues are at the heart of the poor decisions in question.
  3. Pinpoint the specific underlying causes.
  4. Redesign the decision-making context to mitigate the negative impacts of biases and inadequate motivation.
  5. Rigorously test the solution.

The authors make a helpful distinction between poor decisions (or the absence of a decision) resulting from insufficient motivation and those resulting from cognitive biases. Decisions that suffer from a lack of motivation can benefit from reminders and changes in defaults, such as when 401(k) participation is the default and does not require a proactive sign-up process. Decisions that suffer from systematic errors due to cognitive biases can be improved by adjusting how they are framed and the tools provided. For example, the authors mention a corporate cafeteria (Google’s) that reduced overeating by placing a sign next to the plates noting that people who choose the bigger plates usually eat a lot more.

I also think the idea of rigorously testing alternative techniques using classic social science tools, like having a control group, is helpful. However, according to Zhong (2011), an excessive focus on quantitative measures of success can itself lead to morally questionable decisions.
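To make the control-group idea concrete, here is a minimal sketch (not from the HBR article) of how such a test might be scored: compare an outcome, say 401(k) enrollment, between a control group with the old opt-in process and a group given the new default, and check whether the difference is bigger than chance would explain. The counts, group sizes, and scenario below are invented purely for illustration, and the test uses scipy’s standard chi-square routine.

```python
# Hypothetical example: did making 401(k) enrollment the default raise participation?
# All counts are invented for illustration only.
from scipy.stats import chi2_contingency

enrolled_control, total_control = 180, 500   # opt-in sign-up (control group)
enrolled_treated, total_treated = 310, 500   # enrollment by default (treatment group)

# 2x2 table: enrolled vs. not enrolled, for each group
table = [
    [enrolled_control, total_control - enrolled_control],
    [enrolled_treated, total_treated - enrolled_treated],
]

chi2, p_value, _, _ = chi2_contingency(table)
print(f"Control rate:   {enrolled_control / total_control:.1%}")
print(f"Treatment rate: {enrolled_treated / total_treated:.1%}")
print(f"p-value: {p_value:.4f}")  # a small p-value suggests the gap is unlikely to be chance
```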

Indeed, the article gets into trouble when it proposes choice architecture as a tool for steering employees toward their own best interests more reliably than education and monetary incentives do. The latter require employees to change the way they think about a problem, which is hard, and they can fail when employees don’t do what’s best for them. How comfortable would you be if you were on the receiving end of techniques like these?

  1. Leverage emotions: for example, have new employees write about how they can apply their personal strengths to their new job.
  2. “Harness biases”: for example, keep a pipeline of new talent visibly ready, which can spur current employees to work harder out of fear for their jobs.

Companies, perhaps even more than governments, need to tread carefully on this paternalistic and potentially manipulative ground. When the choice architecture affects employees’ own well-being, even for their own apparent good, the employees’ agency and autonomy ought to be respected. Some techniques, especially those that “harness biases,” clearly fail tests like the Golden Rule or Kant’s Categorical Imperative. Ethics aside, there is plenty of evidence that employees need autonomy to be maximally productive and loyal. A simple potential fix is disclosure. While I wouldn’t want to work at a bias-harnessing company, such a policy seems slightly less egregious if employees are informed and, ideally, consenting.
