As the influence of behavioral economics has grown, companies have increasingly adopted “nudges” to influence how users of their products or services make choices. But nudges — changes in how choices are presented or set up in order to steer people toward specific ones — can have troubling consequences. Consequently, business leaders need to look critically at how they nudge users to understand whether they are truly acting in users’ best interests. Drawing from a landmark report on the conduct of biomedical and behavioral research involving human subjects, this article offers three principles to help companies design ethical nudges.


People aren’t fully rational. Environments, whether physical or digital, influence the choices people make and how they behave. Anyone who has followed the cues to keep their distance from others in a supermarket line during the pandemic, or who has ended up donating more money to a charity than they had originally intended because of the suggested donation amounts on the charity’s webpage, has likely been subject to a nudge. Originating in the field of behavioral economics, nudges are changes in how choices are presented or set up to influence people to take a specific action. They are extremely effective in steering consumer behavior but can have troubling consequences. Consider how Facebook’s “like” button has contributed to digital addiction and the way YouTube’s recommendation algorithm has fueled extremism and hate. As these examples make abundantly clear, business leaders need to look critically at how they nudge users to understand whether they are truly acting in those users’ best interests.

Richard Thaler and Cass Sunstein, who pioneered nudge theory, offer a few guiding principles on how to “nudge for good.” Nudges should be transparent, never misleading, and easy to opt out of. They should be driven by the strong belief that the behavior being encouraged will improve the welfare of those being nudged, and they should not run counter to those people’s interests, as the nudges that generated criticism of Uber in 2017 did. Similarly, Nir Eyal, author of Hooked, suggests using his Manipulation Matrix to determine whether nudges should be redesigned. It involves answering two questions: 1) “Will I use the product myself?” and 2) “Will the product help users materially improve their lives?”

These principles are a great starting point, but they aren’t sufficient. In this article, we present a more robust framework for designing and evaluating nudges. It draws on the three principles presented in 1979 in the U.S. Department of Health, Education, and Welfare’s Belmont Report, which was written to guide the conduct of biomedical and behavioral research involving human subjects. Those principles have greatly shaped how research subjects are selected, consented, and treated today.

Principle 1: Respect for Persons

This principle consists of two parts:

Individuals should be treated as autonomous agents. Here’s what that means:

“An autonomous person is an individual capable of deliberation about personal goals and of acting under the direction of such deliberation. To respect autonomy is to give weight to autonomous persons’ considered opinions and choices while refraining from obstructing their actions unless they are clearly detrimental to others. To show lack of respect for an autonomous agent is to repudiate that person’s considered judgments, to deny an individual the freedom to act on those considered judgments, or to withhold information necessary to make a considered judgment, when there are no compelling reasons to do so.”

People with diminished autonomy are entitled to protection. The report explains:

“The capacity for self-determination matures during an individual’s life, and some individuals lose this capacity wholly or in part because of illness, mental disability, or circumstances that severely restrict liberty. Respect for the immature and the incapacitated may require protecting them as they mature or while they are incapacitated. Some persons are in need of extensive protection, even to the point of excluding them from activities which may harm them; other persons require little protection beyond making sure they undertake activities freely and with awareness of possible adverse consequence.”

When applying this principle to persuasive design — how a product or service is designed to influence the user’s behavior — business leaders should think beyond being transparent about nudges and allowing users to opt out. To truly preserve and protect autonomy, leaders should consider mechanisms for obtaining the user’s consent before influencing their behavior, even when it is for their benefit.

That presents a challenge: Some behavioral nudges don’t work as well if recipients are aware of them. If you tell schoolchildren that the vegetables were placed first in the cafeteria line in the hope of increasing the odds that they will choose and eat them, they will likely do the opposite and skip them. But not telling them can diminish their autonomy. One way to address this conflict is to find a happy medium by being vague but transparent. For example, Headspace, a guided meditation app, asks users during sign-up to consent to receiving nudges in the form of notifications that are relevant to their specific goals (e.g., improving mindfulness, sleeping better). Moments like these build trust with users. (In the case of the school cafeteria, a possible solution is to add a sign that says: “We offer you wholesome, healthy meals that require a mix of carbs, vegetables, and proteins.”)
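To make this consent-first approach concrete, here is a minimal sketch, in Python, of what consent-gated nudging might look like. The nudge categories, the NudgeConsent structure, and the delivery helper are illustrative assumptions for this sketch, not Headspace’s actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class NudgeConsent:
    """Per-category nudge permissions a user grants during sign-up."""
    categories: dict[str, bool] = field(default_factory=dict)

    def allows(self, category: str) -> bool:
        # Default to False: if no consent was recorded, no nudge is sent.
        return self.categories.get(category, False)


def deliver_notification(message: str) -> None:
    # Stand-in for a real push-notification service.
    print(f"Notification: {message}")


def maybe_send_nudge(consent: NudgeConsent, category: str, message: str) -> bool:
    """Send a nudge only if the user opted in to this category of nudges."""
    if not consent.allows(category):
        return False  # Respect autonomy: no consent, no nudge.
    deliver_notification(message)
    return True


# A user who consented to sleep nudges but declined mindfulness nudges.
consent = NudgeConsent({"sleep": True, "mindfulness": False})
maybe_send_nudge(consent, "sleep", "Wind down: your bedtime routine starts soon.")
maybe_send_nudge(consent, "mindfulness", "Time for a mindful minute?")  # skipped
```

The design choice here is that silence means no: a category the user never consented to is treated as a refusal, rather than nudging by default and asking users to opt out later.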

One could argue that giving users options to ignore or dismiss a nudge negates the need for explicit permission up front. That may be true, but it is important to consider whether people are being manipulated into doing something they really don’t want to do (e.g., by making the effort required to opt out of the nudge too great). If they are, then obtaining their permission up front is necessary.

Principle 2: Beneficence

The second Belmont principle is keeping the interests of others in mind. It includes not only protecting people from harm but also trying to secure their well-being. The principle of beneficence guides researchers to minimize risks to participants and maximize benefits to participants and society. When applied to product and innovation design, this principle directs leaders to assess and account for any potential downsides of nudges.

For example, as a 2017 exposé in The New York Times revealed, ride-share apps use nudges to queue up another ride and to tell drivers whether they are meeting their income goals. While this convenient feature normally benefits drivers, it is easy to see how it could cause harm. Should the app nudge a driver who has been behind the wheel for 12 hours straight to take one last ride so that they can hit their weekly goal of $1,000? Or should the app weigh the risk of their likely exhaustion and determine that the nudge should not occur at that particular time? Similarly, a video-streaming service could detect patterns in normal usage, recognize when users are binge-watching a show late into the night, and ask them at that moment whether they want the service to forgo auto-playing another episode past a certain hour. This goes beyond simply doing what Netflix did in reaction to criticism: offering users the ability to navigate deep into a menu to turn autoplay off.
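Here is a minimal sketch of how such beneficence checks might gate a nudge before it fires. The 10-hour fatigue limit, the quiet-hours window, and the three-episode binge threshold are invented for illustration; real values would come from safety research and policy review, not engineering convenience.

```python
from datetime import datetime, time

# Illustrative thresholds (assumptions, not industry standards).
MAX_CONTINUOUS_HOURS = 10
QUIET_START, QUIET_END = time(23, 0), time(6, 0)


def should_nudge_next_ride(hours_driven_today: float) -> bool:
    """Beneficence check: suppress the 'one more ride' nudge for fatigued
    drivers, even if it would help them hit an earnings goal."""
    return hours_driven_today < MAX_CONTINUOUS_HOURS


def should_autoplay(now: datetime, episodes_watched_tonight: int) -> bool:
    """Suppress autoplay during a late-night binge; ask the user instead."""
    in_quiet_hours = now.time() >= QUIET_START or now.time() <= QUIET_END
    binge_detected = episodes_watched_tonight >= 3
    return not (in_quiet_hours and binge_detected)


# A driver at 12 hours gets no nudge, whatever their weekly goal.
assert not should_nudge_next_ride(12.0)
# Four episodes deep at 1 a.m.: pause and ask rather than auto-play.
assert not should_autoplay(datetime(2024, 1, 1, 1, 0), episodes_watched_tonight=4)
```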

Principle 3: Justice

The third principle has to do with the equitable distribution of the burdens and benefits of research. This principle is violated when one group clearly bears the costs of the research while another group reaps its benefits. An example is targeting people of lower socioeconomic means to participate in a study that results in a drug only the wealthy can afford. At a time when sensitivities to and demands for equity, diversity, and inclusion are high, it’s especially important for business leaders to evaluate whether nudges are negatively affecting one group more than another. Is the design nudging customers of a particular race or ethnicity more than others, and is it resulting in inequities? Are there biases built into the algorithm that weren’t apparent until it started working?
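One practical way to surface such inequities is to audit nudge logs for disparate exposure across groups. The sketch below assumes a simple per-session log with group and nudged fields, and it uses an arbitrary 10-percentage-point tolerance; a real audit would choose outcome measures and thresholds with legal and fairness experts.

```python
from collections import defaultdict


def nudge_rate_by_group(logs: list[dict]) -> dict[str, float]:
    """Compute the share of sessions in which each group was shown a nudge."""
    shown: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for record in logs:
        total[record["group"]] += 1
        shown[record["group"]] += int(record["nudged"])
    return {group: shown[group] / total[group] for group in total}


def flag_disparities(rates: dict[str, float], tolerance: float = 0.1) -> list[str]:
    """Flag groups whose nudge exposure deviates from the overall mean by
    more than `tolerance` (an illustrative threshold, not a legal standard)."""
    mean = sum(rates.values()) / len(rates)
    return [group for group, rate in rates.items() if abs(rate - mean) > tolerance]


logs = [
    {"group": "A", "nudged": True}, {"group": "A", "nudged": True},
    {"group": "B", "nudged": False}, {"group": "B", "nudged": True},
]
rates = nudge_rate_by_group(logs)  # {'A': 1.0, 'B': 0.5}
print(flag_disparities(rates))     # ['A', 'B']: both deviate from the 0.75 mean
```

An audit like this only reveals differences in exposure; deciding whether a difference is an inequity still requires human judgment about who bears the costs and who reaps the benefits.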

Companies are only getting more powerful — thanks to the numerous activities we do online and to developments in data science and artificial intelligence. They are starting to really understand what makes us tick. These advances make it even more important for industry leaders to set standards for what is permissible and what is right.
