There’s a new framework designers can use to think about ethics. Get the recap.
When we talk to designers, they consistently mention wanting to shape a better world. Many have an underlying desire to help society and uphold basic human rights. Maybe you’re one of them. But wanting to shape a better world, while necessary, isn’t sufficient to produce ethical outcomes. Ethics is a behavioral challenge, and organizations need approaches that help people change their behaviors. One emerging solution is outlined in a new report from the World Economic Forum (WEF).
A new framework for designers to think about ethics
The report — “Ethics by Design: An organizational approach to responsible use of technology” — illuminates ways that companies can help employees (especially designers) make ethical choices. Read on for designer-centered takeaways and download the full report.
[Report Authored by Daniel Lim, Nicholas Bullard, James Guszcza, Emily Ratté, Ann Gregg Skeet, Inna Sverdlova and Lorraine White]
We’ve already seen how technology built at speed, without ethical frameworks, has the potential to fortify or erode some of our fundamental human values.
For example, the report outlines how unchecked AI and social media technologies can promote the spread of disinformation (as shown in “The Social Dilemma” documentary and “The Hype Machine” guide). Yet those same technologies have been used by Amnesty International and Element AI to identify and measure online abuse against women, shining a light on toxic dynamics in our culture and offering concrete steps to combat it.
It’s part of corporate responsibility to ensure AI is used ethically, and designers play an important role in ensuring those outcomes. WEF’s Head of Machine Learning and AI, Kay Firth-Butterfield, agrees: “I think, intrinsically, it’s important for designers to understand that what they are doing could really benefit human beings and the planet but mistakes are going to be bad.” This belief has given rise to ethical training tools for organizations, such as the “Ethics by Design” Trailhead module by Salesforce’s Office of Ethical and Humane Use.
“For designers, there’s just a sense of responsibility we have to create the world we want to live in… We’re creating the systems that are affecting so much of our society”
-Daniel Lim, Senior Director of Experience Design (Salesforce) and Artificial Intelligence and Machine Learning Fellow (World Economic Forum)
For 50 years, WEF has been on a mission to improve the state of the world. They recently collaborated with Deloitte and the Markkula Center for Applied Ethics at Santa Clara University to present this comprehensive stance. In the report’s foreword, Beena Ammanath, Kay Firth-Butterfield and Don Heider articulate this mission:
“Leaders must prepare their people to:
○ be aware of the ethical risks posed by emerging tools
○ make ethical choices even in situations in which information is imperfect or ambiguous
○ be motivated to act upon that judgement in ways that advance prosocial goals
The power of business, technology, and design to shape our world for better, and worse, has become more visible. 2020 shook our society into new awareness of the intersecting economic, social, and political crises afflicting our culture. This has also spurred a greater sense of responsibility from the business world. In 2020, roughly 90% of CEO survey respondents agreed that corporations should be responsible not only to shareholders but also to customers, employees, communities, and suppliers.”
To establish responsible practices and accountable practitioners, the report outlines a framing that any organization can put to work. It all begins with three design principles.
Three design principles for promoting the responsible use of technology
Whether you’re a UX designer, an experience architect, a product designer, or a creative director, you understand that design principles set a project up for success. To promote ethical behavior, the report suggests three key principles: Attention, Construal and Motivation.
Attention
Examples: Reminders, checklists, pre-mortems, blogs
Whether making or using technology, it’s easy to lose sight of the ethical ramifications of our choices. Prompts throughout the product development process and within the user experience can offer opportunities for ongoing reflection about technology’s intended or unintended consequences. It’s the power of leveraging attention.
One example of this approach in Salesforce products is the sensitive fields flag within Einstein Content Selection. The feature got off the ground when the Salesforce team recognized that sensitive information (such as race, gender, zip code, or religion) has the potential to introduce bias into audience segmentation in Marketing Cloud and should be flagged for users. This way, when users run an analysis or prediction, they can be alerted to potential discrimination they might be perpetuating.
The result? A warning triangle in the UI that indicates a field contains sensitive information and may therefore add bias to decision making.
“In that building of the audience, we’re inserting some intentional ‘Are you sure?’ moments, where appropriate, based on data,” explains Rob Katz, Senior Director of Salesforce’s Office of Ethical & Humane Use. It’s also one way to avoid deceptive design practices known as dark patterns. “We built this because we knew that marketing automation using AI is subject to bias and we want to try and avoid that if possible.”
It comes down to intentional friction. Users need to revisit their assumptions on a semi-regular basis. That frequency can be a hard balance to strike, but it’s a necessary step: often enough to counter bias, but not so often that people begin to ignore the reminder.
And it’s equally important for teams to do this in their own process.
The report gives us an example of what this can look like. Insurer Allstate created a way to leverage attention to drive ethical outcomes internally. The organization required their senior vice presidents to sign quarterly affirmations that their teams had upheld company values. This shows how small prompts can shine a light on ingrained processes or default patterns of thinking. And one of the most essential ways to continually surface assumptions and default thought patterns is to diversify the perspectives participating in the process.
This is reinforced by Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce. She shares, “In my mind, there is no tech ethics without thinking about the who and bringing in the perspectives of folks that are most impacted by these problems.” The need to consider wider perspectives is further amplified in the design principle of construals.
Construals
Examples: Consequence scanning, advisory councils, focus groups
The ways we interpret the world and the behavior of the people in it are called construals. The WEF report reinforces that “changing how people perceive a situation can affect the behavior they deem appropriate.” At work, many people frame what they do in legal or economic terms. However, Lim reminds us that just because something is legal doesn’t mean it’s ethical. There’s an opportunity — possibly, an obligation — to raise the profile of ethics in organizations.
“As individuals, we’re always in an environment that’s put on us — in a company, a country, a team — and there are cultural norms that drive certain outcomes,” he says.
It’s critical to encourage ethics as a collective so that all opinions and interpretations of an ethical decision are considered. That includes listening to the feedback of coworkers, users and community stakeholders who have diverse lived experiences.
Designers have an opportunity to create diverse, cross-sectional forums by rethinking who is asked to collaborate and to give feedback. Organizations can’t pressure-test their thinking if they only focus on a narrow set of average users or don’t look outside their department. It should be a red flag if the only sets of eyes are from a homogenous group such as white, cisgender, able-bodied, college-educated people — even if they work across disciplines. Expanding the invitation is a way for teams to remap the design process in order to both protect people from harm and drive positive change.
In short: It’s how ethical oversights in an organization get exposed.
Firth-Butterfield expands on this, noting: “You’ve got to think of all the various people who are going to be using your tool because otherwise your tool is useless and discriminatory to some of your purchasers.” Contributors to the report build on this idea, noting: “More can be done to empower practitioners with frameworks and guidance and to institutionalize the practice of including diverse perspectives (e.g., technical, cultural or socioeconomic) in decision-making that sufficiently consider ethical consequences.”
One method that is proving successful is Consequence Scanning.
With diverse voices represented, a Consequence Scanning workshop is a concrete way to think through a product’s positive and negative impacts early on. Look no further if you’re a designer who wants to move the discussion — and the default construals — from “could we do it?” to “should we do it?” As Katz puts it, “It’s another tool in the tool kit of asking ‘Are we sure?’ and being accountable to that answer.” (He offers guidance on how to run a Consequence Scanning workshop here.)
To attempt to shift construals in this way, you need to lay the groundwork for an open-minded organization with employees willing to think critically about their work and potential impact. But how can organizations make this culture shift? That revolves around the third design principle: Motivation.
Motivation
Examples: Historical reviews, self-assessments
There are different ways to cue someone to act with the ethical use of technology in mind. One of the most powerful is speaking to their intrinsic motivations — the satisfying internal rewards that can drive behavior. “When you’re thinking about how to motivate your staff in the AI space, you really have to go into that thinking with your AI Ethics or Responsible AI hat on,” reinforces Firth-Butterfield.
Many people crave a sense of acceptance, recognition, impact and achievement. Organizations can speak to these drivers by prioritizing empathy and compassion. It’s all connected: When designers have compassion for the people they work with and the people who use their products, they feel more accountable to them. When they feel more accountable, they are more likely to engage in constructive debate and make things that protect people from harm and create positive impact. This has wide-ranging effects. And it all starts with feeling for the people we collaborate with and serve — and the communities we all live in. We’ve seen that when relationships thrive, business thrives. To learn more about the commitment Salesforce has made to designing for trusted relationships and the practice of Relationship Design, start with the Trailhead module.
“Organizations can encourage ethical action through the cultivation of empathetic relationships between different stakeholder groups, both within and outside of the firm,” -Beena Ammanath, Kay Firth-Butterfield and Don Heider
Bringing social values to the forefront allows corporations to build their capacity for “ethical organizational reflexivity.” This term was coined by the Markkula Center for Applied Ethics to describe the tendency to engage in ethical behavior and deliberations more routinely. Others, like Yoav Schlesinger, Salesforce’s Principal of Ethical AI Practice, have called it “moral muscle memory.” It’s also been named “ethical spidey sense” by Kathy Baxter, our Principal Architect of Ethical AI Practice, who wrote the primers on ”How to Build Ethics into AI” (part one and part two). Whatever you call it, the report’s three design principles can help make it a reality in any organization.
Just remember: There’s no easy answer or step-by-step way to breeze through hard ethical choices.
Even when an organization institutes the principles above, that doesn’t mean ethics has simply been achieved. Katz reminds us of this. “Not all of the subjects that we discuss during a Consequence Scanning workshop, for example, are going to be — in fact, most are not easily — bucketed into do this and don’t do this. They’re all on a spectrum, usually, and they’re usually very ambiguous.”
The aim is to help designers navigate ambiguity.
Design leaders with power inside the company play an important role here. When they endorse ethical behaviors, their teams are more motivated to promote responsible technology. For example, leaders can model ethical consideration, create space for ethical reflection, and prioritize bringing diverse perspectives to the table, especially in the hiring and research processes. Leaders can incentivize the use of ethical practices and behaviors, and encourage teams to raise red flags when they are concerned about potential design consequences.
Lim transparently adds, “We don’t have all the answers. We wrote this paper as a way to show some methodical approaches to changing organizational behavior around ethical tech. These articles are a way to invite comments or to join the table.”
Has your team tried any of these approaches? Curious about the future of the ethical use of technology? Comment below today.
If you’re feeling compelled to do this work, dive into the full report today: “Ethics by Design: An organizational approach to responsible use of technology.” This is a necessary step for design to take its seat at the table and be a part of shaping a more ethical world.
Thanks to the team who brought this article to life, including Daniel Lim, Kay Firth-Butterfield, Rob Katz, Paula Goldman, Anna Kowalczyk, Christina Zhang, Kathy Baxter, Yoav Schlesinger, Doug White and Madeline Davis.
*Hear more from the co-authors on the webinar: Thursday, February 18 at 10 a.m. EST.
*Skill up and start the Responsible Creation of Artificial Intelligence Trailhead module.
*Read ”How Salesforce Incorporates Ethics into its AI” article featuring @KathyKBaxter
*Read the 2020 “Lead with Purpose” report by the IIT Institute of Design
What Designers Need to Know About WEF’s New “Ethics by Design” Report was originally published in Salesforce Design on Medium, where people are continuing the conversation by highlighting and responding to this story.