
Co-design Workshops: A Moral Algorithm?

On November 29th and December 6th, we ran co-design workshops at CIID and ThingsCon, where we explored a new flow and structure for some of the different components we have been building over the last few months.

We gathered 6 people at CIID for the first run: a mix of designers, technologists, a storyteller, a healer and a community manager. At ThingsCon, we had 12 people, again a mix of designers and technologists, though in this case the individuals were already interested in IoT and its implications.

Based on the feedback of our participants at CIID, we iterated slightly on the workshop flow and tools between CIID and ThingsCon. Below, we share the experience that we designed for ThingsCon.

PART 1: Company and Values

Semi-fictitious connected toy product

We begin the workshop by inviting the participants to join a new connected toy company, "Bear & Co." We introduce the company by showing a video about the current product that explains how it works and demonstrates some use cases.

We then ask participants to reflect upon how they do or do not identify with our company values - a predefined list of what we at Bear & Co. have decided is important. They first draw their engagement with the values by creating a radial visualisation, and then translate this visualisation into the numbers it represents.

Participants first draw their abstract shape of values, then unfold the visualisation and rate exactly how important each value is on a scale from 0 to 1

Rationale: Our rationale for using a fictitious company and product is two-fold. Firstly, these co-design sessions are oriented towards a broader audience than any single company, so we needed a project that the whole group could understand (still at varying levels, given different technical backgrounds) and engage with. Secondly, we are interested in the potential of a simulation or rehearsal of ethics, where individuals and teams are not immediately working on their own problems; instead, they become familiar with how to talk about and navigate ethical issues without the immediate consequence of impacting their own company's product development. The overarching idea is that this rehearsal is the first in a series of simulations: a later simulation might involve a specific company's actual product and an actual dilemma that they are currently facing, or will face in the near future given their product development plan.

Notes:

  1. Some groups chose to show their individual lines rather than only a single group line. This confirms a potential idea for augmenting the Values Visualisation process: individuals would first note their own lines and then negotiate them with their teammates' lines, coming up with a final visualisation that potentially shows their differences and therefore possible future tensions.

  2. We ask participants to take on a role when they sit down at the company table. However, the role immersion is extremely minimal: a piece of paper with a role such as "product designer" on it. Participants suggested that we might include more information so that they would know what their role might care about in terms of the values and scenario evaluation.

PART 2: The Problem

We present "The Problem" as a question that a remote co-worker is trying to evaluate, incorporated as a Skype call (a pre-recorded video) in the middle of the workshop flow. The question has at least two distinct choices, and it is not immediately obvious which would be better. We also provide a few discussion points for the group to consider the choices. Participants are invited to create their own alternatives, but given the time constraint this is not the main focus of the workshop activity.

“The Problem” sheet that participants receive after listening to the call

Rationale: As our ethnographic team at LSE points out, and as Katie Shilton writes, pivot points could be moments to integrate an ethical reflection tool. Pivot points could be about technical constraints or getting platform approval, as two examples. "Confronting technical constraints such as not being able to collect data continuously from phone cameras or microphones also spurred values conversations about why these constraints might exist (Shilton and Greene, 2017)." Therefore, we chose to force a situation in which certain technical constraints and new features were being introduced to the product.

PART 3: Moral Imagination

After understanding the question their "remote colleague" has posed, we ask the participants to evaluate their options by engaging their moral imagination: that is, how could things go well, weird, or bad if they took either one of the options?

We ask them to consider "destabilising factors", such as under-represented communities or users, in this scenario building.

Participants’ scenario inputs to the option of implementing A.I. in the bear

In the above case, the participants considered the option to implement the A.I. (Option A) and the good, weird and bad scenarios that could come from it. They used our provided destabilising factors but swapped some. In “good x under-represented”, they wrote: “People in the spectrum can benefit from extra social/emotional information they might miss, therefore they can participate more.” In “weird x climate”, they wrote: “False sense of happiness because the A.I. is based on fake/false emotions. This creates the impression that there is something wrong with you. This makes you less busy with living sustainably.” The connection to the destabilising factor is tenuous in this scenario, though the emotional consequences are clear. Lastly, in relation to “bad x context”, they wrote: “In China, the bears are used for social credit score. This creates unease and therefore social unrest and results in violence. The Tiananmen Big Bear Burning.” These scenarios are in the realm of the possible, and they were written by a team that already works on IoT design and development on a daily basis.

Based on the situations they envisioned could occur if they took either option, they then rated how well each option would do in meeting their core values. If an option was misaligned with a value, they rated it at 0, whereas if an option would clearly support a value, they rated it at 100. This step again crosses between rich storytelling and numerical evaluation.

Option A is to implement the A.I.; Option B is not to implement the A.I.

Notes:

The scenario-making step requires more structure, though it works as of now.

  1. As they consider an option, participants want to map out the positive and negative possibilities more fluidly before diving into a scenario. This is demonstrated by the notes participants took for themselves on The Problem sheet, as well as by the discussions that began, and often needed to continue for a long time, before a group was ready to write a scenario.

  2. Writing a scenario comes more easily to some than to others. Furthermore, participants would benefit from direction about where in the Futures Cone (Voros, 2007) of likelihood they should aim. The cone includes possible, probable and preferable futures; these are the three areas we would like them to explore, and perhaps this could be integrated into the experience both spatially and graphically.

  3. How might we weave the consideration of values more directly in the scenarios as opposed to integrating them as bookends to the scenario writing experience? For example, they could work on scenarios that cross the following angles: "weird", "sustainability" and "under-represented populations."

  4. Other approaches to scenario writing include the facilitators sharing certain predictions and trends to inspire the participants' understanding of the possible outcomes or futures that could occur from a given dilemma. We could integrate this step more clearly; as of now, those relevant trends are to some extent encompassed in the "Destabilising Factors." However, we only hand out simple cards with a terse description rather than going into depth about each one.

PART 4: Algorithmic Evaluation

Having completed the values and moral imagination exercises, participants input the numbers they have created throughout these exercises into the "Moral Algorithm" spreadsheet. In this spreadsheet, they create weighted ratings that take the importance of each value into account; once they sum the weighted ratings for each option, one option will have gathered more points than the other.

The points numerically show that one option was evaluated to be more aligned with the group's values and the relative importance of those values.

Option B (not to implement the A.I.) has the most points in this team’s case
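
To make the arithmetic concrete, here is a minimal sketch in Python of the calculation the spreadsheet performs, as we understand it: each option's rating for a value (0 to 100) is multiplied by that value's importance weight (0 to 1), and the weighted ratings are summed per option. The value names and all numbers below are illustrative placeholders, not any team's actual inputs.

```python
# Minimal sketch of the "Moral Algorithm" arithmetic:
#   score(option) = sum over values of weight(value) * rating(option, value)
# All names and numbers are made-up placeholders for illustration.

# Importance of each value to the team, from the Values Card (0-1).
weights = {"privacy": 0.9, "security": 0.9, "social impact": 0.9, "fun": 0.5}

# How well each option supports each value (0 = misaligned, 100 = fully supports).
ratings = {
    "A (implement the A.I.)": {"privacy": 20, "security": 30, "social impact": 50, "fun": 90},
    "B (do not implement the A.I.)": {"privacy": 90, "security": 80, "social impact": 60, "fun": 40},
}

def score(option_ratings: dict, weights: dict) -> float:
    """Sum of weighted ratings: each rating scaled by its value's importance."""
    return sum(weights[value] * rating for value, rating in option_ratings.items())

for option, option_ratings in ratings.items():
    print(f"Option {option}: {score(option_ratings, weights):.1f} points")

# With these placeholder numbers, Option B gathers more points, i.e. it is
# more aligned with the team's values weighted by their importance.
```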

Rationale: While a checkmark solution to ethics is strongly against VIRTEU's ethos, this series of steps towards a mathematical answer is less a checkmark than a complex algorithm that documents participants' internal evaluations. As Steven Johnson writes in his book Farsighted: "A moral algorithm is a series of instructions for manipulating data that generates a result, in this case a numerical rating for the various options being considered. I suspect many of us will find this kind of calculation to be too reductive, taking a complex, emotional decision and compressing it down to a mathematical formula. But, of course, the whole process is dependent on the many steps that have preceded it."

PART 5: Newspaper

We created a template of a newspaper article for participants to summarise their experience and pull out some crucial elements, such as which values were the most supported or tested by their decision-making.

The team wrote that they became particularly concerned when considering possible social outcomes of implementing A.I. in the bear. They also wrote that the experience was hard work and insightful. Their decision not to implement A.I. “came from their values of privacy and security.” Indeed, if we look back at their Values Card, we see that they noted 0.9 for privacy, security and social impact in terms of the importance of those values to their team. Furthermore, the Moral Algorithm worksheet technically gave them the answer that not implementing the A.I. would be more in line with their values.

PART 6: Feedback

After the workshop, we invited the participants to brainstorm alternatives for the different steps and tools they had experienced. They left feedback for each major area, and some volunteered to continue working on the project with us.

The main points of constructive critical feedback follow; we will take all of them into account in further iterations of the experience.

INTRODUCTION

Give an overview of the exercises at the beginning, a personal introduction with people's backgrounds, and examples of what an A.I. could or could not do in "The Bear Case."

Give more explanation about the roles and their corresponding views. More movement: maybe try changing roles.

VALUES

All values seemed equally important. Maybe have more diverse values such as technical, financial, social, environmental impact

To make use of the values, it would be good if there were some scarcity in the points you can assign. Now there is no reason not to draw a full circle, but in real life that is not realistic.

Describe an example instead of terminology: This can support the team so we can talk about the same thing. Words are open to interpretation, leading to miscommunication.

Value Bingo: Make it a method on its own in order to have team members synchronise their values and discuss those where there is less overlap.

One group marked their shape with each individual’s own perspective

MORAL IMAGINATION

The statement "if everyone in the world" contradicts the exercise because we are asked to think of specific scenarios and contexts that do not apply to everyone in the world

It was not easy to think about under-represented groups. You might come up with a set of cards.

Rating Card: One group divided the rating into a rating for users and a rating for the company.

MORAL ALGORITHM

Make a simple app to fill in certain parts instead of relying on paper alone

In the beginning, it was difficult for me to understand and to put values on moral content but at the end everything became clear with the algorithm method. And I saw how it could help me / us to see which could be a good decision or not for the company.

One group’s notes on the algorithm sheet: Option A scores higher, but this has to do with the fact that, because of the A.I., we weighted the values higher than without it, since there was more risk. So we didn't weight identically for each option.

OVERALL: How might we package this experience in different ways?

Could we have another session online?

The workshop should be required before any AI project at the EU level can start

Think before building: this would be useful at the ideation and business strategy phase, for example when deciding whether A.I. would be a good idea. Explorations could then be done in each area of expertise before anything is built, especially on legal/privacy/security matters in the context of user needs.

Help companies get their priorities straight: I can imagine this as a consultancy service. What values are important in the company? But also some learning about the different values; for example, compare them to Maslow's pyramid: without basic safety (e.g. privacy), other values are less relevant. And what should be the role of the company? We are quick to assert that governments should not provide social media, but is it the task of a company to adjust people's behaviour?

Useful in IoT development: when deciding what to develop, and when considering whether it is "useful" or "beneficial" for a user.

ITERATION PRE-THINGSCON

As mentioned above, we had a first run of the flow and framing at CIID with a group that mixed designers, technologists, journalists and project managers. Based on feedback and brainstorms after the CIID co-design session, we decided to make several changes.

1. Instead of having participants choose a few values out of a list, we had them work with the full list of Bear & Co.'s values: when you work for a company, you cannot immediately pick and choose; rather, you have to negotiate your stance.

2. We tested a new visual method for reflecting upon and weighing one's values.

3. We decided to incorporate the numerical ratings and weightings throughout the more creative steps of the process, both for ease of navigation and for a tighter reflection of a two-sided process (where one side is more creative, whether through visuals or stories, and the other is purely quantitative translation).

ITERATION POST-THINGSCON

Several aspects of the “flow” would benefit from more structure, attention and time:

Values:

  • Coming up with communal re-definitions of the values

  • Openness to adding more values

  • Limiting how much each value can be supported

Moral Imagination & Storytelling:

  • Try to integrate more of a “futurescaping” approach

  • “Destabilising factors” / trends / predictions

  • Risk / likelihood of scenarios: staying on the edge of imagination and reality

  • A clearer tie to the values

  • Consider the potential of other tools for storytelling: collages, prototyping

Roleplay:

  • A fuller description of what the roles mean and how to play them

  • Possibly shifting roles

Newspaper:

  • A solid technique to summarise, find threads

  • Could be improved to indicate next steps?

OVERALL ITERATION

This workshop is a simulation: the company, roles and problem are all semi-fictitious. It is a moment for people to practise reflecting and working on problems that are not immediately in their sights, but that they may have to consider at some point in the near future. The kind of mental gymnastics that we put our participants through is designed to be used again and again, until it becomes a pair of glasses, a coat, a pair of boots they put on when they face similar problems in their own companies.

The simulation could be brought closer and closer to a given IoT company’s current decision-making. Each part of the exercise could become more modular, yet still fit together as a full day. Values orientation could occur at designated moments throughout a product journey, whereas the decision-making “Moral Algorithm” could be used for important decisions (but how to identify those… when often “tiny” decisions have big impact). The reflective newspaper article could be a way to understand the role the product could play in a bigger context. Each part could also help a company negotiate with potential clients or investors if they sense misalignment.

To be continued…

Bear & Co