The promise of artificial intelligence is to free humans from tedious jobs, but it is also putting people out of work, reinforcing existing biases or introducing new ones into decision-making, and creating an invisible, poorly paid ghost workforce that we take for granted. I pledge to question the working conditions in which our product is made and maintained, and to pay attention to unintended side effects that affect people at large.
I will think about:
- Have the algorithms we use been trained on data sets that represent people of different abilities, genders, races, cultures, and social classes?
- Who chose the categories and labels we use in our training sets and algorithms? What existing prejudices and power dynamics do these labels reinforce?
- How might jobs be changed, displaced, or created because of our product? Does this help people?
- How is our company treating employees and outside contractors? What ghost work (such as content moderation) does our product create? Is our company treating these workers fairly?
Suggested actions:
- Use machine learning auditing tools to look for bias in our models and training datasets.
- Question the assumptions behind categories and labels used across our products.
- When using outside AI products, check how the models were trained and ask for greater transparency.
- Advocate for better employment conditions inside the company, including full employment status for people who work on the product full-time. Snacks shouldn’t replace healthcare, paid parental leave, and other basic human needs.
- Advocate for better working conditions for people outside the company, especially the invisible ghost workers.
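The bias audit suggested above can start very simply, even before adopting dedicated tools such as Fairlearn or AI Fairness 360. A minimal sketch, using only the standard library, of one common audit metric (demographic parity: do different groups receive positive decisions at similar rates?). The group labels and decisions below are hypothetical stand-ins for real model outputs and protected attributes:

```python
from collections import defaultdict

def selection_rates(groups, decisions):
    """Positive-decision rate per group (e.g. share of loan approvals)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(groups, decisions):
    """Largest difference in selection rate between any two groups.
    A gap near 0 suggests parity; a large gap flags possible bias."""
    rates = selection_rates(groups, decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: group membership and model decisions (1 = approved).
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
decisions = [1, 1, 1, 0, 1, 0, 0, 0]

print(selection_rates(groups, decisions))      # {'a': 0.75, 'b': 0.25}
print(demographic_parity_gap(groups, decisions))  # 0.5
```

A gap of 0.5 between groups would be a strong signal to investigate the training data and labels behind the model; real audits would also check metrics such as equalized odds, since parity alone can mask other disparities.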
Explore additional resources: