Type of activity: Scheduled session
Main topic: T147708: Facilitate Wikidev'17 main topic "Artificial Intelligence to build and navigate content"
Timing: Tuesday, January 10th at 1:10PM PST
Location: Room 2
Stream link: https://www.youtube.com/watch?v=w__x1p66y5U
Back channel: #wikimedia-ai
Etherpad: https://etherpad.wikimedia.org/p/devsummit17-AI_ethics
How do we make sure that our filtering and ranking algorithms do not perpetuate biases or cause other social problems? Which aspects of our AIs should we make transparent, and what are good strategies for doing so? In this session, we'll develop a call to action and gather resources for a best practices document.
Problem statement
There's no best practices document for avoiding the problems an algorithm can cause. What are the common problems we might cause? What do users expect of us?
Expected outcome
A document containing prescriptions for transparency around new AI projects. The beginning of a set of guidelines and best practices.
Summary of discussion
There's clear interest, but we'll probably want a brief summary of the critical algorithms literature as part of the session. We could likely compress a useful overview into less than 10 minutes so that it doesn't dominate the discussion.
Concerns were raised in regards to ORES (@Halfak) and ElasticSearch (@EBernhardson). @Tbayer has been reading some of the recent literature. Generally, interest has been signaled (via tokens and subscriptions) by @Aklapper, @jmatazzoni, @Lydia_Pintscher, @Capt_Swing, @Arlolra, @gpaumier, and @Siznax.
(Updated Nov. 21st, 2016)