The last decade has seen rapid progress in the field of machine learning and neural networks. Using these techniques, computers now routinely recognise images, parse and respond to human speech, answer questions, and make decisions. Welcome to the early robot future. We have increasingly sophisticated "narrow" artificial intelligences, but only the first beginnings of systems that think in open-ended, general ways as we do.
There is a wide range of views about how urgent or profound the policy questions raised by general, "human-level" artificial intelligence may be. But regardless of whether you think general-purpose AI is imminent or still in the distant future, the state of the art in neural networks and machine learning algorithms already raises questions that need to be addressed in the short term. For instance:
- What rules, if any, should constrain the use of machine learning methods when coupled to the large-scale surveillance technologies operated by intelligence agencies? What about the large datasets collected by private tech companies?
- When algorithms, including AI and machine learning systems, make decisions that affect human lives, from the mundane (e.g. price discrimination) to the profound (e.g. sentencing recommendations), what standards of transparency, openness and accountability should apply to those decisions? If the decisions are "wrong", who is legally and ethically responsible?
- How do we prevent machine learning systems from producing racially biased results, or from engaging in other problematic forms of "profiling"?
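The bias problem above is easy to see in miniature. Here is a toy, entirely synthetic sketch (the zip codes, approval rates, and the naive "model" are all invented for illustration) of how a system trained on biased historical decisions simply reproduces that bias, with a proxy variable standing in for a protected attribute:

```python
# Toy illustration: a model trained on biased historical data learns the bias.
# All data below is synthetic; zip code acts as a proxy for a protected attribute.
from collections import defaultdict

# Synthetic "historical" loan decisions: (zip_code, approved).
history = ([("10001", True)] * 80 + [("10001", False)] * 20
           + [("10456", True)] * 30 + [("10456", False)] * 70)

# "Train" the simplest possible model: the historical approval rate per zip code.
counts = defaultdict(lambda: [0, 0])  # zip_code -> [approved_count, total_count]
for zip_code, approved in history:
    counts[zip_code][0] += approved
    counts[zip_code][1] += 1

def predict_approval(zip_code):
    """Approve if the historical approval rate for this zip code is >= 50%."""
    approved, total = counts[zip_code]
    return approved / total >= 0.5

print(predict_approval("10001"))  # True  -- historically favoured neighbourhood
print(predict_approval("10456"))  # False -- historically disfavoured neighbourhood
```

Nothing in this model ever mentions a protected attribute, yet its decisions encode the historical disparity, which is why auditing training data and outcomes matters as much as auditing the algorithm itself.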
EFF is tracking these issues, and will intervene to ensure there are protections against the privacy, safety, and due process problems that could be caused by poorly designed or deployed machine learning systems, while protecting the rights of innovators to build, experiment with, and deploy awesome new forms of AI.