Center for Human-Compatible Artificial Intelligence

From Wikipedia, the free encyclopedia
Formation: 2016
Headquarters: Berkeley, California
Leader: Stuart J. Russell
Parent organization: University of California, Berkeley
Website: humancompatible.ai

The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell.[1][2] Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.

CHAI's faculty membership includes Russell, Pieter Abbeel and Anca Dragan from Berkeley, Bart Selman and Joseph Halpern from Cornell,[3] Michael Wellman and Satinder Singh Baveja from the University of Michigan, and Tom Griffiths and Tania Lombrozo from Princeton.[4] In 2016, the Open Philanthropy Project (OpenPhil) recommended that Good Ventures provide CHAI support of $5,555,550 over five years.[5] CHAI has since received additional grants from OpenPhil and Good Ventures of over $12,000,000, including funding for collaborations with the World Economic Forum and the Global AI Council.[6][7][8]

Research

CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior.[9] It has also worked on modeling human-machine interaction in scenarios where intelligent machines have an "off-switch" that they are capable of overriding.[10]
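
The following is a minimal sketch, in Python, of the off-switch scenario described above; it is an illustration of the general idea, not code from CHAI, and the belief distribution over utilities is an assumed example. A machine that is uncertain about the utility of its proposed action compares acting immediately, shutting itself down, and deferring to a human who may press the off-switch.

```python
# Illustrative belief over the (unknown) utility of the machine's proposed action:
# pairs of (possible utility, probability). These numbers are assumptions.
belief = [(-2.0, 0.3), (0.5, 0.4), (3.0, 0.3)]

act_now = sum(p * u for u, p in belief)   # expected utility of acting without asking
switch_off = 0.0                          # utility of simply shutting down
# Deferring: a rational human allows the action only when its utility is non-negative,
# and otherwise presses the off-switch (yielding utility 0).
defer = sum(p * max(u, 0.0) for u, p in belief)

print(f"act now:    {act_now:+.2f}")     # +0.50
print(f"switch off: {switch_off:+.2f}")  # +0.00
print(f"defer:      {defer:+.2f}")       # +1.10
# Because E[max(u, 0)] >= max(E[u], 0), the uncertain machine does at least as well
# by leaving the off-switch in the human's hands as by overriding it.
```

In this toy setting, deferring weakly dominates both alternatives precisely because the machine is uncertain about human values, which is the intuition behind designing machines that do not override their off-switch.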

References

  1. ^ Norris, Jeffrey (Aug 29, 2016). "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Retrieved Dec 27, 2019.
  2. ^ Solon, Olivia (Aug 30, 2016). "The rise of robots: forget evil AI – the real risk is far more insidious". The Guardian. Retrieved Dec 27, 2019.
  3. ^ Cornell University. "Human-Compatible AI". Retrieved Dec 27, 2019.
  4. ^ Center for Human-Compatible Artificial Intelligence. "People". Retrieved Dec 27, 2019.
  5. ^ Open Philanthropy Project (Aug 2016). "UC Berkeley — Center for Human-Compatible AI (2016)". Retrieved Dec 27, 2019.
  6. ^ Open Philanthropy Project (Nov 2019). "UC Berkeley — Center for Human-Compatible AI (2019)". Retrieved Dec 27, 2019.
  7. ^ "UC Berkeley — Center for Human-Compatible Artificial Intelligence (2021)". openphilanthropy.org.
  8. ^ "World Economic Forum — Global AI Council Workshop". Open Philanthropy. April 2020. Archived from the original on 2023-09-01. Retrieved 2023-09-01.
  9. ^ Conn, Ariel (Aug 31, 2016). "New Center for Human-Compatible AI". Future of Life Institute. Retrieved Dec 27, 2019.
  10. ^ Bridge, Mark (June 10, 2017). "Making robots less confident could prevent them taking over". The Times.