A huge congratulations to Stanford HAI faculty affiliate Jay McClelland for being honored with the Golden Goose Award! Alongside David Rumelhart and Geoff Hinton, his groundbreaking work on neural network models of cognition paved the way for today's AI innovations. Check out their inspiring journey and its impact on modern technology: https://lnkd.in/gr_bWYdZ
Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Advancing AI research, education, policy, and practice to improve humanity.
About us
At Stanford HAI, our vision for the future is led by our commitment to studying, guiding, and developing human-centered AI technologies and applications. We believe AI should be collaborative and augmentative, enhancing human productivity and quality of life. Stanford HAI leverages the university's strength across all disciplines, including business, economics, genomics, law, literature, medicine, neuroscience, philosophy, and more. These complement Stanford's tradition of leadership in AI, computer science, engineering, and robotics. Our goal is for Stanford HAI to become an interdisciplinary, global hub for AI thinkers, learners, researchers, developers, builders, and users from academia, government, and industry, as well as leaders and policymakers who want to understand and leverage AI's impact and potential.
- Website
- http://hai.stanford.edu
- Industry
- Higher Education
- Company size
- 11-50 employees
- Headquarters
- Stanford, California
- Type
- Nonprofit
- Founded
- 2018
Locations
- Primary: Stanford, California 94305, US
Updates
-
How should policymakers think about regulating emerging technologies? Stanford HAI’s new Policy Fellow Riana Pfefferkorn says the first question should always be, “Do you really need a new law or can you use the law as it exists today?” https://lnkd.in/eVrtDsFv
Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties
hai.stanford.edu
-
Together with Stanford Online, we’ve invited the most renowned faculty and instructors at Stanford to give you a truly unique learning experience exploring generative AI and foundation models. Watch this preview featuring Stanford HAI co-director James Landay: https://bit.ly/3XMMrqx
Short Program Overview - Generative AI: Technology, Business & Society
https://www.youtube.com/
-
“AI has enormous potential to help improve both the quality and efficiency of psychotherapy training. Forming an interdisciplinary team enabled us to progress in this exciting new area of investigation,” says Stanford psychologist Bruce Arnow. https://lnkd.in/gm_ifQw5
Using AI To Train Peer Counselors
hai.stanford.edu
-
Covert racism against speakers of African American English persists in many large language models. Stanford researchers outline their findings in a new study published in Nature: https://lnkd.in/gmDVXdCy
Covert Racism in AI: How Language Models Are Reinforcing Outdated Stereotypes
hai.stanford.edu
-
HAI is honored to continue our partnership with the California Government, the UC Berkeley College of Computing, Data Science, and Society, and Mariano-Florentino (Tino) Cuéllar to ensure California is not only a place where AI innovation and safety flourish, but is also a leader in human-centered AI. We look forward to collaborating on empirically driven research to help govern AI. https://lnkd.in/eSq32k8X
Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians
https://www.gov.ca.gov
-
Using a method called the matched guise technique, a team of researchers explored how large language models respond to African American English. Their findings illustrate how these models continue to perpetuate harmful racial biases. https://lnkd.in/gSWYfAnF
-
It’s not inevitable that AI will lead to more freedom and participation in democracy. It is our responsibility, and our opportunity, to approach these urgent questions with the conviction that we have the agency to shape the outcome. The Digitalist Papers present a range of proposed solutions, with the understanding that a technology as fast-changing as AI requires nimbleness, an open mind, and the continual engagement of a diverse group of leaders to debate and guide the technology. Thank you to Greg Beato, Laura Claster Bisesto, John H. Cochrane, Sarah Friar, Mona Hamdy, Saffron Huang, Reid Hoffman, Lawrence Lessig, James Manyika, Johnnie Moore, Jennifer Pahlka, Alex 'Sandy' Pentland, Nathaniel Persily, Eric Schmidt, Divya Siddarth, Audrey Tang, Lily Tsai, Eugene Volokh, and E. Glen Weyl for providing their invaluable insights into shaping our digital future. https://lnkd.in/grnckYqZ
-
How can we design AI in smart homes and offices without sacrificing user privacy? How can AI-driven assistants be designed to simulate empathy? These are some of the research questions scholars at Hasso Plattner Institut and Stanford HAI are exploring as part of our collaboration: https://lnkd.in/gdEWeqMm
-
Just days ago, the Stanford Digital Economy Lab launched The Digitalist Papers, a series of essays exploring how AI and digital technologies can reshape governance and democracy. The essays cover topics ranging from how to rethink democratic engagement, to why the government risks losing an active electorate if it doesn’t embrace the potential of AI, to how we should think about regulating AI. The bedrock of every essay, whether by a technology executive, law professor, or economist, was that democracy and AI can and should work in unison; that our democratic institutions are strong and can be ushered into the AI era. https://lnkd.in/gv-aBHhP