Center for AI and Digital Policy

Public Policy Offices

Washington, DC 55,224 followers

"Filter coffee. Not people."

About us

The Center for AI and Digital Policy aims to ensure that artificial intelligence and digital policies promote a better society, more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. As an independent non-profit corporation, the Center for AI and Digital Policy will bring together world leaders, innovators, advocates, and thinkers to promote established frameworks for AI policy – including the OECD AI Principles and the Universal Guidelines for AI – and to explore emerging challenges.

Website
https://caidp.org
Industry
Public Policy Offices
Company size
11-50 employees
Headquarters
Washington, DC
Type
Educational
Founded
2021
Specialties
Public Policy, Artificial Intelligence, Privacy, and AI

Updates

  • 📢 Senate Commerce Committee to Consider Several AI Bills This Week

    The influential Senate Commerce Committee is expected to consider several AI bills at an Executive Session scheduled for Thursday, August 1 at 10:00 am EDT. The AI bills under consideration include:

    - S. 2714, CREATE AI Act of 2023
    - S. 3162, TEST AI Act of 2023
    - S. 3312, Artificial Intelligence Research, Innovation, and Accountability Act of 2023
    - S. 4178, Future of Artificial Intelligence Innovation Act of 2024
    - S. 4394, National Science Foundation Artificial Intelligence Education Act of 2024
    - S. 4487, Small Business Artificial Intelligence Training and Toolkit Act of 2024
    - S. 4569, TAKE IT DOWN Act

    (Detailed information about pending legislation in the US Congress is available at congress.gov. Insert the bill number in the search box.)

    If approved by the Senate Committee, the AI bills could then proceed to the full Senate for consideration. Passage by the House and signature by the President are still required before the bills become law. Three bills concerning AI and elections were approved earlier by the Senate Rules Committee and are expected to be considered by the Senate in the next few weeks.

    At a time of political polarization in the United States, there has been a surprising level of bipartisan support for AI legislation across a wide range of topics, from countering misinformation and deepfakes to standard setting and public investment. The AI bills to be considered this week by the Commerce Committee generally have endorsements from both Democratic and Republican Senators. The Center for AI and Digital Policy has provided several statements to Senate Committees over the past year with recommendations concerning the pending legislation.

    United States Senate U.S. Senate Committee on Commerce, Science, and Transportation Majority Senator Maria Cantwell Ted Cruz #aigovernance Merve Hickok Christabel R. Kyler Zhou Brianna G. Rodriguez https://lnkd.in/e4paRtjv

    Executive Session

    commerce.senate.gov
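
The parenthetical in the post above points readers to the congress.gov search box. For tracking several bills at once, the same lookup can be scripted against the Library of Congress Congress.gov API. The sketch below is illustrative only: it assumes the v3 endpoint at api.congress.gov, an API key obtained from api.data.gov, and response fields ("title", "latestAction") that should be checked against the current API documentation.

```python
# Sketch: check the latest recorded action on each Senate AI bill named above,
# using the (assumed) Congress.gov API v3. Requires an API key from api.data.gov.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder - not a real key
BASE = "https://api.congress.gov/v3"

# 118th Congress Senate bill numbers listed in the post
BILLS = [2714, 3162, 3312, 4178, 4394, 4487, 4569]

for number in BILLS:
    resp = requests.get(
        f"{BASE}/bill/118/s/{number}",
        params={"api_key": API_KEY, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    bill = resp.json().get("bill", {})  # field names assumed, not verified
    title = bill.get("title", "unknown title")
    latest = bill.get("latestAction", {}).get("text", "no recorded action")
    print(f"S. {number}: {title}")
    print(f"  latest action: {latest}")
```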

  • 📢 European Parliament Sets Up Working Group to Oversee Implementation of the AI Act

    According to both Euractiv and MLex, the Internal Market and Consumer Protection (IMCO) Committee and the Civil Liberties, Justice and Home Affairs (LIBE) Committee will set up a Working Group to ensure effective implementation of the AI Act. The European Commission plays a central role in the implementation of the AI Act. The Parliament is seeking to ensure the Commission fulfills its responsibilities.

    MLex - "The initiative follows the footprints of similar working groups that were set up to monitor the enforcement of the Digital Services Act and the Digital Markets Act and maintain pressure on the European Commission."

    Euractiv - "Members of the European Parliament have previously expressed concerns over the lack of transparency around the AI Office’s staffing process, as well as the involvement of civil society in parts of the implementation process."

    Members of the Center for AI and Digital Policy met with members of the IMCO and LIBE committees during the development of the AI Act, provided detailed comments on the legislation, and then published a report for the European AI & Society Fund, "Making the AI Act work: How civil society can ensure Europe’s new regulation serves people & society" (September 2023) - https://lnkd.in/gaZDTUas

    There are several upcoming deadlines for the AI Act:

    6 months after entry into force (Feb 2025)
    ➡ Ban on prohibited AI practices (subliminal techniques, exploitative techniques, web scraping of facial images) 🔥 🔥 🔥
    ➡ AI literacy obligations published

    12 months (August 2025)
    ➡ Obligations for General Purpose AI Systems established
    ➡ Serious incident reporting guidance published
    ➡ Enforcement provisions established

    18 months (Feb 2026)
    ➡ Guidance for High-risk AI Systems published

    European Parliament European Commission #IMCO #LIBE #aigovernance #aiact #euaiact Center for AI and Digital Policy Europe https://lnkd.in/emQ65r54

    Parliament sets up cross-committee working group to monitor AI Act implementation

    https://www.euractiv.com

  • 📢 CAIDP Provides Comments to UNESCO on Neurotech and AI

    In response to a request for public comments and a draft document, the Center for AI and Digital Policy has provided detailed recommendations to UNESCO regarding the Ethics of Neurotechnology. Neurotechnology offers promise for medical care but can also manipulate human decision-making, with long-term consequences for freedom of thought, cognitive liberty, and free will. UNESCO has identified a range of concerns, including cerebral and mental integrity and human dignity, personal identity, and mental privacy and brain data confidentiality. AI techniques are likely to exacerbate the risks associated with the deployment of neurotechnology.

    🔥 CAIDP called for the "protection of the human rights to mental privacy and cognitive liberty" and made several recommendations for the governance of neurotechnology.

    1️⃣ Prohibit the secret and unauthorized collection and use of neural data and cognitive biometrics
    2️⃣ Establish Privacy Enhancing Technologies (PET) that minimize or eliminate the collection of personal data as the cornerstone of data protection practices for neurotechnologies
    3️⃣ Adopt the Universal Guidelines on AI as foundational requirements for the development and deployment of neurotechnologies
    4️⃣ Require transparent and independent audits of neurotechnology companies' data practices by regulators

    CAIDP called for global standards and a treaty for the protection of human rights to mental privacy and cognitive liberty.

    Merve Hickok Marc Rotenberg Nayyara Rahman Tamiko Eto Dominique Greene-Sanders Caroline Friedman Levy UNESCO #neurotechnology

  • 📢 MIT Technology Review Publishes Assessment of Tech Companies' AI Self-Regulatory Commitments

    "One year ago, on July 21, 2023, seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—committed with the White House to a set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.

    "These included promises to do things like improve the testing and transparency around AI systems, and share information on potential harms and risks.

    ➡ "On the first anniversary of the voluntary commitments, MIT Technology Review asked the AI companies that signed the commitments for details on their work so far. Their replies show that the tech sector has made some welcome progress, with big caveats. . . ."

    🔥 "One year on, we see some good practices towards their own products, but [they’re] nowhere near where we need them to be in terms of good governance or protection of rights at large," says Merve Hickok, the president and research director of the Center for AI and Digital Policy, who reviewed the companies’ replies at the request of MIT Technology Review. Many of these companies continue to push unsubstantiated claims about their products, such as saying that they can supersede human intelligence and capabilities, adds Hickok.

    🔥 "One area of improvement for AI companies would be to increase transparency on their governance structures and on the financial relationships between companies," Hickok says. She would also have liked to see companies be more public about data provenance, model training processes, safety incidents, and energy use.

    #aigovernance Melissa Heikkilä MIT Technology Review https://lnkd.in/e55nx9gG

    AI companies promised to self-regulate one year ago. What’s changed?

    technologyreview.com

  • 📢 CAIDP Update 6.29 - AI Policy News (July 22, 2024)

    🌐 Council of Europe Provides Overview of Groundbreaking AI Treaty
    🇬🇧 UK's New Government Signals Tougher Stance on AI Regulation
    🤖 GPAI Report Highlights Importance of Algorithmic Transparency in Public Sector
    🇪🇺 EU Bodies Address AI Regulation and Data Privacy Challenges
    🇺🇸 White House Announces $100M for Public Interest Tech; FCC Targets AI Robocalls
    🇹🇼 Taiwan Proposes AI Law Promoting Innovation and Human Rights
    🇨🇳 UN Adopts China-Sponsored AI Resolution, Backed by US
    🇷🇺 Russia Enacts Law Mandating AI Risk Insurance
    🗣 CAIDP Advocates for Ethical AI Use in California Employment
    🌐 UNESCO Report Addresses Critical AI and Democracy Challenges

    Council of Europe #uk GPAI European Data Protection Board CNIL - Commission Nationale de l'Informatique et des Libertés #dpa The White House #taiwan United Nations UNESCO #russia California Civil Rights Department #aigovernance #transparency #democracy #accountability Evelina Ayrapetyan Merve Hickok

  • 📢 CAIDP California Speaks at Civil Rights Council about Automated Decisionmaking and Employment

    CAIDP Research Fellow Evelina Ayrapetyan testified at the California Civil Rights Council Public Hearing on July 18, 2024 regarding the Council’s proposed modifications to employment regulations for automated-decision systems. Ayrapetyan explained, "AI is quickly transforming the employment landscape and while it has great potential, without proper guardrails in place, the use of AI systems in employment can amplify existing biases and negatively impact historically marginalized communities." She made several recommendations.

    1️⃣ Require public disclosure of: 1) the fact that a system is in use, 2) methods for opting in or out, 3) an explanation of the system’s logic regarding the candidate or employee, and 4) the results of the independent impact assessment of algorithmic decision systems
    2️⃣ Explicitly require human oversight and individualized assessments in final employment decisions, even when a job applicant or employee opts in to algorithmic decision-making
    3️⃣ Require pre-deployment independent algorithmic system impact assessments as a precondition to deploying AI systems or algorithmic tools in employment
    4️⃣ Expand the definition of “Algorithmic-Decision System” and consider including the term ‘use of an algorithmic employment tool’

    She concluded, "We applaud the Civil Rights Council for ensuring California employers deploy AI systems ethically and responsibly while respecting human rights and the rule of law, and protecting people in vulnerable situations who are most likely to be harmed by algorithmic decision-making."

    Evelina Ayrapetyan and several other members of the Center for AI and Digital Policy Research Group have launched a new California-based affiliate for CAIDP. The affiliate will represent CAIDP in state-level AI policy issues. The California legislature is currently considering several AI bills, including bills on discrimination in housing and health care services, AI and employment, protections for creative artists, privacy and surveillance, and standards for foundational models.

    #aigovernance #california #employment Evelina Ayrapetyan Nidhi Sinha Jaya V. Christabel R. Merve Hickok Marc Rotenberg

  • 📢 GPAI Publishes Report on Algorithmic Transparency in the Public Sector

    The report from the GPAI reviews algorithmic transparency instruments in the public sector and focuses on repositories or registers of public algorithms. The project's objective is to study algorithmic transparency in the public sector with an emphasis on assessing transparency instruments that enable governments to comply with algorithmic transparency principles, standards, and rules. The GPAI report explains that "algorithmic transparency arises within the broader context of public interest regulation. The principle derives from the democratic right to know and access information."

    🔥 GPAI - "Algorithmic transparency is a means for fulfilling fundamental rights enshrined in public interest regulation. Applied to the public sector, for example, information on how state services are provided enables the population to access health and education rights. Moreover, information about how the state makes certain decisions affecting people's lives and liberties is indispensable to protecting the right to due process."

    🔥 GPAI - "transparency in the public sector is one of the pillars of Open Government initiatives that governments worldwide have pledged to promote. . . . algorithmic transparency has become central to the new generations of Open Government initiatives."

    🔥 GPAI - "algorithmic transparency enables citizen oversight over governmental activities and decisions associated with the adoption and implementation of ADM systems. For example, accessing meaningful information may allow civil society organizations to assess whether the use of ADM systems complies with the law."

    The Center for AI and Digital Policy welcomes the GPAI report on Algorithmic Transparency.

    ➡ Algorithmic transparency is one of the key metrics in our annual evaluation of national AI policies and practices in the CAIDP "AI and Democratic Values Index"
    ➡ The GPAI Report responds to the urgent need to move from principles to action to promote algorithmic transparency and accountability
    ➡ The GPAI Report builds on well-established principles of citizen access to information about government decision-making
    ➡ The Center for AI and Digital Policy has previously advised international organizations to promote algorithmic transparency as part of AI governance. In 2021 and 2023, we asked the #G20 nations "to promote fairness, accountability, and transparency for all AI systems, particularly for public services. G20 leaders should adopt new laws to ensure algorithmic transparency and to limit algorithmic bias so that unfair treatment is not embedded in automated systems.”

    CAIDP President Merve Hickok has written extensively about the need to promote accountability of AI systems in the public sector.

    Juan David Gutiérrez Rodríguez Alison Gillwald CEIMIA #aigovernance OECD.AI Daniela Constantin Nayyara Rahman

    algorithmic-transparency-in-the-public-sector.pdf

    gpai.ai

  • 📢 Council of Europe Publishes Overview of AI Treaty

    📘 The Council of Europe has published an important overview of the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law," the first international legally binding instrument for AI.

    ➡ The AI Treaty "aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation."

    ➡ Activities within the lifecycle of AI systems must comply with the following fundamental principles:
    ► Human dignity and individual autonomy
    ► Equality and non-discrimination
    ► Respect for privacy and personal data protection
    ► Transparency and oversight
    ► Accountability and responsibility
    ► Reliability
    ► Safe innovation

    🔥 There is the possibility for the authorities to ban or establish moratoria on certain applications of AI systems (“red lines”).

    🔥 The Framework Convention covers the use of AI systems by public authorities – including private actors acting on their behalf – and by private actors.

    ➡ The Convention offers Parties two modalities to comply with its principles and obligations when regulating the private sector: Parties may opt to be directly obliged by the relevant Convention provisions or, as an alternative, take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law.

    ➡ Parties to the Framework Convention are not required to apply the provisions of the treaty to activities related to the protection of their national security interests but must ensure that such activities respect international law and democratic institutions and processes. The Framework Convention does not apply to national defence matters nor to research and development activities, except when the testing of AI systems may have the potential to interfere with human rights, democracy, or the rule of law.

    The Center for AI and Digital Policy worked closely with civil society organizations, academic experts, member delegations, and the secretariat on the development of the AI Treaty. We look forward to its adoption and implementation. The treaty will be open for signature on September 5, 2024.

    #aigovernance #aitreaty Merve Hickok Karine Caunes Daniela Constantin Marc Rotenberg Christabel R. Francesca Fanucci Center for AI and Digital Policy Europe European Center for Not-for-Profit Law Stichting

  • 🌪 White House Announces $100 Million in Funding to Advance Public Interest Technology (July 16, 2024)

    From the White House Office of Science and Technology Policy: "The Biden-Harris Administration is committed to advancing technology that protects our safety, security, democratic values, and human rights. In his Executive Order on the Safe, Trustworthy, and Responsible Development and Use of AI, President Biden instructed the United States government to pull every lever to attract and hire highly skilled talent in AI and critical and emerging technologies."

    ➡ The National Science Foundation (NSF) will provide at least $48 million to advance research, implementation, and learning opportunities.
    ➡ The Department of Defense, with support from the Office of Management and Budget and OSTP, will launch the Trusted Advisors Pilot this year.
    ➡ The Ford Foundation is dedicating more than $20 million to enhance the field of public interest technology.
    ➡ The Siegel Family Endowment will invest $20 million in the public interest technology ecosystem over the next three years.
    ➡ The Mitchell Kapor Foundation is committing nearly $1.5 million to promote responsible, equitable, and ethical AI systems for the public interest.
    ➡ The University of Michigan Science, Technology, and Public Policy Program’s Community Partnerships Initiative will develop a collaborative AI innovation process with social justice organizations and local governments in southeast Michigan.

    https://lnkd.in/gH6Fgckz

    Fact Sheet: Biden-Harris Administration Announces Commitments from Across Technology Ecosystem including Nearly $100 Million to Advance Public Interest Technology | OSTP | The White House

    whitehouse.gov

  • 📢 CAIDP Update 6.28 - AI Policy News (July 15, 2024)

    ✝ 🕌 ☸ 🕉 ✡ World Religions Unite in Hiroshima to Sign AI Ethics Pledge
    🇺🇸 FTC Outlines AI Oversight as CAIDP Urges Action on OpenAI Investigation
    🇪🇺 EU AI Act Published: Compliance Countdown Begins for Companies
    NATO Unveils New AI Strategy in Tech Transformation Push
    🇧🇷 Brazil Maintains Ban on Meta's AI Data Collection
    🇩🇪 German Bundesrat Proposes Law to Criminalize Deepfakes
    🗣 🏛 CAIDP Report Urges Congress to Act on FTC's Stalled OpenAI Investigation
    🗣 🇬🇧 CAIDP President Addresses AI Challenges at Data for Policy Conference
    🗣 🏛 CAIDP Urges Senate Action on AI Privacy Protections

    #aigovernance #euaia #brazil #germany Merve Hickok Federal Trade Commission NATO
