Center for AI Policy

Government Relations Services

Washington, DC 5,504 followers

Developing and promoting policy to mitigate catastrophic risks from AI

About us

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.

Website
https://aipolicy.us/
Industry
Government Relations Services
Company size
2-10 employees
Headquarters
Washington, DC
Type
Nonprofit
Founded
2023

Updates

    On Tuesday, September 10th, 2024, the Center for AI Policy held a briefing for House and Senate staff on Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies. The Center's Executive Director, Jason Green-Lowe, moderated a discussion among a panel of esteemed experts:
    • Michael Brickman, Education Policy Director, The Cicero Institute
    • Bethany Abbate, AI Policy Manager, Software and Information Industry Association (SIIA)
    • Punya Mishra, Professor, Mary Lou Fulton Teachers College at Arizona State University (ASU)
    • Pati R., Senior Director of Edtech and Emerging Technologies, Digital Promise
    If you missed the event, you can watch a video recording here: https://lnkd.in/efWRWmPf

    Hurricane Helene Floods Spruce Pine Mines, Testing Resilience of AI Chip Supply Chains

    Hurricane Helene has caused immense harm and economic damage to the United States since making landfall in Florida last week. One particular impact has been in Spruce Pine, North Carolina, which received 24 inches of rainfall in a three-day period. This caused a local river to flood, inflicting heavy damage on the town.

    The hurricane’s tragic devastation has implications for the semiconductor industry, as Spruce Pine is home to two high-purity quartz mines owned by Sibelco Group and The Quartz Corp. According to Bloomberg, these sites “account for more than 80% of the world’s supply of commercial high-purity quartz.” Both companies halted operations on September 26th.

    Although quartz is widely available as a common form of sand, only extremely pure varieties are suitable for several critical processes in computer chip manufacturing. For example, Sibelco’s top-of-the-line “Iota 8” quartz is 99.9992% pure and can sell for $10,000 per ton, thousands of times more expensive than regular construction sand. This material is essential for building quartz crucibles used in the Czochralski method, a key technique for producing semiconductor wafers.

    The pivotal Spruce Pine mines are currently closed, and it’s unclear when they will resume operations. Sibelco has reportedly “sent ‘force majeure’ notices to customers—freeing it from liability if it can’t fulfill orders.” Depending on the extent of the flooding damage, a full recovery could take weeks or months.

    Wafer companies have inventory stockpiles that will last for at least a couple of months. In a pessimistic recovery scenario, those companies would turn to alternative sources of high-purity quartz, such as synthetic techniques or lower-quality mines. New suppliers would then need to ramp up production rapidly, since Spruce Pine is currently responsible for 80% of global supply.

    Time will tell whether the Spruce Pine flooding causes significant challenges for the chip industry, which experienced global shortages during the COVID-19 pandemic. At the very least, it’s possible that prices will increase. “The folks I’ve spoken to in the industry in recent days seem relatively sanguine,” says Ed Conway, author of Material World. “But they’re certainly spending a lot of time checking their stockpiles and talking to suppliers. It’ll be a nervy few weeks.”

    The Spruce Pine flooding serves as a stark reminder of the intricate global supply chains that underpin modern AI research. As AI companies reach for the stars, their feet remain planted firmly on Earth.

    #AI #Semiconductors #Quartz #Mining #Helene

    Pictured: Hunterbrook Media found footage revealing a flooded entrance at one of the mining facilities.

    *** Today's AI Policy Daily highlights (October 7, 2024):
    1. Eric Schmidt, Google's ex-CEO, to join PM at UK investment summit
    2. AI-generated deepfake audio causes controversy in Baltimore suburb
    3. Google's share of US search ad market projected to drop below 50% next year
    4. OpenAI faces executive departures amid warnings of danger
    5. Grindr testing AI 'wingman' bot
    For the full newsletter, click here: https://lnkd.in/efX83Zfu
    #ai #artificialintelligence #aipolicy #aiprogramming #airegulation #aisafety

    In AI Policy Weekly No. 43:
    1) Hurricane Helene has flooded quartz mines in North Carolina, testing the resilience of AI chip supply chains.
    2) United States Senate Committee on Foreign Relations Chairman Ben Cardin was targeted by a sophisticated deepfake operation.
    3) The European Commission unveiled the EU AI Pact, a voluntary AI governance initiative with over 100 initial signatories from various sectors.
    Quote of the Week: Ivanka Trump recommends an essay series on artificial general intelligence (AGI).
    Read the full stories: https://lnkd.in/ewPweHzQ
    #AI #AIPolicy #Helene #Deepfake #EuropeanUnion

    AI Policy Weekly #43 (aipolicyus.substack.com)

    *** Today's AI Policy Daily highlights (October 4, 2024):
    1. AI's impact on military planning and decision-making
    2. OpenAI's substantial funding and credit arrangements
    3. Legal challenges to AI companies over copyright and First Amendment issues
    4. Advancements in AI-powered consumer technologies
    5. Growing interest in nuclear power for data centers and tech companies
    For the full newsletter, click here: https://lnkd.in/eP55Sa8K
    #ai #artificialintelligence #aipolicy #aiprogramming #airegulation #aisafety

    Advanced AI systems have already outpaced their developers' understanding: developers cannot fully explain AI decision-making, and systems sometimes act contrary to their operators' interests. Until AI systems' behavior can be fully explained, let alone controlled, the public will face risks ranging from biased employment decisions to physical harm. The Center for AI Policy's latest report offers an overview of explainability concepts and techniques, along with recommendations for reasonable policies to mitigate risk. Full report here: https://lnkd.in/gKs834Rf -- Mark Reddish

    House Science Committee Maintains AI Momentum with Second Markup

    The U.S. House Committee on Science, Space, and Technology continues to demonstrate bipartisan leadership in AI policy, holding its second AI-relevant markup in just one month. Last Wednesday, the committee approved four bills, with two squarely focused on AI.

    First, the AI Incident Reporting and Security Enhancement Act directs the National Institute of Standards and Technology (NIST) to update the National Vulnerability Database for AI systems and study the need for voluntary reporting of AI security and safety incidents, similar to the Senate’s Secure AI Act. The bill calls for establishing common definitions for AI vulnerabilities and developing processes for managing them.

    Second, the Department of Energy AI Act establishes a cross-cutting R&D program at the U.S. Department of Energy (DOE) to advance AI tools and capabilities. The bill authorizes $300 million annually from 2025 to 2030, directing research in areas such as large-scale simulations, applied mathematics, and the development of trustworthy AI systems.

    While many forms of AI innovation are important, the Center for AI Policy (CAIP) would like to see more explicit prioritization of AI safety innovation within this budget, such as the AI risk and evaluation program in the Senate’s version of the bill. Another benefit of the Senate’s version is that it would formally establish the DOE Office of Critical and Emerging Technologies, with a mission that includes providing for “rapid response to emerging threats and technological surprise.”

    CAIP calls on the Senate and House to align their bills on AI security and AI at the DOE and send both bills to the President’s desk for signature.

    #AI #AIPolicy #Energy #Science #Congress

    Pictured: The House Science Committee during its markup meeting.

    *** Today's AI Policy Daily highlights (October 3, 2024):
    1. OpenAI raises $6.6 billion in new funding, reaching a $157 billion valuation
    2. OpenAI asks investors not to back rival AI start-ups
    3. California's AI bill criticized as well-meaning but flawed
    4. Concerns arise about OpenAI's potential new logo design
    5. Nvidia explores spatial AI and 'omniverse' as the next big opportunity
    For the full newsletter, click here: https://lnkd.in/efxtry25
    #ai #artificialintelligence #aipolicy #aiprogramming #airegulation #aisafety

    Ignoring AI threats doesn’t make them go away

    Earlier this month, the Senate Select Committee on Intelligence, normally a very discreet panel, issued a bipartisan, public, and emphatic plea to the American people and the private sector to remain vigilant against election interference. As Chairman Mark Warner (D-VA) said in the hearing, Foreign Threats to the 2024 Elections, “[T]his is really our effort to try to urge [technology companies] to do more - to kind of alert the public that this problem has not gone away.”

    The Center for AI Policy (CAIP) strongly supports this effort. With its emphasis on artificial intelligence (AI) safety, CAIP shares the committee’s and participants’ concerns about how increasingly capable technologies are being used illicitly to undermine US elections and sow domestic division. Just weeks before the hearing, the US Department of Justice filed an indictment against Russian conspirators bankrolling social media influencers. From past operations to ongoing efforts, Russia was mentioned over 60 times during the hearing and linked to antidemocratic efforts, not only in the United States but also in interference operations around the world.

    It’s not just Russia, of course. Sen. Susan Collins (R-ME) highlighted intelligence showing that China is interfering with “down ballot races at the state level, county level, local level.” This is particularly troubling because resources to guard against interference in these races are often scarcer. We know about Chinese espionage efforts to influence a staffer in the New York governor’s office; there are almost surely similar plots not yet discovered in states with fewer investigative journalists. Iran and North Korea were also implicated by the committee and panelists for their election meddling and disinformation campaigns.

    Full post here: https://lnkd.in/gim7GfvH -- Brian Waldrip Makeda Heman-Ackah

    The need for AI safety has bipartisan consensus at the highest ranks

    The verdict is in: the need to move on AI safety is one of the rare bipartisan agreements these days. In the past week, both President Joe Biden and former (and possibly future) first daughter Ivanka Trump have made significant statements on the need for AI safety regimes.

    In his recent speech to the United Nations General Assembly, President Biden spent considerable time discussing the need for AI safety: “But let’s be honest. This is just the tip of the iceberg of what we need to do to manage this new technology. Nothing is certain about how AI will evolve or how it will be deployed. No one knows all the answers. [...] Will we ensure that AI supports, rather than undermines, the core principles that human life has value and all humans deserve dignity? We must make certain that the awesome capabilities of AI will be used to uplift and empower everyday people, not to give dictators more powerful shackles on human — on the human spirit.”

    Meanwhile, Ms. Trump has made public her concern about the issue, going so far as to do a deep dive study into it. In one of her most recent comments on X (formerly Twitter), she notes: “Leopold Aschenbrenner’s SITUATIONAL AWARENESS predicts we are on a course for Artificial General Intelligence (AGI) by 2027, followed by superintelligence shortly thereafter, posing transformative opportunities and risks.” The document she shared highlights the fact that “leading AI labs treat security as an afterthought.”

    Yet unfortunately, California Governor Gavin Newsom, who has generally been strong on consumer rights and AI issues, vetoed SB 1047, the strongest state bill to date intended to codify safety regulations. Even Elon Musk, of Silicon Valley fame, backed the bill, which passed through both chambers. Anthropic, a leading AI lab, said “we believe its benefits likely outweigh its costs.” At the end of the day, the Governor was the lone dissenter.

    The EU has moved forward on AI regulation. California has tried and so far failed to enact significant AI safety regulations. Yet there is clear bipartisan support for AI safety at the highest levels, no matter how the election plays out next month. Congress has been working on this issue for months and months, and we now have further validation from both sides of the aisle. To give some credit, multiple bipartisan bills have been introduced, but none show signs of getting across the finish line. It’s time to make a concerted effort to do so, to ensure AI safety on a national level while we still have the time, because AI waits for no one. -- Kate Forscey
