Key AI Developments in June 2024

Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-size version of our global AI Tracker. Each month, Holistic AI’s policy team provides a roundup of the past month's key responsible AI developments from around the world to keep you up to date with the ever-evolving landscape.


Europe

1. Council of Europe presents EU with Framework Convention on AI

  • On 26 June, the European Commission presented a proposal for a Council of the EU decision on the signing, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.

  • This follows the adoption of the Convention by the Council of Europe last month.

  • The Convention aims to ensure that AI systems are used in a way that is consistent with human rights, democracy, and the rule of law throughout their lifecycle.

  • The Convention will be binding on the States that sign and ratify it, requiring them to develop their own jurisdiction-specific rules and actions.

2. European Data Protection Supervisor publishes guidelines on generative AI

  • On 3 June 2024, the European Data Protection Supervisor, the independent supervisory authority for European institutions and bodies, published guidelines on the use of generative AI.

  • Aimed at supporting EU institutions, bodies, offices, and agencies to comply with Regulation (EU) 2018/1725 when either using or developing generative AI tools, the guidelines are structured as a Q&A and aim to provide “concrete examples” to make core data protection principles actionable for generative AI.

  • The guidelines cover how to identify whether a generative AI tool is processing personal data.

  • They also outline the role of DPOs and the importance of DPIAs where personal data is processed.

  • Moreover, the guidelines provide best practices for ensuring data minimization, data accuracy, data security, appropriate transparency, and the avoidance of bias when using generative AI tools.

3. Meta pauses AI training plans after complaints received in several European countries

  • On 26 June 2024, Meta’s new privacy policy was due to come into effect, which would give it the right to collect public Instagram and Facebook data to train its AI models.

  • While the company did give users an opt-out option, the advocacy group None of Your Business (NOYB) filed complaints in 11 European countries over concerns about the lack of the right to be forgotten once a user’s data is in the database.

  • As a result, the Irish Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO) called for Meta to delay the training of its LLMs until the concerns could be addressed.

  • Accordingly, Meta has paused its AI training plans. 

4. France’s data protection agency publishes recommendations on the application of the GDPR to AI

  • On 8 April 2024, the French data protection agency CNIL published recommendations on compliance with the GDPR when developing AI. On 7 June, the agency published the same guidance in English.

  • The guidelines cover all steps prior to the deployment of AI, including system design, dataset creation, and model training, noting that system improvement is an iterative process that can revisit development steps.

  • The guidelines outline a seven-step approach to AI development in line with the GDPR:

  1. Determine the objective (purpose) of the system, including when using a general-purpose AI system

  2. Determine your responsibilities, i.e., whether you act as a controller or a processor (subcontractor)

  3. Define the legal basis for the AI system’s processing of personal data, whether you are a public or private actor

  4. Check if certain personal data can be reused for the purpose of AI development, including open-source and third-party data

  5. Minimize the amount of personal data used

  6. Define the data retention period so that data subjects can be informed

  7. Carry out a data protection impact assessment with a view to determining the appropriate security measures.

5. Dutch Data Protection Authority and National Digital Infrastructure Inspectorate publish advice on the supervision of AI systems

  • On 11 June 2024, the Dutch Data Protection Authority (AP) and National Digital Infrastructure Inspectorate (RDI) announced the publication of second interim advice on the supervision of AI systems drafted with 20 other supervisors.

  • The advice comes ahead of the EU AI Act’s transition period and recommends that the supervision of high-risk AI systems across sectors be aligned with existing supervision: for products that already require a CE mark, supervision can remain the same.

  • For products that do not currently require a CE marking, supervision is predominantly the responsibility of the AP and supplemented by sectoral supervision.

  • The only two exceptions to this are the financial sector, where the Netherlands Authority for the Financial Markets (AFM) and the Dutch Central Bank (DNB) will be responsible for the supervision of AI systems, and critical infrastructure, where the Human Environment and Transport Inspectorate (ILT) and the RDI will supervise.

6. Council of the EU adopts regulation on supercomputing in AI development

  • On 17 June, the Council of the EU announced its adoption of an amendment to the regulation on the European High-Performance Computing (EuroHPC) joint undertaking to address the development and operation of AI factories.

  • These factories provide infrastructure for AI supercomputing through the combination of an AI supercomputer, data center, and AI-oriented supercomputing services, and will be promoted and operated by the EuroHPC joint undertaking.

  • The factories will be open to public and private users, where SMEs and start-ups will have ad-hoc access conditions.

  • The regulation will enter into force 20 days after it is published in the Official Journal of the European Union.

7. Apple to withhold AI technologies in the EU and European Commission opens investigation over DMA concerns

  • Apple is reportedly blocking the release of Apple Intelligence in the EU due to concerns about the Digital Markets Act.

  • The DMA regulates so-called gatekeepers in the EU to promote fair competition among digital platforms by prohibiting them from favoring their own services over competitors’.

  • Apple claims that Apple Intelligence would have downgraded security and privacy features if deployed in a way that is compliant with the DMA, resulting in the technology not being made available in the EU.

  • Moreover, the European Commission published a preliminary view that the App Store violates the DMA due to terms that prevent developers from steering customers to alternatives.

  • As a result, the Commission has opened additional non-compliance investigations into Apple, adding to a prior investigation launched on 25 March, and could adopt a non-compliance decision against Apple if its suspicions are confirmed.

US

8. Federal bill on AI training introduced

  • On 11 June, US senators Maria Cantwell and Jerry Moran introduced the Small Business Artificial Intelligence Training and Toolkit Act of 2024 (S4487) to mandate the Secretary of Commerce to develop AI training resources and toolkits for US small businesses in collaboration with the Small Business Administration (SBA).

  • The training would center on using AI to benefit accounting, business planning and operations, marketing, supply chain management, government contracting, and exporting.

  • Training resources would also be available through SBA resource partners, including Small Business Development Centers, Women’s Business Centers, SCORE, Veteran Business Opportunity Centers, and the Apex Accelerator.

  • The move is aimed at enhancing AI integration and utilization to support the growth of small businesses in the US. 

9. New York introduces bill to restrict state agencies from using LLMs or AI for decision-making on individual rights

  • On 20 June 2024, New York policymakers introduced A10583 to prohibit state agencies and state-owned entities from using large language models (LLMs) or artificial intelligence (AI) systems for decisions affecting individuals' rights, benefits, or services, reserving such decisions exclusively for human personnel.

  • However, the bill allows AI and LLMs to be used in advisory roles and for non-decision-making purposes like data analysis and research, provided final decisions are made by humans.

  • Under the bill, state agencies must also develop compliance policies with enforcement oversight by the attorney general. 

10. New York introduces bill on AI and FRT in criminal investigations

  • On the same day, A10625 was introduced to require the Division of Criminal Justice Services to establish a standardized protocol for law enforcement's use of AI and facial recognition technology in criminal investigations.

  • This includes transparency measures, independent audits, and training for law enforcement officers.

  • The bill prohibits AI-generated outputs, including facial recognition results, from being used as evidence in court proceedings and grants defendants the right to expert witnesses on AI reliability.

  • Moreover, prosecutors must disclose comprehensive information on AI and FRT usage, including error rates and biases, ensuring transparency and fairness in legal proceedings.

11. Clearview AI settles lawsuit over scraping internet photos to train facial recognition models

  • New York-based facial recognition technology provider Clearview AI has reportedly settled a lawsuit over its scraping of internet images to train its facial recognition tool, which it shares with law enforcement agencies.

  • The lawsuit, a result of a New York Times article, was consolidated into a single class action in Illinois after multiple lawsuits were filed over the unlawful collection of biometric data, which violates Illinois’ Biometric Information Privacy Act.

  • The proposed settlement will provide a collective 23% stake in the company to everyone whose data was scraped to train the technology.

12. Google defeats lawsuit over internet data scraping for AI training

  • On 6 June 2024, U.S. District Judge Araceli Martinez-Olguín dismissed a 143-page proposed class action against Alphabet Inc. in a two-page document following Google’s motion to dismiss the lawsuit.

  • The complaint, filed in 2023, accused Google's web scraping practices for its Bard AI model of violating privacy, anti-hacking, and intellectual property laws.

  • Judge Martinez-Olguín cited a similar dismissal of a scraping lawsuit against OpenAI Inc., highlighting the excessive length and lack of clarity in the complaint. 

  • Both lawsuits involve some of the same plaintiffs, attorneys, and causes of action.

13. FTC and DOJ to Launch Antitrust Investigations into Microsoft, OpenAI, and Nvidia

  • The Federal Trade Commission (FTC) and the Justice Department (DOJ) are initiating antitrust investigations into Microsoft, OpenAI, and Nvidia, focusing on their influence in the AI industry.

  • The FTC will lead the investigations into Microsoft and OpenAI, while the DOJ will examine Nvidia. 

  • The investigations aim to scrutinize the companies' conduct rather than mergers and acquisitions amid concerns over the rapid advancement of AI technologies and the substantial financial investments required for their development. 

  • This news follows the FTC's announcement of a comprehensive study on major AI industry players and an open letter from current and former OpenAI employees highlighting the lack of oversight and whistleblower protections in the AI sector.  

14. Department of Treasury publishes Request for Information on Artificial Intelligence Financial Services

  • On 12 June 2024, the U.S. Department of the Treasury published a request for information (RFI) seeking insights on AI's current and potential applications within the financial sector. 

  • Financial institutions are expected to disclose how AI is used in various functions such as product offerings, risk management, customer service, regulatory compliance, and marketing. 

  • The RFI highlights concerns regarding AI's impact on transparency, bias in decision-making, privacy risks, and compliance with existing laws, prompting stakeholders to propose enhanced regulatory frameworks. 

  • Written comments and information are requested on or before 12 August 2024. 

15. Major record companies file lawsuit against AI song generator Suno

  • On 24 June 2024, record companies UMG, LLC, Sony, Atlantic, ARC, Rhino, The All Blacks, Warner Music, and Warner Records collectively filed a lawsuit against AI song generator Suno over copyright infringement.

  • The lawsuit claims that Suno AI illegally copied their sound recordings to train its generative AI, resulting in the production of music that competes with and devalues the original works. 

  • The plaintiffs argue that Suno's use of their recordings does not qualify as fair use due to its commercial nature, the substantiality of the copied content, and the negative impact on the market for the original recordings.  

16. Google escapes lawsuit over AI-powered call listening

  • Google has beaten a lawsuit filed in California in October 2023 over its Cloud Contact Center AI, an artificial intelligence service used by various call centers.

  • The plaintiff, Misael Ambriz, alleged that Google eavesdropped on his conversation with a Verizon employee through the Cloud Contact Center AI and analyzed the context to suggest "smart replies" and news articles to the Verizon agent.

  • Ambriz claimed that this violates California's wiretap law, which prohibits the interception of electronic communications without the consent of all parties involved.

  • However, on 20 June 2024, Judge Rita F. Lin dismissed the case on the grounds that Google was exempt from the California Invasion of Privacy Act because it was acting as an agent of Verizon.

  • The five-page dismissal document gives Ambriz until 22 July to submit an amended complaint.

Global

17. Hong Kong Privacy Commissioner releases first AI-focused Personal Data Protection Framework in APAC

  • Hong Kong’s Office of the Privacy Commissioner for Personal Data (PCPD) issued the city’s first set of personal data protection guidelines for businesses using generative artificial intelligence (AI) services.

  • The guidelines advise companies that use generative AI solutions to take various measures to protect personal data, including risk assessments, human oversight, and minimizing the use of personal data for model training.

  • Companies are also encouraged to establish an internal AI governance committee led by a C-suite executive who reports to the board.

  • The document marks the city’s first AI-focused regulatory guidance; Hong Kong otherwise has no AI-specific laws.

18. Türkiye proposes AI law to prevent AI harms

  • On 24 June 2024, policymakers in Türkiye introduced a proposal for a law to ensure the safe, ethical, and fair use of AI.

  • The proposal aims to ensure the protection of personal data, prevent violation of privacy rights, and create a regulatory framework for the development and use of artificial intelligence systems.

  • It does so by requiring adherence to a set of principles and introducing risk assessments of AI systems to identify and minimize possible hazards, with a risk-based approach imposing specific requirements for systems considered high-risk.

  • Broadly drafted obligations for high-risk systems include security measures, monitoring, and control mechanisms, as well as registration with relevant supervisory authorities and conformity assessments.

  • Specific details are left to the competent authorities.

19. U.S. and Singapore announce shared principles and collaboration on AI

  • On 5 June, the US and Singapore jointly released a factsheet detailing their shared principles and objectives related to AI, along with plans for further collaboration.

  • The announcement follows a Roundtable on AI hosted by U.S. Secretary of Commerce Gina Raimondo and Singapore Minister for Communications and Information (MCI) Josephine Teo with U.S. and Singaporean companies and government officials.

  • As part of the initiative, the AI Risk Management Framework of the National Institute of Standards and Technology (NIST), which sits within the U.S. Department of Commerce, will be mapped to MCI’s equivalent guidance, AI Verify. The two bodies will continue to collaborate following the mapping.

  • The U.S. AI Safety Institute and Singapore’s Digital Trust Center are also planning to collaborate on advancing AI safety as part of the initiative.

20. African Ministers adopt continental Artificial Intelligence Strategy and African Digital Compact

  • This month, ministers from across Africa endorsed two landmark initiatives, the Continental Artificial Intelligence (AI) Strategy and the African Digital Compact, which aim to accelerate the continent’s digital transformation with the help of emerging technologies.

  • The Continental AI Strategy advises national governments on how to responsibly use AI for development goals in education, health, agriculture, infrastructure, peace, and security.

  • The African Digital Compact is the African Union’s strategic outline for achieving sustainable development via digital transformation initiatives.

  • The announcement follows the 2nd Extraordinary session of the Specialized Technical Committee on Communication and ICT hosted by the African Union in mid-June. 

21. Australia publishes National Framework for the assurance of AI in Government

  • On 21 June 2024, the Australian state and territory Governments published a national framework for AI assurance.

  • Focused on the implementation of Australia’s AI ethics principles in practice, the framework recommends organizations establish robust governance structures including policies, processes, and roles to ensure safe and responsible AI use. 

  • It also recommends implementing a risk-based approach to assess and mitigate AI-related risks throughout the lifecycle of AI systems, with a heightened focus on high-risk applications. 

  • Moreover, the framework highlights the importance of stringent data governance practices to ensure reliable data inputs for AI systems, complying with relevant legislation, and minimizing risks associated with data management. 


Introducing the Holistic AI Feed and Expert Community 🎉

We are excited to announce the launch of the Holistic AI Tracker Feed and Expert Community – your go-to for updates on AI regulation, legislation, legal action, penalties and fines, and standards around the world. With regularly-posted content, the Feed brings together experts in policy, law, business psychology, computer science, and more, all with a focus on AI Governance.

To stay up to date with the latest developments, create a free account. Once you’ve signed up, you will be able to read all of our previous policy content and keep up with developments as we continue to publish regular insights.


Holistic AI policy updates

At the start of the month, we hosted our first Policy Connect event in Brussels ahead of the IAPP AI Governance Global 2024 conference. We were joined by Ashley Casovan, Managing Director, IAPP - International Association of Privacy Professionals AI Governance Centre; Kai Zenner, Head of Office for MEP Axel Voss, European Parliament; and Elinor Wahal, Legal and Policy Officer, DG-CNECT (Artificial Intelligence and Digital Industry – Directorate A), European Commission, for a panel on the EU AI Act led by our Legal and Regulatory Lead Osman Gazi Güçlütürk.


We also hosted our last Policy Hour webinar until after summer. This time, Osman was joined by Luis Aranda, Artificial Intelligence Policy Analyst and Economist at the OECD - OCDE, to discuss Global AI Governance.

The discussion covered key elements of AI governance and related trends, including the role of principles (such as the OECD’s own AI Principles), voluntary frameworks, legislation, and standards. Missed it or want a recap? Check out the event summary, which includes the slides and webinar recording.

Authored by Holistic AI’s Policy Team.
