My Take on the #EU #AI Act: A Game-Changer with Some Head-Scratchers 🧐 The European Union AI Act, a 458-page document with 113 articles, is making waves in the AI industry. Here's what you need to know:
🛑 Unacceptable #Risk: Bans on government social scoring and manipulative AI systems.
🚦 High Risk: Strict #compliance for AI in critical areas like #infrastructure and #healthcare.
💬 Limited Risk: #Transparency requirements for #chatbots and similar systems.
✅ Minimal Risk: Encouragement to follow best practices, but no extra obligations.
🤔 The legislation needs to stay flexible and keep pace with rapid AI advancements. #Startups and small businesses get some attention, but the definition of "excessive cost" remains unclear.
🏥 Interesting exceptions include allowances for AI in #medical treatment and #advertising practices. However, transparency requirements for AI giants like Microsoft, Google, and OpenAI could be challenging.
🧩 Disclosure of #copyrighted data used in AI training is a complex issue that companies need to navigate carefully.
🚀 The Act aims to boost responsible AI development, but its complexity could hinder adoption, especially for smaller startups.
🕰️ Companies should start aligning their AI systems with these new standards now, as the Act will be phased in over several years.
Read this article by Oliver King-Smith, CEO and founder of smartR AI, at https://lnkd.in/dQYCttQw
TeckNexus’s Post
-
Author | Keynote Speaker | Board Member | Associate Professor working on AI Ethics at the University of Oxford
"The planned regulation sets out a list of prohibited uses of AI (aka unacceptable risk), such as using AI for social scoring; brings in some governance rules for high risk uses (where AI apps might harm health, safety, fundamental rights, environment, democracy and the rule of law) and for the most powerful general purpose/foundational models deemed to pose “systemic risk”; and applies transparency requirements on apps like AI chatbots. But ‘low risk’ applications of AI will not be in scope of the law." #AIAct #AIEthics https://lnkd.in/evTHTP3M
EU's AI Act passes last big hurdle on the way to adoption | TechCrunch
https://techcrunch.com
-
Founder, Responsible Metaverse Alliance; Shark on Shark Tank (2023); LinkedIn Top Voice in Technology; Chair, Boab AI; Author Checkmate Humanity; Adj Professor; AI & Metaverse specialist; Psychedelic Renaissance advocate
Australian Government’s response to the need for AI regulation … it’s not adequate 🤖
The Industry and Science Minister Ed Husic MP has released the government’s interim response, calling for the establishment of an expert advisory committee and the development of voluntary labels and watermarking of AI-generated content. It is positive that the Australian Government has consulted with Australian industry and the public and has prepared an interim response to the significant challenges that AI poses. However, the proposed actions, still noted only as 'considerations', do not go far enough.
The government is targeting only 'high-risk' AI applications and outcomes: those that may pose risks mainly to life and to jobs. It also notes that ‘high-risk’ includes impacts that are ‘systemic, irreversible or perpetual’, such as the use of AI-enabled robots for medical surgery and the use of AI in self-driving cars to make real-time decisions. Considering only high-risk impacts completely misses the 'unacceptable risk' category that proposed legislation such as the EU's AI Act addresses. The EU AI Act covers high risk, unacceptable risk, general-purpose and generative AI risk, and limited risk. Australia's model should do the same, and at a minimum include unacceptable risks.
An unacceptable risk may include AI such as Fortune reported on recently, where generative AI is powering child porn avatars. Interpol and the Australian Federal Police are already monitoring AI used to exploit and extort money from children. The Australian government must address unacceptable risks such as these, not just focus on risks to industry and jobs.
It is also unclear how much budget the government will allocate to this initiative. Previously we have seen the government fund token amounts for AI-based initiatives, such as the announcement in December 2023 that $17m would be allocated to helping small businesses adopt AI. This equates to about $6.50 per business, roughly half a month's subscription to a single AI application. It's not enough.
And, most likely, the tech giants will calculate the financial risk of not adhering to the regulations; if compliance impacts their drive for profit, it might seem reasonable to wear the fine and break the rules. The risks that AI poses to Australians are very real and dangerous, and more needs to be done. Both high risks and unacceptable risks should be addressed in this government's response strategy.
Thanks David Swan for covering this important topic. #ai #responsibleai Responsible Metaverse Alliance Gradient Institute Stela SOLAR Anton van den Hengel Jon Whittle Creel Price Judy Slatyer Tiberio Caetano Bill Simpson-Young Louis Rosenberg Kavya Pearlman ⚠️ Safety First ⚠️ Julie Inman - Grant Lorraine Finlay Toby Dagg Patrick Hooton Channel 10 Shark Tank Australia The Sydney Morning Herald Marty McCarthy Ed Santow Toby Walsh Kimberlee Weatherall Chris Dolman
Husic shuns EU path for AI, unveils government’s vision
smh.com.au
-
Founder & CEO at Immersifi - The next era in luxury membership with Innovate Tech | Pioneering the Future of Fashion Retail & Customer Experience with Web3 and AI | Blockchain | Metaverse | NFT Tech | Future Thinker
Is Australia going to be behind the times in AI regulation as well?
“We’d want to make sure that whatever we do in this space, that regulation can keep pace for future development as well.” Husic said tech giants such as Google, Microsoft and ChatGPT maker OpenAI would need to work co-operatively with the Australian government to ensure their AI products complied with Australian laws.
If they really want to make sure regulation keeps pace with future development, then they should be involved much earlier. This tech does not happen overnight. Parts of Asia have some of the strictest rules in innovative tech, and they are the ones that are thriving.
Husic shuns EU path for AI, unveils government’s vision
smh.com.au
-
🥊 The AI vs. Creative Industries saga continues after the New York Times sued OpenAI and Microsoft in December, highlighting once more the complexity of drafting legislation that protects intellectual property from being absorbed by Large Language Models.
What happened in the UK? 🇬🇧 The UK government has delayed releasing a code regulating AI model training with copyrighted material. This has raised concerns among stakeholders in the creative industries who are seeking transparency about which models are being trained on which works.
💬 The Intellectual Property Office (IPO) has consulted with tech and AI companies such as Microsoft, DeepMind and Stability, and with arts and news organisations including the BBC, the British Library and the Financial Times, as the government would prefer a voluntary approach, such as a new code to address the issue, without resorting to legislation. Despite the consultations, industry executives could not reach an agreement on the code, and responsibility has been returned to government officials. It’s easy to understand why they couldn’t agree: AI companies seek easy access to content for training models, while creative industries are concerned about fair compensation for the use of their materials.
In my opinion, governments will never be able to catch up with the fast-evolving AI landscape through their legislative infrastructure unless they heavily restrict its use. However, that would pose a risk not just of being left behind in the AI gold rush but also of missing opportunities to improve public services.
🤖 We have all learned that technological advancements cannot be stopped, and the only way is forward. Hence, rather than slow-paced legislation that might be obsolete by the time it is released, we need more transparency from the tech giants on how they train their AI models, and this is only possible in an open-source scenario where society comes before profit. Or, at least, in the same place. #EthicalAI #AI #IntellectualProperty
-
Founder @ Dynamic Tech Media | Translate tech speak into plain English | Content creation and digital marketing
https://lnkd.in/gvPk9GW8
EU's AI Act passes last big hurdle on the way to adoption | TechCrunch
https://techcrunch.com
-
EU’s AI Act passes last big hurdle on the way to adoption https://lnkd.in/ggU9ndFY
EU's AI Act passes last big hurdle on the way to adoption | TechCrunch
https://techcrunch.com
-
Big Tech Under Scrutiny: EU Investigates Generative AI Risks #AI #generativeAI #EUregulation #developers
Attention developers! The EU is flexing its muscles with the Digital Services Act (DSA) to investigate how big tech companies are handling the risks of generative AI, like deepfakes. This could have a big impact on how these technologies are developed and deployed. What are your thoughts?
Here are some key points from the article:
The EU sent questionnaires to major platforms like Google, Facebook, and TikTok to understand their strategies for mitigating generative AI risks.
Concerns include the spread of misinformation, manipulation of services, and potential harm to vulnerable groups.
This investigation comes ahead of the EU's comprehensive AI rulebook taking effect next year.
This news is relevant to both developers and designers, as it touches on the ethical considerations and potential misuse of AI-powered tools. What are the biggest challenges with generative AI? How can developers ensure responsible development of these technologies? What role should regulations play in shaping the future of AI?
Stay tuned for further updates on this developing story! Link to the article: https://lnkd.in/dyRbtfSX
Europe asks Google, Facebook, TikTok and other platforms how they’re reducing generative AI risks
thehindu.com
-
Founder, consultant, technologist. Currently building isAI - a system to promote AI legal conformance. Consulting on AI investment strategies (hype avoidance, value identification...) and system architectures.
Look carefully at the screenshot. "Voice disguised by AI." This was on BBC1 this morning.
I am currently building isAI - a system for informing customers that an AI system is in use. Some people have seen the demo. https://lnkd.in/e4-rNY27 My system. My money. No VC, as yet.
A few people have asked me whether I am worried by broadcasters and social media companies detecting and declaring #AI usage. No, I am delighted. Here's why.
1️⃣ The fact that reputable broadcasters such as the BBC are doing this sets the context. It helps develop a viewer expectation that we are told when AI is in use. (It will also be a legal requirement in some jurisdictions.)
2️⃣ The social media companies are using AI tool detection and then labelling content when they find it. You'll see this on #instagram and #facebook. We believe that AI Declaration - telling people rather than waiting for it to be found - is what suits big brands best.
3️⃣ Our own examination of video metadata files suggests that detection is going to be an uphill struggle.
4️⃣ What also suits big brands, advertisers and call centres well is the records that isAI collects. With the advent of "informed AI" (good practice) will come a swathe of lawsuits for "uninformed AI", particularly in the US.
So, no, I am delighted that the #BBC is inserting these little AI notices. Great progress. It starts to establish the market and, more importantly, establishes clarity and trust around AI usage in media. As it says in our demo video: "Welcome to isAI. AI done well."
Tom Pugh (he/him), Asif Ashiq Rana, Sahil Jain, Craig Mullins, Seren Yashar, Richard Tyler, Steve Yurisich, Emily Ford, Sebastian Haire
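Point 3️⃣ above is easy to illustrate: detection only works when a provenance label is actually present in the file, and most files carry none. A minimal sketch (hypothetical, not isAI's implementation) that scans a media file's raw bytes for well-known provenance markers such as a C2PA label or the IPTC "trainedAlgorithmicMedia" digital-source-type value:

```python
# Toy sketch: look for known provenance markers in a file's raw bytes.
# Real detection would parse the container format (MP4 boxes, XMP, etc.)
# properly -- and the absence of a marker proves nothing about whether
# AI was used, which is exactly why detection is an uphill struggle.

MARKERS = [b"c2pa", b"contentcredentials", b"trainedalgorithmicmedia"]

def find_provenance_markers(path: str) -> list[str]:
    """Return the known provenance markers found anywhere in the file."""
    with open(path, "rb") as f:
        data = f.read().lower()  # case-insensitive substring scan
    return [m.decode() for m in MARKERS if m in data]
```

A file produced by a tool that embeds no label at all simply returns an empty list, which is the common case today.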
-
Rise Above The Blah ⬆️ AI-supported marketing for small biz 🤖🛠️ get my weekly emails for tools and tips 🚫 No overwhelm allowed
Regulating issues with AI is like playing whack-a-mole. Is this the best approach? Won't we get overwhelmed with issues to regulate?
Justin Collery and I were discussing the fact that the House Judiciary Committee now wants answers about how Google trained its Gemini AI chatbot...
"[Ohio Rep. Jim] Jordan said he was alarmed by internal reports from within the Gemini team that it had followed Biden White House guidance stipulating that AI must advance 'equity' — a left wing idea which argues that black people and other historically marginalized groups should be promoted and spotlighted ahead of white people and other groups not historically marginalized — regardless of merit." **
Justin reckons we need to get more nimble about addressing issues as they occur. I can't help thinking we need more regulations that place responsibility firmly with the companies releasing the tools, to ensure they do their due diligence and adequate testing, and are taken to task for any resulting harms arising from the use of their tools. Are we too terrified of stifling innovation to ensure companies are held responsible?
--
** Source: 'House Republicans demand Google reveal what role US government had in developing woke Gemini AI' on judiciary dot house dot gov