Employees are entering sensitive data into ChatGPT. In a report from Menlo Security Inc., we share our findings and recommendations for the safe use of #GenAI. https://lnkd.in/gw9SN_iY #MenloSecurity #BrowserSecurity #CyberSecurity
Bryan Kam’s Post
More Relevant Posts
-
🚨 𝗔𝘁𝘁𝗲𝗻𝘁𝗶𝗼𝗻 𝗕𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗟𝗲𝗮𝗱𝗲𝗿𝘀! 🚨 😱 Over 225,000 OpenAI credentials were compromised and sold on the dark web last year! As ChatGPT integrates into business operations, the stakes are high. Classified information and proprietary code entered into ChatGPT can become a goldmine for cybercriminals if credentials are stolen. 🛡️ Don’t let your enterprise's innovative stride become a cybersecurity nightmare. It’s crucial to establish robust policies for using AI tools responsibly. Protect your sensitive data and stay ahead of threats! 🔐Need expert advice on crafting a 𝘀𝗲𝗰𝘂𝗿𝗲 𝗔𝗜 𝘂𝘀𝗮𝗴𝗲 𝗽𝗼𝗹𝗶𝗰𝘆? Contact ADACOM at [email protected] for tailored strategies to enhance your cybersecurity posture and safeguard your assets. 💼 👓 https://lnkd.in/d28p9Rie #Cybersecurity #ChatGPT #DataProtection #securitybuiltontrust
ChatGPT credentials snagged by infostealers on 225K infected devices
scmagazine.com
-
Safeguard your enterprise against the risk of data breaches stemming from the use of ChatGPT and similar Generative AI platforms by leveraging LayerX. Our solution enables you to detect and manage instances where sensitive data might be shared. By establishing policies that either restrict or entirely prevent the entry of confidential corporate data into ChatGPT, you ensure an added layer of security. Discover the full details on our blog: https://lnkd.in/dPWeCBtS #BrowserSecurity #CyberSecurity #ChatGPTProtection
Protecting Your Company's Data Crown Jewels: How to Solve ChatGPT Security Risks with LayerX Browser Security Platform - LayerX
https://layerxsecurity.com
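The policy mechanism the post describes, detecting sensitive data before it reaches a GenAI tool, then restricting or blocking it, can be sketched in miniature. This is an illustrative pattern-matching filter, not LayerX's actual engine; the two regexes and the `mode` parameter are assumptions for the example.

```python
import re

# Illustrative patterns only -- a real DLP policy engine would use far
# richer detection (classifiers, fingerprinting) than these two regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def enforce_policy(prompt: str, mode: str = "block") -> tuple[bool, list[str]]:
    """Decide whether a prompt may be submitted to a GenAI tool."""
    hits = check_prompt(prompt)
    if hits and mode == "block":
        return False, hits   # refuse to submit the prompt
    return True, hits        # allow, optionally logging the hits

allowed, hits = enforce_policy("Here is our key sk-abcdefghijklmnopqrstuv")
```

In a browser-security deployment this check would run client-side, before the text ever leaves the page; a "warn" or "redact" mode instead of "block" is the usual softer policy.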
-
In 2023, the Kaspersky Digital Footprint Intelligence service uncovered more than 3,000 dark web posts discussing the misuse of ChatGPT for illegal purposes and AI-powered tools more broadly. Although discussion peaked in March, it persists. According to Kaspersky, a cybersecurity solutions company, the conversations typically range from crafting malicious alternatives to jailbreaking the original ChatGPT, among other topics. Stolen ChatGPT accounts, and services offering their automated mass creation, are flooding dark web channels. #KasperskyDiscovery #DarkWebTalks #AIExploitation #Cybercrime #PelionCyberSecurityAlert #TechSecurityThreats #DigitalSafetyConcerns
Kaspersky uncovers talks on dark web about exploiting AI for cybercrime - Back End News
http://backendnews.net
-
Technology Consulting | Digital Transformation, Intelligent Solutions, Disruptive Innovations | I Help Companies Enhance Efficiency and Boost Revenue.
Artificial intelligence in the hands of cybercriminals poses an existential threat to organizations—IT security teams need “defensive AI” to fight back. Additional link in the comments section. #cybersecurity #cybercrime #chatgpt #ai #email #novigosolutions
Darktrace warns of rise in AI-enhanced scams since ChatGPT release
theguardian.com
-
Generative AI tools like ChatGPT are becoming increasingly popular; for businesses, however, they bring increased cyber risk. Should your organisation be implementing more robust cybersecurity measures? https://lnkd.in/eRHkAbgt #cybersecurity #businesssecurity #cyberrisk #airiskmanagement #riskmanagement
ChatGPT malware use is growing at an alarming rate
techradar.com
-
Bold(er) Cybersecurity Predictions For 2024 This one catches my attention: “Large language models (LLMs) such as ChatGPT will be the biggest disappointment to offensive and defensive cybersecurity that we have ever seen. In fact, the ease with which threat actors can poison these models and destroy the ability to create usable code will cause the use for LLMs in cybersecurity to die within 9 months.” Link: https://lnkd.in/d248T_7m
Bold(er) Cybersecurity Predictions For 2024
forbes.com
-
https://lnkd.in/di-9p8wY Credential-stealing emails are getting past artificial intelligence's "known good" email security controls by cloaking malicious payloads within seemingly benign emails. The tactic poses a significant threat to enterprise networks. A novel cyberattack method dubbed "Conversation Overflow" has surfaced, attempting to get credential-harvesting phishing emails past artificial intelligence (AI)- and machine learning (ML)-enabled security platforms. The emails can escape AI/ML algorithms' threat detection through the use of hidden text designed to mimic legitimate communication, according to SlashNext threat researchers, who released an analysis of the tactic today. They noted that it's being used in a spate of attacks in what appears to be a test-driving exercise by the bad actors, probing for ways to get around advanced cyber defenses.
'Conversation Overflow' Cyberattacks Bypass AI Security to Target Execs
darkreading.com
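Since the tactic relies on large amounts of hidden, legitimate-looking text diluting the malicious content, one coarse countermeasure is to measure how much of an email's text a recipient would never actually see. A minimal sketch, assuming HTML email bodies and a handful of common CSS hiding tricks; this is an illustrative heuristic, not SlashNext's actual detection:

```python
from html.parser import HTMLParser

class HiddenTextCounter(HTMLParser):
    """Count characters inside elements styled to be invisible.

    The CSS tricks checked here (display:none, zero-size font, white text,
    visibility:hidden) are common hiding techniques; an attacker has many
    more, so treat this as a rough signal only.
    """
    HIDING_HINTS = ("display:none", "font-size:0", "color:#ffffff", "visibility:hidden")

    def __init__(self):
        super().__init__()
        self.depth_hidden = 0   # nesting depth inside a hidden element
        self.visible_chars = 0
        self.hidden_chars = 0

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "").replace(" ", "").lower()
        if any(hint in style for hint in self.HIDING_HINTS):
            self.depth_hidden += 1
        elif self.depth_hidden:
            self.depth_hidden += 1  # children of a hidden element stay hidden

    def handle_endtag(self, tag):
        if self.depth_hidden:
            self.depth_hidden -= 1

    def handle_data(self, data):
        n = len(data.strip())
        if self.depth_hidden:
            self.hidden_chars += n
        else:
            self.visible_chars += n

def hidden_text_ratio(html: str) -> float:
    """Fraction of the email's text that is invisible to the recipient."""
    p = HiddenTextCounter()
    p.feed(html)
    total = p.visible_chars + p.hidden_chars
    return p.hidden_chars / total if total else 0.0
```

An email whose ratio is near 1.0 is carrying far more invisible text than visible text, exactly the "overflow" shape described above, and could be routed to deeper inspection regardless of what the ML classifier scored.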
-
Full-stack Product with Technical background. 6 x 0 to 1. Currently Sov Cloud Operability and tinkering with GenAI.
Most of yesteryear's security solutions are becoming obsolete: anti-phishing, KYC, voice-based authentication, and so on. The space needs a new, radically different approach. Can GenAI fight GenAI-enabled cyber attacks? A few things are clear: zero trust is a must, systems must be designed as if a breach is all but certain, and FIDO passwordless authentication is the way forward (although biometric second factors are out).
Five ways criminals are using AI
technologyreview.com
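The FIDO/WebAuthn flow mentioned above starts with the server issuing a one-time challenge that the authenticator signs. A minimal server-side sketch of building those registration options, using only the standard library; the relying-party ID "example.com" and all field values are placeholders, and a real deployment would use a maintained WebAuthn library rather than hand-rolling this:

```python
import secrets, base64

def b64url(data: bytes) -> str:
    """WebAuthn transports binary fields as base64url without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_registration_options(user_id: bytes, user_name: str) -> dict:
    """Build a minimal PublicKeyCredentialCreationOptions-shaped dict.

    Field names follow the WebAuthn spec; values here are illustrative.
    """
    return {
        "challenge": b64url(secrets.token_bytes(32)),  # fresh per ceremony
        "rp": {"id": "example.com", "name": "Example Corp"},
        "user": {"id": b64url(user_id), "name": user_name,
                 "displayName": user_name},
        # -7 = ES256, -257 = RS256 (COSE algorithm identifiers)
        "pubKeyCredParams": [{"type": "public-key", "alg": -7},
                             {"type": "public-key", "alg": -257}],
        "authenticatorSelection": {"userVerification": "required",
                                   "residentKey": "preferred"},
        "timeout": 60000,
    }

options = make_registration_options(b"user-123", "alice")
```

The phishing resistance comes from the browser scoping the resulting credential to the relying-party ID, so a look-alike domain can never replay it, which is why this survives GenAI-quality phishing where password and OTP flows do not.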
-
📊 Stealer logs, tradeable in dark web forums, contain stolen data such as payment details and credentials.
👥 ChatGPT users are among the impacted, with credentials exposed via malware on OpenAI domains.
- 180 million users on ChatGPT.
- OpenAI receives 1.6 billion visits monthly.
- From January to October 2023, 225,000 stealer logs with information on ChatGPT users were identified.

👥 **Detailed Findings in Stealer Logs:**
- Analysis revealed 701,884 user records from 2023 across 2,164 victim IP addresses.
- Compromised data includes personal details and login credentials.

🌐 **Geographic Insights:**
- Stealer logs show top victim countries as Brazil, Vietnam, Egypt, Pakistan, and Bangladesh.
- However, top web traffic comes from the US, India, Brazil, Indonesia, and the Philippines.

💳 **Types of Compromised Data:**
- Email addresses, credit card information, and hashed data were commonly extracted.
- Top countries for email inclusion: Brazil, Vietnam, Egypt, Nigeria, and Argentina.
- For credit card data: Egypt, Dominican Republic, Colombia, Brazil, and Nigeria.
- Hash inclusion led by the UK, Argentina, Brazil, Spain, and Colombia.

🌎 **Browsers and Security:**
- Chrome is the most used browser among compromised ChatGPT users.
- Edge and Opera are also significantly represented.

Summary (auto-generated) from “ChatGPT Users in Stealer Logs: A 2023 Stealer Analysis of OpenAI” by SOCRadar® Extended Threat Intelligence. Source: https://lnkd.in/dBKS9ZUx #securitytrends #infosec #cybersecurity #threatintel #threatintelligence #ciso
ChatGPT Users in Stealer Logs: A 2023 Stealer Analysis of OpenAI - SOCRadar® Cyber Intelligence Inc.
socradar.io
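The per-country rankings in the report are, at bottom, simple aggregations over parsed log records. A toy sketch of that kind of tally; the four records below are synthetic stand-ins, not data from the actual SOCRadar dataset:

```python
from collections import Counter

# Synthetic records standing in for parsed stealer-log entries; the real
# dataset (701,884 user records) is of course not reproduced here.
records = [
    {"country": "Brazil",  "has_card": True},
    {"country": "Vietnam", "has_card": False},
    {"country": "Brazil",  "has_card": False},
    {"country": "Egypt",   "has_card": True},
]

def top_countries(records, key=None):
    """Rank countries by record count, optionally filtered by a predicate."""
    filtered = records if key is None else [r for r in records if key(r)]
    return Counter(r["country"] for r in filtered).most_common()

all_ranking = top_countries(records)                               # overall victims
card_ranking = top_countries(records, key=lambda r: r["has_card"]) # card exposure
```

Running separate tallies with different predicates is exactly why the "top victim countries" and "top countries for credit card data" lists above differ.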
-
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets. More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023. "The sharp increase in the number of ChatGPT credentials for sale is due to the overall rise in the number of hosts infected with information stealers." #OpenAI #ChatGPT #LLM #GenAI #ArtificialIntelligence #AI #DarkWeb #infosec #security #cybersecurity #hackers #hacking
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets
thehackernews.com