📺 Join Randy Hayes, VP of Public Sector at VAST Data, as he explores the crucial intersection of cybersecurity and artificial intelligence in federal agencies.

Key Takeaways:
🔹 The necessity of automating cybersecurity with AI and machine learning to counter the rising volume of cyberattacks.
🔹 Securing AI models themselves, which are prime targets for adversaries seeking to steal or exploit them.
🔹 The critical role of auditability in protecting AI models from being poisoned with malicious data.

"AI's potential to transform security is immense, but securing AI models and data is paramount to protect our digital assets," says Randy.

Watch the full discussion below.

Presented by VAST Data & Carahsoft

#AI #Cybersecurity #DataProtection #PublicSector #VASTData
Transcript
Interviewer: Randy, thanks for joining me again. It's great to see you. What's the intersection you're seeing now between cybersecurity and artificial intelligence? What are organizations and government doing, or what should they be doing, to secure the work they're doing with AI?

Randy Hayes: Yeah, that's a great question. We run into this constantly, and it's something that affects every single agency in the federal government. If you look at the volume of attacks coming in across all of federal government IT, it can't be handled by people. It has to be automated; it has to be built into a system that is intelligent. One of the big challenges is that it's happening everywhere: in critical infrastructure, at every single agency. So you need the ability to build in automation with machine learning and artificial intelligence to counteract all of this, because our adversaries are 100% using the same technology to attack us. We have to build, from both a software perspective and a hardware perspective, a functioning architecture that can fend off all of these different attacks coming from our adversaries.

One of the really interesting things, at least from a VAST perspective, is that we built a global namespace that spans edge, core, and cloud. It allows us to handle the ingest of all the packet capture data and all the other things we need to look at to figure out what's actually going on in these networks, and to say, hey, that packet doesn't belong here, somebody needs to go investigate that. By really inspecting what's happening on these networks, we've given a lot of our customers the ability to force-multiply the people they have actually doing this job.

Interviewer: There are two concepts that people in government talk to me about a lot regarding what you just described. One is using AI tools to secure the enterprise. The other is how you secure the AI tools themselves, whether you're using them for cybersecurity or for some other task. What are the differences and similarities between those two concepts?

Randy Hayes: I'll touch on the latter first, because I think it's the most important. If you look at the most valuable companies in the world, they're the ones that hold all the data, right? Meta, Facebook, Google: they have all of this information, and they're worth that much money because they have all of the data. So if you think about safeguarding AI tools, it's the most important part of what's going on. If you're going to train a tool, train a model, it requires a lot of data and a lot of infrastructure from companies like NVIDIA and others, and it just costs a lot of money to do. What our adversaries are going to try to do is steal the model that's already been trained on our data and then try to exploit that model, because they know what it's been trained on. So I think securing these AI models is without a doubt the most important thing agencies have to be aware of, especially when they start training on government-owned data. You're basically taking the crown jewels of all this information and training a model on it, and our adversaries are going to come and try to steal that model, so you have to really be aware of that and lock those systems down.
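The automated packet triage Randy describes above, flagging traffic that "doesn't belong" and queuing it for a human analyst, could look something like the minimal Python sketch below. The feature set, thresholds, and names here are illustrative assumptions, not VAST Data's actual implementation:

```python
# A hedged, minimal sketch of automated packet triage: score captured
# packets against a learned baseline and surface only the anomalies for
# a human analyst. Features and thresholds are assumptions for
# illustration, not VAST Data's implementation.
from dataclasses import dataclass

@dataclass
class PacketRecord:
    src_ip: str
    dst_port: int
    size_bytes: int

def flag_anomalies(packets, baseline_ports, max_size=9000):
    """Return packets that fall outside the learned baseline."""
    suspicious = []
    for pkt in packets:
        # "That packet doesn't belong here": an unknown destination port
        # or an implausible payload size gets queued for investigation.
        if pkt.dst_port not in baseline_ports or pkt.size_bytes > max_size:
            suspicious.append(pkt)
    return suspicious

if __name__ == "__main__":
    baseline = {53, 80, 443}
    capture = [
        PacketRecord("10.0.0.5", 443, 1400),  # normal HTTPS traffic
        PacketRecord("10.0.0.9", 6667, 512),  # unexpected port, flagged
    ]
    for pkt in flag_anomalies(capture, baseline):
        print(f"investigate: {pkt.src_ip} -> port {pkt.dst_port}")
```

Only the flagged packets reach an analyst, which is the force-multiplication Randy mentions: the system handles the volume, people handle the judgment calls.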
Interviewer: It strikes me that you also have to continue to accelerate the development of those models, so that if an adversary gets one, they have an old version, and by changing those models you're adding to their security. Is that a fair read?

Randy Hayes: Absolutely. And I think the other thing people don't realize is the idea of poisoning a model, where somebody could come in and insert different datasets to make a model look different and act different. That whole idea of poisoning a model also comes into play, and that's why I think auditability is going to be something that shows up, right? The government really likes to mandate things and regulate things, and AI is going to get regulated very hard. I think we're approaching the Sarbanes-Oxley moment of artificial intelligence. I look at e-discovery 12 to 15 years ago: hey, you need to provide all the emails this person sent in the last 12 months, and if you don't provide them, you're automatically guilty. I always joke, I have small kids, and it's going to be like elementary school math: you're going to have to show your work. You're going to have to show the model, how it was trained, and the data it was trained on. So you're going to have to keep all this information not only for a really long time, you're also going to have to keep it secure.

Interviewer: Quick final thought: is the security required for protecting against poisoned models the same as, or different from, the security for protecting the data itself?

Randy Hayes: I think it can be the same, because again it really comes down to auditability: showing where the data is, who accessed it, and when. So I think it's a combination of both. You obviously have encryption at rest and encryption in flight, and then you also need to be able to show the auditability. It all comes together in the same thought process, whether you're securing people from ransomware and things like that or from model poisoning; it's not any different.

Interviewer: On that note, Randy, thanks very much for joining me.

Randy Hayes: Absolutely, appreciate it. Good to see you again.
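Randy's "show your work" point suggests a concrete pattern: fingerprint every training dataset and keep a tamper-evident, append-only record of who touched it and when, so a poisoned batch can be traced after the fact. Below is a minimal Python sketch of that idea; the class, field names, and actors are assumptions for illustration, not a real VAST Data or government API:

```python
# A hedged sketch of training-data auditability: hash each dataset and
# chain audit entries so rewriting history is detectable. Names and
# fields are illustrative assumptions, not a production system.
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    """Content hash that an auditor can later recompute and compare."""
    return hashlib.sha256(data).hexdigest()

class AuditLog:
    """Append-only log; each entry embeds the previous entry's hash,
    so retroactively inserting or altering records breaks the chain."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64

    def record(self, actor: str, action: str, dataset_hash: str):
        entry = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "dataset": dataset_hash,
            "prev": self._prev,
        }
        self._prev = fingerprint(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)

log = AuditLog()
batch = b"training examples ..."  # stand-in for a real dataset
log.record("analyst_a", "ingest", fingerprint(batch))
log.record("trainer_1", "train", fingerprint(batch))
# An auditor can replay the chain to show exactly which data the model
# saw, who supplied it, and when: elementary school math, work shown.
```

Note that the same mechanism serves both of Randy's closing points: the hash chain supports the audit trail regulators may demand, and an unexpected dataset fingerprint in the log is one way a poisoning attempt would surface.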