Seth Abrams, Chief Technology Officer, Homeland and Force Protection at Leidos, discusses the transformative impact of Trusted Mission AI on federal agencies and the importance of maintaining quality controls while innovating.

Key Takeaways:
▫️ AI adoption should focus on mission results and maintain established quality controls.
▫️ AI excels at managing large data sets and enhancing developer productivity.
▫️ Establishing guardrails for AI experimentation is crucial for responsible innovation and security.

💻 Watch: https://lnkd.in/ecvgwQBH
Transcript
Welcome back. Several guiding documents from federal agencies are shaping the way departments of the government are dealing with artificial intelligence. New guidance from the General Services Administration on AI acquisition, and from the Office of Personnel Management on AI usage by employees and building an AI-ready workforce, are the documents agencies are tracking. Seth Abrams is chief technology officer for Homeland and Force Protection at Leidos. Seth, welcome. Thanks for joining me. Those are the policy and strategy documents that the government is looking at. What should they be thinking about, though, from an enterprise perspective, about how AI intersects with the work they're trying to do across agencies?

Yeah, when it comes to agency application of technology, we really want to focus on the results, right? We want to focus on the mission. What ends up happening when new technology comes in is that there's a big rush to adopt it. What we want to do is make sure you keep control of the quality that you've established over time. We had a big switch to agile, and with that switch we introduced all these pipelines that introduced a lot of controls. So when AI comes in, you want to make sure those same controls are applied.

From an enterprise perspective, what do you expect to see change, or what are you starting to see change already, as agencies get more use cases put into pilot programs and into actual use?

Yeah, well, we're seeing a lot of back-end adoption, which is really great, because I think that's the place where AI really shines. When you have a massive amount of data and you need to comb through it and get more eyes on it, AI really shines. At the same time, what we're seeing is that when you have developers using AI, you can get a much bigger increase in productivity when it comes to
getting code out, while keeping the quality controls and security controls around it. So what you're left with is AI-enabled developers, which is really great.

What do you expect to see moving forward in that intersection of AI and software development, and how can agencies really leverage the potential you just described?

Well, there are new technologies coming out all the time, so there's assisted code development. At the same time, on the big data front, we're starting to introduce a lot of generative AI investigations. It used to be that people needed to know how to write SQL in order to do investigations. Now we can use natural language models to actually ask the question and start getting some really good answers.

You talked about the fact that this technology is changing rapidly. Innovation sometimes has a tendency to be overcome by agencies trying to make sure they don't get too far over their skis. How are you seeing agencies that are successful at letting innovation bloom in this area do so within the guardrails they need to stay in?

Well, at Leidos, we set up guardrails ourselves: we have a framework for AI resilience and security. What that allows agencies to do is experiment with AI and really adopt the new technologies that are out there, while at the same time having those guardrails. You can basically turn up the dial from person on the loop, to person in the loop, to person out of the loop. That dial mechanism is really what we're looking at making sure agencies structure around.

A lot of agencies are telling me that kind of a sandbox, that ability to experiment, is really important to them. What's the advantage an agency gains by having that ability? Oh, it's huge.
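As an aside from the transcript: the natural-language-investigation idea Abrams describes can be sketched in a few lines. This is a minimal, self-contained illustration, not Leidos's implementation; the `question_to_sql` function and its hardcoded mapping stand in for the language-model call a real system would make.

```python
import sqlite3

# Tiny in-memory "investigations" dataset (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incidents (id INTEGER, region TEXT, severity TEXT)")
conn.executemany(
    "INSERT INTO incidents VALUES (?, ?, ?)",
    [(1, "east", "high"), (2, "west", "low"), (3, "east", "low")],
)

def question_to_sql(question: str) -> str:
    """Stand-in for a natural-language-to-SQL model.

    In a production system this would call a language model; a hardcoded
    mapping keeps the sketch self-contained and deterministic.
    """
    canned = {
        "how many high-severity incidents are in the east region?":
            "SELECT COUNT(*) FROM incidents "
            "WHERE region = 'east' AND severity = 'high'",
    }
    return canned[question.lower()]

sql = question_to_sql("How many high-severity incidents are in the east region?")
count = conn.execute(sql).fetchone()[0]
print(count)  # 1
```

The point of the pattern is that the analyst supplies the question and the system supplies the SQL, so investigators no longer need to write queries by hand.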
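The "dial" Abrams mentions, from person on the loop to person in the loop to person out of the loop, can also be sketched. The enum values and the `dispatch` routing below are hypothetical names chosen for illustration; they are not from any Leidos framework.

```python
from enum import Enum

class Oversight(Enum):
    """Hypothetical settings for the autonomy 'dial' described in the interview."""
    ON_THE_LOOP = "on"    # AI acts; a human monitors and can intervene
    IN_THE_LOOP = "in"    # a human must approve each AI-proposed action
    OUT_OF_LOOP = "out"   # AI acts autonomously within preset guardrails

def dispatch(action: str, level: Oversight) -> str:
    """Route an AI-proposed action according to the oversight dial."""
    if level is Oversight.IN_THE_LOOP:
        return f"queued for human approval: {action}"
    if level is Oversight.ON_THE_LOOP:
        return f"executed (human monitoring): {action}"
    return f"executed autonomously: {action}"

result = dispatch("flag record 42 for review", Oversight.IN_THE_LOOP)
print(result)  # queued for human approval: flag record 42 for review
```

Turning the dial up or down is then a one-line configuration change rather than a redesign, which is what makes structured experimentation within guardrails practical.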
So what we think about the government is that it's where we get trusted answers from, right? We want to make sure the government isn't putting out answers it's not supposed to. What a sandbox allows you to do is build your guardrails around the information that you're supposed to be giving to your constituents.

I'll ask you the hard one; we have about a minute left. Over the horizon, it's impossible to predict based on what we've seen over the last several years regarding AI. But what are you encouraging agencies to anticipate now, about how they should prepare a year out, two years out, and so on?

Well, there are two different sides to it. Agencies should be prepared to do a lot more with the data that they have, so don't get rid of it; we're going to be able to start to mine it. But on the flip side, the adversaries also have AI, so we're going to have to make sure that our AI defenses are up to speed as well.

Seth, it's great to have you on the program. Thanks very much for your time today.