

Terry Gerton As we come together to talk about the potential role of AI in cyber, I think it’s helpful if we set a baseline and talk about the current cyber environment. The Trump 2026 budget that came out just recently proposed a reduction of almost 30% in CISA’s workforce and a cut of almost $500 million to its budget. What might that mean for CISA and for the national cybersecurity posture?
Sarbari Gupta CISA is our nation’s premier cyber defense agency, and it also coordinates the security and resilience of our critical infrastructure sectors. So it is a little concerning to hear about the cuts. But CISA has been doing a lot of excellent work in the area of AI over the past few years, providing standards and guidance, mostly on responsible ways to use AI and apply it to different use cases. In fact, NIST has also been publishing a lot of really good content related to the use of AI, including the NIST AI Risk Management Framework, a voluntary framework that organizations can adopt as they leverage AI. Regarding the proposed budget cuts and workforce reductions, I don’t have any specific information to offer feedback on, but I’m sure that, like any organization that suffers budget cuts or workforce reductions, CISA will feel an impact. The exact nature of that impact we’ll see as it comes along. But in general, regardless of what happens at CISA, cybersecurity threats aren’t going away, and with AI they’re becoming even more insidious. So the federal government does have to figure out how to leverage AI while defending itself from AI-enabled threat actors.
Terry Gerton Well, at the same time, Congress is considering legislation that would prohibit states from independently regulating AI, kind of reserving that capability to the federal government. In your estimation, is that a good or a bad approach?
Sarbari Gupta There are many benefits to centrally regulating certain technologies or industries. Take telecommunications or civil aviation: in many industries, federal-level regulation makes the most sense because those industries affect national security, our economic stability or the geopolitical landscape. The benefits of central regulation of AI are many, because it’s such an impactful technology. Consistency across all states is a good thing: a consistent regulatory framework means innovators and solution-builders don’t have to worry about complying with local nuances of regulation; instead they can build to the broader national-level framework. On the other hand, taking control completely away from the states for such an important technology makes it much harder to craft local regulations or requirements that address unique challenges or situations the states may face. So I think a balanced approach would be a broad, national-level framework for regulating AI, supplemented as necessary with state-level requirements or regulations to solve local problems.
Terry Gerton That’s a helpful recommendation, but in the meantime there’s quite a bit of uncertainty in this space. We’re seeing AI headlines every day: it’s now capable of deepfaking information so convincingly that humans really can no longer tell what is true and what is false. So when you think about the capability of AI, the unclear regulatory framework, and the importance of cybersecurity and resilience to cyber threats, and given your experience in this arena: how should we approach the inevitable integration of AI and cybersecurity? What’s the danger here? What’s your counsel?
Sarbari Gupta The integration of AI into cyber is inevitable, but I would also claim that it’s essential. We’ve all heard that cybersecurity threats are increasing and evolving constantly, and there’s a severe shortage of trained, qualified cybersecurity professionals. So how do you solve this problem? The work is vast, and there aren’t enough knowledgeable people to do it. Integrating AI can be a wonderful helping hand in addressing this challenge. However, we do need to deploy AI thoughtfully, deliberately and with the cautions you’re talking about. For example, one way to reap the benefits of AI while mitigating some of the risks, at least in the cybersecurity space, is to use AI to enhance and assist the human rather than replace the human decision-making engine: always keep the human in the loop. Organizations that deploy AI for their cybersecurity needs also have to implement rigorous AI governance frameworks: accountability structures, awareness of where their AI systems were developed and the full supply chain behind them, and periodic risk assessments. The other important point about using AI for unique use cases is data governance. Many times we feed tons of data to the AI, and we need governance around that data to ensure the AI uses accurate data from clean, verifiable sources. And then you asked about cautions. As I mentioned before, don’t be over-reliant on the AI. I read a recent article about a summer reading list with a bunch of book titles and authors; the funny thing was, I’m sure AI generated that list, because the authors were genuine but the book titles were nonexistent. That’s very embarrassing, right? You don’t want to make mistakes like that. Also watch out for bias and discrimination. As a company, we do a lot of recruiting; we’re a federal government contractor. We are really looking into how to leverage AI to assist our recruiting teams in vetting resumes and selecting the right kinds of candidates. However, we recognize there’s a strong possibility of bias here, and we need to be cautious of that, continuously monitor what’s happening, and take action as necessary.
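To make the human-in-the-loop idea concrete, here is a minimal Python sketch of what such a gate might look like. Everything in it (the Alert and Recommendation types, the recommend_action stub, the sample alert) is a hypothetical illustration rather than a description of any specific product: the model only suggests an action, and nothing executes until an analyst signs off.

```python
# Minimal human-in-the-loop sketch: the AI recommends, the analyst decides.
# All names here are hypothetical illustrations, not a real product's API.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str

@dataclass
class Recommendation:
    action: str        # e.g., "isolate host", "escalate to tier 2"
    rationale: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def recommend_action(alert: Alert) -> Recommendation:
    """Placeholder for an AI model that suggests, but never executes, a response."""
    return Recommendation(
        action="isolate host",
        rationale=f"Pattern in '{alert.description}' resembles lateral movement.",
        confidence=0.72,
    )

def human_approves(rec: Recommendation) -> bool:
    """The analyst, not the model, makes the final call."""
    print(f"AI suggests: {rec.action} ({rec.confidence:.0%} confidence)")
    print(f"Rationale: {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    alert = Alert(source="EDR", description="unusual SMB traffic from workstation-42")
    rec = recommend_action(alert)
    if human_approves(rec):
        print(f"Executing: {rec.action}")  # runs only after human sign-off
    else:
        print("Recommendation logged for review; no action taken.")
```

The design choice worth noting is that the approval step sits between recommendation and execution, so an over-confident or hallucinating model can never act on its own.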
Terry Gerton I’m speaking with Dr. Sarbari Gupta. She is the founder and CEO of Electrosoft. You just gave us a very long list of cautions. Right in the center of it was keeping a human in the loop because humans are going to have to do the data governance, the accountability frameworks, the risk assessment. How should agencies, especially our cybersecurity agencies, be thinking about this balance between humans and AI? How should they think about deployment?
Sarbari Gupta We have begun to build AI-based solutions for various use cases where we currently use human beings, manual labor, to do the work. But we feel the best way to start is to use AI in potentially high-value but low-risk use cases, so that it assists and augments the human rather than replacing him or her. As a company, we are building a variety of solutions to address specific use cases in the cyber domain, and we’re trying to do it in a way that minimizes or mitigates the risks involved. For example, one use case is an AI assistant for cyber analysts who are watching the network for incidents: when there’s some sort of an alert regarding an impending incident, the AI assistant suggests or recommends actions to be taken or further analysis to be performed, but the human finally makes the decision, not the tool. Similarly, we can have AI large language models analyze vast streams of data from server logs or network traffic to help us identify potential incidents, or even do threat hunting based on previous logs of actions that have occurred. We are also building a solution where we can use AI to vet and/or train cyber analysts to do their jobs effectively, because, as I said, it’s really hard to find highly qualified and capable cyber analysts; there’s a severe shortage. So how can we use AI to augment and help in that regard? Those are just a couple of use cases. In terms of mitigating risks, as I mentioned, keep the human in the loop for now; cyber is a critical arena, and you don’t want to make mistakes, right? Also make sure you’re familiar with the AI supply chain and securing it. And continuous monitoring: you can use AI, but you just can’t close your eyes after deploying it. You have to continuously monitor its effectiveness and the risks it creates, and see how to refine the AI so that you get the right results and mitigate the risks.
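As an illustration of the log-analysis assistant Gupta describes, below is a minimal Python sketch of LLM-assisted log triage. The query_llm stub, the prompt shape, and the severity labels are all assumptions made for the example; a real deployment would call an actual model endpoint and, as the sketch shows, validate the model’s output rather than trusting it blindly before an analyst acts on it.

```python
# Minimal sketch of LLM-assisted log triage with the analyst in the loop.
# query_llm is a stand-in for whatever model endpoint an organization uses;
# the prompt format and severity labels are illustrative assumptions.
import json

TRIAGE_PROMPT = """You are assisting a SOC analyst. For the log lines below,
return JSON: {{"severity": "low|medium|high", "summary": "...", "next_step": "..."}}
Log lines:
{logs}"""

def query_llm(prompt: str) -> str:
    """Stub for a real model call; returns canned JSON for this example."""
    return json.dumps({
        "severity": "high",
        "summary": "Repeated failed logins followed by a success from a new source.",
        "next_step": "Check whether MFA was satisfied; review the session activity.",
    })

def triage(log_lines: list[str]) -> dict:
    """Ask the model for a recommendation; the analyst reviews before acting."""
    raw = query_llm(TRIAGE_PROMPT.format(logs="\n".join(log_lines)))
    try:
        return json.loads(raw)  # validate model output instead of trusting it
    except json.JSONDecodeError:
        return {"severity": "unknown", "summary": raw, "next_step": "manual review"}

if __name__ == "__main__":
    sample = [
        "2025-06-01T03:14:07Z sshd: Failed password for admin from 203.0.113.9",
        "2025-06-01T03:14:12Z sshd: Failed password for admin from 203.0.113.9",
        "2025-06-01T03:15:02Z sshd: Accepted password for admin from 203.0.113.9",
    ]
    result = triage(sample)
    print(f"[{result['severity'].upper()}] {result['summary']}")
    print(f"Suggested next step (for analyst review): {result['next_step']}")
```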
Terry Gerton So much of your conversation there was about using AI to augment our own detection and performance in the cybersecurity space. How do we level the playing field as more of our adversaries use AI against us?
Sarbari Gupta We are all aware that the critical infrastructure sector in our country is largely owned and operated by private industry, so I think that’s what adversaries, foreign adversaries especially, will target. Cybersecurity is always about finding the weakest link in the chain and attacking through it. If you have a fence around your property, wherever the fence is lowest or broken is where the attackers are going to go, right? So to level the playing field, we need to use AI to improve what we do, but we also need to fix where the vulnerabilities are. I’d say start with organizations in the critical infrastructure sector and guide them as they adopt security frameworks such as the NIST Cybersecurity Framework. Our organization was quite involved in helping to mature it over the last decade or so that it’s been around. And I know that many of these critical infrastructure organizations are starting to use the NIST Cybersecurity Framework to assess where they stand in their cybersecurity maturity journey and to help them, progressively and in a focused way, mature their capabilities over time, so that at least they’re plugging the low-hanging fruit, the easy ways in, before they move forward.
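As a rough illustration of the maturity-assessment approach described above, here is a small Python sketch that ranks gaps against a target NIST Cybersecurity Framework implementation tier. The five core functions are real CSF categories, but the scores, the target tier, and the prioritization logic are invented for the example; a real assessment works at the subcategory level and is far more nuanced.

```python
# Minimal sketch of prioritizing gaps from a NIST CSF self-assessment.
# The five core functions are real; scores and target are hypothetical.
CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

def prioritize_gaps(scores: dict[str, int], target: int = 3) -> list[tuple[str, int]]:
    """Return CSF functions below the target tier, biggest gap (weakest link) first."""
    gaps = [(fn, target - scores[fn]) for fn in CSF_FUNCTIONS if scores[fn] < target]
    return sorted(gaps, key=lambda g: g[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical current-state tiers (1 = Partial ... 4 = Adaptive)
    current = {"Identify": 3, "Protect": 2, "Detect": 1, "Respond": 2, "Recover": 3}
    for fn, gap in prioritize_gaps(current):
        print(f"{fn}: {gap} tier(s) below target; address largest gaps first")
```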