
Nick Kathmann, CISO/CIO at LogicGate – Interview Series

Nicholas Kathmann is the Chief Information Security Officer (CISO) at LogicGate, where he leads the company's information security program, oversees platform security innovations, and engages with customers on managing cybersecurity risk. With over two decades of experience in IT and 18+ years in cybersecurity, Kathmann has built and led security operations at both small businesses and Fortune 100 enterprises.

LogicGate is a risk and compliance platform that helps organizations automate and scale their governance, risk, and compliance (GRC) programs. Through its flagship product, Risk Cloud®, LogicGate enables teams to identify, assess, and manage risk across the enterprise with customizable workflows, real-time insights, and integrations. The platform supports a wide range of use cases, including third-party risk, cybersecurity compliance, and internal audit management, helping companies build more agile and resilient risk strategies.

You serve as both CISO and CIO at LogicGate – how do you see AI transforming the responsibilities of these roles in the next 2-3 years?

AI is already transforming both of these roles, but in the next 2-3 years, I think we'll see a significant rise in agentic AI that has the power to reimagine how we deal with business processes on a day-to-day basis. Anything that would usually go to an IT help desk – like resetting passwords, installing applications, and more – can be handled by an AI agent. Another critical use case will be leveraging AI agents to handle tedious audit assessments, allowing CISOs and CIOs to prioritize more strategic requests.
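
To make the help-desk example concrete, here is a minimal sketch of the kind of routing layer an AI agent could sit behind. Every name in it (Ticket, handle_password_reset, and so on) is illustrative rather than a real vendor API, and in practice an LLM would perform the classification that the keyword lookup stands in for.

```python
# Minimal sketch of a triage layer for an agentic IT help desk.
# All names are hypothetical; an LLM would replace the keyword matching.
from dataclasses import dataclass

@dataclass
class Ticket:
    user: str
    text: str

def handle_password_reset(t: Ticket) -> str:
    # In practice: verify identity, then call the identity provider's reset API.
    return f"Password reset link sent to {t.user}"

def handle_app_install(t: Ticket) -> str:
    # In practice: check license entitlement, then push the app via MDM.
    return f"Install job queued for {t.user}"

ROUTES = {
    "password": handle_password_reset,
    "install": handle_app_install,
}

def triage(t: Ticket) -> str:
    """Route routine requests to automation; escalate everything else."""
    for keyword, handler in ROUTES.items():
        if keyword in t.text.lower():
            return handler(t)
    return "Escalated to human IT staff"

print(triage(Ticket("alice", "I forgot my password")))
print(triage(Ticket("bob", "Please install Slack")))
print(triage(Ticket("eve", "The VPN drops every hour")))
```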

With federal cyber workforces being cut back and a trend toward deregulation, how should enterprises approach deploying AI while maintaining a strong security posture?

While we're seeing a deregulation trend in the United States, regulations are actually strengthening in the European Union. So, if you're a multinational enterprise, expect to comply with global regulatory requirements around the use of AI. For companies operating only in the U.S., I see a learning period ahead when it comes to AI adoption. I think it's important for those enterprises to form strong AI governance policies and maintain some human oversight in the deployment process, making sure nothing goes rogue.

What are the biggest blind spots you see today when it comes to integrating AI into existing cybersecurity frameworks?

While there are a few areas I could point to, the most impactful blind spot is where your data is located and where it traverses. The introduction of AI is only going to make oversight in that area more of a challenge. Vendors are enabling AI features in their products, but that data doesn't always go where you expect – it can be sent along to the AI model or vendor behind the scenes. That renders traditional security tools like DLP and web monitoring effectively blind.

Most AI governance strategies are "paper tigers." What are the core ingredients of a governance framework that actually works?

When I say "paper tigers," I'm referring specifically to governance strategies where only a small team knows the processes and standards, and they are not enforced or even understood across the organization. AI is very pervasive, meaning it affects every group and every team. "One size fits all" isn't going to work. A finance team implementing AI features in its ERP is different from a product team implementing an AI feature in a specific product, and the list goes on. The core components of a strong governance framework vary, but IAPP, OWASP, NIST, and other advisory bodies have pretty good frameworks for determining what to evaluate. The most difficult part is figuring out when the requirements apply to each use case.
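
As a rough illustration of that "when do the requirements apply" problem, the sketch below maps hypothetical use cases to a small control catalog. The control names and descriptions are placeholders, not actual IAPP, OWASP, or NIST identifiers.

```python
# Illustrative use-case-to-control mapping; control labels are invented
# placeholders, not real framework identifiers.
CONTROL_CATALOG = {
    "data_residency": "Document where training and inference data live",
    "human_review": "Require human sign-off on high-impact outputs",
    "vendor_assessment": "Assess the AI vendor's security posture",
    "bias_testing": "Test outputs for bias before and after deployment",
}

USE_CASES = {
    "finance_erp_ai": ["data_residency", "vendor_assessment"],
    "product_llm_feature": ["bias_testing", "human_review", "data_residency"],
}

def applicable_controls(use_case: str) -> list[str]:
    """Return the control descriptions that apply to a given AI use case."""
    return [CONTROL_CATALOG[c] for c in USE_CASES.get(use_case, [])]

for uc in USE_CASES:
    print(uc, "->", applicable_controls(uc))
```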

How can companies avoid AI model drift and ensure responsible use over time without over-engineering their policies?

Drift and degradation are just part of using technology, but AI can significantly accelerate the process. If the drift becomes too great, corrective measures will be needed. A comprehensive testing strategy that looks for and measures accuracy, bias, and other red flags over time is necessary. If companies want to avoid bias and drift, they need to start by making sure they have the tools in place to identify and measure it.
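
One common way to put a number on drift is the Population Stability Index (PSI), which compares a baseline score distribution against a recent one. The sketch below is a minimal self-contained version; the usual alert thresholds (roughly 0.1 to watch, 0.25 to act) are industry rules of thumb, not something from the interview.

```python
# Hedged sketch of one common drift check: the Population Stability Index
# (PSI) between a baseline score distribution and a recent one.
import math

def psi(baseline: list[float], recent: list[float], bins: int = 10) -> float:
    lo = min(baseline + recent)
    hi = max(baseline + recent)
    width = (hi - lo) / bins or 1.0  # guard against a zero-range distribution

    def dist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small epsilon avoids log(0) on empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    b, r = dist(baseline), dist(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

baseline_scores = [0.2, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7]
recent_scores = [0.5, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]
print(f"PSI = {psi(baseline_scores, recent_scores):.3f}")  # large value -> drift
```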

What role should changelogs, versioned policy updates, and real-time feedback loops play in keeping AI governance agile?

While they play a role today in reducing risk and liability for the provider, real-time feedback loops hamper the ability of customers and users to perform AI governance, especially if changes to communication mechanisms happen frequently.

What concerns do you have around AI bias and discrimination in underwriting or credit scoring, particularly with "Buy Now, Pay Later" (BNPL) services?

Last year, I spoke to an AI/ML researcher at a large, multinational bank who had been experimenting with AI/LLMs across their risk models. The models, even when trained on large and accurate data sets, would make surprising, unsupported decisions to either approve or deny underwriting. For example, if the phrase "great credit" was mentioned in a chat transcript or customer communications, the models would, by default, deny the loan – regardless of whether it was the customer or the bank employee who said it. If AI is going to be relied upon, banks need better oversight and accountability, and those "surprises" need to be minimized.
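
The failure mode described here is testable. The sketch below is a hypothetical perturbation test: insert the incidental phrase and flag the model if that alone flips the decision. The stand-in model is deliberately buggy so the test fires; it is not the bank's actual system.

```python
# Sketch of a perturbation test for phrase-sensitive credit decisions.
# `score_application` is an invented, deliberately buggy stand-in model.
def score_application(notes: str) -> str:
    # Buggy stand-in: over-reacts to a phrase regardless of who said it.
    return "deny" if "great credit" in notes.lower() else "approve"

def perturbation_test(model, base_notes: str, phrase: str) -> bool:
    """Return True if adding the phrase alone changes the decision."""
    return model(base_notes) != model(base_notes + " " + phrase)

notes = "Customer asked about refinancing options."
if perturbation_test(score_application, notes, "I have great credit"):
    print("FLAG: decision is sensitive to an incidental phrase")
```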

What's your take on how we should audit or assess algorithms that make high-stakes decisions – and who should be held accountable?

This goes back to the comprehensive testing model, where it's necessary to test the algorithm/models continuously, as close to real time as possible. This can be difficult, since the model output may show desirable results that still require humans to identify the outliers. As a banking example, a model that flat-out denies all loans will have a great risk rating, because zero of the loans it approves will ever default. In that case, the organization implementing the model/algorithm should be held responsible for the model's outcomes, just as they would be if humans were making the decisions.
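
The deny-all example is worth working through, because it shows how a single metric can be vacuously perfect. In the invented example below, a model that approves nothing reports a 0% default rate among approved loans simply because that set is empty.

```python
# Tiny worked example of the degenerate case above. Data and models
# are invented for illustration.
applicants = [
    {"id": 1, "would_default": False},
    {"id": 2, "would_default": True},
    {"id": 3, "would_default": False},
]

def deny_all(applicant) -> bool:
    return False  # never approves anything

def approve_all(applicant) -> bool:
    return True

def default_rate_among_approved(model) -> float:
    approved = [a for a in applicants if model(a)]
    if not approved:
        return 0.0  # vacuously "perfect" -- and completely useless
    return sum(a["would_default"] for a in approved) / len(approved)

print("deny_all:", default_rate_among_approved(deny_all))        # 0.0
print("approve_all:", default_rate_among_approved(approve_all))  # 0.33...
```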

With more enterprises requiring cyber insurance, how are AI tools reshaping both the risk landscape and insurance underwriting itself?

AI tools excel at sifting through large quantities of data and finding patterns or trends. On the customer side, these tools will be instrumental in understanding the organization's actual risk and managing that risk. On the underwriter's side, these tools will be helpful in finding inconsistencies and organizations that are becoming immature over time.

How can companies leverage AI to proactively reduce their cyber risk exposure and negotiate better terms in today's insurance market?

Today, the best way to leverage AI for reducing risk and negotiating better insurance terms is to filter out the noise and distractions, helping you focus on the most important risks. If you reduce those risks in a comprehensive way, your cyber insurance rates should go down. It's all too easy to get overwhelmed by the sheer volume of risks. Don't get bogged down trying to address every single issue when focusing on the most critical ones can have a much larger impact.
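
A minimal sketch of that noise-filtering idea, assuming a simple likelihood-times-impact score (real GRC platforms use richer models, and the sample risk register is invented): rank the register and work only the top slice.

```python
# Rank risks by a simple expected-impact score and keep the top slice.
# Sample data and scoring are illustrative only.
risks = [
    {"name": "Unpatched edge VPN", "likelihood": 0.8, "impact": 9},
    {"name": "Stale test account", "likelihood": 0.3, "impact": 2},
    {"name": "Exposed S3 bucket", "likelihood": 0.6, "impact": 8},
    {"name": "Outdated wiki page", "likelihood": 0.9, "impact": 1},
]

def top_risks(items, n=2):
    """Rank by likelihood x impact and return the n risks worth working first."""
    return sorted(items, key=lambda r: r["likelihood"] * r["impact"], reverse=True)[:n]

for r in top_risks(risks):
    print(f'{r["name"]}: score {r["likelihood"] * r["impact"]:.1f}')
```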

What tactical steps do you recommend for companies that want to implement AI responsibly – but don't know where to start?

First, you need to understand what your use cases are and document the desired outcomes. Everyone wants to implement AI, but it's important to think of your goals first and work backwards from there – something I believe a lot of organizations struggle with today. Once you have a good understanding of your use cases, you can research the different AI frameworks and understand which of the applicable controls matter for your use cases and implementation. Strong AI governance is also business critical, both for risk mitigation and for efficiency, since automation is only as useful as the data it's fed. Organizations leveraging AI must do so responsibly, as partners and prospects are asking tough questions around AI sprawl and usage. Not knowing the answer can mean missing out on business deals, directly impacting the bottom line.

If you had to predict the biggest AI-related security risk five years from now, what would it be – and how can we prepare today?

My prediction is that as agentic AI is built into more business processes and applications, attackers will engage in fraud and misuse to manipulate those agents into delivering harmful outcomes. We have already seen this with the manipulation of customer service agents, resulting in unauthorized deals. Threat actors used linguistic tricks to bypass policies and interfere with the agent's decision-making.
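
One common mitigation for exactly that scenario, sketched below with invented names and limits, is to keep hard business rules in deterministic code outside the model: whatever an attacker talks the agent into proposing, a policy layer clamps it before execution.

```python
# Hedged sketch of a deterministic policy layer around an agent's proposed
# action. Names, fields, and limits are illustrative, not a real API.
MAX_DISCOUNT = 0.10  # business rule enforced outside the LLM

def enforce_policy(proposed_action: dict) -> dict:
    """Clamp agent-proposed deals to policy and log attempted violations."""
    discount = proposed_action.get("discount", 0.0)
    if discount > MAX_DISCOUNT:
        print(f"BLOCKED: agent proposed {discount:.0%}, capped at {MAX_DISCOUNT:.0%}")
        proposed_action["discount"] = MAX_DISCOUNT
    return proposed_action

# e.g. a customer-service agent talked into a 100% discount by prompt tricks:
print(enforce_policy({"sku": "PRO-PLAN", "discount": 1.0}))
```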

Thank you for the great interview, readers who wish to learn more should visit LogicGate.
