Published 06. Nov. 2023

AI-Powered Cybersecurity: Start With a Chief AI Officer

General

In an era of digitization where data and connectivity underpin every business decision, protecting your digital assets isn’t just important; it’s fundamental to business survival. AI offers the potential for a more resilient digital infrastructure, a proactive approach to threat management, and a complete overhaul of digital security.

According to a survey conducted by The Economist Intelligence Unit, nearly half (48.9%) of top executives and leading security experts worldwide believe that artificial intelligence (AI) and machine learning (ML) represent the most effective tools for combating modern cyberthreats.

However, a survey conducted by Baker McKenzie highlights that C-level leaders tend to overestimate their organizations’ readiness when it comes to AI in cybersecurity. This underscores the critical importance of conducting realistic assessments of AI-related cybersecurity strategies.

Dr. Bruce Watson and Dr. Mohammad A. Razzaque shared actionable insights for digital leaders on implementing AI-powered cybersecurity.

 
Dr. Bruce Watson is a distinguished leader in Applied AI, holding the Chair of Applied AI at Stellenbosch University in South Africa, where he spearheads groundbreaking initiatives in data science and computational thinking. His influence extends across continents, as he serves as the Chief Advisor to the National Security Centre of Excellence in Canada.
Dr. Mohammad A. Razzaque is an accomplished academic and a visionary in the fields of IoT, cybersecurity, machine learning, and artificial intelligence. He is an Associate Professor (Research & Innovation) at Teesside University.
 

The combination of AI and cybersecurity is a game changer. Is it a solution or a threat?

 

Bruce: Quite honestly, it’s both. It’s fantastic that we’ve seen the arrival of artificial intelligence that’s been in the works for many decades. Now it’s usable and is having a real impact on business. At the same time, we still have cybersecurity issues. The emergence of ways to combine these two things is exciting.

Razzaque: It has benefits and serious challenges, depending on context. For example, in critical applications such as healthcare or driverless cars, it can be challenging. Driverless cars were projected to be on the roads by 2020, but it may take another 10 years. It’s similar with the safety of AI; I think it’s hard to say.

 

What are your respective experiences in the field of cybersecurity and AI?

 

B: I come from a traditional cybersecurity background where it was all about penetration testing and exploring the limits of a security system. In the last couple of years, we’ve observed that the bad guys are quickly able to use artificial intelligence techniques. To an extent, these things have been commoditized. They’re available through cloud service providers, and there are open-source libraries with resources for people to make use of AI. It means the barrier to entry for bad actors is now very low. In practice at the university, as well as when we interface with industry at large, we incentivize people to bring AI techniques to bear on the defensive side of things. That’s where I think there’s a real potential impact.

It’s asymmetrical warfare. Anyone defending using traditional methods will be very quickly overrun by those who use AI techniques to generate attacks at an extreme rate.

R: I’m currently working on secure machine learning. I work with companies that are developing solutions that use generative AI for automated responses to security incidents. I’m also doing research on secure sensing, such as for autonomous vehicles. This is about making sure that the sensor data is accurate, since companies like Tesla rely on machine learning. If you have garbage in, you’ll produce garbage out.
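To make the “garbage in, garbage out” point concrete, a minimal Python sketch of the kind of sanity checking secure sensing implies might look like the following; the field names and plausibility thresholds are hypothetical examples, not a description of any specific system.

```python
# Minimal sketch of pre-ingestion sanity checks for autonomous-vehicle sensor data.
# Field names and plausibility thresholds below are hypothetical.

def is_plausible(reading: dict) -> bool:
    """Reject physically implausible readings before they reach an ML pipeline."""
    speed = reading.get("speed_kmh", -1.0)
    lidar = reading.get("lidar_range_m", -1.0)
    radar = reading.get("radar_range_m", -1.0)
    return (
        0.0 <= speed <= 300.0          # speed within a sane range
        and 0.1 <= lidar <= 200.0      # lidar distance in metres
        and abs(lidar - radar) <= 5.0  # independent sensors should roughly agree
    )

sensor_batch = [
    {"speed_kmh": 62.0, "lidar_range_m": 41.3, "radar_range_m": 40.8},   # plausible
    {"speed_kmh": 900.0, "lidar_range_m": 41.3, "radar_range_m": 40.8},  # impossible speed
    {"speed_kmh": 55.0, "lidar_range_m": 3.0, "radar_range_m": 120.0},   # sensors disagree
]
clean = [r for r in sensor_batch if is_plausible(r)]
print(f"kept {len(clean)} of {len(sensor_batch)} readings")  # kept 1 of 3 readings
```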

 

Given AI’s nature, is there a risk of AI developing itself as an attacker?

 

B: It fits well with the horror scenarios from science fiction movies. Everyone is familiar with Terminator, for example. We’re not yet at the point where AI can develop arbitrary new ways to attack systems. However, we’re not far from it either. Generative AI, when given access to a large body of malicious code, or even fragments of computer viruses, malware, or other attack techniques, is able to hybridize these things into new forms of attack faster than humans can. In that sense, we’re seeing a runaway process. But it is still stoppable, because these systems are trained on data that we provide them in the first place. At a certain point, if we set this loose to fetch code on the internet or let it be fed by bad actors, then we’ll have a problem where attacks start to dramatically exceed what we can reasonably detect with traditional firewalls or anomaly detection systems.

It scares me to some extent, but it doesn’t keep me awake at night yet. I tend to be an optimist, and that optimism is based on the possibility for us to act now. There isn’t time for people to sit around and wait until next year before embracing the combination of AI and cybersecurity. There are solutions now, so there’s no good reason for anyone to be sitting back and waiting for an AI-cybersecurity apocalypse. We can start mitigating now.

R: We use ChatGPT and other LLMs that are part of the generative AI revolution. But there are also tools out there for bad actors like FraudGPT. That’s a service you can buy to generate an attack scenario. The market for these types of tools is growing, but we’re not yet at a self-generating stage.

 

Are we overestimating the threat of AI to cybersecurity?

 

B: A potential issue is that we simply do not know what else is out there in the malware community. Or rather, we have some idea, as we interact with the malware and hacker communities as much as we can without getting into trouble ourselves, and we do see that they’re making significant advances. They’re spending a lot of time doing their own research using commodity and open-source products and manipulating them in such a way that they’re getting interesting and potentially dangerous results.

 

How can the good guys stay ahead of bad actors? Is it a question of money, or the red tape of regulations?

 

R: Based on my research experience, humans are the weakest link in cybersecurity. We’re the ones we should be worried about. IoT is responsible for about 25% of overall security concerns but only sees about 10% of investment. That’s a huge gap. The bad guys are always going to be ahead of us because they do not have bureaucracy. They are proactive, while we need time to make decisions. And yes, staying ahead is also a question of money, but it’s also about understanding the importance of acting promptly. This doesn’t mean forgoing compliance and regulation. It means we have to behave responsibly, like changing our passwords regularly.

B: It’s very difficult to advocate for getting rid of governance and compliance, because these things keep us honest. There are some ways out of this conundrum, because this is definitely asymmetrical warfare where the bad guys can keep us occupied with minimal resources while we need tremendous resources to counter them.

One of the ways around it is to do a lot of the compliance and governance using AI systems themselves. For monitoring, reporting, compliance – those can be automated. As long as we keep humans in the loop of the business processes, we will experience a slowdown.
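As a rough illustration of what automating a recurring compliance check could look like, here is a minimal Python sketch; the systems, policy thresholds, and rules are hypothetical examples rather than any specific framework, and in practice such checks would feed a reporting or ticketing system so humans only review exceptions.

```python
# Minimal sketch of an automated, recurring compliance check.
# Systems, policy values, and rules are hypothetical examples.

SYSTEMS = [
    {"name": "web-frontend", "tls_version": "1.3", "mfa_enabled": True,  "days_since_patch": 12},
    {"name": "legacy-crm",   "tls_version": "1.0", "mfa_enabled": False, "days_since_patch": 240},
]

POLICY = {
    "min_tls_version": "1.2",   # simple string compare is adequate for the 1.x values used here
    "require_mfa": True,
    "max_days_since_patch": 30,
}

def audit(system: dict) -> list[str]:
    """Return a list of policy violations for one system (empty list = compliant)."""
    findings = []
    if system["tls_version"] < POLICY["min_tls_version"]:
        findings.append("TLS version below policy minimum")
    if POLICY["require_mfa"] and not system["mfa_enabled"]:
        findings.append("MFA not enabled")
    if system["days_since_patch"] > POLICY["max_days_since_patch"]:
        findings.append("patching overdue")
    return findings

for system in SYSTEMS:
    issues = audit(system)
    status = "compliant" if not issues else "; ".join(issues)
    print(f"{system['name']}: {status}")
```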

The other way of countering the issue is to get together on the defensive side of things. There’s far too little sharing of information. I’m talking about Cyberthreat Intelligence (CTI). Everyone has recognized for a long time that we need to share when we have a breach or even a potential breach. Rather than going into secrecy mode where we disclose as little as possible to anyone, we should be sharing information with governments and partner organizations. That way, we actually gain from their defensive posture and abilities.

Sharing cyberthreat intelligence is our way of pulling the cost down and spreading the burden across a collective defence network.
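To illustrate what machine-readable sharing can look like, here is a minimal Python sketch that packages an indicator of compromise as a STIX 2.1-style JSON object; the IP address is a documentation placeholder, and the example is not tied to any particular sharing platform.

```python
# Minimal sketch of packaging a shareable indicator of compromise in a
# STIX 2.1-style JSON structure (standard library only; the IP below is a
# documentation placeholder, not a real threat indicator).
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected C2 server seen in phishing campaign",
    "pattern": "[ipv4-addr:value = '203.0.113.5']",
    "pattern_type": "stix",
    "valid_from": now,
}

# In practice this object would be pushed to partners or a sharing platform;
# here we simply serialize it.
print(json.dumps(indicator, indent=2))
```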

 

What is the first thing business leaders should do to prepare for what AI can do and will be used for?

 

R: When it comes to cybersecurity, technical solutions are only 10%. The other 90% is responsibility. Research shows that 90 to 95% of cybersecurity incidents could have been avoided if we had behaved responsibly. The second thing is that cybersecurity should be a consideration right from the start, not an afterthought. It’s like healthcare: you need to do what you can to avoid ever needing medical care in the first place. It’s the same here.

B: The number one thing is to make sure that your company appoints a Chief AI Officer. This may be someone who is also the CIO or CSO, but at the very least there should be board-level representation of AI and its impact on the business. Any business in the knowledge economy, financial industry, technology, as well as manufacturing and service industries – all are going to have to embrace AI. People may think it’s a fad, but AI will absolutely steamroll organizations that don’t embrace it immediately. That’s what I would do on day one. Within a couple of days after that, there must be a working group within the company to figure out how to roll out AI, because people will be using it whether openly or discreetly. AI is a tremendous force multiplier for running your business, but also a potential security threat through leakage of information out of the business. So you need a coherent rollout – in terms of information flow, your potential weaknesses, embedding it into corporate culture, and bringing it into cybersecurity. Any company that ignores these things is in peril.

 

Where does ethics come into this?

 

R: No one can solve the problem of AI or cybersecurity individually. It needs to be collaborative. The EU AI Act outlines four categories of risk – unacceptable, high, limited, and minimal. The EU doesn’t consider it an individual member state’s problem. In fact, it also has cybersecurity legislation that clearly states it supersedes member-state regulations. The UK, on the other hand, is slightly more pro-innovation. The good news is that it is focused on AI assurance research, which includes things like ethics, fairness, security, and explainability. So if businesses follow the EU AI Act and focus on AI assurance, they can lead with AI securely and responsibly.

B: There are a couple of leading frameworks for ethical and responsible AI use, including from the European Union as well as the UN. Many of the standards organizations have been working hard on these frameworks. Still, there is a sense that this is not something that can be naturally embedded within AI systems. On the other hand, I think it’s becoming increasingly possible to build limited AI systems whose only job is to watch for the ethical and responsible behaviour of either humans or other systems. So we are potentially equipping ourselves with the ability to have the guardrails themselves be a form of AI that is very restricted and conforms to the rules of the EU or other jurisdictions.

 

Which areas do you see as having the biggest potential for using AI within cybersecurity – for example identification, detection, response, and recovery?

 

B: I’m hesitant to separate them, because each of those is exactly where AI should be applied. It’s possible to apply them in tandem. AI has an immediate role in detection and prevention. We can use it to evaluate the security posture of an organization and make immediate suggestions and recommendations for how to strengthen it. Still, we know that at a certain point, something will get through. It’s impossible to defend against absolutely everything. But it is important to make quick moves in terms of defending and limiting damage, sharing information, and recovering. Humans are potentially the weak links there too. Humans monitoring a system will need time to assess a situation and find the best path forward, whereas an AI can embody all the relevant knowledge within our network and security operations centres and generate recommendations more quickly. We can have faster response times, which are key to minimizing damage.
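One common form of AI-assisted detection is unsupervised anomaly scoring over simple traffic features. The sketch below, using scikit-learn, is purely illustrative; the feature set and numbers are synthetic, not drawn from any real deployment.

```python
# Minimal sketch of AI-assisted detection: unsupervised anomaly scoring of
# network-flow features with scikit-learn. All values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: bytes sent, bytes received, distinct destination ports per minute.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5_000, 20_000, 3], scale=[1_000, 4_000, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_flows = np.array([
    [5_200, 21_000, 3],      # looks like routine traffic
    [900_000, 1_200, 250],   # large exfiltration-like upload touching many ports
])
scores = model.decision_function(new_flows)   # lower score = more anomalous
labels = model.predict(new_flows)             # -1 flags an anomaly for analyst triage

for flow, score, label in zip(new_flows, scores, labels):
    verdict = "ALERT" if label == -1 else "ok"
    print(f"{verdict}: score={score:.3f} flow={flow.tolist()}")
```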

 

What are some significant upcoming challenges and opportunities within the AI-powered cybersecurity domain in the next two years?

 

R: Definitely behaviour analysis – not only analysing systems but users as well, for real-time, proactive solutions. The systems we design, including AI, are for us. We need to analyse our own behaviour to ensure that we’re not causing harm.
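A very simple illustration of user-behaviour analysis is baselining each user’s typical login hours and flagging large deviations; the sample data and threshold below are hypothetical, and real systems would use many more signals.

```python
# Minimal sketch of user-behaviour baselining: flag logins far outside a
# user's historical pattern. Sample data and threshold are illustrative only.
from statistics import mean, pstdev

login_history_hours = {
    "alice": [8, 9, 9, 8, 10, 9, 8, 9],    # typically logs in during office hours
    "bob":   [22, 23, 22, 21, 23, 22, 23], # typically works evenings
}

def is_unusual(user: str, login_hour: int, threshold: float = 3.0) -> bool:
    """Return True if the login hour deviates strongly from the user's baseline."""
    history = login_history_hours[user]
    baseline, spread = mean(history), pstdev(history) or 1.0  # avoid divide-by-zero
    return abs(login_hour - baseline) / spread > threshold

print(is_unusual("alice", 9))   # False: matches her baseline
print(is_unusual("alice", 3))   # True: a 3 a.m. login warrants review
print(is_unusual("bob", 22))    # False: normal for him
```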

B: Another thing AI is used for is training, within cybersecurity but across corporations as well. There’s a tremendous amount of knowledge, and many companies have training for a wide variety of things. These can be fed into AI systems that resemble large language models. AI can be used as a vector for training. The other thing is a challenge around how quickly organizations will decide to be open with peer companies. Will you have your Chief AI Officer sit at a roundtable of peers from other companies to actually share your cybersecurity horror stories? The other significant challenge is related to change management. People are going to get past the novelty of ChatGPT as a fun thing to play around with and develop increasing fears about potential job losses and other threats posed by AI.

Sign up as a member of our Executive Business Network Aurora Live to connect with leading tech leaders across Europe all year round.