Training your AI and data models to detect threat actor cyberattacks

The power of AI lies in accessing structured and unstructured data, identifying patterns, presenting results, and learning from this routine, explains Bernard Montel, EMEA Technical Director and Security Strategist at Tenable. Cybersecurity vendors like Tenable are training CISOs on how best to apply these practices to their organisation.

Most vendors are investing significant resources in leveraging artificial intelligence (AI) to solve foundational security problems. Predicting the vulnerabilities likely to be targeted by attackers, crawling through millions of lines of log data to identify potential issues and automating the analysis of code to find security flaws before they go live are all being made smarter with AI.

Even with visibility of the vulnerable attack surface, it is difficult for security teams to conduct analysis, interpret findings and identify steps to be taken to reduce risk as quickly as possible. As a result, security teams are constantly in react mode, delivering maximum effort but often a step behind the attackers.

According to a commissioned study of 825 security and IT professionals conducted by Forrester Consulting on behalf of Tenable, nearly six in 10 respondents (58 per cent) say the security team is too busy fighting critical incidents to take a preventive approach to reduce their organisation’s exposure.

The vast majority (73 per cent) believe their organisation would be more successful at defending against cyberattacks if they could devote more resources to preventive cybersecurity.

AI has the potential to do just that. Cybersecurity professionals can use it to query for patterns in plain language and decide what actions need to be taken to reduce cyber risk.

Exploiting vulnerabilities

While AI is being used by threat actors to automate targeted and convincing attacks, the flaws these cyberattacks target have not changed. AI-driven cyberattacks exploit the same vulnerabilities as any other, so the foundation for defending against any style of cyberattack, whether AI- or human-driven, remains unchanged.

Enterprises must learn how to reduce threats from misinformation. Usually, it comes down to educating users to never trust and always verify, especially when the content demands some kind of response, such as clicking a link, providing information or sharing the content with others.

Three elements are core to a strong enterprise defence: robust multi-factor authentication to mitigate credential misuse; a focus on addressing the vulnerabilities and misconfigurations known to be targeted by threat actors; and good detection and response for anything that slips through.
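To make the second element concrete, here is a minimal Python sketch that cross-references a scanner's findings against CISA's Known Exploited Vulnerabilities (KEV) catalogue, one public source of flaws known to be actively targeted. The feed URL reflects the catalogue's published JSON feed at the time of writing, and the findings format and field names are assumptions for illustration rather than anything prescribed here.

# Minimal sketch: keep only the scan findings whose CVE appears in CISA's
# Known Exploited Vulnerabilities (KEV) catalogue. The scan_findings list is
# a hypothetical export from any vulnerability scanner.
import requests

KEV_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def load_kev_cves() -> set[str]:
    """Fetch the KEV catalogue and return the set of CVE IDs it lists."""
    data = requests.get(KEV_URL, timeout=30).json()
    return {entry["cveID"] for entry in data.get("vulnerabilities", [])}

def prioritise(scan_findings: list[dict]) -> list[dict]:
    """Return only the findings whose CVE is known to be exploited."""
    kev = load_kev_cves()
    return [f for f in scan_findings if f.get("cve") in kev]

if __name__ == "__main__":
    findings = [  # hypothetical scanner output
        {"cve": "CVE-2021-44228", "asset": "web-01"},
        {"cve": "CVE-2019-0000", "asset": "db-02"},
    ]
    for f in prioritise(findings):
        print(f"Fix first: {f['cve']} on {f['asset']}")

Anything that survives this filter is a flaw attackers are already exploiting in the wild and belongs at the top of the remediation queue.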

How AI works

Historically, AI was used to analyse data. Machine learning, an application of AI, uses mathematical models of data to help a computer learn without direct instruction. Deep learning, part of a broader family of machine learning methods, structures algorithms in layers to create an artificial neural network that can learn and make intelligent decisions on its own.
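As a hedged illustration of what "algorithms in layers" means in code, the Python sketch below trains a small multi-layer network on synthetic data using scikit-learn (an assumed dependency, not something named in this article); the network learns a decision boundary from labelled examples rather than from explicit rules.

# Minimal sketch of a layered model: a small multi-layer perceptron learns a
# decision boundary from labelled examples rather than hand-written rules.
# scikit-learn is an assumed dependency; the data here is synthetic.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 16 units each are the "layers" of the neural network.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")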

Today, with Generative AI, also a subset of AI, it is possible to learn about artefacts from data and generate innovative new creations that are similar to, but do not repeat, the original.

Harnessing the power and speed of Generative AI, such as Google's PaLM 2 on Vertex AI, OpenAI's GPT-4, LangChain and many others, it is possible to return new intelligent information in minutes.
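As a rough sketch of what that looks like in practice, the snippet below sends a plain-language question about two findings to OpenAI's chat completions API; the model name, prompt and findings are illustrative assumptions only, and any comparable generative model could be substituted.

# Minimal sketch: ask a generative model to prioritise findings in plain
# language. Requires the openai package and an OPENAI_API_KEY environment
# variable; the model name and the findings below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

findings = """
CVE-2021-44228 (Log4Shell) on internet-facing server web-01
CVE-2017-0144 (EternalBlue) on internal file server fs-03
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst. Prioritise remediation "
                       "and explain your reasoning in plain language.",
        },
        {"role": "user", "content": f"Which should we fix first, and why?\n{findings}"},
    ],
)

print(response.choices[0].message.content)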

Importance of data

Generally speaking, AI depends on the breadth and quality of data to provide clear and accurate insights. If you have unique data, then you are going to have unique intelligence guiding decisions. It is truly garbage in, garbage out, or gold in, gold out, depending on the source of the data being modelled. In addition, humans need to educate the model, teaching the engine what is correct and what is not. Tuning the algorithm reduces false positives and boosts the quality of responses.
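That feedback loop can be as simple as refitting a model on analyst-corrected labels. The Python sketch below, which uses scikit-learn on synthetic alert features, is only an assumed illustration of the "educate the model" step described above.

# Minimal sketch of "educating the model": analysts re-label a handful of
# alerts as false positives and the classifier is refit on corrected labels.
# scikit-learn is an assumed dependency; the alert features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical alert features: [events_per_min, distinct_hosts, off_hours_flag]
alerts = rng.random((200, 3))
labels = (alerts[:, 0] + alerts[:, 2] > 1.0).astype(int)  # initial, noisy labels

model = RandomForestClassifier(random_state=0).fit(alerts, labels)

# Analyst review: three alerts are re-labelled as false positives (0).
for idx in (5, 17, 42):
    labels[idx] = 0

# Refit on the corrected labels -- the tuning that reduces false positives.
model = RandomForestClassifier(random_state=0).fit(alerts, labels)
print("Predicted positives after retraining:", int(model.predict(alerts).sum()))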

To be truly effective in the practice of preventive cybersecurity, AI requires breaking down silos so defenders can take the data from their mix of multiple cybersecurity point solutions and use it to create something wholly new. Today, they have to rely on multiple systems and methods to pull all that data together: aggregation tools, internal data lakes and the old reliable multi-tabbed spreadsheet.
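In practice, breaking down those silos usually starts with normalising exports from different tools into a single view. The sketch below merges two hypothetical tool exports on a shared asset field using pandas (an assumed dependency); real exports would of course carry far more columns.

# Minimal sketch: pull findings from two hypothetical point solutions into one
# normalised view keyed on asset, so exposure can be assessed in one place.
# pandas is an assumed dependency; the data frames stand in for tool exports.
import pandas as pd

vuln_scanner = pd.DataFrame({
    "asset": ["web-01", "db-02"],
    "cve": ["CVE-2021-44228", "CVE-2019-0708"],
    "severity": ["critical", "high"],
})

edr_alerts = pd.DataFrame({
    "asset": ["web-01", "fs-03"],
    "alert": ["suspicious JNDI lookup", "lateral movement attempt"],
})

# An outer join keeps assets that appear in only one source.
combined = vuln_scanner.merge(edr_alerts, on="asset", how="outer")
print(combined.to_string(index=False))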

It is important to remember that, if you fail to educate the model correctly, then the model fails to deliver reliable results.

