Threat detection and response are among the most critical elements of cyber security — particularly in recent times, with statistics suggesting that cyber crime is up by 600% since the pandemic began.
Threat modelling in IT security is a procedure for optimising applications, systems or security processes by identifying potential threats, such as structural vulnerabilities or the absence of suitable safeguards, and mitigating them.
However, traditional threat models have some significant limitations that, especially on a large scale, may cause issues as security staff struggle to keep up with constant technological advancements.
Failure to intercept a threat may lead to a loss of services or a security breach that will negatively impact a business's staff and customers. That is why many IT security specialists are incorporating artificial intelligence (AI) and machine learning (ML) into their threat models and systems infrastructure to improve the efficiency and sophistication of their preventative strategy.
What is the traditional threat model?
Traditional threat modelling works by manually designing an assessment strategy to detect and interpret all risks posed to a business. This includes analysing the different threats facing a system — such as threat actors, threat vectors, threat types and threat scenarios.
There are several threat modelling methodologies for software developers to choose from, depending on the needs of their organisation. For example, the Process for Attack Simulation and Threat Analysis (PASTA) model is an attacker-focused, risk-centric methodology that uses attack simulation and threat analysis to identify and rank threats across seven stages of assessment.
These methodologies are essential for keeping on top of cyber threats by creating an abstraction of the system, profiles of potential attackers and a catalogue of the threats that may arise. However, to perform at a high level, designing and maintaining these models requires specialist training and experience, not to mention a great deal of time and effort.
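To make this concrete, the sketch below shows one possible way to represent such a threat catalogue in code, with each entry ranked by a simple likelihood-times-impact risk score. The field names and the scoring scale are illustrative assumptions, not part of PASTA or any other specific methodology.

```python
# A minimal sketch of the artefacts a traditional threat model produces:
# a catalogue of threats scored and ranked by risk. Field names and the
# likelihood x impact scoring are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Threat:
    actor: str        # who might attack (e.g. external criminal, insider)
    vector: str       # how they get in (e.g. phishing, exposed API)
    scenario: str     # what they do once inside
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

catalogue = [
    Threat("external criminal", "phishing email", "credential theft", 4, 4),
    Threat("insider", "excess privileges", "data exfiltration", 2, 5),
    Threat("external criminal", "unpatched web server", "ransomware deployment", 3, 5),
]

# Rank threats so the highest-risk scenarios are addressed first
for t in sorted(catalogue, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.actor} via {t.vector}: {t.scenario}")
```

Keeping and re-ranking a catalogue like this by hand is exactly the kind of ongoing maintenance that demands specialist time and attention as systems change.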
Additionally, as governments and other institutions seek to protect businesses and customers from cyber attacks, data privacy regulations become all the more important. It could take organisations days, weeks or even months to identify and understand the full impact of a cyber attack or data breach using existing traditional models, something that no business can afford at either a resource level or a compliance level.
Should they fail to protect themselves, organisations risk falling victim to data breaches as hackers continue to ramp up the frequency of their attacks. And considering that the average cost of a security breach for UK businesses is £2,670, a figure that rises with business size (before accounting for lost time and resources), it is more important than ever for every company to ensure their threat model is robust against attack.
Incorporating AI into a threat modelling strategy
New technology accelerates the growth in the number of weaknesses, threats and vulnerabilities facing an organisation's systems. So, developers of threat modelling software need to constantly identify and update the relevant code in response to rapidly changing technologies and the risks they present. Because these traditional threat models rely on human monitoring, it is becoming more and more likely that something will slip through the cracks, particularly on a larger scale.
This is why many IT security professionals are opting to incorporate AI into their threat models. By modelling the ‘normal’ behaviour of an organisation and its users, AI can estimate the probability that a particular network behaviour is associated with a specific user.
AI helps systems learn patterns that can reveal where threats and weaknesses lie. By flagging anomalous behaviour within network flows or event data that could signify a new danger, automated processes feed the ML models and ‘teach’ systems what needs analysing. In this way, AI becomes a vital part of the overall monitoring and threat management process.
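As an illustration of this idea, the sketch below trains a simple anomaly detector on a baseline of ‘normal’ network flows and flags new flows that deviate from it. The feature names and values are illustrative assumptions rather than fields from any particular monitoring product, and a production detector would need far richer data and careful tuning.

```python
# A minimal sketch, not a production detector: learn a baseline of "normal"
# network-flow behaviour and flag flows that deviate from it.
# Feature names (bytes_out, duration, port entropy) are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline flows: bytes sent, connection duration (s), destination-port entropy
normal_flows = np.column_stack([
    rng.normal(2_000, 300, 500),   # typical bytes_out
    rng.normal(5, 1, 500),         # typical duration
    rng.normal(1.0, 0.2, 500),     # low port entropy (few services used)
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# New observations: one ordinary flow, one resembling bulk exfiltration / scanning
new_flows = np.array([
    [2_100, 5.2, 1.1],        # looks like the baseline
    [250_000, 0.3, 4.5],      # huge transfer, short-lived, many ports touched
])
print(model.predict(new_flows))   # 1 = normal, -1 = anomalous
```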
Artificial intelligence and machine learning can be particularly beneficial for large organisations that need to scale up risk assessments across a broad and complex set of individual processes. They are also vital for identifying ‘zero-day’ events, enabling a shift from a ‘reaction and recovery’ defence monitoring strategy to an ‘identify and prevent’ strategy.
A combined approach to threat modelling
Automating risk assessment processes can improve the functionality of a threat model and optimise its response to the ever-changing risks posed to users. But although AI and ML can be used effectively to improve the quality and efficiency of threat modelling strategies, they are not without drawbacks.
Creating a threat model which is wholly dependent on AI can mean that issues within the system go unnoticed. For instance, an AI system may misclassify benign emails as spam or allow a malicious example to slip through undetected, and attacker-crafted inputs can reduce the confidence of a correct classification, especially in high-consequence scenarios.
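As a toy illustration of that last point, the sketch below trains a tiny bag-of-words spam filter and shows how padding a spam message with benign-looking words lowers the classifier's confidence that it is spam. The training messages and model are purely illustrative; real evasion attacks are considerably more sophisticated.

```python
# Toy illustration (not a real attack) of attacker-crafted input lowering a
# classifier's confidence: a spam message padded with benign-looking words
# is scored less confidently by a simple bag-of-words spam filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win a free prize claim now", "cheap meds free offer click now",        # spam
    "meeting agenda attached for review", "please review the quarterly report",  # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

plain = "claim your free prize now"
padded = plain + " meeting agenda report review attached quarterly"

spam_idx = list(clf.classes_).index("spam")
for text in (plain, padded):
    print(text, "->", round(clf.predict_proba([text])[0][spam_idx], 3))
```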
Therefore, rather than leaving your organisation’s cyber security entirely at the mercy of AI and ML technology (which is by no means infallible), these technologies should be adopted to support existing traditional threat models and achieve the best of both approaches.
Burning Tree can combine traditional threat modelling consultancy with best-in-class AI monitoring tools, such as CyGlass’ solutions, to ‘listen to’ and interpret network traffic. Get in touch today to discuss your company’s cyber security needs.