Artificial intelligence (AI) and machine learning (ML) applications have multiplied significantly over the past few years. As they grow in popularity, these tools are increasingly underpinning our everyday lives in a range of areas — from healthcare and finance to critical infrastructure and defence.

But how secure are they? Let us take a closer look…

Making human-like decisions

In simple terms, AI refers to algorithms, trained on large volumes of data, that can make human-like decisions faster and more efficiently than people. ML is best understood as a sub-category of AI: while AI is the broader concept, ML describes a more specific methodology whereby a computer programme learns to recognise patterns in data without being explicitly programmed.

These tools can be used in various ways, and organisations across a range of sectors have increasingly been incorporating them into their strategies to drive operational and cost efficiencies.

At present, they are used primarily to analyse large volumes of data from diverse sources and in different formats: extracting useful insights, linking data points and finding relationships across sources to support decision-making. Through this analysis, AI and ML can also identify and flag suspicious activity for review by human analysts.
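
As a hedged illustration of this flagging pattern, the sketch below uses an unsupervised anomaly detector to mark unusual records for analyst review. The data is synthetic and the tool choice (scikit-learn's IsolationForest) and contamination rate are assumptions for demonstration only.

```python
# A minimal sketch of flagging unusual activity for human review
# (synthetic data; assumes scikit-learn is installed).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # typical activity
unusual = rng.normal(loc=6.0, scale=1.0, size=(5, 4))    # outlying activity
events = np.vstack([normal, unusual])

# Train the detector and mark anomalies (-1) for a human analyst to review.
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = detector.predict(events)

for idx in np.where(flags == -1)[0]:
    print(f"Event {idx} flagged for analyst review")
```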

For most organisations, the adoption of AI and ML is still at a nascent stage, with projects deployed mainly for back-office functions. Yet customer-facing technologies, such as 'chatbots', are now also starting to emerge, reducing the time and resources needed to address customer issues. Through ML, these chatbots can quickly identify user intent and either recommend relevant content to resolve the query or transfer the customer to a support team for more complex matters.
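
The sketch below shows one way such intent routing can work: a simple text classifier predicts the user's intent and hands off to a support team when its confidence is low. The example intents, utterances and confidence threshold are illustrative assumptions, not a description of any specific chatbot product.

```python
# A minimal intent-classification and routing sketch (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: user utterances labelled with an intent.
utterances = [
    "I forgot my password", "reset my password please",
    "where is my order", "track my delivery",
    "I want to cancel my subscription", "please close my account",
]
intents = [
    "password_reset", "password_reset",
    "order_status", "order_status",
    "cancel_account", "cancel_account",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

def route(message: str, threshold: float = 0.6) -> str:
    """Return the predicted intent, or escalate when confidence is low."""
    probs = model.predict_proba([message])[0]
    best = probs.argmax()
    if probs[best] < threshold:
        return "escalate_to_support_team"
    return model.classes_[best]

print(route("how do I change my password?"))
```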

Acknowledging the inherent risks

Although AI and ML tools bring many benefits, there are also inherent risks to consider before adopting them.

For one, algorithms are prone to mistakes. Algorithmic bias caused by inaccurate or insufficient data can easily lead to poor decisions, as can inadequate training. Equally, algorithms can be manipulated by bad actors through deliberate, malicious interference.

Threats range in severity from minor attacks that cause inconvenience and temporary loss of productivity to sophisticated attack techniques such as evasion, data poisoning, trojans and backdoors, which can lead to significant disruption.
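
To make one of these threats concrete, the toy sketch below illustrates label-flipping data poisoning: corrupting a fraction of the training labels quietly degrades the accuracy of the resulting model. The dataset is synthetic and the attack is deliberately simplistic; real poisoning attacks are far subtler.

```python
# A toy illustration of label-flipping data poisoning (assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of the training labels and report test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned),
                     size=int(flip_fraction * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return clf.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"poisoned fraction {frac:.0%}: accuracy {accuracy_with_poisoning(frac):.2f}")
```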

Despite the risks, many organisations fail to scrutinise the security of their systems in their eagerness to accelerate their adoption of AI and ML. This oversight often points to a lack of understanding of the importance of securing AI systems against adversarial threats.

However, with AI used increasingly in key applications such as critical infrastructure, public safety and financial transactions, the prospect of an attack is deeply concerning. As such, firms must ensure effective governance over any use of AI.

Introducing a risk management framework

AI and ML systems are designed to improve efficiency; they are not a replacement for other security monitoring solutions. To keep systems secure, organisations should therefore apply appropriate risk management frameworks to their AI and ML applications.

Risk management should be considered in the context of lines of defence. The first line of defence is control over the IT systems, processes and people protecting the organisation. For example, this could involve identity and access management (IAM) protocols to identify, authenticate and authorise employees, as well as the hardware and applications they need to access.
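
As a hedged sketch of what authorisation in that first line of defence might look like, the snippet below checks whether an employee's role grants a requested action. The roles, permissions and function names are illustrative assumptions; a production IAM deployment would rely on a dedicated identity service rather than hand-rolled code.

```python
# A minimal role-based authorisation check (illustrative roles only).
ROLE_PERMISSIONS = {
    "data_scientist": {"read_training_data", "train_model"},
    "ml_engineer": {"train_model", "deploy_model"},
    "analyst": {"read_reports"},
}

def is_authorised(role: str, action: str) -> bool:
    """Return True only if the employee's role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorised("ml_engineer", "deploy_model")
assert not is_authorised("analyst", "train_model")
```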

The second involves identifying areas of risk and managing them through a designated process. Organisations should keep clear records of the data used to train ML models, the decision-making around their use, and how the systems are trained and tested. Network monitoring tools such as CyGlass and Fidelis can also help to identify areas of risk using machine learning, whilst SCADAfence can monitor internet of things (IoT), operational technology (OT) and SCADA systems.
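
One lightweight way to keep such records is a structured entry per trained model, as sketched below. The field names and values are illustrative assumptions; many teams achieve the same end with model cards or an ML metadata store.

```python
# A simple record of how and why a model was trained (illustrative fields).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrainingRecord:
    model_name: str
    dataset_name: str
    dataset_version: str
    training_date: datetime
    approved_by: str
    test_accuracy: float
    notes: str = ""

record = TrainingRecord(
    model_name="fraud-detector",
    dataset_name="transactions-2023",
    dataset_version="v1.2",
    training_date=datetime.now(timezone.utc),
    approved_by="risk-committee",
    test_accuracy=0.94,
    notes="Retrained after data quality review.",
)
print(record)
```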

The third and final line of defence is audit: an independent function that identifies issues not already known to the risk or control functions. In practice, this means building a 'human-in-the-loop' element into AI and ML projects, whereby decisions made by the application are only executed after review and approval by a human.
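
The sketch below shows one minimal shape this human-in-the-loop control could take: proposed decisions are held until a reviewer explicitly approves them. The class, function and decision names are hypothetical and for illustration only.

```python
# A minimal human-in-the-loop approval sketch (illustrative names).
from dataclasses import dataclass

@dataclass
class Decision:
    decision_id: str
    action: str               # what the AI system proposes to do
    model_confidence: float
    approved: bool = False

def review(decision: Decision, reviewer_approves: bool) -> Decision:
    """Record the human reviewer's verdict on a proposed decision."""
    decision.approved = reviewer_approves
    return decision

def execute(decision: Decision) -> None:
    """Only act on decisions that a human has explicitly approved."""
    if not decision.approved:
        raise PermissionError(f"Decision {decision.decision_id} not approved")
    print(f"Executing: {decision.action}")

proposed = Decision("D-001", "block suspicious transaction", 0.87)
execute(review(proposed, reviewer_approves=True))
```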

If you would like to find out more about developing your risk management framework and how our security transformation consulting services can help, please contact us today.