Thanks to the likes of Amazon, Google and Facebook, the term Artificial Intelligence (AI) has become more widespread than ever before. Defined as a branch of computer science dealing with the simulation of intelligent behaviour in computers, AI promises everything from self-driving cars and smart home appliances to robot “employees”.

As algorithms become increasingly complex, AI has evolved to become a crucial aspect of cyber security solutions – with the technology being applied across various fields including spam filtering, fraud detection and botnet detection.

By automating threat detection, AI can help to identify threats more efficiently and effectively than other software-driven approaches – ultimately easing the workload of employees. But relying too heavily on these technologies poses a number of risks, and if these are overlooked, AI algorithms could create a false sense of security…

A blessing or a curse?

Many companies in a range of industries have already started implementing AI initiatives to great effect. But although these technologies offer businesses a number of opportunities and benefits, they also come with substantial risks, including the malicious corruption and manipulation of data, devices and systems.

Just as companies can use AI to automate and improve business processes, hackers can break through defences using the same technology and automate the identification of vulnerabilities. So, for CIOs and CISOs who are responsible for corporate security, AI can prove to be both a blessing and a curse.

If hackers are able to fool learning-based systems, they will also be able to exploit the classification systems AI is built on. For example, they could learn to imitate people’s writing styles – enabling them to execute more realistic and effective phishing attacks. Or they could prevent the AI from learning altogether, using methods such as protocol tunnelling and altering log files to cover their tracks – meaning the AI won’t be able to identify a similar attack in the future.
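To make the idea of a “classification system” concrete, the sketch below shows a minimal learning-based message filter of the kind that underpins many spam and phishing defences. It is a hypothetical, simplified example – the training messages are invented and it assumes the scikit-learn library is available – not a description of any particular product. It also shows why such a model only “knows” the surface features of the text, and therefore why convincingly imitating a trusted writing style can push a malicious message past it.

```python
# Minimal, illustrative sketch of a learning-based phishing filter.
# Assumes scikit-learn is installed; the example messages are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: the model only ever sees surface features of the text,
# which is why imitating a trusted writing style can fool it.
messages = [
    "Urgent: verify your account password immediately",
    "Your invoice is attached, click here to claim your refund",
    "Meeting moved to 3pm, see updated agenda attached",
    "Quarterly figures look good, let's review them on Friday",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(messages, labels)

# A new message is scored purely on the words it contains.
test = "Please verify your password before Friday's meeting"
print(classifier.predict([test]))         # predicted label
print(classifier.predict_proba([test]))   # confidence per class
```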

Fighting AI with AI

Fortunately, there are ways for companies to protect their data and systems from AI-based attacks. Part of the answer lies in harnessing the power of AI itself to help strengthen existing cyber security processes and capabilities. Appropriate defences also need to be put in place for existing AI initiatives to ensure they are secure.

A combination of AI and blockchain technology could help develop a decentralised, cryptographically-sealed system log, for example – meaning hackers would not be able to use traditional methods of scrubbing their presence from log files. For high-risk targets such as banks, which may experience a number of cyber incidents every day, AI can also be used to automate triage and help separate truly dangerous threats from easily-addressed issues.
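To illustrate what a “cryptographically-sealed” log might look like in its simplest form, the sketch below chains each log entry to the previous one with a hash, so that retroactively editing or scrubbing an entry breaks every link that follows it. This is a simplified, hypothetical illustration rather than a real implementation – a production system would also distribute and replicate the chain across multiple nodes so that no single party could rewrite it.

```python
# Simplified sketch of a tamper-evident, hash-chained log.
# Each entry's hash covers its content and the previous entry's hash,
# so altering any historical entry invalidates everything after it.
import hashlib

def entry_hash(prev_hash: str, content: str) -> str:
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

def append(log: list, content: str) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"content": content, "prev": prev,
                "hash": entry_hash(prev, content)})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != entry_hash(prev, entry["content"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "user admin logged in from 10.0.0.5")
append(log, "config file /etc/passwd read")
print(verify(log))                         # True: chain is intact

log[0]["content"] = "nothing to see here"  # attacker scrubs an entry
print(verify(log))                         # False: tampering is detectable
```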

An intelligent response

By investing in AI, companies can hope to streamline their cyber security processes and reduce the workload for CIOs and CISOs – enabling them to prioritise risk areas and to intelligently automate manual tasks. Not only will this improve operational efficiency, but it will also help to reduce costs. What’s more, AI can facilitate intelligent responses to attacks based on shared knowledge and learning.

To find out more about AI-driven solutions and how you can leverage them to keep your business and data secure from cyber-attacks, contact Burning Tree on 01252 843014 or info@burningtree.co.uk.