There has been much excitement about Artificial Intelligence (AI) and its application in cyber security. Here at Burning Tree we are monitoring the technology closely to see where it can help our clients protect their business and data, and in which areas AI can improve information security.
Many organisations already have some AI cyber security capability in the form of machine learning tools. While these are not true AI solutions (a true AI would be capable of rewriting code to protect systems and shore up vulnerabilities), machine learning is a step closer to that scenario.
Currently, machine learning solutions are often used to monitor activity and take action when unusual behaviour is detected. These solutions ‘learn’ what is normal, identify what isn’t, and then, depending on predetermined rulesets, take remedial action. That might be to flag the issue as a priority for a security analyst, to block access for certain users, or any other automated action you choose.
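To make that pattern concrete, here is a simplified sketch in Python of the ‘learn what’s normal, flag what isn’t, act on rules’ loop described above. It is purely illustrative: the features, thresholds and actions are invented for the example (using scikit-learn’s IsolationForest as a stand-in anomaly detector), not any particular product’s implementation.

```python
# Illustrative only: "learn normal, flag unusual, act on a ruleset".
from sklearn.ensemble import IsolationForest

# Features per login event: [hour_of_day, MB_downloaded, failed_logins]
historical_activity = [
    [9, 12.0, 0], [10, 8.5, 0], [14, 20.0, 1], [11, 5.0, 0], [16, 15.0, 0],
]

# 'Learn' what normal looks like from past behaviour.
model = IsolationForest(contamination=0.01, random_state=0).fit(historical_activity)

def handle_event(event):
    """Score an event, then apply a predetermined ruleset."""
    score = model.decision_function([event])[0]  # lower = more anomalous
    if score < -0.1:
        return "block_access"   # automated remedial action
    if score < 0.0:
        return "alert_analyst"  # flag as a priority for a security analyst
    return "allow"

# A 3 a.m. login pulling 500 MB after five failed attempts should stand out.
print(handle_event([3, 500.0, 5]))
```

In a real deployment the model would be trained on far richer telemetry and the ruleset tuned to your organisation’s risk appetite; the point is simply that detection and first response can be automated.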
The key benefit of machine learning (and, ultimately, fully fledged AI cyber security solutions) is that data can be processed and analysed far more quickly than with many traditional tools. This means that breach detection times can be reduced significantly, minimising the potential disruption a breach could cause. It also means that your information security team can prioritise work much more effectively: they are alerted when an incident meets certain rulesets, but the rest of the time they can focus on more rewarding work.
AI and Identity & Access Management (IAM)
Of particular interest to our team at Burning Tree is AI’s application in IAM. We believe that IAM should be at the heart of your cyber security and data protection strategies: partly to protect businesses from malicious insiders, but also to guard against human error and breaches through social engineering, phishing and the like.
However, IAM does present some problems for organisations. While a ‘least privilege’ strategy – where users are given access only to the minimum resources needed to do their role – is best practice, it can be difficult to implement and manage. Hence the need for IAM tools and expert support.
One particular challenge is credentials being shared with the wrong people. That might be a user sharing a login with a colleague, or with an external actor. In the first case the reasons may be non-malicious: a user wants one-off access to a system, or has forgotten their password and asks to use a colleague’s account. But, as with sharing credentials with external parties, it could equally be a deliberate attempt by a malicious user to reach data and systems for which they do not have the right privileges.
Here AI could help. Instead of checking a user’s identity against predefined credentials, dynamic authentication tools could draw on visual or aural cues. AI solutions could go beyond biometrics and genuinely learn what a user looks like, sounds like and how they behave. This also has the potential to increase real-time security after a user has logged in. Is the person using the system the same person who logged in? Have they left their desk while someone else downloads files?
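As a thought experiment, that continuous check might look something like the sketch below: compare a rolling behavioural sample (keystroke cadence, face, voice) against the profile learned at enrolment, and lock the session if they drift apart. Everything here – the embedding, the threshold, the re-check interval – is an assumption for illustration, not a description of any real tool.

```python
# Hypothetical continuous (post-login) authentication check.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Behavioural/biometric embedding captured when the user enrolled.
enrolled_profile = np.array([0.12, 0.85, 0.40, 0.33])

def recheck_session(live_sample, threshold=0.8):
    """Is the person at the keyboard still the person who logged in?"""
    if cosine_similarity(live_sample, enrolled_profile) < threshold:
        return "lock_session"  # someone else may now be at the desk
    return "continue"

# Run every few minutes rather than only at login.
print(recheck_session(np.array([0.11, 0.83, 0.42, 0.30])))  # continue
print(recheck_session(np.array([0.90, 0.05, 0.10, 0.70])))  # lock_session
```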
The scope for AI could go beyond monitoring user activity on the system. As well as visual and aural cues, users could be assessed on other factors, such as their social media profiles. Have they recently started engaging with competitors online, following company pages and connecting with people within those organisations? AI could then determine whether their behaviour, such as downloading certain files, suggests a risk. Perhaps they are looking for a new job, or planning to sell data to a competitor.
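Again purely speculatively, signals like these could feed a simple risk score, with rules deciding the response. Every signal, weight and threshold below is hypothetical:

```python
# Hypothetical insider-risk scoring: combine weak signals, act on rules.
RISK_WEIGHTS = {
    "bulk_file_download": 0.4,
    "off_hours_access": 0.2,
    "new_competitor_connections": 0.3,  # e.g. from public social profiles
    "access_outside_usual_scope": 0.5,
}

def risk_score(signals):
    """Sum the weights of observed signals, capped at 1.0."""
    return min(1.0, sum(RISK_WEIGHTS.get(s, 0.0) for s in signals))

def triage(signals):
    score = risk_score(signals)
    if score >= 0.7:
        return "restrict_access_and_escalate"
    if score >= 0.4:
        return "flag_for_analyst_review"
    return "monitor"

# 0.4 + 0.3 = 0.7, enough to escalate under these invented thresholds.
print(triage(["bulk_file_download", "new_competitor_connections"]))
```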
We’re not quite there yet with AI, but it is certainly a hot topic in IAM circles. We’ll be exploring this, amongst other subjects, at our next Breakfast Briefing on board HQS Wellington on Wednesday 28th February. If you want to find out more, register here for this free event.