Artificial Intelligence (AI) is seemingly everywhere...
Unless you’re living under a rock, you’ve surely noticed the AI hype train ploughing through the media, promising nothing less than the transformation of every industry. While there are certainly exaggerated claims and inflated expectations around Artificial Intelligence right now, more of it is already reality rather than fiction than the general public may think. In fact, you’ve most likely used AI-based technology, knowingly or unknowingly, more than once today: authenticating with Face ID on your iPhone, searching on Google, reading subtitles on YouTube or browsing recommended items on Amazon. And, as with many other technologies, as soon as it works it’s no longer considered AI.
AI is also all over cybersecurity. While 2-3 years ago only a few vendors positioned themselves on AI, nowadays every player in the market claims to have AI built in. Customers’ opinions about AI in security vary widely: some believe that it will redefine cybersecurity and put an end to the cat-and-mouse game between attackers and defenders, while many skeptics dismiss everything related to AI as marketing buzz.
What is the true potential of AI in security? What applications are already available now? What are the limitations? What questions should I ask my cybersecurity vendors? How are attackers using AI? Will AI take our jobs and kill us all? Let’s try to find answers to these questions.
AI vs. ML vs. DL – and why now?
The terms Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) are sometimes used interchangeably, which is not quite correct. While AI is a very generic concept comprising all sorts of intelligent agents (including expert systems with lots of if/else statements in the code), ML systems are a subset of AI that learn from data. That is, ML algorithms are able to generalize and build models from examples without being explicitly programmed. Supervised learning (we have labeled data, e.g. “this e-mail is spam”), unsupervised learning (no labels) and reinforcement learning (an agent acts in an environment and tries to optimize the rewards of its behaviour) are the most common classes of ML. Within supervised learning, classification (telling whether an e-mail is spam or ham) and regression (predicting a numeric value) are the most common use cases. Artificial Neural Networks (ANN) are specific ML models with layers of connected neurons that are used in all three ML disciplines above. To put it simply, Deep Learning (DL) is the subset of ANNs with two or more hidden layers of neurons. All of the above are algorithms based on mathematics (and statistics), not magic (even though some vendors make it sound otherwise).
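To make this concrete, here is a minimal sketch of supervised classification (my own illustration, not from any particular product; the data is synthetic). The network has two hidden layers of neurons, which by the definition above already counts as deep learning:

```python
# Minimal sketch: supervised learning with a small neural network (scikit-learn).
# The data is synthetic; two hidden layers make this "deep" by the definition above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic labeled data: 1000 samples, 20 features, binary label (e.g. spam/ham).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An artificial neural network with two hidden layers of neurons.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)            # learn a model from labeled examples
print("accuracy:", model.score(X_test, y_test))
```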
Most of what’s used in cybersecurity today in terms of ML is, from an algorithmic point of view, not new – although there have been some relevant breakthroughs, especially in DL and Recurrent Neural Networks (RNN). The actual driver of the current AI/ML wave is rather the availability of lots of data to train the models and the corresponding computing power (incl. GPUs, TPUs and cloud-based infrastructures) to support the necessary calculations. While algorithms are important, more data is usually even more important: a good algorithm with tons of data is likely to outperform an excellent algorithm with limited training data to learn from.
There are currently no AI/ML algorithms that do well on most problems; they need to be applied on a use-case- and data-specific basis. Finding the right algorithm (and its meta-parameters) to solve a given problem is still both a research/engineering discipline and somewhat of an art. Automation of algorithm selection and meta-parameter search is in the making, though (e.g. Google’s AutoML). Let’s now look at some examples where AI is already applied in cybersecurity:
Evolution of anti-spam
Spam is one of the oldest problems in security, and anti-spam systems are an instructive example of how technology has evolved behind the scenes to keep e-mail communication productive and secure despite a spam rate of 85%.
Simple word filtering and IP blacklists were the basis of the first anti-spam systems in the late 90s. Later, more refined content filtering, sender reputation and partially also engagement metrics (recipient behaviour) were added to increase accuracy. Nowadays, ML models (such as Naïve Bayes) with hundreds of input features – making use of Natural Language Processing (NLP) to check grammar and even applying computer vision (does the e-mail look “phishy”?) – are able to find better definitions of and classification criteria for spam automatically than ever before.
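As a toy illustration (my own sketch with a handful of made-up e-mails – real systems train on millions of messages and far more features), a Naïve Bayes spam filter over word counts can be built in a few lines with scikit-learn:

```python
# Minimal sketch: a Naive Bayes spam filter over word-count features.
# The tiny training set below is made up and purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",            # spam
    "Cheap meds, limited offer, buy now",          # spam
    "Meeting moved to 3pm, see agenda attached",   # ham
    "Quarterly report draft for your review",      # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["Click here to claim your free offer"]))  # most likely 'spam'
```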
Malware classification 2.0
Due to evolving and self-modifying malware (polymorphism), signature-based malware detection has a catch rate below 50%. Modern anti-malware solutions therefore usually combine behaviour analysis and advanced static analysis to keep catch rates high. Behaviour analysis can use ML and non-ML algorithms to identify whether running malware shows suspicious activity at execution time, such as trying to access the registry or snooping in memory. While behaviour is usually a very good indicator of malicious intent, it can only be observed and detected at run-time (dynamic analysis). Ideally, we’d like to prevent malware from executing at all. Several state-of-the-art endpoint and breach detection solutions use ML models to classify malware before it executes (static analysis). In contrast to hard-matching a malware hash signature, ML models rely on thousands of features of a malware sample to make a statistical verdict. Some of these features might have been engineered by humans, others might be completely self-learned: an ML algorithm (by means of unsupervised feature learning with auto-encoders) can learn and weigh features itself and discover relevant properties of malware samples that even the best malware experts cannot find. Once a model has been trained on malware samples, it can be kept “fresh” by continuously feeding it new samples (online learning).
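To illustrate the idea of a statistical verdict from static features, here is a minimal sketch with entirely made-up features and samples (real products use thousands of features and huge training sets):

```python
# Minimal sketch: static malware classification over a hypothetical feature vector.
# The feature names and values below are illustrative, not from any real product.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each sample: [file_size_kb, num_imports, entropy, num_sections, is_signed]
X_train = np.array([
    [120,  45, 7.8, 9, 0],   # high entropy, unsigned -> malicious
    [900,  12, 7.9, 3, 0],   #                        -> malicious
    [450, 210, 5.1, 6, 1],   # ordinary signed binary -> benign
    [300, 150, 4.8, 5, 1],   #                        -> benign
])
y_train = [1, 1, 0, 0]       # 1 = malware, 0 = benign

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Statistical verdict for a new, unseen sample (no exact signature match needed).
new_sample = np.array([[200, 30, 7.5, 8, 0]])
print("malware probability:", model.predict_proba(new_sample)[0][1])
```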
Network traffic analysis
To detect attackers that have breached the perimeter and are now active in our network, traffic analysis is the right approach. Based on packet header and flow data (e.g. protocol, number of bytes, rates, counters), ML-powered traffic analysis systems can employ both supervised and unsupervised learning algorithms to classify and cluster attacks. Compared to anti-spam and anti-malware solutions, network traffic analysis usually also relies on anomaly detection algorithms. Conceptually and technically these work differently: they establish a baseline of “normal” behaviour and then detect deviations from it (in contrast to building a model of “bad” behaviour). Network detection also needs some form of memory built in, in order to retain the context of activities over time, which is crucial for identifying slow-moving attacks.
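A minimal sketch of the anomaly detection idea, assuming made-up flow features and an Isolation Forest as the detector (real systems use far richer features and per-host or per-user baselines):

```python
# Minimal sketch: unsupervised anomaly detection on (made-up) flow features.
# Each record: [bytes_sent, bytes_received, duration_s, packets]
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "normal" traffic observed on the network (synthetic).
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[5000, 20000, 2.0, 40],
                          scale=[1000, 5000, 0.5, 10],
                          size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# A flow that deviates strongly from the baseline (e.g. a slow, large exfiltration).
suspicious = np.array([[500000, 200, 600.0, 12000]])
print(detector.predict(suspicious))   # -1 flags an anomaly, 1 means "normal"
```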
Other applications profiting from AI: vulnerability testing, automation and authentication
AI can also already be found in security products other than the anti-spam, anti-malware and network detection solutions described above. Vulnerability assessment, deception/honeypot solutions, pen-testing, SOC/IR automation and user authentication are exciting areas where promising advances have been made thanks to the application of machine learning algorithms.
How attackers are using AI
Of course, attackers will also quickly adopt new technology as a means to their ends. For example, an attacker’s AI model might learn to generate samples of malware or phishing e-mails that go undetected by a particular security system. An attacker might even go further and create a model that automatically generates targeted attacks based on existing information about a company or user. The attacker’s model might learn to generate customized phishing e-mails based on past e-mail thread history and other available user-specific information to increase the likelihood of the user opening a malicious e-mail attachment.
On the network level, an AI-based attack might learn about traffic patterns and automatically engage in reconnaissance while blending in as much as possible with existing traffic to avoid detection. Generally, my take is that AI will allow attackers to leverage data to automate large-scale targeted attacks, which currently are still powered by human intelligence on the adversarial side. The next step in weaponising AI is probably the “productisation” of automated AI-attacks into toolkits allowing non-expert attackers to run automated, sophisticated and targeted attacks – think of script kiddies on steroids.
Limitations and challenges of AI/ML in cybersecurity
First of all, it’s important to keep in mind that ML algorithms have been developed as generic methods, not specifically for cybersecurity. Thanks to the abundance of data in the field, ML lends itself very well to cybersecurity, but the typical trade-off of ML algorithms between false positives and false negatives (precision vs. recall) makes their application in security tricky at times: on the one hand we don’t want to let an attack go unnoticed (false negative), on the other hand false positives create alert fatigue. The cost of a misclassification (false negative) in cybersecurity is usually higher than in other application areas.
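The trade-off can be illustrated with a toy example: the same model scores, thresholded differently, yield different precision/recall balances (the numbers below are made up):

```python
# Minimal sketch: tuning the alert threshold to trade false positives
# against false negatives (synthetic scores and labels).
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])                # 1 = real attack
scores = np.array([.1, .2, .15, .3, .4, .35, .55, .5, .7, .9])   # model output

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    print(threshold,
          "precision:", precision_score(y_true, y_pred),
          "recall:",    recall_score(y_true, y_pred))

# Lower thresholds catch more attacks (fewer false negatives)
# but raise more alerts on benign activity (more false positives).
```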
Another challenge in cybersecurity is the limited explainability of most ML algorithms: more often than not, we don’t know why an ML algorithm classified a sample as good or bad or what exactly triggered an alert, and we have to accept the algorithm as a black box. Explainability is helpful both in investigation and incident response. Choosing a model with lower performance but higher explainability might be the right trade-off in some cybersecurity use cases.
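As a toy sketch of what a more explainable model can offer (feature names are illustrative, and this is of course far easier than explaining a deep model), a tree-based classifier exposes per-feature importances:

```python
# Minimal sketch: inspecting which features drive a model's verdicts.
# Feature names are illustrative; data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["num_imports", "entropy", "file_size_kb", "num_sections"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```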
ML builds on complex models whose parameters are optimized based on the training data at hand. The performance of an algorithm on a new sample relies heavily on the amount and quality of that training data. By definition, ML algorithms are statistical learners and cannot yield guaranteed results. On new samples that are “sufficiently different” from the samples in the training data, ML models might behave unexpectedly, calling for measures to make sure that algorithms fail gracefully.
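One pragmatic way to fail gracefully is to abstain on low-confidence verdicts and escalate to a human analyst instead of auto-deciding. A minimal sketch, assuming a simple probabilistic classifier and a made-up confidence threshold:

```python
# Minimal sketch: abstain when the model is unsure instead of silently
# misclassifying samples that are far from the training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def triage(sample, min_confidence=0.9):
    proba = model.predict_proba([sample])[0]
    if proba.max() < min_confidence:
        return "escalate to human analyst"   # model is unsure -> don't auto-decide
    return "malicious" if proba.argmax() == 1 else "benign"

print(triage(X[0]))
```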
The combination of the above properties creates potential vulnerabilities and makes ML algorithms susceptible to so-called “adversarial machine learning”. Attackers might, for example, exploit online algorithms (systems that continuously take data input to update their models on the fly) by feeding them adversarial samples so that they learn “the wrong way” (also known as causative attacks or model poisoning). Or a savvy attacker might “hack” an algorithm by finding “blind spots” of the model (regions of the feature space with little training data) and presenting fabricated samples that get misclassified (similar to stickers on stop signs that trick the vision algorithms of self-driving cars). These attacks against an ML system can be mitigated to some extent by providing adversarial samples already during training – research on how to make ML systems generally more robust is still ongoing.
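To give an intuition for the evasion idea, here is a deliberately simplistic sketch against a linear model on synthetic data (real attacks face far more complex models and must keep perturbed samples semantically valid, e.g. still-working malware):

```python
# Minimal sketch of an evasion attack: nudge a "malicious" sample's features
# until the model's verdict flips. Purely illustrative, synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

sample = X[y == 1][0].copy()             # start from a sample labeled "malicious"
step = 0.1 * np.sign(model.coef_[0])     # move against the model's weights
while model.predict([sample])[0] == 1:
    sample -= step                       # small perturbations, one step at a time

print("evaded after perturbation:", model.predict([sample])[0] == 0)
```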
Last but not least, while the computationally expensive part is usually the training phase, algorithms that are also complex and resource-intensive at evaluation time cannot be implemented in environments with computing and memory constraints (e.g. embedded systems).
Questions to ask your security vendors about AI
Considering the AI hype (not every box that says AI on it truly contains AI) as well as the limitations of AI, it’s worth asking your vendor a couple of questions. You might want to ask what kind of AI/ML the solution contains, whether it also contains non-AI models, how models are generated, what data the model is trained on and how it’s updated (online/offline learning). Or, if you don’t care too much about the contents of the security “box”, at least test the solution in a proof-of-concept deployment to validate whether it works in your particular environment and configuration (which you should do with any security product anyway, I think).
Conclusion: AI is here, but it’s not magic
Like most technologies, AI/ML is no silver bullet and has its strengths and weaknesses that need to be carefully considered in each use case. In cybersecurity a promising approach is to combine AI/ML with more “classic” approaches into hybrid solutions leveraging the best of both worlds in order to increase performance.
Considering the continuing arms race and the very dynamic nature of cybersecurity, we can expect to see more applications of AI/ML and exciting innovations in the field over the coming years.
Christian Schwarzer
Christian Schwarzer was Co-CEO at AVANTEC from December 2016 to January 2023. His interests include technology and business model innovation.