
Machine Learning: A Tool and a Danger

There’s no question that cybersecurity has benefited from advancements in machine learning (ML) and artificial intelligence (AI).


These days, security practitioners are swamped with data that could indicate suspicious activity, but pinpointing a genuine threat can be likened to finding the proverbial needle in a haystack. ML and AI help security teams separate the wheat from the chaff, applying pattern recognition to network traffic to identify anomalous behaviours and other indicators of malware in the company’s data.


On the flip side of the coin, bad actors have discovered that AI and ML can benefit them too, and can be turned against the security community to further their own ends. For one, ubiquitous access to cloud environments makes it easy to get started with AI and to build powerful, highly capable learning models.


There are several ways in which bad actors are using ML to target entities in every industry. Firstly, threat actors are testing the success or failure of their malicious code against AI and ML tools. If they build their own ML environments, they can model their tactics, techniques, and procedures (TTPs) to discover the types of events and behaviours that defenders are looking for.


By watching and predicting how TTPs are detected by security solutions, threat actors can tweak and modify indicators and behaviours on a regular basis to stay one step ahead of the industry that depends on ML tools to root out attacks.
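
To make this concrete, here is a minimal, entirely hypothetical sketch in Python (using scikit-learn and synthetic data) of how an attacker might train a surrogate detector and inspect it to learn which behaviours defenders weight most heavily. The feature names, model choice, and data are assumptions for illustration, not a description of any real security product.

```python
# Hypothetical sketch: an attacker trains a surrogate detector on telemetry
# resembling what defenders collect, then inspects it to learn which
# behaviours weigh most heavily in detection. All data and feature names
# here are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["outbound_connections", "registry_writes",
            "child_processes", "bytes_exfiltrated"]

# Synthetic labelled telemetry: 0 = benign session, 1 = malicious session.
X_benign = rng.normal(0.0, 1.0, size=(500, len(features)))
X_malicious = rng.normal(1.5, 1.0, size=(500, len(features)))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 500 + [1] * 500)

surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank behaviours by how much the surrogate relies on them: the attacker now
# knows which indicators to vary or suppress in the next campaign.
for name, weight in sorted(zip(features, surrogate.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name}: importance {weight:.2f}")
```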


Another way cybercriminals are using ML for evil is by compromising AI with inaccurate data. ML and AI models depend heavily on data samples that are labelled correctly to build precise and repeatable detection profiles. By introducing other files that appear similar to malware, or by creating patterns of behaviour that turn out to be false positives, malefactors can fool AI and ML models into thinking attack behaviours are benign. Attackers can also poison AI models by sneaking in malicious files that the AI tools have deemed legitimate.
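
A minimal sketch of the label-flipping idea, assuming a toy scikit-learn classifier and synthetic data, shows how mislabelling a slice of malicious samples as benign can blunt a detector. None of this reflects any specific vendor's pipeline; the 30% flip rate and the data are assumptions.

```python
# Hypothetical sketch of label-flip poisoning: mislabelling a slice of
# malicious samples as benign degrades the detector's ability to catch
# real attacks. Synthetic data and a toy model only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n=1000):
    X_benign = rng.normal(0.0, 1.0, size=(n // 2, 4))
    X_malicious = rng.normal(1.5, 1.0, size=(n // 2, 4))
    X = np.vstack([X_benign, X_malicious])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

X_train, y_train = make_data()
X_test, y_test = make_data()

# Clean baseline: how well does the detector catch malicious test samples?
clean = LogisticRegression().fit(X_train, y_train)
print("clean detection rate:",
      clean.score(X_test[y_test == 1], y_test[y_test == 1]))

# Poisoned copy: flip 30% of the malicious training labels to "benign".
y_poisoned = y_train.copy()
malicious_idx = np.where(y_train == 1)[0]
flip = rng.choice(malicious_idx, size=int(0.3 * len(malicious_idx)),
                  replace=False)
y_poisoned[flip] = 0

poisoned = LogisticRegression().fit(X_train, y_poisoned)
print("poisoned detection rate:",
      poisoned.score(X_test[y_test == 1], y_test[y_test == 1]))
```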


Finally, hackers often attempt to map the existing and evolving AI models used by cybersecurity teams and practitioners. If they learn how these models work and what they do, cyber crooks can disrupt ML operations and models during their cycles, fooling the system into favouring the bad actors and their tricks. This knowledge can also help attackers evade known models entirely by subtly tweaking data to avoid detection based on known patterns.
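
As a rough, hypothetical illustration of evasion against a model an attacker has mapped or replicated, the sketch below nudges a malicious feature vector toward the benign side of a toy classifier's decision boundary until it is no longer flagged. The data, model, and step size are all assumptions made for the example.

```python
# Hypothetical sketch of evasion against a known (mapped) model: once the
# attacker can query or replicate the detector, small feature tweaks can push
# a malicious sample across the decision boundary. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),
               rng.normal(1.5, 1.0, size=(500, 4))])
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X, y)

sample = np.array([1.6, 1.7, 1.5, 1.8])   # clearly malicious feature vector
# Direction that moves the sample toward the "benign" side of the boundary.
direction = -model.coef_[0] / np.linalg.norm(model.coef_[0])

step = 0.0
while model.predict((sample + step * direction).reshape(1, -1))[0] == 1:
    step += 0.05                           # nudge the features a little further

evasive = sample + step * direction
print(f"original is flagged; variant {evasive.round(2)} is classified as "
      f"benign after a perturbation of size {step:.2f}")
```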


On the plus side, there are ways to prevent AI-focused cyber attacks too, although this is no easy task. Security teams need to make sure that the data used in their learning models carries accurate labels, and that the detection patterns derived from it are as accurate as possible. Security teams and vendors who are building AI-based detection models need to throw adversarial TTPs into the mix to help align pattern recognition with the tactics seen in the wild.
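
One way to picture that defensive step is the hedged sketch below, again on synthetic data: perturbed copies of known-malicious samples are kept labelled as malicious during training, so the hardened model still flags the "toned down" variants an attacker might try. The perturbation scheme and thresholds are assumptions for illustration only.

```python
# Hypothetical sketch of mixing adversarial TTPs into training: perturbed
# copies of known-malicious samples stay labelled as malicious, so the
# detector learns to flag the evasive variants too. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_benign = rng.normal(0.0, 1.0, size=(500, 4))
X_malicious = rng.normal(1.5, 1.0, size=(500, 4))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 500 + [1] * 500)

# Adversarial variants: malicious samples nudged toward benign behaviour,
# but still labelled malicious so the model does not learn to excuse them.
X_adversarial = X_malicious - rng.uniform(0.3, 0.8, size=X_malicious.shape)
X_aug = np.vstack([X, X_adversarial])
y_aug = np.concatenate([y, np.ones(len(X_adversarial), dtype=int)])

baseline = LogisticRegression().fit(X, y)
hardened = LogisticRegression().fit(X_aug, y_aug)

# Evaluate both models on evasive test samples drawn from the same
# "toned down" distribution an attacker would aim for.
X_evasive = rng.normal(0.9, 1.0, size=(200, 4))
print(f"baseline flags {baseline.predict(X_evasive).mean():.0%} of evasive samples")
print(f"hardened flags {hardened.predict(X_evasive).mean():.0%} of evasive samples")
```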


As with all new innovations, the more AI is used to defend against attacks, the more cybercriminals will seek to exploit it and undermine defenders' efforts to protect organisations. This is why training for security teams plays such a crucial role in helping security practitioners stay abreast of threat actors' tactics.
