There is little doubt that the increasing sophistication of cyberattacks has resulted in the need to adopt new approaches. Dr Alex Tarter, CTO at Thales Cyber and Consulting, discusses whether Artificial Intelligence has a key role to play in the battle against cybercriminals.
How have cyberattack techniques of recent years become increasingly difficult to detect?
Cyberattacks are getting harder to detect for two key reasons. Firstly, technology has evolved to more closely align with how a business operates. The adoption of mobile phones, tablets and IoT devices as part of Digital Transformation strategies has opened companies up to connect with more people outside their organisation.
Huge amounts of data are being shared with external parties as businesses turn to technology to boost their revenue streams. As a result, hackers are operating across a larger attack surface, and their activities are harder to detect, with much of the malicious traffic lost in this flood of legitimate data.
Alongside this, the threat community is growing rapidly as it becomes easier to launch reproducible, unsophisticated attacks. Hacking once took dedication and expertise, with zero-day attacks targeting previously unknown vulnerabilities.
Today, on the other hand, anyone can launch a DDoS attack, as hacking toolkits are freely available online and thousands of tutorials are accessible on social media. As the attack surface area expands, and thousands more hackers get in on the action, threat detection is becoming far more complex, with IT experts being forced to deal with protecting near-infinite amounts of data.
How can CISOs use AI to reduce the time to detect the cyberthreats facing their company?
Discovering an unknown cyberthreat is like trying to find one signal in a great deal of noise; most of that noise is legitimate activity, so spotting the small minority that is malicious is like finding a needle in a haystack.
AI can be most effective through two methods: unsupervised and supervised Machine Learning. Unsupervised learning involves humans asking the AI where their attention should be focused in order to find the ‘needle’.
Essentially, it is asked to search for anomalies within large, generic data sets. Anomalies aren’t always malicious, though, and hackers are generally quite effective at disguising their activities as legitimate, but these processes can help uncover new types of attack.
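To make this concrete, the anomaly search described above can be sketched with a toy example. The function name, the feature choice (hourly traffic volumes) and the threshold below are all illustrative assumptions, not a production detector; real deployments use far richer features and models.

```python
import statistics

def find_anomalies(values, threshold=3.0):
    """Flag indices whose value deviates from the mean by more than
    `threshold` standard deviations -- a crude stand-in for the
    unsupervised anomaly detection described above."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if stdev and abs(v - mean) / stdev > threshold]

# Hourly outbound-traffic volumes (MB) for one host; the spike at
# index 5 mimics a large, unusual data transfer.
traffic = [12, 14, 11, 13, 12, 900, 13, 12, 14, 11, 12, 13]
print(find_anomalies(traffic))  # → [5]
```

As the interview notes, such a flag is only a prompt for human attention: an analyst still has to decide whether the spike is an exfiltration attempt or, say, a legitimate backup job.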
Supervised Machine Learning algorithms are key in detecting known cyberattacks. Human experts can feed training data into the AI, labelled to represent both legitimate and known malicious behaviour.
As fresh data is regularly fed into the AI, Machine Learning algorithms get more efficient at pinpointing those specific threats. Human experts are great at spotting these tell-tale signs, but CISOs can’t always scale their teams effectively, considering the sheer amount of data they need to wade through.
AI is the key here, as it quickly identifies anomalous data, streamlining the detection process, with humans adding the final check when separating normal data from potentially malicious activity.
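As a rough illustration of this supervised, label-driven approach, the toy classifier below averages labelled feature vectors into per-class centroids and assigns new observations to the nearest one. The feature set, labels and numbers are all hypothetical; a real system would use a proper ML library and vetted training data, with an analyst reviewing every flag.

```python
def train_centroids(samples):
    """Average each label's feature vectors into a centroid --
    a toy stand-in for supervised training on labelled traffic."""
    sums, counts = {}, {}
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    return min(centroids,
               key=lambda lbl: sum((f - c) ** 2
                                   for f, c in zip(features, centroids[lbl])))

# Hypothetical features: [login failures/min, KB sent out, distinct ports]
labelled = [([0, 20, 2], "legitimate"), ([1, 35, 3], "legitimate"),
            ([40, 5, 1], "brute-force"), ([55, 8, 1], "brute-force")]
model = train_centroids(labelled)
print(classify(model, [48, 6, 1]))  # → brute-force
```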
Can AI algorithms be adapted to the specific operational context of each industry sector?
AI only does what you tell it to do, and it answers the questions you set it more efficiently over time. If our human experts ask generic questions, we’ll get generic responses, and vice versa. It would therefore be wrong to apply one industry-specific approach across all AI algorithms, as the data set required can vary based on a range of situational factors.
If attacks are unique to an industry, such as on Industrial Control Systems, the AI will need to be fed data specific to that context. However, the algorithm would need to be tweaked when dealing with data sets related to a sector within that industry, for instance in the commercial or HR side of the business.
You can only take a general approach when dealing with generic attacks, like spam filtering, which can be spotted by the AI no matter what context or industry these attacks operate within.
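Spam filtering is a good example of such a generic, context-free detector. The sketch below is a simplified Naive Bayes word model with crude add-one smoothing; the corpus, function names and scoring are illustrative assumptions, not a real filter.

```python
import math
from collections import Counter

def train_spam_filter(messages):
    """Count word frequencies per class -- the classic Naive Bayes
    setup, which works regardless of industry context."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def spam_score(model, text):
    """Sum of per-word log-likelihood ratios: positive = more spam-like."""
    counts, totals = model
    score = 0.0
    for w in text.lower().split():
        p_spam = (counts["spam"][w] + 1) / (totals["spam"] + 1)
        p_ham = (counts["ham"][w] + 1) / (totals["ham"] + 1)
        score += math.log(p_spam / p_ham)
    return score

corpus = [("win a free prize now", "spam"),
          ("claim your free reward", "spam"),
          ("meeting moved to friday", "ham"),
          ("quarterly report attached", "ham")]
model = train_spam_filter(corpus)
print(spam_score(model, "claim your prize") > 0)  # spam-like
```

Because the model only sees word statistics, the same code runs unchanged whether the mailbox belongs to a utility, a bank or a retailer – which is exactly why generic attacks allow a general approach.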
AI can sometimes be a contradiction in itself. While it needs to be specific, it only really becomes effective when used to spot anomalies in large data sets. For instance, if you look for anomalies in a small group of people, everyone has different habits, so behaviour that is perfectly normal may often look anomalous.
However, the algorithm also needs to be specific if you want to identify strong threat signals, but at the same time this makes the data set narrower. It’s about finding that balance and ensuring AI is fed enough specific data to function at an effective level across any sector.
How can AI help overcome the challenges facing CISOs on a daily basis?
AI’s main value is that it provides CISOs with the opportunity to more effectively deploy their human analysts against potential cyberattacks and data breaches. With this enlarged target surface area and a growing number of active hackers, our experts need to sort through more data than is humanly possible to detect all malicious activity.
AI is essentially there to augment the role of human security experts. Machine Learning algorithms are key in helping us pinpoint anomalous data – some of which is bound to be malicious – but the AI is rendered totally useless without a crack team in place to add technical and contextual awareness to the data.
CISOs and CTOs need to understand that while AI is here to stay, as data sets become more and more complex, it will not come to replace human experts.
Ultimately though, just because an organisation has an AI system in place, this does not mean it is secure. Countering cyberthreats is a constant game of cat and mouse between the cybersecurity experts and hackers.
Hackers always want to get the maximum reward from the minimum effort, tweaking known types of attack as soon as these are detected by the AI. CTOs therefore need to make sure that the AI system is routinely exercised and fed new data.
Think of it like this: If a company has an employee who is trusted across the board, that employee still needs to be regularly evaluated to ensure their performance levels remain high. In the same way as humans, AI needs to be reviewed and trained – and only once we are sure that the AI is performing can we trust it to take swift and decisive action.