Magazine Button
The AI arms race: Safeguarding against evolving threats in cybersecurity

The AI arms race: Safeguarding against evolving threats in cybersecurity

CybersecurityIndustry Expert

The contest between security experts and criminals remains a challenge of endurance, one that faces continuous evolution and demands strategic adaptation. Rob Pocock, Technology Director, Red Helix, explores the impact of AI-generated attacks and how to build a defence system that can fight against advanced cyberthreats.

Rob Pocock, Technology Director, Red Helix

Artificial Intelligence (AI) is by no means a new technology in the realm of cybersecurity. It has been around for years, built into security solutions to help prevent breaches by detecting anomalies in user behaviour. Recently, however, we have seen a change in the tide. Considerable advancements in the capabilities of AI, particularly that of Generative AI (GenAI) and Large Language Models (LLMs), have opened the door to new possibilities – for security teams and cybercriminals alike.

For those working to protect organisations, these developments mean improved detection and triaging of cyberattacks. More advanced AI can be used to better recognise patterns and relationships between data, spotting phishing attacks faster and clustering them together to identify campaigns.

Conversely, cybercriminals have been handed a new tool to increase the speed, sophistication and reach of their attacks. GenAI and LLMs can help them automate processes and support in the drafting of increasingly convincing emails and messages, written in a wide range of languages. It is no coincidence that widespread adoption of ChatGPT over January and February 2024 was also met with a 135% increase in ‘novel social engineering’ attacks. As the technology advances, so do the threats, with the likelihood of ever more convincing deepfakes beginning to pose more of a threat.

With AI technology showing no sign of slowing down, the race between cybersecurity professionals and criminals has stepped up a gear – both looking to use the evolving capabilities of AI to thwart the advances of the other.

The danger of AI-enabled attacks

GenAI tools like ChatGPT, Bard and LLaMA are all readily available and, in many cases, can be used completely free of charge. While these may have built-in restrictions to prevent them from being used for unethical purposes, the restrictions are far from airtight. There are several examples of these being bypassed, using certain ‘jailbreak’ prompts, and even a WikiHow page giving tips on how this can be achieved.

Not only does this put highly advanced, quick-to-respond GenAI in the hands of cybercriminals, but it also considerably lowers the bar to entry for cybercrime. In essence, it means that nearly anyone with nefarious intentions can craft malicious code or well-structured phishing or smishing messages at a rapid pace, opening the door to an increased number of potential attackers.

The use of GenAI in social engineering attacks also makes these types of threat harder to detect, creating more linguistically convincing messages that can be tailored to specific targets. Furthermore, the adaptive nature of AI means these threats will continuously evolve, bypassing conventional detection methods that rely on recognising patterns of attack.

The rise of deepfake technology, a by-product of advanced AI, presents an additional concern. It can be used to create highly realistic and convincing forgeries of audio and video content, with the potential to mimic individuals, or create scenarios that never occurred. The implications of this are profound, extending from personal security breaches to the manipulation of public opinion and political discourse – with the upcoming UK and US elections being particular risk areas.

Prompt injection and data poisoning

In addition to malicious actors using AI and LLM’s in the development of attacks, there is the threat of criminals exploiting vulnerabilities innate to the tools themselves.

One method of achieving this is through prompt injection attacks, which are used to ‘trick’ LLMs into behaving in an unexpected way. Similar to the aforementioned ‘jailbreaking’, these attacks use carefully worded prompts to get the system to do something it isn’t meant to; however, the goal is to insert malicious data or instructions inside the AI model itself. As more enterprises adopt LLMs, the risk from malicious prompt injection grows.

Data poisoning is another attack aimed at AI tools, which targets the foundation of AI development – its reliance on learning from the data it is fed. By deliberately contaminating the data pool, criminals aim to skew the AI’s learning process, leading to erroneous or biased outputs – with the potential to significantly disrupt decision-making, interrupt supply chain operations and erode customer trust.

Getting ahead of the adversaries

The rise in the scale and complexity of cyberattacks may have been given a boost by the AI toolbox, but it’s not all doom and gloom. Cybersecurity solutions are also benefitting from advanced technology to make our defences ever stronger, and simple actions can make a huge difference in our ability to protect ourselves.

Central to any security strategy is the empowerment of the human firewall, through comprehensive cyberawareness training. An organisation’s staff are its first line of defence, and they must be equipped with the knowledge and tools to recognise and respond to threats, both existing and emerging. Ongoing training and testing, alongside regular updates on the latest threats, are crucial in maintaining security at this level.

The dynamic nature of AI-driven threats also necessitates regular security audits, to identify and address new vulnerabilities introduced by AI technologies. They also play a vital role in ensuring compliance with the latest data protection regulations, which are becoming increasingly stringent as the digital landscape evolves. These audits should also be used to vet suppliers, particularly those providing any AI-powered solution, to ensure they have their own robust security measures in place and that due diligence is paid in the creation of any tools.

Monitoring existing security measures and ensuring they are fully updated is another important step in strengthening defences. AI is helping to improve threat detection systems, and many of these upgrades will be provided through software updates. Keeping on top of these can improve the tool’s ability to analyse vast datasets for unusual activity, automate threat responses, and continuously learn and adapt to new attack patterns.

An ongoing battle

The race between security professionals and criminals is, and always will be, an ongoing challenge. As the technologies we employ continue to become more advanced, the responsibility of those tasked with safeguarding an organisation against cyberthreats continues to grow. Staying informed, training staff, conducting regular audits and updating cyber defences are more than just beneficial – they are essential components for staying ahead in this race.

These threats highlight the imperative for a ‘secure by design’ approach in AI development. As AI and LLMs continue to be integrated into various sectors, their allure as targets for malicious activities will inevitably rise, making robust security not just a necessity, but a cornerstone of responsible AI development and deployment.

Click below to share this article

Browse our latest issue

Intelligent CISO

View Magazine Archive