ESET research: Is AI and ML hype putting businesses at greater risk?


New ESET research has revealed that the hype around the role Artificial Intelligence and Machine Learning play in cybersecurity could be putting businesses at greater risk.

New research from global cybersecurity firm ESET reveals that the recent hype surrounding Artificial Intelligence (AI) and Machine Learning (ML) is deceiving three in four IT decision makers (75%) into believing the technologies are the silver bullet to solving their cybersecurity challenges. The hype, ESET says, is causing confusion among IT teams and could be putting organisations at greater risk of falling victim to cybercrime.

In the past year, the amount of content published in marketing materials, the media and on social media about the role of AI in cybersecurity has grown enormously. In response to this growing hype, ESET surveyed 900 IT decision makers across the US, UK and Germany on their opinions of, and attitudes towards, AI and ML.

The findings showed that US IT decision makers are most likely to consider the technologies as a panacea to solve their cybersecurity challenges, compared to their European counterparts – 82% compared to 67% in the UK and 66% in Germany. The majority of respondents said that AI and ML would help their organisation detect and respond to threats faster (79%) and help solve a skills shortage (77%).

Juraj Malcho, Chief Technology Officer at ESET, said: “It is worrying to see that the hype around AI and ML is causing so many IT decision makers – particularly in the US – to regard the technologies as ‘the silver bullet’ to cybersecurity challenges. If the past decade has taught us anything, it’s that some things do not have an easy solution – especially in cyberspace where the playing field can shift in a matter of minutes. In today’s business environment, it would be unwise to rely solely on one technology to build a robust cyberdefence.

“However, it is also interesting to see such a gap between the US and European respondents. The concern is that over-hyping this technology may be causing technology leaders in the UK and Germany to tune out. It’s crucial that IT decision makers recognise that, while ML is without a doubt an important tool in the fight against cybercrime, it must be just one part of an organisation’s overall cybersecurity strategy.”

Miscommunication leads to misunderstanding

While many IT decision makers regard AI and ML as the silver bullet, the reality is that the majority of respondents have already implemented ML in their cybersecurity strategies, with 89% of German respondents, 87% of US respondents and 78% of UK respondents saying their endpoint protection product uses ML to protect their organisation from malicious attacks.

What’s more, many respondents stated that there is confusion over what the terms ‘AI’ and ‘ML’ mean, with just 53% of IT decision makers saying their company fully understands the differences between the two.

Malcho said: “Sadly, when it comes to AI and ML, the terminology used in some marketing materials can be misleading and IT decision makers across the world aren’t sure what to believe. The reality of cybersecurity is that true AI does not yet exist, and while the hype around the novelty of ML is misleading, the technology itself has actually been around for a long time. As the threat landscape becomes even more complex, we cannot afford to make things more confusing for businesses. There needs to be greater clarity, as the hype is muddling the message for those making key decisions on how best to secure their company’s networks and data.”

Understanding the limitations

ML is invaluable in today’s cybersecurity practices, particularly for malware scanning. In this context, it refers to technology built into a company’s protective solution that has been trained on large volumes of correctly labelled clean and malicious samples, so that it learns to distinguish the good from the bad. With this training, ML can quickly analyse and identify most potential threats to users and act proactively to mitigate them.
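To make that idea concrete, the minimal sketch below shows how a supervised classifier can be trained on labelled clean and malicious samples and then used to score an unseen file. The features, dataset and model are hypothetical stand-ins invented for illustration; they do not describe ESET's actual detection pipeline.

```python
# Minimal sketch: a classifier fit on feature vectors from correctly
# labelled clean (0) and malicious (1) samples, then used to score new
# files. Features and data here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical static features (e.g. file size, entropy, API-call counts)
X_clean = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
X_malicious = rng.normal(loc=1.5, scale=1.0, size=(500, 8))
X = np.vstack([X_clean, X_malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = clean, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Probability that an unseen sample is malicious
print(clf.predict_proba(X_test[:1])[0, 1])
```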

However, it’s important for businesses to understand ML’s limitations. For example, ML still requires human verification for the initial classification of samples, to investigate potentially malicious ones and to reduce the number of false positives. In addition, ML algorithms have a narrow focus and play by the rules, whereas hackers are continually learning and breaking them. A creative cybercriminal can introduce scenarios that are completely new to an ML system and thereby fool it. Machine Learning algorithms can be misled in numerous ways, and hackers can exploit this by crafting malicious code that ML will classify as a benign object.
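As a purely illustrative toy example of that weakness, the sketch below nudges a hypothetical "malicious" feature vector towards values typical of clean samples until a simple model mislabels it as benign. The model, features and perturbation steps are placeholders chosen for clarity, not a description of any real evasion technique or product.

```python
# Toy illustration of evasion: shift a "malicious" feature vector
# towards the clean region until a trained model labels it benign.
# All features and the model are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Train a toy detector on hypothetical clean (0) / malicious (1) features
X = np.vstack([rng.normal(0.0, 1.0, (500, 4)), rng.normal(2.0, 1.0, (500, 4))])
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X, y)

sample = np.array([2.0, 2.0, 2.0, 2.0])  # clearly "malicious" features
step = 0.25
for _ in range(40):
    if model.predict(sample.reshape(1, -1))[0] == 0:
        break
    # Move against the model's weight vector to lower the malicious score
    sample -= step * np.sign(model.coef_[0])

label = "benign" if model.predict(sample.reshape(1, -1))[0] == 0 else "malicious"
print("classified as:", label)
print("evasive feature vector:", sample)
```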

Malcho said: “We’ve been using Machine Learning as part of our weaponry against cybercriminals since 1995 – and it’s simply not enough on its own. By educating themselves about ML’s limitations, businesses can take a more strategic approach to building a robust defence. Multi-layered solutions, combined with talented and skilled people, will be the only way to stay a step ahead of the hackers as the threat landscape continues to evolve.”
