Is AI the answer to the SOC’s problems?

Artificial Intelligence has opened many doors for cybersecurity professionals, but it has also caused the attack surface to widen. Geert van der Linden, Cybersecurity Business Lead at Capgemini Group, discusses how SOCs can work smarter to lighten the load with technology and whether AI is the answer for all the SOC’s troubles.

Cybersecurity professionals have had a tough time of it recently. Their services have never been more in demand, but equally the cyberthreat landscape has never been more varied and sophisticated.

The last 18 months have seen the rise of double extortion ransomware and record-breaking DDoS attack volumes. Any ambitious professional loves a challenge, of course, but there are limits, and recent research has shown that three-quarters of security operations staff are feeling the strain.

With the cyberskills shortage a perennial issue – recent research puts the deficit at 3.1 million – clearly Security Operations Centres (SOCs) are not going to be able to simply throw manpower at the problem. They’ll have to work smarter, not harder, to lighten the load, and Artificial Intelligence may be the technology capable of doing the heaviest lifting.

Leveraging AI in the SOC

Against this backdrop of stressed-out, time-poor and stretched SOC teams, AI is already being used to better manage workloads and alert volumes. This makes sense, as the bread-and-butter tasks of the SOC – threat identification, tracking and remediation – are exactly the sort that AI excels at. They’re rote, mundane and time-consuming: a perfect fit for an AI.
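To make that concrete, here is a minimal sketch of ML-assisted alert triage, assuming each alert can be summarised as a small numeric feature vector. It uses scikit-learn’s IsolationForest to surface the most anomalous alerts first; the feature set, threshold and library choice are illustrative assumptions, not a description of any particular SOC product.

```python
# Minimal sketch of ML-assisted alert triage.
# Assumption: each alert is reduced to a small numeric feature vector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per alert: [events_per_minute, distinct_dest_ports, bytes_out_mb]
historical_alerts = np.array([
    [5, 2, 0.4],
    [7, 3, 0.6],
    [6, 2, 0.5],
    [4, 1, 0.3],
])

# Train on historical, mostly-benign alert data.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(historical_alerts)

incoming = np.array([
    [6, 2, 0.5],      # looks routine
    [250, 60, 40.0],  # unusually noisy -- probably worth an analyst's time
])

# decision_function: the lower the score, the more anomalous the alert.
for alert, score in zip(incoming, model.decision_function(incoming)):
    priority = "escalate" if score < 0 else "routine"
    print(f"alert {alert.tolist()} -> score {score:.3f} ({priority})")
```

The value here is prioritisation: the model doesn’t replace the analyst’s judgement, it simply decides which alerts receive that judgement first.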

With AI automating the majority of this workload, some of the pressure is taken off employees. This is crucial in a landscape that is lighter on skilled cybersecurity professionals and facing an ever-increasing deluge of attacks.

In addition to improving the quality and speed of analysis, AI technologies can also perform threat modelling and impact analysis – activities which have previously relied on the expertise of highly skilled cybersecurity professionals. In fact, AI has advanced so much that it can provide insights that were previously impossible through solely manual analysis. For instance, some systems can identify when threats could result in attacks on the corporate network and shut down particular services or subnets based on activities determined to be potentially harmful. Others can scan vast amounts of code and automate the process of discovering vulnerabilities.
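A hedged sketch of that first capability – score-driven containment – might look like the following. It assumes an upstream detection model has already assigned each subnet an anomaly score between 0 and 1; block_subnet() stands in for whatever firewall or SDN API a real SOC would call, and the threshold is purely illustrative.

```python
# Sketch of score-driven containment. Assumption: an upstream model has
# already scored each subnet; block_subnet() is a hypothetical hook into
# the network controller, not a real API.
from typing import Dict

ISOLATION_THRESHOLD = 0.9  # illustrative; real SOCs tune this carefully

def block_subnet(cidr: str) -> None:
    # Placeholder: in production this would call the firewall/SDN controller.
    print(f"[action] isolating {cidr} pending analyst review")

def contain(scores: Dict[str, float]) -> None:
    for cidr, score in scores.items():
        if score >= ISOLATION_THRESHOLD:
            block_subnet(cidr)
        else:
            print(f"[ok] {cidr} score {score:.2f} below threshold")

contain({"10.0.1.0/24": 0.35, "10.0.9.0/24": 0.97})
```

Note that the human stays in the loop: automated isolation buys time, but an analyst still reviews the decision.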

Can AI be the answer for all the SOC’s troubles?

While AI can speed up – and scale up – the data analysis process, it’s not the ultimate solution. Regardless of developments in the technology, AI simply cannot replace cybersecurity experts.

People often perceive AI as eventually replacing humans. While that can’t be ruled out in the future, for now AI still has far too many issues for that to be a reality. By its very nature, AI is another system that can be targeted, which increases the attack surface available to cybercriminals. Such attacks can confuse the underlying Machine Learning model, bypassing the very controls the system is meant to enforce. For example, generative adversarial networks (GANs) can be used to fool facial recognition security systems or subvert voice biometric systems.
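The examples above involve GANs; a simpler way to see the underlying adversarial-example principle is the fast gradient sign method (FGSM), sketched below against a toy classifier. This is a generic illustration of how a small, deliberately crafted perturbation can flip a model’s prediction – not the specific attacks mentioned above.

```python
# FGSM (fast gradient sign method) sketch: a tiny, targeted nudge to the
# input can change a model's decision. Toy model and data, illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # stand-in for a face/voice sample
true_label = torch.tensor([0])

# Take the loss gradient with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# Step the input in the direction that most increases the loss.
epsilon = 0.5  # perturbation budget; illustrative value
x_adv = x + epsilon * x.grad.sign()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

With a large enough epsilon the prediction typically flips, which is exactly why the model itself becomes part of the attack surface.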

As we increasingly rely on AI – and therefore give it more responsibility – ethics and privacy need to be taken into account. As AI becomes more advanced, these concerns will only become more important, which adds further weight to the argument against a solely AI-powered SOC.

Ensuring fair AI for all

Despite all its advancements, AI can only ever be as good as the data it is fed. In order to maximise accuracy, AI systems require huge volumes of high-quality training data. But at the same time, this raises the potential of ethical lapses and intrusions of privacy. How much would you let an AI know about you? Would you sacrifice privacy for security?

There are also regulatory hurdles to consider. For example, data held by financial services firms or medical sciences organisations is subject to heavier regulatory scrutiny than data in other industries. Should those organisations therefore sacrifice cybersecurity by having less effective AI, when the sensitive data they hold is the most tempting for attackers?

If SOCs are to gain the trust of the customers they’re hired to protect, they need to be completely transparent about how much and what kinds of data they’re feeding to their AI programs – and militant in ensuring those lines are not overstepped.

While AI will likely transform the SOC over the next five to 10 years, security professionals shouldn’t start job hunting. In fact, the future success of AI will rely hugely upon the human element.

The old game of cat-and-mouse may come to an end, but security professionals will have a new purpose: ensuring their most powerful weapon is being used judiciously and, most importantly, ethically.
