Corin Imai, Senior Security Advisor at DomainTools, outlines the current perception of automation and AI among cybersecurity professionals across the globe, according to the findings of a recent study, and contextualises the predictions that emerged.
The speed at which technology has evolved over the past few decades is enough to justify the fear, deeply embedded in some individuals, that ‘robots will steal their jobs’. Indeed, many professions have already been rendered obsolete by automation: have you ever thought of becoming a switchboard operator? Or a car manufacturer? Probably not.
The cybersecurity sector, however, has seen a 0% unemployment rate since 2011 and unfilled vacancies are predicted to grow to 3.5 million by 2021. Automation and AI may actually relieve IT security professionals of mundane, time-consuming tasks and allow them to focus on more stimulating challenges.
Security professionals across the globe seem to think that this might be the case, a recent study conducted by Ponemon Institute and DomainTools revealed. The responses of over 1,400 IT security professionals from the UK, the US and APAC region painted a picture for the future of AI and automation, and provided a snapshot of the current adoption rates and most common use cases.
The first thing that the results confirmed was that the hiring and retention of competent candidates are as much of a challenge in the UK as they are in the US and APAC region, with roughly 70% of cybersecurity professionals in each of the geographical areas admitting that their IT security function is typically understaffed.
As a result, in the next three years, automation and Machine Learning are expected to take over certain tasks, such as malware and log analysis. Indeed, of the organisations that already rely on automation tools, about half use them precisely for malware and log analysis. These tend to be repetitive, time-consuming tasks, and automating them leaves staff more time to perform what they consider to be higher functions, such as focusing on serious vulnerabilities and overall network security (68%).
Considering that the average IT security function receives 367 security tickets in one day, of which an average of 132 are critical or severe, it should not come as a surprise that security professionals would welcome automation to resolve at least some of them. A total of 40% of teams, in fact, typically spend between 51 and 100 staff hours per day just triaging and investigating alerts, and 19% spend over 100 hours.
This, added to the fact that roughly half of respondents admit that their organisation does not have the resources to monitor threats 24/7, indicates that automation could not only free up employees’ time, but actually cover a portion of the attack surface that would otherwise remain unaddressed.
In the future, other, more complicated functions could be assigned to Artificial Intelligence, such as threat hunting (cited by 56% of respondents) and incident response (40%).
A total of 65% of respondents, however, think that AI and automation are not currently able to perform tasks that IT security staff can perform and 51% think they will never replace human intuition and hands-on expertise.
The most pressing question the report asked was whether security professionals thought they were going to lose their job to automation. A total of 39% admitted that they are concerned about that eventuality, but others seemed convinced that automation will change the nature of the skills gap rather than make cybersecurity staff redundant.
In fact, when asked how automation will impact employment in the security sector, 65% of respondents stated that automation will either have no effect on hiring, or that it will actually increase the need for highly specialised personnel within IT security functions, who will need to have a different, specific kind of training to be able to operate the highly sophisticated technologies that will be introduced.
The biggest obstacle to the adoption of automation, according to 54% of respondents, remains the fact that many organisations still rely on legacy IT environments, which do not support automation tools.
As far as Artificial Intelligence is concerned, the IT security community was divided, with 51% of security professionals believing that Artificial Intelligence will never make human expertise obsolete in the cybersecurity sector, and 49% saying that scenario is possible in the future.
Interestingly, the question that most revealed regional differences concerned confidence in AI and automation as efficient tools to monitor threats and relieve IT security functions of their intense workload: 76% of US professionals said that they trust AI as a security tool, followed by the UK (69%) and APAC region (63%).
Security professionals also seem to fear the misuse of AI technology, which they say should be regulated precisely to prevent it falling into the wrong hands (51%). A substantial portion of respondents (49%) was also concerned about the potential harm to consumers that would result from not regulating Artificial Intelligence.
The key, it seems, will be finding a balance between what is automated and what is left to the capable hands of security professionals.
Indeed, automation and AI are not yet capable of performing all the tasks that a trained professional can, although there seems to be reasonable confidence among experts in the sector that technology will progress in that direction.
If that time comes, the organisations that succeed will be those that best understand where a machine can make their staff’s job easier, and where not to try to speed up a process that requires the expertise and intelligence of an expert.