Are humans more effective than automation tools for discovering security vulnerabilities?

HackerOne, a leading hacker-powered security platform, has announced the results of a study that revealed the majority of security professionals believe humans remain more effective than machines when it comes to securing digital assets.

The study, which was carried out at Infosecurity Europe in June 2019, revealed that 53% of security professionals believe the outsider perspective – hackers and pen testers – is the most effective technique for discovering unknown security vulnerabilities, while only 27% believe vulnerability scanners and automation are the most reliable.

“I’m actually surprised that there are still a large number of people who would put their trust solely into automated scanners,” said Laurie Mercer, a Security Engineer at HackerOne. “The singularity is not here. Automation is no match for human intelligence.”

The study also revealed that over one in 10 (12%) organisations have suffered a recent security breach as a result of an unpatched vulnerability, and 79% of respondents said they thought unknown security vulnerabilities posed a serious threat to their organisation.

“We are all vulnerable and we all suffer the consequences. Let’s help each other out. There is a huge community of trustworthy people who are naturally talented at finding unpatched and unknown security vulnerabilities. The best way to prevent getting hacked is to try to get hacked by people you trust. Together, we can build a safer Internet,” said Mercer.

Companies globally are increasing their trust in ethical hackers to help secure their websites, applications and hardware. HackerOne has also recently announced its Top 20 Bounty Programs that hackers work on to find vulnerabilities. Based on HackerOne’s 1,400 customer programs, this list was curated using public details available in the HackerOne directory of programs, with rankings based on the total amount of each organisation’s cumulative bounties awarded to hackers over the life of their program. It also includes accolades for those programs that placed in the top five for fastest response time, fastest time to bounties paid, most hackers thanked, most vulnerability reports resolved and more. Hackers are attracted to programs that are responsive, pay well and pay quickly.

We heard from a number of experts who discussed the subject further.

Myke Lyons, CISO at Collibra

There’s a lot of hype surrounding automated vulnerability identification and management, and it’s true that it has great potential – automation combined with Artificial Intelligence (AI) and Machine Learning (ML) can help security teams reduce their overhead and resource costs, as well as spot new vulnerabilities much faster than they were able to before. This means security teams are immediately alerted when there are any changes to their existing assets and can quickly patch any vulnerabilities to maintain compliance with regulations and ensure data remains uncompromised.

For all its benefits, however, security leaders should be wary of hopping on the bandwagon and implementing full-scale automation simply for the sake of it. While I’m a big advocate of automation, deploying an entire ‘set-it-and-forget-it’ solution may actually do more harm than good if security teams don’t know what processes they want to automate. Vulnerability management entails a multitude of complex variables, and making critical decisions related to security often requires an in-depth understanding gained through experience and full network visibility. In these cases, automation may not be enough – for instance, while software can automate penetration testing, a security professional will still need to review the results for false positives and negatives. Companies are implementing automated security solutions aimed at attacking weaknesses without fully understanding those weaknesses – in these cases, a human is needed to investigate the root cause, contextualise the situation and organise resource prioritisation.

There has been so much flux in the security space, which is why getting various perspectives from different security experts is so important – there is only so much automation can do to keep pace and identify these advanced threats. That is why humans will always need to be involved in the process – whether it’s an executive or a specific domain expert, leveraging human knowledge and getting guidance from security professionals who are very ‘current’ in their roles is critical, because unlike an automation software, they understand how security trends are moving and evolving.

With so much emphasis around automation, people might think that security engineers are on the way out, but the reality is quite the opposite. Security infrastructure will become increasingly complicated and attackers will only become more sophisticated, which is why human decision making is only becoming more important. There’s no doubt that automation can be extremely useful, from helping ‘red teams’ automate penetration testing, to helping ‘blue teams’ spot threats using AI based on common attack signals. At the end of the day, however, it is a human who develops the malware and targets a victim – with this in mind, security engineers know that human creativity, intuition and contextual understanding will be the strongest defences, and that they shouldn’t leave their security fortress exclusively in the hands of automated software.

Mike Ahmadi, CISSP, DigiCert

First of all, it is important to understand that security vulnerabilities are an infinite space problem. An exploit based on a security vulnerability is a misuse case. Essentially, it is using the system or application in an incorrect way. As it turns out, the correct ways to use a system or application are finite in nature and the incorrect ways to use a system are infinite in nature.

While some have had a degree of success designing software-based systems to correctly detect system misuse, through the application of disciplines such as whitelisting, such systems are prone to errors because they are not always capable of separating misuse from ‘noise’. This leaves them open to attacks such as fuzzing, which can fool a system into believing an input is valid while being off by as little as a single byte of data – which may be enough to bring a system down (and I have witnessed this many times).
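To make the single-byte point concrete, below is a minimal mutation-fuzzing sketch in Python. The parse_message target and its message format are hypothetical stand-ins for any input handler; the technique shown – flipping one byte of a known-good input at a time and watching for unexpected failures – is the essence of what fuzzing tools do at scale.

```python
# Minimal single-byte mutation fuzzer (illustrative sketch; parse_message
# is a hypothetical stand-in for any system input handler).

def parse_message(data: bytes) -> None:
    """Hypothetical parser: expects a 4-byte magic header then a payload."""
    if data[:4] != b"MSG1":
        raise ValueError("bad header")
    if len(data) < 6:
        raise ValueError("truncated")
    # ... real parsing logic would live here ...

def fuzz_single_bytes(seed: bytes) -> None:
    """Flip every byte of a known-good input, one position at a time,
    and report inputs that crash the parser in unexpected ways."""
    for pos in range(len(seed)):
        for value in (0x00, 0xFF, seed[pos] ^ 0x01):
            mutated = seed[:pos] + bytes([value]) + seed[pos + 1:]
            try:
                parse_message(mutated)
            except ValueError:
                pass  # rejected cleanly: the validator did its job
            except Exception as exc:
                # Any other exception is the interesting case: an input
                # off by one byte that drives the system into a failure.
                print(f"crash at byte {pos}, value {value:#04x}: {exc!r}")

if __name__ == "__main__":
    fuzz_single_bytes(b"MSG1\x00\x05hello")
```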

Other methods include straight malware detection through the use of binary analysis or software signature matching. This is how most virus scanning and malware detection software works.
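As a rough illustration of signature matching, the sketch below hashes a file and checks its contents against a small database of known-bad digests and byte patterns. The signatures here are placeholders invented for the example; production scanners ship curated signature feeds and richer rule languages such as YARA.

```python
# Toy signature scanner (illustrative; the "signatures" below are
# placeholders, not real malware indicators).
import hashlib

KNOWN_BAD_SHA256 = {"0" * 64}          # hypothetical digest
KNOWN_BAD_PATTERNS = [b"EVIL_MARKER"]  # hypothetical byte pattern

def scan_file(path: str) -> list[str]:
    """Return the signature hits for one file: a whole-file hash match
    plus any known-bad byte patterns found in the contents."""
    with open(path, "rb") as fh:
        data = fh.read()
    hits = []
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        hits.append("sha256 match")
    hits.extend(f"pattern match: {p!r}" for p in KNOWN_BAD_PATTERNS if p in data)
    return hits
```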

For whitelisting, fuzz testing or component analysis systems to remain effective, the database of anomalies must be both large and constantly updated as we continue to take larger ‘bites’ out of the infinite world of misuse cases. Moreover, the effect a misuse case has on the overall security of a system is highly contextual in nature, and that is where humans become most effective.

While software systems with large databases of anomalies can be automated to step through misuse cases, what they cannot do very easily is understand the exploitability and overall effect on the security of a system. If, for example, a system is prone to failing due to a fuzzing attack, an organisation may have a backup system in place to take over during the attack. However, if the backup system is not secured, or perhaps on a separate power system, etc., a security researcher may be able to readily disable the backup system and then mount the attack. If the vulnerability is fixed in the main system, an attacker may simply find another way into the system and alter it in a way that disables it (e.g. an open USB interface or a wireless interface). In other words, a software-based vulnerability detection system is only capable of operating within a defined set of parameters and simply cannot walk through the entire system like a human can and cobble together misuse cases on the fly. Humans can also easily identify and exploit cyberphysical vulnerabilities. An example of this would be cutting power to a system and then restoring power to the system in a rapid and repetitive manner, which can cause overheating or fatigue-based failures.

One thing remains certain, however: controlling access to a system is highly effective at preventing the exploitation of vulnerabilities. Authenticating a user, system or application that has access within or outside of an organisation can go a long way towards limiting the effectiveness of a security threat. Digital certificate-based authentication schemes using public key infrastructure, or PKI, are a highly scalable and effective way to provide such authentication for enterprise security use cases and the rapidly growing volume of connected devices in the IoT.
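For a concrete sense of what certificate-based authentication involves, here is a minimal Python sketch that opens a TLS connection and lets the standard library validate the server’s certificate chain against the system’s trusted roots. The hostname is an assumed example; the same PKI machinery underpins device authentication in the IoT scenarios described above.

```python
# Minimal certificate-validated TLS connection using Python's standard
# library (the hostname is an example; any TLS endpoint would do).
import socket
import ssl

hostname = "www.example.com"  # assumed example endpoint
# Loads the system trust roots and enables hostname checking.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # The handshake only succeeds if the server presented a chain
        # that validates back to a trusted CA; an invalid or forged
        # certificate raises ssl.SSLCertVerificationError instead.
        cert = tls.getpeercert()
        print("negotiated", tls.version(), "with", cert["subject"])
```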

Gorav Arora, Solution Architect at Thales

With new threats emerging every day and an explosion of data needing analysis, businesses can no longer take a black-and-white approach to cybersecurity when it comes to humans vs automation and Artificial Intelligence (AI). It should not be about which is more effective, but rather how to utilise the strengths of each in spotting the vulnerabilities a business has.

AI and automation are already having a massive impact on cybersecurity. A machine’s ability to analyse large amounts of data quickly, learn from trends and detect anomalies in real time is transforming everything from how we detect and protect against threats to how we manage and mitigate them, helping us continue to fight attackers head on.

However, AI and automation do have their limitations, such as their reliance on the data they have been trained with and their inability to convey new threats to the business. If, at the point of entry or anywhere along the daisy chain, the data contains a mistake or has been tampered with by an attacker, the error will continue down the chain and the end results will be wrong. This is where humans must ensure that the right security protocols are in place, with encryption, key management and device and human authentication to help mitigate any outside threats.
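One common safeguard against the upstream tampering described here is to authenticate data as it moves between stages of the chain. The sketch below is one illustrative approach, using an HMAC over each record with a hypothetical shared key, so a downstream consumer can detect modification before acting on the data; real deployments would pair this with proper key management.

```python
# Tamper-evident hand-off between pipeline stages using HMAC
# (sketch; key distribution is handled out of band in real systems).
import hashlib
import hmac

SECRET_KEY = b"example-shared-key"  # hypothetical; never hard-code in production

def sign_record(record: bytes) -> bytes:
    """Producer side: append a 32-byte HMAC-SHA256 tag to each record."""
    tag = hmac.new(SECRET_KEY, record, hashlib.sha256).digest()
    return record + tag

def verify_record(blob: bytes) -> bytes:
    """Consumer side: reject any record whose tag does not verify."""
    record, tag = blob[:-32], blob[-32:]
    expected = hmac.new(SECRET_KEY, record, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("record was modified in transit")
    return record
```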

For humans, our strength resides in our ability to be situationally aware. Businesses should be hiring people who have the capability to understand the continually evolving risks facing the company – from outside threats to the internal processes that could be making the organisation vulnerable. This is particularly relevant for emerging and developing threats, which AI would not be aware of due to the lack of data around them.

Humans can also operate as ethical hackers whose job is to think like cybercriminals in understanding the vulnerabilities a business has that a hacker would look to exploit. Having automation in place frees up humans to search for signals and correlations, to be able to focus on the emerging/novel threats and also gives them the space to communicate this back to the rest of the business.

So, rather than focus on which is better, business leaders should be looking at a hybrid approach to spot and prevent the vulnerabilities in their companies.

Travis Weathers, Director, Threat Management, Optiv

As offensive security practitioners, we often get asked by our clients, ‘what tools do you use for vulnerability identification?’. I would be lying if I said we don’t rely on automated tooling when performing comprehensive testing (host-by-host and port-by-port). The use of automated tooling accounts for roughly 20% of our work, while the remaining 80% of our time is spent manually reviewing hosts. Manual review often generates additional findings and removes errant results identified by automated tooling. When performing adversarial emulation engagements, where we simulate nation-state and cybercrime syndicates, we refrain from using automated tooling entirely. We omit these tools because they are not designed to evade detective measures, and our adversarial emulation offerings are meant to illustrate the real-world business impact of an advanced threat actor.
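As a flavour of what that automated 20% can look like, below is a minimal host-by-host, port-by-port sweep in Python. The target addresses and port list are assumptions for the example, and real engagements rely on purpose-built scanners such as Nmap rather than hand-rolled scripts; the point is simply that this breadth-first coverage is what machines do well, leaving the judgment calls to humans.

```python
# Minimal TCP connect sweep (illustrative only; scan systems you are
# authorised to test, and prefer purpose-built tools such as Nmap).
import socket

TARGETS = ["192.0.2.10", "192.0.2.11"]  # assumed in-scope hosts (TEST-NET range)
PORTS = [22, 80, 443, 445, 3389]        # assumed ports of interest

def sweep(hosts: list[str], ports: list[int]) -> None:
    """Try a TCP connection to every host/port pair and report the open ones."""
    for host in hosts:
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=1):
                    print(f"{host}:{port} open")
            except OSError:
                pass  # closed, filtered or unreachable

if __name__ == "__main__":
    sweep(TARGETS, PORTS)
```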

I can think back to one engagement in particular, where automated tooling would not have been able to identify critical vulnerabilities. Our client, a Fortune 500 organisation, hired us to perform a breach simulation with the goal of compromising financial transactions and systems. To do this, we would need 1) access to a system at a branch office that processed payments and 2) access to a highly segmented portion of the upstream corporate network.

First, we scraped social media, branch websites and job postings to identify employee contact information and business data, such as the IT technologies in place. Leveraging the data we collected, we sent a highly targeted spear phish containing a link to a website that hosted malicious code. Once opened by the target, we were provided with a remote command and control (C2) session on a financial adviser’s system within a branch office. Designing a custom spear-phishing campaign, compromising hosts and illustrating business impact are all areas where automated tooling falls short.

The access we maintained on the system was that of a limited domain user, and the branch network was responsibly locked down. If we were going to move upstream and achieve our goal, we had to get creative. While manually reviewing the compromised system, we found a binary located in the temp directory. The application was developed in-house by the organisation and used to install a secure tunnel from the client’s branch office systems to the upstream payment processing network. We extracted the binary from the compromised system and moved it to our lab environment. We performed dynamic analysis on the binary, which revealed local administrative credentials, tunnel credentials and the connection details for the secure tunnel. This enabled us not only to gain access to the upstream network, but also to move laterally to other branch locations. If we had relied solely on automated tooling, these critical flaws might have gone unnoticed or, worse, a real-world threat could have leveraged them.
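A first pass at that kind of binary review is often as simple as pulling printable strings out of the file and flagging credential-shaped material, in the spirit of the Unix strings utility. The sketch below and its keyword list are illustrative only – the engagement described above used full dynamic analysis – but it shows the sort of manual-review step a generic scanner would not take.

```python
# Extract printable ASCII strings from a binary and flag credential-
# looking hits (a crude static pass; the keyword list is illustrative).
import re
import sys

STRING_RE = re.compile(rb"[\x20-\x7e]{6,}")  # runs of printable ASCII
KEYWORDS = (b"password", b"passwd", b"secret", b"token", b"tunnel")

def hunt(path: str) -> None:
    """Print the offset and text of every credential-flavoured string."""
    with open(path, "rb") as fh:
        data = fh.read()
    for match in STRING_RE.finditer(data):
        s = match.group()
        if any(k in s.lower() for k in KEYWORDS):
            print(f"{match.start():#010x}: {s.decode('ascii')}")

if __name__ == "__main__":
    hunt(sys.argv[1])
```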
