The container software market is expected to grow, owing to the many advantages it offers modern businesses. Eoin Keary, CEO/Founder of edgescan, explains how the widespread use of container and serverless technology is creating potential new security issues for the industry and offers advice for organisations choosing a containerised security model.
Anyone who has worked in the IT sector knows this industry loves a buzzword. Lately, it seems that, alongside the indiscriminate use of ‘AI’ and the ever-present plea to ‘shift left’, serverless and container technologies are the main topic of discussion among organisations undergoing Digital Transformation. In fact, 451 Research has estimated the application container software market will grow to more than US$5.5 billion by 2023.
A smaller, more efficient alternative to virtual machines, containers bring many advantages: they are cost-effective and offer greater scalability and flexibility than applications hosted on traditional servers or virtual machines. Most companies see a container approach to DevOps as a far more productive and time-efficient way to deliver software. But, as with all new technology, it also creates unknown threats and security challenges. Are organisations really preparing for them?
The first of these challenges – and perhaps the most obvious – is the complexity of containers. As data and applications are broken down into microservices, the traffic between them increases and the task of protecting it becomes more complicated. Adding to this complexity, and unlike with virtualisation software, the tools needed to build a containerised environment often come from multiple sources, have numerous independent components and are most likely pulled from various repositories. Each of these components is a potential security liability if not updated and properly secured.
A second significant challenge posed by containers is a lack of isolation. While containerisation reduces the attack surface, an attacker who gains access to one container may be able to compromise all the others on the same host server. Unlike virtual machines, which – if hacked – remain independent security boundaries, a rogue program can break out of a running container and spread to the rest of the cluster.
The third problem with container security is its relative youth. Until a year ago, container adoption had been slow and cautious, but it has since more than doubled, according to a Portworx report. Organisations need to remember that, as with any new product, ironing out the security issues can take time, and that adequate protective measures take time to be developed, perfected and tested against real-life threats.
As mentioned above, however, container technology can be an invaluable tool for making application development faster, more scalable, smoother and more secure. The key to taking full advantage of it is to complement it with equally dynamic and flexible security efforts.
Many organisations conduct regular security reviews, record existing vulnerabilities manually and patch them according to a schedule, prioritising by risk severity. The main issue with this approach is that, between security checks, the organisation is blind to any ongoing attack and to any vulnerabilities that may open up. Furthermore, such an approach is not scalable and quickly becomes obsolete as an organisation expands its IT function.
Rather than periodic security checks, organisations should aim for continuous visibility tools that can scale up and down with their container clusters. Although perhaps a buzzword in itself, continuous visibility remains a fundamental part of any security strategy, physical or virtual. After all, a security camera that only turns on twice a month is of little use, and one that only turns on when it detects movement may help you react to threats but does nothing to create a proactively secure environment.
In fact, because the life-cycle of containers is so short, and because they can be so easily created, deleted and replaced, they are often deployed without ever being scanned for vulnerabilities until they are already in production. If a vulnerability is present in one of the container images – a risk that is even greater with open source images – the same vulnerability will affect every container created from that image. To avoid a scenario that is at best embarrassing and at worst disastrous, organisations should look for the risks within the containers they plan to employ before their life-cycle even begins.
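One common way to move this check earlier is to gate the build pipeline on image-scan results, so that an image with serious findings is never promoted to production. The sketch below is illustrative only: the finding structure, severity names and threshold are assumptions, not the output format of any particular scanner.

```python
# Illustrative CI gate: block an image when its scan reports findings
# at or above a chosen severity. Field names are hypothetical.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate_image(findings, threshold="HIGH"):
    """Return (passed, blocking): passed is True only when no finding
    is at or above the threshold; blocking lists the offenders."""
    floor = SEVERITY_RANK[threshold]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f["severity"], 0) >= floor]
    return (len(blocking) == 0, blocking)

# Example scan results for a hypothetical image
scan = [
    {"id": "CVE-2023-0001", "severity": "MEDIUM"},
    {"id": "CVE-2023-0002", "severity": "CRITICAL"},
]
passed, blocking = gate_image(scan)
print(passed, [f["id"] for f in blocking])
```

In a real pipeline the same policy would sit between the scanner step and the registry push, failing the build whenever `passed` is false.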
The majority of container images also run with root access by default, which makes life easier for developers but goes against the principle of least privilege and exposes the container to unnecessary security risk.
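A simple mitigation is to create an unprivileged user in the image and switch to it before the workload starts. The Dockerfile fragment below is a minimal sketch; the base image, user name and UID/GID are illustrative choices, not requirements.

```dockerfile
# Minimal sketch: run the containerised process as a non-root user.
# Base image, user name and UID/GID are illustrative.
FROM alpine:3.19

# Create an unprivileged group and user for the application
RUN addgroup -g 10001 app && adduser -D -u 10001 -G app app

# Drop root before the container starts its workload
USER app

CMD ["sh", "-c", "id"]
```

With the `USER` instruction in place, a process that breaks out of the application still lacks root privileges inside the container, narrowing what an attacker can do next.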
Furthermore, analytics metrics can be an extremely useful tool for organisations looking to continuously improve their security performance. Tracking the mean remediation time, as well as keeping tabs on the most severe vulnerabilities via an automated metrics dashboard, can confirm that the organisation is moving in the right direction and that its response times – and, consequently, its exposure windows – are shrinking.
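As a simple illustration, mean remediation time can be derived from the open and close timestamps of vulnerability records. The data layout below is hypothetical; a real dashboard would pull these records from a tracker or scanner API.

```python
from datetime import datetime

# Hypothetical vulnerability records: (severity, opened, closed)
records = [
    ("HIGH",   datetime(2023, 1, 1), datetime(2023, 1, 6)),
    ("MEDIUM", datetime(2023, 1, 3), datetime(2023, 1, 17)),
    ("HIGH",   datetime(2023, 2, 1), datetime(2023, 2, 4)),
]

def mean_remediation_days(records, severity=None):
    """Average days from discovery to fix, optionally for one severity."""
    durations = [(closed - opened).days
                 for sev, opened, closed in records
                 if severity is None or sev == severity]
    return sum(durations) / len(durations)

print(mean_remediation_days(records))          # overall mean
print(mean_remediation_days(records, "HIGH"))  # exposure window for HIGH
```

Plotting this figure over time, overall and per severity, gives a direct read on whether exposure windows are actually getting smaller.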
Ultimately, however, the best approach to security – especially for complex, containerised environments – is to adopt a defence-in-depth stance, whereby multiple solutions are integrated and orchestrated into a solid portfolio of proactive and reactive measures.
Despite the lack of an actual silver bullet against all cyberthreats, having eyes everywhere and continuously striving to improve the security of the entire attack surface is a good place to start.