Organisations rush to use Generative AI tools despite significant security concerns


New research from Zscaler, a leader in cloud security, suggests that organisations are feeling the pressure to rush into Generative AI (GenAI) tool usage, despite significant security concerns.

Its latest survey of more than 900 global IT decision-makers, All Eyes on Securing GenAI, reveals that 89% of organisations consider GenAI tools such as ChatGPT a potential security risk, yet 95% are already using them in some guise within their businesses.

Even more worryingly, 23% of this user group aren't monitoring usage at all, and 33% have yet to implement any additional GenAI-related security measures, though many have them on their roadmap. The situation appears particularly pronounced among smaller businesses, where the same proportion of organisations are using GenAI tools (95%) and as many as 94% recognise the risk of doing so.

“GenAI tools, including ChatGPT and others, hold immense promise for businesses in terms of speed, innovation and efficiency,” said Sanjay Kalra, VP Product Management, Zscaler. “However, with the current ambiguity surrounding their security measures, a mere 39% of organisations perceive their adoption as an opportunity rather than a threat. This not only jeopardises their business and customer data integrity but also squanders their tremendous potential.”

Despite mainstream awareness, employees do not appear to be the driving force behind current interest and usage – only 5% of respondents said it stemmed from employees. Instead, 59% said usage was being driven directly by IT teams.
