AI cyberthreats soar but Americans give biometric security authentication the cold shoulder

As AI empowers cybercriminals to launch increasingly sophisticated attacks, a new report reveals a significant hurdle on the defensive side: Americans' reluctance to embrace biometric security solutions.

While AI-driven threats are becoming faster and harder to detect, new data shows that a substantial majority of Americans distrust authentication methods such as facial recognition and implanted chips.

According to Cybersecurity and the AI Threat Landscape, a report by Delinea, there is a worrying surge in attacks targeting non-human identities (NHIs). These include service accounts, APIs and machine identities, which are increasingly prevalent as businesses adopt AI technologies. 

The research indicates that for every human identity within an organisation there are now 46 NHIs, with the total number of NHIs projected to balloon to more than 45 billion by 2026. The report found that over 70% of these NHIs are not regularly updated, with an average credential rotation cycle of 627 days – far exceeding recommended security practices.

Furthermore, a concerning 97% of organisations expose their NHIs to third-party vendors, significantly widening the attack surface for malicious actors.
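
To make the rotation gap concrete, here is a minimal Python sketch that flags non-human identity credentials which have gone longer than a policy threshold without rotation. The inventory data, the 90-day threshold and the function name are illustrative assumptions, not details taken from the Delinea report.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical NHI credential inventory; in practice this would come from a
# secrets manager or PAM platform rather than a hard-coded list.
nhi_credentials = [
    {"name": "ci-pipeline-token", "last_rotated": datetime(2023, 6, 1, tzinfo=timezone.utc)},
    {"name": "billing-api-key", "last_rotated": datetime(2025, 1, 15, tzinfo=timezone.utc)},
]

# Assumed 90-day rotation policy; the report cites a 627-day average in practice.
ROTATION_POLICY = timedelta(days=90)


def stale_credentials(credentials, policy=ROTATION_POLICY, now=None):
    """Return credentials whose age exceeds the rotation policy."""
    now = now or datetime.now(timezone.utc)
    return [c for c in credentials if now - c["last_rotated"] > policy]


for cred in stale_credentials(nhi_credentials):
    age_days = (datetime.now(timezone.utc) - cred["last_rotated"]).days
    print(f"{cred['name']}: last rotated {age_days} days ago - rotation overdue")
```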

“One of the biggest challenges identified in the report is the increasing targeting of non-human identities,” said Jon Kuhn, SVP of Product Management at Delinea. 

“For organisations, this shift means they are facing a massive and often ignored security gap. With the number of these machine identities expected to grow exponentially in the coming years as enterprises continue to rapidly adopt AI, the lack of proper credential management and the exposure of these identities to third parties creates serious vulnerabilities that cyberattackers can exploit to gain unauthorised access to critical systems and data.”

According to the data, attacks are growing in frequency and sophistication. In 2024, cybercriminals embraced ‘double extortion’ tactics, not only encrypting victims’ data but also stealing it and threatening public release unless a ransom is paid. Delinea’s monitoring identified five key ransomware groups – RansomHub, LockBit, Play, Akira, and Hunters – as being responsible for over a third of all ransomware incidents last year, amounting to more than 5,700 attacks.

Looking ahead, the report anticipates a significant increase in AI-driven phishing attacks. Cybercriminals are leveraging AI to craft increasingly convincing and personalised phishing emails, making it far more challenging for individuals to distinguish legitimate communications from malicious attempts to steal credentials and sensitive information.

“The rise in ransomware sophistication and the increasing prevalence of AI-driven attacks are undeniable trends in today’s cybersecurity landscape,” said Gal Diskin, Vice President of Threat and Research at Delinea. “Our research reveals that cybercriminals are increasingly using AI and powerful Ransomware-as-a-Service (RaaS) tools to launch more targeted and scalable attacks, particularly around phishing and machine identities.” 

Gen Z Americans shun biometric authentication solutions

However, while the technological arms race in cybersecurity intensifies, a new study shows deep public distrust of biometric security measures among US citizens. A report from Frontegg, Americans Hate Password Resets So Much They’d Rather Abandon Your App, based on a survey of over 1,000 Americans, reveals a significant reluctance to adopt biometric authentication, despite its potential to offer a more secure alternative to traditional passwords.

The study found that 70% of Americans would refuse biometric implants, such as brain chips, under any circumstances, regardless of the purported benefits, citing concerns over bodily autonomy and the potential for misuse of such invasive technologies. 

The study also revealed a generational divide in trust towards AI in security. A concerning 72% of Generation Z, the digital natives, expressed distrust in AI for securing their data – a higher percentage than any other generation. This scepticism among younger users, who are often early adopters of technology, raises questions about the future acceptance of AI-driven security measures.

Concerns extend beyond invasive implants, with nearly half (49%) of Americans expressing worry that facial recognition technology is being used to track them beyond their personal devices. This highlights a broader unease about the potential for surveillance and the privacy implications of biometric data collection. Despite the well-documented weaknesses of traditional passwords, 61% of Americans stated they trust passwords more than AI for their security.

Why NHIs impact human identity fraud

While the Delinea report doesn’t provide a specific numerical breakdown solely for AI-powered attacks targeting human identities, it offers insights into the evolving threat landscape where human identities remain a significant point of attack, especially when combined with AI tactics.

  • AI-driven phishing attacks: AI-driven phishing attacks are a direct threat to human identities as cybercriminals leverage AI to craft increasingly convincing and personalised phishing emails. These sophisticated emails make it harder for individuals to distinguish legitimate communications from malicious attempts to steal their login credentials, personal information and financial details.  
  • Social engineering enhanced by AI: AI significantly enhances social engineering attacks, which often precede or accompany identity-related breaches. AI can be used to analyse social media profiles, online behaviour, and personal information to create highly targeted and believable scams that trick individuals into divulging sensitive data or performing actions that compromise their security. This often involves the theft or misuse of human identities.  
  • Credential phishing: The report focuses on the need to strengthen Multi-Factor Authentication (MFA) to combat the growing threat of credential phishing. This highlights that stealing human credentials (usernames and passwords) remains a primary goal for attackers, and AI is making these phishing attempts more effective (see the sketch after this list).
  • Ransomware and human identity fraud: Ransomware attacks often begin by compromising human user accounts. AI could potentially be used to identify vulnerable human targets within an organisation to gain initial access for ransomware deployment. The ‘double extortion’ tactics also involve the exfiltration of sensitive data, which often includes personally identifiable information (PII) linked to human identities.
  • Exploiting weak password habits: Many Americans still have poor password habits. AI could be used to predict common password patterns and launch more effective brute-force or password spraying attacks against human accounts. 
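
As a rough illustration of the MFA strengthening mentioned in the credential phishing point above, the sketch below verifies a time-based one-time password (TOTP) with the pyotp library. The enrolment flow, user identifiers and secret handling shown here are simplified assumptions rather than a design taken from the report.

```python
import pyotp

# Provision a per-user TOTP secret once, at enrolment time; in practice it is
# stored encrypted server-side and shared with the user's authenticator app
# via a QR code built from the provisioning URI below.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))


def verify_second_factor(user_totp: pyotp.TOTP, submitted_code: str) -> bool:
    """Check the user's 6-digit code, tolerating one 30-second window of clock drift."""
    return user_totp.verify(submitted_code, valid_window=1)


# A phished password alone is not enough: the attacker also needs a valid,
# short-lived code generated on the user's device.
print(verify_second_factor(totp, totp.now()))   # True for the current code
print(verify_second_factor(totp, "000000"))     # Almost certainly False
```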

The Frontegg study also highlights a paradox: while a significant majority express distrust in advanced security methods, 57% of self-identified ‘tech-savvy’ users already rely on biometrics like fingerprint or facial recognition for authentication. This reveals a growing divide between early adopters who prioritise convenience and those who remain sceptical due to privacy and trust concerns.
