What are the most common psychological tactics used in social engineering attacks, and how can organisations train employees to recognise and resist them?

Social engineering remains one of the most effective and dangerous tactics in a cybercriminal’s arsenal – not because it exploits systems, but because it manipulates people.

These attacks rely on psychological tricks designed to bypass technical defences by targeting human behaviour, making them especially difficult to detect and defend against. Whether it’s a phishing email that sparks urgency, a phone call from a seemingly trustworthy source, or a fake login page crafted to mirror a familiar site, the goal is always the same: to deceive someone into giving away sensitive information or access.

The most successful social engineering attacks are built on a deep understanding of human psychology. Cybercriminals know how to exploit emotions such as fear, curiosity, trust and the desire to help. They use tactics like authority bias, urgency cues, and social proof to trick employees into making decisions they would normally avoid if they were thinking critically and calmly.

For organisations, this means that even the most advanced cybersecurity tools are only as strong as the people using them. While technology can flag suspicious activity and block known threats, it takes awareness and education to stop a well-crafted social engineering attempt.

So how can businesses train their employees to spot and resist psychological manipulation? And what tactics should they be aware of in today’s threat landscape?

In this feature, industry experts unpack the psychological strategies most commonly used by attackers – from impersonation and pretexting to fear-based messaging and reward-driven lures. They also share practical advice on how organisations can build a culture of security awareness, develop effective training programmes and empower their people to act as the first line of defence.

Understanding the mind of the attacker is the first step in neutralising their advantage. With the right mix of education, awareness and vigilance, organisations can better protect themselves against social engineering and minimise the human vulnerabilities that cybercriminals so often exploit.

David Morimanno, Director of Identity and Access Management Technologies, Xalient

From a threat actor’s perspective, identity is the crown jewel, as it leads to access to valuable data. This makes it a common and sought-after attack vector – after all, it’s a lot easier to unlock a door and walk in than to break it down.

The challenge with social engineering and phishing campaigns is that they are becoming increasingly convincing, even stepping beyond emails and texts to voice and video calls with the help of AI. The constantly developing threats mean employee training must be regular and kept up-to-date.

Additionally, as the lines between our work and personal lives become increasingly blurred, we are generating more information that could be leveraged against us by threat actors. Posting pictures of children in school uniforms, regular coffee shop haunts, work trips and holidays is valuable Open-source Intelligence (OSINT) that can inform convincing social engineering campaigns.

Social media information can be cross-referenced with LinkedIn profiles, allowing attackers to craft convincing scams. For example, an attacker might impersonate a school reporting an emergency involving a child and redirect the parent (employee) to a fake portal. While the sophistication has developed, the actual themes are not new. Attackers often rely on urgency, fear or authority to garner a response.

To effectively combat this, a culture of verification is required. Employees should be encouraged to double check email addresses, hang up and call back using known numbers, and verify unexpected requests. This, in combination with training, can create awareness around fraudulent campaigns and mitigate the risk of their success.
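The "double check the email address" habit can even be given a first-pass automated backstop. The sketch below is a minimal, illustrative Python check against a hypothetical allowlist of domains the organisation actually uses (`TRUSTED_DOMAINS` and the addresses are invented for illustration); it is no substitute for hanging up and calling back on a known number, since display names are trivially forged and lookalike domains are a common trick.

```python
from email.utils import parseaddr

# Hypothetical allowlist of domains this organisation actually uses.
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

def sender_is_trusted(from_header: str) -> bool:
    """Return True only if the From: header resolves to a known domain.

    A display name can claim anything ("IT Support", "CEO"); the
    underlying address domain is what should be verified.
    """
    _, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    return domain in TRUSTED_DOMAINS

print(sender_is_trusted("IT Support <helpdesk@example.com>"))  # True
print(sender_is_trusted("IT Support <helpdesk@examp1e.com>"))  # False: lookalike domain
```

Note how the second address passes a casual glance – the digit `1` stands in for the letter `l` – which is exactly the kind of detail a culture of verification trains people to catch.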

Furthermore, Zero Trust Network Access (ZTNA) helps mitigate the risk of campaigns that slip through the cracks of training, reducing a threat actor’s ability to move laterally once inside the network. There is also a risk that a social engineering campaign is so realistic that an employee is compromised through coercion or blackmail: rather than clicking links or sharing information, they are manipulated into carrying out the threat actor’s aims against the organisation.

Along with restricting access to non-essential information, ZTNA can leverage behavioural analytics to determine when an approved identity is behaving in an unusual way, and block access. To round out this defence, endpoint detection and response (EDR) and identity threat detection and response (ITDR) should also be deployed to identify suspicious activity on devices, contain threats early and provide telemetry for incident response.
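As a rough illustration of the behavioural-analytics idea (not any vendor’s actual algorithm), the sketch below flags a login whose hour of day deviates sharply from an identity’s historical pattern. Real ZTNA platforms weigh many such signals together – location, device posture, access patterns – rather than a single statistic.

```python
from statistics import mean, stdev

def is_anomalous_login(history_hours: list[int], login_hour: int,
                       threshold: float = 3.0) -> bool:
    """Flag a login hour more than `threshold` standard deviations
    from this identity's historical mean - one toy behavioural signal."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

usual = [9, 9, 10, 8, 9, 10, 9, 8]       # typical office-hours logins
print(is_anomalous_login(usual, 9))       # False: fits the pattern
print(is_anomalous_login(usual, 3))       # True: 3am login stands out
```

In a real deployment the response to a flag would be graduated – step-up authentication or a blocked session – rather than a hard boolean.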

Layering user awareness, identity controls, and intelligent access monitoring ensures organisations are not just defending the perimeter – but the people within it too.

Lorri Janssen-Anessi, Director External Cyber Assessments at BlueVoyant

With the increased use of AI, social engineering attacks continue to be increasingly personalised and convincing. Third-party suppliers and vendors are often the weakest link in a company’s cybersecurity. Attackers exploit these relationships to bypass robust security measures and infiltrate target organisations. Social engineering is a key tactic used to compromise third-party suppliers, leading to data breaches, ransomware attacks and financial fraud.

Attackers use different tactics to create a sense of urgency. For example, they set false deadlines for tasks, forcing victims to act quickly without thinking, with requests appearing as if they have come from executive/C-level or IT support leaders. Attackers also exploit the human tendency to follow what others are doing, for instance by mimicking language already used in communications between employees. Ultimately, trust is often the gateway that attackers exploit in social engineering, using social media and online forums to harvest personal details.

This enables cybercriminals to build a profile of employee behaviour, routines and preferences. With this information, attackers can create highly tailored messages, such as emails, texts – and even enticing web links, all of which are designed to manipulate and deceive victims. As a result, what appears to be a harmless message is often the first step in a targeted breach.

To combat these advancements in social engineering techniques, organisations should increase training and awareness for their workforce, ensure there are limited user privileges, implement multi-factor authentication (MFA), and consider adopting a zero-trust model, where no one is trusted by default inside or outside of a network. Businesses should also encourage employees to report any suspicious emails and interactions.

Traditional cybersecurity practices can also help, such as having a comprehensive incident response plan, regular security audits, and following industry specific threat intelligence information to counter new and novel threats. Vigilance is always key, as new and adapted threats emerge.

User training and awareness are therefore vital in preventing successful phishing attacks; gamified exercises with constructive feedback help strengthen an organisation’s defences.

Brandon Leiker, Principal Solutions Architect at 11:11 Systems

The primary cybersecurity risk facing firms today stems from both technological vulnerabilities and human behaviour. According to the 2024 Verizon Data Breach Investigations Report, 68% of breaches involve human error. The report also found that phishing, a common social engineering tactic, was responsible for 15% of breaches.

Attackers exploit trust and leverage natural human tendencies with tactics such as a convincing phishing email or a persuasive voice scam to create a false sense of urgency. These simple triggers can bypass elaborate network defences when targeted at the right employee in the right way. 

Human error plays a critical role in these breaches: clicking on suspicious links or neglecting multi-factor authentication (MFA) can invite disaster. Even tech-savvy employees can fall victim to social engineering.

These actions leave firms vulnerable to attacks, even when technological defences are in place. Moreover, the rise of AI tools like chatbots and Generative AI further increases the sophistication of these attacks, allowing cybercriminals to automate scams and develop new methods of evading detection. Tactics like spear phishing, a highly targeted variant of phishing, leverage detailed personal information to craft emails posing as bosses, colleagues or other trusted entities.

The risk of human error is compounded by ‘security fatigue’, where excessive alerts and overly complex password requirements lead to employees bypassing these prompts to complete their tasks. Employees struggling to access their accounts often use weak passwords, reuse passwords or take dangerous workarounds. Therefore, the human factor emerges as the most significant vulnerability in an organisation’s cybersecurity posture.

Beyond education, organisations should leverage principles of behavioural economics to help ‘nudge’ employees toward safer practices. To encourage adherence to security measures without overwhelming employees, it’s essential to make these measures user-friendly and seamlessly integrate them into daily tasks. Additionally, implementing single sign-on (SSO) can greatly simplify access for users by reducing the number of usernames and passwords they need to manage, thereby lessening their overall burden.

Secure behaviours may be cultivated through incentives, fostering an environment where security is viewed as a shared responsibility, alongside a culture of transparent communication throughout the organisation. Regular training, feedback loops, and open discussions about policies and threats ensure that employees remain vigilant and engaged. Security should not be seen as solely the domain of the IT department; instead, every employee should be an active participant in safeguarding the organisation’s assets.

Ultimately, ignoring the human element plays into the hands of attackers. By investing in human-focused defence strategies, organisations can turn their employees from liabilities into strong assets in the fight against cyberthreats. 

A combination of education, clear policies, and a culture of security can significantly reduce human-related risks, strengthening the organisation’s overall cybersecurity resilience.

Intelligent CISO
