Social Engineering Attacks: Trends, Psychological Triggers, and AI-Driven Prevention

Abstract

Social engineering is one of the most common and potent attack techniques in cybersecurity: rather than computers, it is humans who are deceived in order to breach systems and data. This article examines the recent evolution of social engineering attacks, the psychological factors that make them effective, and how advances in artificial intelligence (AI) are helping to combat them. Recent incidents, such as the 2020 Twitter breach and the 2022 Uber compromise, show how adversaries now combine multichannel tactics (phishing, vishing, smishing, and deepfake-based impersonation) with reconnaissance drawn from social media and open-source data to construct highly convincing pretexts. The psychological principles exploited include authority, urgency, fear, trust/familiarity, reciprocity, social proof, and commitment, paralleling established theories of persuasion and cognitive bias. In this environment, AI and machine learning have become critical defensive tools: progress is driven by AI-powered email filtering, phishing-URL detection, user and entity behavior analytics (UEBA), voice and chat scam detection, and adaptive phishing simulation for user training. Yet AI-based security systems also have limitations, including adversarial evasion techniques, false-positive and false-negative rates, privacy considerations, and ethical concerns around behavioral tracking. Attackers are increasingly exploiting generative AI to produce hyper-customized lures, creating an evolutionary arms race between offensive and defensive AI. This survey highlights the importance of a sociotechnical approach that combines an understanding of psychological motivators with explainable, privacy-preserving AI. Future research should emphasize interdisciplinary collaboration, adversarially robust models, and user-centric security design to counter the evolving threat of social engineering.
