Research on AI-Assisted Identity Theft, Phishing, and Impersonation in Corporate and Government Sectors

Case 1: Arup Group – Deepfake Executive Fraud

Facts:

In early 2024, Arup, a UK-based multinational engineering firm, was targeted by a sophisticated fraud involving AI-generated deepfake video and voice cloning.

An employee in the Hong Kong office was invited to a video conference that appeared to include the company’s UK-based CFO and other senior staff. The “CFO” instructed the employee to make a series of urgent bank transfers for a confidential business transaction.

The video and voice were later revealed to be AI-generated impersonations of senior executives.

Modus Operandi:

Deepfake video of the executive for visual confirmation.

AI-generated voice messages to simulate real-time conversation.

Social engineering by creating a sense of urgency and secrecy.

Impact:

The employee made a series of transfers totalling roughly HK$200 million (~US$25 million) before the fraud was discovered.

Legal Implications:

This falls under “fraud by impersonation” and “obtaining property by deception” in most jurisdictions.

Raises questions of liability for AI tools used to facilitate crime, and the necessity of corporate internal controls to prevent high-value transfer fraud.

Lessons Learned:

Traditional verification channels (email, even live video calls) can no longer be trusted on their own, since the video call itself can be the attack vector.

Out-of-band verification, multi-factor confirmation for large transactions, and AI detection tools are essential.
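The out-of-band control described above can be sketched in code. This is a minimal, hypothetical illustration, not any real payments system: all names (Transfer, CALLBACK_DIRECTORY, may_execute, the threshold value) are invented for the example. The key design point is that the callback number comes from an independently maintained directory, never from contact details supplied in the payment request itself.

```python
from dataclasses import dataclass
from typing import Optional

# Pre-registered callback numbers, maintained separately from any
# contact details that arrive with a payment request (placeholder value).
CALLBACK_DIRECTORY = {
    "cfo": "+44-20-0000-0000",
}

HIGH_VALUE_THRESHOLD = 10_000.0  # illustrative cut-off, not a recommendation

@dataclass
class Transfer:
    amount: float
    requested_by: str             # role the requester claims to hold
    callback_confirmed: bool      # True only after a call to the directory number
    second_approver: Optional[str] = None

def may_execute(t: Transfer) -> bool:
    """A high-value transfer requires BOTH an out-of-band callback to a
    pre-registered number AND a second, independent approver."""
    if t.amount < HIGH_VALUE_THRESHOLD:
        return True
    if t.requested_by not in CALLBACK_DIRECTORY:
        return False
    return t.callback_confirmed and t.second_approver is not None
```

Under this rule, a convincing deepfake on a video call is not enough: the transfer stays blocked until someone dials the number on file and a second person signs off.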

Case 2: WPP plc – AI-Assisted Executive Impersonation Attempt

Facts:

In 2024, WPP, a global advertising company, experienced an attempted impersonation attack.

Cybercriminals used an AI-generated voice clone of the CEO, together with publicly available images and footage, to impersonate him in a virtual meeting and solicit money and sensitive information from a senior executive.

Modus Operandi:

Voice cloning to make phone calls convincing.

Video deepfake to create virtual meetings for authenticity.

Fake emails to reinforce instructions.

Impact:

The fraud attempt was thwarted due to employee awareness and internal verification policies.

Legal Implications:

Even unsuccessful schemes can be prosecuted as attempted fraud and criminal impersonation.

Demonstrates that AI-enhanced phishing increases sophistication, necessitating legal adaptation.

Lessons Learned:

AI-assisted impersonation can bypass standard email verification protocols.

Employee vigilance and verification processes are critical.

Case 3: NASSCOM v. Ajay Sood – Email Phishing and Impersonation

Facts:

In India, Ajay Sood and associates sent fake emails and used fraudulent email identities impersonating NASSCOM, a leading IT industry association.

He collected personal data from IT professionals under the pretense of official NASSCOM communication, reportedly for use in recruitment.

Modus Operandi:

Email spoofing and fake websites.

Social engineering by exploiting trust in a recognized organization.

Legal Implications:

In its 2005 judgment, one of India’s first phishing rulings, the Delhi High Court recognized phishing as an illegal act, treating the conduct as “passing off” and tarnishment of NASSCOM’s brand.

The case predates Sections 66C (identity theft) and 66D (cheating by personation using a computer resource) of the IT Act, which were added by the 2008 amendment; comparable conduct today would attract those provisions.

Lessons Learned:

Even without AI, phishing and impersonation are criminal offenses.

Provides a precedent for prosecuting online impersonation in corporate environments.

Case 4: Government Impersonation – RBI Phishing Campaign

Facts:

In 2023, cybercriminals impersonated the Reserve Bank of India (RBI) to send fraudulent emails to corporate banking clients.

The emails claimed that accounts would be frozen unless the recipient provided sensitive credentials or executed instructions.

Modus Operandi:

Official logos and language to appear legitimate.

Email spoofing and phishing links to extract login credentials.
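Many of these spoofed links rely on look-alike hosts such as "rbi.org.in.secure-login.example.com", where the official name appears as a prefix but the actual domain is attacker-controlled. A minimal sketch of a host check (the function name and threshold of what counts as "official" are illustrative; rbi.org.in is the RBI's actual public domain):

```python
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "rbi.org.in"  # the RBI's public website domain

def is_suspicious_link(url: str) -> bool:
    """Flag any link whose host is not the official domain or a true
    subdomain of it. Prefix look-alikes fail the suffix check."""
    host = (urlparse(url).hostname or "").lower()
    return not (host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN))
```

Note that this checks only the hostname; it is one layer among several (SPF/DKIM/DMARC on the email itself, link rewriting, user training), not a complete defense.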

Impact:

No confirmed financial losses in reported cases, but widespread potential risk to corporate clients.

Legal Implications:

Constitutes fraud and impersonation of a government authority.

Covered under Sections 66C/66D of the IT Act and Indian Penal Code provisions on cheating and personation (e.g., Sections 416, 419, and 420).

Lessons Learned:

Regulatory or government impersonation is highly effective due to inherent trust.

Verification through official channels is critical before acting on instructions.

Case 5: Corporate Payroll Scam Using AI Voice Cloning

Facts:

In 2024, a medium-sized corporation in Europe experienced an AI-assisted payroll scam.

Fraudsters cloned the voice of the CFO and called the finance department to authorize an urgent payroll transfer.

Modus Operandi:

AI voice cloning for real-time phone conversations.

Urgent instructions to bypass internal approval protocols.

Impact:

€300,000 (~US$320,000) was transferred to criminal-controlled accounts.

Legal Implications:

Classified as fraud by deception and impersonation.

Raises challenges in tracing AI-generated instructions to the perpetrator.

Lessons Learned:

Multi-step approval systems and voice verification protocols are necessary.

Highlights the real-world danger of AI in financial fraud.
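The multi-step approval idea can be illustrated with a small dual-control sketch. This is a hypothetical example, not a real payments API; the class and method names are invented. The point is structural: a single convincing phone call, however authentic the voice sounds, can never release funds on its own.

```python
class PendingPayment:
    """A payment is released only after the required number of distinct
    approvers sign off, and the original requester cannot self-approve."""

    def __init__(self, requester: str, amount: float, approvals_needed: int = 2):
        self.requester = requester
        self.amount = amount
        self.approvals_needed = approvals_needed
        self.approvers: set[str] = set()

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise ValueError("requester cannot approve their own payment")
        self.approvers.add(approver)  # a set, so repeat approvals don't count twice

    def is_released(self) -> bool:
        return len(self.approvers) >= self.approvals_needed
```

Because approvers are tracked as a set, one person approving twice still counts once, and the caller who initiated the request is excluded entirely.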

Summary of Themes Across Cases

AI amplifies traditional fraud – deepfakes and voice cloning dramatically increase impersonation credibility.

Corporate sectors are at high risk – executives and finance teams are prime targets.

Government impersonation leverages trust – phishing emails from regulatory bodies can deceive many.

Legal frameworks exist but are evolving – laws like IT Act Sections 66C/D, criminal impersonation statutes, and emerging AI-focused regulations are crucial.

Controls are vital – multi-factor verification, out-of-band confirmation, and employee training reduce the risk.
