Research on Criminal Responsibility in AI-Assisted Algorithmic Financial Fraud, Trading, and Market Manipulation

1. Michael Coscia – Spoofing in Commodity Futures (USA)

Facts:

Michael Coscia ran Panther Energy Trading LLC and used automated algorithms to trade commodity futures such as gold, soybean, copper, and currency futures.

The algorithm placed large buy or sell orders that Coscia never intended to execute, canceling them within moments, while executing smaller genuine trades on the opposite side of the market.

This gave a false impression of supply or demand, allowing Coscia to profit from artificially created price movements.

Legal Outcome:

Coscia was prosecuted under the anti-spoofing provisions of the Commodity Exchange Act.

He was convicted of commodities fraud and spoofing, sentenced to prison, fined, and banned from trading.

The court found that the human designer/operator of the algorithm is responsible for the manipulative behavior, even if the trades were executed automatically by software.

Key Takeaways:

Shows clear human criminal liability when an algorithm is designed for manipulation.

Establishes that intent and control of the algorithm are critical for prosecution.

2. Navinder Singh Sarao – Flash Crash Manipulation (UK/USA)

Facts:

Sarao used an automated trading program in E-mini S&P 500 futures on the Chicago Mercantile Exchange.

He engaged in “layering” and spoofing: placing large sell orders he didn’t intend to execute, to drive prices down, then buying at lower prices.

His orders contributed to the May 6, 2010, Flash Crash in US equities, during which the Dow Jones Industrial Average temporarily dropped nearly 1,000 points within minutes.

Legal Outcome:

Sarao was charged in the US with price manipulation and spoofing, and ultimately pleaded guilty to wire fraud and spoofing.

As part of his plea, he admitted that thousands of his orders had been placed with the intent to cancel them before execution.

He paid monetary sanctions and was permanently prohibited from trading.

Key Takeaways:

Human responsibility is maintained even when the trader operates remotely using automated software.

Highlights systemic risk: a single algorithmic trader can significantly impact the broader market.

3. United States v. Coscia – The Algorithmic Spoofing Precedent

The Coscia case is frequently cited in legal discussions because it speaks directly to algorithmic trading and human liability.

The court emphasized that automated systems cannot conceal human intent: Coscia programmed his system specifically to cancel its large orders moments after placement, conduct that satisfied the statutory definition of spoofing (bidding or offering with the intent to cancel before execution).

This case set a clear legal precedent for prosecuting algorithm-assisted market manipulation.

Key Takeaways:

It reinforced that humans controlling AI-assisted trading systems are liable for manipulative market behaviors.

This created a legal framework for future AI-assisted trading cases.

4. Mina Tadrus – Fake AI Hedge Fund (USA)

Facts:

Tadrus claimed to run a hedge fund that used AI-driven algorithmic trading to achieve high returns.

In reality, almost none of the investor funds were used for algorithmic trading; it operated as a Ponzi scheme.

Investors were misled into believing AI algorithms were generating profits.

Legal Outcome:

Tadrus pleaded guilty to investment adviser fraud.

Sentenced to prison and ordered to pay restitution to victims.

Key Takeaways:

Shows how claims of “AI trading” can serve as a marketing veneer for fraudulent schemes involving no real algorithmic activity.

Liability rests entirely with the human operators who make false representations about the AI system.

5. Gregory Finerty – Unlicensed AI Trading Bot (Australia)

Facts:

Finerty ran a company that leased an AI trading bot (“Robot1”) to retail clients for foreign exchange trading.

The company did not hold the required financial services licence in Australia.

Finerty misled clients about the trading bot’s performance and directly participated in providing the unlicensed service.

Legal Outcome:

The regulator banned Finerty from providing financial services for several years, finding that clients had been misled and that the operation breached financial services laws.

Key Takeaways:

Even if the trading is automated, humans deploying AI systems are responsible for licensing compliance and accurate disclosure.

Regulatory liability applies when AI systems are used in unlicensed financial services or misrepresented to clients.

Summary of Lessons Across Cases

Human intent matters – AI or algorithmic trading cannot shield operators from criminal liability.

Algorithm misuse vs. fraud – Spoofing and market manipulation cases (Coscia, Sarao) involve real algorithmic execution, while some AI fraud cases (Tadrus) involve misrepresentation with no genuine trading at all.

Regulatory compliance is crucial – Using AI in trading without licences or misleading investors can lead to severe penalties.

Systemic impact – Algorithms can amplify market effects, making human oversight critical.

Precedent for AI-assisted trading liability – Courts consistently hold humans accountable for algorithm-driven misconduct.
