Analysis of AI-Enabled Manipulation of Financial Credit Ratings and Prosecutions Under Criminal Law

Case Study 1: Credit Reporting Agency CTOS Data Systems Sdn Bhd v. Suriati binti Mohd Yusof (Malaysia, 2024)

Facts: The plaintiff claimed that the defendant credit-reporting agency gave her a low credit score, which led to her loan applications being rejected. She alleged negligence and defamation on the ground that the report contained inaccurate or outdated information. The score was calculated by a software algorithm with little human oversight.
Legal/algorithmic issue: The court considered the role of algorithmic scoring in credit reporting, and whether the agency owed a duty to verify the accuracy of, and to explain, its algorithmic score. The High Court found for the plaintiff and awarded RM200,000 in damages. On appeal, however, the Court of Appeal overturned that decision, holding that the credit-reporting agency was authorised to formulate a credit score and that no breach of duty had been shown because the plaintiff had in fact defaulted on her payment obligations.
Why relevant to AI-enabled manipulation: Although the case did not involve proven intentional manipulation by the credit-reporting agency, it addresses the legal risks that arise once automated or algorithmic credit scoring is used: duty of care, transparency, and the accuracy of algorithmic decision-making. A minimal sketch of such lightly supervised scoring follows.
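To make the "little human oversight" point concrete, here is a minimal, purely hypothetical scorecard in Python. The fields, weights, and base score are invented for illustration and do not reflect CTOS's actual model; the point is that an inaccurate or outdated input propagates directly into the score with no human checkpoint.

```python
# Hypothetical rule-based scorecard; all fields and weights are invented.
def compute_score(record: dict) -> int:
    """Toy credit score: higher is better, clamped to the familiar 300-850 band."""
    score = 650                                           # assumed base score
    score -= 80 * record.get("defaults_on_file", 0)       # penalty per recorded default
    score -= 40 * record.get("late_payments_12m", 0)      # penalty per recent late payment
    score += 30 if record.get("years_of_history", 0) >= 5 else 0
    return max(300, min(850, score))

# An outdated record (e.g., a default that was later settled but never
# updated) keeps depressing the score, and no human review intervenes.
stale_record = {"defaults_on_file": 2, "late_payments_12m": 1, "years_of_history": 7}
print(compute_score(stale_record))  # 480: low enough that lenders may reject applications
```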

Case Study 2: Infrastructure Leasing & Financial Services (IL&FS) ratings‑manipulation investigation (India)

Facts: A forensic report by Grant Thornton India flagged that several credit-rating agencies allegedly compromised their professional judgment when rating IL&FS group companies between 2013 and 2018. Cited examples include rating-agency officials being invited to high-profile events, rating rationales being changed after meetings with the company, and ratings being cleared despite internal concerns. The report also suggested that some ratings may have been “prepared” by company management. Regulators, including the Serious Fraud Investigation Office and the Enforcement Directorate, investigated.
Legal/algorithmic issue: Although not explicitly about AI, the case shows how credit ratings can be manipulated through human misconduct, and the regulatory risk that creates for credit-rating agencies. If AI or algorithms are introduced into rating models, analogous manipulation vectors emerge, e.g., biasing input data, tampering with model weights, or rewriting rationales after the fact (see the sketch after this case study).
Why relevant to AI-enabled manipulation: This is a documented example of credit-rating manipulation; if AI were incorporated into the rating process, the same incentives would extend to algorithmic manipulation. It also shows that regulators are willing to investigate rating-agency misconduct. It does not involve a criminal conviction for AI manipulation, but it is a useful analogue.
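The two algorithmic manipulation vectors named above, biased inputs and tampered weights, can be shown in a few lines. The linear model, features, weights, and investment-grade threshold below are all invented for illustration; real rating models are far more complex, but the mechanics of the manipulation are the same.

```python
# Hypothetical linear rating model; every number here is invented.
FEATURES = ["leverage", "coverage", "liquidity"]
HONEST_WEIGHTS = {"leverage": -2.0, "coverage": 1.5, "liquidity": 1.0}

def model_score(inputs: dict, weights: dict) -> float:
    return sum(weights[f] * inputs[f] for f in FEATURES)

def grade(score: float) -> str:
    return "investment grade" if score >= 0.0 else "speculative grade"

issuer = {"leverage": 1.2, "coverage": 0.9, "liquidity": 0.8}
print(grade(model_score(issuer, HONEST_WEIGHTS)))         # speculative grade (-0.25)

# Vector 1: bias the input data (understate leverage before it reaches the model).
biased_inputs = dict(issuer, leverage=0.6)
print(grade(model_score(biased_inputs, HONEST_WEIGHTS)))  # investment grade (0.95)

# Vector 2: tamper with the model weights (soften the leverage penalty).
tampered_weights = dict(HONEST_WEIGHTS, leverage=-0.5)
print(grade(model_score(issuer, tampered_weights)))       # investment grade (1.55)
```

Either intervention flips the rating across the threshold while the model keeps running "normally", which is precisely what makes such manipulation hard to detect from outputs alone.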

Case Study 3: Fair Isaac Corp. (FICO) v. Experian Information Solutions et al. (USA, 2011)

Facts: FICO sued the credit bureaus and their joint venture (VantageScore), alleging antitrust violations, false advertising, and unfair competition, asserting that the bureaus' development of a competing credit-score model harmed FICO. The Eighth Circuit held that FICO failed to show antitrust injury or a threat of immediate injury.
Legal/algorithmic issue: While not a manipulation case, it involves competing algorithmic credit-score models and shows that how credit-scoring algorithms are developed and represented to the market attracts legal scrutiny.
Why relevant to AI-enabled manipulation: The case involves neither AI manipulation nor criminal prosecution, but it illustrates the regulatory and legal environment surrounding algorithmic credit scoring, the foundation on which any future AI-manipulation case would be built.

Key observations and gaps

None of the above cases involves proven intentional AI-driven manipulation of credit ratings leading to a criminal prosecution, which suggests that this area is so far under-litigated.

Many issues recur across these cases: algorithmic transparency, fairness, accuracy, duty of care, potential for bias, and the risk of manipulation, whether human or machine. At least one of these, disparate impact across groups, can be tested with a simple audit, as sketched below.
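As a concrete example of such an audit, the sketch below computes the "four-fifths" adverse-impact ratio, a common rule of thumb in fairness auditing; the groups and approval decisions are fabricated for illustration.

```python
# Adverse-impact ("four-fifths") check on hypothetical approval decisions.
def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 rule of thumb
```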

For criminal law: a prosecutor would need evidence of intentional deception or fraud using AI models to manipulate ratings, plus resulting damage. That standard is hard to meet given the proprietary nature of many models, “black-box” explainability problems, and opaque data. One precondition for such evidence existing at all is a tamper-evident record of scoring decisions, sketched below.
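One way such evidence could exist is a hash-chained audit log that binds each rating decision to the exact model artifact and inputs used. The sketch below is purely illustrative and assumes a hypothetical model_sha identifier; it is not any agency's actual system.

```python
import hashlib
import json
import time

# Hash-chained audit log: each entry's hash depends on the previous one,
# so retroactively editing any entry breaks every hash after it.
def entry_hash(prev_hash: str, payload: dict) -> str:
    blob = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

log, prev = [], "genesis"
for decision in [
    {"model_sha": "abc123", "inputs": {"leverage": 1.2}, "score": -0.25},  # hypothetical IDs
    {"model_sha": "abc123", "inputs": {"leverage": 0.6}, "score": 0.95},
]:
    payload = {"timestamp": time.time(), **decision}
    prev = entry_hash(prev, payload)
    log.append({**payload, "hash": prev})

# A diverging model_sha or an altered input in a past entry is now forensically
# visible, which is the kind of proof an intent-based prosecution would need.
```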

In this domain, the regulatory and compliance side (e.g., data-protection laws, consumer-protection laws, and laws governing algorithmic decision-making) is currently more active than criminal prosecution.
