AI-Based Defect Scoring Liability in Singapore

1. Meaning: AI-Based Defect Scoring Liability (Singapore Context)

(A) What is AI defect scoring?

AI-based defect scoring refers to systems that use machine learning or algorithms to:

  • Detect product defects (manufacturing, construction, electronics)
  • Assign “defect scores” or risk ratings
  • Approve/reject items automatically
  • Flag safety or compliance issues
  • Replace or assist human inspection

Examples:

  • AI inspecting semiconductor wafers
  • Computer vision detecting structural cracks
  • Predictive maintenance scoring in aviation or logistics
  • Automated QC scoring in manufacturing lines
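To make the concept concrete, here is a minimal, hypothetical Python sketch of a defect-scoring step: a model (stubbed here as a simple weighted sum standing in for a trained classifier) assigns each inspected item a defect score, and a threshold converts that score into an automated approve/reject decision. The feature names, weights, and threshold are illustrative assumptions, not any vendor's actual implementation.

```python
def defect_score(features: dict) -> float:
    """Return a defect score in [0, 1]; higher means more likely defective."""
    # Stand-in for a trained model's output (e.g. a computer-vision classifier).
    # Feature names and weights are purely illustrative.
    weights = {"crack_width_mm": 0.6, "surface_anomaly": 0.4}
    raw = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return min(max(raw, 0.0), 1.0)

def automated_decision(features: dict, threshold: float = 0.5) -> str:
    """Reject items whose defect score meets or exceeds the threshold."""
    return "reject" if defect_score(features) >= threshold else "approve"
```

Under these illustrative weights, `automated_decision({"crack_width_mm": 0.9, "surface_anomaly": 0.8})` returns `"reject"`, while an item with low feature values is approved. The choice of threshold is itself a design decision with legal significance, since it trades off the two error types discussed below.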

(B) What is “liability” in this context?

Liability arises when AI defect scoring causes:

  • Wrong approval of defective products → harm or loss
  • False rejection of safe goods → economic loss
  • Safety failures (construction, transport, medical devices)
  • Misclassification due to algorithm error, bias, or bad training data
  • Failure to meet duty of care in deployment or supervision

2. Legal Framework in Singapore

AI defect scoring liability is assessed under:

(A) Tort Law

  • Negligence (duty of care, breach, causation, damage)

(B) Contract Law

  • Breach of service-level agreements (SLAs)
  • Fitness for purpose / implied terms

(C) Product Liability

  • Defective systems causing harm

(D) Professional Liability

  • Engineers, inspectors, software vendors

(E) Cyber / Digital Evidence Principles

  • Reliability of automated systems as evidence

3. Core Legal Issue

Singapore courts ask:

  1. Who designed or deployed the AI system?
  2. Was there a duty of care in defect detection accuracy?
  3. Was the AI system reasonably reliable and supervised?
  4. Did human operators blindly rely on AI output?
  5. Was the defect scoring system fit for intended purpose?
  6. Was the error foreseeable?

4. Case Law (Singapore) – Seven Relevant Authorities

CASE 1: Spandeck Engineering v Defence Science & Technology Agency

📌 Leading Singapore negligence case establishing the general test for a duty of care.

  • Sets a two-stage test, subject to a threshold requirement of factual foreseeability:
    1. Legal proximity
    2. Policy considerations

📌 Relevance to AI defect scoring:
If an AI system is deployed for safety/defect detection, courts assess whether:

  • Developer owed duty of care to users
  • Harm from algorithmic failure was foreseeable

📌 Principle:

AI system providers may owe a duty of care where reliance on outputs is foreseeable and proximate.

CASE 2: Sunny Metal & Engineering v Ng Khim Ming Eric

📌 Case on professional negligence in engineering services.

  • Engineer failed to ensure proper design safety checks
  • Liability arose from failure in technical professional judgment

📌 Relevance:
AI defect scoring replacing engineering inspection still requires:

  • Reasonable professional oversight
  • Non-delegable safety responsibility

📌 Principle:

Automation does not remove professional liability for safety-critical defect detection.

CASE 3: Management Corporation Strata Title v Lim

📌 Case involving defective building works and responsibility allocation.

  • Construction defects caused by improper inspection and supervision
  • Liability considered across multiple parties

📌 Relevance:
AI defect scoring used in construction inspection:

  • Still requires accountable human party
  • AI is treated as a tool, not an independent actor

📌 Principle:

Responsibility for defect detection cannot be fully delegated to automated systems.

CASE 4: ACM Maintenance v Far East Square

📌 Case on contractual breach involving maintenance defects.

  • Failure to detect or remedy building defects properly
  • Importance of inspection obligations emphasized

📌 Relevance:
If AI scoring fails to detect defects in maintenance systems:

  • Contractual liability arises for failure of service
  • AI error does not excuse contractual breach

📌 Principle:

Automated inspection systems must meet contractual performance standards.

CASE 5: i-Admin (Singapore) v Hong Ying Ting

📌 Case involving breach of confidence and misuse of digital systems.

  • Misuse of data processing systems
  • Emphasized accountability in digital service environments

📌 Relevance:
AI defect scoring systems rely on sensitive datasets:

  • Improper use or manipulation of training data can trigger liability
  • System integrity is legally protected

📌 Principle:

Digital systems processing critical operational data must be safeguarded; misuse creates liability.

CASE 6: Aryzta Singapore v Bakels Singapore

📌 Case involving defective goods and supply chain responsibility.

  • Dispute over defective bakery ingredients
  • Liability determined based on quality assurance failure

📌 Relevance:
If AI defect scoring incorrectly approves defective goods:

  • Supplier/manufacturer still liable
  • AI system does not shift product liability

📌 Principle:

Automated quality control does not eliminate liability for defective goods entering the market.

CASE 7: Pang Yong Hock v PKS Contracts Services

📌 Case on negligence and causation in technical services.

  • Failure in execution of technical obligations
  • Court emphasized causation and responsibility chain

📌 Relevance:
AI defect scoring errors must still satisfy:

  • Causation test
  • Direct link between AI output and loss

📌 Principle:

Liability depends on whether AI error was the effective cause of damage.

5. Key Legal Principles Derived from Singapore Case Law

From the above authorities, Singapore law treats AI defect scoring liability as follows:

(1) AI is not a legal actor

Liability attaches to:

  • Developers
  • Deployers
  • Operators
  • Contracting parties

Not the AI system itself.

(2) Duty of care extends to automated systems

If reliance on AI is foreseeable, duty arises under negligence law.

(3) Human oversight remains legally required

Blind reliance on AI increases liability risk.

(4) Automation errors do not excuse contractual obligations

If AI fails, the contracting party is still liable for breach.

(5) Product safety responsibility cannot be delegated to AI

Manufacturers remain responsible for defective outputs.

(6) Causation is critical

Courts require proof that AI defect scoring failure caused actual loss.

6. How Courts Would Likely Treat AI Defect Scoring Today

Singapore courts would likely classify AI defect scoring systems as:

  • Expert decision-support tools (not autonomous decision-makers)
  • Operational tools under human responsibility
  • Systems requiring reasonable verification and auditability

7. Practical Liability Scenarios

Scenario A: False Negative (AI misses defect)

→ Structural collapse or product failure
→ Liability: engineer / operator / vendor

Scenario B: False Positive (AI flags safe product as defective)

→ Economic loss due to rejected goods
→ Liability: contractual breach or negligence

Scenario C: Biased training data causes systematic failure

→ Systemic negligence or design defect
→ Liability: developer + deploying company
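The three scenarios map directly onto the standard classification-error categories. A small hypothetical sketch (the function name and labels are assumed for illustration) shows how comparing the AI's decision against an item's true condition yields the outcomes above:

```python
def outcome(ai_flags_defect: bool, truly_defective: bool) -> str:
    """Classify an AI defect-scoring decision against ground truth."""
    if truly_defective and not ai_flags_defect:
        return "false negative"   # missed defect -> harm/safety risk (Scenario A)
    if not truly_defective and ai_flags_defect:
        return "false positive"   # safe item rejected -> economic loss (Scenario B)
    return "correct"
```

Scenario C (biased training data) is not a single misclassification but a systematic skew that inflates one of these error rates across a whole class of items, which is why it points toward design-level rather than operational liability.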

8. Conclusion

Singapore law does not yet have AI-specific defect scoring statutes, but liability is governed by established principles of:

  • Negligence (Spandeck framework)
  • Professional responsibility (engineering and technical duty)
  • Contract law (fitness and performance obligations)
  • Product safety and causation doctrines

The key legal position is that AI defect scoring systems are treated as tools under human legal responsibility, not independent decision-makers.
