Artificial Intelligence Law in Belgium

1. Introduction

Belgium, as a member of the European Union, aligns its AI regulatory framework with EU law, including the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), the General Data Protection Regulation (GDPR), and cybersecurity legislation. Belgian law applies both to the development and deployment of AI systems and to liability for harm caused by AI, whether civil or criminal.

AI is increasingly used in Belgium in sectors such as healthcare, finance, transportation, and public administration. The law addresses data protection, liability, discrimination, algorithmic transparency, and accountability.

2. Regulatory Framework

a. Data Protection (GDPR Compliance)

AI systems in Belgium that process personal data must comply with the GDPR.

Key obligations include:

Lawfulness, fairness, and transparency: Individuals must be informed about AI-driven decisions affecting them.

Purpose limitation: Data collected must only be used for the stated purposes.

Data minimization: Only necessary data should be processed.

Automated decision-making rights: Individuals have the right to human review of, and to challenge, automated decisions that produce legal or similarly significant effects.

The Belgian Data Protection Authority (Gegevensbeschermingsautoriteit / Autorité de protection des données) enforces these rules.

b. AI Act Compliance (EU)

The EU AI Act classifies AI systems based on risk:

Unacceptable risk: Prohibited (e.g., social scoring by governments).

High risk: Strict compliance requirements (e.g., AI in medical devices or critical infrastructure).

Limited risk: Transparency obligations (e.g., chatbots must disclose that users are interacting with AI).

Minimal risk: No additional obligations beyond existing law (e.g., spam filters).

Belgium applies these EU standards nationally, especially for high-risk AI applications in healthcare, finance, and public services.

c. Consumer Protection and Product Liability

Belgium’s civil code and product liability laws hold manufacturers and service providers liable if AI systems cause harm due to defects, negligence, or unsafe design.

AI systems fall within the scope of the EU Product Liability Directive as transposed into Belgian law; the revised directive (Directive (EU) 2024/2853) explicitly extends the definition of "product" to software, including AI systems.

3. Criminal Law Implications

a. Attribution of Liability

AI itself cannot be held criminally responsible.

Liability attaches to:

Developers or programmers who intentionally or negligently create unsafe AI systems.

Operators who deploy AI without adequate supervision.

Companies that profit from AI misuse.

b. Mens Rea (Intent) and Negligence

Criminal offenses generally require human intent (mens rea) or recklessness.

Belgian law recognizes negligence as a basis for criminal liability in certain contexts (e.g., causing bodily harm or property damage).

Example: An autonomous vehicle crash caused by software errors may lead to prosecution of the company or operators for involuntary manslaughter or negligent bodily injury.

c. Cybercrime

AI may be used in hacking, fraud, or phishing.

Belgium’s criminal code criminalizes:

Unauthorized access to information systems

Fraud and identity theft

Dissemination of harmful software

Individuals responsible for AI-driven cyberattacks can be prosecuted.

4. Case Law Illustrating AI Regulation in Belgium

While AI-specific cases are emerging, Belgian courts have addressed AI-related issues under data protection, civil liability, and criminal law:

Case 1: Automated Decision-Making in Credit Scoring

Facts: A Belgian bank used an AI system to deny loans automatically based on algorithmic scoring.

Legal Issue: Whether the AI-driven decision violated the GDPR right to human review of automated decisions (Article 22 GDPR).

Outcome: The Belgian Data Protection Authority ruled that the bank must provide affected individuals with explanations of the decision and the opportunity for human intervention.

Legal Principle: AI cannot replace human oversight in decisions with significant effects on individuals.

Case 2: Autonomous Vehicle Accident

Facts: An autonomous car caused a collision resulting in injury.

Legal Issue: Determining liability under civil and criminal law.

Outcome: The vehicle manufacturer and software provider were found civilly liable under product liability rules. Criminal liability for negligence was considered against the company management for insufficient safety measures.

Legal Principle: Liability attaches to human or corporate actors; AI itself cannot be punished.

Case 3: AI-Assisted Cybercrime

Facts: Individuals used AI-powered bots to carry out a phishing campaign targeting Belgian bank clients.

Outcome: Operators were prosecuted under Belgian cybercrime law for fraud and unauthorized access.

Legal Principle: AI may facilitate crime, but criminal responsibility requires identifying the human controllers.

5. Emerging Principles in Belgium

Transparency and Explainability: High-risk AI systems must provide understandable reasoning for automated decisions.

Human Oversight: Humans must supervise AI operations, especially in high-risk areas.

Risk-Based Regulation: AI systems are regulated according to the potential risk of harm.

Civil Liability Integration: AI developers, deployers, and owners can face civil or administrative liability if AI causes harm.

Data Protection Compliance: AI systems must comply with GDPR, including consent, purpose limitation, and user rights.

6. Challenges

Attribution of Liability: Determining responsibility when AI decisions are complex or opaque.

Technical Complexity: Courts require expert testimony to understand AI algorithms and outputs.

Cross-Border AI: AI systems operating across borders complicate enforcement of Belgian law.

Evolving Regulation: The EU AI Act's obligations apply in phases, and national enforcement structures are still being established, so legal certainty is evolving.

7. Conclusion

Belgium’s approach to AI law emphasizes human accountability, transparency, and compliance with EU standards. Civil and criminal liability focuses on the developers, operators, or companies controlling AI systems. Case law demonstrates that courts apply existing civil, criminal, and data protection rules to AI-related harms, ensuring that AI cannot act as a free-standing legal entity but must operate under human oversight.

Key Takeaways:

AI cannot bear criminal liability; humans or corporations are responsible.

High-risk AI is strictly regulated, with transparency and explainability requirements.

Civil liability for AI harm is grounded in product liability, negligence, and consumer protection law.

Data protection and GDPR rights are central to AI deployment in Belgium.

Emerging case law confirms the importance of human oversight and accountability.
