🇱🇮 Artificial Intelligence Law in Liechtenstein — Overview
Liechtenstein does not have a standalone AI statute of its own.
Instead, AI is regulated through a combination of:
1. EU/EEA Integration
Liechtenstein is part of the European Economic Area (EEA).
This means Liechtenstein is required to adopt major EU digital regulations, including the EU AI Act (Regulation (EU) 2024/1689), once they are incorporated into the EEA Agreement and become EEA-binding.
Thus, Liechtenstein’s AI legal framework is based on:
the EU AI Act (risk-based regulation of AI systems)
the General Data Protection Regulation (GDPR)
the Liechtenstein Data Protection Act
consumer protection and product liability laws
criminal law (e.g., cybercrime, fraud, deepfakes)
2. Supervisory Authorities
Datenschutzstelle (DSS) – Liechtenstein Data Protection Authority
Financial Market Authority (FMA) – for fintech/AI in finance
Sectoral regulators depending on use (healthcare, telecom, etc.)
⭐ Risk Classification under the EU AI Act (applies in Liechtenstein)
Unacceptable-risk AI – prohibited
(e.g., social scoring, real-time biometric mass surveillance)
High-risk AI – allowed but heavily regulated
(e.g., medical diagnostic AI, credit scoring, fintech risk assessment)
Limited-risk AI – transparency obligations
(e.g., chatbots, deepfakes)
Minimal-risk AI – free use
(e.g., video game AI, spell checkers)
📚 Detailed Hypothetical Case Studies (5+ Cases)
These examples illustrate how Liechtenstein authorities would likely apply the law.
Case 1 — AI Credit Scoring in a Liechtenstein Bank (High-Risk AI)
A bank in Vaduz uses a machine-learning model to evaluate loan applications.
An applicant claims the AI unfairly rejected his loan.
Legal Issues
It is high-risk AI (financial creditworthiness assessment).
The bank must provide:
documentation on the model
training data quality information
transparency about logic
human oversight
The applicant has the right to:
obtain meaningful information about the logic involved (Art. 15(1)(h) GDPR)
challenge solely automated decisions (Art. 22 GDPR)
Outcome (likely)
The Data Protection Authority orders:
a review of the AI system
improved explainability
a repeat of the loan assessment by a human
The bank may face administrative fines if the model used biased or insufficiently transparent algorithms.
Case 2 — AI-Generated Investment Advice in a FinTech Startup
A Liechtenstein fintech app uses generative AI to recommend investments.
One customer suffers major losses and claims:
the AI gave “confident but misleading” advice
no warning about risks
Legal Issues
Under the EU AI Act this is limited-risk AI (transparency obligations only), but sectoral financial regulation imposes far stricter duties.
FMA requires:
clear labeling that AI is used
no misleading impression of professional advice
disclosure of risks and limitations
Outcome
The company is required to:
add disclaimers
improve risk communication
ensure human verification for high-stakes advice
Liability may apply if the AI made systematically misleading or non-compliant recommendations.
Case 3 — AI Medical Diagnosis Tool in a Private Clinic
A Vaduz clinic adopts an AI radiology system that misidentifies a tumor as benign.
The patient sues for delayed treatment.
Legal Issues
This is high-risk AI according to the EU AI Act.
Duties:
rigorous testing
certified medical devices
monitoring and incident reporting
human oversight (doctor must verify)
Outcome
Investigators find:
the clinic relied too heavily on automated results
documentation did not meet high-risk AI requirements
Result:
The patient receives compensation
The clinic must revise its procedures
A compliance audit is imposed on both the vendor and the clinic
Case 4 — Biometric Facial Recognition in a Shopping Mall (Prohibited Use)
A Liechtenstein shopping center deploys cameras to:
track customer behavior
identify “VIP customers” via facial recognition
Legal Issues
This is effectively real-time remote biometric identification in a publicly accessible space. Under the EU AI Act this practice is prohibited (unacceptable-risk AI) except under narrowly defined law-enforcement conditions (e.g., serious crime prevention), which never cover commercial use.
It also violates the GDPR principles of necessity, proportionality, and data minimization.
Outcome
The Data Protection Authority orders:
immediate shutdown of the system
deletion of collected data
administrative fines
public notice due to severity
The mall must adopt non-biometric analytics instead.
Case 5 — AI Deepfake Defamation Against a Public Figure
An AI tool is used to create a fabricated video of a Liechtenstein business leader involved in illegal activities.
The video circulates on social media.
Legal Issues
Misleading deepfakes fall under the EU AI Act's transparency obligations: AI-generated or manipulated content must be disclosed as such.
GDPR violations (processing a person’s likeness without consent).
Criminal law on defamation and possibly cybercrime.
Outcome
Authorities order:
removal of the video
identification of the uploader
potential prosecution for defamation
compensation for reputational damage
The AI platform may also be warned if deepfake-labeling obligations were not properly implemented.
Case 6 — AI Used to Filter Job Applicants in a Liechtenstein Company
A manufacturing company uses AI to short-list applicants.
A candidate claims discrimination based on nationality or gender.
Legal Issues
Recruitment and candidate-screening AI is classified as high-risk under the EU AI Act.
Obligations:
bias monitoring
documentation of training data
human involvement in decision-making
explainability of results
Outcome
Investigation finds:
the AI disproportionately rejected applicants from certain backgrounds
inadequate bias audits
Actions imposed:
retraining or decommissioning the AI
compensation for affected applicants
compliance improvements
Case 7 — Autonomous Delivery Robots in Vaduz (Minimal/High-Risk Hybrid)
A company rolls out sidewalk robots delivering parcels.
One robot bumps into a pedestrian, causing minor injury.
Legal Issues
Safety falls under product liability law
Depending on the robot's level of autonomy, the system may qualify as high-risk AI
Obligations:
record-keeping
failsafes
continuous monitoring
Outcome
Manufacturer and operator share liability.
They must:
update firmware
perform safety tests
compensate the injured pedestrian
Local authorities may impose temporary restrictions.
✔ Summary
Liechtenstein’s AI regulation is built on:
EU AI Act risk categories
GDPR
sector laws (finance, medical, product liability)
And typical legal actions involve:
audits
transparency requirements
penalties for high-risk misuse
compensation to harmed individuals
shutdowns for prohibited AI