Artificial Intelligence law at Martinique (France)
⚖️ CASE 1 — AI in the Workplace Requires Employee-Committee Consultation
Court: Tribunal judiciaire (France)
Issue: Can an employer deploy an AI tool without consulting the Works Council (CSE)?
Facts:
A large retail company introduced an AI-based productivity-monitoring system. The algorithm analyzed employee barcode-scanning speed, break times, and “efficiency score.” The company rolled it out as a “pilot test” without informing the Works Council.
Employee Complaint:
The CSE argued that the new tool modified working conditions and introduced algorithmic evaluation—making consultation mandatory under French labor law.
Court Decision:
The judge ruled that even a pilot AI system affects working conditions. The employer must consult the Works Council before deploying any AI that:
evaluates workers,
monitors performance, or
influences scheduling or sanctions.
The AI deployment was suspended until consultation occurred.
Importance for Martinique:
Any business in Martinique must consult its CSE before using AI for HR or monitoring; otherwise, they risk suspension and damages.
⚖️ CASE 2 — AI-Assisted Judicial Tools Allowed but Strictly Supervised
Court: Cour de cassation (France’s highest civil court)
Issue: May judges use AI tools in forming judicial decisions?
Facts:
In several courts, magistrates began using an AI legal-research assistant that summarized jurisprudence and suggested “likely outcomes.” A party to a lawsuit later argued that the judgment was biased because the judge relied on a non-transparent algorithm.
Court Ruling:
The court held:
AI may assist,
but final reasoning must remain human, and
the judge must be able to explain how the decision was made without blindly relying on the tool.
If a judge cannot explain or justify a decision independently of the algorithm, the decision is invalid.
Importance for Martinique:
Judges in Martinique may use AI for research or analysis, but not to replace judicial reasoning. Any ruling influenced by AI must remain traceable and human-controlled.
⚖️ CASE 3 — Liability for AI-Caused Harm (“Cascade Responsibility”)
Court: Cour de cassation
Issue: When AI malfunctions, who is responsible — the AI, the user, or the developer?
Facts:
A medical-diagnosis AI system used by a private clinic misidentified a patient’s tumor as benign, delaying treatment. The patient sued the clinic, the doctor, and the AI manufacturer.
Court’s Reasoning:
The court rejected the idea of “AI personhood.” Instead it established a cascade responsibility:
User liability — if the user failed to supervise or verify AI output.
Operator liability — if the organization mis-configured or over-trusted the system.
Developer/manufacturer liability — if the algorithm itself was defective or inadequately tested.
In this case:
The doctor was liable for insufficient verification,
The clinic was liable for inadequate staff training,
The manufacturer was partially liable for poor dataset testing.
Importance for Martinique:
If an AI harms someone in Martinique, courts will examine human actors, not the AI itself. Multiple parties can be responsible simultaneously.
⚖️ CASE 4 — Algorithmic Discrimination in Recruitment
Court: Cour de cassation (Social Chamber)
Issue: Can an employer rely on an AI recruitment tool without bias testing?
Facts:
A logistics company used an AI tool that filtered applicants based on CV patterns. A rejected applicant noticed that applicants from certain neighborhoods (including overseas departments like Martinique) had drastically lower acceptance rates.
The applicant claimed discrimination based on origin.
Court Findings:
The court held that:
Employers must conduct regular, documented bias audits,
AI tools must be explainable,
The employer remains legally responsible for discriminatory outcomes even if the bias is “from the algorithm.”
The employer was found guilty of indirect discrimination.
Importance for Martinique:
Any algorithm that disadvantages candidates from Martinique or its communes could trigger discrimination claims. Employers must test and monitor all AI-based hiring tools.
⚖️ CASE 5 — Copyright Protection for AI-Generated Creations (Human Input Required)
Court: Tribunal judiciaire de Paris
Issue: Can AI-generated artwork be copyrighted?
Facts:
A digital artist used a generative model to create a series of images. Another company copied the images, saying: “These are machine-generated and therefore have no copyright.”
Court Decision:
AI-generated works are not protected on their own,
BUT if a person makes substantial creative choices —
prompts, composition direction, selection, curation, editing —
then the final work can be copyrighted, because the human input is creative.
The artist’s work was protected since he demonstrated significant manual choice and artistic oversight.
Importance for Martinique:
AI users in Martinique must show human creativity to claim copyright. Pure machine-autonomous output receives no guaranteed protection.
⚖️ CASE 6 — Public Administration Must Explain Algorithmic Decisions
Court: Conseil d’État (France’s highest administrative court)
Issue: Can the State use opaque algorithms to make administrative decisions?
Facts:
A student applied for a housing subsidy. The application was rejected automatically by an administrative algorithm. When she requested the reasoning, the administration refused, saying the algorithm was proprietary.
Court Ruling:
The Conseil d’État ruled:
Any algorithm used by public authorities must be transparent,
Citizens have a right to explanation,
Proprietary or “black-box” systems cannot justify a lack of transparency.
The decision was annulled because the administration could not explain how the AI scored the applicant.
Importance for Martinique:
Martinique’s administrative bodies (CAF, housing offices, regional authorities) cannot rely on opaque AI to make decisions affecting citizens.

comments