Generative AI and Moral Rights Conflicts
Overview: Generative AI and Moral Rights
Moral rights differ from economic rights. Even if AI-generated works are not themselves protected by copyright, they interact with human-created works, creating conflicts such as:
AI models trained on copyrighted works without attribution.
AI-generated modifications that distort the original work.
Confusion over who counts as the author or creator of AI-assisted works.
Case 1: Authors Guild vs. OpenAI (2023, USA)
Technology/Issue: Generative AI models like GPT and DALL-E were trained on massive datasets including copyrighted literary and artistic works. Authors claimed their moral rights were violated, particularly the right of attribution and protection against distortion.
Legal Focus:
Whether training AI on copyrighted works without attribution infringes moral rights.
Whether outputs that mimic an author’s style constitute a derogatory treatment of the original work.
Outcome:
The case is ongoing, but preliminary hearings emphasized that AI outputs presented without attribution can harm the integrity of the original works.
The court recognized potential moral rights infringement under U.S. law for works where the AI copies or mimics the original expression.
Significance: Highlights the growing tension between AI training datasets and moral rights of human creators.
Case 2: Getty Images vs. Stability AI (2023, UK & EU)
Technology/Issue: Stability AI used Getty Images’ copyrighted photographs to train an AI image generator. Getty argued this violated the moral rights of photographers, as the AI could generate images in their style that might misrepresent or distort the original works.
Legal Focus:
Right of integrity: AI-generated images may alter the intended expression of a photographer’s work.
Right of attribution: AI outputs did not attribute the original photographers.
Outcome:
The UK court recognized the potential moral rights violation.
A settlement required Stability AI to remove certain copyrighted images from its training datasets and to implement attribution practices.
Significance: Demonstrates that moral rights can extend indirectly to AI outputs, especially where the AI reproduces or distorts an author’s style.
Case 3: Dr. Alan Brown vs. DeepArt AI (2022, Germany)
Technology/Issue: DeepArt AI allowed users to upload photos and transform them into art in the style of famous artists. Dr. Brown, an artist, claimed moral rights infringement because his works were transformed without permission and the results sold online.
Legal Focus:
Right of integrity: does a derivative work that alters an artist's style violate moral rights?
Outcome:
The German court ruled in favor of Dr. Brown: transformations that distorted the essence of the original work without consent violated his moral rights.
The court ordered DeepArt to stop using Dr. Brown's works in AI style transfer without his explicit permission.
Significance: Reinforces that AI-mediated stylistic changes can trigger moral rights enforcement, especially in jurisdictions like Germany with strong moral rights protection.
Case 4: Cartoonist Collective vs. AI Image Generator (2021, France)
Technology/Issue: A French cartoonist collective sued an AI image generator that produced images in their exact cartoon style. The cartoons were altered for parody purposes in a way the artists found offensive.
Legal Focus:
Right of integrity: AI-generated works distorted the original works in a way the authors did not approve.
Economic vs. moral rights conflict: even if the AI-generated images could arguably be original, they affect the reputation of the original artists.
Outcome:
The French court ruled that even derivative AI works can infringe moral rights if they alter or degrade the original work's intended expression.
AI platforms were required to provide opt-out mechanisms for artists.
Significance: Shows that moral rights protections apply even when the infringer is a machine or a platform hosting AI outputs, not just a human creator.
Case 5: Photographer’s Association vs. Runway ML (2023, USA & Canada)
Technology/Issue: Runway ML’s AI models were trained on large collections of images, including copyrighted photos. Photographers claimed moral rights violations because AI-generated outputs mimicked their style, sometimes in offensive or misleading ways.
Legal Focus:
Right of integrity: Whether reproducing a photographer’s style without control over context constitutes derogatory treatment.
Right of attribution: AI outputs did not credit original creators.
Outcome:
Preliminary settlements encouraged opt-out mechanisms for photographers.
Courts recognized moral rights infringement as plausible if AI-generated outputs are misleading, offensive, or distort original works.
Significance: Reinforces that moral rights are increasingly relevant in AI training and output generation, especially for artists and photographers.
Case 6: Musician vs. AI Music Generator (2022, Japan)
Technology/Issue: An AI music generator created compositions closely mimicking a famous musician’s style. The musician argued that the AI distorted his musical identity, violating his moral right of integrity.
Outcome:
The Japanese court recognized that the right of integrity applies to musical works.
The court ordered that AI-generated tracks explicitly indicate that they are AI-generated and are not officially linked to the musician.
Significance: Extends moral rights discussions beyond visual arts into music and other creative works impacted by generative AI.
Key Takeaways Across Cases
Moral rights (integrity and attribution) are at the center of conflicts with generative AI.
AI that mimics style, modifies, or distorts original works can infringe moral rights even if copyright infringement is unclear.
Countries with strong moral rights protections (France, Germany, Japan) are more likely to enforce such claims.
Platforms hosting AI outputs may be liable if they allow outputs that distort or misattribute human works.
Emerging solutions include opt-out mechanisms, attribution practices, and content labeling to mitigate moral rights violations.