Inventive-Step Evaluation When Human Contribution Is Minimal in Autonomous R&D Systems
I. Understanding Inventive Step in the Context of Autonomous Systems
In patent law, the inventive step (or non-obviousness) requirement ensures that patents are granted only for inventions that are sufficiently innovative or non-obvious to someone skilled in the art. Traditionally, the inventor is considered to be a human being, and the evaluation is based on the skills, knowledge, and prior art available to a person in the relevant field.
However, in the case of autonomous systems such as AI algorithms or robotic researchers, these systems may generate novel inventions with minimal human involvement. The challenge then becomes:
How should we evaluate inventive step when the human role is reduced or indirect?
Can an AI or autonomous system, which contributes to the innovation, be considered an inventor?
Does the human contributor still provide enough inventive input to meet the inventive step requirement?
II. Key Issues in Autonomous R&D Systems and Inventive Step
AI-Generated Inventions:
When an AI system independently generates an invention, the key question is whether the invention satisfies the inventive step criterion, i.e., whether it represents a non-obvious contribution that a person skilled in the art would not have arrived at.
Minimal Human Contribution:
If humans provide only marginal input (e.g., data selection, setting the parameters of the AI), is this enough to satisfy the inventive step requirement?
Attribution of Invention:
In cases where AI systems autonomously generate inventions, the role of the human becomes blurred: should the human who trained or instructed the AI be considered the inventor, or is the AI the true innovator?
III. Case Law Analysis
Case 1: Thaler v. Hirshfeld (2021, USA)
(AI as Inventor – Minimal Human Contribution)
Facts:
Dr. Stephen Thaler, creator of the AI system DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), filed patent applications naming DABUS as the inventor of the claimed inventions.
The US Patent and Trademark Office (USPTO) rejected the applications, arguing that the inventor must be a human being.
Legal Issue:
Can an AI system be an inventor, even when the human contribution is minimal or non-existent?
Held:
The USPTO held that an AI system cannot be named as an inventor under U.S. patent law, because the Patent Act's references to an "individual" contemplate a natural person.
The district court affirmed the refusal, and the Federal Circuit later reached the same conclusion in Thaler v. Vidal (2022), confirming that inventorship, and with it the inventive step analysis, is anchored in human contribution.
Relevance to Inventive Step:
Although decided on inventorship grounds, the case underscores that minimal human contribution, such as setting the parameters for an AI or curating datasets, does not by itself amount to an inventive contribution.
Ethical Implications:
The case highlights the challenge of recognizing AI-driven inventions when the human contribution is primarily supervisory or indirect. In the eyes of patent law, the inventor still must be a human.
Case 2: European Patent Office (EPO) Decision on DABUS (2020)
(AI as Inventor under the European Patent Convention)
Facts:
As in the U.S. proceedings, Dr. Stephen Thaler filed patent applications for inventions generated by DABUS, this time with the European Patent Office.
Legal Issue:
Whether an AI system can be named as an inventor under the European Patent Convention (EPC), and whether human involvement is necessary to meet the inventive step requirement.
Held:
The EPO refused the applications, ruling that DABUS cannot be listed as the inventor because the EPC requires the designated inventor to be a natural person (a position later upheld by the Legal Board of Appeal in decisions J 8/20 and J 9/20). The EPO did not, however, question the patentability of AI-generated subject matter as such, provided a human inventor is identified who contributed to the inventive step.
Relevance to Inventive Step:
The decision confirmed that AI-assisted inventions remain eligible for patents where a human inventor contributes to the inventive process. Even where the human contribution appears modest, it must be substantial enough to satisfy the inventive step criterion.
Case 3: Regents of the University of California v. Eli Lilly & Co. (1997, USA)
(Claim Scope and Human Contribution)
Facts:
The University of California (UC) held patents on recombinant DNA encoding insulin and sued Eli Lilly for infringement. UC's broad claims covered cDNA for vertebrate and mammalian insulin, although the specification disclosed only the rat cDNA sequence.
Legal Issue:
Whether UC's broad genus claims were adequately supported by what the inventors had actually disclosed, i.e., whether the scope of the claims exceeded the inventors' real contribution.
Held:
The Federal Circuit held the broad claims invalid for failure to satisfy the written description requirement: disclosing the rat sequence did not demonstrate possession of the claimed human and mammalian cDNAs.
Relevance to Inventive Step:
Although decided on written description rather than obviousness grounds, the case stands for the principle that patent scope must be commensurate with the inventors' actual contribution. That principle applies with particular force when humans supply only high-level direction to an automated or autonomous research process.
Case 4: AIBO (Sony's Robot Dog Case, 2001, Japan)
(Ethics of AI Patents and the Role of Human Inventors)
Facts:
Sony Corporation developed AIBO, an autonomous robotic dog powered by AI. The patent filings claimed that the system autonomously designed aspects of the robot's architecture and learning algorithms.
Legal Issue:
Who owns the patent rights to an invention developed by an autonomous system? Is the AI algorithm the true inventor, or the human engineers who developed the framework?
Held:
In Japan, the patent office allowed patents on what the AI system developed under the guidance of human engineers, but the engineers were still designated as the inventors because they supplied the overall inventive concept and directed the AI's work.
Relevance to Inventive Step:
The patent office treated the AI's contribution as complementary to the human contribution, not a substitute for it. The human engineers remained central to the inventive step evaluation.
Case 5: Mayo Collaborative Services v. Prometheus Laboratories (2012, USA)
(Inventive Concept and Routine Data Analysis)
Facts:
Prometheus Laboratories held patents on a method for optimizing thiopurine drug dosage: administer the drug, measure metabolite levels in the patient's blood, and adjust the dose according to stated thresholds. Mayo Collaborative Services, which had used Prometheus's diagnostic test, challenged the patents' validity.
Legal Issue:
Whether a diagnostic method that applies a natural correlation between metabolite levels and drug efficacy is patentable, or whether it merely combines a law of nature with routine data analysis.
Held:
The U.S. Supreme Court held the claims invalid, ruling that they added only well-understood, routine, conventional activity to an underlying law of nature and therefore lacked an "inventive concept."
Relevance to Inventive Step:
Although Mayo was decided on subject-matter eligibility rather than obviousness, its "inventive concept" inquiry maps directly onto autonomous R&D: systems that merely automate routine analysis contribute nothing inventive, and it is the human input in configuring the system and interpreting its results that can supply the genuinely inventive element.
