Case Law On Criminal Responsibility For Automated Decision-Making In Public Governance

Case 1: Pintarich v Deputy Commissioner of Taxation (Australia, 2018)

Facts:
A taxpayer applied for remission of the general interest charge (GIC) on tax liabilities of about AU$1.17 million. The Australian Taxation Office (ATO) issued a computer‑generated letter, produced from a template, that purported to accept a payment arrangement and remit the GIC. No authorised officer reviewed the letter before dispatch. The ATO subsequently claimed the remission was in error and required payment in full. The taxpayer challenged this on the basis that the original letter constituted a “decision” by the Commissioner.
Legal issues:

What constitutes a “decision” by a decision‑maker under the Taxation Administration Act? Does a purely computer‑generated letter issued without an adjudicative mental process qualify?

Delegation and automated decision‑making: can an automated system stand in for the mental process of a delegate?
Outcome:
The Full Court of the Federal Court held, by majority, that the computer‑generated letter did not amount to a valid “decision” by the Commissioner because there was no mental process of reaching a conclusion and no objective manifestation of such a process. The taxpayer therefore could not rely on the letter as an official decision, and the Commissioner’s later decision stood.
Key take‑aways:

The decision highlights that automated decision‑making systems used by public bodies must still satisfy administrative law requirements of process and human deliberation when they purport to issue “decisions”.

Public authorities cannot assume that a machine output equates to a legally effective administrative decision unless the human decision‑maker’s deliberation is present.

Implication: Liability may shift from the automated tool to the human delegate or the public body if the statutory requirements aren’t met.

Case 2: Robodebt scheme (Australia, 2015‑2019)

Facts:
The Australian federal government implemented an automated income‑averaging system to identify welfare overpayments by matching tax office data against Centrelink records. The system issued debt notices based solely on averaging annual income across fortnights, without verifying the recipient’s actual income in each fortnight. Hundreds of thousands of recipients received debt notices. In 2019 the Federal Court declared that raising debts purely by automated averaging was unlawful.
Legal issues:

Whether the use of an automated decision system deprived recipients of procedural fairness, transparency, and a proper basis for the decision.

Accountability: Which entity or individual is responsible when the decision‑making process is automated and errors are systemic?

Whether the debt notices were valid “decisions” under the Social Security Act and whether they were irrational or unlawful due to the automated process.
Outcome:
The scheme was declared unlawful; debts based solely on the automated averaging method were invalid. A class action settlement followed, and a Royal Commission found serious failings. Although no official was criminally prosecuted, the scheme triggered accountability investigations and enormous consequences for public administration.
Key take‑aways:

Automated decision systems with insufficient human oversight can result in mass administrative law liability.

Responsibility for flawed automated governance may fall on ministers, agencies or the designers of the system, even where no individual is criminally prosecuted.

This case underscores the risk of delegating high‑stakes decisions (debt recovery) to automation without adequate legal authority, oversight or transparency. A minimal numerical sketch of the averaging flaw follows.
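To make the averaging flaw concrete, here is a minimal sketch in Python. All figures (the AU$550 base rate, AU$300 income‑free area, 50% taper, and the recipient’s earnings) are invented for illustration; they are not the actual payment parameters, and the code is not the agency’s system.

```python
# Hypothetical illustration of the income-averaging flaw (invented figures,
# not agency code): a simple fortnightly income test applied two ways.

FORTNIGHTS_PER_YEAR = 26
BASE_RATE = 550.0         # hypothetical maximum fortnightly payment
INCOME_FREE_AREA = 300.0  # hypothetical fortnightly income-free threshold
TAPER_RATE = 0.5          # hypothetical reduction per dollar above the threshold

def entitlement(fortnightly_income: float) -> float:
    """Payment for one fortnight under the simple income test above."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, BASE_RATE - TAPER_RATE * excess)

# A recipient who worked only half the year: 13 fortnights at $1,800, 13 at $0.
actual_income = [1800.0] * 13 + [0.0] * 13
annual_income = sum(actual_income)  # $23,400

# Lawful approach: assess each fortnight against the income actually earned.
correct_total = sum(entitlement(i) for i in actual_income)

# Averaging approach: smear annual income evenly across every fortnight.
averaged_income = annual_income / FORTNIGHTS_PER_YEAR  # $900 per fortnight
averaged_total = entitlement(averaged_income) * FORTNIGHTS_PER_YEAR

print(f"Entitlement on actual fortnightly income:   ${correct_total:,.2f}")   # $7,150
print(f"Entitlement on averaged income:             ${averaged_total:,.2f}")  # $6,500
print(f"Phantom 'overpayment' implied by averaging: ${correct_total - averaged_total:,.2f}")
```

On these invented numbers, a recipient who was correctly paid AU$7,150 across the year would be told the averaged data supports only AU$6,500, and the AU$650 difference would be raised as a debt even though no overpayment occurred. This is the irrationality later identified in Amato (Case 6): averaged annual data cannot establish what was actually earned in any given fortnight.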

Case 3: Schouten v Secretary, Department of Education, Employment and Workplace Relations (Australia, 2011)

Facts:
The applicant sought review of the rate of Youth Allowance paid by the Australian government, which had been calculated through an automated process. During the tribunal hearing the operation of the automated system was explained, and the Tribunal noted that the data processing could only be understood with the aid of evidence from a human officer.
Legal issues:

Transparency and accountability of automated decision‑making in the public benefits context.

Whether users have meaningful access to reasoning when decisions are algorithmic.
Outcome:
The Tribunal affirmed the calculation but emphasised the difficulty of contesting automated decisions and the need for audit trails and plain‑language explanations of algorithmic decisions.
Key take‑aways:

Even when automated decisions are valid, public bodies must maintain audit trails and provide explanations so affected individuals can understand and challenge them.

The case is an early illustration of how administrative law frameworks must adapt to automation (e.g., reason‑giving and a right to explanation).

Case 4: Swedish Parliamentary Ombudsman decision 2022/23 p 481 (Sweden)

Facts:
The Swedish Migration Agency deployed an automated decision‑making (ADM) system to process so‑called delayed action cases (requests that the agency decide a long‑pending matter). The system generated decisions but did not state reasons in a manner compliant with Section 32 of Sweden’s Administrative Procedure Act (APA), which requires decisions to be accompanied by reasons. The Ombudsman found the agency’s reason‑giving inadequate.
Legal issues:

Delegation of decision‑making authority to automated systems without proper statutory authorisation.

Duty to state reasons: automated decisions failing to articulate the basis for the outcome violate administrative law rights.
Outcome:
The Ombudsman criticised the agency’s process for failing to provide adequate reasons. While not a judicial decision, let alone a criminal one, the finding illustrates a supervisory accountability mechanism for automated public decisions.
Key take‑aways:

Public bodies cannot hide behind automation to avoid the requirement of giving proper reasons for decisions affecting individuals.

The decision highlights organisational administrative liability and the need for transparency in automated governance.

While not “criminal responsibility,” it illustrates accountability mechanisms triggered when automation fails public‑law standards.

Case 5: Masetlha v President of the Republic of South Africa (South Africa, 2007)

Facts:
The head of a South African intelligence agency was dismissed by the President. While the case does not concern automated decision‑making, it raises themes of administrative power, fairness and decision‑making by public authority that are relevant when asking whether machines may replace human decision‑makers.
Legal issues:

Whether executive dismissal constituted “administrative action” subject to review or was a political executive act.

Implications for delegation of decision‑making and the extent of accountability when public functions are exercised by non‑traditional decision processes.
Outcome:
The Constitutional Court held the dismissal was lawful executive action, not administrative action subject to procedural fairness review. A dissent argued fairness and consultation should apply.
Key take‑aways:

Whilst not about automation, the case marks the boundary between reviewable administrative action and other exercises of public power, a critical reference point for assessing when an automated decision system purports to act as a decision‑maker.

It helps frame the question: when is an automated system exercising delegated decision‑making power such that human oversight and procedural safeguards must apply?

Case 6: Amato v Commonwealth (Australia, 2019)

Facts:
A welfare recipient contested a “robo‑debt” raised by the automated income‑averaging system (see Case 2). By consent orders, the Federal Court declared the averaging method an irrational basis for the debt and ordered the government to repay the amount it had recovered.
Legal issues:

Whether the automated decision process (income averaging) complied with the requirements of the Social Security Act and administrative law doctrines (rationality, legality, procedural fairness).

Accountability of public authorities when algorithmic methods are used for mass decisions.
Outcome:
The Court held that the methodology used (income averaging without verification) was unlawful and the debt notice invalid. The government later settled the related class action, with refunds, compensation and cancelled debts worth well over a billion dollars.
Key take‑aways:

Reinforces that automated decision‑making systems must be lawful, based on authorised statutory power, rational methodology, and human oversight.

Public authorities must ensure that automation does not erode procedural rights or shift unfair burden onto individuals.

Emerging Trends & Analytical Observations

From these cases, several emerging legal and governance trends can be identified in holding public bodies (and, potentially, individuals) responsible for automated decision‑making:

Human decision‑maker requirement remains strong: Many courts insist that there must be a “mental process” by a human decision‑maker even when automation assists or is used. Automated systems alone often fail to satisfy statutory decision‑making requirements. (See Pintarich)

Transparency, reason‑giving and audit trails: Automated decisions in the public sector trigger heightened expectations of transparency. Public bodies must provide understandable reasons, maintain logs of system inputs and outputs, and enable affected individuals to challenge outcomes. (See Schouten, Swedish Ombudsman)

Statutory authority and delegation: For an automated system to issue legally binding decisions, there must be explicit or properly delegated authority for the system to act. Without it, decisions may be invalid. (See discussions in the “iDecide” speech and related literature)

Liability and accountability – not just machines but human/institutional accountability: Machines cannot be “criminally responsible” but the public entity, human officers, policy‑makers, or system designers may be accountable under administrative law, tort, or (in rare cases) criminal law for faulty automated decision systems.

Risk of mass harm from public automation: Large scale automated decision systems (e.g., Robodebt) demonstrate how automation in governance without adequate safeguards can lead to systemic unfairness, and trigger significant liability, inquiry and compensation.

Increasing regulatory attention: Governments, commissions and legislatures are responding by requiring algorithmic impact assessments, transparency registers, and regulation of high‑risk automated decision‑making tools. This points to greater liability exposure and scrutiny in future.

Criminal responsibility still rare in automated governance context: Although public bodies may be held civilly or administratively liable, few cases involve direct criminal prosecution of officials or systems for automated decision‑making. The law is still evolving to address criminal liability when decision‑making is automated.

Implications for Practice

Public agencies deploying automated decision‑making systems should ensure:

clear statutory authority or proper delegation for the system;

human supervision and review of outputs;

transparency and reason‑giving capacity;

audit trails of system inputs/outputs;

mechanisms for individuals to challenge automated decisions;

risk assessment for bias, discrimination and error, especially in high‑stakes decisions (welfare, benefits, debt recovery, immigration). A minimal sketch of how several of these safeguards might be recorded for each decision follows this list.
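As a rough, hypothetical sketch (not any agency’s actual system or schema), the Python below shows how several of the safeguards above might be recorded per decision: the statutory power relied on, the inputs and output of the automated tool, plain‑language reasons, and a human review gate before the decision can issue. All field names and the cited provision are invented.

```python
# Hypothetical per-decision record for an automated decision-making (ADM) tool.
# Field names, the statutory citation and the review rule are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass
class DecisionRecord:
    case_id: str
    statutory_power: str                     # provision said to authorise the decision
    inputs: dict[str, Any]                   # data the system relied on (audit trail)
    system_output: str                       # what the automated tool recommended
    reasons: str                             # plain-language reasons for the outcome
    reviewing_officer: Optional[str] = None  # human who reviewed and adopted the output
    review_notes: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def can_issue(self) -> bool:
        """Only issue decisions that are authorised, reasoned and human-reviewed."""
        return bool(self.statutory_power and self.reasons and self.reviewing_officer)

record = DecisionRecord(
    case_id="2024-000123",
    statutory_power="Hypothetical Act s 99",
    inputs={"reported_income": [0.0, 1800.0], "source": "employer payroll report"},
    system_output="raise debt of $650",
    reasons="Reported fortnightly income exceeded the income-free area in 13 fortnights.",
)

assert not record.can_issue()             # blocked: no human reviewer yet
record.reviewing_officer = "A. Officer"
record.review_notes = "Verified actual fortnightly income against payslips."
assert record.can_issue()                 # now eligible to issue, with an audit trail
```

The design point is that the automated output stays a recommendation until a named officer adopts it with recorded reasons, echoing the “mental process” requirement from Pintarich and the reason‑giving duty highlighted in the Swedish Ombudsman decision.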

Legal practitioners should scrutinise:

whether an automated decision was legally valid;

what human oversight was present;

whether the decision‑maker’s process satisfied procedural fairness;

whether the system’s design, methodology and approval were lawful.

For policy makers:

push for regulation of high‑risk ADM (automated decision‑making) in public governance;

mandate transparency, impact auditing, contestability;

consider criminal/penal sanctions for serious breaches of public governance by automation (this remains nascent).
