Artificial Intelligence Law in Australia

Australia is actively developing a regulatory framework for artificial intelligence (AI), focusing on a risk-based approach to ensure safety and accountability.

🇦🇺 Australia's AI Regulatory Landscape

1. Voluntary AI Safety Standard

In September 2024, the Australian Government introduced a Voluntary AI Safety Standard comprising ten guidelines aimed at promoting responsible AI development and deployment. These guidelines cover areas such as:

Establishing accountability processes and governance structures.

Implementing risk management strategies.

Ensuring data quality and provenance.

Facilitating transparency and human oversight.

Providing avenues for individuals to challenge AI decisions.

Maintaining records for third-party assessments.

Conducting conformity assessments to demonstrate compliance. (Spruson & Ferguson; K&L Gates; Corrs Chambers Westgarth; Herbert Smith Freehills)

While these guidelines are voluntary, they are designed to complement existing Australian laws and emerging regulatory guidance.

2. Proposed Mandatory Guardrails for High-Risk AI

In addition to the voluntary standard, the government has proposed mandatory guardrails for AI systems deemed high-risk. These proposed measures include:

Establishing and publishing accountability processes.

Implementing risk management processes.

Protecting AI systems and managing data quality.

Testing AI models and systems.

Enabling meaningful human oversight.

Informing end-users about AI-enabled decisions.

Providing processes for individuals to challenge AI outcomes.

Ensuring transparency across the AI supply chain.

Maintaining records for third-party assessments.

Conducting conformity assessments (Corrs Chambers Westgarth).

The government is considering three regulatory approaches to implement these guardrails:

Domain-Specific Approach: Adapting existing regulatory frameworks to include the guardrails.

Framework Approach: Introducing new framework legislation with amendments to existing laws.

Whole-of-Economy Approach: Enacting a new cross-economy AI Act (Mondaq).

3. AI Ethics Principles

Australia's AI Ethics Principles, published in 2019, provide a foundation for responsible AI development. The eight principles are:

Human, societal and environmental wellbeing.

Human-centred values.

Fairness.

Privacy protection and security.

Reliability and safety.

Transparency and explainability.

Contestability.

Accountability. (White & Case)

4. AI in Government

The Australian Government has committed to being an 'exemplar' in the safe and responsible adoption of AI. This commitment is reflected in the 'National Framework for the Assurance of Artificial Intelligence in Government', released in June 2024. The framework aims to ensure that AI systems used in government are transparent, accountable, and aligned with ethical standards. (Ashurst)

5. Regulatory Oversight

Australia's eSafety Commissioner has utilised powers under the Online Safety Act to address risks associated with generative AI. This includes registering mandatory industry codes and standards to mitigate the risk of AI-generated harmful content, such as child exploitation material or pro-terrorism content. (Ashurst)

🧭 Summary

Australia is taking a proactive and structured approach to AI regulation, balancing innovation with safety and accountability. The combination of voluntary standards, proposed mandatory guardrails, and ethical principles aims to foster responsible AI development and deployment across various sectors. (Spruson & Ferguson)
