Australia’s Plan to Regulate High-Risk AI


In mid-January 2024, Australia’s Minister for Industry and Science, Ed Husic, unveiled the government’s response to the Safe and Responsible AI in Australia consultation. The response, shaped by feedback from last year’s consultation on Australian AI regulation, draws on more than 500 submissions from individuals and businesses. These submissions expressed enthusiasm for AI tools’ potential while also highlighting concerns about potential risks and Australians’ expectations for regulatory safeguards to prevent harm.

The potential of AI systems and applications to enhance well-being and quality of life and to stimulate economic growth is widely acknowledged. Projections suggest that integrating AI and automation could add between $170 billion and $600 billion annually to Australia’s GDP by 2030. Realizing these benefits, however, requires fostering trust to encourage the adoption of the technology, and that trust is currently lacking.

The Main Concerns Expressed in the Consultation

The submissions concerning risks associated with AI highlight both familiar challenges and new, emerging concerns. Traditional issues like bias, discrimination, and scams remain prevalent, often due to inadequate mitigation efforts. However, there is growing apprehension surrounding novel risks that current regulations do not fully address. The rapid development and deployment of AI raise concerns about the need for enhanced testing, transparency, and oversight to mitigate potential harms. Public unease is exacerbated by insufficient testing protocols and a lack of detailed understanding of AI system operations.

The risks Australians raised concerns about fall broadly into five categories: technical risks, unpredictability and opacity, domain-specific risks, systemic risks, and unforeseen risks. Technical risks include compromised outputs due to design inaccuracies, non-representative data, or biases in training data, leading to unfair outcomes, especially in critical areas like healthcare. Unpredictability and opacity stem from opaque AI systems (the black-box problem), which make it difficult to identify harms, predict errors, establish accountability, and explain outcomes. Domain-specific risks involve AI exacerbating existing harms, such as the spread of online abuse through deepfake pornography and the undermining of social cohesion through misinformation.

Systemic risks are also highlighted, including concerns about highly capable and potentially dangerous frontier models and about the broad accessibility and usability of generative AI models, which can lead to unprecedented harm at an unprecedented pace. Additionally, the rapid evolution of AI presents unforeseen risks, challenging regulatory frameworks to remain agile without stifling innovation. Submissions stress the need for flexible regulatory approaches that respond to emerging risks, requiring governments to address evolving challenges swiftly while keeping regulatory frameworks future-proof and adaptable.

Submissions from businesses called for amendments to existing laws and for regulatory guidance, whereas individual consumers were more likely to prefer a dedicated AI act to tackle these issues. However, there was broad agreement across submissions that voluntary guardrails are insufficient and that mandatory guardrails should be targeted primarily at high-risk AI applications so as to facilitate innovation in low-risk areas.

It was emphasized that any regulatory framework should align with international approaches and remain adaptable. Additionally, submissions highlighted the importance of non-regulatory measures, such as education and capacity building, to foster public trust and encourage adoption of AI technologies.

The Government’s Response to the Consultation

The government notes a lack of public trust in the safe and responsible design, development, deployment, and use of AI systems, which hampers business adoption and public acceptance. To address this, it indicates that it will take a risk-based approach, focusing on additional safeguards to mitigate potential harms in high-risk scenarios such as workplace discrimination, the justice system, surveillance, and self-driving cars. The government has not yet decided whether this will require only amending existing laws or a comprehensive AI act. Additionally, it intends to establish a temporary expert advisory group to aid in developing these protective measures.

The government recognizes that mandatory safeguards for the development and deployment of AI are necessary in high-risk settings because harms can, at worst, be irreversible; there may be only one chance to get this right and to ensure that harms are prevented from occurring in the first place. These mandatory safeguards are intended to be complemented by voluntary safety standards, in line with international approaches.

The interim response also highlights ongoing efforts to strengthen existing laws and introduce new ones, for example the privacy law reform, the review of the Online Safety Act 2021, and legislative efforts to address misinformation and disinformation.

At the same time, the sentiment is that in low-risk settings AI should be allowed to flourish largely unimpeded, for example to filter spam emails or to optimize business operations.

Concerns

The focus in Australia, as elsewhere, seems to be on known risks, without an eye towards smarter-than-human AI that may be achieved in the future and whose consequences would be more far-reaching than those contemplated in mainstream conversation. The approach is comparable to common consumer protection or environmental law. Strong arguments can be made that risk regulation alone is not an effective approach to AI because it assumes that a bit of tweaking will suffice to govern the technology properly. As Margot E. Kaminski puts it in her article The Developing Law of AI: A Turn To Risk Regulation: “The deontological harms raised by the use of AI systems—to autonomy, dignity, privacy, equality, and other human rights—are not inherently well-suited to a risk regulation framework.” The question that should be asked before we assume that we can prevent harms from occurring is whether the technology should be developed or used in the way that it is today.

For example, the use of vast amounts of data for the development of LLMs poses significant threats to the personal information included in that data. There is often little visibility into what personal information these data sets contain, which makes it very difficult for downstream users to deploy the models confidently, as they are inevitably exposed to risks of non-compliance with data protection requirements.

Possible Solutions Addressing Privacy Concerns

A little-pursued avenue is to filter out personal information from data sets before the model is trained on them. This does not have to be as onerous as it seems. With the support of Private AI’s technology, businesses could meet, for example, the data minimization requirement, which demands that only the personal data necessary to achieve the stipulated purpose be used. In the case of training an AI model, depending on the specific use case, personal information may not be required at all and would hence be prohibited from being included in the data set.

Private AI’s technology can identify and report on personal identifiers in large unstructured data sets and replace them with synthetic data or placeholders. For many use cases, this approach, which relies on context-aware algorithms trained by data experts in multiple languages, preserves data utility while maximizing data privacy. Try it on your own data here.
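To make the idea concrete, here is a minimal Python sketch of what such a pre-training redaction step could look like. The detect_entities function below is a toy regex-based stand-in, not Private AI’s actual API; the entity format, placeholder style, and file layout are illustrative assumptions only.

```python
# Sketch: strip personal identifiers from a training corpus before model training.
# detect_entities() is a toy stand-in; a production system would call a
# context-aware de-identification service (e.g., Private AI) instead.

import re
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Entity:
    start: int   # character offset where the identifier begins
    end: int     # character offset where it ends (exclusive)
    label: str   # e.g. "EMAIL", "PHONE_NUMBER"


def detect_entities(text: str) -> list[Entity]:
    """Toy detector using regexes for emails and phone-like numbers only."""
    patterns = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
        "PHONE_NUMBER": r"\+?\d[\d\s()-]{7,}\d",
    }
    found = []
    for label, pattern in patterns.items():
        for match in re.finditer(pattern, text):
            found.append(Entity(match.start(), match.end(), label))
    return found


def redact(text: str, entities: list[Entity]) -> str:
    """Replace each detected identifier with a labelled placeholder."""
    # Apply replacements right-to-left so earlier offsets stay valid.
    for ent in sorted(entities, key=lambda e: e.start, reverse=True):
        text = text[:ent.start] + f"[{ent.label}]" + text[ent.end:]
    return text


def redact_corpus(src_dir: str, dst_dir: str) -> None:
    """Write a redacted copy of every .txt file in src_dir to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.txt"):
        raw = path.read_text(encoding="utf-8")
        clean = redact(raw, detect_entities(raw))
        (out / path.name).write_text(clean, encoding="utf-8")


if __name__ == "__main__":
    # Train only on the redacted copy (corpus/redacted), never on corpus/raw.
    redact_corpus("corpus/raw", "corpus/redacted")
```

Whether placeholders like [EMAIL] or synthetic replacements are the better choice depends on how much utility the downstream use case needs to retain; either way, the raw identifiers never reach the training pipeline.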
