Safe and responsible AI in Australia: The government’s interim response

On 17 January 2024, the Australian Government published its interim response to the Safe and Responsible AI in Australia discussion paper. The interim response is available in full here.

A preliminary analysis of submissions found at least 10 legislative frameworks that may require amendments to respond to applications of AI. 

The Australian Government committed to five principles to guide its interim response:

  • Using a risk-based framework to support the safe use of AI and prevent harm occurring from AI
  • Avoiding unnecessary or disproportionate burdens for businesses, the community and regulators
  • Being open in its engagement and working with experts from across Australia in developing its approach
  • Ensuring consistency with the Bletchley Declaration, and leveraging Australia’s strong foundations and domestic capabilities to support global action to address AI risks
  • Placing people and communities at the centre when developing and implementing its regulatory approaches 


Many AI risks outlined in submissions were well-known before recent advances in generative AI. These include:

  • inaccuracies in model inputs and outputs 
  • biased or poor-quality model training data 
  • model drift over time 
  • discriminatory or biased outputs 
  • a lack of transparency about how and when AI systems are being used. 

The government is already undertaking work to strengthen existing laws in areas that will help to address the known harms of AI. This includes:

  • work by the AI in Government Taskforce to support the safe and responsible deployment of AI in the Australian Public Service, including by developing policies, standards and guidance
  • related work under the Data and Digital Ministers Meeting to develop a nationally consistent approach to the safe and ethical use of AI by government
  • reforms to Australia’s privacy laws, including an in-principle agreement to require non-government entities to conduct a privacy impact assessment for activities with high privacy risks, so that those risks can be identified and managed, minimised or eliminated. Corresponding proposals agreed in response to the Robodebt Royal Commission report focus on increasing the transparency and integrity of automated decision-making that uses personal information
  • the registration under Australia’s online safety laws of new mandatory industry codes, and the development of two mandatory industry standards, which will require industry to provide appropriate community safeguards against certain types of illegal and harmful online content (including child sexual abuse material), including content generated and spread by AI
  • cyber security considerations consistent with the Cyber Security Strategy, as well as work underway in the Australian Signals Directorate through its Ethical AI Framework
  • developing new laws to give the Australian Communications and Media Authority powers to combat online misinformation and disinformation
  • an independent statutory review of the Online Safety Act 2021 to ensure the legislative framework remains responsive to online harms
  • a regulatory framework for automated vehicles in Australia, including interactions with work health and safety laws
  • ongoing research and consultation by the Attorney-General’s Department and IP Australia, including through the AI Working Group of the IP Policy Group, on the implications of AI for copyright and broader IP law
  • implementing privacy law reforms
  • strengthening Australia’s competition and consumer laws to address issues posed by digital platforms
  • education ministers’ agreement on an Australian Framework for Generative AI in Schools to guide the responsible and ethical use of generative AI tools
  • ensuring the security of AI tools, for example through security-by-design principles, as part of the government’s work on the Cyber Security Strategy.

The Department of Industry, Science and Resources is establishing an interim expert advisory group to support the government’s development of options for AI guardrails. 

The government will continue to work with states and territories to consider opportunities to further strengthen regulatory frameworks. 

The government has stated that it will consider:

  • mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings
  • possible legislative vehicles for introducing mandatory safety guardrails for AI in high-risk settings, in close consultation with industry and the community
  • specific obligations for the development, deployment and use of frontier or general-purpose models
  • steps it can take to support the development and diffusion of AI technologies across the Australian economy, including the need for an AI Investment Plan.

While the government considers mandatory guardrails for AI development and use, and next steps, it is taking immediate actions, including:

  • requesting the National AI Centre to work with industry to develop a voluntary AI Safety Standard: a practical, best-practice toolkit to help ensure that AI systems being developed or deployed are safe and secure
  • working with industry to develop options for voluntary labelling and watermarking of AI-generated materials
  • establishing an expert advisory group to support the development of options for mandatory guardrails.

Australia is closely monitoring how other countries are responding to the challenges of AI, including initial efforts in the EU, US and Canada. Building on its engagement at the UK AI Safety Summit in November 2023, the government will continue to work with other countries to shape international efforts in this area. The interim response indicates that any new laws would need to be tailored to Australia. 

For more information, please contact Hawker Britton’s Managing Director Simon Banks on +61 419 638 587. 

Download the Paper
