16 Jun 2025

Agentic AI in Australia: Legal and Transparent Solutions for Privacy Risks

Privacy Awareness Week, June 16-22, 2025 – As Australia observes Privacy Awareness Week, a critical conversation is taking shape around the rapid evolution of Artificial Intelligence (AI), specifically agentic AI systems, and their profound implications for individual privacy. With new transparency requirements under the Privacy Act set to commence in December 2026, Australian businesses and the government alike are grappling with how to ethically and compliantly deploy these increasingly autonomous technologies.

Understanding Agentic AI: How Autonomous Systems Are Evolving

Agentic AI represents a significant leap forward in AI automation. At its core, agentic AI is composed of AI agents – specialised software components designed to make decisions and operate cooperatively or independently to achieve defined system objectives. Think of them as highly capable, independent assistants.

Unlike traditional software or even earlier generations of AI that follow rigid, predetermined pathways, agentic AI systems are characterised by their capacity for independent decision-making and proactive action, often without direct human intervention at every step.

They don't just process data; they're given a set of capabilities and designed to autonomously select and combine actions, manage entire workflows, adapt to changing circumstances, and even initiate communications or transactions on their own.
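The distinction can be illustrated with a minimal sketch. The agent below is hypothetical and deliberately simple: rather than following a rigid, predetermined script, it repeatedly inspects its state, selects the next applicable action from a set of capabilities, and stops when its objective is met. Real agentic systems typically use an LLM or planner for action selection; the function names here are invented for illustration only.

```python
# Minimal illustrative agent loop (hypothetical names and policy).

def fetch_balance(state):
    state["balance"] = 120          # stand-in for an external API call
    return state

def request_approval(state):
    state["approved"] = state["balance"] <= 500   # stand-in policy check
    return state

def execute_payment(state):
    state["paid"] = True
    return state

CAPABILITIES = [
    # (precondition, action) pairs: the agent autonomously picks the
    # first action whose precondition currently holds.
    (lambda s: "balance" not in s, fetch_balance),
    (lambda s: "approved" not in s, request_approval),
    (lambda s: s.get("approved") and not s.get("paid"), execute_payment),
]

def run_agent(state, max_steps=10):
    """Select and combine actions until the goal (payment made) is reached."""
    for _ in range(max_steps):
        action = next((a for cond, a in CAPABILITIES if cond(state)), None)
        if action is None:
            break    # no applicable action: goal reached or workflow blocked
        state = action(state)
    return state

print(run_agent({}))  # {'balance': 120, 'approved': True, 'paid': True}
```

Even in this toy form, the pattern shows why agentic behaviour is harder to explain after the fact: the sequence of actions is determined at run time by the state the agent encounters, not fixed in advance.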

In the near future, agentic AI systems could independently interact with other systems and tools, including robotic automation, to perform actions and determine resolutions without any human oversight.

While the prospect of complete autonomy in AI systems offers substantial potential, it also introduces considerable risk. The absence of a critical layer of human oversight, particularly in the realm of ethical judgment, can be deeply problematic. Without human intervention, autonomous systems may produce outcomes that deviate from their intended objectives, potentially resulting in harmful or unintended consequences.

The Productivity Impact of AI on Australian Industries

The transformative potential of agentic AI spans numerous industries. From enhancing customer service virtual assistants to sophisticated fraud detection and cybersecurity tools, agentic AI is already making significant inroads across Australian sectors, including banking and finance, healthcare, and mining, promising unprecedented efficiency and personalised experiences.

It's not just large corporations embracing AI. Senator the Hon Tim Ayres, the new Minister for Industry and Innovation and Minister for Science, recently released the Australian AI Adoption Tracker Report, which surveys Australian SMEs' views on AI. The report revealed that over 40% of Australia's SMEs are adopting AI, a 5% increase from the previous quarter. Among these businesses, 22% reported improved decision-making speed, and nearly 20% highlighted increased productivity thanks to AI.

The Australian government also sees AI as fundamental to national productivity. In his keynote address to the Australian Financial Review's (AFR) recent AI summit, Senator Ayres emphasised that AI and the digital economy are core to Australia’s national interest and foundational to future productivity growth.


The Privacy Challenge of Agentic AI: Building Trust in Australia

While both business and government are keen to increase their adoption of AI, including agentic AI with its promise of significant productivity gains, there is a parallel recognition that these technologies may intensify issues already associated with AI: discrimination, bias, copyright infringement, potential privacy breaches, and a lack of transparency and explainability.

The central challenge lies in delivering on AI's promise and the heightened autonomy of agentic AI, all while ensuring these systems are trustworthy, particularly considering Australia's evolving privacy legislation.

The Privacy and Other Legislation Amendment Act 2024 (POLA), which received Royal Assent in December 2024, introduced crucial transparency obligations for automated decision-making. These provisions are a significant step towards building trust in AI systems by providing individuals with information about how AI systems make decisions and the processes behind them.

From December 10, 2026, entities subject to the Privacy Act will be required to disclose in their privacy policies:

  • the kinds of personal information used by computer programs involved in decisions that could significantly affect individuals' rights or interests; and
  • the kinds of decisions made by computer programs (whether solely by the program or with substantial human assistance) that have such an effect.

This means organisations will need to meticulously document and understand their use of automated decision-making throughout their operations, including the information consumed by these systems, and develop a clear strategy to meet these Privacy Act requirements.

Implementing these measures will present a significant hurdle for organisations deploying agentic AI. The "black box" nature of some advanced AI models, combined with their dynamic and self-learning capabilities, can make it challenging to fully explain how a particular decision was reached or what specific information, including personal information, influenced an autonomous action.

When an agentic AI system learns and adapts in real time, its decision-making processes can become fluid and less predictable, complicating the ability to provide clear, upfront disclosures. The very design of agentic AI can inherently limit its ability to fully explain its actions and provide complete insight into its behaviours.

How Australian Businesses Can Prepare for Transparent AI Systems

How can businesses truly be transparent about the operation of a system designed to evolve and make independent choices? This demands a shift from static disclosures to a more dynamic approach to transparency.

On a practical level, and in preparation for the December 2026 deadline, businesses deploying agentic AI should consider the following key actions to ensure compliance:

  • Conduct Privacy Impact Assessments (PIAs): Regularly conduct PIAs to identify and mitigate privacy risks associated with automated decision-making. PIAs are crucial for ensuring AI systems comply with privacy principles and address potential biases and errors from the outset.
  • Implement Privacy by Design Principles: Integrate privacy and ethical considerations into the very earliest stages of AI system development, rather than treating them as an afterthought.
  • Update Privacy Policies: Ensure your privacy policies clearly include information about automated decision-making processes that significantly affect individuals' rights or interests. This involves detailing the types of personal information used and the nature of the decisions made by the AI systems.
  • Implement Proactive Transparency Measures: Provide clear and accessible information about how automated decision-making systems operate. This includes explaining the logic behind decisions and the data used, which helps build trust and accountability. Consider implementing detailed system logs and real-time monitoring capabilities.
  • Ensure Robust Human Oversight: Establish a "human in the loop" who can monitor and intervene in the operation of AI systems to prevent harm. Clearly define the boundaries of an AI agent's authority, establish human checkpoints for critical decisions, and implement strict access controls to data.
  • Implement Comprehensive Information and Data Governance Practices: Establish robust practices to manage the data used in AI systems, particularly any personal information.
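Two of the controls above, detailed system logs and human checkpoints for critical decisions, can be sketched in code. The example below is a simplified, hypothetical pattern, not a compliance implementation: every proposed agent action is written to a structured audit log (recording the kinds of information used, in the spirit of the coming Privacy Act disclosures), and any action designated high-impact is held until a human reviewer approves it. The action names and policy set are invented for illustration.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Hypothetical policy: actions that could significantly affect an
# individual's rights or interests require a human checkpoint.
HIGH_IMPACT_ACTIONS = {"decline_application", "close_account"}

def record_decision(action, inputs, outcome):
    """Log the action, the kinds of information used, and the outcome."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "inputs_used": sorted(inputs),
        "outcome": outcome,
    }))

def execute_with_oversight(action, inputs, approver=None):
    """Run an agent action, pausing at a human checkpoint when required."""
    if action in HIGH_IMPACT_ACTIONS:
        if approver is None or not approver(action, inputs):
            record_decision(action, inputs, "blocked_pending_human_review")
            return "blocked"
    record_decision(action, inputs, "executed")
    return "executed"

# A human reviewer (here simulated by a callback) withholds approval,
# so the high-impact action is blocked and the block is logged.
result = execute_with_oversight(
    "decline_application",
    inputs={"credit_history", "income"},
    approver=lambda action, inputs: False,
)
print(result)  # blocked
```

The design point is that the checkpoint and the log live outside the AI agent itself: the boundaries of the agent's authority are enforced by surrounding code, so they hold even when the agent's internal reasoning is opaque.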


Guiding Agentic AI: The Role of AI Governance in Australia

Businesses can also draw insights from the Australian government's ongoing efforts to regulate AI and promote AI governance. While the government has not yet committed to a specific timeframe for developing a dedicated Australian AI Act (similar to the EU's AI Act), it has taken a number of key steps towards developing a comprehensive AI governance and regulatory framework for both government and business.

In recognition of both the transformative potential and inherent risks of AI, the government is actively implementing an AI governance framework piece by piece. Key measures include:

  • Australia’s AI Ethics Principles: Released on November 7, 2019, and updated on October 11, 2024, these principles guide businesses and government in the responsible design, development, and implementation of AI systems, emphasising human-centred values, fairness, transparency, and accountability.
  • National Framework for the Assurance of AI in Government: Agreed upon on June 21, 2024, this framework establishes key principles for AI assurance, aligning government AI use with Australia’s AI Ethics Principles and setting out practices for applying those principles; by extension, it also offers a guide for responsible AI development and deployment by businesses.
  • Policy for the Responsible Use of AI in Government: Effective September 1, 2024, this policy sets mandatory requirements for government agencies regarding accountability, transparency, and staff training.
  • Voluntary AI Safety Standard: Released on September 5, 2024, this provides ten voluntary guardrails and guidance for businesses and organisations on the safe and responsible development and deployment of AI systems in Australia.
  • Mandatory AI Guardrails Proposal Paper: This paper outlines ten mandatory requirements for AI in high-risk settings, aiming to ensure AI transparency and accountability, mitigate privacy risks in AI-driven decision-making, and establish best practices for AI governance.

Using OAIC Guidance to Navigate AI and Privacy Compliance

In addition, businesses deploying AI should consult the guidance provided by the Office of the Australian Information Commissioner (OAIC). The OAIC has published two crucial guidance papers on privacy in the context of AI:

  1. Guidance on privacy and the use of commercially available AI products.
  2. Guidance on privacy and developing and training generative AI models. 

Conclusion: Building Trust and Transparency in Australia’s AI Future

As we approach the December 2026 deadline for the Privacy Act's transparency requirements, the interplay between the rapid advancements in agentic AI, its increasing uptake by business and government, and Australia's evolving regulatory landscape will be a defining challenge.

While agentic AI promises significant innovation and efficiency, its autonomous nature demands that businesses and governments take a proactive and comprehensive approach to both privacy and AI governance.

For Australian businesses and government agencies alike, embracing ethical AI principles, investing in robust privacy and transparency mechanisms, and fostering a culture of accountability will be paramount to building trust and ensuring that the autonomous age of AI truly serves the best interests of all Australians.