Authored by Lindsay O’Connor, General Manager Content Pacific & Content Strategy APAC
I recently had the opportunity to speak at Legal Innovation & Tech Fest in Sydney on the rise of agentic AI and its impact on legal work. The conversations that followed, both on and off stage, reinforced how quickly this space is evolving and how many firms and legal teams are now grappling with the same question: where is the real value emerging, and what needs to change to capture it?
Over the past 18 months, the legal sector has moved quickly on AI. Top-tier firms and large corporates, in particular, have invested meaningfully, and very few firms have not at least experimented with or implemented some form of AI tooling within their organisation.
Tools have been deployed, pilots completed, and in many cases there are genuine productivity gains. Drafting is faster. Research is broader. First outputs arrive earlier. And yet, for many organisations, the commercial return is still difficult to articulate. That is not a failure of the technology but a reflection of how it is being used. Most implementations are still based on a simple assumption: that AI is a tool which improves how individual tasks are performed.
That assumption is starting to break down.
What we are now seeing is the emergence of AI that behaves less like a tool and more like a participant in the work itself. Systems that can take an objective, break it into steps, execute those steps, and refine outputs with the relevant matter context persisting across the process.
That shift matters because it changes the unit of value. It is no longer the individual task. It is the workflow.
The first wave of legal AI was prompt-led and task-specific. Lawyers asked questions, generated drafts, or summarised documents. The interaction model was linear. Input, output, review. Agentic AI changes that model. It introduces systems that can manage increasingly complex sequences of activity.
In practice, that can mean conducting research, applying it to drafting, testing outputs against playbooks, and refining with context maintained. This is closer to how legal work actually happens in practice.
In private practice, this is most evident in transactional work.
Contract review is evolving from clause extraction to structured analysis against playbooks, with negotiation support and risk assessment capabilities.
In-house, the shift is particularly visible in risk and compliance, with continuous monitoring and regulatory impact analysis.
In both cases, value comes from the connected workflows, not isolated tasks.
Delegation has always been central to legal practice, but firms and legal teams are now starting to delegate elements of work to AI systems.
Where this works well, organisations define what to delegate, how to scope it, and where review happens.
Delegation now needs to become more structured and explicit as the question becomes less about whether AI should be used, and more about how the work should be structured across humans and AI.
Efficiency alone does not necessarily deliver commercial value. Untrusted outputs need to be reworked, and that rework often shifts to more senior lawyers, diluting the benefits.
At the same time, client expectations around billing are changing, and the value lawyers provide is shifting toward judgement and insight rather than simply time spent.
At this point, the capability of the technology is no longer the constraint. Control is.
Clear guardrails, embedded expert oversight, trusted content, and auditability are essential. This is where the combination of trusted content and workflow design becomes critical to success.
If agentic AI is fundamentally about delegation, then the real question becomes: what makes that delegation safe, reliable, and professionally defensible?
Platforms like Protégé are now evolving to embed AI within legal workflows, grounded in authoritative content.
Meanwhile, partnerships with providers such as Anthropic will enable capabilities like Claude Co-Work to be included within secure environments, such as Lexis Protégé, later in 2026. These developments will allow firms to access advanced AI while maintaining governance, traceability, and consistency.
Agentic AI is an operating model shift.
The organisations that succeed will be deliberate about what they delegate, what they trust, and how they maintain control.
The commercial advantage will come from combining capability with control to deliver scalable and defensible legal work.
This is where solutions like Protégé Workflows are beginning to play a critical role, embedding AI within structured, governed workflows that bring together authoritative content, matter context and human oversight. In doing so, they enable legal teams to move beyond isolated use cases and operationalise AI in a way that is both practical and professionally defensible.