From Tools to Teammates: Rewiring Legal Work in the Age of Agentic AI

Authored by Lindsay O’Connor, General Manager Content Pacific & Content Strategy APAC

I recently had the opportunity to speak at Legal Innovation & Tech Fest in Sydney on the rise of agentic AI and its impact on legal work. The conversations that followed, both on and off stage, reinforced how quickly this space is evolving and how many firms and legal teams are now grappling with the same question: where is the real value emerging, and what needs to change to capture it?

Over the past 18 months, the legal sector has moved quickly on AI. Top-tier firms and large corporates, in particular, have invested meaningfully, and very few firms haven’t either experimented with or implemented some form of AI tooling within their organisation.

Tools have been deployed, pilots completed, and in many cases there are genuine productivity gains. Drafting is faster. Research is broader. First outputs arrive earlier. And yet, for many organisations, the commercial return is still difficult to articulate. That is not a failure of the technology but a reflection of how it is being used. Most implementations are still based on a simple assumption: that AI is a tool which improves how individual tasks are performed.

That assumption is starting to break down.

What we are now seeing is the emergence of AI that behaves less like a tool and more like a participant in the work itself. Systems that can take an objective, break it into steps, execute those steps, and refine outputs with the relevant matter context persisting across the process.

That shift matters because it changes the unit of value. It is no longer the individual task. It is the workflow.

From tasks to workflows

The first wave of legal AI was prompt-led and task-specific. Lawyers asked questions, generated drafts, or summarised documents. The interaction model was linear. Input, output, review. Agentic AI changes that model. It introduces systems that can manage increasingly complex sequences of activity.

In practice, that can mean conducting research, applying it to drafting, testing outputs against playbooks, and refining with context maintained throughout. This is closer to how legal work actually happens.

Where this is actually working

In private practice, this is most evident in transactional work.

Contract review is evolving from clause extraction to structured analysis against playbooks, with negotiation support and risk assessment capabilities.

In-house, the shift is particularly visible in risk and compliance, with continuous monitoring and regulatory impact analysis.

In both cases, value comes from the connected workflows, not isolated tasks.

Delegation is changing in practice, not principle

Delegation has always been central to legal practice, but firms and legal teams are now starting to delegate elements of work to AI systems.

Where this works well, organisations define what to delegate, how to scope it, and where review happens.

Delegation now needs to become more structured and explicit. The question is less about whether AI should be used, and more about how the work should be divided between humans and AI.

Why efficiency gains are not enough

Efficiency alone does not necessarily deliver commercial value. Untrusted outputs need to be reworked, and that rework often shifts to more senior lawyers, diluting the benefit.

At the same time, client expectations around billing are changing, and the value lawyers provide is shifting toward judgement and insight rather than simply time spent.

From capability to control

At this point, the capability of the technology is no longer the constraint. Control is.

Clear guardrails, embedded expert oversight, trusted content, and auditability are essential. This is where the combination of trusted content and workflow design becomes critical to success.

Bringing capability and control together

If agentic AI is fundamentally about delegation, then the real question becomes: what makes that delegation safe, reliable, and professionally defensible? There are three key areas to consider:

  1. Governance frameworks – clearly defined use cases for AI, boundaries around how it can and cannot be used, and clarity on who remains accountable for outputs
  2. Trusted, traceable, explainable outputs – not all AI is equal, particularly in a legal context. At a minimum, outputs need to be grounded in authoritative content, traceable to source, and explainable in a way a lawyer can defend
  3. Humans in control – AI can accelerate work, but it doesn’t replace professional responsibility. Control needs to be designed into the workflow:
  • where review happens
  • what gets checked
  • and how consistency is maintained

Platforms like Protégé are now evolving to embed AI within legal workflows, grounded in authoritative content.

Meanwhile, partnerships with providers such as Anthropic will enable capabilities like Claude Co-Work to be included within secure environments, such as Lexis Protégé, later in 2026. These developments will allow firms to access advanced AI while maintaining governance, traceability, and consistency.

What will matter from here

Agentic AI is an operating model shift.

The organisations that succeed will be deliberate about what they delegate, what they trust, and how they maintain control.

The commercial advantage will come from combining capability with control to deliver scalable and defensible legal work.

This is where solutions like Protégé Workflows are beginning to play a critical role, embedding AI within structured, governed workflows that bring together authoritative content, matter context and human oversight. In doing so, they enable legal teams to move beyond isolated use cases and operationalise AI in a way that is both practical and professionally defensible.