Authored by Seeta Bodke, Head of Product - Pacific, LexisNexis® Legal & Professional
We all know the stories: briefs citing fake cases. Submissions quoting phantom judgments. Entire arguments built on citations that don’t exist.
Across Australia, a growing list of cases has shown what happens when generative AI is used without proper oversight, and why lawyers remain cautious about its place in legal practice.
In Luck v Secretary, Services Australia [2025] FCAFC 26, a non-existent case was included in a party’s submissions: a stark reminder that “hallucinated” references can slip into the record when AI outputs aren’t verified.
In Kohls v Ellison No 24-cv-03754 (D Minn 10 January 2025), the United States District Court found that fabricated academic citations had been presented as factual references. The problem wasn’t just the inclusion of false material; it was that no one caught it before it reached the court.
These incidents explain why many lawyers remain wary: trust in legal AI is fragile, and when a tool fails, the risk is professional, not technical.
However, as Justice Needham made clear in her “AI and the Courts in 2025” speech, lawyers must take “full and ultimate responsibility for any submissions made to the court.” She also cautioned that artificial intelligence must not be relied upon unless its outputs are “independently and thoroughly verified.”
AI can assist, but it cannot replace the human judgment, verification, and ethical accountability that define good practice.
AI is here. But if you’re reading this, you already know that. The bigger question is: “Can I trust it not to lead me astray?”
That’s the difference between general-purpose AI (ChatGPT, Claude) and legal-grade AI tools. My job is to make that difference real.
Legal AI must be built on fundamental pillars: verification, explainability, and control.
Generative AI works by producing language based on patterns. Agentic AI, by contrast, plans, interacts, and adapts. Think of it as the difference between having a passive assistant and an active coworker.
Agentic systems can execute logic: break down a brief, gather supporting cases, cross-examine your arguments, and refine output, all under your supervision, as the sketch below illustrates.
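To make that plan-act-check loop concrete, here is a minimal Python sketch. It is purely illustrative: the plan, act, and verify helpers are hypothetical stand-ins, not LexisNexis code, but they show how an agentic system decomposes a task, checks its own output, and still leaves the final decision with a human.

```python
# A minimal, illustrative agentic loop -- hypothetical code, not a LexisNexis API.

def plan(brief: str) -> list[str]:
    """The planner decomposes the brief into sub-tasks."""
    return ["identify the issues", "gather supporting cases",
            "cross-examine the arguments", "refine the draft"]

def act(step: str, draft: str) -> str:
    """Stand-in for a model call that performs one sub-task."""
    return draft + f"- {step}: done\n"

def verify(draft: str) -> list[str]:
    """Stand-in for a checker that flags unverified citations."""
    return []  # a real checker would return concrete problems to fix

def agentic_assistant(brief: str) -> str:
    draft = ""
    for step in plan(brief):           # plan
        draft = act(step, draft)       # act
        for problem in verify(draft):  # check and adapt before moving on
            draft = act(f"fix {problem}", draft)
    return draft                       # a lawyer still reviews before filing

print(agentic_assistant("Submissions on limitation periods"))
```

The point is the shape, not the code: a generative model answers once, while an agentic system plans, acts, checks, and revises, with a human signing off at the end.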
We’re integrating agentic features into tools like Lexis+ AI® with Protégé™, so that working with them feels less like magic and more like working with a trusted legal colleague.
If there’s one question I hear more than any other, it’s this: “Where does my data go when I use AI?”
It’s an important question. And the right one to ask. Because trust in AI doesn’t start with the model; it starts with how your information is protected.
At LexisNexis, privacy, data governance, and security are built into every stage of AI development. We understand that legal professionals handle sensitive client information every day, and any loss of control over that data is unacceptable.
Here’s what that protection looks like in practice:
Local Hosting in Australia: All LexisNexis AI products, including Lexis+ AI, are hosted within Australia. That means your data never leaves Australian jurisdiction, ensuring compliance with local privacy and data-sovereignty requirements.
Enterprise-Grade Encryption: Every piece of data transferred to or from our systems is encrypted in transit and at rest, using the same standards applied in global financial institutions (a generic sketch of what encryption at rest involves follows this list).
Strict Access Controls: Only authorised LexisNexis personnel involved in system maintenance can access limited, de-identified information, and all access is logged, audited, and reviewed.
No Data Re-Use Without Consent: Unlike open, consumer-grade AI tools, your interactions with LexisNexis AI models are never used to train our algorithms or shared externally. Your firm’s data stays private and under your control.
Comprehensive Compliance: Our systems adhere to the Australian Privacy Act 1988 (Cth), the Australian Privacy Principles (APPs), and SOC 2 information-security certification standards.
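For readers who want a concrete picture of “encrypted at rest”, here is a generic Python sketch using AES-256-GCM via the open-source cryptography package. It illustrates the general technique only; it is not a description of how LexisNexis systems are actually implemented, and the key handling shown is deliberately simplified.

```python
# Generic illustration of encryption at rest with AES-256-GCM.
# Requires: pip install cryptography. Not LexisNexis code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in production, held by a key-management service
aesgcm = AESGCM(key)

document = b"Privileged client advice"
nonce = os.urandom(12)                     # must be unique for every encryption
ciphertext = aesgcm.encrypt(nonce, document, None)

# Stored data is unreadable without the key; only a key holder can decrypt.
assert aesgcm.decrypt(nonce, ciphertext, None) == document
```

Encryption in transit works the same way in spirit: the data is unreadable to anyone without the keys, whether it is sitting on a disk or moving across a network.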
These are not just technical settings; they are ethical guardrails that define what “responsible AI” looks like in the legal profession. When you use Lexis+ AI, you know where your data is hosted, who can access it, and that it will never be reused without your consent.
Every lawyer has heard the warning stories, and caution is justified. But when used responsibly, AI can transform how legal professionals research, draft, and deliver advice.
The key is to move from fear to informed confidence: understanding that the right AI tools don’t replace judgment, they strengthen it.
Here’s how responsible, legal-grade AI already supports everyday practice:
Research with Reliability: Legal AI tools allow you to ask natural-language questions and receive responses grounded in verified law. Every citation links back to a trusted LexisNexis source, eliminating the risk of fabricated cases (a simplified sketch of this grounding check follows the list).
Drafting with Precision: They can help generate correspondence, clauses, memos, agreements, or litigation drafts directly within Microsoft Word, saving time while maintaining your firm’s tone and accuracy.
Document and Case Analysis: You can upload submissions or pleadings and instantly identify relevant authorities, gaps, or contradictions, helping you validate your strategy before it reaches the court.
Summarisation and Workflow Efficiency: AI-powered summarisation and cross-referencing features make reviewing large volumes of documents faster and more accurate.
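To illustrate the grounding idea in the first item above, here is a simplified Python sketch of citation verification: every citation in a draft must resolve to a trusted source before it is accepted. The TRUSTED_SOURCES table and the citation pattern are hypothetical stand-ins, not the Lexis+ AI implementation.

```python
# Simplified sketch of grounded citation checking -- hypothetical, not Lexis+ AI code.
import re

TRUSTED_SOURCES = {  # stand-in for a verified case-law database
    "[2025] FCAFC 26": "Luck v Secretary, Services Australia",
}

CITATION_PATTERN = re.compile(r"\[\d{4}\] [A-Z]+ \d+")

def unverified_citations(draft: str) -> list[str]:
    """Return any citations that do not resolve to a trusted source."""
    return [c for c in CITATION_PATTERN.findall(draft)
            if c not in TRUSTED_SOURCES]

draft = "As held in [2025] FCAFC 26 and [2024] XYZ 999, the principle applies."
print(unverified_citations(draft))  # ['[2024] XYZ 999'] -- flagged, never filed
```

The discipline the sketch captures is the one that matters: a citation that cannot be traced to a verified source never makes it into the output.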
But Legal AI does not work in isolation. Human oversight remains at the heart of every process. AI acts as your digital colleague: quick, consistent, and tireless. But you remain the final authority.
For many firms, the journey begins with small, low-risk pilots: letting teams experiment with research and drafting tools, learning how AI supports rather than disrupts workflows. Over time, that familiarity turns caution into confidence.
Trust in AI doesn’t come from hype; it comes from verification, explainability, and control.
Here’s my invitation: explore Lexis+ AI with Protégé for yourself through a free trial. Start small, insist on transparency, and demand oversight from every tool you use.
Legal AI doesn’t have to be risky. When built for lawyers, with lawyers in control, it becomes a force multiplier.