
Balancing AI and Human Judgment: Ethical Considerations in the Legal Profession

AI has made monumental leaps in its brief history, and it will only become more sophisticated and capable. In what researchers from CodeX – The Stanford Centre for Legal Informatics called a watershed moment, GPT-4 passed the Uniform Bar Exam (UBE), exceeding all prior large language model scores and placing in the 90th percentile. AI is also being put to work through specially trained legal software, streamlining processes like contract review and powering legal chatbots that offer clients quick, accessible legal guidance.

It is a story that will come to define the beginnings of law and artificial intelligence – an outlandish cautionary tale that serves as the perfect example of how wrong things can go when the responsible and ethical use of AI is ignored. The setting: New York. The case: Roberto Mata v. Avianca Airlines. The issue: in a response filed by Mata’s lawyers to the airline’s motion to dismiss Mata’s personal injury suit, several other cases were cited as precedent. None of these cases existed.

A member of Mata’s legal team, Steven Schwartz, had used ChatGPT to conduct his research and was assured by the large language model that the cases were real. Schwartz pleaded ignorance, claiming that he believed the technology to be more like a search engine than a generative language-processing tool, but the damage was done.

Cases like this one – and there will be more as accessible generative AI continues to soar in popularity – show just how essential it is that lawyers understand the technology’s capabilities and limitations.

AI should not be overly demonised, but it should be heavily scrutinised. While feats like passing the UBE are indeed significant achievements, AI still lacks a key element of what makes lawyers and legal professionals indispensable – empathy.

The data on which these algorithms are trained are inherently biased, often discriminating against marginalised groups on the grounds of race, gender, and class. Human beings are inherently biased too, but a particular trust is placed in this technology: it is assumed to be neutral in its judgements and its understanding of humanity, which makes those who use it complacent about its results. So how can we manage these risks? How can we ensure that the work produced through AI is as fair and error-free as possible?

It comes down to the tasks we use it for, the information we feed it, and how we treat the outcomes. Being wary of AI’s inherent biases and critical of the information it produces is key to using it ethically. Data is not foolproof, and a critical and empathetic human eye should be the final judge. As the cautionary tale at the beginning of this article warns, AI will often tell you what you want to hear. It should not replace thorough and fastidious research; it should augment the research process to make it more efficient. It is also essential to use an AI research platform trained on legal data, rather than a general-purpose tool like ChatGPT, as it is far less likely to generate errors.

The legal profession is becoming increasingly demanding, and we are seeing the effects of that in higher rates of burnout and declining mental health. While it is tempting to use AI to ease the burden, we must always exercise caution. AI will revolutionise the way we work, and we must not be afraid to embrace it. However, factual inaccuracies and the discriminatory biases baked into the data on which these algorithms are built can undermine the fundamental ethical foundations on which the legal profession rests. Responsible use of AI means that it should enhance and support our processes rather than replace them. Balancing AI and human judgement in the legal profession ultimately comes down to the humanity and empathy that you bring to the table.

