

What Lawyers Need to Know about Deepfake Technology

February 02, 2025 (5 min read)

By: Bijan Ghom, Saxton & Stump

This article addresses existing deepfake technology and covers topics such as the available platforms to both create and detect deepfakes and the best practices for dealing with deepfakes in your case. The following is a summary of a more comprehensive practice note included in Lexis Practical Guidance.

This summary discusses the implications of deepfake technology for lawyers, emphasizing the need for legal professionals to understand and address the challenges posed by deepfakes in legal proceedings. Deepfakes are sophisticated forgeries that can convincingly mimic real people and events, making them difficult to detect and potentially impactful in court cases.

Deepfake Creation Tools

The term “deepfake” typically refers to images, videos, or audio of real people that are edited or manufactured using artificial intelligence. Platforms used to create video and image deepfakes include Synthesia, Zao, DeepFaceLab, FaceApp, and Avatarify; for audio deepfakes, examples include Descript, Resemble AI, and VoiceAI. These tools are often marketed as user-friendly and require minimal technical skill, posing a risk of misuse in legal contexts.

Deepfake Detection Tools

Understanding the basic mechanics underlying deepfake technology is the first step to defending against deepfakes. As in any scientific or specialized area, a basic understanding of the landscape will allow you to ask the right questions of the parties, witnesses, and experts, and then use the responsive information favorably.

Detection methods are discussed in more depth in the full practice note and include analyzing flaws in deepfakes, examining metadata, and using advanced technologies like deep learning, biometric analysis, and digital forensic techniques. Examples of detection technologies include Sensity AI, FaceForensics++, and Intel's FakeCatcher.
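To make the metadata step concrete, the short Python sketch below is offered as an illustration rather than a step taken from the underlying practice note. It shows one way a reviewer might dump the EXIF metadata embedded in an image using the Pillow library; missing or stripped metadata does not prove manipulation, but it is the kind of flag worth raising with a forensic expert. The file name is a hypothetical placeholder.

# Illustrative sketch only (assumes the Pillow library is installed): dump the
# EXIF metadata embedded in an image so gaps or anomalies (missing camera
# make/model, implausible timestamps) can be flagged for forensic follow-up.
from PIL import Image, ExifTags

def dump_exif(path: str) -> dict:
    """Return the image's EXIF tags as a {tag_name: value} dictionary."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = dump_exif("questioned_exhibit.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found; consider forensic follow-up.")
    for name, value in tags.items():
        print(f"{name}: {value}")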

Best Practices for Evidence Collection and Discovery

Legal professionals need to be vigilant in collecting and preserving evidence to counter deepfakes. This includes monitoring metadata, identifying visual inconsistencies, and securing corroborating evidence. Lawyers should also be prepared to work with forensic experts and use detection tools to assess questionable evidence.
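As one concrete illustration of the preservation point, offered as a sketch rather than a step prescribed by the underlying practice note, the Python snippet below records a SHA-256 hash of an evidence file at the time of collection; re-hashing the file later and comparing digests is a standard digital forensics way to demonstrate that it has not been altered. The file name is a hypothetical placeholder.

# Illustrative sketch only: compute and record a SHA-256 hash when evidence is
# collected so any later alteration can be detected by re-hashing the file.
import hashlib
from datetime import datetime, timezone

def hash_evidence(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of the file at path, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = "questioned_video.mp4"  # hypothetical file name
    timestamp = datetime.now(timezone.utc).isoformat()
    print(f"{timestamp}  {path}  SHA-256: {hash_evidence(path)}")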

The above information is a summary of a more comprehensive article included in Practical Guidance. Customers may view the complete article by following this link.

Not yet a Practical Guidance subscriber? Sign up for a free trial to view this complete article and other current AI coverage and guidance.


Bijan Ghom is senior counsel at Saxton & Stump. He handles commercial litigation, business and corporate law, intellectual property, and trusts and estates litigation. A former business owner with a master’s degree in business administration, he continually works with business clients to assist with litigation and intellectual property. He brings his experience founding and selling a number of businesses to advising his clients on protecting and monetizing intellectual property assets. He is also a strategist with Palq IP, an IP strategy firm and strategic partner of Saxton & Stump.


Related Content

For a full listing of current practical guidance materials on generative artificial intelligence (AI), ChatGPT, and similar tools that is organized by practice area and updated with new developments, see 

GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RESOURCE KIT

For key resources that provide step-by-step guidance on fundamental civil litigation tasks that an attorney will typically work on when litigating a case in federal court, see

CIVIL LITIGATION FUNDAMENTALS RESOURCE KIT (FEDERAL)

For a discussion on how to make pitches for new litigation business, including preparing the presentation, effective communication techniques, and following up after the pitch, see

LITIGATION BUSINESS PITCHES: FIVE TIPS

For an examination of the ethical issues litigators must be aware of when considering using generative AI technology in their practices, including the many ways litigators may use AI and the specific professional ethics rules that apply, see

AI AND LEGAL ETHICS: WHAT LAWYERS NEED TO KNOW

For an analysis of the primary issues relating to the use of ChatGPT or other chatbot AI programs in the practice of law, see

LAWYERS AND ChatGPT: BEST PRACTICES


For a review of the expectations and opportunities for a senior litigation associate, such as leading case teams, interacting with clients, developing a niche, and strategies for success, see

PROFESSIONAL DEVELOPMENT: LIFE AS A SENIOR LITIGATION ASSOCIATE

For guidance on the utilization of AI to measure an outside litigation counsel’s performance and using new tools to expedite and enhance the delivery of legal services, see

AI AND THE EVALUATION OF OUTSIDE COUNSEL

For an overview of the integration of AI into law firm management and performance, see

HOW TO USE AI TO MANAGE THE ATTORNEY-CLIENT RELATIONSHIP

For practical tips for an attorney relocating to another firm, including what to consider when considering a lateral move and how to navigate your ethical responsibilities, see

LATERAL MOVES FOR ATTORNEYS: WHAT YOU NEED TO KNOW

For information on managing a litigation client’s expectations, including the initial client meeting, early case assessment, engagement agreements, budget and billing, communicating with the client during the litigation, and post-litigation reviews, see

MANAGING CLIENT EXPECTATIONS IN LITIGATION

For assistance in drafting client memos and emails, covering such topics as effective drafting techniques, preserving privileges, and maintaining the security of your client communications, see

DRAFTING CLIENT MEMOS AND EMAILS

For a look at the primary and emerging legal issues related to AI, see

ARTIFICIAL INTELLIGENCE KEY LEGAL ISSUES

Sources

Scott v. Harris, 550 U.S. 372, 378–81 (2007), finding summary judgment should be granted when a video shows the plaintiff's "version of events is so utterly discredited by the [video evidence] that no reasonable jury could have believed him."

United States v. Watson, 483 F.3d 828 (D.C. Cir. 2007).

Karen Martin Campbell, Roll Tape—Admissibility of Videotape Evidence in the Courtroom, 26 U. Mem. L. Rev. 1445, 1447 (1996). Studies show that jurors are 650% more likely to retain information when they hear oral testimony coupled with video testimony than those who only hear oral testimony.

About Synthesia - Read our story here.

Download ZAO.

DeepFaceLab 2.0.

About Us · FaceApp.

Avatarify - Bring your photos to life.

About Descript.

Resemble AI - The All-in-One AI Voice Platform.

We’re Building the Future of Voice Technology - Voice.ai.

Sensity AI: Best All-In-One Deepfake Detection.

GitHub - ondyari/FaceForensics: Github of the FaceForensics dataset.

Shruti Agarwal, Hany Farid, Ohad Fried, Maneesh Agrawala, Detecting Deep-Fake Videos From Phoneme-Viseme Mismatches, CVPR Workshop Paper (2020).

Intel Introduces Real-Time Deepfake Detector (Nov. 14, 2022).

David Salazar, How Intel Is Putting Its AI-Optimized Processors to Work Detecting Deepfakes, Fast Company (Oct. 3, 2023).

Amped Authenticate - Photo and Video Analysis and Tampering Detection.

Setting the Standard for Image and Video Forensics, Amped Software.

Privacy - Pindrop.

OriginTrail.

Discover how creators can use Content Credentials to obtain proper recognition and promote transparency in the content creation process.