Texas Bar Journal • May 2024


Preparing to confront AI-generated 'evidence' in investigations and litigation.

Written by Anne D. Cartwright, Peter C. Anderson, and Jonathan A. Porter


Recent news items have highlighted advances in AI software that render it capable of generating fake images, video, and audio of a person nearly indistinguishable from the real thing, commonly known as “deepfakes.” As tools to create deepfakes become more accessible and output becomes increasingly convincing, deepfakes—and/or arguments that proffered evidence is deepfaked—are appearing more frequently in legal investigations, dispute resolution, and litigation. As we stand at the threshold of this rapidly changing AI era, attorneys can prepare to protect the integrity of investigations and litigation outcomes by learning to identify and address potentially deepfaked evidence when it arises.

Rule 1.1 of the American Bar Association Model Rules of Professional Conduct calls on all attorneys to have the “legal knowledge, skill, thoroughness and preparation” reasonably necessary for representation.1 Given the potentially outcome-determinative impact of deepfaked evidence, attorneys serving in investigative, dispute resolution, and litigation roles arguably have a duty to arm themselves with a baseline understanding of the potential for AI trickery, gather tools to address it, and anticipate methods for assessing it in investigations and litigation.

Train yourself—and your factfinding team—to monitor for and identify obvious deepfakes. Though most laypeople cannot spot a sophisticated deepfake, current consumer-facing deepfake applications often contain clear mistakes upon careful inspection. According to the U.S. Department of Homeland Security and a deepfake research project run by MIT and others, simple things to consider include unnatural lighting or shadows, blurry areas, inconsistent skin tone or texture, background anomalies, and neck and lip movement that is not aligned with audio.2 Additionally, readily available metadata—like creation dates and authorship—could provide insight. Witnesses and inconsistencies in evidence may also alert your team to the potential for AI-generated “evidence.”
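The basic metadata review described above can be partly scripted. The following is a minimal, hypothetical sketch (Python standard library only; the function name `provenance_snapshot` is illustrative, not a standard forensic tool) showing how a team might record a file's cryptographic hash and filesystem timestamp as an initial chain-of-custody note. It is a starting point for preservation, not a substitute for expert forensic examination.

```python
import datetime
import hashlib
import os
import tempfile

def provenance_snapshot(path):
    """Record a file's hash, size, and modification time for chain-of-custody notes.

    Caveat: filesystem timestamps are easily altered; embedded metadata
    (e.g., EXIF) and expert review are needed for deeper provenance analysis.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    st = os.stat(path)
    return {
        "sha256": digest,
        "size_bytes": st.st_size,
        "modified": datetime.datetime.fromtimestamp(st.st_mtime).isoformat(),
    }

# Demo on a throwaway file standing in for a piece of media evidence.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name

snap = provenance_snapshot(path)
print(snap["sha256"])  # hash can later show the file was not altered
os.remove(path)
```

Recording the hash at the moment evidence is received allows a witness to testify later that the file shown to the factfinder is bit-for-bit identical to what was collected.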

Companies are reportedly developing programs intended to identify even extremely persuasive deepfakes. Unfortunately, as these programs improve, deepfakes are simultaneously becoming more realistic and difficult to detect—indeed, bad actors are working to build tools coded to avoid detection markers.3 Consider deploying available, reliable technology to review potentially AI-generated media, making sure to continuously verify the dependability of any chosen programs.

With the rapid evolution of deepfake technology, it seems highly probable that even some consumer-facing applications will soon produce deepfakes that are undetectable absent expert examination. Demand for such experts will likely grow as quickly as the technology evolves. If the credibility of an alleged deepfake is a key issue in a case, a battle between expert witnesses may become the deciding factor for the trier of fact. Forensic experts may apply specialized detection applications and technical knowledge to review media, metadata, and code to identify false material. Consider building relationships with potential experts and their organizations early to avoid protracted searches that could delay proceedings. As deepfakes become more widespread and convincing, and as the supply of deepfake specialists increases, experts may need increasing amounts of specialized training to qualify as deepfake experts or to be sufficiently persuasive to the finder of fact.

Attorneys must prepare to confront deepfakes in representing their clients and to defend against allegations that legitimate evidence has been faked. While courts and lawmakers consider the extent to which AI-specific rules should be promulgated, existing rules and principles offer guidance. Court rules related to admissibility—including those governing relevance, authentication, hearsay (and its exceptions), and undue prejudice—will control in litigation and likely inform investigation and dispute resolution consideration of purported deepfakes.4

Because authentication under Federal Rule of Evidence 901 only requires that “the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is,” the bar for showing evidence to the trier of fact in many proceedings is not terribly high.5 The danger is that, because video and photographic evidence can be especially emotionally potent, the prejudice inflicted on the factfinder by deepfaked media may be irreversible even if expert testimony thoroughly debunks it.6 Of course, a deepfake may be excluded from consideration entirely if it is irrelevant or patently (or at least provably) false. If not, a factfinder may be left to decide whether it is “real.” Attorneys whose roles include the selection of jurors or other factfinders should consider what characteristics would best serve their clients in matters in which deepfakes are implicated. For example, the technological competence of proposed mediators or arbitrators may come into play, and voir dire may need to explore whether a juror is open to the possibility that visually seamless media evidence may not be real. A juror with little experience with technology or the internet may struggle to follow an expert witness, even one making a strong argument for a party.

Attorneys should be prepared for jurors or other factfinders to be skeptical of whether evidence is legitimate or a deepfake. Recent studies have found that the majority of Americans express confusion and concern over whether deepfakes play a role in current events.7 That uncertainty could be assuaged by devoting additional attention to foundational testimony that explains the steps the witness took to verify authenticity, including how the witness obtained the evidence and how metadata supports its legitimacy. Given jurors’ pre-existing skepticism regarding deepfakes, expert testimony pointing out errors in deepfaked evidence is likely to be very effective. Attorneys with deepfake concerns should attack evidence in these specific ways, rather than with general arguments that evidence could be deepfaked, which have angered both courts8 and juries.9 As with other potentially false information, parties and witnesses should also be questioned about the veracity of possibly deepfaked evidence.

Where assessment of an alleged deepfake will be left to a factfinder, the factfinder must consider how to assess its credibility. Credibility may be explored through traditional factors such as plausibility, corroboration, consistency, motive to falsify, contemporaneousness, and the context in which it was received (e.g., the credibility of the witness who provided it and chain-of-custody). Contradictory evidence outside the deepfake itself, such as other contemporaneous media that differ from the alleged deepfake, may also call into question the credibility of allegedly false media. Additionally, deepfakes may not align with more credible evidence such as a person’s real face and disinterested witness statements.

In a situation where a factfinder such as a judge, jury, investigator, or hearing officer believes that something could be a deepfake, but cannot be sure, they must carefully consider how much “weight” should be given to the evidence. This consideration should include an assessment of its credibility, as well as other factors suggesting its persuasiveness. Such factors include whether the media appears based on personal or direct observation versus hearsay or general knowledge, whether the evidence is direct or circumstantial, or whether other evidence is more reliable or plausible.

Attorneys investigating and evaluating matters must prepare for the changes and risks accompanying the exciting opportunities presented by AI. Familiarity with AI is crucial for attorneys to competently serve their clients in an AI-infused world. Judicial or investigative proceedings themselves may also need to adapt in terms of modifying evidentiary rules or providing increased training to judges, hearing officers, and other institutional factfinders. Developing deepfake-detection skills, bandwidth, resources, and strategies in advance will better position attorneys to conduct full and informed investigations and litigation.


1.   See Model Rules of Prof’l Conduct R. 1.1 (Competence) and Texas Disciplinary Rules of Professional Conduct Rule 1.01 (Competence and Diligent Representation).
2.   Increasing Threat of Deepfake Identities, U.S. Department of Homeland Security, https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf; see also Detect DeepFakes: How to counteract misinformation created by AI, MIT Media Lab, Affective Computing Project, https://www.media.mit.edu/projects/detect-fakes/overview/.
3.   Contextualizing Deepfake Threats to Organizations, U.S. Department of Defense, National Security Agency, https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF.
4.   Rebecca A. Delfino, Deepfakes on Trial: A Call To Expand the Trial Judge’s Gatekeeping Role To Protect Legal Proceedings from Technological Fakery, 74 Hastings L.J. 293 (2023), https://repository.uclawsf.edu/hastings_law_journal/vol74/iss2/3.
5.   Fed. R. Evid. 901.
6.   John P. LaMonaga, A Break From Reality: Modernizing Authentication Standards for Digital Video Evidence in the Era of Deepfakes, American University Law Review, Vol. 69, Iss. 6, Article 5 (2020), https://digitalcommons.wcl.american.edu/aulr/vol69/iss6/5.
7.   See, e.g., Jeffrey Gottfried, About three-quarters of Americans favor steps to restrict altered videos and images, Pew Research Center (June 14, 2019), https://www.pewresearch.org/short-reads/2019/06/14/about-three-quarters-of-americans-favor-steps-to-restrict-altered-videos-and-images/.
8.   See, e.g., Shannon Bond, People are trying to claim real videos are deepfakes. The courts are not amused, NPR (May 8, 2023), https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-arenot-amused.
9.   See, e.g., Caitlin Dickson, Jury finds Jan. 6 rioter Guy Reffitt guilty on all charges, Yahoo News (Mar. 8, 2022), https://news.yahoo.com/jury-finds-jan-6-rioter-guy-reffitt-guilty-on-all-charges-185659359.html. Reffitt’s attorney’s defense at closing was that video evidence of the January 6 riots could have been deepfaked, despite offering no witnesses or evidence supporting that argument, and despite the fact that the video that was introduced was found by the FBI on the defendant’s own hard drive. The jury convicted Reffitt.

ANNE D. CARTWRIGHT is a partner at Husch Blackwell in its virtual office, The Link (licensed to practice in Illinois, Kansas, and Missouri), where she is an education lawyer focused on legal compliance audits, policy development, investigations, and customized training. She works with colleges, universities, and health care systems managing regulatory requirements related to merger and acquisition transactions involving education programs. Cartwright also works with educational institutions on issues related to artificial intelligence and its implications.

PETER C. ANDERSON is an attorney with Husch Blackwell in its virtual office, The Link (licensed to practice in Iowa), where he is a member of the Education Group and supports the firm’s education clients, helping them stay in compliance with federal and state laws and regulations.

JONATHAN A. PORTER is a partner at Husch Blackwell in its virtual office, The Link (licensed to practice in Georgia), where he focuses on white collar criminal defense, federal investigations brought under the False Claims Act, and litigation against the government and whistleblowers, with an emphasis on the health care industry.
