Texas Bar Journal • May 2024
Deepfakes
Preparing to confront AI-generated 'evidence' in investigations and litigation.
Written by Anne D. Cartwright, Peter C. Anderson, and Jonathan A. Porter
Recent news items have highlighted advances in AI software that render it capable of generating fake images, video, and audio of a person nearly indistinguishable from the real thing, commonly known as “deepfakes.” As tools to create deepfakes become more accessible and output becomes increasingly convincing, deepfakes—or arguments that proffered evidence is deepfaked—are appearing more frequently in legal investigations, dispute resolution, and litigation. As we stand at the threshold of this rapidly changing AI era, attorneys can prepare to protect the integrity of investigations and litigation outcomes by learning to identify and address potentially deepfaked evidence when it arises.
THE ETHICAL DUTY OF COMPETENCE
Rule 1.1 of the American Bar Association Model Rules of Professional Conduct calls on all attorneys to have the “legal knowledge, skill, thoroughness and preparation” reasonably necessary for representation.1 Given the potentially outcome-determinative impact of deepfaked evidence, attorneys serving in investigative, dispute resolution, and litigation roles arguably have a duty to arm themselves with a baseline understanding of the potential for AI trickery, gather tools to address it, and anticipate methods for assessing it in investigations and litigation.
HOW CAN ATTORNEYS PREPARE TO ADDRESS DEEPFAKED MEDIA?
Train yourself—and your factfinding team—to monitor for and identify obvious deepfakes. Though most laypeople cannot spot a sophisticated deepfake, output from current consumer-facing deepfake applications often contains clear mistakes upon careful inspection. According to the Department of Homeland Security and a deepfake research project run by MIT and others, simple things to consider include unnatural lighting or shadows, blurry areas, inconsistent skin tone or texture, background anomalies, and neck and lip movement that is not aligned with audio.2
Additionally, readily available metadata—like creation dates and authorship—could provide insight. Witnesses and inconsistencies in evidence may also alert your team to the potential for AI-generated “evidence.”
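For teams that want a concrete starting point, the sketch below shows what a first-pass metadata review might look like, assuming Python with the widely available Pillow imaging library; the file name is a hypothetical placeholder, and a genuine forensic examination would go much further while preserving chain of custody.

    # A minimal first-pass metadata check, assuming Python with the
    # Pillow library installed (pip install Pillow). The file name
    # "photo_exhibit.jpg" is a hypothetical placeholder.
    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("photo_exhibit.jpg")
    exif = img.getexif()

    if not exif:
        # Missing metadata is not proof of fakery, but AI generators
        # and screenshot tools often strip EXIF data entirely.
        print("No EXIF metadata found; flag for closer review.")
    else:
        for tag_id, value in exif.items():
            tag = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs
            print(f"{tag}: {value}")        # e.g., DateTime, Software, Make

Fields like Software or DateTime that conflict with a witness’s account, or metadata that is missing altogether, can justify escalating the file to a forensic expert.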
UTILIZE RELIABLE DEEPFAKE DETECTION TECHNOLOGY
Companies are reportedly developing programs intended to identify even extremely persuasive deepfakes. Unfortunately, as these programs improve, deepfakes are simultaneously becoming more realistic and more difficult to detect—indeed, bad actors are working to build tools coded to avoid detection markers.3
Consider deploying available, reliable technology to review potentially
AI-generated media, making sure to continuously verify the
dependability of any chosen programs.
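As one illustration of what continuous verification might look like in practice, the sketch below (in Python) scores a chosen detection tool against benchmark media whose provenance is already known, so a team can track the tool’s error rates over time; the detector callable and the file paths are hypothetical placeholders, not references to any particular product.

    # A minimal sketch of periodically validating a deepfake detector
    # against media of known provenance. The detector callable and the
    # benchmark file paths are hypothetical placeholders.
    from typing import Callable, Iterable

    def evaluate_detector(is_fake: Callable[[str], bool],
                          known_real: Iterable[str],
                          known_fake: Iterable[str]) -> None:
        """Report how often the tool errs on media of known provenance."""
        real, fake = list(known_real), list(known_fake)
        false_alarms = sum(is_fake(p) for p in real)  # authentic media flagged as fake
        misses = sum(not is_fake(p) for p in fake)    # fakes that slip through
        print(f"False-alarm rate: {false_alarms}/{len(real)}")
        print(f"Miss rate:        {misses}/{len(fake)}")

    # Hypothetical usage with whatever tool the team has licensed:
    # evaluate_detector(chosen_tool.is_fake,
    #                   ["benchmarks/real_01.mp4", "benchmarks/real_02.mp4"],
    #                   ["benchmarks/fake_01.mp4", "benchmarks/fake_02.mp4"])

A miss rate that climbs across successive checks suggests the tool is falling behind newer generation techniques and should be revalidated or replaced.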
IDENTIFY AI EXPERTS
With the rapid evolution of available deepfake technology, it seems
highly probable that even some consumer-facing applications will
produce deepfakes that are undetectable absent expert examination.
Demand for such experts will likely evolve as quickly as the technology. If the credibility of an alleged deepfake is a key issue in a case, a battle between expert witnesses may become the deciding factor for the trier of fact. Forensic experts may apply specialized detection
applications and technical knowledge to review media, metadata, and
code to identify false material. Consider building relationships with
potential experts and their organizations early to avoid protracted
searches that could delay proceedings. As deepfakes become more widespread and convincing, and as the supply of experts specialized in deepfakes increases, those experts may need increasing amounts of specialized training to qualify as deepfake experts or to be sufficiently persuasive to the finder of fact.
PREPARE TO RAISE ARGUMENTS RELATING TO DEEPFAKERY
Attorneys must prepare to confront deepfakes in representing their
clients and to defend against allegations that legitimate evidence has
been faked. While courts and lawmakers consider the extent to which
AI-specific rules should be promulgated, existing rules and principles
offer guidance. Court rules related to admissibility—including those governing relevance, authentication, hearsay (and its exceptions), and undue prejudice—will control in litigation and will likely inform the assessment of purported deepfakes in investigations and dispute resolution.4
Because authentication under Federal Rule of Evidence 901 requires only that “the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is,” the bar for getting evidence before the trier of fact is not terribly high in many proceedings.
Attorneys should be prepared for jurors or other factfinders to be skeptical of whether evidence is legitimate or a deepfake. Recent studies have found that a majority of Americans express confusion and concern over whether deepfakes play a role in current events.7 That uncertainty could be assuaged by devoting additional attention to foundational testimony that explains the steps a witness took to verify authenticity, including how the witness obtained the evidence and how metadata supports its legitimacy. Given jurors’ pre-existing skepticism regarding deepfakes, expert testimony pointing out errors in deepfaked evidence may be especially effective. Attorneys with deepfake concerns should attack evidence in these specific ways rather than with generalized arguments that evidence could be deepfaked, which have angered both courts8 and juries.9 As with other potentially false information, parties and witnesses should also be questioned about the veracity of possibly deepfaked evidence.
DECIDING UPON THE CREDIBILITY OF POTENTIAL DEEPFAKES
Where assessment of an alleged deepfake will be left to a factfinder,
the factfinder must consider how to assess its credibility. Credibility
may be explored through traditional factors such as plausibility,
corroboration, consistency, motive to falsify, contemporaneousness, and
the context in which it was received (e.g., the credibility of the witness who provided it and the chain of custody). Contradictory
evidence outside the deepfake itself, such as other contemporaneous
media that differ from the alleged deepfake, may also call into
question the credibility of allegedly false media. Additionally, deepfakes may fail to align with more credible evidence, such as a person’s actual appearance or disinterested witness statements.
DECIDING UPON THE WEIGHT TO GIVE POTENTIAL DEEPFAKES
In a situation where a factfinder such as a judge, jury, investigator, or hearing officer believes that something could be a deepfake but cannot be sure, they must carefully consider how much “weight” to give the evidence. This consideration should include an assessment of its credibility, as well as other factors bearing on its persuasiveness, including whether the media appears to be based on personal or direct observation versus hearsay or general knowledge, whether the evidence is direct or circumstantial, and whether other evidence is more reliable or plausible.
MOVING FORWARD
Attorneys investigating and evaluating matters must prepare for the
changes and risks accompanying the exciting opportunities presented by
AI. Familiarity with AI is crucial for attorneys to competently serve
their clients in an AI-infused world. Judicial or investigative
proceedings themselves may also need to adapt in terms of modifying
evidentiary rules or providing increased training to judges, hearing
officers, and other institutional factfinders. Developing
deepfake-detection skills, bandwidth, resources, and strategies in
advance will better position attorneys to conduct full and informed
investigations and litigation.
ANNE D.
CARTWRIGHT is a partner with Husch Blackwell in its virtual
office The Link (licensed to practice in Illinois, Kansas, and
Missouri), where she is an education lawyer focused on legal compliance
audits, policy development, investigations, and customized training. She
works with colleges, universities, and health care systems managing
regulatory requirements related to merger and acquisition transactions
involving education programs. Cartwright also works with educational
institutions on issues related to artificial intelligence and its
implications.
PETER C.
ANDERSON is an attorney with Husch Blackwell in its virtual
office The Link (licensed to practice in Iowa), where he is a member of
the Education Group and supports the firm’s education clients, helping
them stay in compliance with federal and state laws and regulations.
JONATHAN A.
PORTER is a partner with Husch Blackwell in its virtual office
The Link (licensed to practice in Georgia), where he focuses on white
collar criminal defense, federal investigations brought under the False
Claims Act, and litigation against the government and whistleblowers,
with an emphasis on the health care industry.