Will Mandatory Generative AI Use Certifications Become The Norm In Legal Filings?
On Friday, June 2, Judge Brantley Starr of the Northern District of Texas released what appears to be the first standing order regulating the use of generative AI in court filings. Generative AI, which has recently emerged as a powerful tool on many fronts, offers capabilities for research, drafting, image creation, and more. But along with this new technology comes the opportunity for abuse, and the legal system is taking notice.
Judge Starr’s new order requires the following:
All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.
He is calling this a “Mandatory Certification Regarding Generative Artificial Intelligence,” and he will strike any filing that does not include the required certificate, noting that attorneys “will be held responsible under Rule 11 for the contents of any filing that they sign.” The order explains that this restriction is necessary because generative AI is not well suited to writing legal briefs, citing: (1) its propensity to “make stuff up – even quotes and citations” and (2) the chance that the artificial intelligence incorporates some type of unknown or unanticipated bias. Judge Starr observes that while attorneys have sworn an oath to set aside personal prejudices and biases, the programmers of generative AI products have sworn no such oath.
The order is timely and may be at least in part a response to litigation in the Southern District of New York, where an attorney appearing before that court filed a brief in which “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” Roberto Mata v. Avianca, Inc., 1:22-cv-01461-PKC (S.D.N.Y. May 4, 2023). The attorney is now facing potential sanctions as a result of his reliance on generative AI software for case citations that he did not confirm through well-known, widely used legal sources.
While this may be the first order addressing lawyers’ use of generative AI, other courts are very likely to follow suit soon.