Wednesday, November 8th, 2023
Article originally published by Bloomberg Law: AI Standing Orders Proliferate as Federal Courts Forge Own Paths (bloomberglaw.com)
Reproduced with permission. Published Nov. 8, 2023. Copyright 2023 Bloomberg Industry Group 800-372-1033. For further use please visit https://www.bloombergindustry.com/copyright-and-usage-guidelines-copyright
Gentry Locke’s Jessiah Hulle surveys federal courts’ ever-expanding guidance on artificial intelligence, and finds that, for now, courts are experimenting with varied approaches to the new technology.
Federal courts nationwide are weighing in on how artificial intelligence can be used in court filings, and they’re exploring different approaches to address issues such as disclosure, accuracy, and ethical duties.
A comprehensive review of 196 federal court websites reveals that judges continue to release AI orders at a steady pace. So far, at least 14 federal courts have released official guidance on using AI tools in litigation.
These new orders also reveal a notable trend: Most courts tailor their AI mandates rather than adopting colleagues' guidelines verbatim.
AI’s Perils Inspire Policies
Judicial scrutiny of AI tools intensified in May, when a lawyer in Mata v. Avianca, Inc. filed a brief citing “nonexistent cases” created by ChatGPT, a generative AI software. The lawyer, who admitted relying on ChatGPT rather than personally verifying the cases, was later sanctioned by a federal judge.
Shortly after Mata, Judge Brantley Starr of the US District Court for the Northern District of Texas issued the first judicial standing order on AI. The order requires litigants in his court to file a certificate attesting either that no generative AI will be used in filings or that any generative AI use will be “checked for accuracy … by a human being.” Starr says the certificate is necessary because generative AI is “prone to hallucinations and bias.”
Others quickly followed suit. Orders by three judges—Judge Stephen Vaden of the Court of International Trade, Magistrate Judge Gabriel Fuentes of the Northern District of Illinois, and Senior District Judge Michael Baylson of the Eastern District of Pennsylvania—received national attention. Each judge put their own spin on Starr’s prototype.
Vaden’s standing order focuses on confidentiality, requiring litigants to expressly disclose use of generative AI and file a certificate attesting that such use didn’t disclose proprietary information to unauthorized parties.
Fuentes’ order also requires litigants to disclose the use of generative AI, but has no certificate requirement. Instead, to safeguard accuracy, Fuentes relies on Fed. R. Civ. P. 11, which requires arguments in a filing to be “warranted by existing law” and provides sanctions for noncompliance.
Baylson’s standing order is the most unusual of these four pioneers. It mandates disclosure of any use of AI—generative or not—and requires litigants using AI to certify that citations in a filing are “verified as accurate.” Two former federal judges note that Baylson’s order, by its broad terms, “directs counsel to reveal the use of seemingly innocuous programs like Grammarly.”
Practitioners predicted that other federal judges would follow the lead of these high-profile orders. Months later, those predictions have proven correct.
AI Orders Proliferate
Federal courts are steadily releasing AI standing orders, and—so far—no template has emerged.
For example, it appears that only three district judges—Judge Leslie Kobayashi of the District of Hawaii, Judge Scott Palk of the Western District of Oklahoma, and Judge Gene Pratter of the Eastern District of Pennsylvania—have issued orders mirroring Starr’s prototype.
In comparison, the Western District of Oklahoma Bankruptcy Court has an AI standing order adopting the substance of Starr’s order, but adding Vaden’s requirement that generative AI users certify nondisclosure of confidential information.
Likewise, New Jersey District Judge Evelyn Padin’s standing order generally tracks Starr’s model, but adds a condition that the certification must identify the specific “portion of the filing” drafted with generative AI.
Only one jurist, Magistrate Judge Jeffrey Cole in the Northern District of Illinois, has copied Baylson’s order regulating all AI. Cole requires certification even if AI is used for research rather than drafting. This restriction potentially encompasses everything from ChatGPT to AI-assisted search engines and chatbots.
In contrast, orders issued by the Northern District of Texas Bankruptcy Court and Southern District of New York Judge Arun Subramanian simply warn litigants about generative AI pitfalls without demanding disclosure or certification. Both orders, like Fuentes’, rely on federal procedural rules to ensure trustworthy filings.
Moreover, at least two courts expressly prohibit use of AI.
The Eastern District of Missouri provides guidance on its website titled “Self-Represented Litigants” that prohibits filings drafted by generative AI.
Similarly, Judge Michael Newman of the Southern District of Ohio has a standing order prohibiting litigants from using AI in preparing filings. But importantly, Newman carves out an exception for AI in legal and internet search engines. He also requires litigants to inform the court if they discover that AI was used in creating a filing.
Check Local Rules
This varied proliferation of federal standing orders shows that judicial curtailment or moderation of AI use in litigation is an ongoing development, not a passing trend. Pratter, for instance, published her AI order in October. And state and foreign courts are jumping on the bandwagon.
Because this landscape is constantly changing, lawyers and pro se litigants should always double-check local rules and individual judges’ preferences before using AI in a court filing.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.