With AI misuse in courtrooms increasing, SC chief justice joins states issuing guidance

This article was originally published by South Carolina Daily Gazette.
COLUMBIA — In July, a federal judge in Alabama reprimanded three attorneys who used artificial intelligence in a court brief that cited nonexistent case law.
A Utah state appeals court ordered a lawyer to pay $1,000 to a state legal aid foundation in May after his law clerk improperly used AI to draft a brief. The lawyer failed to check the document before filing it, and it too cited fake cases.
That same month, an Indiana federal judge fined an attorney $6,000 for similar mistakes.
These mounting instances of AI misuse have prompted action from some state judiciaries. South Carolina joined a growing list of states earlier this year, according to a Duke University law school database.
Chief Justice John Kittredge issued an interim policy “due to the increasing use of artificial intelligence systems in legal research,” he told the SC Daily Gazette in a statement Wednesday.
The policy also was recommended by the National Center for State Courts and the Conference of Chief Justices, he said.
South Carolina is one of at least 10 states where judges have issued an order or guidance around the use of so-called “generative” AI tools — such as ChatGPT, capable of creating text, images, audio and video based on a set of prompts — in state court proceedings.
The chief justice, in March, told judges and judicial branch employees they may not use AI without the state Supreme Court or court administration’s express approval.
The judicial branch recirculated the policy in a mid-August news alert as a reminder.
“Generative AI tools are intended to provide assistance and are not a substitute for judicial, legal, or other professional expertise,” Kittredge wrote in what’s dubbed an interim policy. “As such, content from Generative AI may not be used verbatim; be assumed to be truthful, reliable, or accurate; be treated as the sole source of reference; or be solely relied on in making final decisions.”
Kittredge went on to list other potential risks, including “bias, cybersecurity vulnerabilities, and unauthorized use of intellectual property.” He also cited privacy concerns.
There’s no timeline for an update. The policy will stay in place until Kittredge or the entire Supreme Court issues a new order.
Kittredge told the Daily Gazette the language “is intended to be a common-sense approach that recognizes the potential benefits of AI in assisting South Carolina Judicial Branch employees in performance of certain tasks.”
Judges and lawyers in the state have access to Westlaw, a platform for legal research, which includes some AI-assisted research capabilities.
“The Interim Policy states that AI is to be used to assist judges and other judicial branch employees only and cannot be relied on to produce a final product,” he said in the statement.
And while the interim policy does not specifically apply to lawyers, the chief justice also cautioned them against relying on the technology.
“The institutions around law, whether it’s judges or lawyers, operate on some certain function of trust,” David Sella-Villa, a University of South Carolina professor of technology and law, told the SC Daily Gazette. “So how much does technology like this impact what people trust about lawyers and judges?”
Judges and state legal associations across the country that have waded into the issue largely have reached similar conclusions, Sella-Villa said. Where they differ is in the process for determining what could be considered admissible.
In Connecticut, the state judicial branch drafted a 21-page framework. And in Michigan, the state Supreme Court signed a contract last month with an AI platform built specifically for judges and judicial staff, according to The National Law Review.
“The difference is, these other states lay out a process or some kind of standard for approval,” Sella-Villa said. “South Carolina does not at this time.”
When it comes to lawyers, rules and policies differ even more, Sella-Villa said.
“There’s a whole variety of practices of what judges are willing to tolerate in their courtroom,” he said.
Some courts have gone so far as to limit any AI use by lawyers to platforms developed by the nation’s big legal research companies, such as LexisNexis or Bloomberg.
Others require lawyers to sign statements, under penalty of perjury, saying they didn't use AI to draft any portion of a filing.
At least one North Carolina judicial circuit took a different approach. Last year, the senior judge for Cabarrus County, which adjoins Charlotte, issued an order recognizing the increased use of AI by lawyers and society in general. The order reminds lawyers they are “ultimately responsible for everything submitted in a case” and requires disclosure before a trial starts on any evidence created by AI.
“It’s not denying the reality that this is part of legal practice for some people,” Sella-Villa said. “But at the same time, they don’t want to put somebody on the spot to make a decision.”
Between 20 and 30 state bar associations have issued varying levels of guidance, from official ethics opinions to semi-formal advice.
The South Carolina Bar has not issued an ethics opinion but has published a series of articles on the topic in its monthly magazine.
“We know that there’s a need for clarification, but there’s not yet a consensus on what exactly are the limits beyond just make sure that the stuff you give to the opposing counsel or a judge is real,” Sella-Villa said.
And because AI can save lawyers time when used properly, rules of professional conduct requiring attorneys to bill their clients fairly come into play: a lawyer shouldn't charge for hours the technology eliminated.
In teaching his own law students, Sella-Villa pushes first-year students to steer clear of the technology until they've learned how to analyze and write legal briefs on their own.
“If they’re not going and doing the work themselves, it’s hard to know that they’re actually learning,” he said. “It’s great that you can give a prompt and give me something that looks like a lawyer’s answer, but can you actually do that?”
Traditionally, law schools have tested students through their writing samples, so a technology that generates writing creates a new set of issues.
Meanwhile, universities, including USC, are striking deals with these AI companies to make more advanced versions of the technology readily available to students.