This article looks at a recent episode where Sullivan & Cromwell admitted that a major court filing included AI-generated hallucinations and inaccuracies. The firm apologized, made corrections, and sparked a wider conversation about the limits of artificial intelligence in high-stakes legal work.
Even leading firms can slip up when using AI in complex litigation. This case really shows why human review and oversight matter as AI tools become more common in law.
Overview of the incident and its stakes
The filing at the center of all this involved Sullivan & Cromwell’s work for liquidators going after the Prince Group, a Chinese-owned conglomerate led by Chen Zhi. On April 9, opposing counsel Boies Schiller Flexner flagged some serious problems: misquotations of the U.S. Bankruptcy Code, wrong case citations, and inaccurate summaries of other cases.
The firm admitted that many of these mistakes came from AI assistance and that it didn’t catch them until after submission. In a letter to Judge Martin Glenn, Andrew Dietderich, S&C’s co-head of global restructuring, apologized for the errors and said the firm’s AI policies and training weren’t followed this time.
A second review also missed the issues before the filing went in. S&C later filed a corrected version and made clear that the errors came not from independent analyst work but from AI-generated material.
What the disclosure reveals about AI use in filings
The firm didn’t say which AI tool they used or who drafted the filing. But it’s a strong reminder: lawyers can use AI in their practice, but they’re still ethically responsible for making sure court submissions are accurate.
AI can produce errors that look convincing but are legally wrong: misquoted statutes, mis-cited cases, and inaccurate summaries of holdings. That risk isn’t going away anytime soon.
Ethical and compliance implications for law firms
This episode raises tough questions about how firms actually govern AI use. Ethical rules demand that court submissions are accurate, properly sourced, and free from anything misleading or made up.
If AI-generated content goes unchecked, firms could face professional discipline, reputational damage, and real harm to their clients. That’s not something any firm wants to deal with.
For clients and the wider legal world, this serves as a warning: robust governance around AI tools is absolutely necessary. That means clear responsibility, mandatory human review, and transparent documentation of how AI is used in drafting and analysis.
Key lessons for AI governance in legal practice
So, what should firms actually do in response to episodes like this? Here are some best practices that seem obvious, but often get overlooked:
- Explicit policy enforcement: Make sure AI usage policies are followed and can be audited before anything is filed.
- Mandatory verification: Always require a human to independently check all AI-generated legal citations, quotes, and conclusions.
- Traceability: Keep a clear record of which AI tools were used for which parts of a document (a minimal logging sketch follows this list).
- Quality control layers: Add multiple rounds of review, including a risk-based check of all legal authorities cited.
- Tool vetting and updates: Regularly evaluate AI tools for reliability, bias, and whether they’re up to date with the law.
- Ethics training: Keep reinforcing education on AI ethics and the limits of automated reasoning.
- Client communication: Be up front with clients about AI-assisted processes and oversight.
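To make the traceability and mandatory-verification points concrete, here is a minimal sketch in Python of what an internal AI-provenance log might look like. This is illustrative only: the class name, its fields, and the `unverified_sections` helper are hypothetical, not any firm’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIProvenanceRecord:
    """One entry per document section that involved AI assistance."""
    document_id: str        # internal matter/document identifier
    section: str            # e.g., "Argument II.B"
    ai_tool: str            # tool name (hypothetical)
    ai_tool_version: str    # exact model/version used
    prompt_summary: str     # short description of what was asked
    reviewed_by: str | None = None  # named human verifier; None until checked
    review_notes: str = ""
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def verified(self) -> bool:
        # A section counts as verified only after a named human signs off.
        return self.reviewed_by is not None

def unverified_sections(records: list[AIProvenanceRecord]) -> list[str]:
    """Return sections that still need human review before filing."""
    return [r.section for r in records if not r.verified]

# Example: block a filing while any AI-assisted section lacks sign-off.
records = [
    AIProvenanceRecord("2024-BK-001", "Argument II.B",
                       "example-llm", "v1.2", "summarize cited cases"),
]
assert unverified_sections(records) == ["Argument II.B"]
```

The design choice is simple: a filing is releasable only when every AI-assisted section has a named reviewer attached, which gives auditors a paper trail and gives the firm a single place to answer "who checked this, and when?"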
Case context: Prince Group and the Chen Zhi charges
The ongoing case against the Prince Group involves serious U.S. charges, including wire fraud, money laundering, and running forced-labor scam compounds. U.S. prosecutors are trying to seize nearly $9 billion in bitcoin they say is linked to the group’s criminal activity.
Chen Zhi was arrested in Cambodia and extradited to China as part of a broader international crackdown. That this case collided with AI-assisted legal work only raises the stakes when technology meets complex, cross-border litigation.
Impact on risk management and due diligence
For law firms, this incident really drives home that risk management has to include the tools used in case prep. Firms need to check the reliability of AI-generated insights, cross-check results against trusted sources, and make sure every submission goes through a human-in-the-loop review that can stand up in court.
Practical takeaways for firms and clients
Firms can take a few practical steps to reduce risks and keep client interests front and center.
- Strengthen governance around AI-assisted drafting, and make sure someone’s clearly accountable.
- Set up standardized verification protocols for all AI-generated content in filings (see the citation-check sketch after this list).
- Invest in training that helps people spot hallucinations and misquotations.
- Keep detailed records of which AI tools, versions, and review steps get used in each document.
- Talk openly with clients about how you use AI and what oversight looks like.
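As one concrete example of a verification protocol, the sketch below pulls citation-like strings out of a draft so a human reviewer can check each one against the actual reported case before filing. The regex and the `extract_citations_for_review` helper are illustrative assumptions, not a production citation parser; a real workflow would pair dedicated citation tools with attorney review.

```python
import re

# Rough pattern for U.S. reporter citations like "550 U.S. 544" or
# "910 F.3d 1". Illustrative only: real citation parsing needs a
# dedicated legal citation tool plus human judgment.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|B\.R\.)\s+\d{1,4}\b"
)

def extract_citations_for_review(text: str) -> list[str]:
    """Pull citation-like strings out of a draft so a human can verify
    each one against the actual reported case before filing."""
    return sorted(set(CITATION_PATTERN.findall(text)))

draft = (
    "See Bell Atl. Corp. v. Twombly, 550 U.S. 544 (2007); "
    "In re Example, 910 F.3d 1 (2d Cir. 2018)."
)
for cite in extract_citations_for_review(draft):
    print(f"[ ] verify quote and holding for {cite}")
```

Run against a draft, this produces a checklist a reviewer must clear line by line, which is exactly the kind of auditable human step the S&C episode shows can’t be skipped.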
Here is the source article for this story: AI hallucinations found in high-profile Wall Street law firm filing