
Maryland Appellate Court Warns Attorneys About AI‑Generated Fake Case Citations

Chukwuemeka Mezu v. Kristen Mezu
No. 361, September Term, 2025
Opinion by: Judge Graeff
Court: Appellate Court of Maryland
Areas of Law: Family Law, Legal Ethics, Civil Procedure, Professional Responsibility

In a case of first impression in Maryland, the Appellate Court of Maryland issued a strong warning to attorneys about the misuse of artificial intelligence (AI) in legal research and writing. In Chukwuemeka Mezu v. Kristen Mezu, the Court addressed the growing problem of fictitious case citations generated by AI tools, making clear that attorneys remain fully responsible for verifying the accuracy of all citations submitted to the court.

The opinion serves as a cautionary tale—and a clear ethical directive—for lawyers using AI in their practice.

Background of the Case

The appeal arose from a Motion to Invalidate portions of a Marital Settlement Agreement (MSA) entered into by Kristen Mezu (“Mother”) and Chukwuemeka Mezu (“Father”) following the filing of a complaint for absolute divorce.

During the appeal, the Mother’s attorney submitted a brief that included numerous case citations generated by artificial intelligence. The brief contained:

  • Citations to non‑existent (fictitious) cases
  • Misquoted passages
  • Citations to cases that did not support the legal propositions for which they were cited

These errors were not minor or isolated; rather, they reflected a fundamental failure to verify the accuracy of the legal authorities relied upon.

How the AI Errors Occurred

The record showed that:

  • A law clerk prepared the draft brief
  • The clerk relied heavily on AI‑assisted research tools, including ChatGPT and vLex
  • The clerk also consulted sites such as CourtListener, CaseMine, and Justia
  • The clerk stated she was unaware that some of these platforms use AI‑generated content or that search results could be the product of AI “hallucinations”

Most importantly, the attorney failed to independently review and verify the AI‑generated citations before filing the brief with the court.

Court’s Response and Ethical Implications

Because of the nature and severity of the misconduct, the Appellate Court of Maryland referred the attorney to the Maryland Attorney Grievance Commission.

The Court emphasized that while AI can be a useful tool, its improper use raises serious ethical concerns. Submitting briefs that contain fake cases—regardless of intent—is unacceptable and undermines the integrity of the judicial process.

Key Rules Cited by the Court

The Court relied on several ethical and procedural rules, including:

Maryland Rule 1‑311(b)

An attorney’s signature on a pleading certifies that:

  • The attorney has read the document, and
  • To the best of their knowledge, there is good ground to support it, and
  • It is not filed for an improper purpose or delay

Maryland Attorneys’ Rules of Professional Conduct – Rule 19‑303.1

Attorneys may bring or defend only meritorious claims and contentions.

Submitting fictitious citations—whether generated by AI or otherwise—violates these obligations.

The Court’s Message About AI in Legal Practice

The Appellate Court made an important distinction:

  • Using AI in legal practice is not inherently improper
  • Failing to verify AI‑generated content is unquestionably improper

The Court explained that AI tools must be used responsibly, and attorneys cannot rely on the apparent authority or sophistication of AI‑generated output. Courts will not excuse ethical violations simply because technology was involved.

Why This Decision Matters

This opinion serves as:

  • A warning to attorneys statewide
  • Guidance for trial courts on how to respond to AI‑related misconduct
  • A reminder that professional responsibility cannot be outsourced to technology

As AI becomes more common in legal practice, courts will continue to demand accuracy, diligence, and accountability from lawyers.

Frequently Asked Questions (FAQ)

Can attorneys use AI for legal research?

Yes. The Court acknowledged that AI may be a valuable tool when used responsibly.

What did the attorney in this case do wrong?

The attorney submitted a brief containing fake case citations generated by AI and failed to independently verify them.

Who is responsible when AI generates false citations in a filing?

The attorney. The Court made clear that ultimate responsibility rests with the signing attorney.

What are AI “hallucinations”?

AI hallucinations occur when an AI system generates information that appears credible but is factually incorrect or entirely fabricated.

What are the consequences of submitting fictitious citations?

Consequences may include sanctions, referral to the Attorney Grievance Commission, or other disciplinary action.

Does this decision prohibit the use of AI in legal practice?

No. It requires careful, responsible use and independent verification of all AI‑generated content.

Using AI in Legal Practice Requires Care and Accountability

The Appellate Court of Maryland issued this decision not to prohibit innovation, but to reinforce that ethical obligations remain unchanged—regardless of the tools used.

Submitting fictitious cases to a court is improper, whether done intentionally or through careless reliance on AI.