AI in courtrooms has prompted fresh concern after a UK judge warned lawyers that they could face sanctions, including contempt of court or criminal charges, for citing AI-generated fake cases. While AI hallucinations pose risks to legal ethics, responsible AI use in routine tasks can boost efficiency. To maintain public trust in justice, lawyers must verify AI outputs, uphold professional integrity, and use the technology ethically.

AI in the Courtroom: Ethical Challenges and Responsible Use

On June 6, 2025, London’s High Court issued a stern warning to lawyers: citing non-existent cases generated by artificial intelligence (AI) could lead to contempt of court or even criminal charges. Justice Victoria Sharp, addressing two cases in which lawyers submitted fabricated case law, underscored the “serious implications for the administration of justice and public confidence in the justice system” when AI is misused. In one instance, a lawyer cited 18 fake cases in a £90 million lawsuit involving Qatar National Bank; the client, Hamad Al-Haroun, admitted to using AI tools and took responsibility for the error.

In another, barrister Sarah Forey cited five fictitious cases in a housing claim against the London Borough of Haringey, offering no clear explanation. Both lawyers were referred to their professional regulators, highlighting a growing concern: AI’s potential to erode trust in legal processes through “hallucinations”—confidently generated but false information. This editorial explores the ethical obligations of lawyers in the age of AI, the risks of AI hallucinations, and how legal professionals can harness AI responsibly to uphold justice.

The Perils of AI Hallucinations in Legal Practice

AI hallucinations, in which tools like ChatGPT or Google Gemini produce plausible but fictitious outputs, pose a unique threat in the legal field. Unlike human errors, which often stem from oversight or misinterpretation, AI-generated falsehoods can appear authoritative, deceiving even seasoned professionals. In the Qatar National Bank case, the lawyer relied on the client’s AI-generated citations rather than conducting independent verification, inverting the usual relationship of expertise between lawyer and client, a misstep Justice Sharp called “extraordinary.” Similarly, in the Haringey case, the use of American spellings and formulaic prose raised red flags, suggesting AI’s involvement despite the lawyer’s denial.

These incidents echo global trends. In the U.S., lawyers have faced sanctions for citing fictitious case law in court filings: in the widely reported Mata v. Avianca aviation injury case in New York, attorneys were fined $5,000 for submitting fake citations generated by ChatGPT. Other legal professionals in U.S. courts have been reprimanded for overreliance on AI outputs, including in federal tax and civil litigation. Reuters also reported that Morgan & Morgan cited AI-generated content in a case involving Walmart, though no public sanctions or specific penalties were detailed at the time.

Such errors undermine the legal profession’s ethical bedrock. Lawyers are bound by a duty to ensure accuracy and avoid misleading the court, a principle codified in professional codes worldwide. Submitting unverified AI-generated citations violates this duty, risking contempt charges or, in extreme cases, prosecution for perverting the course of justice, which carries a potential life sentence in the UK. Beyond legal consequences, these missteps erode public trust, as courts rely on accurate precedents to deliver justice. When AI fabricates case law, it threatens the integrity of judicial decisions, potentially affecting outcomes for clients and the broader legal system.

The Ethical Imperative: Balancing Innovation with Responsibility

The rise of AI in law is not inherently problematic. Tools like CoCounsel or Westlaw Precision can streamline research, summarize documents, or draft initial briefs, saving time and reducing costs. Sixty-three percent of lawyers surveyed by Thomson Reuters in 2024 reported using AI, with 12% doing so regularly. Yet, as Justice Sharp noted, “AI is a powerful technology” with “risks as well as opportunities.” The ethical use of AI demands rigorous oversight.

Lawyers must verify AI outputs against primary sources; legal ethicists such as Lisa Lerman argue that failing to do so constitutes malpractice. The American Bar Association’s 2024 opinion on generative AI (Formal Opinion 512) reinforces this, stating that neglecting to review AI-generated work violates the duty of competent representation.

Consider the human element: a lawyer’s role is not just to process information but to exercise judgment, empathy, and ethical discernment. AI lacks the ability to contextualize legal nuances or assess the moral weight of a case. In the 2023 Mata v. Avianca case noted above, lawyer Steven Schwartz cited fake cases from ChatGPT, mistakenly believing it to be a reliable search engine. The resulting sanctions underscored a simple truth: AI is a tool, not a substitute for critical thinking. Lawyers must treat AI as a first draft, not a final authority, ensuring every citation and argument is vetted with the diligence expected of a human professional.

When and How to Use AI Ethically in Legal Practice

AI can be a boon in specific legal scenarios. For routine tasks—such as drafting contracts, summarizing discovery documents, or identifying relevant statutes—AI tools excel, provided outputs are cross-checked. In high-volume practices like personal injury law, AI can manage caseloads efficiently, as seen with firms like Morgan & Morgan, which adopted internal AI platforms like MX2.law. However, ethical use requires clear protocols: lawyers must disclose AI use to clients, verify outputs, and avoid over-reliance on tools for substantive legal arguments or precedent-setting citations. In complex litigation, where nuanced interpretation is critical, AI should be limited to support roles, not decision-making.

Courts and regulators are stepping up to address AI’s risks. In the UK, Justice Sharp emphasized that existing guidance is “insufficient,” urging stronger regulatory frameworks. In the U.S., some judges, such as Judge Brantley Starr in Texas, now require attorneys to attest that their filings contain no unverified AI-generated content. These measures signal a broader need for mandatory AI training in legal education and continuing professional development. Firms should invest in teaching lawyers to use AI as a collaborative tool, not a shortcut, fostering a culture of accountability.

A Path Forward: Restoring Trust Through Ethical AI Use

The legal profession stands at a crossroads. AI’s potential to enhance efficiency is undeniable, but its misuse threatens the trust that underpins justice systems. Lawyers must embrace AI with a sense of responsibility, treating it as a partner that requires human oversight. Regulators should develop clear AI policies, mandating transparency and verification. Clients, too, deserve assurance that their cases won’t be jeopardized by unchecked technology. As a California judge warned after nearly citing fake AI-generated cases, the stakes are high: a single oversight could embed falsehoods in judicial rulings, with lasting consequences.

The human touch remains irreplaceable. A lawyer’s duty to uphold justice demands diligence, integrity, and a commitment to truth—qualities no algorithm can replicate. By using AI ethically, lawyers can harness its power while preserving the profession’s moral core. The warning from London’s High Court is a wake-up call: embrace innovation, but never at the cost of justice.

FAQs:

Q1: Can lawyers use ChatGPT or similar AI tools for legal research?
A: Yes, but with strict caution. AI tools can assist with initial research or drafting, but lawyers are ethically obligated to verify all outputs against official sources. Submitting unverified AI-generated content in court filings risks severe professional and legal consequences.

Q2: What are AI hallucinations in the legal context?
A: AI hallucinations occur when tools like ChatGPT confidently generate legal citations, rulings, or statutes that don’t actually exist. These outputs may look authentic but are entirely fabricated—posing serious risks if used without proper validation.