It's Not the AI's Fault: The Lazy Lawyer
Lazy lawyers were around before AI
11/16/2025 · 3 min read


It's Not the AI's Fault. You're Just a Lazy Lawyer: How Artificial Intelligence Is Exposing the Legal Profession's Competence Crisis
⚖️ Another week, another lawyer sanctioned for citing fake cases generated by AI.
This isn't an AI ethics problem. It's a lawyer competence problem.
The headlines keep piling up, and they're embarrassing for the entire profession. More than 486 cases worldwide have featured AI-hallucinated citations in court filings—324 of them right here in U.S. courts. And a Stanford study found legal hallucination rates between 69% and 88% across top AI models.
In September 2025, a Los Angeles attorney was hit with a $10,000 fine after 21 of the 23 case citations in his appellate brief turned out to be fabricated by ChatGPT. 😬 An Arizona lawyer was ordered to personally notify three federal judges that she had falsely attributed fake opinions to them. A Colorado attorney had his license suspended for 90 days after texting his paralegal that he was "like an idiot" for not checking ChatGPT's work before filing.
These aren't accidents. They're symptoms of something deeper.
Every Rule You Violated Already Existed
Here's the part nobody wants to talk about: Every single ethical duty these lawyers violated existed long before ChatGPT launched in November 2022.
ABA Model Rule 1.1 requires attorneys to exercise "legal knowledge, skill, thoroughness and preparation reasonably necessary" for competent representation. The duty to verify case citations and validate legal authorities has been foundational to legal practice for over a century.
🔍 And yes, lawyers still Shepardize—the term is alive and well in 2025. It's standard practice for validating case citations using tools like Shepard's Citations on Lexis, KeyCite on Westlaw, or BCite on Bloomberg Law. These citator services tell you whether cases have been overturned, distinguished, or criticized by later courts. Law schools still teach this as a fundamental skill.
ChatGPT literally warns users to verify its output. The ABA's Formal Opinion 512 makes it crystal clear: lawyers using generative AI must understand the "benefits and risks" and remain personally responsible for all AI-assisted work. Every legal ethics expert, bar association, and CLE program has been saying the same thing since 2023: You must verify AI-generated content.
So why are lawyers still getting sanctioned? Because they didn't bother.
AI Didn't Make You Incompetent—It Exposed You
🎯 The real scandal isn't that AI hallucinates. The real scandal is how many lawyers were willing to file court documents without reading them, without verifying them, and without performing basic due diligence.
Consider the excuses judges have heard:
🔸 "I didn't know AI could make up cases"
🔸 "I trusted the technology"
🔸 "I was under time pressure"
🔸 "My paralegal did it"
None of these excuses would have worked before AI existed, and they don't work now. If you filed a brief citing cases from a random blog without verifying them, you'd be sanctioned. If you relied on an unqualified assistant's research without checking it, you'd face a malpractice claim. The standard has always been the same: you are responsible for everything you file.
💡 AI is simply a mirror. It's reflecting back the profession's worst habits: over-delegation, inadequate supervision, failure to read source materials, and an erosion of professional standards that's been building for years.
The Malpractice Implications Are Real
💼 Legal malpractice carriers are paying attention. Firms without proper AI verification protocols face higher premiums and potential coverage exclusions. And clients harmed by AI hallucinations may well have valid malpractice claims for breach of the duty of care.
State bars are making it clear: AI use doesn't lower the competence standard—it raises it. Lawyers must now demonstrate:
✅ Understanding how AI tools work and their limitations
✅ Direct human review of all AI-generated content
✅ Independent verification of legal citations and authorities
✅ Proper supervision of staff using AI tools
✅ Documentation of verification steps taken
Stop Blaming the Tool
📱 We've been blaming technology for human laziness since the invention of copy-paste. AI didn't fail these lawyers—these lawyers failed their clients, the courts, and the profession.
ChatGPT told you to verify. Your ethics rules told you to verify. Your law school professors told you to verify. You knew better. You just didn't bother.
The lawyers getting sanctioned weren't unlucky. They were incompetent. And AI just made it impossible to hide that fact anymore.
What This Means for the Profession
🔥 This is a reckoning moment for legal practice. AI is forcing conversations the profession has avoided for decades: about competence standards, about shortcuts, about what "thorough" really means.
The law firms and solo practitioners that thrive in the AI era won't be the ones who adopted the technology first. They'll be the ones who understand their ethical duties deeply enough to use it responsibly.
Bottom line: AI isn't the problem. Lazy lawyering is. Stop outsourcing your professional judgment. Stop blaming the tool when you fail to do your job. And if you can't be bothered to verify a case citation before filing it in court, maybe it's time to ask yourself whether you should be practicing law at all.
