Artificial Intelligence Missteps: What We Can Learn from a Sanctioned Texas Lawyer

AI is everywhere, and it’s making waves in the legal field. It can help us research, draft, and manage cases faster than ever. But a recent case involving a Texas lawyer shows what happens when you trust AI a little too much.

In Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-00281 (U.S. District Court for the Eastern District of Texas), Texas lawyer Brandon Monk, representing a plaintiff in a wrongful termination case against Goodyear, used an AI tool to draft a legal brief. Sounds efficient, right? The problem: the AI generated case citations that didn’t exist, and the brief was filed anyway. When opposing counsel pointed out the fabricated citations, Monk didn’t address the issue quickly enough, and the court sanctioned him with a $2,000 fine and mandatory training on the use of generative AI in legal practice.

What Went Wrong?

This case is another wake-up call for attorneys about the risks of relying on AI without oversight. Some takeaways:

  1. AI Isn’t Always Right
    AI is an incredible tool, but it’s not perfect. It can “hallucinate,” reflect biases in its training data, and generate citations to cases that simply don’t exist. Attorneys need to be educated on how these tools work and double-check everything they generate.
  2. Know the Potential Pitfalls
    We have an ethical duty to be competent in the tools we use. That includes understanding what AI can and can’t do. If you’re using AI for research or drafting, you need to know how to spot fabricated citations, misstated law, and other errors before they reach the court.
  3. Judicial Intervention
    Courts are paying attention to AI and how we use it. Cases like this show that if you fall short on thoroughness and accuracy, you’re going to hear about it, and not in a good way.

Using AI the Right Way

Every business should know how to use AI responsibly in the workplace. That means (1) adopting an AI policy that addresses security and confidentiality, (2) providing ongoing education on AI’s capabilities and pitfalls, and (3) building verification steps into your workflow so AI-generated work is reviewed before it goes out the door.

Why It Matters

The lesson here is simple: AI can’t replace human judgment. It’s a tool, not a magic wand. The responsibility to get it right still falls squarely on us. Cases like this one are a good reminder to slow down, verify, and do the job the right way.

As AI continues to evolve, we’re all figuring out the balance between embracing new technology and maintaining the standards our profession demands.

If you’re wondering how to make AI work for you without falling into these traps, let’s talk. We’re here to help you draft a policy and educate you and your organization on the responsible use of AI.
