Robot Judge

For years, the advice given to students has been "don't just rely on Wikipedia, check the sources too," and now it needs an addendum: "and don't trust ChatGPT either." There are no shortcuts, especially when it comes to the law, a lesson New York attorneys Steven Schwartz and Peter LoDuca have learned to the tune of a $5,000 fine.

The disciplinary action stems from their work on behalf of a man who claimed he was injured on a flight operated by Colombian airline Avianca and was suing for damages. The lawyers asked ChatGPT for summaries of other cases that would bolster their argument, and it returned several fabricated citations that they took at face value. It turns out there is no precedent called "Miller v. United Airlines" or "Martinez v. Delta Airlines" that relates to their case, and the judge called them out on it.

It's worth remembering what ChatGPT actually is: a highly advanced text prediction engine that "knows" nothing more than which word is likely to come next. Because it doesn't understand what any of the content it generates means, it has no way of knowing whether any of it is true. It will happily create statements out of thin air, with no basis in reality, because it has no concept of "right," "wrong," or "I don't know." It's remarkably good at producing sentences that sound like they were written by a human, but it can get things very, very wrong.
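To make that concrete, here is a toy sketch of pure next-word prediction. This is not how ChatGPT is actually built (real models are vastly more sophisticated), and the training sentences and their "holdings" below are placeholders invented for illustration, not real law. The point is that a model that only learns which word tends to follow which will cheerfully splice fragments into a fluent, confident citation that never existed.

```python
# Toy next-word predictor: a bigram model learned from a few fake sentences.
# It has no concept of truth, only of what word tends to come next.
import random
from collections import defaultdict

# Placeholder "training data" -- invented case summaries, not real law.
training_text = (
    "Miller v United Airlines held that the carrier was liable . "
    "Martinez v Delta Airlines held that the claim was time barred . "
    "Martinez v United Airlines held that the ticket was a contract ."
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    """Repeatedly pick a plausible next word, with no regard for truth."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("Miller"))
# Possible output: "Miller v Delta Airlines held that the ticket was a contract ."
# Fluent and confident, yet stitched together from unrelated fragments --
# the model only knows what tends to come next, not what is true.
```

ChatGPT's fabricated case citations are the same failure mode at a much grander scale: the output is statistically plausible, which is not the same thing as being real.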

Like Wikipedia, ChatGPT can be a great starting point for research and for generating ideas, but it is far from an arbiter of truth and definitely shouldn't be called upon as the sole source of information for legal cases. Even if ChatGPT is capable of passing the bar exam, that doesn't mean it is a good lawyer.
