AI Tools Demonstrate Hidden Risks of Imperfect Outputs
As AI use grows, so does the need for disclaimers, vetting AI-generated results, and avoiding expensive errors.
When it comes to deploying AI, some organizations are listening to their lawyers. Many documents created with AI now carry disclaimers like this one, which I found at the bottom of a document on the legal community platform Justia: “Some case metadata and case summaries were written with the help of AI, which can produce inaccuracies. You should read the full case before relying on it for legal research purposes.” At least this entity has the courage to announce and warn about its use of AI. Many entities do not, including a radio station with which I’m familiar that uses AI to produce weather reports signed off by “staff meteorologists” who are not real. The AI-generated forecasts are close to accurate, which I view as positive but not determinative; the problem is that the named meteorologists who supposedly provide them are presented as real people, which they are not.
Well-respected analyst and No Jitter contributor Blair Pleasant shared this gem from her local government’s website with me:
"Answers given by this chatbot, or any live agent(s) may contain errors or may be incomplete. The Judicial Council of California and the California courts (and their officers and personnel):
- Make no representations or warranties about the Virtual Customer Service Center, and are not responsible for any damage, loss, claim or liability arising out of your use of the Virtual Customer Service Center, or information provided by the chatbot or in the live chat.
- Do not warrant the Virtual Customer Service Center or any related materials will be error-free, or free of viruses or other harmful components."
With disclaimers like this, who needs chatbots? What’s clear is that at least some of the entities deploying AI tools recognize the vulnerabilities that using those tools exposes. This is not a new argument for me to make: while AI-driven chatbots may come through with correct information much of the time, they’re not always right. And that can have consequences for any business that uses those chatbots in its operations.
As creative entities work feverishly to build AI apps that solve every problem, it’s dismaying to observe how often those working under extreme pressure to create the next big thing let business- or industry-critical items slip through the cracks, wreaking havoc on those who rely on AI-generated output to make critical decisions.
As AI deployment increases, so will the number of errors. These will likely occur in every sector where AI tools are deployed, but few are more visible than those that occur in the legal profession. In mid-May, MIT Technology Review’s newsletter The Algorithm highlighted several recent occurrences of “AI hallucinations” in courtrooms across the country.
According to Science News Today, “In the world of artificial intelligence, a hallucination refers to when an AI model generates information that is not true, not supported by any data, or entirely fictional. These ‘hallucinations’ may take the form of fake facts, invented quotes, incorrect citations, or completely fabricated people, places, or events … An AI hallucination occurs when [a large language] model produces a response that sounds plausible but is factually incorrect, logically flawed, or completely invented.”
By this definition, such information does not qualify as disinformation, because its creation is not intentional; it is simply a function of the system answering the question.
The bad publicity, not to mention the bad outcomes, when lawyers get caught unknowingly relying on AI hallucinations can be staggering and potentially career ending. Recently, The Algorithm even cited an example of a hallucinated citation in a court filing produced by the AI company Anthropic itself. Clearly someone, or many someones, has taken their collective eyes off the ball.

As a more current example, the Chicago Sun-Times published a summer reading list on Sunday, May 18th in which 10 of the 15 recommended books do not exist. How did they do it? They did it with AI. The bigger questions are why, and why there was no review process to check the AI output. Sadly, this is not an outlier. And as confidence in media sources has decreased, “accidents” like this only make such sources that much less trustworthy.
Originally posted on No Jitter, May 22, 2025