Workplace AI Use Demands Careful Monitoring
November 8, 2025
[Your Name Here] in AI, Artificial Intelligence, Consumer Issues, No Jitter

While AI tools offer valuable shortcuts, workplace misuse is difficult to track and litigation from unreliable AI outputs is just beginning to emerge.
I’m in the second semester of teaching an undergraduate class in ethics, with a focus on the use of AI tools, and I am both hopeful and skeptical that AI tools can be used beneficially in most contexts. However, and this is a big enough HOWEVER to warrant capital letters, these tools are only as good as the people using them. AI tools are precisely that — tools. Nothing more.

Fail Without Recourse 

For the purposes of my class, I ask students to notify me if they are using tools such as Grammarly or ChatGPT to do their research. Both are acceptable if I’m notified. However, while the task of identifying unoriginal work is getting tougher, I make students fully aware that I will check every source and citation, and that I will run every document through an AI “checker” for originality. While such checks are not foolproof, they provide some assistance, as do other “tricks of the trade” that I use to verify original student work. Students are further warned that if I find their work is not original, they will fail the course without recourse. It is, after all, an Ethics class.

AI Use in the Workplace

In the workplace, such misuse of AI tools may be tougher to track, but that’s no reason not to monitor how employees use them, not just occasionally but routinely. Of course, humans make plenty of errors without AI tools. Many of these tools provide an indispensable shortcut when time and energy are limited. The problem is that reliance on them can be badly misplaced and very harmful. It is important to note that litigation resulting from reliance on unreliable AI tools is just getting started.

AI is one of the biggest buzzwords in the CX industry, with those who operate CX platforms looking for new and existing ways to optimize output while minimizing cost. Even if an AI-driven tool resolves an issue 80% of the time (and that’s according to the entity running the process), the bigger question is what happens when the AI-driven CX “solution” (I put that word in quotes intentionally) is not successful or, worse, provides inaccurate information that causes harm?

Nowhere are these errors more visible than in the legal world, where lawyers — and even judges — are being caught relying on AI tools to do their work and guide their decisions. The consequences in these cases can be beyond severe and can lead to an out-and-out miscarriage of justice. Not that errors don’t happen without reliance on AI tools, because they do. But the risks are far greater when the research leading to a decision is faulty from the start.

According to the Hon. John Licata, a judge in New York’s State Supreme Court, “it is increasingly more common that a submission [to the court] will have had the benefit of being run through generative AI.” This can make a submission better, but submissions to the court may also be based on AI hallucinations, where lawyers have relied on AI tools to do the research and the research yields an improper or unreal result. Licata continued, “where lawyers get tripped up is not providing the rationale and theory, but on the research.”

Where AI tools are best used in the legal context is when judges and lawyers use them as search tools to go through whatever information is out there and find a starting point. They’re wonderful for synthesizing the material that’s already available. Then the judges and lawyers write their motion papers and/or decisions, but only after verifying that the information they have is both useful and real. That makes sense to me. But relying on an AI tool to write briefs or opinions is very dangerous territory unless the result is vetted by the human signing the document.

Where the Buck Stops

As a final point, and one that appears in recent court decisions, senior attorneys and/or partners often assign the task of writing papers and affidavits to subordinates. When the partner, or named attorney, signs off on papers submitted to the court, it’s irrelevant who cited the cases and who did the research. That is, if an underling who is desperately trying to do a good job under time or other constraints uses an AI tool that yields a hallucination, it’s the attorney whose name is on the paperwork submitted to the court who is on the hook.

This is no different in a corporate or government context. A person signing his or her name to a document, whether it’s for a court or a municipal or charitable entity, ought to be very sure that the information has been verified, and not just that it’s true today, but that it remains appropriate as circumstances and processes evolve.

The takeaway is to use caution. Avoiding AI tools is self-defeating; using them with care, ongoing oversight, and validation is the safest way to go. Reliance on bad outputs, regardless of how those outputs are generated, is always very dangerous territory.

Originally published October 27, 2025 in No Jitter
