Watch for Legal Risks in Everyday AI Use
October 14, 2025
Martha Buyer in AI, Artificial Intelligence, Law & Policy, No Jitter, Privacy, Privacy/data security

When AI outputs are misused or misapplied, most notably without human oversight, the risks and consequences can be outsized.
In the current federal regulatory environment, where regulatory protections and structures are often deemed burdensome impediments to business, it’s not difficult to understand why federal AI regulation has gone virtually nowhere. In fairness, it also must be noted that getting federal legislation drafted and passed is a monumental task in and of itself, and in the current environment it’s even more difficult. However, the need for additional guardrails on how AI is used has become increasingly obvious, and where Congress has failed, or been unable, to act, the states have stepped up. In some cases, they’ve followed the roadmap laid out by international bodies like the European Commission, the executive body of the European Union; in others, they’ve followed the lead of other states. The result is a crazy quilt of differing regulations and regulatory approaches.

As AI and other technology tools become more widely deployed, the risks multiply for the organizations that use generative AI in their daily workflows. Use, misuse, and overreliance on AI tools reach into diverse areas of law, both civil and criminal, including cybersecurity, consumer protection, anti-discrimination, antitrust, and data privacy and security. Because of the breadth of these areas of law and the differences among the various state regulations, new developments in AI litigation must be monitored carefully and consistently by any technology professional who could later be held responsible for failing to safeguard against risk.

The Varied Types of AI Regulation Already on the Books

As states scramble to act, they have proposed and passed legislation that varies widely, according to Orrick, Herrington & Sutcliffe attorney Shannon Yavorsky. More than 160 AI-specific laws have been enacted on a variety of subtopics. These include:

  1. 40 rules targeting AI images and deepfakes
  2. 25 targeting government/campaign use
  3. 20 automated decision-making laws
  4. 9 AI transparency-related laws
  5. 2 “comprehensive” AI laws, courtesy of Colorado and Texas
  6. Additional laws covering AI calling and the Telephone Consumer Protection Act
  7. AI in critical infrastructure
  8. AI ownership/liability presumptions
  9. AI in regulated industries (healthcare/insurance)
  10. AI in social media

This diversity reflects the fact that qualms about misuse and overuse, let alone appropriate use, arise from a variety of concerns. In fact, as legal challenges go, there are four key areas of law that must be considered and deeply understood before such tools are deployed. Before an organization uses these tools for its own internal operations and processes, and before it exposes customers and others who may at some point be “intercepted” by those tools, careful evaluation must be undertaken to ensure compliance with relevant state (mostly) and occasionally federal law.

The Areas Any AI Law Should Regulate

There are four areas worthy of consideration. The most important of these is transparency. Next is discrimination, which is often a result of bias. The third is the use of AI tools where there is high risk to consumers, and the last is the responsibility owed to consumers and others who interact with those tools.

With respect to transparency, California and Maine have enacted laws that are specific to chatbot use.

California’s chatbot law requires companies with more than $500 million in revenue to not only assess the risks that their sophisticated technology could pose should it “break free of human control” or aid the development of bioweapons, but also disclose those assessments of risk to the public. Failure to comply could be accompanied by a $1 million fine per violation. Maine’s law requires that consumers be notified when AI tools are used to interface with customers who assume that they are dealing with a person when, in fact, the entity on the other end of the communication is an AI tool. CX managers, take note.

In Illinois, the Department of Financial and Professional Regulation (IDFPR), in conjunction with the state legislature and a consortium of mental health professionals, drove the creation of the Wellness and Oversight for Psychological Resources Act, which “prohibits anyone from using AI to provide mental health and therapeutic decision-making, while allowing the use of AI for administrative and supplementary support services for licensed behavioral health professionals,” according to the governor’s press release. Nevada has laws regarding bot use where mental health is concerned. New York has an AI companion law awaiting the governor’s signature but scheduled to become effective in early November 2025; it requires entities providing “AI companions” (a phrase defined in the statute) to implement safety measures to detect and address users’ expressions of suicidal ideation or self-harm, and to regularly disclose to users that they are not communicating with a human. California has its own transparency law, which is also in the process of final approval.

Colorado’s AI Act was one of the first such laws on the books, while Texas’ TRAIGA law, which will take effect in 2026, defines prohibited uses of AI and creates a framework for civil penalties for misuse. Utah has its own AI consumer protection law, which took effect on May 1, 2024, and defines penalties for inadequate disclosure of generative AI use. To varying extents, many of these laws are modeled in whole or in part on the EU AI Act (see my previous posts on these issues).

Overreliance on AI tools must be a major consideration every time AI tools or services are deployed. It’s incredibly easy to take the comfortable, familiar route and let the AI tools crank out results that you treat as the final product of your work. It’s the tortoise and hare story all over again. It’s also, to quote Julia Roberts’ character in “Pretty Woman,” a “big mistake. Big. Huge.”

The most recent example of overreliance on AI tools came to light during the first week of October 2025. At that time, Senator Chuck Grassley (R-IA), in his role as Chairman of the Senate Judiciary Committee, put two U.S. District Court judges on notice about their use (or use by their staffs) of what appear to be AI tools to produce decisions that contained factual errors or allegedly AI-generated outcomes that were not independently verified before being included in published opinions. In Senator Grassley’s letters to the two judges, he said, “No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor and factual accuracy. Indeed, Article III (U.S. District Court) judges should be held to a higher standard, given the binding force of their rulings on the rights and obligations of litigants before them.”

Inappropriate reliance on AI is not always easily detectable. But when decision-makers depend on AI outcomes, and when AI outputs are misused or misapplied, most notably without human oversight, such actions reflect incompetence at best and criminal conduct at worst. As parents often instruct children, even adult ones: simply because everyone’s doing it doesn’t make it right. That couldn’t be more true than it is in this context.

Originally published October 14, 2025 in No Jitter
