Getting Legal on AI - Part II
April 30, 2018
[Your Name Here] in Law & Policy, New Technologies, No Jitter, Privacy/data security

There are some who claim that reading legal boilerplate makes watching paint dry seem exciting. Some of those people might even be lawyers. But the issues associated with the ever-increasing presence of AI make consideration of legal issues and vulnerabilities more important than ever. In Part I of this series, we considered qualitative issues, as opposed to the quantitative ones that often arise when considering adoption of AI techniques. At the top of the list of important considerations, before any AI investment decision is made, are the contractual issues that must be given their due.

The definition of "artificial intelligence" is a bit of a moving target, but when defined in its broadest sense, the phrase can easily include not just data processing/machine learning capabilities, but also biometrics, "big data," and Internet of Things (IoT) devices. As electronic devices become smarter and more omnipresent -- both at the enterprise and at home -- practical and legal considerations (including vulnerabilities) will become even more important.

Consideration should be given to some, if not all, of these distinct segments of law: contract, allocation of risk, privacy, product liability, antitrust, international, intellectual property, and communications technology.  Of these, contractual considerations are probably the most important for all enterprise users.

I’ve made a career out of reading and writing what many may consider the driest of boilerplate contractual language, but AI raises a number of issues that may not be obvious, and which warrant careful consideration. The most notable of these is service expectations. It is critical that both parties have a clear understanding of what information the AI product is expected to yield, and how that information will be compiled. The assumption, which is probably true most of the time, is that the data and its ownership belong to the contracting enterprise. Vendors can claim that their AI process is proprietary ("secret sauce"), but until and unless the customer understands what happens in that mysterious black box, it’s hard to rely on the data that’s generated. This is an issue that comes up time and time again.

Second, how will the end user be protected when the inevitable refinement of the underlying AI technologies and processes occurs? It can be very helpful to include terms dealing with technology evolution, particularly when the agreement between the enterprise and the AI provider covers more than a single project.

Third, another essential contractual element should address the question of who bears the risk(s) associated with the AI-driven data being acquired. The number of risks depends entirely on the number of variables and the complexity of the operations being performed. Who will bear the responsibility if (or when) things go wrong, and the data that is relied upon creates a problem that results in a legal harm? With this in mind, possible terms of insurance and indemnification must also be carefully considered and evaluated. This is not "same old, same old" at all, and a successful agreement must include a fresh consideration of these otherwise rather sleepy issues.

Fourth, a clear termination strategy is essential, particularly given the mystery of the actual number-crunching and the enterprise’s reliance on the generated data, potentially to its detriment. As a general rule, auto-renewal provisions are detrimental to the enterprise, and in some states, like New York, they’re deemed contrary to public policy and not enforceable. But that doesn’t keep vendors from including them -- or from relying on them.

Fifth, contractual flexibility is essential in terms of managing unknowns. There are known challenges that accompany increasing reliance on data, whether raw or processed. But contracts too often include clauses that hold the customers’ collective feet to the fire without providing for the unknown and unanticipated challenges that may make the AI product, as it evolves, less beneficial than intended. This same contractual flexibility should also allow for system evolution as the technology makes more, and potentially deeper, analysis possible. Another outside factor is a changing regulatory climate. When the rules change, both products and the agreements that govern their use must change as well.

While there are other critical legal elements that should be considered, in the interest of space and time, I’m just going to highlight two more: privacy and export control. In the U.S., there are industry-specific rules (think HIPAA and FCRA, among others) that dictate very specific terms designed to ensure the privacy and security of individuals’ personal information. For citizens of the EU, regardless of where they are physically located, the stakes will be much, much higher when the GDPR (General Data Protection Regulation) becomes effective on May 25th of this year. See my article in No Jitter, Get Ready for GDPR.

With respect to controls on export, despite the fact that data and processing technologies and capabilities are not always something that can be seen or touched, the export of these very things remains an incredibly sensitive topic. Military applications, space and satellite applications, and drones often rely on AI capabilities to function. It is to be expected that in-house experts are well aware of the obligations imposed on them by the terms of ITAR (the International Traffic in Arms Regulations, rules promulgated by the U.S. State Department for the purpose of keeping defense-related technologies within appropriate U.S. government organizations and contractors) and EAR (the Export Administration Regulations, similar rules promulgated by the U.S. Department of Commerce). Enterprises with even potential overseas applications should be well-versed in these rules and not rely on vendors’ guidance.

Obviously, many of these topics warrant much further discussion and consideration. But the key takeaway remains that when securing AI-based goods and services, different contract terms should not only be considered, but insisted upon, in order to protect the acquirer.

Who says reading boilerplate can’t be fun?  “Fun” may not be the right word. How about essential?


Article originally appeared on Martha Buyer Telecommunications Law (https://www.marthabuyer.com/).