July 7, 2020
The COVID-19 pandemic brings with it increased focus on Artificial Intelligence (“AI”) as developers rush to create software, such as contact tracing software, that can help businesses reduce the risks for employees returning to work. Before businesses acquire AI technology, owners and human resources professionals must consider how to balance being proactive with protecting employee privacy. There are numerous provisions businesses should incorporate into their contracts with software developers, including the following:
When contracting for AI, the customer should be particularly focused on documenting the level of testing to be provided. Generally, the more robust the description of the testing, the better. At a minimum, this description should include: the number of rounds of testing, the process for testing, the minimum sample size for each round of testing, and who is involved in creating the test environment. In addition, the customer should ask the vendor to contractually commit to specific remedies if the testing does not result in adequate work product. The parties need to define exactly what constitutes acceptance, and whether ongoing testing is necessary or appropriate, particularly as the AI adapts and continues to learn over time.
Security is currently one of the fastest evolving areas of information technology law. When contracting for AI, it is important to have standards that can adapt to this ever-changing environment. One helpful approach is to incorporate a requirement that the vendor comply with industry security standards such as ISO/IEC 27001 and the OWASP Top 10 (for web applications). Businesses should also state any specific technical requirements related to security necessary to protect the customer’s IT environment, as well as the whereabouts and other data associated with its employees and the locations of its customers. Finally, requiring adequate cyber-insurance that meets the risk level of the environment is also prudent.
Customers should be wary that AI may transform data that was once anonymous into data that can be re-identified. In addition, a complex set of data privacy laws is in effect in the United States, and even more so globally. All vendors should contractually agree to comply with any such applicable laws. Customers should also consider placing limitations on how vendors may use data, particularly for purposes outside of providing the services to the contracting customer.
Most vendors require a cap on, or exclusion of, consequential damages, but in AI contracts this presents additional challenges because much of the risk to the customer lies with items commonly considered to be consequential damages. There are several ways to address this problem. One way is to redefine what constitutes direct damages. Another is to negotiate exceptions to the caps for specified items such as: breaches of privacy or data security, failure to comply with threshold requirements, and allegations of bias arising from the algorithm’s use of data.
This article is meant to share a few ideas for contracting for AI. As with any contract, you should consult a lawyer who understands the nuances of the subject matter particular to your situation before signing it.
For more information, please contact Diana J. P. McKenzie, partner & chair, Information Technology & Outsourcing Practice Group at HunterMaclean, firstname.lastname@example.org.