
Why You Probably Need an Artificial Intelligence (AI) Policy

Publications - Newsletter | August 2023

Do you know if your employees or vendors are using generative AI in their work? Chances are that some of them are (and yes, that can be a big deal). This article takes a quick look at why your company probably needs an AI policy to be proactive instead of reactive in the ever-evolving AI world.

Since its launch in November 2022, ChatGPT has become the fastest-growing app of all time, and generative AI continues to take the world by storm. Unfortunately, many businesses (including law firms) have not paused to consider how they should approach generative AI. Before we get ahead of ourselves, let’s make sure we are on the same page as to what we mean by generative AI. Generative AI is any type of artificial intelligence system that can generate text, images or other media in response to a prompt, based on the patterns and structure of its input training data. On the surface that doesn’t sound so bad, but let’s zero in on what we mean by “input training data,” specifically in the context of ChatGPT.

Much as a browser keeps your browsing history, ChatGPT by default keeps a history of the “conversations” you have with it and uses this data to train and improve the model, meaning that both the data you input and the outputs you receive are out of your control and subject to the whims of a complex AI algorithm. Fortunately, a ChatGPT feature introduced on April 25, 2023 now allows users to turn off chat history. Conversations that take place after chat history is turned off will not be used to train and improve ChatGPT and will be deleted after 30 days. While this is a step in the right direction, keep in mind that ChatGPT is not the only generative AI platform, and that by default ChatGPT will still keep a chat history.

What does this mean for your business? Here are a few quick examples:

First, unbeknownst to you, an employee may have used ChatGPT or a similar platform to generate that blog post requested by your client, raising a number of issues (including copyright) that could be the subject of another article.

Second, imagine you have an outdated filing system and your company signs a contract with a new online document management platform that will help you become more organized. One of the key selling points is that the document management company scans your documents and then uses OCR (optical character recognition) combined with AI to turn those piles of documents around the office into orderly online folders. What you failed to consider is that the document management platform utilizes a third-party AI provider whose policy provides that your documents will be used as “input training data” for that AI provider. Hopefully those documents did not contain confidential client information, personal health information covered by HIPAA, or data from customers in Europe subject to the General Data Protection Regulation (GDPR), because they are now forever part of the AI platform.

Third, imagine you hire an attorney to do some research for you surrounding some of your company’s trade secrets. The attorney prepares a draft memo but decides to use ChatGPT to rewrite part of it to make it more understandable (not knowing to turn off chat history). The attorney uploads the memo, and out comes a much more polished product for you. The first problem is that the lawyer has probably violated the important duty of confidentiality (Model Rule of Professional Conduct 1.6) and may have waived attorney-client privilege by disclosing confidential client information to ChatGPT. The second problem is that you may have lost trade secret protection, because the trade secrets were discussed in the memo, and the memo is now part of the ChatGPT training data and may be used to generate content for other users.

Perhaps you now see some of the risks surrounding generative AI. We recommend that every company consider adopting AI policies and procedures covering both employees and external vendors. An AI policy should, at a minimum, (i) provide for general training on the topic for all employees; (ii) address permitted uses, prohibited uses and uses requiring internal approval; (iii) define what company information may or may not be uploaded; (iv) specify which generative AI platforms are permitted and prohibited (possibly having your IT department block prohibited platforms); (v) adopt transparency protocols that help internal and external stakeholders identify content created by generative AI; and (vi) provide for continuous monitoring of new platforms and technology.

If you have questions or want assistance creating an AI policy, please contact a member of Kutak Rock’s Scottsdale Corporate and Securities Group.
