ChatGPT – the artificial intelligence application from OpenAI that provides detailed natural language answers in response to user questions – famously attracted 1 million users in its first five days, and more than 100 million users within two months.
The program is possibly the most well-known example of generative AI, an artificial intelligence technology that identifies patterns in large quantities of training data and then generates original content – text, images, music, video, etc. – by recreating those patterns in response to user input. Other examples include Google’s Bard (which also produces natural language), DALL-E 2 (images) and Synthesia (videos). As ChatGPT’s statistics suggest, adoption of generative AI is on the rise, and Bloomberg has projected it will become a $1.3 trillion market by 2032.
Organizations are likely to see their employees using the technology both personally and professionally, and should adopt AI use policies in response. These policies should address the applications’ known liabilities while encouraging employees to experiment with their identified strengths. Liabilities include privacy risks, limited intellectual property protection and questionable accuracy. Strengths include initial research and brainstorming assistance.
Intellectual property, copyright protection and AI
The current position of the U.S. Copyright Office is that copyright protection extends only to works created by a human being. Content created by generative AI is therefore not eligible for copyright protection. An AI-generated work may be eligible for protection if a human sufficiently alters it, but only for the portions authored by the human. Organizations thus need to state clearly when and how employees may use content produced by generative AI, so they avoid relying on text, images or other media they cannot copyright. Organizations should also be aware of the ongoing debate over whether certain developers of generative AI applications violated copyright law by including protected works in the training data used to build their platforms, which may eventually prove problematic for content those platforms produce.
Ensuring accuracy of AI-generated content
Acceptable resource for initial research
Generative AI applications are known to produce inaccurate or fabricated information, so employees should not treat their output as authoritative. However, if employees confirm the background information they receive from ChatGPT, Bard and similar applications, those tools can be safely used for first impressions of a research topic, much as many people use a Google search or Wikipedia. Typing a series of basic inquiries into a generative AI program can be a useful shortcut for learning about a new issue, so long as follow-up research identifies any incorrect information.
Effective tool for brainstorming
Prompts to generative AI platforms can be very useful for producing content that gets the human mind thinking about a subject in a new way. A paragraph outlining the weaknesses in a client proposal, or an image that inspires human-created graphics, is an excellent use of the technology’s capabilities and can help employees produce better projects and ideas.
Although the liabilities described above (and others) should give organizations pause as their employees explore generative AI, the technology offers capabilities organizations will want to use. A properly drafted policy addresses both the risks and the benefits, and helps your organization incorporate generative AI in a smart way.