Law firms are integrating artificial intelligence into their applications and operations more rapidly than any comparable technology. A few examples include email, notetaking, legal research, document creation, due diligence and e-discovery review, data summarization and analysis, finance and accounting, human resources, marketing, and IT. As the American Bar Association recognized in its recent Formal Opinion 512, law firms need to develop and execute plans for the structured implementation of AI. The following are 10 key features of such a plan.
- Governance. Form a team of individuals to manage and make decisions about AI implementation. The team should include lawyers experienced in each of the firm’s practice areas and in its management, finance, and IT responsibilities, as well as inside or outside counsel with cybersecurity, privacy, and AI expertise.
- Policy. Adopt an AI use policy to address existing and foreseeable regulatory and business issues. Amend that policy throughout the AI implementation process to reflect decisions made about those issues and the operational uses of AI.
- Existing and Potential Uses. Identify existing uses of AI and potential additional use cases. Examples include stand-alone generative AI (such as for creating content like text, audio, video, and photos), as well as AI integrated into other applications (such as legal research platforms, customer relationship management apps, HR systems, and IT ticketing).
- Control and Ownership. License AI to ensure ownership and confidentiality of data inputs and AI outputs, and to control data used to train AI. Enter into agreements with AI developers to secure rights, allocate obligations and liabilities, and ensure compliance with ethical duties.
- Training and User Group. Select groups of users to test AI. Train them about the AI use policy, how to use the AI, and the business goals for testing, prototyping, and AI use.
- Testing and Prototyping. Select non-client and non-production data and uses to test AI. Ensure human control of data input integrity as well as legitimacy and reliability of AI outputs. Once testing yields those outputs, identify limited client and operational use cases appropriate for prototyping, ensuring transparency with and consent from clients. Once prototyping yields legitimate and reliable results, deploy approved AI more broadly in production.
- Recordkeeping and Auditing. Audit AI use in production to verify, and maintain records of, continuing data input integrity and output legitimacy and reliability. Adjust the AI use policy and practices to address regulatory, operational, and other issues that may arise.
- Assessment of Restricted Uses. Conduct risk assessments for “restricted” uses of AI. That includes use of AI to process sensitive personal information (such as health and biometric information; data about children, race, religion, political affiliation, and other protected characteristics; and governmental identification and financial account numbers), and use of AI that poses a risk either to humans (such as HR functions and consumer profiling) or to systems or security (such as IT, infrastructure controls, and surveillance).
- Contracting and Transparency. Ensure that consumers and business customers are aware of the use of AI through a privacy policy, terms of use, and contracts. Engagement letters with clients should include client consent and reference the firm’s AI use policy.
- Management. Empower select personnel to manage ongoing and evolving uses of AI. Ensure that other AI risks are addressed, such as contracts with vendors that use AI for the business as well as cyber liability and professional liability insurance coverages for AI use.
AI is a powerful technology that has the capacity to create both tremendous opportunity and risk. Law firms need to implement this technology to secure competitive advantages, doing so based on a structured plan to manage and mitigate technological, regulatory, and business risk.