It’s been several years since the business community collectively stopped thinking of artificial intelligence (AI) as science fiction and began thinking about it as a business opportunity. Many organizations have begun to rely on it in their operations (preparing documents and communications, summarizing long reports, etc.) and in their customer interfaces (chatbots, videoconference summaries, etc.). These functions can make employees more productive and customers more satisfied. However, these opportunities also carry risk: data privacy breaches, algorithmic bias, intellectual property concerns, regulatory uncertainty, etc. For companies in New Hampshire and beyond, the question is not whether to govern AI, but how. Robust governance is not a luxury. It is a legal and strategic necessity.
Why AI Governance Matters
AI systems perform functions that can affect employees, customers, and stakeholders. Without clear guardrails, organizations risk violating privacy laws, breaching contractual obligations, or suffering reputational damage. Regulators are also taking action: the European Union’s AI Act imposes strict compliance requirements, and U.S. agencies are signaling similar expectations, particularly around video and voice recordings of individuals. Although AI-specific laws are less developed in the United States than in some other jurisdictions, federal and state governments apply a number of generally applicable business laws to AI applications and functions, such as consumer protection statutes (including RSA 358-A) and the Federal Trade Commission Act, which prohibit unfair and deceptive trade practices. Even if your company isn’t directly regulated today, courts and regulators, as well as your customers and partner organizations, will expect reasonable oversight. That begins with documenting what your organization does.
AI Governance Documents
A mature AI governance program rests on a foundation of well-drafted documents:
- AI Acceptable Use Policy: This document describes how employees at your organization may use AI applications. It provides guardrails against uses that could harm the organization or its employees, customers, and stakeholders, and it instructs employees on how they can and should use AI in their jobs, ideally increasing their AI literacy and encouraging them to bring AI’s benefits into your organization.
- Ethical AI Guidelines: These guidelines are intended to be a high-level summary of key principles governing your adoption of AI, including fairness, transparency, and accountability. As the name suggests, this document should also provide some guidance on translating those values into actionable practices (e.g., requiring explainability for high-impact AI decisions, mandating human oversight for critical functions, etc.).
- External AI Use Statement: Transparency builds trust. This public-facing statement explains how your company uses AI, what safeguards are in place, and how customer data is protected. A concise disclosure can mitigate litigation risk and demonstrate good faith. Please note, however, that whatever information is included in this document must accurately reflect your organization’s actual practices. Otherwise, your organization might be found to have engaged in “unfair or deceptive trade practices.”
- Privacy Policy: If you think of AI as the engine, data is the fuel, and much of that fuel is personal data. If your organization offers AI functions to customers and users, the AI is likely collecting personal data from them. Make sure your public-facing privacy policy accurately reflects this, as required by privacy laws, including the New Hampshire Privacy Act, RSA 507-H.
- AI Risk Management Framework: This document formalizes how your company identifies, assesses, and mitigates AI-related risks, such as algorithmic bias, cybersecurity vulnerabilities, and model drift. It should include monitoring procedures and escalation protocols for high-risk incidents.
- Incident Response Plan: When something goes wrong (such as discriminatory outcomes or data leaks), you need a documented response plan. You may already have an incident response plan governing what your organization does when there is a data breach or other cybersecurity incident; this can either be part of that document or a related document.
- Training and Awareness Materials: Governance fails if employees don’t understand it. You should (a) develop training modules and FAQs, and (b) schedule regular training sessions to ensure staff know the rules. Courts may also consider evidence of training when assessing negligence claims.
Not every business will adopt all of these documents. As I noted above, the list reflects a mature AI governance program. If your organization is only beginning to incorporate AI into its operations, you may want only to update your privacy policy and put in place an AI acceptable use policy. Your leadership team or AI governance group should consult with counsel to discuss which documents are appropriate for you now.
Final Thoughts
Companies with clear governance documents deploy AI faster and with greater confidence because they know guardrails are in place. Governance can also reduce insurance premiums, strengthen negotiating positions with partners, and enhance brand reputation. In short, governance documents are not just legal niceties; they are a competitive advantage.