Know the Law: What are the Risks of Using AI with Business Data?

Published: April 18, 2026

Q: Can my business safely use AI tools like ChatGPT, Copilot, or Claude with customer or employee information?

A: Not without guardrails. These tools can boost efficiency, but using them with business data – especially personal or confidential information – creates significant legal risk and requires appropriate safeguards.

Once information is entered into an AI tool, data disclosure risk is immediate. If employees input customer, employee, or proprietary data into public or improperly configured AI tools, that data may be retained, used to train the AI model, or accessed outside your control. Doing so can violate New Hampshire’s privacy law and can expose businesses to enforcement action under Massachusetts consumer protection laws. The risk is heightened for sensitive data – such as health, financial, or biometric information – to which privacy laws afford stronger protections.

Businesses should be mindful not only of their own AI use but also of AI use by service providers and vendors, which can introduce additional downstream risk if not properly governed.

Accuracy is another concern. AI outputs can be incorrect, incomplete, or misleading – a phenomenon often referred to as “hallucination.” Businesses should not assume that outputs are reliable simply because they are generated by AI. These systems are trained on vast datasets that are often opaque (“black box”) and may include implicit biases or prejudices present in underlying data.

As a result, AI outputs may reflect or amplify bias in their training data, creating legal exposure in areas like employment decisions or customer interactions.

To reduce risk, businesses should take the following steps:

  1. Establish an AI governance team with representatives from management, operations, IT, and legal counsel experienced in cybersecurity, privacy, and AI. This team should oversee AI implementation, manage risk, train employees, and ensure compliance as both technology and regulations evolve.
  2. Adopt a clear AI use policy specifying which AI tools employees may use and what data they may and may not enter into those tools, and require training on that policy.
  3. Use enterprise AI tools, which are designed to provide greater control over data handling, and vet your vendors. Negotiate agreements addressing data confidentiality, retention limits, model training restrictions, and cybersecurity controls to prevent unauthorized access to or exposure of inputs and outputs.
  4. Maintain human control throughout the AI process – verifying the integrity of data inputs, carefully crafting prompts, reviewing outputs before relying on them, and auditing outcomes to identify errors or bias.
  5. Be transparent about AI use. Update your privacy policy to disclose AI-related data processing, and address AI in customer and vendor contracts, including consent, performance standards, and liability allocation.

Businesses do not need to avoid AI, but they should implement it with structure and oversight. A governance team, sound policies, and human accountability can significantly reduce regulatory, contractual, and reputational risk.


Know the Law is a bi-weekly column sponsored by McLane Middleton. Questions and ideas for future columns should be emailed to knowthelaw@mclane.com. Know the Law provides general legal information, not legal advice. We recommend that you consult a lawyer for guidance specific to your particular situation.