Are Businesses Liable for AI Discrimination?

Cameron G. Shilling
Director, Litigation Department & Chair of Cybersecurity and Privacy Group
Published: New Hampshire Business Review
September 19, 2025

Automation pervades all business operations, including human resources. Artificial intelligence (AI) and automated decision systems (ADS) can be valuable tools for HR professionals, and should be an integral aspect of the HR operations of today and the future. It should not come as a surprise, therefore, that businesses will be held responsible when discriminatory personnel decisions are made either by ADS or by a human relying on AI to help make those decisions.

AI and ADS offer immense benefits for HR professionals. Here are just a few examples.

  1. Review and summarization of job applicant materials for completeness and suitability for available jobs, employer culture and expectations, etc.
  2. Summarization and analysis of interview content and performance.
  3. Assessments of personality, characteristics, skills, aptitudes, etc.
  4. Summarization and analysis of performance based on the entirety of an employee’s work product, including emails, phone and video conference calls, recorded meetings, interactions with business systems and databases, speed and accuracy, etc.
  5. Recommendations for job placement, advancement, training opportunities, etc.
  6. Monitoring and alerting about employee conduct for disciplinary purposes.

The initial reaction of some HR professionals is that AI would not be useful to them, or that they do not trust AI to generate reliable results. The proof of AI’s utility will be in the pudding, as more and more HR operations use it and as HR professionals who refuse to do so lose touch with the industry. Moreover, studies show that results yielded by AI are not necessarily less reliable than human decision-making, since business managers and HR professionals make decisions that are influenced, consciously and unconsciously, by latent variables and inherent biases. Finally, existing HR systems and applications already incorporate AI and ADS, and that functionality is only becoming increasingly powerful. Businesses that choose not to utilize it will over time expose themselves to competitive disadvantages.

Just like humans who make personnel decisions, AI can generate decisions that are based on improper variables, or that appear in hindsight to be based on such factors. That occurs because AI models are trained using historical data, and that data embodies biases and trends that existed throughout history. Likewise, AI trained on HR data about the business’s historical and existing workforce, management structure, job descriptions, etc. will invariably incorporate the biases and trends inherent in that data too. AI also may unintentionally disadvantage certain individuals, such as if an ADS disqualifies a candidate because of availability and the candidate’s availability is limited by medical or family-care requirements, or if AI rates an employee’s performance in meetings and calls poorly and such performance was due in part to a disability.

Since AI like humans can generate discriminatory results, it is unsurprising that businesses that use AI will be liable for those results. New York City, Illinois, and California (effective October 1, 2025) have adopted new or amended existing laws to have that effect. However, do we really need a new law to tell us that? Existing laws already prohibit and punish discriminatory personnel decisions. AI and ADS do not make those decisions, no matter how autonomously they operate. Rather, employers use AI and ADS to analyze data and implement the results yielded by those technologies. If a human relies on technology to help make a personnel decision, the human is still making the decision. Similarly, if an employer permits an ADS result to be effective without human oversight, the employer has made the decision to do so.

Perhaps the most salient aspect of these new AI regulations is that they require businesses to maintain records that can be later scrutinized to determine whether personnel decisions are legitimate or discriminatory. That requirement is consistent with other existing and emerging laws concerning AI generally, which provide broader and more rigorous requirements for the use of AI for HR functions.

The European Union and Colorado adopted broad-based AI laws last year, and other states will almost certainly adopt similar laws this year and in years to come. Those regulations categorize the use of AI for employment decisions as high-risk. That does not mean that such use of AI is prohibited. Rather, to use AI for that purpose, employers must first conduct a risk assessment to identify the risks inherent in the use of AI and implement measures to mitigate those risks. Those measures include safeguards like thorough testing of AI before implementing it, ensuring the reliability of data used to train the model, limiting the use of AI to qualified personnel, and ensuring human control throughout the process, including AI configuration, model training, data input, prompt and process creation, and outcome review and auditing.

That risk assessment process will have the effect of (a) implementing measures that eliminate or mitigate the potential for discriminatory outcomes, (b) complying with new AI-employment discrimination regulations, and (c) providing employers with defenses in the event prospective or existing employees assert that the use of AI or ADS resulted in discriminatory outcomes.

The fallibility of AI for HR functions is not a reason to refrain from using it. After all, humans are fallible too, and technological advancement is indispensable for business development and competition. Rather, just like any other new technology, AI for HR must be implemented based on an appropriate risk assessment and mitigation process.