Employers are rapidly adopting artificial intelligence (AI) and automated decision system (ADS) tools for human resources functions. These technologies are valuable to HR professionals and should be an integral part of HR operations today and in the future. Consequently, it should be no surprise to lawyers who represent employers that our clients can be held liable when discriminatory personnel decisions are made, whether by an ADS or by a manager or HR professional relying on AI to help make those decisions.
AI and ADS offer immense benefits to employers. Here are just a few examples:
- Review and summarization of job applicant materials for completeness and suitability for available jobs, employer culture and expectations, etc.
- Summarization and analysis of interview content and performance.
- Assessments of personality, characteristics, skills, aptitudes, etc.
- Summarization and analysis of performance based on the entirety of an employee’s work product, including emails, phone and video conference calls, recorded meetings, interactions with business systems and databases, speed and accuracy, etc.
- Recommendations for job placement, advancement, training opportunities, etc.
- Monitoring and alerting about employee conduct for disciplinary purposes.
The initial reaction of some managers and HR professionals is that AI would not be useful to them, or that they do not trust AI to generate reliable results. However, as more and more HR operations use AI, managers and HR professionals who refuse to do so will lose touch with prevailing practices. Moreover, studies show that AI-generated results are not necessarily less reliable than human decision-making; after all, business managers and HR professionals make decisions that are influenced, consciously and unconsciously, by latent variables and inherent biases. Finally, existing HR systems and applications already incorporate AI and ADS, and that functionality is only becoming more powerful. Employers that opt not to use it will, over time, put themselves at a competitive disadvantage.
Just like the humans who make personnel decisions, AI can generate decisions that are based on improper variables, or that appear in hindsight to be based on such factors. That occurs because AI models are trained on historical data, and that data embodies the biases and trends of the past. Likewise, AI trained on HR data about the employer’s historical and existing workforce, management structure, job descriptions, and the like will invariably incorporate the trends and biases inherent in that data. AI may also unintentionally disadvantage certain individuals: for example, an ADS may disqualify a candidate because of limited availability when that availability stems from medical or family-care obligations, or AI may rate an employee’s performance in meetings and calls poorly when that performance was due in part to a disability.
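To make the auditing concept concrete, the following is a minimal Python sketch, using entirely hypothetical data and group labels, of the kind of impact-ratio calculation long used in disparate-impact analysis (the so-called four-fifths rule). It is illustrative only, not a compliance tool; any real bias audit should be designed with counsel and qualified analysts.

```python
# Illustrative sketch: selection rates and impact ratios across groups,
# in the spirit of the "four-fifths rule." All data here is hypothetical.

from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (group, selected) pairs; selected is True/False."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top = max(rates.values())
    # Impact ratio: each group's selection rate relative to the highest rate.
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes from an ADS resume screener
data = [("A", True)] * 40 + [("A", False)] * 60 \
     + [("B", True)] * 20 + [("B", False)] * 80

for group, ratio in impact_ratios(data).items():
    flag = "  <- below 0.8; review for adverse impact" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```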
Since AI, like humans, can generate discriminatory results, it is unsurprising that employers that use AI will be liable for those results. New York City, Illinois, and California (effective October 1, 2025) have adopted new laws, or amended existing ones, to that effect. But do we really need a new law to tell us that? Existing laws – including New Hampshire’s RSA Chapter 354-A and Title VII of the federal Civil Rights Act of 1964 – already prohibit and punish discrimination by employers, no matter how the discriminatory decision was made. Indeed, AI and ADS do not make such decisions on their own, no matter how autonomously they operate. Rather, employers use AI and ADS to analyze data, and employers implement the results those technologies yield. If a human relies on technology to help make a personnel decision, the human is still making the decision. Similarly, if an employer permits an ADS result to take effect without human oversight, the employer has made the decision to do so and is liable for the outcome.
Perhaps the most salient aspect of these new AI regulations is that they require employers to maintain records that can later be scrutinized to determine whether personnel decisions were legitimate or discriminatory. That requirement is consistent with other existing and emerging laws governing AI generally, which impose broader and more rigorous requirements on the use of AI for HR functions.
The European Union and Colorado adopted broad-based AI laws last year, and other states will almost certainly adopt similar laws this year and in years to come. Those regulations categorize the use of AI for employment decisions as high-risk. That does not mean that such use of AI is prohibited. Rather, to use AI for that purpose, employers must first conduct a risk assessment to identify the risks inherent in the use of AI and implement measures to mitigate those risks. Those measures include safeguards like thorough testing of AI before implementing it, ensuring the reliability of data used to train the model, limiting the use of AI to qualified personnel, and ensuring human control throughout the process, including AI configuration, model training, data input, prompt and process creation, and outcome review and auditing.
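As a sketch of what human control and record-keeping could look like in practice, the hypothetical Python example below routes every ADS recommendation through a human reviewer and appends each decision to an audit log for later scrutiny. The data structures, field names, and log format are assumptions for illustration, not any vendor’s actual API.

```python
# Illustrative sketch: mandatory human review of ADS recommendations,
# with every decision appended to an audit log. Names are hypothetical.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate_id: str
    action: str          # e.g., "advance" or "reject"
    model_version: str
    rationale: str       # model-provided explanation, if available

def review_and_log(rec: Recommendation, reviewer: str, approved: bool,
                   notes: str, log_path: str = "ads_decisions.jsonl") -> bool:
    """A human reviewer approves or overrides each ADS recommendation;
    the decision is written to an append-only log for later auditing."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recommendation": asdict(rec),
        "reviewer": reviewer,
        "approved": approved,
        "notes": notes,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return approved

# Hypothetical usage: the ADS suggests rejecting a candidate; a human overrides.
rec = Recommendation("cand-123", "reject", "screener-v2", "low availability score")
final = review_and_log(rec, reviewer="hr.manager", approved=False,
                       notes="Availability tied to documented medical schedule.")
```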
That risk assessment process will help employers do the following:
- Implement measures that eliminate or mitigate the risk that discriminatory decisions will be made and implemented.
- Comply with new AI-employment discrimination regulations.
- Establish defenses in the event prospective or existing employees assert that the use of AI or ADS resulted in discriminatory outcomes.
The fallibility of AI for HR functions is not a reason to refrain from using it. After all, humans are fallible too, and technological advancement is indispensable for business development and competition. Rather, like any other new technology, AI for HR must be implemented based on an appropriate risk assessment and mitigation process.