Navigating AI’s Benefits and Risks in the Employment Context

Vineesha S. Sow
Associate, Litigation Department
Published: Boston Business Journal
March 1, 2024

Artificial intelligence (AI) is the ability of machines to mimic human intelligence by perceiving, reasoning, learning, and making decisions through the use of code, algorithms, and data. Machine learning, deep learning, and generative AI (which can generate new content autonomously) are rapidly developing subsets of AI with seemingly endless applications across industries — from healthcare to logistics to cybersecurity — and across functional areas, including Human Resources (HR).

HR teams are using AI in the employment context to streamline candidate and workplace processes, analyze large volumes of data, and generate insights for more informed strategic decisions. Examples include:

  • Automating resume screening and candidate selection using algorithms to scan resumes, job descriptions, and candidate responses and make recommendations.
  • Monitoring real-time employee performance and behavior, such as keylogging, attention tracking, web browsing, and app utilization.
  • Generating predictive analytics to identify patterns and correlations around individual and team performance and engagement.
  • Utilizing 24/7 virtual assistants and chatbots to quickly respond to employee questions and provide resources to workers struggling with stress, anxiety, or other mental health challenges.
  • Automating offboarding steps to protect against loss of intellectual property (IP), remove access from systems and networks, collect employee feedback, and manage knowledge transfer.

The potential of AI to revolutionize workplaces is undeniable, offering benefits for both organizations and employees. However, employers face two key legal considerations in utilizing AI technology: the risk of bias leading to discrimination and violations of employee privacy.

Risks of Bias Leading to Discrimination

The potential for AI to generate biased employment decisions is a major concern for legislators, regulators, and civil rights groups. AI may appear unbiased because the human decision-maker is out of the equation, but that is not the case. Human developers select the data used to train AI and write the code and algorithms that direct AI to perform. If the training data contains more of one group than others, that imbalanced data can create a disparate impact on the underrepresented groups. For example, if an algorithm favors resumes containing keywords associated with one gender over another, the result can be a biased shortlisting of candidates.
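One common way regulators quantify disparate impact of this kind is the EEOC's "four-fifths" rule of thumb: compare each group's selection rate to the highest group's rate, and treat a ratio below 0.8 as a potential red flag. A minimal sketch of that calculation (the group names and counts below are hypothetical audit data, not from any real tool):

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool selected."""
    return selected / applicants

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.
    Under the EEOC four-fifths rule of thumb, a ratio below 0.8 may
    indicate disparate impact and warrants closer review."""
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical results from auditing an AI resume-screening tool.
rates = {
    "group_a": selection_rate(48, 80),  # 0.60 selection rate
    "group_b": selection_rate(12, 40),  # 0.30 selection rate
}
ratios = adverse_impact_ratios(rates)
# group_b: 0.30 / 0.60 = 0.50, which is below 0.8
flagged = [g for g, r in ratios.items() if r < 0.8]
```

The four-fifths rule is only a screening heuristic; a ratio below 0.8 does not by itself establish discrimination, and statistical significance testing is typically part of a full audit.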

The Equal Employment Opportunity Commission (EEOC) has published guidance about how employers can monitor AI and algorithmic decision-making tools to ensure unbiased treatment of candidates and employees. The EEOC also makes it clear that employers are generally liable for any disparate impact discrimination resulting from AI decision-making tools designed or administered by third-party vendors. Employers should also be aware of regulatory attention to AI in hiring practices. For example, New York City regulations obligate employers to conduct audits of AI applications used in hiring decisions and to notify job candidates about the use of AI in the hiring process. Illinois and Maryland require an applicant’s consent before using certain AI tools. Other jurisdictions are considering similar statutes.

Risks to Employee Privacy 

Employee privacy rights are another critical consideration when employers decide to use AI in the workplace. Employers are increasingly using AI for performance monitoring, recruitment and hiring, and security and surveillance. The use of AI for these purposes may amount to excessive employee monitoring and significant collection, processing, and storage of sensitive employee information, including biometric data such as facial recognition scans, voiceprints, retina scans, and fingerprints.

Failing to implement appropriate data minimization practices and security measures to protect employee data from unauthorized access, breaches, or misuse can lead to organizational liability. If a data breach occurs, the consequences for both the organization and its employees can be severe. Stolen employee data can be used for identity theft, financial fraud, or other malicious purposes, causing significant financial losses, reputational damage, and emotional distress for affected individuals.

Although state privacy laws do not specifically reference the workplace, Massachusetts courts have applied them to workplace issues raised by employees. Under the Massachusetts Right of Privacy Act, courts use a balancing test weighing the employer’s legitimate business interest against the employee’s reasonable expectation of privacy. Failure to obtain consent and overbroad data collection procedures both create greater risk of employer liability.

Best Practices for Organizations

Courts and government agencies are sending a strong message: employers must take proactive measures to safeguard candidates and employees from discrimination and privacy violations.

Actions that employers should take to protect their organizations and employees:

  • Screen third-party vendors carefully: ask how AI testing is done on their software and request to review the results. Ask how bias is measured and mitigated and how the AI model is maintained and reconfigured.
  • Recognize the problems and risks and focus on transparency in decision-making and human review.
  • Evaluate data regularly because AI models rely on historical datasets, but the world changes constantly.
  • Monitor the status of regulations and government agency guidance governing the incorporation of AI into your HR practices.
  • Audit your AI software with real-time testing to identify bias and apply solutions.
  • Collect only the data necessary and reasonable to satisfy the business interests of the employer and avoid unnecessary surveillance.
  • Employ appropriate security measures to protect employee data and prevent unauthorized access or breaches.
  • Inform employees about AI use, data collection practices, and employee privacy rights.
  • Communicate openly with employees and applicants about AI use and decision-making, answer their questions, and be receptive to feedback.

While AI offers undeniable potential to revolutionize workplaces, navigating its benefits and risks requires a nuanced approach. Employers must be mindful of the legal and ethical considerations surrounding AI use, particularly regarding potential bias and employee privacy concerns. By implementing best practices like thorough vendor selection, transparent human decision-making, careful testing of AI models, data minimization, robust security measures, and open communication with employees, organizations can harness the power of AI responsibly. Learn more about Artificial Intelligence at The National Institute of Standards and Technology (NIST).