The use of AI in recruitment, hiring, promotion, and termination decisions has been widely criticized for automating and deepening existing racial, gender, and age biases. Experts have found that some AI systems that evaluate candidates’ facial expressions and language favor white, male, and nondisabled candidates. The scale of the problem is potentially massive, as most companies rely on AI at some point during the hiring process. Charlotte Burrows, chair of the US Equal Employment Opportunity Commission, cited estimates that “as many as 83 percent of employers and up to 99 percent of Fortune 500 companies now use some form of automated tool to screen or rank candidates for hire.”
This summer, New York City began enforcing one of the first US laws regulating the use of artificial intelligence in employment decisions. The legal implications of this new law could shape how AI policy and debate develop in other American states or even at the federal level.
New York’s law regulating employers’ use of automated employment decision tools recently entered its enforcement phase after several months of delay. The law requires employers to audit their HR technology systems for bias and publish the results. It is part of a nationwide push to regulate increasingly powerful automation and AI in the workplace: a handful of states and Washington, D.C., are considering their own legislation covering AI bias in hiring, and the U.S. Equal Employment Opportunity Commission has weighed in with guidance of its own. Experts see New York’s law as a telling sign that US regulators are catching up with emerging technologies.
In May, the EEOC issued new technical guidance stating that employers are generally responsible for the outcomes of the selection tools they use in employment decisions, even when those tools were trained and programmed by outside vendors.
The guidance, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” says employers must ensure that tools such as the following do not introduce “disparate impact” into the selection process:
- Resume scanners that qualify and rank applicants based on the presence of specific keywords
- Employee monitoring software that rates worker productivity based on keystrokes
- Virtual assistants or chatbots that eliminate job candidates according to their answers to pre-set questions
- Video-interviewing software that evaluates candidates based on facial expressions and speech patterns
- Testing software that provides “job fit” scores based on cognitive skills or perceived cultural fit
Disparate impact occurs when a neutral test or selection procedure disproportionately excludes people based on protected characteristics such as race, color, religion, sex, and national origin.
The EEOC notes that employment discrimination exists where a selection procedure has a disparate impact on a particular protected class, the employer cannot show that the procedure is job-related for the position in question and consistent with business necessity, and a less discriminatory alternative is available but goes unused.
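To make the arithmetic behind a disparate-impact check concrete, here is a minimal sketch in Python, assuming simple pass/fail screening counts per group. It compares each group’s selection rate to the most-selected group’s rate and flags ratios below the EEOC’s long-standing four-fifths (80 percent) rule of thumb, which the guidance discusses. The group labels, counts, and threshold here are illustrative assumptions, not a substitute for the validation analysis and legal review the guidance calls for.

```python
# Illustrative sketch of an adverse-impact check in the spirit of the
# EEOC's four-fifths rule of thumb. The data below is hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who were selected (e.g., advanced to interview)."""
    return selected / applicants

def impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's selection rate."""
    rates = {name: selection_rate(sel, apps) for name, (sel, apps) in groups.items()}
    best = max(rates.values())
    return {name: rate / best for name, rate in rates.items()}

# Hypothetical screening outcomes: (selected, total applicants) per group.
outcomes = {
    "Group A": (48, 100),
    "Group B": (30, 100),
}

for group, ratio in impact_ratios(outcomes).items():
    status = "potential adverse impact" if ratio < 0.8 else "within the four-fifths guideline"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

A ratio well below 0.8, as for Group B in this hypothetical, would not prove discrimination on its own, but it is the kind of signal that should prompt a closer look at whether the tool is job-related and whether a less discriminatory alternative exists.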
Unlike the New York City law, the EEOC’s guidance does not carry the force of enforceable regulation. Nonetheless, employers should assess the legal implications of using algorithmic decision-making tools in employment decisions, as Congress and US states are likely to draw on the guidance when advancing future legislation. In fact, the recently introduced No Robot Bosses Act of 2023 would require employers to disclose when and how they use automated decision systems to evaluate job applicants and employees. The bill seeks additional assurances that AI does not supplant human judgment in human resources decisions. Among other things, it:
- Prohibits employers from relying exclusively on an automated decision system in making employment-related decisions
- Requires pre-deployment and periodic testing and validation of automated decision systems for issues such as discrimination and biases
- Requires employers to train individuals or entities on the proper operation of automated decision systems
- Mandates employers to provide independent, human oversight of automated decision system outputs
- Prescribes timely disclosures on the use of automated decision systems, the data inputs to and outputs from these systems, and employee rights related to the decisions aided by these systems
- Establishes the Technology and Worker Protection Division at the Department of Labor to regulate the use of automated decision systems in the workplace
Organizations that use or plan to use AI to evaluate job candidates, promotion requests, pay raises, job performance, or other employment-related factors can confidently benchmark their practices against the New York City law.
Compliance
Employers can start by determining whether any of the software their HR departments use during the hiring or promotion process qualifies as an automated employment decision tool, meaning it is used to “substantially assist or replace discretionary decision-making” by humans to “score, classify, or recommend” applicants or employees.
Reporting
Next, employers should publicly disclose, and expressly inform anyone whose prospects may be affected, that the business uses automated employment decision tools as part of the process. Upon request, they should be able to supply candidates with detailed information about what data they collect and analyze through AI technology. Companies should also get into the habit of commissioning annual independent bias audits showing that their systems do not discriminate on the basis of race or sex, as the New York law mandates these audits and the publication of their results.
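As a rough illustration of the kind of calculation such an audit involves, the sketch below aggregates hypothetical screening records into per-category selection rates and impact ratios, the figures that published audit summaries center on. The column names and records are assumptions for demonstration; an actual audit must follow the city’s rules and be conducted by an independent auditor.

```python
import pandas as pd

# Hypothetical screening records: one row per candidate, with the category
# tracked by the audit (e.g., sex or race/ethnicity) and whether the tool
# advanced the candidate. Column names are illustrative.
records = pd.DataFrame({
    "category": ["Male", "Male", "Female", "Female", "Female", "Male", "Female", "Male"],
    "selected": [1, 1, 0, 1, 0, 1, 0, 0],
})

# Selection rate per category, then each category's rate relative to the
# highest-rated category (the impact ratio an audit summary reports).
summary = records.groupby("category")["selected"].agg(["count", "sum"])
summary.columns = ["applicants", "selected"]
summary["selection_rate"] = summary["selected"] / summary["applicants"]
summary["impact_ratio"] = summary["selection_rate"] / summary["selection_rate"].max()

print(summary)
```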
Enforcement
New York prescribes civil penalties, including a $375 fine for the first violation and fines of $500 to $1,500 for subsequent violations, with separate penalties for breaches of the notice and audit requirements. A lawyer specializing in AI law can help companies adopt these emerging technologies while meeting their disclosure, reporting, and compliance obligations.
Conclusion
Given the emerging legal implications of AI regulations, it is crucial for businesses, especially those in the technology sector, to seek legal advice before relying on AI in hiring decisions. Non-compliance with these new laws could not only result in penalties but also tarnish a company’s reputation, a risk that is particularly significant for companies working in emerging technologies.
To mitigate potential legal liabilities, employers must be judicious about the personal data they collect from their employees. Well-drafted employment agreements and robust company privacy policies could significantly aid in ensuring compliance.
While the law may not be perfect, proponents see it as a positive step toward regulating AI and mitigating the potential risks and harms associated with its use. It requires companies to develop a deeper understanding of the algorithms they use and to scrutinize whether these technologies might inadvertently discriminate against certain demographics, such as women or people of color.
There’s no denying that AI tools can introduce biases into the recruitment process, potentially excluding qualified candidates based on irrelevant factors. This unintended discrimination could lead to legal challenges and damage an organization’s reputation. As we continue to advance in AI and other technologies, it becomes increasingly important for employers to understand these legal implications and strive to establish fair and equitable hiring processes.
Public opinion on New York’s AI law has been mixed, even as tech leaders like Google’s Sundar Pichai and Microsoft’s Brad Smith advocate for greater oversight of AI algorithms at the national level. Despite the controversy, the law represents a rare regulatory success in US AI policy and sets the stage for more specific local regulations in the future.
Gamma Law is a San Francisco-based Web3 firm supporting select clients in complex and cutting-edge business sectors. We provide our clients with the legal counsel and representation they need to succeed in dynamic business environments, push the boundaries of innovation, and achieve their business objectives, both in the U.S. and internationally. Contact us today to discuss your business needs.