U.S. civil rights enforcers warn employers against biased AI

The federal government said Thursday that artificial intelligence technology used to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

Via AP news wire
Thursday 12 May 2022 18:18 BST

The federal government said Thursday that artificial intelligence technology used to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

The U.S. Justice Department and the Equal Employment Opportunity Commission jointly issued guidance to employers to take care before using popular algorithmic tools meant to streamline the work of evaluating employees and job prospects — but which could also potentially violate the Americans with Disabilities Act.

“We are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” Assistant Attorney General Kristen Clarke of the department’s Civil Rights Division told reporters Thursday. “The use of AI is compounding the longstanding discrimination that jobseekers with disabilities face.”

Among the examples given of popular work-related AI tools were resume scanners, employee monitoring software that ranks workers based on keystrokes, and video interviewing software that measures a person’s speech patterns or facial expressions. Such technology could potentially screen out people with speech impediments or a range of other disabilities.

The move reflects a broader push by President Joe Biden's administration to foster positive advancements in AI technology while reining in opaque and potentially harmful AI tools that are being used to make important decisions about people's livelihoods.

“We totally recognize that there’s enormous potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing laws against workplace discrimination. “But we cannot let these tools become a high-tech path to discrimination.”

A scholar who has researched bias in AI hiring tools said holding employers accountable for the tools they use is a “great first step,” but added that more work is needed to rein in the vendors that make these tools. Doing so would likely be a job for another agency, such as the Federal Trade Commission, said Ifeoma Ajunwa, a University of North Carolina law professor and founding director of the AI Decision-Making Research Program.

“There is now a recognition of how these tools, which are usually deployed as an anti-bias intervention, might actually result in more bias – while also obfuscating it,” Ajunwa said.
