New York City has taken one more step toward implementing a new law meant to curb bias in hiring tools that use artificial intelligence technology. Under the updated guidelines, companies will be expected to notify job candidates if hiring decisions are made using AI technology, and they will also be required to conduct regular “bias audits” of the technology they use. Last week, the New York City Department of Consumer and Worker Protection announced that it had finalized the rules and would begin enforcing the new law in July.

But confusion remains about what this means for HR departments that use AI hiring technology, which the city also calls “automated employment decision tools.” Ramsey Isler, the director of product at The Markup, an online technology magazine, told me he thinks the New York law “is a good start, but there's a tough road ahead for implementation.”

Patrick Hall, a principal scientist at the boutique law firm BNH.AI, told the New York City news site Gothamist that the rollout of these new regulations was creating lots of stress inside modern HR departments. “There's a lot of questions about whether they should turn off the (AI) tool or even use the automated decisioning tool,” he said.

The law is the first of its kind in the US and, so far, applies only to New York City-based job candidates and companies based in New York. But regulators across the country are also grappling with technologies like AI, which, as we’ve reported in the past, can be replete with bias and can (sometimes inadvertently) perpetuate discrimination. Representatives in states like California have introduced their own legislation to prevent algorithmic discrimination, and the White House has issued two executive orders and a Blueprint for an AI Bill of Rights, which highlights the importance of AI tools for innovation but cautions that their use “must not come at the price of civil rights or democratic values.” It also lays some of the groundwork for AI best practices, including ensuring that systems are safe, that users are protected from algorithmic discrimination, and that there are human alternatives to using the technology.

Here’s what we know so far about the new rules in New York and how they might be a harbinger of what’s to come.
