This year is shaping up to be an eventful one for the regulation of facial recognition technology. The European Union is reportedly considering a temporary ban on the technology, which, if adopted, would make the EU the largest government in the world to take such action. That development could also add impetus to efforts to regulate the technology in the United States, where legislation aimed at doing so has been introduced in at least 20 states this session and may be on the way in Congress as well.

News broke last month that the European Commission, the executive branch of the European Union, was considering imposing a moratorium on the use of facial recognition technology, or FRT, in public spaces by both private and public entities. According to a leaked preliminary draft of a Commission white paper expected to be officially released this month, the ban would potentially be in place “for a definite period (e.g. 3-5 years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed.”

The white paper, which addresses how the EU should approach artificial intelligence in general, also states that a ban on FRT “would be a far-reaching measure that might hamper the development and uptake of this technology,” and that, consequently, the Commission would prefer to focus instead on protections against “automatic processing” granted by the General Data Protection Regulation. The GDPR is the EU law that was adopted in 2016 and went into effect in 2018, imposing strict restrictions on the use of the personal data of EU residents.

Still, the news that the EU is considering such action could spur similar efforts in the United States, as the GDPR has done. So far U.S. regulation of FRT has largely been confined to local governments. Last year San Francisco and Oakland, California, banned the use of the technology by city departments, according to LexisNexis State Net’s local ordinance service, while at least a dozen localities have imposed restrictions on government surveillance more broadly in recent years.

All of those wider-ranging local government measures were part of a national effort launched by the American Civil Liberties Union and other advocacy groups in 2016 to ensure adequate oversight of municipal surveillance. According to the website for that campaign, called Community Control Over Police Surveillance (CCOPS), more than a dozen other cities are working on ordinances based on CCOPS’ model law.

At the state level, California enacted a law last year (AB 1215) imposing a three-year moratorium on the use of FRT in police body cameras. And that state is one of 20 that have introduced legislation dealing with FRT this session (see Bird’s Eye View), according to State Net’s legislative tracking database.

A few of the proposed bills would provide for the testing of FRT systems (New Jersey AB 5300 and AB 989) or require public disclosure when FRT is used (California AB 1281 and Vermont HB 595 and HB 899). But the majority of them would restrict or ban the use of FRT: 

  • By government agencies and private entities that have not obtained consent from those subject to the FRT, as is the case with Hawaii HB 2745/SB 3148.

One of the most active states on the issue is Washington, home to two of the biggest providers of FRT to government agencies, Amazon and Microsoft. Among the multiple FRT-related bills introduced in that state this session is SB 6281, the Washington Privacy Act, which among other things would require “meaningful human review” of decisions made using FRT systems.

The state’s Senate passed a different version of the Washington Privacy Act (SB 5376) nearly unanimously last year, but the bill died in the House. Some lawmakers said Microsoft and other tech companies had too much influence over that measure’s development, according to a report by the tech news website VentureBeat.

That report also indicated that one of the sponsors of this year’s version of the Washington Privacy Act is Sen. Joe Nguyen (D), who’s also a senior program manager for Microsoft. And VentureBeat cited an expert on FRT and AI policy, Jevan Hutson at the University of Washington School of Law’s Technology Law and Public Policy Clinic, who viewed Microsoft’s involvement with both versions of the Washington Privacy Act “as an effort to create an initial framework for facial recognition regulation that can serve as a model for other states and the federal government.”

There are numerous bills aimed at regulating the use of FRT currently pending in Congress, according to State Net’s database. And there appears to be bipartisan support for the issue. As U.S. Rep. Mark Meadows (R-North Carolina) said last month in regard to a series of House committee hearings about FRT, according to another VentureBeat report, “I think that this is where conservatives and progressives come together” and “we’ve had really good conversations about addressing this issue.” Whether conservatives and progressives in Congress can come together on any issue after a highly partisan presidential impeachment process, however, remains to be seen.

For much of the nearly two decades since the 9/11 terrorist attacks, law enforcement agencies have been able to acquire and deploy FRT and other surveillance technologies without having to notify impacted populations or even local governments. But in recent years the rollout of those technologies across the country with little public discussion or even awareness has coincided with growing public unease about the encroachment on privacy by Big Tech.

The surge in local, state and federal FRT legislation suggests a tipping point may have been reached. As the author of California AB 1215, Assemblyman Phil Ting (D), told reporters last year: “This is not just a California concern, this is a national concern, people have really...been much more sensitive to their privacy recently.”

Sensitivity about privacy among the American public and lawmakers isn’t likely to have been diminished by an article last month in the New York Times about a startup called Clearview AI that uses FRT to help law enforcement identify suspects from a database of billions of images scraped off social media and other websites.

According to the Times report, hundreds of law enforcement agencies have begun using the company’s app in the last year “without public scrutiny,” and the app’s computer code includes language for pairing it with wearable computer glasses, potentially allowing users to identify anyone they look at, including “activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.”

“It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” Al Gidari, Consulting Director of Privacy at the Stanford Center for Internet and Society, told the Times. “Absent a very strong federal privacy law, we’re all screwed.”

Absent federal action, more very strong local and state privacy legislation may be on the way.

For a free sample report showing the current status of all the bills mentioned in this story, click here.


Many States Taking Hard Look at Facial Recognition Technology

At least 20 states have introduced legislation in the 2019-20 session aimed at restricting or prohibiting the use of facial recognition technology, according to LexisNexis State Net’s legislative tracking system. One of those states, California, enacted a bill last year imposing a three-year moratorium on the use of the technology with police body cameras.

Source: LexisNexis State Net