Artificial intelligence is having a moment.
As we recently reported, ChatGPT has been grabbing headlines for a while now for its astonishingly human-like writing ability. And just last month, Siqi Chen, chief executive of a San Francisco startup called Runway, predicted that AI would have a greater impact on society than the internet or even electricity. So did Bill Gates.
Yet, for all the talk about the power of AI (and its ethical implications), the burgeoning technology—once relegated to the pages of science fiction—remains remarkably under-regulated (if not unregulated) in the United States.
This is no small matter as businesses and even governments are increasingly turning to AI to make critical decisions—decisions that could severely hurt people due to intended or unintended biases baked right into the technology.
Take, for example, a study last year by the Society for Human Resource Management, which found that almost one in four organizations use automation and/or AI in their hiring process, tools that could, in theory, improperly or unfairly sift out applicants, leading to discrimination.
As AI continues to infiltrate our culture in numerous ways, it has the undeniable potential to infringe on the rights of individuals and groups, particularly marginalized groups, which makes the issue of AI governance all the more pressing for our leaders and policymakers.
And for the time being at least, AI governance and AI ethics appear likely to be regulated piecemeal at the state level, which will make compliance all the trickier for developers of this revolutionary technology.
President Donald Trump’s administration issued two executive orders on AI governance, but they are widely perceived as not having accomplished much, at least not yet.
In October 2022, President Joe Biden’s administration released a Blueprint for an AI Bill of Rights, a document intended to guide the use, design and deployment of automated systems. The blueprint lists ethical principles for the use of AI but is nonbinding, a recurring theme in AI governance.
BABL AI, an Iowa City, Iowa-based AI consultancy, recently released a report examining the state of AI governance in both the United States and Europe. BABL’s first-of-its-kind research found that “significantly less than half of all organizations that use or develop AI have any formal or substantial governance structures for AI.”
BABL’s researchers found AI governance isn’t lacking because organizations are ignorant of the potential harms or risks related to AI. Rather, BABL found that AI governance is suffering for a litany of reasons, including the sheer newness of the technology and uncertainty about which safeguards actually mitigate its risks.
“The newness is certainly a big issue,” said Shea Brown, BABL’s CEO and founder. “I think we don’t know what will be effective at mitigating risks.”
As a form of intelligence, an AI decision-making product has, in theory, as many inherent risks as a human actor. That includes reputational risks to a business as well as behavior that could infringe on an individual’s fundamental rights. The list is as potentially limitless as human imagination, which is daunting, Brown said.
“We don’t have a good handle on what all the risks are going to be,” he said.
States have stepped into this void to take the lead.
As of mid-April, 144 measures containing the phrase “artificial intelligence” had been introduced in 33 states since the start of 2023, according to the LexisNexis® State Net® legislative tracking database. About 30 of the bills appear to deal substantially with AI governance or AI ethics issues—that is, with the automated decision-making capabilities of AI that pose some of the thorniest problems for the technology.
“These are difficult issues to regulate,” said Hayley Tsukayama, senior legislative activist at the Electronic Frontier Foundation, a nonprofit dedicated to defending digital privacy. “AI covers a broad spectrum of things,” she added.
Unsurprisingly, one of the leaders in this area of legislation is California, home of Silicon Valley. The Legislature there is considering four bills, all introduced by Democrats, that deal with AI governance.
These Golden State bills offer just a sampling of the kind of strategies lawmakers are attempting to employ to regulate AI decision making.
But with a diverse array of regulatory schemes being considered—and the potential for states to adopt differing ones—businesses and other users of AI may face challenges when it comes to complying with these burgeoning laws.
Attorney Daniel J. Barsky, a partner with the international law firm Holland & Knight, said compliance with AI governance requirements is going to be similar to compliance around data privacy. Barring the establishment of a single federal standard, businesses and other AI users will have to constantly monitor a regulatory landscape that will be a patchwork of laws.
Barsky said the biggest compliance issues to look out for will likely be infringement of intellectual property rights; AI bias, which could lead to discriminatory decisions that run afoul of existing laws; and simple fidelity to the truth: Do AI systems produce accurate answers? Barsky noted that while AI platforms can produce some impressive results, they can also come up with glaringly incorrect responses. That alone poses risks to businesses.
Further complicating matters from a compliance perspective, Barsky said, is that in order for businesses to be up to date on the regulatory environment they will need to monitor not only state legislative proposals, but also guidance issued by regulators and decisions by the courts—decisions that might apply in some jurisdictions but not others until the courts reach some consensus on these issues.
“Businesses really have to consider what they’re doing, where they’re operating,” Barsky said.
—By SNCJ Correspondent BRIAN JOSEPH
Lawmakers in at least 33 states have considered bills or resolutions relating to AI in 2023, five more states than had done so a month ago, according to the State Net® legislative tracking system. Such measures have been enacted in nine states, eight more than last month.