A legal battle over a bill passed this year in California prohibiting political “deepfakes” in the lead-up to an election has revealed a significantly broader potential area of future artificial intelligence regulation.
Well before the legislation was enacted, it touched off a public feud between California Gov. Gavin Newsom (D) and Elon Musk. The dispute began this summer when the loquacious billionaire reposted, on his social media platform X (formerly Twitter), an AI-generated video of Vice President Kamala Harris calling herself the “ultimate diversity hire.” In response, Newsom declared such content election disinformation and vowed to ban it.
A few months later, in September, the governor signed AB 2839, outlawing the dissemination of “materially deceptive audio or visual media of a candidate” in the 120 days before an election. Musk immediately mocked the move on X, writing, “The governor of California just made this parody video illegal in violation of the Constitution of the United States.”
Implementation of the law has since been hung up in court. But attorney Daniel J. Barsky, a partner with the international law firm Holland & Knight, wonders whether the row over the measure missed a larger point, one that portends a possible new area of state AI regulation.
AB 2839 ended up in court because the creator of the Harris deepfake, Chris Kohls, known as “Mr Reagan” on X, sued, saying the new law violated the First Amendment.
But while the public rhetoric around AB 2839 and other AI legislation has focused on end users like Mr Reagan, Barsky said the developers of the AI tools used to make deepfakes like Mr Reagan’s video could be in the crosshairs of both state legislators and the plaintiffs’ bar.
“Platforms have tons of money,” Barsky said. “So, they’re going to be targets.”
Indeed, online AI tools like Synthesia and Invideo AI do very little to question users about their intentions in creating AI-generated content. Barsky said this lack of user vetting by AI platforms could be not only a liability in court but also a vulnerability that state legislators look to address.
“I can see that being an area of legislation coming up,” he said.
Barsky also noted that a growing area of concern in AI is so-called “AI washing,” where companies exaggerate their AI capabilities to market themselves as more sophisticated than they are, or even to fraudulently raise funding.
In April, Gurbir Grewal, director of the U.S. Securities and Exchange Commission’s Division of Enforcement, warned: “If you are rushing to make claims about using AI in your investment processes to capitalize on growing investor interest, stop. Take a step back, and ask yourselves: do these representations accurately reflect what we are doing or are they simply aspirational? If it’s the latter, your actions may constitute the type of ‘AI-washing’ that violates the federal securities laws.”
A month earlier, the SEC announced it had settled the first-ever charges against investment advisers for misrepresenting their use of AI. The firms involved, Delphia (USA) Inc. and Global Predictions Inc., agreed to pay $400,000 in total civil penalties.
“We find that Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not,” SEC Chair Gary Gensler said in a press release. “We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”
AI is expected to remain a major issue for state lawmakers next year, but Barsky said the proposed legislation could be narrower and more tempered as the hype surrounding the technology dies down, reflecting a growing understanding of how it is actually being used.
“A lot of the froth is coming off the AI market,” he said. “I think that’s probably a good thing.”
—By SNCJ Correspondent BRIAN JOSEPH
This year, state lawmakers across the country considered 679 measures referring to artificial intelligence, according to the LexisNexis® State Net® legislative tracking system. Of those bills, 265, introduced in 36 states, dealt substantively with the technology, and 22 of those states enacted such measures.