Financial scams are becoming increasingly sophisticated, thanks in large part to artificial intelligence. But the problems created by this new wave of technology are so numerous and complicated that addressing them will not be easy.
In July, U.S. Federal Trade Commission Chair Lina Khan warned that AI is turbocharging fraud, raising a litany of concerns with regulators, law enforcement and people in the financial industry.
“Artificial intelligence promises to have a profound impact on many aspects of our society, with vast implications for how people live, work, and communicate,” Khan said in her prepared comments for an FTC oversight hearing by the U.S. House Committee on the Judiciary. “The benefits of AI, though, are accompanied by serious risks; AI misuse can violate consumers’ privacy, automate discrimination and bias, and turbocharge imposter schemes and other types of scams. And the rapid development and deployment of AI risks further locking in the market dominance of large incumbent technology firms.”
To give you an idea of just how advanced scam tech has become, fraudsters need only download a short audio clip from someone’s social media or voicemail message to create a fake message in the victim’s voice. With that fake message, scammers can employ all kinds of devious schemes, like calling the victim’s parents and asking for money.
Even before AI’s recent proliferation, financial fraud was on the rise. Last year, U.S. consumers lost nearly $8.8 billion to fraud, up 44 percent from 2021 – and that’s despite record investments in detection and prevention.
And the situation has only gotten worse, as new technology has made it easier and cheaper for fraudsters to run their scams while COVID-19 lockdowns have discouraged face-to-face interactions, which even in today’s technologically advanced world remain one of the greatest guards against fraud.
It’s not hyperbole to say we may be on the verge of a fraud boom. Indeed, Bloomberg recently reported: “Financial crime experts at major banks, including Wells Fargo & Co. and Deutsche Bank AG, say the fraud boom on the horizon is one of the biggest threats facing their industry.”
For the moment, however, state legislators seem more concerned about the use of AI to create realistic fake images and videos, or “deepfakes,” that are sexually explicit in nature.
A number of states have enacted laws addressing deepfakes in recent years. The majority of them have dealt with deepfake pornography, although a few have also targeted the use of deepfakes to influence elections.
So far this year lawmakers in four states—California, Massachusetts, Louisiana and New Jersey—have introduced a total of six bills specifically addressing deepfakes by name, according to the LexisNexis® State Net® legislative tracking system. Most deal specifically with deepfake porn.
Illinois also enacted a bill (HB 2123) that uses the term “digital forgery” rather than “deepfake” and prohibits the nonconsensual dissemination of a private or intentionally digitally altered sexual image. And Washington State enacted a bill (SB 5152) dealing with the use of “synthetic media” in election campaigns. New York has also introduced legislation to require disclosure of the use of “synthetic media” in advertisements (AB 216 and SB 6859) and political campaigns (AB 7106 and SB 6638).
Some lawmakers, however, are looking at deepfakes more broadly or at AI fraud specifically. For example, a bill introduced in Pennsylvania (HB 1373) seeks to criminalize the “unauthorized dissemination of an artificially generated impersonation of an individual.” The bill would make it a first-degree misdemeanor to disseminate such deepfakes.
Sponsors of the bill, Democratic Reps. Robert E. Merski and Chris Pielli, wrote in a May 2023 memo to other House members: “A quick google search reveals an overabundance of concerning headlines about artificial intelligence (AI) and deepfakes, including, Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio; AI-generated deepfakes are moving faster than policy can; Deepfaking it: America's 2024 election collides with AI boom; It’s Getting Harder to Spot a Deep Fake Video.”
The memo went on to say: “As the use of AI becomes more widespread and AI itself continues to evolve and become more advanced, it is incumbent upon us to take proactive measures to confront the spread of disinformation and protect individuals from having artificially generated replicas of themselves disseminated without their consent or, worse yet, used for nefarious purposes.”
A bill introduced in New Jersey this past June (SB 3926), meanwhile, is aimed squarely at the issue of AI fraud. The measure would update the state’s identity theft law to include fraudulent impersonation using AI or deepfake technology.
“As artificial intelligence tools have grown increasingly more powerful and available to the general public, they’ve opened the door for scammers to commit shockingly disturbing new crimes involving identity theft,” said Sen. Doug Steinhardt (R), one of the bill’s sponsors. “With very little technical expertise, scammers can download pictures or video of a person from online sources and run it through AI tools to imitate their voice or generate realistic video of the person saying or doing things that never happened. It’s leading to new scams that put both the imitated victim and other parties, including relatives, at risk.”
So far this year lawmakers in eight states have introduced legislation dealing with “deepfakes”—realistic fake images, videos and other content facilitated by artificial intelligence—or, alternatively, “synthetic media” or “digital forgeries.” Most target deepfake porn, although several focus on the use of deepfakes in elections and advertising. At least one bill, in Pennsylvania (HB 1373), addresses digital fakes more broadly, while another in New Jersey (SB 3926) deals specifically with AI fraud.
While the majority of state legislation related to deepfake technology is still focused on pornography and elections, more measures dealing with AI fraud may be on the way. The growing awareness of the risk of AI fraud that spurred legislation like New Jersey’s SB 3926 has also raised the question of who should be responsible for losses associated with it.
This summer the U.K.’s top court said a couple who were duped into sending money abroad couldn’t hold their bank liable. After all, the couple in that case requested that the bank transfer the funds. How could the bank know the couple were being tricked?
But as Bloomberg reported, the British “government is preparing to require banks to reimburse fraud victims when the cash is transferred via Faster Payments, a system for sending money between UK banks.” Bloomberg also noted that “Politicians and consumer advocates in other countries are pushing for similar changes, arguing that it’s unreasonable to expect people to recognize these increasingly sophisticated scams.”
Some in the financial industry would like to see tech firms shoulder some of that burden. In June, the chief executives of nine of Britain’s biggest banks wrote a letter to Prime Minister Rishi Sunak demanding that tech companies not only do more to stop fraud on their platforms, but also contribute to refunds for victims.
It wouldn’t be too surprising to see efforts to shift some of the burden of AI scams off consumers and onto tech companies in this country as well.
—By SNCJ Correspondent BRIAN JOSEPH
Please visit our webpage to connect with a State Net® representative and learn how the State Net legislative and regulatory tracking solution can help you identify, track, analyze and report on relevant legislative and regulatory developments.