The alteration of video and other digital content is nothing new. But the application of artificial intelligence to that pursuit has taken it to a whole new level - producing fabricated media known as “deepfakes” - and begun to draw the attention of state lawmakers.

Photographs, moving pictures and audio recordings have been subject to manipulation virtually since they came into existence. The fakes got better with the coming of the digital age. They got better still in the 1990s, when academic researchers began applying machine learning and deep learning techniques to digital content - the latter technique being where the “deep” in deepfake comes from.

Basically, the researchers developed networks capable of learning how to make fake videos by studying real ones. Now, a few decades later, tools are readily available to create reasonably convincing videos of people saying and doing things they’ve never actually said or done.
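The core idea behind many of those tools is adversarial training: one network learns to produce fakes while a second network learns to spot them, and each improves by competing with the other. The toy sketch below - a hypothetical illustration, not code from any actual deepfake system - shrinks that contest down to single numbers standing in for video frames, with a one-layer “generator” trying to fool a logistic “discriminator”:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def sample_real(n):
    # "Real" data: numbers drawn from N(4, 1), standing in for real frames.
    return [random.gauss(4.0, 1.0) for _ in range(n)]

# Generator G(z) = a*z + b: a one-parameter stand-in for a deep generator.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c): a logistic "fake detector".
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for _ in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = sample_real(batch)
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in zs]
    d_real = [sigmoid(w * x + c) for x in real]
    d_fake = [sigmoid(w * x + c) for x in fakes]
    grad_w = (-sum((1 - d) * x for d, x in zip(d_real, real))
              + sum(d * x for d, x in zip(d_fake, fakes))) / batch
    grad_c = (-sum(1 - d for d in d_real) + sum(d_fake)) / batch
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: adjust (a, b) so the discriminator calls fakes real.
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in zs]
    d_fake = [sigmoid(w * x + c) for x in fakes]
    grad_a = -sum((1 - d) * w * z for d, z in zip(d_fake, zs)) / batch
    grad_b = -sum((1 - d) * w for d in d_fake) / batch
    a -= lr * grad_a
    b -= lr * grad_b

samples = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
print(sum(samples) / len(samples))  # should land near the "real" mean of 4
```

The generator never sees the real data directly; it only learns from whether the discriminator was fooled. Real deepfake systems apply the same feedback loop with deep convolutional networks over images rather than single numbers.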

Practical applications of deepfake technology include the reanimation of historical figures for educational purposes; video dubbing of foreign films, which avoids the mismatch between actors’ mouths and dubbed audio; and online clothes shopping that lets a consumer virtually try on outfits before buying them - as well as simple entertainment.

But the technology has also been used in ways that are decidedly less innocuous. In fact, one of its first real-world applications was the creation of synthetic pornography, in which the face of a celebrity - or, in the case of revenge porn, an ex-partner - is swapped onto a porn performer’s body.

That use case quickly became the most prevalent one. As of September 2019, 96 percent of all deepfake videos online were pornographic, according to a report from startup Deeptrace.

From Porn to Politics

Deepfakes have also entered the political sphere, with some alarming results. Last year a Belgian political group posted a deepfake video on Facebook of Prime Minister Sophie Wilmès appearing to link COVID-19 to environmental damage and call for strong action on climate change. The video reportedly drew 80,000 views within 24 hours, with at least some of those viewers indicating they thought it was authentic.

Even the mere existence of deepfake technology has caused political instability. In 2018 the president of the small African country of Gabon, Ali Bongo, who hadn’t been seen in public for months, gave a video address in an effort to quell rumors that he was either sick or dead. His political opponents dismissed the video as a deepfake, helping touch off the country’s first attempted coup in over 50 years.

U.S. Sen. Marco Rubio (R-Florida) has equated deepfakes with military weaponry.

“In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles,” he said in a 2018 speech, according to CSO. “Today...all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.”

Others don’t see deepfakes as quite that much of an existential threat.

“As dangerous as nuclear bombs? I don’t think so,” Tim Hwang, director of the Ethics and Governance of AI Initiative at the Berkman Klein Center and MIT Media Lab, told CSO. “I think that certainly the demonstrations that we’ve seen are disturbing. I think they’re concerning and they raise a lot of questions, but I’m skeptical they change the game in a way that a lot of people are suggesting.”

The Speed of Deception

One thing about deepfakes that doesn’t seem open to debate is that they’ve been getting better fast.

“In January 2019, deep fakes were buggy and flickery,” Hany Farid, a professor at U.C. Berkeley who specializes in digital image analysis, told the Financial Times. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”

Deepfakes have also been proliferating. Deeptrace reported that the number of deepfake videos online nearly doubled in the first nine months of 2019, from 7,964 to 14,678, according to Forbes.

There are now websites like MyHeritage that let even those with little technical knowledge access deepfake technology to turn still photos of departed relatives into short videos. There are also deepfake smartphone apps that allow users to convert selfies into lip-sync music videos (Wombo) or map them onto clips from blockbuster movies (Reface). Another smartphone app called Avatarify lets users control the face of anyone they have a photo of - a celebrity, a public figure, a friend, a rival, an ex, their boss - like a puppet, which troubles some industry observers.

“It’s all very cute when we do this with grandpa’s pictures,” Anjana Susarla, a professor of responsible AI at Michigan State University, told the Washington Post. “But you can take anyone’s picture from social media and make manipulated images of them. That’s what’s concerning.”

It’s concerning to some state lawmakers too. In the last couple of years a handful of states have enacted laws dealing with deepfakes. As Matthew Feeney, director of the Cato Institute’s Project on Emerging Technologies, has pointed out, those enactments have been narrowly focused on specific uses of deepfake technology.

States Take Action

In March 2019 Virginia enacted legislation (HB 2678) amending its existing criminal code concerning revenge porn to include the dissemination or sale of deepfake videos or still images “with the intent to coerce, harass, or intimidate.”

In June 2019 Texas enacted SB 751, making it a criminal offense to create or distribute a deepfake video within 30 days of an election with the aim of injuring a candidate or influencing the election result.

In October 2019 California enacted both AB 602, allowing victims of deepfake porn to “seek injunctive relief and recover reasonable attorney's fees and costs as well as specified monetary damages,” and AB 730, making it a criminal offense to create or distribute a deepfake video, image, or audio recording of a politician within 60 days of an election.

And in November of last year New York enacted SB 5959, which in addition to providing a right of action for victims of deepfake pornography, also established a right of publicity prohibiting the commercial use of a deceased performer’s “digital replica” for 40 years from the time of their death.

The election-related laws have drawn the most criticism, mainly on free-speech grounds.

“Political speech enjoys the highest level of protection under U.S. law,” Jane Kirtley, a professor of media ethics and law at the University of Minnesota’s Hubbard School of Journalism and Mass Communication, told the Guardian. “The desire to protect people from deceptive content in the run-up to an election is very strong and very understandable, but I am skeptical about whether they are going to be able to enforce this law.”

Such criticism didn’t prevent three other election-related deepfake bills from being introduced this session, although two of them have already failed: Illinois SB 3171, which was very similar to Texas SB 751, and Illinois HB 5321, which would have prohibited the use of so-called “cheapfakes” - images altered with non-AI technology like Photoshop - as well as deepfakes prior to an election.

New Jersey’s AB 4985, which is still pending, adds a slight twist to the typical approach to election deepfake legislation, prohibiting the use of deepfakes within 60 days of an election, unless they include a disclosure stating they have been manipulated. That hasn’t spared the measure from the First Amendment argument, however.

New Jersey and Illinois are among four states that have introduced bills referring specifically to “deepfakes” or “deep fakes” this session. The other two states are New York, which has introduced a measure (SB 6829) prohibiting the use of deepfakes to “harass, annoy, threaten or alarm another person,” and Hawaii, which has introduced three bills (HB 346, SB 309 and SB 1009) expanding the state’s definition of a violation of privacy in the first degree to include the intentional or threatened disclosure of deepfake images or video.

Bills have also been introduced this session that don’t mention deepfakes by name but presumably would apply to them. New Jersey’s election measure falls into that category, as does a bill enacted in Georgia (SB 78) amending the state’s criminal code dealing with invasion of privacy to prohibit the electronic transmission of a sexually explicit photograph or video - including a “falsely created videographic or still image” - to harass the depicted individual.

There appears to have been a lull in deepfake-related legislative activity during the coronavirus pandemic. But with deepfakes only becoming more prevalent and more convincing, more legislation targeting them seems inevitable.



Deepfake Legislation Catching on in States

Legislatures in at least six states have introduced bills in their current sessions dealing with “deepfakes” - videos and other digital content altered using artificial intelligence - according to State Net’s legislative tracking system. One of those states, Georgia, has enacted a deepfake measure (SB 78), joining at least four other states that did the same in 2019 or 2020.