How Much Should Artificial Intelligence (AI) Advertising Be Regulated?

July 12, 2023 (7 min read)
By James B. Astrachan | Partner at Goodell, DeVries, Leech & Dann, LLP         

Artificial intelligence continues to dominate the news, which is rather remarkable considering all the other happenings of importance. The discussion of the dangers posed by artificial intelligence took center stage in Congress when Sam Altman, CEO of OpenAI, told a Senate panel that society is at a “printing press” moment and that Congress needs to regulate AI.

Altman, whose company created ChatGPT, advanced a three-point plan. He wants: a new federal agency to license AI models, with the power to revoke a license if the licensee does not comply with standards; implementation of safety standards and evaluations of dangerous AI capabilities; and audits of a model’s performance. An entrepreneur and business leader asking that his own industry be regulated is as rare as a beef steak at a vegan banquet.

To drive home the point that AI poses risks to all of us, Senator Richard Blumenthal began the Senate hearing by playing a recording of a voice expressing his views on the risks of AI, except the voice everyone heard was not recorded by the Senator. Instead, the remarks were written by ChatGPT, and the voice was generated by AI trained on audio of Blumenthal’s earlier speeches. While the views attributed to him reflected his real views, he made the point that AI could just as easily have been used to create a speech misrepresenting his views, and no one would know it was not him speaking. Available AI tools make it easy for anyone to do this.

Altman’s primary concern is the possible malevolent misuse of AI to influence elections and manipulate voters. Other dangers involve the use, or misuse, of consumer data and the inability to keep that data secure. There are risks from too-vague prompts and thin sources of data. There is also the bias of the data, and of the people who train the AI programs, with the result that biased data or training will skew an otherwise balanced AI. A still more frightening thought is that an AI program could self-replicate, escape into the wild and become a self-directed bad actor, as in 2001: A Space Odyssey, where this exchange occurred: “Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.” Or even something as simple as instructing an AI-assisted car to “get to the airport as fast as you can,” with the resulting mayhem.

AI Federal Regulations 

Altman called for federal regulation because of the concerns regarding privacy and voting, but there are also physical safety concerns involving AI to be addressed: self-driving cars, AI predictions of maintenance schedules for dangerous equipment, and health care diagnostic tools, for example. As well, there are non-physical AI applications that can adversely affect people’s lives and well-being, such as the use of AI as a financial planning tool to scam people and steal their money. The list goes on.

These concerns are part real and part speculative, but to make the point that AI development is moving too fast and carries potential dangers, in March 2023 over 1,000 luminaries in the AI field signed an open letter warning that AI presents “profound risks to society and humanity.” Within two months, 26,000 more signatures had been added to this letter! The stand-out concern is the possibility that a sophisticated AI model, having analyzed vast amounts of data, could learn unexpected behavior from that data and go rogue. Given where AI is today and how fast it got here, such a result within several years is hardly implausible. The big thinkers, at least, are fearful, and no one is denying the possibility.

Finally, both Altman and Blumenthal agreed that while AI will eliminate some jobs, it will result in the creation of new jobs too. As much as some members of Congress want to appear helpful, the protection of jobs from elimination due to obsolescence should not be the job of Congress; it is the job of a free market. Still, this is just one more area in which people will be adversely affected by the use of AI – many in rote jobs, some in the learned professions of law and medicine. 

AI Advertising Regulations

Nor is the world of advertising and marketing agencies immune from the adverse effects of AI, as it is now being employed by the platforms that sell advertising online, such as Amazon, Google, and Facebook. Google is promoting its own AI advertising services to its advertiser-clients, allowing them to bypass traditional marketing and ad agencies. Advertisers merely need to supply Google with creative content, such as video, text, photos, or illustrations relating to their products and services, and Google’s AI applications will use that content to create targeted ads for the advertisers’ intended audience. The traditional ad agency’s role in this endeavor can be eliminated if the advertiser so chooses.

AI-produced ads are hardly without issue. For example, AI could create ads that contain misstatements, requiring keen human review before any ad runs. If the AI is programmed to win the largest possible number of new customers, will it do so by making unsubstantiated or false claims about the advertiser’s product to meet the stated goal? In its search of data for references to the product, how will it know to discern the true from the false? (A simple sketch of this failure mode follows below.)

Google is not the only platform that relies on ad revenue and employs AI to create ad campaigns for its advertisers. Meta uses AI for this purpose too, and Meta was recently fined $1.3 billion by European Union regulators for sending European user data to the United States for storage; regulators were concerned the data could be accessed by U.S. spy agencies, invading the privacy of the Facebook users from whom it was collected. This follows an $800 million European fine imposed on Amazon in 2021.
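To make that risk concrete, here is a minimal, hypothetical sketch in Python. Nothing in it reflects Google’s or Meta’s actual systems; the candidate ads, conversion scores, and substantiation flags are invented solely to show that an objective like “maximize new customers,” absent a truthfulness constraint, rewards the unsubstantiated claim.

```python
# Hypothetical illustration: why "maximize new customers" alone is a
# dangerous objective for an ad-generating AI. All names and numbers
# here are invented; this is not any platform's actual system.

candidate_ads = [
    {"copy": "Cures headaches in 60 seconds!",
     "predicted_conversions": 950, "substantiated": False},
    {"copy": "Fast-acting headache relief.",
     "predicted_conversions": 700, "substantiated": True},
]

# The objective alone: the optimizer picks the unsubstantiated claim,
# because nothing in the goal penalizes falsity.
best_unconstrained = max(candidate_ads,
                         key=lambda ad: ad["predicted_conversions"])

# The objective plus a substantiation constraint, standing in for the
# human review called for above.
best_constrained = max(
    (ad for ad in candidate_ads if ad["substantiated"]),
    key=lambda ad: ad["predicted_conversions"],
)

print(best_unconstrained["copy"])  # Cures headaches in 60 seconds!
print(best_constrained["copy"])    # Fast-acting headache relief.
```

The point is the constraint, not the code: unless substantiation is built into the objective, or enforced by human review, the optimizer has no reason to prefer the truthful ad.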

Advertisers are also using AI in-house to create ads that their outside agencies once created. Coca-Cola now has its own AI application, called “Real Magic.” Coke intends to use this AI to produce text that appears to be written by people but in fact is drawn from questions asked of search engines, sought out by Real Magic. The AI will also let people create Coke-related art, getting them personally involved in the brand and building more than a mere buyer-seller relationship.

The questions posed by Altman’s testimony, and recognized by many thinkers, including Elon Musk, who is developing his own AI product, are these: Does AI pose a societal danger, and does Congress need to regulate AI?

If Congress steps in, what form will regulation take? Will congressional regulation and oversight stifle the creation of AI, and if so, will that be bad for AI but good for society? Can Congress regulate a new product without driving innovators and investors up the proverbial wall and taking away their incentive to create?

The European Union has drafted a bill to regulate AI; American regulation, if it comes, will follow suit. The European bill, not yet enacted, assigns AI applications to one of three risk categories. An application that poses an “unacceptable risk” is banned. A “high-risk” application, one used where legal requirements may be violated because of imperfect data or other bias, such as c.v. scanning to rank job applicants, is regulated. Applications that are neither banned nor high-risk are left unregulated by the law.
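As a minimal sketch of the triage the bill describes, the Python below maps a few example applications to the three tiers. The tier assignments are illustrative assumptions, except for the c.v. scanner, which comes from the high-risk example above; none of this is the bill’s actual text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # permitted, but regulated
    OTHER = "other"                # neither banned nor regulated

# Illustrative assignments only. The c.v. scanner comes from the
# article's high-risk example; the other entries are assumptions.
EXAMPLE_TIERS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "c.v. scanner for ranking job applicants": RiskTier.HIGH,
    "email spam filter": RiskTier.OTHER,
}

def legal_status(app: str) -> str:
    """Return an application's treatment under the three-tier scheme."""
    tier = EXAMPLE_TIERS.get(app, RiskTier.OTHER)
    if tier is RiskTier.UNACCEPTABLE:
        return f"{app}: banned"
    if tier is RiskTier.HIGH:
        return f"{app}: regulated (data quality and bias requirements)"
    return f"{app}: not regulated by the bill"

for app in EXAMPLE_TIERS:
    print(legal_status(app))
```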

Senator Blumenthal said that we are now at the point with AI where we once were with the Internet, meaning that if ever there was a time to regulate AI, it is now, not in the future. There are those who strongly feel that an unregulated Internet and social media have led to serious societal problems. AI, amplified by the Internet and fed gathered data without restriction, will no doubt exacerbate those problems.

AI will advance far faster than Congress can move. Whatever regulation of AI is imposed must be written by people who understand AI well enough to regulate it effectively without killing it. Even the creators and entrepreneurs who support and advance AI appear to support government intervention, although many of them are themselves racing to develop new AI applications. Atlas is not merely shrugging under his load; he seeks help to carry it.

James B. Astrachan is a partner at Goodell, DeVries, Leech & Dann, LLP and teaches Trademark and Unfair Competition Law at the University of Baltimore Law School. He is the co-author of the six-volume Law of Advertising.