By James B. Astrachan | Partner at Goodell, DeVries, Leech & Dann, LLP
Stephen Hawking warned that artificial intelligence might end the world, and others feel the same. How will this impact the advertising industry?
Most of us, when we think of AI, ponder more benign matters. For example, a colleague of mine sat in front of his computer to experiment with AI and generated a pretty decent bio with only a few screen prompts. The program did not make up facts. Instead, it scoured the Internet, pulling facts and assembling them; not as he might, but as it was trained to do. Impressive, but setting aside the process, the result was pedestrian, and there was no legal harm or foul, we think.
Facts are free to be taken under copyright law, but we don’t know whether the program pinched a protected bio from an unknown source and used it as a template, or took qualitatively key language that might be protected, and therefore infringed. Employees create a lot more than bios, and employers might ask, “Do I know what my employees unintentionally infringed today?” The other side of the coin: an employer’s valuable assets are also at risk.
No doubt, AI is already being used to create advertisements, whether by agencies or in-house staff. While AI may be a benefit when it comes to helping identify and reach audiences and recognize prior interactions, it can also lead to legal issues.
For example, what if AI scans the best cruise-adventure ads from the last 20 years, identifies the elements of what it considers the most successful ads, and then copies them? And what if the AI then scours the Internet for images similar, but not identical, to those used in the most successful ad? It is possible that the result will infringe the look and feel of another ad that is protected under copyright law. And if AI recognizes a celebrity’s distinctive voice or persona as a real draw in some ads, it might recreate those characteristics, resulting in an expensive legal claim.
An employer’s code shared with an artificial intelligence tool like ChatGPT may be used to train that AI program and, if the code is useful, the bot may use it to create future code, destroying the proprietary nature of the original. This may be the reason Amazon lawyers have admonished employees not to share company code with ChatGPT.
The entertainment industry knows its most important assets are at risk, including the personas of its stars and the talents of its writers. Anthony Bourdain’s and Andy Warhol’s voices have been recreated by AI for use in documentaries. In some jurisdictions a post-mortem right of publicity exists, and some cases have recognized a publicity right in a distinctive voice. There are millions of vocal expressions throughout the Internet available to AI.
A clever AI user could create a script for a good sequel to When Harry Met Sally, using simulated voices of the original actors and animated images of them taken without permission. Some viewers may reject this effort as more surreal than it should be, but a new generation of viewers might consider this sort of effort the real deal and embrace it over studio acting and production. The more mundane assets of Main Street are also at risk. Colorable imitations of trademarks, proprietary writings, exposed aspects of trade secrets, code, even reputation are all subject to AI taking and manipulation. The problem goes deeper, because even if the taking is discovered, who can be held responsible? The AI developer? The user?
There are steps to take to protect IP from AI. As Amazon warned, computer code should not be openly shared if it is intended to remain proprietary, and it should be protected from hackers by encryption. Access controls may help, and users must be admonished never to share credentials. Images can be subtly watermarked; while a watermark is not itself a protection, it can at least evidence that the image was taken. Installing anti-malware software may also help, as can software that prevents an intruder from removing content it has improperly accessed. Firewalls, email filters and web content filters are all important to keep bots out.
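To illustrate the watermarking point, here is a minimal sketch of least-significant-bit (LSB) embedding, one common technique for hiding an invisible mark in image pixel data. The pixel list and the mark value below are illustrative assumptions, not anything from the article; a production tool would operate on decoded image channels and use a more robust, tamper-resistant scheme.

```python
# Minimal sketch of LSB watermarking. "Pixels" here are a plain list of
# 8-bit values standing in for real image channel data (an assumption
# for illustration only).

def embed_watermark(pixels, mark):
    """Hide each bit of `mark` (bytes) in the lowest bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read `length` bytes back out of the low bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i : i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return bytes(data)

original = [200, 13, 77, 254, 3, 90, 120, 45] * 4  # 32 fake pixel values
marked = embed_watermark(original, b"ABC")
assert extract_watermark(marked, 3) == b"ABC"
# Each pixel changes by at most 1, so the mark is visually imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(original, marked))
```

Because the mark changes each pixel value by at most one, it does not visibly alter the image, yet it can later evidence that a copy was taken from the owner's original.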
Much of the content that will be used by bots to create new works will have been released intentionally through one form of publication or another. Generally speaking, these items are published without any intent to allow unauthorized third-party use, but once published, they become fair game for capture and use by AI.
Back in the day, a lawyer concerned about the originality of content could ask the creative director to identify the borrowed ideas used to create the ad. With AI, this may be far more difficult.
AI will make creation much easier; the question will be how we protect ourselves from infringement claims and the loss of proprietary rights. Hawking, and even Musk, are right to be concerned about the existential problems AI can create, but most of us will deal with the more mundane.
“AI is the next revolution…there is no going back.” M. Werneck, Executive Vice President, The Kraft Heinz Company.
Not all revolutions benefit humanity; tech luminary Elon Musk and AI pioneer Yoshua Bengio recently warned that we might be circling the drain. They, and others, have called for a six-month moratorium on training AI systems more powerful than OpenAI’s GPT-4. They caution that AI can be dangerous to society in ways not yet understood. Nine years ago, Stephen Hawking was more direct: “The development of full AI could spell the end of the human race.” Yet we race forward at breakneck speed.
Musk’s concerns run from an Internet populated with AI-generated false information to a situation reminiscent of the war Skynet wages on humans in the movie The Terminator.
AI technology capable of generating content by user prompt exists. Chatbots, for example, respond to human questions; they can produce sophisticated computer code. Companies are racing to develop and market the most sophisticated AI tools.
Max Tegmark, an MIT physics professor and president of the Future of Life Institute, calls this practice a “suicide race.” He fears that “humanity as a whole could lose control of its own destiny.” That’s a stiff warning. Steve Wozniak and others have joined the chorus.
Yet many continue to embrace the development of AI, believing, possibly correctly, that it will make work more stimulating and replace mundane functions now performed by people, such as organizing and fulfilling seating preferences on flights. But no doubt, some will lose jobs; others will gain them.
Hardly the stuff of science fiction and the end of civilization, but AI has created, and will continue to create, special problems in the realm of copyright protection and ownership, because “authors” are starting to use it to create content, including text and illustrations. Some compare this so-called copyright frontier to the late-19th-century reconciliation of photography and copyright.
The U.S. Copyright Office is grappling with old questions applied to AI technology: what is protectable and what is not? Who is the author? Recently, the Office denied registration to the AI-generated images in the work “Zarya of the Dawn,” while the story and the arrangement of images were considered sufficiently original, as the work of a human author, to be registrable. This was probably one of the Office’s first decisions on whether, and to what extent, AI-generated content should be recognized as protectable under copyright law when it results from human decisions.
The “Zarya” author unsuccessfully claimed that the images he sought to register were created as a result of his own creative expression. But the Office declined to agree, ruling that because the specific output of the AI application Midjourney cannot be predicted by users, its use is distinct from the use of other artistic tools, such as a camera.
Generative AI programs create text, images and other content in response to human prompts. They have this capability because they are trained on existing, and often protected, works. As to the AI-generated product, who is the author, if anyone: the machine or the human prompter? The Copyright Act does not define “author”; the Register will recognize only works created by real people.
Challengers to this policy have asserted that human authorship is not a requirement for registration. One suit is pending, and the result will likely depend on the level of human-machine interaction. Courts have, since 1884, held photographs created by cameras protectable, owing to the creative decisions made by the photographer, such as lighting and angle. The Register, however, has continued to cite lack of human control over AI as a basis to deny registration.
The question of who owns the work also looms, and while that answer could depend on the creative choices made by the user to complete the project, those rights can arguably be taken, or licensed, under the end user agreement.
Lastly, the courts will need to grapple with whether training a program by copying existing works is infringement or a permissible fair use. And there will likely be many more questions. These copyright questions seem trivial, however, considering the warning from prominent thinkers that AI could end our civilization.
There’s so much to consider. And to add for another day’s discussion, AI must use data to generate its deliverable, and the gathering and use of data is fraught with all sorts of legal questions, both here and in Europe.
James B. Astrachan is a partner at Goodell, DeVries, Leech & Dann, LLP and teaches Trademark and Unfair Competition Law at the University of Baltimore School of Law. He is the author of the six-volume Law of Advertising.