By: Tom Spiggle, THE SPIGGLE LAW FIRM, PLLC
Artificial intelligence (AI) isn’t new, but AI technology that’s good enough to catch the attention of the average person and affect their daily lives is rather novel. A great example of such technology is ChatGPT,1 which is a type of chatbot2 developed by OpenAI3 that was launched in November 2022. Since then, it’s received a lot of publicity, especially regarding its implications for professional and academic content creation.
ChatGPT has also seen tremendous growth: it took just five days for ChatGPT to reach one million users.4 To put this in perspective, it took Twitter two years to hit one million users.5
Many of the uses for ChatGPT have been in the workplace, with roughly 27%6 of professionals saying they’ve used ChatGPT for work-related tasks. But ChatGPT isn’t always used openly at work: 68%7 of workplace ChatGPT users don’t disclose that they use it, and only 32%8 use ChatGPT with their boss’s knowledge.
That so many people use ChatGPT covertly at work is notable, and it raises the question of what ChatGPT’s use in the office could mean for workers. Taking a more in-depth look at ChatGPT and how it works might shed some light on this question.
ChatGPT is a chatbot that can engage in human-like text conversations. Through these interactions, users can ask ChatGPT to answer questions or help them complete certain tasks, such as suggesting ideas during a brainstorm or preparing a written work.
The exact details of how ChatGPT works are beyond the scope of this article, but in short, it’s a generative AI that creates new content, as opposed to simply acting on or responding to existing information.9 It’s also based on a language model,10 which uses math to predict word combinations that make sense to a human reader.11
Language models are fairly good at certain tasks, such as predicting words to fill in the blanks of sentences. For example, imagine you asked a language model AI to fill in the blank in the following phrase: World War Two ____ in 1945.
The language model AI can easily figure out that filling in the blank with a word like “began” or “ended” makes the phrase grammatically sensible and natural-sounding. The problem is that while either word may sound correct, only one of them produces a factually correct phrase. The hard part is developing a language model chatbot that can do all of this without expending an inordinate amount of resources to train and operate it.
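To make the word-prediction idea concrete, here is a toy sketch in Python. It is a deliberate simplification, not how ChatGPT actually works: real language models use neural networks trained on billions of words, whereas this sketch just counts how often each completed phrase appears in a tiny invented "training" corpus.

```python
# Tiny illustrative corpus (an assumption for this sketch; real models
# train on vastly more text).
corpus = (
    "world war two ended in 1945 . "
    "world war two began in 1939 . "
    "the war ended in 1945 ."
).split()

def count_ngram(words):
    """Count how many times the word sequence `words` appears in corpus."""
    n = len(words)
    return sum(
        corpus[i:i + n] == list(words)
        for i in range(len(corpus) - n + 1)
    )

# Grammar alone cannot choose between the candidates: both fit the
# pattern "World War Two ___ in ...". The surrounding context
# ("in 1945") is what makes "ended" the factually correct pick.
for candidate in ("began", "ended"):
    print(candidate, count_ngram(["two", candidate, "in", "1945"]))
# prints:
# began 0
# ended 1
```

A real model assigns probabilities rather than raw counts, which lets it generalize to phrases it has never seen verbatim, but the underlying intuition is the same: context steers the prediction toward the word that fits the surrounding facts.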
ChatGPT makes use of neural networks12 to learn more efficiently. ChatGPT is unique in that it has been designed to learn from vast amounts of unprocessed information on its own and avoid having to first have that information annotated by humans.13 Only after this major learning step has taken place do people step in to train ChatGPT to refine how it interacts with humans and provide information more accurately and in a safer manner.14
The result is a chatbot that can generate original content in a way that sounds very human-like and is reasonably accurate, without having to expend an unreasonable amount of resources to develop and train it. Then there’s the fact that ChatGPT often has the self-awareness to know when it needs more information to respond properly.15 While it’s clear that ChatGPT seems smart, the question then becomes, how smart is it really?
It’s not perfect, as OpenAI readily admits to several limitations of ChatGPT, such as writing “plausible-sounding, but incorrect or nonsensical answers.”16 Despite this and other drawbacks to using ChatGPT, it’s found plenty of uses in the workplace.
One of the popular ways many people are using ChatGPT at work (and in general) is as a research tool.17 More precisely, they’re using it to replace Google or another online search engine to find answers to questions. ChatGPT can do this by quickly cutting through the search-engine-optimized results and providing a more useful answer to the user in less time.
There are other ways in which individuals can use ChatGPT to save time while doing certain tasks at work. Entrepreneur.com18 lists several such workplace uses for ChatGPT.
How does this work in practice? Imagine a worker needs to send out a company email announcing an event for a product release. All the worker needs to do is tell ChatGPT, “Can you write me an email telling my coworkers about the upcoming Acme Product release on March 20, 2023, that will be held at company headquarters?” A few seconds later, ChatGPT will produce a sample email with a subject line that the worker can literally cut and paste into their email account.
Depending on the exact wording of the prompt and the information given to ChatGPT, the worker might need to tweak what ChatGPT creates. The worker can make the changes themselves or ask ChatGPT to do it, such as by asking ChatGPT to adjust the tone of the email or add certain information, like the time of the product launch event. Even when ChatGPT can’t complete a particular assignment for the user, it can help save time by providing a starting point or inspiration.
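Under the hood, chat tools of this kind typically represent the conversation as a list of role-tagged messages, and the follow-up "tweak" is just another message appended to that list. The sketch below assumes OpenAI's chat-completions message format; the model name, placeholder draft, and wording are illustrative, and no network call is made:

```python
import json

# The worker's original request, as a "user" message.
messages = [
    {
        "role": "user",
        "content": (
            "Can you write me an email telling my coworkers about the "
            "upcoming Acme Product release on March 20, 2023, that will "
            "be held at company headquarters?"
        ),
    }
]

# The model's draft comes back as an "assistant" message (placeholder
# text here). The worker then asks for a revision in the same
# conversation instead of editing the draft by hand.
messages.append({"role": "assistant", "content": "(draft email text)"})
messages.append(
    {
        "role": "user",
        "content": (
            "Adjust the tone to be more formal and add that the event "
            "starts at 2 p.m."
        ),
    }
)

# The whole conversation is resent with each request, which is how the
# model "remembers" the earlier draft when revising it.
request_body = json.dumps({"model": "gpt-3.5-turbo", "messages": messages})
print(len(messages), "messages in the conversation")
```

This back-and-forth structure is why iterating on a draft with the chatbot is often faster than starting a fresh prompt: the revision request rides on top of everything said so far.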
Many of the workplace applications for ChatGPT aren’t likely to cause problems or run afoul of any laws or company policies. Yet individuals who use ChatGPT for work may still need to be careful of when and how they use this technology.
ChatGPT could cause problems for workers in three contexts. First, there are situations where the mere use of ChatGPT violates an employer’s policy. Second, the employer permits the use of ChatGPT, but the worker uses it in a particular way that leads to a violation of a law or rule. Third, the worker relies on incorrect information from ChatGPT.
ChatGPT’s Use Violates a Rule or Policy of the Employer
Given how new ChatGPT is, there aren’t going to be many employers that have banned its use. That being said, there’s generally nothing to stop an employer from implementing a policy that forbids employees or other workers from using this technology. Whether it’s a moral objection or the fear that workers might somehow misuse it, most employers would likely be within their rights to prohibit its use, even for work-related tasks.
For instance, a company might modify its Internet-use policy to limit the use of chatbot tools in addition to stopping workers from visiting social media websites during work hours. A worker could then get into trouble if they violate this policy by using ChatGPT at work.
This is probably not the most likely concern a worker will face when using ChatGPT at work. What’s more likely is that the worker uses ChatGPT in a way that leads to an infraction of a different, seemingly unrelated company rule or requirement.
Improper Use of ChatGPT by an Individual
This is probably the most likely way a worker could get into trouble at work for using ChatGPT, although the misuse would probably be unintentional. There could be a scenario where the worker is using ChatGPT for legitimate reasons but does so in a way that causes problems for the worker and/or employer. Here are two hypotheticals to help illustrate.
In the first hypothetical, the misuse occurs when the worker provides confidential or otherwise protected information to ChatGPT. This could happen if someone is asking ChatGPT to write a performance review and includes information subject to a non-disclosure agreement (NDA). Or an attorney asks ChatGPT to help prepare a contract or discovery request and provides confidential client information to ChatGPT so it can complete the task.
In the second hypothetical, ChatGPT is used to create a piece of work for which an employer wants certain legal protections. But because ChatGPT helped create the work, it might not be eligible for those protections. For example, an engineer might use ChatGPT to help create new software code. Depending on ChatGPT’s involvement in creating it, the newly developed code may not be eligible for copyright protections.22
This isn’t to say that a work created with the help of ChatGPT can never receive copyright protections, but it will depend on the level of human involvement concerning the traditional elements of authorship. Needless to say, an employer might be upset to learn that there’s a possibility that the code for a groundbreaking new piece of software won’t be as profitable as it hoped because the U.S. Copyright Office won’t register it.
The Worker Relies on Incorrect Information Provided by ChatGPT
In the early days of computer science, there was a saying: “garbage in, garbage out.” This meant that if a user gave bad information to a computer, the computer was likely to produce bad results. This concept applies to ChatGPT in that one reason it may provide undesirable results is that it’s been given incorrect information. This incorrect information could come from the user, but it may also be a consequence of not having access to correct information during its training or development, which OpenAI readily admits is possible.23
Imagine a worker needs to write a press release and uses ChatGPT to help prepare it. Ideally, the worker will only use ChatGPT to create a very rough draft. But people don’t often get to work under ideal conditions, with fast-approaching deadlines a common occurrence.
If this hypothetical worker were to essentially rely on ChatGPT to write the press release, this could lead to problems if it contains incorrect information. If the worker is lucky, the press release will simply come across as sloppy and unprofessional. If they’re unlucky, the press release will contain untrue statements that can harm a particular individual or business. The worker and/or the employer could then be subject to potential defamation liability.
Often, a mistake that’s present in something ChatGPT creates won’t be obvious. Instead, the problem might be something like a subtle bias that stems from biased information provided to ChatGPT. This bias could come out despite the best efforts of OpenAI and the users to prevent this from happening.
Amazon.com’s attempt a few years ago at using AI to help it sort through the resumes of job applicants for software development and other technology-based positions serves as an example of what can happen when AI has a bias. The problem was that the software was biased against women because it was trained using resumes submitted to Amazon.com in the past. And because most of these resumes came from men, the software learned to “prefer” resumes that came from men by downgrading resumes that contained the word “women.”24
Dealing with these potential errors or undesirable results from ChatGPT is especially challenging because ChatGPT doesn’t provide citations or an explanation for how it reached its conclusions. So users must proactively do their own research to double-check ChatGPT’s results. But they might have used ChatGPT to avoid doing their own research, so this verification may not always happen.
In the majority of cases, a worker who gets in trouble for using ChatGPT can probably be treated just like any other worker who does something the employer doesn’t like. This will be especially true if the worker gets fired and is an at-will employee. As of the time of this writing, it’s unlikely that getting fired for using a chatbot is against a particular law or violates public policy.
Workers who have a contractual agreement with their employers may enjoy greater protections from getting fired for using ChatGPT, unless the use of ChatGPT violates a provision in the contract. If the worker is a creative-content creator, the contract might contain a provision that prohibits the worker from using AI-based technology to create content.
ChatGPT is already a game-changer for many workers, but it’s likely just the beginning of what’s to come. So far, the major changes have revolved around how ChatGPT can save people time to complete tasks they were already able to do. This could have a dramatic effect on knowledge workers, especially those who work by the hour.
For many professions, there’s an alignment between the quality and/or amount of work and the time the worker has to spend to produce that work. This alignment hasn’t always been perfect, but ChatGPT will likely expand any existing misalignment, such that paying these types of workers by the hour will no longer be viable in certain situations. For instance, instead of getting paid by the hour, some workers who rely on ChatGPT might get paid with a flat or value-added fee arrangement.
Besides getting paid differently, this could turn non-exempt workers into exempt workers under wage laws like the Fair Labor Standards Act of 1938.25 As a result, it could take away certain wage and hour protections. Of course, hourly jobs that focus more on physical human labor as opposed to knowledge are probably going to be less affected by ChatGPT and similar AI technology, at least until AI-controlled robots become commonplace in the workplace.
The reality of ChatGPT or similar technology is that, sooner or later, employers will probably want their workers to use it, because it will save time that will help the employers save money. Eventually, society may get to a point where using a chatbot is as ubiquitous as doing a Google search or looking up how to do something on YouTube.
Tom Spiggle is a principal in The Spiggle Law Firm, PLLC in Washington, DC. He represents individuals in employment matters and complex litigation and defends individuals subject to federal investigation and prosecution.
To find this article in Practical Guidance, follow this research path:
RESEARCH PATH: Labor & Employment > Trends & Insights > Articles
For an overview of current practical guidance on Generative AI, see
> GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RESOURCE KIT
For an analysis of possible pitfalls in the use of AI in workplace hiring, see
> ANTICIPATING WHAT ChatGPT MEANS FOR THE WORKPLACE
> ARTIFICIAL INTELLIGENCE AND ROBOTS IN THE WORKPLACE: BEST PRACTICES
> EVALUATING THE LEGAL ETHICS OF A ChatGPT-AUTHORED MOTION
> LABOR & EMPLOYMENT KEY LEGAL DEVELOPMENTS TRACKER (CURRENT)
This article was originally published in Bender’s Labor & Employment Law Bulletin, Volume 23, Issue No. 4.