Copyright © 2024 LexisNexis and/or its Licensors.

Implications of Using ChatGPT in the Workplace

April 27, 2023 (14 min read)

By: Tom Spiggle, THE SPIGGLE LAW FIRM, PLLC

Artificial intelligence (AI) isn’t new, but AI technology that’s good enough to catch the attention of the average person and affect their daily lives is rather novel. A great example of such technology is ChatGPT,1 a type of chatbot2 developed by OpenAI3 that launched in November 2022. Since then, it’s received a lot of publicity, especially regarding its implications for professional and academic content creation.

It has also seen tremendous growth, as it took just five days for ChatGPT to reach one million users.4 To put this in perspective, it took Twitter two years to hit one million users.5

Many of the uses for ChatGPT have been in the workplace, with roughly 27%6 of professionals saying they’ve used ChatGPT for work-related tasks. But ChatGPT isn’t always used openly at work, as 68%7 of workplace ChatGPT users don’t disclose that they use it and only 32%8 use ChatGPT with their boss’s knowledge.

That so many people use ChatGPT covertly at work is notable, and it raises the question of what ChatGPT’s use in the office could mean for workers. Taking a more in-depth look at ChatGPT and how it works might shed some light on this question.

What Is ChatGPT and How Does it Work?

ChatGPT is a chatbot that can engage in human-like text conversations. Through these interactions, users can ask ChatGPT to answer questions or to help complete certain tasks, such as suggesting ideas during a brainstorm or preparing a written work.

The exact details of how ChatGPT works are beyond the scope of this article, but in short, it’s a generative AI that creates new content as opposed to simply acting or responding to existing information.9 It’s also based on a language model,10 which works by using math to predict word combinations that make sense to a human reader.11

Language models are fairly good at certain tasks, such as predicting words to fill in the blanks of sentences. For example, imagine you asked a language model AI to fill in the blank in the following phrase: World War Two ____ in 1945.

The language model AI can easily figure out that it can fill in the blank with words like “began” or “ended” and have it make grammatical sense and read naturally. The problem is that while either word may sound correct, only one word can be used to have a factually correct phrase. The hard part is trying to develop a language model chatbot that can do all of this without having to expend an inordinate amount of resources to train and operate it.
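How a language model picks those candidate words can be sketched with a toy example. The snippet below builds a minimal bigram model (a deliberate oversimplification for illustration; ChatGPT uses a far larger neural network trained on vastly more text) that counts which words follow which in a tiny made-up corpus and then predicts likely completions:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real language models train on billions of words.
corpus = (
    "world war two began in 1939 . "
    "world war two ended in 1945 . "
    "the war ended in victory ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word, k=2):
    """Return the k words most often seen after prev_word."""
    return [word for word, _ in following[prev_word].most_common(k)]

# Candidates to fill the blank in "world war two ____ in 1945":
print(predict("two"))  # both "began" and "ended" read naturally
```

Both candidates are grammatical, which mirrors the point above: frequency-based prediction alone cannot tell the model that only “ended” is factually correct for 1945.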

ChatGPT makes use of neural networks12 to learn more efficiently. ChatGPT is unique in that it has been designed to learn from vast amounts of unprocessed information on its own, without first having that information annotated by humans.13 Only after this major learning step has taken place do people step in to train ChatGPT to refine how it interacts with humans and to provide information more accurately and safely.14

The result is a chatbot that can generate original content in a way that sounds very human-like and is reasonably accurate, without having to expend an unreasonable amount of resources to develop and train it. Then there’s the fact that ChatGPT often has the self-awareness to know when it needs more information to respond properly.15 While it’s clear that ChatGPT seems smart, the question then becomes, how smart is it really?

It’s not perfect, as OpenAI readily admits to several limitations of ChatGPT, such as writing “plausible-sounding, but incorrect or nonsensical answers.”16 Despite this and other drawbacks to using ChatGPT, it’s found plenty of uses in the workplace.

Using ChatGPT at Work

One of the popular ways many people are using ChatGPT at work (and in general) is as a research tool.17 More precisely, they’re using it to replace Google or another online search engine to find answers to questions. ChatGPT can do this by quickly cutting through the search-engine-optimized results and providing a more useful answer to the user in less time.

There are other ways in which individuals can use ChatGPT to save time while doing certain tasks at work. Entrepreneur.com18 lists several workplace uses for ChatGPT such as using it to:

  • Write essays, speeches, emails, and employee evaluations
  • Look for patterns or conduct statistical analysis of large volumes of data
  • Schedule events and/or help plan tasks
  • Get a second opinion or a different perspective on a topic or question
  • Apply for a new job by helping to write resumes and cover letters

How does this work in practice? Imagine a worker needs to send out a company email announcing an event for a product release. All the worker needs to do is tell ChatGPT, “Can you write me an email telling my coworkers about the upcoming Acme Product release on March 20, 2023, that will be held at company headquarters?” A few seconds later, ChatGPT will produce a sample email, complete with a subject line, that the worker can copy and paste into their email account.

Depending on the exact wording of the prompt and the information given to ChatGPT, the worker might need to tweak what ChatGPT creates. The worker can make the changes themselves or ask ChatGPT to do it, such as by asking ChatGPT to adjust the tone of the email or add certain information, like the time of the product launch event. Even when ChatGPT can’t complete a particular assignment for the user, it can help save time by providing a starting point or inspiration.

Many of the workplace applications for ChatGPT aren’t likely to cause problems or run afoul of any laws or company policies. Yet individuals who use ChatGPT for work may still need to be careful of when and how they use this technology.

Potential Problems for Workers When Using ChatGPT

ChatGPT could cause problems for workers in three contexts. First, there are situations where the mere use of ChatGPT could violate an employer’s policy. Second, the employer permits the use of ChatGPT, but the worker uses it in a particular way that leads to a violation of a law or rule. Third, the worker relies on incorrect information from ChatGPT.

ChatGPT’s Use Violates a Rule or Policy of the Employer

Given how new ChatGPT is, there aren’t going to be many employers that have banned its use. That being said, there’s generally nothing to stop an employer from implementing a policy that forbids employees or other workers from using this technology. Whether it’s a moral objection or the fear that workers might somehow misuse it, most employers would likely be within their rights to prohibit its use, even for work-related tasks.

For instance, a company might modify its Internet-use policy to limit the use of chatbot tools in addition to stopping workers from visiting social media websites during work hours. A worker could then get into trouble if they violate this policy by using ChatGPT at work.

This is probably not the most likely concern a worker will face when using ChatGPT at work. What’s more likely is that the worker uses ChatGPT in a way that leads to an infraction of a different, seemingly unrelated company rule or requirement.

Improper Use of ChatGPT by an Individual

This is probably the most likely way a worker could get into trouble at work for using ChatGPT, although the misuse would probably be unintentional. A worker might use ChatGPT for legitimate reasons but do so in a way that causes problems for the worker and/or employer. Here are two hypotheticals to help illustrate.

In the first hypothetical, the misuse occurs when the worker provides confidential or otherwise protected information to ChatGPT. This could happen if someone is asking ChatGPT to write a performance review and includes information subject to a non-disclosure agreement (NDA). Or an attorney asks ChatGPT to help prepare a contract or discovery request and provides confidential client information to ChatGPT so it can complete the task.

Doing either of these things would result in providing protected information to an unauthorized third party, which would probably violate the terms of an NDA, privacy policy, employment contract, and/or professional privilege. This is because, according to OpenAI’s FAQ,19 privacy policy,20 and terms of use,21 OpenAI may use the information from ChatGPT conversations for training purposes and OpenAI has the right to review the information provided to ChatGPT.

In the second hypothetical, ChatGPT is used to create a piece of work for which an employer wants certain legal protections. But because ChatGPT helped create the work, it might not be eligible for those protections. For example, an engineer might use ChatGPT to help create new software code. Depending on ChatGPT’s involvement in creating it, the newly developed code may not be eligible for copyright protection.22

This isn’t to say that a work created with the help of ChatGPT can never receive copyright protections, but it will depend on the level of human involvement concerning the traditional elements of authorship. Needless to say, an employer might be upset to learn that there’s a possibility that the code for a groundbreaking new piece of software won’t be as profitable as it hoped because the U.S. Copyright Office won’t register it.

The Worker Relies on Incorrect Information Provided by ChatGPT

In the earlier days of computer science, there was a saying, “garbage in, garbage out.” This meant that if a user gave bad information to a computer, the computer was likely to provide bad results. This concept applies to ChatGPT in that one reason it may provide undesirable results is that it’s been given incorrect information. This incorrect information could come from the user, but it may also be a consequence of not having access to correct information during its training or development, which OpenAI readily admits is possible.23

Imagine a worker needs to write a press release and uses ChatGPT to help prepare it. Ideally, the worker will only use ChatGPT to create a very rough draft. But people don’t often get to work under ideal conditions, with soon-approaching deadlines a common occurrence. 

If this hypothetical worker were to essentially rely on ChatGPT to write the press release, this could lead to problems if it contains incorrect information. If the worker is lucky, the press release will simply come across as sloppy and unprofessional. If they’re unlucky, the press release will contain untrue statements that can harm a particular individual or business. The worker and/or the employer could then be subject to potential defamation liability.

Often, a mistake that’s present in something ChatGPT creates won’t be obvious. Instead, the problem might be something like a subtle bias that stems from biased information provided to ChatGPT. This bias could come out despite the best efforts of OpenAI and the users to prevent this from happening.

Amazon.com’s attempt a few years ago at using AI to help sort through the resumes of job applicants for software development and other technology-based positions serves as an example of what can happen when AI has a bias. The software was biased against women because it was trained using resumes submitted to Amazon.com in the past. And because most of these resumes came from men, the software learned to “prefer” resumes that came from men by downgrading resumes that contained the word “women.”24

Dealing with these potential errors or undesirable results from ChatGPT is especially challenging because ChatGPT doesn’t provide citations or an explanation for how it reached its conclusions. So users must proactively do their own research to double-check ChatGPT’s results. But they might have used ChatGPT to avoid doing their own research, so this verification may not always happen.

What Happens If a Worker Gets in Trouble for Using ChatGPT?

In the majority of cases, a worker who gets in trouble for using ChatGPT can probably be treated just like any other worker who does something the employer doesn’t like. This will be especially true if the worker gets fired and is an at-will employee. As of the time of this writing, it’s unlikely that getting fired for using a chatbot is against a particular law or violates public policy.

Workers who have a contractual agreement with their employers may enjoy greater protections from getting fired for using ChatGPT, unless the use of ChatGPT violates a provision in the contract. If the worker is a creative-content creator, the contract might contain a provision that prohibits the worker from using AI-based technology to create content.

What Does the Future of AI Hold for Hourly Work?

ChatGPT is already a game-changer for many workers, but it’s likely just the beginning of what’s to come. So far, the major changes have revolved around how ChatGPT can save people time to complete tasks they were already able to do. This could have a dramatic effect on knowledge workers, especially those who work by the hour.

For many professions, there’s an alignment between the quality and/or amount of work and the time the worker has to spend to produce that work. This alignment hasn’t always been perfect, but ChatGPT will likely expand any existing misalignment, such that paying these types of workers by the hour will no longer be viable in certain situations. For instance, instead of getting paid by the hour, some workers who rely on ChatGPT might get paid with a flat or value-added fee arrangement.

Besides getting paid differently, this could turn non-exempt workers into exempt workers under wage laws like the Fair Labor Standards Act of 1938.25 As a result, it could take away certain wage and hour protections. Of course, hourly jobs that focus more on physical human labor as opposed to knowledge are probably going to be less affected by ChatGPT and similar AI technology, at least until AI-controlled robots become commonplace in the workplace.

The Bottom Line

The reality of ChatGPT or similar technology is that, sooner or later, employers will probably want their workers to use it, because it will save time that will help the employers save money. Eventually, society may get to a point where using a chatbot is as ubiquitous as doing a Google search or looking up how to do something on YouTube.


Tom Spiggle is a principal in The Spiggle Law Firm, PLLC in Washington, DC. He represents individuals in employment matters and complex litigation and defends individuals subject to federal investigation and prosecution.


To find this article in Practical Guidance, follow this research path:

RESEARCH PATH: Labor & Employment > Trends & Insights > Articles

Related Content

For an overview of current practical guidance on Generative AI, see 

GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RESOURCE KIT

For an analysis of possible pitfalls in the use of AI in workplace hiring, see

ANTICIPATING WHAT ChatGPT MEANS FOR THE WORKPLACE


For guidance on counseling employers on the legal implications of integrating AI into their workplaces, see

ARTIFICIAL INTELLIGENCE AND ROBOTS IN THE WORKPLACE: BEST PRACTICES


For a discussion of the use of AI in the legal services context, see

EVALUATING THE LEGAL ETHICS OF A ChatGPT-AUTHORED MOTION


To track legal developments in the labor and employment area, see

LABOR & EMPLOYMENT KEY LEGAL DEVELOPMENTS TRACKER (CURRENT)

1. The GPT in ChatGPT stands for Generative Pre-trained Transformer. 2. IBM defines a chatbot as “a computer program that uses artificial intelligence (AI) and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.” IBM, What Is a Chatbot?. 3. OpenAI’s website states that this company’s mission is to “ensure that artificial general intelligence benefits all of humanity.” See OpenAI, About OpenAI. 4. Katharina Buchholz, Statista, ChatGPT Sprints to One Million Users (Jan. 24, 2023). 5. Id. 6. Fishbowl, ChatGPT Sees Strong Early Adoption in the Workplace (Jan. 17, 2023). 7. Fishbowl, 70% of Workers Using ChatGPT at Work Are Not Telling Their Bosses; Overall Usage Among Professionals Jumps to 43% (Feb. 1, 2023). 8. Id. 9. McKinsey & Company, What Is Generative AI? (Jan. 19, 2023). 10. OpenAI, Introducing ChatGPT (Nov. 30, 2022). 11. Sindhu Sundar, Business Insider, If You Still Aren’t Sure What ChatGPT Is, This Is Your Guide to the Viral Chatbot that Everyone Is Talking About (Mar. 1, 2023). 12. Brown University, Brown Scholars Put Their Heads Together to Decode the Neuroscience Behind ChatGPT (Feb. 9, 2023). 13. Will Douglas Heaven, MIT Technology Review, ChatGPT Is Everywhere. Here’s Where it Came From (Feb. 8, 2023). 14. OpenAI says it uses something called the Moderation API “to warn or block certain types of unsafe content.” Introducing ChatGPT. 15. OpenAI provides an example of a user asking ChatGPT to look for problems in computer code and it replies by stating, “[i]t’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just part of it?” Id. 16. Id. 17. Jacob Zinkula & Aaron Mok, Entrepreneur, 7 Ways to Use ChatGPT at Work to Boost Your Productivity, Make Your Job Easier, and Save a Ton of Time (Feb. 9, 2023). 18. Id. 19. OpenAI, ChatGPT General FAQ. 20. OpenAI, Privacy Policy (updated Mar. 14, 2023). 21. OpenAI, Terms of Use (updated Mar. 14, 2023). 22. See Compendium of U.S. Copyright Office Practice (Third) § 313.2 (stating that the Copyright Act only applies to works created by an author and that this author must be a human being). 23. See OpenAI, ChatGPT General FAQ (stating that ChatGPT “has limited knowledge of the world and events after 2021 . . . ”). 24. Jeffrey Dastin, Reuters, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women (Oct. 10, 2018). 25. 29 U.S.C.S. § 201 et seq.

This article was originally published in Bender’s Labor & Employment Law Bulletin, Volume 23, Issue No. 4.