Generative AI (genAI) is used daily across industries by approximately half of working professionals.
But according to the latest LexisNexis Future of Work research, genAI adoption is accelerating faster than organizational AI governance. The result? Mounting liability and risk.
GenAI should help your organization grow, not hinder it. In this article, we’ll help you determine where your organization stands on its AI development journey by exploring how organizational policies, guidance, and training are failing to keep pace with AI adoption. We’ll close with recommended next steps for leaders.
As Snehit Cherian, CTO of Global Nexis Solutions, has previously noted, genAI’s most durable value lies in augmenting human effort, freeing professionals to focus on higher-value, more strategic work. The Future of Work Report 2026: Generative AI — Tool, Colleague, or Liability? confirms his point, while also revealing where gaps in oversight, confidence, and trust threaten to undermine it.
According to our research, genAI is already deeply embedded in professional workflows. Roughly half of professionals report using genAI frequently, and usage spans everything from basic assistance to full task execution.
Yet while adoption has surged, governance and readiness have not kept pace. The result is a widening disconnect between leadership perception and operational reality, potentially exposing organizations to legal, security, and reputational risk.
This year’s findings point to three critical themes shaping the future of work: the rise of shadow AI, a widening gap between confidence and comprehension, and the shift toward autonomous, agentic systems.
GenAI adoption is happening with or without formal approval. While leaders may believe policies and controls are in place, the data tells a more sobering story.
More than half of professionals surveyed (53%) report using genAI tools without formal approval. More than a quarter (28%) say their organization has no genAI policy at all, and 42% aren’t even sure whether one exists. Meanwhile, 55% pay for their own AI tools, with the majority using them for work purposes.
This is the reality of shadow AI: employees solving real problems with whatever tools are available, often outside approved systems and safeguards.
The governance gap is largely one of misalignment: teams are moving faster than the infrastructure designed to support them, creating exposure leaders may not see until something goes wrong.
There’s no question genAI is delivering productivity gains. Professionals increasingly treat AI models as more than simple tools. Nearly 40% now view genAI as a collaborator or partner, and 16% rely on it to take over entire tasks. These shifts reflect deeper integration into workflows and real efficiency benefits.
This aligns with earlier leadership perspectives emphasizing genAI as a “supportive co-worker” rather than a replacement. But the data also shows that productivity gains are fragile when they’re built on ungoverned usage.
One of the most striking findings in the 2026 report is the gap between confidence and comprehension.
Nearly two-thirds of professionals (64%) say they are very or extremely confident using AI responsibly. On the surface, this looks like progress. In practice, it introduces a new category of risk.
Many professionals admit they don’t fully understand the tools they rely on. More than a third say they are least confident in their understanding of AI data sources, and another third struggle to know when AI is inappropriate for a task. At the same time, employees increasingly trust AI outputs enough to use them in higher-stakes deliverables, often without consistent validation.
The combination of high confidence, low visibility, and expanding use creates a liability blind spot. Trust that begins as appropriate for brainstorming or drafting can quietly extend into decisions with legal, financial, or reputational consequences.
The stakes rise further as AI evolves from supporting tasks to initiating actions.
More than half of professionals say their organization has launched internal AI agents, yet only 44% clearly understand what those agents are. A quarter report minimal or poor understanding, and some don’t know whether agents are in use at all.
Despite this, most professionals still expect human involvement in agent-driven work. The combination of increasing autonomy and limited understanding underscores why governance frameworks designed for assistive AI are no longer enough. As systems begin to act, accountability, explainability, and oversight must be designed in from the start.
The 2026 data makes it clear that experimentation without validation introduces significant risk and that governance must be designed alongside adoption.
Training coverage has improved meaningfully, with 82% of professionals now receiving some form of AI training, up from 72% the previous year. Training correlates strongly with confidence: nearly 80% of those with mandatory training report being very or extremely confident using AI.
But training alone is not enough, and in some cases it may even accelerate risk.
Professionals with mandatory training report significantly higher rates of unauthorized AI usage than those with no training at all. This suggests that awareness without adequate tools, access, or governance can push employees toward shadow solutions when official options fall short.
The workforce is ready and eager to adopt AI. The question is whether organizations are providing credible, secure, and validated tools that match real-world needs.
Taken together, the findings point to a clear conclusion: genAI strategy is now operating-model strategy.
Organizations that continue to focus solely on adoption metrics—usage rates, tool counts, experimentation—will miss the deeper shift underway. The next phase of the future of work is defined by accountability, not access.
Prepared organizations are building governance alongside adoption, validating the tools they deploy, and designing accountability and oversight in from the start.
The defining shift of 2026 is no longer asking “Can AI do this?” but “Should we trust how it does this—and are we protected when it does?”
Are you AI-ready? Take the quiz
This article highlights only a portion of the insights from the Future of Work Report 2026: Generative AI — Tool, Colleague, or Liability? The full report explores adoption patterns, readiness gaps, and governance risks across industries and provides leaders with a clearer view of where their organizations stand and what comes next.
To understand how your organization compares, and how to close the pace gap responsibly, download the full report.
Download the Full Report
Get Industry-Specific Insights