23 Mar 2026

Improving AI governance: What recent research reveals about productivity, trust, and AI that acts

Approximately half of working professionals now use generative AI (genAI) daily, across industries.

But according to the latest LexisNexis Future of Work research, genAI adoption is accelerating faster than organizational AI governance. The result? Excess liability and risk. 

GenAI should help your organization grow, not hinder it. In this article, we’ll help you determine where your organization stands on its AI development journey by exploring how organizational policies, guidance, and training are failing to keep pace with AI adoption. We’ll close with recommended next steps for leaders. 


AI and the future of work 

As Snehit Cherian, CTO of Global Nexis Solutions, has previously noted, genAI’s most durable value lies in augmenting human effort, freeing professionals to focus on higher-value, more strategic work. The Future of Work Report 2026: Generative AI—Tool, Colleague, or Liability? confirms his point, while also revealing where gaps in oversight, confidence, and trust threaten to undermine it. 

According to our research, genAI is already deeply embedded in professional workflows. Roughly half of professionals report using genAI frequently, and usage spans everything from basic assistance to full task execution. 

Yet while adoption has surged, governance and readiness have not kept pace. The result is a widening disconnect between leadership perception and operational reality, potentially exposing organizations to legal, security, and reputational risk. 

This year’s findings point to three critical themes shaping the future of work: 

  1. A growing governance crisis driven by shadow AI 
  2. Rising overconfidence that masks real risk
  3. Training progress that builds confidence but leaves access uneven 

Download the Full Report

Key finding 1: The AI governance crisis is leadership’s biggest blind spot 

GenAI adoption is happening with or without formal approval. While leaders may believe policies and controls are in place, the data tells a more sobering story. 

More than half of professionals surveyed (53%) report using genAI tools without formal approval. More than a quarter (28%) say their organization has no genAI policy at all, and 42% aren’t even sure whether one exists. Meanwhile, 55% pay for their own AI tools, with the majority using them for work purposes. 

This is the reality of shadow AI: employees solving real problems with whatever tools are available, often outside approved systems and safeguards. 

The governance gap is largely a problem of misalignment: teams are moving faster than the infrastructure designed to support them, creating exposure leaders may not see until something goes wrong. 

AI productivity gains are real but fragile 

There’s no question genAI is delivering productivity gains. Professionals increasingly treat AI models as more than simple tools. Nearly 40% now view genAI as a collaborator or partner, and 16% rely on it to take over entire tasks. These shifts reflect deeper integration into workflows and real efficiency benefits. 

This aligns with earlier leadership perspectives emphasizing genAI as a “supportive co-worker” rather than a replacement. But the data also shows that productivity gains are fragile when they’re built on ungoverned usage. 

Key finding 2: Organizations are ready in theory, but exposed in practice

One of the most striking findings in the 2026 report is the gap between confidence and comprehension. 

Nearly two-thirds of professionals (64%) say they are very or extremely confident using AI responsibly. On the surface, this looks like progress. In practice, it introduces a new category of risk. 

Many professionals admit they don’t fully understand: 

  • Where AI outputs come from 
  • How conclusions are generated 
  • When AI should not be used 

More than a third say they are least confident in understanding AI data sources, and another third struggle to know when AI is inappropriate for a task. At the same time, employees increasingly trust AI outputs enough to use them in higher-stakes deliverables, often without consistent validation. 

The combination of high confidence, low visibility, and expanding use creates a liability blind spot. Trust that begins as appropriate for brainstorming or drafting can quietly extend into decisions with legal, financial, or reputational consequences. 

When AI moves from assisting to acting 

The stakes rise further as AI evolves from supporting tasks to initiating actions. 

More than half of professionals say their organization has launched internal AI agents, yet only 44% clearly understand what those agents are. A quarter report minimal or poor understanding, and some don’t know whether agents are in use at all. 

Despite this, most professionals still expect human involvement: 

  • 65% say human validation is very or extremely important 
  • 56% believe humans should remain involved at every stage 
  • Only 9% support minimal human oversight 

The combination of increasing autonomy and limited understanding underscores why governance frameworks designed for assistive AI are no longer enough. As systems begin to act, accountability, explainability, and oversight must be designed in from the start. 

The 2026 data makes it clear that experimentation without validation introduces significant risk and that governance must be designed alongside adoption. 

Key finding 3: AI training builds confidence, but access remains uneven 

Training coverage has improved meaningfully: 82% of professionals now receive some form of AI training, up from 72% the previous year. Training also correlates strongly with confidence: nearly 80% of those with mandatory training report being very or extremely confident using AI. 

But training alone is not enough, and, in some cases, it may even accelerate risk. 

Professionals with mandatory training report significantly higher rates of unauthorized AI usage than those with no training at all. This suggests that awareness without adequate tools, access, or governance can push employees toward shadow solutions when official options fall short. 

The workforce is ready and eager to adopt AI. The real question is whether organizations are providing credible, secure, and validated tools that match real-world needs. 

From AI adoption to accountability 

Taken together, the findings point to a clear conclusion: genAI strategy is now operating-model strategy. 

Organizations that continue to focus solely on adoption metrics—usage rates, tool counts, experimentation—will miss the deeper shift underway. The next phase of the future of work is defined by accountability, not access. 

Prepared organizations are: 

  • Treating trust as something to measure, not assume 
  • Aligning training with approved tools and clear guardrails 
  • Planning governance for AI that acts, not just assists 

The defining shift of 2026 is no longer asking “Can AI do this?” but “Should we trust how it does this—and are we protected when it does?” 

Are you AI-ready? Take the quiz

Next steps: Explore the AI and the Future of Work Report 

This article highlights only a portion of the insights from the Future of Work Report 2026: Generative AI—Tool, Colleague, or Liability? The full report explores adoption patterns, readiness gaps, and governance risks across industries and provides leaders with a clearer view of where their organizations stand and what comes next. 

To understand how your organization compares, and how to close the pace gap responsibly, download the full report. 

Download the Full Report

Get Industry-Specific Insights