
Leadership

Stop Telling People to “Use AI More.” It’s Not Working.

Why the most common AI directive in modern organizations is the one most likely to stall adoption.

Across executive meetings, boardrooms, and internal town halls, a familiar directive is echoing: “We need to use AI more.” It sounds reasonable. Urgent, even. But in practice, it’s one of the least effective ways to drive meaningful adoption inside an organization—and in many cases, it produces the opposite result.

Brett Harward
4 min read·Apr 30, 2026
Executives meeting around a long table in a modern boardroom
Photo by Sebastian Herrmann on Unsplash

Instead of accelerating capability, the "use AI more" mandate often reinforces skepticism, creates shallow usage, and quietly stalls progress.

If that sounds familiar, it's worth asking: is the problem really that your people aren't trying hard enough? Or is it that nobody has defined what good looks like?

The Illusion of Adoption

On the surface, many organizations appear to be making progress. Licenses are purchased. Tools are rolled out. Usage metrics tick upward.

But beneath that activity is a different reality: most employees are engaging with AI in low-value, low-context ways. They experiment briefly, get inconsistent results, and move on.

The issue isn't effort. It's not even willingness. It's that the instruction itself—"use AI more"—lacks the specificity required to change behavior.

Without context, people default to what they already know. They treat AI like a search engine, a writing assistant, or a novelty. And when the output feels generic or underwhelming, the conclusion is predictable:

"This isn't as useful as advertised."

What follows is a quiet regression back to old workflows—and a self-reinforcing cycle:

  • Vague directive ("use AI more")
  • Low-quality inputs
  • Mediocre outputs
  • Frustration or indifference
  • Reduced usage or superficial compliance

From a leadership perspective, this can be misread as resistance or lack of curiosity. In reality, it's a signal that the organization hasn't yet learned how to work with AI in a meaningful way. Before you push harder, it's worth pausing to honestly assess where your organization actually stands.

AI Is Not a Tool—It's a Context Engine

Most enterprise technologies improve productivity through features, automation, or scale. AI behaves differently. Its effectiveness is directly tied to the quality of context and the clarity of the problem it's asked to solve.

In other words:

AI amplifies thinking. It does not replace it.

When employees are given vague prompts, incomplete context, or loosely defined objectives, the output reflects that ambiguity. The technology is performing exactly as designed—but the experience feels inconsistent.

This is where many organizations misdiagnose the problem. They invest in more tools, more licenses, or more generalized training. But the constraint isn't access. It's interaction quality.

Why "More Usage" Backfires

Telling people to "use AI more" assumes that repetition will lead to mastery. In practice, repetition without improvement leads to disengagement.

Consider the implicit message behind the directive:

  • "Figure it out on your own"
  • "Experiment until something works"
  • "This is important, but we won't define what good looks like"

For high-performing employees, this creates friction. They are being asked to adopt a capability without a clear standard for success. The result is either cautious experimentation or performative usage—neither of which drives real value.

Worse, early poor experiences shape long-term perception. Once someone concludes that AI outputs are generic or unreliable, it becomes significantly harder to re-engage them later.

From Mandate to Capability

The organizations making meaningful progress with AI are not asking employees to "use it more." They're doing something far more specific: defining where and how AI should be used to solve real problems.

Instead of broad directives, they create narrow, high-relevance applications:

  • A sales leader uses AI to refine deal strategy before executive reviews
  • A manager uses it to prepare for difficult performance conversations
  • A team uses it to pressure-test operational decisions before committing resources

These aren't generic use cases. They're grounded in actual work, with clear context, stakes, and expected outcomes.

That difference changes everything. Because when AI is applied to a real problem with real context, the output improves dramatically. And when the output improves, so does belief.

The shift organizations need to make is subtle but critical:

From: "Use AI more."
To: "Here is a specific problem. Here is how AI can help you think through it."

This reframes AI from a tool to be adopted into a capability to be developed. It also acknowledges something many leaders underestimate: most employees don't struggle with AI because it's complex. They struggle because they haven't been shown what good interaction looks like.

Once they experience that—once they see AI process nuance, context, and ambiguity effectively—their perception changes quickly. Adoption doesn't need to be forced. It accelerates naturally.

A Different Starting Point

If your organization wants to unlock the real value of AI, the starting point isn't broader usage. It's better usage.

That means:

  • Defining high-value scenarios
  • Providing structured context
  • Setting a clear bar for output quality
  • Treating AI interaction as a skill, not an afterthought

Without that foundation, telling people to "use AI more" will continue to produce the same result: activity without impact.

The good news is that the diagnosis is usually faster than the fix. Before your next all-hands or strategy offsite, take five minutes to find out where your organization actually stands—and what's blocking real adoption.


Next in this series: How to design AI experiences that create immediate "this changes how I work" moments—and why most training approaches fail to do it.