Driving Adoption

TL;DR. Half your AI seats are idle six months in. The reflex is to fund a training program. That is the wrong move. The 6x gap between power users and everyone else is a disposition gap, a workflow gap, and a manager-signaling gap. Curriculum addresses none of the three. The fix is to kill the bad approved tool, fund the people with disposition, redesign three workflows, set the manager signal from the top, and stop measuring attendance as if it were adoption.

You ran the audit from guide 3. The shape of your usage report was uncomfortable. Six months later you ran it again. Same shape. Maybe ten percent better at the top. The bottom thirty percent is still the bottom thirty percent. The new seats you handed out after the last all-hands have already gone quiet.

Your head of HR has a proposal on your desk. It is a thirty-thousand-dollar AI enablement program. Six modules. Mandatory completion. A vendor with a deck full of “AI-fluent workforce” pull quotes. A certificate at the end. There is a similar memo from a department head asking for an “AI champions network.” There is a third asking to stand up an internal Center of Excellence.

None of this is going to move the number you actually care about.

Why training doesn’t move the number

The default leadership move when usage stays flat is to schedule learning. It is the move that feels safest in front of a board. It is also the most reliably wrong response to an adoption gap I have seen in twenty years of watching enterprise software roll out.

The reasoning behind the training reflex goes like this. Usage is low. People do not use what they do not know how to use. Therefore teach them. Therefore the gap closes. Each step in that chain is intuitive. The problem is the second one. The premise that low usage is caused by low knowledge is empirically wrong for AI tools, and it has been wrong from the beginning.

The 6x engagement gap that OpenAI publishes is not a knowledge gap. The Federal Reserve number on average productivity gain (5.4% of work hours, roughly two hours a week on a forty-hour schedule) is the average of a population in which a quarter of users save nine or more hours a week and the rest save almost nothing. Run the arithmetic: a quarter of users saving nine hours apiece already accounts for about 2.25 hours per person, which is essentially the entire headline average. BCG’s 2025 AI Radar surveyed 1,800-plus executives and found that fewer than one-third of companies have managed to upskill even a quarter of their workforce on AI. Their headline finding was not “train more.” It was the opposite: their leading firms run on a “10-20-70” principle in which 10% of effort goes to algorithms, 20% to data and technology, and 70% to people, processes, and cultural transformation. The training inside that 70% is a small piece. The bulk is workflow redesign and operating-model change.

The pattern across every credible 2025 and 2026 dataset is the same. The companies extracting value are not the ones with the most enablement hours delivered. They are the ones that picked a small number of workflows, redesigned them, and assigned the work to people who were going to do it well no matter what.

Curriculum is downstream of the gap, not the cause of it.

What’s causing the gap

Three things, in order of how much budget should go to fixing each.

Disposition. Roughly the top decile of any role is willing to delegate work to a model, iterate on a wrong answer, and treat the chat tab as a workspace rather than a search engine. That posture is not taught in a session. It is a temperament. People with that temperament figured the tool out in their first week. People without it never get there, regardless of the curriculum. Guide 2 covered this in detail. Nothing about it has changed in six months. The disposition gap is real, it is durable, and it is the dominant variable.

Workflow fit. Most of the broad middle of your org is not refusing to use AI. They are dutifully opening it once a week, asking it to summarize a document, copying the output into an email, and closing the tab. They get marginal value because their workflow has not changed. They are using a 2026 tool to do their 2022 job. The model is doing the part of the work it can see, which is a tiny part. The other ninety percent of the work is in steps the user never thought to surface to the model because the workflow was not designed to surface anything. This is not a training problem. It is a process-design problem.

Manager signaling. Your director-level managers are not visibly using AI. They are not pasting drafts into their team channel and saying “here is the first cut Claude gave me, fix it.” They are not asking in their one-on-ones what the report’s prompt setup looks like. They are not modeling the behavior. In the absence of that signal, the team correctly reads the situation as “the official line is to use AI, but the actual review and promotion incentives have not changed.” So nothing changes. Adoption of any new working practice in a knowledge-work org tracks the practices of the manager two layers up, not the official policy. AI is no exception.

The three failure modes interact. A leader without disposition cannot sense the workflow that needs redesign. A redesigned workflow that the manager does not visibly use does not stick. A manager who signals usage in a workflow that was never redesigned looks like theater within two weeks. You have to fix all three. Training is somewhere on the list, but not in the top three.

The three populations of non-users

There is one more frame that helps before the prescription. Your idle seats are not a single population. They are three.

The won’t. People with the wrong disposition for this tool, in their role, at this stage of their career. They are doing fine work. They will not be your AI leverage story. Stop trying. Take their seat back, pocket the money, redirect it.

The can’t. People in roles where current AI tools genuinely do not have product-market fit. The relational sales lead in a six-quarter cycle. The senior litigator. The field service tech. The line worker. Guide 2 named these. They are not the gap. They are the floor.

The won’t-bother. People with the disposition, in a role with fit, who have looked at the approved tool, found it inferior to what they used at home, and decided it is not worth the effort to swim against the procurement current. This is the only one of the three groups that responds to anything you do this quarter. They are also the largest of the three. They are your reallocation pool.

If you cannot tell which of your idle seats falls into which bucket, you do not have an adoption problem yet. You have a visibility problem, and the audit from guide 3 is how you fix it.

What moves the number

Five moves. Order matters.

Kill the bad approved tool

If the audit shows that your headline AI tool has 4% paid penetration and a negative NPS for nine straight months, and that your power users are paying out of pocket for something else, the tool is not the floor of your AI program. It is the cause of your adoption problem.

Every additional month it sits in front of your broad middle as the “official AI” is a month they spend concluding that AI does not work. They are not wrong. The tool they were given does not work for the job they have. They are correctly inferring its quality from the experience. They will then resist the next tool you put in front of them, because they have learned, accurately, that your IT-procured “official” anything is downstream of a vendor relationship, not of their actual workflow.

The fix is not a memo. The fix is to switch the default: make the tool your power users have already chosen the approved one. Move the seats. Eat the procurement awkwardness. The shadow AI line in your expense report is the user research. Act on it.

Fund the disposition

Take the dollars freed up from the seat audit and concentrate them on the people in your org who already have leverage. Give them the higher tier. Give them the API budget. Give them a small operating budget for tools and prompt libraries, and time on the calendar. Tell them, in writing, that the expectation is they will produce reusable artifacts (prompts, workflows, scripts, internal tools) that the rest of their team can run.

This is the inverse of the usual move. The usual move spreads the budget evenly across a broad cohort and produces a flat usage curve. The right move concentrates it on the steep part of the curve, because the steep part is where the leverage propagates from. Your top decile becomes the channel through which good practice reaches the broad middle. They become that channel because they are doing the work and the work is now visible. No curriculum will produce that effect. A few people with budget and a mandate will.

Redesign three workflows, not thirty

Pick three workflows. Three. Not an enterprise-wide process re-engineering effort. Three specific recurring workflows in three different functions, each consuming meaningful weekly time, each currently bottlenecked at a step a model can plausibly do.

Examples that work. Monthly close reconciliation in finance. Account research and call prep in sales. First-draft contract redline in legal. Internal report writing in operations. Customer ticket triage and response drafting in support. Pick yours.

For each, assign your most leveraged person on that team to redesign the workflow with AI in the loop, on a thirty-day clock, with a specific output: a written description of the new process, the prompt and tool stack used, and the new cycle time compared to the old. This is not a training deliverable. It is a process-engineering deliverable. The training falls out of it as a side effect, because the team running the workflow learns the new shape by doing the new shape.

BCG’s data on this is direct. Companies that concentrate on an average of 3.5 use cases see 2.1 times the ROI of companies that spread themselves across an average of 6.1. The discipline is depth, not breadth. Three workflows redesigned to completion will move your usage numbers more than thirty workflows lightly touched.

Make the manager signal louder than the policy

Your directors and VPs need to be visibly, repeatedly using AI in front of their teams. Not “I support this initiative” emails. Actual artifacts. The first draft of the strategy memo, with a note that it is a first draft from Claude. The market analysis with the prompt attached. The “here is what I asked the model and here is where it got it wrong” post in the team channel.

This sounds soft. It is the most load-bearing move on the list. Adoption of any new working practice in a knowledge-work organization is set by the manager two layers above the IC. If your director uses AI visibly, the team uses AI. If your director does not, the team does not, and no amount of policy will overcome that signal because the policy is an abstraction and the manager’s behavior is a concrete prediction of what gets rewarded at review time.

The implementation move is small and uncomfortable. Sit down with your top two layers of management. Tell them, by name, that visible AI use in their own workflow is now part of how you evaluate their leadership over the next two quarters. Not their team’s usage stats. Their own. Then check.

Run the quarterly audit, and act on it

The seat-level audit from guide 3 is not a one-time exercise. It is the operating cadence of an AI program that works. Every quarter, you pull the report. You sort by activity. You kill the bottom third of seats. You redirect the savings. You ask the top decile what they need. You act on the answer within thirty days.

This sounds like an obvious operations habit. It is. Almost no one does it. The reason is that the first time you do it, you have to take a tool away from someone who said in a survey that they “find AI useful.” The audit shows they have not opened it in sixty days. The survey was polite. The audit is the truth. Choose the audit. The second quarter is much easier than the first.
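If it helps to see the cadence as a concrete artifact, here is a minimal sketch of the triage step in Python. It assumes a hypothetical seat-usage export with columns seat_email, actions_last_90d, and days_since_last_active; the file name, column names, and thresholds are illustrative assumptions, not any particular vendor’s schema or a prescription from this guide.

```python
import csv

# Illustrative thresholds -- tune them against your own audit from guide 3.
IDLE_DAYS = 60        # "has not opened it in sixty days"
TOP_FRACTION = 0.10   # the top decile you fund and ask first


def quarterly_triage(path):
    """Sort seats by activity, flag the idle bottom third for reclaim,
    and list the top decile to sit down with."""
    with open(path, newline="") as f:
        seats = list(csv.DictReader(f))

    # Most active first.
    seats.sort(key=lambda s: int(s["actions_last_90d"]), reverse=True)

    n = len(seats)
    top_decile = seats[: max(1, int(n * TOP_FRACTION))]
    bottom_third = seats[n - n // 3:] if n >= 3 else []

    return {
        # Power users: ask what they need, deliver within thirty days.
        "fund_and_ask": [s["seat_email"] for s in top_decile],
        # Bottom third by activity AND idle past the threshold: reclaim.
        "reclaim": [
            s["seat_email"]
            for s in bottom_third
            if int(s["days_since_last_active"]) >= IDLE_DAYS
        ],
    }


if __name__ == "__main__":
    buckets = quarterly_triage("seat_usage_export.csv")
    print(f"Seats to reclaim this quarter: {len(buckets['reclaim'])}")
    print(f"Power users to sit down with: {len(buckets['fund_and_ask'])}")
```

The script is the easy part. The uncomfortable part is acting on the reclaim list within thirty days.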

The five moves

If you screenshot one thing from this guide, make it this list.

  1. Kill the bad approved tool. The “official” AI that your broad middle uses once and gives up on is teaching them that AI does not work. Replace it with the one your power users already chose.
  2. Fund the disposition, not the average. Concentrate budget on the top decile. Give them tools, API, and a mandate to produce reusable artifacts. They are your distribution channel.
  3. Redesign three workflows. Not thirty. Three specific recurring workflows, one per function, thirty-day clock, written outputs. Depth beats breadth at 2.1 to 1.
  4. Set the manager signal from the top. Directors and VPs must be visibly using AI on their own work. Make this part of how you evaluate them. Their behavior is the policy.
  5. Audit and act, every quarter. Kill the idle seats, fund the active ones, ask the top decile what they need, deliver in thirty days.

Notice what is not on the list. Mandatory training. AI champions networks with no authority. Centers of Excellence. Enablement modules. AI-fluency certifications. None of these have moved a usage curve in any company I have looked at. They produce attendance and the appearance of action. They do not produce adoption.

This does not mean training has zero role. Once the first three workflows are redesigned and the manager signal is set, a tight, role-specific, hands-on session for the team running that new workflow is useful. Forty-five minutes, not six hours. Run by the person who redesigned the workflow, not by an external vendor. Anchored to the artifact, not to the abstract concept of “prompt engineering.” That is training that works. It is also a tenth the cost of the program your HR head proposed.

What to do Monday morning

One thing.

In your top-performing team, identify the single recurring workflow that consumes the most weekly time and is currently bottlenecked at a step a model can plausibly do. Pick the most leveraged person on that team. Hand them the workflow, a thirty-day clock, the budget for whatever tool they need, and a clear deliverable: a written description of the redesigned process, the prompt and tool stack, and the new cycle time.

Tell them you will personally walk through their result with them on day thirty. Then do it.

You will get one of two outcomes. Either the workflow is meaningfully faster, in which case you have your first internal case study, your first piece of reusable IP, and your first concrete data point for the rest of the org. Or it is not, and you have learned something specific about where the current generation of tools does not yet have fit for your business. Both outcomes are more valuable than another quarter of training attendance reports.

The point of an adoption program is not to spread AI across every desk. It is to find the workflows where AI changes the work, redesign them, and let the manager signal carry the practice across the rest of the org. The training you would have run will then either become unnecessary or finally have something concrete to teach.