TL;DR. A small number of people in your org are already producing two to ten times their previous output with AI. You probably can’t name them. The reason your AI program looks flat is that you’re measuring seats and training hours instead of looking for the work that’s suddenly getting done. This guide is how to spot it.
You have an AI budget. You have seats deployed. You have a training plan, or someone is building one. You have, somewhere on a slide, a number for “AI productivity gains” that came from a vendor or a consultant.
You also, almost certainly, can’t name the three people in your org getting the most leverage from AI right now. That’s the actual problem this guide solves.
Force multiplication is happening inside your company. It’s unevenly distributed, mostly invisible to leadership, and often actively hidden by the people producing it because they don’t want their output baseline reset. The job is to find it, fund it, and copy it. Before you can do any of that, you have to recognize it.
Where most leaders look
The default executive dashboard for AI looks like this. Number of seats licensed. Percentage of seats “active” (logged in within 30 days). Hours of training delivered. Survey results on perceived productivity gains. A vendor-supplied chart showing usage curves over time.
Every number on that dashboard is a proxy for the thing you actually want to know. None of them measure it. You can have 100% seat activation, full training compliance, and rave survey scores while producing zero observable change in what your org actually ships.
The reason is that AI leverage isn’t a usage statistic. It’s a step change in the relationship between one person and a previously immovable bottleneck. It looks like the financial analyst who used to spend two days reconciling vendor data and now spends ninety minutes. It looks like the marketing manager who used to brief an agency for a campaign and now ships the campaign herself. It looks like the engineer who closed a five-person ticket backlog over a long weekend.
These people don’t show up on the dashboard. They’re noise inside the activation rate. The vendor’s adoption chart is the average of the entire seat pool, which is mostly drag. Your training metrics measure attendance, which has roughly nothing to do with whether the attendee then went and changed their job.
This isn’t a measurement edge case. The Federal Reserve’s most recent labor data puts the average AI productivity gain across all users at 5.4% of work hours, about two hours a week on a forty-hour schedule. Underneath that average, OpenAI’s own engagement data shows a 6x gap between power users and everyone else. Anthropic’s internal employee study finds Claude in 60% of work and a 50% perceived productivity boost, with the largest gains concentrated in a minority of roles. The headline averages are real. They’re also hiding the only thing that matters.
The leverage isn’t distributed across the seat pool. It’s concentrated in a small number of people in a small number of workflows. Your dashboard is averaging it into invisibility.
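The arithmetic behind that invisibility is worth seeing once. Here is a minimal sketch; the seat counts and multipliers are hypothetical, chosen only to illustrate the shape of the problem, not drawn from any of the studies above:

```python
# Hypothetical seat pool: 1,000 licensed seats.
# A small cohort of power users gets a large productivity multiplier;
# everyone else gets a modest one. All numbers are illustrative.
power_users = 80          # ~8% of the pool
others = 920
power_gain = 3.0          # 3x output on their core workflow
other_gain = 1.05         # 5% improvement for everyone else

# The seat-pool average: the number a dashboard reports.
avg_gain = (power_users * power_gain + others * other_gain) / (power_users + others)
print(f"average multiplier: {avg_gain:.2f}x")   # 1.21x -- looks flat

# Share of the total *extra* output produced by the power-user cohort.
extra_from_power = power_users * (power_gain - 1)
extra_from_others = others * (other_gain - 1)
share = extra_from_power / (extra_from_power + extra_from_others)
print(f"share of gains from power users: {share:.0%}")  # 78%
```

Under these assumptions, an 8% cohort produces roughly three quarters of all the gains, and the dashboard shows a 1.21x average that reads as noise.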
Why training doesn’t surface it
The reflex when AI numbers look flat is to schedule training. This is a well-rehearsed organizational move. It’s also, in this case, almost always the wrong one.
The 6x engagement gap between power users and everyone else isn’t a knowledge gap. It’s a disposition gap. The people getting leverage from AI tools share a temperament. They’re willing to delegate tasks they used to own. They iterate quickly, treating the tool as a collaborator rather than getting frustrated with it. They treat a wrong answer as a prompt to refine, not as evidence the tool is broken. They keep the chat open while they work instead of opening it like a search engine when they get stuck.
You can’t teach this in a one-hour session. You can barely teach it in six months of coaching. People with the disposition pick it up in their first week with the tool. People without it never get there, no matter how many hours of curriculum you put in front of them.
The corollary is uncomfortable. Your training program is producing a wide, shallow capability gain in roles that were never going to drive meaningful leverage anyway. The actual leverage is being produced by maybe 5 to 15 percent of your workforce, mostly in spite of your program rather than because of it. They figured it out. They’re not waiting for the rollout. They’re quietly outproducing their peers, and in many cases not advertising it.
The instinct to train the broad middle isn’t wrong as a long-term play. It’s wrong as a way to find leverage right now. Right now, you find leverage by finding the people already producing it.
Leverage by role
Here’s the operational answer. Concrete and role-specific. Look for these signatures.
Engineering. This is the easiest case because the productivity step change is now well documented. An engineer with leverage is using a coding agent (Claude Code, Codex, Cursor, Copilot in agent mode) to do work that would previously have required a full sprint or a team. Signals: pull requests landing in volumes that previously belonged to the entire team. Old internal tools getting rewritten on weekends. Manual data pipelines being replaced with scripts that nobody asked for but everyone is now using. Backlogs that have been “next quarter” for two years getting closed. The number to look at is throughput per engineer, not lines of code. The most leveraged engineer in your org is probably shipping work that used to take three of them.
Sales. A salesperson with leverage is using AI to do account research and call prep that an SDR or BDR used to do. They are running a personal research workflow across LinkedIn, 10-Ks, news, and CRM history before every meaningful call. They are drafting follow-ups in their voice, not in the CRM’s templated voice, in a fraction of the time. Signals: meeting prep notes that are noticeably better than the rest of the team. Higher meeting-to-opportunity conversion. A pipeline that looks like the rep is covering a territory thirty percent larger than peers without working longer hours.
Marketing. A leveraged marketer is shipping campaigns end to end that previously required a brief, an agency, two rounds of revisions, and a six week timeline. They’re producing first drafts of copy, ad creative variants, landing pages, and email sequences in hours, not weeks. They’re not producing more bad work. They’re producing the same volume of better-targeted work much faster, or the same quality of work in a tenth of the time. Signals: campaigns shipping outside the agency cadence. Internal tools or microsites going live that nobody briefed an agency for. The marketing team’s contractor spend going down quietly.
Operations and finance. This is where the largest dollar value of leverage hides, and where it’s hardest to spot from a dashboard. A leveraged ops or finance person is automating reconciliations, building one-off analyses on demand instead of waiting on BI, and rewriting reports that used to take a week into something that runs in the background. Signals: month-end close time dropping. Ad hoc analyses turning around in hours instead of days. The “we’ll need to pull that and get back to you” response disappearing from finance reviews. A data engineer or BI analyst noticing that some operator is no longer in their queue.
Legal. A leveraged legal team member is using AI to do first-pass contract review, redlining, and clause comparison against a precedent library, with a human partner doing the final pass. They’re drafting routine agreements (NDAs, statements of work, vendor terms) in minutes. They’re running risk diligence across hundreds of documents in the time it used to take to read ten. Signals: contract turnaround time falling. Outside counsel spend on routine matters going down. Faster sign-off on standard deals. A general counsel who’s suddenly able to take on strategic work because the queue is finally clear.
Customer support. Leverage in support looks less like deflection (the vendor pitch) and more like agents handling 50 to 100 percent more tickets at higher CSAT because AI is doing the research, drafting the response, and surfacing relevant precedent in real time. Signals: senior agent capacity opening up. Escalation rates dropping without a CSAT hit. Knowledge base entries getting written as a side effect of agents resolving novel tickets instead of as a separate quarterly project.
Executive and director-level work. This is where leverage is most often present and least often noticed because the work itself is opaque. A leveraged exec is using AI for first-draft strategy memos, board prep, market analysis, and synthesis of long internal documents. They’re walking into meetings with a tighter point of view because they had a model interrogate their own arguments before the meeting. Signals: faster turnarounds on memos. Sharper questions in reviews. The disappearance of the “I’ll get back to you with the analysis” response. The quiet reduction in dependence on the strategy team or external consultants for routine synthesis work.
Product management. A leveraged PM is building working prototypes, internal tools, and data analyses without filing engineering tickets. They’re writing PRDs that already include code samples and edge case enumeration because they used a coding agent to think through the implementation. Signals: prototypes appearing in Slack instead of mockups. Internal tools shipping that replaced SaaS line items. PMs who are suddenly more credible in technical reviews.
The pattern across roles is the same. The leveraged person is doing work that used to require someone else, or doing work that used to be deferred indefinitely. The output is observable in deliverables, not in tool usage stats.
Where leverage isn’t, yet
Be honest about this. The “AI is everywhere” pitch papers over real gaps.
Highly tactile and physical work does not get force multiplied by current tools. Field service, manufacturing line work, healthcare delivery, food service, construction. AI is doing things at the edges (scheduling, documentation, training material), but the core job is unchanged.
Highly relational work has marginal gains at best. Therapists, executive coaches, senior account managers in long sales cycles, M&A bankers in late-stage deals. AI helps with prep and admin. The work itself is human in a way the model can’t replicate.
Specialized expert work with weak feedback signals is mixed. Senior researchers, deep technical specialists, certain creative roles. The model can accelerate parts of the workflow, but the expert still has to evaluate the output, and the evaluation is most of the job.
Anything that requires being right rather than plausible. Final-pass legal advice, audit signoffs, regulatory submissions, anything that becomes evidence. AI can draft. The professional still has to be the one whose name is on it.
This isn’t an argument against using AI in those roles. It’s a warning against expecting the engineering-style step change there. If you go looking for 3x productivity gains in your senior litigators, you won’t find them, and you’ll conclude AI is overhyped. The right conclusion is that the leverage is somewhere else and you’re looking in the wrong place.
The signals
If you remember nothing else from this guide, remember the signals. These are what to look for in your own org over the next thirty days.
A person is getting AI leverage if you can observe at least one of these:
- They are doing work that used to require another person. A marketer who no longer briefs an agency. An engineer who no longer needs the data team. A finance lead who no longer needs the BI queue.
- They are doing work that used to be deferred indefinitely. Backlog items getting closed. “We should clean that up someday” things suddenly cleaned up. Reports nobody had time to build appearing on Slack.
- Their cycle time on a recurring task has dropped by more than half. Not 10% faster. Half or better. If it’s only marginally faster, it’s not leverage. It’s a small efficiency gain.
- Their output baseline has shifted in a way peers haven’t matched. Same role, same tenure, dramatically more shipping. Not working longer hours. Not cutting corners. Just producing more.
- Other people are starting to depend on their AI-built artifacts. A script they wrote. A prompt template the team uses. An internal tool that was supposed to be a one-off. This is the strongest signal because it means leverage is starting to spread organizationally.
- They’ve stopped asking for headcount in places they used to ask for it. Quietly, without being asked. Often the cleanest financial signal. The hiring request that didn’t come in.
If you’re looking for ROI dashboards, you’ll miss all six. If you’re looking for these signals in your weekly business reviews, you’ll start finding them within a week.
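The baseline-shift signal in particular is something you can check against data you already have. A minimal sketch, assuming only that you track some per-person weekly deliverable count (merged PRs, shipped campaigns, closed tickets); the names and numbers below are hypothetical:

```python
# Flag people whose recent output baseline has shifted relative to
# their own history. "Output" is any per-person weekly deliverable
# count you already track; this data is illustrative only.
from statistics import mean

weekly_output = {
    # person: last 12 weeks of deliverable counts (hypothetical)
    "ana":   [4, 5, 4, 4, 5, 4, 9, 10, 11, 9, 10, 12],
    "ben":   [5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    "carol": [3, 3, 4, 3, 3, 4, 4, 3, 4, 3, 4, 4],
}

def baseline_shift(series, split=6):
    """Ratio of recent average output to the earlier baseline."""
    before, after = series[:split], series[split:]
    return mean(after) / mean(before)

# Flag anyone whose recent output is 2x+ their own baseline --
# the "cycle time dropped by more than half" threshold, inverted.
flagged = {person: round(baseline_shift(series), 2)
           for person, series in weekly_output.items()
           if baseline_shift(series) >= 2.0}
print(flagged)  # {'ana': 2.35}
```

This is deliberately crude: it won’t tell you why a baseline moved, only that it moved enough to be worth one of the thirty-minute conversations described below. The interview is the instrument; the query just tells you who to invite.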
What to do Monday morning
One thing.
Pick three people. Your guess at the three most leveraged AI users in your org. Not the loudest. Not the most senior. The three who actually seem to be shipping more than they used to. Get a thirty minute meeting with each of them.
Ask three questions:
- What are you using, and how do you have it set up?
- What work used to take you a day or a week that now takes you an hour?
- What would you be able to do if you had a budget to spend on tools, time, or people supporting this?
Write down the answers. Compare them to your current AI program. The gap between what those three people are doing and what your program is funding is the gap between the AI strategy you wrote and the AI strategy you actually have.
That gap is your roadmap. The rest of the program exists to close it.