Selecting Talent

TL;DR. Selecting AI talent is not about scanning resumes for tool names or asking candidates to demo a chatbot. It is about finding one champion per function whose disposition makes the rest of your investment work. The right champion elevates a team. They also make it observable, within sixty days, who on the existing team is never going to engage. This guide covers how to identify that champion, how to hire for the disposition, and how to use them as the assessment instrument for everyone else.

You are about to make a hiring decision, or a promotion decision, or a quiet decision about who to staff on the work that matters next year. AI is in the conversation. Either the role explicitly mentions it, or your boss asked, or a recruiter slid you a candidate with “GenAI” in their title.

What you do next will look like a hiring decision and will actually be an org-design decision. Hire one kind of person and the function compounds. Hire the wrong kind and you will spend the next two years running training programs that produce no observable change, while the people who could have transformed the function leave for a competitor that gave them air cover.

This is the guide that gets that decision right.

How most AI hiring goes wrong

Open ten current job descriptions for senior roles in your industry. Count how many require “experience with GenAI tools,” “familiarity with prompt engineering,” or “Copilot certification.” Most of them. Now look at how the requirement is phrased. It is in the same paragraph as “proficient in Microsoft Office.” That is the level of seriousness most orgs are bringing to this.

The corresponding interview process matches. A candidate gets asked if they have used ChatGPT. They say yes. They are asked to describe a time they used it. They describe summarizing a document. The interviewer nods. The box is checked.

This produces hires who pass the AI screen and behave, on the job, exactly like every other hire. They open the chat tool occasionally. They do not change how the function operates. Six months later the team’s AI usage stats look fine, the work product looks the same, and leadership cannot point to a single thing that is now possible that was not possible before.

Meanwhile, the one person on your existing team who would have transformed the function is sitting in a mid-level seat. They have already automated three pieces of their own job. They built a script their team relies on. They are not on the slate for the next promotion because they do not have the title that matches the JD. You did not hire them, because they were not on the market. You will not promote them, because their wins do not show up in the standard review template.

That is the actual problem. AI hiring as currently practiced is selecting against the trait that matters.

Disposition, not skill

The reason is the same one named in Recognizing Leverage. The 6x engagement gap between AI power users and everyone else is not a knowledge gap. It is a disposition gap. The trait that produces leverage is a temperament: willingness to delegate work you used to own, willingness to iterate with a literal collaborator, willingness to treat a wrong answer as a prompt to refine instead of evidence the tool is broken. Comfort being in the loop without being the bottleneck.

Tool fluency is a week of practice. The disposition is something closer to a personality trait. You can reliably select for it. You cannot reliably teach it.

This means your hiring filter has to do two things your current process does not. It has to surface the disposition, which a resume cannot show and a behavioral interview rarely catches. And it has to ignore the credentials that look like a proxy for it but are not. The “prompt engineer” job title boom of 2023 collapsed into the certification grift of 2024 and 2025 for a reason. Neither produced the trait. Both gave hiring managers a false signal that let them feel they were screening for AI capability while actually screening for the willingness to put a badge on a resume.

The same trap applies inside your walls. Whoever volunteered for the AI working group is not necessarily your champion. Champions are often quiet. They are usually shipping. They have rarely been given air cover, because the kind of work they are doing is not yet on the dashboard.

What a champion looks like

Concrete signals. These hold whether the person is a candidate, a current employee, or a referral.

A champion has already built something nobody asked for. A script that automates a piece of their own job. A prompt template their team uses. An internal tool, a dashboard, a checklist, a workflow document, a spreadsheet that does something the team used to ask BI for. They built it before there was a budget for it, before there was a working group, before anyone signed off. They built it because the friction was bothering them.

A champion keeps the chat open while they work. They do not open it like a search engine when they get stuck. It is one of the surfaces they think on. They will tell you what they have it set up for, what custom instructions they wrote, which model they switched to last month and why. The answer is specific and slightly opinionated.

A champion has a story about a wrong answer. They do not tell you the model is amazing. They tell you about the time it confidently invented a citation, and what they did next. That second half is the signal. They corrected, they refined, they kept going. They did not write the tool off. They calibrated their trust.

A champion is doing the work of someone who is no longer in the org chart. The marketer drafting in their own voice instead of briefing an agency. The analyst running the ad-hoc that used to take BI two weeks. The engineer closing tickets that belonged to three of them. They have stopped asking for headcount where they used to ask for it.

A champion is not necessarily senior. They are often not the loudest person on the team. They are rarely the one with “AI” in their job title. They are sometimes the person you almost passed over because their resume looked unremarkable for the level. They are almost always the person whose teammates, when asked, say “oh, you should talk to them about this.”

A champion is not the consultant you brought in. The consultant has the vocabulary. The champion has the artifacts.

What to ask in interviews

Throw out the AI fluency interview as currently designed. “Have you used Copilot” is a yes-or-no question that screens for nothing. “Walk me through how you would use AI for this role” is a hypothetical that rewards vocabulary over experience.

Ask three questions instead.

One. “Walk me through the last thing you built or changed in your own workflow because of an AI tool. What was it, what does it do for you now, and what was the version before it.” You are listening for specificity. The answer should include a tool, a workflow detail, a before-and-after, and ideally a slight irritation in their voice about the part that still does not work. If the answer is generic (“I use it to summarize emails”), they have not done the thing. If the answer is precise and slightly nerdy, you have a candidate.

Two. “Tell me about a time the model gave you a wrong or bad answer. What did you do.” You are listening for the second half. The candidate who says “I stopped using it for that” is telling you they hit a single wall and bounced. The candidate who tells you how they reframed the prompt, or fed in an example, or switched models, or just kept iterating until it worked is telling you they have a working relationship with the tool. That is the trait.

Three. “If you had a budget to spend on AI tools, time, or people in this role, what would you spend it on first, and what would you stop doing because of it.” You are listening for an opinion. A real champion has one. They have already thought about which seat is the wrong tier, which workflow is the next one to automate, which task they would gladly delegate. The candidate who turns this into “it depends on the strategy” is not in the cohort.

Pair the three questions with one structured exercise. Give the candidate a real, non-confidential piece of work the role would actually involve. A messy data set, a draft document that needs analysis, a strategy memo that needs a counterargument, a pile of customer feedback that needs synthesis. Tell them they may use any AI tool they want. Give them ninety minutes. Watch what they do. The strongest signal is not the output. It is the workflow. Did they pick a tool with intent. Did they iterate. Did they course-correct when an early pass was thin. Did they end up with something better than a person without tools could have produced in the same time. Did they tell you, unprompted, what they would do differently with more time.

What to ignore. Certifications. The AI bootcamps and short-form credentials that proliferated over the last two years are, with rare exceptions, a tax on the credentialed and a signal of nothing.

“Prompt engineer” as a job title. The role barely existed in 2023, peaked in 2024, and is now being quietly absorbed into every other role, which is what should have happened from the start.

Portfolio pieces produced by an agency or a course. The work was done by someone else; the candidate was a client.

Vendor logos. “Used Copilot at a Fortune 500” tells you they had a license. It does not tell you they opened it.

The hard part of this filter is that it will eliminate candidates who look strong on paper. A strong AI hire often looks like a generalist with a slightly weird side project. Hire them anyway.

Assessing your current team

The same filter applies internally, with one inversion. You are not selecting from a slate of candidates. You are looking for the one or two people per function who already meet the bar, and giving them explicit air cover to do more of what they are already doing.

This is the leverage move. One champion per function, surfaced and supported, will produce more downstream change than a quarterly training program for the whole team. They will rebuild a workflow other people then copy. They will show, by contrast, what is possible. They will quietly raise the standard of what gets shipped without you having to mandate it.

They will also make assessment of everyone else trivially observable, within about sixty days. Watch the diffusion. Once a champion is producing visible artifacts (a script, a prompt library, a workflow that closes a recurring task in a fraction of the previous time), the rest of the team sorts itself into three groups. The first group picks up the artifacts and adapts them. They are your second wave. Fund them next. The second group asks the champion how to do the thing, takes the answer, and tries it once. They are your trainable middle. Most adoption work belongs here. The third group quietly avoids the champion. They do not adopt the artifacts. They do not ask the questions. When the work that used to be theirs becomes someone else’s faster output, they reframe it as a quality concern.

That third group is your honest assessment problem. Don’t run a skills test. Watch the diffusion for sixty days and take notes.
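If it helps to keep those sixty days of notes consistent across managers, here is a minimal sketch of the observation log as code. Everything in it (the event names, the class, the thresholds for each group) is a hypothetical illustration of the sorting described above, not a measurement instrument.

    from collections import Counter
    from dataclasses import dataclass, field

    # Hypothetical event labels for what a manager observes during the
    # sixty-day window. The champion's artifacts are the reference point.
    ADOPTED = "adopted_artifact"    # picked up an artifact and adapted it
    ASKED = "asked_and_tried_once"  # asked the champion how, tried it once
    AVOIDED = "avoided_champion"    # routed around the champion or the artifact

    @dataclass
    class DiffusionLog:
        events: list[str] = field(default_factory=list)

        def note(self, event: str) -> None:
            self.events.append(event)

        def cohort(self) -> str:
            counts = Counter(self.events)
            if counts[ADOPTED] >= 2:   # repeat adaptation: the second wave
                return "second wave: fund next"
            if counts[ASKED] >= 1:     # engaged at least once: trainable
                return "trainable middle: most adoption work goes here"
            if counts[AVOIDED] >= 2:   # consistent avoidance
                return "avoider: your honest assessment problem"
            return "insufficient signal: keep watching"

The thresholds are placeholders. The point is that the three groups fall out of counting observed behavior, not from asking people how they feel about AI.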

Who won’t get there

A meaningful fraction of your team is not going to develop AI fluency, regardless of training, time, or tools. Naming that out loud is the leadership job most leaders avoid because it sounds like the wrong thing to say in a town hall.

It is not a firing recommendation. It is a staffing one. Do not put the AI-leveraged future of the function on people who, by demonstrated behavior over a reasonable window, are not going to be part of producing it. Put them on the work where their experience compounds without requiring the disposition. Promote them on the work they are good at. Do not give them the new operating model to design.

The signals that someone is not in the cohort are quieter than the champion signals, and worth being honest about. They have had a license for six months and still talk about AI in the future tense. They escalate small wrong answers as evidence of a categorical problem with the tool. They have not opened the chat in the last week, and when reminded, they explain why their work is the kind that does not benefit. They route around colleagues who are using it visibly. They reach for the previous version of the workflow when given the choice. None of these are character flaws. They are dispositional facts. Treat them as such.

The trap to avoid: confusing seniority with disposition in either direction. Some of your most senior people will turn out to be your strongest champions. Some will turn out to be the ones routing around the chat tool. Same for your most junior. The trait does not correlate with tenure in either direction, and assuming it does is one of the more expensive mistakes a leader can make right now.

The signals

Champion signals. A person is a champion if you can observe at least three of these.

  1. They have built something nobody asked for, in their own workflow, because of an AI tool.
  2. They keep the chat open while they work, and can describe specifically how they have it configured.
  3. They have a story about a wrong answer, and the story ends with them iterating, not bouncing.
  4. They are doing work that used to belong to someone else, in volume.
  5. Their teammates name them, unprompted, as the person to ask.
  6. They have an opinion about the next tool, plan, or workflow change, and the opinion is specific.

Non-champion signals. A person is not in the cohort if more than one of these is consistently true after six months of access.

  1. They talk about AI in the future tense.
  2. They cite a single bad answer as a reason the tool does not work for their role.
  3. They have not opened the tool in the last week, in a role where the tool plainly applies.
  4. They route around the colleagues who are using it visibly.

These are observation questions, not survey questions. Do not send a form. Watch the work for sixty days.
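For the same reason, here is a minimal sketch of the two rubrics as a single function, if you want the thresholds applied the same way across functions. The signal strings and the function name are hypothetical shorthand for the lists above; the thresholds (three or more champion signals, more than one non-champion signal after six months of access) come straight from the text.

    # Hypothetical shorthand for the six champion and four non-champion signals.
    CHAMPION_SIGNALS = {
        "built_unasked", "chat_open_and_configured", "iterated_on_wrong_answer",
        "absorbed_someones_work", "named_by_teammates", "specific_opinion",
    }
    NON_CHAMPION_SIGNALS = {
        "future_tense", "one_bad_answer_generalized",
        "tool_unopened_this_week", "routes_around_users",
    }

    def assess(observed: set[str], months_of_access: int) -> str:
        # Champion: at least three of the six signals observed.
        if len(observed & CHAMPION_SIGNALS) >= 3:
            return "champion: give air cover"
        # Not in the cohort: more than one non-champion signal,
        # consistently true after six months of access.
        if months_of_access >= 6 and len(observed & NON_CHAMPION_SIGNALS) > 1:
            return "not in the cohort: staff where experience compounds"
        return "undetermined: keep observing"

It is a note-taking aid for the observer, not a form to send.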

What to do Monday morning

One thing.

Identify your champion candidate in each function. Not every function will have one yet; that is fine, and informative. Where you can name them, get them on your calendar within the week. Tell them you have noticed what they are doing. Give them three things, in this order: explicit air cover to keep doing it, a small budget to expand it (a better seat tier, an hour of their week officially carved out, a tool they have been wanting), and a public expectation that the rest of the team will adopt their artifacts.

Where you cannot name a champion candidate in a function, that is your hiring brief. Not “find someone with AI experience.” Find someone who has built things nobody asked for. Pull the three interview questions above into the next loop you run for that function, regardless of whether AI is in the JD.

The functions where you can name a champion are the ones that will compound first. The functions where you cannot are the ones that need a hire. That distinction, drawn honestly across your org chart on a single page, is your AI talent strategy.

The rest is execution.