Every AI tool your organization uses right now works the same way. A person has a task. They open the tool. They type a prompt. They get output. They close the tool. Maybe they come back in an hour. Maybe tomorrow. The AI does nothing in between.
Same model as a search engine. Nobody would call Google an employee. You use it, you close the tab, it produces nothing until you come back. That’s fine for search. It’s a strange way to deploy something that can draft reports, triage queues, review documents, and monitor pipelines.
The total leverage any reactive tool produces is bounded by the hours people spend in it. For the median employee in most organizations, that number is under thirty minutes a day. Your best people might sustain two or three hours. The rest of the day, the capability sits idle. Not because the tool can’t do more. Because no one asked it to.
This reframes the adoption gap. The usual explanation: the broad middle needs more training, better prompts, workflow integration. Partly true. But the deeper constraint is structural. A reactive tool only produces leverage during the minutes someone actively uses it. Training can increase those minutes. It can’t make them twenty-four hours.
The night shift
The change worth watching in Q2 2026 isn’t a smarter model. It’s that agents can now run without anyone opening anything.
Think of it as a night shift. You give the night shift a scope, a schedule, and a definition of done. They work. In the morning, the work is there. You didn’t stand over them. You didn’t initiate each task. You set the parameters once.
AI agents have crossed that line. Not theoretically. In shipping product, on paid plans, running on cloud infrastructure. A prompt attached to a schedule or an event trigger runs on cadence without a human starting a session. The leverage ceiling moves from “how often someone remembers to open the tool” to “how well someone, once, designed the workflow.”
That’s a different kind of constraint. And a much more tractable one for a manager to work on.
Three modes, all live
Scheduled work. Claude Code Routines shipped in April as a research preview across all paid plans.[1] A Routine is a saved prompt attached to a cron schedule. Point it at a repository, give it a cadence, and it runs unattended. Nightly issue triage. Weekly documentation freshness checks. Daily dependency audits. The agent reads the repo, does the work, and commits to a branch or opens a pull request.
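To make the shape of that concrete, here is a minimal sketch of what a Routine amounts to: a saved prompt, a cron cadence, a target repository, and a place for output to land. The field names are hypothetical, a model of the idea rather than the product's actual configuration format.

```python
# Illustrative only: a minimal model of what a Routine amounts to.
# Every field name here is hypothetical, not the actual Routines
# configuration format.
from dataclasses import dataclass

@dataclass
class Routine:
    name: str      # human-readable label
    repo: str      # repository the agent reads and writes
    schedule: str  # standard cron expression
    prompt: str    # the saved instruction the agent runs on each tick
    output: str    # where results land: a branch or a pull request

nightly_triage = Routine(
    name="nightly-issue-triage",
    repo="acme/widgets",
    schedule="0 2 * * *",  # every night at 02:00
    prompt="Label and prioritize issues opened in the last 24 hours; "
           "flag likely duplicates with a comment linking the original.",
    output="pull-request",
)
```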
That’s the engineering version. For non-code work, Cowork’s scheduled prompts do the same thing. Write a prompt, pick a cadence, and the agent runs it automatically. A marketing director’s weekly competitive analysis. A finance lead’s daily reconciliation check. The output is waiting when they sit down.
Event-driven work. The same Routines platform fires on GitHub events or webhooks. A new pull request triggers a code review. A deploy triggers a changelog update. A merged PR triggers a documentation check. The trigger isn’t a clock. It’s something happening in the environment that the agent responds to without being prompted.
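The shape of event-driven work is a mapping from things that happen to saved prompts. A sketch of that mapping follows; the event and action names come from GitHub's webhook vocabulary, but the dispatch function and the prompts are illustrative, not the Routines API.

```python
# Illustrative dispatch from GitHub webhook deliveries to saved prompts.
# Event and action names follow GitHub's webhook vocabulary; the
# function itself and the prompts are hypothetical.
def prompt_for_event(event: str, action: str, merged: bool = False) -> str | None:
    """Return the prompt a routine would run for this delivery, or None."""
    if event == "pull_request" and action == "opened":
        return "Review this pull request against the team style guide."
    if event == "deployment" and action == "created":
        return "Append an entry for this deploy to CHANGELOG.md."
    if event == "pull_request" and action == "closed" and merged:
        # GitHub delivers merged PRs as 'closed' with a merged flag set.
        return "Check whether docs/ still matches the merged change."
    return None
```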
Self-correction. A feature called Dreaming, released in early May, lets agents review their own past sessions overnight.[2] The agent finds recurring mistakes, curates its own memory, and runs better the next time. Harvey, a legal AI company, reported roughly 6x improvement in task completion after enabling it. The night shift gets better at the job without being retrained.
Setup for all three happens in a prompt editor and a schedule picker. The hard part is prompt design and cadence choice, which is management work, not engineering work.
What changes about adoption
The leverage concentration pattern in most organizations is that fifteen percent of users produce the large majority of the value. Adoption programs try to move the broad middle up the curve through training and workflow guides. That works slowly, when it works, because it asks people to change a daily habit indefinitely.
Proactive agents change the shape of this problem. A scheduled workflow doesn’t require the person who benefits from it to be an AI champion. It requires someone, a manager, a team lead, an ops person, to identify a recurring task, write a prompt, and set a schedule. After that, the agent produces value on cadence.
The champions still matter. They’ll still find creative, unstructured uses no one else thought of. But the recurring work, the Monday status pull, the weekly report, the daily queue triage, can run without anyone building a new habit. The question shifts from “how do we get everyone to adopt the tool” to “which workflows should run on a schedule.” The second question has clearer answers and a faster path to value.
The verifier constraint still holds
Everything said earlier about verification carries over. Proactive agents work best on workflows where correctness is cheap to check. A nightly code review against a style guide works because the CI pipeline is the verifier. A daily reconciliation works because the numbers tie out or they don't. A weekly competitive price scrape works because the output is a structured table you can glance at.
Workflows where a senior person has to read carefully before anyone trusts the output are harder to schedule. Not because the agent can’t do the work. Because nobody has designed a check that runs faster than the work itself. The same question applies: can two people on your team agree, mechanically, on whether the output is correct? If yes, schedule it. If no, the work is still designing the verifier.
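"Mechanically" can be taken literally. Here is a sketch of a verifier for the daily reconciliation case, assuming both sides export to CSV with an amount column; the file layout and the tolerance are assumptions, but the property that matters is real: the check is deterministic, so any two people who run it get the same answer.

```python
# A mechanical verifier for the reconciliation example. Deterministic:
# anyone who runs it on the same files gets the same answer. The CSV
# layout (an "amount" column) and the tolerance are assumptions.
import csv

def ties_out(ledger_path: str, bank_path: str, tolerance: float = 0.01) -> bool:
    def total(path: str) -> float:
        with open(path, newline="") as f:
            return sum(float(row["amount"]) for row in csv.DictReader(f))
    return abs(total(ledger_path) - total(bank_path)) <= tolerance
```

If the check passes, the agent's overnight run can be trusted by default; if it fails, a human looks. Designing that gate is the work.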
Something to carry
Most organizations already have a set of recurring tasks that run on a human schedule. The Monday morning status pull. The Friday close-of-week report. The daily support queue review. The weekly competitive scan. Each happens on a cadence because someone decided it should, and a person does it because a person was the only option.
That last part changed. The person who spends Monday morning compiling the status report could have the draft waiting when they sit down. The weekly competitive scan could run overnight and surface what actually changed. The recurring work that already has a cadence and a verifiable output is where proactive agents produce value fastest.
Three of those workflows are probably on someone’s calendar this week.
Footnotes
1. Claude Code Routines shipped April 14, 2026 as a research preview. Available on Pro, Max, Team, and Enterprise plans. Supports cron schedules, webhook triggers, and GitHub event triggers.
2. Dreaming entered research preview May 6, 2026 for Claude Managed Agents. Harvey, a legal AI company, reported approximately 6x improvement in task completion rates after enabling the feature.