Building an AI Policy

TL;DR. Your AI policy is too long, took too long, and is still a draft. While you wrote it, your people kept pasting client data into the free tier of whatever chat tool was open. The fix is a one-page policy that names approved tools, names off-limits data, names the owner, and ships in two weeks. Policy follows tools. Tools do not follow policy.

Somewhere in your company there is a forty-page AI governance framework. It is in version 0.7. Legal owns it. IT was supposed to review it last quarter. Someone from risk has comments. The committee meets again next month.

In the same company, your highest-output people have already chosen their tools. Some of them are paying out of pocket. Some of them are using a personal account on a work laptop. Some of them are pasting customer records into a model whose data terms nobody has read. They are not waiting for the policy. They cannot wait. The work is now.

Your policy is not protecting you from the risk you think it is. It is creating the risk you actually have.

Why most AI policies fail

The default move when an executive says “we need an AI policy” is to convene the committee. Legal, IT, security, HR, compliance, sometimes a line-of-business sponsor. Six months later there is a document. It defines artificial intelligence. It cites the EU AI Act and the NIST AI Risk Management Framework. It enumerates seven categories of risk. It includes a sign-off form. It is forty pages long, written in the voice of an outside law firm, and approved by everyone who will never use it.

Nobody reads it. Nobody can find it when they need it. The director who is about to upload a contract to a chatbot is not going to scroll PDF page nineteen to find out if she is allowed to. She is going to upload the contract.

The policy fails for three structural reasons.

It is written by people who do not use the tools, for people who do. The committee’s frame is risk classes. The user’s frame is “I have this document and this deadline.” The two never meet on the page.

It has no enforcement loop. Nobody is monitoring whether the policy is being followed. There is a signed acknowledgment in the LMS. That is the entire compliance program. An acknowledgment is not a control.

It is already obsolete. The vendor pricing changed last month. A new model launched yesterday. The data residency terms shifted. The forty-page policy cannot keep up because forty-page policies are not built to keep up. The cadence of the document is six months. The cadence of the market is two weeks.

The mechanism is simple. A long policy is a one-time artifact. The thing it is trying to govern is a continuous behavior. Static documents do not govern continuous behavior. Available tools do.

Shadow AI as a procurement signal

Roughly half your workforce is using AI tools your IT department has not approved. The exact number depends on which survey you read and how honestly your people answer. Treat any single statistic with skepticism. Treat the underlying fact as certain.

The reflex is to call this an employee compliance problem. It is not. It is a procurement problem with an employee-shaped symptom.

Here is the test. Walk into the office of someone using a tool you did not approve. Ask why. The answer is almost always one of three things. The approved tool does not exist yet. The approved tool is worse than the free one. The approved tool exists and is fine but nobody told them it was approved or how to get a seat.

Each of those is something you control. None of them is something the employee can fix by reading a longer policy.

The Samsung story from 2023 is the canonical version of how this ends. Engineers pasted source code and internal meeting notes into a public chat tool to debug and summarize. The leak was discovered. Samsung banned generative AI on company devices. The employees were not malicious. They were trying to do their jobs with the only tool that worked. The company’s response was a control on the symptom. The disease was that the company had not yet given them an approved equivalent.

If you take one thing from this section: every shadow AI incident is preceded by a procurement gap your company did not close in time.

The right shape

A working AI policy has the shape of an internal operations memo, not a legal document. One page. Plain language. Updated on a fixed cadence. Owned by a named human, not a committee. Distributed where people work, not buried in a policy library.

The body of the policy is three lists.

Approved tools. The specific products, by name, your people are allowed to use for work. With links to where to request access and which plan tier they are on. This list is short on purpose. Two or three is normal. One is fine. The list reflects what you have already procured under enterprise terms that protect your data. Anything not on the list is not approved, full stop. If a department head wants something added, there is a one-paragraph request process. Approval or denial within two weeks. That cadence is the whole game.

Restricted data. The categories of information that do not go into any AI tool, approved or not. Be specific to your business. Generic categories are useless. “Confidential information” is not a category. “Customer payment data, signed contracts, employee compensation, M&A working files, anything subject to a current NDA, anything covered by HIPAA, anything covered by attorney-client privilege” is a category. Five to eight specific lines. Not twenty. Twenty means nobody will remember any of them.

Logged and reviewed. What you actually monitor. For approved enterprise tools, this is admin-console usage data: who used what, how often, and, where the vendor exposes it, which prompt patterns appear. For unapproved tools, it is whatever your endpoint or DLP stack catches. State the review cadence. Quarterly is the default. State who reviews it. Name the role, not the department.

That is the policy. Three lists. The supporting metadata is four lines: owner (a named role), review cadence (quarterly is correct for now), the consequence for putting restricted data into any AI tool, and the date of the most recent update.

The whole document fits on one screen. A new hire reads it in two minutes. A manager forwards it without apology. A vendor sees it during procurement and recognizes a buyer who has thought about this.
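The "logged and reviewed" list is the only part that needs tooling, and even that can start small. Here is a minimal sketch of the quarterly seat review, assuming a CSV-style usage export with user and last-active fields. The column names are illustrative, not any vendor's actual schema; map them to whatever your admin console exports.

```python
from datetime import date, timedelta

# One review cycle with no activity means the seat is idle.
# 90 days matches the quarterly cadence; adjust if yours differs.
IDLE_AFTER = timedelta(days=90)

def review_seats(rows, today=None):
    """Split seats into active and idle for the quarterly review.

    rows: iterable of dicts with 'user' and 'last_active' (ISO date string).
    Returns (active, idle) lists of user names.
    """
    today = today or date.today()
    active, idle = [], []
    for row in rows:
        last = date.fromisoformat(row["last_active"])
        (active if today - last <= IDLE_AFTER else idle).append(row["user"])
    return active, idle

# Example export: one active seat, one that has gone quiet.
rows = [
    {"user": "ana", "last_active": "2024-06-01"},
    {"user": "bo", "last_active": "2024-01-05"},
]
active, idle = review_seats(rows, today=date(2024, 6, 15))
```

Idle seats feed the review directly: either the person needs a nudge toward the approved tool, or the seat goes back into the pool.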

The one-page policy, in full

Adapt the names and lists. Keep the shape.

[Company] AI Use Policy

Owner: [Head of IT or equivalent named role]. Last updated: [date]. Reviewed quarterly.

Approved tools. You may use the following AI tools for work. All other AI tools are not approved.

  • [Tool A, plan tier, link to request a seat]
  • [Tool B, plan tier, link to request a seat]

To request a new tool, email [owner] with the use case and the team. You will get an answer within two weeks.

Off-limits data. Do not put the following into any AI tool, approved or not.

  • Customer personal or payment data
  • Signed contracts or attorney-client communications
  • Employee compensation, performance, or health information
  • M&A, financing, or other material non-public information
  • Anything covered by an active NDA
  • Source code from [specific repos / acquired company codebases]

What we monitor. We review usage on approved tools quarterly: which seats are active, which are idle, which prompt patterns trigger data classifiers. We use endpoint controls to detect use of unapproved AI tools.

If you put off-limits data into an AI tool, tell [owner] within 24 hours. We will not punish a fast disclosure. We will treat a hidden one as a security incident.

That is the entire policy. If yours is longer, ask which sentence in the version above you want to delete. There is no good answer.
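The "prompt patterns trigger data classifiers" line in the monitoring section can start as something as blunt as a few regexes run against outbound prompts. The patterns and category names below are illustrative only, not a complete DLP program; a real stack is broader and tuned to your business vocabulary.

```python
import re

# Illustrative patterns for a few off-limits categories.
# Dict order determines the order categories are reported in.
CLASSIFIERS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "nda_marker": re.compile(r"(?i)\bconfidential\b|\bsubject to nda\b"),
}

def classify(prompt: str) -> list[str]:
    """Return the off-limits categories a prompt appears to contain."""
    return [name for name, rx in CLASSIFIERS.items() if rx.search(prompt)]

# A prompt that should trip two classifiers.
hits = classify("Summarize this: card 4111 1111 1111 1111, confidential.")
```

The point is not that three regexes catch everything. The point is that "what we monitor" names a mechanism you can actually run, not an aspiration.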

Questions the policy forces you to answer

The forty-page version lets you avoid the hard decisions by drowning them. The one-pager forces them.

Which tools are on the approved list? You cannot duck this. If you are still saying “we are evaluating,” you do not have a policy. You have a backlog. Pick a primary chat tool with a real enterprise data agreement. Pick a coding tool if you have engineers. That is enough to ship version one.

What is the data agreement on each? Read it. Specifically: does the vendor train on your inputs by default, can you turn it off, where is the data stored, how long is it retained, what is their notification policy on a breach. The reputable enterprise tiers from the major vendors all answer these acceptably as of this quarter. The free and consumer tiers do not. That difference is the entire reason an approved list exists.

What is the consequence for violating the off-limits list? “Up to and including termination” is not a useful answer because it is the answer to every policy. Say the actual escalation. First instance with fast disclosure: documented, no penalty. First instance discovered later: written warning. Repeat: HR. Use of a personal account to circumvent a control: HR on first instance. The point is not to be punitive. The point is that an incentive structure exists and can be named.

Who owns the policy? A person, not a department. The person who can update the approved-tools list when the market moves, run the quarterly review, and make the call when a manager asks for an exception. In most companies this is a head of IT or a CIO. In smaller companies it is the COO or whoever is closest to the procurement seat. It is not legal. Legal is an input. Owning the policy is an operational job.

The heuristic

If you remember nothing else.

  1. One page or it does not work. Length is the enemy of compliance.
  2. Three lists: Approved, Off-limits, Logged. That is the policy.
  3. Tools precede policy. If you have not procured the approved tool, you do not have a policy yet. You have a hope.
  4. Two-week SLA on requests. Faster than that is overkill. Slower than that creates shadow AI.
  5. Quarterly review by a named owner. Not annual. Not “as needed.” Quarterly, calendared.
  6. Off-limits data is specific to your business. Five to eight lines, named in your company’s vocabulary.
  7. Fast disclosure is rewarded. Concealment is the incident. This is the only sentence on culture you need.

What to do Monday morning

One thing.

Open a blank document. Write the seven lines that make up the one-page policy: owner, approved tools, off-limits data, what you monitor, the consequence, the review cadence, the last-updated date. Fill in what you can answer today. For the lines you cannot fill in, you have just produced the agenda for the next two weeks of work.

Send the draft to three people: the manager of your highest-leverage AI user, the head of IT, and the head of legal. Tell them you are shipping a one-page policy in fourteen days and you want their edits, not their committee.

The forty-page version will still be in draft when this one is in production. That is not an accident. That is the design.