AI Advisory · Apr 15, 2026 · 5 min read

How to Write an AI Acceptable Use Policy (Without Making It 30 Pages Nobody Reads)

A practical guide to writing an AI acceptable use policy that employees will actually read and follow. What to include, what to skip, and how to make it work.


Most AI policies I see from companies fall into one of two categories. Either a dense compliance document that sounds like it was written by a law firm and gets filed away immediately. Or nothing at all, which is where most companies are.

Neither is useful.

What you actually need is a one to two page document that tells your employees what they can do, what they can't do, and what to do when they're not sure. That's it. The goal is clarity, not comprehensiveness.

Here's how to think through writing one.

Start with what you're actually worried about

Before you write a word, get clear on your specific risks. For most companies, there are three.

The first is confidential client data ending up in an AI tool's training pipeline. If an employee pastes a client contract into ChatGPT, that text may, depending on the tool's account type and settings, be used to train future versions of the model. Whether that's actually a problem depends on what the contract contains and what your client agreements say about data handling, but it's a risk most CEOs would want to know about.

The second is proprietary business information leaving the organization. Pricing models, internal processes, unreleased product information. Same risk as above, different category of data.

The third is AI-generated output being used without review. Incorrect legal summaries, made-up citations in research, fabricated statistics in client reports. AI tools make things up with complete confidence. Without a review step built in, that output ends up in places it shouldn't.

Your policy needs to address all three.

What to include

An effective AI acceptable use policy has five sections.

Scope: what the policy covers (all AI tools, including personal accounts used for work purposes).

Approved uses: what employees are encouraged to use AI for. Drafting and editing internal documents, summarizing meeting notes, general research and ideation, writing assistance on non-client-facing content. This section matters because you don't want employees to read the policy and conclude that AI is off-limits. It isn't.

Restricted uses: what requires approval or is prohibited. Entering client-identifying information into any external AI tool without explicit approval. Using AI to draft client-facing content without human review. Using AI to answer questions about company financials, pending legal matters, or personnel decisions. Any use of AI tools not on the approved list for regulated activities.

Review requirement: any AI-generated content used in client deliverables, proposals, or external communications must be reviewed and verified by the employee responsible for it. The employee, not the AI, is accountable for the output.

Questions and updates: who employees should contact if they're not sure whether a specific use is covered, and how often the policy will be reviewed. AI tools change fast and your policy should too.

What makes a policy actually work

Length is not the problem. Relevance is. A policy that uses real examples from your actual business will get read. A policy written in generic compliance language won't.

Train your team on it. Not a 90-minute compliance seminar. A 20-minute conversation at your next all-hands where you explain the reasoning. People follow policies they understand the reason for.

Update it once a year minimum. The AI landscape moves fast enough that a policy written 18 months ago probably doesn't cover tools that are now in common use.

The goal of an AI acceptable use policy is not to prevent your team from using AI. It's to make sure that when they do use it, they're doing it in a way that doesn't create problems you didn't see coming. If you'd rather not draft this from scratch, AI policy development is one of the more common engagements we run.

Talk it through

Questions about AI governance or tool adoption in your business? Start with a 30-minute call.
