Your Employees Are Already Using AI. The Question Is Whether You Know About It.
In January 2026, a BlackFog survey of 2,000 workers at companies with more than 500 employees found that 49% admitted to using AI tools without employer approval. More than a third were using free versions of AI tools with no enterprise data protections.
And here is the part that should keep compliance officers awake at night: 69% of C-suite executives and 66% of senior VPs said they were fine with this, prioritizing speed over security.
The numbers get worse the deeper you look. An UpGuard report from November 2025 found that more than 80% of workers, including nearly 90% of security professionals, use unapproved AI tools on the job. Cisco’s 2025 study reported that 46% of organizations experienced data leakage through employee prompts to generative AI.
A Menlo Security analysis found 68% of employees used personal accounts to access ChatGPT, with 57% inputting sensitive data. And IBM’s 2025 Cost of a Data Breach Report found that AI-associated breaches cost organizations more than $650,000 per incident.
Samsung learned this lesson early when employees leaked proprietary semiconductor code through ChatGPT prompts in 2023. The company banned the tool entirely, then spent months building a controlled internal alternative.
Most organizations still have not caught up. A Varonis study analyzing 1,000 enterprise environments found that 99% had sensitive data exposed to AI tools due to insufficient access controls.
If your organization does not have a formal generative AI acceptable use policy, you are not managing risk. You are ignoring it. This guide walks you through building one from scratch, with a ready-to-use template you can adapt to your organization.
It is built to integrate with your existing enterprise risk management framework and to address the US regulatory landscape that is forming in real time.
Why a Generic AI Ban Does Not Work (and What Does)
Some organizations responded to AI risks by banning ChatGPT and similar tools outright. That approach has three problems. First, it does not work. UpGuard found that even security professionals, the people most aware of the risks, use unapproved AI tools at higher rates than average employees. Banning a tool that makes people 40-60% more productive at certain tasks does not prevent shadow AI; it guarantees it.
Second, a blanket ban puts your organization at a competitive disadvantage. Deloitte’s State of AI in the Enterprise 2026 report found that worker access to AI rose 50% in 2025, and companies where senior leadership actively shapes AI governance achieve significantly greater business value than those that delegate it to technical teams.
Third, a ban does not address the actual risks. The risk is not that employees use AI; it is that they use it without guardrails: entering confidential data into public models, treating AI output as verified fact, generating content that creates legal liability, or making decisions based on AI recommendations without human oversight.
What works is a risk-based acceptable use policy that enables productive AI use within defined boundaries. Think of it the same way you think about your risk appetite framework: not a prohibition on risk-taking, but a structured approach to taking the right risks within the right tolerances.
Foundation: Risk-Based Data Classification for AI
Your generative AI acceptable use policy must be anchored to your data classification scheme. If you do not have one, build this first. Every piece of data your employees might enter into an AI tool falls into one of these tiers, and each tier determines what AI tools can be used with it:
| Data Tier | Definition | Examples | Permitted AI Tools | Controls Required |
| --- | --- | --- | --- | --- |
| Tier 1: Public | Data intended for public distribution with no confidentiality requirements | Published marketing materials, press releases, public financial filings, blog content | Any AI tool (public or enterprise) | Attribution of AI-generated content; human review before publishing |
| Tier 2: Internal | Non-sensitive data for internal use only, but not damaging if exposed | Internal meeting notes (non-sensitive), general project timelines, internal communications (non-confidential), training materials | Enterprise-approved AI tools only (ChatGPT Enterprise, Microsoft Copilot with data protection, Anthropic Claude enterprise) | Enterprise license with data retention controls; no public/free-tier tools; usage logging |
| Tier 3: Confidential | Sensitive business data whose exposure would cause competitive harm or regulatory consequences | Financial projections, strategic plans, M&A activity, non-public product roadmaps, employee performance data, salary information, vendor contracts | Enterprise-approved AI tools with enhanced controls only | Approved tool list; DLP monitoring on prompts; manager approval for AI use; audit trail; data must not be used for model training |
| Tier 4: Restricted | Highly sensitive data subject to regulatory protection or whose exposure would cause severe harm | PII/PHI (HIPAA), payment card data (PCI DSS), material non-public information (SEC), trade secrets, attorney-client privileged communications, classified government data | Prohibited from all external AI tools. Internal/self-hosted AI only, if approved by CISO and Legal | No external AI under any circumstance; DLP hard block; violation = disciplinary action; incident reporting required |
This classification is the backbone of your entire policy. Every rule, every prohibition, every monitoring requirement flows from this table. Post it on your intranet. Print it on desk cards. Make it impossible for employees to claim they did not know which tier their data fell into.
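If you build any internal tooling around this table (for example, a pre-submission check in an internal AI gateway), the tier rules translate directly into a small lookup. The sketch below is a minimal illustration in Python; the tool identifiers are hypothetical placeholders for your own approved tools list, and the default argument encodes the "when in doubt, treat as Tier 3" rule from Section 3 of the template below.

```python
from enum import IntEnum

class DataTier(IntEnum):
    PUBLIC = 1        # Tier 1
    INTERNAL = 2      # Tier 2
    CONFIDENTIAL = 3  # Tier 3
    RESTRICTED = 4    # Tier 4

# Hypothetical tool identifiers; replace with your Approved AI Tools List.
ENTERPRISE_TOOLS = {"copilot-m365", "chatgpt-enterprise", "claude-enterprise"}
PUBLIC_OK_TOOLS = {"chatgpt-free", "gemini-consumer"} | ENTERPRISE_TOOLS

def permitted(tool: str, tier: DataTier = DataTier.CONFIDENTIAL) -> bool:
    """Return True if a tool may be used with data of the given tier.

    Callers who cannot classify the data omit the tier, which applies
    the Section 3 default: treat unknown data as Tier 3 (Confidential).
    """
    if tier == DataTier.RESTRICTED:
        return False  # Tier 4: no external AI tool, ever
    if tier == DataTier.PUBLIC:
        return tool in PUBLIC_OK_TOOLS
    # Tiers 2-3: enterprise-approved tools only. Tier 3 additionally
    # requires manager approval and DLP monitoring, enforced elsewhere.
    return tool in ENTERPRISE_TOOLS
```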
Generative AI Acceptable Use Policy Template
Below is a complete, section-by-section policy template you can adapt to your organization. Each section includes the policy language followed by implementation guidance. Customize the bracketed placeholders for your context.
Section 1: Purpose and Scope
POLICY: This policy governs the use of all generative AI tools, platforms, and services by [Company Name] employees, contractors, and third-party personnel. It applies to both company-provided and personal AI tools used for any company-related purpose.
Generative AI includes, but is not limited to: large language models (ChatGPT, Claude, Gemini, Copilot), image generators (DALL-E, Midjourney, Stable Diffusion), code assistants (GitHub Copilot, Cursor, Codeium), audio/video generators, and any AI tool that creates content based on user prompts.
Implementation note: Scope broadly. Employees will argue their specific tool is “not really AI” or “doesn’t count.” Close that gap upfront. Include personal devices and personal AI accounts used for any work-related purpose. The FS-ISAC’s framework specifically recommends covering external generative AI services regardless of how they are accessed.
Section 2: Approved and Prohibited Tools
POLICY: [Company Name] maintains an Approved AI Tools List, managed by the IT Department in coordination with the CISO and Legal. Only tools on this list may be used for company-related work. The current approved tools are: [list specific tools, e.g., Microsoft Copilot for Microsoft 365, ChatGPT Enterprise (company instance), GitHub Copilot Business].
Use of any AI tool not on this list for company-related work is prohibited without prior written approval from your department head and the IT Department. Free-tier or consumer versions of any AI tool (including free ChatGPT, free Claude, personal Copilot accounts) are prohibited for company-related work regardless of whether the employee has a personal subscription.
Implementation note: The free-tier prohibition is critical. Free versions of ChatGPT, Claude, and other tools typically retain prompt data and may use it for model training, while enterprise versions typically carry contractual commitments that customer data will not be used for training. Menlo Security’s 2025 research found 68% of employees used personal accounts. This is where the data leakage happens.
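If you back this rule with network enforcement, your proxy or secure web gateway needs to distinguish consumer endpoints from your sanctioned enterprise instances. A minimal sketch of that routing decision follows; the hostnames are illustrative assumptions, not an authoritative list, and real enforcement belongs in your CASB/SWG policy engine.

```python
# Illustrative only: actual consumer and enterprise endpoints vary by
# vendor and deployment; confirm against your vendors' documentation.
CONSUMER_AI_HOSTS = {
    "chatgpt.com",          # assumed consumer-tier endpoint
    "gemini.google.com",
}
APPROVED_AI_HOSTS = {
    "yourcompany.openai.azure.com",  # hypothetical enterprise instance
}

def classify_ai_destination(host: str) -> str:
    """Tag an outbound destination for a proxy/SWG policy decision."""
    host = host.lower().rstrip(".")
    if host in APPROVED_AI_HOSTS:
        return "allow"    # sanctioned enterprise instance
    if host in CONSUMER_AI_HOSTS or any(
        host.endswith("." + h) for h in CONSUMER_AI_HOSTS
    ):
        return "block"    # consumer tier prohibited for company work
    return "inspect"      # unknown AI endpoint: log and review
```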
Section 3: Data Classification and Handling
POLICY: Before entering any data into a generative AI tool, employees must classify the data according to [Company Name]’s Data Classification Policy [reference Tier 1-4 table above]. Tier 1 (Public) data may be used with any AI tool. Tier 2 (Internal) data may be used only with approved enterprise AI tools.
Tier 3 (Confidential) data may be used only with approved enterprise AI tools and requires documented manager approval before each use.
Tier 4 (Restricted) data must never be entered into any external AI tool under any circumstances. Employees who are uncertain about a data classification must treat the data as Tier 3 (Confidential) and obtain manager approval before use.
Implementation note: The “when in doubt, treat as Tier 3” rule prevents the most common failure mode: employees rationalizing that their data is “probably fine.” Varonis found 99% of enterprises had sensitive data exposed to AI tools. The default must be caution, not convenience.
Section 4: Prohibited Uses
POLICY: The following uses of generative AI are prohibited regardless of the data classification tier or tool used:
- Entering personally identifiable information (PII), protected health information (PHI), payment card data, Social Security numbers, or other regulated personal data into any external AI tool
- Entering material non-public information (MNPI) as defined by SEC regulations, including non-public financial results, pending M&A activity, or other information that could constitute insider trading if disclosed
- Using AI to make or materially influence employment decisions (hiring, firing, promotion, compensation) without documented human review and approval, consistent with EEOC guidance and state and local laws including NYC Local Law 144 and the Illinois Artificial Intelligence Video Interview Act
- Generating legal advice, regulatory filings, or contract language without review and approval by the Legal Department
- Generating content for external publication or customer communications without human review, editing, and approval by the designated content owner
- Representing AI-generated work as original human work without disclosure, consistent with OpenAI, Anthropic, and other providers’ terms of service and emerging state transparency requirements
- Using AI to generate deepfake, misleading, or deceptive content involving real persons, including employees, customers, competitors, or public figures
- Uploading or pasting proprietary source code into any AI tool not specifically approved for code assistance by the IT Department
- Circumventing DLP, access controls, or monitoring tools to use AI tools that have been blocked or restricted by IT
- Using AI for any purpose that violates existing [Company Name] policies, including the Code of Conduct, Information Security Policy, Data Privacy Policy, and Intellectual Property Policy
Implementation note: The employment decisions prohibition is legally significant. NYC Local Law 144 requires bias audits for automated employment decision tools, the EEOC has issued guidance on AI and employment discrimination, and Colorado’s SB 21-169 addresses algorithmic discrimination in insurance. Your policy must address this even if your organization is not in these jurisdictions, because the regulatory trend is clear.
Section 5: Output Review and Quality Assurance
POLICY: All AI-generated output must be treated as a draft that requires human review before use. Employees are personally responsible for the accuracy, completeness, and appropriateness of any work product they submit or publish, regardless of whether AI assisted in its creation. Specific review requirements by output type:
| Output Type | Minimum Review Requirement | Approval Required |
| --- | --- | --- |
| Internal draft documents | Employee reviews for factual accuracy, hallucinations, and appropriateness before circulation | No additional approval needed for routine internal use |
| External communications (emails to clients, vendors, regulators) | Employee review + manager review before sending | Manager or designated approver |
| Published content (website, social media, press releases, marketing) | Employee review + content owner review + brand/legal review | Content owner and Communications/Legal department |
| Financial analysis or projections | Employee review + independent verification of all numbers and assumptions against source data | Department head; Finance review for external-facing materials |
| Code generation | Employee code review + automated security scan (SAST/DAST) + peer review per existing code review policy | No change to existing code review approval chain |
| Legal or regulatory content | Employee review + Legal Department review and sign-off before any use | General Counsel or designated legal reviewer |
| Employment-related decisions supported by AI | Human review of all AI recommendations + bias check + documented rationale for decision | HR and hiring manager; documented human override capability required |
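If your workflow tooling routes AI-assisted outputs for approval, the table above can be encoded as a simple review-chain lookup. The sketch below is a hypothetical mapping in Python; the output-type keys and role names are placeholders for whatever your document management or ticketing system actually uses.

```python
# Hypothetical mapping of the review table to approver roles; adapt the
# keys and role names to your own approval workflow.
REVIEW_CHAIN = {
    "internal_draft":       ["author"],
    "external_comms":       ["author", "manager"],
    "published_content":    ["author", "content_owner", "comms_or_legal"],
    "financial_analysis":   ["author", "independent_verifier", "dept_head"],
    "code":                 ["author", "sast_scan", "peer_reviewer"],
    "legal_content":        ["author", "legal"],
    "employment_decision":  ["author", "hr", "hiring_manager"],
}

def required_reviewers(output_type: str) -> list[str]:
    """Return the review chain for an AI-assisted output type.

    Unknown output types get a conservative chain rather than
    silently skipping review.
    """
    return REVIEW_CHAIN.get(output_type, ["author", "manager", "legal"])
```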
Section 6: Monitoring, Logging, and Compliance
POLICY: [Company Name] reserves the right to monitor, log, and review all use of generative AI tools on company systems and networks. Monitoring includes but is not limited to: prompt content submitted to AI tools, AI-generated output, usage frequency and patterns, data classification of content submitted, and attempts to access blocked AI tools.
Data Loss Prevention (DLP) tools will be deployed to identify and block the submission of Tier 3 and Tier 4 data to unauthorized AI platforms. Employees will receive notice when DLP blocks a submission. Repeated DLP violations will be reported to the employee’s manager and the Compliance Department.
Implementation note: DLP is not optional for a serious AI governance program. Cisco’s 2025 research found that 46% of organizations experienced data leakage through AI prompts. Deploy CASB (Cloud Access Security Broker) and SWG (Secure Web Gateway) controls that can inspect and block prompt content in real time. Tools like Microsoft Purview, Netskope, Zscaler, and Palo Alto Networks all offer AI-specific DLP capabilities. For cybersecurity risk management integration, coordinate these controls with your existing SIEM and incident management workflows.
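For teams prototyping before a commercial DLP rollout, a crude prompt scanner illustrates the shape of the control. The sketch below uses illustrative regex patterns only; real products add exact data matching, validation such as the Luhn check for card numbers, and ML classifiers, so expect these naive patterns to both over- and under-match.

```python
import re

# Illustrative detection patterns only; not production-grade DLP.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # no Luhn check
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data types detected in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def dlp_decision(prompt: str) -> tuple[str, list[str]]:
    """Block and record hits if anything matches; otherwise allow."""
    hits = scan_prompt(prompt)
    return ("block", hits) if hits else ("allow", hits)

print(dlp_decision("Summarize this: customer SSN 123-45-6789"))
# ('block', ['ssn'])
```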
Section 7: Intellectual Property and Ownership
POLICY: Output generated entirely by AI, without sufficient human authorship, is not eligible for copyright protection under current US law (Copyright Office guidance, 2023, reaffirmed 2024). Employees must not represent AI-generated content as original creative work for purposes of patent, copyright, or trademark registration without prior Legal Department review.
When AI tools are used to assist in creating work product that may be subject to IP protection, employees must document the extent of AI assistance and obtain Legal Department guidance before filing any IP claims. All output generated using company-approved AI tools on company time remains [Company Name] property under existing employment agreements.
Section 8: Vendor and Third-Party AI Risk
POLICY: Before any new AI tool is added to the Approved AI Tools List, the following assessments must be completed:
- Information security risk assessment per the [Company Name] Vendor Risk Management Policy
- Legal review of the vendor’s terms of service, particularly data retention, model training, and data sharing clauses
- Data privacy impact assessment for tools processing personal data
- Confirmation that the vendor’s data handling practices are compatible with [Company Name]’s regulatory obligations (HIPAA, CCPA/CPRA, GLBA, SOX, SEC regulations as applicable)

These assessments must be refreshed annually or whenever the vendor materially changes its terms of service.
Implementation note: AI vendor terms of service change frequently and often quietly. OpenAI, Anthropic, Google, and Microsoft have all modified their data handling practices multiple times. Build AI vendor monitoring into your third-party risk management process. Flag any vendor term change that affects data retention or model training.
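The annual-refresh requirement is straightforward to automate as a recurring check against your vendor inventory. A minimal sketch follows; the vendor records and field names are hypothetical stand-ins for whatever your third-party risk management (TPRM) system exposes.

```python
from datetime import date, timedelta

# Hypothetical vendor records; in practice this data lives in your
# TPRM system of record.
vendors = [
    {"name": "LLM Vendor A", "last_assessed": date(2025, 3, 1),
     "tos_changed_since_assessment": False},
    {"name": "LLM Vendor B", "last_assessed": date(2024, 11, 15),
     "tos_changed_since_assessment": True},
]

MAX_AGE = timedelta(days=365)  # annual refresh requirement

def needs_reassessment(vendor: dict, today: date) -> bool:
    """Flag vendors past the annual refresh or with changed terms."""
    stale = today - vendor["last_assessed"] > MAX_AGE
    return stale or vendor["tos_changed_since_assessment"]

for v in vendors:
    if needs_reassessment(v, date(2026, 2, 1)):
        print(f"Reassess: {v['name']}")  # prints: Reassess: LLM Vendor B
```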
Section 9: Training and Awareness
POLICY: All employees must complete Generative AI Acceptable Use Training within 30 days of this policy’s effective date and annually thereafter. New employees must complete the training within their first two weeks.
The training must cover:
- This policy and its requirements
- Data classification and AI-specific handling rules
- How to identify and avoid hallucinated content
- Prohibited uses and consequences
- Approved tools and how to access them
- How to report suspected violations or AI-related incidents
- Department-specific guidelines where applicable

Employees in high-risk roles (Finance, Legal, HR, IT, Executive) must complete enhanced training that includes scenario-based exercises specific to their function.
Section 10: Enforcement and Consequences
POLICY: Violations of this policy will be handled consistently with [Company Name]’s existing disciplinary procedures. Consequences may include:
- Verbal or written warning for first-time minor violations (e.g., using a non-approved tool for non-sensitive work)
- Formal disciplinary action for repeated violations or mishandling of Tier 2-3 data
- Immediate termination for intentional submission of Tier 4 data to external AI tools, deliberate circumvention of DLP controls, or violations that result in regulatory exposure or material harm
All employees are encouraged to report suspected violations without fear of retaliation. Reports may be made to the employee’s manager, the Compliance Department, or the [Company Name] Ethics Hotline.
Section 11: Policy Governance and Review
POLICY: This policy is owned by the [Chief Risk Officer / CISO / General Counsel, choose based on your governance structure] and is reviewed quarterly by the AI Governance Committee, which includes representatives from Risk Management, Legal, IT/Security, HR, and business unit leadership.
The policy will be updated at minimum annually or whenever: a material change in AI technology or vendor terms of service occurs, new regulations or regulatory guidance affecting AI use is issued, an AI-related incident reveals a gap in the current policy, or the Approved AI Tools List changes. Version history and change log will be maintained and accessible to all employees.
Implementation note: Quarterly review is not optional. AI technology and regulation are moving too fast for annual review cycles. The EU AI Act’s high-risk system requirements take full effect August 2, 2026. California AB 2013 (training data transparency) and SB 942 (AI content labeling) took effect January 2026. SEC 2026 examination priorities specifically elevated AI governance. Your policy must keep pace.
US Regulatory Landscape: What Your Policy Must Address
There is no single comprehensive federal AI law in the United States as of February 2026. Instead, organizations face a patchwork of existing laws applied to AI, sector-specific guidance, and state-level legislation. Your generative AI acceptable use policy needs to account for these:
| Regulation / Guidance | AI-Relevant Requirements | Policy Section Alignment |
| --- | --- | --- |
| EEOC AI Guidance (2023-2025) | AI tools used in hiring must comply with Title VII; employers liable for discriminatory AI outcomes even if using third-party tools | Section 4 (Prohibited Uses) and Section 5 (Output Review): Human review mandatory for all AI-assisted employment decisions |
| NYC Local Law 144 | Automated employment decision tools require annual bias audit by independent auditor; public notice to candidates | Section 4: Covers AI in hiring. Applies to any employer using AI for NYC job candidates regardless of employer location |
| SEC Cybersecurity Disclosure (2023) + 2026 Exam Priorities | Material cyber incidents must be disclosed; AI governance elevated in SEC examination priorities for 2026 | Section 6 (Monitoring) and Section 8 (Vendor Risk): AI incident detection, logging, and response procedures |
| CCPA/CPRA (California) | Right to know about automated decision-making; right to opt out of profiling; data minimization requirements | Section 3 (Data Classification): PII in AI prompts must comply with CCPA data processing requirements |
| California AB 2013 (eff. Jan 2026) | Developers of GenAI must publish training data summaries disclosing copyrighted material, PII, and synthetic data | Section 8 (Vendor Risk): Include training data transparency in vendor assessments |
| California SB 942 (eff. Jan 2026) | High-traffic AI systems must label AI-generated content | Section 5 (Output Review): Disclosure and labeling requirements for AI-generated content |
| Colorado SB 21-169 | Insurance companies prohibited from using AI that unfairly discriminates; algorithmic accountability requirements | Section 4 (Prohibited Uses): Sector-specific prohibition on discriminatory AI use |
| EU AI Act (if EU operations/customers) | High-risk AI system requirements by August 2, 2026; prohibited practices already enforceable; fines up to 7% global turnover | Entire policy: Risk-based classification aligns with EU AI Act four-tier structure |
| FTC Act Section 5 + Operation AI Comply | Deceptive AI marketing and unfair practices enforcement; FTC has actively pursued AI-related enforcement actions | Section 4 (Prohibited Uses) and Section 5 (Output Review): No deceptive AI-generated content |
| NIST AI RMF 1.0 (voluntary) | Govern, Map, Measure, Manage framework for AI risk management; GenAI Profile (NIST-AI-600-1) | Entire policy architecture maps to NIST AI RMF; see implementation section below |
Implementation Roadmap: From Policy to Practice
A policy without implementation is a compliance artifact, not a risk control. Here is a practical 90-day implementation plan:
Weeks 1-2: Foundation
Conduct an AI usage audit. Use CASB logs, network monitoring, and a confidential employee survey to discover what AI tools are already in use across your organization. ISACA recommends this as the starting point for any AI governance program. You will likely find shadow AI usage far exceeds expectations. Document the gap between current state and desired state; this gap becomes your risk assessment input.
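One way to seed the audit is to sweep CASB or proxy exports for known AI endpoints that are not on your approved list. The sketch below assumes a simplified "user,host" log format as a stand-in for your gateway’s actual export schema, and the hostname lists are illustrative.

```python
from collections import Counter

# Illustrative hostname lists; populate from threat intel feeds and
# your own Approved AI Tools List.
KNOWN_AI_HOSTS = {"chatgpt.com", "claude.ai", "gemini.google.com"}
APPROVED_HOSTS = {"yourcompany.openai.azure.com"}  # hypothetical

def shadow_ai_report(log_lines):
    """Count hits to AI endpoints that are not on the approved list."""
    hits = Counter()
    for line in log_lines:
        user, _, host = line.strip().partition(",")
        if host in KNOWN_AI_HOSTS and host not in APPROVED_HOSTS:
            hits[host] += 1
    return hits.most_common()

sample = ["alice,chatgpt.com", "bob,claude.ai", "carol,chatgpt.com"]
print(shadow_ai_report(sample))  # [('chatgpt.com', 2), ('claude.ai', 1)]
```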
Weeks 3-4: Policy Development and Approval
Customize this template for your organization. Get input from Legal, IT/Security, HR, and business unit leaders. Submit to your AI Governance Committee (form one if you do not have one). Obtain executive sign-off. Your risk management governance structure should define who owns this policy and who has approval authority.
Weeks 5-8: Technical Controls and Training
Deploy DLP controls for AI prompt monitoring. Configure approved AI tools with enterprise settings. Block known consumer AI endpoints at the network level (optional, depending on your enforcement approach). Develop and launch mandatory training. Create department-specific guidance for Finance, Legal, HR, and IT.
Weeks 9-12: Monitoring and Iteration
Monitor DLP alerts, usage patterns, and shadow AI discovery. Report initial KRIs to leadership. Collect employee feedback on policy friction points (you will need to iterate). Adjust approved tools list based on demand and risk assessment. Integrate AI risk KRIs into your existing risk dashboard and board reporting framework.
KRIs for Monitoring AI Policy Effectiveness
| KRI | Measurement | Frequency | Threshold |
| --- | --- | --- | --- |
| Shadow AI Discovery Rate | New unapproved AI tools detected on network per month | Monthly | Green: 0-2; Amber: 3-5; Red: >5 |
| DLP Block Rate (AI prompts) | Sensitive data submissions blocked by DLP before reaching AI tools | Weekly | Green: declining trend; Amber: stable; Red: increasing trend |
| Policy Training Completion | % of employees completing GenAI acceptable use training | Monthly | Green: >90%; Amber: 75-90%; Red: <75% |
| AI Incident Count | Number of AI-related data exposure, hallucination, or compliance incidents | Monthly | Green: 0; Amber: 1-2; Red: >2 |
| Approved Tool Adoption Rate | % of AI usage through approved enterprise tools vs. total AI usage detected | Monthly | Green: >85%; Amber: 70-85%; Red: <70% |
| Vendor Assessment Currency | % of approved AI vendors with current (within 12 months) risk assessments | Quarterly | Green: 100%; Amber: 80-99%; Red: <80% |
The approved tool adoption rate is your most telling KRI. If fewer than 70% of AI interactions are going through approved channels, your policy is failing. Either your approved tools do not meet employee needs, or your enforcement is too weak. Address both.
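If you feed these KRIs into a dashboard, the thresholds translate directly into simple RAG (red/amber/green) functions. A minimal sketch for two of the KRIs, using the thresholds from the table above:

```python
def adoption_rag(approved_events: int, total_events: int) -> str:
    """RAG-rate the Approved Tool Adoption Rate KRI."""
    if total_events == 0:
        return "green"  # no AI usage detected at all
    rate = approved_events / total_events
    if rate > 0.85:
        return "green"
    if rate >= 0.70:
        return "amber"
    return "red"

def shadow_ai_rag(new_tools_this_month: int) -> str:
    """RAG-rate the Shadow AI Discovery Rate KRI."""
    if new_tools_this_month <= 2:
        return "green"
    if new_tools_this_month <= 5:
        return "amber"
    return "red"

print(adoption_rag(820, 1000))  # amber: 82% through approved channels
print(shadow_ai_rag(4))         # amber: 4 new unapproved tools found
```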
Integrating AI Policy into Business Continuity and Risk Management
Your generative AI acceptable use policy does not exist in isolation. It should connect to your broader risk and continuity frameworks:
- Enterprise Risk Register: Add AI governance risks (shadow AI, data leakage, regulatory non-compliance, hallucination-driven errors) to your operational risk register. Score using your existing likelihood-impact methodology. For guidance on building effective registers, see our enterprise risk management KRI guide.
- Business Continuity Planning: If your organization becomes dependent on AI tools for critical processes, those tools become dependencies in your business impact analysis. What is your RTO if your enterprise AI platform goes down? What manual workarounds exist? Build AI vendor outage scenarios into your BCP testing program.
- Incident Response: Define what constitutes an AI-related incident (data leakage through prompts, hallucinated content published externally, biased AI output affecting customers or employees) and integrate response procedures into your existing incident management framework.
- Third-Party Risk Management: AI vendors are now critical third parties. Include them in your vendor risk control self-assessment program with AI-specific assessment criteria: data retention policies, model training practices, subprocessor usage, and incident notification commitments.
- Board Reporting: Report AI governance maturity alongside your other top risks. Include KRIs from the dashboard above, incident trends, and regulatory compliance status. Boards increasingly expect this. The NACD’s 2025 survey found 62% of boards now hold regular AI discussions, though only 27% have formally added AI governance to committee charters. Be ahead of that curve.
The Bottom Line
Shadow AI is not a future risk. It is a current reality affecting every organization. Forty-nine percent of your employees are already using AI tools you did not approve, and more than half of them are feeding sensitive data into those tools. A structured, risk-based generative AI acceptable use policy is the minimum viable control.
The policy template in this guide gives you everything you need to get started: risk-based data classification, approved and prohibited tool lists, explicit prohibited uses mapped to US regulatory requirements, output review standards, monitoring and enforcement mechanisms, and a 90-day implementation roadmap.
Customize it, get it approved, and deploy it. Then iterate, because both the technology and the regulatory landscape will look different six months from now.
As the ISACA analysis of 2025 AI incidents concluded: the organizations that will thrive in 2026 are not those using the most AI, but those governing it best. Your acceptable use policy is where that governance begins.
Building your AI governance program? Explore our full library at riskpublishing.com, including guides on enterprise risk management frameworks, ERM technology best practices, cybersecurity and ERM integration, ISO 22301 business continuity management, and ERM in cloud computing environments.
Sources and Further Reading
- BlackFog: January 2026 survey: 49% of employees use AI tools without employer approval; 69% of C-suite OK with shadow AI — https://www.blackfog.com
- UpGuard: Shadow AI in the Enterprise (November 2025): 80%+ of workers use unapproved AI; 90% of security professionals — https://www.cybersecuritydive.com/news/shadow-ai-employee-trust-upguard/805280/
- Cisco: 2025 Data Privacy Benchmark: 46% of organizations experienced data leakage through generative AI employee prompts — https://www.cisco.com
- Menlo Security: 2025 report: 68% of employees use personal accounts for ChatGPT; 57% input sensitive data — https://www.proofpoint.com/us/threat-reference/shadow-ai
- IBM: Cost of a Data Breach Report 2025: AI-associated breaches cost $650,000+ per incident; shadow AI involved in 20% of breaches — https://www.ibm.com/think/topics/shadow-ai
- Varonis: State of Data Security Report 2025: 99% of enterprises had sensitive data exposed to AI tools — https://www.varonis.com
- ISACA: The Rise of Shadow AI: Auditing Unauthorized AI Tools in the Enterprise (2025) — https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-rise-of-shadow-ai-auditing-unauthorized-ai-tools-in-the-enterprise
- Fisher Phillips: Acceptable Use of Generative AI Tools sample policy template for employers — https://www.fisherphillips.com
- FS-ISAC: Framework of an Acceptable Use Policy for External Generative AI (financial services sector guidance) — https://www.fsisac.com
- SHRM: Generative AI Usage Policy Template for HR professionals — https://www.shrm.org/topics-tools/tools/policies/chatgpt-generative-ai-usage
- Deloitte: State of AI in the Enterprise 2026: worker AI access up 50%; governance gap identified — https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html
- NACD: 2025 Board Survey: 62% of boards hold regular AI discussions; only 27% have formal AI governance in committee charters — https://www.nacdonline.org
- NIST: AI Risk Management Framework 1.0 and Generative AI Profile (NIST-AI-600-1) — https://www.nist.gov/itl/ai-risk-management-framework

Chris Ekai is a Risk Management expert with over 10 years of experience in the field. He has a Master’s (MSc) degree in Risk Management from the University of Portsmouth and is a CPA and finance professional. He currently works as a Content Manager at Risk Publishing, writing about Enterprise Risk Management, Business Continuity Management, and Project Management.
