CISO Guide to AI Coding Tools: Security Policy Template for 2026
A practical security policy template for CISOs managing AI coding tool adoption. Covers tool approval, data classification, audit requirements, and Pretense as the enforcement layer.
Why Your Existing DLP Policy Does Not Cover AI Coding Tools
Most enterprise data loss prevention policies were written before AI coding assistants existed. They cover email attachments, USB drives, and cloud storage. They do not cover the case where a developer types "refactor this function" into Claude Code and sends 300 lines of proprietary TypeScript as context.
This gap is not theoretical. It is happening in your organization today. The average enterprise engineering team has 60 to 80 percent adoption of AI coding tools within 18 months of the tools becoming available. If you have not written a policy, your developers are making their own decisions about what is acceptable to send.
This guide gives you a working policy framework you can adopt and adapt.
The Four Questions Your Policy Must Answer
Before writing a single line of policy, answer these four questions for your organization:
**1. Which AI coding tools are approved?** Enumeration matters more than principles. Developers need a list: Claude Code is approved, Copilot is approved, the random AI tool a developer found on GitHub is not. Unapproved tools route to personal accounts with no contractual protections. Your policy needs to list exactly which tools are on the approved list and what approval requires.
**2. What data classifications are permitted in AI prompts?** A three-tier framework works for most organizations. Public information can go anywhere. Internal information can go to approved tools with a standard BAA or data processing agreement. Confidential and restricted information (which includes proprietary algorithms, credentials, and PII) requires a technical control, not just a policy statement.
**3. How do you verify compliance?** Policies without enforcement are aspirational documents. Your policy needs a mechanism: a proxy that logs all LLM API traffic, CI/CD gates that block unapproved endpoints, or required tooling like Pretense that enforces mutation before transit. A policy that says "developers should not send proprietary code" and has no mechanism to verify compliance is not a control.
**4. What is the exception process?** Developers will find legitimate reasons to need exceptions. A clear exception process (security review, manager approval, time-limited) keeps exceptions visible and documented. Without a process, developers route around the policy entirely.
Policy Template: AI Coding Tool Usage
The following is a template you can adapt. Replace bracketed text with your organization's specifics.
**Scope**: This policy applies to all engineering employees and contractors who use AI-assisted coding tools in connection with [COMPANY NAME] systems, codebases, and data.
**Approved Tools**: The following AI coding tools are approved for use: [LIST APPROVED TOOLS]. All other AI coding tools require written approval from the Security team before use.
**Data Classification Requirements**:
- Public data: May be included in AI prompts without restriction.
- Internal data: May be included in prompts to approved tools subject to existing data processing agreements.
- Confidential data (proprietary algorithms, system architecture, business logic): Must be protected using an approved technical control before inclusion in any AI prompt. Currently approved controls: Pretense mutation proxy.
- Restricted data (credentials, PII, PHI, payment card data): Must never be included in AI prompts. Secrets scanners in the CI/CD pipeline enforce this automatically.
**Enforcement Mechanism**: All AI API calls from company development environments must route through the Pretense proxy at localhost:9339. The proxy enforces mutation of confidential identifiers and blocks restricted data. CI/CD pipelines include a gate that verifies API calls are routed through the proxy.
**Audit Requirements**: Every AI coding session that includes company code must produce an audit log entry. The Pretense audit log satisfies this requirement automatically. Audit logs are retained for [RETENTION PERIOD, e.g., 13 months] and are available to the Security team on request.
**Incident Reporting**: Any suspected transmission of restricted data to an AI tool must be reported to the Security team within 24 hours using the standard incident reporting process.
**Exception Process**: Exceptions to this policy require written approval from the CISO or their designee, documentation of the business justification, a 90-day review date, and enhanced monitoring for the duration of the exception.
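The restricted-data rule in the template assumes an automated secrets gate in CI. A minimal sketch of such a gate follows; the regex patterns and the file-argument interface are invented for illustration, and a production pipeline should use a dedicated scanner such as gitleaks or trufflehog rather than hand-rolled patterns:

```shell
#!/bin/sh
# Toy secrets gate: scan the given files for common credential patterns
# and fail if any match. Patterns below are illustrative, not exhaustive.
PATTERNS='AKIA[0-9A-Z]{16}|BEGIN RSA PRIVATE KEY|sk-[A-Za-z0-9]{20,}'

status=0
for f in "$@"; do
    if grep -q -E "$PATTERNS" "$f"; then
        echo "restricted data detected in $f" >&2
        status=1
    fi
done
exit $status
```

Wired into a pre-commit hook or CI step, the nonzero exit blocks the change before a prompt containing the secret can ever be assembled.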
The Tool Approval Process
Many CISOs make tool approval the bottleneck. Instead, make it a checklist:
- **A data processing agreement or BAA is available.** For most major AI providers (Anthropic, OpenAI, Microsoft), these agreements exist and can be signed in under a week. For smaller or newer providers, the absence of a DPA is itself disqualifying.
- **API traffic can be routed through a local proxy.** Tools that require direct cloud connectivity and cannot be proxied present a verification problem. You cannot audit what you cannot intercept.
- **Prompt data is not used for model training.** Opt-out from training-data use should be verified, not assumed. Major providers offer this on enterprise tiers. Verify it contractually, not through a privacy policy.
- **A SOC 2 Type II report is available.** This is a minimum bar for enterprise software. If the vendor cannot produce a report, do not approve the tool for use with confidential data.
Pretense as the Implementation Layer
Writing a policy is step one. The harder problem is enforcement at scale. When you have 200 engineers making hundreds of AI calls per day, you cannot rely on each engineer correctly classifying data and making the right decision.
Pretense removes the decision from the developer. Once configured, the proxy automatically:
- Scans every prompt for secrets (API keys, credentials, PII patterns) and blocks them with a clear error
- Mutates proprietary identifiers before transmission, so what leaves the network is never the real code
- Logs every session with mutation count, secrets blocked, model used, and timestamp
- Reverses mutations in responses so developers receive working code with real names
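The mutate-and-restore round trip can be illustrated with a toy sketch. The identifier, the placeholder, and the one-for-one substitution below are invented for illustration; this is not Pretense's actual mutation scheme:

```shell
#!/bin/sh
# Toy mutate/restore round trip: swap a proprietary identifier for a
# placeholder before "transmission", then reverse the mapping on the response.
REAL='calculateRiskScore'   # invented proprietary function name
ALIAS='fn_a1b2c3'           # invented placeholder

prompt='refactor calculateRiskScore to avoid the nested loop'
outbound=$(printf '%s' "$prompt" | sed "s/$REAL/$ALIAS/g")
echo "outbound: $outbound"
# prints: outbound: refactor fn_a1b2c3 to avoid the nested loop

response='function fn_a1b2c3(input) { /* refactored */ }'
restored=$(printf '%s' "$response" | sed "s/$ALIAS/$REAL/g")
echo "restored: $restored"
# prints: restored: function calculateRiskScore(input) { /* refactored */ }
```

The model only ever sees the placeholder; the developer only ever sees the real name.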
The policy says confidential data requires a technical control. Pretense is that control. It is deployed in 30 seconds and runs transparently in the developer's existing workflow.
```sh
curl -fsSL https://pretense.ai/install.sh | sh
pretense init
pretense start
export ANTHROPIC_BASE_URL=http://localhost:9339
```

That four-command sequence brings a developer into policy compliance. For team-wide enforcement, set the environment variable in the shared development environment and add the CI gate that verifies proxy routing.
Audit Evidence Your Compliance Team Needs
For SOC 2, ISO 27001, and HIPAA, auditors will ask three questions about AI tool usage:
**What controls exist to prevent unauthorized data transmission?** Answer: Pretense mutation proxy with secrets blocking.
**How do you know the controls are working?** Answer: Audit log with per-session mutation counts, secrets blocked events, and model endpoint records.
**How long are audit records retained?** Answer: [Your retention period]. Export with `pretense audit --export=csv --range=90d`.
The audit log is the artifact. Without it, you are asserting control without evidence. With it, you can walk an auditor through exactly which sessions occurred, what was protected, and what was blocked.
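A walkthrough summary can be produced directly from the export. The CSV column layout assumed below (timestamp, model, mutations, secrets_blocked) is illustrative; verify it against the real export's header before relying on it:

```shell
#!/bin/sh
# Summarize a Pretense audit export for an auditor walkthrough.
# Column layout (timestamp,model,mutations,secrets_blocked) is assumed
# for illustration; check the real export's header first.
pretense audit --export=csv --range=90d |
awk -F, 'NR > 1 {
    sessions++; mutations += $3; blocked += $4
}
END {
    printf "sessions: %d\nmutations: %d\nsecrets blocked: %d\n",
           sessions, mutations, blocked
}'
```

Three totals answer the three auditor questions in one artifact: how many sessions occurred, how much was protected, and how much was blocked.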
[Download the policy template and get started with Pretense at /docs/quickstart](/docs/quickstart) or [see enterprise pricing at /pricing](/pricing).