
The CISO's Guide to Approving AI Coding Tools in 2026

How security leaders should evaluate and roll out AI coding tools without creating new IP exposure risks.

The Policy Gap

A 2025 survey of enterprise security teams found that 67% had no formal policy governing AI coding tool usage. This is not because security leaders are uninformed. It is because the tools moved faster than the policy frameworks.

The typical timeline at enterprise companies: developers began using Copilot and Cursor in mid-2024, often with informal approval or under personal accounts. Security teams became aware of the practice during routine audits in late 2024. By the time a formal evaluation began, the tools were already deeply embedded in engineering workflows.

This guide is for CISOs who are now in that evaluation phase: trying to formalize governance over tools that are already in use, or trying to get ahead of adoption before it becomes ungoverned.

---

The Actual Threat Model

Before building a policy, you need an accurate threat model. Most security leaders I speak with have one of two framings that are both incomplete.

**Framing 1: "AI tools are dangerous, we should block them."**

This framing misses that blocking popular developer tools does not eliminate the behavior. It drives it underground. Teams that issue blanket AI tool bans consistently see an increase in developers using personal accounts and web-based interfaces, which are harder to monitor, audit, and control than API-routed tools.

**Framing 2: "AI vendors have enterprise agreements with data protection clauses, so we are covered."**

This framing misses the residual risk. Enterprise data protection agreements typically cover intentional data use and training. They do not cover breach scenarios (if the vendor is compromised, your data was already transmitted), inference attacks (pattern analysis across anonymized request sets), or the risk of a developer accidentally including a secret in a prompt (contractual protection does not recover exposed credentials).

The accurate framing: AI coding tools create a new data exfiltration channel that requires active controls, not just contractual protections. The threat is not primarily malicious use by the vendor. It is the normal use of the tools creating inadvertent exposure that would not have been acceptable in a pre-AI world.

---

Risk Matrix: Which Tools Send What Data Where

| Tool | Data Transmitted | Vendor | Training Opt-Out | API-Routed |
| --- | --- | --- | --- | --- |
| GitHub Copilot | Code context (current file + open files) | Microsoft/OpenAI | Yes (Business/Enterprise) | Yes |
| Cursor | Selected code + prompt | Cursor/Anthropic/OpenAI | Yes (Business) | Yes |
| Claude Code | Files read, terminal output, clipboard | Anthropic | Yes (API) | Yes |
| ChatGPT (web) | Full prompt text | OpenAI | Partial | No |
| Claude.ai (web) | Full prompt text | Anthropic | Partial | No |
| Copilot Chat (web) | Full prompt text | Microsoft | Yes (Enterprise) | No |

Key observations:

- API-routed tools (Copilot, Cursor, Claude Code) can be intercepted and protected at the API layer. This is where Pretense operates.
- Web-based tools (ChatGPT.com, Claude.ai) cannot be protected by a proxy. They require browser extension controls or network-level filtering.
- Training opt-out is a policy control, not a technical one. It reduces the risk of model training on your data but does not prevent transmission or breach exposure.
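The API-routed/browser-based distinction can be made concrete as a routing decision: given an outbound request's destination host, decide which control category applies. A minimal sketch; the hostnames are illustrative examples, not an exhaustive allowlist:

```python
# Illustrative: map a destination host to the control category it requires.
# The host sets below are examples only, not a complete inventory.
API_HOSTS = {"api.anthropic.com", "api.openai.com", "generativelanguage.googleapis.com"}
WEB_HOSTS = {"chatgpt.com", "chat.openai.com", "claude.ai"}

def control_for(host: str) -> str:
    """Return the control category for a destination host."""
    if host in API_HOSTS:
        return "proxy"            # interceptable at the API layer
    if host in WEB_HOSTS:
        return "browser-policy"   # needs extension controls or network filtering
    return "review"               # unknown LLM endpoint: flag for manual review
```

The "review" branch matters in practice: new tools appear faster than allowlists are updated, so unknown endpoints should fail toward review, not silent approval.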

---

Approval Framework: 5 Questions Before Approving Any AI Coding Tool

**Question 1: Is the traffic API-routed or browser-based?**

API-routed tools can be controlled at the network layer with a proxy like Pretense. Browser-based tools require different controls (extension policies, network filtering, acceptable use training). Treat these as separate risk categories.

**Question 2: What data categories are transmitted in a typical session?**

For each tool, map a typical developer session: what files are read, what context is sent, what the largest possible context window contains. For Claude Code, a typical session can include hundreds of files and terminal output. For Copilot, it is typically the current file and a few open files. The exposure surface is different.
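One rough way to size the worst-case exposure surface for a full-context tool is to measure what a session could transmit from a repository. A hedged sketch, assuming source files are the exposure of interest and using an illustrative extension list:

```python
import os

def exposure_surface(root: str, exts=(".py", ".ts", ".go")) -> dict:
    """Rough upper bound on what a full-context tool could transmit
    from a repository: count and total size of matching source files.
    The extension list is illustrative; adjust per codebase."""
    files = 0
    size = 0
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            if name.endswith(exts):
                files += 1
                size += os.path.getsize(os.path.join(dirpath, name))
    return {"files": files, "bytes": size}
```

Running this per repository gives you a number to attach to Question 2's "largest possible context window" for each tool category.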

**Question 3: Does the vendor offer a data processing agreement (DPA) with appropriate provisions?**

Minimum requirements: data not retained beyond the session, training opt-out honored for all tiers you are purchasing, breach notification within 72 hours, and data residency options for regulated industries. For healthcare and financial services, verify that PHI/PII handling is explicitly covered under the DPA.

**Question 4: Is there a technical control layer, or are you relying solely on contractual protections?**

Contractual protections fail silently. A technical control (mutation proxy, network filtering, browser extension policy) provides verifiable enforcement. For any tool approved for use on proprietary or regulated code, require a technical control layer.

**Question 5: What does the audit trail look like?**

You need to be able to answer: which identifiers were transmitted to which vendor, when, by whom, and whether they were protected. If you cannot produce this evidence, you cannot satisfy SOC2 CC6.7, ISO 27001 A.13, or HIPAA technical safeguard requirements for AI tool usage.
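The audit question maps directly onto a query over request records. A sketch, assuming illustrative field names that a proxy's log might emit (not a fixed Pretense schema):

```python
# Hedged sketch: answer "which identifiers went to which vendor, when,
# by whom, and were they protected?" over a list of request records.
# Field names are illustrative, not a defined log format.
def audit(records, identifier):
    """Return (vendor, timestamp, user, protected) for every request
    that transmitted the given identifier."""
    return [
        (r["vendor"], r["timestamp"], r["user"], r["mutation_active"])
        for r in records
        if identifier in r.get("identifiers", [])
    ]
```

If your logging cannot support a query of this shape, that is the gap to close before approval, not after.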

---

What to Log, What to Alert On

**Log everything:**

- Every outbound request to LLM API endpoints (Anthropic, OpenAI, Azure OpenAI, Google Vertex)
- The session ID, user/machine identifier, model used, and timestamp
- Whether a technical protection layer (mutation) was active for the request
- Any secrets or high-risk patterns detected and blocked
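The fields above can be captured as one structured log line per request. A minimal sketch; the schema is illustrative, not a required format:

```python
import json
import time
import uuid

def log_record(user, model, vendor, mutation_active, findings=()):
    """Emit one JSON line per outbound LLM request, covering the
    fields listed above. Field names are illustrative."""
    return json.dumps({
        "session_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "model": model,
        "vendor": vendor,
        "mutation_active": mutation_active,
        "findings": list(findings),  # detected-and-blocked patterns, not code
    })
```

Note what is absent: the prompt body itself. Logging metadata rather than content keeps the SIEM out of scope for the code it is protecting.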

**Alert on:**

- Requests to LLM endpoints that did not pass through an approved proxy
- Secrets scanner findings (any API key, credential, or token detected in a prompt)
- High-volume sessions (a single session with an unusually large context window, which may indicate bulk code exfiltration)
- Requests from machines that have not been onboarded to the protection layer
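These four rules can be expressed as a single predicate over a request record. A sketch with illustrative field names and an assumed context-size threshold:

```python
def alerts_for(record, approved_proxy=True, onboarded=True,
               max_context_bytes=2_000_000):
    """Evaluate the alert rules above against one request record.
    The threshold and field names are illustrative, not prescribed."""
    alerts = []
    if not approved_proxy:
        alerts.append("bypassed-proxy")
    if record.get("secrets_found"):
        alerts.append("secret-in-prompt")
    if record.get("context_bytes", 0) > max_context_bytes:
        alerts.append("bulk-context")
    if not onboarded:
        alerts.append("unonboarded-machine")
    return alerts
```

Tune `max_context_bytes` per tool category: a limit that is normal for Claude Code would be anomalous for Copilot.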

**Do not alert on:**

- Volume alone (developer productivity requires high-frequency AI usage)
- Specific code content (your SIEM does not need to see the actual code, only metadata)

---

Incident Response: What to Do When Code Does Leak

Define "leak" before you need to invoke this plan. Recommended definition: an outbound API request that contained proprietary code identifiers without a mutation or redaction layer active.

**Immediate (0-4 hours):**

1. Identify the session: which machine, which developer, which tool, what timestamp
2. Pull the full request log for the session from your proxy or network capture
3. Assess what was transmitted: which files, which identifiers, whether any secrets were included
4. If secrets were included: rotate immediately. Do not wait for further analysis.
5. Notify the vendor in writing if the incident involved PHI, PCI-scope data, or trade secrets (required under most DPAs)
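The secrets check in step 3 can be automated with a pattern scan over the transmitted prompt. An illustrative sketch; a production scanner would use a maintained ruleset (gitleaks-style), not two hand-written regexes:

```python
import re

# Illustrative secret patterns only; real deployments should use a
# maintained detection ruleset with far broader coverage.
PATTERNS = {
    "aws-access-key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic-api-key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(text):
    """Return the names of secret patterns found in a transmitted prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

Any non-empty result triggers step 4: rotate first, analyze later.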

**Short-term (4-48 hours):**

1. Determine root cause: was this a misconfiguration (developer bypassed the proxy), a tooling failure (the proxy had a bug), or a policy gap (this category of tool was not covered)?
2. Preserve logs for potential legal action
3. If trade secrets were transmitted: consult legal counsel on disclosure requirements and IP protection options
4. Brief the affected engineering team lead. Do not brief the full company until you understand the scope.

**Long-term (48 hours - 30 days):**

1. Close the root cause gap: reconfigure, patch, or update policy
2. Run a full audit of similar sessions that may have had the same gap
3. Update your SOC2 or ISO 27001 control documentation to reflect the incident and remediation
4. Consider whether the incident requires disclosure in your next compliance audit

---

The CISO's Approval Decision Tree

Is the tool API-routed?
  Yes -> Does a technical protection layer (proxy mutation) exist?
    Yes -> Is there a structured audit trail?
      Yes -> APPROVE with documentation
      No -> CONDITIONAL APPROVAL pending audit trail implementation
    No -> CONDITIONAL APPROVAL pending technical control deployment
  No -> Is it browser-based only?
    Yes -> Apply browser policy controls, limit to non-proprietary code use
    Partially -> Approve API use with controls, restrict browser use
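The tree above is simple enough to encode directly, which also makes the approval logic reviewable and testable. A sketch (outcome labels match the tree):

```python
def decision(api_routed, has_protection=False, has_audit=False, browser_only=False):
    """The approval decision tree above, as a function."""
    if api_routed:
        if has_protection:
            if has_audit:
                return "APPROVE with documentation"
            return "CONDITIONAL: pending audit trail implementation"
        return "CONDITIONAL: pending technical control deployment"
    if browser_only:
        return "BROWSER CONTROLS: limit to non-proprietary code"
    # Mixed tools: approve the API path with controls, restrict browser use
    return "SPLIT: approve API use with controls, restrict browser use"
```

Encoding the tree as code also gives you a single place to update when a tool changes category, e.g. when a browser-only tool ships an API-routed client.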

Most AI coding tools can be approved with appropriate controls. The goal is not to block adoption; it is to make adoption auditable and technically controlled.

---

A Note on the 67% Gap

The 67% of enterprises with no AI coding policy are not necessarily at higher risk than those with written policies. A written policy without technical enforcement is not a control. It is documentation of intent.

The question is not "do you have a policy?" It is "can you prove that your policy was followed, with machine-readable evidence, for every AI coding session in the last 90 days?"

If the answer is no, the policy is incomplete regardless of how well-written it is.

[Contact the Pretense team about enterprise deployment at pretense.ai/pricing](/pricing)
