SOC2 Type II and AI Coding Tools: What Auditors Ask in 2026
The specific SOC2 Type II questions auditors ask about AI coding tool usage, the evidence package they expect, and how the Pretense audit log satisfies each requirement.
The Audit Landscape Has Changed
Two years ago, SOC2 auditors rarely asked about AI coding tools. The tools were new, adoption was low, and the audit community had not yet developed a standard framework for evaluating AI tool controls.
In 2026, it is a standard line of questioning. If your engineering team uses Copilot, Cursor, Claude Code, or any AI coding assistant, expect your next SOC2 Type II audit to include a dedicated review of how you control that usage.
This post covers the specific questions auditors ask, the evidence they expect, and how to prepare.
The Relevant Trust Service Criteria
SOC2 Type II evaluates the five Trust Services Criteria: Security (the only one required in every report), Availability, Processing Integrity, Confidentiality, and Privacy. AI coding tool usage touches several of them.
**CC6.7 (Logical and Physical Access Controls)**: "The entity restricts the transmission, movement, and removal of information to authorized internal and external users and processes, and protects it during transmission, movement, or removal to meet the entity's objectives."
This is the primary criterion for AI tool usage. Sending code to an LLM API is a transmission of information. CC6.7 requires demonstrating that this transmission is controlled and that the information is protected.
**CC7.2 (System Operations)**: "The entity monitors system components and the operation of those components for anomalies that are indicative of malicious acts, natural disasters, and errors affecting the entity's ability to meet its objectives."
Auditors use CC7.2 to ask about monitoring: how do you detect unauthorized AI tool usage, and how do you respond when you find it?
**C1.1 and C1.2 (Confidentiality)**: "The entity identifies and maintains confidential information to meet the entity's objectives related to confidentiality" and "The entity disposes of confidential information to meet the entity's objectives related to confidentiality."
If your service commitments include protecting customer data or system architecture details, C1.1 and C1.2 require demonstrating that AI tool usage does not undermine those commitments.
What Auditors Ask
Based on the pattern of questions from auditors at major firms, here is what to prepare for:
**Q1: Do your engineers use AI coding assistants?**
The answer is almost certainly yes, and auditors know it. The question is about establishing a baseline.
Preparation: Have a documented inventory of approved AI tools. This is the tool approval list from your security policy. If you do not have one, create one before the audit. An informal "yeah, people use Copilot" is a red flag.
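You can also verify the inventory against reality: every Pretense session records a model endpoint (see the log fields later in this post), so a short script can flag traffic to endpoints outside the approved list. A minimal sketch in Python, assuming a CSV export with a `model_endpoint` column; the column name and the endpoints themselves are illustrative assumptions, so check your actual export header:

```python
# Sketch: flag model endpoints in the audit export that are not on the
# approved tool list. Column name and endpoints are assumptions.
import csv

APPROVED_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
}

with open("soc2-evidence.csv", newline="") as f:
    seen = {row["model_endpoint"] for row in csv.DictReader(f)}

unapproved = seen - APPROVED_ENDPOINTS
if unapproved:
    print("Endpoints outside the approved inventory:", ", ".join(sorted(unapproved)))
else:
    print("All logged endpoints match the approved tool list.")
```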
**Q2: What controls prevent unauthorized transmission of confidential information through AI tools?**
This is the core question. Auditors are asking about CC6.7 compliance for AI tool traffic.
A policy statement alone is not sufficient. Auditors want a technical control: something that enforces the policy automatically, not something that relies on developer judgment.
Strong answer: "We deploy Pretense as a mandatory proxy for all LLM API traffic. It enforces mutation of proprietary identifiers before transmission and blocks secrets patterns. The proxy is enforced through CI/CD gates. Developers cannot make unproxied API calls from the development environment."
Weak answer: "We have a policy that says developers should not send confidential code to AI tools."
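What the CI/CD gate looks like varies by pipeline, but the core check is simple: fail the build if anything in the repository targets an LLM API host directly instead of the proxy. A minimal sketch in Python; the hostnames and file globs are illustrative assumptions, not Pretense features:

```python
# Sketch of a CI gate: fail the build if source or config files reference
# LLM API hosts directly. Hostnames and globs are illustrative assumptions.
import pathlib
import re
import sys

DIRECT_HOSTS = re.compile(r"api\.openai\.com|api\.anthropic\.com")
SOURCE_GLOBS = ("**/*.py", "**/*.ts", "**/*.env*", "**/*.yml")

def find_direct_calls(root: str = ".") -> list[str]:
    hits = []
    for pattern in SOURCE_GLOBS:
        for path in pathlib.Path(root).glob(pattern):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if DIRECT_HOSTS.search(text):
                hits.append(str(path))
    return hits

if __name__ == "__main__":
    offenders = find_direct_calls()
    if offenders:
        print("Direct LLM endpoints found; route these through the proxy:")
        print("\n".join(sorted(offenders)))
        sys.exit(1)  # non-zero exit fails the pipeline, blocking the merge
```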
**Q3: How do you monitor AI tool usage?**
CC7.2 asks about monitoring. Auditors want to see that you can detect anomalies: sessions that bypass the proxy, unusual data volumes, new endpoints being used.
Preparation: Demonstrate the audit log. Show the auditor a sample report from `pretense audit`. Show them how you would detect an unproxied session. Show them the alert mechanism for secrets blocking events.
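Detecting an unproxied session means looking for absence: a developer who bypasses the proxy leaves no log entries. One workable heuristic is to cross-reference days with git commits against days with proxied sessions. A sketch, assuming a CSV export with an ISO 8601 `timestamp` column; a flagged day is a prompt to investigate, not proof of a bypass:

```python
# Sketch: flag days that have git commits but no proxied LLM sessions.
# Assumes an ISO 8601 "timestamp" column in the CSV export.
import csv
import subprocess

def session_days(export_path: str) -> set[str]:
    """Calendar days (YYYY-MM-DD) on which proxied sessions were logged."""
    with open(export_path, newline="") as f:
        return {row["timestamp"][:10] for row in csv.DictReader(f)}

def commit_days(repo: str = ".") -> set[str]:
    """Calendar days on which commits landed in the repository."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--pretty=%ad", "--date=short"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

for day in sorted(commit_days() - session_days("soc2-evidence.csv")):
    print(f"{day}: commits but no proxied sessions -- investigate")
```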
**Q4: Show me evidence that controls were operating effectively during the audit period.**
Type II audits require evidence over time, not just at the audit date. Auditors will request a sample of audit log entries from across the audit period (typically 6 to 12 months) to verify that controls were applied consistently.
Preparation: Run `pretense audit --export=csv --range=365d` before the audit fieldwork begins. This produces a comprehensive report showing every session, every mutation, and every blocked event over the full year.
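Before handing the file to the auditor, confirm it actually spans the period. A quick sanity check, under the same `timestamp` column assumption:

```python
# Sketch: confirm the export spans the full audit period before fieldwork.
# Assumes an ISO 8601 "timestamp" column, which sorts lexicographically.
import csv

with open("soc2-evidence.csv", newline="") as f:
    stamps = sorted(row["timestamp"] for row in csv.DictReader(f))

if stamps:
    print(f"Export covers {stamps[0][:10]} through {stamps[-1][:10]}")
else:
    print("Export is empty -- investigate before fieldwork")
```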
**Q5: What is your response procedure for an AI tool incident?**
Auditors want to see that you have thought through what happens if a developer bypasses controls or if a secrets blocking event occurs.
Preparation: Document a brief incident response procedure specifically for AI tool events. Secrets blocking events should generate an alert, be reviewed within 24 hours, and be documented in your incident log.
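The review queue can be generated straight from the export: pull every session with a nonzero secrets-blocked count and work through each one. A sketch, assuming `secrets_blocked`, `session_id`, and `developer_id` column names (the last may be absent if not configured):

```python
# Sketch: build a 24-hour review queue from every secrets-blocking event
# in the export. Column names are assumptions; "developer_id" is optional.
import csv

with open("soc2-evidence.csv", newline="") as f:
    events = [r for r in csv.DictReader(f) if int(r["secrets_blocked"]) > 0]

for e in events:
    print(f"{e['timestamp']}  session={e['session_id']}  "
          f"developer={e.get('developer_id') or 'n/a'}  "
          f"blocked={e['secrets_blocked']}  -> document review in incident log")
```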
The Evidence Package
For a complete SOC2 Type II evidence package for AI tool controls, prepare these artifacts:
**1. Policy document**: Your AI coding tool usage policy, including the approved tool list, data classification requirements, and enforcement mechanism. It should be version-controlled and show when it was last reviewed.
**2. Technical control description**: A one-page description of how Pretense operates as the enforcement layer. This should describe what gets mutated, what gets blocked, and how the audit log is generated.
**3. Audit log sample**: A CSV or JSON export of audit log entries from the audit period. Auditors will sample these for consistency and look for gaps (periods where no logs were generated despite active development).
**4. CI/CD gate configuration**: Evidence that the proxy enforcement is embedded in your pipeline, not just recommended. A screenshot of the pipeline configuration or the relevant pipeline-as-code section.
**5. Incident log**: Any secrets blocking events or policy exceptions during the audit period, with documentation of the review and resolution.
This evidence package answers all five common audit questions and provides Type II evidence (operating effectiveness over time), not just design adequacy.
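Assembling the package is scriptable. A sketch that regenerates the audit log export (using the command and flags shown in this post) and zips the five artifacts; the file paths are placeholders for wherever your artifacts actually live:

```python
# Sketch: regenerate the full-period export and bundle the five evidence
# artifacts. File paths are placeholders for your repository layout.
import subprocess
import zipfile

ARTIFACTS = [
    "policy/ai-tool-policy.md",        # 1. policy document
    "docs/pretense-control.md",        # 2. technical control description
    "soc2-evidence.csv",               # 3. audit log sample
    "ci/pipeline.yml",                 # 4. CI/CD gate configuration
    "incidents/ai-tool-incidents.md",  # 5. incident log
]

subprocess.run(
    ["pretense", "audit", "--export=csv", "--range=365d",
     "--output=soc2-evidence.csv"],
    check=True,
)

with zipfile.ZipFile("soc2-evidence-package.zip", "w") as bundle:
    for path in ARTIFACTS:
        bundle.write(path)
print("Wrote soc2-evidence-package.zip")
```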
Common Gaps Auditors Find
**Gap 1: Policy exists, no technical enforcement**
Many organizations respond to their first SOC2 finding by writing an AI tool policy but never implement a technical control behind it. At the next audit the policy is present, but the auditor finds no evidence that it was enforced. The gap persists.
Solution: Deploy a technical control (Pretense) and generate audit logs from day one.
**Gap 2: Audit logs exist but have gaps**
If the Pretense proxy is deployed but not mandatory, developers who opt out of using the proxy generate no logs. Auditors treat unexplained gaps in audit logs as evidence that the control was not operating effectively.
Solution: Enforce proxy routing through the CI/CD gate. Make the proxy mandatory, not optional.
**Gap 3: Incidents not documented**
Secrets blocking events that are not documented and reviewed leave an incomplete record. If an auditor finds blocking events in the proxy logs but no corresponding entries in the incident log, it appears that the team is not monitoring its own controls.
Solution: Set up automated alerting for secrets blocking events and document each review, even if the review is brief (for example, "developer sent a test file with a hardcoded credential, remediated by moving to environment variable").
**Gap 4: Policy not reviewed since initial creation**
Auditors check policy review dates. A policy written in 2024 that has not been reviewed since is evidence that the organization is not actively managing the control.
Solution: Schedule an annual review of the AI tool policy. Document the review in your policy management system.
The Pretense Audit Log as SOC2 Artifact
The Pretense audit log is designed to satisfy SOC2 evidence requirements without requiring additional tooling. Each log entry includes:
- Timestamp (ISO 8601)
- Session identifier
- Developer identifier (if configured)
- File path(s) processed
- Model endpoint
- Mutation count
- Secrets blocked count
- PHI patterns detected count
- Round-trip fidelity percentage
The export format (CSV or JSON) is auditor-friendly and integrates with standard GRC platforms including Drata, Vanta, and Secureframe.
Run before fieldwork:
```
pretense audit --export=csv --range=365d --output=soc2-evidence.csv
```

The resulting file is the primary artifact for AI tool control evidence.
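One last check before fieldwork: confirm the export header carries every field auditors will sample. A sketch using snake_case column names assumed from the field list above; `developer_id` is treated as optional since it only appears when configured:

```python
# Sketch: confirm the CSV export header includes the expected fields.
# Column names are assumptions drawn from the field list above.
import csv

REQUIRED = {
    "timestamp", "session_id", "file_paths", "model_endpoint",
    "mutation_count", "secrets_blocked", "phi_detected", "fidelity_pct",
}  # "developer_id" is excluded: it appears only when configured

with open("soc2-evidence.csv", newline="") as f:
    header = set(csv.DictReader(f).fieldnames or [])

missing = REQUIRED - header
print("Missing fields:", sorted(missing) if missing else "none")
```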
[Get started with Pretense](/docs/quickstart) or [contact us about enterprise audit support](/pricing).