Blog

AI Security Insights

Technical deep-dives on protecting proprietary code from LLM APIs.

12 min read
Security · Enterprise · AI Safety

The State of AI Code Security in 2026: We Are in the Pre-Firewall Era

Six major AI security incidents in 12 months prove we are in the pre-firewall era of AI security. Just as network firewalls became mandatory in the 1990s, AI firewalls will become mandatory in the 2020s.

Read article
10 min read
Security · Incident · Enterprise · DLP

Samsung Engineers Leaked Chip Design Source Code to ChatGPT -- Banning AI Did Not Work

Samsung engineers leaked semiconductor source code to ChatGPT three times. Samsung banned ChatGPT. Engineers kept using AI anyway. Here is the lesson.

Read article
10 min read
Security · Incident · Supply Chain

LiteLLM CVSS 9.6 Vulnerability: When Your AI Proxy Becomes the Attack Vector

A critical vulnerability in LiteLLM AI proxy could intercept all AI API traffic. Here is why local-first, open-source AI security beats cloud proxy architectures.

Read article
10 min read
Security · Incident · Government · Compliance

CISA Officials Used ChatGPT for Sensitive Government Work -- Policy Alone Does Not Work

When CISA officials used ChatGPT for sensitive government work, it proved that policy without technical enforcement fails. Here is why AI firewalls are the answer.

Read article
10 min read
Security · Incident · Privacy

When Google Indexed 4,500 Private ChatGPT Conversations

A missing noindex tag exposed thousands of private ChatGPT conversations to search engines. Here is what happened and why code mutation would have made the leaked data useless.

Read article
10 min read
Security · Incident · Supply Chain

Axios npm Supply Chain Attack: When 100M Weekly Downloads Get Compromised

The Axios npm package was compromised with a remote access trojan that exfiltrated source files. Here is what happened and why code mutation is your last line of defense.

Read article
10 min read
Security · Incident · Enterprise · DLP

Shadow AI: 40% of Files Uploaded to ChatGPT Contain Sensitive Data

LayerX Security found that 40% of enterprise files uploaded to ChatGPT contained PII or PCI data. GenAI is now the #1 data exfiltration channel. Here is what happened and how Pretense stops it.

Read article
12 min read
Strategy · Security · Vision · Enterprise

Why AI Code Security Is a 30-Year Problem

The problem of protecting proprietary code from AI systems does not get solved by better models. It gets worse. Here is why AI code security is a permanent category and what the next three decades look like.

Read article
11 min read
Breaches · Security · 2025 · Incidents

5 Real AI Security Incidents from 2025 — and How Pretense Stops Each One

The breaches that defined AI security in 2025: code leaked through Cursor, API keys exposed in Claude sessions, enterprise IP in Copilot context windows. Here is exactly how Pretense prevents each attack vector.

Read article
6 min read
Technical · Architecture · Tutorial

How Pretense Works: A 5-Minute Technical Overview

A clear, visual walkthrough of Pretense's request flow, mutation algorithm, and 30-second deployment. After reading this, you know exactly what Pretense does and how to run it.

Read article
8 min read
Case Study · FinTech · Enterprise · Compliance

Case Study: How a FinTech Team Protected 2.3M Lines of Proprietary Code While Using Claude

A synthetic composite showing how a regulated FinTech team deployed Pretense after a CISO stop-work order, protected 2.3M proprietary identifiers, and maintained 94% developer productivity with a full SOC2 audit trail.

Read article
8 min read
Security · Architecture

Why Code Mutation Beats Redaction for AI Security

Redaction tools remove information from prompts, but they break LLM context and output quality. Here is why mutation (replacing identifiers with semantically equivalent synthetics) is the right approach.

Read article
11 min read
Comparison · Enterprise

Pretense vs. Nightfall DLP: A Technical Comparison

Nightfall is the incumbent in AI data loss prevention. We did a detailed technical and cost comparison. Here is how Pretense stacks up on every dimension that matters to enterprise security teams.

Read article
6 min read
Tutorial · Claude Code

Securing Claude Code: A Step-by-Step Guide

Claude Code is transforming how engineers write code. But every prompt you send contains proprietary identifiers. This guide shows exactly how to route Claude Code through Pretense in under 5 minutes.

Read article
9 min read
Architecture · Security · Community

Why the Mutation Algorithm is Documented (And Why It Makes Us Stronger)

The mutation algorithm being documented is a feature, not a bug. If the algorithm is public knowledge, security does not depend on keeping it secret. It depends on keeping your mutation keys private. Like SSL: the protocol is public, your private key is private.

Read article
13 min read
SOC2 · Compliance · Enterprise · CISO

Your SOC2 Auditor Will Ask About AI Code Security. Here Is What to Say.

SOC2 CC6.7 and CC7.2 now effectively require demonstrating control over AI tool usage. Here is a practical guide with ready-to-use control documentation, the four artifacts your auditor wants, and a template control statement you can use today.

Read article
10 min read
Security · Developer · Education · Copilot

AI Security 101: What Every Developer Needs to Know Before Using Copilot or Claude

Most developers do not think about what they are sending to AI tools. This is a practical primer on what leaves your machine, where it goes, what is actually at risk, and three rules every developer should follow before using AI on production code.

Read article
10 min read
Developer · Security · Practical

The Developer's Guide to Using AI Coding Tools Without Getting Fired

Most companies prohibit sending proprietary code to external APIs. It is in the employee handbook you did not read. Here is how to use AI tools anyway, pragmatically and safely.

Read article
7 min read
Launch · Founder · Product Hunt

We're Launching on Product Hunt: Here's What We Built and Why

Pretense started as a solution to our own problem. We were using Claude to build Pretense, and realized we were sending proprietary code to Anthropic. Here is the full story.

Read article
11 min read
Enterprise · Security · Trust · Architecture

Why Pretense is Fully Auditable (And What It Means for Enterprise Buyers)

Most security tools are black boxes. Pretense's mutation engine is fully documented and auditable. Here is why that decision makes Pretense more trustworthy, not less.

Read article
13 min read
Predictions · CISO · Security · Industry

5 AI Security Predictions for 2026-2027

Opinionated predictions on where AI security is heading: audit trails, breach incidents, data residency law, mutation replacing redaction, and the CISO evolving from gatekeeper to architect.

Read article
12 min read
Changelog · Product · v0.2.0

Pretense v0.2.0: Everything We Added in the Last 90 Days

From a single CLI package to a 17-package monorepo with VS Code extension, GitHub Action, MCP server, and dashboard. Here is what changed and what we learned.

Read article
9 min read
Security · Architecture · Explainer

What Is Code Mutation and Why It Beats Redaction for AI Security

Code mutation replaces proprietary identifiers with semantically equivalent synthetics before sending to LLM APIs. Unlike redaction, it preserves context so the AI still produces useful output. Here is how it works and why it is the right architectural choice.

Read article
8 min read
Copilot · Developer · Tutorial

How to Protect Proprietary Code When Using GitHub Copilot

GitHub Copilot sends your code to Microsoft servers. For most teams that is acceptable. For teams with proprietary algorithms, client contracts, or regulated data, it requires a protection layer. Here is a practical guide to using Copilot safely.

Read article
12 min read
CISO · Enterprise · Security · Policy

The CISO Guide to AI Coding Tool Security in 2026

AI coding tools are now standard developer infrastructure. For CISOs, that creates a new attack surface: every code completion, every prompt, every context window is a potential data exfiltration channel. This guide covers the threat model, control framework, and enforcement mechanisms.

Read article
11 min read
SOC2 · Compliance · Enterprise · CISO

SOC2 Compliance for AI-Assisted Development Teams

SOC2 Type II auditors are increasingly asking about AI tool usage controls. CC6.7 requires demonstrating that third-party data access is controlled. If your team uses Copilot, Cursor, or Claude, you need a documented control. Here is what to build.

Read article
8 min read
Architecture · DLP · Security · Enterprise

Why Local-First AI Security Beats Cloud DLP

Cloud DLP tools scan your data after it reaches their servers. For AI coding tools, that is too late: the data left your network the moment the developer hit autocomplete. Local-first security stops exfiltration before transit, not after.

Read article
7 min read
Comparison · Developer · Productivity

Pretense vs Manual Code Review: Speed and Coverage Compared

Manual code review catches some secrets and proprietary identifiers before AI prompts are sent. But it catches roughly 40% of them, introduces 2-3 day delays, and does not scale with team growth. Here is a detailed comparison.

Read article
10 min read
FinTech · Compliance · Enterprise · Use Case

How Financial Services Teams Use AI Coding Tools Safely

Financial services firms face stricter data handling requirements than most industries. Sending proprietary trading algorithms or client-identifying code to AI APIs creates real regulatory exposure. Here is how regulated FinServ teams are solving this without blocking developer productivity.

Read article
11 min read
HIPAA · Healthcare · Compliance · Enterprise

HIPAA Compliant AI Development: A Practical Guide

HIPAA does not prohibit using AI coding tools. It prohibits sending protected health information to unauthorized parties. If your codebase references patient data structures, claim identifiers, or PHI schemas, standard AI tools create real compliance exposure. Here is how to close the gap.

Read article
9 min read
CISO · Policy · Enterprise

CISO Guide to AI Coding Tools: Security Policy Template for 2026

A practical security policy template for CISOs managing AI coding tool adoption. Covers the four questions every policy must answer, tool approval process, data classification requirements, audit evidence, and Pretense as the enforcement layer.

Read article
8 min read
Claude · Copilot · Comparison

Claude Code vs GitHub Copilot: Enterprise Security Comparison

A detailed comparison of Claude Code and GitHub Copilot data handling, IP exposure risks, training opt-out defaults, and enterprise contract protections. Mutation resolves the core risk for both tools.

Read article
10 min read
Security · ROI · Enterprise

The Hidden Cost of AI Coding Tools: IP Exposure by the Numbers

The expected cost of IP exposure from AI coding tools: $250K per incident, scaled by probability. A full ROI model showing Pretense pays for itself via audit time savings before counting any incident it prevents.

Read article
9 min read
HIPAA · Compliance · Healthcare

HIPAA Compliance With AI Coding Assistants: A Practical Guide

How AI coding assistants create PHI exposure risk through code structure and identifier names, not just data values. Mutation as a HIPAA technical safeguard, and how the Pretense audit log satisfies HIPAA documentation requirements.

Read article
9 min read
SOC2 · Compliance · Audit

SOC2 Type II and AI Coding Tools: What Auditors Ask in 2026

The specific SOC2 Type II questions auditors ask about AI coding tool usage in 2026, the full evidence package they expect, and how the Pretense audit log satisfies each criterion including CC6.7 and CC7.2.

Read article
10 min read
Security · Developer · Risks · 2026

The 5 AI Coding Security Risks Every Engineering Team Faces in 2026

The five most common ways AI coding tools leak proprietary code: function names revealing business logic, variable names exposing architecture, comments as training data, secrets inferred from context, and refactoring requests exposing complete internal APIs. With real examples and prevention for each.

Read article
12 min read
Technical · Architecture · Security · Deep Dive

Mutation vs Redaction: A Technical Deep Dive into AI Privacy Techniques

Why redaction degrades LLM output quality to 58.5% of baseline while mutation preserves 92.5%. A technical comparison of NER-based DLP, regex redaction, and deterministic mutation, with benchmark data and the round-trip problem explained.

Read article
7 min read
Tutorial · Claude Code · Setup

Using Pretense with Claude Code: Complete Setup Guide

Step-by-step guide to routing Claude Code through the Pretense proxy. Covers what Claude Code actually transmits to Anthropic, installation, the ANTHROPIC_BASE_URL configuration, verifying protection via the audit log, and team-wide rollout options.

Read article
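The heart of that setup is pointing Claude Code at the local proxy via `ANTHROPIC_BASE_URL`. A minimal sketch, with the caveat that the port number here is an assumption for illustration; the guide covers the actual defaults:

```shell
# Hedged sketch: route Claude Code through a local Pretense proxy.
# The port (8787) is illustrative -- use whatever your proxy listens on.
export ANTHROPIC_BASE_URL="http://localhost:8787"

# Claude Code now sends every request through the local proxy,
# which mutates identifiers before they reach Anthropic.
claude
```

Because the override is a single environment variable, it can be set per shell for testing or baked into team dotfiles for a permanent rollout.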
11 min read
Open Source · Security · Tools · Review

The Best Open Source AI Security Tools in 2026 (Reviewed)

Honest review of Gitleaks, Semgrep, Trivy, detect-secrets, truffleHog, and Pretense. What each tool does, install time, false positive rate, and what it misses. Pretense is complementary to all of them: they scan static code, Pretense protects at the AI API boundary.

Read article
13 min read
CISO · Enterprise · Policy · Security

The CISO's Guide to Approving AI Coding Tools in 2026

How security leaders should evaluate and roll out AI coding tools without creating IP exposure risks. Covers the policy gap (67% of enterprises have no AI coding policy), risk matrix by tool, a 5-question approval checklist, what to log, and incident response when code does leak.

Read article
10 min read
Rust · Performance · Architecture · Engineering

Why We Rewrote Our Code Scanner in Rust: 27x Faster at 1.82ms

The technical story behind the Pretense Rust scanner rewrite. TypeScript was taking 50ms per scan, blocking the proxy hot path. Rust with Rayon and NAPI bindings brought it to 1.82ms. Includes benchmark methodology, the djb2+SHA-256 mutation algorithm, and lessons learned.

Read article
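The djb2+SHA-256 combination mentioned above can be sketched as follows. This is an illustrative TypeScript rendering, not the Rust implementation: the synthetic-name format, the use of HMAC for keying, and the exact way the two hashes are combined are all assumptions made for the example.

```typescript
// Hedged sketch: deterministic identifier mutation in the spirit of
// djb2 + a keyed SHA-256 hash. Names and key handling are illustrative.
import { createHmac } from "node:crypto";

// Classic djb2 string hash (xor variant), kept to 32 bits.
function djb2(s: string): number {
  let h = 5381;
  for (let i = 0; i < s.length; i++) {
    h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Map a proprietary identifier to a stable synthetic one.
// The same (key, identifier) pair always yields the same synthetic
// name, so model responses can be reversed by the proxy.
function mutateIdentifier(identifier: string, key: string): string {
  const digest = createHmac("sha256", key).update(identifier).digest("hex");
  // djb2 gives a fast fingerprint; the keyed digest prefix keeps the
  // mapping unguessable without the private mutation key.
  return `sym_${djb2(identifier).toString(36)}_${digest.slice(0, 8)}`;
}

const a = mutateIdentifier("calculateRiskScore", "team-secret-key");
const b = mutateIdentifier("calculateRiskScore", "team-secret-key");
console.log(a === b); // deterministic: same key and input, same output
```

Determinism is what makes the hot path cheap: the scanner never stores per-identifier state, it just recomputes the same synthetic name on every request.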

Stay ahead of AI security threats

One email per week. Technical depth. No marketing fluff.