# CISA Officials Used ChatGPT for Sensitive Government Work -- Policy Alone Does Not Work
When CISA officials used ChatGPT for sensitive government work, it proved that policy without technical enforcement fails. Here is why AI firewalls are the answer.
## When the Cybersecurity Agency Uses Insecure AI Tools
The irony could not be sharper. Officials at the Cybersecurity and Infrastructure Security Agency (CISA) -- the very agency tasked with protecting the United States' critical digital infrastructure -- were found to be using ChatGPT for sensitive government work. The agency responsible for telling everyone else to follow cybersecurity best practices was not following its own guidance.
This incident cut through years of debate about whether AI security policies are sufficient. If CISA officials -- people who write cybersecurity policy for a living, who understand the threat landscape better than almost anyone, who have access to classified briefings about nation-state cyber operations -- cannot resist using ChatGPT for work, then no policy will stop rank-and-file employees from doing the same.
The incident proved a point that security practitioners have been making for decades: **policy without technical enforcement is theater**.
## What Happened
Multiple CISA officials used commercial ChatGPT accounts for work-related tasks, including:
- Drafting policy documents related to critical infrastructure protection
- Analyzing threat intelligence reports
- Writing code for internal security tools
- Summarizing classified and sensitive briefing materials
The usage was discovered during an internal audit. The officials involved were not acting maliciously -- they were trying to be more productive. ChatGPT helped them draft documents faster, analyze data more efficiently, and write code with fewer errors. The same productivity benefits that drive AI adoption in every organization.
But each interaction sent sensitive government data to OpenAI's commercial infrastructure. Data about critical infrastructure vulnerabilities, threat actor techniques, and defensive capabilities -- exactly the kind of information that adversarial nation-states actively seek.
---
## Why Policy Failed at CISA
CISA had an AI usage policy. It was detailed, well-written, and clearly communicated. It explicitly prohibited using commercial AI tools for sensitive work. And it failed.
The reasons are instructive because they apply to every organization:
**1. Convenience beats compliance.** When a CISA official needs to draft a 20-page policy document, they can spend 8 hours writing it manually or 2 hours using ChatGPT. The productivity difference is too large to ignore, especially under deadline pressure.
**2. Perception of low risk.** Officials rationalized that ChatGPT "probably" does not store conversations long-term, that the data was "not really classified," or that the risk was "acceptable." These rationalizations are universal -- every employee in every company makes the same mental calculations.
**3. No technical enforcement.** There was no mechanism to prevent ChatGPT access. No proxy, no firewall, no monitoring. The policy relied entirely on individual compliance. In security, relying on individual compliance is equivalent to having no control at all.
**4. No visibility.** Security teams had no way to know who was using AI tools, what data was being sent, or how frequently it occurred. By the time the internal audit discovered the usage, months of sensitive data had already been transmitted.
```typescript
// What a CISA official might paste into ChatGPT
async function assessCriticalInfraVulnerability(
  target: InfrastructureTarget
): Promise<VulnerabilityReport> {
  const scanResults = await runNessusDeepScan(target.ipRange);
  const cveMatches = await correlateWithCISAKEV(scanResults);

  // Check against classified threat actor TTPs
  const threatCorrelation = await matchThreatActorPatterns(
    cveMatches,
    CLASSIFIED_TTP_DATABASE
  );

  return {
    targetId: target.facilityId,
    sector: target.criticalInfraSector, // e.g., 'energy', 'water', 'financial'
    vulnerabilities: cveMatches,
    threatActorRisk: threatCorrelation.riskLevel,
    recommendedActions: generateMitigationPlan(cveMatches),
  };
}

// After Pretense mutation -- what the AI would actually see:
async function _fn4a2b(
  _v3b1c: _cls7d4e
): Promise<_cls8e9f> {
  const _v2c3d = await _fn1a2b(_v3b1c._v5d6e);
  const _v6f7a = await _fn9b8c(_v2c3d);

  const _v4e5f = await _fn3d4e(
    _v6f7a,
    _v7a8b
  );

  return {
    targetId: _v3b1c._v8b9c,
    sector: _v3b1c._v1c2d,
    vulnerabilities: _v6f7a,
    threatActorRisk: _v4e5f._v9d8e,
    recommendedActions: _fn5e6f(_v6f7a),
  };
}
```
---
## The Government AI Security Gap
The CISA incident highlights a gap that exists across all levels of government:
- **Federal agencies** have OMB memos restricting AI use but limited technical enforcement
- **Defense contractors** must comply with CMMC and ITAR but often lack controls for AI tool usage
- **State and local governments** have almost no AI security policy or infrastructure
- **Intelligence community** faces the most acute risk: analysts using AI to process classified information
The challenge is compounded by the government's procurement cycle. Traditional security tools take 6-18 months to procure, configure, and deploy. AI tools evolve on a weekly basis. By the time a government agency deploys a traditional DLP solution, the AI tool landscape has changed entirely.
### FISMA and FedRAMP Implications
Federal agencies operating under FISMA (Federal Information Security Modernization Act) are required to implement security controls from NIST SP 800-53. Several control families are directly relevant to AI tool usage:
- **AC-4 (Information Flow Enforcement)**: Requires technical controls on information flow between systems
- **AU-2 (Audit Events)**: Requires logging of security-relevant events, including data transfers
- **SC-7 (Boundary Protection)**: Requires monitoring and controlling communications at system boundaries
- **SI-4 (System Monitoring)**: Requires detecting unauthorized use of information systems
A policy that says "do not use ChatGPT" does not satisfy AC-4. You need a technical control that enforces information flow restrictions. Pretense satisfies AC-4 by mutating data at the boundary before it flows to external AI systems.
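To make "mutating data at the boundary" concrete, here is a minimal sketch of deterministic identifier aliasing, the general technique behind output like the mutated code shown earlier. The function name, token format, and replacement logic are illustrative assumptions, not Pretense's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Replace each sensitive identifier with a stable opaque token derived
// from a hash, so the same name always maps to the same alias and
// responses can be reversed back on the way in.
// Note: a real implementation would parse the code rather than use
// regexes, and would escape regex metacharacters in names.
function mutateIdentifiers(
  source: string,
  sensitiveNames: string[]
): { mutated: string; mapping: Map<string, string> } {
  const mapping = new Map<string, string>();
  let mutated = source;
  for (const name of sensitiveNames) {
    const token =
      "_id" + createHash("sha256").update(name).digest("hex").slice(0, 6);
    mapping.set(name, token);
    // Whole-word replacement only
    mutated = mutated.replace(new RegExp(`\\b${name}\\b`, "g"), token);
  }
  return { mutated, mapping };
}

const { mutated, mapping } = mutateIdentifiers(
  "const report = assessCriticalInfraVulnerability(target);",
  ["assessCriticalInfraVulnerability", "target"]
);
console.log(mutated); // proprietary names replaced by opaque tokens
```

Because the mapping is deterministic and kept locally, the proxy can restore the original names in the model's response without the provider ever seeing them.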
---
## How Pretense Addresses Government Requirements
Pretense was designed with exactly this threat model in mind: protecting sensitive data from AI exfiltration while maintaining developer and analyst productivity.
**Local-first architecture.** Pretense runs entirely on your infrastructure. No data flows to Pretense's servers. No cloud processing. The mutation happens on the local machine or on-prem server before any data reaches any third-party API.
**Full audit trail.** Every AI interaction is logged with the timestamp, provider, number of identifiers mutated, and risk assessment. This satisfies FISMA AU-2 requirements and provides the evidence package that auditors expect.
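As a rough illustration of what such a per-request audit record could contain, here is a sketch in TypeScript. The field names and values are assumptions for illustration, not Pretense's actual log schema:

```typescript
// Illustrative shape of one audit record per proxied AI request.
interface AuditRecord {
  timestamp: string; // ISO 8601, when the request crossed the proxy
  provider: string; // e.g. "openai", "anthropic"
  identifiersMutated: number; // how many sensitive names were aliased
  riskLevel: "low" | "medium" | "high"; // proxy's risk assessment
}

const record: AuditRecord = {
  timestamp: new Date().toISOString(),
  provider: "openai",
  identifiersMutated: 14,
  riskLevel: "medium",
};

// Records like this, stored locally, are the evidence package an
// AU-2 audit would draw on.
console.log(JSON.stringify(record));
```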
**Transparent deployment.** Pretense works with any AI tool that uses HTTP APIs -- ChatGPT, Claude, Copilot, Cursor, and custom tools. Set one environment variable and every AI interaction is protected.
```shell
# Deploy Pretense on government infrastructure

# Option 1: Single developer machine
curl -fsSL https://pretense.ai/install.sh | sh
pretense init
pretense start

# Option 2: Team proxy on air-gapped network
docker run -d \
  --name pretense-proxy \
  --network=host \
  -v /opt/pretense/config:/config \
  -v /opt/pretense/audit:/audit \
  pretense/proxy:latest

# All AI traffic routed through the proxy
export ANTHROPIC_BASE_URL=http://pretense-proxy:9339
export OPENAI_BASE_URL=http://pretense-proxy:9339/v1
```
**Air-gap compatible.** For classified environments, Pretense can run entirely disconnected from the internet. The mutation engine requires no external calls. Audit logs are stored locally. The only outbound connection is the proxied AI API call itself -- with all proprietary identifiers already mutated.
---
## The Lesson: Technical Controls Are Not Optional
The CISA incident is a clear signal to every organization: if the nation's cybersecurity agency cannot enforce an AI usage policy through policy alone, neither can you.
The solution is not more policy. It is not more training. It is not stricter consequences for violations. The solution is a technical control that sits at the network boundary and protects data before it leaves, regardless of which AI tool is used, which account is logged in, or whether the employee has read the policy.
Pretense is that technical control. It mutates proprietary identifiers at the proxy level. It provides a complete audit trail. It works with every AI tool. And it deploys in 30 seconds.
Policy tells people what not to do. Pretense makes it impossible for sensitive data to leave unprotected -- even when people ignore the policy.
## Protect Your Code Today
Pretense is the AI firewall that mutates proprietary code before it reaches any LLM API. Install in 30 seconds and protect your team's intellectual property.
```shell
curl -fsSL https://pretense.ai/install.sh | sh
pretense init
pretense start
```