Integrations

Pretense is a drop-in proxy. Connect it to any AI coding tool by changing one setting.

Claude Code

Recommended
30 seconds

Route every Claude Code session through Pretense by setting a single environment variable. All prompts are mutated before reaching the Anthropic API.

  1. Install and start the Pretense proxy: pretense start
  2. Set ANTHROPIC_BASE_URL in your shell profile or per-session
  3. Run Claude Code as normal -- mutation happens transparently
Shell
# Option 1: per-session
ANTHROPIC_BASE_URL=http://localhost:9339 claude

# Option 2: export in ~/.zshrc or ~/.bashrc (recommended)
export ANTHROPIC_BASE_URL=http://localhost:9339

MCP Server (Claude Code Tools)

New
60 seconds

Register the Pretense MCP server in Claude Code to unlock 4 native tools: scan, mutate, status, and audit. Claude can invoke these tools directly inside any conversation without leaving your editor.

  1. Install the MCP server: npx @pretense/mcp-server or pretense mcp install
  2. Add the server to your claude_desktop_config.json (see snippet below)
  3. Reload Claude Code -- the pretense_scan, pretense_mutate, pretense_status, and pretense_audit tools appear automatically
  4. Ask Claude to scan a file or mutate code before sending it to any LLM
claude_desktop_config.json
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "pretense": {
      "command": "npx",
      "args": ["@pretense/mcp-server"],
      "env": { "PRETENSE_API_KEY": "your-key-here" }
    }
  }
}
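
Under the hood, Claude Code discovers the server's tools over MCP's JSON-RPC 2.0 transport. A minimal sketch of the discovery request it issues on reload (the `id` value and transport framing here are illustrative, not exact):

```python
import json

# JSON-RPC 2.0 envelope for MCP tool discovery. The client sends this
# over the server's transport after registration; the four pretense_*
# tools are returned in the response's "tools" array.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative request id
    "method": "tools/list",
}
print(json.dumps(request))
```

If the tools do not appear after a reload, checking that the server responds to this request is a quick way to isolate a configuration problem.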

Cursor

30 seconds

Override the model base URL in Cursor settings to route requests through Pretense. Works with GPT-4o, Claude, and any OpenAI-compatible model Cursor supports.

  1. Open Cursor and go to Settings (Cmd+,)
  2. Navigate to Models, then find "Override Base URL"
  3. Set the value to http://localhost:9339
  4. Your existing API key works unchanged
Cursor settings
# Settings > AI > Override Base URL
http://localhost:9339

# Or set the env var before launching Cursor
OPENAI_BASE_URL=http://localhost:9339 cursor

OpenAI SDK

30 seconds

Set a single environment variable before any Node.js or Python script that uses the OpenAI SDK. No code changes required.

  1. Start the Pretense proxy: pretense start
  2. Set OPENAI_BASE_URL to http://localhost:9339
  3. Run your script as normal -- the SDK routes through Pretense automatically
Shell / TypeScript
# Node.js / shell
OPENAI_BASE_URL=http://localhost:9339 node your-script.js

# Python
OPENAI_BASE_URL=http://localhost:9339 python your-script.py

# Or in code (TypeScript)
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'http://localhost:9339',
  apiKey: process.env.OPENAI_API_KEY,
});
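
This works because the official OpenAI SDKs fall back to the OPENAI_BASE_URL environment variable when no base URL is passed to the client constructor. The resolution order can be sketched as follows (the helper name is illustrative):

```python
import os

# Equivalent of the SDK's base-URL resolution: an explicit constructor
# argument wins, then the OPENAI_BASE_URL environment variable, then
# the official endpoint.
def resolve_base_url(explicit=None):
    return explicit or os.environ.get("OPENAI_BASE_URL") or "https://api.openai.com/v1"

os.environ["OPENAI_BASE_URL"] = "http://localhost:9339"
print(resolve_base_url())  # http://localhost:9339
print(resolve_base_url("https://example.test/v1"))  # explicit argument wins
```

Because the explicit argument takes precedence, scripts that hard-code a baseURL in the constructor will bypass the environment variable and must be updated in code instead.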

LangChain

30 seconds

Pass the Pretense proxy URL as the base_url when initializing any LangChain LLM or ChatModel. Works with LangChain Python and LangChain.js.

  1. Start the Pretense proxy: pretense start
  2. Pass base_url to your LLM initializer
  3. All chain and agent calls are now protected
Python
# Python (LangChain + OpenAI)
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:9339",
    api_key=os.environ["OPENAI_API_KEY"],
)

# Python (LangChain + Anthropic)
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    anthropic_api_url="http://localhost:9339",
    api_key=os.environ["ANTHROPIC_API_KEY"],
)

GitHub Actions

30 seconds

Block PRs that contain secrets or unprotected AI calls before they reach your main branch. The Pretense CLI integrates with any CI system that runs Node.js.

  1. Add pretense to devDependencies or use npx
  2. Add a scan step to your workflow YAML
  3. The step exits non-zero if secrets are detected, failing the build
  4. Export scan-report.json as a CI artifact for compliance evidence
.github/workflows/pretense-scan.yml
# .github/workflows/pretense-scan.yml
name: Pretense Scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Pretense
        run: curl -fsSL https://pretense.ai/install.sh | sh

      - name: Scan for secrets
        run: pretense scan ./src --format json --output scan-report.json

      - name: Upload scan report
        uses: actions/upload-artifact@v4
        with:
          name: pretense-scan-report
          path: scan-report.json
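
If you want an explicit gate in addition to the CLI's own exit code, a small script can fail the job based on the report contents. A minimal sketch, assuming the report contains a top-level "findings" array (verify this against the schema your pretense version actually emits):

```python
import json

def has_findings(report_path):
    """Return True if the scan report lists any findings.

    Assumes a top-level "findings" array -- the real schema produced
    by `pretense scan --format json` may differ.
    """
    with open(report_path) as f:
        report = json.load(f)
    return len(report.get("findings", [])) > 0
```

A follow-up workflow step could call this and exit non-zero when it returns True, which also gives you a place to post the findings as a PR comment before failing the build.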

VS Code Extension

Extension
30 seconds

The Pretense VS Code extension adds a status bar indicator, a scan command, and a mutation map viewer. Configure the proxy URL in your workspace settings.

  1. Search for "Pretense" in the VS Code Extensions panel and install
  2. Add pretense.proxyUrl to your VS Code settings.json
  3. Reload VS Code -- the status bar shows live mutation count and proxy status
  4. Use Cmd+Shift+P and type "Pretense: Scan" to scan the active file
.vscode/settings.json
// .vscode/settings.json
{
  "pretense.proxyUrl": "http://localhost:9339",
  "pretense.autoScanOnSave": true,
  "pretense.showStatusBar": true
}

Need a different integration?

If your tool is not listed, email us and we will add official support. Pretense speaks the OpenAI-compatible API format, so most tools already work out of the box.
