# Proxy Endpoint

Change one URL and SignalVault intercepts all AI traffic transparently, no SDK required.

The proxy is the fastest integration path: point your existing OpenAI or Anthropic client at SignalVault, and every request is automatically audited and checked against your guardrail rules.
## OpenAI

Set `baseURL` to `https://api.signalvault.io/proxy/openai/v1` and add your SignalVault key as a default header. Everything else stays the same.
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.signalvault.io/proxy/openai/v1',
  defaultHeaders: {
    'X-SignalVault-Key': 'sv_live_...',
  },
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
## Anthropic

Set `baseURL` to `https://api.signalvault.io/proxy/anthropic/v1` and add your SignalVault key as a default header.
```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://api.signalvault.io/proxy/anthropic/v1',
  defaultHeaders: {
    'X-SignalVault-Key': 'sv_live_...',
  },
});

const message = await client.messages.create({
  model: 'claude-opus-4-6',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
## Supported endpoints

| Proxy path | Upstream |
|---|---|
| `POST /proxy/openai/v1/chat/completions` | `api.openai.com/v1/chat/completions` |
| `POST /proxy/anthropic/v1/messages` | `api.anthropic.com/v1/messages` |
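The proxy paths above can also be called without an SDK. A minimal sketch of a raw request to the OpenAI proxy path, using the two required headers documented below (the helper name and key values are illustrative, not part of the SignalVault API):

```typescript
// Build a raw fetch call to the OpenAI proxy path. Both the provider key
// (Authorization) and the SignalVault key (X-SignalVault-Key) are required;
// the values shown in tests/usage are placeholders.
function buildProxyRequest(
  providerKey: string,
  signalVaultKey: string,
  body: object,
) {
  return {
    url: 'https://api.signalvault.io/proxy/openai/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${providerKey}`,
        'X-SignalVault-Key': signalVaultKey,
      },
      body: JSON.stringify(body),
    },
  };
}

// Usage (requires real keys):
// const { url, init } = buildProxyRequest(process.env.OPENAI_API_KEY!, 'sv_live_...', {
//   model: 'gpt-4o',
//   messages: [{ role: 'user', content: 'Hello!' }],
// });
// const res = await fetch(url, init);
```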
## Headers

| Header | Required | Default | Notes |
|---|---|---|---|
| `X-SignalVault-Key` | Yes | — | Your SignalVault API key (`sv_live_...`) |
| `Authorization` | Yes | — | Your provider key (`Bearer sk-...` or `Bearer sk-ant-...`) |
| `X-SignalVault-Environment` | No | `production` | `development`, `staging`, or `production` |
| `X-SignalVault-Metadata` | No | — | JSON string stored in event metadata (e.g. user ID, session ID) |
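The optional headers can be assembled alongside the required key. A small sketch (the helper name and the metadata fields are illustrative; SignalVault stores whatever JSON string you send):

```typescript
// Build the SignalVault default headers from the table above.
// Environment and metadata are optional; metadata is serialized to a
// JSON string as the X-SignalVault-Metadata header expects.
function signalVaultHeaders(opts: {
  key: string;
  environment?: 'development' | 'staging' | 'production';
  metadata?: Record<string, unknown>;
}): Record<string, string> {
  const headers: Record<string, string> = {
    'X-SignalVault-Key': opts.key,
  };
  if (opts.environment) {
    headers['X-SignalVault-Environment'] = opts.environment;
  }
  if (opts.metadata) {
    headers['X-SignalVault-Metadata'] = JSON.stringify(opts.metadata);
  }
  return headers;
}
```

Pass the result as `defaultHeaders` when constructing the OpenAI or Anthropic client, as in the examples above.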
## Streaming

Streaming is fully supported. Pass `stream: true` as normal. SignalVault pipes SSE chunks directly to your client and logs the full response after the stream completes.
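Because the proxy pipes chunks through unchanged, the usual SDK streaming loop works as-is. A sketch of collecting the streamed text, assuming the OpenAI SDK's chunk shape (the `collectStream` helper is ours, not part of any SDK):

```typescript
// Minimal shape of an OpenAI chat-completion stream chunk.
type Chunk = { choices: Array<{ delta?: { content?: string } }> };

// Accumulate the text deltas from a stream of chunks into one string.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Usage with the proxied client from the OpenAI example above:
// const stream = await client.chat.completions.create({
//   model: 'gpt-4o',
//   messages: [{ role: 'user', content: 'Hello!' }],
//   stream: true,
// });
// const full = await collectStream(stream);
```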
## Blocked requests

When a guardrail rule with `action = block` or `action = redact` matches the prompt, the proxy returns a `400` before forwarding to the upstream provider. The response uses the standard OpenAI error shape, so existing error-handling code works without changes:
HTTP 400

```json
{
  "error": {
    "message": "Request blocked by SignalVault guardrail: pii_detection",
    "type": "invalid_request_error",
    "code": "content_policy_violation"
  }
}
```
In proxy mode, `redact` rules block the request rather than forwarding a redacted version to the provider. The violation is still logged and visible in your dashboard. Full prompt rewriting before forwarding is coming in a future release.
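Since the block response follows the standard error envelope, it can be detected like any other API error. A sketch against the response body shown above (the helper name is ours; the `code` and message prefix come from the example response):

```typescript
// Error envelope returned by the proxy on a guardrail block
// (standard OpenAI error shape).
type ProxyError = {
  error: { message: string; type: string; code: string };
};

// True when a 400 response body is a SignalVault guardrail block,
// as opposed to some other invalid-request error.
function isGuardrailBlock(status: number, body: ProxyError): boolean {
  return (
    status === 400 &&
    body.error?.code === 'content_policy_violation' &&
    body.error?.message?.startsWith('Request blocked by SignalVault guardrail')
  );
}
```

When using the OpenAI SDK rather than raw `fetch`, the thrown error exposes the same status and body fields, so the same check applies to the caught error.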
Proxy events appear identically to SDK events in your dashboard: same request/response pairing, same rule checks, same export format.