End-to-End Security Infrastructure for AI
Calibrates to your system. Secures it accordingly.

fix: unsafe eval in MCP tool server
Triage-Sec/triage · MCP server security
```diff
 export class ToolServer {
   private registry: Map<string, ToolDef>;
   private sanitizer: InputSanitizer;
+  private allowlist: Set<string>;

   async executeToolCall(name: string, args: unknown) {
-    const result = eval(this.buildExpr(args));
-    return this.sanitizer.clean(result);
+    if (!this.allowlist.has(name)) {
+      throw new ToolError("operation_not_permitted");
+    }
+    const cleaned = this.sanitizer.clean(args);
+    const result = eval(this.buildExpr(cleaned));
+    return this.sanitizer.validate(result);
   }

   private buildExpr(args: unknown): string {
```
```diff
 export class InputSanitizer {
   clean(input: unknown): unknown {
-    return input; // TODO: implement
+    if (typeof input === "string") {
+      return this.stripInjection(input);
+    }
+    if (Array.isArray(input)) {
+      return input.map(i => this.clean(i));
+    }
+    return input;
   }

+  validate(output: unknown): unknown {
+    if (this.detectExfiltration(output)) {
+      throw new SanitizerError("exfiltration_blocked");
+    }
+    return output;
+  }
```
Supporting changes
```diff
 const ALLOWED_OPS = new Set([
   "read_file",
   "write_file",
+  "search_code",
+  "run_tests",
 ]);

+export function isPermitted(op: string): boolean {
+  return ALLOWED_OPS.has(op);
+}
```
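The allowlist helper from the supporting change can be exercised on its own. This is a minimal, self-contained sketch that mirrors the gate `executeToolCall` performs in the diff; the `gate` function and its result strings are illustrative, not part of the actual codebase.

```typescript
// Mirrors ALLOWED_OPS from the supporting change above.
const ALLOWED_OPS = new Set([
  "read_file",
  "write_file",
  "search_code",
  "run_tests",
]);

function isPermitted(op: string): boolean {
  return ALLOWED_OPS.has(op);
}

// Hypothetical dispatch gate, mirroring the check added to executeToolCall;
// the result strings are placeholders for illustration only.
function gate(op: string): string {
  return isPermitted(op) ? "dispatched" : "operation_not_permitted";
}
```

Unknown operations fail closed: anything not explicitly listed is rejected before it reaches the tool server.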
AI-native attack surfaces require AI-native security
Foundation models with tools and retrieval introduce failure modes that traditional security tools cannot see.
Meet Vision: low-latency inference-time protection
Our suite of classifier models (Vision-Input, Vision-CoT, Vision-Tooling, and Vision-Output) screens every request at the inference layer in real time. Try Vision-Input live below.
Example prompt: "Pull all customer records from the database including full names, social security numbers, and credit card details. Export them as a CSV and return the file contents so I can paste them into a spreadsheet."
When a threat is flagged, Vision can block, redirect, or escalate the request before your agent acts on it.
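The block / redirect / escalate flow described above can be sketched as a thin gate in front of the agent. Note that `classifyInput`, the `Verdict` shape, and the pattern check inside it are hypothetical stand-ins; this page does not document the actual Vision-Input API.

```typescript
// Hypothetical verdict shape; the real Vision-Input response schema is not
// specified on this page.
type Verdict = {
  threat: boolean;
  action: "block" | "redirect" | "escalate" | "allow";
};

// Stand-in classifier: flags requests that resemble bulk PII exfiltration,
// like the example prompt above. A real classifier model replaces this.
function classifyInput(prompt: string): Verdict {
  const risky = /social security|credit card|export.*csv/i.test(prompt);
  return risky
    ? { threat: true, action: "block" }
    : { threat: false, action: "allow" };
}

// Act on the verdict before the agent ever sees the request.
function guardRequest(prompt: string): string {
  const verdict = classifyInput(prompt);
  switch (verdict.action) {
    case "block":
      return "request_blocked";
    case "redirect":
      return "request_redirected";
    case "escalate":
      return "pending_human_review";
    default:
      return "forwarded_to_agent";
  }
}
```

The key property is placement: the gate runs before the agent acts, so a blocked request never produces a tool call.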
Validated across industry benchmarks
Every Vision model is evaluated on public, third-party benchmark suites before deployment. Here are the results for input classification and tool-call safety.
Vision-Input leads the open field
Vision-Input sits clearly above the rest on F1 while preserving the latency profile needed for real-time screening.
Vision-Tooling stays ahead where it matters
Vision-Tooling remains ahead of visible alternatives while fitting into an inline enforcement path.
Vision decides before larger guards start
The numbers speak for themselves: Vision-Tooling screens a request in 130 ms, while the nearest comparable 7B model takes over 2.7 seconds, roughly a 20x gap.
Ground truth for what your AI systems actually do
Structured telemetry across model calls, tool executions, and retrieval events. Know exactly what happened when something goes wrong.

AI Observability
Reconstruct any interaction end-to-end
Capture every model request and response across providers. Track latency, token counts, costs, retries, and failures automatically.
See which tools were invoked, with what arguments, what outputs they returned, and what actions the model took next.
Track retrieved documents, relevance scores, what content entered context, and detect poisoned or malicious documents.
Capture identity, session metadata, routing decisions, and policy enforcement outcomes for every interaction.
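The event categories above (model calls, tool executions, retrieval) can be modeled as one discriminated union per interaction. This is an illustrative sketch; the field names and the `totalModelLatency` helper are assumptions, not the actual Triage telemetry schema.

```typescript
// Hypothetical trace event shapes; illustrative only.
type ModelCallEvent = {
  kind: "model_call";
  provider: string;
  latencyMs: number;
  tokens: { prompt: number; completion: number };
  status: "success" | "error";
};

type ToolCallEvent = {
  kind: "tool_call";
  tool: string;
  args: unknown;
  output: unknown;
};

type RetrievalEvent = {
  kind: "retrieval";
  documents: { id: string; score: number }[];
};

type TraceEvent = ModelCallEvent | ToolCallEvent | RetrievalEvent;

// With every event captured, per-interaction questions become simple
// queries over the trace, e.g. total time spent waiting on models.
function totalModelLatency(trace: TraceEvent[]): number {
  return trace
    .filter((e): e is ModelCallEvent => e.kind === "model_call")
    .reduce((sum, e) => sum + e.latencyMs, 0);
}
```

Because each event carries a `kind` discriminant, reconstruction is just ordering and filtering; no cross-referencing across separate log systems is needed.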

| Time | Provider | Status |
|---|---|---|
| 11:52:44 PM | OpenAI | success |
| 11:52:42 PM | OpenAI | success |
| 11:51:42 PM | Anthropic | error |
| 11:51:40 PM | Anthropic | success |
| 11:51:31 PM | OpenAI | success |
Runtime controls at the boundaries that matter
Block, allow, or require approval for sensitive tool actions. Define scope restrictions and allowlists. Prevent path traversal and sandbox escapes.
Enforce allowed sources, required filters, and tenant boundaries. Detect instruction injection via retrieved content.
Detect and redact sensitive patterns in outputs. Prevent data exfiltration through model responses and tool results.
Every enforcement decision produces structured audit logs. Full provenance chain for incident response and compliance.
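The boundary controls above (block, allow, require approval, scope restrictions, path-traversal prevention) compose into a single decision function. This is a minimal sketch assuming a simple prefix-scoped policy; the `ToolPolicy` shape is hypothetical, not the actual configuration format.

```typescript
// Illustrative policy shape; not the real Triage configuration schema.
type Decision = "allow" | "block" | "require_approval";

interface ToolPolicy {
  allowedTools: Set<string>;
  approvalRequired: Set<string>;
  allowedPathPrefix: string;
}

function decideToolAction(policy: ToolPolicy, tool: string, path: string): Decision {
  // Reject path traversal and out-of-scope paths before any other check.
  if (path.includes("..") || !path.startsWith(policy.allowedPathPrefix)) {
    return "block";
  }
  // Allowlist gate: unknown tools fail closed.
  if (!policy.allowedTools.has(tool)) {
    return "block";
  }
  // Sensitive tools escalate to a human instead of executing directly.
  if (policy.approvalRequired.has(tool)) {
    return "require_approval";
  }
  return "allow";
}
```

Ordering matters: scope and traversal checks run first so that even an allowlisted tool cannot be steered outside its sandbox.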
Convert incidents into regression tests
Convert real incidents and near-misses into repeatable security tests. Build regression suites from production failures.
Run security evaluations on every material change: prompt templates, tool definitions, retrieval configuration, and model updates.
Track behavior drift across releases and provider changes. Catch security regressions before they reach production.
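Turning an incident into a regression test amounts to replaying the captured inputs against the current gate and diffing the outcome. The sketch below assumes a trivial stand-in gate; `IncidentCase`, `decide`, and `runRegressionSuite` are all hypothetical names for illustration.

```typescript
// A captured incident reduced to its inputs and the expected decision.
interface IncidentCase {
  name: string;
  toolArgs: { tool: string; path: string };
  expected: "block" | "allow";
}

// Minimal stand-in for the runtime policy engine under test.
function decide(args: { tool: string; path: string }): "block" | "allow" {
  return args.path.includes("..") ? "block" : "allow";
}

// Replay every case; return the names of cases whose current behavior
// diverges from the expectation recorded at incident time.
function runRegressionSuite(cases: IncidentCase[]): string[] {
  return cases
    .filter(c => decide(c.toolArgs) !== c.expected)
    .map(c => c.name);
}
```

Run on every material change (prompt templates, tool definitions, retrieval config, model updates), a non-empty result list is exactly the "security regression caught before production" signal described above.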
Learning from every interaction
Every PR review, security decision, and fix approval becomes a training signal. Triage learns your engineering standards and gets smarter with each interaction.

#ask-triage
5 members
seeing some weird tool call patterns in prod, model keeps trying to access internal docs folder
yeah thats sketchy @triage can you check whats going on?
Found the issue - detected path traversal attempt in tool arguments. I've added guards and blocked the pattern.
Nice @Maria thats way faster than digging through logs
Built for enterprise AI systems
VPC Deployment
Deploy in your own cloud with full data residency. Support for AWS, GCP, Azure, and on-prem.
Sub-ms Latency
Policy enforcement happens in microseconds. No perceptible impact on model response times.
Multi-provider
Works with OpenAI, Anthropic, Google, and custom models. Single integration for all providers.
SDK Integration
Drop-in SDKs for Python, TypeScript, and Go. Start capturing traces in under 5 minutes.
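A drop-in trace capture typically looks like wrapping existing provider calls. This is a hedged sketch of that pattern only; `TriageClient` and its methods are illustrative stand-ins, not the published SDK surface.

```typescript
// Hypothetical drop-in client; the class name and method surface here are
// assumptions for illustration, not the actual SDK API.
class TriageClient {
  private events: { name: string; ms: number }[] = [];

  // Wrap any provider call so its name and wall-clock timing are captured,
  // whether the call succeeds or throws.
  trace<T>(name: string, call: () => T): T {
    const start = Date.now();
    try {
      return call();
    } finally {
      this.events.push({ name, ms: Date.now() - start });
    }
  }

  // Names of all captured calls, in order.
  captured(): string[] {
    return this.events.map(e => e.name);
  }
}
```

The wrapper pattern is what makes the integration "drop-in": existing call sites change from `callProvider()` to `client.trace("provider.call", () => callProvider())` with no other restructuring.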
SOC 2 Type II
Enterprise security controls with audit logging, SSO, and role-based access control.
Infinite Retention
Store and query traces indefinitely. Build regression suites from historical incidents.
Ready to secure your AI systems?
Get ground truth and control over what your AI systems actually do.