Triage Raises $1.5M to Secure AI-Native Applications

Triage has raised $1.5M in pre-seed funding to build an AI-native security and observability platform for teams shipping LLM-powered products. The round was led by BoxGroup, with participation from Precursor Ventures and notable angels including Zach Lloyd (CEO, Warp.dev), Michael Fertik (Verdict Capital), Bill Shope (Tidal Partners), Niklas de la Motte, and Cory Levy (Z Fellows).
This capital accelerates development of the platform and expands early deployments with teams that need security guarantees for inference, retrieval, and training workflows.
AI Security Was Not Meant to Be Guesswork
Modern software increasingly includes models that:
- –Generate and execute code
- –Call tools with real permissions
- –Retrieve proprietary context through RAG
- –Learn from new data and feedback loops
That creates a new operational reality: security incidents can originate inside model behavior, not just at the API perimeter. Prompt injection, tool misuse, data exfiltration through retrieval, and training-time poisoning do not map cleanly onto legacy application security workflows.
When model behavior cannot be reproduced and explained trace-by-trace, mitigation becomes improvisation.
Legacy Security Tools Cannot See Inside the AI Stack
Traditional security stacks are strong at network and application telemetry, but they typically do not capture:
- –The full prompt assembly and runtime context
- –The tool invocation sequence and outputs
- –Retrieval inputs, rankings, and cited sources
- –Model responses as they evolve across turns and agents
Without that visibility, teams struggle to answer basic questions after an incident:
- –What did the model see?
- –Why did it choose that action?
- –Which data influenced the output?
- –What change prevents recurrence without breaking functionality?
Triage: Security and Observability for LLM Products
Triage is designed as an end-to-end system that spans instrumentation, detection and reasoning, and remediation with learning.
1. Capture: full-fidelity AI telemetry
- –Model call tracing across providers (requests, responses, tool calls, latency, tokens, retries, failures)
- –Agent execution traces (what ran, with which arguments, what returned, what happened next)
- –RAG visibility (retrieved chunks, ranks and scores, citation links, prompt assembly)
2. Detect and reason: AI-native threat coverage
- –Security detections tailored to inference, retrieval, and training routes
- –Investigation workflows that connect traces, prompts, tools, and retrieved data into a single storyline
- –Policy-driven analysis aligned with common AI attack patterns (including prompt injection and data leakage)
3. Remediate and learn: close the loop
- –Minimal-diff remediation suggestions (prompt hardening, tool schema constraints, retrieval filters, policy updates)
- –Test and evaluation harnesses to prevent regressions
- –Feedback-driven improvements that turn incidents, false positives, and accepted fixes into structured learning signals over time
The goal is simple: make AI systems measurable, debuggable, and defensible in production.
What Comes Next
The new funding expands:
- –Provider and framework integrations for faster instrumentation
- –Stronger trace analysis and automated root-cause workflows
- –More robust remediation and evaluation tooling for safe iteration
- –Early customer deployments and case studies demonstrating measurable security posture improvements
Work With Triage
Triage is partnering with teams building LLM-powered products that need:
- –Deep visibility into model and agent behavior
- –Practical defenses against AI-native attack surfaces
- –A remediation loop that improves security without slowing shipping velocity
For pilots, partnerships, or roles, the fastest path is a direct introduction through the site's contact channel.
Ready to secure your AI systems?
Get in touch to learn how Triage can help your team ship secure AI products faster.
Contact Us