Investor brief

The runtime AI agents will run on.

Whoever owns the runtime owns the margin. Sagewai is the open-source agent platform that ships in production today and compounds defensibility every time a customer runs it.

Thesis

Every wave of compute creates a runtime layer where the value accrues — operating systems on PCs, container orchestration in the cloud, databases everywhere in between. AI agents are the next wave, and the runtime layer is being built right now.

The frontier-model providers will not own this layer. Their incentive is to lock customers into their hosted endpoints; customers' incentive is to stay portable, observable, and in control of cost. That gap is where an open, self-hostable agent runtime wins.

Sagewai is that runtime: five interlocking products, one open-source license, one architecture that lets a customer start on Opus and finish on a 7B model they fine-tuned themselves — without ever rewriting their code.
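The "swap models without rewriting code" claim comes down to app code depending on an interface rather than a vendor SDK. A minimal sketch of that pattern — the class and method names here are hypothetical illustrations, not the Sagewai API:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal provider-agnostic interface (illustrative, not the Sagewai API)."""
    def complete(self, prompt: str) -> str: ...

class FrontierModel:
    """Stand-in for a hosted frontier endpoint."""
    def complete(self, prompt: str) -> str:
        return f"[frontier] {prompt}"

class LocalFineTune:
    """Stand-in for a self-hosted 7B fine-tune."""
    def complete(self, prompt: str) -> str:
        return f"[local-7b] {prompt}"

def triage(model: ChatModel, email: str) -> str:
    # App code depends only on the interface, so moving from a frontier
    # model to a local fine-tune never requires rewriting this function.
    return model.complete(f"Triage this email: {email}")

print(triage(FrontierModel(), "refund request"))
print(triage(LocalFineTune(), "refund request"))
```

The same `triage` function runs unchanged against either backend; only the object passed in differs.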

Five pillars

The platform is composed of five products. Each ships independently; together they form the runtime. Each pillar is also a wedge — a customer can adopt one and pull in the rest as they scale.

  • SDK

    The harness. Tool calling, MCP, memory, workflows. Provider-agnostic — swap models without changing app code.

  • Autopilot

    Goal-to-agent compilation. Customers describe outcomes; Autopilot decomposes them into agents and captures every successful run as training data.


  • Fleet

    Production runtime. Workers, dispatch, scoped routing, hard tenant isolation — CI-gated against cross-tenant leakage.

  • Observatory

    The accountability layer. Per-run, per-project, per-model, per-token cost telemetry. The dashboard a CFO signs against.

  • Training Loop

    The cost-down engine. Capture frontier-model traces, fine-tune a small open-weight model, deploy locally. Marginal cost falls toward zero.

The spine

Running through all five pillars is a single component: Curator. Every time a customer's agent succeeds in production, Curator captures that run as structured training data, scoped to the customer's tenant.

That dataset belongs to the customer. We never see it. But the mechanism that produces it is ours, ships with every install, and gets stronger every time someone uses the platform. Specifics of the pipeline are detailed under NDA.
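To make the mechanism concrete: a captured run is a structured, tenant-scoped record of a successful agent execution. The shape below is an illustrative assumption — the brief keeps the actual Curator schema under NDA — but it shows the kind of record the pipeline produces:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CapturedRun:
    """Illustrative shape of a captured run. Field names are assumptions,
    not the Curator schema."""
    tenant_id: str     # hard tenant scoping: the record never leaves the tenant
    goal: str          # what the agent was asked to accomplish
    trace: list        # tool calls and model turns, in order
    outcome: str       # only successful runs become training data

run = CapturedRun(
    tenant_id="acme",
    goal="triage inbound support email",
    trace=["classify(email)", "route(team='billing')"],
    outcome="success",
)
record = json.dumps(asdict(run))
```

A corpus of such records is exactly the input a fine-tuning job needs, which is how capture feeds the Training Loop pillar.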

The moat: cost-down compounding

A customer onboards on a frontier model — Opus, GPT-5, whatever pays the bills today. Every run is captured. Six months later, that captured corpus fine-tunes a small open-weight model that handles 80% of their traffic at cents on the dollar. Two quarters after that, it's 95%.

The frontier model becomes a fallback. The customer's marginal cost falls toward zero, and so does their incentive to leave. Switching costs aren't imposed by us — they're imposed by the savings the customer earned by staying.

That's a different kind of moat than vendor lock-in. It's an exit clause we hand the customer, and they choose not to use it because the platform has paid for itself many times over.
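The cost-down claim reduces to blended-cost arithmetic. The 80% and 95% local-traffic shares come from the scenario above; the per-run prices are illustrative assumptions, not quoted rates:

```python
# Blended per-run cost as traffic shifts from a frontier model to a
# self-hosted fine-tune. Prices are assumed for illustration.
FRONTIER_COST = 0.050   # $ per run on a hosted frontier model (assumed)
LOCAL_COST = 0.002      # $ per run on a local fine-tune (assumed)

def blended(local_share: float) -> float:
    """Average cost per run when local_share of traffic runs locally."""
    return local_share * LOCAL_COST + (1 - local_share) * FRONTIER_COST

for share in (0.0, 0.80, 0.95):
    print(f"{share:>4.0%} local -> ${blended(share):.4f} per run")
```

Under these assumed prices, at 95% local traffic the blended cost is under a tenth of the all-frontier baseline — the "cents on the dollar" curve the moat depends on.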

Open source as go-to-market

Sagewai ships under AGPL-3.0. Engineers download, self-host, and run it without ever talking to us. Adoption is bottom-up, friction-free, and measurable.

AGPL is the conversion trigger. Any company that wants to embed Sagewai into a commercial product it doesn't open-source needs a commercial license. By the time a buyer is on that call, the platform is already in production and the engineering team is the champion.

The result: bottom-up developer love, with a built-in compliance moment that converts into commercial revenue without an outbound sales motion.

Positioning

The agent-infrastructure space is crowded but not consolidated. Every competitor optimises for a different point on the build/buy/own axis. Sagewai is the only one that owns the full stack from SDK to fine-tune to local inference, end-to-end and self-hostable.

| Capability                      | Sagewai               | LangChain / LangGraph     | CrewAI   | OpenAI Agents SDK |
| ------------------------------- | --------------------- | ------------------------- | -------- | ----------------- |
| License model                   | AGPL-3.0 + commercial | MIT (Cloud SaaS for prod) | MIT      | OpenAI ToS        |
| Self-hostable end-to-end        | Yes                   | Partial                   |          |                   |
| Provider-agnostic at runtime    | Yes                   |                           |          |                   |
| Production multi-tenant runtime | CI-gated isolation    | DIY                       | DIY      |                   |
| Built-in cost & ops telemetry   | Observatory           | External                  | External | Limited           |
| Training-data capture pipeline  | Curator               | Hosted only               |          |                   |
| Fine-tune-to-own workflow       | Yes                   |                           |          | Closed weights    |

Comparisons are based on publicly documented capabilities at time of writing. Where a competitor offers a paid hosted equivalent, the row reflects the open-source / self-host story most likely to apply to our buyer.

Traction

We optimise for proof, not promises. Every number on the public site is reproducible from a runnable example shipped in the repository.

  • 47
    shipped examples — 15 lighthouse-grade
  • $0.004
    to triage 6 support emails on Haiku
  • $0.35
    one full LoRA fine-tune (RunPod RTX 5090)
  • 8 / 8
    held-out eval accuracy after fine-tune
  • 11.6×
    fewer tokens — graph memory vs vector
  • 24 / 100 / 0
    workers / tasks / cross-tenant leaks (CI-gated)

Commercial pipeline, design partners, and the strategic roadmap are available to qualified investors under NDA.

Contact

If the thesis lands, the next step is a call. We share the deeper deck, the design-partner list, and the commercial pipeline under NDA.

hello@sagewai.ai

Mention "investor inquiry" in the subject so it routes correctly.