Pillar · training-loop
Training Loop — from juggernauts to your own model.
Start with Opus or GPT-5. Capture their answers as training data via the Curator. Fine-tune your own SLM on free Colab CUDA, on $0.30/hr Spheron bare-metal, on serverless Modal, or on whatever GPU you can rent. Deploy locally via Ollama. Cost-down isn't an optimisation — it's an exit clause.
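What a captured record might look like: a minimal sketch, assuming the widely used chat-messages JSONL format. The Curator's actual export schema isn't documented here, so the field names and contents below are hypothetical.

```python
import json

# Hypothetical shape of one line in train.jsonl — the chat-messages format
# is an assumption (common across fine-tuning pipelines), not the Curator's
# documented schema.
record = {
    "messages": [
        {"role": "user", "content": "Triage this support email: ..."},
        {"role": "assistant", "content": "Category: billing. Priority: high."},
    ]
}

line = json.dumps(record)   # one line of the JSONL export
parsed = json.loads(line)   # round-trips cleanly
print(parsed["messages"][1]["role"])  # → assistant
```

Each captured prompt/answer pair becomes one such line; the fine-tune step consumes the file as-is.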
```shell
# Capture (Ex 36) → fine-tune (Ex 38) → deploy (Ex 38a / Ex 47)
sagewai curator export --project acme --since 30d > train.jsonl
sagewai finetune --base llama3.2:3b --data train.jsonl --runpod
ollama serve  # your model, $0/call from here
```

Other pillars
- SDK — the harness that works with any LLM. $0.004 to triage 6 support emails on Haiku 4.5
- Autopilot — describe the goal, we design the agents. Goals in, agents out — Curator captures every run
- Fleet — workers, dispatch, scoped routing. 24 workers / 100 tasks / 0 cross-tenant leaks (CI-gated)
- Observatory — show your CFO where the AI money goes. Real OTel + Grafana, populated by mixed-tenant load