Pillar · training-loop

Training Loop — from juggernauts to your own model.

Start with Opus or GPT-5. Capture their answers as training data via the Curator. Fine-tune your own small language model (SLM) on a free Colab GPU, on $0.30/hr Spheron bare-metal, on serverless Modal, or on whatever GPU you can rent. Deploy locally via Ollama. Cost-down isn't an optimisation: it's an exit clause.
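Once `ollama serve` is running, the deployed model answers over Ollama's local HTTP API (port 11434, `/api/chat`). A minimal sketch of calling it from Python; the model name is the one from the fine-tune step, and the helper names here are illustrative, not part of sagewai:

```python
import json
import urllib.request

def chat_payload(model: str, prompt: str) -> bytes:
    # Ollama's /api/chat takes a chat-style message list;
    # stream=False asks for one complete JSON response.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()

def ask_local(prompt: str, model: str = "llama3.2:3b") -> str:
    # Requires a running `ollama serve` on the default port.
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=chat_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Every call to `ask_local` is the "$0/call" in the snippet below: no API key, no metered tokens.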

```sh
# Capture (Ex 36) → fine-tune (Ex 38) → deploy (Ex 38a / Ex 47)
sagewai curator export --project acme --since 30d > train.jsonl
sagewai finetune --base llama3.2:3b --data train.jsonl --runpod
ollama serve  # your model, $0/call from here
```
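The `train.jsonl` that the export step feeds into fine-tuning is one JSON record per captured exchange. A sketch of writing such records by hand, assuming a chat-style schema (a common shape for instruction data, not the documented Curator format):

```python
import json

def to_record(prompt: str, answer: str) -> dict:
    # Hypothetical schema: one chat-format training example,
    # pairing the captured prompt with the frontier model's answer.
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]}

# Captured prompt/answer pairs, e.g. from an Opus or GPT-5 session.
captured = [
    ("What does ollama serve do?",
     "It starts the local Ollama server so deployed models answer at $0/call."),
]

with open("train.jsonl", "w") as f:
    for prompt, answer in captured:
        f.write(json.dumps(to_record(prompt, answer)) + "\n")
```

One line per example keeps the file streamable, so exports from long-running projects never need to fit in memory.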

Other pillars