MVP Scope & Build Plan

Part of Project Kaze Architecture


1. Portfolio Context

Speedrun Ventures is building multiple products simultaneously. The Kaze platform exists to serve them all while creating shared infrastructure and compounding knowledge:

| Project | Description | Status | Agent needs |
|---|---|---|---|
| Internal Ops (Vertical 0) | AI-powered project assistant built on OpenClaw for Speedrun's own operations: research, note-taking, scheduling, backlog management, issue tracking, task assignment | New build | Research agents, scheduling agents, project management agents |
| Toddle | Family activity discovery platform for Singapore. TypeScript + Postgres + pgvector + Gemini LLM. Already has RAG search, chat assistant, recommendations. See project/toddle.md | Near production | Content enrichment agents, data quality agents, recommendation tuning agents |
| SEO Automation | Automated SEO workflows for SME clients: keyword research, content optimization, technical audits, competitor analysis, reporting | Near production | Keyword research agents, content optimization agents, reporting agents |
| Punkga | Comic artist ecosystem platform at punkga.me | Future | TBD — content moderation, artist support, community management |
| TrueSight | Trading platform at truesight.trade | Future (timeline TBD) | TBD |

2. Relationship with OpenClaw

OpenClaw serves as the communication layer and simple orchestration interface. Kaze builds its own agent design, memory system, and knowledge architecture on top:

┌──────────────────────────────────────────────────────┐
│  User Interface (CLI / Chat / Slack / etc.)           │
└──────────────┬───────────────────────────────────────┘

┌──────────────▼───────────────────────────────────────┐
│  OpenClaw Layer                                       │
│  • Conversation management                            │
│  • Simple orchestration & tool routing                │
│  • Subagent spawning                                  │
└──────────────┬───────────────────────────────────────┘

┌──────────────▼───────────────────────────────────────┐
│  Kaze Platform Layer                                  │
│  • Agent runtime (YAML + TypeScript hybrid)           │
│  • Memory system (Mem0 + Knowledge Graph)             │
│  • LLM Gateway (multi-provider, budget management)    │
│  • Skill framework (composable, vertical-specific)    │
│  • Supervision ramp & quality monitoring              │
└──────────────────────────────────────────────────────┘

Why this split: OpenClaw provides a mature, battle-tested conversation layer. Building our own would be months of work with no differentiation. Kaze's differentiation is in the agent architecture, memory design, knowledge accumulation, and self-improvement loop — not in chat interfaces.
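As a rough illustration of the YAML + TypeScript hybrid the Kaze layer calls for — the schema and field names below are hypothetical, not a finalized Kaze format — an agent definition might pair a declarative manifest (authored in YAML, shown here as an object) with a typed handler:

```typescript
// Hypothetical shape of a Kaze agent manifest (illustrative names only;
// in practice this half would be parsed from a YAML file).
interface AgentManifest {
  name: string;
  model: string;                 // resolved by the LLM Gateway
  skills: string[];              // composable skill identifiers
  memory: { namespace: string }; // Mem0 working-memory namespace
}

// The TypeScript half: a typed handler the runtime binds to the manifest.
type AgentHandler = (input: string, ctx: { manifest: AgentManifest }) => string;

function defineAgent(manifest: AgentManifest, handler: AgentHandler) {
  // Minimal validation the runtime might do at registration time.
  if (!manifest.name || manifest.skills.length === 0) {
    throw new Error(`invalid manifest for agent "${manifest.name}"`);
  }
  return { manifest, run: (input: string) => handler(input, { manifest }) };
}

// Example: a minimal research agent definition.
const researchAgent = defineAgent(
  {
    name: "research",
    model: "gemini-pro",
    skills: ["web-search", "summarize"],
    memory: { namespace: "ops/research" },
  },
  (input, ctx) => `[${ctx.manifest.name}] brief received: ${input}`,
);
```

The split keeps the parts non-engineers can safely edit (model, skills, memory namespace) in data, while behavior stays in typed code the runtime can check.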

3. Verticals

Vertical 0: Internal Ops (Primary — Foundation Testbed)

The first vertical is Speedrun's own internal operations. This is strategic:

  • Dogfooding — We are the first client. Every pain point we feel, our clients will feel.
  • Fast feedback loop — No external client coordination. Iterate in hours, not weeks.
  • Foundation testbed — Every platform component gets exercised here first before external verticals use it.

Agents:

  • Research Agent — Deep-dive research on topics, competitors, technologies. Synthesizes findings into structured notes.
  • Project Management Agent — Tracks tasks, manages backlogs, assigns work, monitors progress across projects.
  • Scheduling Agent — Coordinates meetings, deadlines, and milestones across team members.
  • Note & Documentation Agent — Captures decisions, meeting notes, and maintains project knowledge base.
  • Issue Tracking Agent — Monitors GitHub issues, categorizes bugs, suggests prioritization.

Skills exercised: LLM calls, tool integration (GitHub, calendar, docs), memory (conversation history, project context), multi-turn conversation, task decomposition.
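Of the skills listed above, typed tool integration is the one the Tool Integration Framework component must nail first. A minimal sketch, with illustrative names (the real Kaze API is still to be designed) and a GitHub-style lookup stubbed with local data so the shape is checkable without network access or auth:

```typescript
// Sketch of a typed tool definition. "ToolDef" and "defineTool" are
// hypothetical names, not an existing Kaze or OpenClaw API.
interface ToolDef<I, O> {
  name: string;
  description: string;
  run: (input: I) => O;
}

function defineTool<I, O>(def: ToolDef<I, O>): ToolDef<I, O> {
  return def; // identity today; a real framework would register + validate
}

// Stubbed issue data standing in for a GitHub API call.
const issues = [
  { id: 101, title: "Fix scheduler drift", labels: ["bug"] },
  { id: 102, title: "Add calendar sync", labels: ["feature"] },
];

const findIssues = defineTool({
  name: "github.find_issues",
  description: "Return issues matching a label",
  run: (input: { label: string }) =>
    issues.filter((i) => i.labels.includes(input.label)),
});
```

Because input and output types travel with the definition, the runtime can surface a tool's schema to the LLM and reject malformed calls before they reach GitHub, Calendar, or the Toddle DB.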

Vertical 1: SEO Automation

Agents:

  • Keyword Research Agent — Discovers opportunities, evaluates difficulty, clusters by intent.
  • Content Optimization Agent — Analyzes existing content, suggests improvements, drafts content briefs.
  • Technical Audit Agent — Crawls sites, identifies issues, prioritizes fixes.
  • Reporting Agent — Generates periodic performance reports, highlights trends and opportunities.

Skills exercised: External API integration (SEMrush, Ahrefs, Google Search Console), structured data extraction, scheduled workflows, client-specific knowledge.
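To make "clusters by intent" concrete, here is a toy heuristic — purely illustrative; a production Keyword Research Agent would lean on an LLM or the SEO APIs above rather than substring rules:

```typescript
// Toy intent classifier: assigns each keyword to one of three common
// search-intent buckets. The regexes are assumptions for illustration.
type Intent = "informational" | "transactional" | "navigational";

function classifyIntent(keyword: string): Intent {
  const k = keyword.toLowerCase();
  if (/\b(buy|price|pricing|cheap|deal)\b/.test(k)) return "transactional";
  if (/\b(login|sign in|dashboard)\b/.test(k)) return "navigational";
  return "informational";
}

function clusterByIntent(keywords: string[]): Record<Intent, string[]> {
  const clusters: Record<Intent, string[]> = {
    informational: [],
    transactional: [],
    navigational: [],
  };
  for (const kw of keywords) clusters[classifyIntent(kw)].push(kw);
  return clusters;
}
```

Even this crude version shows the agent contract: keywords in, labeled clusters out, with the classification step swappable for something smarter later.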

Vertical 2: Toddle Activity Enrichment

Built on top of the existing Toddle API (project/toddle.md):

Agents:

  • Content Enrichment Agent — Enhances activity descriptions, adds age-appropriate tags, fills missing data.
  • Data Quality Agent — Monitors data freshness, detects stale listings, flags inconsistencies.
  • Recommendation Tuning Agent — Analyzes user engagement patterns, optimizes recommendation algorithms.

Skills exercised: Database integration (Postgres + pgvector), existing API enhancement, data pipeline integration, vector embedding management.
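A sketch of the stale-listing check the Data Quality Agent might run. The field names and the 90-day threshold are assumptions, not Toddle's actual schema, and in practice this would be a query against Postgres rather than an in-memory filter:

```typescript
// Hypothetical listing shape; "lastVerified" is when the activity's
// details were last confirmed against the source.
interface Listing {
  id: number;
  title: string;
  lastVerified: Date;
}

const STALE_AFTER_DAYS = 90; // assumed threshold, tunable per category

function findStaleListings(listings: Listing[], now: Date): Listing[] {
  const cutoff = now.getTime() - STALE_AFTER_DAYS * 24 * 60 * 60 * 1000;
  return listings.filter((l) => l.lastVerified.getTime() < cutoff);
}
```

Flagged listings would then feed the Content Enrichment Agent for re-verification, which is also what the >80% precision target in Section 6 would be measured against.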

4. Platform MVP Components

The minimum platform that enables all three verticals to run:

| Component | Layer | Description | Priority |
|---|---|---|---|
| Agent Runtime | L1 | YAML + TypeScript hybrid definition, actor-based execution, skill composition | Critical |
| LLM Gateway | L0 | Multi-provider abstraction, dual-key management, token budget tracking | Critical |
| Mem0 Integration | L2 | Per-agent working memory, episodic memory, conversation context | Critical |
| Tool Integration Framework | L1 | Typed tool definitions, auth management, error handling. Integrations: GitHub, Calendar, SEO APIs, Toddle DB | Critical |
| Task Scheduler | L1 | Cron-like and event-triggered task execution for agents | High |
| Basic Orchestration | L2 | Simple task decomposition and agent-to-agent delegation (via OpenClaw initially) | High |
| Observation Logger | L0 | Structured logging of all agent actions for debugging and future training | High |
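The budget-tracking side of the LLM Gateway is simple enough to sketch now. Names here are hypothetical and provider calls are omitted; the point is the per-agent, per-client accounting the success criteria in Section 6 call for:

```typescript
// Minimal token ledger the LLM Gateway could maintain alongside each
// provider call. "Usage" and "TokenLedger" are illustrative names.
interface Usage {
  agent: string;
  client: string;
  tokens: number;
}

class TokenLedger {
  private entries: Usage[] = [];

  // Called once per completed LLM request.
  record(usage: Usage): void {
    this.entries.push(usage);
  }

  // Aggregate usage filtered by agent, client, or both.
  totalFor(filter: Partial<Pick<Usage, "agent" | "client">>): number {
    return this.entries
      .filter(
        (e) =>
          (filter.agent === undefined || e.agent === filter.agent) &&
          (filter.client === undefined || e.client === filter.client),
      )
      .reduce((sum, e) => sum + e.tokens, 0);
  }
}
```

A real gateway would persist this (e.g. to Postgres) and enforce budgets before dispatching, but the reporting query shape stays the same.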

Deferred to Phase 2:

  • NATS message bus (start with direct calls, migrate when needed)
  • Supervisor agents (Layer 3)
  • Quality monitoring agents (Layer 3)
  • Self-improvement loop (Layer 3)
  • Full knowledge graph (Apache AGE) — start with Mem0 + pgvector only
  • Multi-channel interaction (start with CLI/chat only)
  • Customer VPC deployment mode
  • Cross-vertical knowledge sharing
  • Cognee/GraphRAG integration

5. Build Plan — Parallel Team Structure

Three workstreams running in parallel:

Timeline →  Week 1-2         Week 3-4          Week 5-6          Week 7-8
            ─────────────────────────────────────────────────────────────────
Lead        │ Agent Runtime  │ LLM Gateway     │ Mem0 Integration │ Orchestration
(You)       │ + Tool         │ + Task          │ + Observation    │ + Supervisor
Foundation  │   Framework    │   Scheduler     │   Logger         │   basics
            ─────────────────────────────────────────────────────────────────
Lead        │ Research Agent │ Project Mgmt    │ Scheduling       │ Integration
(You)       │ (first agent   │ Agent           │ Agent            │ testing +
Vertical 0  │  on platform)  │                 │                  │ dogfooding
            ─────────────────────────────────────────────────────────────────
Team 2      │ Toddle API     │ Content         │ Data Quality     │ Recommendation
Toddle      │ integration    │ Enrichment      │ Agent            │ Tuning Agent
            │ + data review  │ Agent           │                  │
            ─────────────────────────────────────────────────────────────────
Team 3      │ SEO tool       │ Keyword         │ Content          │ Technical
SEO         │ integrations   │ Research Agent  │ Optimization     │ Audit +
            │ + API setup    │                 │ Agent            │ Reporting
            ─────────────────────────────────────────────────────────────────

Key dependencies:

  1. Foundation unblocks everything. Agent Runtime and Tool Framework must exist before any vertical can build agents. Lead builds these first.
  2. Vertical 0 is the first consumer. The Research Agent is the first agent to run on the platform. This surfaces integration issues before other teams hit them.
  3. Team 2 and Team 3 start with prep work (data review, API integrations, tool setup) during Weeks 1-2 while Foundation is being built.
  4. All teams converge on the same platform by Week 3 when they start building agents.

Communication:

  • Vertical teams raise platform needs → Lead prioritizes and builds
  • Weekly sync across all three workstreams
  • Shared Kaze Knowledge Graph captures cross-team learnings from day one

6. Success Criteria (MVP)

| Metric | Target |
|---|---|
| Vertical 0: Internal research agent produces usable output | Agent can take a research brief and return structured findings with sources |
| Vertical 0: Project management agent tracks real tasks | Agent manages at least one project's backlog end-to-end |
| Vertical 1: SEO keyword research is partially automated | Agent reduces manual keyword research time by 50%+ |
| Vertical 2: Toddle data quality improves measurably | Agent identifies and flags stale/incomplete listings with >80% precision |
| Platform: Multiple agents share the same runtime | At least 5 different agents running on Kaze platform |
| Platform: Memory persists across sessions | Agents recall previous context and client preferences |
| Platform: Token budget is tracked | LLM Gateway reports per-agent, per-client token usage |