Product Strategy

Part of Project Kaze Architecture


1. Vertical-First Approach

Value is created by going deep into well-understood business verticals, not by building a generic horizontal platform.

The Kaze Flywheel:

1. Pick a vertical (SEO, CRM, etc.)
2. Encode human expertise into agent skills
   (from existing manuals, workflows, SOPs)
3. Deploy agents with tight supervision
4. Quality loop: supervised → sampling → autonomous
5. Agents build vertical knowledge graph
6. Apply vertical to new clients/domains/sizes
   (knowledge transfers, agents get smarter)

           └──── Pick next vertical ────→ repeat

Each vertical makes the platform smarter, not just the individual agents. The moat is the accumulated vertical knowledge graphs and proven agent skills.

Knowledge sourcing constraint: The flywheel is powered by three knowledge sources with different legal bases — not by automatically harvesting client data:

| Source | What feeds it | Legal basis |
|---|---|---|
| Speedrun-sourced (always shared) | V0 internal ops learnings, public domain research, Speedrun-funded benchmarks | Speedrun's own IP |
| Client-contributed (opt-in only) | Anonymized/abstracted learnings from consenting clients | Contractual consent (Knowledge Contribution Addendum) |
| Client-private (never shared) | Client-specific strategies, preferences, history | Confidential by default |

By default, no client data enters shared vertical knowledge. Clients opt into a contributor tier for enriched knowledge access. See research/data-rights-knowledge-sharing.md for full legal analysis.

Vertical structure:

vertical: seo
├── knowledge/
│   ├── domain-concepts.graph       # What SEO IS
│   ├── best-practices.graph        # How to DO SEO well
│   ├── tool-knowledge.graph        # How to use SEMrush, Ahrefs, etc.
│   └── industry-patterns/          # Patterns across client types
│       ├── ecommerce-seo.graph
│       ├── saas-seo.graph
│       └── local-business-seo.graph
├── skills/
│   ├── keyword-research
│   ├── content-optimization
│   ├── technical-audit
│   ├── competitor-analysis
│   ├── backlink-prospecting
│   └── reporting
├── workflows/
│   ├── monthly-seo-audit
│   ├── content-pipeline
│   └── new-client-onboarding
└── quality/
    ├── evaluation-criteria
    ├── benchmark-datasets
    └── human-feedback-log

2. The Supervision Ramp

The transition from human control to agent autonomy happens in three phases, configured per skill × client × risk level — not as a blanket setting.

Phase 1: Supervised

  • Agent does work, human reviews every output
  • Human approves or corrects
  • Corrections feed back into agent learning
  • Signal: building training data and calibrating quality

Phase 2: Sampling

  • Agent does work, random sample (10-20%) gets human review
  • Statistical quality score maintained
  • If quality drops → automatic rollback to Phase 1
  • Signal: maintaining statistical confidence

Phase 3: Autonomous

  • Agent does work, AI quality check on all outputs
  • Auto-delivers unless confidence below threshold
  • Escalates only exceptions
  • Signal: system is self-correcting

Example: An SEO agent might simultaneously be:

  • Autonomous at keyword research (well-understood, measurable outputs)
  • Sampling on content optimization (subjective, needs occasional check)
  • Supervised on client communication (high-stakes, brand-sensitive)
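The ramp can be thought of as a small state machine per (skill, client, risk level): promote when quality holds, roll back to supervised when it slips. A minimal sketch, assuming illustrative names and thresholds (`Phase`, `next_phase`, the 0.95/0.85 cut-offs, and the 15% sample rate are all assumptions, not specified anywhere in the design):

```python
import random
from enum import Enum

class Phase(Enum):
    SUPERVISED = 1   # human reviews every output
    SAMPLING = 2     # random 10-20% human review
    AUTONOMOUS = 3   # AI quality check on all outputs, escalates exceptions

def needs_human_review(phase: Phase, sample_rate: float = 0.15) -> bool:
    if phase is Phase.SUPERVISED:
        return True
    if phase is Phase.SAMPLING:
        return random.random() < sample_rate
    return False  # autonomous: AI check only

def next_phase(phase: Phase, quality_score: float,
               promote_at: float = 0.95, rollback_at: float = 0.85) -> Phase:
    """Promote when quality holds; drop back to SUPERVISED when it slips."""
    if quality_score < rollback_at:
        return Phase.SUPERVISED  # automatic rollback
    if quality_score >= promote_at and phase is not Phase.AUTONOMOUS:
        return Phase(phase.value + 1)
    return phase
```

Because the phase is stored per skill × client × risk level, the SEO agent in the example above simply holds three independent phase values at once.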

Feedback loop:

Every human correction is captured and classified:

  • If the correction applies broadly ("always include search volume") → updates skill knowledge
  • If it's client-specific ("Client A doesn't do that product line") → updates client knowledge
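The broad-vs-client-specific routing above can be sketched as follows. The classification itself (who or what labels a correction "broad") is left open here; `Correction` and `KnowledgeStore` are hypothetical names for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Correction:
    text: str
    scope: str  # "broad" or "client_specific" — classified by a reviewer or model

@dataclass
class KnowledgeStore:
    skill_knowledge: list[str] = field(default_factory=list)       # shared per skill
    client_knowledge: dict[str, list[str]] = field(default_factory=dict)

    def apply(self, correction: Correction, client_id: str) -> None:
        if correction.scope == "broad":
            # e.g. "always include search volume" → benefits every client
            self.skill_knowledge.append(correction.text)
        else:
            # e.g. "Client A doesn't do that product line" → stays client-private
            self.client_knowledge.setdefault(client_id, []).append(correction.text)
```

Note how this routing also respects the knowledge-sourcing constraint from section 1: client-specific corrections land in a per-client store, never in shared skill knowledge.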

3. Multi-Channel Interaction

Humans interact with Kaze through their existing tools. The system is invisible — they talk to agents like they'd talk to a colleague.

The Conversation Manager maintains one unified thread regardless of channel:

  • Client asks a question on WhatsApp → agent responds on WhatsApp
  • Same client sends a document via email → agent processes it, references the WhatsApp conversation
  • Agent needs approval → routes to Slack (because that's where the client's team reviews things)
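The routing rules above reduce to: answer on the channel the last message arrived on, except approvals, which go to the client's review channel. A minimal sketch with assumed names (`Thread`, `reply_channel`, and the `approval_channel` default are illustrative, not a specified API):

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    channel: str  # "whatsapp", "email", "slack", ...
    text: str

@dataclass
class Thread:
    """One unified conversation per client, regardless of channel."""
    client_id: str
    messages: list[Message] = field(default_factory=list)
    approval_channel: str = "slack"  # where the client's team reviews things

    def receive(self, channel: str, text: str) -> None:
        self.messages.append(Message(channel, text))

    def reply_channel(self, needs_approval: bool = False) -> str:
        # Approvals route to the review channel; otherwise answer where asked.
        if needs_approval:
            return self.approval_channel
        return self.messages[-1].channel
```

Keeping one thread per client (rather than one per channel) is what lets the email-handling agent reference the earlier WhatsApp conversation.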

For SME clients (the daily experience):

No dashboards. Natural conversation through their preferred channel:

Slack #seo-updates: Agent: "I found 12 new keyword opportunities this week. Top 3 are [X, Y, Z] with estimated traffic of 5k/mo combined. I've drafted content briefs for each. Want me to proceed?"

Human: "Looks good but skip Z, we dropped that product line"

Agent: "Got it — I'll remember that. Proceeding with X and Y. Drafts by Thursday."

That correction feeds back into the client knowledge graph automatically.

For Speedrun ops team (the management experience):

Dashboard for:

  • Quality scores across all clients
  • Supervision queue (outputs flagged for human review)
  • Agent policy and skill configuration
  • Audit logs and knowledge graph changes
  • Supervision ramp management

4. Human-in-the-Loop as a Configurable Dial

Autonomy is not binary — it's a spectrum configured per agent, task type, risk level, and client preference:

Full Autonomy ◄────────────────────────────► Full Human Control
     │                                              │
     │  "Process invoices, fix errors,              │  "Show me every
     │   only escalate if amount > $10k"            │   action before
     │                                              │   you take it"
     │              Most agents live                │
     │              somewhere in between            │

Hard limits (non-negotiable, enforced in code):

  • An agent cannot grant itself new tool access or escalate its own permissions
  • Financial actions above configured thresholds require human approval regardless of agent confidence
  • Total spend circuit breakers that even AI monitors cannot override
  • All changes versioned in an immutable audit log
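"Enforced in code" means these checks sit outside the agent's reach — a policy layer the agent calls through, not a prompt it can argue with. A sketch of that shape, with illustrative names and thresholds (`PolicyEngine`, the $10k approval threshold, and the $50k spend cap are assumptions for the example):

```python
class HardLimitViolation(Exception):
    """Raised when a non-negotiable limit is hit; nothing can override it."""

class PolicyEngine:
    def __init__(self, approval_threshold: float = 10_000.0, spend_cap: float = 50_000.0):
        self.approval_threshold = approval_threshold
        self.spend_cap = spend_cap
        self.total_spend = 0.0

    def check_financial_action(self, amount: float, human_approved: bool) -> bool:
        # Circuit breaker on total spend — not overridable, even by AI monitors.
        if self.total_spend + amount > self.spend_cap:
            raise HardLimitViolation("total spend circuit breaker tripped")
        # Above-threshold actions need a human, regardless of agent confidence.
        if amount > self.approval_threshold and not human_approved:
            return False
        self.total_spend += amount
        return True

    def grant_tool_access(self, requested_by: str) -> None:
        if requested_by == "agent":
            raise HardLimitViolation("agents cannot escalate their own permissions")
```

The distinction between the two failure modes is deliberate: a missing approval returns `False` (escalate to a human), while a tripped circuit breaker or self-escalation attempt raises, halting the action outright.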