Worker Handoff
Instructions For Implementation Models
You are an implementation model operating under a supervisor-approved plan for the Stoned.AI project.
You Must
- follow the governing docs exactly
- implement only the approved scope (Phases 1–4 as defined in 03-IMPLEMENTATION-PLAN.md)
- report conflicts instead of improvising policy changes
- keep changes aligned to the acceptance criteria in 04-ACCEPTANCE-CRITERIA.md
- preserve architecture decisions in 02-ARCHITECTURE-PLAN.md unless a change request is approved
You Must Not
- rewrite governing docs
- change scope on your own
- add features not listed in the implementation plan
- weaken constraints (e.g. do not add microphone input, do not skip the broadcast view)
- invent acceptance criteria
Inputs You Should Read First
- 00-GOVERNANCE-RULES.md
- 01-PROJECT-CHARTER.md
- 02-ARCHITECTURE-PLAN.md
- 03-IMPLEMENTATION-PLAN.md
- 04-ACCEPTANCE-CRITERIA.md
- 05-RISK-REGISTER.md
Critical Context
What This Project Is
Stoned.AI is a live-streamed, unscripted conversation show. One human host (Jason) types his side. One AI generates responses. Both sides are voiced via local Kokoro TTS. The conversation displays as scrolling cards in a browser.
There are two browser views:
- /host — Jason's control panel (input, voice selection, session control)
- /broadcast — clean OBS-capturable feed (no controls; cards and audio only)
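Both views render the same card stream; /broadcast simply shows it without controls. Since delivery is over SSE (per the Arena pattern this handoff points at), a minimal sketch of the wire format may help. The event name "card" and the payload fields are illustrative assumptions, not project decisions; only the "event:"/"data:" framing is fixed by the SSE spec.

```python
import json

def sse_event(event: str, data: dict) -> bytes:
    """Format one Server-Sent Events message for the card stream.

    The field names ("event:", "data:") and the blank-line terminator
    come from the SSE specification; the event name and payload shape
    here are placeholders for illustration.
    """
    payload = json.dumps(data)
    return f"event: {event}\ndata: {payload}\n\n".encode("utf-8")
```

Both /host and /broadcast can subscribe to one stream built from messages like these; the broadcast page just renders them with no interactive chrome.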
What Already Exists (Do Not Rebuild)
The Arena project at /home/svc-admin/ai-projects/projects/arena contains:
- A working Kokoro TTS backend: src/arena/tts.py — class ArenaTTSManager, WAV file generation, session audio directories, path safety logic
- Cleaning engine patterns: src/arena/clean.py
- Proven SSE delivery pattern: src/arena/web.py
Reuse these patterns. Do not reinvent Kokoro integration from scratch. Import or copy the relevant code.
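One way to reuse without rebuilding is a thin wrapper in stoned_ai/tts.py that lazily imports Arena's class. ArenaTTSManager and its location are stated in this handoff, but its constructor signature and the `synthesize` method name below are assumptions; verify against the real class before copying.

```python
import sys
from pathlib import Path

# Arena checkout location, from the handoff notes.
ARENA_SRC = Path("/home/svc-admin/ai-projects/projects/arena/src")

class StonedTTS:
    """Thin wrapper so the rest of stoned_ai never imports Arena directly."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self._mgr = None  # created lazily, so importing this module never needs Arena

    def _manager(self):
        if self._mgr is None:
            sys.path.insert(0, str(ARENA_SRC))     # reuse, don't reinvent
            from arena.tts import ArenaTTSManager  # real class per the handoff
            self._mgr = ArenaTTSManager()          # assumed no-arg constructor
        return self._mgr

    def speak(self, text: str, voice: str):
        # "synthesize" and its keyword arguments are assumed names;
        # check Arena's actual API and adjust.
        return self._manager().synthesize(text, voice=voice, session=self.session_id)
```

Copying the relevant code into src/stoned_ai/tts.py instead of path-hacking is equally acceptable; the point is a single seam between the projects.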
Package Layout
stoned-ai/
├── pyproject.toml
├── README.md
├── scripts/
│ └── install.sh
├── src/
│ └── stoned_ai/
│ ├── __init__.py
│ ├── ai.py — AI backend (Codex, Gemini)
│ ├── clean.py — CLI noise stripping
│ ├── tts.py — Kokoro TTS wrapper
│ └── web.py — HTTP server, SSE, host and broadcast views
└── tests/
Entry Point
pyproject.toml should define:
[project.scripts]
stoned-web = "stoned_ai.web:main"
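For the script entry to resolve, stoned_ai/web.py needs a `main()`. A minimal stdlib sketch of the two required routes is below; the port, page bodies, and handler structure are placeholders, not the planned implementation (the real server also carries SSE and audio).

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class StonedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/host":
            body = b"<!doctype html><title>Host</title>"       # control panel
        elif self.path == "/broadcast":
            body = b"<!doctype html><title>Broadcast</title>"  # OBS-clean feed
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

def main():
    # Port 8000 is an arbitrary placeholder.
    HTTPServer(("127.0.0.1", 8000), StonedHandler).serve_forever()

if __name__ == "__main__":
    main()
```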
AI Backends
Phase 1 requires Codex and Gemini CLI backends only.
Codex call pattern (from Arena):
codex exec --skip-git-repo-check --color never -o <output_file> <prompt>
Gemini call pattern (from Arena):
gemini -p <prompt>
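The two call patterns above can be wrapped once in stoned_ai/ai.py. The argv lists below are transcribed verbatim from the patterns in this handoff; `run_backend` and its timeout are illustrative additions, and real code should also route output through clean.py before TTS.

```python
import subprocess
from pathlib import Path

def codex_cmd(prompt: str, output_file: Path) -> list[str]:
    # Flags copied from the Arena call pattern above.
    return ["codex", "exec", "--skip-git-repo-check", "--color", "never",
            "-o", str(output_file), prompt]

def gemini_cmd(prompt: str) -> list[str]:
    return ["gemini", "-p", prompt]

def run_backend(argv: list[str], timeout: int = 120) -> str:
    # Thin runner; timeout and error handling are placeholder choices.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    result.check_returncode()
    return result.stdout
```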
TTS Path
Generated WAV files live under:
/opt/models/arena-voices/generated/session-<id>/
This path is already used by Arena. Use the same root to avoid duplication.
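Building session directories under the shared root might look like the sketch below. The root path is from this handoff; the rejection rules are an assumption modeled on Arena's "path safety logic" (check src/arena/tts.py for the actual checks), and real code would create the directory with `mkdir(parents=True, exist_ok=True)`.

```python
from pathlib import Path

# Shared root, reused from Arena per the handoff. Do not create a second tree.
VOICES_ROOT = Path("/opt/models/arena-voices/generated")

def session_dir(session_id: str) -> Path:
    """Return the audio directory for one session, refusing unsafe ids."""
    # Assumed safety rules: non-empty, no path separators, no "..".
    if not session_id or any(c in session_id for c in "/\\") or ".." in session_id:
        raise ValueError(f"unsafe session id: {session_id!r}")
    return VOICES_ROOT / f"session-{session_id}"
```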
Known Environment
- Host: svc-ai
- Python: 3.12
- Kokoro models: /opt/models/kokoro/cache
- Arena venv (reference only): /home/svc-admin/ai-projects/projects/arena/.venv
Output Expectations
After each phase, report:
- what was changed
- what was not changed
- what remains blocked or needs escalation
- any change requests needed