Human Note: I’ve read the text below. It’s a brief enough summary to be accurate. It touches on the parallel development of my agent harness, Claude Codes Chat, which was in many ways the point of developing Cheyss from its original ruleset into this platform. It does not convey my impressions of the experience.

Cheyss is an asymmetric chess variant — one side plays a Dragon, the other a conventional army — built out into a full multiplayer platform over 22 days by a team of AI agents.

The stack: Go + SQLite backend, vanilla TypeScript + Vite frontend, Playwright for end-to-end tests, Ansible for deployment. The finished platform has PvP matchmaking with seek pools, five AI difficulty tiers, Elo ratings, a leaderboard, challenge progression, Fischer clock, a tutorial, and a bot pool managed through a web dashboard.

The Team

Eight AI agents — backend, frontend, tester, ansible, reporter, docs, style-guide, usability champion — each responsible for one repository, coordinated by a human operator. Work is tracked in bd, a git-backed issue tracker that survives context compaction (agents have finite memory windows; sessions end without warning). A custom chat system handles direct messages and broadcast notifications when cross-cutting changes require coordinated updates.

Chronology

Day 1: Core types, movement validation, REST + WebSocket API, lobby manager. Two browsers playing a live game by end of day. A UX review found 28 issues; all fixed the same day.

Days 2–7: AI engine compiled to WASM via TinyGo — minimax with alpha-beta pruning, five difficulty tiers, 358KB binary running in a Web Worker. Tutorial system, i18n, design system.

Days 8–14: JWT auth, social features, content moderation, age gating, account deletion, security event logging, admin system — 69 tasks. Then Elo ratings, leaderboard, challenge mode with medal tracking. 14 database migrations. API contract v3.1.0.

Days 15–18: Seek-pool matchmaking. Five managed AI bots (Peasant through King) connecting via loopback WebSocket with 48 config knobs each, bot badges in five UI contexts, admin CRUD and key rotation. A security review across all active agents found five issues; all fixed same day. API contract v5.0.0.

Days 19–22: 83-task edge-case test campaign covering reconnection, tab visibility, rate limiting, and WASM memory pressure (~28,000 lines of Playwright specs). A bot tournament for ladder calibration exposed connection storms, zombie games, and SQLite lock contention — six fix iterations in one day. The bot scheduler was absorbed into the main server as a pool manager with live dashboard and auto-provisioning. Ansible deployment with vault-encrypted secrets, TLS, and nginx.

Numbers

Calendar days: 22
Repositories: 6
Commits (excl. frontend): 1,257+
Tracked issues: 1,600+
Backend Go source: ~48,500 lines
E2E test source: ~28,000 lines
Database migrations: 24
API contract versions: 6