Getting Started with CCCC: From Zero to Multi-Agent Orchestration in 10 Minutes
Tired of babysitting single AI agents? Ready to let multiple agents collaborate autonomously? This guide gets you from installation to your first successful multi-agent session in 10 minutes.
Let's dive in.
Prerequisites Check (2 minutes)
Before we start, verify you have:
# Check Python (need 3.9+)
python3 --version
# ✅ Python 3.11.5
# Check tmux
tmux -V
# ✅ tmux 3.3a
# Check git
git --version
# ✅ git version 2.42.0
Missing something?
# macOS
brew install tmux
# Ubuntu/Debian
sudo apt-get install tmux python3
# Verify you have a git repository
git status
# Should not error
Step 1: Install CCCC (1 minute)
# Install using pipx (recommended)
pipx install cccc-pair
# Verify installation
cccc --version
# ✅ cccc version 1.0.0
Troubleshooting:
- pipx not found: python3 -m pip install --user pipx
- cccc not found: add ~/.local/bin to your PATH (see below)
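If cccc is installed but your shell can't find it, pipx's bin directory is probably missing from PATH. A minimal fix, assuming a bash or zsh shell on macOS or Linux:
# Let pipx register its bin directory in your shell profile
pipx ensurepath
# Or add it manually for the current session
export PATH="$HOME/.local/bin:$PATH"
# Confirm the command resolves
which cccc && cccc --version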
Step 2: Initialize in Your Repository (1 minute)
Navigate to your project:
cd ~/projects/my-awesome-app
# Initialize CCCC
cccc init
# What this creates:
# ✅ .cccc/config.yaml
# ✅ POR.md (Plan of Record)
# ✅ SUBPOR.md (Sub-Plan tracking)
# ✅ .gitignore entries
Step 3: Verify Your Setup (30 seconds)
cccc doctor
Expected output:
✅ Python 3.11.5 detected
✅ tmux 3.3a found
✅ git 2.42.0 available
✅ Repository initialized
✅ POR.md exists
✅ SUBPOR.md exists
✅ Configuration valid
Ready to run! Try: cccc run
If you see warnings, follow the suggested fixes. Common issues:
- tmux not installed: brew install tmux (macOS) or apt-get install tmux (Linux)
- Not a git repo: run git init in your project directory
Step 4: Define Your Goal (2 minutes)
Open POR.md and replace the template with your actual goal:
# Plan of Record
## Current Goals
- Implement user password reset functionality
- Send reset email with secure token
- Add password strength validation
- Write comprehensive tests
## Milestones
- [ ] Password reset token generation
- [ ] Email service integration
- [ ] Password update endpoint
- [ ] Token expiration handling
- [ ] Unit and integration tests
- [ ] Documentation
## Success Criteria
- All tests pass
- Security best practices followed
- Code reviewed and merged
Pro tip: Be specific. Compare:
❌ Bad: "Add password reset"
✅ Good: "Implement password reset with email token, 1-hour expiration, bcrypt hashing, and >80% test coverage"
Step 5: Start Orchestration (1 minute)
cccc run
What happens:
- tmux sessions launch: One per agent
- Agents read POR.md: Understand your goals
- Debate begins: Agents propose approaches
- Consensus emerges: Documented in SUBPOR.md
- Implementation starts: Small, reversible commits
- POR.md updates: Progress tracked automatically
You'll see:
🚀 CCCC Orchestrator starting...
📋 Reading POR.md...
🤖 Launching Agent A (claude-code)...
🤖 Launching Agent B (codex-cli)...
💬 Agents beginning collaboration...
Agents are now working. Options:
- Press 'a' to attach to session
- Press 'q' to quit monitoring
- Or let them work autonomously
Step 6: Monitor Progress (3 minutes)
You have three ways to monitor:
Option A: Watch POR.md Updates
# In another terminal
watch -n 5 cat POR.md
You'll see milestones checking off in real-time.
Option B: Attach to Session
# Press 'a' when prompted, or run:
cccc attach
You'll see agent conversation:
[Agent A] Proposing password reset token approach...
[Agent B] Validating security considerations...
[Consensus] Using JWT tokens with 1-hour expiration
[Agent A] Implementing token generation...
[Agent B] Writing tests for token validation...
Detach: Press Ctrl+B, then D
Option C: Review SUBPOR.md
cat SUBPOR.md
See the full debate transcript:
## Current Task: Password Reset Implementation
### Agent Debate
**Agent A Proposal:**
Use JWT tokens stored in Redis with 1-hour TTL.
**Agent B Challenge:**
Redis adds infrastructure dependency. Database-backed
tokens with expiry column are simpler.
**Agent A Response:**
True, but Redis provides automatic cleanup.
Database requires cron job for expired tokens.
**Consensus:**
Database-backed tokens with background cleanup job.
Simpler infrastructure, easy to audit.
### Implementation Status
- [x] Token model created
- [x] Email service configured
- [ ] Reset endpoint in progress...
Step 7: Review the Code (2 minutes)
Agents make small commits as they work:
# View recent commits
git log --oneline -5
# Example output:
# a3f8c12 [CCCC] Add password reset token model
# 9d2e4a1 [CCCC] Implement secure token generation
# 7c1b8f3 [CCCC] Add email service integration
# 5a9d2e1 [CCCC] Create password reset endpoint
# 2f7e8c4 [CCCC] Add password strength validator
Review a commit:
git show a3f8c12
If you don't like something: Just revert:
git revert a3f8c12
The beauty of small commits: easy to review, easy to roll back.
Real Example: What Happened
Here's an actual session output from a password reset implementation:
00:00 - Session start
00:02 - Agent debate on token approach (3 alternatives considered)
00:03 - Consensus: DB-backed JWT tokens
00:05 - Token model implemented
00:08 - Email service integrated
00:12 - Reset endpoint created
00:15 - Password validator added
00:20 - Unit tests written (23 tests)
00:23 - Integration tests added (8 scenarios)
00:25 - Documentation updated
00:26 - Final commit: All tests passing
Result:
- 7 commits
- 0 bugs in code review
- 100% test coverage
- Secure implementation (bcrypt, token expiration, rate limiting)
Without CCCC: This typically takes 2-3 hours with multiple rounds of debugging.
With CCCC: 26 minutes, production-ready code.
Common First-Session Questions
Q: "Agents are debating too long. How do I make them converge faster?"
Lower consensus threshold:
cccc run --consensus 0.7
# Default is 0.8 (80% agreement)
Q: "Can I guide the agents mid-session?"
Yes! Update POR.md while they're running:
echo "## New Constraint: Must use SendGrid for email" >> POR.md
Agents will read updated requirements on next task.
Q: "How do I stop the session?"
cccc stop
Or press Ctrl+C in the monitoring terminal.
Q: "One agent is stuck. Can I restart it?"
cccc restart-agent primary-a
Q: "Can I work from my phone?"
Yes! Set up Telegram integration:
cccc telegram setup
Follow prompts, then monitor/control from Telegram.
Your First 5 Tasks: Recommended Progression
Task 1: Simple Feature (15-30 min)
- Goal: Add a single endpoint
- Learn: Basic orchestration flow
Task 2: Feature with Tests (30-60 min)
- Goal: Add feature + comprehensive tests
- Learn: How agents handle testing
Task 3: Refactoring (60-90 min)
- Goal: Refactor a module
- Learn: Agent code analysis and improvement
Task 4: Bug Fix (20-40 min)
- Goal: Fix existing bug
- Learn: Debugging collaboration
Task 5: Complex Feature (2-3 hours)
- Goal: Multi-component feature
- Learn: Long-session context management
Next Steps
Now that you've run your first session:
Level Up Your Orchestration
- Configure agents: nano .cccc/config.yaml to customize agent behavior, consensus thresholds, and commit strategies (a sketch of the file follows this list).
- Set up IM integration: cccc telegram setup (or cccc slack setup)
- Explore advanced workflows:
  - Multi-repository orchestration
  - Custom task instructions
  - Checkpoint and resume
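For orientation, here is a rough sketch of what a customized .cccc/config.yaml could look like. The git.commit_granularity setting and the 0.8 consensus default come from elsewhere in this guide; the other key names and nesting are illustrative assumptions, not the tool's documented schema, so check the config.yaml that cccc init generated for the exact fields.
# .cccc/config.yaml (illustrative sketch; keys other than
# git.commit_granularity are assumptions based on this guide)
agents:
  primary-a: claude-code     # agent roles launched in Step 5
  primary-b: codex-cli
consensus:
  threshold: 0.8             # the default that cccc run --consensus overrides
git:
  commit_granularity: small  # keep commits small and easy to revert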
Learn Best Practices
- Usage Guide: Comprehensive command reference
- Configuration: Fine-tune orchestration
- Fundamentals: Understand the theory
Join the Community
- GitHub: github.com/ChesterRa/cccc
- Discussions: Share your experience
- Issues: Report bugs or request features
Troubleshooting
"cccc run" does nothing
Check:
cccc doctor
Common fix:
- tmux not installed: brew install tmux
- Not in a git repo: git init
Agents aren't making progress
- Check POR.md is clear and specific
- Lower the consensus threshold: cccc run --consensus 0.7
- Attach and watch the debate: cccc attach
Commits are too large
Configure smaller granularity:
# .cccc/config.yaml
git:
  commit_granularity: small
Session hangs
Force restart:
cccc stop --force
cccc run
Success Stories
"Shipped a complex feature in one afternoon" "CCCC helped me implement OAuth2 with all edge cases covered. The agent debate caught 3 security issues I would have missed."
"Context management is a game-changer" "Got interrupted 5 times during implementation. Each time I resumed, CCCC picked up exactly where we left off from POR.md."
"Better documentation than I write" "The evidence trail in SUBPOR.md documents why decisions were made. My team loves it."
Conclusion
You've now:
- ✅ Installed CCCC
- ✅ Run your first multi-agent session
- ✅ Seen agents collaborate and self-correct
- ✅ Reviewed evidence-driven development
Next challenge: Take a task you'd normally spend 2 hours on. Try it with CCCC. Track the time and quality difference.
We bet you'll never go back to single-agent development.
Happy orchestrating! 🚀
Questions? Join our GitHub Discussions or check the full documentation.