# Lighthouse AI Assistant Guide
This file provides guidance for AI assistants (Claude Code, Codex, etc.) working with Lighthouse.
## Quick Reference

```bash
# Build
make install                    # Build and install Lighthouse
cargo build --release           # Standard release build

# Test (prefer targeted tests when iterating)
cargo nextest run -p <package>          # Test a specific package
cargo nextest run -p <package> <test>   # Run an individual test
make test                               # Full test suite (~20 min)

# Lint
make lint                       # Run Clippy
cargo fmt --all && make lint-fix    # Format and fix
```
## Before You Start

Read the relevant guide for your task:

| Task | Read This First |
|---|---|
| Code review | `.ai/CODE_REVIEW.md` |
| Creating issues/PRs | `.ai/ISSUES.md` |
| Development patterns | `.ai/DEVELOPMENT.md` |
## Critical Rules (violations cause consensus failures or crashes)

### 1. No Panics at Runtime

```rust
// NEVER
let value = option.unwrap();
let item = array[1];

// ALWAYS
let value = option?;
let item = array.get(1)?;
```

Panicking is only acceptable during startup, for CLI/config validation.
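As a minimal sketch of the fallible style (the function name here is illustrative, not a Lighthouse API): returning `Option` lets `?` and `.get()` replace panicking accesses.

```rust
/// Illustrative only: returns the second element, or `None` if absent,
/// instead of panicking on an out-of-bounds index.
fn second(values: &[u64]) -> Option<u64> {
    let item = values.get(1)?; // `None` propagates instead of panicking
    Some(*item)
}

fn main() {
    assert_eq!(second(&[10, 20, 30]), Some(20));
    assert_eq!(second(&[10]), None); // out of bounds: no panic
}
```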
### 2. Consensus Crate: Safe Math Only

In `consensus/` (excluding `types/`), use saturating or checked arithmetic:

```rust
// NEVER
let result = a + b;

// ALWAYS
let result = a.saturating_add(b);
```
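A minimal illustration of the difference, using only standard-library methods (not Lighthouse code):

```rust
fn main() {
    let a: u64 = u64::MAX;

    // Plain `a + 1` would panic in debug builds and silently wrap in
    // release builds. Saturating arithmetic clamps at the numeric bound:
    assert_eq!(a.saturating_add(1), u64::MAX);

    // Checked arithmetic makes the overflow explicit and recoverable:
    assert_eq!(a.checked_add(1), None);
    assert_eq!(2u64.checked_add(3), Some(5));
}
```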
## Important Rules (violations cause bugs or performance issues)

### 3. Never Block Async

```rust
// NEVER
async fn handler() {
    expensive_computation();
}

// ALWAYS
async fn handler() -> Result<(), tokio::task::JoinError> {
    tokio::task::spawn_blocking(|| expensive_computation()).await?;
    Ok(())
}
```
### 4. Lock Ordering

Document lock acquisition order to avoid deadlocks. See `canonical_head.rs:9-32` for the pattern.
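A minimal std-only sketch of the idea (the struct and field names are hypothetical, not Lighthouse's): when two locks must be held together, every call site acquires them in the same documented order.

```rust
use std::sync::Mutex;

/// Hypothetical example. Lock ordering: `head` MUST be acquired before
/// `snapshot`. Acquiring them in the reverse order anywhere else could
/// deadlock against this function.
struct Chain {
    head: Mutex<u64>,
    snapshot: Mutex<u64>,
}

impl Chain {
    fn read_both(&self) -> (u64, u64) {
        // `unwrap` here only propagates lock poisoning; kept for brevity.
        let head = self.head.lock().unwrap(); // 1st: head
        let snapshot = self.snapshot.lock().unwrap(); // 2nd: snapshot
        (*head, *snapshot)
    }
}

fn main() {
    let chain = Chain {
        head: Mutex::new(7),
        snapshot: Mutex::new(3),
    };
    assert_eq!(chain.read_both(), (7, 3));
}
```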
### 5. Rayon Thread Pools

Use scoped Rayon pools obtained from the beacon processor, not the global pool. Using the global pool causes CPU oversubscription when the beacon processor has already allocated all CPUs.
## Good Practices

### 6. TODOs Need Issues

All `TODO` comments must link to a GitHub issue.

### 7. Clear Variable Names

Avoid ambiguous abbreviations (`bb`, `bl`). Use `beacon_block`, `blob`.
## Branch & PR Guidelines

- Branch from `unstable`; target `unstable` for PRs
- Run `cargo sort` when adding dependencies
- Run `make cli-local` when updating CLI flags
## Project Structure

```
beacon_node/           # Consensus client
  beacon_chain/        # State transition logic
  store/               # Database (hot/cold)
  network/             # P2P networking
  execution_layer/     # EL integration
validator_client/      # Validator duties
consensus/
  types/               # Core data structures
  fork_choice/         # Proto-array
```

See `.ai/DEVELOPMENT.md` for detailed architecture.
## Maintaining These Docs

These AI docs should evolve based on real interactions.

### After Code Reviews

If a developer corrects your review feedback or points out something you missed:

- Ask: "Should I update `.ai/CODE_REVIEW.md` with this lesson?"
- Add to the "Common Review Patterns" section or create a new "Lessons Learned" entry
- Include: what went wrong, what the feedback was, and what to do differently

### After PR/Issue Creation

If a developer refines your PR description or issue format:

- Ask: "Should I update `.ai/ISSUES.md` to capture this?"
- Document the preferred style or format

### After Development Work

If you learn something about the codebase architecture or patterns:

- Ask: "Should I update `.ai/DEVELOPMENT.md` with this?"
- Add to the relevant section or create a new patterns entry
### Format for Lessons

```markdown
### Lesson: [Brief Title]

**Context:** [What task were you doing?]
**Issue:** [What went wrong or was corrected?]
**Learning:** [What to do differently next time]
```
### When NOT to Update

- Minor preference differences (not worth documenting)
- One-off edge cases unlikely to recur
- Content already covered by existing documentation