From f01341a707af3c7228a2df4bed8b1693d3d04216 Mon Sep 17 00:00:00 2001
From: Marius
Date: Wed, 31 Dec 2025 01:03:23 +0200
Subject: [PATCH] Add Claude Code configuration
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5
---
 .claude/commands/branch-plan-handover.md      |   5 +
 .claude/commands/context-handover.md          |   8 ++
 .claude/commands/plan-handover.md             |   4 +
 .claude/commands/ultimate_validate_command.md | 116 ++++++++++++++++++
 4 files changed, 133 insertions(+)
 create mode 100644 .claude/commands/branch-plan-handover.md
 create mode 100644 .claude/commands/context-handover.md
 create mode 100644 .claude/commands/plan-handover.md
 create mode 100644 .claude/commands/ultimate_validate_command.md

diff --git a/.claude/commands/branch-plan-handover.md b/.claude/commands/branch-plan-handover.md
new file mode 100644
index 0000000..b5f129c
--- /dev/null
+++ b/.claude/commands/branch-plan-handover.md
@@ -0,0 +1,5 @@
+Create a new branch, save the detailed implementation plan to a markdown file for context handover to another session, then stop.
+
+1. **Create new branch** with a descriptive name based on the current task
+2. **Save the implementation plan** you created earlier in this session to a markdown file in the project root
+3. **Stop execution** - do not commit anything, just prepare the context for handover to another session
\ No newline at end of file
diff --git a/.claude/commands/context-handover.md b/.claude/commands/context-handover.md
new file mode 100644
index 0000000..84eb07f
--- /dev/null
+++ b/.claude/commands/context-handover.md
@@ -0,0 +1,8 @@
+Save detailed context about the current problem to a markdown file for handover to another session because the context limit has been reached.
+
+1. **Create context handover file** in the project root: `CONTEXT_HANDOVER_[TIMESTAMP].md`
+2. **Document the current problem** being worked on with all relevant details and analysis
+3. **Include current progress** - what has been discovered, analyzed, or attempted so far
+4. **List key files examined** and their relevance to the problem
+5. **Save current state** - todos, findings, next steps, and any constraints
+6. **Stop execution** - context is now ready for a fresh session to continue the work
\ No newline at end of file
diff --git a/.claude/commands/plan-handover.md b/.claude/commands/plan-handover.md
new file mode 100644
index 0000000..7fa3175
--- /dev/null
+++ b/.claude/commands/plan-handover.md
@@ -0,0 +1,4 @@
+Save the detailed implementation plan to a markdown file for context handover to another session, then stop.
+
+1. **Save the implementation plan** you created earlier in this session to a markdown file in the project root
+2. **Stop execution** - do not commit anything, just prepare the context for handover to another session
\ No newline at end of file
diff --git a/.claude/commands/ultimate_validate_command.md b/.claude/commands/ultimate_validate_command.md
new file mode 100644
index 0000000..48b2678
--- /dev/null
+++ b/.claude/commands/ultimate_validate_command.md
@@ -0,0 +1,116 @@
+---
+description: Generate comprehensive validation command for this codebase
+---
+
+# Generate Ultimate Validation Command
+
+Analyze this codebase deeply and create `.claude/commands/validate.md` that comprehensively validates everything.
+
+## Step 0: Discover Real User Workflows
+
+**Before analyzing tooling, understand what users ACTUALLY do:**
+
+1. Read workflow documentation:
+   - README.md - Look for "Usage", "Quickstart", "Examples" sections
+   - CLAUDE.md/AGENTS.md or similar - Look for workflow patterns
+   - docs/ folder - User guides, tutorials
+
+2. Identify external integrations:
+   - What CLIs does the app use? (Check Dockerfile for installed tools)
+   - What external APIs does it call? (Telegram, Slack, GitHub, etc.)
+   - What services does it interact with?
+
+3. Extract complete user journeys from docs:
+   - Find examples like "Fix Issue (GitHub):" or "User does X → then Y → then Z"
+   - Each workflow becomes an E2E test scenario
+
+**Critical: Your E2E tests should mirror actual workflows from docs, not just test internal APIs.**
+
+## Step 1: Deep Codebase Analysis
+
+Explore the codebase to understand:
+
+**What validation tools already exist:**
+- Linting config: `.eslintrc*`, `.pylintrc`, `ruff.toml`, etc.
+- Type checking: `tsconfig.json`, `mypy.ini`, etc.
+- Style/formatting: `.prettierrc*`, `black`, `.editorconfig`
+- Unit tests: `jest.config.*`, `pytest.ini`, test directories
+- Package manager scripts: `package.json` scripts, `Makefile`, `pyproject.toml` tools
+
+**What the application does:**
+- Frontend: Routes, pages, components, user flows
+- Backend: API endpoints, authentication, database operations
+- Database: Schema, migrations, models
+- Infrastructure: Docker services, dependencies
+
+**How things are currently tested:**
+- Existing test files and patterns
+- CI/CD workflows (`.github/workflows/`, etc.)
+- Test commands in `package.json` or scripts
+
+## Step 2: Generate validate.md
+
+Create `.claude/commands/validate.md` with these phases (ONLY include phases that exist in the codebase):
+
+### Phase 1: Linting
+Run the actual linter commands found in the project (e.g., `npm run lint`, `ruff check`, etc.)
+
+### Phase 2: Type Checking
+Run the actual type checker commands found (e.g., `tsc --noEmit`, `mypy .`, etc.)
+
+### Phase 3: Style Checking
+Run the actual formatter check commands found (e.g., `prettier --check`, `black --check`, etc.)
+
+### Phase 4: Unit Testing
+Run the actual test commands found (e.g., `npm test`, `pytest`, etc.)
+
+### Phase 5: End-to-End Testing (BE CREATIVE AND COMPREHENSIVE)
+
+Test COMPLETE user workflows from documentation, not just internal APIs.
+
+**The Three Levels of E2E Testing:**
+
+1. **Internal APIs** (what you might naturally test):
+   - Test adapter endpoints work
+   - Database queries succeed
+   - Commands execute
+
+2. **External Integrations** (what you MUST test):
+   - CLI operations (GitHub CLI create issue/PR, etc.)
+   - Platform APIs (send Telegram message, post Slack message)
+   - Any external services the app depends on
+
+3. **Complete User Journeys** (what gives 100% confidence):
+   - Follow workflows from docs start-to-finish
+   - Example: "User asks bot to fix GitHub issue" → Bot clones repo → Makes changes → Creates PR → Comments on issue
+   - Test like a user would actually use the application in production
+
+**Examples of good vs. bad E2E tests:**
+- ❌ Bad: Tests that the `/clone` command stores data in the database
+- ✅ Good: Clone repo → Load commands → Execute command → Verify git commit created
+- ✅ Great: Create GitHub issue → Bot receives webhook → Analyzes issue → Creates PR → Comments on issue with PR link
+
+**Approach:**
+- Use Docker for isolated, reproducible testing
+- Create test data/repos/issues as needed
+- Verify outcomes in external systems (GitHub, database, file system)
+- Clean up after tests
+
+## Critical: Don't Stop Until Everything is Validated
+
+**Your job is to create a validation command that leaves NO STONE UNTURNED.**
+
+- Every user workflow from docs should be tested end-to-end
+- Every external integration should be exercised (GitHub CLI, APIs, etc.)
+- Every API endpoint should be hit
+- Every error case should be verified
+- Database integrity should be confirmed
+- The validation should be so thorough that manual testing is completely unnecessary
+
+If `/validate` passes, the user should have 100% confidence that their application works correctly in production. Don't settle for partial coverage - make it comprehensive, creative, and complete.
+
+## Output
+
+Write the generated validation command to `.claude/commands/validate.md`.
+
+The command should be executable, practical, and give complete confidence in the codebase.
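
For concreteness, below is a minimal sketch of the kind of `.claude/commands/validate.md` that Step 2 of `ultimate_validate_command.md` is meant to produce. It assumes a hypothetical TypeScript project using npm scripts, ESLint, Prettier, and Docker Compose; every concrete command shown is an assumption about that hypothetical project rather than part of the patch above, and a real generated file would substitute whatever tooling Step 1 actually discovers.

```markdown
---
description: Run every validation phase for this codebase
---

# Validate

Run each phase in order and stop on the first failure. Report a summary of passes and failures at the end.

## Phase 1: Linting
- Run `npm run lint` (assumes a `lint` script is defined in `package.json`)

## Phase 2: Type Checking
- Run `npx tsc --noEmit`

## Phase 3: Style Checking
- Run `npx prettier --check .`

## Phase 4: Unit Testing
- Run `npm test`

## Phase 5: End-to-End Testing
- Start the app in isolation with `docker compose up -d`
- Walk through each user workflow documented in README.md from start to finish and verify the expected outcome in the external system (repository, database, or file system)
- Tear everything down with `docker compose down -v` and clean up any test data that was created
```

The phases mirror the structure the generator command prescribes; only the phases whose tooling actually exists in the target repository would be kept in the generated file.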