gitweb.erp-flowers.ru Git - erp24_rep/yii-erp24/.git/commitdiff
chore: update .claude/skills — refactor to <500 lines + references/assets origin/chore/update-claude-skills
author Aleksey Filippov <Aleksey.Filippov@erp-flowers.ru>
Sat, 28 Feb 2026 15:52:59 +0000 (18:52 +0300)
committer Aleksey Filippov <Aleksey.Filippov@erp-flowers.ru>
Sat, 28 Feb 2026 15:52:59 +0000 (18:52 +0300)
- Extracted detailed content from 21 skills into references/ and assets/
- Each SKILL.md now acts as a hub (<500 lines) with JiT-loaded references
- Added new skills: code-review-expert
- Added references/ dirs for: jira-*, github-*, swarm-*, sparc-*, pair-programming, etc.
- Added assets/ dirs for: agentdb-optimization, flow-nexus-platform, github-*, etc.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
192 files changed:
.claude/skills/agentdb-advanced/references/cli-and-ops.md [new file with mode: 0644]
.claude/skills/agentdb-advanced/references/distance-metrics.md [new file with mode: 0644]
.claude/skills/agentdb-advanced/references/hybrid-search.md [new file with mode: 0644]
.claude/skills/agentdb-advanced/references/multi-database.md [new file with mode: 0644]
.claude/skills/agentdb-advanced/references/production-patterns.md [new file with mode: 0644]
.claude/skills/agentdb-advanced/references/quic-sync.md [new file with mode: 0644]
.claude/skills/agentdb-learning/references/algorithms.md [new file with mode: 0644]
.claude/skills/agentdb-learning/references/api-quickstart.md [new file with mode: 0644]
.claude/skills/agentdb-learning/references/performance.md [new file with mode: 0644]
.claude/skills/agentdb-learning/references/training-workflow.md [new file with mode: 0644]
.claude/skills/agentdb-learning/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/assets/benchmarks.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/references/caching-and-batch-ops.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/references/hnsw-indexing.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/references/memory-optimization.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/references/quantization-strategies.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/references/recipes-and-scaling.md [new file with mode: 0644]
.claude/skills/agentdb-optimization/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/code-review-expert/SKILL.md [new file with mode: 0644]
.claude/skills/code-review-expert/assets/review-report-template.md [new file with mode: 0644]
.claude/skills/code-review-expert/references/architecture-patterns.md [new file with mode: 0644]
.claude/skills/code-review-expert/references/review-checklist.md [new file with mode: 0644]
.claude/skills/code-review-expert/references/security-checklist.md [new file with mode: 0644]
.claude/skills/flow-nexus-neural/references/api-reference.md [new file with mode: 0644]
.claude/skills/flow-nexus-neural/references/architecture-patterns.md [new file with mode: 0644]
.claude/skills/flow-nexus-neural/references/distributed-training.md [new file with mode: 0644]
.claude/skills/flow-nexus-neural/references/single-node-training.md [new file with mode: 0644]
.claude/skills/flow-nexus-neural/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/flow-nexus-neural/references/use-cases.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/assets/best-practices.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/assets/quick-start-guide.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/assets/troubleshooting.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/advanced-patterns.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/app-store-deployment.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/authentication.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/challenges-achievements.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/payments-credits.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/sandbox-management.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/storage-realtime.md [new file with mode: 0644]
.claude/skills/flow-nexus-platform/references/system-utilities.md [new file with mode: 0644]
.claude/skills/flow-nexus-swarm/references/advanced-features.md [new file with mode: 0644]
.claude/skills/flow-nexus-swarm/references/best-practices.md [new file with mode: 0644]
.claude/skills/flow-nexus-swarm/references/orchestration-patterns.md [new file with mode: 0644]
.claude/skills/flow-nexus-swarm/references/swarm-management.md [new file with mode: 0644]
.claude/skills/flow-nexus-swarm/references/templates.md [new file with mode: 0644]
.claude/skills/flow-nexus-swarm/references/workflow-automation.md [new file with mode: 0644]
.claude/skills/github-code-review/assets/pr-template.md [new file with mode: 0644]
.claude/skills/github-code-review/assets/swarm-config.yml [new file with mode: 0644]
.claude/skills/github-code-review/references/ci-cd-workflows.md [new file with mode: 0644]
.claude/skills/github-code-review/references/comment-templates.md [new file with mode: 0644]
.claude/skills/github-code-review/references/custom-agents.md [new file with mode: 0644]
.claude/skills/github-code-review/references/review-agents.md [new file with mode: 0644]
.claude/skills/github-code-review/references/review-configuration.md [new file with mode: 0644]
.claude/skills/github-code-review/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/github-code-review/references/workflow-examples.md [new file with mode: 0644]
.claude/skills/github-multi-repo/assets/architecture-layouts.md [new file with mode: 0644]
.claude/skills/github-multi-repo/assets/cli-reference.md [new file with mode: 0644]
.claude/skills/github-multi-repo/references/configuration.md [new file with mode: 0644]
.claude/skills/github-multi-repo/references/cross-repo-swarm.md [new file with mode: 0644]
.claude/skills/github-multi-repo/references/orchestration-workflows.md [new file with mode: 0644]
.claude/skills/github-multi-repo/references/package-sync.md [new file with mode: 0644]
.claude/skills/github-multi-repo/references/repo-architecture.md [new file with mode: 0644]
.claude/skills/github-multi-repo/references/sync-patterns.md [new file with mode: 0644]
.claude/skills/github-project-management/assets/cli-reference.md [new file with mode: 0644]
.claude/skills/github-project-management/assets/issue-templates.md [new file with mode: 0644]
.claude/skills/github-project-management/assets/workflow-configs.md [new file with mode: 0644]
.claude/skills/github-project-management/references/advanced-coordination.md [new file with mode: 0644]
.claude/skills/github-project-management/references/board-automation.md [new file with mode: 0644]
.claude/skills/github-project-management/references/issue-management.md [new file with mode: 0644]
.claude/skills/github-project-management/references/sprint-planning.md [new file with mode: 0644]
.claude/skills/github-project-management/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/github-release-management/assets/hotfix-workflow.yml [new file with mode: 0644]
.claude/skills/github-release-management/assets/release-deployment.yml [new file with mode: 0644]
.claude/skills/github-release-management/assets/release-swarm-config.yml [new file with mode: 0644]
.claude/skills/github-release-management/assets/release-workflow.yml [new file with mode: 0644]
.claude/skills/github-release-management/references/advanced-workflows.md [new file with mode: 0644]
.claude/skills/github-release-management/references/basic-usage.md [new file with mode: 0644]
.claude/skills/github-release-management/references/best-practices.md [new file with mode: 0644]
.claude/skills/github-release-management/references/enterprise-features.md [new file with mode: 0644]
.claude/skills/github-release-management/references/release-checklists.md [new file with mode: 0644]
.claude/skills/github-release-management/references/swarm-coordination.md [new file with mode: 0644]
.claude/skills/github-release-management/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/assets/command-reference.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/assets/setup-checklist.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/advanced-features.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/best-practices.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/claude-flow-integration.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/debugging.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/monitoring-analytics.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/real-world-examples.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/swarm-modes.md [new file with mode: 0644]
.claude/skills/github-workflow-automation/references/workflow-templates.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/assets/hive-config.json [new file with mode: 0644]
.claude/skills/hive-mind-advanced/assets/memory-config.json [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/api-reference.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/best-practices.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/collective-memory.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/consensus-mechanisms.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/examples.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/integration-patterns.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/performance-optimization.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/queen-worker-architecture.md [new file with mode: 0644]
.claude/skills/hive-mind-advanced/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/hooks-automation/assets/settings-advanced.json [new file with mode: 0644]
.claude/skills/hooks-automation/assets/settings-auto-testing.json [new file with mode: 0644]
.claude/skills/hooks-automation/assets/settings-basic.json [new file with mode: 0644]
.claude/skills/hooks-automation/assets/settings-protected.json [new file with mode: 0644]
.claude/skills/hooks-automation/references/custom-hooks.md [new file with mode: 0644]
.claude/skills/hooks-automation/references/git-hooks.md [new file with mode: 0644]
.claude/skills/hooks-automation/references/hook-cli-reference.md [new file with mode: 0644]
.claude/skills/hooks-automation/references/mcp-integration.md [new file with mode: 0644]
.claude/skills/hooks-automation/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/hooks-automation/references/workflow-examples.md [new file with mode: 0644]
.claude/skills/jira-comment/references/adf-format.md [new file with mode: 0644]
.claude/skills/jira-comment/references/api-reference.md [new file with mode: 0644]
.claude/skills/jira-comment/references/comment-templates.md [new file with mode: 0644]
.claude/skills/jira-debate/assets/debate-log-template.md [new file with mode: 0644]
.claude/skills/jira-debate/assets/review-prompt-templates.md [new file with mode: 0644]
.claude/skills/jira-debate/references/consensus-logic.md [new file with mode: 0644]
.claude/skills/jira-debate/references/openrouter-api.md [new file with mode: 0644]
.claude/skills/jira-debate/references/persona-debate-protocol.md [new file with mode: 0644]
.claude/skills/jira-debate/references/progress-output.md [new file with mode: 0644]
.claude/skills/jira-focus-detect/references/detection-rules.md [new file with mode: 0644]
.claude/skills/jira-focus-detect/references/output-and-integration.md [new file with mode: 0644]
.claude/skills/jira-focus-detect/references/scoring-algorithm.md [new file with mode: 0644]
.claude/skills/jira-implement/references/implementation-patterns.md [new file with mode: 0644]
.claude/skills/jira-implement/references/lifecycle-and-errors.md [new file with mode: 0644]
.claude/skills/jira-plan/references/decomposition-rules.md [new file with mode: 0644]
.claude/skills/jira-plan/references/jira-sync-details.md [new file with mode: 0644]
.claude/skills/jira-plan/references/plan-template.md [new file with mode: 0644]
.claude/skills/jira-report/references/metric-extraction.md [new file with mode: 0644]
.claude/skills/jira-report/references/report-template.md [new file with mode: 0644]
.claude/skills/jira-report/references/workflow-integration.md [new file with mode: 0644]
.claude/skills/jira-skill-recommend/references/integration-patterns.md [new file with mode: 0644]
.claude/skills/jira-skill-recommend/references/matching-algorithm.md [new file with mode: 0644]
.claude/skills/jira-skill-recommend/references/skill-catalog.md [new file with mode: 0644]
.claude/skills/jira-spec/references/generation-rules.md [new file with mode: 0644]
.claude/skills/jira-spec/references/spec-template.md [new file with mode: 0644]
.claude/skills/jira-workflow/references/execution-instructions.md [new file with mode: 0644]
.claude/skills/jira-workflow/references/pipeline-steps.md [new file with mode: 0644]
.claude/skills/pair-programming/references/commands.md [new file with mode: 0644]
.claude/skills/pair-programming/references/configuration.md [new file with mode: 0644]
.claude/skills/pair-programming/references/examples.md [new file with mode: 0644]
.claude/skills/pair-programming/references/modes.md [new file with mode: 0644]
.claude/skills/pair-programming/references/session-management.md [new file with mode: 0644]
.claude/skills/pair-programming/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/performance-analysis/assets/analyze-performance.js [new file with mode: 0644]
.claude/skills/performance-analysis/assets/github-workflow.yml [new file with mode: 0644]
.claude/skills/performance-analysis/references/advanced-usage.md [new file with mode: 0644]
.claude/skills/performance-analysis/references/bottleneck-metrics.md [new file with mode: 0644]
.claude/skills/performance-analysis/references/optimization-fixes.md [new file with mode: 0644]
.claude/skills/performance-analysis/references/profiling-patterns.md [new file with mode: 0644]
.claude/skills/performance-analysis/references/report-generation.md [new file with mode: 0644]
.claude/skills/performance-analysis/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/skill-builder/assets/examples-wild.md [new file with mode: 0644]
.claude/skills/skill-builder/assets/template-advanced.md [new file with mode: 0644]
.claude/skills/skill-builder/assets/template-basic.md [new file with mode: 0644]
.claude/skills/skill-builder/assets/template-intermediate.md [new file with mode: 0644]
.claude/skills/skill-builder/references/content-structure.md [new file with mode: 0644]
.claude/skills/skill-builder/references/progressive-disclosure.md [new file with mode: 0644]
.claude/skills/skill-builder/references/scripts-and-resources.md [new file with mode: 0644]
.claude/skills/skill-builder/references/validation-checklist.md [new file with mode: 0644]
.claude/skills/skill-builder/references/yaml-frontmatter.md [new file with mode: 0644]
.claude/skills/sparc-methodology/assets/advanced-features.md [new file with mode: 0644]
.claude/skills/sparc-methodology/assets/common-workflows.md [new file with mode: 0644]
.claude/skills/sparc-methodology/assets/examples.md [new file with mode: 0644]
.claude/skills/sparc-methodology/references/activation-methods.md [new file with mode: 0644]
.claude/skills/sparc-methodology/references/best-practices.md [new file with mode: 0644]
.claude/skills/sparc-methodology/references/modes.md [new file with mode: 0644]
.claude/skills/sparc-methodology/references/orchestration-patterns.md [new file with mode: 0644]
.claude/skills/sparc-methodology/references/tdd-workflows.md [new file with mode: 0644]
.claude/skills/stream-chain/references/advanced-use-cases.md [new file with mode: 0644]
.claude/skills/stream-chain/references/best-practices.md [new file with mode: 0644]
.claude/skills/stream-chain/references/custom-chain-examples.md [new file with mode: 0644]
.claude/skills/stream-chain/references/custom-pipeline-config.md [new file with mode: 0644]
.claude/skills/stream-chain/references/integration.md [new file with mode: 0644]
.claude/skills/stream-chain/references/predefined-pipelines.md [new file with mode: 0644]
.claude/skills/stream-chain/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/swarm-advanced/assets/cli-reference.md [new file with mode: 0644]
.claude/skills/swarm-advanced/assets/examples.md [new file with mode: 0644]
.claude/skills/swarm-advanced/references/advanced-techniques.md [new file with mode: 0644]
.claude/skills/swarm-advanced/references/analysis-swarm.md [new file with mode: 0644]
.claude/skills/swarm-advanced/references/best-practices.md [new file with mode: 0644]
.claude/skills/swarm-advanced/references/development-swarm.md [new file with mode: 0644]
.claude/skills/swarm-advanced/references/research-swarm.md [new file with mode: 0644]
.claude/skills/swarm-advanced/references/testing-swarm.md [new file with mode: 0644]
.claude/skills/verification-quality/references/configuration.md [new file with mode: 0644]
.claude/skills/verification-quality/references/integrations.md [new file with mode: 0644]
.claude/skills/verification-quality/references/reports-dashboard.md [new file with mode: 0644]
.claude/skills/verification-quality/references/troubleshooting.md [new file with mode: 0644]
.claude/skills/verification-quality/references/truth-scoring.md [new file with mode: 0644]
.claude/skills/verification-quality/references/verification-checks.md [new file with mode: 0644]

diff --git a/.claude/skills/agentdb-advanced/references/cli-and-ops.md b/.claude/skills/agentdb-advanced/references/cli-and-ops.md
new file mode 100644 (file)
index 0000000..74a882c
--- /dev/null
@@ -0,0 +1,68 @@
+# CLI Advanced Operations & Environment Reference
+
+## Database Import/Export
+
+```bash
+# Export with compression
+npx agentdb@latest export ./vectors.db ./backup.json.gz --compress
+
+# Import from backup
+npx agentdb@latest import ./backup.json.gz --decompress
+
+# Merge databases
+npx agentdb@latest merge ./db1.sqlite ./db2.sqlite ./merged.sqlite
+```
+
+## Database Optimization
+
+```bash
+# Vacuum database (reclaim space)
+sqlite3 .agentdb/vectors.db "VACUUM;"
+
+# Analyze for query optimization
+sqlite3 .agentdb/vectors.db "ANALYZE;"
+
+# Rebuild indices
+npx agentdb@latest reindex ./vectors.db
+```
+
+## Environment Variables
+
+```bash
+# AgentDB configuration
+AGENTDB_PATH=.agentdb/reasoningbank.db
+AGENTDB_ENABLED=true
+
+# Performance tuning
+AGENTDB_QUANTIZATION=binary     # binary|scalar|product|none
+AGENTDB_CACHE_SIZE=2000
+AGENTDB_HNSW_M=16
+AGENTDB_HNSW_EF=100
+
+# Learning plugins
+AGENTDB_LEARNING=true
+
+# Reasoning agents
+AGENTDB_REASONING=true
+
+# QUIC synchronization
+AGENTDB_QUIC_SYNC=true
+AGENTDB_QUIC_PORT=4433
+AGENTDB_QUIC_PEERS=host1:4433,host2:4433
+```
+
+## Environment Variables Quick Reference
+
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `AGENTDB_PATH` | `.agentdb/reasoningbank.db` | Database file path |
+| `AGENTDB_ENABLED` | `true` | Enable/disable AgentDB |
+| `AGENTDB_QUANTIZATION` | `none` | Quantization type |
+| `AGENTDB_CACHE_SIZE` | `1000` | Cache size (entries) |
+| `AGENTDB_HNSW_M` | `16` | HNSW graph connectivity |
+| `AGENTDB_HNSW_EF` | `100` | HNSW search depth |
+| `AGENTDB_LEARNING` | `false` | Enable learning plugins |
+| `AGENTDB_REASONING` | `false` | Enable reasoning agents |
+| `AGENTDB_QUIC_SYNC` | `false` | Enable QUIC sync |
+| `AGENTDB_QUIC_PORT` | `4433` | QUIC server port |
+| `AGENTDB_QUIC_PEERS` | (empty) | Comma-separated peer list |
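
*Editor's note:* the variables documented above can be collected into a typed config in application code. The helper below is a hypothetical sketch (not part of the AgentDB CLI or API); names and defaults mirror the quick-reference table:

```typescript
// Hypothetical helper: read the documented AGENTDB_* variables,
// falling back to the defaults from the table above.
function loadAgentDBConfig(env: Record<string, string | undefined>) {
  return {
    dbPath: env.AGENTDB_PATH ?? '.agentdb/reasoningbank.db',
    enabled: (env.AGENTDB_ENABLED ?? 'true') === 'true',
    quantization: env.AGENTDB_QUANTIZATION ?? 'none',
    cacheSize: Number(env.AGENTDB_CACHE_SIZE ?? 1000),
    hnswM: Number(env.AGENTDB_HNSW_M ?? 16),
    hnswEf: Number(env.AGENTDB_HNSW_EF ?? 100),
    quicSync: (env.AGENTDB_QUIC_SYNC ?? 'false') === 'true',
    quicPort: Number(env.AGENTDB_QUIC_PORT ?? 4433),
    // '' splits to [''], so filter(Boolean) yields an empty peer list
    quicPeers: (env.AGENTDB_QUIC_PEERS ?? '').split(',').filter(Boolean),
  };
}
```

In a Node process, `loadAgentDBConfig(process.env)` would apply the same defaults wherever a variable is unset.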
diff --git a/.claude/skills/agentdb-advanced/references/distance-metrics.md b/.claude/skills/agentdb-advanced/references/distance-metrics.md
new file mode 100644 (file)
index 0000000..4fb0900
--- /dev/null
@@ -0,0 +1,98 @@
+# Distance Metrics Reference
+
+## Cosine Similarity (Default)
+
+Best for normalized vectors, semantic similarity.
+
+```bash
+# CLI
+npx agentdb@latest query ./vectors.db "[0.1,0.2,...]" -m cosine
+```
+
+```typescript
+// API
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  metric: 'cosine',
+  k: 10,
+});
+```
+
+**Use Cases**:
+- Text embeddings (BERT, GPT, etc.)
+- Semantic search
+- Document similarity
+- Most general-purpose applications
+
+**Formula**: `cos(theta) = (A . B) / (||A|| x ||B||)`
+**Range**: [-1, 1] (1 = identical, -1 = opposite)
+
+## Euclidean Distance (L2)
+
+Best for spatial data, geometric similarity.
+
+```bash
+# CLI
+npx agentdb@latest query ./vectors.db "[0.1,0.2,...]" -m euclidean
+```
+
+```typescript
+// API
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  metric: 'euclidean',
+  k: 10,
+});
+```
+
+**Use Cases**:
+- Image embeddings
+- Spatial data
+- Computer vision
+- When vector magnitude matters
+
+**Formula**: `d = sqrt(sum((ai - bi)^2))`
+**Range**: [0, inf] (0 = identical, inf = very different)
+
+## Dot Product
+
+Best for pre-normalized vectors, fast computation.
+
+```bash
+# CLI
+npx agentdb@latest query ./vectors.db "[0.1,0.2,...]" -m dot
+```
+
+```typescript
+// API
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  metric: 'dot',
+  k: 10,
+});
+```
+
+**Use Cases**:
+- Pre-normalized embeddings
+- Fast similarity computation
+- When vectors are already unit-length
+
+**Formula**: `dot = sum(ai x bi)`
+**Range**: [-inf, inf] (higher = more similar)
+
+## Custom Distance Metrics
+
+```typescript
+// Implement custom distance function
+function customDistance(vec1: number[], vec2: number[]): number {
+  // Weighted Euclidean distance
+  const weights = [1.0, 2.0, 1.5, ...];
+  let sum = 0;
+  for (let i = 0; i < vec1.length; i++) {
+    sum += weights[i] * Math.pow(vec1[i] - vec2[i], 2);
+  }
+  return Math.sqrt(sum);
+}
+
+// Use in search (requires custom implementation)
+```
+
+## Metric Selection Guide
+
+| Metric | Best For | Magnitude Sensitive | Speed |
+|--------|----------|---------------------|-------|
+| Cosine | Text/semantic search | No | Fast |
+| Euclidean | Spatial/image data | Yes | Fast |
+| Dot Product | Pre-normalized vectors | Yes | Fastest |
+| Custom | Domain-specific needs | Configurable | Varies |
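
*Editor's note:* the three built-in metrics above follow directly from their formulas. The TypeScript below is an illustrative sketch only — AgentDB computes these internally — but it makes the stated ranges concrete:

```typescript
// Illustrative implementations of the three documented formulas.
function dot(a: number[], b: number[]): number {
  // dot = sum(ai x bi)
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

function euclidean(a: number[], b: number[]): number {
  // d = sqrt(sum((ai - bi)^2))
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function cosine(a: number[], b: number[]): number {
  // cos(theta) = (A . B) / (||A|| x ||B||)
  const norm = (v: number[]) => Math.sqrt(dot(v, v));
  return dot(a, b) / (norm(a) * norm(b));
}
```

As the ranges in the tables state: identical directions give cosine 1, opposite directions give -1, and identical vectors give Euclidean distance 0.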
diff --git a/.claude/skills/agentdb-advanced/references/hybrid-search.md b/.claude/skills/agentdb-advanced/references/hybrid-search.md
new file mode 100644 (file)
index 0000000..f4bdf55
--- /dev/null
@@ -0,0 +1,98 @@
+# Hybrid Search (Vector + Metadata) Reference
+
+## Basic Hybrid Search
+
+Combine vector similarity with metadata filtering:
+
+```typescript
+// Store documents with metadata
+await adapter.insertPattern({
+  id: '',
+  type: 'document',
+  domain: 'research-papers',
+  pattern_data: JSON.stringify({
+    embedding: documentEmbedding,
+    text: documentText,
+    metadata: {
+      author: 'Jane Smith',
+      year: 2025,
+      category: 'machine-learning',
+      citations: 150,
+    }
+  }),
+  confidence: 1.0,
+  usage_count: 0,
+  success_count: 0,
+  created_at: Date.now(),
+  last_used: Date.now(),
+});
+
+// Hybrid search: vector similarity + metadata filters
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  domain: 'research-papers',
+  k: 20,
+  filters: {
+    year: { $gte: 2023 },          // Published 2023 or later
+    category: 'machine-learning',   // ML papers only
+    citations: { $gte: 50 },       // Highly cited
+  },
+});
+```
+
+## Advanced Filtering
+
+```typescript
+// Complex metadata queries
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  domain: 'products',
+  k: 50,
+  filters: {
+    price: { $gte: 10, $lte: 100 },      // Price range
+    category: { $in: ['electronics', 'gadgets'] },  // Multiple categories
+    rating: { $gte: 4.0 },                // High rated
+    inStock: true,                        // Available
+    tags: { $contains: 'wireless' },      // Has tag
+  },
+});
+```
+
+## Filter Operators
+
+| Operator | Description | Example |
+|----------|-------------|---------|
+| `$gte` | Greater than or equal | `{ year: { $gte: 2023 } }` |
+| `$lte` | Less than or equal | `{ price: { $lte: 100 } }` |
+| `$in` | Value in array | `{ category: { $in: ['a', 'b'] } }` |
+| `$contains` | Array contains value | `{ tags: { $contains: 'x' } }` |
+| (direct) | Exact match | `{ inStock: true }` |
+
+## Weighted Hybrid Search
+
+Combine vector and metadata scores:
+
+```typescript
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  domain: 'content',
+  k: 20,
+  hybridWeights: {
+    vectorSimilarity: 0.7,  // 70% weight on semantic similarity
+    metadataScore: 0.3,     // 30% weight on metadata match
+  },
+  filters: {
+    category: 'technology',
+    recency: { $gte: Date.now() - 30 * 24 * 3600000 },  // Last 30 days
+  },
+});
+```
+
+## Troubleshooting: No Results
+
+```typescript
+// Relax filters progressively
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  k: 100,  // Increase k
+  filters: {
+    // Remove or relax filters one by one
+  },
+});
+```
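
*Editor's note:* the operator semantics from the filter table above can be sketched as a plain predicate. This is illustrative only — it is not AgentDB's internal filtering code — but it shows how `$gte`, `$lte`, `$in`, `$contains`, and direct equality are expected to behave:

```typescript
// Illustrative matcher for the documented filter operators.
function matchesFilters(
  metadata: Record<string, any>,
  filters: Record<string, any>,
): boolean {
  return Object.entries(filters).every(([field, cond]) => {
    const value = metadata[field];
    // An object condition holds one or more operators, e.g. { $gte: 2023 }
    if (cond !== null && typeof cond === 'object' && !Array.isArray(cond)) {
      return Object.entries(cond).every(([op, operand]) => {
        switch (op) {
          case '$gte': return value >= operand;
          case '$lte': return value <= operand;
          case '$in': return Array.isArray(operand) && operand.includes(value);
          case '$contains': return Array.isArray(value) && value.includes(operand);
          default: return false;  // Unknown operator: fail closed
        }
      });
    }
    return value === cond;  // Direct value: exact match
  });
}
```

For example, `{ year: { $gte: 2023 }, category: 'machine-learning' }` accepts a paper from 2024 in that category and rejects one from 2020.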
diff --git a/.claude/skills/agentdb-advanced/references/multi-database.md b/.claude/skills/agentdb-advanced/references/multi-database.md
new file mode 100644 (file)
index 0000000..754c232
--- /dev/null
@@ -0,0 +1,52 @@
+# Multi-Database Management Reference
+
+## Multiple Databases
+
+```typescript
+// Separate databases for different domains
+const knowledgeDB = await createAgentDBAdapter({
+  dbPath: '.agentdb/knowledge.db',
+});
+
+const conversationDB = await createAgentDBAdapter({
+  dbPath: '.agentdb/conversations.db',
+});
+
+const codeDB = await createAgentDBAdapter({
+  dbPath: '.agentdb/code.db',
+});
+
+// Use appropriate database for each task
+await knowledgeDB.insertPattern({ /* knowledge */ });
+await conversationDB.insertPattern({ /* conversation */ });
+await codeDB.insertPattern({ /* code */ });
+```
+
+## Database Sharding
+
+```typescript
+// Shard by domain for horizontal scaling
+const shards = {
+  'domain-a': await createAgentDBAdapter({ dbPath: '.agentdb/shard-a.db' }),
+  'domain-b': await createAgentDBAdapter({ dbPath: '.agentdb/shard-b.db' }),
+  'domain-c': await createAgentDBAdapter({ dbPath: '.agentdb/shard-c.db' }),
+};
+
+// Route queries to appropriate shard
+function getDBForDomain(domain: string) {
+  // Shard keys are two segments ('domain-a'), so keep the first two parts;
+  // splitting on the first '-' alone would yield 'domain' and never match a shard
+  const shardKey = domain.split('-').slice(0, 2).join('-');
+  return shards[shardKey] || shards['domain-a'];  // Fall back to a default shard
+}
+
+// Insert to correct shard
+const db = getDBForDomain('domain-a-task');
+await db.insertPattern({ /* ... */ });
+```
+
+## When to Use Multiple Databases
+
+| Strategy | Use Case |
+|----------|----------|
+| Separate DBs | Distinct domains with no cross-queries |
+| Sharding | High write throughput, horizontal scaling |
+| Single DB + domains | Moderate size, cross-domain queries needed |
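
*Editor's note:* prefix routing as shown above works when shard keys are known up front. For even distribution over N shards, a hash-based router is a common alternative; the sketch below is hypothetical and not an AgentDB feature:

```typescript
// Hypothetical hash router: maps any domain string to one of N shards.
function shardIndex(domain: string, shardCount: number): number {
  let hash = 0;
  for (const ch of domain) {
    // Simple 31-multiplier rolling hash, kept in unsigned 32-bit range
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % shardCount;
}
```

The same domain always routes to the same shard, so reads and writes for one domain stay co-located.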
diff --git a/.claude/skills/agentdb-advanced/references/production-patterns.md b/.claude/skills/agentdb-advanced/references/production-patterns.md
new file mode 100644 (file)
index 0000000..d3dbdf3
--- /dev/null
@@ -0,0 +1,86 @@
+# Production Patterns Reference
+
+## Connection Pooling
+
+```typescript
+// Singleton pattern for shared adapter
+class AgentDBPool {
+  private static instance: AgentDBAdapter;
+
+  static async getInstance() {
+    if (!this.instance) {
+      this.instance = await createAgentDBAdapter({
+        dbPath: '.agentdb/production.db',
+        quantizationType: 'scalar',
+        cacheSize: 2000,
+      });
+    }
+    return this.instance;
+  }
+}
+
+// Use in application
+const db = await AgentDBPool.getInstance();
+const results = await db.retrieveWithReasoning(queryEmbedding, { k: 10 });
+```
+
+## Error Handling
+
+```typescript
+async function safeRetrieve(queryEmbedding: number[], options: any, attempt = 0) {
+  try {
+    return await adapter.retrieveWithReasoning(queryEmbedding, options);
+  } catch (error) {
+    if (error.code === 'DIMENSION_MISMATCH') {
+      // Retrying cannot fix a wrong embedding size
+      console.error('Query embedding dimension mismatch');
+      throw error;
+    }
+    if (error.code === 'DATABASE_LOCKED' && attempt < 3) {
+      // Retry with exponential backoff: 100ms, 200ms, 400ms, then give up
+      await new Promise(resolve => setTimeout(resolve, 100 * 2 ** attempt));
+      return safeRetrieve(queryEmbedding, options, attempt + 1);
+    }
+    throw error;
+  }
+}
+```
+
+## Common Error Codes
+
+| Code | Cause | Resolution |
+|------|-------|------------|
+| `DIMENSION_MISMATCH` | Query embedding size differs from stored | Verify embedding model consistency |
+| `DATABASE_LOCKED` | Concurrent write contention | Retry with backoff; use connection pool |
+| `OUT_OF_MEMORY` | Too many vectors loaded | Enable quantization; reduce cache size |
+
+## Monitoring and Logging
+
+```typescript
+// Performance monitoring
+const startTime = Date.now();
+const result = await adapter.retrieveWithReasoning(queryEmbedding, { k: 10 });
+const latency = Date.now() - startTime;
+
+if (latency > 100) {
+  console.warn('Slow query detected:', latency, 'ms');
+}
+
+// Log statistics
+const stats = await adapter.getStats();
+console.log('Database Stats:', {
+  totalPatterns: stats.totalPatterns,
+  dbSize: stats.dbSize,
+  cacheHitRate: stats.cacheHitRate,
+  avgSearchLatency: stats.avgSearchLatency,
+});
+```
+
+## Memory Consolidation
+
+```typescript
+// Disable automatic optimization when needed
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  optimizeMemory: false,  // Disable auto-consolidation
+  k: 10,
+});
+```
diff --git a/.claude/skills/agentdb-advanced/references/quic-sync.md b/.claude/skills/agentdb-advanced/references/quic-sync.md
new file mode 100644 (file)
index 0000000..9f7006d
--- /dev/null
@@ -0,0 +1,86 @@
+# QUIC Synchronization Reference
+
+## What is QUIC Sync?
+
+QUIC (Quick UDP Internet Connections) enables sub-millisecond latency synchronization between AgentDB instances across network boundaries with automatic retry, multiplexing, and encryption.
+
+**Benefits**:
+- <1ms latency between nodes
+- Multiplexed streams (multiple operations simultaneously)
+- Built-in encryption (TLS 1.3)
+- Automatic retry and recovery
+- Event-based broadcasting
+
+## Enable QUIC Sync
+
+```typescript
+import { createAgentDBAdapter } from 'agentic-flow/reasoningbank';
+
+// Initialize with QUIC synchronization
+const adapter = await createAgentDBAdapter({
+  dbPath: '.agentdb/distributed.db',
+  enableQUICSync: true,
+  syncPort: 4433,
+  syncPeers: [
+    '192.168.1.10:4433',
+    '192.168.1.11:4433',
+    '192.168.1.12:4433',
+  ],
+});
+
+// Patterns automatically sync across all peers
+await adapter.insertPattern({
+  // ... pattern data
+});
+
+// Available on all peers within ~1ms
+```
+
+## Configuration Options
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  enableQUICSync: true,
+  syncPort: 4433,              // QUIC server port
+  syncPeers: ['host1:4433'],   // Peer addresses
+  syncInterval: 1000,          // Sync interval (ms)
+  syncBatchSize: 100,          // Patterns per batch
+  maxRetries: 3,               // Retry failed syncs
+  compression: true,           // Enable compression
+});
+```
+
+## Multi-Node Deployment
+
+```bash
+# Node 1 (192.168.1.10)
+AGENTDB_QUIC_SYNC=true \
+AGENTDB_QUIC_PORT=4433 \
+AGENTDB_QUIC_PEERS=192.168.1.11:4433,192.168.1.12:4433 \
+node server.js
+
+# Node 2 (192.168.1.11)
+AGENTDB_QUIC_SYNC=true \
+AGENTDB_QUIC_PORT=4433 \
+AGENTDB_QUIC_PEERS=192.168.1.10:4433,192.168.1.12:4433 \
+node server.js
+
+# Node 3 (192.168.1.12)
+AGENTDB_QUIC_SYNC=true \
+AGENTDB_QUIC_PORT=4433 \
+AGENTDB_QUIC_PEERS=192.168.1.10:4433,192.168.1.11:4433 \
+node server.js
+```
+
+## Troubleshooting
+
+```bash
+# Check firewall allows UDP port 4433
+sudo ufw allow 4433/udp
+
+# Verify peers are reachable
+ping host1
+
+# Check QUIC logs
+DEBUG=agentdb:quic node server.js
+```
diff --git a/.claude/skills/agentdb-learning/references/algorithms.md b/.claude/skills/agentdb-learning/references/algorithms.md
new file mode 100644 (file)
index 0000000..15f1992
--- /dev/null
@@ -0,0 +1,203 @@
+# Learning Algorithms Reference (9 Total)
+
+Complete catalog of reinforcement learning algorithms available via AgentDB plugins.
+
+---
+
+## 1. Decision Transformer (Recommended)
+
+**Type**: Offline Reinforcement Learning
+**Best For**: Learning from logged experiences, imitation learning
+**Strengths**: No online interaction needed, stable training
+
+```bash
+npx agentdb@latest create-plugin -t decision-transformer -n dt-agent
+```
+
+**Use Cases**:
+- Learn from historical data
+- Imitation learning from expert demonstrations
+- Safe learning without environment interaction
+- Sequence modeling tasks
+
+**Configuration**:
+```json
+{
+  "algorithm": "decision-transformer",
+  "model_size": "base",
+  "context_length": 20,
+  "embed_dim": 128,
+  "n_heads": 8,
+  "n_layers": 6
+}
+```
+
+---
+
+## 2. Q-Learning
+
+**Type**: Value-Based RL (Off-Policy)
+**Best For**: Discrete action spaces, sample efficiency
+**Strengths**: Proven, simple, works well for small/medium problems
+
+```bash
+npx agentdb@latest create-plugin -t q-learning -n q-agent
+```
+
+**Use Cases**:
+- Grid worlds, board games
+- Navigation tasks
+- Resource allocation
+- Discrete decision-making
+
+**Configuration**:
+```json
+{
+  "algorithm": "q-learning",
+  "learning_rate": 0.001,
+  "gamma": 0.99,
+  "epsilon": 0.1,
+  "epsilon_decay": 0.995
+}
+```
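The configuration above can be grounded with the update rule Q-Learning actually performs. This is a conceptual tabular sketch — `qUpdate` and `QTable` are hypothetical names, not part of the AgentDB plugin API:

```typescript
// Tabular Q-learning update (off-policy):
// Q(s,a) ← Q(s,a) + lr * [r + gamma * max_a' Q(s',a') − Q(s,a)]
type QTable = Map<string, number[]>;

function qUpdate(
  q: QTable,
  state: string,
  action: number,
  reward: number,
  nextState: string,
  nActions: number,
  lr = 0.001,    // learning_rate from the config above
  gamma = 0.99,  // discount factor
): void {
  const row = q.get(state) ?? new Array(nActions).fill(0);
  const nextRow = q.get(nextState) ?? new Array(nActions).fill(0);
  const target = reward + gamma * Math.max(...nextRow);
  row[action] += lr * (target - row[action]);
  q.set(state, row);
}
```

The `max` over next-state actions is what makes Q-Learning off-policy; SARSA (below) instead uses the action the policy actually took.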
+
+---
+
+## 3. SARSA
+
+**Type**: Value-Based RL (On-Policy)
+**Best For**: Safe exploration, risk-sensitive tasks
+**Strengths**: More conservative than Q-Learning, better for safety
+
+```bash
+npx agentdb@latest create-plugin -t sarsa -n sarsa-agent
+```
+
+**Use Cases**:
+- Safety-critical applications
+- Risk-sensitive decision-making
+- Online learning with exploration
+
+**Configuration**:
+```json
+{
+  "algorithm": "sarsa",
+  "learning_rate": 0.001,
+  "gamma": 0.99,
+  "epsilon": 0.1
+}
+```
+
+---
+
+## 4. Actor-Critic
+
+**Type**: Policy Gradient with Value Baseline
+**Best For**: Continuous actions, variance reduction
+**Strengths**: Stable, works for continuous/discrete actions
+
+```bash
+npx agentdb@latest create-plugin -t actor-critic -n ac-agent
+```
+
+**Use Cases**:
+- Continuous control (robotics, simulations)
+- Complex action spaces
+- Multi-agent coordination
+
+**Configuration**:
+```json
+{
+  "algorithm": "actor-critic",
+  "actor_lr": 0.001,
+  "critic_lr": 0.002,
+  "gamma": 0.99,
+  "entropy_coef": 0.01
+}
+```
+
+---
+
+## 5. Active Learning
+
+**Type**: Query-Based Learning
+**Best For**: Label-efficient learning, human-in-the-loop
+**Strengths**: Minimizes labeling cost, focuses on uncertain samples
+
+**Use Cases**:
+- Human feedback incorporation
+- Label-efficient training
+- Uncertainty sampling
+- Annotation cost reduction
+
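Uncertainty sampling — the core of the strategy described above — can be sketched in a few lines. This is a conceptual illustration, not part of the AgentDB API; `selectUncertain` is a hypothetical helper:

```typescript
// Pick the unlabeled samples whose predicted probability is closest to 0.5,
// i.e. the ones the model is least sure about, to label next.
function selectUncertain(
  probs: number[],  // model confidence per unlabeled sample, in [0, 1]
  budget: number,   // how many labels we can afford
): number[] {
  return probs
    .map((p, i) => ({ i, uncertainty: 1 - Math.abs(p - 0.5) * 2 }))
    .sort((a, b) => b.uncertainty - a.uncertainty)
    .slice(0, budget)
    .map(x => x.i);
}
```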
+---
+
+## 6. Adversarial Training
+
+**Type**: Robustness Enhancement
+**Best For**: Safety, robustness to perturbations
+**Strengths**: Improves model robustness, adversarial defense
+
+**Use Cases**:
+- Security applications
+- Robust decision-making
+- Adversarial defense
+- Safety testing
+
+---
+
+## 7. Curriculum Learning
+
+**Type**: Progressive Difficulty Training
+**Best For**: Complex tasks, faster convergence
+**Strengths**: Stable learning, faster convergence on hard tasks
+
+**Use Cases**:
+- Complex multi-stage tasks
+- Hard exploration problems
+- Skill composition
+- Transfer learning
+
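A curriculum schedule is essentially a gate on task difficulty. A minimal sketch (conceptual; `nextDifficulty` is a hypothetical helper, not an AgentDB API):

```typescript
// Raise task difficulty only once the agent's recent success rate
// clears a threshold; otherwise keep training at the current level.
function nextDifficulty(
  current: number,
  recentSuccessRate: number,
  maxDifficulty = 10,
  threshold = 0.8,
): number {
  return recentSuccessRate >= threshold
    ? Math.min(current + 1, maxDifficulty)
    : current;
}
```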
+---
+
+## 8. Federated Learning
+
+**Type**: Distributed Learning
+**Best For**: Privacy, distributed data
+**Strengths**: Privacy-preserving, scalable
+
+**Use Cases**:
+- Multi-agent systems
+- Privacy-sensitive data
+- Distributed training
+- Collaborative learning
+
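The aggregation step at the heart of federated learning can be sketched as weighted averaging of client models (FedAvg). Conceptual only — `fedAvg` is not AgentDB's API:

```typescript
// FedAvg: average client weight vectors, weighted by each client's
// sample count, so larger clients contribute proportionally more.
function fedAvg(clients: { weights: number[]; samples: number }[]): number[] {
  const total = clients.reduce((s, c) => s + c.samples, 0);
  const dim = clients[0].weights.length;
  const avg = new Array(dim).fill(0);
  for (const c of clients) {
    for (let j = 0; j < dim; j++) avg[j] += (c.weights[j] * c.samples) / total;
  }
  return avg;
}
```

Privacy comes from exchanging only these weight vectors, never the raw per-client experiences.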
+---
+
+## 9. Multi-Task Learning
+
+**Type**: Transfer Learning
+**Best For**: Related tasks, knowledge sharing
+**Strengths**: Faster learning on new tasks, better generalization
+
+**Use Cases**:
+- Task families
+- Transfer learning
+- Domain adaptation
+- Meta-learning
+
+---
+
+## Algorithm Selection Guide
+
+| Algorithm | Action Space | Data Source | Safety | Complexity |
+|-----------|-------------|-------------|--------|------------|
+| Decision Transformer | Any | Offline logs | High | Medium |
+| Q-Learning | Discrete | Online/Offline | Medium | Low |
+| SARSA | Discrete | Online | High | Low |
+| Actor-Critic | Continuous | Online | Medium | Medium |
+| Active Learning | N/A | Human-in-loop | High | Low |
+| Adversarial | Any | Adversarial | High | High |
+| Curriculum | Any | Progressive | Medium | Medium |
+| Federated | Any | Distributed | High | High |
+| Multi-Task | Any | Multi-domain | Medium | Medium |
diff --git a/.claude/skills/agentdb-learning/references/api-quickstart.md b/.claude/skills/agentdb-learning/references/api-quickstart.md
new file mode 100644 (file)
index 0000000..d420324
--- /dev/null
@@ -0,0 +1,84 @@
+# API Quick Start Reference
+
+Full TypeScript examples for initializing the adapter, storing training experiences, and running a training cycle.
+
+---
+
+## Initialize Adapter
+
+```typescript
+import { createAgentDBAdapter } from 'agentic-flow/reasoningbank';
+
+// Initialize with learning enabled
+const adapter = await createAgentDBAdapter({
+  dbPath: '.agentdb/learning.db',
+  enableLearning: true,       // Enable learning plugins
+  enableReasoning: true,
+  cacheSize: 1000,
+});
+```
+
+---
+
+## Store Training Experience
+
+```typescript
+await adapter.insertPattern({
+  id: '',
+  type: 'experience',
+  domain: 'game-playing',
+  pattern_data: JSON.stringify({
+    embedding: await computeEmbedding('state-action-reward'),
+    pattern: {
+      state: [0.1, 0.2, 0.3],
+      action: 2,
+      reward: 1.0,
+      next_state: [0.15, 0.25, 0.35],
+      done: false
+    }
+  }),
+  confidence: 0.9,
+  usage_count: 1,
+  success_count: 1,
+  created_at: Date.now(),
+  last_used: Date.now(),
+});
+```
+
+---
+
+## Train Model
+
+```typescript
+const metrics = await adapter.train({
+  epochs: 50,
+  batchSize: 32,
+});
+
+console.log('Training Loss:', metrics.loss);
+console.log('Duration:', metrics.duration, 'ms');
+```
+
+---
+
+## Integration with Reasoning Agents
+
+Combine learning with reasoning for better performance:
+
+```typescript
+// Train learning model
+await adapter.train({ epochs: 50, batchSize: 32 });
+
+// Use reasoning agents for inference
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  domain: 'decision-making',
+  k: 10,
+  useMMR: true,              // Diverse experiences
+  synthesizeContext: true,    // Rich context
+  optimizeMemory: true,       // Consolidate patterns
+});
+
+// Make decision based on learned experiences + reasoning
+const decision = result.context.suggestedAction;
+const confidence = result.memories[0].similarity;
+```
diff --git a/.claude/skills/agentdb-learning/references/performance.md b/.claude/skills/agentdb-learning/references/performance.md
new file mode 100644 (file)
index 0000000..5e53bdf
--- /dev/null
@@ -0,0 +1,50 @@
+# Performance Optimization Reference
+
+Techniques for faster training, efficient batch processing, and incremental learning.
+
+---
+
+## Batch Training
+
+```typescript
+// Collect batch of experiences
+const experiences = collectBatch(1000);
+
+// Batch insert (500x faster)
+for (const exp of experiences) {
+  await adapter.insertPattern({ /* ... */ });
+}
+
+// Train on batch
+await adapter.train({
+  epochs: 10,
+  batchSize: 128,  // Larger batch for efficiency
+});
+```
+
+---
+
+## Incremental Learning
+
+```typescript
+// Train incrementally as new data arrives
+setInterval(async () => {
+  const newExperiences = getNewExperiences();
+
+  if (newExperiences.length > 100) {
+    await adapter.train({
+      epochs: 5,
+      batchSize: 32,
+    });
+  }
+}, 60000);  // Every minute
+```
+
+---
+
+## Tips
+
+- Use `batchSize: 128` or higher for large datasets to maximize throughput.
+- Enable binary quantization for a 32x memory reduction and markedly faster inference on large models.
+- Use `optimizeMemory: true` in retrieval calls to consolidate patterns and reduce redundancy.
+- WASM-accelerated inference provides 10-100x speed improvement over pure JS.
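The memory side of these tips is easy to quantify. A rough per-vector size estimate (the figures mirror the ratios quoted in this skill; actual on-disk sizes vary with metadata and indexing):

```typescript
// Bytes per vector under each quantization type, for a given dimensionality.
function vectorBytes(dims: number, type: 'none' | 'scalar' | 'binary'): number {
  switch (type) {
    case 'none':   return dims * 4;            // float32: 4 bytes/dim
    case 'scalar': return dims;                // uint8: 1 byte/dim (4x smaller)
    case 'binary': return Math.ceil(dims / 8); // 1 bit/dim (32x smaller)
  }
}
```

For a 768-dim embedding this gives 3072, 768, and 96 bytes respectively — which is where the "1M vectors: 3GB → 96MB" figure comes from.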
diff --git a/.claude/skills/agentdb-learning/references/training-workflow.md b/.claude/skills/agentdb-learning/references/training-workflow.md
new file mode 100644 (file)
index 0000000..b4ad7b6
--- /dev/null
@@ -0,0 +1,141 @@
+# Training Workflow Reference
+
+Step-by-step guide for collecting experiences, training models, evaluating performance, and applying advanced training techniques.
+
+---
+
+## 1. Collect Experiences
+
+Store experiences during agent execution:
+
+```typescript
+// Store experiences during agent execution
+for (let i = 0; i < numEpisodes; i++) {
+  const episode = runEpisode();
+
+  for (const step of episode.steps) {
+    await adapter.insertPattern({
+      id: '',
+      type: 'experience',
+      domain: 'task-domain',
+      pattern_data: JSON.stringify({
+        embedding: await computeEmbedding(JSON.stringify(step)),
+        pattern: {
+          state: step.state,
+          action: step.action,
+          reward: step.reward,
+          next_state: step.next_state,
+          done: step.done
+        }
+      }),
+      confidence: step.reward > 0 ? 0.9 : 0.5,
+      usage_count: 1,
+      success_count: step.reward > 0 ? 1 : 0,
+      created_at: Date.now(),
+      last_used: Date.now(),
+    });
+  }
+}
+```
+
+---
+
+## 2. Train Model
+
+```typescript
+// Train on collected experiences
+const trainingMetrics = await adapter.train({
+  epochs: 100,
+  batchSize: 64,
+  learningRate: 0.001,
+  validationSplit: 0.2,
+});
+
+console.log('Training Metrics:', trainingMetrics);
+// {
+//   loss: 0.023,
+//   valLoss: 0.028,
+//   duration: 1523,
+//   epochs: 100
+// }
+```
+
+---
+
+## 3. Evaluate Performance
+
+```typescript
+// Retrieve similar successful experiences
+const testQuery = await computeEmbedding(JSON.stringify(testState));
+const result = await adapter.retrieveWithReasoning(testQuery, {
+  domain: 'task-domain',
+  k: 10,
+  synthesizeContext: true,
+});
+
+// Evaluate action quality
+const suggestedAction = result.memories[0].pattern.action;
+const confidence = result.memories[0].similarity;
+
+console.log('Suggested Action:', suggestedAction);
+console.log('Confidence:', confidence);
+```
+
+---
+
+## Advanced Training Techniques
+
+### Experience Replay
+
+```typescript
+// Store experiences in buffer
+const replayBuffer = [];
+
+// Sample random batch for training
+const batch = sampleRandomBatch(replayBuffer, 32);
+
+// Train on batch
+await adapter.train({
+  data: batch,
+  epochs: 1,
+  batchSize: 32,
+});
+```
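The snippet above assumes a `sampleRandomBatch` helper exists; one possible implementation (hypothetical, not provided by AgentDB) is uniform sampling without replacement:

```typescript
// Draw a uniform random batch from the replay buffer, without replacement,
// so the same experience is not trained on twice in one batch.
function sampleRandomBatch<T>(buffer: T[], batchSize: number): T[] {
  const pool = [...buffer];  // copy so the buffer itself is untouched
  const batch: T[] = [];
  while (batch.length < Math.min(batchSize, pool.length)) {
    const i = Math.floor(Math.random() * pool.length);
    batch.push(pool.splice(i, 1)[0]);
  }
  return batch;
}
```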
+
+### Prioritized Experience Replay
+
+```typescript
+// Store experiences with priority (TD error)
+await adapter.insertPattern({
+  // ... standard fields
+  confidence: tdError,  // Use TD error as confidence/priority
+  // ...
+});
+
+// Retrieve high-priority experiences
+const highPriority = await adapter.retrieveWithReasoning(queryEmbedding, {
+  domain: 'task-domain',
+  k: 32,
+  minConfidence: 0.7,  // Only high TD-error experiences
+});
+```
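The TD error used as the priority above is the gap between the bootstrapped target and the current value estimate. A conceptual sketch (hypothetical helper, not AgentDB API):

```typescript
// TD error magnitude: |r + gamma * max_a' Q(s',a') − Q(s,a)|.
// Large values mean the experience was "surprising" and worth replaying.
function tdError(
  reward: number,
  qNext: number[],   // Q-values for the next state
  qCurrent: number,  // Q(s, a) for the action actually taken
  gamma = 0.99,
): number {
  const target = reward + gamma * Math.max(...qNext);
  return Math.abs(target - qCurrent);
}
```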
+
+### Multi-Agent Training
+
+```typescript
+// Collect experiences from multiple agents
+for (const agent of agents) {
+  const experience = await agent.step();
+
+  await adapter.insertPattern({
+    // ... store experience with agent ID
+    domain: `multi-agent/${agent.id}`,
+  });
+}
+
+// Train shared model
+await adapter.train({
+  epochs: 50,
+  batchSize: 64,
+});
+```
diff --git a/.claude/skills/agentdb-learning/references/troubleshooting.md b/.claude/skills/agentdb-learning/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..968e465
--- /dev/null
@@ -0,0 +1,92 @@
+# Troubleshooting Reference
+
+Common issues and solutions when working with AgentDB learning plugins.
+
+---
+
+## Training Not Converging
+
+**Symptom**: Loss does not decrease over epochs.
+
+**Solution**: Reduce the learning rate.
+
+```typescript
+await adapter.train({
+  epochs: 100,
+  batchSize: 32,
+  learningRate: 0.0001,  // Lower learning rate
+});
+```
+
+**Additional checks**:
+- Verify experience data is correctly formatted (state, action, reward, next_state, done).
+- Ensure sufficient training data (at least 100+ experiences).
+- Check reward signal is meaningful and not constant.
+
+---
+
+## Overfitting
+
+**Symptom**: Training loss drops but validation loss increases.
+
+**Solution**: Use validation split and memory optimization.
+
+```typescript
+// Use validation split
+await adapter.train({
+  epochs: 50,
+  batchSize: 64,
+  validationSplit: 0.2,  // 20% validation
+});
+
+// Enable memory optimization
+await adapter.retrieveWithReasoning(queryEmbedding, {
+  optimizeMemory: true,  // Consolidate, reduce overfitting
+});
+```
+
+**Additional checks**:
+- Reduce number of epochs.
+- Increase batch size for smoother gradients.
+- Add more diverse training experiences.
+
+---
+
+## Slow Training
+
+**Symptom**: Training takes longer than expected.
+
+**Solution**: Enable quantization and use larger batch sizes.
+
+```typescript
+// Enable binary quantization for faster inference
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'binary',
+});
+```
+
+**Additional checks**:
+- Increase `batchSize` (64 -> 128 -> 256).
+- Reduce `epochs` and rely on incremental training.
+- Verify WASM acceleration is active (Node.js 18+).
+
+---
+
+## Plugin Creation Fails
+
+**Symptom**: `npx agentdb@latest create-plugin` exits with error.
+
+**Checks**:
+- Verify Node.js 18+ is installed: `node --version`.
+- Verify AgentDB v1.0.7+: `npx agentdb@latest --version`.
+- Check write permissions on target directory.
+- Try `--dry-run` flag first to validate configuration.
+
+---
+
+## No Experiences Retrieved
+
+**Symptom**: `retrieveWithReasoning` returns empty results.
+
+**Checks**:
+- Confirm experiences were inserted with correct `domain` value.
+- Lower `minConfidence` threshold if set.
+- Increase `k` parameter for broader search.
+- Verify embedding computation produces valid vectors.
diff --git a/.claude/skills/agentdb-optimization/assets/benchmarks.md b/.claude/skills/agentdb-optimization/assets/benchmarks.md
new file mode 100644 (file)
index 0000000..7ae50ca
--- /dev/null
@@ -0,0 +1,69 @@
+# Performance Benchmarks
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## Test System
+
+AMD Ryzen 9 5950X, 64GB RAM
+
+---
+
+## Results
+
+| Operation | Vector Count | No Optimization | Optimized | Improvement |
+|-----------|-------------|-----------------|-----------|-------------|
+| Search | 10K | 15ms | 100µs | 150x |
+| Search | 100K | 150ms | 120µs | 1,250x |
+| Search | 1M | 100s | 8ms | 12,500x |
+| Batch Insert (100) | - | 1s | 2ms | 500x |
+| Memory Usage | 1M | 3GB | 96MB | 32x (binary) |
+
+---
+
+## Running Benchmarks
+
+```bash
+# Comprehensive performance benchmarking
+npx agentdb@latest benchmark
+
+# Results show:
+# Pattern Search: 150x faster (100µs vs 15ms)
+# Batch Insert: 500x faster (2ms vs 1s for 100 vectors)
+# Large-scale Query: 12,500x faster (8ms vs 100s at 1M vectors)
+# Memory Efficiency: 4-32x reduction with quantization
+```
+
+---
+
+## Database Statistics
+
+```bash
+# Get comprehensive stats
+npx agentdb@latest stats .agentdb/vectors.db
+
+# Output:
+# Total Patterns: 125,430
+# Database Size: 47.2 MB (with binary quantization)
+# Avg Confidence: 0.87
+# Domains: 15
+# Cache Hit Rate: 84%
+# Index Type: HNSW
+```
+
+---
+
+## Runtime Metrics API
+
+```typescript
+const stats = await adapter.getStats();
+
+console.log('Performance Metrics:');
+console.log('Total Patterns:', stats.totalPatterns);
+console.log('Database Size:', stats.dbSize);
+console.log('Avg Confidence:', stats.avgConfidence);
+console.log('Cache Hit Rate:', stats.cacheHitRate);
+console.log('Search Latency (avg):', stats.avgSearchLatency);
+console.log('Insert Latency (avg):', stats.avgInsertLatency);
+```
diff --git a/.claude/skills/agentdb-optimization/references/caching-and-batch-ops.md b/.claude/skills/agentdb-optimization/references/caching-and-batch-ops.md
new file mode 100644 (file)
index 0000000..d276729
--- /dev/null
@@ -0,0 +1,86 @@
+# Caching & Batch Operations — Detailed Reference
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## In-Memory Pattern Cache
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  cacheSize: 1000,  // Cache 1000 most-used patterns
+});
+
+// First retrieval: ~2ms (database)
+// Subsequent: <1ms (cache hit)
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  k: 10,
+});
+```
+
+### Cache Size Tuning
+
+| Application Size | Recommended cacheSize |
+|-----------------|-----------------------|
+| Small | 100-500 patterns |
+| Medium | 500-2000 patterns |
+| Large | 2000-5000 patterns |
+
+---
+
+## LRU Cache Behavior
+
+The cache evicts least-recently-used patterns first, so the patterns you access most often tend to stay resident.
+
+```typescript
+// Monitor cache performance
+const stats = await adapter.getStats();
+console.log('Cache Hit Rate:', stats.cacheHitRate);
+// Aim for >80% hit rate
+```
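The eviction policy described above can be sketched with a `Map`, whose insertion order doubles as a recency order. Illustrative only — AgentDB's internal cache implementation may differ:

```typescript
// Minimal LRU cache: re-inserting on read marks an entry as most recently
// used; the first key in insertion order is always the eviction candidate.
class LRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);       // move to the back (most recently used)
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least-recently-used entry (front of insertion order)
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```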
+
+---
+
+## Batch Insert (500x Faster)
+
+```typescript
+// ❌ SLOW: Individual inserts
+for (const doc of documents) {
+  await adapter.insertPattern({ /* ... */ });  // 1s for 100 docs
+}
+
+// ✅ FAST: Batch insert
+const patterns = documents.map(doc => ({
+  id: '',
+  type: 'document',
+  domain: 'knowledge',
+  pattern_data: JSON.stringify({
+    embedding: doc.embedding,
+    text: doc.text,
+  }),
+  confidence: 1.0,
+  usage_count: 0,
+  success_count: 0,
+  created_at: Date.now(),
+  last_used: Date.now(),
+}));
+
+// Insert all at once (2ms for 100 docs)
+for (const pattern of patterns) {
+  await adapter.insertPattern(pattern);
+}
+```
+
+---
+
+## Batch Retrieval
+
+```typescript
+// Retrieve multiple queries efficiently
+const queries = [queryEmbedding1, queryEmbedding2, queryEmbedding3];
+
+// Parallel retrieval
+const results = await Promise.all(
+  queries.map(q => adapter.retrieveWithReasoning(q, { k: 5 }))
+);
+```
diff --git a/.claude/skills/agentdb-optimization/references/hnsw-indexing.md b/.claude/skills/agentdb-optimization/references/hnsw-indexing.md
new file mode 100644 (file)
index 0000000..fc60d26
--- /dev/null
@@ -0,0 +1,75 @@
+# HNSW Indexing — Detailed Reference
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## Overview
+
+**Hierarchical Navigable Small World (HNSW)** graphs provide approximate nearest-neighbor search with O(log n) complexity, versus O(n) for a linear scan.
+
+AgentDB automatically builds HNSW indices when a database is created.
+
+---
+
+## Automatic HNSW
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  dbPath: '.agentdb/vectors.db',
+  // HNSW automatically enabled
+});
+
+// Search with HNSW (100µs vs 15ms linear scan)
+const results = await adapter.retrieveWithReasoning(queryEmbedding, {
+  k: 10,
+});
+```
+
+---
+
+## HNSW Parameters
+
+```typescript
+// Advanced HNSW configuration
+const adapter = await createAgentDBAdapter({
+  dbPath: '.agentdb/vectors.db',
+  hnswM: 16,              // Connections per layer (default: 16)
+  hnswEfConstruction: 200, // Build quality (default: 200)
+  hnswEfSearch: 100,       // Search quality (default: 100)
+});
+```
+
+---
+
+## Parameter Tuning Guide
+
+### M (Connections per Layer)
+
+Higher value = better recall, more memory.
+
+| Dataset Size | Recommended M |
+|-------------|--------------|
+| Small (<10K) | 8 |
+| Medium (10K-100K) | 16 |
+| Large (>100K) | 32 |
+
+### efConstruction (Build Quality)
+
+Higher value = better index quality, slower build.
+
+| Priority | Recommended efConstruction |
+|----------|---------------------------|
+| Fast build | 100 |
+| Balanced | 200 (default) |
+| High quality | 400 |
+
+### efSearch (Search Quality)
+
+Higher value = better recall, slower search.
+
+| Priority | Recommended efSearch |
+|----------|---------------------|
+| Fast search | 50 |
+| Balanced | 100 (default) |
+| High recall | 200 |
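The tuning tables above translate directly into a small helper. The thresholds are lifted straight from the tables; treat them as starting points, not hard rules:

```typescript
// Recommended HNSW M (connections per layer) by dataset size,
// per the tuning table above.
function recommendedM(vectorCount: number): number {
  if (vectorCount < 10_000) return 8;    // small
  if (vectorCount <= 100_000) return 16; // medium
  return 32;                             // large
}
```

A hypothetical usage: pass the result as `hnswM` when calling `createAgentDBAdapter`.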
diff --git a/.claude/skills/agentdb-optimization/references/memory-optimization.md b/.claude/skills/agentdb-optimization/references/memory-optimization.md
new file mode 100644 (file)
index 0000000..3af21a5
--- /dev/null
@@ -0,0 +1,60 @@
+# Memory Optimization — Detailed Reference
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## Automatic Consolidation
+
+```typescript
+// Enable automatic pattern consolidation
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  domain: 'documents',
+  optimizeMemory: true,  // Consolidate similar patterns
+  k: 10,
+});
+
+console.log('Optimizations:', result.optimizations);
+// {
+//   consolidated: 15,  // Merged 15 similar patterns
+//   pruned: 3,         // Removed 3 low-quality patterns
+//   improved_quality: 0.12  // 12% quality improvement
+// }
+```
+
+---
+
+## Manual Optimization
+
+```typescript
+// Capture statistics before optimizing
+const before = await adapter.getStats();
+
+// Manually trigger optimization
+await adapter.optimize();
+
+const after = await adapter.getStats();
+console.log('Before:', before.totalPatterns);
+console.log('After:', after.totalPatterns);  // Reduced by ~10-30%
+```
+
+---
+
+## Pruning Strategies
+
+```typescript
+// Prune low-confidence patterns
+await adapter.prune({
+  minConfidence: 0.5,     // Remove confidence < 0.5
+  minUsageCount: 2,       // Remove usage_count < 2
+  maxAge: 30 * 24 * 3600, // Remove >30 days old
+});
+```
+
+---
+
+## Optimization Checklist
+
+1. Enable quantization appropriate for the dataset size.
+2. Set `optimizeMemory: true` on retrieval calls.
+3. Schedule periodic `adapter.optimize()` runs.
+4. Prune stale patterns with `adapter.prune()`.
+5. Monitor `stats.totalPatterns` trend over time.
diff --git a/.claude/skills/agentdb-optimization/references/quantization-strategies.md b/.claude/skills/agentdb-optimization/references/quantization-strategies.md
new file mode 100644 (file)
index 0000000..601c14f
--- /dev/null
@@ -0,0 +1,103 @@
+# Quantization Strategies — Detailed Reference
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## 1. Binary Quantization (32x Reduction)
+
+**Best For**: Large-scale deployments (1M+ vectors), memory-constrained environments
+**Trade-off**: ~2-5% accuracy loss, 32x memory reduction, 10x faster
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'binary',
+  // 768-dim float32 (3072 bytes) → 96 bytes binary
+  // 1M vectors: 3GB → 96MB
+});
+```
+
+**Use Cases**:
+- Mobile/edge deployment
+- Large-scale vector storage (millions of vectors)
+- Real-time search with memory constraints
+
+**Performance**:
+- Memory: 32x smaller
+- Search Speed: 10x faster (bit operations)
+- Accuracy: 95-98% of original
+
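Where the 32x figure comes from can be shown concretely: keep one sign bit per dimension and compare codes with Hamming distance. Illustrative sketch only — AgentDB performs this internally:

```typescript
// Binary quantization: 1 bit per dimension (768 dims → 96 bytes).
function binarize(vec: number[]): Uint8Array {
  const out = new Uint8Array(Math.ceil(vec.length / 8));
  vec.forEach((v, i) => {
    if (v > 0) out[i >> 3] |= 1 << (i & 7);  // set bit for positive dims
  });
  return out;
}

// Hamming distance: count differing bits — cheap bitwise ops replace
// floating-point math, which is where the ~10x search speedup comes from.
function hamming(a: Uint8Array, b: Uint8Array): number {
  let d = 0;
  for (let i = 0; i < a.length; i++) {
    let x = a[i] ^ b[i];
    while (x) { d += x & 1; x >>= 1; }
  }
  return d;
}
```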
+---
+
+## 2. Scalar Quantization (4x Reduction)
+
+**Best For**: Balanced performance/accuracy, moderate datasets
+**Trade-off**: ~1-2% accuracy loss, 4x memory reduction, 3x faster
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'scalar',
+  // 768-dim float32 (3072 bytes) → 768 bytes (uint8)
+  // 1M vectors: 3GB → 768MB
+});
+```
+
+**Use Cases**:
+- Production applications requiring high accuracy
+- Medium-scale deployments (10K-1M vectors)
+- General-purpose optimization
+
+**Performance**:
+- Memory: 4x smaller
+- Search Speed: 3x faster
+- Accuracy: 98-99% of original
+
+---
+
+## 3. Product Quantization (8-16x Reduction)
+
+**Best For**: High-dimensional vectors, balanced compression
+**Trade-off**: ~3-7% accuracy loss, 8-16x memory reduction, 5x faster
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'product',
+  // 768-dim float32 (3072 bytes) → 48-96 bytes
+  // 1M vectors: 3GB → 192MB
+});
+```
+
+**Use Cases**:
+- High-dimensional embeddings (>512 dims)
+- Image/video embeddings
+- Large-scale similarity search
+
+**Performance**:
+- Memory: 8-16x smaller
+- Search Speed: 5x faster
+- Accuracy: 93-97% of original
+
+---
+
+## 4. No Quantization (Full Precision)
+
+**Best For**: Maximum accuracy, small datasets
+**Trade-off**: No accuracy loss, full memory usage
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'none',
+  // Full float32 precision
+});
+```
+
+---
+
+## Quick Comparison
+
+| Type | Memory Reduction | Speed Gain | Accuracy | Best For |
+|------|-----------------|------------|----------|----------|
+| Binary | 32x | 10x | 95-98% | 1M+ vectors, edge |
+| Scalar | 4x | 3x | 98-99% | General production |
+| Product | 8-16x | 5x | 93-97% | High-dim embeddings |
+| None | 1x | 1x | 100% | Small datasets |
diff --git a/.claude/skills/agentdb-optimization/references/recipes-and-scaling.md b/.claude/skills/agentdb-optimization/references/recipes-and-scaling.md
new file mode 100644 (file)
index 0000000..35f60e7
--- /dev/null
@@ -0,0 +1,114 @@
+# Optimization Recipes & Scaling Strategies — Detailed Reference
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## Optimization Recipes
+
+### Recipe 1: Maximum Speed (Sacrifice Accuracy)
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'binary',  // 32x memory reduction
+  cacheSize: 5000,             // Large cache
+  hnswM: 8,                    // Fewer connections = faster
+  hnswEfSearch: 50,            // Low search quality = faster
+});
+
+// Expected: <50µs search, 90-95% accuracy
+```
+
+### Recipe 2: Balanced Performance
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'scalar',  // 4x memory reduction
+  cacheSize: 1000,             // Standard cache
+  hnswM: 16,                   // Balanced connections
+  hnswEfSearch: 100,           // Balanced quality
+});
+
+// Expected: <100µs search, 98-99% accuracy
+```
+
+### Recipe 3: Maximum Accuracy
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'none',    // No quantization
+  cacheSize: 2000,             // Large cache
+  hnswM: 32,                   // Many connections
+  hnswEfSearch: 200,           // High search quality
+});
+
+// Expected: <200µs search, 100% accuracy
+```
+
+### Recipe 4: Memory-Constrained (Mobile/Edge)
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'binary',  // 32x memory reduction
+  cacheSize: 100,              // Small cache
+  hnswM: 8,                    // Minimal connections
+});
+
+// Expected: <100µs search, ~10MB for 100K vectors
+```
+
+---
+
+## Scaling Strategies
+
+### Small Scale (<10K vectors)
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'none',    // Full precision
+  cacheSize: 500,
+  hnswM: 8,
+});
+```
+
+### Medium Scale (10K-100K vectors)
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'scalar',  // 4x reduction
+  cacheSize: 1000,
+  hnswM: 16,
+});
+```
+
+### Large Scale (100K-1M vectors)
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'binary',  // 32x reduction
+  cacheSize: 2000,
+  hnswM: 32,
+});
+```
+
+### Massive Scale (>1M vectors)
+
+```typescript
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'product',  // 8-16x reduction
+  cacheSize: 5000,
+  hnswM: 48,
+  hnswEfConstruction: 400,
+});
+```
+
+---
+
+## Recipe Selection Guide
+
+| Scenario | Recipe | Quantization | Cache | HNSW M |
+|----------|--------|-------------|-------|--------|
+| Real-time search | Maximum Speed | binary | 5000 | 8 |
+| General production | Balanced | scalar | 1000 | 16 |
+| ML/research | Maximum Accuracy | none | 2000 | 32 |
+| Mobile/edge | Memory-Constrained | binary | 100 | 8 |
diff --git a/.claude/skills/agentdb-optimization/references/troubleshooting.md b/.claude/skills/agentdb-optimization/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..e00fa21
--- /dev/null
@@ -0,0 +1,84 @@
+# Troubleshooting — Detailed Reference
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+---
+
+## Issue: High Memory Usage
+
+```bash
+# Check database size
+npx agentdb@latest stats .agentdb/vectors.db
+
+# Enable quantization
+# Use 'binary' for 32x reduction
+```
+
+**Resolution steps**:
+
+1. Check current quantization setting — switch to `binary` or `product`.
+2. Run `adapter.optimize()` to consolidate similar patterns.
+3. Prune stale entries with `adapter.prune()`.
+4. Monitor `stats.dbSize` after changes.
+
+---
+
+## Issue: Slow Search Performance
+
+```typescript
+// Increase cache size
+const adapter = await createAgentDBAdapter({
+  cacheSize: 2000,  // Increase from 1000
+});
+
+// Reduce search quality (faster)
+const result = await adapter.retrieveWithReasoning(queryEmbedding, {
+  k: 5,  // Reduce from 10
+});
+```
+
+**Resolution steps**:
+
+1. Check cache hit rate — if <80%, increase `cacheSize`.
+2. Reduce `k` (number of results) if fewer results are acceptable.
+3. Lower `hnswEfSearch` to trade recall for speed.
+4. Ensure HNSW index is built (check with `npx agentdb@latest stats`).
+
+---
+
+## Issue: Low Accuracy
+
+```typescript
+// Disable or use lighter quantization
+const adapter = await createAgentDBAdapter({
+  quantizationType: 'scalar',  // Instead of 'binary'
+  hnswEfSearch: 200,           // Higher search quality
+});
+```
+
+**Resolution steps**:
+
+1. Switch from `binary` to `scalar` or `none` quantization.
+2. Increase `hnswEfSearch` to 200+.
+3. Increase `hnswM` to 32+ for better graph connectivity.
+4. Verify embedding quality upstream.
+
+---
+
+## Issue: Slow Index Build
+
+**Resolution steps**:
+
+1. Lower `hnswEfConstruction` from 400 to 200 or 100.
+2. Use `product` quantization to reduce vector size before indexing.
+3. Build index in batch mode during off-peak hours.
+
+---
+
+## Issue: Cache Miss Rate Too High
+
+**Resolution steps**:
+
+1. Increase `cacheSize` proportionally to working set.
+2. Analyze query patterns — high cardinality may require larger cache.
+3. Consider domain-level caching for segmented datasets.
diff --git a/.claude/skills/code-review-expert/SKILL.md b/.claude/skills/code-review-expert/SKILL.md
new file mode 100644 (file)
index 0000000..caf2a3e
--- /dev/null
@@ -0,0 +1,109 @@
+---
+name: code-review-expert
+version: 1.0.0
+description: Expert-level code review for PHP/Yii2/TypeScript projects. Performs deep analysis of architecture, security, performance, and maintainability. Use for reviewing PRs, feature branches, or specific files. Do not use for automated CI/CD reviews (use github-code-review instead) or for non-code documents.
+category: quality
+tags: [code-review, quality, security, performance, php, yii2]
+author: ERP24 Team
+---
+
+# Code Review Expert
+
+Perform expert-level code reviews focused on architecture, security, performance, and maintainability for PHP/Yii2 and TypeScript codebases.
+
+## When to Use
+
+- Reviewing a PR or feature branch before merge
+- Analyzing specific files or modules for quality issues
+- Pre-merge security and performance audit
+- Refactoring review — validating improvement quality
+
+## Procedure
+
+### Step 1: Determine Review Scope
+
+1. Identify the files to review. If a PR number is provided, extract the diff with `gh pr diff <number>`.
+2. If reviewing a branch, run `git diff develop...HEAD --name-only` to list changed files.
+3. If reviewing specific files, confirm the file paths exist.
+
+### Step 2: Categorize Changes
+
+Classify each changed file into one of these categories:
+
+| Category | Priority | Focus |
+|----------|----------|-------|
+| Security-critical | P0 | Auth, input validation, SQL, XSS, CSRF |
+| Business logic | P1 | Services, models, controllers |
+| Data access | P1 | ActiveRecord, migrations, queries |
+| API surface | P1 | Endpoints, request/response contracts |
+| UI/Views | P2 | Templates, JavaScript, CSS |
+| Configuration | P2 | Config files, env, deployment |
+| Tests | P3 | Test coverage, test quality |
+| Documentation | P3 | Comments, docblocks, README |
+
+### Step 3: Execute Review Checklist
+
+For each file, apply the relevant checks from the checklist. Read `references/review-checklist.md` for the full checklist organized by category.
+
+### Step 4: Assess Architecture
+
+1. Check single responsibility — each class/method does one thing.
+2. Verify dependency direction — services depend on abstractions, not concrete implementations.
+3. Look for code duplication across changed files.
+4. Validate naming conventions match the project standards.
+
+Read `references/architecture-patterns.md` for ERP24-specific architectural patterns and anti-patterns.
+
+### Step 5: Security Audit
+
+Apply security checks for all P0 and P1 files:
+
+1. SQL injection — all queries use parameterized bindings or ActiveRecord.
+2. XSS — all output is encoded via `Html::encode()` or equivalent.
+3. CSRF — all state-changing endpoints validate CSRF tokens.
+4. Authentication — protected endpoints check `Yii::$app->user->identity`.
+5. Authorization — RBAC rules enforced where applicable.
+6. File upload — validated type, size, and sanitized filename.
+
+Read `references/security-checklist.md` for the complete OWASP-aligned security review guide.
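Checks 1 and 2 can be sketched outside the framework; plain PDO and `htmlspecialchars()` stand in here for Yii's query builder and `Html::encode()` (which wraps the same function), so the sketch runs without Yii2:

```php
<?php
// Sketch of check 1 (parameterized SQL) and check 2 (output encoding).
// In Yii2 the equivalents are createCommand()->bindValue() and Html::encode().

$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$db->exec("INSERT INTO users (name) VALUES ('alice')");

// BAD: string interpolation — "WHERE id = $id" is always a finding.
// GOOD: bound parameter; user input never reaches the SQL text.
$userInput = '1';
$stmt = $db->prepare('SELECT name FROM users WHERE id = :id');
$stmt->execute([':id' => $userInput]);
$name = $stmt->fetchColumn();

// GOOD: encode before output; Html::encode() does this in Yii2.
$comment = '<script>alert(1)</script>';
echo htmlspecialchars($comment, ENT_QUOTES), "\n"; // markup is neutralized
echo $name, "\n";
```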
+
+### Step 6: Performance Review
+
+Check for common performance issues:
+
+1. N+1 queries — relations accessed lazily inside loops (e.g. `$order->customer`) without eager loading via `with()`.
+2. Missing indexes — new WHERE clauses on unindexed columns.
+3. Unbounded queries — `::find()->all()` without `->limit()`.
+4. Memory — large array operations that could use generators.
+5. Caching — repeated expensive operations without cache.
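The N+1 case from check 1 looks like this in Yii2 (the `Order` model and its `customer` relation are illustrative, not from the codebase):

```php
// BAD: one query for the orders, then one extra query per iteration
// when the lazy relation is touched inside the loop.
$orders = Order::find()->limit(100)->all();
foreach ($orders as $order) {
    echo $order->customer->name; // triggers a separate SELECT each time
}

// GOOD: eager-load the relation — two queries total, regardless of row count.
$orders = Order::find()->with('customer')->limit(100)->all();
foreach ($orders as $order) {
    echo $order->customer->name; // already loaded, no extra query
}
```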
+
+### Step 7: Generate Review Report
+
+Produce a structured review report with:
+
+1. **Summary** — overall assessment (Approve / Request Changes / Needs Discussion)
+2. **Critical Issues** — must fix before merge (P0)
+3. **Important Issues** — should fix before merge (P1)
+4. **Suggestions** — optional improvements (P2-P3)
+5. **Positive Findings** — good patterns worth noting
+
+Use the template from `assets/review-report-template.md` for consistent formatting.
+
+### Step 8: Provide Actionable Feedback
+
+For each issue found:
+
+1. Specify exact file and line number.
+2. Explain WHY it is a problem (not just what is wrong).
+3. Provide a concrete fix or code suggestion.
+4. Reference the relevant standard (PSR-12, OWASP, Yii2 docs).
+
+## Error Handling
+
+- If `gh` CLI is not available, fall back to `git diff` for local review.
+- If a file cannot be read, skip it and note it in the report.
+- If the diff is too large (>50 files), prioritize P0 and P1 categories only.
+
+## Output Format
+
+The review report is written to stdout in Markdown format. If the user requests, it can be saved to `docs/reviews/` or posted as a PR comment.
diff --git a/.claude/skills/code-review-expert/assets/review-report-template.md b/.claude/skills/code-review-expert/assets/review-report-template.md
new file mode 100644 (file)
index 0000000..62f4df0
--- /dev/null
@@ -0,0 +1,46 @@
+# Code Review Report
+
+**PR/Branch:** {{branch_or_pr}}
+**Reviewer:** Claude Code Review Expert
+**Date:** {{date}}
+**Verdict:** {{Approve | Request Changes | Needs Discussion}}
+
+## Summary
+
+{{1-3 sentence overview of the changes and overall quality assessment}}
+
+## Statistics
+
+| Metric | Value |
+|--------|-------|
+| Files Reviewed | {{count}} |
+| Critical Issues (P0) | {{count}} |
+| Important Issues (P1) | {{count}} |
+| Suggestions (P2-P3) | {{count}} |
+
+## Critical Issues (Must Fix)
+
+### Issue 1: {{title}}
+- **File:** `{{path}}:{{line}}`
+- **Category:** {{Security | Logic | Data}}
+- **Problem:** {{explanation of WHY this is a problem}}
+- **Fix:**
+```php
+// suggested code fix
+```
+
+## Important Issues (Should Fix)
+
+### Issue 1: {{title}}
+- **File:** `{{path}}:{{line}}`
+- **Category:** {{category}}
+- **Problem:** {{explanation}}
+- **Suggestion:** {{how to fix}}
+
+## Suggestions (Optional)
+
+- {{file:line}} — {{suggestion}}
+
+## Positive Findings
+
+- {{good pattern or practice observed}}
diff --git a/.claude/skills/code-review-expert/references/architecture-patterns.md b/.claude/skills/code-review-expert/references/architecture-patterns.md
new file mode 100644 (file)
index 0000000..842d2fd
--- /dev/null
@@ -0,0 +1,50 @@
+# ERP24 Architecture Patterns
+
+## Approved Patterns
+
+### Service Layer
+- Services contain business logic, controllers are thin.
+- Services are injected via constructor, not instantiated inline.
+- Services return typed results, not raw arrays.
+
+```php
+// GOOD
+class OrderService {
+    public function __construct(
+        private readonly OrderRepository $repo,
+        private readonly NotificationService $notify
+    ) {}
+
+    public function create(CreateOrderDto $dto): Order { ... }
+}
+```
+
+### ActiveRecord Models
+- Models define relations, rules, and attribute labels.
+- Business logic lives in services, NOT in models.
+- Use `tableName()`, `rules()`, `attributeLabels()`, and relation getters (`getXxx()`).
+- Search models go in `records/` directory.
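A minimal model following these rules might look like this (the table and relation names are illustrative, not from the codebase):

```php
// Hypothetical Order model, for illustration only.
class Order extends \yii\db\ActiveRecord
{
    public static function tableName(): string
    {
        return '{{%order}}';
    }

    public function rules(): array
    {
        return [
            [['customer_id', 'status'], 'required'],
            [['customer_id'], 'integer'],
        ];
    }

    public function attributeLabels(): array
    {
        return ['customer_id' => 'Customer', 'status' => 'Status'];
    }

    // Relations are getter methods; business logic stays in services.
    public function getCustomer(): \yii\db\ActiveQuery
    {
        return $this->hasOne(Customer::class, ['id' => 'customer_id']);
    }
}
```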
+
+### Controller Actions
+- Standalone actions in `actions/` directory for reusable logic.
+- Controller methods ≤15 lines (delegate to services).
+- REST controllers extend `yii\rest\ActiveController`.
+
+### API Layering
+- `api1/` — Legacy, read-only where possible.
+- `api2/` — Current production API.
+- `api3/` — New features, stricter typing.
+- Never mix API version logic.
+
+## Anti-Patterns to Flag
+
+| Anti-Pattern | What to Look For |
+|-------------|------------------|
+| Fat Controller | Business logic in controller methods |
+| God Model | Model with >500 lines or mixed concerns |
+| Service Locator | `Yii::$app->get('service')` instead of DI |
+| Raw SQL | Direct SQL without Query Builder |
+| Magic Numbers | Hardcoded status codes without constants |
+| Missing Transactions | Multi-model saves without `beginTransaction()` |
+| Tight Coupling | Direct instantiation of services |
+| Leaky Abstraction | API response exposing internal model structure |
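The "Missing Transactions" row is worth a sketch: multi-step writes must either all commit or all roll back. Plain PDO is used here so the example is self-contained; in Yii2 the same shape uses `Yii::$app->db->beginTransaction()` with `commit()`/`rollBack()`:

```php
<?php
// Transaction pattern: no half-saved state survives a mid-sequence failure.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT NOT NULL)');
$db->exec('CREATE TABLE order_log (order_id INTEGER NOT NULL, note TEXT NOT NULL)');

$db->beginTransaction();
try {
    $db->exec("INSERT INTO orders (status) VALUES ('new')");
    $orderId = (int) $db->lastInsertId();
    // Second write fails on purpose (NOT NULL violation) to show the rollback.
    $db->prepare('INSERT INTO order_log (order_id, note) VALUES (?, ?)')
       ->execute([$orderId, null]);
    $db->commit();
} catch (Throwable $e) {
    $db->rollBack(); // neither row survives — no half-saved order
}

echo $db->query('SELECT COUNT(*) FROM orders')->fetchColumn(), "\n"; // 0
```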
diff --git a/.claude/skills/code-review-expert/references/review-checklist.md b/.claude/skills/code-review-expert/references/review-checklist.md
new file mode 100644 (file)
index 0000000..c0e00f6
--- /dev/null
@@ -0,0 +1,57 @@
+# Review Checklist
+
+## Security (P0)
+
+- [ ] No hardcoded secrets (passwords, API keys, tokens)
+- [ ] All SQL uses parameterized queries or ActiveRecord Query Builder
+- [ ] All user output escaped with `Html::encode()` or equivalent
+- [ ] CSRF tokens on all state-changing forms
+- [ ] File uploads validate type, size, generate random filenames
+- [ ] Authentication checked on protected endpoints
+- [ ] RBAC authorization enforced where applicable
+- [ ] No `eval()`, `exec()`, or dynamic code execution with user input
+- [ ] Sensitive data not logged or exposed in error messages
+
+## Business Logic (P1)
+
+- [ ] Single Responsibility — each method does one thing
+- [ ] Methods ≤30 lines, ≤4 parameters
+- [ ] Early return pattern used (no deep nesting >3 levels)
+- [ ] Edge cases handled (null, empty, boundary values)
+- [ ] Error states handled explicitly, not silently swallowed
+- [ ] Transaction boundaries correct for multi-step operations
+- [ ] Validation rules complete and match business requirements
+
+## Data Access (P1)
+
+- [ ] No N+1 queries (use `with()` for eager loading)
+- [ ] Queries have appropriate `LIMIT` clauses
+- [ ] New columns have proper indexes for WHERE/JOIN clauses
+- [ ] Migrations are reversible (`safeUp`/`safeDown`)
+- [ ] No raw SQL without parameterized bindings
+- [ ] Large datasets use batch processing or generators
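The last item can be sketched with a plain generator (in Yii2, `Query::batch()`/`each()` provide the same effect for ActiveRecord result sets); the chunking function below is invented for illustration:

```php
<?php
// Stream rows a chunk at a time instead of materializing the full result set.
function rowsInChunks(array $ids, int $chunkSize): Generator
{
    foreach (array_chunk($ids, $chunkSize) as $chunk) {
        // In real code this would be one LIMIT-ed query per chunk.
        foreach ($chunk as $id) {
            yield ['id' => $id];
        }
    }
}

$count = 0;
foreach (rowsInChunks(range(1, 10000), 500) as $row) {
    $count++; // constant memory: only one chunk is held at a time
}
echo $count, "\n"; // 10000
```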
+
+## API Surface (P1)
+
+- [ ] Request validation complete (required fields, types, ranges)
+- [ ] Response format consistent with existing API conventions
+- [ ] Error responses include meaningful error codes and messages
+- [ ] Rate limiting considered for public endpoints
+- [ ] API versioning respected (api1/api2/api3 separation)
+
+## Code Quality (P2)
+
+- [ ] Naming follows project conventions (PSR-12, camelCase methods, PascalCase classes)
+- [ ] No dead code, commented-out code, or TODO without ticket reference
+- [ ] No code duplication (DRY principle)
+- [ ] Type hints on all parameters and return types
+- [ ] `declare(strict_types=1)` present in new PHP files
+- [ ] Proper use of Yii2 components (behaviors, events, validators)
+
+## Testing (P3)
+
+- [ ] New logic covered by unit tests
+- [ ] Tests follow AAA pattern (Arrange-Act-Assert)
+- [ ] Mocks used for external dependencies
+- [ ] Edge cases and error paths tested
+- [ ] Test names describe scenario and expected result
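The AAA and naming items can be illustrated together; the `DiscountCalculator` class is invented for the example, not part of the codebase:

```php
<?php
// Hypothetical class under test, for illustration only.
final class DiscountCalculator
{
    public function apply(float $price, float $percent): float
    {
        if ($percent < 0 || $percent > 100) {
            throw new InvalidArgumentException('percent must be 0-100');
        }
        return round($price * (1 - $percent / 100), 2);
    }
}

// Test name describes scenario and expected result:
// test_apply_returns_discounted_price_for_valid_percent
$calc = new DiscountCalculator();        // Arrange
$result = $calc->apply(200.0, 25.0);     // Act
assert($result === 150.0);               // Assert

// Edge case: invalid percent must throw, not fail silently.
try {
    $calc->apply(100.0, 150.0);
    assert(false); // unreachable if validation works
} catch (InvalidArgumentException $e) {
    // expected path
}
echo "ok\n";
```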
diff --git a/.claude/skills/code-review-expert/references/security-checklist.md b/.claude/skills/code-review-expert/references/security-checklist.md
new file mode 100644 (file)
index 0000000..fc75fb3
--- /dev/null
@@ -0,0 +1,52 @@
+# Security Review Checklist (OWASP-Aligned)
+
+## A01: Broken Access Control
+- Verify authentication on all non-public endpoints
+- Check RBAC rules match business requirements
+- Ensure no IDOR (Insecure Direct Object References) — user can only access own data
+- Validate that admin-only actions check `Yii::$app->user->can('admin')`
+
+## A02: Cryptographic Failures
+- Passwords hashed with `Yii::$app->security->generatePasswordHash()`
+- No sensitive data in URLs (tokens, passwords)
+- API keys and secrets from environment variables only
+- No MD5/SHA1 for security purposes
+
+## A03: Injection
+- All SQL through ActiveRecord or parameterized queries
+- No string interpolation in SQL: `"WHERE id = $id"` is ALWAYS a bug
+- Command injection: no `exec()`, `system()`, `passthru()` with user input
+- LDAP injection: escape special characters if LDAP is used
+
+## A04: Insecure Design
+- Rate limiting on authentication endpoints
+- Account lockout after failed attempts
+- CAPTCHA on public forms if applicable
+
+## A05: Security Misconfiguration
+- Debug mode disabled in production configs
+- Error details not exposed to users
+- Default credentials changed
+- Unnecessary HTTP methods disabled
+
+## A06: Vulnerable Components
+- Check composer.lock for known CVEs
+- Verify minimum PHP version requirements
+
+## A07: Authentication Failures
+- Session tokens regenerated on login
+- Logout properly invalidates session
+- Remember-me tokens are securely generated
+
+## A08: Data Integrity
+- CSRF protection on all POST/PUT/DELETE
+- Content-Type validation on API inputs
+- Deserialization of untrusted data avoided
+
+## A09: Logging Failures
+- Security events logged (login, failed auth, access denied)
+- No sensitive data in logs (passwords, tokens, PII)
+
+## A10: SSRF
+- URL validation if server fetches external resources
+- Whitelist allowed domains for external requests
diff --git a/.claude/skills/flow-nexus-neural/references/api-reference.md b/.claude/skills/flow-nexus-neural/references/api-reference.md
new file mode 100644 (file)
index 0000000..d45ee32
--- /dev/null
@@ -0,0 +1,232 @@
+# API Reference
+
+Complete MCP tool signatures, parameters, and response schemas.
+
+---
+
+## Model Inference
+
+### `neural_predict` -- Single Model
+
+```javascript
+mcp__flow-nexus__neural_predict({
+  model_id: "model_abc123",
+  input: [
+    [0.5, 0.3, 0.2, 0.1],
+    [0.8, 0.1, 0.05, 0.05],
+    [0.2, 0.6, 0.15, 0.05]
+  ],
+  user_id: "your_user_id"
+})
+```
+
+**Response:**
+```json
+{
+  "predictions": [
+    [0.12, 0.85, 0.03],
+    [0.89, 0.08, 0.03],
+    [0.05, 0.92, 0.03]
+  ],
+  "inference_time_ms": 45,
+  "model_version": "1.0.0"
+}
+```
+
+---
+
+## Template Marketplace
+
+### `neural_list_templates` -- Browse Templates
+
+```javascript
+mcp__flow-nexus__neural_list_templates({
+  category: "classification", // timeseries, regression, nlp, vision, anomaly, generative
+  tier: "free",               // or "paid"
+  search: "sentiment",
+  limit: 20
+})
+```
+
+**Response:**
+```json
+{
+  "templates": [
+    {
+      "id": "sentiment-analysis-v2",
+      "name": "Sentiment Analysis Classifier",
+      "description": "Pre-trained BERT model for sentiment analysis",
+      "category": "nlp",
+      "accuracy": 0.94,
+      "downloads": 1523,
+      "tier": "free"
+    },
+    {
+      "id": "image-classifier-resnet",
+      "name": "ResNet Image Classifier",
+      "description": "ResNet-50 for image classification",
+      "category": "vision",
+      "accuracy": 0.96,
+      "downloads": 2341,
+      "tier": "paid"
+    }
+  ]
+}
+```
+
+### Template Categories
+
+| Category | Description |
+|----------|-------------|
+| `classification` | Binary/multi-class classification |
+| `timeseries` | Time series forecasting |
+| `regression` | Continuous value prediction |
+| `nlp` | Natural language processing |
+| `vision` | Computer vision |
+| `anomaly` | Anomaly detection |
+| `generative` | Content generation |
+
+### `neural_deploy_template` -- Deploy a Template
+
+```javascript
+mcp__flow-nexus__neural_deploy_template({
+  template_id: "sentiment-analysis-v2",
+  custom_config: {
+    training: {
+      epochs: 50,
+      learning_rate: 0.0001
+    }
+  },
+  user_id: "your_user_id"
+})
+```
+
+---
+
+## Model Management
+
+### `neural_list_models` -- List Models
+
+```javascript
+mcp__flow-nexus__neural_list_models({
+  user_id: "your_user_id",
+  include_public: true
+})
+```
+
+**Response:**
+```json
+{
+  "models": [
+    {
+      "model_id": "model_abc123",
+      "name": "Custom Classifier v1",
+      "architecture": "feedforward",
+      "accuracy": 0.92,
+      "created_at": "2025-10-15T14:20:00Z",
+      "status": "trained"
+    },
+    {
+      "model_id": "model_def456",
+      "name": "LSTM Forecaster",
+      "architecture": "lstm",
+      "mse": 0.0045,
+      "created_at": "2025-10-18T09:15:00Z",
+      "status": "training"
+    }
+  ]
+}
+```
+
+### `neural_training_status` -- Check Training Progress
+
+```javascript
+mcp__flow-nexus__neural_training_status({
+  job_id: "job_training_xyz"
+})
+```
+
+**Response:**
+```json
+{
+  "job_id": "job_training_xyz",
+  "status": "training",
+  "progress": 0.67,
+  "current_epoch": 67,
+  "total_epochs": 100,
+  "current_loss": 0.234,
+  "estimated_completion": "2025-10-19T12:45:00Z"
+}
+```
+
+### `neural_performance_benchmark` -- Benchmark Model
+
+```javascript
+mcp__flow-nexus__neural_performance_benchmark({
+  model_id: "model_abc123",
+  benchmark_type: "comprehensive" // inference, throughput, memory, comprehensive
+})
+```
+
+**Response:**
+```json
+{
+  "model_id": "model_abc123",
+  "benchmarks": {
+    "inference_latency_ms": 12.5,
+    "throughput_qps": 8000,
+    "memory_usage_mb": 245,
+    "gpu_utilization": 0.78,
+    "accuracy": 0.92,
+    "f1_score": 0.89
+  },
+  "timestamp": "2025-10-19T11:00:00Z"
+}
+```
+
+### Benchmark Types
+
+| Type | Measures |
+|------|----------|
+| `inference` | Latency per prediction |
+| `throughput` | Queries per second |
+| `memory` | RAM and VRAM consumption |
+| `comprehensive` | All of the above + accuracy metrics |
+
+### `neural_validation_workflow` -- Validate Before Deployment
+
+```javascript
+mcp__flow-nexus__neural_validation_workflow({
+  model_id: "model_abc123",
+  user_id: "your_user_id",
+  validation_type: "comprehensive" // performance, accuracy, robustness, comprehensive
+})
+```
+
+---
+
+## Publishing
+
+### `neural_publish_template` -- Publish to Marketplace
+
+```javascript
+mcp__flow-nexus__neural_publish_template({
+  model_id: "model_abc123",
+  name: "High-Accuracy Sentiment Classifier",
+  description: "Fine-tuned BERT model for sentiment analysis with 94% accuracy",
+  category: "nlp",
+  price: 0, // 0 for free, or credits amount
+  user_id: "your_user_id"
+})
+```
+
+### `neural_rate_template` -- Rate a Template
+
+```javascript
+mcp__flow-nexus__neural_rate_template({
+  template_id: "sentiment-analysis-v2",
+  rating: 5,
+  review: "Excellent model! Achieved 95% accuracy on my dataset.",
+  user_id: "your_user_id"
+})
+```
diff --git a/.claude/skills/flow-nexus-neural/references/architecture-patterns.md b/.claude/skills/flow-nexus-neural/references/architecture-patterns.md
new file mode 100644 (file)
index 0000000..f321559
--- /dev/null
@@ -0,0 +1,126 @@
+# Architecture Patterns Reference
+
+Detailed layer configurations for each supported neural network architecture.
+
+---
+
+## Feedforward Networks
+
+**Best for:** Classification, regression, simple pattern recognition.
+
+```javascript
+{
+  type: "feedforward",
+  layers: [
+    { type: "dense", units: 256, activation: "relu" },
+    { type: "dropout", rate: 0.3 },
+    { type: "dense", units: 128, activation: "relu" },
+    { type: "dense", units: 10, activation: "softmax" }
+  ]
+}
+```
+
+**Key parameters:**
+- `units` -- number of neurons per layer
+- `activation` -- `relu`, `sigmoid`, `tanh`, `softmax`, `linear`
+- `dropout.rate` -- fraction of inputs to drop (0.0--1.0)
+
+---
+
+## LSTM Networks
+
+**Best for:** Time series, sequences, forecasting.
+
+```javascript
+{
+  type: "lstm",
+  layers: [
+    { type: "lstm", units: 128, return_sequences: true },
+    { type: "lstm", units: 64 },
+    { type: "dense", units: 1 }
+  ]
+}
+```
+
+**Key parameters:**
+- `return_sequences` -- `true` when stacking LSTM layers, `false` for last LSTM
+- Typical stack: 2-3 LSTM layers with decreasing units
+
+---
+
+## Transformers
+
+**Best for:** NLP, attention mechanisms, large-scale text.
+
+```javascript
+{
+  type: "transformer",
+  layers: [
+    { type: "embedding", vocab_size: 10000, embedding_dim: 512 },
+    { type: "transformer_encoder", num_heads: 8, ff_dim: 2048 },
+    { type: "global_average_pooling" },
+    { type: "dense", units: 2, activation: "softmax" }
+  ]
+}
+```
+
+**Key parameters:**
+- `vocab_size` -- vocabulary size for embedding layer
+- `embedding_dim` -- dimensionality of embeddings
+- `num_heads` -- number of attention heads
+- `ff_dim` -- feed-forward dimension inside transformer block
+
+---
+
+## GANs (Generative Adversarial Networks)
+
+**Best for:** Generative tasks, image synthesis.
+
+```javascript
+{
+  type: "gan",
+  generator_layers: [...],
+  discriminator_layers: [...]
+}
+```
+
+The agent defines both generator and discriminator sub-networks separately.
+
+---
+
+## Autoencoders
+
+**Best for:** Dimensionality reduction, anomaly detection.
+
+```javascript
+{
+  type: "autoencoder",
+  encoder_layers: [
+    { type: "dense", units: 128, activation: "relu" },
+    { type: "dense", units: 64, activation: "relu" }
+  ],
+  decoder_layers: [
+    { type: "dense", units: 128, activation: "relu" },
+    { type: "dense", units: input_dim, activation: "sigmoid" }
+  ]
+}
+```
+
+**Key parameters:**
+- Encoder compresses input to a latent representation
+- Decoder reconstructs from latent space
+- `input_dim` must match the original feature count
+
+---
+
+## Architecture Selection Guide
+
+| Task | Recommended Architecture |
+|------|--------------------------|
+| Tabular classification | feedforward |
+| Time-series forecasting | lstm |
+| Text classification / NLP | transformer |
+| Image generation | gan |
+| Anomaly detection | autoencoder |
+| Sequence-to-sequence | lstm or transformer |
+| Large-scale text models | transformer (distributed) |
diff --git a/.claude/skills/flow-nexus-neural/references/distributed-training.md b/.claude/skills/flow-nexus-neural/references/distributed-training.md
new file mode 100644 (file)
index 0000000..6a968eb
--- /dev/null
@@ -0,0 +1,223 @@
+# Distributed Training Reference
+
+Train large models across multiple E2B sandboxes with cluster orchestration.
+
+---
+
+## 1. Initialize Cluster
+
+```javascript
+mcp__flow-nexus__neural_cluster_init({
+  name: "large-model-cluster",
+  architecture: "transformer", // transformer, cnn, rnn, gnn, hybrid
+  topology: "mesh",            // mesh, ring, star, hierarchical
+  consensus: "proof-of-learning", // byzantine, raft, gossip
+  daaEnabled: true,            // Decentralized Autonomous Agents
+  wasmOptimization: true
+})
+```
+
+**Response:**
+```json
+{
+  "cluster_id": "cluster_xyz789",
+  "name": "large-model-cluster",
+  "status": "initializing",
+  "topology": "mesh",
+  "max_nodes": 100,
+  "created_at": "2025-10-19T10:30:00Z"
+}
+```
+
+### Topology Options
+
+| Topology | Description | Best For |
+|----------|-------------|----------|
+| `mesh` | All nodes interconnected | General distributed training |
+| `ring` | Circular communication | Gradient passing |
+| `star` | Central coordinator | Parameter server pattern |
+| `hierarchical` | Tree structure | Multi-level aggregation |
+
+### Consensus Algorithms
+
+| Algorithm | Description |
+|-----------|-------------|
+| `proof-of-learning` | Validate via training contribution |
+| `byzantine` | Byzantine fault tolerance |
+| `raft` | Leader-based consensus |
+| `gossip` | Epidemic-style propagation |
+
+---
+
+## 2. Deploy Worker Nodes
+
+### Parameter Server
+
+```javascript
+mcp__flow-nexus__neural_node_deploy({
+  cluster_id: "cluster_xyz789",
+  node_type: "parameter_server",
+  model: "large",
+  template: "nodejs",
+  capabilities: ["parameter_management", "gradient_aggregation"],
+  autonomy: 0.8
+})
+```
+
+### Worker Node
+
+```javascript
+mcp__flow-nexus__neural_node_deploy({
+  cluster_id: "cluster_xyz789",
+  node_type: "worker",
+  model: "xl",
+  role: "worker",
+  capabilities: ["training", "inference"],
+  layers: [
+    { type: "transformer_encoder", num_heads: 16 },
+    { type: "feed_forward", units: 4096 }
+  ],
+  autonomy: 0.9
+})
+```
+
+### Aggregator
+
+```javascript
+mcp__flow-nexus__neural_node_deploy({
+  cluster_id: "cluster_xyz789",
+  node_type: "aggregator",
+  model: "large",
+  capabilities: ["gradient_aggregation", "model_synchronization"]
+})
+```
+
+### Node Types
+
+| Type | Role |
+|------|------|
+| `parameter_server` | Maintains global model parameters |
+| `worker` | Performs forward/backward passes |
+| `aggregator` | Combines gradients from workers |
+
+---
+
+## 3. Connect Cluster Topology
+
+```javascript
+mcp__flow-nexus__neural_cluster_connect({
+  cluster_id: "cluster_xyz789",
+  topology: "mesh" // Override default if needed
+})
+```
+
+---
+
+## 4. Start Distributed Training
+
+### Standard Training
+
+```javascript
+mcp__flow-nexus__neural_train_distributed({
+  cluster_id: "cluster_xyz789",
+  dataset: "imagenet",
+  epochs: 100,
+  batch_size: 128,
+  learning_rate: 0.001,
+  optimizer: "adam",   // sgd, rmsprop, adagrad
+  federated: true      // Enable federated learning
+})
+```
+
+### Federated Learning
+
+Data stays on local nodes; only gradients are shared.
+
+```javascript
+mcp__flow-nexus__neural_train_distributed({
+  cluster_id: "cluster_xyz789",
+  dataset: "medical_images_distributed",
+  epochs: 200,
+  batch_size: 64,
+  learning_rate: 0.0001,
+  optimizer: "adam",
+  federated: true,
+  aggregation_rounds: 50,
+  min_nodes_per_round: 5
+})
+```
+
+---
+
+## 5. Monitor Cluster Status
+
+```javascript
+mcp__flow-nexus__neural_cluster_status({
+  cluster_id: "cluster_xyz789"
+})
+```
+
+**Response:**
+```json
+{
+  "cluster_id": "cluster_xyz789",
+  "status": "training",
+  "nodes": [
+    {
+      "node_id": "node_001",
+      "type": "parameter_server",
+      "status": "active",
+      "cpu_usage": 0.75,
+      "memory_usage": 0.82
+    },
+    {
+      "node_id": "node_002",
+      "type": "worker",
+      "status": "active",
+      "training_progress": 0.45
+    }
+  ],
+  "training_metrics": {
+    "current_epoch": 45,
+    "total_epochs": 100,
+    "loss": 0.234,
+    "accuracy": 0.891
+  }
+}
+```
+
+---
+
+## 6. Run Distributed Inference
+
+```javascript
+mcp__flow-nexus__neural_predict_distributed({
+  cluster_id: "cluster_xyz789",
+  input_data: JSON.stringify([
+    [0.1, 0.2, 0.3],
+    [0.4, 0.5, 0.6]
+  ]),
+  aggregation: "ensemble" // mean, majority, weighted, ensemble
+})
+```
+
+### Aggregation Strategies
+
+| Strategy | Description |
+|----------|-------------|
+| `mean` | Average predictions across nodes |
+| `majority` | Majority vote (classification) |
+| `weighted` | Weighted by node confidence |
+| `ensemble` | Full ensemble combination |
+
+---
+
+## 7. Terminate Cluster
+
+```javascript
+mcp__flow-nexus__neural_cluster_terminate({
+  cluster_id: "cluster_xyz789"
+})
+```
+
+Always terminate clusters after training completes to release sandbox resources.
diff --git a/.claude/skills/flow-nexus-neural/references/single-node-training.md b/.claude/skills/flow-nexus-neural/references/single-node-training.md
new file mode 100644 (file)
index 0000000..7a32881
--- /dev/null
@@ -0,0 +1,128 @@
+# Single-Node Training Reference
+
+Complete examples for training neural networks on a single E2B sandbox node.
+
+---
+
+## Feedforward Classifier
+
+```javascript
+mcp__flow-nexus__neural_train({
+  config: {
+    architecture: {
+      type: "feedforward",
+      layers: [
+        { type: "dense", units: 256, activation: "relu" },
+        { type: "dropout", rate: 0.3 },
+        { type: "dense", units: 128, activation: "relu" },
+        { type: "dropout", rate: 0.2 },
+        { type: "dense", units: 64, activation: "relu" },
+        { type: "dense", units: 10, activation: "softmax" }
+      ]
+    },
+    training: {
+      epochs: 100,
+      batch_size: 32,
+      learning_rate: 0.001,
+      optimizer: "adam"
+    },
+    divergent: {
+      enabled: true,
+      pattern: "lateral", // quantum, chaotic, associative, evolutionary
+      factor: 0.5
+    }
+  },
+  tier: "small",
+  user_id: "your_user_id"
+})
+```
+
+### Divergent Thinking Patterns
+
+The `divergent` block enables creative exploration during training:
+
+| Pattern | Description |
+|---------|-------------|
+| `lateral` | Sideways exploration of solution space |
+| `quantum` | Probabilistic branching |
+| `chaotic` | Controlled randomness injection |
+| `associative` | Cross-domain pattern linking |
+| `evolutionary` | Mutation-based search |
+
+---
+
+## LSTM for Time Series
+
+```javascript
+mcp__flow-nexus__neural_train({
+  config: {
+    architecture: {
+      type: "lstm",
+      layers: [
+        { type: "lstm", units: 128, return_sequences: true },
+        { type: "dropout", rate: 0.2 },
+        { type: "lstm", units: 64 },
+        { type: "dense", units: 1, activation: "linear" }
+      ]
+    },
+    training: {
+      epochs: 150,
+      batch_size: 64,
+      learning_rate: 0.01,
+      optimizer: "adam"
+    }
+  },
+  tier: "medium"
+})
+```
+
+---
+
+## Transformer Architecture
+
+```javascript
+mcp__flow-nexus__neural_train({
+  config: {
+    architecture: {
+      type: "transformer",
+      layers: [
+        { type: "embedding", vocab_size: 10000, embedding_dim: 512 },
+        { type: "transformer_encoder", num_heads: 8, ff_dim: 2048 },
+        { type: "global_average_pooling" },
+        { type: "dense", units: 128, activation: "relu" },
+        { type: "dense", units: 2, activation: "softmax" }
+      ]
+    },
+    training: {
+      epochs: 50,
+      batch_size: 16,
+      learning_rate: 0.0001,
+      optimizer: "adam"
+    }
+  },
+  tier: "large"
+})
+```
+
+---
+
+## Training Configuration Reference
+
+### Optimizers
+
+| Optimizer | Use Case |
+|-----------|----------|
+| `adam` | General purpose, recommended default |
+| `sgd` | When fine control over momentum is needed |
+| `rmsprop` | Recurrent networks |
+| `adagrad` | Sparse data |
+
+### Training Tiers
+
+| Tier | Resources | Suitable For |
+|------|-----------|-------------|
+| `nano` | Minimal | Quick experiments, prototyping |
+| `mini` | Small | Small datasets, simple models |
+| `small` | Standard | Medium datasets, standard models |
+| `medium` | Elevated | Complex models, large datasets |
+| `large` | Maximum | Large-scale training, transformers |
diff --git a/.claude/skills/flow-nexus-neural/references/troubleshooting.md b/.claude/skills/flow-nexus-neural/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..1a73080
--- /dev/null
@@ -0,0 +1,88 @@
+# Troubleshooting
+
+Common issues and resolution steps.
+
+---
+
+## Training Stalled
+
+**Symptoms:** Training progress stops advancing; loss/accuracy plateaus for many epochs.
+
+**Steps:**
+
+1. Check cluster status:
+```javascript
+const status = await mcp__flow-nexus__neural_cluster_status({
+  cluster_id: "cluster_id"
+})
+```
+
+2. Verify node health -- look for nodes with `status: "error"` or high memory usage.
+
+3. If unrecoverable, terminate and restart:
+```javascript
+await mcp__flow-nexus__neural_cluster_terminate({
+  cluster_id: "cluster_id"
+})
+```
+
+---
+
+## Low Accuracy
+
+**Possible causes and fixes:**
+
+| Cause | Fix |
+|-------|-----|
+| Too few epochs | Increase `epochs` (e.g., 100 to 300) |
+| Learning rate too high/low | Try 0.001, 0.0001, or use scheduler |
+| Insufficient regularization | Add `dropout` layers (rate 0.2--0.5) |
+| Wrong optimizer | Switch between `adam`, `sgd`, `rmsprop` |
+| Small dataset | Enable data augmentation on worker nodes |
+| Wrong architecture | See `references/architecture-patterns.md` for selection guide |
+
+---
+
+## Out of Memory
+
+**Possible causes and fixes:**
+
+| Cause | Fix |
+|-------|-----|
+| Batch size too large | Reduce `batch_size` (e.g., 128 to 32) |
+| Model too large for tier | Use a higher tier (`medium` or `large`) |
+| No gradient accumulation | Enable gradient accumulation in training config |
+| Single-node limitation | Switch to distributed training with a cluster |
+
+---
+
+## Node Connection Failures
+
+**Symptoms:** Cluster shows `status: "degraded"` or nodes fail to join.
+
+**Steps:**
+
+1. Re-run topology connection:
+```javascript
+mcp__flow-nexus__neural_cluster_connect({
+  cluster_id: "cluster_xyz789",
+  topology: "mesh"
+})
+```
+
+2. Replace failed nodes by deploying new ones with the same configuration.
+
+3. Check consensus algorithm compatibility -- `proof-of-learning` requires at least 3 active nodes.
+
+---
+
+## Slow Inference
+
+**Possible causes and fixes:**
+
+| Cause | Fix |
+|-------|-----|
+| Model not optimized | Run `neural_performance_benchmark` to identify bottleneck |
+| High latency tier | Use higher tier sandbox |
+| No WASM optimization | Set `wasmOptimization: true` on cluster init |
+| Single-node bottleneck | Use `neural_predict_distributed` with `ensemble` aggregation |
diff --git a/.claude/skills/flow-nexus-neural/references/use-cases.md b/.claude/skills/flow-nexus-neural/references/use-cases.md
new file mode 100644 (file)
index 0000000..9b0952d
--- /dev/null
@@ -0,0 +1,125 @@
+# Common Use Cases
+
+End-to-end workflows for typical neural network tasks.
+
+---
+
+## Image Classification with CNN
+
+```javascript
+// 1. Initialize cluster for large-scale image training
+const cluster = await mcp__flow-nexus__neural_cluster_init({
+  name: "image-classification-cluster",
+  architecture: "cnn",
+  topology: "hierarchical",
+  wasmOptimization: true
+})
+
+// 2. Deploy worker nodes
+await mcp__flow-nexus__neural_node_deploy({
+  cluster_id: cluster.cluster_id,
+  node_type: "worker",
+  model: "large",
+  capabilities: ["training", "data_augmentation"]
+})
+
+// 3. Start training
+await mcp__flow-nexus__neural_train_distributed({
+  cluster_id: cluster.cluster_id,
+  dataset: "custom_images",
+  epochs: 100,
+  batch_size: 64,
+  learning_rate: 0.001,
+  optimizer: "adam"
+})
+```
+
+---
+
+## NLP Sentiment Analysis
+
+```javascript
+// 1. Deploy pre-built template
+const deployment = await mcp__flow-nexus__neural_deploy_template({
+  template_id: "sentiment-analysis-v2",
+  custom_config: {
+    training: {
+      epochs: 30,
+      batch_size: 16
+    }
+  }
+})
+
+// 2. Run inference
+const result = await mcp__flow-nexus__neural_predict({
+  model_id: deployment.model_id,
+  input: ["This product is amazing!", "Terrible experience."]
+})
+```
+
+---
+
+## Time Series Forecasting
+
+```javascript
+// 1. Train LSTM model
+const training = await mcp__flow-nexus__neural_train({
+  config: {
+    architecture: {
+      type: "lstm",
+      layers: [
+        { type: "lstm", units: 128, return_sequences: true },
+        { type: "dropout", rate: 0.2 },
+        { type: "lstm", units: 64 },
+        { type: "dense", units: 1 }
+      ]
+    },
+    training: {
+      epochs: 150,
+      batch_size: 64,
+      learning_rate: 0.01,
+      optimizer: "adam"
+    }
+  },
+  tier: "medium"
+})
+
+// 2. Monitor progress
+const status = await mcp__flow-nexus__neural_training_status({
+  job_id: training.job_id
+})
+```
+
+---
+
+## Federated Learning for Privacy-Sensitive Data
+
+```javascript
+// 1. Initialize federated cluster
+const cluster = await mcp__flow-nexus__neural_cluster_init({
+  name: "federated-medical-cluster",
+  architecture: "transformer",
+  topology: "mesh",
+  consensus: "proof-of-learning",
+  daaEnabled: true
+})
+
+// 2. Deploy nodes across different locations
+for (let i = 0; i < 5; i++) {
+  await mcp__flow-nexus__neural_node_deploy({
+    cluster_id: cluster.cluster_id,
+    node_type: "worker",
+    model: "large",
+    autonomy: 0.9
+  })
+}
+
+// 3. Train with federated learning (data never leaves nodes)
+await mcp__flow-nexus__neural_train_distributed({
+  cluster_id: cluster.cluster_id,
+  dataset: "medical_records_distributed",
+  epochs: 200,
+  federated: true,
+  aggregation_rounds: 100
+})
+```
diff --git a/.claude/skills/flow-nexus-platform/assets/best-practices.md b/.claude/skills/flow-nexus-platform/assets/best-practices.md
new file mode 100644 (file)
index 0000000..b638fc2
--- /dev/null
@@ -0,0 +1,41 @@
+# Best Practices
+
+Guidelines for security, performance, development workflow, and cost management on Flow Nexus.
+
+---
+
+## Security
+
+1. Never hardcode API keys -- use environment variables
+2. Enable 2FA when available
+3. Regularly rotate passwords and tokens
+4. Use private buckets for sensitive data
+5. Review audit logs periodically
+6. Set appropriate file expiration times
+
+## Performance
+
+1. Clean up unused sandboxes to save credits
+2. Use smaller sandbox templates when possible
+3. Optimize storage by deleting old files
+4. Batch operations to reduce API calls
+5. Monitor usage via `user_stats`
+6. Use temp buckets for transient data
+
+## Development
+
+1. Start with sandbox testing before deployment
+2. Version your applications semantically
+3. Document all templates thoroughly
+4. Include tests in published apps
+5. Use execution monitoring for debugging
+6. Leverage real-time subscriptions for live updates
+
+## Cost Management
+
+1. Set auto-refill thresholds carefully
+2. Monitor credit usage regularly
+3. Complete daily challenges for bonus credits
+4. Publish templates to earn passive credits
+5. Use free-tier resources when appropriate
+6. Schedule heavy jobs during off-peak times
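+
+Several of these guidelines (1, 2, 5) combine into a periodic cleanup routine. A sketch using the platform tools documented in this skill; the `tmp-` naming convention and the `id`/`name` fields on listed sandboxes are assumptions:
+
+```javascript
+// Check the balance, then delete throwaway sandboxes to conserve credits
+const balance = await mcp__flow-nexus__check_balance()
+
+const sandboxes = await mcp__flow-nexus__sandbox_list({ status: "running" })
+for (const sb of sandboxes) {
+  if (sb.name.startsWith("tmp-")) { // illustrative naming convention
+    await mcp__flow-nexus__sandbox_delete({ sandbox_id: sb.id })
+  }
+}
+```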
diff --git a/.claude/skills/flow-nexus-platform/assets/quick-start-guide.md b/.claude/skills/flow-nexus-platform/assets/quick-start-guide.md
new file mode 100644 (file)
index 0000000..8391869
--- /dev/null
@@ -0,0 +1,102 @@
+# Quick Start Guide
+
+Step-by-step walkthrough to get started with the Flow Nexus platform.
+
+---
+
+## Step 1: Register & Login
+
+```javascript
+// Register
+mcp__flow-nexus__user_register({
+  email: "dev@example.com",
+  password: "SecurePass123!",
+  full_name: "Developer Name"
+})
+
+// Login
+mcp__flow-nexus__user_login({
+  email: "dev@example.com",
+  password: "SecurePass123!"
+})
+
+// Check auth status
+mcp__flow-nexus__auth_status({ detailed: true })
+```
+
+## Step 2: Configure Billing
+
+```javascript
+// Check current balance
+mcp__flow-nexus__check_balance()
+
+// Add credits
+const paymentLink = await mcp__flow-nexus__create_payment_link({
+  amount: 50 // $50
+})
+
+// Setup auto-refill
+mcp__flow-nexus__configure_auto_refill({
+  enabled: true,
+  threshold: 100,
+  amount: 50
+})
+```
+
+## Step 3: Create Your First Sandbox
+
+```javascript
+// Create development sandbox
+const sandbox = await mcp__flow-nexus__sandbox_create({
+  template: "node",
+  name: "dev-environment",
+  install_packages: ["express", "dotenv"],
+  env_vars: {
+    NODE_ENV: "development"
+  }
+})
+
+// Execute code
+mcp__flow-nexus__sandbox_execute({
+  sandbox_id: sandbox.id,
+  code: 'console.log("Hello Flow Nexus!")',
+  language: "javascript"
+})
+```
+
+## Step 4: Deploy an App
+
+```javascript
+// Browse templates
+mcp__flow-nexus__template_list({
+  category: "backend",
+  featured: true
+})
+
+// Deploy template
+mcp__flow-nexus__template_deploy({
+  template_name: "express-api-starter",
+  deployment_name: "my-api",
+  variables: {
+    database_url: "postgres://..."
+  }
+})
+```
+
+## Step 5: Complete a Challenge
+
+```javascript
+// Find challenges
+mcp__flow-nexus__challenges_list({
+  difficulty: "beginner",
+  category: "algorithms"
+})
+
+// Submit solution
+mcp__flow-nexus__challenge_submit({
+  challenge_id: "fizzbuzz",
+  user_id: "your_id",
+  solution_code: "...",
+  language: "javascript"
+})
+```
diff --git a/.claude/skills/flow-nexus-platform/assets/troubleshooting.md b/.claude/skills/flow-nexus-platform/assets/troubleshooting.md
new file mode 100644 (file)
index 0000000..c2fef5b
--- /dev/null
@@ -0,0 +1,48 @@
+# Troubleshooting
+
+Common issues and resolutions for the Flow Nexus platform.
+
+---
+
+## Authentication Issues
+
+| Symptom | Resolution |
+|---------|------------|
+| Login Failed | Check email/password, verify email first |
+| Token Expired | Re-login to get fresh tokens |
+| Permission Denied | Check tier limits, upgrade if needed |
+
+## Sandbox Issues
+
+| Symptom | Resolution |
+|---------|------------|
+| Sandbox Won't Start | Check template compatibility, verify credits |
+| Execution Timeout | Increase timeout parameter or optimize code |
+| Out of Memory | Use larger template or optimize memory usage |
+| Package Install Failed | Check package name, verify npm/pip availability |
+
+## Payment Issues
+
+| Symptom | Resolution |
+|---------|------------|
+| Payment Failed | Check payment method and confirm sufficient funds |
+| Credits Not Applied | Allow 5-10 minutes for processing |
+| Auto-refill Not Working | Verify payment method on file |
+
+## Challenge Issues
+
+| Symptom | Resolution |
+|---------|------------|
+| Submission Rejected | Check code syntax, ensure all tests pass |
+| Wrong Answer | Review test cases, check edge cases |
+| Performance Too Slow | Optimize algorithm complexity |
+
+## Support & Resources
+
+- **Documentation**: https://docs.flow-nexus.ruv.io
+- **API Reference**: https://api.flow-nexus.ruv.io/docs
+- **Status Page**: https://status.flow-nexus.ruv.io
+- **Community Forum**: https://community.flow-nexus.ruv.io
+- **GitHub Issues**: https://github.com/ruvnet/flow-nexus/issues
+- **Discord**: https://discord.gg/flow-nexus
+- **Email Support**: support@flow-nexus.ruv.io (Pro/Enterprise only)
diff --git a/.claude/skills/flow-nexus-platform/references/advanced-patterns.md b/.claude/skills/flow-nexus-platform/references/advanced-patterns.md
new file mode 100644 (file)
index 0000000..13e3da9
--- /dev/null
@@ -0,0 +1,95 @@
+# Advanced Patterns
+
+Advanced configuration patterns for sandboxes, storage, and real-time features.
+
+---
+
+## Advanced Sandbox Configuration
+
+### Custom Environment via Startup Script
+```javascript
+mcp__flow-nexus__sandbox_create({
+  template: "base",
+  name: "custom-environment",
+  startup_script: `
+    apt-get update
+    apt-get install -y custom-package
+    git clone https://github.com/user/repo
+    cd repo && npm install
+  `
+})
+```
+
+### Multi-Stage Execution
+```javascript
+// Stage 1: Setup
+mcp__flow-nexus__sandbox_execute({
+  sandbox_id: "id",
+  code: "npm install && npm run build"
+})
+
+// Stage 2: Run
+mcp__flow-nexus__sandbox_execute({
+  sandbox_id: "id",
+  code: "npm start",
+  working_dir: "/app/dist"
+})
+```
+
+## Advanced Storage Patterns
+
+### Large File Upload (Chunked)
+```javascript
+// fileBuffer: the file's contents as a Buffer
+const chunkSize = 5 * 1024 * 1024 // 5MB chunks
+const chunks = []
+for (let offset = 0; offset < fileBuffer.length; offset += chunkSize) {
+  chunks.push(fileBuffer.slice(offset, offset + chunkSize))
+}
+for (let i = 0; i < chunks.length; i++) {
+  await mcp__flow-nexus__storage_upload({
+    bucket: "private",
+    path: `large-file.bin.part${i}`,
+    content: chunks[i]
+  })
+}
+```
+
+### Storage Lifecycle
+```javascript
+// Upload to temp for processing
+mcp__flow-nexus__storage_upload({
+  bucket: "temp",
+  path: "processing/data.json",
+  content: data
+})
+
+// Move to permanent storage after processing
+mcp__flow-nexus__storage_upload({
+  bucket: "private",
+  path: "archive/processed-data.json",
+  content: processedData
+})
+```
+
+## Advanced Real-time Patterns
+
+### Multi-Table Sync
+```javascript
+const tables = ["users", "tasks", "notifications"]
+tables.forEach(table => {
+  mcp__flow-nexus__realtime_subscribe({
+    table,
+    event: "*",
+    filter: `user_id=eq.${userId}`
+  })
+})
+```
+
+### Event-Driven Workflows
+```javascript
+// Subscribe to task completion
+mcp__flow-nexus__realtime_subscribe({
+  table: "tasks",
+  event: "UPDATE",
+  filter: "status=eq.completed"
+})
+
+// Trigger notification workflow on event
+// (handled by your application logic)
+```
diff --git a/.claude/skills/flow-nexus-platform/references/app-store-deployment.md b/.claude/skills/flow-nexus-platform/references/app-store-deployment.md
new file mode 100644 (file)
index 0000000..8f71c66
--- /dev/null
@@ -0,0 +1,158 @@
+# App Store & Deployment API
+
+Detailed API reference for browsing, publishing, deploying, and managing applications on Flow Nexus.
+
+---
+
+## Browse & Search
+
+**Search Applications**
+```javascript
+mcp__flow-nexus__app_search({
+  search: "authentication api",
+  category: "backend",
+  featured: true,
+  limit: 20
+})
+```
+
+**Get App Details**
+```javascript
+mcp__flow-nexus__app_get({
+  app_id: "app_id"
+})
+```
+
+**List Templates**
+```javascript
+mcp__flow-nexus__app_store_list_templates({
+  category: "web-api",
+  tags: ["express", "jwt", "typescript"],
+  limit: 20
+})
+```
+
+**Get Template Details**
+```javascript
+mcp__flow-nexus__template_get({
+  template_name: "express-api-starter",
+  template_id: "template_id" // alternative to template_name
+})
+```
+
+**List All Available Templates**
+```javascript
+mcp__flow-nexus__template_list({
+  category: "backend",
+  template_type: "starter",
+  featured: true,
+  limit: 50
+})
+```
+
+## Publish Applications
+
+**Publish App to Store**
+```javascript
+mcp__flow-nexus__app_store_publish_app({
+  name: "JWT Authentication Service",
+  description: "Production-ready JWT authentication microservice with refresh tokens",
+  category: "backend",
+  version: "1.0.0",
+  source_code: sourceCodeString,
+  tags: ["auth", "jwt", "express", "typescript", "security"],
+  metadata: {
+    author: "Your Name",
+    license: "MIT",
+    repository: "github.com/username/repo",
+    homepage: "https://yourapp.com",
+    documentation: "https://docs.yourapp.com"
+  }
+})
+```
+
+**Update Application**
+```javascript
+mcp__flow-nexus__app_update({
+  app_id: "app_id",
+  updates: {
+    version: "1.1.0",
+    description: "Added OAuth2 support",
+    tags: ["auth", "jwt", "oauth2", "express"],
+    source_code: updatedSourceCode
+  }
+})
+```
+
+## Deploy Templates
+
+**Deploy Template**
+```javascript
+mcp__flow-nexus__template_deploy({
+  template_name: "express-api-starter",
+  deployment_name: "my-production-api",
+  variables: {
+    api_key: "your_api_key",
+    database_url: "postgres://user:pass@host:5432/db",
+    redis_url: "redis://localhost:6379"
+  },
+  env_vars: {
+    NODE_ENV: "production",
+    PORT: "8080",
+    LOG_LEVEL: "info"
+  }
+})
+```
+
+## Analytics & Management
+
+**Get App Analytics**
+```javascript
+mcp__flow-nexus__app_analytics({
+  app_id: "your_app_id",
+  timeframe: "30d" // 24h, 7d, 30d, 90d
+})
+```
+
+**View Installed Apps**
+```javascript
+mcp__flow-nexus__app_installed({
+  user_id: "your_user_id"
+})
+```
+
+**Get Market Statistics**
+```javascript
+mcp__flow-nexus__market_data()
+```
+
+## App Categories
+
+| Category | Description |
+|----------|-------------|
+| `web-api` | RESTful APIs and microservices |
+| `frontend` | React, Vue, Angular applications |
+| `full-stack` | Complete end-to-end applications |
+| `cli-tools` | Command-line utilities |
+| `data-processing` | ETL pipelines and analytics |
+| `ml-models` | Pre-trained machine learning models |
+| `blockchain` | Web3 and blockchain applications |
+| `mobile` | React Native and mobile apps |
+
+## Publishing Best Practices
+
+1. **Documentation**: Include comprehensive README with setup instructions
+2. **Examples**: Provide usage examples and sample configurations
+3. **Testing**: Include test suite and CI/CD configuration
+4. **Versioning**: Use semantic versioning (MAJOR.MINOR.PATCH)
+5. **Licensing**: Add clear license information (MIT, Apache, etc.)
+6. **Deployment**: Include Docker/docker-compose configurations
+7. **Migrations**: Provide upgrade guides for version updates
+8. **Security**: Document security considerations and best practices
+
+## Revenue Sharing
+
+- Earn rUv credits when others deploy your templates
+- Set pricing (0 for free, or credits for premium)
+- Track usage and earnings via analytics
+- Withdraw credits or use for Flow Nexus services
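+
+A sketch of the publish-then-track loop, using the publish and analytics calls above; the `app_id` field on the publish result is an assumption:
+
+```javascript
+// Publish a template, then periodically review deployments and earnings
+const app = await mcp__flow-nexus__app_store_publish_app({
+  name: "Express API Starter",
+  description: "Minimal Express starter with JWT auth",
+  category: "backend",
+  version: "1.0.0",
+  source_code: sourceCodeString,
+  tags: ["express", "starter"]
+})
+
+await mcp__flow-nexus__app_analytics({
+  app_id: app.app_id, // field name assumed
+  timeframe: "30d"
+})
+```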
diff --git a/.claude/skills/flow-nexus-platform/references/authentication.md b/.claude/skills/flow-nexus-platform/references/authentication.md
new file mode 100644 (file)
index 0000000..0870afb
--- /dev/null
@@ -0,0 +1,96 @@
+# Authentication & User Management API
+
+Detailed API reference for Flow Nexus authentication, password management, and user profiles.
+
+---
+
+## Registration & Login
+
+**Register New Account**
+```javascript
+mcp__flow-nexus__user_register({
+  email: "user@example.com",
+  password: "secure_password",
+  full_name: "Your Name",
+  username: "unique_username" // optional
+})
+```
+
+**Login**
+```javascript
+mcp__flow-nexus__user_login({
+  email: "user@example.com",
+  password: "your_password"
+})
+```
+
+**Check Authentication Status**
+```javascript
+mcp__flow-nexus__auth_status({ detailed: true })
+```
+
+**Logout**
+```javascript
+mcp__flow-nexus__user_logout()
+```
+
+## Password Management
+
+**Request Password Reset**
+```javascript
+mcp__flow-nexus__user_reset_password({
+  email: "user@example.com"
+})
+```
+
+**Update Password with Token**
+```javascript
+mcp__flow-nexus__user_update_password({
+  token: "reset_token_from_email",
+  new_password: "new_secure_password"
+})
+```
+
+**Verify Email**
+```javascript
+mcp__flow-nexus__user_verify_email({
+  token: "verification_token_from_email"
+})
+```
+
+## Profile Management
+
+**Get User Profile**
+```javascript
+mcp__flow-nexus__user_profile({
+  user_id: "your_user_id"
+})
+```
+
+**Update Profile**
+```javascript
+mcp__flow-nexus__user_update_profile({
+  user_id: "your_user_id",
+  updates: {
+    full_name: "Updated Name",
+    bio: "AI Developer and researcher",
+    github_username: "yourusername",
+    twitter_handle: "@yourhandle"
+  }
+})
+```
+
+**Get User Statistics**
+```javascript
+mcp__flow-nexus__user_stats({
+  user_id: "your_user_id"
+})
+```
+
+**Upgrade User Tier**
+```javascript
+mcp__flow-nexus__user_upgrade({
+  user_id: "your_user_id",
+  tier: "pro" // pro, enterprise
+})
+```
diff --git a/.claude/skills/flow-nexus-platform/references/challenges-achievements.md b/.claude/skills/flow-nexus-platform/references/challenges-achievements.md
new file mode 100644 (file)
index 0000000..9aedb96
--- /dev/null
@@ -0,0 +1,137 @@
+# Challenges & Achievements API
+
+Detailed API reference for browsing challenges, submitting solutions, leaderboards, and achievements.
+
+---
+
+## Browse Challenges
+
+**List Available Challenges**
+```javascript
+mcp__flow-nexus__challenges_list({
+  difficulty: "intermediate", // beginner, intermediate, advanced, expert
+  category: "algorithms",
+  status: "active", // active, completed, locked
+  limit: 20
+})
+```
+
+**Get Challenge Details**
+```javascript
+mcp__flow-nexus__challenge_get({
+  challenge_id: "two-sum-problem"
+})
+```
+
+## Submit Solutions
+
+**Submit Challenge Solution**
+```javascript
+mcp__flow-nexus__challenge_submit({
+  challenge_id: "challenge_id",
+  user_id: "your_user_id",
+  solution_code: `
+    function twoSum(nums, target) {
+      const map = new Map();
+      for (let i = 0; i < nums.length; i++) {
+        const complement = target - nums[i];
+        if (map.has(complement)) {
+          return [map.get(complement), i];
+        }
+        map.set(nums[i], i);
+      }
+      return [];
+    }
+  `,
+  language: "javascript",
+  execution_time: 45 // milliseconds (optional)
+})
+```
+
+**Mark Challenge as Complete**
+```javascript
+mcp__flow-nexus__app_store_complete_challenge({
+  challenge_id: "challenge_id",
+  user_id: "your_user_id",
+  submission_data: {
+    passed_tests: 10,
+    total_tests: 10,
+    execution_time: 45,
+    memory_usage: 2048 // KB
+  }
+})
+```
+
+## Leaderboards
+
+**Global Leaderboard**
+```javascript
+mcp__flow-nexus__leaderboard_get({
+  type: "global", // global, weekly, monthly, challenge
+  limit: 100
+})
+```
+
+**Challenge-Specific Leaderboard**
+```javascript
+mcp__flow-nexus__leaderboard_get({
+  type: "challenge",
+  challenge_id: "specific_challenge_id",
+  limit: 50
+})
+```
+
+## Achievements & Badges
+
+**List User Achievements**
+```javascript
+mcp__flow-nexus__achievements_list({
+  user_id: "your_user_id",
+  category: "speed_demon" // Optional filter
+})
+```
+
+## Challenge Categories
+
+| Category | Description |
+|----------|-------------|
+| `algorithms` | Classic algorithm problems (sorting, searching, graphs) |
+| `data-structures` | DS implementation (trees, heaps, tries) |
+| `system-design` | Architecture and scalability challenges |
+| `optimization` | Performance and efficiency problems |
+| `security` | Security-focused vulnerabilities and fixes |
+| `ml-basics` | Machine learning fundamentals |
+| `distributed-systems` | Concurrency and distributed computing |
+| `databases` | Query optimization and schema design |
+
+## Challenge Difficulty Rewards
+
+| Difficulty | Reward |
+|------------|--------|
+| Beginner | 10-25 credits |
+| Intermediate | 50-100 credits |
+| Advanced | 150-300 credits |
+| Expert | 400-500 credits |
+| Master | 600-1000 credits |
+
+## Achievement Types
+
+| Achievement | Description |
+|-------------|-------------|
+| Speed Demon | Complete challenges in record time |
+| Code Golf | Minimize code length |
+| Perfect Score | 100% test pass rate |
+| Streak Master | Complete challenges N days in a row |
+| Polyglot | Solve in multiple languages |
+| Debugger | Fix broken code challenges |
+| Optimizer | Achieve top performance benchmarks |
+
+## Tips for Success
+
+1. **Start Simple**: Begin with beginner challenges to build confidence
+2. **Review Solutions**: Study top solutions after completing
+3. **Optimize**: Aim for both correctness and performance
+4. **Daily Practice**: Complete daily challenges for bonus credits
+5. **Community**: Engage with discussions and learn from others
+6. **Track Progress**: Monitor achievements and leaderboard position
+7. **Experiment**: Try multiple approaches to problems
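+
+Tips 1, 4, and 6 combine into a simple daily-practice loop. A sketch using the challenge APIs above; the `id` field on listed challenges is an assumption:
+
+```javascript
+// Pick an active beginner challenge, submit, then check the weekly ranking
+const challenges = await mcp__flow-nexus__challenges_list({
+  difficulty: "beginner",
+  status: "active",
+  limit: 1
+})
+
+await mcp__flow-nexus__challenge_submit({
+  challenge_id: challenges[0].id, // field name assumed
+  user_id: "your_user_id",
+  solution_code: "...",
+  language: "javascript"
+})
+
+await mcp__flow-nexus__leaderboard_get({ type: "weekly", limit: 10 })
+```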
diff --git a/.claude/skills/flow-nexus-platform/references/payments-credits.md b/.claude/skills/flow-nexus-platform/references/payments-credits.md
new file mode 100644 (file)
index 0000000..7229a81
--- /dev/null
@@ -0,0 +1,137 @@
+# Payments & Credits API
+
+Detailed API reference for balance management, credit purchases, auto-refill, pricing, and subscription tiers.
+
+---
+
+## Balance & Credits
+
+**Check Credit Balance**
+```javascript
+mcp__flow-nexus__check_balance()
+```
+
+**Check rUv Balance**
+```javascript
+mcp__flow-nexus__ruv_balance({
+  user_id: "your_user_id"
+})
+```
+
+**View Transaction History**
+```javascript
+mcp__flow-nexus__ruv_history({
+  user_id: "your_user_id",
+  limit: 100
+})
+```
+
+**Get Payment History**
+```javascript
+mcp__flow-nexus__get_payment_history({
+  limit: 50
+})
+```
+
+## Purchase Credits
+
+**Create Payment Link**
+```javascript
+mcp__flow-nexus__create_payment_link({
+  amount: 50 // USD, minimum $10
+})
+// Returns secure Stripe payment URL
+```
+
+## Auto-Refill Configuration
+
+**Enable Auto-Refill**
+```javascript
+mcp__flow-nexus__configure_auto_refill({
+  enabled: true,
+  threshold: 100,  // Refill when credits drop below 100
+  amount: 50       // Purchase $50 worth of credits
+})
+```
+
+**Disable Auto-Refill**
+```javascript
+mcp__flow-nexus__configure_auto_refill({
+  enabled: false
+})
+```
+
+## Credit Pricing
+
+**Service Costs:**
+
+| Service | Cost |
+|---------|------|
+| Swarm Operations | 1-10 credits/hour |
+| Sandbox Execution | 0.5-5 credits/hour |
+| Neural Training | 5-50 credits/job |
+| Workflow Runs | 0.1-1 credit/execution |
+| Storage | 0.01 credits/GB/day |
+| API Calls | 0.001-0.01 credits/request |
+
+## Earning Credits
+
+**Ways to Earn:**
+
+1. **Complete Challenges**: 10-500 credits per challenge
+2. **Publish Templates**: Earn when others deploy (you set pricing)
+3. **Referral Program**: Bonus credits for user invites
+4. **Daily Login**: Small daily bonus (5-10 credits)
+5. **Achievements**: Unlock milestone rewards (50-1000 credits)
+6. **App Store Sales**: Revenue share from paid templates
+
+**Earn Credits Programmatically**
+```javascript
+mcp__flow-nexus__app_store_earn_ruv({
+  user_id: "your_user_id",
+  amount: 100,
+  reason: "Completed expert algorithm challenge",
+  source: "challenge" // challenge, app_usage, referral, etc.
+})
+```
+
+## Subscription Tiers
+
+### Free Tier
+
+- 100 free credits monthly
+- Basic sandbox access (2 concurrent)
+- Limited swarm agents (3 max)
+- Community support
+- 1GB storage
+
+### Pro Tier ($29/month)
+
+- 1000 credits monthly
+- Priority sandbox access (10 concurrent)
+- Unlimited swarm agents
+- Advanced workflows
+- Email support
+- 10GB storage
+- Early access to features
+
+### Enterprise Tier (Custom Pricing)
+
+- Unlimited credits
+- Dedicated compute resources
+- Custom neural models
+- 99.9% SLA guarantee
+- Priority 24/7 support
+- Unlimited storage
+- White-label options
+- On-premise deployment
+
+## Cost Optimization Tips
+
+1. **Use Smaller Sandboxes**: Choose appropriate templates (base vs full-stack)
+2. **Optimize Neural Training**: Tune hyperparameters, reduce epochs
+3. **Batch Operations**: Group workflow executions together
+4. **Clean Up Resources**: Delete unused sandboxes and storage
+5. **Monitor Usage**: Check `user_stats` regularly
+6. **Use Free Templates**: Leverage community templates
+7. **Schedule Off-Peak**: Run heavy jobs during low-cost periods
diff --git a/.claude/skills/flow-nexus-platform/references/sandbox-management.md b/.claude/skills/flow-nexus-platform/references/sandbox-management.md
new file mode 100644 (file)
index 0000000..c88acdc
--- /dev/null
@@ -0,0 +1,180 @@
+# Sandbox Management API
+
+Detailed API reference for creating, configuring, executing code in, and managing Flow Nexus sandboxes.
+
+---
+
+## Create & Configure Sandboxes
+
+**Create Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_create({
+  template: "node", // node, python, react, nextjs, vanilla, base, claude-code
+  name: "my-sandbox",
+  env_vars: {
+    API_KEY: "your_api_key",
+    NODE_ENV: "development",
+    DATABASE_URL: "postgres://..."
+  },
+  install_packages: ["express", "cors", "dotenv"],
+  startup_script: "npm run dev",
+  timeout: 3600, // seconds
+  metadata: {
+    project: "my-project",
+    environment: "staging"
+  }
+})
+```
+
+**Configure Existing Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_configure({
+  sandbox_id: "sandbox_id",
+  env_vars: {
+    NEW_VAR: "value"
+  },
+  install_packages: ["axios", "lodash"],
+  run_commands: ["npm run migrate", "npm run seed"],
+  anthropic_key: "sk-ant-..." // For Claude Code integration
+})
+```
+
+## Execute Code
+
+**Run Code in Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_execute({
+  sandbox_id: "sandbox_id",
+  code: `
+    console.log('Hello from sandbox!');
+    const result = await fetch('https://api.example.com/data');
+    const data = await result.json();
+    return data;
+  `,
+  language: "javascript",
+  capture_output: true,
+  timeout: 60, // seconds
+  working_dir: "/app",
+  env_vars: {
+    TEMP_VAR: "override"
+  }
+})
+```
+
+## Manage Sandboxes
+
+**List Sandboxes**
+```javascript
+mcp__flow-nexus__sandbox_list({
+  status: "running" // running, stopped, all
+})
+```
+
+**Get Sandbox Status**
+```javascript
+mcp__flow-nexus__sandbox_status({
+  sandbox_id: "sandbox_id"
+})
+```
+
+**Upload File to Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_upload({
+  sandbox_id: "sandbox_id",
+  file_path: "/app/config/database.json",
+  content: JSON.stringify(databaseConfig, null, 2)
+})
+```
+
+**Get Sandbox Logs**
+```javascript
+mcp__flow-nexus__sandbox_logs({
+  sandbox_id: "sandbox_id",
+  lines: 100 // max 1000
+})
+```
+
+**Stop Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_stop({
+  sandbox_id: "sandbox_id"
+})
+```
+
+**Delete Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_delete({
+  sandbox_id: "sandbox_id"
+})
+```
+
+## Sandbox Templates
+
+| Template | Description |
+|----------|-------------|
+| `node` | Node.js environment with npm |
+| `python` | Python 3.x with pip |
+| `react` | React development setup |
+| `nextjs` | Next.js full-stack framework |
+| `vanilla` | Basic HTML/CSS/JS |
+| `base` | Minimal Linux environment |
+| `claude-code` | Claude Code integrated environment |
+
+## Common Sandbox Patterns
+
+**API Development Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_create({
+  template: "node",
+  name: "api-development",
+  install_packages: [
+    "express",
+    "cors",
+    "helmet",
+    "dotenv",
+    "jsonwebtoken",
+    "bcrypt"
+  ],
+  env_vars: {
+    PORT: "3000",
+    NODE_ENV: "development"
+  },
+  startup_script: "npm run dev"
+})
+```
+
+**Machine Learning Sandbox**
+```javascript
+mcp__flow-nexus__sandbox_create({
+  template: "python",
+  name: "ml-training",
+  install_packages: [
+    "numpy",
+    "pandas",
+    "scikit-learn",
+    "matplotlib",
+    "tensorflow"
+  ],
+  env_vars: {
+    CUDA_VISIBLE_DEVICES: "0"
+  }
+})
+```
+
+**Full-Stack Development**
+```javascript
+mcp__flow-nexus__sandbox_create({
+  template: "nextjs",
+  name: "fullstack-app",
+  install_packages: [
+    "prisma",
+    "@prisma/client",
+    "next-auth",
+    "zod"
+  ],
+  env_vars: {
+    DATABASE_URL: "postgresql://...",
+    NEXTAUTH_SECRET: "secret"
+  }
+})
+```
diff --git a/.claude/skills/flow-nexus-platform/references/storage-realtime.md b/.claude/skills/flow-nexus-platform/references/storage-realtime.md
new file mode 100644 (file)
index 0000000..d262a91
--- /dev/null
@@ -0,0 +1,110 @@
+# Storage & Real-time API
+
+Detailed API reference for file storage, real-time subscriptions, and execution monitoring.
+
+---
+
+## File Storage
+
+**Upload File**
+```javascript
+mcp__flow-nexus__storage_upload({
+  bucket: "my-bucket", // public, private, shared, temp
+  path: "data/users.json",
+  content: JSON.stringify(userData, null, 2),
+  content_type: "application/json"
+})
+```
+
+**List Files**
+```javascript
+mcp__flow-nexus__storage_list({
+  bucket: "my-bucket",
+  path: "data/", // prefix filter
+  limit: 100
+})
+```
+
+**Get Public URL**
+```javascript
+mcp__flow-nexus__storage_get_url({
+  bucket: "my-bucket",
+  path: "data/report.pdf",
+  expires_in: 3600 // seconds (default: 1 hour)
+})
+```
+
+**Delete File**
+```javascript
+mcp__flow-nexus__storage_delete({
+  bucket: "my-bucket",
+  path: "data/old-file.json"
+})
+```
+
+## Storage Buckets
+
+| Bucket | Description |
+|--------|-------------|
+| `public` | Publicly accessible files (CDN-backed) |
+| `private` | User-only access with authentication |
+| `shared` | Team collaboration with ACL |
+| `temp` | Auto-deleted after 24 hours |
+
+## Real-time Subscriptions
+
+**Subscribe to Database Changes**
+```javascript
+mcp__flow-nexus__realtime_subscribe({
+  table: "tasks",
+  event: "INSERT", // INSERT, UPDATE, DELETE, *
+  filter: "status=eq.pending AND priority=eq.high"
+})
+```
+
+**List Active Subscriptions**
+```javascript
+mcp__flow-nexus__realtime_list()
+```
+
+**Unsubscribe**
+```javascript
+mcp__flow-nexus__realtime_unsubscribe({
+  subscription_id: "subscription_id"
+})
+```
+
+## Execution Monitoring
+
+**Subscribe to Execution Stream**
+```javascript
+mcp__flow-nexus__execution_stream_subscribe({
+  stream_type: "claude-flow-swarm", // claude-code, claude-flow-swarm, claude-flow-hive-mind, github-integration
+  deployment_id: "deployment_id",
+  sandbox_id: "sandbox_id" // alternative
+})
+```
+
+**Get Stream Status**
+```javascript
+mcp__flow-nexus__execution_stream_status({
+  stream_id: "stream_id"
+})
+```
+
+**List Generated Files**
+```javascript
+mcp__flow-nexus__execution_files_list({
+  stream_id: "stream_id",
+  created_by: "claude-flow", // claude-code, claude-flow, git-clone, user
+  file_type: "javascript" // filter by extension
+})
+```
+
+**Get File Content from Execution**
+```javascript
+mcp__flow-nexus__execution_file_get({
+  file_id: "file_id",
+  file_path: "/path/to/file.js" // alternative
+})
+```
diff --git a/.claude/skills/flow-nexus-platform/references/system-utilities.md b/.claude/skills/flow-nexus-platform/references/system-utilities.md
new file mode 100644 (file)
index 0000000..1420026
--- /dev/null
@@ -0,0 +1,50 @@
+# System Utilities API
+
+Detailed API reference for Queen Seraphina AI assistant, system health monitoring, and authentication management.
+
+---
+
+## Queen Seraphina AI Assistant
+
+**Seek Guidance from Seraphina**
+```javascript
+mcp__flow-nexus__seraphina_chat({
+  message: "How should I architect a distributed microservices system?",
+  enable_tools: true, // Allow her to create swarms, deploy code, etc.
+  conversation_history: [
+    { role: "user", content: "I need help with system architecture" },
+    { role: "assistant", content: "I can help you design that. What are your requirements?" }
+  ]
+})
+```
+
+Queen Seraphina is an advanced AI assistant with:
+- Deep expertise in distributed systems
+- Ability to create swarms and orchestrate agents
+- Code deployment and architecture design
+- Multi-turn conversation with context retention
+- Tool usage for hands-on assistance
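+
+Context retention means follow-up questions can omit earlier details. A sketch of a second turn, appending the prior exchange to `conversation_history`:
+
+```javascript
+// Follow-up turn: prior messages carry the architectural context
+mcp__flow-nexus__seraphina_chat({
+  message: "Now sketch the deployment topology for that design.",
+  enable_tools: false,
+  conversation_history: [
+    { role: "user", content: "How should I architect a distributed microservices system?" },
+    { role: "assistant", content: "..." } // previous reply, elided
+  ]
+})
+```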
+
+## System Health & Monitoring
+
+**Check System Health**
+```javascript
+mcp__flow-nexus__system_health()
+```
+
+**View Audit Logs**
+```javascript
+mcp__flow-nexus__audit_log({
+  user_id: "your_user_id", // optional filter
+  limit: 100
+})
+```
+
+## Authentication Management
+
+**Initialize Authentication**
+```javascript
+mcp__flow-nexus__auth_init({
+  mode: "user" // user, service
+})
+```
diff --git a/.claude/skills/flow-nexus-swarm/references/advanced-features.md b/.claude/skills/flow-nexus-swarm/references/advanced-features.md
new file mode 100644 (file)
index 0000000..0159cab
--- /dev/null
@@ -0,0 +1,87 @@
+# Advanced Features Reference
+
+Real-time monitoring, swarm metrics, and multi-swarm coordination.
+
+---
+
+## Real-time Monitoring
+
+Subscribe to live execution streams and inspect artifacts:
+
+```javascript
+// Subscribe to execution streams
+mcp__flow-nexus__execution_stream_subscribe({
+  stream_type: "claude-flow-swarm",
+  deployment_id: "deployment_id"
+})
+
+// Get execution status
+mcp__flow-nexus__execution_stream_status({
+  stream_id: "stream_id"
+})
+
+// List files created during execution
+mcp__flow-nexus__execution_files_list({
+  stream_id: "stream_id",
+  created_by: "claude-flow"
+})
+```
+
+---
+
+## Swarm Metrics & Analytics
+
+```javascript
+// Get swarm performance metrics
+mcp__flow-nexus__swarm_status({
+  swarm_id: "id"
+})
+
+// Analyze workflow efficiency
+mcp__flow-nexus__workflow_status({
+  workflow_id: "id",
+  include_metrics: true
+})
+```
+
+---
+
+## Multi-Swarm Coordination
+
+Coordinate multiple swarms for complex, multi-phase projects:
+
+```javascript
+// Phase 1: Research swarm
+const researchSwarm = await mcp__flow-nexus__swarm_init({
+  topology: "mesh",
+  maxAgents: 4
+})
+
+// Phase 2: Development swarm
+const devSwarm = await mcp__flow-nexus__swarm_init({
+  topology: "hierarchical",
+  maxAgents: 8
+})
+
+// Phase 3: Testing swarm
+const testSwarm = await mcp__flow-nexus__swarm_init({
+  topology: "star",
+  maxAgents: 5
+})
+```
+
+Each swarm operates independently with its own topology, then results flow between phases via workflow triggers or manual coordination.
+
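One way to hand results between phases is to chain orchestration calls, feeding one swarm's output into the next task description. A sketch of this pattern (the result payload shape, e.g. `research.summary`, is illustrative rather than a guaranteed schema):

```javascript
// Phase hand-off sketch: feed research-swarm output into the dev swarm.
// Field names on the result object are hypothetical.
const research = await mcp__flow-nexus__task_orchestrate({
  task: "Survey authentication approaches and recommend one",
  strategy: "parallel",
  maxAgents: 4
})

// Forward the research summary as context for the development phase
await mcp__flow-nexus__task_orchestrate({
  task: `Implement authentication per recommendation: ${research.summary}`,
  strategy: "sequential",
  priority: "high"
})
```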
+---
+
+## Integration with Claude Flow
+
+Flow Nexus swarms integrate seamlessly with Claude Flow hooks:
+
+```bash
+# Pre-task coordination setup
+npx claude-flow@alpha hooks pre-task --description "Initialize swarm"
+
+# Post-task metrics export
+npx claude-flow@alpha hooks post-task --task-id "swarm-execution"
+```
diff --git a/.claude/skills/flow-nexus-swarm/references/best-practices.md b/.claude/skills/flow-nexus-swarm/references/best-practices.md
new file mode 100644 (file)
index 0000000..a504cc5
--- /dev/null
@@ -0,0 +1,100 @@
+# Best Practices Reference
+
+Proven patterns for topology selection, agent assignment, error handling, scaling, and resource management.
+
+---
+
+## 1. Choose the Right Topology
+
+```javascript
+// Simple projects: Star
+mcp__flow-nexus__swarm_init({ topology: "star", maxAgents: 3 })
+
+// Collaborative work: Mesh
+mcp__flow-nexus__swarm_init({ topology: "mesh", maxAgents: 5 })
+
+// Complex projects: Hierarchical
+mcp__flow-nexus__swarm_init({ topology: "hierarchical", maxAgents: 10 })
+
+// Sequential workflows: Ring
+mcp__flow-nexus__swarm_init({ topology: "ring", maxAgents: 4 })
+```
+
+---
+
+## 2. Optimize Agent Assignment
+
+```javascript
+// Use vector similarity for optimal matching
+mcp__flow-nexus__workflow_agent_assign({
+  task_id: "complex-task",
+  use_vector_similarity: true
+})
+```
+
+---
+
+## 3. Implement Proper Error Handling
+
+```javascript
+mcp__flow-nexus__workflow_create({
+  name: "Resilient Workflow",
+  steps: [...],
+  metadata: {
+    retry_policy: "exponential_backoff",
+    max_retries: 3,
+    timeout: 300000, // 5 minutes
+    on_failure: "notify_and_rollback"
+  }
+})
+```
+
+---
+
+## 4. Monitor and Scale
+
+```javascript
+// Regular monitoring
+const status = await mcp__flow-nexus__swarm_status()
+
+// Scale based on workload
+if (status.workload > 0.8) {
+  await mcp__flow-nexus__swarm_scale({ target_agents: status.agents + 2 })
+}
+```
+
+---
+
+## 5. Use Async Execution for Long-Running Workflows
+
+```javascript
+// Long-running workflows should use message queues
+mcp__flow-nexus__workflow_execute({
+  workflow_id: "data-pipeline",
+  async: true // Non-blocking execution
+})
+
+// Monitor progress
+mcp__flow-nexus__workflow_queue_status({ include_messages: true })
+```
+
+---
+
+## 6. Clean Up Resources
+
+```javascript
+// Destroy swarm when complete
+mcp__flow-nexus__swarm_destroy({ swarm_id: "id" })
+```
+
+---
+
+## 7. Leverage Templates
+
+```javascript
+// Use proven templates instead of building from scratch
+mcp__flow-nexus__swarm_create_from_template({
+  template_name: "code-review",
+  overrides: { maxAgents: 4 }
+})
+```
diff --git a/.claude/skills/flow-nexus-swarm/references/orchestration-patterns.md b/.claude/skills/flow-nexus-swarm/references/orchestration-patterns.md
new file mode 100644 (file)
index 0000000..6d70e49
--- /dev/null
@@ -0,0 +1,130 @@
+# Orchestration Patterns Reference
+
+Ready-to-use agent orchestration patterns for common project types.
+
+---
+
+## Full-Stack Development Pattern
+
+Hierarchical topology with specialized agents for end-to-end web development.
+
+```javascript
+// 1. Initialize swarm with hierarchical topology
+mcp__flow-nexus__swarm_init({
+  topology: "hierarchical",
+  maxAgents: 8,
+  strategy: "specialized"
+})
+
+// 2. Spawn specialized agents
+mcp__flow-nexus__agent_spawn({ type: "coordinator", name: "Project Manager" })
+mcp__flow-nexus__agent_spawn({ type: "coder", name: "Backend Developer" })
+mcp__flow-nexus__agent_spawn({ type: "coder", name: "Frontend Developer" })
+mcp__flow-nexus__agent_spawn({ type: "coder", name: "Database Architect" })
+mcp__flow-nexus__agent_spawn({ type: "analyst", name: "QA Engineer" })
+
+// 3. Create development workflow
+mcp__flow-nexus__workflow_create({
+  name: "Full-Stack Development",
+  steps: [
+    { id: "requirements", action: "analyze_requirements", agent: "coordinator" },
+    { id: "db_design", action: "design_schema", agent: "Database Architect" },
+    { id: "backend", action: "build_api", agent: "Backend Developer", depends_on: ["db_design"] },
+    { id: "frontend", action: "build_ui", agent: "Frontend Developer", depends_on: ["requirements"] },
+    { id: "integration", action: "integrate", agent: "Backend Developer", depends_on: ["backend", "frontend"] },
+    { id: "testing", action: "qa_testing", agent: "QA Engineer", depends_on: ["integration"] }
+  ]
+})
+
+// 4. Execute workflow
+mcp__flow-nexus__workflow_execute({
+  workflow_id: "workflow_id",
+  input_data: {
+    project: "E-commerce Platform",
+    tech_stack: ["Node.js", "React", "PostgreSQL"]
+  }
+})
+```
+
+---
+
+## Research & Analysis Pattern
+
+Mesh topology for collaborative, peer-to-peer research.
+
+```javascript
+// 1. Initialize mesh topology for collaborative research
+mcp__flow-nexus__swarm_init({
+  topology: "mesh",
+  maxAgents: 5,
+  strategy: "balanced"
+})
+
+// 2. Spawn research agents
+mcp__flow-nexus__agent_spawn({ type: "researcher", name: "Primary Researcher" })
+mcp__flow-nexus__agent_spawn({ type: "researcher", name: "Secondary Researcher" })
+mcp__flow-nexus__agent_spawn({ type: "analyst", name: "Data Analyst" })
+mcp__flow-nexus__agent_spawn({ type: "analyst", name: "Insights Analyst" })
+
+// 3. Orchestrate research task
+mcp__flow-nexus__task_orchestrate({
+  task: "Research machine learning trends for 2025 and analyze market opportunities",
+  strategy: "parallel",
+  maxAgents: 4,
+  priority: "high"
+})
+```
+
+---
+
+## CI/CD Pipeline Pattern
+
+Event-driven deployment pipeline with parallel testing gates.
+
+```javascript
+mcp__flow-nexus__workflow_create({
+  name: "Deployment Pipeline",
+  description: "Automated testing, building, and multi-environment deployment",
+  steps: [
+    { id: "lint", action: "lint_code", agent: "code_quality", parallel: true },
+    { id: "unit_test", action: "unit_tests", agent: "test_runner", parallel: true },
+    { id: "integration_test", action: "integration_tests", agent: "test_runner", parallel: true },
+    { id: "build", action: "build_artifacts", agent: "builder", depends_on: ["lint", "unit_test", "integration_test"] },
+    { id: "security_scan", action: "security_scan", agent: "security", depends_on: ["build"] },
+    { id: "deploy_staging", action: "deploy", agent: "deployer", depends_on: ["security_scan"] },
+    { id: "smoke_test", action: "smoke_tests", agent: "test_runner", depends_on: ["deploy_staging"] },
+    { id: "deploy_prod", action: "deploy", agent: "deployer", depends_on: ["smoke_test"] }
+  ],
+  triggers: ["github_push", "github_pr_merged"],
+  metadata: {
+    priority: 10,
+    auto_rollback: true
+  }
+})
+```
+
+---
+
+## Data Processing Pipeline Pattern
+
+Scheduled ETL workflow with validation gates.
+
+```javascript
+mcp__flow-nexus__workflow_create({
+  name: "ETL Pipeline",
+  description: "Extract, Transform, Load data processing",
+  steps: [
+    { id: "extract", action: "extract_data", agent: "data_extractor" },
+    { id: "validate_raw", action: "validate_data", agent: "validator", depends_on: ["extract"] },
+    { id: "transform", action: "transform_data", agent: "transformer", depends_on: ["validate_raw"] },
+    { id: "enrich", action: "enrich_data", agent: "enricher", depends_on: ["transform"] },
+    { id: "load", action: "load_data", agent: "loader", depends_on: ["enrich"] },
+    { id: "validate_final", action: "validate_data", agent: "validator", depends_on: ["load"] }
+  ],
+  triggers: ["schedule:0 2 * * *"], // Daily at 2 AM
+  metadata: {
+    retry_policy: "exponential_backoff",
+    max_retries: 3
+  }
+})
+```
diff --git a/.claude/skills/flow-nexus-swarm/references/swarm-management.md b/.claude/skills/flow-nexus-swarm/references/swarm-management.md
new file mode 100644 (file)
index 0000000..91e597c
--- /dev/null
@@ -0,0 +1,108 @@
+# Swarm Management Reference
+
+Detailed API reference for swarm lifecycle operations: initialization, agent spawning, task orchestration, monitoring, and scaling.
+
+---
+
+## Initialize Swarm
+
+Create a new swarm with specified topology and configuration:
+
+```javascript
+mcp__flow-nexus__swarm_init({
+  topology: "hierarchical", // Options: mesh, ring, star, hierarchical
+  maxAgents: 8,
+  strategy: "balanced" // Options: balanced, specialized, adaptive
+})
+```
+
+### Topology Guide
+
+| Topology | Structure | Best For |
+|----------|-----------|----------|
+| **Hierarchical** | Tree with coordinator nodes | Complex projects with clear delegation |
+| **Mesh** | Peer-to-peer collaboration | Research and analysis tasks |
+| **Ring** | Circular coordination | Sequential workflows |
+| **Star** | Centralized hub | Simple delegation |
+
+### Strategy Guide
+
+| Strategy | Behavior |
+|----------|----------|
+| **Balanced** | Equal distribution of workload across agents |
+| **Specialized** | Agents focus on specific expertise areas |
+| **Adaptive** | Dynamic adjustment based on task complexity |
+
+---
+
+## Spawn Agents
+
+Add specialized agents to the swarm:
+
+```javascript
+mcp__flow-nexus__agent_spawn({
+  type: "researcher", // Options: researcher, coder, analyst, optimizer, coordinator
+  name: "Lead Researcher",
+  capabilities: ["web_search", "analysis", "summarization"]
+})
+```
+
+### Agent Types
+
+| Type | Role |
+|------|------|
+| **Researcher** | Information gathering, web search, analysis |
+| **Coder** | Code generation, refactoring, implementation |
+| **Analyst** | Data analysis, pattern recognition, insights |
+| **Optimizer** | Performance tuning, resource optimization |
+| **Coordinator** | Task delegation, progress tracking, integration |
+
+---
+
+## Orchestrate Tasks
+
+Distribute tasks across the swarm:
+
+```javascript
+mcp__flow-nexus__task_orchestrate({
+  task: "Build a REST API with authentication and database integration",
+  strategy: "parallel", // Options: parallel, sequential, adaptive
+  maxAgents: 5,
+  priority: "high" // Options: low, medium, high, critical
+})
+```
+
+### Execution Strategies
+
+| Strategy | Description |
+|----------|-------------|
+| **Parallel** | Maximum concurrency for independent subtasks |
+| **Sequential** | Step-by-step execution with dependencies |
+| **Adaptive** | AI-powered strategy selection based on task analysis |
+
+---
+
+## Monitor & Scale Swarms
+
+```javascript
+// Get detailed swarm status
+mcp__flow-nexus__swarm_status({
+  swarm_id: "optional-id" // Uses active swarm if not provided
+})
+
+// List all active swarms
+mcp__flow-nexus__swarm_list({
+  status: "active" // Options: active, destroyed, all
+})
+
+// Scale swarm up or down
+mcp__flow-nexus__swarm_scale({
+  target_agents: 10,
+  swarm_id: "optional-id"
+})
+
+// Gracefully destroy swarm
+mcp__flow-nexus__swarm_destroy({
+  swarm_id: "optional-id"
+})
+```
diff --git a/.claude/skills/flow-nexus-swarm/references/templates.md b/.claude/skills/flow-nexus-swarm/references/templates.md
new file mode 100644 (file)
index 0000000..7b365a0
--- /dev/null
@@ -0,0 +1,61 @@
+# Templates Reference
+
+Pre-built swarm templates and custom template creation.
+
+---
+
+## Use Pre-built Templates
+
+```javascript
+// Create swarm from template
+mcp__flow-nexus__swarm_create_from_template({
+  template_name: "full-stack-dev",
+  overrides: {
+    maxAgents: 6,
+    strategy: "specialized"
+  }
+})
+
+// List available templates
+mcp__flow-nexus__swarm_templates_list({
+  category: "quickstart", // Options: quickstart, specialized, enterprise, custom, all
+  includeStore: true
+})
+```
+
+---
+
+## Template Categories
+
+### Quickstart Templates
+
+| Template | Purpose |
+|----------|---------|
+| `full-stack-dev` | Complete web development swarm |
+| `research-team` | Research and analysis swarm |
+| `code-review` | Automated code review swarm |
+| `data-pipeline` | ETL and data processing |
+
+### Specialized Templates
+
+| Template | Purpose |
+|----------|---------|
+| `ml-development` | Machine learning project swarm |
+| `mobile-dev` | Mobile app development |
+| `devops-automation` | Infrastructure and deployment |
+| `security-audit` | Security analysis and testing |
+
+### Enterprise Templates
+
+| Template | Purpose |
+|----------|---------|
+| `enterprise-migration` | Large-scale system migration |
+| `multi-repo-sync` | Multi-repository coordination |
+| `compliance-review` | Regulatory compliance workflows |
+| `incident-response` | Automated incident management |
+
+---
+
+## Custom Template Creation
+
+Save successful swarm configurations as reusable templates. After a productive session, export the configuration so it can be shared across teams and future projects.
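As a rough illustration, an exported template could capture the init parameters and agent roster in a single document (all field names below are hypothetical; consult the template store for the actual schema):

```json
{
  "name": "api-hardening-review",
  "category": "custom",
  "config": {
    "topology": "mesh",
    "maxAgents": 4,
    "strategy": "specialized",
    "agents": [
      { "type": "analyst", "name": "Security Reviewer" },
      { "type": "coder", "name": "Remediation Engineer" }
    ]
  }
}
```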
diff --git a/.claude/skills/flow-nexus-swarm/references/workflow-automation.md b/.claude/skills/flow-nexus-swarm/references/workflow-automation.md
new file mode 100644 (file)
index 0000000..444be08
--- /dev/null
@@ -0,0 +1,137 @@
+# Workflow Automation Reference
+
+Detailed API reference for event-driven workflow creation, execution, monitoring, agent assignment, and queue management.
+
+---
+
+## Create Workflow
+
+Define event-driven workflows with message queue processing:
+
+```javascript
+mcp__flow-nexus__workflow_create({
+  name: "CI/CD Pipeline",
+  description: "Automated testing, building, and deployment",
+  steps: [
+    {
+      id: "test",
+      action: "run_tests",
+      agent: "tester",
+      parallel: true
+    },
+    {
+      id: "build",
+      action: "build_app",
+      agent: "builder",
+      depends_on: ["test"]
+    },
+    {
+      id: "deploy",
+      action: "deploy_prod",
+      agent: "deployer",
+      depends_on: ["build"]
+    }
+  ],
+  triggers: ["push_to_main", "manual_trigger"],
+  metadata: {
+    priority: 10,
+    retry_policy: "exponential_backoff"
+  }
+})
+```
+
+### Workflow Features
+
+| Feature | Description |
+|---------|-------------|
+| **Dependency Management** | Define step dependencies with `depends_on` |
+| **Parallel Execution** | Set `parallel: true` for concurrent steps |
+| **Event Triggers** | GitHub events, schedules, manual triggers |
+| **Retry Policies** | Automatic retry on transient failures |
+| **Priority Queuing** | High-priority workflows execute first |
+
+---
+
+## Execute Workflow
+
+Run workflows synchronously or asynchronously:
+
+```javascript
+mcp__flow-nexus__workflow_execute({
+  workflow_id: "workflow_id",
+  input_data: {
+    branch: "main",
+    commit: "abc123",
+    environment: "production"
+  },
+  async: true // Queue-based execution for long-running workflows
+})
+```
+
+### Execution Modes
+
+| Mode | Behavior |
+|------|----------|
+| **Sync** (`async: false`) | Immediate execution, wait for completion |
+| **Async** (`async: true`) | Message queue processing, non-blocking |
+
+---
+
+## Monitor Workflows
+
+```javascript
+// Get workflow status and metrics
+mcp__flow-nexus__workflow_status({
+  workflow_id: "id",
+  execution_id: "specific-run-id", // Optional
+  include_metrics: true
+})
+
+// List workflows with filters
+mcp__flow-nexus__workflow_list({
+  status: "running", // Options: running, completed, failed, pending
+  limit: 10,
+  offset: 0
+})
+
+// Get complete audit trail
+mcp__flow-nexus__workflow_audit_trail({
+  workflow_id: "id",
+  limit: 50,
+  start_time: "2025-01-01T00:00:00Z"
+})
+```
+
+---
+
+## Agent Assignment
+
+Intelligently assign agents to workflow tasks:
+
+```javascript
+mcp__flow-nexus__workflow_agent_assign({
+  task_id: "task_id",
+  agent_type: "coder", // Preferred agent type
+  use_vector_similarity: true // AI-powered capability matching
+})
+```
+
+### Vector Similarity Matching
+
+When `use_vector_similarity` is enabled, the assignment engine uses vector similarity to:
+- Analyze task requirements and agent capabilities
+- Find the optimal agent based on past performance
+- Consider workload and availability
+
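Conceptually, this kind of matching reduces to a similarity score between a task embedding and each agent's capability embedding. A minimal sketch of the idea using plain cosine similarity (an illustration only; the platform's actual embedding and scoring pipeline is internal, and the vectors here are invented):

```javascript
// Illustration: agent matching as cosine similarity between a task
// vector and each agent's capability vector.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function bestAgent(taskVector, agents) {
  // Highest-similarity agent wins the assignment
  return agents.reduce((best, agent) => {
    const score = cosineSimilarity(taskVector, agent.vector);
    return score > best.score ? { agent: agent.name, score } : best;
  }, { agent: null, score: -Infinity });
}

const task = [1, 0.8, 0]; // e.g. "refactor backend code"
const agents = [
  { name: 'coder',      vector: [1, 0.9, 0.1] },
  { name: 'researcher', vector: [0.1, 0.2, 1] }
];
console.log(bestAgent(task, agents).agent); // → coder
```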
+---
+
+## Queue Management
+
+Monitor and manage message queues:
+
+```javascript
+mcp__flow-nexus__workflow_queue_status({
+  queue_name: "optional-specific-queue",
+  include_messages: true // Show pending messages
+})
+```
diff --git a/.claude/skills/github-code-review/assets/pr-template.md b/.claude/skills/github-code-review/assets/pr-template.md
new file mode 100644 (file)
index 0000000..816bb87
--- /dev/null
@@ -0,0 +1,19 @@
+<!-- .github/pull_request_template.md -->
+<!-- Copy this template to .github/ in the target repository. -->
+
+## Swarm Configuration
+- Topology: [mesh/hierarchical/ring/star]
+- Max Agents: [number]
+- Auto-spawn: [yes/no]
+- Priority: [high/medium/low]
+
+## Tasks for Swarm
+- [ ] Task 1 description
+- [ ] Task 2 description
+- [ ] Task 3 description
+
+## Review Focus Areas
+- [ ] Security review
+- [ ] Performance analysis
+- [ ] Architecture validation
+- [ ] Accessibility check
diff --git a/.claude/skills/github-code-review/assets/swarm-config.yml b/.claude/skills/github-code-review/assets/swarm-config.yml
new file mode 100644 (file)
index 0000000..37b0b85
--- /dev/null
@@ -0,0 +1,49 @@
+# Swarm Configuration for GitHub Code Review
+# Copy this file to .github/review-swarm.yml in the target repository.
+
+version: 1
+
+# Label-to-agent mapping
+# PR labels automatically determine which agents to spawn.
+label-mapping:
+  bug:
+    - debugger
+    - tester
+  feature:
+    - architect
+    - coder
+    - tester
+  refactor:
+    - analyst
+    - coder
+  docs:
+    - researcher
+    - writer
+  performance:
+    - analyst
+    - optimizer
+  security:
+    - security
+    - authentication
+    - audit
+
+# Topology selection by PR size
+# The swarm automatically selects a topology based on PR line count.
+topology:
+  small:    # < 100 lines changed
+    type: ring
+    max-agents: 3
+  medium:   # 100-500 lines changed
+    type: mesh
+    max-agents: 5
+  large:    # > 500 lines changed
+    type: hierarchical
+    max-agents: 8
+
+# PR comment commands
+# Users invoke swarm actions directly from PR comments.
+comment-commands:
+  - "/swarm init {topology} {agent-count}"
+  - "/swarm spawn {type} {task-description}"
+  - "/swarm status"
+  - "/swarm review --agents {agent-list}"
diff --git a/.claude/skills/github-code-review/references/ci-cd-workflows.md b/.claude/skills/github-code-review/references/ci-cd-workflows.md
new file mode 100644 (file)
index 0000000..f612194
--- /dev/null
@@ -0,0 +1,154 @@
+# CI/CD Workflows
+
+GitHub Actions workflows, automated review pipelines, and build integration.
+
+---
+
+## Auto-Review on PR Creation
+
+```yaml
+# .github/workflows/auto-review.yml
+name: Automated Code Review
+on:
+  pull_request:
+    types: [opened, synchronize]
+  issue_comment:
+    types: [created]
+
+jobs:
+  swarm-review:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+
+      - name: Setup GitHub CLI
+        run: echo "${{ secrets.GITHUB_TOKEN }}" | gh auth login --with-token
+
+      - name: Run Review Swarm
+        run: |
+          PR_NUM=${{ github.event.pull_request.number }}
+          PR_DATA=$(gh pr view $PR_NUM --json files,title,body,labels)
+          PR_DIFF=$(gh pr diff $PR_NUM)
+
+          REVIEW_OUTPUT=$(npx ruv-swarm github review-all \
+            --pr $PR_NUM \
+            --pr-data "$PR_DATA" \
+            --diff "$PR_DIFF" \
+            --agents "security,performance,style,architecture")
+
+          echo "$REVIEW_OUTPUT" | gh pr review $PR_NUM --comment -F -
+
+          if echo "$REVIEW_OUTPUT" | grep -q "approved"; then
+            gh pr review $PR_NUM --approve
+          elif echo "$REVIEW_OUTPUT" | grep -q "changes-requested"; then
+            gh pr review $PR_NUM --request-changes -b "See review comments above"
+          fi
+
+          # Persist values for later steps (shell vars do not cross step boundaries)
+          echo "PR_NUM=$PR_NUM" >> "$GITHUB_ENV"
+          {
+            echo "REVIEW_OUTPUT<<EOF"
+            echo "$REVIEW_OUTPUT"
+            echo "EOF"
+          } >> "$GITHUB_ENV"
+
+      - name: Update Labels
+        run: |
+          if echo "$REVIEW_OUTPUT" | grep -q "security"; then
+            gh pr edit $PR_NUM --add-label "security-review"
+          fi
+          if echo "$REVIEW_OUTPUT" | grep -q "performance"; then
+            gh pr edit $PR_NUM --add-label "performance-review"
+          fi
+```
+
+---
+
+## Build and Review Pipeline
+
+```yaml
+# .github/workflows/build-and-review.yml
+name: Build and Review
+on: [pull_request]
+
+jobs:
+  build-and-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - run: npm install
+      - run: npm test
+      - run: npm run build
+
+  swarm-review:
+    needs: build-and-test
+    runs-on: ubuntu-latest
+    steps:
+      - name: Run Swarm Review
+        run: |
+          npx ruv-swarm github review-all \
+            --pr ${{ github.event.pull_request.number }} \
+            --include-build-results
+```
+
+---
+
+## Automated PR Fixes
+
+```bash
+npx ruv-swarm github pr-fix 123 \
+  --issues "lint,test-failures,formatting" \
+  --commit-fixes \
+  --push-changes
+```
+
+---
+
+## Progress Updates to PR
+
+```bash
+PROGRESS=$(npx ruv-swarm github pr-progress 123 --format markdown)
+gh pr comment 123 --body "$PROGRESS"
+
+if [[ $(echo "$PROGRESS" | grep -o '[0-9]\+%' | head -1 | sed 's/%//') -gt 90 ]]; then
+  gh pr edit 123 --add-label "ready-for-review"
+fi
+```
+
+---
+
+## Webhook Handler for Comment Commands
+
+```javascript
+// webhook-handler.js
+const { createServer } = require('http');
+const { execSync } = require('child_process');
+
+createServer((req, res) => {
+  if (req.url === '/github-webhook') {
+    // Buffer the request body before parsing; `req` is a stream
+    let body = '';
+    req.on('data', (chunk) => { body += chunk; });
+    req.on('end', () => {
+      const event = JSON.parse(body);
+
+      if (event.action === 'opened' && event.pull_request) {
+        execSync(`npx ruv-swarm github pr-init ${event.pull_request.number}`);
+      }
+
+      if (event.comment && event.comment.body.startsWith('/swarm')) {
+        const command = event.comment.body;
+        execSync(`npx ruv-swarm github handle-comment --pr ${event.issue.number} --command "${command}"`);
+      }
+
+      res.writeHead(200);
+      res.end('OK');
+    });
+  } else {
+    res.writeHead(404);
+    res.end();
+  }
+}).listen(3000);
+```
+
+---
+
+## Auto-Merge When Ready
+
+```bash
+SWARM_STATUS=$(npx ruv-swarm github pr-status 123)
+
+if [[ "$SWARM_STATUS" == "complete" ]]; then
+  REVIEWS=$(gh pr view 123 --json reviews --jq '.reviews | length')
+
+  if [[ $REVIEWS -ge 2 ]]; then
+    gh pr merge 123 --auto --squash
+  fi
+fi
+```
diff --git a/.claude/skills/github-code-review/references/comment-templates.md b/.claude/skills/github-code-review/references/comment-templates.md
new file mode 100644 (file)
index 0000000..7efc106
--- /dev/null
@@ -0,0 +1,146 @@
+# Comment Templates
+
+Templates for generating structured, contextual review comments on GitHub PRs.
+
+---
+
+## Security Issue Comment
+
+````markdown
+🔒 **Security Issue: [Type]**
+
+**Severity**: 🔴 Critical / 🟡 High / 🟢 Low
+
+**Description**:
+[Clear explanation of the security issue]
+
+**Impact**:
+[Potential consequences if not addressed]
+
+**Suggested Fix**:
+```language
+[Code example of the fix]
+```
+
+**References**:
+- [OWASP Guide](link)
+- [Security Best Practices](link)
+````
+
+---
+
+## Performance Issue Comment
+
+````markdown
+⚡ **Performance Issue: [Type]**
+
+**Severity**: 🔴 Critical / 🟡 Medium / 🟢 Low
+
+**Description**:
+[Explanation of the performance bottleneck]
+
+**Measured Impact**:
+- Before: [metric]
+- After: [metric]
+- Regression: [percentage]
+
+**Suggested Optimization**:
+```language
+[Optimized code example]
+```
+
+**Benchmark Data**:
+[Include relevant benchmark results]
+````
+
+---
+
+## Architecture Concern Comment
+
+````markdown
+🏗️ **Architecture Concern: [Type]**
+
+**Pattern Violated**: [SOLID principle / Design pattern]
+
+**Description**:
+[Explanation of the architectural issue]
+
+**Current Design**:
+[Brief description of current approach]
+
+**Recommended Approach**:
+```language
+[Refactored code example]
+```
+
+**Benefits**:
+- [Benefit 1]
+- [Benefit 2]
+````
+
+---
+
+## Style Suggestion Comment
+
+````markdown
+📝 **Style Suggestion: [Type]**
+
+**Convention**: [Which standard is violated]
+
+**Current Code**:
+```language
+[Current code snippet]
+```
+
+**Suggested Change**:
+```language
+[Corrected code snippet]
+```
+
+**Rationale**: [Brief explanation]
+````
+
+---
+
+## Generating Contextual Comments via CLI
+
+```bash
+# Get PR diff with context
+PR_DIFF=$(gh pr diff 123 --color never)
+PR_FILES=$(gh pr view 123 --json files)
+
+# Generate review comments
+COMMENTS=$(npx ruv-swarm github review-comment \
+  --pr 123 \
+  --diff "$PR_DIFF" \
+  --files "$PR_FILES" \
+  --style "constructive" \
+  --include-examples \
+  --suggest-fixes)
+
+# The review-comments API needs the PR's head commit; fetch it once
+COMMIT_ID=$(gh pr view 123 --json headRefOid -q .headRefOid)
+
+# Post comments using gh CLI (-F sends `line` as a typed integer field)
+echo "$COMMENTS" | jq -c '.[]' | while read -r comment; do
+  FILE=$(echo "$comment" | jq -r '.path')
+  LINE=$(echo "$comment" | jq -r '.line')
+  BODY=$(echo "$comment" | jq -r '.body')
+
+  gh api \
+    --method POST \
+    /repos/:owner/:repo/pulls/123/comments \
+    -f path="$FILE" \
+    -F line="$LINE" \
+    -f body="$BODY" \
+    -f commit_id="$COMMIT_ID"
+done
+```
+
+### Batch Comment Management
+
+```bash
+npx ruv-swarm github review-comments \
+  --pr 123 \
+  --group-by "agent,severity" \
+  --summarize \
+  --resolve-outdated
+```
diff --git a/.claude/skills/github-code-review/references/custom-agents.md b/.claude/skills/github-code-review/references/custom-agents.md
new file mode 100644 (file)
index 0000000..70e5cce
--- /dev/null
@@ -0,0 +1,94 @@
+# Custom Review Agents
+
+Create and register project-specific review agents.
+
+---
+
+## Create a Custom Agent
+
+```javascript
+// custom-review-agent.js
+class CustomReviewAgent {
+  constructor(config) {
+    this.config = config;
+    this.rules = config.rules || [];
+  }
+
+  async review(pr) {
+    const issues = [];
+
+    // Custom logic: Check for TODO comments in production code
+    if (await this.checkTodoComments(pr)) {
+      issues.push({
+        severity: 'warning',
+        file: pr.file,
+        line: pr.line,
+        message: 'TODO comment found in production code',
+        suggestion: 'Resolve TODO or create issue to track it'
+      });
+    }
+
+    // Custom logic: Verify API versioning
+    if (await this.checkApiVersioning(pr)) {
+      issues.push({
+        severity: 'error',
+        file: pr.file,
+        line: pr.line,
+        message: 'API endpoint missing versioning',
+        suggestion: 'Add /v1/, /v2/ prefix to API routes'
+      });
+    }
+
+    return issues;
+  }
+
+  async checkTodoComments(pr) {
+    const todoRegex = /\/\/\s*TODO|\/\*\s*TODO/gi;
+    return todoRegex.test(pr.diff);
+  }
+
+  async checkApiVersioning(pr) {
+    const apiRegex = /app\.(get|post|put|delete)\(['"]\/api\/(?!v\d+)/;
+    return apiRegex.test(pr.diff);
+  }
+}
+
+module.exports = CustomReviewAgent;
+```
+
+---
+
+## Register a Custom Agent
+
+```bash
+npx ruv-swarm github register-agent \
+  --name "custom-reviewer" \
+  --file "./custom-review-agent.js" \
+  --category "standards"
+```
+
+---
+
+## Agent Interface Contract
+
+Every custom agent must implement the `review(pr)` method and return an array of issue objects:
+
+```typescript
+interface ReviewIssue {
+  severity: 'error' | 'warning' | 'info';
+  file: string;
+  line: number;
+  message: string;
+  suggestion?: string;
+}
+```
+
+The `pr` object passed to `review()` contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `number` | number | PR number |
+| `diff` | string | Full diff text |
+| `file` | string | Current file path |
+| `line` | number | Current line number |
+| `files` | string[] | List of changed files |
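A quick way to sanity-check an agent against this contract is to feed it a synthetic `pr` object. The harness below inlines a trimmed-down agent whose TODO check mirrors the example above; the `pr` values are made up for the test:

```javascript
// Minimal harness: run a trimmed-down agent against a synthetic `pr`
// object and confirm it returns well-formed ReviewIssue entries.
const agent = {
  async review(pr) {
    const issues = [];
    if (/\/\/\s*TODO|\/\*\s*TODO/i.test(pr.diff)) {
      issues.push({
        severity: 'warning',
        file: pr.file,
        line: pr.line,
        message: 'TODO comment found in production code',
        suggestion: 'Resolve TODO or create issue to track it'
      });
    }
    return issues;
  }
};

const fakePr = {
  number: 123,
  diff: '+ // TODO: remove this hack before release',
  file: 'src/app.js',
  line: 42,
  files: ['src/app.js']
};

agent.review(fakePr).then((issues) => {
  console.log(issues.length, issues[0].severity); // → 1 warning
});
```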
diff --git a/.claude/skills/github-code-review/references/review-agents.md b/.claude/skills/github-code-review/references/review-agents.md
new file mode 100644 (file)
index 0000000..067094c
--- /dev/null
@@ -0,0 +1,184 @@
+# Specialized Review Agents
+
+Reference for all built-in review agents, their checks, metrics, and CLI invocations.
+
+---
+
+## Security Review Agent
+
+**Focus:** Identify security vulnerabilities and suggest fixes.
+
+### CLI Usage
+
+```bash
+# Get changed files from PR
+CHANGED_FILES=$(gh pr view 123 --json files --jq '.files[].path')
+
+# Run security-focused review
+SECURITY_RESULTS=$(npx ruv-swarm github review-security \
+  --pr 123 \
+  --files "$CHANGED_FILES" \
+  --check "owasp,cve,secrets,permissions" \
+  --suggest-fixes)
+
+# Post findings based on severity
+if echo "$SECURITY_RESULTS" | grep -q "critical"; then
+  gh pr review 123 --request-changes --body "$SECURITY_RESULTS"
+  gh pr edit 123 --add-label "security-review-required"
+else
+  gh pr comment 123 --body "$SECURITY_RESULTS"
+fi
+```
+
+### Checks Performed
+
+```json
+{
+  "checks": [
+    "SQL injection vulnerabilities",
+    "XSS attack vectors",
+    "Authentication bypasses",
+    "Authorization flaws",
+    "Cryptographic weaknesses",
+    "Dependency vulnerabilities",
+    "Secret exposure",
+    "CORS misconfigurations"
+  ],
+  "actions": [
+    "Block PR on critical issues",
+    "Suggest secure alternatives",
+    "Add security test cases",
+    "Update security documentation"
+  ]
+}
+```
+
+---
+
+## Performance Review Agent
+
+**Focus:** Analyze performance impact and optimization opportunities.
+
+### CLI Usage
+
+```bash
+npx ruv-swarm github review-performance \
+  --pr 123 \
+  --profile "cpu,memory,io" \
+  --benchmark-against main \
+  --suggest-optimizations
+```
+
+### Metrics Analyzed
+
+```json
+{
+  "metrics": [
+    "Algorithm complexity (Big O analysis)",
+    "Database query efficiency",
+    "Memory allocation patterns",
+    "Cache utilization",
+    "Network request optimization",
+    "Bundle size impact",
+    "Render performance"
+  ],
+  "benchmarks": [
+    "Compare with baseline",
+    "Load test simulations",
+    "Memory leak detection",
+    "Bottleneck identification"
+  ]
+}
+```
+
+---
+
+## Architecture Review Agent
+
+**Focus:** Evaluate design patterns and architectural decisions.
+
+### CLI Usage
+
+```bash
+npx ruv-swarm github review-architecture \
+  --pr 123 \
+  --check "patterns,coupling,cohesion,solid" \
+  --visualize-impact \
+  --suggest-refactoring
+```
+
+### Analysis Scope
+
+```json
+{
+  "patterns": [
+    "Design pattern adherence",
+    "SOLID principles",
+    "DRY violations",
+    "Separation of concerns",
+    "Dependency injection",
+    "Layer violations",
+    "Circular dependencies"
+  ],
+  "metrics": [
+    "Coupling metrics",
+    "Cohesion scores",
+    "Complexity measures",
+    "Maintainability index"
+  ]
+}
+```
+
+---
+
+## Style & Convention Agent
+
+**Focus:** Enforce coding standards and best practices.
+
+### CLI Usage
+
+```bash
+npx ruv-swarm github review-style \
+  --pr 123 \
+  --check "formatting,naming,docs,tests" \
+  --auto-fix "formatting,imports,whitespace"
+```
+
+### Checks and Auto-Fix Capabilities
+
+```json
+{
+  "checks": [
+    "Code formatting",
+    "Naming conventions",
+    "Documentation standards",
+    "Comment quality",
+    "Test coverage",
+    "Error handling patterns",
+    "Logging standards"
+  ],
+  "auto-fix": [
+    "Formatting issues",
+    "Import organization",
+    "Trailing whitespace",
+    "Simple naming issues"
+  ]
+}
+```
+
+---
+
+## Accessibility Agent
+
+**Focus:** Validate UI accessibility compliance (WCAG, ARIA).
+
+### CLI Usage
+
+```bash
+npx ruv-swarm github review-accessibility \
+  --pr 123 \
+  --check "wcag-aa,aria,keyboard,screen-reader" \
+  --suggest-fixes
+```
+
+The accessibility agent activates automatically for PRs touching `**/components/**`, `**/styles/**`, or `**/pages/**`.
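+
+The corresponding trigger entry (documented in full in review-configuration.md) can be sketched as follows; the key names are illustrative, not a fixed schema:
+
+```json
+{
+  "triggers": {
+    "ui-changes": {
+      "paths": ["**/components/**", "**/styles/**", "**/pages/**"],
+      "agents": ["accessibility", "style", "i18n"],
+      "checks": ["wcag-aa", "aria", "keyboard", "screen-reader"]
+    }
+  }
+}
+```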
diff --git a/.claude/skills/github-code-review/references/review-configuration.md b/.claude/skills/github-code-review/references/review-configuration.md
new file mode 100644 (file)
index 0000000..416185f
--- /dev/null
@@ -0,0 +1,158 @@
+# Review Configuration
+
+Configuration files, custom triggers, quality gates, and threshold definitions.
+
+---
+
+## Configuration File
+
+```yaml
+# .github/review-swarm.yml
+version: 1
+review:
+  auto-trigger: true
+  required-agents:
+    - security
+    - performance
+    - style
+  optional-agents:
+    - architecture
+    - accessibility
+    - i18n
+
+  thresholds:
+    security: block      # Block merge on security issues
+    performance: warn    # Warn on performance issues
+    style: suggest       # Suggest style improvements
+
+  rules:
+    security:
+      - no-eval
+      - no-hardcoded-secrets
+      - proper-auth-checks
+      - validate-input
+    performance:
+      - no-n-plus-one
+      - efficient-queries
+      - proper-caching
+      - optimize-loops
+    architecture:
+      - max-coupling: 5
+      - min-cohesion: 0.7
+      - follow-patterns
+      - avoid-circular-deps
+```
+
+---
+
+## Custom Review Triggers
+
+```json
+{
+  "triggers": {
+    "high-risk-files": {
+      "paths": ["**/auth/**", "**/payment/**", "**/admin/**"],
+      "agents": ["security", "architecture"],
+      "depth": "comprehensive",
+      "require-approval": true
+    },
+    "performance-critical": {
+      "paths": ["**/api/**", "**/database/**", "**/cache/**"],
+      "agents": ["performance", "database"],
+      "benchmarks": true,
+      "regression-threshold": "5%"
+    },
+    "ui-changes": {
+      "paths": ["**/components/**", "**/styles/**", "**/pages/**"],
+      "agents": ["accessibility", "style", "i18n"],
+      "visual-tests": true,
+      "responsive-check": true
+    }
+  }
+}
+```
+
+---
+
+## Quality Gates
+
+### Status Checks
+
+```yaml
+# Required status checks in branch protection
+protection_rules:
+  required_status_checks:
+    strict: true
+    contexts:
+      - "review-swarm/security"
+      - "review-swarm/performance"
+      - "review-swarm/architecture"
+      - "review-swarm/tests"
+```
+
+### Define Quality Gate Thresholds
+
+```bash
+npx ruv-swarm github quality-gates \
+  --define '{
+    "security": {"threshold": "no-critical"},
+    "performance": {"regression": "<5%"},
+    "coverage": {"minimum": "80%"},
+    "architecture": {"complexity": "<10"},
+    "duplication": {"maximum": "5%"}
+  }'
+```
+
+### Track Review Metrics
+
+```bash
+npx ruv-swarm github review-metrics \
+  --period 30d \
+  --metrics "issues-found,false-positives,fix-rate,time-to-review" \
+  --export-dashboard \
+  --format json
+```
+
+---
+
+## Security Considerations
+
+### Best Practices
+
+1. **Token Permissions**: Ensure GitHub tokens have minimal required scopes.
+2. **Command Validation**: Validate all PR comments before execution.
+3. **Rate Limiting**: Implement rate limits for PR operations.
+4. **Audit Trail**: Log all swarm operations for compliance.
+5. **Secret Management**: Never expose API keys in PR comments or logs.
+
+### Security Checklist
+
+- [ ] GitHub token scoped to repository only
+- [ ] Webhook signatures verified
+- [ ] Command injection protection enabled
+- [ ] Rate limiting configured
+- [ ] Audit logging enabled
+- [ ] Secrets scanning active
+- [ ] Branch protection rules enforced
+
+---
+
+## Best Practices
+
+### Review Configuration
+- Define clear review criteria upfront.
+- Set appropriate severity thresholds.
+- Configure agent specializations for the project stack.
+- Establish override procedures for emergencies.
+
+### Comment Quality
+- Provide actionable, specific feedback.
+- Include code examples with suggestions.
+- Reference documentation and best practices.
+- Maintain respectful, constructive tone.
+
+### Performance Optimization
+- Cache analysis results to avoid redundant work.
+- Use incremental reviews for large PRs.
+- Enable parallel agent execution.
+- Batch comment operations efficiently.
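+
+The four optimizations above correspond to flags used elsewhere in this skill (see troubleshooting.md), combined here in one invocation:
+
+```bash
+# Incremental, parallel, cached review for a large PR
+npx ruv-swarm github review-init --pr 123 \
+  --incremental \
+  --parallel \
+  --cache-results
+
+# Post all comments in one batch
+npx ruv-swarm github review-comments --pr 123 --batch
+```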
diff --git a/.claude/skills/github-code-review/references/troubleshooting.md b/.claude/skills/github-code-review/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..0107df5
--- /dev/null
@@ -0,0 +1,80 @@
+# Troubleshooting
+
+Common issues and solutions for the GitHub Code Review skill.
+
+---
+
+## Issue: Review agents not spawning
+
+**Solution:**
+
+```bash
+# Check swarm status
+npx ruv-swarm swarm-status
+
+# Verify GitHub CLI authentication
+gh auth status
+
+# Re-initialize swarm
+npx ruv-swarm github review-init --pr 123 --force
+```
+
+---
+
+## Issue: Comments not posting to PR
+
+**Solution:**
+
+```bash
+# Verify GitHub token permissions
+gh auth status
+
+# Check API rate limits
+gh api rate_limit
+
+# Use batch comment posting
+npx ruv-swarm github review-comments --pr 123 --batch
+```
+
+---
+
+## Issue: Review taking too long
+
+**Solution:**
+
+```bash
+# Use incremental review for large PRs
+npx ruv-swarm github review-init --pr 123 --incremental
+
+# Reduce agent count
+npx ruv-swarm github review-init --pr 123 --agents "security,style" --max-agents 3
+
+# Enable parallel processing
+npx ruv-swarm github review-init --pr 123 --parallel --cache-results
+```
+
+---
+
+## Issue: False positives in security review
+
+**Solution:**
+
+```bash
+# Train review agents on past reviews to reduce false positives
+npx ruv-swarm github review-learn \
+  --analyze-past-reviews \
+  --identify-patterns \
+  --improve-suggestions \
+  --reduce-false-positives
+```
+
+---
+
+## Issue: Webhook events not received
+
+**Checklist:**
+
+1. Verify webhook URL is reachable from GitHub.
+2. Confirm webhook secret matches the configured value.
+3. Check that the correct events are subscribed (pull_request, issue_comment).
+4. Review webhook delivery logs in GitHub repository settings.
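+
+Steps 1-4 can be checked from the command line with the GitHub REST API (`OWNER`, `REPO`, and `HOOK_ID` are placeholders):
+
+```bash
+# List configured webhooks with their target URLs and subscribed events
+gh api repos/OWNER/REPO/hooks --jq '.[] | {id, url: .config.url, active, events}'
+
+# Inspect recent deliveries for one hook (event type and delivery status)
+gh api repos/OWNER/REPO/hooks/HOOK_ID/deliveries \
+  --jq '.[] | {event, status, delivered_at}'
+```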
diff --git a/.claude/skills/github-code-review/references/workflow-examples.md b/.claude/skills/github-code-review/references/workflow-examples.md
new file mode 100644 (file)
index 0000000..e91d87e
--- /dev/null
@@ -0,0 +1,107 @@
+# Workflow Examples
+
+Complete end-to-end examples for common review scenarios.
+
+---
+
+## Example 1: Security-Critical PR
+
+Review authentication system changes with maximum depth.
+
+```bash
+npx ruv-swarm github review-init \
+  --pr 456 \
+  --agents "security,authentication,audit" \
+  --depth "maximum" \
+  --require-security-approval \
+  --penetration-test
+```
+
+---
+
+## Example 2: Performance-Sensitive PR
+
+Review database optimization with benchmarks and profiling.
+
+```bash
+npx ruv-swarm github review-init \
+  --pr 789 \
+  --agents "performance,database,caching" \
+  --benchmark \
+  --profile \
+  --load-test
+```
+
+---
+
+## Example 3: UI Component PR
+
+Review new component library with accessibility and responsive checks.
+
+```bash
+npx ruv-swarm github review-init \
+  --pr 321 \
+  --agents "accessibility,style,i18n,docs" \
+  --visual-regression \
+  --component-tests \
+  --responsive-check
+```
+
+---
+
+## Example 4: Feature Development PR
+
+Review new feature implementation with hierarchical topology.
+
+```bash
+gh pr view 456 --json body,labels,files | \
+  npx ruv-swarm github pr-init 456 \
+    --topology hierarchical \
+    --agents "architect,coder,tester,security" \
+    --auto-assign-tasks
+```
+
+---
+
+## Example 5: Bug Fix PR
+
+Review bug fix with debugging focus and regression testing.
+
+```bash
+npx ruv-swarm github pr-init 789 \
+  --topology mesh \
+  --agents "debugger,analyst,tester" \
+  --priority high \
+  --regression-test
+```
+
+---
+
+## Example 6: Complete PR Management with Claude Code
+
+```javascript
+[Single Message - Parallel Execution]:
+  // Initialize coordination
+  mcp__claude-flow__swarm_init { topology: "hierarchical", maxAgents: 5 }
+  mcp__claude-flow__agent_spawn { type: "reviewer", name: "Senior Reviewer" }
+  mcp__claude-flow__agent_spawn { type: "tester", name: "QA Engineer" }
+  mcp__claude-flow__agent_spawn { type: "coordinator", name: "Merge Coordinator" }
+
+  // Create and manage PR using gh CLI
+  Bash("gh pr create --title 'Feature: Add authentication' --base main")
+  Bash("gh pr view 54 --json files,diff")
+  Bash("gh pr review 54 --approve --body 'LGTM after automated review'")
+
+  // Execute tests and validation
+  Bash("npm test")
+  Bash("npm run lint")
+  Bash("npm run build")
+
+  // Track progress
+  TodoWrite { todos: [
+    { content: "Complete code review", status: "completed", activeForm: "Completing code review" },
+    { content: "Run test suite", status: "completed", activeForm: "Running test suite" },
+    { content: "Validate security", status: "completed", activeForm: "Validating security" },
+    { content: "Merge when ready", status: "pending", activeForm: "Merging when ready" }
+  ]}
+```
diff --git a/.claude/skills/github-multi-repo/assets/architecture-layouts.md b/.claude/skills/github-multi-repo/assets/architecture-layouts.md
new file mode 100644 (file)
index 0000000..a377e98
--- /dev/null
@@ -0,0 +1,72 @@
+# Architecture Layouts
+
+Reference directory structures for monorepo and command organization.
+
+---
+
+## Monorepo Structure
+
+```
+ruv-FANN/
+├── packages/
+│   ├── claude-code-flow/
+│   │   ├── src/
+│   │   ├── .claude/
+│   │   └── package.json
+│   ├── ruv-swarm/
+│   │   ├── src/
+│   │   ├── wasm/
+│   │   └── package.json
+│   └── shared/
+│       ├── types/
+│       ├── utils/
+│       └── config/
+├── tools/
+│   ├── build/
+│   ├── test/
+│   └── deploy/
+├── docs/
+│   ├── architecture/
+│   ├── integration/
+│   └── examples/
+└── .github/
+    ├── workflows/
+    ├── templates/
+    └── actions/
+```
+
+---
+
+## Command Structure
+
+```
+.claude/
+├── commands/
+│   ├── github/
+│   │   ├── github-modes.md
+│   │   ├── pr-manager.md
+│   │   ├── issue-tracker.md
+│   │   └── sync-coordinator.md
+│   ├── sparc/
+│   │   ├── sparc-modes.md
+│   │   ├── coder.md
+│   │   └── tester.md
+│   └── swarm/
+│       ├── coordination.md
+│       └── orchestration.md
+├── templates/
+│   ├── issue.md
+│   ├── pr.md
+│   └── project.md
+└── config.json
+```
+
+---
+
+## Layout Selection Guide
+
+| Project Type | Recommended Layout | Reason |
+|-------------|-------------------|--------|
+| Multi-team, shared deps | Monorepo | Single source of truth, atomic changes |
+| Independent services | Multi-repo | Independent deployment, clear ownership |
+| Hybrid (shared libs + apps) | Monorepo + satellite repos | Shared code in monorepo, apps separate |
diff --git a/.claude/skills/github-multi-repo/assets/cli-reference.md b/.claude/skills/github-multi-repo/assets/cli-reference.md
new file mode 100644 (file)
index 0000000..2b06517
--- /dev/null
@@ -0,0 +1,227 @@
+# CLI Reference
+
+Complete command reference for `npx claude-flow skill run github-multi-repo`.
+
+---
+
+## Initialization
+
+```bash
+# Basic swarm initialization
+npx claude-flow skill run github-multi-repo init \
+  --repos "org/frontend,org/backend,org/shared" \
+  --topology hierarchical
+
+# Advanced initialization with synchronization
+npx claude-flow skill run github-multi-repo init \
+  --repos "org/frontend,org/backend,org/shared" \
+  --topology mesh \
+  --shared-memory \
+  --sync-strategy eventual
+```
+
+---
+
+## Package Synchronization
+
+```bash
+# Synchronize package versions and dependencies
+npx claude-flow skill run github-multi-repo sync \
+  --packages "claude-code-flow,ruv-swarm" \
+  --align-versions \
+  --update-docs
+```
+
+---
+
+## Architecture Optimization
+
+```bash
+# Analyze and optimize repository structure
+npx claude-flow skill run github-multi-repo optimize \
+  --analyze-structure \
+  --suggest-improvements \
+  --create-templates
+```
+
+---
+
+## Use Cases
+
+### Microservices Coordination
+
+```bash
+npx claude-flow skill run github-multi-repo microservices \
+  --services "auth,users,orders,payments" \
+  --ensure-compatibility \
+  --sync-contracts \
+  --integration-tests
+```
+
+### Library Updates
+
+```bash
+npx claude-flow skill run github-multi-repo lib-update \
+  --library "org/shared-lib" \
+  --version "2.0.0" \
+  --find-consumers \
+  --update-imports \
+  --run-tests
+```
+
+### Organization-Wide Policy
+
+```bash
+npx claude-flow skill run github-multi-repo org-policy \
+  --policy "add-security-headers" \
+  --repos "org/*" \
+  --validate-compliance \
+  --create-reports
+```
+
+### Full-Stack Application Update
+
+```bash
+npx claude-flow skill run github-multi-repo fullstack-update \
+  --frontend "org/web-app" \
+  --backend "org/api-server" \
+  --database "org/db-migrations" \
+  --coordinate-deployment
+```
+
+### Cross-Team Collaboration
+
+```bash
+npx claude-flow skill run github-multi-repo cross-team \
+  --teams "frontend,backend,devops" \
+  --task "implement-feature-x" \
+  --assign-by-expertise \
+  --track-progress
+```
+
+---
+
+## Monitoring and Dashboards
+
+### Multi-Repo Dashboard
+
+```bash
+npx claude-flow skill run github-multi-repo dashboard \
+  --port 3000 \
+  --metrics "agent-activity,task-progress,memory-usage" \
+  --real-time
+```
+
+### Dependency Graph
+
+```bash
+npx claude-flow skill run github-multi-repo dep-graph \
+  --format mermaid \
+  --include-agents \
+  --show-data-flow
+```
+
+### Health Check
+
+```bash
+npx claude-flow skill run github-multi-repo health-check \
+  --repos "org/*" \
+  --check "connectivity,memory,agents" \
+  --alert-on-issues
+```
+
+---
+
+## Performance Optimization
+
+### Caching Strategy
+
+```bash
+npx claude-flow skill run github-multi-repo cache-strategy \
+  --analyze-patterns \
+  --suggest-cache-layers \
+  --implement-invalidation
+```
+
+### Parallel Execution
+
+```bash
+npx claude-flow skill run github-multi-repo parallel-optimize \
+  --analyze-dependencies \
+  --identify-parallelizable \
+  --execute-optimal
+```
+
+### Resource Pooling
+
+```bash
+npx claude-flow skill run github-multi-repo resource-pool \
+  --share-agents \
+  --distribute-load \
+  --monitor-usage
+```
+
+---
+
+## Troubleshooting
+
+### Connectivity Diagnostics
+
+```bash
+npx claude-flow skill run github-multi-repo diagnose-connectivity \
+  --test-all-repos \
+  --check-permissions \
+  --verify-webhooks
+```
+
+### Memory Synchronization Debug
+
+```bash
+npx claude-flow skill run github-multi-repo debug-memory \
+  --check-consistency \
+  --identify-conflicts \
+  --repair-state
+```
+
+### Performance Analysis
+
+```bash
+npx claude-flow skill run github-multi-repo perf-analysis \
+  --profile-operations \
+  --identify-bottlenecks \
+  --suggest-optimizations
+```
+
+---
+
+## Advanced Features
+
+### Distributed Task Queue
+
+```bash
+npx claude-flow skill run github-multi-repo queue \
+  --backend redis \
+  --workers 10 \
+  --priority-routing \
+  --dead-letter-queue
+```
+
+### Cross-Repo Testing
+
+```bash
+npx claude-flow skill run github-multi-repo test \
+  --setup-test-env \
+  --link-services \
+  --run-e2e \
+  --tear-down
+```
+
+### Monorepo Migration
+
+```bash
+npx claude-flow skill run github-multi-repo to-monorepo \
+  --analyze-repos \
+  --suggest-structure \
+  --preserve-history \
+  --create-migration-prs
+```
diff --git a/.claude/skills/github-multi-repo/references/configuration.md b/.claude/skills/github-multi-repo/references/configuration.md
new file mode 100644 (file)
index 0000000..550077e
--- /dev/null
@@ -0,0 +1,112 @@
+# Configuration
+
+Multi-repo config file format, repository roles, and communication strategies.
+
+---
+
+## Multi-Repo Config File
+
+```yaml
+# .swarm/multi-repo.yml
+version: 1
+organization: my-org
+
+repositories:
+  - name: frontend
+    url: github.com/my-org/frontend
+    role: ui
+    agents: [coder, designer, tester]
+
+  - name: backend
+    url: github.com/my-org/backend
+    role: api
+    agents: [architect, coder, tester]
+
+  - name: shared
+    url: github.com/my-org/shared
+    role: library
+    agents: [analyst, coder]
+
+coordination:
+  topology: hierarchical
+  communication: webhook
+  memory: redis://shared-memory
+
+dependencies:
+  - from: frontend
+    to: [backend, shared]
+  - from: backend
+    to: [shared]
+```
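+
+The `dependencies` section above defines this graph (shown here in Mermaid, the same format the `dep-graph --format mermaid` command emits):
+
+```mermaid
+graph TD
+  frontend --> backend
+  frontend --> shared
+  backend --> shared
+```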
+
+---
+
+## Repository Roles
+
+```javascript
+{
+  "roles": {
+    "ui": {
+      "responsibilities": ["user-interface", "ux", "accessibility"],
+      "default-agents": ["designer", "coder", "tester"]
+    },
+    "api": {
+      "responsibilities": ["endpoints", "business-logic", "data"],
+      "default-agents": ["architect", "coder", "security"]
+    },
+    "library": {
+      "responsibilities": ["shared-code", "utilities", "types"],
+      "default-agents": ["analyst", "coder", "documenter"]
+    }
+  }
+}
+```
+
+---
+
+## Communication Strategies
+
+### Webhook-Based Coordination
+
+```javascript
+const { MultiRepoSwarm } = require('ruv-swarm');
+
+const swarm = new MultiRepoSwarm({
+  webhook: {
+    url: 'https://swarm-coordinator.example.com',
+    secret: process.env.WEBHOOK_SECRET
+  }
+});
+
+swarm.on('repo:update', async (event) => {
+  await swarm.propagate(event, {
+    to: event.dependencies,
+    strategy: 'eventual-consistency'
+  });
+});
+```
+
+### Event Streaming (Kafka)
+
+```yaml
+# Kafka configuration for real-time coordination
+kafka:
+  brokers: ['kafka1:9092', 'kafka2:9092']
+  topics:
+    swarm-events:
+      partitions: 10
+      replication: 3
+    swarm-memory:
+      partitions: 5
+      replication: 3
+```
+
+---
+
+## Key Decisions
+
+| Setting | Options | Recommendation |
+|---------|---------|----------------|
+| Topology | `hierarchical`, `mesh` | `hierarchical` for teams with clear ownership |
+| Communication | `webhook`, `kafka`, `redis` | `webhook` for simplicity, `kafka` for scale |
+| Memory backend | `redis`, `in-memory` | `redis` for persistence across restarts |
diff --git a/.claude/skills/github-multi-repo/references/cross-repo-swarm.md b/.claude/skills/github-multi-repo/references/cross-repo-swarm.md
new file mode 100644 (file)
index 0000000..24f73ca
--- /dev/null
@@ -0,0 +1,87 @@
+# Cross-Repository Swarm Orchestration
+
+Detailed procedures and code examples for cross-repository AI swarm coordination.
+
+---
+
+## Repository Discovery
+
+Auto-discover related repositories and analyze their dependency graph using `gh` CLI.
+
+```javascript
+// Auto-discover related repositories with gh CLI
+const REPOS = Bash(`gh repo list my-organization --limit 100 \
+  --json name,description,languages,topics \
+  --jq '.[] | select(.languages | keys | contains(["TypeScript"]))'`)
+
+// Analyze repository dependencies
+const DEPS = Bash(`gh repo list my-organization --json name | \
+  jq -r '.[].name' | while read -r repo; do
+    gh api repos/my-organization/$repo/contents/package.json \
+      --jq '.content' 2>/dev/null | base64 -d | jq '{name, dependencies}'
+  done | jq -s '.'`)
+
+// Initialize swarm with discovered repositories
+mcp__claude-flow__swarm_init({
+  topology: "hierarchical",
+  maxAgents: 8,
+  metadata: { repos: REPOS, dependencies: DEPS }
+})
+```
+
+---
+
+## Synchronized Operations
+
+Execute synchronized changes across multiple repositories with coordination agents.
+
+```javascript
+// Execute synchronized changes across repositories
+[Parallel Multi-Repo Operations]:
+  // Spawn coordination agents
+  Task("Repository Coordinator", "Coordinate changes across all repositories", "coordinator")
+  Task("Dependency Analyzer", "Analyze cross-repo dependencies", "analyst")
+  Task("Integration Tester", "Validate cross-repo changes", "tester")
+
+  // Get matching repositories
+  Bash(`gh repo list org --limit 100 --json name \
+    --jq '.[] | select(.name | test("-service$")) | .name' > /tmp/repos.txt`)
+
+  // Execute task across repositories
+  Bash(`cat /tmp/repos.txt | while read -r repo; do
+    gh repo clone org/$repo /tmp/$repo -- --depth=1
+    cd /tmp/$repo
+
+    # Apply changes
+    npm update
+
+    # Create PR if tests pass
+    if npm test; then
+      git checkout -b update-dependencies-$(date +%Y%m%d)
+      git add -A
+      git commit -m "chore: Update dependencies"
+      git push origin HEAD
+      gh pr create --title "Update dependencies" --body "Automated update" --label "dependencies"
+    fi
+  done`)
+
+  // Track all operations
+  TodoWrite { todos: [
+    { id: "discover", content: "Discover all service repositories", status: "completed" },
+    { id: "update", content: "Update dependencies", status: "completed" },
+    { id: "test", content: "Run integration tests", status: "in_progress" },
+    { id: "pr", content: "Create pull requests", status: "pending" }
+  ]}
+```
+
+---
+
+## Key Concepts
+
+| Concept | Description |
+|---------|-------------|
+| **Topology** | `hierarchical` for tree-shaped orgs, `mesh` for flat collaboration |
+| **Agent Roles** | coordinator, analyst, tester; one focus per agent |
+| **Parallelism** | Independent repos process in parallel; dependent repos process sequentially |
+| **Tracking** | Use `TodoWrite` or MCP memory to track operation progress |
diff --git a/.claude/skills/github-multi-repo/references/orchestration-workflows.md b/.claude/skills/github-multi-repo/references/orchestration-workflows.md
new file mode 100644 (file)
index 0000000..b921942
--- /dev/null
@@ -0,0 +1,125 @@
+# Orchestration Workflows
+
+Detailed procedures for dependency management, cross-repo refactoring, and security patch deployment.
+
+---
+
+## Dependency Management
+
+Update dependencies organization-wide with tracking issues and automated PRs.
+
+```javascript
+// Update dependencies across all repositories
+[Organization-Wide Dependency Update]:
+  // Create tracking issue (gh issue create prints the new issue URL;
+  // keep the trailing path segment as the issue number)
+  TRACKING_ISSUE=$(Bash(`gh issue create \
+    --title "Dependency Update: typescript@5.0.0" \
+    --body "Tracking TypeScript update across all repositories" \
+    --label "dependencies,tracking" | awk -F/ '{print $NF}'`))
+
+  // Find all TypeScript repositories
+  TS_REPOS=$(Bash(`gh repo list org --limit 100 --json name | \
+    jq -r '.[].name' | while read -r repo; do
+      if gh api repos/org/$repo/contents/package.json 2>/dev/null | \
+         jq -r '.content' | base64 -d | grep -q '"typescript"'; then
+        echo "$repo"
+      fi
+    done`))
+
+  // Update each repository
+  Bash(`echo "$TS_REPOS" | while read -r repo; do
+    gh repo clone org/$repo /tmp/$repo -- --depth=1
+    cd /tmp/$repo
+
+    npm install --save-dev typescript@5.0.0
+
+    if npm test; then
+      git checkout -b update-typescript-5
+      git add package.json package-lock.json
+      git commit -m "chore: Update TypeScript to 5.0.0
+
+Part of #$TRACKING_ISSUE"
+
+      git push origin HEAD
+      gh pr create \
+        --title "Update TypeScript to 5.0.0" \
+        --body "Updates TypeScript\n\nTracking: #$TRACKING_ISSUE" \
+        --label "dependencies"
+    else
+      gh issue comment $TRACKING_ISSUE \
+        --body "Failed to update $repo - tests failing"
+    fi
+  done`)
+```
+
+---
+
+## Cross-Repo Refactoring
+
+Coordinate large-scale refactoring operations across multiple repositories.
+
+```javascript
+// Coordinate large-scale refactoring
+[Cross-Repo Refactoring]:
+  // Initialize refactoring swarm
+  mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 8 })
+
+  // Spawn specialized agents
+  Task("Refactoring Coordinator", "Coordinate refactoring across repos", "coordinator")
+  Task("Impact Analyzer", "Analyze refactoring impact", "analyst")
+  Task("Code Transformer", "Apply refactoring changes", "coder")
+  Task("Migration Guide Creator", "Create migration documentation", "documenter")
+  Task("Integration Tester", "Validate refactored code", "tester")
+
+  // Execute refactoring
+  mcp__claude-flow__task_orchestrate({
+    task: "Rename OldAPI to NewAPI across all repositories",
+    strategy: "sequential",
+    priority: "high"
+  })
+```
+
+---
+
+## Security Patch Deployment
+
+Scan all repositories for vulnerabilities and apply patches automatically.
+
+```javascript
+// Coordinate security patches
+[Security Patch Deployment]:
+  // Scan all repositories
+  Bash(`gh repo list org --limit 100 --json name | jq -r '.[].name' | \
+    while read -r repo; do
+      gh repo clone org/$repo /tmp/$repo -- --depth=1
+      cd /tmp/$repo
+      npm audit --json > /tmp/audit-$repo.json
+    done`)
+
+  // Apply patches
+  Bash(`for repo in /tmp/audit-*.json; do
+    if [ $(jq '.vulnerabilities | length' $repo) -gt 0 ]; then
+      cd /tmp/$(basename $repo .json | sed 's/audit-//')
+      npm audit fix
+
+      if npm test; then
+        git checkout -b security/patch-$(date +%Y%m%d)
+        git add -A
+        git commit -m "security: Apply security patches"
+        git push origin HEAD
+        gh pr create --title "Security patches" --body "Automated npm audit fix" --label "security"
+      fi
+    fi
+  done`)
+```
+
+---
+
+## Workflow Summary
+
+| Workflow | Trigger | Agents | Strategy |
+|----------|---------|--------|----------|
+| Dependency update | Manual or scheduled | coordinator, analyst, tester | Sequential per repo |
+| Cross-repo refactoring | Manual | coordinator, analyst, coder, documenter, tester | Mesh topology |
+| Security patch | Audit alert | coordinator, tester | Parallel scan, sequential patch |
diff --git a/.claude/skills/github-multi-repo/references/package-sync.md b/.claude/skills/github-multi-repo/references/package-sync.md
new file mode 100644 (file)
index 0000000..41b68e5
--- /dev/null
@@ -0,0 +1,128 @@
+# Package Synchronization
+
+Detailed procedures for version alignment, documentation sync, and cross-package integration.
+
+---
+
+## Version Alignment
+
+Synchronize package dependencies and versions across a multi-repo organization.
+
+```javascript
+// Synchronize package dependencies and versions
+[Complete Package Sync]:
+  // Initialize sync swarm
+  mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 5 })
+
+  // Spawn sync agents
+  Task("Sync Coordinator", "Coordinate version alignment", "coordinator")
+  Task("Dependency Analyzer", "Analyze dependencies", "analyst")
+  Task("Integration Tester", "Validate synchronization", "tester")
+
+  // Read package states
+  Read("/workspaces/ruv-FANN/claude-code-flow/claude-code-flow/package.json")
+  Read("/workspaces/ruv-FANN/ruv-swarm/npm/package.json")
+
+  // Align versions using gh CLI
+  Bash(`gh api repos/:owner/:repo/git/refs \
+    -f ref='refs/heads/sync/package-alignment' \
+    -f sha=$(gh api repos/:owner/:repo/git/refs/heads/main --jq '.object.sha')`)
+
+  // Update package.json files
+  Bash(`gh api repos/:owner/:repo/contents/package.json \
+    --method PUT \
+    -f message="feat: Align Node.js version requirements" \
+    -f branch="sync/package-alignment" \
+    -f content="$(cat aligned-package.json | base64)"`)
+
+  // Store sync state
+  mcp__claude-flow__memory_usage({
+    action: "store",
+    key: "sync/packages/status",
+    value: {
+      timestamp: Date.now(),
+      packages_synced: ["claude-code-flow", "ruv-swarm"],
+      status: "synchronized"
+    }
+  })
+```
+
+---
+
+## Documentation Synchronization
+
+Keep CLAUDE.md and other shared documentation consistent across packages.
+
+```javascript
+// Synchronize CLAUDE.md files across packages
+[Documentation Sync]:
+  // Get source documentation
+  Bash(`gh api repos/:owner/:repo/contents/ruv-swarm/docs/CLAUDE.md \
+    --jq '.content' | base64 -d > /tmp/claude-source.md`)
+
+  // Update target documentation
+  Bash(`gh api repos/:owner/:repo/contents/claude-code-flow/CLAUDE.md \
+    --method PUT \
+    -f message="docs: Synchronize CLAUDE.md" \
+    -f branch="sync/documentation" \
+    -f content="$(cat /tmp/claude-source.md | base64)"`)
+
+  // Track sync status
+  mcp__claude-flow__memory_usage({
+    action: "store",
+    key: "sync/documentation/status",
+    value: { status: "synchronized", files: ["CLAUDE.md"] }
+  })
+```
+
+---
+
+## Cross-Package Integration
+
+Coordinate feature implementation spanning multiple packages.
+
+```javascript
+// Coordinate feature implementation across packages
+[Cross-Package Feature]:
+  // Push changes to all packages
+  mcp__github__push_files({
+    branch: "feature/github-integration",
+    files: [
+      {
+        path: "claude-code-flow/.claude/commands/github/github-modes.md",
+        content: "[GitHub modes documentation]"
+      },
+      {
+        path: "ruv-swarm/src/github-coordinator/hooks.js",
+        content: "[GitHub coordination hooks]"
+      }
+    ],
+    message: "feat: Add GitHub workflow integration"
+  })
+
+  // Create coordinated PR
+  Bash(`gh pr create \
+    --title "Feature: GitHub Workflow Integration" \
+    --body "## GitHub Integration
+
+### Features
+- Multi-repo coordination
+- Package synchronization
+- Architecture optimization
+
+### Testing
+- [x] Package dependency verification
+- [x] Integration tests
+- [x] Cross-package compatibility"`)
+```
+
+---
+
+## Sync Quality Metrics
+
+| Metric | Description |
+|--------|-------------|
+| Package version alignment % | How many packages share the same dependency versions |
+| Documentation consistency score | Divergence ratio across shared docs |
+| Sync completion time | End-to-end duration of a full sync cycle |
+| Integration test success rate | Pass rate after synchronization |
diff --git a/.claude/skills/github-multi-repo/references/repo-architecture.md b/.claude/skills/github-multi-repo/references/repo-architecture.md
new file mode 100644 (file)
index 0000000..45d6663
--- /dev/null
@@ -0,0 +1,141 @@
+# Repository Architecture
+
+Detailed procedures for structure analysis, template creation, and cross-repository standardization.
+
+---
+
+## Structure Analysis
+
+Analyze and optimize repository structure using an architecture swarm.
+
+```javascript
+// Analyze and optimize repository structure
+[Architecture Analysis]:
+  // Initialize architecture swarm
+  mcp__claude-flow__swarm_init({ topology: "hierarchical", maxAgents: 6 })
+
+  // Spawn architecture agents
+  Task("Senior Architect", "Analyze repository structure", "architect")
+  Task("Structure Analyst", "Identify optimization opportunities", "analyst")
+  Task("Performance Optimizer", "Optimize structure for scalability", "optimizer")
+  Task("Best Practices Researcher", "Research architecture patterns", "researcher")
+
+  // Analyze current structures
+  LS("/workspaces/ruv-FANN/claude-code-flow/claude-code-flow")
+  LS("/workspaces/ruv-FANN/ruv-swarm/npm")
+
+  // Search for best practices
+  Bash(`gh search repos "language:javascript template architecture" \
+    --limit 10 \
+    --json fullName,description,stargazersCount \
+    --sort stars \
+    --order desc`)
+
+  // Store analysis results
+  mcp__claude-flow__memory_usage({
+    action: "store",
+    key: "architecture/analysis/results",
+    value: {
+      repositories_analyzed: ["claude-code-flow", "ruv-swarm"],
+      optimization_areas: ["structure", "workflows", "templates"],
+      recommendations: ["standardize_structure", "improve_workflows"]
+    }
+  })
+```
+
+---
+
+## Template Creation
+
+Create a standardized repository template for consistent project scaffolding.
+
+```javascript
+// Create standardized repository template
+[Template Creation]:
+  // Create template repository
+  mcp__github__create_repository({
+    name: "claude-project-template",
+    description: "Standardized template for Claude Code projects",
+    private: false,
+    autoInit: true
+  })
+
+  // Push template structure
+  mcp__github__push_files({
+    repo: "claude-project-template",
+    files: [
+      {
+        path: ".claude/commands/github/github-modes.md",
+        content: "[GitHub modes template]"
+      },
+      {
+        path: ".claude/config.json",
+        content: JSON.stringify({
+          version: "1.0",
+          mcp_servers: {
+            "ruv-swarm": {
+              command: "npx",
+              args: ["ruv-swarm", "mcp", "start"]
+            }
+          }
+        })
+      },
+      {
+        path: "CLAUDE.md",
+        content: "[Standardized CLAUDE.md]"
+      },
+      {
+        path: "package.json",
+        content: JSON.stringify({
+          name: "claude-project-template",
+          engines: { node: ">=20.0.0" },
+          dependencies: { "ruv-swarm": "^1.0.11" }
+        })
+      }
+    ],
+    message: "feat: Create standardized template"
+  })
+```
+
+---
+
+## Cross-Repository Standardization
+
+Apply consistent workflows and CI configuration across all repositories.
+
+```javascript
+// Synchronize structure across repositories
+[Structure Standardization]:
+  const repositories = ["claude-code-flow", "ruv-swarm", "claude-extensions"]
+
+  // Update common files across all repositories
+  repositories.forEach(repo => {
+    mcp__github__create_or_update_file({
+      repo: "ruv-FANN",
+      path: `${repo}/.github/workflows/integration.yml`,
+      content: `name: Integration Tests
+on: [push, pull_request]
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
+        with: { node-version: '20' }
+      - run: npm install && npm test`,
+      message: "ci: Standardize integration workflow",
+      branch: "structure/standardization"
+    })
+  })
+```
+
+---
+
+## Architecture Health Metrics
+
+| Metric | Description |
+|--------|-------------|
+| Repository structure consistency | How closely repos match the standard template |
+| Documentation coverage % | Share of components with up-to-date documentation |
+| Cross-repo integration success rate | Pass rate for cross-repo integration tests |
+| Template adoption statistics | How many repos use the standard template |
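
As an illustration of the first metric, structure consistency can be scored as the fraction of template paths present in a repository. This is only a sketch: `structureConsistency` is a hypothetical helper, and the path lists are assumed to come from something like the GitHub tree API.

```javascript
// Hypothetical sketch: fraction of template paths a repo actually contains.
function structureConsistency(templatePaths, repoPaths) {
  const repoSet = new Set(repoPaths);
  const present = templatePaths.filter((p) => repoSet.has(p)).length;
  return templatePaths.length === 0 ? 1 : present / templatePaths.length;
}

const template = [".claude/config.json", "CLAUDE.md", "package.json"];
const repo = ["CLAUDE.md", "package.json", "src/index.js"];
console.log(structureConsistency(template, repo)); // 2 of 3 template paths present
```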
diff --git a/.claude/skills/github-multi-repo/references/sync-patterns.md b/.claude/skills/github-multi-repo/references/sync-patterns.md
new file mode 100644 (file)
index 0000000..0483868
--- /dev/null
@@ -0,0 +1,75 @@
+# Synchronization Patterns
+
+Consistency strategies for coordinating state across multiple repositories.
+
+---
+
+## Eventually Consistent
+
+Best for low-criticality updates (documentation, formatting, non-breaking changes).
+
+```json
+{
+  "sync": {
+    "strategy": "eventual",
+    "max-lag": "5m",
+    "retry": {
+      "attempts": 3,
+      "backoff": "exponential"
+    }
+  }
+}
+```
+
+**When to use:** Documentation sync, style updates, non-breaking dependency bumps.
+
+---
+
+## Strong Consistency
+
+Best for critical updates that must be applied atomically across repositories.
+
+```json
+{
+  "sync": {
+    "strategy": "strong",
+    "consensus": "raft",
+    "quorum": 0.51,
+    "timeout": "30s"
+  }
+}
+```
+
+**When to use:** Security patches, breaking API changes, schema migrations.
+
+---
+
+## Hybrid Approach
+
+Combine strategies per update type for the best balance of speed and safety.
+
+```json
+{
+  "sync": {
+    "default": "eventual",
+    "overrides": {
+      "security-updates": "strong",
+      "dependency-updates": "strong",
+      "documentation": "eventual"
+    }
+  }
+}
+```
+
+---
+
+## Pattern Selection Guide
+
+| Update Type | Strategy | Max Lag | Rationale |
+|-------------|----------|---------|-----------|
+| Security patches | Strong | 0 | Must be atomic |
+| Breaking API changes | Strong | 0 | Consumers must update simultaneously |
+| Dependency bumps | Strong | 30s | Avoid version drift |
+| Documentation | Eventual | 5m | Non-critical, high volume |
+| Formatting / style | Eventual | 15m | Cosmetic only |
+| Feature flags | Eventual | 1m | Controlled rollout |
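
A sync engine would resolve the effective strategy per update type from the hybrid config above. A minimal sketch (config shape taken from this document; the resolution logic itself is an assumption):

```javascript
// Sketch: resolve the strategy for an update type, falling back to the default.
const syncConfig = {
  default: "eventual",
  overrides: {
    "security-updates": "strong",
    "dependency-updates": "strong",
    "documentation": "eventual"
  }
};

function resolveStrategy(updateType, config) {
  return config.overrides[updateType] ?? config.default;
}

console.log(resolveStrategy("security-updates", syncConfig)); // strong
console.log(resolveStrategy("feature-flags", syncConfig));    // eventual (default)
```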
diff --git a/.claude/skills/github-project-management/assets/cli-reference.md b/.claude/skills/github-project-management/assets/cli-reference.md
new file mode 100644 (file)
index 0000000..9c3d43d
--- /dev/null
@@ -0,0 +1,75 @@
+# CLI Quick Reference
+
+## Issue Management
+
+| Command | Description |
+|---------|-------------|
+| `gh issue create --title "..." --body "..." --label "..."` | Create a new issue |
+| `gh issue view <number> --json title,body,labels` | View issue details as JSON |
+| `gh issue edit <number> --add-label "..."` | Add label to issue |
+| `gh issue close <number> --comment "..."` | Close issue with comment |
+| `gh issue list --label "..." --state open` | List issues by label |
+| `npx ruv-swarm github issue-init <number>` | Initialize swarm for issue |
+| `npx ruv-swarm github issue-decompose <number>` | Decompose issue into subtasks |
+| `npx ruv-swarm github issue-to-swarm <number>` | Convert issue to swarm tasks |
+| `npx ruv-swarm github triage --unlabeled` | Auto-triage unlabeled issues |
+| `npx ruv-swarm github find-duplicates` | Find and link duplicate issues |
+| `npx ruv-swarm github issue-progress <number>` | Get swarm progress for issue |
+
+## Project Boards
+
+| Command | Description |
+|---------|-------------|
+| `gh project list --owner @me --format json` | List projects |
+| `gh project item-add <id> --owner @me --url "..."` | Add item to project |
+| `gh project item-list <id> --owner @me --format json` | List project items |
+| `gh project field-create <id> --owner @me --name "..."` | Create custom field |
+| `npx ruv-swarm github board-init --project-id <id>` | Initialize board sync |
+| `npx ruv-swarm github board-sync` | Sync swarm with board |
+| `npx ruv-swarm github board-analytics` | Generate board analytics |
+| `npx ruv-swarm github board-auto-assign` | Auto-assign cards |
+| `npx ruv-swarm github board-bulk` | Bulk card operations |
+| `npx ruv-swarm github board-diagnose` | Diagnose sync problems |
+| `npx ruv-swarm github board-optimize` | Optimize board performance |
+
+## Sprint Management
+
+| Command | Description |
+|---------|-------------|
+| `npx ruv-swarm github sprint-manage --sprint "Sprint X"` | Manage sprint |
+| `npx ruv-swarm github milestone-track --milestone "vX.X"` | Track milestone |
+| `npx ruv-swarm github agile-board` | Set up agile (Scrum) board |
+| `npx ruv-swarm github kanban-board` | Set up Kanban board |
+| `npx ruv-swarm github board-progress` | Track progress metrics |
+| `npx ruv-swarm github board-report` | Generate sprint report |
+| `npx ruv-swarm github release-plan-board` | Plan releases |
+
+## Analytics & Metrics
+
+| Command | Description |
+|---------|-------------|
+| `npx ruv-swarm github board-kpis` | Track board KPIs |
+| `npx ruv-swarm github team-metrics` | Track team performance |
+| `npx ruv-swarm github issue-metrics --issue <number>` | Issue resolution metrics |
+| `npx ruv-swarm github effectiveness` | Swarm effectiveness report |
+
+## Advanced Coordination
+
+| Command | Description |
+|---------|-------------|
+| `npx ruv-swarm github multi-board-sync` | Cross-board sync |
+| `npx ruv-swarm github cross-org-sync` | Cross-organization sync |
+| `npx ruv-swarm github cross-repo` | Cross-repository issues |
+| `npx ruv-swarm github issue-deps <number>` | Resolve issue dependencies |
+| `npx ruv-swarm github epic-swarm --epic <number>` | Coordinate epic |
+| `npx ruv-swarm github board-distribute` | Distribute work |
+| `npx ruv-swarm github standup-report` | Generate standup report |
+| `npx ruv-swarm github review-coordinate` | Coordinate reviews |
+
+## Specialized Swarms
+
+| Command | Description |
+|---------|-------------|
+| `npx ruv-swarm github bug-swarm <number>` | Bug investigation swarm |
+| `npx ruv-swarm github feature-swarm <number>` | Feature implementation swarm |
+| `npx ruv-swarm github debt-swarm <number>` | Tech debt refactoring swarm |
diff --git a/.claude/skills/github-project-management/assets/issue-templates.md b/.claude/skills/github-project-management/assets/issue-templates.md
new file mode 100644 (file)
index 0000000..010e711
--- /dev/null
@@ -0,0 +1,163 @@
+# Issue Templates
+
+## Integration Issue Template
+
+```markdown
+## Integration Task
+
+### Overview
+[Brief description of integration requirements]
+
+### Objectives
+- [ ] Component A integration
+- [ ] Component B validation
+- [ ] Testing and verification
+- [ ] Documentation updates
+
+### Integration Areas
+#### Dependencies
+- [ ] Package.json updates
+- [ ] Version compatibility
+- [ ] Import statements
+
+#### Functionality
+- [ ] Core feature integration
+- [ ] API compatibility
+- [ ] Performance validation
+
+#### Testing
+- [ ] Unit tests
+- [ ] Integration tests
+- [ ] End-to-end validation
+
+### Swarm Coordination
+- **Coordinator**: Overall progress tracking
+- **Analyst**: Technical validation
+- **Tester**: Quality assurance
+- **Documenter**: Documentation updates
+
+### Progress Tracking
+Updates will be posted automatically by swarm agents during implementation.
+
+---
+Generated with Claude Code
+```
+
+---
+
+## Bug Report Template
+
+```markdown
+## Bug Report
+
+### Problem Description
+[Clear description of the issue]
+
+### Expected Behavior
+[What should happen]
+
+### Actual Behavior
+[What actually happens]
+
+### Reproduction Steps
+1. [Step 1]
+2. [Step 2]
+3. [Step 3]
+
+### Environment
+- Package: [package name and version]
+- Node.js: [version]
+- OS: [operating system]
+
+### Investigation Plan
+- [ ] Root cause analysis
+- [ ] Fix implementation
+- [ ] Testing and validation
+- [ ] Regression testing
+
+### Swarm Assignment
+- **Debugger**: Issue investigation
+- **Coder**: Fix implementation
+- **Tester**: Validation and testing
+
+---
+Generated with Claude Code
+```
+
+---
+
+## Feature Request Template
+
+```markdown
+## Feature Request
+
+### Feature Description
+[Clear description of the proposed feature]
+
+### Use Cases
+1. [Use case 1]
+2. [Use case 2]
+3. [Use case 3]
+
+### Acceptance Criteria
+- [ ] Criterion 1
+- [ ] Criterion 2
+- [ ] Criterion 3
+
+### Implementation Approach
+#### Design
+- [ ] Architecture design
+- [ ] API design
+- [ ] UI/UX mockups
+
+#### Development
+- [ ] Core implementation
+- [ ] Integration with existing features
+- [ ] Performance optimization
+
+#### Testing
+- [ ] Unit tests
+- [ ] Integration tests
+- [ ] User acceptance testing
+
+### Swarm Coordination
+- **Architect**: Design and planning
+- **Coder**: Implementation
+- **Tester**: Quality assurance
+- **Documenter**: Documentation
+
+---
+Generated with Claude Code
+```
+
+---
+
+## Swarm Task Template (YAML)
+
+```yaml
+# .github/ISSUE_TEMPLATE/swarm-task.yml
+name: Swarm Task
+description: Create a task for AI swarm processing
+body:
+  - type: dropdown
+    id: topology
+    attributes:
+      label: Swarm Topology
+      options:
+        - mesh
+        - hierarchical
+        - ring
+        - star
+  - type: input
+    id: agents
+    attributes:
+      label: Required Agents
+      placeholder: "coder, tester, analyst"
+  - type: textarea
+    id: tasks
+    attributes:
+      label: Task Breakdown
+      placeholder: |
+        1. Task one description
+        2. Task two description
+```
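
GitHub renders issue-form submissions as markdown sections headed by each field's label. Pulling a field's value back out of the issue body can be sketched as follows (the exact rendering format assumed here is `### <label>` followed by the value):

```javascript
// Sketch: extract a form field's value from a rendered issue body.
function parseFormField(issueBody, label) {
  const lines = issueBody.split("\n");
  const i = lines.findIndex((l) => l.trim() === `### ${label}`);
  if (i === -1) return null;
  // First non-empty line after the heading is the submitted value.
  return lines.slice(i + 1).find((l) => l.trim() !== "")?.trim() ?? null;
}

const body = "### Swarm Topology\n\nmesh\n\n### Required Agents\n\ncoder, tester";
console.log(parseFormField(body, "Required Agents")); // coder, tester
```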
diff --git a/.claude/skills/github-project-management/assets/workflow-configs.md b/.claude/skills/github-project-management/assets/workflow-configs.md
new file mode 100644 (file)
index 0000000..314db35
--- /dev/null
@@ -0,0 +1,88 @@
+# Workflow Configurations
+
+## GitHub Actions for Issue Management
+
+```yaml
+# .github/workflows/issue-swarm.yml
+name: Issue Swarm Handler
+on:
+  issues:
+    types: [opened, labeled]
+
+jobs:
+  swarm-process:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Process Issue
+        uses: ruvnet/swarm-action@v1
+        with:
+          command: |
+            if [[ "${{ github.event.label.name }}" == "swarm-ready" ]]; then
+              npx ruv-swarm github issue-init ${{ github.event.issue.number }}
+            fi
+```
+
+---
+
+## Board Integration Workflow
+
+```bash
+# Sync with project board
+npx ruv-swarm github issue-board-sync \
+  --project "Development" \
+  --column-mapping '{
+    "To Do": "pending",
+    "In Progress": "active",
+    "Done": "completed"
+  }'
+```
+
+---
+
+## Complete Workflow Example: Full-Stack Feature Development
+
+```bash
+# 1. Create feature issue with swarm coordination
+gh issue create \
+  --title "Feature: Real-time Collaboration" \
+  --body "$(cat <<EOF
+## Feature: Real-time Collaboration
+
+### Overview
+Implement real-time collaboration features using WebSockets.
+
+### Objectives
+- [ ] WebSocket server setup
+- [ ] Client-side integration
+- [ ] Presence tracking
+- [ ] Conflict resolution
+- [ ] Testing and documentation
+
+### Swarm Coordination
+This feature will use mesh topology for parallel development.
+EOF
+)" \
+  --label "enhancement,swarm-ready,high-priority"
+
+# 2. Initialize swarm and decompose tasks
+ISSUE_NUM=$(gh issue list --label "swarm-ready" --limit 1 --json number --jq '.[0].number')
+npx ruv-swarm github issue-init $ISSUE_NUM \
+  --topology mesh \
+  --auto-decompose \
+  --assign-agents "architect,coder,tester"
+
+# 3. Add to project board
+PROJECT_ID=$(gh project list --owner @me --format json | jq -r '.projects[0].id')
+gh project item-add $PROJECT_ID --owner @me \
+  --url "https://github.com/$GITHUB_REPOSITORY/issues/$ISSUE_NUM"
+
+# 4. Set up automated tracking
+npx ruv-swarm github board-sync \
+  --auto-move-cards \
+  --update-metadata
+
+# 5. Monitor progress
+npx ruv-swarm github issue-progress $ISSUE_NUM \
+  --auto-update-comments \
+  --notify-on-completion
+```
diff --git a/.claude/skills/github-project-management/references/advanced-coordination.md b/.claude/skills/github-project-management/references/advanced-coordination.md
new file mode 100644 (file)
index 0000000..a635998
--- /dev/null
@@ -0,0 +1,97 @@
+# Advanced Coordination — Detailed Reference
+
+## Multi-Board Synchronization
+
+### Cross-Board Sync
+
+```bash
+# Sync across multiple boards
+npx ruv-swarm github multi-board-sync \
+  --boards "Development,QA,Release" \
+  --sync-rules '{
+    "Development->QA": "when:ready-for-test",
+    "QA->Release": "when:tests-pass"
+  }'
+
+# Cross-organization sync
+npx ruv-swarm github cross-org-sync \
+  --source "org1/Project-A" \
+  --target "org2/Project-B" \
+  --field-mapping "custom" \
+  --conflict-resolution "source-wins"
+```
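
The `--sync-rules` above pair a board route with a gating condition. One way such rules could be evaluated for a card (rule format from this document; the evaluation logic is an assumption):

```javascript
// Sketch: find the next board for a card, given its current board and labels.
const syncRules = {
  "Development->QA": "when:ready-for-test",
  "QA->Release": "when:tests-pass"
};

function nextBoard(current, labels, rules) {
  for (const [route, condition] of Object.entries(rules)) {
    const [from, to] = route.split("->");
    const requiredLabel = condition.replace("when:", "");
    if (from === current && labels.includes(requiredLabel)) return to;
  }
  return null; // no rule fired; card stays put
}

console.log(nextBoard("Development", ["ready-for-test"], syncRules)); // QA
console.log(nextBoard("QA", ["wip"], syncRules)); // null
```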
+
+---
+
+## Issue Dependencies & Epic Management
+
+### Dependency Resolution
+
+```bash
+# Handle issue dependencies
+npx ruv-swarm github issue-deps 456 \
+  --resolve-order \
+  --parallel-safe \
+  --update-blocking
+```
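
Conceptually, `--resolve-order` amounts to a topological sort over "blocked by" edges. A Kahn-style sketch (not necessarily the tool's actual algorithm):

```javascript
// Sketch: compute a dependency-safe execution order for issues.
function resolveOrder(deps) {
  // deps: { issueNumber: [issue numbers it is blocked by] }
  const order = [];
  const pending = { ...deps };
  while (Object.keys(pending).length > 0) {
    const ready = Object.keys(pending).filter((issue) =>
      pending[issue].every((dep) => order.includes(dep))
    );
    if (ready.length === 0) throw new Error("dependency cycle detected");
    for (const issue of ready) {
      order.push(issue);
      delete pending[issue];
    }
  }
  return order;
}

console.log(resolveOrder({ "458": ["456", "457"], "457": ["456"], "456": [] }));
// safe order: 456, then 457, then 458
```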
+
+### Epic Coordination
+
+```bash
+# Coordinate epic-level swarms
+npx ruv-swarm github epic-swarm \
+  --epic 123 \
+  --child-issues "456,457,458" \
+  --orchestrate
+```
+
+---
+
+## Cross-Repository Coordination
+
+### Multi-Repo Issue Management
+
+```bash
+# Handle issues across repositories
+npx ruv-swarm github cross-repo \
+  --issue "org/repo#456" \
+  --related "org/other-repo#123" \
+  --coordinate
+```
+
+---
+
+## Team Collaboration
+
+### Work Distribution
+
+```bash
+# Distribute work among team
+npx ruv-swarm github board-distribute \
+  --strategy "skills-based" \
+  --balance-workload \
+  --respect-preferences \
+  --notify-assignments
+```
+
+### Standup Automation
+
+```bash
+# Generate standup reports
+npx ruv-swarm github standup-report \
+  --team "frontend" \
+  --include "yesterday,today,blockers" \
+  --format "slack" \
+  --schedule "daily-9am"
+```
+
+### Review Coordination
+
+```bash
+# Coordinate reviews via board
+npx ruv-swarm github review-coordinate \
+  --board "Code Review" \
+  --assign-reviewers \
+  --track-feedback \
+  --ensure-coverage
+```
diff --git a/.claude/skills/github-project-management/references/board-automation.md b/.claude/skills/github-project-management/references/board-automation.md
new file mode 100644 (file)
index 0000000..7b47d04
--- /dev/null
@@ -0,0 +1,213 @@
+# Project Board Automation — Detailed Reference
+
+## Board Initialization & Configuration
+
+### Connect Swarm to GitHub Project
+
+```bash
+# Get project details
+PROJECT_ID=$(gh project list --owner @me --format json | \
+  jq -r '.projects[] | select(.title == "Development Board") | .id')
+
+# Initialize swarm with project
+npx ruv-swarm github board-init \
+  --project-id "$PROJECT_ID" \
+  --sync-mode "bidirectional" \
+  --create-views "swarm-status,agent-workload,priority"
+
+# Create project fields for swarm tracking
+gh project field-create $PROJECT_ID --owner @me \
+  --name "Swarm Status" \
+  --data-type "SINGLE_SELECT" \
+  --single-select-options "pending,in_progress,completed"
+```
+
+### Board Mapping Configuration
+
+```yaml
+# .github/board-sync.yml
+version: 1
+project:
+  name: "AI Development Board"
+  number: 1
+
+mapping:
+  # Map swarm task status to board columns
+  status:
+    pending: "Backlog"
+    assigned: "Ready"
+    in_progress: "In Progress"
+    review: "Review"
+    completed: "Done"
+    blocked: "Blocked"
+
+  # Map agent types to labels
+  agents:
+    coder: "Development"
+    tester: "Testing"
+    analyst: "Analysis"
+    designer: "Design"
+    architect: "Architecture"
+
+  # Map priority to project fields
+  priority:
+    critical: "Critical"
+    high: "High"
+    medium: "Medium"
+    low: "Low"
+
+  # Custom fields
+  fields:
+    - name: "Agent Count"
+      type: number
+      source: task.agents.length
+    - name: "Complexity"
+      type: select
+      source: task.complexity
+    - name: "ETA"
+      type: date
+      source: task.estimatedCompletion
+```
+
+---
+
+## Task Synchronization
+
+### Real-time Board Sync
+
+```bash
+# Sync swarm tasks with project cards
+npx ruv-swarm github board-sync \
+  --map-status '{
+    "todo": "To Do",
+    "in_progress": "In Progress",
+    "review": "Review",
+    "done": "Done"
+  }' \
+  --auto-move-cards \
+  --update-metadata
+
+# Enable real-time board updates
+npx ruv-swarm github board-realtime \
+  --webhook-endpoint "https://api.example.com/github-sync" \
+  --update-frequency "immediate" \
+  --batch-updates false
+```
+
+### Convert Issues to Project Cards
+
+```bash
+# List issues with label
+ISSUES=$(gh issue list --label "enhancement" --json number,title,body)
+
+# Add issues to project
+echo "$ISSUES" | jq -r '.[].number' | while read -r issue; do
+  gh project item-add $PROJECT_ID --owner @me --url "https://github.com/$GITHUB_REPOSITORY/issues/$issue"
+done
+
+# Process with swarm
+npx ruv-swarm github board-import-issues \
+  --issues "$ISSUES" \
+  --add-to-column "Backlog" \
+  --parse-checklist \
+  --assign-agents
+```
+
+---
+
+## Smart Card Management
+
+### Auto-Assignment
+
+```bash
+# Automatically assign cards to agents
+npx ruv-swarm github board-auto-assign \
+  --strategy "load-balanced" \
+  --consider "expertise,workload,availability" \
+  --update-cards
+```
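
At its simplest, a load-balanced strategy picks the least-loaded eligible agent. A deliberately minimal sketch (a real scheduler would also weigh expertise and availability, per the flags above):

```javascript
// Sketch: pick the agent with the fewest open cards.
function pickAgent(agents) {
  return agents.reduce((best, a) => (a.openCards < best.openCards ? a : best));
}

const agents = [
  { name: "coder-1", openCards: 4 },
  { name: "coder-2", openCards: 2 },
  { name: "tester-1", openCards: 3 }
];
console.log(pickAgent(agents).name); // coder-2
```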
+
+### Intelligent Card State Transitions
+
+```bash
+# Smart card movement based on rules
+npx ruv-swarm github board-smart-move \
+  --rules '{
+    "auto-progress": "when:all-subtasks-done",
+    "auto-review": "when:tests-pass",
+    "auto-done": "when:pr-merged"
+  }'
+```
+
+### Bulk Operations
+
+```bash
+# Bulk card operations
+npx ruv-swarm github board-bulk \
+  --filter "status:blocked" \
+  --action "add-label:needs-attention" \
+  --notify-assignees
+```
+
+---
+
+## Custom Views & Dashboards
+
+### View Configuration
+
+```json
+{
+  "views": [
+    {
+      "name": "Swarm Overview",
+      "type": "board",
+      "groupBy": "status",
+      "filters": ["is:open"],
+      "sort": "priority:desc"
+    },
+    {
+      "name": "Agent Workload",
+      "type": "table",
+      "groupBy": "assignedAgent",
+      "columns": ["title", "status", "priority", "eta"],
+      "sort": "eta:asc"
+    },
+    {
+      "name": "Sprint Progress",
+      "type": "roadmap",
+      "dateField": "eta",
+      "groupBy": "milestone"
+    }
+  ]
+}
+```
+
+### Dashboard Configuration
+
+```json
+{
+  "dashboard": {
+    "widgets": [
+      {
+        "type": "chart",
+        "title": "Task Completion Rate",
+        "data": "completed-per-day",
+        "visualization": "line"
+      },
+      {
+        "type": "gauge",
+        "title": "Sprint Progress",
+        "data": "sprint-completion",
+        "target": 100
+      },
+      {
+        "type": "heatmap",
+        "title": "Agent Activity",
+        "data": "agent-tasks-per-day"
+      }
+    ]
+  }
+}
+```
diff --git a/.claude/skills/github-project-management/references/issue-management.md b/.claude/skills/github-project-management/references/issue-management.md
new file mode 100644 (file)
index 0000000..46a30ae
--- /dev/null
@@ -0,0 +1,310 @@
+# Issue Management & Triage — Detailed Reference
+
+## Automated Issue Creation
+
+### Single Issue with Swarm Coordination
+
+```javascript
+// Initialize issue management swarm
+mcp__claude-flow__swarm_init { topology: "star", maxAgents: 3 }
+mcp__claude-flow__agent_spawn { type: "coordinator", name: "Issue Coordinator" }
+mcp__claude-flow__agent_spawn { type: "researcher", name: "Requirements Analyst" }
+mcp__claude-flow__agent_spawn { type: "coder", name: "Implementation Planner" }
+
+// Create comprehensive issue
+mcp__github__create_issue {
+  owner: "org",
+  repo: "repository",
+  title: "Integration Review: Complete system integration",
+  body: `## Integration Review
+
+  ### Overview
+  Comprehensive review and integration between components.
+
+  ### Objectives
+  - [ ] Verify dependencies and imports
+  - [ ] Ensure API integration
+  - [ ] Check hook system integration
+  - [ ] Validate data systems alignment
+
+  ### Swarm Coordination
+  This issue will be managed by coordinated swarm agents for optimal progress tracking.`,
+  labels: ["integration", "review", "enhancement"],
+  assignees: ["username"]
+}
+
+// Set up automated tracking
+mcp__claude-flow__task_orchestrate {
+  task: "Monitor and coordinate issue progress with automated updates",
+  strategy: "adaptive",
+  priority: "medium"
+}
+```
+
+### Batch Issue Creation
+
+```bash
+# Create multiple related issues using gh CLI
+gh issue create \
+  --title "Feature: Advanced GitHub Integration" \
+  --body "Implement comprehensive GitHub workflow automation..." \
+  --label "feature,github,high-priority"
+
+gh issue create \
+  --title "Bug: Merge conflicts in integration branch" \
+  --body "Resolve merge conflicts..." \
+  --label "bug,integration,urgent"
+
+gh issue create \
+  --title "Documentation: Update integration guides" \
+  --body "Update all documentation..." \
+  --label "documentation,integration"
+```
+
+---
+
+## Issue-to-Swarm Conversion
+
+### Transform Issues into Swarm Tasks
+
+```bash
+# Get issue details
+ISSUE_DATA=$(gh issue view 456 --json title,body,labels,assignees,comments)
+
+# Create swarm from issue
+npx ruv-swarm github issue-to-swarm 456 \
+  --issue-data "$ISSUE_DATA" \
+  --auto-decompose \
+  --assign-agents
+
+# Batch process multiple issues
+ISSUES=$(gh issue list --label "swarm-ready" --json number,title,body,labels)
+npx ruv-swarm github issues-batch \
+  --issues "$ISSUES" \
+  --parallel
+
+# Update issues with swarm status
+echo "$ISSUES" | jq -r '.[].number' | while read -r num; do
+  gh issue edit $num --add-label "swarm-processing"
+done
+```
+
+### Issue Comment Commands
+
+Execute swarm operations via issue comments:
+
+```markdown
+<!-- In issue comment -->
+/swarm analyze
+/swarm decompose 5
+/swarm assign @agent-coder
+/swarm estimate
+/swarm start
+```
+
+---
+
+## Automated Issue Triage
+
+### Auto-Label Based on Content
+
+Define rules in `.github/swarm-labels.json`:
+
+```json
+{
+  "rules": [
+    {
+      "keywords": ["bug", "error", "broken"],
+      "labels": ["bug", "swarm-debugger"],
+      "agents": ["debugger", "tester"]
+    },
+    {
+      "keywords": ["feature", "implement", "add"],
+      "labels": ["enhancement", "swarm-feature"],
+      "agents": ["architect", "coder", "tester"]
+    },
+    {
+      "keywords": ["slow", "performance", "optimize"],
+      "labels": ["performance", "swarm-optimizer"],
+      "agents": ["analyst", "optimizer"]
+    }
+  ]
+}
+```
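
Applied to an issue, the rules above reduce to keyword matching over the title and body. A sketch of that matching step:

```javascript
// Sketch: suggest labels for an issue by matching rule keywords.
const rules = [
  { keywords: ["bug", "error", "broken"], labels: ["bug", "swarm-debugger"] },
  { keywords: ["feature", "implement", "add"], labels: ["enhancement", "swarm-feature"] },
  { keywords: ["slow", "performance", "optimize"], labels: ["performance", "swarm-optimizer"] }
];

function suggestLabels(text, ruleSet) {
  const lower = text.toLowerCase();
  const labels = ruleSet
    .filter((r) => r.keywords.some((k) => lower.includes(k)))
    .flatMap((r) => r.labels);
  return [...new Set(labels)]; // de-duplicate across rules
}

console.log(suggestLabels("Error: app broken after optimize pass", rules));
// bug, swarm-debugger, performance, swarm-optimizer
```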
+
+### Automated Triage System
+
+```bash
+# Analyze and triage unlabeled issues
+npx ruv-swarm github triage \
+  --unlabeled \
+  --analyze-content \
+  --suggest-labels \
+  --assign-priority
+
+# Find and link duplicate issues
+npx ruv-swarm github find-duplicates \
+  --threshold 0.8 \
+  --link-related \
+  --close-duplicates
+```
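
The `--threshold 0.8` compares a similarity score between issues. The tool's actual scoring method is not documented here, so a Jaccard word-overlap score stands in as an illustration:

```javascript
// Sketch: Jaccard similarity over the word sets of two issue titles.
function similarity(a, b) {
  const words = (s) => new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const wa = words(a);
  const wb = words(b);
  const inter = [...wa].filter((w) => wb.has(w)).length;
  const union = new Set([...wa, ...wb]).size;
  return union === 0 ? 0 : inter / union;
}

// Scores below the 0.8 threshold, so these stay distinct:
console.log(similarity("App crashes on login", "Crash on login screen"));
```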
+
+---
+
+## Task Decomposition & Progress Tracking
+
+### Break Down Issues into Subtasks
+
+```bash
+# Get issue body
+ISSUE_BODY=$(gh issue view 456 --json body --jq '.body')
+
+# Decompose into subtasks
+SUBTASKS=$(npx ruv-swarm github issue-decompose 456 \
+  --body "$ISSUE_BODY" \
+  --max-subtasks 10 \
+  --assign-priorities)
+
+# Update issue with checklist
+CHECKLIST=$(echo "$SUBTASKS" | jq -r '.tasks[] | "- [ ] " + .description')
+UPDATED_BODY="$ISSUE_BODY
+
+## Subtasks
+$CHECKLIST"
+
+gh issue edit 456 --body "$UPDATED_BODY"
+
+# Create linked issues for major subtasks
+echo "$SUBTASKS" | jq -c '.tasks[] | select(.priority == "high")' | while read -r task; do
+  TITLE=$(echo "$task" | jq -r '.title')
+  BODY=$(echo "$task" | jq -r '.description')
+
+  gh issue create \
+    --title "$TITLE" \
+    --body "$BODY
+
+Parent issue: #456" \
+    --label "subtask"
+done
+```
+
+### Automated Progress Updates
+
+```bash
+# Get current issue state
+CURRENT=$(gh issue view 456 --json body,labels)
+
+# Get swarm progress
+PROGRESS=$(npx ruv-swarm github issue-progress 456)
+
+# Update checklist in issue body
+UPDATED_BODY=$(echo "$CURRENT" | jq -r '.body' | \
+  npx ruv-swarm github update-checklist --progress "$PROGRESS")
+
+# Edit issue with updated body
+gh issue edit 456 --body "$UPDATED_BODY"
+
+# Post progress summary as comment
+SUMMARY=$(echo "$PROGRESS" | jq -r '
+"## Progress Update
+
+**Completion**: \(.completion)%
+**ETA**: \(.eta)
+
+### Completed Tasks
+\(.completed | map("- " + .) | join("\n"))
+
+### In Progress
+\(.in_progress | map("- " + .) | join("\n"))
+
+### Remaining
+\(.remaining | map("- " + .) | join("\n"))
+
+---
+Automated update by swarm agent"')
+
+gh issue comment 456 --body "$SUMMARY"
+
+# Update labels based on progress
+if [[ $(echo "$PROGRESS" | jq -r '.completion') -eq 100 ]]; then
+  gh issue edit 456 --add-label "ready-for-review" --remove-label "in-progress"
+fi
+```
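
The 100% check above reads completion from the synced checklist. Counting it directly from the issue body is straightforward, as this sketch shows:

```javascript
// Sketch: completion % from a markdown task checklist.
function checklistCompletion(body) {
  const done = (body.match(/^- \[x\]/gim) || []).length;
  const open = (body.match(/^- \[ \]/gm) || []).length;
  const total = done + open;
  return total === 0 ? 0 : Math.round((done / total) * 100);
}

const body =
  "- [x] Root cause analysis\n- [x] Fix implementation\n- [ ] Regression testing";
console.log(checklistCompletion(body)); // 67
```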
+
+---
+
+## Stale Issue Management
+
+### Auto-Close Stale Issues with Swarm Analysis
+
+```bash
+# Find stale issues
+STALE_DATE=$(date -d '30 days ago' --iso-8601)
+STALE_ISSUES=$(gh issue list --state open --json number,title,updatedAt,labels \
+  --jq ".[] | select(.updatedAt < \"$STALE_DATE\")")
+
+# Analyze each stale issue
+echo "$STALE_ISSUES" | jq -r '.number' | while read -r num; do
+  # Get full issue context
+  ISSUE=$(gh issue view $num --json title,body,comments,labels)
+
+  # Analyze with swarm
+  ACTION=$(npx ruv-swarm github analyze-stale \
+    --issue "$ISSUE" \
+    --suggest-action)
+
+  case "$ACTION" in
+    "close")
+      gh issue comment $num --body "This issue has been inactive for 30 days and will be closed in 7 days if there's no further activity."
+      gh issue edit $num --add-label "stale"
+      ;;
+    "keep")
+      gh issue edit $num --remove-label "stale" 2>/dev/null || true
+      ;;
+    "needs-info")
+      gh issue comment $num --body "This issue needs more information. Please provide additional context or it may be closed as stale."
+      gh issue edit $num --add-label "needs-info"
+      ;;
+  esac
+done
+
+# Close issues that have been stale for 37+ days
+gh issue list --label stale --state open --json number,updatedAt \
+  --jq ".[] | select(.updatedAt < \"$(date -d '37 days ago' --iso-8601)\") | .number" | \
+  while read -r num; do
+    gh issue close $num --comment "Closing due to inactivity. Feel free to reopen if this is still relevant."
+  done
+```
+
+---
+
+## Specialized Issue Strategies
+
+### Bug Investigation Swarm
+
+```bash
+npx ruv-swarm github bug-swarm 456 \
+  --reproduce \
+  --isolate \
+  --fix \
+  --test
+```
+
+### Feature Implementation Swarm
+
+```bash
+npx ruv-swarm github feature-swarm 456 \
+  --design \
+  --implement \
+  --document \
+  --demo
+```
+
+### Technical Debt Refactoring
+
+```bash
+npx ruv-swarm github debt-swarm 456 \
+  --analyze-impact \
+  --plan-migration \
+  --execute \
+  --validate
+```
diff --git a/.claude/skills/github-project-management/references/sprint-planning.md b/.claude/skills/github-project-management/references/sprint-planning.md
new file mode 100644 (file)
index 0000000..608b6cb
--- /dev/null
@@ -0,0 +1,125 @@
+# Sprint Planning & Tracking — Detailed Reference
+
+## Sprint Management
+
+### Initialize Sprint with Swarm Coordination
+
+```bash
+# Manage sprints with swarms
+npx ruv-swarm github sprint-manage \
+  --sprint "Sprint 23" \
+  --auto-populate \
+  --capacity-planning \
+  --track-velocity
+
+# Track milestone progress
+npx ruv-swarm github milestone-track \
+  --milestone "v2.0 Release" \
+  --update-board \
+  --show-dependencies \
+  --predict-completion
+```
+
+### Agile Development Board Setup
+
+```bash
+# Set up agile board
+npx ruv-swarm github agile-board \
+  --methodology "scrum" \
+  --sprint-length "2w" \
+  --ceremonies "planning,review,retro" \
+  --metrics "velocity,burndown"
+```
+
+### Kanban Flow Board Setup
+
+```bash
+# Set up kanban board
+npx ruv-swarm github kanban-board \
+  --wip-limits '{
+    "In Progress": 5,
+    "Review": 3
+  }' \
+  --cycle-time-tracking \
+  --continuous-flow
+```
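
Enforcing the WIP limits above before a card moves can be sketched as a simple count-against-limit check:

```javascript
// Sketch: block a card move when the target column is at its WIP limit.
const wipLimits = { "In Progress": 5, "Review": 3 };

function canMoveTo(column, board, limits) {
  const limit = limits[column];
  if (limit === undefined) return true; // no limit configured for this column
  const count = board.filter((card) => card.column === column).length;
  return count < limit;
}

const board = [
  { id: 1, column: "Review" },
  { id: 2, column: "Review" },
  { id: 3, column: "Review" }
];
console.log(canMoveTo("Review", board, wipLimits)); // false (limit of 3 reached)
```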
+
+---
+
+## Progress Tracking & Analytics
+
+### Board Analytics
+
+```bash
+# Fetch project data
+PROJECT_DATA=$(gh project item-list $PROJECT_ID --owner @me --format json)
+
+# Get issue metrics
+ISSUE_METRICS=$(echo "$PROJECT_DATA" | jq -c '.items[] | select(.content.type == "Issue")' | \
+  while read -r item; do
+    ISSUE_NUM=$(echo "$item" | jq -r '.content.number')
+    gh issue view $ISSUE_NUM --json createdAt,closedAt,labels,assignees
+  done)
+
+# Generate analytics with swarm
+npx ruv-swarm github board-analytics \
+  --project-data "$PROJECT_DATA" \
+  --issue-metrics "$ISSUE_METRICS" \
+  --metrics "throughput,cycle-time,wip" \
+  --group-by "agent,priority,type" \
+  --time-range "30d" \
+  --export "dashboard"
+```
+
+### Performance Reports
+
+```bash
+# Track and visualize progress
+npx ruv-swarm github board-progress \
+  --show "burndown,velocity,cycle-time" \
+  --time-period "sprint" \
+  --export-metrics
+
+# Generate reports
+npx ruv-swarm github board-report \
+  --type "sprint-summary" \
+  --format "markdown" \
+  --include "velocity,burndown,blockers" \
+  --distribute "slack,email"
+```
+
+### KPI Tracking
+
+```bash
+# Track board performance
+npx ruv-swarm github board-kpis \
+  --metrics '[
+    "average-cycle-time",
+    "throughput-per-sprint",
+    "blocked-time-percentage",
+    "first-time-pass-rate"
+  ]' \
+  --dashboard-url
+
+# Track team performance
+npx ruv-swarm github team-metrics \
+  --board "Development" \
+  --per-member \
+  --include "velocity,quality,collaboration" \
+  --anonymous-option
+```
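
Average cycle time, one of the KPIs above, is just the mean time from `createdAt` to `closedAt` over closed issues (field names as returned by `gh issue view --json`):

```javascript
// Sketch: average cycle time in hours across closed issues.
function avgCycleTimeHours(issues) {
  const closed = issues.filter((i) => i.closedAt);
  if (closed.length === 0) return 0;
  const totalMs = closed.reduce(
    (sum, i) => sum + (Date.parse(i.closedAt) - Date.parse(i.createdAt)),
    0
  );
  return totalMs / closed.length / 36e5; // ms -> hours
}

const issues = [
  { createdAt: "2024-01-01T00:00:00Z", closedAt: "2024-01-02T00:00:00Z" },
  { createdAt: "2024-01-01T00:00:00Z", closedAt: "2024-01-04T00:00:00Z" }
];
console.log(avgCycleTimeHours(issues)); // 48
```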
+
+---
+
+## Release Planning
+
+### Release Coordination
+
+```bash
+# Plan releases using board data
+npx ruv-swarm github release-plan-board \
+  --analyze-velocity \
+  --estimate-completion \
+  --identify-risks \
+  --optimize-scope
+```
diff --git a/.claude/skills/github-project-management/references/troubleshooting.md b/.claude/skills/github-project-management/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..356f8f6
--- /dev/null
@@ -0,0 +1,88 @@
+# Troubleshooting, Security & Metrics — Detailed Reference
+
+## Troubleshooting
+
+### Sync Issues
+
+```bash
+# Diagnose sync problems
+npx ruv-swarm github board-diagnose \
+  --check "permissions,webhooks,rate-limits" \
+  --test-sync \
+  --show-conflicts
+```
+
+### Performance Optimization
+
+```bash
+# Optimize board performance
+npx ruv-swarm github board-optimize \
+  --analyze-size \
+  --archive-completed \
+  --index-fields \
+  --cache-views
+```
+
+### Data Recovery
+
+```bash
+# Recover board data
+npx ruv-swarm github board-recover \
+  --backup-id "2024-01-15" \
+  --restore-cards \
+  --preserve-current \
+  --merge-conflicts
+```
+
+---
+
+## Security & Permissions
+
+1. **Command Authorization** -- Validate user permissions before executing commands
+2. **Rate Limiting** -- Prevent spam and abuse of issue commands
+3. **Audit Logging** -- Track all swarm operations on issues and boards
+4. **Data Privacy** -- Respect private repository settings
+5. **Access Control** -- Enforce proper GitHub permissions for board operations
+6. **Webhook Security** -- Secure webhook endpoints for real-time updates
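
Webhook endpoints (item 6) should validate GitHub's `X-Hub-Signature-256` header before trusting a payload. A minimal sketch with `openssl`; the secret and payload below are illustrative placeholders, and the "received" value is recomputed here only so the sketch is self-contained:

```bash
# Verify a GitHub webhook payload against its X-Hub-Signature-256 header
WEBHOOK_SECRET="example-secret"   # placeholder; use your configured webhook secret
PAYLOAD='{"action":"opened"}'     # raw request body, byte-for-byte

# GitHub sends: X-Hub-Signature-256: sha256=<hmac-sha256-hex>
EXPECTED="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" | awk '{print $2}')"

# In a real endpoint, RECEIVED comes from the request header
RECEIVED="$EXPECTED"

if [ "$RECEIVED" = "$EXPECTED" ]; then
  echo "signature valid"
else
  echo "signature invalid"
fi
```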
+
+---
+
+## Metrics & Analytics
+
+### Performance Metrics
+
+Automatic tracking of:
+- Issue creation and resolution times
+- Agent productivity metrics
+- Project milestone progress
+- Cross-repository coordination efficiency
+- Sprint velocity and burndown
+- Cycle time and throughput
+- Work-in-progress limits
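
Cycle time in the list above can be derived directly from the `createdAt`/`closedAt` fields that `gh` returns. A sketch over inlined sample JSON; real usage would pipe `gh issue list --json number,createdAt,closedAt` instead of the literal:

```bash
# Compute cycle time in hours per closed issue from gh-style JSON
ISSUES='[{"number":1,"createdAt":"2024-01-01T00:00:00Z","closedAt":"2024-01-02T12:00:00Z"}]'

# fromdateiso8601 converts the timestamps to epoch seconds; divide by 3600 for hours
CYCLE=$(echo "$ISSUES" | jq -r '.[] | select(.closedAt != null)
  | "\(.number) \(((.closedAt | fromdateiso8601) - (.createdAt | fromdateiso8601)) / 3600)"')
echo "$CYCLE"
```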
+
+### Reporting Features
+
+- Weekly progress summaries
+- Agent performance analytics
+- Project health metrics
+- Integration success rates
+- Team collaboration metrics
+- Quality and defect tracking
+
+### Issue Resolution Time
+
+```bash
+# Analyze swarm performance
+npx ruv-swarm github issue-metrics \
+  --issue 456 \
+  --metrics "time-to-close,agent-efficiency,subtask-completion"
+```
+
+### Swarm Effectiveness
+
+```bash
+# Generate effectiveness report
+npx ruv-swarm github effectiveness \
+  --issues "closed:>2024-01-01" \
+  --compare "with-swarm,without-swarm"
+```
diff --git a/.claude/skills/github-release-management/assets/hotfix-workflow.yml b/.claude/skills/github-release-management/assets/hotfix-workflow.yml
new file mode 100644 (file)
index 0000000..74b1fce
--- /dev/null
@@ -0,0 +1,32 @@
+# .github/workflows/hotfix.yml
+name: Emergency Hotfix Workflow
+on:
+  issues:
+    types: [labeled]
+
+jobs:
+  emergency-hotfix:
+    if: contains(github.event.issue.labels.*.name, 'critical-hotfix')
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Create Hotfix Branch
+        run: |
+          LAST_STABLE=$(gh release list --limit 1 --json tagName -q '.[0].tagName')
+          HOTFIX_VERSION=$(echo "$LAST_STABLE" | awk -F. '{print $1"."$2"."$3+1}')
+
+          git checkout -b "hotfix/$HOTFIX_VERSION" "$LAST_STABLE"
+
+      - name: Fast-Track Testing
+        run: |
+          npm ci
+          npm run test:critical
+          npm run build
+
+      - name: Emergency Release
+        run: |
+          npx claude-flow@alpha github emergency-release \
+            --issue ${{ github.event.issue.number }} \
+            --severity critical \
+            --fast-track \
+            --notify-all
diff --git a/.claude/skills/github-release-management/assets/release-deployment.yml b/.claude/skills/github-release-management/assets/release-deployment.yml
new file mode 100644 (file)
index 0000000..f897467
--- /dev/null
@@ -0,0 +1,28 @@
+# .github/release-deployment.yml
+# Progressive staged rollout configuration
+deployment:
+  strategy: progressive
+  stages:
+    - name: canary
+      percentage: 5
+      duration: 1h
+      metrics:
+        - error-rate < 0.1%
+        - latency-p99 < 200ms
+      auto-advance: true
+
+    - name: partial
+      percentage: 25
+      duration: 4h
+      validation: automated-tests
+      approval: qa-team
+
+    - name: rollout
+      percentage: 50
+      duration: 8h
+      monitor: true
+
+    - name: full
+      percentage: 100
+      approval: release-manager
+      rollback-enabled: true
diff --git a/.claude/skills/github-release-management/assets/release-swarm-config.yml b/.claude/skills/github-release-management/assets/release-swarm-config.yml
new file mode 100644 (file)
index 0000000..da39d97
--- /dev/null
@@ -0,0 +1,110 @@
+# .github/release-swarm.yml
+# Comprehensive release configuration for AI swarm orchestration
+version: 2.0.0
+
+release:
+  versioning:
+    strategy: semantic
+    breaking-keywords: ["BREAKING", "BREAKING CHANGE", "!"]
+    feature-keywords: ["feat", "feature"]
+    fix-keywords: ["fix", "bugfix"]
+
+  changelog:
+    sections:
+      - title: "Features"
+        labels: ["feature", "enhancement"]
+        emoji: true
+      - title: "Bug Fixes"
+        labels: ["bug", "fix"]
+      - title: "Breaking Changes"
+        labels: ["breaking"]
+        highlight: true
+      - title: "Documentation"
+        labels: ["docs", "documentation"]
+      - title: "Performance"
+        labels: ["performance", "optimization"]
+      - title: "Security"
+        labels: ["security"]
+        priority: critical
+
+  artifacts:
+    - name: npm-package
+      build: npm run build
+      test: npm run test:all
+      publish: npm publish
+      registry: https://registry.npmjs.org
+
+    - name: docker-image
+      build: docker build -t app:$VERSION .
+      test: docker run app:$VERSION npm test
+      publish: docker push app:$VERSION
+      platforms: [linux/amd64, linux/arm64]
+
+    - name: binaries
+      build: ./scripts/build-binaries.sh
+      platforms: [linux, macos, windows]
+      architectures: [x64, arm64]
+      upload: github-release
+      sign: true
+
+  validation:
+    pre-release:
+      - lint: npm run lint
+      - typecheck: npm run typecheck
+      - unit-tests: npm run test:unit
+      - integration-tests: npm run test:integration
+      - security-scan: npm audit
+      - license-check: npm run license-check
+
+    post-release:
+      - smoke-tests: npm run test:smoke
+      - deployment-validation: ./scripts/validate-deployment.sh
+      - performance-baseline: npm run benchmark
+
+  deployment:
+    environments:
+      - name: staging
+        auto-deploy: true
+        validation: npm run test:e2e
+        approval: false
+
+      - name: production
+        auto-deploy: false
+        approval-required: true
+        approvers: ["release-manager", "tech-lead"]
+        rollback-enabled: true
+        health-checks:
+          - endpoint: /health
+            expected: 200
+            timeout: 30s
+
+  monitoring:
+    metrics:
+      - error-rate: <1%
+      - latency-p95: <500ms
+      - availability: >99.9%
+      - memory-usage: <80%
+
+    alerts:
+      - type: slack
+        channel: releases
+        on: [deploy, rollback, error]
+      - type: email
+        recipients: ["team@company.com"]
+        on: [critical-error, rollback]
+      - type: pagerduty
+        service: production-releases
+        on: [critical-error]
+
+  rollback:
+    auto-rollback:
+      triggers:
+        - error-rate > 5%
+        - latency-p99 > 2000ms
+        - availability < 99%
+      grace-period: 5m
+
+    manual-rollback:
+      preserve-data: true
+      notify-users: true
+      create-incident: true
diff --git a/.claude/skills/github-release-management/assets/release-workflow.yml b/.claude/skills/github-release-management/assets/release-workflow.yml
new file mode 100644 (file)
index 0000000..25c971c
--- /dev/null
@@ -0,0 +1,146 @@
+# .github/workflows/release.yml
+name: Intelligent Release Workflow
+on:
+  push:
+    tags: ['v*']
+
+jobs:
+  release-orchestration:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: write
+      packages: write
+      issues: write
+
+    steps:
+      - name: Checkout Repository
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+
+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: '20'
+          cache: 'npm'
+
+      - name: Authenticate GitHub CLI
+        run: echo "${{ secrets.GITHUB_TOKEN }}" | gh auth login --with-token
+
+      - name: Initialize Release Swarm
+        run: |
+          # Extract version from tag
+          RELEASE_TAG=${{ github.ref_name }}
+          PREV_TAG=$(gh release list --limit 2 --json tagName -q '.[1].tagName')
+
+          # Get merged PRs for changelog
+          PRS=$(gh pr list --state merged --base main --json number,title,labels,author,mergedAt \
+            --jq ".[] | select(.mergedAt > \"$(gh release view $PREV_TAG --json publishedAt -q .publishedAt)\")")
+
+          # Get commit history
+          COMMITS=$(gh api repos/${{ github.repository }}/compare/${PREV_TAG}...HEAD \
+            --jq '.commits[].commit.message')
+
+          # Initialize swarm coordination
+          npx claude-flow@alpha swarm init --topology hierarchical
+
+          # Store release context
+          echo "$PRS" > /tmp/release-prs.json
+          echo "$COMMITS" > /tmp/release-commits.txt
+
+          # Persist tags for later steps; shell variables do not survive between steps
+          echo "RELEASE_TAG=$RELEASE_TAG" >> "$GITHUB_ENV"
+          echo "PREV_TAG=$PREV_TAG" >> "$GITHUB_ENV"
+
+      - name: Generate Release Changelog
+        run: |
+          # Generate intelligent changelog
+          CHANGELOG=$(npx claude-flow@alpha github changelog \
+            --prs "$(cat /tmp/release-prs.json)" \
+            --commits "$(cat /tmp/release-commits.txt)" \
+            --from $PREV_TAG \
+            --to $RELEASE_TAG \
+            --categorize \
+            --add-migration-guide \
+            --format markdown)
+
+          echo "$CHANGELOG" > RELEASE_CHANGELOG.md
+
+      - name: Build Release Artifacts
+        run: |
+          # Install dependencies
+          npm ci
+
+          # Run comprehensive validation
+          npm run lint
+          npm run typecheck
+          npm run test:all
+          npm run build
+
+          # Build platform-specific binaries
+          npx claude-flow@alpha github release-build \
+            --platforms "linux,macos,windows" \
+            --architectures "x64,arm64" \
+            --parallel
+
+      - name: Security Scan
+        run: |
+          # Run security validation
+          npm audit --audit-level=moderate
+
+          npx claude-flow@alpha github release-security \
+            --scan-dependencies \
+            --check-secrets \
+            --sign-artifacts
+
+      - name: Create GitHub Release
+        run: |
+          # Create the release with the generated changelog (a tag push does not
+          # create a release by itself); fall back to editing an existing draft
+          gh release create ${{ github.ref_name }} \
+            --title "${{ github.ref_name }}" \
+            --notes-file RELEASE_CHANGELOG.md \
+          || gh release edit ${{ github.ref_name }} \
+            --notes-file RELEASE_CHANGELOG.md \
+            --draft=false
+
+          # Upload all artifacts
+          for file in dist/*; do
+            gh release upload ${{ github.ref_name }} "$file"
+          done
+
+      - name: Deploy to Package Registries
+        run: |
+          # Publish to npm
+          echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN }}" > .npmrc
+          npm publish
+
+          # Build and push Docker images to GHCR (covered by the packages: write permission)
+          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
+          docker build -t ghcr.io/${{ github.repository }}:${{ github.ref_name }} .
+          docker push ghcr.io/${{ github.repository }}:${{ github.ref_name }}
+
+      - name: Post-Release Validation
+        run: |
+          # Run smoke tests
+          npm run test:smoke
+
+          # Validate deployment
+          npx claude-flow@alpha github release-validate \
+            --version ${{ github.ref_name }} \
+            --smoke-tests \
+            --health-checks
+
+      - name: Create Release Announcement
+        run: |
+          # Create announcement issue
+          gh issue create \
+            --title "Released ${{ github.ref_name }}" \
+            --body "$(cat RELEASE_CHANGELOG.md)" \
+            --label "announcement,release"
+
+          # Notify via discussion
+          gh api repos/${{ github.repository }}/discussions \
+            --method POST \
+            -f title="Release ${{ github.ref_name }} Now Available" \
+            -f body="$(cat RELEASE_CHANGELOG.md)" \
+            -f category_id="$(gh api repos/${{ github.repository }}/discussions/categories --jq '.[] | select(.slug=="announcements") | .id')"
+
+      - name: Monitor Release
+        run: |
+          # Start release monitoring
+          npx claude-flow@alpha github release-monitor \
+            --version ${{ github.ref_name }} \
+            --duration 1h \
+            --alert-on-errors &
diff --git a/.claude/skills/github-release-management/references/advanced-workflows.md b/.claude/skills/github-release-management/references/advanced-workflows.md
new file mode 100644 (file)
index 0000000..417f155
--- /dev/null
@@ -0,0 +1,141 @@
+# Advanced Workflows — Level 3
+
+Multi-package releases, progressive deployments, cross-repo coordination, and emergency hotfix procedures.
+
+---
+
+## Multi-Package Release Coordination
+
+### Monorepo Release Strategy
+
+```javascript
+[Single Message - Multi-Package Release]:
+  // Initialize mesh topology for cross-package coordination
+  mcp__claude-flow__swarm_init { topology: "mesh", maxAgents: 8 }
+
+  // Spawn package-specific agents
+  Task("Package A Manager", "Coordinate claude-flow package release v1.0.72", "coder")
+  Task("Package B Manager", "Coordinate ruv-swarm package release v1.0.12", "coder")
+  Task("Integration Tester", "Validate cross-package compatibility", "tester")
+  Task("Version Coordinator", "Align dependencies and versions", "coordinator")
+
+  // Update all packages simultaneously
+  Write("packages/claude-flow/package.json", "[v1.0.72 content]")
+  Write("packages/ruv-swarm/package.json", "[v1.0.12 content]")
+  Write("CHANGELOG.md", "[consolidated changelog]")
+
+  // Run cross-package validation
+  Bash("cd packages/claude-flow && npm install && npm test")
+  Bash("cd packages/ruv-swarm && npm install && npm test")
+  Bash("npm run test:integration")
+
+  // Create unified release PR
+  Bash(`gh pr create \
+    --title "Release: claude-flow v1.0.72, ruv-swarm v1.0.12" \
+    --body "Multi-package coordinated release with cross-compatibility validation"`)
+```
+
+---
+
+## Progressive Deployment Strategy
+
+Read `assets/release-deployment.yml` for the staged rollout configuration.
+
+### Execute Staged Deployment
+
+```bash
+# Deploy with progressive rollout
+npx claude-flow github release-deploy \
+  --version v2.0.0 \
+  --strategy progressive \
+  --config .github/release-deployment.yml \
+  --monitor-metrics \
+  --auto-rollback-on-error
+```
+
+---
+
+## Multi-Repository Coordination
+
+### Coordinated Multi-Repo Release
+
+```bash
+# Synchronize releases across repositories
+npx claude-flow github multi-release \
+  --repos "frontend:v2.0.0,backend:v2.1.0,cli:v1.5.0" \
+  --ensure-compatibility \
+  --atomic-release \
+  --synchronized \
+  --rollback-all-on-failure
+```
+
+### Cross-Repo Dependency Management
+
+```javascript
+[Single Message - Cross-Repo Release]:
+  // Initialize star topology for centralized coordination
+  mcp__claude-flow__swarm_init { topology: "star", maxAgents: 6 }
+
+  // Spawn repo-specific coordinators
+  Task("Frontend Release", "Release frontend v2.0.0 with API compatibility", "coordinator")
+  Task("Backend Release", "Release backend v2.1.0 with breaking changes", "coordinator")
+  Task("CLI Release", "Release CLI v1.5.0 with new commands", "coordinator")
+  Task("Compatibility Checker", "Validate cross-repo compatibility", "researcher")
+
+  // Coordinate version updates across repos
+  Bash("gh api repos/org/frontend/dispatches --method POST -f event_type='release' -F client_payload[version]=v2.0.0")
+  Bash("gh api repos/org/backend/dispatches --method POST -f event_type='release' -F client_payload[version]=v2.1.0")
+  Bash("gh api repos/org/cli/dispatches --method POST -f event_type='release' -F client_payload[version]=v1.5.0")
+
+  // Monitor all releases
+  mcp__claude-flow__swarm_monitor { interval: 5, duration: 300 }
+```
+
+---
+
+## Hotfix Emergency Procedures
+
+### Emergency Hotfix Workflow
+
+```bash
+# Fast-track critical bug fix
+npx claude-flow github emergency-release \
+  --issue 789 \
+  --severity critical \
+  --target-version v1.2.4 \
+  --cherry-pick-commits \
+  --bypass-checks security-only \
+  --fast-track \
+  --notify-all
+```
+
+### Automated Hotfix Process
+
+```javascript
+[Single Message - Emergency Hotfix]:
+  // Create hotfix branch from last stable release
+  Bash("git checkout -b hotfix/v1.2.4 v1.2.3")
+
+  // Cherry-pick critical fixes
+  Bash("git cherry-pick abc123def")
+
+  // Fast validation
+  Bash("npm run test:critical && npm run build")
+
+  // Create emergency release
+  Bash(`gh release create v1.2.4 \
+    --title "HOTFIX v1.2.4: Critical Security Patch" \
+    --notes "Emergency release addressing CVE-2024-XXXX" \
+    --prerelease=false`)
+
+  // Immediate deployment
+  Bash("npm publish --tag hotfix")
+
+  // Notify stakeholders
+  Bash(`gh issue create \
+    --title "HOTFIX v1.2.4 Deployed" \
+    --body "Critical security patch deployed. Please update immediately." \
+    --label "critical,security,hotfix"`)
+```
+
+See also: `assets/hotfix-workflow.yml` for the GitHub Actions automation.
diff --git a/.claude/skills/github-release-management/references/basic-usage.md b/.claude/skills/github-release-management/references/basic-usage.md
new file mode 100644 (file)
index 0000000..16d7a4c
--- /dev/null
@@ -0,0 +1,64 @@
+# Basic Usage — Level 1
+
+Essential release commands and simple integration patterns for getting started with GitHub release management.
+
+---
+
+## Create Release Draft
+
+```bash
+# Get last release tag
+LAST_TAG=$(gh release list --limit 1 --json tagName -q '.[0].tagName')
+
+# Generate changelog from commits
+CHANGELOG=$(gh api repos/:owner/:repo/compare/${LAST_TAG}...HEAD \
+  --jq '.commits[].commit.message')
+
+# Create draft release
+gh release create v2.0.0 \
+  --draft \
+  --title "Release v2.0.0" \
+  --notes "$CHANGELOG" \
+  --target main
+```
+
+## Basic Version Bump
+
+```bash
+# Update package.json version
+npm version patch  # or minor, major
+
+# Push version tag
+git push --follow-tags
+```
+
+## Simple Deployment
+
+```bash
+# Build and publish npm package
+npm run build
+npm publish
+
+# Create GitHub release (npm pkg get wraps the version in quotes, so strip them)
+gh release create "v$(npm pkg get version | tr -d '"')" \
+  --generate-notes
+```
+
+## Quick Integration Example
+
+```javascript
+// Simple release preparation in Claude Code
+[Single Message]:
+  // Update version files
+  Edit("package.json", { old: '"version": "1.0.0"', new: '"version": "2.0.0"' })
+
+  // Generate changelog
+  Bash("gh api repos/:owner/:repo/compare/v1.0.0...HEAD --jq '.commits[].commit.message' > CHANGELOG.md")
+
+  // Create release branch
+  Bash("git checkout -b release/v2.0.0")
+  Bash("git add -A && git commit -m 'release: Prepare v2.0.0'")
+
+  // Create PR
+  Bash("gh pr create --title 'Release v2.0.0' --body 'Automated release preparation'")
+```
diff --git a/.claude/skills/github-release-management/references/best-practices.md b/.claude/skills/github-release-management/references/best-practices.md
new file mode 100644 (file)
index 0000000..a21c400
--- /dev/null
@@ -0,0 +1,92 @@
+# Best Practices & Patterns
+
+Guidelines for release planning, automation, monitoring, and documentation standards.
+
+---
+
+## Release Planning Guidelines
+
+### 1. Regular Release Cadence
+- **Weekly**: Patch releases with bug fixes
+- **Bi-weekly**: Minor releases with features
+- **Quarterly**: Major releases with breaking changes
+- **On-demand**: Hotfixes for critical issues
+
+### 2. Feature Freeze Strategy
+- Code freeze 3 days before release
+- Only critical bug fixes allowed
+- Beta testing period for major releases
+- Stakeholder communication plan
+
+### 3. Version Management Rules
+- Strict semantic versioning compliance
+- Breaking changes only in major versions
+- Deprecation warnings one minor version ahead
+- Cross-package version synchronization
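
A mechanical bump that follows these rules can be sketched in a few lines of shell; the `bump` helper below is illustrative, not part of any CLI mentioned here:

```bash
# Bump a semver tag according to the change type: bump v1.2.3 {major|minor|patch}
bump() {
  v=${1#v}
  MA=${v%%.*}; rest=${v#*.}; MI=${rest%%.*}; PA=${rest#*.}
  case $2 in
    major) echo "v$((MA + 1)).0.0" ;;   # breaking changes only here
    minor) echo "v$MA.$((MI + 1)).0" ;; # new features
    patch) echo "v$MA.$MI.$((PA + 1))" ;;
  esac
}

bump v1.2.3 major   # breaking change
bump v1.2.3 minor   # new feature
```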
+
+---
+
+## Automation Recommendations
+
+### 1. Comprehensive CI/CD Pipeline
+- Automated testing at every stage
+- Security scanning before release
+- Performance benchmarking
+- Documentation generation
+
+### 2. Progressive Deployment
+- Canary releases for early detection
+- Staged rollouts with monitoring
+- Automated health checks
+- Quick rollback mechanisms
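
The health-check-then-rollback loop behind these points can be sketched as a bounded retry probe. `check_health` is a stub here so the sketch is self-contained; in practice it would be something like `curl -fsS -m 5 "$HEALTH_URL" >/dev/null`:

```bash
# Staged health-check loop with a rollback signal after repeated failures
check_health() { return 0; }   # stub; replace with a real endpoint probe

STATUS=unhealthy
for attempt in 1 2 3; do
  if check_health; then
    STATUS=healthy
    break
  fi
  sleep 5   # back off before the next probe
done
echo "$STATUS"   # trigger rollback when this is still "unhealthy"
```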
+
+### 3. Monitoring & Observability
+- Real-time error tracking
+- Performance metrics collection
+- User adoption analytics
+- Feedback collection automation
+
+---
+
+## Documentation Standards
+
+### 1. Changelog Requirements
+- Categorized changes by type
+- Breaking changes highlighted
+- Migration guides for major versions
+- Contributor attribution
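
Categorizing changes by type is straightforward when commits follow the conventional-commit style. A sketch over sample messages; the category labels and the assumption of conventional-commit subjects are ours:

```bash
# Group commit subjects into changelog categories by prefix
COMMITS=$'feat: add export API\nfix: null crash in parser\nfeat!: drop legacy Node support'

CATEGORIZED=$(printf '%s\n' "$COMMITS" | awk '
  /^[a-z]+(\(.*\))?!:/ { print "BREAKING | " $0; next }
  /^feat/              { print "Feature | "  $0; next }
  /^fix/               { print "Fix | "      $0; next }
                       { print "Other | "    $0 }')
echo "$CATEGORIZED"
```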
+
+### 2. Release Notes Content
+- High-level feature summaries
+- Detailed technical changes
+- Upgrade instructions
+- Known issues and limitations
+
+### 3. API Documentation
+- Automated API doc generation
+- Example code updates
+- Deprecation notices
+- Version compatibility matrix
+
+---
+
+## Performance Metrics & Benchmarks
+
+### Expected Performance
+- **Release Planning**: < 2 minutes
+- **Build Process**: 3-8 minutes (varies by project)
+- **Test Execution**: 5-15 minutes
+- **Deployment**: 2-5 minutes per target
+- **Complete Pipeline**: 15-30 minutes
+
+### Optimization Tips
+1. **Parallel Execution**: Use swarm coordination for concurrent tasks
+2. **Caching**: Enable build and dependency caching
+3. **Incremental Builds**: Only rebuild changed components
+4. **Test Optimization**: Run critical tests first, full suite in parallel
+
+### Success Metrics
+- **Release Frequency**: Target weekly minor releases
+- **Lead Time**: < 2 hours from commit to production
+- **Failure Rate**: < 2% of releases require rollback
+- **MTTR**: < 30 minutes for critical hotfixes
diff --git a/.claude/skills/github-release-management/references/enterprise-features.md b/.claude/skills/github-release-management/references/enterprise-features.md
new file mode 100644 (file)
index 0000000..fb4c91f
--- /dev/null
@@ -0,0 +1,139 @@
+# Enterprise Features — Level 4
+
+Release configuration management, advanced testing strategies, monitoring, analytics, security, and compliance.
+
+---
+
+## Release Configuration Management
+
+Read `assets/release-swarm-config.yml` for the comprehensive release configuration template.
+
+The configuration covers:
+- **Versioning** — Semantic versioning strategy with keyword detection
+- **Changelog** — Categorized sections with emoji and label mapping
+- **Artifacts** — npm, Docker, and binary build/test/publish pipelines
+- **Validation** — Pre-release and post-release check suites
+- **Deployment** — Staging and production environments with approvals
+- **Monitoring** — Metrics thresholds and multi-channel alerts
+- **Rollback** — Auto-rollback triggers with grace periods
+
+---
+
+## Advanced Testing Strategies
+
+### Comprehensive Validation Suite
+
+```bash
+# Pre-release validation with all checks
+npx claude-flow github release-validate \
+  --checks "
+    version-conflicts,
+    dependency-compatibility,
+    api-breaking-changes,
+    security-vulnerabilities,
+    performance-regression,
+    documentation-completeness,
+    license-compliance,
+    backwards-compatibility
+  " \
+  --block-on-failure \
+  --generate-report \
+  --upload-results
+```
+
+### Backward Compatibility Testing
+
+```bash
+# Test against previous versions
+npx claude-flow github compat-test \
+  --previous-versions "v1.0,v1.1,v1.2" \
+  --api-contracts \
+  --data-migrations \
+  --integration-tests \
+  --generate-report
+```
+
+### Performance Regression Detection
+
+```bash
+# Benchmark against baseline
+npx claude-flow github performance-test \
+  --baseline v1.9.0 \
+  --candidate v2.0.0 \
+  --metrics "throughput,latency,memory,cpu" \
+  --threshold 5% \
+  --fail-on-regression
+```
+
+---
+
+## Release Monitoring & Analytics
+
+### Real-Time Release Monitoring
+
+```bash
+# Monitor release health post-deployment
+npx claude-flow github release-monitor \
+  --version v2.0.0 \
+  --metrics "error-rate,latency,throughput,adoption" \
+  --alert-thresholds \
+  --duration 24h \
+  --export-dashboard
+```
+
+### Release Analytics & Insights
+
+```bash
+# Analyze release performance and adoption
+npx claude-flow github release-analytics \
+  --version v2.0.0 \
+  --compare-with v1.9.0 \
+  --metrics "adoption,performance,stability,feedback" \
+  --generate-insights \
+  --export-report
+```
+
+### Automated Rollback Configuration
+
+```bash
+# Configure intelligent auto-rollback
+npx claude-flow github rollback-config \
+  --triggers '{
+    "error-rate": ">5%",
+    "latency-p99": ">1000ms",
+    "availability": "<99.9%",
+    "failed-health-checks": ">3"
+  }' \
+  --grace-period 5m \
+  --notify-on-rollback \
+  --preserve-metrics
+```
+
+---
+
+## Security & Compliance
+
+### Security Scanning
+
+```bash
+# Comprehensive security validation
+npx claude-flow github release-security \
+  --scan-dependencies \
+  --check-secrets \
+  --audit-permissions \
+  --sign-artifacts \
+  --sbom-generation \
+  --vulnerability-report
+```
+
+### Compliance Validation
+
+```bash
+# Ensure regulatory compliance
+npx claude-flow github release-compliance \
+  --standards "SOC2,GDPR,HIPAA" \
+  --license-audit \
+  --data-governance \
+  --audit-trail \
+  --generate-attestation
+```
diff --git a/.claude/skills/github-release-management/references/release-checklists.md b/.claude/skills/github-release-management/references/release-checklists.md
new file mode 100644 (file)
index 0000000..28ac49b
--- /dev/null
@@ -0,0 +1,46 @@
+# Release Checklists
+
+Complete pre-release, release, and post-release verification checklists.
+
+---
+
+## Pre-Release Checklist
+
+- [ ] Version numbers updated across all packages
+- [ ] Changelog generated and reviewed
+- [ ] Breaking changes documented with migration guide
+- [ ] All tests passing (unit, integration, e2e)
+- [ ] Security scan completed with no critical issues
+- [ ] Performance benchmarks within acceptable range
+- [ ] Documentation updated (API docs, README, examples)
+- [ ] Release notes drafted and reviewed
+- [ ] Stakeholders notified of upcoming release
+- [ ] Deployment plan reviewed and approved
+
+---
+
+## Release Checklist
+
+- [ ] Release branch created and validated
+- [ ] CI/CD pipeline completed successfully
+- [ ] Artifacts built and verified
+- [ ] GitHub release created with proper notes
+- [ ] Packages published to registries
+- [ ] Docker images pushed to container registry
+- [ ] Deployment to staging successful
+- [ ] Smoke tests passing in staging
+- [ ] Production deployment completed
+- [ ] Health checks passing
+
+---
+
+## Post-Release Checklist
+
+- [ ] Release announcement published
+- [ ] Monitoring dashboards reviewed
+- [ ] Error rates within normal range
+- [ ] Performance metrics stable
+- [ ] User feedback collected
+- [ ] Documentation links verified
+- [ ] Release retrospective scheduled
+- [ ] Next release planning initiated
diff --git a/.claude/skills/github-release-management/references/swarm-coordination.md b/.claude/skills/github-release-management/references/swarm-coordination.md
new file mode 100644 (file)
index 0000000..d35c154
--- /dev/null
@@ -0,0 +1,167 @@
+# Swarm Coordination — Level 2
+
+AI swarm release orchestration patterns, specialized agent roles, and coordinated release workflows.
+
+---
+
+## Initialize Release Swarm
+
+```javascript
+// Set up coordinated release team
+[Single Message - Swarm Initialization]:
+  mcp__claude-flow__swarm_init {
+    topology: "hierarchical",
+    maxAgents: 6,
+    strategy: "balanced"
+  }
+
+  // Spawn specialized agents
+  mcp__claude-flow__agent_spawn { type: "coordinator", name: "Release Director" }
+  mcp__claude-flow__agent_spawn { type: "coder", name: "Version Manager" }
+  mcp__claude-flow__agent_spawn { type: "tester", name: "QA Engineer" }
+  mcp__claude-flow__agent_spawn { type: "reviewer", name: "Release Reviewer" }
+  mcp__claude-flow__agent_spawn { type: "analyst", name: "Deployment Analyst" }
+  mcp__claude-flow__agent_spawn { type: "researcher", name: "Compatibility Checker" }
+```
+
+## Coordinated Release Workflow
+
+```javascript
+[Single Message - Full Release Coordination]:
+  // Create release branch
+  Bash("gh api repos/:owner/:repo/git/refs --method POST -f ref='refs/heads/release/v2.0.0' -f sha=$(gh api repos/:owner/:repo/git/refs/heads/main --jq '.object.sha')")
+
+  // Orchestrate release preparation
+  mcp__claude-flow__task_orchestrate {
+    task: "Prepare release v2.0.0 with comprehensive testing and validation",
+    strategy: "sequential",
+    priority: "critical",
+    maxAgents: 6
+  }
+
+  // Update all release files
+  Write("package.json", "[updated version]")
+  Write("CHANGELOG.md", "[release changelog]")
+  Write("RELEASE_NOTES.md", "[detailed notes]")
+
+  // Run comprehensive validation
+  Bash("npm install && npm test && npm run lint && npm run build")
+
+  // Create release PR
+  Bash(`gh pr create \
+    --title "Release v2.0.0: Feature Set and Improvements" \
+    --head "release/v2.0.0" \
+    --base "main" \
+    --body "$(cat RELEASE_NOTES.md)"`)
+
+  // Track progress
+  TodoWrite { todos: [
+    { content: "Prepare release branch", status: "completed", priority: "critical" },
+    { content: "Run validation suite", status: "completed", priority: "high" },
+    { content: "Create release PR", status: "completed", priority: "high" },
+    { content: "Code review approval", status: "pending", priority: "high" },
+    { content: "Merge and deploy", status: "pending", priority: "critical" }
+  ]}
+
+  // Store release state
+  mcp__claude-flow__memory_usage {
+    action: "store",
+    key: "release/v2.0.0/status",
+    value: JSON.stringify({
+      version: "2.0.0",
+      stage: "validation_complete",
+      timestamp: Date.now(),
+      ready_for_review: true
+    })
+  }
+```
+
+---
+
+## Release Agent Specializations
+
+### Changelog Agent
+
+```bash
+# Get merged PRs between versions
+PRS=$(gh pr list --state merged --base main --json number,title,labels,author,mergedAt \
+  --jq ".[] | select(.mergedAt > \"$(gh release view v1.0.0 --json publishedAt -q .publishedAt)\")")
+
+# Get commit history
+COMMITS=$(gh api repos/:owner/:repo/compare/v1.0.0...HEAD \
+  --jq '.commits[].commit.message')
+
+# Generate categorized changelog
+npx claude-flow github changelog \
+  --prs "$PRS" \
+  --commits "$COMMITS" \
+  --from v1.0.0 \
+  --to HEAD \
+  --categorize \
+  --add-migration-guide
+```
+
+**Capabilities:**
+- Semantic commit analysis
+- Breaking change detection
+- Contributor attribution
+- Migration guide generation
+- Multi-language support
+
+### Version Agent
+
+```bash
+# Intelligent version suggestion
+npx claude-flow github version-suggest \
+  --current v1.2.3 \
+  --analyze-commits \
+  --check-compatibility \
+  --suggest-pre-release
+```
+
+**Logic:**
+- Analyzes commit messages and PR labels
+- Detects breaking changes via keywords
+- Suggests appropriate version bump
+- Handles pre-release versioning
+- Validates version constraints
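
The keyword detection described above can be approximated with plain `grep` over commit subjects. Sample commits are inlined, and the conventional-commit patterns are an assumption about the repository's style:

```bash
# Suggest a bump level from commit subjects: default patch, feat -> minor, breaking -> major
COMMITS=$'fix: race in queue worker\nfeat: add webhook support\nfeat!: remove legacy API'

BUMP=patch
if echo "$COMMITS" | grep -qE '^feat(\(.*\))?:'; then BUMP=minor; fi
if echo "$COMMITS" | grep -qE '^[a-z]+(\(.*\))?!:|BREAKING CHANGE'; then BUMP=major; fi
echo "Suggested bump: $BUMP"
```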
+
+### Build Agent
+
+```bash
+# Multi-platform build coordination
+npx claude-flow github release-build \
+  --platforms "linux,macos,windows" \
+  --architectures "x64,arm64" \
+  --parallel \
+  --optimize-size
+```
+
+**Features:**
+- Cross-platform compilation
+- Parallel build execution
+- Artifact optimization and compression
+- Dependency bundling
+- Build caching and reuse
+
+### Test Agent
+
+```bash
+# Comprehensive pre-release testing
+npx claude-flow github release-test \
+  --suites "unit,integration,e2e,performance" \
+  --environments "node:16,node:18,node:20" \
+  --fail-fast false \
+  --generate-report
+```
+
+### Deploy Agent
+
+```bash
+# Multi-target deployment orchestration
+npx claude-flow github release-deploy \
+  --targets "npm,docker,github,s3" \
+  --staged-rollout \
+  --monitor-metrics \
+  --auto-rollback
+```
diff --git a/.claude/skills/github-release-management/references/troubleshooting.md b/.claude/skills/github-release-management/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..cf68de6
--- /dev/null
@@ -0,0 +1,69 @@
+# Troubleshooting & Common Issues
+
+Diagnostic procedures and solutions for common release pipeline problems.
+
+---
+
+## Issue: Failed Release Build
+
+```bash
+# Debug build failures
+npx claude-flow@alpha diagnostic-run \
+  --component build \
+  --verbose
+
+# Retry with isolated environment
+docker run --rm -v "$(pwd)":/app node:20 \
+  bash -c "cd /app && npm ci && npm run build"
+```
+
+---
+
+## Issue: Test Failures in CI
+
+```bash
+# Run tests with detailed output
+npm run test -- --verbose --coverage
+
+# Check for environment-specific issues
+npm run test:ci
+
+# Compare local vs CI environment
+npx claude-flow@alpha github compat-test \
+  --environments "local,ci" \
+  --compare
+```
+
+---
+
+## Issue: Deployment Rollback Needed
+
+```bash
+# Immediate rollback to previous version
+npx claude-flow@alpha github rollback \
+  --to-version v1.9.9 \
+  --reason "Critical bug in v2.0.0" \
+  --preserve-data \
+  --notify-users
+
+# Investigate rollback cause
+npx claude-flow@alpha github release-analytics \
+  --version v2.0.0 \
+  --identify-issues
+```
+
+---
+
+## Issue: Version Conflicts
+
+```bash
+# Check and resolve version conflicts
+npx claude-flow@alpha github release-validate \
+  --checks version-conflicts \
+  --auto-resolve
+
+# Align multi-package versions
+npx claude-flow@alpha github version-sync \
+  --packages "package-a,package-b" \
+  --strategy semantic
+```
diff --git a/.claude/skills/github-workflow-automation/assets/command-reference.md b/.claude/skills/github-workflow-automation/assets/command-reference.md
new file mode 100644 (file)
index 0000000..172be04
--- /dev/null
@@ -0,0 +1,70 @@
+# Command Reference
+
+Complete CLI reference for all GitHub workflow automation commands.
+
+---
+
+## Workflow Generation
+
+```bash
+npx ruv-swarm actions generate-workflow [options]
+  --analyze-codebase       Analyze repository structure
+  --detect-languages       Detect programming languages
+  --create-optimal-pipeline Generate optimized workflow
+```
+
+## Optimization
+
+```bash
+npx ruv-swarm actions optimize [options]
+  --workflow <path>        Path to workflow file
+  --suggest-parallelization Suggest parallel execution
+  --reduce-redundancy      Remove redundant steps
+  --estimate-savings       Estimate time/cost savings
+```
+
+## Analysis
+
+```bash
+npx ruv-swarm actions analyze [options]
+  --commit <sha>           Analyze specific commit
+  --suggest-tests          Suggest test improvements
+  --optimize-pipeline      Optimize pipeline structure
+```
+
+## Testing
+
+```bash
+npx ruv-swarm actions smart-test [options]
+  --changed-files <files>  Files that changed
+  --impact-analysis        Analyze test impact
+  --parallel-safe          Only parallel-safe tests
+```
+
+## Security
+
+```bash
+npx ruv-swarm actions security [options]
+  --deep-scan             Deep security analysis
+  --format <format>       Output format (json/text)
+  --create-issues         Auto-create GitHub issues
+```
+
+## Deployment
+
+```bash
+npx ruv-swarm actions deploy [options]
+  --strategy <type>       Deployment strategy
+  --risk <level>          Risk assessment level
+  --auto-execute          Execute automatically
+```
+
+## Monitoring
+
+```bash
+npx ruv-swarm actions analytics [options]
+  --workflow <name>       Workflow to analyze
+  --period <duration>     Analysis period
+  --identify-bottlenecks  Find bottlenecks
+  --suggest-improvements  Improvement suggestions
+```
diff --git a/.claude/skills/github-workflow-automation/assets/setup-checklist.md b/.claude/skills/github-workflow-automation/assets/setup-checklist.md
new file mode 100644 (file)
index 0000000..86d2830
--- /dev/null
@@ -0,0 +1,41 @@
+# Integration Checklist
+
+Setup verification and bootstrap script for GitHub workflow automation.
+
+---
+
+## Prerequisites
+
+- [ ] GitHub CLI (`gh`) installed and authenticated
+- [ ] Git configured with user credentials
+- [ ] Node.js v16+ installed
+- [ ] `claude-flow@alpha` package available
+- [ ] Repository has `.github/workflows` directory
+- [ ] GitHub Actions enabled on repository
+- [ ] Necessary secrets configured
+- [ ] Runner permissions verified
+
+---
+
+## Quick Setup Script
+
+```bash
+#!/bin/bash
+# setup-github-automation.sh
+
+# Install dependencies
+npm install -g claude-flow@alpha
+
+# Verify GitHub CLI
+gh auth status || gh auth login
+
+# Create workflow directory
+mkdir -p .github/workflows
+
+# Generate initial workflow
+npx ruv-swarm actions generate-workflow \
+  --analyze-codebase \
+  --create-optimal-pipeline > .github/workflows/ci.yml
+
+echo "GitHub workflow automation setup complete"
+```
diff --git a/.claude/skills/github-workflow-automation/references/advanced-features.md b/.claude/skills/github-workflow-automation/references/advanced-features.md
new file mode 100644 (file)
index 0000000..e10e3e9
--- /dev/null
@@ -0,0 +1,118 @@
+# Advanced Features
+
+Dynamic test strategies, predictive analysis, and custom actions development.
+
+---
+
+## Dynamic Test Strategies
+
+### Smart Test Selection
+
+```yaml
+# Automatically select relevant tests
+- name: Swarm Test Selection
+  run: |
+    npx ruv-swarm actions smart-test \
+      --changed-files ${{ steps.files.outputs.all }} \
+      --impact-analysis \
+      --parallel-safe
+```
+
+### Dynamic Test Matrix
+
+```yaml
+# Generate test matrix from code analysis
+jobs:
+  generate-matrix:
+    outputs:
+      matrix: ${{ steps.set-matrix.outputs.matrix }}
+    steps:
+      - id: set-matrix
+        run: |
+          MATRIX=$(npx ruv-swarm actions test-matrix \
+            --detect-frameworks \
+            --optimize-coverage)
+          echo "matrix=${MATRIX}" >> $GITHUB_OUTPUT
+
+  test:
+    needs: generate-matrix
+    strategy:
+      matrix: ${{fromJson(needs.generate-matrix.outputs.matrix)}}
+```
+
+### Intelligent Parallelization
+
+```bash
+# Determine optimal parallelization
+npx ruv-swarm actions parallel-strategy \
+  --analyze-dependencies \
+  --time-estimates \
+  --cost-aware
+```
+
+---
+
+## Predictive Analysis
+
+### Predictive Failures
+
+```bash
+# Predict potential failures
+npx ruv-swarm actions predict \
+  --analyze-history \
+  --identify-risks \
+  --suggest-preventive
+```
+
+### Workflow Recommendations
+
+```bash
+# Get workflow recommendations
+npx ruv-swarm actions recommend \
+  --analyze-repo \
+  --suggest-workflows \
+  --industry-best-practices
+```
+
+### Automated Optimization
+
+```bash
+# Continuously optimize workflows
+npx ruv-swarm actions auto-optimize \
+  --monitor-performance \
+  --apply-improvements \
+  --track-savings
+```
+
+---
+
+## Custom Actions Development
+
+### Custom Swarm Action Template
+
+```yaml
+# action.yml
+name: 'Swarm Custom Action'
+description: 'Custom swarm-powered action'
+inputs:
+  task:
+    description: 'Task for swarm'
+    required: true
+runs:
+  using: 'node16'
+  main: 'dist/index.js'
+```
+
+```javascript
+// index.js
+const core = require('@actions/core');
+const { SwarmAction } = require('ruv-swarm');
+
+async function run() {
+  const swarm = new SwarmAction({
+    topology: 'mesh',
+    agents: ['analyzer', 'optimizer']
+  });
+
+  await swarm.execute(core.getInput('task'));
+}
+
+run().catch(error => core.setFailed(error.message));
+```
diff --git a/.claude/skills/github-workflow-automation/references/best-practices.md b/.claude/skills/github-workflow-automation/references/best-practices.md
new file mode 100644 (file)
index 0000000..43f5093
--- /dev/null
@@ -0,0 +1,164 @@
+# Best Practices
+
+Workflow organization, security hardening, and performance optimization patterns.
+
+---
+
+## Workflow Organization
+
+### 1. Use Reusable Workflows
+
+```yaml
+# .github/workflows/reusable-swarm.yml
+name: Reusable Swarm Workflow
+on:
+  workflow_call:
+    inputs:
+      topology:
+        required: true
+        type: string
+
+jobs:
+  swarm-task:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Initialize Swarm
+        run: |
+          npx ruv-swarm init --topology ${{ inputs.topology }}
+```
+
+### 2. Implement Proper Caching
+
+```yaml
+- name: Cache Swarm Dependencies
+  uses: actions/cache@v3
+  with:
+    path: ~/.npm
+    key: ${{ runner.os }}-swarm-${{ hashFiles('**/package-lock.json') }}
+```
+
+### 3. Set Appropriate Timeouts
+
+```yaml
+jobs:
+  swarm-task:
+    timeout-minutes: 30
+    steps:
+      - name: Swarm Operation
+        timeout-minutes: 10
+```
+
+### 4. Use Workflow Dependencies
+
+```yaml
+jobs:
+  setup:
+    runs-on: ubuntu-latest
+
+  test:
+    needs: setup
+    runs-on: ubuntu-latest
+
+  deploy:
+    needs: [setup, test]
+    runs-on: ubuntu-latest
+```
+
+---
+
+## Security Best Practices
+
+### 1. Store Configurations Securely
+
+```yaml
+- name: Setup Swarm
+  env:
+    SWARM_CONFIG: ${{ secrets.SWARM_CONFIG }}
+    API_KEY: ${{ secrets.API_KEY }}
+  run: |
+    npx ruv-swarm init --config "$SWARM_CONFIG"
+```
+
+### 2. Use OIDC Authentication
+
+```yaml
+permissions:
+  id-token: write
+  contents: read
+
+- name: Configure AWS Credentials
+  uses: aws-actions/configure-aws-credentials@v2
+  with:
+    role-to-assume: arn:aws:iam::123456789012:role/GitHubAction
+    aws-region: us-east-1
+```
+
+### 3. Implement Least-Privilege
+
+```yaml
+permissions:
+  contents: read
+  pull-requests: write
+  issues: write
+```
+
+### 4. Audit Swarm Operations
+
+```yaml
+- name: Audit Swarm Actions
+  run: |
+    npx ruv-swarm actions audit \
+      --export-logs \
+      --compliance-report
+```
+
+---
+
+## Performance Optimization
+
+### 1. Cache Swarm Dependencies
+
+```yaml
+- uses: actions/cache@v3
+  with:
+    path: |
+      ~/.npm
+      node_modules
+    key: ${{ runner.os }}-swarm-${{ hashFiles('**/package-lock.json') }}
+```
+
+### 2. Use Appropriate Runner Sizes
+
+```yaml
+jobs:
+  heavy-task:
+    runs-on: ubuntu-latest-4-cores
+    steps:
+      - name: Intensive Swarm Operation
+```
+
+### 3. Implement Early Termination
+
+```yaml
+- name: Quick Fail Check
+  run: |
+    if ! npx ruv-swarm actions pre-check; then
+      echo "Pre-check failed, terminating early"
+      exit 1
+    fi
+```
+
+### 4. Optimize Parallel Execution
+
+```yaml
+strategy:
+  matrix:
+    include:
+      - runner: ubuntu-latest
+        task: test
+      - runner: ubuntu-latest
+        task: lint
+      - runner: ubuntu-latest
+        task: security
+  max-parallel: 3
+```
diff --git a/.claude/skills/github-workflow-automation/references/claude-flow-integration.md b/.claude/skills/github-workflow-automation/references/claude-flow-integration.md
new file mode 100644 (file)
index 0000000..18f5978
--- /dev/null
@@ -0,0 +1,70 @@
+# Claude-Flow Integration
+
+MCP-based swarm coordination patterns and batch operations for GitHub workflows.
+
+---
+
+## Swarm Coordination Patterns
+
+### Initialize GitHub Swarm
+
+```javascript
+// Step 1: Initialize swarm coordination
+mcp__claude-flow__swarm_init {
+  topology: "hierarchical",
+  maxAgents: 8
+}
+
+// Step 2: Spawn specialized agents
+mcp__claude-flow__agent_spawn { type: "coordinator", name: "GitHub Coordinator" }
+mcp__claude-flow__agent_spawn { type: "reviewer", name: "Code Reviewer" }
+mcp__claude-flow__agent_spawn { type: "tester", name: "QA Agent" }
+mcp__claude-flow__agent_spawn { type: "analyst", name: "Security Analyst" }
+
+// Step 3: Orchestrate GitHub workflow
+mcp__claude-flow__task_orchestrate {
+  task: "Complete PR review and merge workflow",
+  strategy: "parallel",
+  priority: "high"
+}
+```
+
+### GitHub Hooks Integration
+
+```bash
+# Pre-task: Setup GitHub context
+npx claude-flow@alpha hooks pre-task \
+  --description "PR review workflow" \
+  --context "pr-123"
+
+# During task: Track progress
+npx claude-flow@alpha hooks notify \
+  --message "Completed security scan" \
+  --type "github-action"
+
+# Post-task: Export results
+npx claude-flow@alpha hooks post-task \
+  --task-id "pr-review-123" \
+  --export-github-summary
+```
+
+---
+
+## Batch Operations
+
+### Parallel GitHub CLI Commands
+
+```javascript
+// Single message with all GitHub operations
+[Concurrent Execution]:
+  Bash("gh issue create --title 'Feature A' --body 'Description A' --label 'enhancement'")
+  Bash("gh issue create --title 'Feature B' --body 'Description B' --label 'enhancement'")
+  Bash("gh pr create --title 'PR 1' --head 'feature-a' --base 'main'")
+  Bash("gh pr create --title 'PR 2' --head 'feature-b' --base 'main'")
+  Bash("gh pr checks 123 --watch")
+  TodoWrite { todos: [
+    {content: "Review security scan results", status: "pending"},
+    {content: "Merge approved PRs", status: "pending"},
+    {content: "Update changelog", status: "pending"}
+  ]}
+```
diff --git a/.claude/skills/github-workflow-automation/references/debugging.md b/.claude/skills/github-workflow-automation/references/debugging.md
new file mode 100644 (file)
index 0000000..6a97a9d
--- /dev/null
@@ -0,0 +1,54 @@
+# Debugging and Troubleshooting
+
+Debug tools, performance profiling, failure analysis, and log inspection.
+
+---
+
+## Debug Mode
+
+```yaml
+- name: Debug Swarm
+  run: |
+    npx ruv-swarm actions debug \
+      --verbose \
+      --trace-agents \
+      --export-logs
+  env:
+    ACTIONS_STEP_DEBUG: true
+```
+
+---
+
+## Performance Profiling
+
+```bash
+# Profile workflow performance
+npx ruv-swarm actions profile \
+  --workflow "ci.yml" \
+  --identify-slow-steps \
+  --suggest-optimizations
+```
+
+---
+
+## Failure Analysis
+
+```bash
+# Analyze failed runs
+gh run view <run-id> --json jobs,conclusion | \
+  npx ruv-swarm actions analyze-failure \
+    --suggest-fixes \
+    --auto-retry-flaky
+```
+
+---
+
+## Log Analysis
+
+```bash
+# Download and analyze logs
+gh run download <run-id>
+npx ruv-swarm actions analyze-logs \
+  --directory ./logs \
+  --identify-errors
+```
diff --git a/.claude/skills/github-workflow-automation/references/monitoring-analytics.md b/.claude/skills/github-workflow-automation/references/monitoring-analytics.md
new file mode 100644 (file)
index 0000000..4570804
--- /dev/null
@@ -0,0 +1,52 @@
+# Monitoring and Analytics
+
+Workflow analysis, cost optimization, and failure pattern detection.
+
+---
+
+## Workflow Analytics
+
+```bash
+# Analyze workflow performance
+npx ruv-swarm actions analytics \
+  --workflow "ci.yml" \
+  --period 30d \
+  --identify-bottlenecks \
+  --suggest-improvements
+```
+
+---
+
+## Cost Optimization
+
+```bash
+# Optimize GitHub Actions costs
+npx ruv-swarm actions cost-optimize \
+  --analyze-usage \
+  --suggest-caching \
+  --recommend-self-hosted
+```
+
+---
+
+## Failure Pattern Analysis
+
+```bash
+# Identify failure patterns
+npx ruv-swarm actions failure-patterns \
+  --period 90d \
+  --classify-failures \
+  --suggest-preventions
+```
+
+---
+
+## Resource Management
+
+```bash
+# Optimize resource usage
+npx ruv-swarm actions resources \
+  --analyze-usage \
+  --suggest-runners \
+  --cost-optimize
+```
diff --git a/.claude/skills/github-workflow-automation/references/real-world-examples.md b/.claude/skills/github-workflow-automation/references/real-world-examples.md
new file mode 100644 (file)
index 0000000..44c042e
--- /dev/null
@@ -0,0 +1,124 @@
+# Real-World Examples
+
+Production-ready integration examples for common scenarios.
+
+---
+
+## Example 1: Full-Stack Application CI/CD
+
+```yaml
+name: Full-Stack CI/CD with Swarms
+on:
+  push:
+    branches: [main, develop]
+  pull_request:
+
+jobs:
+  initialize:
+    runs-on: ubuntu-latest
+    outputs:
+      swarm-id: ${{ steps.init.outputs.swarm-id }}
+    steps:
+      - id: init
+        run: |
+          SWARM_ID=$(npx ruv-swarm init --topology mesh --output json | jq -r '.id')
+          echo "swarm-id=${SWARM_ID}" >> $GITHUB_OUTPUT
+
+  backend:
+    needs: initialize
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - name: Backend Tests
+        run: |
+          npx ruv-swarm agents spawn --type tester \
+            --task "Run backend test suite" \
+            --swarm-id ${{ needs.initialize.outputs.swarm-id }}
+
+  frontend:
+    needs: initialize
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - name: Frontend Tests
+        run: |
+          npx ruv-swarm agents spawn --type tester \
+            --task "Run frontend test suite" \
+            --swarm-id ${{ needs.initialize.outputs.swarm-id }}
+
+  security:
+    needs: initialize
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - name: Security Scan
+        run: |
+          npx ruv-swarm agents spawn --type security \
+            --task "Security audit" \
+            --swarm-id ${{ needs.initialize.outputs.swarm-id }}
+
+  deploy:
+    needs: [backend, frontend, security]
+    if: github.ref == 'refs/heads/main'
+    runs-on: ubuntu-latest
+    steps:
+      - name: Deploy
+        run: |
+          npx ruv-swarm actions deploy \
+            --strategy progressive \
+            --swarm-id ${{ needs.initialize.outputs.swarm-id }}
+```
+
+---
+
+## Example 2: Monorepo Management
+
+```yaml
+name: Monorepo Coordination
+on: push
+
+jobs:
+  detect-changes:
+    runs-on: ubuntu-latest
+    outputs:
+      packages: ${{ steps.detect.outputs.packages }}
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+
+      - id: detect
+        run: |
+          PACKAGES=$(npx ruv-swarm actions detect-changes \
+            --monorepo \
+            --output json)
+          echo "packages=${PACKAGES}" >> $GITHUB_OUTPUT
+
+  build-packages:
+    needs: detect-changes
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        package: ${{ fromJson(needs.detect-changes.outputs.packages) }}
+    steps:
+      - name: Build Package
+        run: |
+          npx ruv-swarm actions build \
+            --package ${{ matrix.package }} \
+            --parallel-deps
+```
+
+---
+
+## Example 3: Multi-Repo Synchronization
+
+```bash
+# Synchronize multiple repositories
+npx claude-flow@alpha github sync-coordinator \
+  "Synchronize version updates across:
+   - github.com/org/repo-a
+   - github.com/org/repo-b
+   - github.com/org/repo-c
+
+   Update dependencies, align versions, create PRs"
+```
diff --git a/.claude/skills/github-workflow-automation/references/swarm-modes.md b/.claude/skills/github-workflow-automation/references/swarm-modes.md
new file mode 100644 (file)
index 0000000..21f81c0
--- /dev/null
@@ -0,0 +1,130 @@
+# Swarm-Powered GitHub Modes
+
+Reference for all available GitHub integration modes with swarm coordination.
+
+---
+
+## 1. gh-coordinator
+
+**GitHub workflow orchestration and coordination**
+
+- **Coordination Mode**: Hierarchical
+- **Max Parallel Operations**: 10
+- **Batch Optimized**: Yes
+- **Best For**: Complex GitHub workflows, multi-repo coordination
+
+```bash
+npx claude-flow@alpha github gh-coordinator \
+  "Coordinate multi-repo release across 5 repositories"
+```
+
+---
+
+## 2. pr-manager
+
+**Pull request management and review coordination**
+
+- **Review Mode**: Automated
+- **Multi-reviewer**: Yes
+- **Conflict Resolution**: Intelligent
+
+```bash
+gh pr create --title "Feature: New capability" \
+  --body "Automated PR with swarm review" | \
+  npx ruv-swarm actions pr-validate \
+    --spawn-agents "linter,tester,security,docs"
+```
+
+---
+
+## 3. issue-tracker
+
+**Issue management and project coordination**
+
+- **Issue Workflow**: Automated
+- **Label Management**: Smart
+- **Progress Tracking**: Real-time
+
+```bash
+npx claude-flow@alpha github issue-tracker \
+  "Manage sprint issues with automated tracking"
+```
+
+---
+
+## 4. release-manager
+
+**Release coordination and deployment**
+
+- **Release Pipeline**: Automated
+- **Versioning**: Semantic
+- **Deployment**: Multi-stage
+
+```bash
+npx claude-flow@alpha github release-manager \
+  "Create v2.0.0 release with changelog and deployment"
+```
+
+---
+
+## 5. repo-architect
+
+**Repository structure and organization**
+
+- **Structure Optimization**: Yes
+- **Multi-repo Support**: Yes
+- **Template Management**: Advanced
+
+```bash
+npx claude-flow@alpha github repo-architect \
+  "Restructure monorepo with optimal organization"
+```
+
+---
+
+## 6. code-reviewer
+
+**Automated code review and quality assurance**
+
+- **Review Quality**: Deep
+- **Security Analysis**: Yes
+- **Performance Check**: Automated
+
+```bash
+gh pr view 123 --json files | \
+  npx ruv-swarm actions pr-validate \
+    --deep-review \
+    --security-scan
+```
+
+---
+
+## 7. ci-orchestrator
+
+**CI/CD pipeline coordination**
+
+- **Pipeline Management**: Advanced
+- **Test Coordination**: Parallel
+- **Deployment**: Automated
+
+```bash
+npx claude-flow@alpha github ci-orchestrator \
+  "Setup parallel test execution with smart caching"
+```
+
+---
+
+## 8. security-guardian
+
+**Security and compliance management**
+
+- **Security Scan**: Automated
+- **Compliance Check**: Continuous
+- **Vulnerability Management**: Proactive
+
+```bash
+npx ruv-swarm actions security \
+  --deep-scan \
+  --compliance-check \
+  --create-issues
+```
diff --git a/.claude/skills/github-workflow-automation/references/workflow-templates.md b/.claude/skills/github-workflow-automation/references/workflow-templates.md
new file mode 100644 (file)
index 0000000..6c67942
--- /dev/null
@@ -0,0 +1,215 @@
+# Production-Ready GitHub Actions Workflow Templates
+
+Complete YAML templates for swarm-coordinated CI/CD pipelines.
+
+---
+
+## 1. Intelligent CI with Swarms
+
+```yaml
+# .github/workflows/swarm-ci.yml
+name: Intelligent CI with Swarms
+on: [push, pull_request]
+
+jobs:
+  swarm-analysis:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+
+      - name: Initialize Swarm
+        uses: ruvnet/swarm-action@v1
+        with:
+          topology: mesh
+          max-agents: 6
+
+      - name: Analyze Changes
+        run: |
+          npx ruv-swarm actions analyze \
+            --commit ${{ github.sha }} \
+            --suggest-tests \
+            --optimize-pipeline
+```
+
+---
+
+## 2. Multi-Language Detection
+
+```yaml
+# .github/workflows/polyglot-swarm.yml
+name: Polyglot Project Handler
+on: push
+
+jobs:
+  detect-and-build:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+
+      - name: Detect Languages
+        id: detect
+        run: |
+          npx ruv-swarm actions detect-stack \
+            --output json > stack.json
+
+      - name: Dynamic Build Matrix
+        run: |
+          npx ruv-swarm actions create-matrix \
+            --from stack.json \
+            --parallel-builds
+```
+
+---
+
+## 3. Adaptive Security Scanning
+
+```yaml
+# .github/workflows/security-swarm.yml
+name: Intelligent Security Scan
+on:
+  schedule:
+    - cron: '0 0 * * *'
+  workflow_dispatch:
+
+jobs:
+  security-swarm:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Security Analysis Swarm
+        run: |
+          SECURITY_ISSUES=$(npx ruv-swarm actions security \
+            --deep-scan \
+            --format json)
+
+          echo "$SECURITY_ISSUES" | jq -r '.issues[]? | @base64' | while read -r issue; do
+            _jq() {
+              echo "${issue}" | base64 --decode | jq -r "${1}"
+            }
+            gh issue create \
+              --title "$(_jq '.title')" \
+              --body "$(_jq '.body')" \
+              --label "security,critical"
+          done
+```
+
+---
+
+## 4. Self-Healing Pipeline
+
+```yaml
+# .github/workflows/self-healing.yml
+name: Self-Healing Pipeline
+on: workflow_run
+
+jobs:
+  heal-pipeline:
+    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
+    runs-on: ubuntu-latest
+    steps:
+      - name: Diagnose and Fix
+        run: |
+          npx ruv-swarm actions self-heal \
+            --run-id ${{ github.event.workflow_run.id }} \
+            --auto-fix-common \
+            --create-pr-complex
+```
+
+---
+
+## 5. Progressive Deployment
+
+```yaml
+# .github/workflows/smart-deployment.yml
+name: Smart Deployment
+on:
+  push:
+    branches: [main]
+
+jobs:
+  progressive-deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Analyze Risk
+        id: risk
+        run: |
+          npx ruv-swarm actions deploy-risk \
+            --changes ${{ github.sha }} \
+            --history 30d
+
+      - name: Choose Strategy
+        run: |
+          npx ruv-swarm actions deploy-strategy \
+            --risk ${{ steps.risk.outputs.level }} \
+            --auto-execute
+```
+
+---
+
+## 6. Performance Regression Detection
+
+```yaml
+# .github/workflows/performance-guard.yml
+name: Performance Guard
+on: pull_request
+
+jobs:
+  perf-swarm:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Performance Analysis
+        run: |
+          npx ruv-swarm actions perf-test \
+            --baseline main \
+            --threshold 10% \
+            --auto-profile-regression
+```
+
+---
+
+## 7. PR Validation Swarm
+
+```yaml
+# .github/workflows/pr-validation.yml
+name: PR Validation Swarm
+on: pull_request
+
+jobs:
+  validate:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Multi-Agent Validation
+        run: |
+          PR_DATA=$(gh pr view ${{ github.event.pull_request.number }} --json files,labels)
+
+          RESULTS=$(npx ruv-swarm actions pr-validate \
+            --spawn-agents "linter,tester,security,docs" \
+            --parallel \
+            --pr-data "$PR_DATA")
+
+          gh pr comment ${{ github.event.pull_request.number }} \
+            --body "$RESULTS"
+```
+
+---
+
+## 8. Intelligent Release
+
+```yaml
+# .github/workflows/intelligent-release.yml
+name: Intelligent Release
+on:
+  push:
+    tags: ['v*']
+
+jobs:
+  release:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Release Swarm
+        run: |
+          npx ruv-swarm actions release \
+            --analyze-changes \
+            --generate-notes \
+            --create-artifacts \
+            --publish-smart
+```
diff --git a/.claude/skills/hive-mind-advanced/assets/hive-config.json b/.claude/skills/hive-mind-advanced/assets/hive-config.json
new file mode 100644 (file)
index 0000000..8576033
--- /dev/null
@@ -0,0 +1,19 @@
+{
+  "$schema": "hive-mind-config",
+  "$comment": "Full configuration schema for Hive Mind initialization",
+
+  "objective": "Build microservices",
+  "name": "my-hive",
+  "queenType": "strategic",
+  "maxWorkers": 8,
+  "consensusAlgorithm": "byzantine",
+  "autoScale": true,
+  "memorySize": 100,
+  "taskTimeout": 60,
+  "encryption": false,
+
+  "_queenType_options": ["strategic", "tactical", "adaptive"],
+  "_consensusAlgorithm_options": ["majority", "weighted", "byzantine"],
+  "_memorySize_unit": "MB",
+  "_taskTimeout_unit": "minutes"
+}
diff --git a/.claude/skills/hive-mind-advanced/assets/memory-config.json b/.claude/skills/hive-mind-advanced/assets/memory-config.json
new file mode 100644 (file)
index 0000000..e4aeac3
--- /dev/null
@@ -0,0 +1,16 @@
+{
+  "$schema": "collective-memory-config",
+  "$comment": "Full configuration schema for Collective Memory",
+
+  "maxSize": 100,
+  "compressionThreshold": 1024,
+  "gcInterval": 300000,
+  "cacheSize": 1000,
+  "cacheMemoryMB": 50,
+  "enablePooling": true,
+  "enableAsyncOperations": true,
+
+  "_maxSize_unit": "MB",
+  "_compressionThreshold_unit": "bytes",
+  "_gcInterval_unit": "milliseconds (300000 = 5 minutes)"
+}
diff --git a/.claude/skills/hive-mind-advanced/references/api-reference.md b/.claude/skills/hive-mind-advanced/references/api-reference.md
new file mode 100644 (file)
index 0000000..b574616
--- /dev/null
@@ -0,0 +1,105 @@
+# API Reference
+
+Complete programmatic interface for Hive Mind components.
+
+## HiveMindCore
+
+```javascript
+const hiveMind = new HiveMindCore({
+  objective: 'Build system',
+  queenType: 'strategic',       // 'strategic' | 'tactical' | 'adaptive'
+  maxWorkers: 8,
+  consensusAlgorithm: 'byzantine' // 'majority' | 'weighted' | 'byzantine'
+});
+```
+
+### Methods
+
+| Method | Returns | Description |
+|--------|---------|-------------|
+| `initialize()` | `Promise<void>` | Bootstrap the hive mind instance |
+| `spawnQueen(queenData)` | `Promise<Queen>` | Create and register the queen agent |
+| `spawnWorkers(types[])` | `Promise<Worker[]>` | Spawn workers by type |
+| `createTask(desc, priority, opts?)` | `Promise<Task>` | Create and auto-assign a task |
+| `buildConsensus(topic, options[])` | `Promise<Decision>` | Run consensus protocol |
+| `getStatus()` | `Status` | Current hive status snapshot |
+| `getPerformanceInsights()` | `Insights` | Performance metrics and analytics |
+| `shutdown()` | `Promise<void>` | Gracefully stop all agents |
+
+### Decision Object
+
+```javascript
+{
+  decision: string,     // Winning option
+  confidence: number,   // Vote percentage (0..1)
+  votes: [              // Individual agent votes
+    { agentId: string, vote: string, weight: number }
+  ]
+}
+```
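
A weighted tally that produces this shape might look as follows. This is an illustrative sketch, not the library's internal consensus implementation (`tallyVotes` is hypothetical):

```javascript
// Sketch: derive a Decision-shaped object from weighted agent votes.
// Assumes a non-empty votes array; not the actual consensus protocol.
function tallyVotes(votes) {
  const totals = {};
  let totalWeight = 0;
  for (const { vote, weight } of votes) {
    totals[vote] = (totals[vote] || 0) + weight;
    totalWeight += weight;
  }
  // Winning option is the one with the highest accumulated weight.
  const decision = Object.keys(totals).reduce((a, b) => (totals[a] >= totals[b] ? a : b));
  return { decision, confidence: totals[decision] / totalWeight, votes };
}
```

With votes of weight 2 + 1 for one option against weight 1 for another, `confidence` comes out as 0.75, matching the "vote percentage" semantics above.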
+
+## CollectiveMemory
+
+```javascript
+const memory = new CollectiveMemory({
+  swarmId: 'hive-123',
+  maxSize: 100,        // MB
+  cacheSize: 1000      // LRU entries
+});
+```
+
+### Methods
+
+| Method | Returns | Description |
+|--------|---------|-------------|
+| `store(key, value, type, metadata?)` | `Promise<void>` | Store a memory entry |
+| `retrieve(key)` | `Promise<any>` | Get entry by key |
+| `search(pattern, opts?)` | `Promise<Entry[]>` | Search by glob pattern |
+| `getRelated(key, limit?)` | `Promise<Entry[]>` | Get associated entries |
+| `associate(key1, key2, strength)` | `Promise<void>` | Link two entries |
+| `getStatistics()` | `Stats` | Cache and storage stats |
+| `getAnalytics()` | `Analytics` | Read/write ratios, hot keys |
+| `healthCheck()` | `Promise<Health>` | DB and cache health |
+
+### Search Options
+
+```javascript
+{
+  type: 'knowledge',     // Filter by memory type
+  minConfidence: 0.8,    // Minimum confidence threshold
+  limit: 50              // Max results
+}
+```
+
+## HiveMindSessionManager
+
+```javascript
+const sessionManager = new HiveMindSessionManager();
+```
+
+### Methods
+
+| Method | Returns | Description |
+|--------|---------|-------------|
+| `createSession(swarmId, name, objective, meta?)` | `Promise<string>` | Create session, returns ID |
+| `getSession(sessionId)` | `Promise<Session>` | Get session details |
+| `getActiveSessions()` | `Promise<Session[]>` | List active sessions |
+| `saveCheckpoint(sessionId, name, data)` | `Promise<void>` | Persist checkpoint |
+| `pauseSession(sessionId)` | `Promise<void>` | Pause execution |
+| `resumeSession(sessionId)` | `Promise<void>` | Resume paused session |
+| `stopSession(sessionId)` | `Promise<void>` | Stop session |
+| `completeSession(sessionId)` | `Promise<void>` | Mark as completed |
+
+### Session Lifecycle
+
+```
+created → running → paused → running → completed
+                  → stopped
+```
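
One way to read the diagram above is as an explicit transition table. This is a sketch; which states may transition to `stopped` is our assumption from the diagram, not a documented guarantee:

```javascript
// Illustrative transition table for the session lifecycle above.
const TRANSITIONS = {
  created:   ['running'],
  running:   ['paused', 'stopped', 'completed'],
  paused:    ['running', 'stopped'],
  stopped:   [],   // terminal
  completed: []    // terminal
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```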
+
+## Export / Import
+
+```bash
+npx claude-flow hive-mind export <session-id> --output backup.json
+npx claude-flow hive-mind import backup.json
+```
diff --git a/.claude/skills/hive-mind-advanced/references/best-practices.md b/.claude/skills/hive-mind-advanced/references/best-practices.md
new file mode 100644 (file)
index 0000000..566b31e
--- /dev/null
@@ -0,0 +1,84 @@
+# Best Practices
+
+Guidelines for effective use of the Hive Mind system.
+
+## 1. Choose the Right Queen Type
+
+| Scenario | Queen Type | Why |
+|----------|-----------|-----|
+| Research / planning / analysis | Strategic | Long-horizon decomposition |
+| Implementation / sprints | Tactical | Concrete deliverables, short loops |
+| Optimization / dynamic workloads | Adaptive | Real-time rebalancing |
+
+## 2. Leverage Consensus for Critical Decisions
+
+Use consensus protocols for decisions that affect multiple agents or have lasting impact:
+
+- Architecture pattern selection
+- Technology stack choices
+- Implementation approach
+- Code review approval
+- Release readiness
+
+Choose the algorithm by stakes: majority for low-risk decisions, weighted when some agents' expertise should count for more, byzantine for critical decisions that must tolerate faulty or conflicting agents.
+
+## 3. Utilize Collective Memory
+
+**Store learnings after successful implementations:**
+
+```javascript
+await memory.store('auth-pattern', {
+  approach: 'JWT with refresh tokens',
+  pros: ['Stateless', 'Scalable'],
+  cons: ['Token size', 'Revocation complexity'],
+  implementation: { /* details */ }
+}, 'knowledge', { confidence: 0.95 });
+```
+
+**Build associations to enable graph-based retrieval:**
+
+```javascript
+await memory.associate('jwt-auth', 'refresh-tokens', 0.9);
+await memory.associate('jwt-auth', 'oauth2', 0.7);
+```
+
+## 4. Monitor Performance Regularly
+
+```bash
+npx claude-flow hive-mind status    # Overall health
+npx claude-flow hive-mind metrics   # Task throughput
+npx claude-flow hive-mind memory    # Memory layer stats
+```
+
+## 5. Checkpoint Frequently
+
+Create checkpoints at key milestones to enable session recovery:
+
+```javascript
+await sessionManager.saveCheckpoint(
+  sessionId,
+  'api-routes-complete',
+  { completedRoutes: [...], remaining: [...] }
+);
+```
+
+Resume the session later:
+
+```bash
+npx claude-flow hive-mind resume <session-id>
+```
+
+## 6. Right-Size the Worker Pool
+
+- Start with 4-6 workers for focused tasks
+- Scale to 8-12 for broad, parallel workloads
+- Enable `autoScale` for unpredictable complexity
+- Monitor queue depth to avoid over-provisioning
+
+## 7. Use Hooks for Automation
+
+Integrate pre-task and post-task hooks to reduce manual overhead:
+
+- **Pre-task:** auto-assign by file type, validate complexity, cache search patterns
+- **Post-task:** auto-format deliverables, train neural patterns, update collective memory
+- **Session:** generate summaries, persist checkpoints, track metrics
diff --git a/.claude/skills/hive-mind-advanced/references/collective-memory.md b/.claude/skills/hive-mind-advanced/references/collective-memory.md
new file mode 100644 (file)
index 0000000..c03a72a
--- /dev/null
@@ -0,0 +1,141 @@
+# Collective Memory
+
+Detailed reference for the shared knowledge base, persistence layer, and memory operations.
+
+## Architecture
+
+The collective memory system provides a shared knowledge base across all agents in a hive:
+
+- **LRU cache** with configurable size (default: 1000 entries)
+- **SQLite persistence** with WAL (Write-Ahead Logging) mode
+- **Memory pressure handling** (default: 50MB)
+- **Automatic consolidation and association**
+- **Access pattern tracking** for optimization
+
+## Memory Types
+
+| Type | TTL | Compression | Description |
+|------|-----|-------------|-------------|
+| `knowledge` | Permanent | No | Durable insights and learnings |
+| `context` | 1 hour | No | Session context data |
+| `task` | 30 minutes | No | Task-specific working data |
+| `result` | Permanent | Yes | Execution results |
+| `error` | 24 hours | No | Error logs and diagnostics |
+| `metric` | 1 hour | No | Performance metrics |
+| `consensus` | Permanent | No | Decision records |
+| `system` | Permanent | No | System configuration |
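+
+For example, TTL-based expiry could be checked like this (illustrative sketch; the actual entry fields and units may differ):
+
+```javascript
+// TTLs in seconds per memory type; null means permanent
+const TTL = {
+  knowledge: null, context: 3600, task: 1800, result: null,
+  error: 86400, metric: 3600, consensus: null, system: null
+};
+
+function isExpired(entry, now = Date.now()) {
+  const ttl = TTL[entry.type];
+  return ttl !== null && now - entry.createdAt > ttl * 1000;
+}
+```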
+
+## Storing Knowledge
+
+```javascript
+// Store with type and metadata
+await memory.store('api-patterns', {
+  rest: { pros: ['Simple', 'Cacheable'], cons: ['Over-fetching'] },
+  graphql: { pros: ['Flexible', 'Efficient'], cons: ['Complex'] }
+}, 'knowledge', { confidence: 0.95 });
+
+// Store after successful pattern implementation
+await memory.store('auth-pattern', {
+  approach: 'JWT with refresh tokens',
+  pros: ['Stateless', 'Scalable'],
+  cons: ['Token size', 'Revocation complexity'],
+  implementation: { /* details */ }
+}, 'knowledge', { confidence: 0.95 });
+```
+
+## Searching and Retrieval
+
+```javascript
+// Search memory by pattern with filters
+const results = await memory.search('api*', {
+  type: 'knowledge',
+  minConfidence: 0.8,
+  limit: 50
+});
+
+// Retrieve a single entry by key
+const data = await memory.retrieve('api-patterns');
+
+// Get related memories by association strength
+const related = await memory.getRelated('api-patterns', 10);
+```
+
+## Building Associations
+
+Link related concepts for graph-based retrieval:
+
+```javascript
+await memory.associate('jwt-auth', 'refresh-tokens', 0.9);
+await memory.associate('jwt-auth', 'oauth2', 0.7);
+await memory.associate('rest-api', 'authentication', 0.9);
+```
+
+## Database Optimization
+
+The SQLite backend uses the following tuning:
+
+| Parameter | Value | Purpose |
+|-----------|-------|---------|
+| Journal mode | WAL | Concurrent read/write |
+| Cache size | 64 MB | Reduced disk I/O |
+| Memory mapping | 256 MB | Faster large reads |
+| Prepared statements | Yes | Reduced parse overhead |
+| Auto ANALYZE | Yes | Query planner accuracy |
+
+## Object Pooling
+
+The memory system employs object pooling for reduced GC pressure:
+
+- **Query result pools** — Reuse result objects from common queries
+- **Memory entry pools** — Pre-allocated entry containers
+- **Batch write buffers** — Coalesced writes to SQLite
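+
+A minimal sketch of the pooling idea (illustrative only; the actual pools are internal to the memory system):
+
+```javascript
+// Generic free-list pool: reuse objects instead of allocating per operation
+class ObjectPool {
+  constructor(factory, maxSize = 100) {
+    this.factory = factory;
+    this.maxSize = maxSize;
+    this.free = [];
+  }
+  acquire() {
+    return this.free.pop() || this.factory();
+  }
+  release(obj) {
+    if (this.free.length < this.maxSize) this.free.push(obj);
+  }
+}
+```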
+
+## Garbage Collection and Maintenance
+
+```bash
+# Run garbage collection
+npx claude-flow hive-mind memory --gc
+
+# Optimize database (VACUUM + ANALYZE)
+npx claude-flow hive-mind memory --optimize
+
+# Export all memory and clear
+npx claude-flow hive-mind memory --export --clear
+```
+
+## Statistics and Health
+
+```javascript
+const stats = memory.getStatistics();
+// { totalEntries, cacheHitRate, memoryUsageMB, ... }
+
+const analytics = memory.getAnalytics();
+// { readWriteRatio, hotKeys, coldKeys, ... }
+
+const health = await memory.healthCheck();
+// { status: 'healthy', dbSize, cacheUtilization, ... }
+```
+
+## Configuration
+
+See `assets/memory-config.json` for the full configuration schema.
+
+```javascript
+{
+  "maxSize": 100,                // MB total
+  "compressionThreshold": 1024,  // bytes — entries above this get compressed
+  "gcInterval": 300000,          // 5 minutes
+  "cacheSize": 1000,             // max LRU entries
+  "cacheMemoryMB": 50,           // max cache memory
+  "enablePooling": true,
+  "enableAsyncOperations": true
+}
+```
+
+## Troubleshooting
+
+**High memory usage:**
+Run `--gc` and `--optimize`, or increase `maxSize`.
+
+**Low cache hit rate:**
+Increase `cacheSize` and `cacheMemoryMB` in the memory config.
diff --git a/.claude/skills/hive-mind-advanced/references/consensus-mechanisms.md b/.claude/skills/hive-mind-advanced/references/consensus-mechanisms.md
new file mode 100644 (file)
index 0000000..16389d1
--- /dev/null
@@ -0,0 +1,89 @@
+# Consensus Mechanisms
+
+Detailed reference for distributed decision-making protocols in the Hive Mind system.
+
+## Algorithms
+
+### Majority Consensus
+
+Simple democratic voting: an option wins by securing more than half of the votes.
+
+- **Threshold:** >50% of voting agents
+- **Use case:** Low-stakes decisions, quick polls
+- **Failure mode:** Ties resolved by queen's preference
+
+```bash
+npx claude-flow hive-mind spawn "..." --consensus majority
+```
+
+### Weighted Consensus
+
+The queen's vote carries 3x weight, providing strategic guidance while preserving worker input.
+
+- **Queen weight:** 3x
+- **Worker weight:** 1x
+- **Use case:** Decisions requiring leadership direction
+- **Failure mode:** Falls back to queen's choice on tie
+
+```bash
+npx claude-flow hive-mind spawn "..." --consensus weighted
+```
+
+### Byzantine Fault Tolerance
+
+Requires 2/3 supermajority for decision approval, ensuring robust consensus even with
+faulty or adversarial agents.
+
+- **Threshold:** >= 2/3 of voting agents
+- **Use case:** Critical architecture decisions, security-sensitive choices
+- **Failure mode:** No consensus reached; escalate or retry with different algorithm
+
+```bash
+npx claude-flow hive-mind spawn "..." --consensus byzantine
+```
+
+## Programmatic Usage
+
+```javascript
+// Build consensus on a topic with predefined options
+const decision = await hiveMind.buildConsensus(
+  'Architecture pattern selection',
+  ['microservices', 'monolith', 'serverless']
+);
+
+// Result structure:
+// {
+//   decision: 'microservices',    // Winning option
+//   confidence: 0.83,             // Vote percentage
+//   votes: [                      // Individual agent votes
+//     { agentId: 'queen-1', vote: 'microservices', weight: 3 },
+//     { agentId: 'coder-1', vote: 'microservices', weight: 1 },
+//     { agentId: 'architect-1', vote: 'serverless', weight: 1 }
+//   ]
+// }
+```
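+
+A tally over such votes might look like this (a sketch under the weights and thresholds described above, not the actual implementation):
+
+```javascript
+// Weighted tally: queen weight 3, workers weight 1.
+// Majority passes above 50%; byzantine requires a 2/3 supermajority.
+function tallyVotes(votes, algorithm = 'majority') {
+  const totals = {};
+  let totalWeight = 0;
+  for (const { vote, weight } of votes) {
+    totals[vote] = (totals[vote] || 0) + weight;
+    totalWeight += weight;
+  }
+  const [decision, top] = Object.entries(totals).sort((a, b) => b[1] - a[1])[0];
+  const confidence = top / totalWeight;
+  const reached = algorithm === 'byzantine' ? confidence >= 2 / 3 : confidence > 0.5;
+  return { decision, confidence, reached };
+}
+```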
+
+## When to Use Each Algorithm
+
+| Scenario | Recommended Algorithm |
+|----------|----------------------|
+| Quick technology poll | Majority |
+| Architecture pattern selection | Weighted |
+| Security-critical decision | Byzantine |
+| Code review approval | Weighted |
+| Release readiness | Byzantine |
+| Implementation approach | Majority or Weighted |
+
+## Troubleshooting
+
+**No consensus reached (Byzantine):**
+Switch to weighted consensus for more decisive results, or use simple majority:
+
+```bash
+npx claude-flow hive-mind spawn "..." --consensus weighted
+npx claude-flow hive-mind spawn "..." --consensus majority
+```
+
+**Ties in majority voting:**
+The queen's preference breaks the tie automatically. If no queen is active,
+the first-submitted vote wins.
diff --git a/.claude/skills/hive-mind-advanced/references/examples.md b/.claude/skills/hive-mind-advanced/references/examples.md
new file mode 100644 (file)
index 0000000..3d6987a
--- /dev/null
@@ -0,0 +1,73 @@
+# Examples
+
+End-to-end usage examples for common Hive Mind scenarios.
+
+## Full-Stack Development
+
+```bash
+npx claude-flow hive-mind init
+
+npx claude-flow hive-mind spawn "Build e-commerce platform" \
+  --queen-type strategic \
+  --max-workers 10 \
+  --consensus weighted \
+  --claude
+```
+
+Generated agents:
+- Queen coordinator (strategic)
+- Frontend developers (React)
+- Backend developers (Node.js)
+- Database architects
+- DevOps engineers
+- Security auditors
+- Test engineers
+- Documentation specialists
+
+## Research and Analysis
+
+```bash
+npx claude-flow hive-mind spawn "Research GraphQL vs REST" \
+  --queen-type adaptive \
+  --consensus byzantine
+```
+
+Workflow:
+1. Researcher agents gather data on both approaches
+2. Analyst agents process findings into structured comparisons
+3. Queen builds consensus on the recommendation
+4. Results stored in collective memory for future reference
+
+## Code Review
+
+```bash
+npx claude-flow hive-mind spawn "Review PR #456" \
+  --queen-type tactical \
+  --max-workers 6
+```
+
+Spawns:
+- Code analyzers
+- Security reviewers
+- Performance reviewers
+- Test coverage analyzers
+- Documentation reviewers
+- Queen builds consensus on approval/changes requested
+
+## Session Checkpoint and Resume
+
+```bash
+# Start a long-running task
+npx claude-flow hive-mind spawn "Migrate monolith to microservices" \
+  --queen-type strategic \
+  --max-workers 10
+
+# Pause when needed
+npx claude-flow hive-mind pause <session-id>
+
+# Resume later
+npx claude-flow hive-mind resume <session-id>
+
+# Export for backup
+npx claude-flow hive-mind export <session-id> --output backup.json
+```
diff --git a/.claude/skills/hive-mind-advanced/references/integration-patterns.md b/.claude/skills/hive-mind-advanced/references/integration-patterns.md
new file mode 100644 (file)
index 0000000..f248116
--- /dev/null
@@ -0,0 +1,71 @@
+# Integration Patterns
+
+How the Hive Mind system integrates with Claude Code, SPARC, and GitHub workflows.
+
+## With Claude Code
+
+Generate Claude Code `Task()` spawn commands directly:
+
+```bash
+npx claude-flow hive-mind spawn "Build REST API" --claude
+```
+
+Output:
+
+```javascript
+Task("Queen Coordinator", "Orchestrate REST API development...", "coordinator")
+Task("Backend Developer", "Implement Express routes...", "backend-dev")
+Task("Database Architect", "Design PostgreSQL schema...", "code-analyzer")
+Task("Test Engineer", "Create Jest test suite...", "tester")
+```
+
+## With SPARC Methodology
+
+```bash
+npx claude-flow sparc tdd "User authentication" --hive-mind
+```
+
+Spawns a full SPARC team:
+- **Specification agent** — Requirements and acceptance criteria
+- **Architecture agent** — System design
+- **Coder agents** — Implementation
+- **Tester agents** — TDD red-green-refactor
+- **Reviewer agents** — Code review and quality
+
+## With GitHub Integration
+
+```bash
+# Repository analysis with hive mind
+npx claude-flow hive-mind spawn "Analyze repo quality" --objective "owner/repo"
+
+# PR review coordination
+npx claude-flow hive-mind spawn "Review PR #123" --queen-type tactical
+```
+
+The tactical queen coordinates multiple reviewer agents (code, security, performance,
+test coverage, documentation) and builds consensus on approval or requested changes.
+
+## Hooks Integration
+
+The Hive Mind integrates with Claude Flow hooks for end-to-end automation:
+
+### Pre-Task Hooks
+
+- Auto-assign agents by file type
+- Validate objective complexity
+- Optimize topology selection
+- Cache search patterns
+
+### Post-Task Hooks
+
+- Auto-format deliverables
+- Train neural patterns
+- Update collective memory
+- Analyze performance bottlenecks
+
+### Session Hooks
+
+- Generate session summaries
+- Persist checkpoint data
+- Track comprehensive metrics
+- Restore execution context
diff --git a/.claude/skills/hive-mind-advanced/references/performance-optimization.md b/.claude/skills/hive-mind-advanced/references/performance-optimization.md
new file mode 100644 (file)
index 0000000..ab4cfb3
--- /dev/null
@@ -0,0 +1,64 @@
+# Performance Optimization
+
+Detailed reference for tuning Hive Mind throughput, latency, and resource usage.
+
+## Benchmarks
+
+| Metric | Value |
+|--------|-------|
+| Batch spawning speedup | 10-20x vs sequential |
+| Overall speed improvement | 2.8-4.4x |
+| Token reduction | 32.3% |
+| SWE-Bench solve rate | 84.8% |
+
+## Parallel Processing
+
+- **Batch agent spawning** — 5 agents per batch for reduced orchestration overhead
+- **Concurrent task orchestration** — Non-blocking task assignment via async queue
+- **Async operation optimization** — Configurable concurrency level
+
+```javascript
+// Async queue concurrency (default: min(maxWorkers * 2, 20))
+{
+  "asyncQueueConcurrency": 20
+}
+```
+
+## Memory Layer Tuning
+
+See `references/collective-memory.md` for SQLite and cache tuning.
+
+Key levers:
+- Increase `cacheSize` for higher hit rates
+- Increase `cacheMemoryMB` for larger working sets
+- Enable `enablePooling` for reduced GC pressure
+- Use `enableAsyncOperations` for non-blocking writes
+
+## Performance Metrics
+
+```javascript
+const insights = hiveMind.getPerformanceInsights();
+// {
+//   asyncQueueUtilization: 0.65,
+//   batchProcessingStats: { avgBatchSize: 4.2, batchesProcessed: 12 },
+//   successRates: { tasks: 0.94, consensus: 0.88 },
+//   avgProcessingTime: 2340,   // ms
+//   memoryEfficiency: 0.78
+// }
+```
+
+## Task Execution Optimization
+
+**Slow task assignment:**
+The system automatically caches best worker matches for 5 minutes; no configuration is needed.
+
+**High queue utilization:**
+Increase `asyncQueueConcurrency` or add more workers via `maxWorkers`.
+
+## Neural Pattern Training
+
+The system trains on successful patterns automatically:
+
+1. After successful task completion, the pattern is stored in collective memory
+2. Future task matching uses stored patterns for improved assignment accuracy
+3. Pattern confidence increases with repeated successful usage
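+
+Step 3 could follow a simple reinforcement rule such as an exponential moving average (an assumption for illustration; the actual update rule is internal):
+
+```javascript
+// Nudge confidence toward 1 on success, toward 0 on failure
+function updateConfidence(confidence, success, rate = 0.1) {
+  const target = success ? 1 : 0;
+  return confidence + rate * (target - confidence);
+}
+```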
diff --git a/.claude/skills/hive-mind-advanced/references/queen-worker-architecture.md b/.claude/skills/hive-mind-advanced/references/queen-worker-architecture.md
new file mode 100644 (file)
index 0000000..477d5da
--- /dev/null
@@ -0,0 +1,115 @@
+# Queen-Worker Architecture
+
+Detailed reference for the queen-led hierarchical coordination model.
+
+## Queen Types
+
+### Strategic Queen
+
+The strategic queen orchestrates high-level objectives: research, planning, and analysis tasks.
+It decomposes complex goals into sub-objectives and delegates to specialized workers.
+
+```bash
+npx claude-flow hive-mind spawn "Research ML frameworks" --queen-type strategic
+```
+
+**Best suited for:**
+- Multi-phase research projects
+- Architecture planning
+- Technology evaluation
+- Long-term roadmap execution
+
+### Tactical Queen
+
+The tactical queen manages mid-level execution: implementation, builds, and deployments.
+It focuses on concrete deliverables and short feedback loops.
+
+```bash
+npx claude-flow hive-mind spawn "Build authentication" --queen-type tactical
+```
+
+**Best suited for:**
+- Feature implementation
+- Sprint-scoped work
+- PR review coordination
+- Bug triage and resolution
+
+### Adaptive Queen
+
+The adaptive queen dynamically adjusts strategies based on real-time performance metrics.
+It monitors worker throughput and redistributes load when bottlenecks appear.
+
+```bash
+npx claude-flow hive-mind spawn "Optimize performance" --queen-type adaptive
+```
+
+**Best suited for:**
+- Performance optimization
+- Load balancing across workers
+- Tasks with unpredictable complexity
+- Continuous improvement workflows
+
+## Worker Specialization
+
+| Worker Type | Role | Keywords Matched |
+|-------------|------|------------------|
+| Researcher | Analysis and investigation | research, analyze, investigate, study |
+| Coder | Implementation and development | implement, build, code, develop |
+| Analyst | Data processing and metrics | data, metrics, statistics, process |
+| Tester | Quality assurance and validation | test, validate, verify, QA |
+| Architect | System design and planning | design, architecture, plan, structure |
+| Reviewer | Code review and improvement | review, improve, refactor, audit |
+| Optimizer | Performance enhancement | optimize, performance, speed, cache |
+| Documenter | Documentation generation | document, write, describe, explain |
+
+### Task-to-Worker Matching
+
+The system assigns tasks automatically based on:
+
+1. **Keyword matching** — Task description tokens mapped to worker specialization keywords
+2. **Historical performance** — Past success rates for similar task types
+3. **Worker availability** — Current load and queue depth per worker
+4. **Task complexity** — Estimated duration and dependency count
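+
+Step 1 can be sketched as a simple keyword scorer (illustrative only; the real scorer also weighs history, availability, and complexity):
+
+```javascript
+// Score each worker type by keyword overlap with the task description
+const KEYWORDS = {
+  researcher: ['research', 'analyze', 'investigate', 'study'],
+  coder: ['implement', 'build', 'code', 'develop'],
+  tester: ['test', 'validate', 'verify', 'qa']
+};
+
+function matchWorker(description) {
+  const tokens = description.toLowerCase().split(/\W+/);
+  let best = null;
+  let bestScore = 0;
+  for (const [worker, words] of Object.entries(KEYWORDS)) {
+    const score = words.filter((w) => tokens.includes(w)).length;
+    if (score > bestScore) {
+      best = worker;
+      bestScore = score;
+    }
+  }
+  return best;
+}
+```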
+
+### Auto-Scaling
+
+Configure dynamic worker pool scaling:
+
+```javascript
+const config = {
+  autoScale: true,
+  maxWorkers: 12,
+  scaleUpThreshold: 2,    // Scale up when pending tasks per idle worker exceed this
+  scaleDownThreshold: 2   // Scale down when idle workers exceed pending tasks by this
+};
+```
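+
+The thresholds imply a scale-up check along these lines (assumed logic, shown for clarity):
+
+```javascript
+// Scale up while under maxWorkers and pending work outpaces idle capacity
+function shouldScaleUp(pendingTasks, idleWorkers, activeWorkers, config) {
+  if (!config.autoScale || activeWorkers >= config.maxWorkers) return false;
+  return pendingTasks > config.scaleUpThreshold * Math.max(idleWorkers, 1);
+}
+```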
+
+## Custom Worker Types
+
+Define specialized workers in `.claude/agents/`:
+
+```yaml
+name: security-auditor
+type: specialist
+capabilities:
+  - vulnerability-scanning
+  - security-review
+  - penetration-testing
+  - compliance-checking
+priority: high
+```
+
+## Multi-Hive Coordination
+
+Run multiple hive minds simultaneously. They share collective memory for cross-hive awareness.
+
+```bash
+# Frontend hive
+npx claude-flow hive-mind spawn "Build UI" --name frontend-hive
+
+# Backend hive
+npx claude-flow hive-mind spawn "Build API" --name backend-hive
+```
+
+Each hive maintains its own queen and worker pool, while the shared collective memory layer
+enables coordination on interfaces, contracts, and shared data models.
diff --git a/.claude/skills/hive-mind-advanced/references/troubleshooting.md b/.claude/skills/hive-mind-advanced/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..9dde134
--- /dev/null
@@ -0,0 +1,98 @@
+# Troubleshooting
+
+Common issues and solutions for the Hive Mind system.
+
+## Memory Issues
+
+### High Memory Usage
+
+```bash
+# Run garbage collection
+npx claude-flow hive-mind memory --gc
+
+# Optimize database (VACUUM + ANALYZE)
+npx claude-flow hive-mind memory --optimize
+
+# Export all memory and clear
+npx claude-flow hive-mind memory --export --clear
+```
+
+### Low Cache Hit Rate
+
+Increase cache parameters in the memory configuration:
+
+```javascript
+{
+  "cacheSize": 2000,
+  "cacheMemoryMB": 100
+}
+```
+
+## Performance Issues
+
+### Slow Task Assignment
+
+The system caches best worker matches for 5 minutes automatically. No manual
+configuration is needed. If slowness persists, check worker count and queue depth.
+
+### High Queue Utilization
+
+Increase async queue concurrency:
+
+```javascript
+{
+  "asyncQueueConcurrency": 20  // Default: min(maxWorkers * 2, 20)
+}
+```
+
+Or increase `maxWorkers` to add processing capacity.
+
+## Consensus Failures
+
+### No Consensus Reached (Byzantine)
+
+Byzantine requires 2/3 supermajority. If the swarm cannot reach agreement:
+
+1. Switch to weighted consensus for more decisive results
+2. Or use simple majority for quick resolution
+
+```bash
+npx claude-flow hive-mind spawn "..." --consensus weighted
+npx claude-flow hive-mind spawn "..." --consensus majority
+```
+
+### Ties in Majority Voting
+
+The queen's preference breaks ties. If no queen is active, the first-submitted vote wins.
+
+## Session Issues
+
+### Session Not Resuming
+
+Verify the session ID exists and is in a paused state:
+
+```bash
+npx claude-flow hive-mind sessions
+npx claude-flow hive-mind resume <session-id>
+```
+
+### Lost Checkpoint Data
+
+Export sessions frequently for backup:
+
+```bash
+npx claude-flow hive-mind export <session-id> --output backup.json
+```
+
+## Worker Issues
+
+### Workers Not Spawning
+
+Check `maxWorkers` in the hive configuration. The default is 8; increase if the
+task decomposition produces more sub-tasks than available slots.
+
+### Incorrect Worker Assignment
+
+The keyword matching system maps task descriptions to worker types. Use explicit
+keywords in the task description to improve assignment accuracy (e.g., include
+"test" for tester workers, "review" for reviewer workers).
diff --git a/.claude/skills/hooks-automation/assets/settings-advanced.json b/.claude/skills/hooks-automation/assets/settings-advanced.json
new file mode 100644 (file)
index 0000000..7cac823
--- /dev/null
@@ -0,0 +1,94 @@
+{
+  "hooks": {
+    "enabled": true,
+    "debug": false,
+    "timeout": 5000,
+
+    "PreToolUse": [
+      {
+        "matcher": "^(Write|Edit|MultiEdit)$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook pre-edit --file '${tool.params.file_path}' --auto-assign-agent --validate-syntax",
+            "timeout": 3000,
+            "continueOnError": true
+          }
+        ]
+      },
+      {
+        "matcher": "^Task$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook pre-task --description '${tool.params.task}' --auto-spawn-agents --load-memory",
+            "async": true
+          }
+        ]
+      },
+      {
+        "matcher": "^Grep$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook pre-search --query '${tool.params.pattern}' --check-cache"
+          }
+        ]
+      }
+    ],
+
+    "PostToolUse": [
+      {
+        "matcher": "^(Write|Edit|MultiEdit)$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook post-edit --file '${tool.params.file_path}' --memory-key 'edits/${tool.params.file_path}' --auto-format --train-patterns",
+            "async": true
+          }
+        ]
+      },
+      {
+        "matcher": "^Task$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook post-task --task-id '${result.task_id}' --analyze-performance --store-decisions --export-learnings",
+            "async": true
+          }
+        ]
+      },
+      {
+        "matcher": "^Grep$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook post-search --query '${tool.params.pattern}' --cache-results --train-patterns"
+          }
+        ]
+      }
+    ],
+
+    "SessionStart": [
+      {
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook session-start --session-id '${session.id}' --load-context"
+          }
+        ]
+      }
+    ],
+
+    "SessionEnd": [
+      {
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook session-end --session-id '${session.id}' --export-metrics --generate-summary --cleanup-temp"
+          }
+        ]
+      }
+    ]
+  }
+}
diff --git a/.claude/skills/hooks-automation/assets/settings-auto-testing.json b/.claude/skills/hooks-automation/assets/settings-auto-testing.json
new file mode 100644 (file)
index 0000000..b5218cd
--- /dev/null
@@ -0,0 +1,16 @@
+{
+  "hooks": {
+    "PostToolUse": [
+      {
+        "matcher": "^Write$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "f='${tool.params.file_path}'; t=\"${f%.js}.test.js\"; test -f \"$t\" && npm test \"$t\"",
+            "continueOnError": true
+          }
+        ]
+      }
+    ]
+  }
+}
diff --git a/.claude/skills/hooks-automation/assets/settings-basic.json b/.claude/skills/hooks-automation/assets/settings-basic.json
new file mode 100644 (file)
index 0000000..a40148b
--- /dev/null
@@ -0,0 +1,36 @@
+{
+  "hooks": {
+    "PreToolUse": [
+      {
+        "matcher": "^(Write|Edit|MultiEdit)$",
+        "hooks": [{
+          "type": "command",
+          "command": "npx claude-flow hook pre-edit --file '${tool.params.file_path}' --memory-key 'swarm/editor/current'"
+        }]
+      },
+      {
+        "matcher": "^Bash$",
+        "hooks": [{
+          "type": "command",
+          "command": "npx claude-flow hook pre-bash --command '${tool.params.command}'"
+        }]
+      }
+    ],
+    "PostToolUse": [
+      {
+        "matcher": "^(Write|Edit|MultiEdit)$",
+        "hooks": [{
+          "type": "command",
+          "command": "npx claude-flow hook post-edit --file '${tool.params.file_path}' --memory-key 'swarm/editor/complete' --auto-format --train-patterns"
+        }]
+      },
+      {
+        "matcher": "^Bash$",
+        "hooks": [{
+          "type": "command",
+          "command": "npx claude-flow hook post-bash --command '${tool.params.command}' --update-metrics"
+        }]
+      }
+    ]
+  }
+}
diff --git a/.claude/skills/hooks-automation/assets/settings-protected.json b/.claude/skills/hooks-automation/assets/settings-protected.json
new file mode 100644 (file)
index 0000000..374f486
--- /dev/null
@@ -0,0 +1,15 @@
+{
+  "hooks": {
+    "PreToolUse": [
+      {
+        "matcher": "^(Write|Edit|MultiEdit)$",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "npx claude-flow hook check-protected --file '${tool.params.file_path}'"
+          }
+        ]
+      }
+    ]
+  }
+}
diff --git a/.claude/skills/hooks-automation/references/custom-hooks.md b/.claude/skills/hooks-automation/references/custom-hooks.md
new file mode 100644 (file)
index 0000000..2a33f9a
--- /dev/null
@@ -0,0 +1,87 @@
+# Custom Hook Creation
+
+Create custom hooks for project-specific workflows. Custom hooks follow a
+standard template and integrate with the settings.json registration system.
+
+---
+
+## Custom Hook Template
+
+```javascript
+// .claude/hooks/custom-quality-check.js
+
+module.exports = {
+  name: 'custom-quality-check',
+  type: 'pre',
+  matcher: /\.(ts|js)$/,
+
+  async execute(context) {
+    const { file, content } = context;
+
+    // Custom validation logic
+    const complexity = await analyzeComplexity(content);
+    const securityIssues = await scanSecurity(content);
+
+    // Store in memory
+    await storeInMemory({
+      key: `quality/${file}`,
+      value: { complexity, securityIssues }
+    });
+
+    // Return decision
+    if (complexity > 15 || securityIssues.length > 0) {
+      return {
+        continue: false,
+        reason: 'Quality checks failed',
+        warnings: [
+          `Complexity: ${complexity} (max: 15)`,
+          `Security issues: ${securityIssues.length}`
+        ]
+      };
+    }
+
+    return {
+      continue: true,
+      reason: 'Quality checks passed',
+      metadata: { complexity, securityIssues: 0 }
+    };
+  }
+};
+```
+
+## Register Custom Hook
+
+Add the custom hook to `.claude/settings.json`:
+
+```json
+{
+  "hooks": {
+    "PreToolUse": [
+      {
+        "matcher": "^(Write|Edit)$",
+        "hooks": [
+          {
+            "type": "script",
+            "script": ".claude/hooks/custom-quality-check.js"
+          }
+        ]
+      }
+    ]
+  }
+}
+```
+
+## Hook Interface
+
+Every custom hook module must export an object with these fields:
+
+| Field | Type | Required | Description |
+|-------|------|----------|-------------|
+| `name` | string | Yes | Unique hook identifier |
+| `type` | `"pre"` or `"post"` | Yes | Execution phase |
+| `matcher` | RegExp | Yes | File pattern to match |
+| `execute` | async function | Yes | Hook logic, receives `context` |
+
+The `execute` function receives a context object and must return a response
+object. Read `references/mcp-integration.md` (Hook Response Format) for the
+response schema.
diff --git a/.claude/skills/hooks-automation/references/git-hooks.md b/.claude/skills/hooks-automation/references/git-hooks.md
new file mode 100644 (file)
index 0000000..d639bf5
--- /dev/null
@@ -0,0 +1,77 @@
+# Git Integration
+
+Hooks integrate with Git operations for quality control. Add scripts to
+`.git/hooks/` or use husky for automatic execution.
+
+---
+
+## Pre-Commit Hook
+
+```bash
+#!/bin/bash
+# .git/hooks/pre-commit
+# Run quality checks before commit
+
+# Iterate staged files line by line so paths with spaces survive
+git diff --cached --name-only --diff-filter=ACM | while IFS= read -r FILE; do
+  # Run pre-edit hook for validation
+  if ! npx claude-flow hook pre-edit --file "$FILE" --validate-syntax; then
+    echo "Validation failed for $FILE"
+    exit 1
+  fi
+
+  # Run post-edit hook for formatting
+  npx claude-flow hook post-edit --file "$FILE" --auto-format
+done || exit 1
+
+# Run tests
+npm test
+
+exit $?
+```
+
+## Post-Commit Hook
+
+```bash
+#!/bin/bash
+# .git/hooks/post-commit
+# Track commit metrics
+
+COMMIT_HASH=$(git rev-parse HEAD)
+COMMIT_MSG=$(git log -1 --pretty=%B)
+
+npx claude-flow hook notify \
+  --message "Commit $COMMIT_HASH completed: $COMMIT_MSG" \
+  --level info \
+  --swarm-status
+```
+
+## Pre-Push Hook
+
+```bash
+#!/bin/bash
+# .git/hooks/pre-push
+# Quality gate before push
+
+# Run full test suite; abort the push on failure
+npm run test:all || exit 1
+
+# Run quality checks
+npx claude-flow hook session-end \
+  --generate-report \
+  --export-metrics
+
+# Verify quality thresholds
+TRUTH_SCORE=$(npx claude-flow metrics score --format json | jq -r '.truth_score')
+
+if (( $(echo "$TRUTH_SCORE < 0.95" | bc -l) )); then
+  echo "Truth score below threshold: $TRUTH_SCORE < 0.95"
+  exit 1
+fi
+
+exit 0
+```
diff --git a/.claude/skills/hooks-automation/references/hook-cli-reference.md b/.claude/skills/hooks-automation/references/hook-cli-reference.md
new file mode 100644 (file)
index 0000000..74aab1c
--- /dev/null
@@ -0,0 +1,415 @@
+# Hook CLI Reference
+
+Complete CLI reference for all available hooks in the Hooks Automation system.
+
+---
+
+## Pre-Operation Hooks
+
+Hooks that execute BEFORE operations to prepare and validate.
+
+### pre-edit
+
+Validate and assign agents before file modifications.
+
+```bash
+npx claude-flow hook pre-edit [options]
+
+Options:
+  --file, -f <path>         File path to be edited
+  --auto-assign-agent       Automatically assign best agent (default: true)
+  --validate-syntax         Pre-validate syntax before edit
+  --check-conflicts         Check for merge conflicts
+  --backup-file             Create backup before editing
+
+Examples:
+  npx claude-flow hook pre-edit --file "src/auth/login.js"
+  npx claude-flow hook pre-edit -f "config/db.js" --validate-syntax
+  npx claude-flow hook pre-edit -f "production.env" --backup-file --check-conflicts
+```
+
+**Features:**
+- Auto agent assignment based on file type
+- Syntax validation to prevent broken code
+- Conflict detection for concurrent edits
+- Automatic file backups for safety
+
+### pre-bash
+
+Check command safety and resource requirements.
+
+```bash
+npx claude-flow hook pre-bash --command <cmd>
+
+Options:
+  --command, -c <cmd>       Command to validate
+  --check-safety            Verify command safety (default: true)
+  --estimate-resources      Estimate resource usage
+  --require-confirmation    Request user confirmation for risky commands
+
+Examples:
+  npx claude-flow hook pre-bash -c "rm -rf /tmp/cache"
+  npx claude-flow hook pre-bash --command "docker build ." --estimate-resources
+```
+
+**Features:**
+- Command safety validation
+- Resource requirement estimation
+- Destructive command confirmation
+- Permission checks
+
+### pre-task
+
+Auto-spawn agents and prepare for complex tasks.
+
+```bash
+npx claude-flow hook pre-task [options]
+
+Options:
+  --description, -d <text>  Task description for context
+  --auto-spawn-agents       Automatically spawn required agents (default: true)
+  --load-memory             Load relevant memory from previous sessions
+  --optimize-topology       Select optimal swarm topology
+  --estimate-complexity     Analyze task complexity
+
+Examples:
+  npx claude-flow hook pre-task --description "Implement user authentication"
+  npx claude-flow hook pre-task -d "Continue API dev" --load-memory
+  npx claude-flow hook pre-task -d "Refactor codebase" --optimize-topology
+```
+
+**Features:**
+- Automatic agent spawning based on task analysis
+- Memory loading for context continuity
+- Topology optimization for task structure
+- Complexity estimation and time prediction
+
+### pre-search
+
+Prepare and optimize search operations.
+
+```bash
+npx claude-flow hook pre-search --query <query>
+
+Options:
+  --query, -q <text>        Search query
+  --check-cache             Check cache first (default: true)
+  --optimize-query          Optimize search pattern
+
+Examples:
+  npx claude-flow hook pre-search -q "authentication middleware"
+```
+
+**Features:**
+- Cache checking for faster results
+- Query optimization
+- Search pattern improvement
+
+---
+
+## Post-Operation Hooks
+
+Hooks that execute AFTER operations to process and learn.
+
+### post-edit
+
+Auto-format, validate, and update memory.
+
+```bash
+npx claude-flow hook post-edit [options]
+
+Options:
+  --file, -f <path>         File path that was edited
+  --auto-format             Automatically format code (default: true)
+  --memory-key, -m <key>    Store edit context in memory
+  --train-patterns          Train neural patterns from edit
+  --validate-output         Validate edited file
+
+Examples:
+  npx claude-flow hook post-edit --file "src/components/Button.jsx"
+  npx claude-flow hook post-edit -f "api/auth.js" --memory-key "auth/login"
+  npx claude-flow hook post-edit -f "utils/helpers.ts" --train-patterns
+```
+
+**Features:**
+- Language-specific auto-formatting (Prettier, Black, gofmt)
+- Memory storage for edit context and decisions
+- Neural pattern training for continuous improvement
+- Output validation with linting
+
+### post-bash
+
+Log execution and update metrics.
+
+```bash
+npx claude-flow hook post-bash --command <cmd>
+
+Options:
+  --command, -c <cmd>       Command that was executed
+  --log-output              Log command output (default: true)
+  --update-metrics          Update performance metrics
+  --store-result            Store result in memory
+
+Examples:
+  npx claude-flow hook post-bash -c "npm test" --update-metrics
+```
+
+**Features:**
+- Command execution logging
+- Performance metric tracking
+- Result storage for analysis
+- Error pattern detection
+
+### post-task
+
+Performance analysis and decision storage.
+
+```bash
+npx claude-flow hook post-task [options]
+
+Options:
+  --task-id, -t <id>        Task identifier for tracking
+  --analyze-performance     Generate performance metrics (default: true)
+  --store-decisions         Save task decisions to memory
+  --export-learnings        Export neural pattern learnings
+  --generate-report         Create task completion report
+
+Examples:
+  npx claude-flow hook post-task --task-id "auth-implementation"
+  npx claude-flow hook post-task -t "api-refactor" --analyze-performance
+  npx claude-flow hook post-task -t "bug-fix-123" --store-decisions
+```
+
+**Features:**
+- Execution time and token usage measurement
+- Decision and implementation choice recording
+- Neural learning pattern export
+- Completion report generation
+
+### post-search
+
+Cache results and improve patterns.
+
+```bash
+npx claude-flow hook post-search --query <query> --results <path>
+
+Options:
+  --query, -q <text>        Original search query
+  --results, -r <path>      Results file path
+  --cache-results           Cache for future use (default: true)
+  --train-patterns          Improve search patterns
+
+Examples:
+  npx claude-flow hook post-search -q "auth" -r "results.json" --train-patterns
+```
+
+**Features:**
+- Result caching for faster subsequent searches
+- Search pattern improvement
+- Relevance scoring
+
+---
+
+## MCP Integration Hooks
+
+Hooks that coordinate with MCP swarm tools.
+
+### mcp-initialized
+
+Persist swarm configuration.
+
+```bash
+npx claude-flow hook mcp-initialized --swarm-id <id>
+```
+
+**Features:**
+- Save swarm topology and configuration
+- Store agent roster in memory
+- Initialize coordination namespace
+
+### agent-spawned
+
+Update agent roster and memory.
+
+```bash
+npx claude-flow hook agent-spawned --agent-id <id> --type <type>
+```
+
+**Features:**
+- Register agent in coordination memory
+- Update agent roster
+- Initialize agent-specific memory namespace
+
+### task-orchestrated
+
+Monitor task progress.
+
+```bash
+npx claude-flow hook task-orchestrated --task-id <id>
+```
+
+**Features:**
+- Track task progress through memory
+- Monitor agent assignments
+- Update coordination state
+
+### neural-trained
+
+Save pattern improvements.
+
+```bash
+npx claude-flow hook neural-trained --pattern <name>
+```
+
+**Features:**
+- Export trained neural patterns
+- Update coordination models
+- Share learning across agents
+
+---
+
+## Memory Coordination Hooks
+
+### memory-write
+
+Triggered when agents write to coordination memory.
+
+**Features:**
+- Validate memory key format
+- Update cross-agent indexes
+- Trigger dependent hooks
+- Notify subscribed agents
+
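The key-format validation mentioned above can be sketched as a standalone check. This is illustrative, not part of the claude-flow API; the slash-delimited lowercase convention is inferred from the keys used throughout this reference (e.g. `swarm/backend/auth-api`, `swarm/hooks/pre-edit/status`).

```python
import re

# Hypothetical validator: coordination keys are slash-delimited,
# lowercase paths such as "swarm/hooks/pre-edit/status".
KEY_PATTERN = re.compile(r'^[a-z0-9_-]+(?:/[a-z0-9_.-]+)+$')

def is_valid_memory_key(key: str) -> bool:
    """Return True if key looks like namespace/segment/... in lowercase."""
    return bool(KEY_PATTERN.match(key))
```
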
+### memory-read
+
+Triggered when agents read from coordination memory.
+
+**Features:**
+- Log access patterns
+- Update popularity metrics
+- Preload related data
+- Track usage statistics
+
+### memory-sync
+
+Synchronize memory across swarm agents.
+
+```bash
+npx claude-flow hook memory-sync --namespace <ns>
+```
+
+**Features:**
+- Sync memory state across agents
+- Resolve conflicts
+- Propagate updates
+- Maintain consistency
+
+---
+
+## Session Hooks
+
+### session-start
+
+Initialize new session.
+
+```bash
+npx claude-flow hook session-start --session-id <id>
+
+Options:
+  --session-id, -s <id>     Session identifier
+  --load-context            Load context from previous session
+  --init-agents             Initialize required agents
+```
+
+**Features:**
+- Create session directory
+- Initialize metrics tracking
+- Load previous context
+- Set up coordination namespace
+
+### session-restore
+
+Load previous session state.
+
+```bash
+npx claude-flow hook session-restore --session-id <id>
+
+Options:
+  --session-id, -s <id>     Session to restore
+  --restore-memory          Restore memory state (default: true)
+  --restore-agents          Restore agent configurations
+
+Examples:
+  npx claude-flow hook session-restore --session-id "swarm-20241019"
+  npx claude-flow hook session-restore -s "feature-auth" --restore-memory
+```
+
+**Features:**
+- Load previous session context
+- Restore memory state and decisions
+- Reconfigure agents to previous state
+- Resume in-progress tasks
+
+### session-end
+
+Cleanup and persist session state.
+
+```bash
+npx claude-flow hook session-end [options]
+
+Options:
+  --session-id, -s <id>     Session identifier to end
+  --save-state              Save current session state (default: true)
+  --export-metrics          Export session metrics
+  --generate-summary        Create session summary
+  --cleanup-temp            Remove temporary files
+
+Examples:
+  npx claude-flow hook session-end --session-id "dev-session-2024"
+  npx claude-flow hook session-end -s "feature-auth" --export-metrics --generate-summary
+  npx claude-flow hook session-end -s "quick-fix" --cleanup-temp
+```
+
+**Features:**
+- Save current context and progress
+- Export session metrics (duration, commands, tokens, files)
+- Generate work summary with decisions and next steps
+- Cleanup temporary files and optimize storage
+
+### notify
+
+Custom notifications with swarm status.
+
+```bash
+npx claude-flow hook notify --message <msg>
+
+Options:
+  --message, -m <text>      Notification message
+  --level <level>           Notification level (info|warning|error)
+  --swarm-status            Include swarm status (default: true)
+  --broadcast               Send to all agents
+
+Examples:
+  npx claude-flow hook notify -m "Task completed" --level info
+  npx claude-flow hook notify -m "Critical error" --level error --broadcast
+```
+
+**Features:**
+- Send notifications to coordination system
+- Include swarm status and metrics
+- Broadcast to all agents
+- Log important events
+
+---
+
+## Utility Commands
+
+```bash
+npx claude-flow init --hooks              # Initialize hooks system
+npx claude-flow hook --list               # List available hooks
+npx claude-flow hook --test <hook>        # Test specific hook
+npx claude-flow hook validate-config      # Validate configuration
+npx claude-flow memory usage              # Manage memory
+npx claude-flow agent spawn               # Spawn agents
+npx claude-flow swarm init                # Initialize swarm
+```
diff --git a/.claude/skills/hooks-automation/references/mcp-integration.md b/.claude/skills/hooks-automation/references/mcp-integration.md
new file mode 100644 (file)
index 0000000..8e973ae
--- /dev/null
@@ -0,0 +1,179 @@
+# MCP Tool Integration
+
+Hooks automatically integrate with MCP tools for coordination. This reference
+documents the internal MCP calls and the three-phase memory coordination protocol.
+
+---
+
+## Pre-Task Hook with Agent Spawning
+
+```javascript
+// Hook command:
+// npx claude-flow hook pre-task --description "Build REST API"
+
+// Internally calls MCP tools:
+mcp__claude-flow__agent_spawn {
+  type: "backend-dev",
+  capabilities: ["api", "database", "testing"]
+}
+
+mcp__claude-flow__memory_usage {
+  action: "store",
+  key: "swarm/task/api-build/context",
+  namespace: "coordination",
+  value: JSON.stringify({
+    description: "Build REST API",
+    agents: ["backend-dev"],
+    started: Date.now()
+  })
+}
+```
+
+## Post-Edit Hook with Memory Storage
+
+```javascript
+// Hook command:
+// npx claude-flow hook post-edit --file "api/auth.js"
+
+// Internally calls MCP tools:
+mcp__claude-flow__memory_usage {
+  action: "store",
+  key: "swarm/edits/api/auth.js",
+  namespace: "coordination",
+  value: JSON.stringify({
+    file: "api/auth.js",
+    timestamp: Date.now(),
+    changes: { added: 45, removed: 12 },
+    formatted: true,
+    linted: true
+  })
+}
+
+mcp__claude-flow__neural_train {
+  pattern_type: "coordination",
+  training_data: { /* edit patterns */ }
+}
+```
+
+## Session End Hook with State Persistence
+
+```javascript
+// Hook command:
+// npx claude-flow hook session-end --session-id "dev-2024"
+
+// Internally calls MCP tools:
+mcp__claude-flow__memory_persist {
+  sessionId: "dev-2024"
+}
+
+mcp__claude-flow__swarm_status {
+  swarmId: "current"
+}
+
+// Generates metrics and summary
+```
+
+---
+
+## Three-Phase Memory Coordination Protocol
+
+All hooks follow a standardized memory coordination pattern with three phases.
+
+### Phase 1: STATUS -- Hook starts
+
+```javascript
+mcp__claude-flow__memory_usage {
+  action: "store",
+  key: "swarm/hooks/pre-edit/status",
+  namespace: "coordination",
+  value: JSON.stringify({
+    status: "running",
+    hook: "pre-edit",
+    file: "src/auth.js",
+    timestamp: Date.now()
+  })
+}
+```
+
+### Phase 2: PROGRESS -- Hook processes
+
+```javascript
+mcp__claude-flow__memory_usage {
+  action: "store",
+  key: "swarm/hooks/pre-edit/progress",
+  namespace: "coordination",
+  value: JSON.stringify({
+    progress: 50,
+    action: "validating syntax",
+    file: "src/auth.js"
+  })
+}
+```
+
+### Phase 3: COMPLETE -- Hook finishes
+
+```javascript
+mcp__claude-flow__memory_usage {
+  action: "store",
+  key: "swarm/hooks/pre-edit/complete",
+  namespace: "coordination",
+  value: JSON.stringify({
+    status: "complete",
+    result: "success",
+    agent_assigned: "backend-dev",
+    syntax_valid: true,
+    backup_created: true
+  })
+}
+```
+
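Since every hook follows the same three-phase scheme, the memory keys differ only in the hook name and the phase. A tiny helper (a sketch, not a claude-flow API) makes the convention explicit:

```python
PHASES = ("status", "progress", "complete")

def hook_phase_key(hook: str, phase: str) -> str:
    """Build the coordination-memory key for one phase of a hook run."""
    if phase not in PHASES:
        raise ValueError(f"unknown phase: {phase}")
    return f"swarm/hooks/{hook}/{phase}"
```
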
+---
+
+## Hook Response Format
+
+Hooks return JSON responses to control operation flow.
+
+### Continue Response
+
+```json
+{
+  "continue": true,
+  "reason": "All validations passed",
+  "metadata": {
+    "agent_assigned": "backend-dev",
+    "syntax_valid": true,
+    "file": "src/auth.js"
+  }
+}
+```
+
+### Block Response
+
+```json
+{
+  "continue": false,
+  "reason": "Protected file - manual review required",
+  "metadata": {
+    "file": ".env.production",
+    "protection_level": "high",
+    "requires": "manual_approval"
+  }
+}
+```
+
+### Warning Response
+
+```json
+{
+  "continue": true,
+  "reason": "Syntax valid but complexity high",
+  "warnings": [
+    "Cyclomatic complexity: 15 (threshold: 10)",
+    "Consider refactoring for better maintainability"
+  ],
+  "metadata": {
+    "complexity": 15,
+    "threshold": 10
+  }
+}
+```
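
A caller that receives one of these responses only needs the `continue` flag and any `warnings`; a minimal interpreter might look like this (a sketch — the real dispatch logic inside claude-flow is not documented here):

```python
import json

def interpret_hook_response(raw: str) -> tuple:
    """Return (proceed, warnings) from a hook's JSON response."""
    resp = json.loads(raw)
    proceed = bool(resp.get("continue", True))   # missing flag => proceed
    warnings = list(resp.get("warnings", []))
    return proceed, warnings
```
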
diff --git a/.claude/skills/hooks-automation/references/troubleshooting.md b/.claude/skills/hooks-automation/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..67dc7d5
--- /dev/null
@@ -0,0 +1,113 @@
+# Troubleshooting, Performance, and Best Practices
+
+---
+
+## Performance Tips
+
+1. **Keep Hooks Lightweight** -- Target < 100ms execution time
+2. **Use Async for Heavy Operations** -- Set `"async": true` to avoid blocking the main flow
+3. **Cache Aggressively** -- Store frequently accessed data with `--cache-results`
+4. **Batch Related Operations** -- Combine multiple actions in a single hook
+5. **Use Memory Wisely** -- Set appropriate TTLs for memory keys
+6. **Monitor Hook Performance** -- Track execution times with `--update-metrics`
+7. **Parallelize When Possible** -- Run independent hooks concurrently
+
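Tip 7 in miniature: independent hooks can be dispatched from a thread pool. The `run_hook` body below is a placeholder — in practice it would shell out to `npx claude-flow hook <name>` via `subprocess.run`:

```python
from concurrent.futures import ThreadPoolExecutor

def run_hook(name: str) -> str:
    # Placeholder for: subprocess.run(["npx", "claude-flow", "hook", name, ...])
    return f"{name}: ok"

def run_hooks_concurrently(names: list) -> list:
    """Run independent hooks in parallel, preserving input order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_hook, names))
```
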
+---
+
+## Debugging Hooks
+
+Enable debug mode for troubleshooting:
+
+```bash
+# Enable debug output
+export CLAUDE_FLOW_DEBUG=true
+
+# Test specific hook with verbose output
+npx claude-flow hook pre-edit --file "test.js" --debug
+
+# Check hook execution logs
+cat .claude-flow/logs/hooks-$(date +%Y-%m-%d).log
+
+# Validate configuration
+npx claude-flow hook validate-config
+```
+
+---
+
+## Common Issues
+
+### Hooks Not Executing
+
+- Verify `.claude/settings.json` syntax (valid JSON)
+- Check hook matcher regex patterns
+- Enable debug mode (`CLAUDE_FLOW_DEBUG=true`)
+- Review permission settings on hook scripts
+- Ensure `claude-flow` CLI is in PATH
+
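When checking `.claude/settings.json`, it helps to know roughly what a hook entry looks like. The exact schema is not reproduced in this reference, so treat the sketch below as an assumption — the field names `matcher`, `timeout`, `async`, and `continueOnError` come from the sections on this page; the nesting and the `command` field are illustrative:

```json
{
  "hooks": {
    "pre-edit": {
      "matcher": "\\.(js|ts)x?$",
      "command": "npx claude-flow hook pre-edit --file \"$FILE\"",
      "timeout": 5000,
      "async": false,
      "continueOnError": true
    }
  }
}
```

After editing, run `npx claude-flow hook validate-config` to confirm the file still parses.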
+### Hook Timeouts
+
+- Increase `timeout` values in configuration (default: 5000ms)
+- Mark heavy hooks as `"async": true`
+- Optimize hook logic to reduce execution time
+- Check network connectivity for MCP tool calls
+
+### Memory Issues
+
+- Set appropriate TTLs for memory keys
+- Clean up old memory entries periodically
+- Use memory namespaces effectively (`swarm/`, `edits/`, `quality/`)
+- Monitor memory usage with `npx claude-flow memory usage`
+
+### Performance Problems
+
+- Profile hook execution times with debug mode
+- Use caching for repeated operations
+- Batch operations when possible
+- Reduce hook complexity -- split large hooks into smaller ones
+
+---
+
+## Best Practices
+
+1. **Configure Hooks Early** -- Set up during project initialization
+2. **Use Memory Keys Strategically** -- Organize with clear namespaces
+3. **Enable Auto-Formatting** -- Maintain code consistency
+4. **Train Patterns Continuously** -- Learn from successful operations
+5. **Monitor Performance** -- Track hook execution times
+6. **Validate Configuration** -- Test hooks before production use
+7. **Document Custom Hooks** -- Maintain hook documentation
+8. **Set Appropriate Timeouts** -- Prevent hanging operations
+9. **Handle Errors Gracefully** -- Use `continueOnError: true` when appropriate
+10. **Review Metrics Regularly** -- Optimize based on usage patterns
+
+---
+
+## Benefits Summary
+
+- **Automatic Agent Assignment**: Right agent for every file type
+- **Consistent Code Formatting**: Language-specific formatters
+- **Continuous Learning**: Neural patterns improve over time
+- **Cross-Session Memory**: Context persists between sessions
+- **Performance Tracking**: Comprehensive metrics and analytics
+- **Automatic Coordination**: Agents sync via memory
+- **Smart Agent Spawning**: Task-based agent selection
+- **Quality Gates**: Pre-commit validation and verification
+- **Error Prevention**: Syntax validation before edits
+- **Knowledge Sharing**: Decisions stored and shared
+- **Reduced Manual Work**: Automation of repetitive tasks
+- **Better Collaboration**: Seamless multi-agent coordination
+
+---
+
+## Integration with Other Skills
+
+This skill works seamlessly with:
+
+| Skill | Integration Point |
+|-------|-------------------|
+| SPARC Methodology | Hooks enhance SPARC workflows |
+| Pair Programming | Automated quality in pairing sessions |
+| Verification Quality | Truth-score validation in hooks |
+| GitHub Workflows | Git integration for commits/PRs |
+| Performance Analysis | Metrics collection in hooks |
+| Swarm Advanced | Multi-agent coordination via hooks |
diff --git a/.claude/skills/hooks-automation/references/workflow-examples.md b/.claude/skills/hooks-automation/references/workflow-examples.md
new file mode 100644 (file)
index 0000000..5040f69
--- /dev/null
@@ -0,0 +1,202 @@
+# Workflow Examples
+
+End-to-end examples showing hook usage in real development scenarios.
+
+---
+
+## Agent Coordination Workflow
+
+### Agent 1: Backend Developer
+
+```bash
+# STEP 1: Pre-task preparation
+npx claude-flow hook pre-task \
+  --description "Implement user authentication API" \
+  --auto-spawn-agents \
+  --load-memory
+
+# STEP 2: Work begins - pre-edit validation
+npx claude-flow hook pre-edit \
+  --file "api/auth.js" \
+  --auto-assign-agent \
+  --validate-syntax
+
+# STEP 3: Edit file (via Claude Code Edit tool)
+# ... code changes ...
+
+# STEP 4: Post-edit processing
+npx claude-flow hook post-edit \
+  --file "api/auth.js" \
+  --memory-key "swarm/backend/auth-api" \
+  --auto-format \
+  --train-patterns
+
+# STEP 5: Notify coordination system
+npx claude-flow hook notify \
+  --message "Auth API implementation complete" \
+  --swarm-status \
+  --broadcast
+
+# STEP 6: Task completion
+npx claude-flow hook post-task \
+  --task-id "auth-api" \
+  --analyze-performance \
+  --store-decisions \
+  --export-learnings
+```
+
+### Agent 2: Test Engineer (receives notification)
+
+```bash
+# STEP 1: Check memory for API details
+npx claude-flow hook session-restore \
+  --session-id "swarm-current" \
+  --restore-memory
+
+# Memory contains: swarm/backend/auth-api with implementation details
+
+# STEP 2: Generate tests
+npx claude-flow hook pre-task \
+  --description "Write tests for auth API" \
+  --load-memory
+
+# STEP 3: Create test file
+npx claude-flow hook post-edit \
+  --file "api/auth.test.js" \
+  --memory-key "swarm/testing/auth-api-tests" \
+  --train-patterns
+
+# STEP 4: Share test results
+npx claude-flow hook notify \
+  --message "Auth API tests complete - 100% coverage" \
+  --broadcast
+```
+
+---
+
+## Example 1: Full-Stack Development Workflow
+
+```bash
+# Session start - initialize coordination
+npx claude-flow hook session-start --session-id "fullstack-feature"
+
+# Pre-task planning
+npx claude-flow hook pre-task \
+  --description "Build user profile feature - frontend + backend + tests" \
+  --auto-spawn-agents \
+  --optimize-topology
+
+# Backend work
+npx claude-flow hook pre-edit --file "api/profile.js"
+# ... implement backend ...
+npx claude-flow hook post-edit \
+  --file "api/profile.js" \
+  --memory-key "profile/backend" \
+  --train-patterns
+
+# Frontend work (reads backend details from memory)
+npx claude-flow hook pre-edit --file "components/Profile.jsx"
+# ... implement frontend ...
+npx claude-flow hook post-edit \
+  --file "components/Profile.jsx" \
+  --memory-key "profile/frontend" \
+  --train-patterns
+
+# Testing (reads both backend and frontend from memory)
+npx claude-flow hook pre-task \
+  --description "Test profile feature" \
+  --load-memory
+
+# Session end - export everything
+npx claude-flow hook session-end \
+  --session-id "fullstack-feature" \
+  --export-metrics \
+  --generate-summary
+```
+
+---
+
+## Example 2: Debugging with Hooks
+
+```bash
+# Start debugging session
+npx claude-flow hook session-start --session-id "debug-memory-leak"
+
+# Pre-task: analyze issue
+npx claude-flow hook pre-task \
+  --description "Debug memory leak in event handlers" \
+  --load-memory \
+  --estimate-complexity
+
+# Search for event emitters
+npx claude-flow hook pre-search --query "EventEmitter"
+# ... search executes ...
+npx claude-flow hook post-search \
+  --query "EventEmitter" \
+  --cache-results
+
+# Fix the issue
+npx claude-flow hook pre-edit \
+  --file "services/events.js" \
+  --backup-file
+# ... fix code ...
+npx claude-flow hook post-edit \
+  --file "services/events.js" \
+  --memory-key "debug/memory-leak-fix" \
+  --validate-output
+
+# Verify fix
+npx claude-flow hook post-task \
+  --task-id "memory-leak-fix" \
+  --analyze-performance \
+  --generate-report
+
+# End session
+npx claude-flow hook session-end \
+  --session-id "debug-memory-leak" \
+  --export-metrics
+```
+
+---
+
+## Example 3: Multi-Agent Refactoring
+
+```bash
+# Initialize swarm for refactoring
+npx claude-flow hook pre-task \
+  --description "Refactor legacy codebase to modern patterns" \
+  --auto-spawn-agents \
+  --optimize-topology
+
+# Agent 1: Code Analyzer
+npx claude-flow hook pre-task --description "Analyze code complexity"
+# ... analysis ...
+npx claude-flow hook post-task \
+  --task-id "analysis" \
+  --store-decisions
+
+# Agent 2: Refactoring (reads analysis from memory)
+npx claude-flow hook session-restore \
+  --session-id "swarm-refactor" \
+  --restore-memory
+
+shopt -s globstar  # enable recursive ** globbing in bash
+for file in src/**/*.js; do
+  npx claude-flow hook pre-edit --file "$file" --backup-file
+  # ... refactor ...
+  npx claude-flow hook post-edit \
+    --file "$file" \
+    --memory-key "refactor/$file" \
+    --auto-format \
+    --train-patterns
+done
+
+# Agent 3: Testing (reads refactored code from memory)
+npx claude-flow hook pre-task \
+  --description "Generate tests for refactored code" \
+  --load-memory
+
+# Broadcast completion
+npx claude-flow hook notify \
+  --message "Refactoring complete - all tests passing" \
+  --broadcast
+```
diff --git a/.claude/skills/jira-comment/references/adf-format.md b/.claude/skills/jira-comment/references/adf-format.md
new file mode 100644 (file)
index 0000000..258e1a9
--- /dev/null
@@ -0,0 +1,210 @@
+# Atlassian Document Format (ADF)
+
+Used for the HTTP fallback (when MCP is unavailable).
+MCP mode accepts plain text/markdown, so ADF is not needed there.
+
+---
+
+## Структура документа
+
+```json
+{
+  "body": {
+    "type": "doc",
+    "version": 1,
+    "content": [...]
+  }
+}
+```
+
+## ADF elements
+
+### Heading
+
+```json
+{
+  "type": "heading",
+  "attrs": {"level": 3},
+  "content": [{"type": "text", "text": "Heading"}]
+}
+```
+
+### Paragraph with formatting
+
+```json
+{
+  "type": "paragraph",
+  "content": [
+    {"type": "text", "text": "Plain text "},
+    {"type": "text", "marks": [{"type": "strong"}], "text": "bold"},
+    {"type": "text", "text": " and "},
+    {"type": "text", "marks": [{"type": "em"}], "text": "italic"}
+  ]
+}
+```
+
+### Bullet list
+
+```json
+{
+  "type": "bulletList",
+  "content": [
+    {
+      "type": "listItem",
+      "content": [
+        {"type": "paragraph", "content": [{"type": "text", "text": "Item 1"}]}
+      ]
+    },
+    {
+      "type": "listItem",
+      "content": [
+        {"type": "paragraph", "content": [{"type": "text", "text": "Item 2"}]}
+      ]
+    }
+  ]
+}
+```
+
+### Code block
+
+```json
+{
+  "type": "codeBlock",
+  "attrs": {"language": "bash"},
+  "content": [{"type": "text", "text": "echo hello"}]
+}
+```
+
+### Emoji
+
+```json
+{
+  "type": "emoji",
+  "attrs": {"shortName": ":rocket:"}
+}
+```
+
+### Horizontal rule
+
+```json
+{"type": "rule"}
+```
+
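Combining the elements above, a complete minimal comment payload (heading, one formatted paragraph, a rule) looks like this; the text values are illustrative:

```json
{
  "body": {
    "type": "doc",
    "version": 1,
    "content": [
      {
        "type": "heading",
        "attrs": {"level": 3},
        "content": [{"type": "text", "text": "Status"}]
      },
      {
        "type": "paragraph",
        "content": [
          {"type": "text", "text": "Build "},
          {"type": "text", "marks": [{"type": "strong"}], "text": "passed"}
        ]
      },
      {"type": "rule"}
    ]
  }
}
```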
+---
+
+## Converting Markdown to ADF
+
+```python
+def markdown_to_adf(markdown: str) -> dict:
+    """Convert markdown text to an Atlassian Document Format (ADF) dict."""
+    content = []
+    lines = markdown.split('\n')
+    i = 0
+
+    while i < len(lines):
+        line = lines[i]
+
+        # Heading
+        if line.startswith('# '):
+            content.append({
+                "type": "heading",
+                "attrs": {"level": 1},
+                "content": [{"type": "text", "text": line[2:]}]
+            })
+        elif line.startswith('## '):
+            content.append({
+                "type": "heading",
+                "attrs": {"level": 2},
+                "content": [{"type": "text", "text": line[3:]}]
+            })
+        elif line.startswith('### '):
+            content.append({
+                "type": "heading",
+                "attrs": {"level": 3},
+                "content": [{"type": "text", "text": line[4:]}]
+            })
+
+        # Bullet list
+        elif line.startswith('- ') or line.startswith('* '):
+            items = []
+            while i < len(lines) and (lines[i].startswith('- ') or lines[i].startswith('* ')):
+                item_text = lines[i][2:]
+                items.append({
+                    "type": "listItem",
+                    "content": [{
+                        "type": "paragraph",
+                        "content": parse_inline_formatting(item_text)
+                    }]
+                })
+                i += 1
+            content.append({"type": "bulletList", "content": items})
+            continue
+
+        # Code block
+        elif line.startswith('```'):
+            lang = line[3:].strip() or None
+            code_lines = []
+            i += 1
+            while i < len(lines) and not lines[i].startswith('```'):
+                code_lines.append(lines[i])
+                i += 1
+            block = {
+                "type": "codeBlock",
+                "content": [{"type": "text", "text": '\n'.join(code_lines)}]
+            }
+            if lang:
+                block["attrs"] = {"language": lang}
+            content.append(block)
+
+        # Horizontal rule
+        elif line == '---':
+            content.append({"type": "rule"})
+
+        # Paragraph
+        elif line.strip():
+            content.append({
+                "type": "paragraph",
+                "content": parse_inline_formatting(line)
+            })
+
+        i += 1
+
+    return {"type": "doc", "version": 1, "content": content}
+
+
+def parse_inline_formatting(text: str) -> list:
+    """Parse inline formatting (bold, italic, code) and emoji shortcuts."""
+    import re
+
+    emoji_map = {
+        "🚀": ":rocket:",
+        "📝": ":memo:",
+        "📋": ":clipboard:",
+        "🗣️": ":speaking_head:",
+        "📐": ":triangular_ruler:",
+        "📊": ":bar_chart:",
+        "✅": ":white_check_mark:",
+        "📎": ":paperclip:"
+    }
+
+    # Split on bold/italic/code markers and known emoji in one pass;
+    # the capturing group keeps the matched delimiters in the result.
+    pattern = (r'(\*\*.+?\*\*|\*[^*]+\*|`[^`]+`|'
+               + '|'.join(map(re.escape, emoji_map)) + r')')
+
+    content = []
+    for part in re.split(pattern, text):
+        if not part:
+            continue
+        if part in emoji_map:
+            content.append({"type": "emoji", "attrs": {"shortName": emoji_map[part]}})
+        elif part.startswith('**') and part.endswith('**'):
+            content.append({"type": "text", "marks": [{"type": "strong"}], "text": part[2:-2]})
+        elif part.startswith('`') and part.endswith('`'):
+            content.append({"type": "text", "marks": [{"type": "code"}], "text": part[1:-1]})
+        elif part.startswith('*') and part.endswith('*'):
+            content.append({"type": "text", "marks": [{"type": "em"}], "text": part[1:-1]})
+        else:
+            content.append({"type": "text", "text": part})
+
+    return content
+```
diff --git a/.claude/skills/jira-comment/references/api-reference.md b/.claude/skills/jira-comment/references/api-reference.md
new file mode 100644 (file)
index 0000000..66d8906
--- /dev/null
@@ -0,0 +1,83 @@
+# Jira API Reference for Comments
+
+Reference for the API calls used for comments, attachments, and sub-tasks.
+
+---
+
+## 1. Adding a comment
+
+### MCP mode (primary)
+
+```
+atlassian_jira_add_comment(
+    issue_key="PROJ-123",
+    body="Comment text"
+)
+```
+
+**Important:** Pass plain text with markdown formatting, NOT an ADF object.
+Atlassian MCP handles the formatting itself.
+
+### HTTP fallback (REST API v3)
+
+Used only when Atlassian MCP is unavailable.
+Requires the env variables `ATLASSIAN_HOST`, `ATLASSIAN_EMAIL`, `ATLASSIAN_TOKEN`.
+
+```bash
+curl -s -u "${ATLASSIAN_EMAIL}:${ATLASSIAN_TOKEN}" \
+  -X POST \
+  -H "Content-Type: application/json" \
+  -d '{
+    "body": {ADF_OBJECT}
+  }' \
+  "https://${ATLASSIAN_HOST}/rest/api/3/issue/${issue_key}/comment"
+```
+
+HTTP mode requires the body in ADF format.
+See `references/adf-format.md` for the ADF structure.
+
+---
+
+## 2. Attaching artifacts
+
+**Important:** MCP does NOT support file attachments. Always use HTTP.
+
+```bash
+curl -s -u "${ATLASSIAN_EMAIL}:${ATLASSIAN_TOKEN}" \
+  -X POST \
+  -H "X-Atlassian-Token: no-check" \
+  -F "file=@docs/jira/PROJ-123/prd.md" \
+  "https://${ATLASSIAN_HOST}/rest/api/3/issue/PROJ-123/attachments"
+```
+
+Success: HTTP 200 with JSON metadata for the uploaded file.
+Failure: HTTP 4xx/5xx with a description of the problem.
+
+---
+
+## 3. Creating sub-tasks
+
+```python
+import os
+import requests
+
+# markdown_to_adf is defined in references/adf-format.md
+
+def create_subtask(parent_key: str, summary: str, description: str):
+    """Create a sub-task and return its key (or None on failure)."""
+    host = os.environ["ATLASSIAN_HOST"]
+    email = os.environ["ATLASSIAN_EMAIL"]
+    token = os.environ["ATLASSIAN_TOKEN"]
+
+    payload = {
+        "fields": {
+            "project": {"key": parent_key.split('-')[0]},
+            "parent": {"key": parent_key},
+            "summary": summary,
+            "description": markdown_to_adf(description),
+            "issuetype": {"name": "Sub-task"}
+        }
+    }
+
+    response = requests.post(
+        f"https://{host}/rest/api/3/issue",
+        auth=(email, token),
+        headers={"Content-Type": "application/json"},
+        json=payload
+    )
+
+    if response.status_code == 201:
+        return response.json()["key"]
+    return None
+```
diff --git a/.claude/skills/jira-comment/references/comment-templates.md b/.claude/skills/jira-comment/references/comment-templates.md
new file mode 100644 (file)
index 0000000..63d7eb2
--- /dev/null
@@ -0,0 +1,113 @@
+# Comment Templates by Workflow Stage
+
+Comment templates for each workflow stage.
+Substitute the variables in `{curly braces}` from the stage context.
+
+---
+
+## After fetch
+
+```markdown
+🚀 **Task processing started**
+
+**Workflow:** {workflow_type}
+**Access mode:** {jira_access_mode}
+**Stages:** {stages_list}
+
+Next stage: Interview
+```
+
+## After interview
+
+```markdown
+📝 **Interview completed**
+
+**Questions asked:** {questions_count}
+**Requirements extracted:** {requirements_count}
+
+**Key requirements:**
+{requirements_summary}
+
+📎 Artifact: `interview.md` (attached)
+```
+
+## After PRD
+
+```markdown
+📋 **PRD created**
+
+**Sections:**
+- Executive Summary
+- Problem Statement
+- Goals & Success Metrics
+- Requirements (Functional/Non-functional)
+- User Stories
+- Acceptance Criteria
+
+📎 Artifact: `prd.md` (attached)
+
+Next stage: PRD debate
+```
+
+## After debate
+
+```markdown
+🗣️ **Debate completed** ({document_type})
+
+**Personas:** {personas_list}
+**Rounds:** {total_rounds}
+**Models:** {models_count}
+
+**Improvements:**
+{improvements_list}
+
+📎 Artifact: `debate-log.md` (updated)
+```
+
+## After spec
+
+```markdown
+📐 **Technical specification created**
+
+**Architecture:** {architecture_summary}
+**API endpoints:** {api_count}
+**DB changes:** {db_changes}
+
+📎 Artifact: `spec.md` (attached)
+```
+
+## After plan
+
+```markdown
+📊 **Implementation plan created**
+
+**Subtasks created:** {subtasks_count}
+
+**Subtasks:**
+{subtask_links}
+
+**Dependencies:**
+{dependency_graph}
+
+📎 Artifact: `plan.md` (attached)
+```
+
+## After implement (final)
+
+```markdown
+✅ **Implementation completed**
+
+**Status:** Done
+**Subtasks completed:** {completed}/{total}
+
+**Files created:**
+{files_created}
+
+**Files modified:**
+{files_modified}
+
+**Tests:** {tests_passed} passed, {tests_failed} failed
+
+---
+Workflow completed successfully.
+```
diff --git a/.claude/skills/jira-debate/assets/debate-log-template.md b/.claude/skills/jira-debate/assets/debate-log-template.md
new file mode 100644 (file)
index 0000000..e47220c
--- /dev/null
@@ -0,0 +1,120 @@
+# Debate Log Template
+
+Template for `debate-log.md` saved after each completed debate session.
+
+---
+
+```markdown
+# Debate Log: {issue_key} ({document_type})
+
+## Summary
+- **Document:** {document_type} ({document_path})
+- **Rounds:** 3
+- **Models:** GPT-5.2, DeepSeek v3.2, Grok 4.1 Fast, Gemini 3 Pro, GLM 4.7, MiMo-V2-Flash
+- **Status:** CONSENSUS_REACHED
+- **Duration:** 15 minutes
+
+## Round 1
+
+### GPT-5.2 (Lead Technical Critic)
+**Severity:** MAJOR
+**Issue:**
+> Section "Error Handling" does not describe retry behavior for API timeouts.
+
+**Suggestion:**
+> Add retry policy with exponential backoff (3 attempts, 2s/4s/8s delays).
+
+### DeepSeek v3.2 (Architecture Critic)
+**Severity:** CRITICAL
+**Issue:**
+> The proposed architecture creates a circular dependency between ServiceA and ServiceB.
+
+**Suggestion:**
+> Introduce a mediator interface or event-based communication.
+
+...
+
+## Round 2
+
+### Changes Applied
+1. Added "Retry Policy" section (addressing GPT-5.2 feedback)
+2. Refactored architecture: introduced EventBus (addressing DeepSeek v3.2)
+3. Added pagination for batch operations (addressing Gemini 3 Pro)
+
+### Model Responses
+...
+
+## Round 3 (Final)
+
+### Voting Results
+| Model | Verdict | Notes |
+|-------|---------|-------|
+| GPT-5.2 | APPROVED | |
+| DeepSeek v3.2 | APPROVED | |
+| Grok 4.1 Fast | APPROVED | |
+| Gemini 3 Pro | APPROVED | 1 minor: "Consider adding diagram" |
+| GLM 4.7 | APPROVED | |
+| MiMo-V2-Flash | APPROVED | |
+
+**Consensus reached: 6/6 models approved**
+
+## Improvements Made
+1. Error handling with retry policy
+2. Architecture refactoring (EventBus pattern)
+3. Batch operation pagination
+4. Additional edge case handling
+
+## Remaining Minor Issues (deferred)
+- Add architecture diagram
+- Consider caching strategy for read-heavy operations
+```
+
+---
+
+## Attaching the Log to Jira
+
+After the debate completes, the agent attaches the log and posts a summary comment.
+
+```python
+# Attach the debate log to the issue
+attach_artifact(issue_key, f"docs/jira/{issue_key}/debate-log.md")
+
+# Add a summary comment
+jira_add_comment(issue_key, f"""
+Debate completed ({document_type})
+
+Rounds: {rounds_count}
+Models: {models_count}
+Status: consensus reached
+
+Improvements applied:
+{improvements_list}
+
+Debate log: docs/jira/{issue_key}/debate-log.md
+""")
+```
+
+## Updating State
+
+```json
+{
+  "completed_stages": ["fetch", "interview", "prd", "debate_prd", "spec", "debate_spec"],
+  "current_stage": "plan",
+  "debate": {
+    "prd": {
+      "rounds": 2,
+      "consensus": true
+    },
+    "spec": {
+      "rounds": 3,
+      "models_participated": 6,
+      "consensus": true,
+      "improvements_made": 4
+    }
+  },
+  "artifacts": {
+    "debate_log": "docs/jira/PROJ-123/debate-log.md",
+    "spec_final": "docs/jira/PROJ-123/spec.md"
+  }
+}
+```
diff --git a/.claude/skills/jira-debate/assets/review-prompt-templates.md b/.claude/skills/jira-debate/assets/review-prompt-templates.md
new file mode 100644 (file)
index 0000000..006db7a
--- /dev/null
@@ -0,0 +1,169 @@
+# Review Prompt Templates
+
+Prompt templates for each document type. The agent selects the appropriate template
+based on `document_type` and fills in `{role}`, `{focus_description}`, and `{document_content}`.
+
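+A minimal selection sketch (the `TEMPLATES` mapping and helper name are assumptions; in practice the values would be the full template texts below):
+
+```python
+# Hypothetical sketch: choose a review prompt by document type and fill it in.
+# The short strings stand in for the full templates defined in this file.
+TEMPLATES = {
+    "prd": "You are a {role} reviewing a PRD.\n{focus_description}\n\n{document_content}",
+    "spec": "You are a {role} reviewing a technical specification.\n{focus_description}\n\n{document_content}",
+    "plan": "You are a {role} reviewing an implementation plan.\n{focus_description}\n\n{document_content}",
+}
+
+def build_review_prompt(document_type: str, role: str,
+                        focus_description: str, document_content: str) -> str:
+    template = TEMPLATES[document_type]
+    return template.format(
+        role=role,
+        focus_description=focus_description,
+        document_content=document_content,
+    )
+
+prompt = build_review_prompt("prd", "Lead Technical Critic",
+                             "Completeness of requirements", "...document text...")
+```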
+---
+
+## PRD (`document_type: "prd"`)
+
+```markdown
+## Context
+
+You are a {role} reviewing a Product Requirements Document (PRD).
+
+### Your Focus
+{focus_description}
+
+### PRD-Specific Review Criteria
+- Completeness of functional and non-functional requirements
+- Business value and problem statement clarity
+- Acceptance criteria specificity and testability
+- Scope definition (in-scope vs out-of-scope)
+- Risk assessment and mitigation
+- User stories clarity
+- MoSCoW prioritization correctness
+
+### Severity Levels
+- CRITICAL: Missing critical requirement or fundamentally wrong assumption
+- MAJOR: Incomplete requirement or unclear acceptance criteria
+- MINOR: Minor improvement to clarity or structure
+- SUGGESTION: Optional enhancement
+- APPROVED: PRD is acceptable
+
+### Task
+
+Review the following PRD and provide feedback.
+
+For each issue found:
+1. State the severity level
+2. Quote the problematic section
+3. Explain the issue
+4. Suggest a fix
+
+If the PRD is acceptable, respond with "APPROVED" and brief justification.
+
+---
+
+## PRD
+
+{document_content}
+
+---
+
+Provide your review:
+```
+
+---
+
+## Spec (`document_type: "spec"`)
+
+```markdown
+## Context
+
+You are a {role} reviewing a technical specification.
+
+### Your Focus
+{focus_description}
+
+### Spec-Specific Review Criteria
+- Architecture correctness and patterns
+- API design (RESTfulness, schemas, error handling)
+- Database schema (normalization, indexes, migrations)
+- Security considerations (auth, validation, injection)
+- Performance and scalability
+- Testing strategy completeness
+- Code examples accuracy
+
+### Severity Levels
+- CRITICAL: Blocks implementation, must be fixed
+- MAJOR: Important to fix before implementation
+- MINOR: Recommendation, can be deferred
+- SUGGESTION: Optional improvement
+- APPROVED: Specification is acceptable
+
+### Task
+
+Review the following technical specification and provide feedback.
+
+For each issue found:
+1. State the severity level
+2. Quote the problematic section
+3. Explain the issue
+4. Suggest a fix
+
+If the specification is acceptable, respond with "APPROVED" and brief justification.
+
+---
+
+## Specification
+
+{document_content}
+
+---
+
+## PRD Reference
+
+{prd_summary}
+
+---
+
+Provide your review:
+```
+
+---
+
+## Plan (`document_type: "plan"`)
+
+```markdown
+## Context
+
+You are a {role} reviewing an implementation plan.
+
+### Your Focus
+{focus_description}
+
+### Plan-Specific Review Criteria
+- Task decomposition granularity (2-8 hours per task)
+- Dependency graph correctness (no circular deps)
+- Risk coverage and mitigation strategies
+- Execution order feasibility
+- Acceptance criteria for each subtask
+- Coverage of all spec requirements
+- Testing strategy per phase
+
+### Severity Levels
+- CRITICAL: Circular dependency or missing critical task
+- MAJOR: Wrong dependency order or insufficient decomposition
+- MINOR: Minor improvement to task description
+- SUGGESTION: Optional optimization
+- APPROVED: Plan is acceptable
+
+### Task
+
+Review the following implementation plan and provide feedback.
+
+For each issue found:
+1. State the severity level
+2. Quote the problematic section
+3. Explain the issue
+4. Suggest a fix
+
+If the plan is acceptable, respond with "APPROVED" and brief justification.
+
+---
+
+## Implementation Plan
+
+{document_content}
+
+---
+
+## Specification Reference
+
+{spec_summary}
+
+---
+
+Provide your review:
+```
diff --git a/.claude/skills/jira-debate/references/consensus-logic.md b/.claude/skills/jira-debate/references/consensus-logic.md
new file mode 100644 (file)
index 0000000..b3755be
--- /dev/null
@@ -0,0 +1,80 @@
+# Consensus Logic
+
+Feedback aggregation, consensus checking, and document update procedures.
+
+## Aggregating Results
+
+After each round the agent aggregates feedback by severity level.
+
+```python
+def aggregate_feedback(round_results: dict) -> dict:
+    critical_issues = []
+    major_issues = []
+    minor_issues = []
+    suggestions = []
+
+    for model, result in round_results.items():
+        for issue in result["issues"]:
+            if issue["severity"] == "CRITICAL":
+                critical_issues.append({**issue, "source": model})
+            elif issue["severity"] == "MAJOR":
+                major_issues.append({**issue, "source": model})
+            elif issue["severity"] == "MINOR":
+                minor_issues.append({**issue, "source": model})
+            elif issue["severity"] == "SUGGESTION":
+                suggestions.append({**issue, "source": model})
+
+    return {
+        "critical": critical_issues,
+        "major": major_issues,
+        "minor": minor_issues,
+        "suggestions": suggestions,
+        "approved_count": count_approved(round_results),
+        "total_models": len(round_results)
+    }
+```
+
+## Checking Consensus
+
+Consensus is reached when all three conditions hold:
+- 70%+ models return APPROVED
+- Zero CRITICAL issues remain
+- Maximum 2 MINOR issues per model (configurable)
+
+```python
+def check_consensus(aggregated: dict, config: dict) -> bool:
+    approved_ratio = aggregated["approved_count"] / aggregated["total_models"]
+
+    if approved_ratio < config["approved_threshold"]:
+        return False
+
+    # No CRITICAL issues may remain
+    if len(aggregated["critical"]) > 0:
+        return False
+
+    # Optionally treat remaining MAJOR issues as blocking too
+    if config.get("block_on_major") and len(aggregated["major"]) > 0:
+        return False
+
+    return True
+```
+
+## Updating the Document Between Rounds
+
+When consensus is not reached, apply CRITICAL and MAJOR fixes. MINOR and SUGGESTION
+issues are noted but do not block progress.
+
+```python
+def update_document_with_feedback(document_content: str, feedback: dict, document_type: str) -> str:
+    """
+    Updates the document based on CRITICAL and MAJOR issues.
+    MINOR and SUGGESTION issues are recorded but do not block.
+    """
+
+    changes = []
+
+    for issue in feedback["critical"] + feedback["major"]:
+        # Apply the fix, keeping the updated document
+        # (assumes apply_fix returns the new content plus a change record)
+        document_content, change = apply_fix(document_content, issue)
+        changes.append(change)
+
+    # Append a changelog entry
+    document_content = add_changelog_entry(document_content, changes)
+
+    return document_content
+```
diff --git a/.claude/skills/jira-debate/references/openrouter-api.md b/.claude/skills/jira-debate/references/openrouter-api.md
new file mode 100644 (file)
index 0000000..47badac
--- /dev/null
@@ -0,0 +1,77 @@
+# OpenRouter API Integration
+
+API call configuration, response parsing, and round execution logic.
+
+## Executing a Round
+
+```python
+import asyncio
+
+async def execute_round(round_number: int, document_content: str, active_models: list, document_type: str):
+    # Query all models in parallel
+    tasks = [
+        call_openrouter(
+            model_id=model["id"],
+            prompt=build_prompt(model["role"], model["focus"], document_content, document_type),
+            temperature=model["temperature"]
+        )
+        for model in active_models
+    ]
+    responses = await asyncio.gather(*tasks)
+
+    return {
+        model["display_name"]: parse_review(response)
+        for model, response in zip(active_models, responses)
+    }
+```
+
+## OpenRouter API Call
+
+```python
+import httpx
+
+async def call_openrouter(model_id: str, prompt: str, temperature: float):
+    headers = {
+        "Authorization": f"Bearer {OPENROUTER_API_KEY}",
+        "Content-Type": "application/json",
+        "HTTP-Referer": "https://jira-workflow-plugin",
+        "X-Title": "Jira Workflow Debate"
+    }
+
+    payload = {
+        "model": model_id,
+        "messages": [
+            {"role": "system", "content": "You are a technical specification reviewer."},
+            {"role": "user", "content": prompt}
+        ],
+        "temperature": temperature,
+        "max_tokens": 2000
+    }
+
+    # httpx.post() is synchronous; async code needs an AsyncClient
+    async with httpx.AsyncClient(timeout=60.0) as client:
+        response = await client.post(
+            "https://openrouter.ai/api/v1/chat/completions",
+            headers=headers,
+            json=payload
+        )
+
+    response.raise_for_status()
+    return response.json()["choices"][0]["message"]["content"]
+```
+
+## Parsing Results
+
+```python
+def parse_review(response: str) -> dict:
+    """
+    Parses the model's response and extracts:
+    - severity: CRITICAL/MAJOR/MINOR/SUGGESTION/APPROVED
+    - issues: list of {section, problem, suggestion}
+    """
+
+    if "APPROVED" in response.upper() and "CRITICAL" not in response.upper():
+        return {"severity": "APPROVED", "issues": [], "summary": response}
+
+    issues = []
+    # Parse structured feedback...
+
+    max_severity = determine_max_severity(issues)
+
+    return {
+        "severity": max_severity,
+        "issues": issues,
+        "raw_response": response
+    }
+```
diff --git a/.claude/skills/jira-debate/references/persona-debate-protocol.md b/.claude/skills/jira-debate/references/persona-debate-protocol.md
new file mode 100644 (file)
index 0000000..e4071ff
--- /dev/null
@@ -0,0 +1,143 @@
+# Persona-Based Debate Protocol
+
+Detailed procedures for running focused debates with persona iterations.
+
+## Determining Personas
+
+```python
+# At the start of the debate
+if auto_detect_focus and not personas:
+    # Invoke jira-focus-detect
+    focus_result = invoke_skill("jira-focus-detect", issue_data=issue_data)
+    personas = focus_result["detected_personas"]
+
+    if focus_result["confidence"] == "low":
+        # Ask the user to confirm
+        personas = confirm_personas_with_user(personas)
+```
+
+## Loading a Persona Config
+
+```python
+def load_persona_config(persona_id: str) -> dict:
+    with open("config/debate-personas.json") as f:
+        config = json.load(f)
+    return config["personas"][persona_id]
+```
+
+## Building a Prompt with a Persona Focus
+
+```python
+def build_persona_prompt(base_prompt: str, persona: dict, document_type: str) -> str:
+    review_criteria = persona["review_criteria"][document_type]
+    focus_areas = persona["focus_areas"]
+
+    persona_context = f"""
+## Persona Focus: {persona["name"]}
+
+You are reviewing this document from the perspective of a {persona["name"]}.
+
+### Focus Areas
+{chr(10).join(f"- {area}" for area in focus_areas)}
+
+### Specific Review Questions for {document_type.upper()}
+{chr(10).join(f"- {q}" for q in review_criteria)}
+
+{persona["prompt_suffix"]}
+
+---
+
+"""
+    return persona_context + base_prompt
+```
+
+## Iterating over Personas
+
+```python
+async def debate_with_personas(
+    document: str,
+    personas: list,
+    document_type: str,
+    active_models: list
+) -> tuple[str, list]:
+    """
+    Runs the debate, iterating over personas.
+
+    Returns:
+        tuple: (updated_document, all_improvements)
+    """
+    all_improvements = []
+    debate_config = load_debate_config()
+
+    for persona_id in personas:
+        persona = load_persona_config(persona_id)
+        rounds_config = debate_config["rounds_per_persona"][document_type]
+
+        print(f"[debate:{document_type}] Iteration: {persona['name']}")
+
+        round_num = 0
+        consensus = False
+
+        while not consensus and round_num < rounds_config["max"]:
+            round_num += 1
+            print(f"  Round {round_num}/{rounds_config['max']} ({persona['name']})...")
+
+            # Run a round with the persona's focus
+            results = await execute_round_with_persona(
+                document=document,
+                persona=persona,
+                document_type=document_type,
+                active_models=active_models
+            )
+
+            # Check consensus against the persona's focus
+            consensus = check_persona_consensus(results, persona)
+
+            if not consensus:
+                document = update_document_with_feedback(document, results)
+                all_improvements.extend(results["improvements"])
+
+        print(f"  {persona['name']}: consensus after {round_num} round(s)")
+
+    return document, all_improvements
+```
+
+## Example Progress Output with Personas
+
+```
+[debate:spec] Focused debate (3 personas)
+
+[debate:spec] Iteration 1/3: Backend Architect
+  Round 1/3 (Backend Architect)...
+    GPT-5.2: MAJOR (API design: missing pagination)
+    DeepSeek v3.2: CRITICAL (DB schema: no indexes)
+    Gemini 3 Pro: APPROVED
+    ...
+  -> Updating the specification
+
+  Round 2/3 (Backend Architect)...
+    GPT-5.2: APPROVED
+    DeepSeek v3.2: APPROVED
+    ...
+  Backend Architect: consensus after 2 rounds
+
+[debate:spec] Iteration 2/3: Frontend Architect
+  Round 1/3 (Frontend Architect)...
+    GPT-5.2: MINOR (component structure suggestion)
+    ...
+  Frontend Architect: consensus after 1 round
+
+[debate:spec] Iteration 3/3: Security Engineer
+  Round 1/3 (Security Engineer)...
+    GPT-5.2: MAJOR (missing CSRF protection)
+    ...
+  -> Updating the specification
+
+  Round 2/3 (Security Engineer)...
+    ...
+  Security Engineer: consensus after 2 rounds
+
+[debate:spec] All iterations completed
+  Improvements applied: 7
+  Document updated: docs/jira/PROJ-123/spec.md
+```
diff --git a/.claude/skills/jira-debate/references/progress-output.md b/.claude/skills/jira-debate/references/progress-output.md
new file mode 100644 (file)
index 0000000..a9660eb
--- /dev/null
@@ -0,0 +1,54 @@
+# Progress Output Format
+
+Standard console output format during debate execution.
+
+## Standard Mode
+
+```
+[debate:{document_type}] Checking model availability...
+         GPT-5.2: available
+         DeepSeek v3.2: available
+         Grok 4.1 Fast: available
+         Gemini 3 Pro: available
+         Perplexity Sonar Pro: unavailable (skipped)
+         GLM 4.7: available
+         MiMo-V2-Flash: available
+
+[debate:{document_type}] Round 1/{max_rounds}...
+         Sending the document to 6 models...
+
+         GPT-5.2: MAJOR
+           -> "Retry behavior on API errors is not described"
+         DeepSeek v3.2: CRITICAL
+           -> "Circular dependency in the architecture"
+         Grok 4.1 Fast: MINOR
+           -> "Edge case: empty result"
+         Gemini 3 Pro: MAJOR
+           -> "Bottleneck at batch > 100"
+         GLM 4.7: APPROVED
+         MiMo-V2-Flash: APPROVED
+
+         Consensus: 2/6 APPROVED (70% required)
+         Critical: 1, Major: 2, Minor: 1
+
+         -> Updating the document...
+
+[debate:{document_type}] Round 2/{max_rounds}...
+         ...
+
+[debate:{document_type}] Round 3/{max_rounds}...
+
+         GPT-5.2: APPROVED
+         DeepSeek v3.2: APPROVED
+         Grok 4.1 Fast: APPROVED
+         Gemini 3 Pro: APPROVED (1 minor)
+         GLM 4.7: APPROVED
+         MiMo-V2-Flash: APPROVED
+
+         Consensus reached: 6/6 APPROVED
+         Saved: docs/jira/PROJ-123/debate-log.md
+```
+
+## Persona Mode
+
+See `references/persona-debate-protocol.md` for persona-mode output examples.
diff --git a/.claude/skills/jira-focus-detect/references/detection-rules.md b/.claude/skills/jira-focus-detect/references/detection-rules.md
new file mode 100644 (file)
index 0000000..03522ae
--- /dev/null
@@ -0,0 +1,95 @@
+# Focus Detection Rules
+
+Mappings and keyword dictionaries for automatic persona detection.
+
+## Component Mapping
+
+Maps Jira components to persona IDs (weight: 2 per match).
+
+```python
+component_mapping = {
+    # Frontend
+    "Frontend": "frontend",
+    "UI": "frontend",
+    "Web": "frontend",
+    "Mobile": "frontend",
+
+    # Backend
+    "Backend": "backend",
+    "API": "backend",
+    "Database": "backend",
+    "Server": "backend",
+
+    # DevOps
+    "Infrastructure": "devops",
+    "DevOps": "devops",
+    "CI/CD": "devops",
+    "Deployment": "devops",
+
+    # Security
+    "Security": "security",
+    "Auth": "security",
+    "Authentication": "security",
+
+    # QA
+    "QA": "qa",
+    "Testing": "qa",
+    "Quality": "qa"
+}
+```
+
+## Label Mapping
+
+Maps Jira labels to persona IDs (weight: 3 per match).
+
+```python
+label_mapping = {
+    "frontend": "frontend",
+    "backend": "backend",
+    "api": "backend",
+    "database": "backend",
+    "devops": "devops",
+    "infrastructure": "devops",
+    "security": "security",
+    "auth": "security",
+    "testing": "qa",
+    "e2e": "qa",
+    "performance": "backend"
+}
+```
+
+## File Pattern Matching
+
+Detects focus from affected file paths using glob patterns.
+
+```python
+def detect_focus_from_files(files: list, patterns: dict) -> dict:
+    """
+    Determines focus areas from file path patterns.
+
+    Args:
+        files: list of file paths
+        patterns: component_patterns from the config
+    """
+    from fnmatch import fnmatch
+
+    scores = {}
+    for persona, file_patterns in patterns.items():
+        for pattern in file_patterns:
+            matches = [f for f in files if fnmatch(f, pattern) or pattern in f]
+            if matches:
+                scores[persona] = scores.get(persona, 0) + len(matches)
+
+    return scores
+```
+
+## Source Weights
+
+Default weights for combining scores from different detection sources:
+
+| Source | Weight | Rationale |
+|--------|--------|-----------|
+| `text` | 1.0 | Keyword matches in summary/description |
+| `components` | 2.0 | Explicit Jira component assignment |
+| `labels` | 3.0 | Most intentional signal |
+| `files` | 1.5 | Inferred from codebase context |
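+
+As a worked example of these weights (the per-source scores are hypothetical):
+
+```python
+# Hypothetical raw scores for one persona ("backend") from each source.
+weights = {"text": 1.0, "components": 2.0, "labels": 3.0, "files": 1.5}
+raw = {"text": 4, "components": 1, "labels": 2}  # 4 keyword hits, 1 component, 2 labels
+
+combined = sum(score * weights[source] for source, score in raw.items())
+print(combined)  # 4*1.0 + 1*2.0 + 2*3.0 = 12.0
+```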
diff --git a/.claude/skills/jira-focus-detect/references/output-and-integration.md b/.claude/skills/jira-focus-detect/references/output-and-integration.md
new file mode 100644 (file)
index 0000000..ba3672e
--- /dev/null
@@ -0,0 +1,68 @@
+# Output Format & Integration
+
+## Interactive Mode
+
+When `debate_config.allow_user_override: true`:
+
+```
+Select the personas for the debate:
+
+  [x] 1) Backend Architect (recommended)
+  [x] 2) Security Engineer (recommended)
+  [ ] 3) Frontend Architect
+  [ ] 4) DevOps Engineer
+  [ ] 5) QA Engineer
+
+Enter the numbers separated by spaces, or press Enter to accept the recommended set: _
+```
+
+## Output JSON Schema
+
+```json
+{
+  "detected_personas": ["backend", "security"],
+  "detection_scores": {
+    "backend": 12,
+    "security": 6,
+    "frontend": 2,
+    "devops": 0,
+    "qa": 1
+  },
+  "matched_keywords": {
+    "backend": ["api", "database", "endpoint", "service"],
+    "security": ["authentication", "token"]
+  },
+  "confidence": "high",
+  "user_confirmed": true,
+  "user_modified": false
+}
+```
+
+## Integration with jira-debate
+
+Called before debate starts:
+
+```python
+# In jira-debate
+if auto_detect_focus and not personas:
+    focus_result = invoke_skill("jira-focus-detect", issue_data=issue_data)
+    personas = focus_result["detected_personas"]
+
+    if focus_result["confidence"] == "low":
+        personas = confirm_personas_with_user(personas)
+```
+
+## State Persistence
+
+Result saved to `config/state/{ISSUE-KEY}.json`:
+
+```json
+{
+  "focus_detection": {
+    "detected_at": "2026-01-28T12:00:00Z",
+    "personas": ["backend", "security"],
+    "confidence": "high",
+    "scores": {"backend": 12, "security": 6}
+  }
+}
+```
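+
+A minimal persistence sketch, assuming the state file may already exist and should be merged rather than overwritten (the helper name is illustrative):
+
+```python
+import json
+from pathlib import Path
+
+def save_focus_detection(issue_key: str, result: dict,
+                         state_dir: str = "config/state") -> None:
+    # Merge the focus_detection block into the existing state file, if any.
+    path = Path(state_dir) / f"{issue_key}.json"
+    state = json.loads(path.read_text()) if path.exists() else {}
+    state["focus_detection"] = result
+    path.parent.mkdir(parents=True, exist_ok=True)
+    path.write_text(json.dumps(state, indent=2, ensure_ascii=False))
+```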
diff --git a/.claude/skills/jira-focus-detect/references/scoring-algorithm.md b/.claude/skills/jira-focus-detect/references/scoring-algorithm.md
new file mode 100644 (file)
index 0000000..d223744
--- /dev/null
@@ -0,0 +1,148 @@
+# Scoring Algorithm
+
+Full implementation of the multi-source scoring and combination logic.
+
+## Text Analysis
+
+```python
+def detect_focus_from_text(text: str, keywords: dict) -> tuple:
+    """
+    Determines focus areas from keywords in the text.
+
+    Returns:
+        tuple: (scores_dict, matched_keywords_dict)
+    """
+    text_lower = text.lower()
+    scores = {}
+    matched_keywords = {}
+
+    for persona, kw_list in keywords.items():
+        matches = [kw for kw in kw_list if kw in text_lower]
+        if matches:
+            scores[persona] = len(matches)
+            matched_keywords[persona] = matches
+
+    return scores, matched_keywords
+```
+
+## Score Combination
+
+```python
+def combine_scores(*score_dicts, weights=None) -> list:
+    """
+    Combines scores from the different sources.
+
+    Args:
+        score_dicts: (scores_dict, source_name) tuples
+        weights: per-source weights (see detection-rules.md)
+
+    Returns:
+        list: top-3 personas sorted by score
+    """
+    default_weights = {
+        "text": 1.0,
+        "components": 2.0,
+        "labels": 3.0,
+        "files": 1.5
+    }
+    weights = weights or default_weights
+
+    combined = {}
+    for scores, source in score_dicts:
+        weight = weights.get(source, 1.0)
+        for persona, score in scores.items():
+            combined[persona] = combined.get(persona, 0) + (score * weight)
+
+    sorted_personas = sorted(
+        combined.items(),
+        key=lambda x: x[1],
+        reverse=True
+    )
+
+    return [p[0] for p in sorted_personas[:3] if p[1] > 0]
+```
+
+## Main Detection Function
+
+```python
+def detect_task_focus(issue_data: dict, files: list = None) -> dict:
+    """
+    Main task-focus detection function.
+
+    Args:
+        issue_data: issue data from Jira
+        files: optional list of affected files
+
+    Returns:
+        dict: {
+            "detected_personas": ["backend", "frontend", ...],
+            "detection_scores": {...},
+            "matched_keywords": {...},
+            "confidence": "high" | "medium" | "low"
+        }
+    """
+    detection_config, personas = load_focus_config()
+    keywords = detection_config["keywords"]
+    patterns = detection_config["component_patterns"]
+
+    # Collect the text to analyze
+    text = " ".join([
+        issue_data.get("summary", ""),
+        issue_data.get("description", ""),
+        " ".join(issue_data.get("acceptance_criteria", []))
+    ])
+
+    # Score each detection source
+    text_scores, matched = detect_focus_from_text(text, keywords)
+    component_scores = detect_focus_from_components(
+        issue_data.get("components", [])
+    )
+    label_scores = detect_focus_from_labels(
+        issue_data.get("labels", [])
+    )
+
+    score_sources = [
+        (text_scores, "text"),
+        (component_scores, "components"),
+        (label_scores, "labels")
+    ]
+
+    # Include file-based scores when available
+    if files:
+        file_scores = detect_focus_from_files(files, patterns)
+        score_sources.append((file_scores, "files"))
+
+    # Combine the weighted scores
+    detected = combine_scores(*score_sources)
+
+    # Determine confidence from the total raw score
+    total_score = sum(sum(s.values()) for s, _ in score_sources)
+    confidence = "high" if total_score > 10 else "medium" if total_score > 5 else "low"
+
+    # Fall back to a default persona when nothing was detected
+    if not detected:
+        detected = ["backend"]  # default fallback
+        confidence = "low"
+
+    # Sum raw scores per persona across sources (duplicate keys add up)
+    raw_scores = {}
+    for scores, _ in score_sources:
+        for persona, score in scores.items():
+            raw_scores[persona] = raw_scores.get(persona, 0) + score
+
+    return {
+        "detected_personas": detected,
+        "detection_scores": dict(
+            sorted(raw_scores.items(), key=lambda x: x[1], reverse=True)
+        ),
+        "matched_keywords": matched,
+        "confidence": confidence
+    }
+```
+
+## Confidence Thresholds
+
+| Total Score | Confidence | Action |
+|-------------|-----------|--------|
+| > 10 | `high` | Auto-select personas |
+| > 5 and <= 10 | `medium` | Suggest with confirmation |
+| <= 5 | `low` | Request user input |
+| 0 (nothing detected) | `low` (fallback) | Default to `["backend"]` |
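+
+The thresholds above can be restated as a small helper, mirroring the `confidence` expression in the main detection function:
+
+```python
+def confidence_from_score(total_score: float) -> str:
+    # Thresholds from the table above; a score of 0 also falls into "low".
+    if total_score > 10:
+        return "high"
+    if total_score > 5:
+        return "medium"
+    return "low"
+
+print(confidence_from_score(12))  # high
+```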
diff --git a/.claude/skills/jira-implement/references/implementation-patterns.md b/.claude/skills/jira-implement/references/implementation-patterns.md
new file mode 100644 (file)
index 0000000..ddf1c75
--- /dev/null
@@ -0,0 +1,138 @@
+# Implementation Patterns by Subtask Type
+
+Reference file for jira-implement skill. Contains code generation patterns for each subtask type.
+
+## Database Migration
+
+```python
+async def implement_db_migration(details: dict):
+    """
+    1. Create the migration file
+    2. Apply the migration
+    3. Verify the result
+    """
+
+    # Create the migration
+    migration_file = f"db/migrations/versions/{timestamp}_{details['name']}.py"
+    migration_content = generate_migration(details["schema"])
+    Write(
+        file_path=migration_file,
+        content=migration_content
+    )
+
+    # Apply it
+    Bash(command="alembic upgrade head")
+
+    # Verify
+    Bash(command="alembic current")
+
+    return {"type": "migration", "file": migration_file}
+```
+
+## Models
+
+```python
+async def implement_models(details: dict):
+    """
+    1. Create the SQLAlchemy model
+    2. Create the Pydantic schemas
+    3. Update __init__.py
+    """
+
+    # Model
+    model_file = f"src/models/{details['model_name']}.py"
+    model_content = generate_model(details["model"])
+    Write(
+        file_path=model_file,
+        content=model_content
+    )
+
+    # Schemas
+    schema_file = f"src/schemas/{details['model_name']}.py"
+    schema_content = generate_schemas(details["model"])
+    Write(
+        file_path=schema_file,
+        content=schema_content
+    )
+
+    # Update the exports
+    Edit(
+        file_path="src/models/__init__.py",
+        old_string="# models",
+        new_string=f"# models\nfrom .{details['model_name']} import {details['class_name']}"
+    )
+
+    return {"type": "models", "files": [model_file, schema_file]}
+```
+
+## Service
+
+```python
+async def implement_service(details: dict):
+    """
+    1. Create the service with the methods from the spec
+    2. Include improvements from the debate (retry policy, etc.)
+    """
+
+    service_file = f"src/services/{details['service_name']}.py"
+    service_content = generate_service(
+        details["service"],
+        debate_improvements=details.get("debate_improvements", [])
+    )
+
+    Write(
+        file_path=service_file,
+        content=service_content
+    )
+
+    return {"type": "service", "file": service_file}
+```
+
+## API Endpoint
+
+```python
+async def implement_api(details: dict):
+    """
+    1. Create the route with validation
+    2. Add Swagger docs
+    3. Include debate improvements (pagination, etc.)
+    """
+
+    api_file = f"src/api/{details['resource']}.py"
+    api_content = generate_api_endpoint(
+        details["endpoint"],
+        debate_improvements=details.get("debate_improvements", [])
+    )
+
+    Write(
+        file_path=api_file,
+        content=api_content
+    )
+
+    return {"type": "api", "file": api_file}
+```
+
+## Tests
+
+```python
+async def implement_tests(details: dict):
+    """
+    1. Unit tests for the service
+    2. Integration tests for the API
+    3. Edge cases from the interview
+    """
+
+    # Unit tests
+    unit_file = f"tests/unit/test_{details['service_name']}.py"
+    unit_content = generate_unit_tests(details["service"])
+    Write(
+        file_path=unit_file,
+        content=unit_content
+    )
+
+    # Integration tests
+    integration_file = f"tests/integration/test_{details['resource']}_api.py"
+    integration_content = generate_integration_tests(details["api"])
+    Write(
+        file_path=integration_file,
+        content=integration_content
+    )
+
+    # Run the tests
+    Bash(command="pytest tests/ -v")
+
+    return {"type": "tests", "files": [unit_file, integration_file]}
+```
diff --git a/.claude/skills/jira-implement/references/lifecycle-and-errors.md b/.claude/skills/jira-implement/references/lifecycle-and-errors.md
new file mode 100644 (file)
index 0000000..7fc6e87
--- /dev/null
@@ -0,0 +1,172 @@
+# Task Lifecycle, Error Handling, and Finalization
+
+Reference file for jira-implement skill. Contains execution lifecycle, error handling, progress output, and finalization patterns.
+
+## Task Execution Lifecycle
+
+```python
+async def execute_task(task: dict, context: dict):
+    task_id = task["id"]
+    jira_subtask = task["metadata"]["jira_subtask"]
+
+    # 1. Пометить как in_progress
+    TaskUpdate(taskId=task_id, status="in_progress")
+
+    # 2. Sync с Jira
+    await jira_transition_issue(jira_subtask, "In Progress")
+    await jira_add_comment(jira_subtask, "Started implementation")
+
+    # 3. Extract details from the plan
+    subtask_details = extract_subtask_from_plan(context["plan"], jira_subtask)
+
+    # 4. Run the implementation
+    result = await implement_subtask(subtask_details)
+
+    # 5. Mark as completed
+    TaskUpdate(taskId=task_id, status="completed")
+
+    # 6. Sync with Jira
+    await jira_transition_issue(jira_subtask, "Done")
+    await jira_add_comment(jira_subtask, f"Completed\n\nChanges:\n{result['summary']}")
+
+    return result
+```
+
+## Error Handling
+
+```python
+async def handle_implementation_error(task: dict, error: Exception):
+    """
+    On error:
+    1. Do NOT mark the task as completed
+    2. Add a comment in Jira
+    3. Save state for resume
+    """
+
+    jira_subtask = task["metadata"]["jira_subtask"]
+
+    await jira_add_comment(jira_subtask, f"""
+Implementation error:
+
+{str(error)}
+
+Task remains in progress. Manual intervention may be required.
+""")
+
+    # Save state for resume
+    save_state({
+        "current_task": task["id"],
+        "error": str(error),
+        "can_resume": True
+    })
+
+    raise ImplementationError(f"Failed to implement {jira_subtask}: {error}")
+```
+
+## Progress Output Format
+
+```
+[implement] Начинаю реализацию PROJ-123...
+
+    План: 5 subtasks
+
+    [1/5] PROJ-123-1: Database Migration
+          -> Starting...
+          -> Migration created
+          -> Migration applied
+          -> Completed
+
+    [2/5] PROJ-123-2: SQLAlchemy Models
+          -> Starting...
+          -> Model created
+          -> Schemas created
+          -> Exports updated
+          -> Completed
+
+    [3/5] PROJ-123-3: Service Layer
+          -> Starting...
+          -> Service created (with retry policy from debate)
+          -> Completed
+
+    [4/5] PROJ-123-4: API Endpoint
+          -> Starting...
+          -> Endpoint created (with pagination from debate)
+          -> Swagger docs added
+          -> Completed
+
+    [5/5] PROJ-123-5: Tests
+          -> Starting...
+          -> Unit tests: 12 passed
+          -> Integration tests: 4 passed
+          -> Completed
+
+    ===================================
+    Implementation complete!
+
+    Files created: 8
+    Files modified: 3
+    Tests: 16 passed
+
+    All subtasks synced with Jira.
+    Parent issue PROJ-123 ready for review.
+```
+
+## Finalization
+
+```python
+async def finalize_implementation(issue_key: str, results: list):
+    # Update workflow state
+    update_state(issue_key, {
+        "completed_stages": [..., "implement"],
+        "current_stage": "completed",
+        "implementation": {
+            "subtasks_completed": len(results),
+            "files_created": count_created(results),
+            "files_modified": count_modified(results),
+            "tests_passed": count_tests(results)
+        }
+    })
+
+    # Final comment in Jira
+    await jira_add_comment(issue_key, f"""
+Implementation completed!
+
+**Summary:**
+- Subtasks: {len(results)} completed
+- Files created: {count_created(results)}
+- Files modified: {count_modified(results)}
+- Tests: {count_tests(results)} passed
+
+**Artifacts:**
+- PRD: docs/jira/{issue_key}/prd.md
+- Spec: docs/jira/{issue_key}/spec.md
+- Debate: docs/jira/{issue_key}/debate-log.md
+- Plan: docs/jira/{issue_key}/plan.md
+
+Ready for code review.
+""")
+```
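
`count_created`, `count_modified`, and `count_tests` above are assumed helpers; a minimal sketch, under the assumption that each item in `results` carries `created`/`modified` file lists and a `tests_passed` counter (these key names are illustrative, not defined by this skill):

```python
def count_created(results: list) -> int:
    # Assumed shape: each result may carry a "created" list of file paths
    return sum(len(r.get("created", [])) for r in results)

def count_modified(results: list) -> int:
    # Assumed shape: each result may carry a "modified" list of file paths
    return sum(len(r.get("modified", [])) for r in results)

def count_tests(results: list) -> int:
    # Assumed shape: each result may carry a "tests_passed" counter
    return sum(r.get("tests_passed", 0) for r in results)
```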
+
+## Output Schema
+
+```json
+{
+  "issue_key": "PROJ-123",
+  "status": "completed",
+  "subtasks_completed": 5,
+  "files": {
+    "created": ["migration.py", "model.py", "schema.py", "service.py", "api.py", "tests.py"],
+    "modified": ["__init__.py", "routes.py"]
+  },
+  "tests": {
+    "unit": 12,
+    "integration": 4,
+    "passed": 16,
+    "failed": 0
+  },
+  "debate_improvements_applied": [
+    "retry_policy",
+    "pagination"
+  ]
+}
+```
diff --git a/.claude/skills/jira-plan/references/decomposition-rules.md b/.claude/skills/jira-plan/references/decomposition-rules.md
new file mode 100644 (file)
index 0000000..1367436
--- /dev/null
@@ -0,0 +1,105 @@
+# Decomposition Rules & Subtask Creation
+
+## Decomposition Rules
+
+```yaml
+rules:
+  - name: "One concern per subtask"
+    description: "Each subtask addresses exactly one concern"
+
+  - name: "Testable outcome"
+    description: "The subtask's outcome can be verified"
+
+  - name: "Estimated size"
+    description: "A subtask takes 2-8 hours of work"
+
+  - name: "Clear dependencies"
+    description: "Dependencies between subtasks are explicit"
+```
+
+## Estimation Guidelines
+
+| Complexity | Hours | Examples |
+|-----------|-------|---------|
+| Trivial | 1-2 | Config change, simple migration |
+| Simple | 2-4 | CRUD model, basic endpoint |
+| Medium | 4-8 | Service with business logic, complex query |
+| Complex | 8-16 | Multi-service integration, complex algorithm |
+
+Rule: if subtask > 8h, decompose further.
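
The table and the 8-hour rule can be encoded as a small helper (the function name is illustrative):

```python
def classify_complexity(hours: float) -> str:
    # Thresholds follow the estimation table above
    if hours <= 2:
        return "Trivial"
    if hours <= 4:
        return "Simple"
    if hours <= 8:
        return "Medium"
    return "Complex"  # and if this is a single subtask, decompose further
```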
+
+## Spec Analysis Extraction
+
+From spec.md extract:
+- **API changes** -> subtask per endpoint
+- **Database changes** -> subtask for migrations
+- **Service changes** -> subtask per service
+- **Tests** -> subtask for testing
+
+From debate-log.md extract improvements:
+```python
+def extract_debate_improvements(debate_log: str) -> list:
+    """
+    Extracts improvements from the debate log, e.g.:
+    - Added retry policy
+    - Refactored architecture
+    - Added pagination
+    """
+    # Collect the bullet items of the "Improvements Made" section
+    improvements = []
+    in_section = False
+    for line in debate_log.splitlines():
+        if line.strip().startswith("## Improvements Made"):
+            in_section = True
+        elif in_section and line.startswith("## "):
+            break
+        elif in_section and line.strip().startswith("- "):
+            improvements.append(line.strip()[2:])
+    return improvements
+```
+
+## Creating Subtasks in Jira
+
+**MCP:**
+```python
+async def create_jira_subtasks(parent_key: str, subtasks: list):
+    created = []
+    for subtask in subtasks:
+        result = await jira_create_issue(
+            project=extract_project(parent_key),
+            issue_type="Sub-task",
+            parent=parent_key,
+            summary=subtask["summary"],
+            description=format_subtask_description(subtask),
+            labels=["auto-generated", "jira-workflow"]
+        )
+        created.append(result["key"])
+    return created
+```
+
+**HTTP fallback:**
+```bash
+curl -s -u "${ATLASSIAN_EMAIL}:${ATLASSIAN_TOKEN}" \
+  -X POST \
+  -H "Content-Type: application/json" \
+  -d '{"fields": {"project": {"key": "PROJ"}, "issuetype": {"name": "Sub-task"}, "parent": {"key": "PROJ-123"}, "summary": "...", "description": {"type": "doc", "version": 1, "content": [{"type": "paragraph", "content": [{"type": "text", "text": "..."}]}]}, "labels": ["auto-generated", "jira-workflow"]}}' \
+  "https://${ATLASSIAN_HOST}/rest/api/3/issue"
+```
+
+## Local Task Creation
+
+```python
+for subtask in subtasks:
+    TaskCreate(
+        subject=f"{subtask['key']}: {subtask['summary']}",
+        description=subtask["description"],
+        activeForm=f"Реализую {subtask['summary']}",
+        metadata={
+            "jira_subtask": subtask["key"],
+            "blocked_by": subtask["blocked_by"],
+            "blocks": subtask["blocks"]
+        }
+    )
+```
+
+## Setting Dependencies
+
+```python
+for subtask in subtasks:
+    if subtask["blocked_by"]:
+        TaskUpdate(
+            taskId=subtask["task_id"],
+            addBlockedBy=[find_task_id(dep) for dep in subtask["blocked_by"]]
+        )
+```
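
`find_task_id` above is an assumed helper; one possible sketch, resolving a Jira subtask key through the `jira_subtask` metadata set during TaskCreate (here the local task list is passed in explicitly, since the lookup API is not specified in this reference):

```python
def find_task_id(jira_key: str, tasks: list) -> int:
    # Look up the local task whose metadata references this Jira subtask
    for task in tasks:
        if task.get("metadata", {}).get("jira_subtask") == jira_key:
            return task["id"]
    raise KeyError(f"No local task found for {jira_key}")
```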
diff --git a/.claude/skills/jira-plan/references/jira-sync-details.md b/.claude/skills/jira-plan/references/jira-sync-details.md
new file mode 100644 (file)
index 0000000..481e964
--- /dev/null
@@ -0,0 +1,83 @@
+# Jira Sync & State Management
+
+## State Update
+
+After plan creation, update workflow state:
+
+```json
+{
+  "completed_stages": ["fetch", "interview", "prd", "debate_prd", "spec", "debate_spec", "plan"],
+  "current_stage": "debate_plan",
+  "plan": {
+    "subtasks_count": 5,
+    "jira_subtasks": ["PROJ-123-1", "PROJ-123-2", "PROJ-123-3", "PROJ-123-4", "PROJ-123-5"],
+    "local_tasks": [1, 2, 3, 4, 5]
+  },
+  "artifacts": {
+    "plan": "docs/jira/PROJ-123/plan.md"
+  }
+}
+```
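
A minimal sketch of persisting this update, following the `config/state/{issue_key}.json` convention used by the other jira-* skills (`update_state` itself is not defined in this reference):

```python
import json
import os

def update_state(issue_key: str, patch: dict) -> None:
    # Merge the patch into the per-issue workflow state file
    path = f"config/state/{issue_key}.json"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    state = {}
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    state.update(patch)
    with open(path, "w") as f:
        json.dump(state, f, ensure_ascii=False, indent=2)
```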
+
+## Attaching Plan to Jira Issue
+
+```python
+attach_artifact(issue_key, f"docs/jira/{issue_key}/plan.md")
+```
+
+**HTTP fallback:**
+```bash
+curl -s -u "${ATLASSIAN_EMAIL}:${ATLASSIAN_TOKEN}" \
+  -X POST \
+  -H "X-Atlassian-Token: no-check" \
+  -F "file=@docs/jira/${ISSUE_KEY}/plan.md" \
+  "https://${ATLASSIAN_HOST}/rest/api/3/issue/${ISSUE_KEY}/attachments"
+```
+
+## Adding Jira Comment (Russian)
+
+```python
+jira_add_comment(parent_key, f"""
+Plan реализации создан
+
+Подзадачи:
+{format_subtasks_list(created_subtasks)}
+
+Документ плана: docs/jira/{parent_key}/plan.md
+
+Фазы:
+1. Фундамент (миграции БД)
+2. Ядро (модели, сервисы)
+3. Интерфейс (API)
+4. Качество (тесты)
+
+Готово к реализации.
+""")
+```
+
+## Output Format
+
+```
+[plan] Создаю план реализации...
+
+       Анализ спецификации:
+       - API changes: 2 endpoints
+       - Database: 1 new table
+       - Services: 1 new service
+       - Debate improvements: 2
+
+       Декомпозиция на subtasks:
+       PROJ-123-1: Database Migration
+       PROJ-123-2: SQLAlchemy Models
+       PROJ-123-3: Service Layer (+ retry policy)
+       PROJ-123-4: API Endpoint (+ pagination)
+       PROJ-123-5: Tests
+
+       Subtasks созданы в Jira
+       Tasks созданы локально (TaskCreate)
+       Зависимости установлены
+       План сохранён: docs/jira/PROJ-123/plan.md
+       План прикреплён к задаче в Jira
+
+       -> Переход к этапу: debate (документ: Plan)
+```
diff --git a/.claude/skills/jira-plan/references/plan-template.md b/.claude/skills/jira-plan/references/plan-template.md
new file mode 100644 (file)
index 0000000..cb6013d
--- /dev/null
@@ -0,0 +1,186 @@
+# Plan Template Reference
+
+Full template for `docs/jira/{issue_key}/plan.md`.
+
+## Template
+
+```markdown
+# Implementation Plan: {issue_key}
+
+**Spec:** docs/jira/{issue_key}/spec.md
+**Debate:** docs/jira/{issue_key}/debate-log.md
+**Date:** {current_date}
+
+---
+
+## Overview
+
+{Краткое описание что будет реализовано}
+
+Total subtasks: {N}
+Estimated effort: {hours} hours
+
+## Dependencies Graph
+
+```
+[PROJ-123-1: DB Migration]
+        |
+[PROJ-123-2: Models] --> [PROJ-123-4: API Endpoint]
+        |                         |
+[PROJ-123-3: Service] <----------+
+        |
+[PROJ-123-5: Tests]
+```
+
+## Subtasks
+
+### PROJ-123-1: Database Migration
+
+**Description:**
+Создать миграцию для новой таблицы `{table_name}`.
+
+**Acceptance Criteria:**
+- [ ] Миграция создана в `db/migrations/versions/`
+- [ ] Миграция применяется без ошибок
+- [ ] Rollback работает корректно
+
+**Files to modify:**
+- `db/migrations/versions/{timestamp}_{name}.py` (create)
+
+**Blocked by:** None
+**Blocks:** PROJ-123-2, PROJ-123-3
+
+---
+
+### PROJ-123-2: SQLAlchemy Models
+
+**Description:**
+Создать модель `{ModelName}` и соответствующие DTO.
+
+**Acceptance Criteria:**
+- [ ] Модель создана в `models/`
+- [ ] DTO созданы в `schemas/`
+- [ ] Связи с другими моделями настроены
+
+**Files to modify:**
+- `src/models/{model_name}.py` (create)
+- `src/schemas/{model_name}.py` (create)
+- `src/models/__init__.py` (modify)
+
+**Blocked by:** PROJ-123-1
+**Blocks:** PROJ-123-3, PROJ-123-4
+
+---
+
+### PROJ-123-3: Service Layer
+
+**Description:**
+Реализовать `{ServiceName}Service` с бизнес-логикой.
+
+**Acceptance Criteria:**
+- [ ] Сервис создан в `services/`
+- [ ] CRUD операции реализованы
+- [ ] Error handling добавлен (включая retry policy из debate)
+
+**Files to modify:**
+- `src/services/{service_name}.py` (create)
+- `src/services/__init__.py` (modify)
+
+**Blocked by:** PROJ-123-2
+**Blocks:** PROJ-123-4
+
+**Debate improvements included:**
+- Retry policy with exponential backoff (GPT-5.2)
+
+---
+
+### PROJ-123-4: API Endpoint
+
+**Description:**
+Создать REST endpoint `{method} /api/v1/{resource}`.
+
+**Acceptance Criteria:**
+- [ ] Endpoint создан в `api/`
+- [ ] Валидация входных данных
+- [ ] Swagger документация
+- [ ] Pagination добавлена (из debate)
+
+**Files to modify:**
+- `src/api/{resource}.py` (create or modify)
+- `src/api/__init__.py` (modify)
+
+**Blocked by:** PROJ-123-3
+**Blocks:** PROJ-123-5
+
+**Debate improvements included:**
+- Pagination for batch operations (Gemini 3 Pro)
+
+---
+
+### PROJ-123-5: Tests
+
+**Description:**
+Написать unit и integration тесты.
+
+**Acceptance Criteria:**
+- [ ] Unit тесты для сервиса (>80% coverage)
+- [ ] Integration тесты для API
+- [ ] Edge cases из interview покрыты
+
+**Files to modify:**
+- `tests/unit/test_{service_name}.py` (create)
+- `tests/integration/test_{resource}_api.py` (create)
+
+**Blocked by:** PROJ-123-4
+**Blocks:** None
+
+---
+
+## Execution Order
+
+1. **Phase 1: Foundation**
+   - PROJ-123-1: Database Migration
+
+2. **Phase 2: Core**
+   - PROJ-123-2: Models
+   - PROJ-123-3: Service (after models)
+
+3. **Phase 3: Interface**
+   - PROJ-123-4: API Endpoint
+
+4. **Phase 4: Quality**
+   - PROJ-123-5: Tests
+
+## Risk Mitigation
+
+| Risk | Mitigation |
+|------|------------|
+| {risk_from_spec} | {mitigation} |
+
+## Notes
+
+- {any_additional_notes}
+```
+
+## Subtask Template (Single)
+
+Each subtask follows this structure:
+
+```markdown
+### {PROJ-KEY}: {Summary}
+
+**Description:** {What to implement}
+
+**Acceptance Criteria:**
+- [ ] {Criterion 1}
+- [ ] {Criterion 2}
+
+**Files to modify:**
+- `path/to/file.ext` (create | modify)
+
+**Blocked by:** {dependencies or None}
+**Blocks:** {dependent subtasks or None}
+
+**Debate improvements included:** (if any)
+- {improvement} ({source persona})
+```
diff --git a/.claude/skills/jira-report/references/metric-extraction.md b/.claude/skills/jira-report/references/metric-extraction.md
new file mode 100644 (file)
index 0000000..264e7d3
--- /dev/null
@@ -0,0 +1,120 @@
+# Metric Extraction & Helper Functions
+
+## Data Loading
+
+```python
+def load_report_data(issue_key: str) -> dict:
+    """Collects all data for the report."""
+
+    # Load the workflow state
+    state_path = f"config/state/{issue_key}.json"
+    with open(state_path) as f:
+        state = json.load(f)
+
+    # Load artifacts
+    artifacts_dir = f"docs/jira/{issue_key}"
+    artifacts = {}
+
+    for artifact in ["interview.md", "prd.md", "spec.md", "debate-log.md", "plan.md"]:
+        path = f"{artifacts_dir}/{artifact}"
+        if os.path.exists(path):
+            with open(path) as f:
+                artifacts[artifact] = f.read()
+
+    return {
+        "state": state,
+        "artifacts": artifacts,
+        "issue_key": issue_key
+    }
+```
+
+## Metric Extraction
+
+```python
+def extract_metrics(data: dict) -> dict:
+    """Extracts metrics from artifacts and state."""
+
+    state = data["state"]
+    artifacts = data["artifacts"]
+
+    # Debate metrics
+    debate_metrics = {
+        "prd": extract_debate_metrics(state.get("debate", {}).get("prd", {})),
+        "spec": extract_debate_metrics(state.get("debate", {}).get("spec", {})),
+        "plan": extract_debate_metrics(state.get("debate", {}).get("plan", {}))
+    }
+
+    # Count subtasks from plan.md
+    subtasks = parse_subtasks_from_plan(artifacts.get("plan.md", ""))
+
+    # Extract the architecture summary from spec.md
+    architecture = extract_architecture_summary(artifacts.get("spec.md", ""))
+
+    return {
+        "debates": debate_metrics,
+        "subtasks": subtasks,
+        "architecture": architecture,
+        "total_improvements": sum(
+            d.get("improvements", 0) for d in debate_metrics.values()
+        ),
+        "total_rounds": sum(
+            d.get("rounds", 0) for d in debate_metrics.values()
+        )
+    }
+```
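
`extract_debate_metrics` is assumed above; a minimal sketch, assuming each per-document debate state stores `rounds` and `improvements` counters (the key names are assumptions, not confirmed by this skill):

```python
def extract_debate_metrics(debate_state: dict) -> dict:
    # Normalize one document's debate state into the counters used above
    if not debate_state:
        return {}
    improvements = debate_state.get("improvements", 0)
    if isinstance(improvements, list):
        improvements = len(improvements)
    return {
        "rounds": debate_state.get("rounds", 0),
        "improvements": improvements,
    }
```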
+
+## Dependency Graph Generation
+
+```python
+def generate_dependency_graph(subtasks: list) -> str:
+    """Generates an ASCII dependency graph."""
+
+    graph = []
+    levels = calculate_task_levels(subtasks)
+
+    for level, tasks in enumerate(levels):
+        if level == 0:
+            graph.append("  " + "  ".join(f"[{t}]" for t in tasks))
+        else:
+            prev_level = levels[level - 1]
+            # Draw connections to tasks in the previous level
+            connections = []
+            for task in tasks:
+                task_deps = next(
+                    (t.get("depends_on", []) for t in subtasks if t["key"] == task),
+                    []
+                )
+                for dep in task_deps:
+                    if dep in prev_level:
+                        connections.append("    |")
+            if connections:
+                graph.append("    " + "  ".join(connections))
+            graph.append("  " + "  ".join(f"[{t}]" for t in tasks))
+
+    return "\n".join(graph)
+```
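
`calculate_task_levels` is assumed above; one possible sketch, grouping tasks into topological levels by their `depends_on` lists:

```python
def calculate_task_levels(subtasks: list) -> list:
    # Level 0 = tasks with no dependencies; level N = tasks whose
    # dependencies all sit in earlier levels
    placed, levels = set(), []
    remaining = {t["key"]: t.get("depends_on", []) for t in subtasks}
    while remaining:
        level = [key for key, deps in remaining.items()
                 if all(dep in placed for dep in deps)]
        if not level:
            level = list(remaining)  # cycle guard: place whatever is left
        levels.append(level)
        placed.update(level)
        for key in level:
            del remaining[key]
    return levels
```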
+
+## Subtask Table Formatting
+
+```python
+def format_subtasks_table(subtasks: list) -> str:
+    """Formats subtasks as a markdown table."""
+    lines = []
+    for i, task in enumerate(subtasks, 1):
+        deps = ", ".join(task.get("depends_on", [])) or "—"
+        estimate = task.get("estimate", "—")
+        lines.append(
+            f"| {i} | {task['key']} | {task['summary']} | {deps} | {estimate} |"
+        )
+    return "\n".join(lines)
+```
diff --git a/.claude/skills/jira-report/references/report-template.md b/.claude/skills/jira-report/references/report-template.md
new file mode 100644 (file)
index 0000000..09f5e1a
--- /dev/null
@@ -0,0 +1,145 @@
+# Report Template Reference
+
+Full markdown template generated by `generate_report()`.
+
+## Template Structure
+
+```python
+def generate_report(issue_key: str, data: dict, metrics: dict) -> str:
+    """Generates the markdown report."""
+
+    state = data["state"]
+    issue_data = state.get("issue_data", {})
+
+    report = f"""# Отчёт о подготовке к реализации: {issue_key}
+
+**Задача:** {issue_data.get("summary", "N/A")}
+**Тип:** {issue_data.get("issue_type", "N/A")}
+**Приоритет:** {issue_data.get("priority", "N/A")}
+**Workflow:** {state.get("workflow_type", "full")}
+
+---
+
+## 1. Обзор
+
+### Цель
+{extract_goal_from_prd(data["artifacts"].get("prd.md", ""))}
+
+### Scope
+**В scope:**
+{extract_in_scope(data["artifacts"].get("prd.md", ""))}
+
+**Вне scope:**
+{extract_out_of_scope(data["artifacts"].get("prd.md", ""))}
+
+---
+
+## 2. Созданные документы
+
+| Документ | Файл | Статус |
+|----------|------|--------|
+| Интервью | interview.md | {"✅ Завершено" if "interview.md" in data["artifacts"] else "⏳ Не создан"} |
+| PRD | prd.md | {"✅ Завершено" if "prd.md" in data["artifacts"] else "⏳ Не создан"} |
+| Спецификация | spec.md | {"✅ Завершено" if "spec.md" in data["artifacts"] else "⏳ Не создан"} |
+| Лог дебатов | debate-log.md | {"✅ Завершено" if "debate-log.md" in data["artifacts"] else "⏳ Не создан"} |
+| План | plan.md | {"✅ Завершено" if "plan.md" in data["artifacts"] else "⏳ Не создан"} |
+
+---
+
+## 3. Дебаты
+
+### Статистика
+
+| Документ | Персоны | Раундов | Улучшений |
+|----------|---------|---------|-----------|
+| PRD | {format_personas(metrics["debates"]["prd"])} | {metrics["debates"]["prd"].get("rounds", 0)} | {metrics["debates"]["prd"].get("improvements", 0)} |
+| Spec | {format_personas(metrics["debates"]["spec"])} | {metrics["debates"]["spec"].get("rounds", 0)} | {metrics["debates"]["spec"].get("improvements", 0)} |
+| Plan | {format_personas(metrics["debates"]["plan"])} | {metrics["debates"]["plan"].get("rounds", 0)} | {metrics["debates"]["plan"].get("improvements", 0)} |
+
+### Участвовавшие модели
+
+{format_models_table(state.get("active_models", []))}
+
+### Персоны и фокусы
+
+{format_personas_details(state.get("focus_detection", {}))}
+
+### Ключевые улучшения
+
+{format_improvements_list(state.get("improvements", []))}
+
+---
+
+## 4. План реализации
+
+### Подзадачи
+
+| # | Ключ | Название | Зависит от | Оценка |
+|---|------|----------|------------|--------|
+{format_subtasks_table(metrics["subtasks"])}
+
+### Граф зависимостей
+
+```
+{generate_dependency_graph(metrics["subtasks"])}
+```
+
+---
+
+## 5. Технические решения
+
+### Архитектура
+{metrics["architecture"].get("summary", "Не определена")}
+
+### API
+{metrics["architecture"].get("api_summary", "Не определено")}
+
+### База данных
+{metrics["architecture"].get("db_summary", "Не определено")}
+
+### Безопасность
+{metrics["architecture"].get("security_summary", "Не определено")}
+
+---
+
+## 6. Риски и mitigation
+
+{format_risks_table(state.get("risks", []))}
+
+---
+
+## 7. Рекомендации
+
+### Рекомендуемые скиллы для реализации
+
+{format_recommended_skills(state.get("skill_recommendations", {}))}
+
+### Дополнительные проверки
+
+{format_additional_checks(state.get("additional_checks", []))}
+
+---
+
+**Отчёт сгенерирован:** {datetime.now().isoformat()}
+**Версия плагина:** 2.0.0
+
+---
+
+> Для начала реализации выполните:
+> `/jira-workflow {issue_key} --continue-to-implement`
+"""
+
+    return report
+```
+
+## Section Summary
+
+| # | Section | Content |
+|---|---------|---------|
+| 1 | Overview | Goal from the PRD, scope (in/out) |
+| 2 | Created documents | Artifact table with status |
+| 3 | Debates | Statistics, models, personas, improvements |
+| 4 | Implementation plan | Subtask table, ASCII dependency graph |
+| 5 | Technical decisions | Architecture, API, DB, security |
+| 6 | Risks | Risk table with mitigations |
+| 7 | Recommendations | Skills for implementation, additional checks |
diff --git a/.claude/skills/jira-report/references/workflow-integration.md b/.claude/skills/jira-report/references/workflow-integration.md
new file mode 100644 (file)
index 0000000..140a25c
--- /dev/null
@@ -0,0 +1,101 @@
+# Workflow Integration & Finalization
+
+## Jira Attachment
+
+```python
+def finalize_report(issue_key: str, report_path: str):
+    """Attaches the report to Jira and adds a comment."""
+
+    # 1. Attach report.md as an attachment
+    attach_artifact(issue_key, report_path)
+
+    # 2. Add a comment
+    invoke_skill("jira-comment",
+        issue_key=issue_key,
+        stage="report",
+        context={
+            "artifacts_count": 5,
+            "debates_count": 3,
+            "improvements_count": 12,
+            "subtasks_count": 5
+        }
+    )
+
+    # 3. Update the status (optional)
+    # update_issue_status(issue_key, "Ready for Development")
+```
+
+## Console Output
+
+```
+[report] Генерирую итоговый отчёт для PROJ-123...
+
+         Документы: 5 файлов
+         Дебаты: 3 документа, 7 раундов, 12 улучшений
+         План: 5 подзадач
+
+         Отчёт: docs/jira/PROJ-123/report.md
+         Прикреплён к задаче: ✓
+
+         Готово к реализации!
+```
+
+## Main Orchestrator
+
+```python
+def generate_workflow_report(issue_key: str) -> str:
+    """Main entry point for report generation."""
+
+    print(f"[report] Генерирую итоговый отчёт для {issue_key}...")
+
+    # 1. Load data
+    data = load_report_data(issue_key)
+
+    # 2. Extract metrics
+    metrics = extract_metrics(data)
+
+    print(f"         Документы: {len(data['artifacts'])} файлов")
+    print(f"         Дебаты: {len([d for d in metrics['debates'].values() if d])} документа, "
+          f"{metrics['total_rounds']} раундов, {metrics['total_improvements']} улучшений")
+    print(f"         План: {len(metrics['subtasks'])} подзадач")
+
+    # 3. Generate the report
+    report = generate_report(issue_key, data, metrics)
+
+    # 4. Save the report
+    report_path = f"docs/jira/{issue_key}/report.md"
+    os.makedirs(os.path.dirname(report_path), exist_ok=True)
+    with open(report_path, 'w') as f:
+        f.write(report)
+
+    print(f"         Отчёт: {report_path}")
+
+    # 5. Attach to Jira
+    finalize_report(issue_key, report_path)
+    print("         Прикреплён к задаче: ✓")
+
+    print("\n         Готово к реализации!")
+
+    return report_path
+```
+
+## Workflow Integration
+
+Called automatically after `debate_plan` and before `implement`:
+
+```python
+# In jira-workflow
+if current_stage == "debate_plan" and consensus_reached:
+    # Generate the report before moving on to implement
+    report_path = invoke_skill("jira-report", issue_key=issue_key)
+
+    # Show it to the user
+    print(f"\n[workflow] Отчёт готов: {report_path}")
+    print("[workflow] Проверьте отчёт перед началом реализации.")
+
+    # Ask for confirmation
+    confirm = ask_user("Начать реализацию?", options=["Да", "Нет, нужны правки"])
+
+    if confirm == "Да":
+        proceed_to_implement()
+```
diff --git a/.claude/skills/jira-skill-recommend/references/integration-patterns.md b/.claude/skills/jira-skill-recommend/references/integration-patterns.md
new file mode 100644 (file)
index 0000000..cfd895c
--- /dev/null
@@ -0,0 +1,144 @@
+# Integration Patterns
+
+## Main Orchestrator
+
+```python
+def recommend_skills(issue_data: dict, focuses: list = None) -> dict:
+    """Main entry point for skill recommendation."""
+
+    print(f"[skill-recommend] Анализирую задачу...")
+
+    # 1. Get focuses
+    focuses = get_task_focuses(issue_data, focuses)
+    print(f"Обнаруженные фокусы: {', '.join(focuses)}")
+
+    # 2. Load the mapping
+    skill_mapping = load_skill_mapping()
+
+    # 3. Get recommended skills
+    recommended = get_recommended_skills(focuses, skill_mapping)
+
+    # 4. Get installed skills
+    installed = get_installed_skills()
+
+    # 5. Check coverage
+    coverage = check_skill_coverage(recommended, installed)
+
+    # 6. Prioritize
+    available_prioritized = prioritize_skills(
+        coverage["available"], focuses, skill_mapping
+    )
+
+    return {
+        "focuses": focuses,
+        "recommended": recommended,
+        "available": available_prioritized,
+        "missing": coverage["missing"],
+        "coverage": coverage["coverage"],
+        "coverage_percent": coverage["coverage_percent"]
+    }
+```
+
+## Interactive Selection
+
+```python
+def interactive_skill_selection(result: dict) -> list:
+    """Lets the user choose which skills to use."""
+
+    print("\nВыберите скиллы для использования:")
+    print("")
+
+    all_skills = result["available"] + result["missing"]
+
+    for i, skill in enumerate(all_skills, 1):
+        status = "✅" if skill in result["available"] else "❌"
+        recommended = "рекомендован" if skill in result["recommended"][:3] else ""
+        print(f"  [{status}] {i}) {skill} {recommended}")
+
+    print("")
+    selection = input("Введите номера через пробел (Enter для рекомендованных): ")
+
+    if not selection.strip():
+        # Use the top-3 available skills
+        return result["available"][:3]
+
+    selected = []
+    for num in selection.split():
+        try:
+            idx = int(num) - 1
+            if 0 <= idx < len(all_skills):
+                skill = all_skills[idx]
+                if skill in result["available"]:
+                    selected.append(skill)
+                else:
+                    print(f"  ⚠ {skill} не установлен, пропущен")
+        except ValueError:
+            pass
+
+    return selected if selected else result["available"][:3]
+```
+
+## PRD/Spec Enhancement
+
+```python
+def enhance_document_with_skills(document: str, skills: list, doc_type: str) -> str:
+    """Enhances the document using the selected skills."""
+
+    for skill in skills:
+        print(f"[{doc_type}] Применяю скилл: {skill}")
+
+        # Invoke the skill to get recommendations
+        skill_output = invoke_skill(skill, context={
+            "document_type": doc_type,
+            "content_preview": document[:2000]
+        })
+
+        # Append a section with the skill's recommendations
+        if skill_output.get("recommendations"):
+            document += f"\n\n## Рекомендации от {skill}\n\n"
+            document += skill_output["recommendations"]
+
+    return document
+```
+
+## State Persistence
+
+```json
+{
+  "skill_recommendations": {
+    "focuses": ["backend", "security", "python"],
+    "recommended": ["api-design-principles", "architecture-patterns", "..."],
+    "available": ["api-design-principles", "architecture-patterns", "..."],
+    "missing": ["error-handling-patterns", "secrets-management"],
+    "coverage": 0.66,
+    "used_in_prd": ["api-design-principles"],
+    "used_in_spec": ["api-design-principles", "architecture-patterns"],
+    "used_in_plan": []
+  }
+}
+```
+
+## Workflow Integration
+
+Called after `jira-focus-detect` and before `jira-prd`:
+
+```python
+# In jira-workflow
+if current_stage == "interview":
+    # After the interview, detect focuses and skills
+    focus_result = invoke_skill("jira-focus-detect", issue_data=issue_data)
+
+    skill_result = invoke_skill("jira-skill-recommend",
+        issue_data=issue_data,
+        focuses=focus_result["detected_personas"]
+    )
+
+    # Save to state
+    state["focus_detection"] = focus_result
+    state["skill_recommendations"] = skill_result
+
+    # Show to the user
+    if skill_result["coverage"] < 0.5:
+        print(f"[workflow] Низкое покрытие скиллами ({skill_result['coverage_percent']}%)")
+        print("[workflow] Качество документации может быть ниже")
+```
diff --git a/.claude/skills/jira-skill-recommend/references/matching-algorithm.md b/.claude/skills/jira-skill-recommend/references/matching-algorithm.md
new file mode 100644 (file)
index 0000000..d3655e3
--- /dev/null
@@ -0,0 +1,109 @@
+# Matching Algorithm & Coverage Analysis
+
+## Step 1: Determine Task Focuses
+
+```python
+def get_task_focuses(issue_data: dict, provided_focuses: list = None) -> list:
+    """Gets the task's focuses."""
+
+    if provided_focuses:
+        return provided_focuses
+
+    # Invoke jira-focus-detect
+    focus_result = invoke_skill("jira-focus-detect", issue_data=issue_data)
+    return focus_result["detected_personas"]
+```
+
+## Step 2: Discover Installed Skills
+
+```python
+def get_installed_skills() -> list:
+    """Gets the list of installed skills from Claude Code."""
+
+    # Via the Claude Code API or the filesystem
+    skills_dir = os.path.expanduser("~/.claude/plugins")
+    installed = []
+
+    for plugin_dir in os.listdir(skills_dir):
+        plugin_path = os.path.join(skills_dir, plugin_dir)
+        if os.path.isdir(plugin_path):
+            skills_path = os.path.join(plugin_path, "skills")
+            if os.path.isdir(skills_path):
+                for skill_dir in os.listdir(skills_path):
+                    skill_file = os.path.join(skills_path, skill_dir, "SKILL.md")
+                    if os.path.exists(skill_file):
+                        installed.append(skill_dir)
+
+    # Also check built-in skills
+    builtin_skills = [
+        "frontend-design",
+        "api-design-principles",
+        "architecture-patterns",
+        "debugging-strategies",
+        # ... and others from the Skill tool list
+    ]
+
+    return list(set(installed + builtin_skills))
+```
+
+## Step 3: Match Skills to Focuses
+
+```python
+def get_recommended_skills(focuses: list, skill_mapping: dict) -> list:
+    """Determines recommended skills based on focuses."""
+
+    recommended = []
+    for focus in focuses:
+        if focus in skill_mapping:
+            recommended.extend(skill_mapping[focus])
+
+    # Remove duplicates while preserving order
+    seen = set()
+    unique = []
+    for skill in recommended:
+        if skill not in seen:
+            seen.add(skill)
+            unique.append(skill)
+
+    return unique
+```
+
+## Step 4: Verify Coverage
+
+```python
+def check_skill_coverage(recommended: list, installed: list) -> dict:
+    """Checks which recommended skills are installed."""
+
+    installed_set = set(installed)
+
+    available = [s for s in recommended if s in installed_set]
+    missing = [s for s in recommended if s not in installed_set]
+
+    coverage = len(available) / len(recommended) if recommended else 1.0
+
+    return {
+        "available": available,
+        "missing": missing,
+        "coverage": coverage,
+        "coverage_percent": int(coverage * 100)
+    }
+```
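
A minimal, self-contained illustration of the coverage calculation (the skill names are hypothetical, and the logic is inlined rather than calling `check_skill_coverage`):

```python
# Two of the three recommended skills are installed.
recommended = ["api-design-principles", "auth-implementation-patterns", "secrets-management"]
installed = {"api-design-principles", "auth-implementation-patterns"}

available = [s for s in recommended if s in installed]
missing = [s for s in recommended if s not in installed]
coverage_percent = int(len(available) / len(recommended) * 100)
# coverage_percent == 66
```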
+
+## Step 5: Prioritize by Relevance
+
+```python
+def prioritize_skills(skills: list, focuses: list, skill_mapping: dict) -> list:
+    """Сортирует скиллы по релевантности к фокусам."""
+
+    skill_scores = {}
+    for skill in skills:
+        score = 0
+        for focus in focuses:
+            if skill in skill_mapping.get(focus, []):
+                # Больший score для первых фокусов
+                focus_idx = focuses.index(focus)
+                score += 10 - focus_idx
+        skill_scores[skill] = score
+
+    return sorted(skills, key=lambda s: skill_scores.get(s, 0), reverse=True)
+```
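
A quick sanity check of the scoring rule above, with a hypothetical mapping: a skill tied to an earlier focus outranks one tied only to a later focus (logic inlined so the example runs standalone):

```python
focuses = ["backend", "security"]
skill_mapping = {
    "backend": ["api-design-principles"],
    "security": ["secrets-management", "api-design-principles"],
}
skills = ["secrets-management", "api-design-principles"]

# Score each skill: 10 - focus index for every focus that maps to it.
scores = {}
for skill in skills:
    scores[skill] = sum(
        10 - idx
        for idx, focus in enumerate(focuses)
        if skill in skill_mapping.get(focus, [])
    )
# api-design-principles: (10-0) + (10-1) = 19; secrets-management: 10-1 = 9
ranked = sorted(skills, key=lambda s: scores[s], reverse=True)
```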
diff --git a/.claude/skills/jira-skill-recommend/references/skill-catalog.md b/.claude/skills/jira-skill-recommend/references/skill-catalog.md
new file mode 100644 (file)
index 0000000..347476c
--- /dev/null
@@ -0,0 +1,131 @@
+# Skill Catalog: Focus-to-Skill Mapping
+
+## Focus Mapping
+
+```json
+{
+  "frontend": [
+    "frontend-design",
+    "react-modernization",
+    "design-system-patterns",
+    "e2e-testing-patterns",
+    "nextjs-app-router-patterns",
+    "mobile-ios-design",
+    "mobile-android-design"
+  ],
+  "backend": [
+    "api-design-principles",
+    "architecture-patterns",
+    "auth-implementation-patterns",
+    "error-handling-patterns",
+    "microservices-patterns",
+    "sql-optimization-patterns",
+    "database-migration",
+    "async-python-patterns",
+    "dotnet-backend-patterns",
+    "fastapi-templates"
+  ],
+  "devops": [
+    "deployment-pipeline-design",
+    "k8s-manifest-generator",
+    "gitlab-ci-patterns",
+    "cost-optimization",
+    "prometheus-configuration",
+    "grafana-dashboards",
+    "secrets-management",
+    "workflow-orchestration-patterns"
+  ],
+  "security": [
+    "auth-implementation-patterns",
+    "secrets-management",
+    "memory-safety-patterns"
+  ],
+  "qa": [
+    "e2e-testing-patterns",
+    "python-testing-patterns",
+    "debugging-strategies",
+    "llm-evaluation"
+  ],
+  "python": [
+    "async-python-patterns",
+    "python-packaging",
+    "python-performance-optimization",
+    "python-testing-patterns",
+    "uv-package-manager",
+    "fastapi-templates"
+  ],
+  "ai_ml": [
+    "langchain-architecture",
+    "llm-evaluation",
+    "ml-pipeline-workflow",
+    "prompt-engineering-patterns",
+    "rag-implementation",
+    "hybrid-search-implementation",
+    "similarity-search-patterns",
+    "vector-index-tuning"
+  ],
+  "api": [
+    "api-design-principles",
+    "openapi-spec-generation"
+  ],
+  "database": [
+    "postgresql-table-design",
+    "sql-optimization-patterns",
+    "database-migration"
+  ],
+  "git": [
+    "git-advanced-workflows",
+    "monorepo-management"
+  ]
+}
+```
+
+## Skill Descriptions
+
+```python
+SKILL_DESCRIPTIONS = {
+    "api-design-principles": "REST/GraphQL дизайн, версионирование, документация",
+    "architecture-patterns": "Clean Architecture, Hexagonal, DDD",
+    "auth-implementation-patterns": "JWT, OAuth2, сессии, RBAC",
+    "error-handling-patterns": "Exceptions, Result types, graceful degradation",
+    "frontend-design": "UI компоненты, дизайн-система, responsive",
+    "e2e-testing-patterns": "Playwright, Cypress, test automation",
+    "python-testing-patterns": "pytest, fixtures, mocking, TDD",
+    "secrets-management": "Vault, AWS Secrets Manager, encryption",
+    "react-modernization": "React 18+, hooks, server components",
+    "design-system-patterns": "Token-based design, component library",
+    "nextjs-app-router-patterns": "Next.js 13+ app router, RSC",
+    "mobile-ios-design": "SwiftUI, UIKit, iOS design patterns",
+    "mobile-android-design": "Jetpack Compose, Material Design",
+    "microservices-patterns": "Event sourcing, CQRS, saga",
+    "sql-optimization-patterns": "Query optimization, indexing, explain analyze",
+    "database-migration": "Schema versioning, zero-downtime migrations",
+    "async-python-patterns": "asyncio, aiohttp, concurrent patterns",
+    "dotnet-backend-patterns": ".NET 8, minimal API, EF Core",
+    "fastapi-templates": "FastAPI project templates, dependency injection",
+    "deployment-pipeline-design": "CI/CD, blue-green, canary deployments",
+    "k8s-manifest-generator": "Kubernetes manifests, Helm charts",
+    "gitlab-ci-patterns": "GitLab CI/CD pipelines, stages, artifacts",
+    "cost-optimization": "Cloud cost reduction, right-sizing",
+    "prometheus-configuration": "Metrics, alerting, service monitoring",
+    "grafana-dashboards": "Dashboard design, visualization",
+    "workflow-orchestration-patterns": "Temporal, Airflow, orchestration",
+    "memory-safety-patterns": "Rust-style ownership, buffer safety",
+    "debugging-strategies": "Systematic debugging, profiling",
+    "llm-evaluation": "LLM benchmarks, eval frameworks",
+    "python-packaging": "pyproject.toml, wheels, distribution",
+    "python-performance-optimization": "Profiling, Cython, multiprocessing",
+    "uv-package-manager": "uv for fast Python package management",
+    "langchain-architecture": "LangChain agents, chains, tools",
+    "ml-pipeline-workflow": "MLOps, experiment tracking, model serving",
+    "prompt-engineering-patterns": "Prompt templates, few-shot, CoT",
+    "rag-implementation": "Retrieval-Augmented Generation patterns",
+    "hybrid-search-implementation": "Keyword + vector search combination",
+    "similarity-search-patterns": "ANN, HNSW, product quantization",
+    "vector-index-tuning": "Index parameters, recall vs speed",
+    "openapi-spec-generation": "OpenAPI 3.0+ spec generation",
+    "postgresql-table-design": "PostgreSQL schema, partitioning, constraints",
+    "git-advanced-workflows": "Rebase, cherry-pick, bisect strategies",
+    "monorepo-management": "Nx, Turborepo, workspace management"
+}
+```
diff --git a/.claude/skills/jira-spec/references/generation-rules.md b/.claude/skills/jira-spec/references/generation-rules.md
new file mode 100644 (file)
index 0000000..8d45b6a
--- /dev/null
@@ -0,0 +1,104 @@
+# Spec Generation Rules & Jira Sync
+
+Rules for generating the specification and syncing it with Jira.
+
+---
+
+## Generation Rules
+
+1. **Match the codebase** -- reuse the project's existing patterns and conventions
+2. **Concrete examples** -- real code, not pseudocode
+3. **Traceability to the PRD** -- every technical decision maps to a requirement in the PRD
+4. **Debate-ready** -- enough detail for the AI personas to critique
+
+---
+
+## Codebase Analysis
+
+Before generating the specification, explore the project:
+
+```bash
+# Directory structure
+Glob: **/*.py, **/*.ts, **/*.js, **/*.php
+
+# Existing models
+Grep: "class.*Model", "class.*ActiveRecord", "interface.*"
+
+# API endpoints
+Grep: "@app.route", "@router", "app.get", "app.post", "actionIndex", "actionCreate"
+
+# Existing services
+Grep: "class.*Service", "class.*Repository"
+```
+
+---
+
+## Workflow State Update
+
+```json
+{
+  "completed_stages": ["fetch", "interview", "prd", "debate_prd", "spec"],
+  "current_stage": "debate_spec",
+  "artifacts": {
+    "spec": "docs/jira/PROJ-123/spec.md"
+  }
+}
+```
+
+---
+
+## Attaching to Jira
+
+### Attach the spec as an attachment
+
+```python
+attach_artifact(issue_key, f"docs/jira/{issue_key}/spec.md")
+```
+
+### HTTP fallback
+
+```bash
+curl -s -u "${ATLASSIAN_EMAIL}:${ATLASSIAN_TOKEN}" \
+  -X POST \
+  -H "X-Atlassian-Token: no-check" \
+  -F "file=@docs/jira/${ISSUE_KEY}/spec.md" \
+  "https://${ATLASSIAN_HOST}/rest/api/3/issue/${ISSUE_KEY}/attachments"
+```
+
+---
+
+## Jira Comment (in Russian)
+
+```
+jira_add_comment(issue_key, """
+📝 Техническая спецификация создана
+
+Документ: docs/jira/PROJ-123/spec.md
+
+Изменения:
+- API: {N} новых endpoints
+- БД: {N} таблиц/миграций
+- Сервисы: {N} новых/{N} изменённых
+
+Готово к дебатам (adversarial review).
+""")
+```
+
+---
+
+## Output Format
+
+```
+[spec] Спецификация сгенерирована
+       Сохранено: docs/jira/PROJ-123/spec.md
+       Прикреплена к задаче в Jira
+       Ссылка добавлена в Jira
+
+       Changes summary:
+       - API: 2 new endpoints
+       - Database: 1 new table, 1 migration
+       - Services: 1 new, 1 modified
+       - Tests: 8 unit, 4 integration
+
+       -> Переход к этапу: debate (документ: Spec)
+```
diff --git a/.claude/skills/jira-spec/references/spec-template.md b/.claude/skills/jira-spec/references/spec-template.md
new file mode 100644 (file)
index 0000000..2218660
--- /dev/null
@@ -0,0 +1,336 @@
+# Specification Template
+
+The full template for `docs/jira/{issue_key}/spec.md`.
+Substitute the variables in `{curly braces}` from the task context.
+
+---
+
+```markdown
+# Technical Specification: {issue_key}
+
+**PRD:** docs/jira/{issue_key}/prd.md
+**Date:** {current_date}
+**Author:** Claude
+**Status:** Draft (pending debate)
+
+---
+
+## 1. Overview
+
+### 1.1 Summary
+{Technical description in 2-3 sentences}
+
+### 1.2 Goals
+{Technical goals of the implementation}
+
+### 1.3 Background
+{Technical context, relation to the existing architecture}
+
+## 2. Architecture
+
+### 2.1 High-Level Design
+
+\`\`\`mermaid
+graph LR
+    Client -->|request| API
+    API -->|validate| Service
+    Service -->|persist| Database
+\`\`\`
+
+### 2.2 Component Diagram
+{Component diagram in Mermaid}
+
+### 2.3 Data Flow
+1. Client sends request to /api/v1/{endpoint}
+2. API validates input using {schema}
+3. Service processes request
+4. Repository persists data
+5. Response returned to client
+
+## 3. Detailed Design
+
+### 3.1 API Changes
+
+#### New Endpoints
+
+| Method | Endpoint | Description | Request | Response |
+|--------|----------|-------------|---------|----------|
+| POST | /api/v1/{resource} | Create | {schema} | {schema} |
+| GET | /api/v1/{resource}/{id} | Get by ID | - | {schema} |
+
+#### Request Schema
+
+\`\`\`json
+{
+  "field1": "string",
+  "field2": 123,
+  "nested": {
+    "field3": true
+  }
+}
+\`\`\`
+
+#### Response Schema
+
+\`\`\`json
+{
+  "id": "uuid",
+  "field1": "string",
+  "created_at": "datetime"
+}
+\`\`\`
+
+#### Error Responses
+
+| Status | Code | Description |
+|--------|------|-------------|
+| 400 | VALIDATION_ERROR | Invalid input |
+| 404 | NOT_FOUND | Resource not found |
+| 500 | INTERNAL_ERROR | Server error |
+
+### 3.2 Database Changes
+
+#### New Tables
+
+\`\`\`sql
+CREATE TABLE {table_name} (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    {field1} VARCHAR(255) NOT NULL,
+    {field2} INTEGER,
+    created_at TIMESTAMP DEFAULT NOW(),
+    updated_at TIMESTAMP DEFAULT NOW()
+);
+
+CREATE INDEX idx_{table_name}_{field1} ON {table_name}({field1});
+\`\`\`
+
+#### Schema Changes
+
+\`\`\`sql
+ALTER TABLE {existing_table}
+ADD COLUMN {new_field} VARCHAR(100);
+\`\`\`
+
+#### Migration Strategy
+1. Create new tables/columns
+2. Backfill data if needed
+3. Update application code
+4. Remove deprecated columns (if any)
+
+### 3.3 Service Layer
+
+#### New Services
+
+\`\`\`python
+class {ServiceName}Service:
+    """
+    {Description}
+    """
+    def __init__(self, repository: {Repository}):
+        self.repository = repository
+
+    async def create(self, data: {CreateDTO}) -> {Entity}:
+        """Create new {entity}"""
+        pass
+
+    async def get_by_id(self, id: UUID) -> Optional[{Entity}]:
+        """Get {entity} by ID"""
+        pass
+\`\`\`
+
+#### Modified Services
+
+\`\`\`python
+# {existing_service}.py -- add a new method:
+
+async def {new_method}(self, params: {Params}) -> {Result}:
+    """
+    {Description}
+    """
+    pass
+\`\`\`
+
+### 3.4 Models
+
+#### New Models
+
+\`\`\`python
+class {ModelName}(Base):
+    __tablename__ = "{table_name}"
+
+    id = Column(UUID, primary_key=True, default=uuid4)
+    {field1} = Column(String(255), nullable=False)
+    {field2} = Column(Integer)
+
+    # Relationships
+    {relation} = relationship("{RelatedModel}", back_populates="{back_ref}")
+\`\`\`
+
+#### DTOs
+
+\`\`\`python
+class {ModelName}Create(BaseModel):
+    field1: str
+    field2: Optional[int] = None
+
+class {ModelName}Response(BaseModel):
+    id: UUID
+    field1: str
+    created_at: datetime
+
+    class Config:
+        from_attributes = True
+\`\`\`
+
+## 4. Implementation Details
+
+### 4.1 Key Algorithms
+{Description of the key algorithms, if applicable}
+
+### 4.2 External Integrations
+
+| Service | Purpose | Auth | Endpoint |
+|---------|---------|------|----------|
+| {service} | {purpose} | API Key | {url} |
+
+### 4.3 Configuration
+
+\`\`\`python
+# config.py
+{CONFIG_VAR}: str = Field(default="{default}", env="{ENV_VAR}")
+\`\`\`
+
+### 4.4 Feature Flags
+
+| Flag | Default | Description |
+|------|---------|-------------|
+| FEATURE_{NAME} | False | Enable {feature} |
+
+## 5. Testing Strategy
+
+### 5.1 Unit Tests
+
+\`\`\`python
+def test_{function}_success():
+    """Test {function} with valid input"""
+    pass
+
+def test_{function}_invalid_input():
+    """Test {function} with invalid input"""
+    pass
+
+def test_{function}_edge_case():
+    """Test {function} edge case: {description}"""
+    pass
+\`\`\`
+
+### 5.2 Integration Tests
+
+\`\`\`python
+async def test_{endpoint}_create():
+    """Test POST /api/v1/{resource}"""
+    pass
+
+async def test_{endpoint}_get():
+    """Test GET /api/v1/{resource}/{id}"""
+    pass
+\`\`\`
+
+### 5.3 Test Coverage Requirements
+- Unit tests: >80% coverage
+- Integration tests: all API endpoints
+- Edge cases: all identified in interview
+
+## 6. Security Considerations
+
+### 6.1 Authentication
+{Authentication requirements}
+
+### 6.2 Authorization
+{Authorization requirements}
+
+### 6.3 Data Validation
+{Input data validation rules}
+
+### 6.4 Sensitive Data
+{Handling of sensitive data}
+
+## 7. Performance Considerations
+
+### 7.1 Expected Load
+- Requests per second: {rps}
+- Data volume: {volume}
+
+### 7.2 Optimization Strategies
+- {strategy_1}
+- {strategy_2}
+
+### 7.3 Caching
+
+| Data | Strategy | TTL |
+|------|----------|-----|
+| {data} | {strategy} | {ttl} |
+
+## 8. Monitoring & Observability
+
+### 8.1 Metrics
+
+| Metric | Type | Description |
+|--------|------|-------------|
+| {metric_name} | Counter/Gauge/Histogram | {description} |
+
+### 8.2 Logging
+
+\`\`\`python
+logger.info("{event}", extra={"field": value})
+\`\`\`
+
+### 8.3 Alerts
+
+| Condition | Severity | Action |
+|-----------|----------|--------|
+| {condition} | Warning/Critical | {action} |
+
+## 9. Rollout Plan
+
+### 9.1 Deployment Steps
+1. Deploy database migrations
+2. Deploy application with feature flag OFF
+3. Enable feature flag for 10% users
+4. Monitor metrics
+5. Gradual rollout to 100%
+
+### 9.2 Rollback Plan
+1. Disable feature flag
+2. Revert application deployment
+3. Rollback database migrations (if safe)
+
+## 10. Open Questions
+- [ ] {question_1} - TBD
+- [ ] {question_2} - TBD
+
+---
+
+## Appendix
+
+### A. File Changes Summary
+
+| File | Action | Description |
+|------|--------|-------------|
+| {path/to/file.py} | Create | New service |
+| {path/to/existing.py} | Modify | Add method |
+
+### B. Dependencies
+
+| Package | Version | Purpose |
+|---------|---------|---------|
+| {package} | {version} | {purpose} |
+
+---
+
+**Changelog:**
+
+| Version | Date | Author | Changes |
+|---------|------|--------|---------|
+| 0.1 | {date} | Claude | Initial draft |
+```
diff --git a/.claude/skills/jira-workflow/references/execution-instructions.md b/.claude/skills/jira-workflow/references/execution-instructions.md
new file mode 100644 (file)
index 0000000..d18cde7
--- /dev/null
@@ -0,0 +1,110 @@
+# Execution Instructions for Claude
+
+Reference file for jira-workflow skill. Contains detailed Claude instructions, comment/attachment procedures, error handling, and resume flow.
+
+## Step-by-Step Execution
+
+**IMPORTANT:** After EVERY stage, add a comment in Jira!
+
+Execute in sequence:
+
+```
+Skill: jira-fetch
+-> Add a comment (stage: fetch) via jira_add_comment
+
+Skill: jira-interview (unless --no-interview)
+-> Add a comment (stage: interview)
+-> Attach docs/jira/{ISSUE-KEY}/interview.md
+
+Skill: jira-focus-detect
+Skill: jira-skill-recommend
+
+Skill: jira-prd (skipped for the quick workflow)
+-> Add a comment (stage: prd)
+-> Attach docs/jira/{ISSUE-KEY}/prd.md
+
+Skill: jira-debate --document prd --personas
+-> Add a comment (stage: debate)
+-> Attach/update docs/jira/{ISSUE-KEY}/debate-log.md
+
+Skill: jira-spec
+-> Add a comment (stage: spec)
+-> Attach docs/jira/{ISSUE-KEY}/spec.md
+
+Skill: jira-debate --document spec --personas
+-> Add a comment (stage: debate)
+
+Skill: jira-plan
+-> Add a comment (stage: plan)
+-> Attach docs/jira/{ISSUE-KEY}/plan.md
+
+Skill: jira-debate --document plan --personas
+-> Add a comment (stage: debate)
+
+Skill: jira-report
+-> Attach docs/jira/{ISSUE-KEY}/report.md
+
+Skill: jira-implement
+-> Add a comment (stage: complete)
+```
+
+## Jira Comment Methods
+
+**MCP mode:** `jira_add_comment(issue_key, comment="plain text")` -- do NOT pass ADF!
+
+**HTTP mode:** curl with ADF in the body (see the jira-comment skill)
+
+## File Attachment
+
+ALWAYS via HTTP (MCP does not support attachments):
+
+```bash
+curl -s -u "${ATLASSIAN_EMAIL}:${ATLASSIAN_TOKEN}" \
+  -X POST -H "X-Atlassian-Token: no-check" \
+  -F "file=@docs/jira/{ISSUE-KEY}/prd.md" \
+  "https://${ATLASSIAN_HOST}/rest/api/3/issue/{ISSUE-KEY}/attachments"
+```
+
+## Jira Access Mode Detection
+
+1. Try calling the MCP tool `jira_get_issue` with a test request
+2. If MCP is available -> use MCP (mode: mcp)
+3. If MCP is unavailable (error / timeout / missing binary):
+   a. Check for `ATLASSIAN_HOST`, `ATLASSIAN_EMAIL`, `ATLASSIAN_TOKEN`
+   b. If the variables are set -> switch to the HTTP fallback (mode: http)
+   c. Otherwise -> fail with setup instructions
+4. Save the mode in the workflow state: `jira_access_mode: "mcp" | "http"`
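
The detection steps above can be sketched as follows; `mcp_probe` is an assumed stand-in for the test `jira_get_issue` call, not a real API:

```python
import os

def detect_jira_access_mode(mcp_probe) -> str:
    """Return "mcp" or "http"; raise if neither access path is configured."""
    try:
        mcp_probe()  # e.g. a test jira_get_issue call
        return "mcp"
    except Exception:
        pass  # MCP unavailable: error / timeout / missing binary
    required = ("ATLASSIAN_HOST", "ATLASSIAN_EMAIL", "ATLASSIAN_TOKEN")
    if all(os.environ.get(var) for var in required):
        return "http"
    raise RuntimeError(
        "Jira unavailable: MCP failed and ATLASSIAN_* variables are not set"
    )
```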
+
+## Error Handling
+
+### Transient errors (network, rate limit)
+Automatically retry 3 times with exponential backoff.
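
A minimal sketch of such a retry helper; the `attempts` default mirrors the 3-retry policy above, and the names are illustrative:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Run fn, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: re-raise the last error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, ... between tries
```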
+
+### Critical errors
+1. The state is saved to `config/state/{ISSUE-KEY}.json`
+2. The workflow can be resumed with `--resume`
+
+## Resume Example
+
+```bash
+> /jira-workflow --resume PROJ-123
+
+Найдено состояние для PROJ-123:
+  Завершено: fetch -> interview -> prd -> debate(prd) -> spec
+  Ошибка на: debate(spec) (round 3)
+  Причина: OpenRouter rate limit
+
+Продолжить? [Y/n] Y
+
+[debate:spec] Продолжаю с раунда 3...
+...
+```
+
+## Synchronization
+
+Jira sync is invoked automatically via hooks.
+Artifacts are attached to the issue as attachments.
+
+## Output Rules
+
+Show progress at every stage. All Jira comments are written in Russian.
diff --git a/.claude/skills/jira-workflow/references/pipeline-steps.md b/.claude/skills/jira-workflow/references/pipeline-steps.md
new file mode 100644 (file)
index 0000000..2e563a0
--- /dev/null
@@ -0,0 +1,84 @@
+# Pipeline Steps Detail
+
+Reference file for jira-workflow skill. Contains detailed step configurations, debate personas, skill recommendations.
+
+## Step Details
+
+### 1. Fetch (jira-fetch)
+Fetch the issue data from Jira and determine the workflow type.
+Supports MCP and the HTTP fallback.
+
+### 2. Interview (jira-interview)
+Clarify the requirements through a structured interview (adversarial-spec style).
+**Always runs** (unless `--no-interview` is given).
+
+### 3. PRD (jira-prd)
+Generate the Product Requirements Document.
+*Skipped for the `quick` workflow.*
+
+### 4. Debate PRD (jira-debate --document prd)
+The PRD is critiqued by AI models, focusing on requirement completeness, business value, and acceptance criteria.
+**2-5 rounds** until consensus is reached (70% APPROVED).
+*Skipped for the `quick` workflow.*
+
+### 5. Spec (jira-spec)
+Generate the technical specification.
+
+### 6. Debate Spec (jira-debate --document spec)
+The specification is critiqued by 7 AI models via OpenRouter:
+- GPT-5.2 (Lead Technical Critic)
+- DeepSeek v3.2 (Architecture Critic)
+- Grok 4.1 Fast (Edge Case Critic)
+- Gemini 3 Pro (Scalability Critic)
+- Perplexity Sonar Pro (Deep Research Critic)
+- GLM 4.7 (Alternative Perspective Critic)
+- MiMo-V2-Flash (Rapid Validation Critic)
+
+**2-10 rounds** until consensus is reached (70% APPROVED).
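
The consensus check can be sketched as a small helper; the `APPROVED` verdict string and the 0.70 threshold come from the text above:

```python
def has_consensus(votes, threshold=0.70):
    """True when the share of APPROVED verdicts reaches the threshold."""
    if not votes:
        return False
    return sum(v == "APPROVED" for v in votes) / len(votes) >= threshold
```

For example, 5 of 7 critics approving (~71%) clears the bar, while 4 of 7 (~57%) triggers another round.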
+
+### 7. Plan (jira-plan)
+Decompose the work into subtasks; create them in Jira and locally via TaskCreate.
+
+### 8. Debate Plan (jira-debate --document plan)
+The plan is critiqued by AI models, focusing on decomposition, dependencies, and risks.
+**1-3 rounds** until consensus is reached.
+
+### 9. Implement (jira-implement)
+Implement the code according to the plan, with automatic status synchronization.
+
+## Focused Debate Personas
+
+After the interview, personas for focused debates are determined automatically:
+
+- **Frontend architect** -- UI/UX, components, state management
+- **Backend architect** -- API, database, scalability
+- **DevOps engineer** -- CI/CD, infrastructure, monitoring
+- **Security engineer** -- auth, data protection, compliance
+- **QA engineer** -- testing, edge cases, quality
+
+The debates iterate over the personas with focused criteria.
+
+## Skill Recommendation
+
+Before generating the PRD/Spec, the available skills are analyzed and the relevant ones recommended:
+
+```
+[skill-recommend] Обнаруженные фокусы: backend, security
+
+Рекомендуемые скиллы:
+  [installed]     api-design-principles
+  [installed]     auth-implementation-patterns
+  [not installed] secrets-management
+
+Покрытие: 66%
+```
+
+## Workflow Types
+
+| Issue Type | Workflow | Stages |
+|------------|----------|-------|
+| Story | full | All stages (9) |
+| Task | full | All stages (9) |
+| Bug | quick | No PRD or debate(PRD), fewer questions |
+| Epic | decompose | No implement; creates child issues |
+| Sub-task | quick | No PRD or debate(PRD), fewer questions |
diff --git a/.claude/skills/pair-programming/references/commands.md b/.claude/skills/pair-programming/references/commands.md
new file mode 100644 (file)
index 0000000..c173f3d
--- /dev/null
@@ -0,0 +1,223 @@
+# In-Session Command Reference
+
+Complete reference of all commands available during a pair programming session.
+
+---
+
+## Code Commands
+
+```
+/explain [--level basic|detailed|expert]
+  Explain the current code or selection
+
+/suggest [--type refactor|optimize|security|style]
+  Get improvement suggestions
+
+/implement <description>
+  Request implementation (navigator mode)
+
+/refactor [--pattern <pattern>] [--scope function|file|module]
+  Refactor selected code
+
+/optimize [--target speed|memory|both]
+  Optimize code for performance
+
+/document [--format jsdoc|markdown|inline]
+  Add documentation to code
+
+/comment [--verbose]
+  Add inline comments
+
+/pattern <pattern-name> [--example]
+  Apply a design pattern
+```
+
+---
+
+## Testing Commands
+
+```
+/test [--watch] [--coverage] [--only <pattern>]
+  Run test suite
+
+/test-gen [--type unit|integration|e2e]
+  Generate tests for current code
+
+/coverage [--report html|json|terminal]
+  Check test coverage
+
+/mock <target> [--realistic]
+  Generate mock data or functions
+
+/test-watch [--on-save]
+  Enable test watching
+
+/snapshot [--update]
+  Create test snapshots
+```
+
+---
+
+## Review Commands
+
+```
+/review [--scope current|file|changes] [--strict]
+  Perform code review
+
+/security [--deep] [--fix]
+  Security analysis
+
+/perf [--profile] [--suggestions]
+  Performance analysis
+
+/quality [--detailed]
+  Check code quality metrics
+
+/lint [--fix] [--config <config>]
+  Run linters
+
+/complexity [--threshold <value>]
+  Analyze code complexity
+```
+
+---
+
+## Navigation Commands
+
+```
+/goto <file>[:line[:column]]
+  Navigate to file or location
+
+/find <pattern> [--regex] [--case-sensitive]
+  Search in project
+
+/recent [--limit <n>]
+  Show recent files
+
+/bookmark [add|list|goto|remove] [<name>]
+  Manage bookmarks
+
+/history [--limit <n>] [--filter <pattern>]
+  Show command history
+
+/tree [--depth <n>] [--filter <pattern>]
+  Show project structure
+```
+
+---
+
+## Git Commands
+
+```
+/diff [--staged] [--file <file>]
+  Show git diff
+
+/commit [--message <msg>] [--amend]
+  Commit with verification
+
+/branch [create|switch|delete|list] [<name>]
+  Branch operations
+
+/stash [save|pop|list|apply] [<message>]
+  Stash operations
+
+/log [--oneline] [--limit <n>]
+  View git log
+
+/blame [<file>]
+  Show git blame
+```
+
+---
+
+## AI Partner Commands
+
+```
+/agent [switch|info|config] [<agent-name>]
+  Manage AI agent
+
+/teach <preference>
+  Teach the AI your preferences
+
+/feedback [positive|negative] <message>
+  Provide feedback to AI
+
+/personality [professional|friendly|concise|verbose]
+  Adjust AI personality
+
+/expertise [add|remove|list] [<domain>]
+  Set AI expertise focus
+```
+
+---
+
+## Metrics Commands
+
+```
+/metrics [--period today|session|week|all]
+  Show session metrics
+
+/score [--breakdown]
+  Show quality scores
+
+/productivity [--chart]
+  Show productivity metrics
+
+/leaderboard [--personal|team]
+  Show improvement leaderboard
+```
+
+---
+
+## Role & Mode Commands
+
+```
+/switch [--immediate]
+  Switch driver/navigator roles
+
+/mode <type>
+  Change mode (driver|navigator|switch|tdd|review|mentor|debug)
+
+/role
+  Show current role
+
+/handoff
+  Prepare role handoff
+```
+
+---
+
+## Command Shortcuts
+
+| Alias | Full Command |
+|-------|-------------|
+| `/s` | `/suggest` |
+| `/e` | `/explain` |
+| `/t` | `/test` |
+| `/r` | `/review` |
+| `/c` | `/commit` |
+| `/g` | `/goto` |
+| `/f` | `/find` |
+| `/h` | `/help` |
+| `/sw` | `/switch` |
+| `/st` | `/status` |
+
+---
+
+## Command Chaining
+
+Chain commands with `&&`:
+
+```
+/test && /commit && /push
+/lint --fix && /test && /review --strict
+```
+
+---
+
+## Command History Navigation
+
+- Up/Down arrows -- Navigate through command history
+- `Ctrl+R` -- Search command history
+- `!!` -- Repeat last command
+- `!<n>` -- Run command n from history
diff --git a/.claude/skills/pair-programming/references/configuration.md b/.claude/skills/pair-programming/references/configuration.md
new file mode 100644 (file)
index 0000000..815e8dc
--- /dev/null
@@ -0,0 +1,280 @@
+# Configuration Reference
+
+All configuration options for pair programming sessions, including agents, profiles, and environment variables.
+
+---
+
+## Basic Configuration
+
+Create `.claude-flow/pair-config.json`:
+
+```json
+{
+  "pair": {
+    "enabled": true,
+    "defaultMode": "switch",
+    "defaultAgent": "auto",
+    "autoStart": false,
+    "theme": "professional"
+  }
+}
+```
+
+---
+
+## Complete Configuration
+
+```json
+{
+  "pair": {
+    "general": {
+      "enabled": true,
+      "defaultMode": "switch",
+      "defaultAgent": "senior-dev",
+      "language": "javascript",
+      "timezone": "UTC"
+    },
+
+    "modes": {
+      "driver": {
+        "enabled": true,
+        "suggestions": true,
+        "realTimeReview": true,
+        "autoComplete": false
+      },
+      "navigator": {
+        "enabled": true,
+        "codeGeneration": true,
+        "explanations": true,
+        "alternatives": true
+      },
+      "switch": {
+        "enabled": true,
+        "interval": "10m",
+        "warning": "30s",
+        "autoSwitch": true,
+        "pauseOnIdle": true
+      }
+    },
+
+    "verification": {
+      "enabled": true,
+      "threshold": 0.95,
+      "autoRollback": true,
+      "preCommitCheck": true,
+      "continuousMonitoring": true,
+      "blockOnFailure": true
+    },
+
+    "testing": {
+      "enabled": true,
+      "autoRun": true,
+      "framework": "jest",
+      "onSave": true,
+      "coverage": {
+        "enabled": true,
+        "minimum": 80,
+        "enforce": true,
+        "reportFormat": "html"
+      }
+    },
+
+    "review": {
+      "enabled": true,
+      "continuous": true,
+      "preCommit": true,
+      "security": true,
+      "performance": true,
+      "style": true,
+      "complexity": {
+        "maxComplexity": 10,
+        "maxDepth": 4,
+        "maxLines": 100
+      }
+    },
+
+    "git": {
+      "enabled": true,
+      "autoCommit": false,
+      "commitTemplate": "feat: {message}",
+      "signCommits": false,
+      "pushOnEnd": false,
+      "branchProtection": true
+    },
+
+    "session": {
+      "autoSave": true,
+      "saveInterval": "5m",
+      "maxDuration": "4h",
+      "idleTimeout": "15m",
+      "breakReminder": "45m",
+      "metricsInterval": "1m"
+    },
+
+    "ai": {
+      "model": "advanced",
+      "temperature": 0.7,
+      "maxTokens": 4000,
+      "personality": "professional",
+      "expertise": ["backend", "testing", "security"],
+      "learningEnabled": true
+    }
+  }
+}
+```
+
+---
+
+## Built-in Agents
+
+```json
+{
+  "agents": {
+    "senior-dev": {
+      "expertise": ["architecture", "patterns", "optimization"],
+      "style": "thorough",
+      "reviewLevel": "strict"
+    },
+    "tdd-specialist": {
+      "expertise": ["testing", "mocks", "coverage"],
+      "style": "test-first",
+      "reviewLevel": "comprehensive"
+    },
+    "debugger-expert": {
+      "expertise": ["debugging", "profiling", "tracing"],
+      "style": "analytical",
+      "reviewLevel": "focused"
+    },
+    "junior-dev": {
+      "expertise": ["learning", "basics", "documentation"],
+      "style": "questioning",
+      "reviewLevel": "educational"
+    }
+  }
+}
+```
+
+---
+
+## CLI Configuration
+
+```bash
+# Set configuration
+claude-flow pair config set defaultMode switch
+claude-flow pair config set verification.threshold 0.98
+
+# Get configuration
+claude-flow pair config get
+claude-flow pair config get defaultMode
+
+# Export/Import
+claude-flow pair config export > config.json
+claude-flow pair config import config.json
+
+# Reset
+claude-flow pair config reset
+```
+
+---
+
+## Profile Management
+
+Create reusable profiles:
+
+```bash
+# Create profile
+claude-flow pair profile create refactoring \
+  --mode driver \
+  --verify true \
+  --threshold 0.98 \
+  --focus refactor
+
+# Use profile
+claude-flow pair --start --profile refactoring
+
+# List profiles
+claude-flow pair profile list
+```
+
+Profile configuration:
+
+```json
+{
+  "profiles": {
+    "refactoring": {
+      "mode": "driver",
+      "verification": {
+        "enabled": true,
+        "threshold": 0.98
+      },
+      "focus": "refactor"
+    },
+    "debugging": {
+      "mode": "navigator",
+      "agent": "debugger-expert",
+      "trace": true,
+      "verbose": true
+    },
+    "learning": {
+      "mode": "mentor",
+      "pace": "slow",
+      "explanations": "detailed",
+      "examples": true
+    }
+  }
+}
+```
+
+---
+
+## Environment Variables
+
+Override configuration via environment:
+
+```bash
+export CLAUDE_PAIR_MODE=driver
+export CLAUDE_PAIR_VERIFY=true
+export CLAUDE_PAIR_THRESHOLD=0.98
+export CLAUDE_PAIR_AGENT=senior-dev
+export CLAUDE_PAIR_AUTO_TEST=true
+```
+
+---
+
+## Keyboard Shortcuts
+
+Default shortcuts (configurable):
+
+```json
+{
+  "shortcuts": {
+    "switch": "ctrl+shift+s",
+    "suggest": "ctrl+space",
+    "review": "ctrl+r",
+    "test": "ctrl+t"
+  }
+}
+```
+
+---
+
+## Custom Commands
+
+Define in configuration:
+
+```json
+{
+  "customCommands": {
+    "tdd": "/test-gen && /test --watch",
+    "full-review": "/lint --fix && /test && /review --strict",
+    "quick-fix": "/suggest --type fix && /implement && /test"
+  }
+}
+```
+
+Use custom commands:
+
+```
+/custom tdd
+/custom full-review
+```
diff --git a/.claude/skills/pair-programming/references/examples.md b/.claude/skills/pair-programming/references/examples.md
new file mode 100644 (file)
index 0000000..b9fad84
--- /dev/null
@@ -0,0 +1,283 @@
+# Real-World Session Examples
+
+Detailed walkthroughs of pair programming sessions for common development scenarios.
+
+---
+
+## Example 1: Feature Implementation
+
+Implementing user authentication with JWT tokens.
+
+```bash
+claude-flow pair --start \
+  --mode switch \
+  --agent senior-dev \
+  --focus implement \
+  --verify \
+  --test
+```
+
+**Session Flow:**
+
+```
+Starting pair programming for authentication feature...
+
+[DRIVER: You - 10 minutes]
+/explain JWT authentication flow
+> AI explains JWT concepts and best practices
+
+/suggest implementation approach
+> AI suggests using middleware pattern with refresh tokens
+
+# You write the basic auth middleware structure
+
+[SWITCH TO NAVIGATOR]
+
+[NAVIGATOR: AI - 10 minutes]
+/implement JWT token generation with refresh tokens
+> AI generates secure token implementation
+
+/test-gen
+> AI creates comprehensive test suite
+
+[SWITCH TO DRIVER]
+
+[DRIVER: You - 10 minutes]
+# You refine the implementation
+/review --security
+> AI performs security review, suggests improvements
+
+/commit --message "feat: JWT authentication with refresh tokens"
+Truth Score: 0.98 - Committed successfully
+```
+
+---
+
+## Example 2: Bug Fixing
+
+Debugging a memory leak in Node.js.
+
+```bash
+claude-flow pair --start \
+  --mode navigator \
+  --agent debugger-expert \
+  --focus debug \
+  --trace
+```
+
+**Session Flow:**
+
+```
+Starting debugging session...
+
+/status
+> Analyzing application for memory issues...
+
+/perf --profile
+> Memory usage growing: 150MB -> 450MB over 10 minutes
+
+/find "new EventEmitter" --regex
+> Found 3 instances of EventEmitter creation
+
+/inspect eventEmitters --deep
+> Discovered listeners that are never removed
+
+/suggest fix for memory leak
+> AI suggests: "Add removeListener in cleanup functions"
+
+/implement cleanup functions for all event emitters
+> AI generates proper cleanup code
+
+/test
+> Memory stable at 150MB
+
+/commit --message "fix: memory leak in event emitters"
+```
+
+---
+
+## Example 3: TDD Session
+
+Building shopping cart with test-driven development.
+
+```bash
+claude-flow pair --start \
+  --mode tdd \
+  --agent tdd-specialist \
+  --test-first
+```
+
+**Session Flow:**
+
+```
+TDD Session: Shopping Cart Feature
+
+[RED PHASE]
+/test-gen "add item to cart"
+> AI writes failing test:
+  FAIL should add item to cart
+  FAIL should update quantity for existing item
+  FAIL should calculate total price
+
+[GREEN PHASE]
+/implement minimal cart functionality
+> You write just enough code to pass tests
+
+/test
+> Tests passing: 3/3
+
+[REFACTOR PHASE]
+/refactor --pattern repository
+> AI refactors to repository pattern
+
+/test
+> Tests still passing: 3/3
+
+[NEXT CYCLE]
+/test-gen "remove item from cart"
+> AI writes new failing tests...
+```
+
+---
+
+## Example 4: Code Refactoring
+
+Modernizing legacy code.
+
+```bash
+claude-flow pair --start \
+  --mode driver \
+  --focus refactor \
+  --verify \
+  --threshold 0.98
+```
+
+**Session Flow:**
+
+```
+Refactoring Session: Modernizing UserService
+
+/analyze UserService.js
+> AI identifies:
+  - Callback hell (5 levels deep)
+  - No error handling
+  - Tight coupling
+  - No tests
+
+/suggest refactoring plan
+> AI suggests:
+  1. Convert callbacks to async/await
+  2. Add error boundaries
+  3. Extract dependencies
+  4. Add unit tests
+
+/test-gen --before-refactor
+> AI generates tests for current behavior
+
+/refactor callbacks to async/await
+# You refactor with AI guidance
+
+/test
+> All tests passing
+
+/review --compare
+> AI shows before/after comparison
+> Code complexity: 35 -> 12
+> Truth score: 0.99
+
+/commit --message "refactor: modernize UserService with async/await"
+```
+
+---
+
+## Example 5: Performance Optimization
+
+Optimizing slow React application.
+
+```bash
+claude-flow pair --start \
+  --mode switch \
+  --agent performance-expert \
+  --focus optimize \
+  --profile
+```
+
+**Session Flow:**
+
+```
+Performance Optimization Session
+
+/perf --profile
+> React DevTools Profiler Results:
+  - ProductList: 450ms render
+  - CartSummary: 200ms render
+  - Unnecessary re-renders: 15
+
+/suggest optimizations for ProductList
+> AI suggests:
+  1. Add React.memo
+  2. Use useMemo for expensive calculations
+  3. Implement virtualization for long lists
+
+/implement React.memo and useMemo
+# You implement with AI guidance
+
+/perf --profile
+> ProductList: 45ms render (90% improvement!)
+
+/implement virtualization with react-window
+> AI implements virtual scrolling
+
+/perf --profile
+> ProductList: 12ms render (97% improvement!)
+> FPS: 60 stable
+
+/commit --message "perf: optimize ProductList with memoization and virtualization"
+```
+
+---
+
+## Example 6: API Development
+
+Building RESTful API with Express.
+
+```bash
+claude-flow pair --start \
+  --mode navigator \
+  --agent backend-expert \
+  --focus implement \
+  --test
+```
+
+**Session Flow:**
+
+```
+API Development Session
+
+/design REST API for blog platform
+> AI designs endpoints:
+  POST   /api/posts
+  GET    /api/posts
+  GET    /api/posts/:id
+  PUT    /api/posts/:id
+  DELETE /api/posts/:id
+
+/implement CRUD endpoints with validation
+> AI implements with Express + Joi validation
+
+/test-gen --integration
+> AI generates integration tests
+
+/security --api
+> AI adds:
+  - Rate limiting
+  - Input sanitization
+  - JWT authentication
+  - CORS configuration
+
+/document --openapi
+> AI generates OpenAPI documentation
+
+/test --integration
+> All endpoints tested: 15/15
+```
diff --git a/.claude/skills/pair-programming/references/modes.md b/.claude/skills/pair-programming/references/modes.md
new file mode 100644 (file)
index 0000000..a5f2983
--- /dev/null
@@ -0,0 +1,179 @@
+# Pair Programming Modes
+
+Detailed descriptions of all available pair programming modes, their roles, responsibilities, and use cases.
+
+---
+
+## Driver Mode
+
+The user writes code while AI provides guidance.
+
+```bash
+claude-flow pair --start --mode driver
+```
+
+**User Responsibilities:**
+- Write actual code
+- Implement solutions
+- Make immediate decisions
+- Handle syntax and structure
+
+**AI Navigator:**
+- Strategic guidance
+- Spot potential issues
+- Suggest improvements
+- Real-time review
+- Track overall direction
+
+**Best For:**
+- Learning new patterns
+- Implementing familiar features
+- Quick iterations
+- Hands-on debugging
+
+**Commands:**
+```
+/suggest     - Get implementation suggestions
+/review      - Request code review
+/explain     - Ask for explanations
+/optimize    - Request optimization ideas
+/patterns    - Get pattern recommendations
+```
+
+---
+
+## Navigator Mode
+
+AI writes code while the user provides direction.
+
+```bash
+claude-flow pair --start --mode navigator
+```
+
+**User Responsibilities:**
+- Provide high-level direction
+- Review generated code
+- Make architectural decisions
+- Ensure business requirements
+
+**AI Driver:**
+- Write implementation code
+- Handle syntax details
+- Implement user guidance
+- Manage boilerplate
+- Execute refactoring
+
+**Best For:**
+- Rapid prototyping
+- Boilerplate generation
+- Learning from AI patterns
+- Exploring solutions
+
+**Commands:**
+```
+/implement   - Direct implementation
+/refactor    - Request refactoring
+/test        - Generate tests
+/document    - Add documentation
+/alternate   - See alternative approaches
+```
+
+---
+
+## Switch Mode
+
+Automatically alternates roles at intervals.
+
+```bash
+# Default 10-minute intervals
+claude-flow pair --start --mode switch
+
+# 5-minute intervals (rapid)
+claude-flow pair --start --mode switch --interval 5m
+
+# 15-minute intervals (deep focus)
+claude-flow pair --start --mode switch --interval 15m
+```
+
+**Handoff Process:**
+1. 30-second warning before switch
+2. Current driver completes thought
+3. Context summary generated
+4. Roles swap smoothly
+5. New driver continues
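+
+The handoff timing above can be sketched as two scheduled events per turn — a minimal illustration of the documented 30-second warning, not the actual scheduler:
+
+```javascript
+// For a given switch interval, compute when the warning and the
+// actual role swap fire, in seconds from the start of the turn.
+function handoffSchedule(intervalSec, warningSec = 30) {
+  return [
+    { at: intervalSec - warningSec, event: 'warning' },
+    { at: intervalSec, event: 'switch' },
+  ];
+}
+
+// Default 10-minute interval: warning at 570s, switch at 600s
+const events = handoffSchedule(600);
+```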
+
+**Best For:**
+- Balanced collaboration
+- Knowledge sharing
+- Complex features
+- Extended sessions
+
+---
+
+## Specialized Modes
+
+### TDD Mode
+
+Test-Driven Development workflow.
+
+```bash
+claude-flow pair --start \
+  --mode tdd \
+  --test-first \
+  --coverage 100
+```
+
+Workflow: Write failing test -> Implement -> Refactor -> Repeat
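+
+The red -> green -> refactor loop can be expressed as a trivial cycle (illustration only, not claude-flow internals):
+
+```javascript
+const TDD_PHASES = ['red', 'green', 'refactor'];
+
+// Advance to the next phase, wrapping back to 'red' after 'refactor'.
+function nextPhase(phase) {
+  const i = TDD_PHASES.indexOf(phase);
+  if (i === -1) throw new Error(`unknown phase: ${phase}`);
+  return TDD_PHASES[(i + 1) % TDD_PHASES.length];
+}
+// nextPhase('red') -> 'green'; nextPhase('refactor') -> 'red'
+```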
+
+### Review Mode
+
+Continuous code review.
+
+```bash
+claude-flow pair --start \
+  --mode review \
+  --strict \
+  --security
+```
+
+Features: Real-time feedback, security scanning, performance analysis
+
+### Mentor Mode
+
+Learning-focused pair programming.
+
+```bash
+claude-flow pair --start \
+  --mode mentor \
+  --explain-all \
+  --pace slow
+```
+
+Features: Detailed explanations, step-by-step guidance, pattern teaching
+
+### Debug Mode
+
+Problem-solving focused.
+
+```bash
+claude-flow pair --start \
+  --mode debug \
+  --verbose \
+  --trace
+```
+
+Features: Issue identification, root cause analysis, fix suggestions
+
+---
+
+## Mode Selection Guide
+
+| Mode | When to Use |
+|------|-------------|
+| Driver | Learning, controlling implementation |
+| Navigator | Rapid prototyping, generation |
+| Switch | Long sessions, balanced collaboration |
+| TDD | Building with tests |
+| Review | Quality focus |
+| Mentor | Learning priority |
+| Debug | Fixing issues |
diff --git a/.claude/skills/pair-programming/references/session-management.md b/.claude/skills/pair-programming/references/session-management.md
new file mode 100644 (file)
index 0000000..3df0e66
--- /dev/null
@@ -0,0 +1,226 @@
+# Session Management
+
+Session lifecycle, templates, persistence, monitoring, and status reporting.
+
+---
+
+## Session Control
+
+### Starting Sessions
+
+```bash
+# Basic start
+claude-flow pair --start
+
+# Expert refactoring session
+claude-flow pair --start \
+  --agent senior-dev \
+  --focus refactor \
+  --verify \
+  --threshold 0.98
+
+# Debugging session
+claude-flow pair --start \
+  --agent debugger-expert \
+  --focus debug \
+  --review
+
+# Learning session
+claude-flow pair --start \
+  --mode mentor \
+  --pace slow \
+  --examples
+```
+
+### Session Lifecycle
+
+```bash
+# Check status
+claude-flow pair --status
+
+# View history
+claude-flow pair --history
+
+# Pause session
+/pause [--reason <reason>]
+
+# Resume session
+/resume
+
+# End session
+claude-flow pair --end [--save] [--report]
+```
+
+---
+
+## Session Templates
+
+### Quick Start Templates
+
+```bash
+# Refactoring template
+claude-flow pair --template refactor
+# Focus: Code improvement
+# Verification: High (0.98)
+# Testing: After each change
+# Review: Continuous
+
+# Feature template
+claude-flow pair --template feature
+# Focus: Implementation
+# Verification: Standard (0.95)
+# Testing: On completion
+# Review: Pre-commit
+
+# Debug template
+claude-flow pair --template debug
+# Focus: Problem solving
+# Verification: Moderate (0.90)
+# Testing: Regression tests
+# Review: Root cause
+
+# Learning template
+claude-flow pair --template learn
+# Mode: Mentor
+# Pace: Slow
+# Explanations: Detailed
+# Examples: Many
+```
+
+---
+
+## Session Status
+
+```bash
+claude-flow pair --status
+```
+
+Sample output:
+
+```
+Pair Programming Session
+---
+
+Session ID: pair_1755021234567
+Duration: 45 minutes
+Status: Active
+
+Partner: senior-dev
+Current Role: DRIVER (you)
+Mode: Switch (10m intervals)
+Next Switch: in 3 minutes
+
+Metrics:
+  Truth Score: 0.982
+  Lines Changed: 234
+  Files Modified: 5
+  Tests Added: 12
+  Coverage: 87% (up 3%)
+  Commits: 3
+
+Focus: Implementation
+Current File: src/auth/login.js
+```
+
+---
+
+## Session History
+
+```bash
+claude-flow pair --history
+```
+
+Sample output:
+
+```
+Session History
+---
+
+1. 2024-01-15 14:30 - 16:45 (2h 15m)
+   Partner: expert-coder
+   Focus: Refactoring
+   Truth Score: 0.975
+   Changes: +340 -125 lines
+
+2. 2024-01-14 10:00 - 11:30 (1h 30m)
+   Partner: tdd-specialist
+   Focus: Testing
+   Truth Score: 0.991
+   Tests Added: 24
+
+3. 2024-01-13 15:00 - 17:00 (2h)
+   Partner: debugger-expert
+   Focus: Bug Fixing
+   Truth Score: 0.968
+   Issues Fixed: 5
+```
+
+---
+
+## Session Persistence
+
+```bash
+# Save session
+claude-flow pair --save [--name <name>]
+
+# Load session
+claude-flow pair --load <session-id>
+
+# Export session
+claude-flow pair --export <session-id> [--format json|md]
+
+# Generate report
+claude-flow pair --report <session-id>
+```
+
+---
+
+## Background Sessions
+
+```bash
+# Start in background
+claude-flow pair --start --background
+
+# Monitor background session
+claude-flow pair --monitor
+
+# Attach to background session
+claude-flow pair --attach <session-id>
+
+# End background session
+claude-flow pair --end <session-id>
+```
+
+---
+
+## Session Recording
+
+```bash
+# Start with recording
+claude-flow pair --start --record
+
+# Replay session
+claude-flow pair --replay <session-id>
+
+# Session analytics
+claude-flow pair --analytics <session-id>
+```
+
+---
+
+## Integration Options
+
+**With Git:**
+```bash
+claude-flow pair --start --git --auto-commit
+```
+
+**With CI/CD:**
+```bash
+claude-flow pair --start --ci --non-interactive
+```
+
+**With IDE:**
+```bash
+claude-flow pair --start --ide vscode
+```
diff --git a/.claude/skills/pair-programming/references/troubleshooting.md b/.claude/skills/pair-programming/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..c7e7243
--- /dev/null
@@ -0,0 +1,99 @@
+# Troubleshooting & Quality Metrics
+
+Diagnostics for common issues, quality thresholds, and best practices.
+
+---
+
+## Common Issues
+
+### Session Will Not Start
+
+- Check agent availability
+- Verify configuration file syntax
+- Ensure clean workspace
+- Review log files
+
+### Session Disconnected
+
+- Use `--recover` to restore
+- Check network connection
+- Verify background processes
+- Review auto-save files
+
+### Poor Performance
+
+- Reduce verification threshold
+- Disable continuous testing
+- Check system resources
+- Use lighter AI model
+
+### Configuration Issues
+
+- Validate JSON syntax
+- Check file permissions
+- Review priority order (CLI > env > project > user > global)
+- Run `claude-flow pair config validate`
+
+---
+
+## Quality Metrics
+
+### Truth Score Thresholds
+
+```
+Error:     < 0.90
+Warning:   0.90 - 0.95
+Good:      0.95 - 0.98
+Excellent: > 0.98
+```
+
+### Coverage Thresholds
+
+```
+Error:     < 70%
+Warning:   70% - 80%
+Good:      80% - 90%
+Excellent: > 90%
+```
+
+### Complexity Thresholds
+
+```
+Error:     > 15
+Warning:   10 - 15
+Good:      5 - 10
+Excellent: < 5
+```
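+
+The tables above can be folded into simple classifiers. A hedged sketch: the level names and cut-offs are taken from this page, but the functions themselves are not part of claude-flow:
+
+```javascript
+// Classify a truth score into the levels defined above.
+function truthScoreLevel(score) {
+  if (score < 0.90) return 'error';
+  if (score < 0.95) return 'warning';
+  if (score <= 0.98) return 'good';
+  return 'excellent';
+}
+
+// Classify test coverage (percent) the same way.
+function coverageLevel(pct) {
+  if (pct < 70) return 'error';
+  if (pct < 80) return 'warning';
+  if (pct <= 90) return 'good';
+  return 'excellent';
+}
+// truthScoreLevel(0.982) -> 'excellent'; coverageLevel(87) -> 'good'
+```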
+
+---
+
+## Best Practices
+
+### Session Practices
+
+1. **Clear Goals** -- Define session objectives upfront
+2. **Appropriate Mode** -- Choose based on task type
+3. **Enable Verification** -- For critical code paths
+4. **Regular Testing** -- Maintain quality continuously
+5. **Session Notes** -- Document important decisions
+6. **Regular Breaks** -- Take breaks every 45-60 minutes
+
+### Code Practices
+
+1. **Test Early** -- Run tests after each change
+2. **Verify Before Commit** -- Check truth scores
+3. **Review Security** -- Always for sensitive code
+4. **Profile Performance** -- Use `/perf` for optimization
+5. **Save Sessions** -- For complex work
+6. **Learn from AI** -- Ask questions frequently
+
+---
+
+## Related Commands
+
+- `claude-flow pair --help` -- Show help
+- `claude-flow pair config` -- Manage configuration
+- `claude-flow pair config validate` -- Validate configuration
+- `claude-flow pair profile` -- Manage profiles
+- `claude-flow pair templates` -- List templates
+- `claude-flow pair agents` -- List available agents
diff --git a/.claude/skills/performance-analysis/assets/analyze-performance.js b/.claude/skills/performance-analysis/assets/analyze-performance.js
new file mode 100644 (file)
index 0000000..9f0af65
--- /dev/null
@@ -0,0 +1,45 @@
+// scripts/analyze-performance.js
+const { exec } = require('child_process');
+const fs = require('fs');
+
+async function analyzePerformance() {
+  // Run bottleneck detection
+  const bottlenecks = await runCommand(
+    'npx claude-flow bottleneck detect --format json'
+  );
+
+  // Generate performance report
+  const report = await runCommand(
+    'npx claude-flow analysis performance-report --format json'
+  );
+
+  // Analyze results
+  const analysis = {
+    bottlenecks: JSON.parse(bottlenecks),
+    performance: JSON.parse(report),
+    timestamp: new Date().toISOString()
+  };
+
+  // Ensure the output directory exists before writing
+  fs.mkdirSync('analysis', { recursive: true });
+
+  // Save combined analysis
+  fs.writeFileSync(
+    'analysis/combined-report.json',
+    JSON.stringify(analysis, null, 2)
+  );
+
+  // Generate alerts if needed (guard: "critical" may be absent)
+  if ((analysis.bottlenecks.critical || []).length > 0) {
+    console.error('CRITICAL: Performance bottlenecks detected!');
+    process.exit(1);
+  }
+}
+
+function runCommand(cmd) {
+  return new Promise((resolve, reject) => {
+    exec(cmd, (error, stdout, stderr) => {
+      if (error) reject(error);
+      else resolve(stdout);
+    });
+  });
+}
+
+analyzePerformance().catch(console.error);
diff --git a/.claude/skills/performance-analysis/assets/github-workflow.yml b/.claude/skills/performance-analysis/assets/github-workflow.yml
new file mode 100644 (file)
index 0000000..a4363d7
--- /dev/null
@@ -0,0 +1,26 @@
+# .github/workflows/performance.yml
+name: Performance Analysis
+on: [push, pull_request]
+
+jobs:
+  analyze:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Run Performance Analysis
+        run: |
+          npx claude-flow analysis performance-report \
+            --format json \
+            --output performance.json
+      - name: Check Performance Thresholds
+        run: |
+          npx claude-flow bottleneck detect \
+            --threshold 15 \
+            --export bottlenecks.json
+      - name: Upload Reports
+        uses: actions/upload-artifact@v4
+        with:
+          name: performance-reports
+          path: |
+            performance.json
+            bottlenecks.json
diff --git a/.claude/skills/performance-analysis/references/advanced-usage.md b/.claude/skills/performance-analysis/references/advanced-usage.md
new file mode 100644 (file)
index 0000000..96b444f
--- /dev/null
@@ -0,0 +1,70 @@
+# Advanced Usage Reference
+
+Continuous monitoring, CI/CD integration, and custom analysis scripts.
+
+## Continuous Monitoring
+
+```bash
+# Monitor performance in real-time
+npx claude-flow swarm monitor --interval 5
+
+# Generate hourly reports
+while true; do
+  npx claude-flow analysis performance-report \
+    --format json \
+    --output logs/perf-$(date +%Y%m%d-%H%M).json
+  sleep 3600
+done
+```
+
+## CI/CD Integration
+
+Use the provided workflow template at `assets/github-workflow.yml`.
+
+Key steps:
+1. Run `npx claude-flow analysis performance-report` to generate the report.
+2. Run `npx claude-flow bottleneck detect --threshold 15` to check thresholds.
+3. Upload reports as build artifacts.
+
+## Custom Analysis Scripts
+
+Use the provided script template at `assets/analyze-performance.js`.
+
+The script performs:
+1. Bottleneck detection in JSON format
+2. Performance report generation in JSON format
+3. Combined analysis output
+4. Alerting on critical bottlenecks (exits with code 1)
+
+## Best Practices
+
+### Regular Analysis
+
+- Run bottleneck detection after major changes
+- Generate weekly performance reports
+- Monitor trends over time
+- Set up automated alerts
+
+### Threshold Tuning
+
+| Environment | Recommended Threshold |
+|-------------|----------------------|
+| Production | 10-15% |
+| Development | 25-30% |
+| Default | 20% |
+
+Adjust based on specific requirements.
+
+### Report Integration
+
+- Include in documentation
+- Share with team regularly
+- Track improvements over time
+- Use for capacity planning
+
+### Continuous Optimization
+
+- Learn from each analysis
+- Build performance budgets
+- Establish baselines
+- Set improvement goals
diff --git a/.claude/skills/performance-analysis/references/bottleneck-metrics.md b/.claude/skills/performance-analysis/references/bottleneck-metrics.md
new file mode 100644 (file)
index 0000000..f7a4d55
--- /dev/null
@@ -0,0 +1,80 @@
+# Bottleneck Metrics Reference
+
+Detailed breakdown of all metrics analyzed during bottleneck detection.
+
+## Communication Bottlenecks
+
+- Message queue delays
+- Agent response times
+- Coordination overhead
+- Memory access patterns
+- Inter-agent communication latency
+
+## Processing Bottlenecks
+
+- Task completion times
+- Agent utilization rates
+- Parallel execution efficiency
+- Resource contention
+- CPU/memory usage patterns
+
+## Memory Bottlenecks
+
+- Cache hit rates
+- Memory access patterns
+- Storage I/O performance
+- Neural pattern loading times
+- Memory allocation efficiency
+
+## Network Bottlenecks
+
+- API call latency
+- MCP communication delays
+- External service timeouts
+- Concurrent request limits
+- Network throughput issues
+
+## Output Format
+
+```
+🔍 Bottleneck Analysis Report
+━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+📊 Summary
+├── Time Range: Last 1 hour
+├── Agents Analyzed: 6
+├── Tasks Processed: 42
+└── Critical Issues: 2
+
+🚨 Critical Bottlenecks
+1. Agent Communication (35% impact)
+   └── coordinator → coder-1 messages delayed by 2.3s avg
+
+2. Memory Access (28% impact)
+   └── Neural pattern loading taking 1.8s per access
+
+⚠️ Warning Bottlenecks
+1. Task Queue (18% impact)
+   └── 5 tasks waiting > 10s for assignment
+
+💡 Recommendations
+1. Switch to hierarchical topology (est. 40% improvement)
+2. Enable memory caching (est. 25% improvement)
+3. Increase agent concurrency to 8 (est. 20% improvement)
+
+✅ Quick Fixes Available
+Run with --fix to apply:
+- Enable smart caching
+- Optimize message routing
+- Adjust agent priorities
+```
+
+## Performance Impact After Resolution
+
+| Area | Expected Improvement |
+|------|---------------------|
+| Communication | 30-50% faster message delivery |
+| Processing | 20-40% reduced task completion time |
+| Memory | 40-60% fewer cache misses |
+| Network | 25-45% reduced API latency |
+| Overall | 25-45% total performance improvement |
diff --git a/.claude/skills/performance-analysis/references/optimization-fixes.md b/.claude/skills/performance-analysis/references/optimization-fixes.md
new file mode 100644 (file)
index 0000000..a7cf912
--- /dev/null
@@ -0,0 +1,59 @@
+# Optimization & Automatic Fixes Reference
+
+Details on auto-fix capabilities and optimization strategies.
+
+## Automatic Fixes (`--fix`)
+
+When using `--fix`, the following optimizations may be applied.
+
+### 1. Topology Optimization
+
+- Switch to more efficient topology (mesh to hierarchical)
+- Adjust communication patterns
+- Reduce coordination overhead
+- Optimize message routing
+
+### 2. Caching Enhancement
+
+- Enable memory caching
+- Optimize cache strategies
+- Preload common patterns
+- Implement cache warming
+
+### 3. Concurrency Tuning
+
+- Adjust agent counts
+- Optimize parallel execution
+- Balance workload distribution
+- Implement load balancing
+
+### 4. Priority Adjustment
+
+- Reorder task queues
+- Prioritize critical paths
+- Reduce wait times
+- Implement fair scheduling
+
+### 5. Resource Optimization
+
+- Optimize memory usage
+- Reduce I/O operations
+- Batch API calls
+- Implement connection pooling
+
+## Fix Strategy (Best Practices)
+
+1. Always review before applying `--fix`
+2. Test fixes in development first
+3. Apply fixes incrementally
+4. Monitor impact after changes
+
+## Performance Impact
+
+| Area | Expected Improvement |
+|------|---------------------|
+| Communication | 30-50% faster message delivery |
+| Processing | 20-40% reduced task completion time |
+| Memory | 40-60% fewer cache misses |
+| Network | 25-45% reduced API latency |
+| Overall | 25-45% total performance improvement |
diff --git a/.claude/skills/performance-analysis/references/profiling-patterns.md b/.claude/skills/performance-analysis/references/profiling-patterns.md
new file mode 100644 (file)
index 0000000..a428721
--- /dev/null
@@ -0,0 +1,90 @@
+# Performance Profiling Patterns
+
+Detailed reference for real-time profiling and pattern detection.
+
+## Real-time Detection
+
+Automatic analysis during task execution:
+- Execution time vs. complexity
+- Agent utilization rates
+- Resource constraints
+- Operation patterns
+
+## Common Bottleneck Patterns
+
+### Time Bottlenecks
+
+- Tasks taking > 5 minutes
+- Sequential operations that could parallelize
+- Redundant file operations
+- Inefficient algorithm implementations
+
+### Coordination Bottlenecks
+
+- Single agent for complex tasks
+- Unbalanced agent workloads
+- Poor topology selection
+- Excessive synchronization points
+
+### Resource Bottlenecks
+
+- High operation count (> 100)
+- Memory constraints
+- I/O limitations
+- Thread pool saturation
+
+## MCP Integration
+
+### Detect Bottlenecks via MCP
+
+```javascript
+mcp__claude-flow__bottleneck_detect({
+  timeRange: "1h",
+  threshold: 20,
+  autoFix: false
+})
+```
+
+### Get Detailed Task Results
+
+```javascript
+mcp__claude-flow__task_results({
+  taskId: "task-123",
+  format: "detailed"
+})
+```
+
+## MCP Result Format
+
+```json
+{
+  "bottlenecks": [
+    {
+      "type": "coordination",
+      "severity": "high",
+      "description": "Single agent used for complex task",
+      "recommendation": "Spawn specialized agents for parallel work",
+      "impact": "35%",
+      "affectedComponents": ["coordinator", "coder-1"]
+    }
+  ],
+  "improvements": [
+    {
+      "area": "execution_time",
+      "suggestion": "Use parallel task execution",
+      "expectedImprovement": "30-50% time reduction",
+      "implementationSteps": [
+        "Split task into smaller units",
+        "Spawn 3-4 specialized agents",
+        "Use mesh topology for coordination"
+      ]
+    }
+  ],
+  "metrics": {
+    "avgExecutionTime": "142s",
+    "agentUtilization": "67%",
+    "cacheHitRate": "82%",
+    "parallelizationFactor": 1.2
+  }
+}
+```
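+
+Given a result in the shape above, downstream tooling might rank bottlenecks by impact. This is an assumed post-processing step, not part of the MCP API:
+
+```javascript
+// Sort bottlenecks so the highest-impact entry comes first.
+// Impact arrives as a percentage string ("35%"), per the sample above.
+function sortByImpact(result) {
+  return [...result.bottlenecks].sort(
+    (a, b) => parseFloat(b.impact) - parseFloat(a.impact)
+  );
+}
+// sortByImpact({ bottlenecks: [{ impact: '12%' }, { impact: '35%' }] })
+// -> the '35%' entry first
+```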
diff --git a/.claude/skills/performance-analysis/references/report-generation.md b/.claude/skills/performance-analysis/references/report-generation.md
new file mode 100644 (file)
index 0000000..54efc3a
--- /dev/null
@@ -0,0 +1,93 @@
+# Report Generation Reference
+
+Full details on generating performance reports.
+
+## Command Syntax
+
+```bash
+npx claude-flow analysis performance-report [options]
+```
+
+## Options
+
+| Option | Description | Default |
+|--------|-------------|---------|
+| `--format <type>` | Report format: json, html, markdown | markdown |
+| `--include-metrics` | Include detailed metrics and charts | false |
+| `--compare <id>` | Compare with previous swarm | - |
+| `--time-range <range>` | Analysis period: 1h, 24h, 7d, 30d, all | - |
+| `--output <file>` | Output file path | stdout |
+| `--sections <list>` | Comma-separated sections to include | all |
+
+## Report Sections
+
+1. **Executive Summary** -- Overall performance score, key metrics overview, critical findings.
+2. **Swarm Overview** -- Topology configuration, agent distribution, task statistics.
+3. **Performance Metrics** -- Execution times, throughput analysis, resource utilization, latency breakdown.
+4. **Bottleneck Analysis** -- Identified bottlenecks, impact assessment, optimization priorities.
+5. **Comparative Analysis** (when `--compare` used) -- Performance trends, improvement metrics, regression detection.
+6. **Recommendations** -- Prioritized action items, expected improvements, implementation guidance.
+
+## Usage Examples
+
+```bash
+# Generate HTML report with all metrics
+npx claude-flow analysis performance-report --format html --include-metrics
+
+# Compare current swarm with previous
+npx claude-flow analysis performance-report --compare swarm-123 --format markdown
+
+# Custom output with specific sections
+npx claude-flow analysis performance-report \
+  --sections summary,metrics,recommendations \
+  --output reports/perf-analysis.html \
+  --format html
+
+# Weekly performance report
+npx claude-flow analysis performance-report \
+  --time-range 7d \
+  --include-metrics \
+  --format markdown \
+  --output docs/weekly-performance.md
+
+# JSON format for CI/CD integration
+npx claude-flow analysis performance-report \
+  --format json \
+  --output build/performance.json
+```
+
+## Sample Markdown Report
+
+```markdown
+# Performance Analysis Report
+
+## Executive Summary
+- **Overall Score**: 87/100
+- **Analysis Period**: Last 24 hours
+- **Swarms Analyzed**: 3
+- **Critical Issues**: 1
+
+## Key Metrics
+| Metric | Value | Trend | Target |
+|--------|-------|-------|--------|
+| Avg Task Time | 42s | down 12% | 35s |
+| Agent Utilization | 78% | up 5% | 85% |
+| Cache Hit Rate | 91% | stable | 90% |
+| Parallel Efficiency | 2.3x | up 0.4x | 2.5x |
+
+## Bottleneck Analysis
+### Critical
+1. **Agent Communication Delay** (Impact: 35%)
+   - Coordinator to Coder messages delayed by 2.3s avg
+   - **Fix**: Switch to hierarchical topology
+
+### Warnings
+1. **Memory Access Pattern** (Impact: 18%)
+   - Neural pattern loading: 1.8s per access
+   - **Fix**: Enable memory caching
+
+## Recommendations
+1. **High Priority**: Switch to hierarchical topology (40% improvement)
+2. **Medium Priority**: Enable memory caching (25% improvement)
+3. **Low Priority**: Increase agent concurrency to 8 (20% improvement)
+```
diff --git a/.claude/skills/performance-analysis/references/troubleshooting.md b/.claude/skills/performance-analysis/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..83b61c7
--- /dev/null
@@ -0,0 +1,42 @@
+# Troubleshooting Reference
+
+Common performance issues and resolution commands.
+
+## High Memory Usage
+
+```bash
+# Analyze memory bottlenecks
+npx claude-flow bottleneck detect --threshold 10
+
+# Check cache performance
+npx claude-flow cache manage --action stats
+
+# Review memory metrics
+npx claude-flow memory usage
+```
+
+## Slow Task Execution
+
+```bash
+# Identify slow tasks
+npx claude-flow task status --detailed
+
+# Analyze coordination overhead
+npx claude-flow bottleneck detect --time-range 1h
+
+# Check agent utilization
+npx claude-flow agent metrics
+```
+
+## Poor Cache Performance
+
+```bash
+# Analyze cache hit rates
+npx claude-flow analysis performance-report --sections metrics
+
+# Review cache strategy
+npx claude-flow cache manage --action analyze
+
+# Enable cache warming
+npx claude-flow bottleneck detect --fix
+```
diff --git a/.claude/skills/skill-builder/assets/examples-wild.md b/.claude/skills/skill-builder/assets/examples-wild.md
new file mode 100644 (file)
index 0000000..649d9a0
--- /dev/null
@@ -0,0 +1,82 @@
+# Examples from the Wild
+
+Real-world skill examples demonstrating best practices.
+
+---
+
+## Example 1: Simple Documentation Skill
+
+```markdown
+---
+name: "README Generator"
+description: "Generate comprehensive README.md files for GitHub repositories. Use when starting new projects, documenting code, or improving existing READMEs."
+---
+
+# README Generator
+
+## What This Skill Does
+Creates well-structured README.md files with badges, installation, usage, and contribution sections.
+
+## Quick Start
+\```bash
+# Answer a few questions
+./scripts/generate-readme.sh
+
+# README.md created with:
+# - Project title and description
+# - Installation instructions
+# - Usage examples
+# - Contribution guidelines
+\```
+
+## Customization
+Edit sections in `resources/templates/sections/` before generating.
+```
+
+---
+
+## Example 2: Code Generation Skill
+
+```markdown
+---
+name: "React Component Generator"
+description: "Generate React functional components with TypeScript, hooks, tests, and Storybook stories. Use when creating new components, scaffolding UI, or following component architecture patterns."
+---
+
+# React Component Generator
+
+## Prerequisites
+- Node.js 18+
+- React 18+
+- TypeScript 5+
+
+## Quick Start
+\```bash
+./scripts/generate-component.sh MyComponent
+
+# Creates:
+# - src/components/MyComponent/MyComponent.tsx
+# - src/components/MyComponent/MyComponent.test.tsx
+# - src/components/MyComponent/MyComponent.stories.tsx
+# - src/components/MyComponent/index.ts
+\```
+
+## Step-by-Step Guide
+
+### 1. Run Generator
+\```bash
+./scripts/generate-component.sh ComponentName
+\```
+
+### 2. Choose Template
+- Basic: Simple functional component
+- With State: useState hooks
+- With Context: useContext integration
+- With API: Data fetching component
+
+### 3. Customize
+Edit generated files in `src/components/ComponentName/`
+
+## Templates
+See `resources/templates/` for available component templates.
+```
diff --git a/.claude/skills/skill-builder/assets/template-advanced.md b/.claude/skills/skill-builder/assets/template-advanced.md
new file mode 100644 (file)
index 0000000..326e168
--- /dev/null
@@ -0,0 +1,159 @@
+# Template: Advanced Skill (Full-Featured)
+
+Copy this template for complex, multi-file skills with scripts, resources, and documentation.
+
+```markdown
+---
+name: "My Advanced Skill"
+description: "[What the skill does, including key features and integrations]. Use when [trigger 1], [trigger 2], or [trigger 3]. Supports [technology stack]."
+---
+
+# My Advanced Skill
+
+## Overview
+[Brief 2-3 sentence description]
+
+## Prerequisites
+- Technology 1 (version X+)
+- Technology 2 (version Y+)
+- API keys or credentials
+
+## What This Skill Does
+1. **Core Feature**: Description
+2. **Integration**: Description
+3. **Automation**: Description
+
+---
+
+## Quick Start (60 seconds)
+
+### Installation
+```bash
+./scripts/install.sh
+\```
+
+### First Use
+```bash
+./scripts/quickstart.sh
+\```
+
+Expected output:
+\```
+Setup complete
+Configuration validated
+Ready to use
+\```
+
+---
+
+## Configuration
+
+### Basic Configuration
+Edit `config.json`:
+```json
+{
+  "mode": "production",
+  "features": ["feature1", "feature2"]
+}
+\```
+
+### Advanced Configuration
+See [Configuration Guide](docs/CONFIGURATION.md)
+
+---
+
+## Step-by-Step Guide
+
+### 1. Initial Setup
+[Detailed steps]
+
+### 2. Core Workflow
+[Main procedures]
+
+### 3. Integration
+[Integration steps]
+
+---
+
+## Advanced Features
+
+### Feature 1: Custom Templates
+```bash
+./scripts/generate.sh --template custom
+\```
+
+### Feature 2: Batch Processing
+```bash
+./scripts/batch.sh --input data.json
+\```
+
+### Feature 3: CI/CD Integration
+See [CI/CD Guide](docs/CICD.md)
+
+---
+
+## Scripts Reference
+
+| Script | Purpose | Usage |
+|--------|---------|-------|
+| `install.sh` | Install dependencies | `./scripts/install.sh` |
+| `generate.sh` | Generate code | `./scripts/generate.sh [name]` |
+| `validate.sh` | Validate output | `./scripts/validate.sh` |
+| `deploy.sh` | Deploy to environment | `./scripts/deploy.sh [env]` |
+
+---
+
+## Resources
+
+### Templates
+- `resources/templates/basic.template` - Basic template
+- `resources/templates/advanced.template` - Advanced template
+
+### Examples
+- `resources/examples/basic/` - Simple example
+- `resources/examples/advanced/` - Complex example
+- `resources/examples/integration/` - Integration example
+
+### Schemas
+- `resources/schemas/config.schema.json` - Configuration schema
+- `resources/schemas/output.schema.json` - Output validation
+
+---
+
+## Troubleshooting
+
+### Issue: Installation Failed
+**Symptoms**: Error during `install.sh`
+**Cause**: Missing dependencies
+**Solution**:
+```bash
+# Install prerequisites
+npm install -g required-package
+./scripts/install.sh --force
+\```
+
+### Issue: Validation Errors
+**Symptoms**: Validation script fails
+**Solution**: See [Troubleshooting Guide](docs/TROUBLESHOOTING.md)
+
+---
+
+## API Reference
+Complete API documentation: [API_REFERENCE.md](docs/API_REFERENCE.md)
+
+## Related Skills
+- [Related Skill 1](../related-skill-1/)
+- [Related Skill 2](../related-skill-2/)
+
+## Resources
+- [Official Documentation](https://example.com/docs)
+- [GitHub Repository](https://github.com/example/repo)
+- [Community Forum](https://forum.example.com)
+
+---
+
+**Created**: YYYY-MM-DD
+**Category**: Advanced
+**Difficulty**: Intermediate
+**Estimated Time**: 15-30 minutes
+```
diff --git a/.claude/skills/skill-builder/assets/template-basic.md b/.claude/skills/skill-builder/assets/template-basic.md
new file mode 100644 (file)
index 0000000..3f0e401
--- /dev/null
@@ -0,0 +1,35 @@
+# Template: Basic Skill (Minimal)
+
+Copy this template for simple, single-purpose skills.
+
+```markdown
+---
+name: "My Basic Skill"
+description: "[One sentence on what it does]. [One sentence on when to use it]."
+---
+
+# My Basic Skill
+
+## What This Skill Does
+[2-3 sentences describing functionality]
+
+## Quick Start
+```bash
+# Single command to get started
+\```
+
+## Step-by-Step Guide
+
+### Step 1: Setup
+[Instructions]
+
+### Step 2: Usage
+[Instructions]
+
+### Step 3: Verify
+[Instructions]
+
+## Troubleshooting
+- **Issue**: Problem description
+  - **Solution**: Fix description
+```
diff --git a/.claude/skills/skill-builder/assets/template-intermediate.md b/.claude/skills/skill-builder/assets/template-intermediate.md
new file mode 100644 (file)
index 0000000..3b5670c
--- /dev/null
@@ -0,0 +1,56 @@
+# Template: Intermediate Skill (With Scripts)
+
+Copy this template for skills that include scripts and configuration.
+
+```markdown
+---
+name: "My Intermediate Skill"
+description: "[What the skill does, with key features]. Use when [specific triggers occur: scaffolding, generating, or building]."
+---
+
+# My Intermediate Skill
+
+## Prerequisites
+- Requirement 1
+- Requirement 2
+
+## What This Skill Does
+1. Primary function
+2. Secondary function
+3. Integration capability
+
+## Quick Start
+```bash
+./scripts/setup.sh
+./scripts/generate.sh my-project
+\```
+
+## Configuration
+Edit `config.json`:
+```json
+{
+  "option1": "value1",
+  "option2": "value2"
+}
+\```
+
+## Step-by-Step Guide
+
+### Basic Usage
+[Steps for 80% use case]
+
+### Advanced Usage
+[Steps for complex scenarios]
+
+## Available Scripts
+- `scripts/setup.sh` - Initial setup
+- `scripts/generate.sh` - Code generation
+- `scripts/validate.sh` - Validation
+
+## Resources
+- Templates: `resources/templates/`
+- Examples: `resources/examples/`
+
+## Troubleshooting
+[Common issues and solutions]
+```
diff --git a/.claude/skills/skill-builder/references/content-structure.md b/.claude/skills/skill-builder/references/content-structure.md
new file mode 100644 (file)
index 0000000..8628d79
--- /dev/null
@@ -0,0 +1,171 @@
+# SKILL.md Content Structure
+
+## Recommended 4-Level Structure
+
+```markdown
+---
+name: "Your Skill Name"
+description: "What it does and when to use it"
+---
+
+# Your Skill Name
+
+## Level 1: Overview (Always Read First)
+Brief 2-3 sentence description of the skill.
+
+## Prerequisites
+- Requirement 1
+- Requirement 2
+
+## What This Skill Does
+1. Primary function
+2. Secondary function
+3. Key benefit
+
+---
+
+## Level 2: Quick Start (For Fast Onboarding)
+
+### Basic Usage
+```bash
+# Simplest use case
+command --option value
+\```
+
+### Common Scenarios
+1. **Scenario 1**: How to...
+2. **Scenario 2**: How to...
+
+---
+
+## Level 3: Detailed Instructions (For Deep Work)
+
+### Step-by-Step Guide
+
+#### Step 1: Initial Setup
+```bash
+# Commands
+\```
+Expected output:
+\```
+Success message
+\```
+
+#### Step 2: Configuration
+- Configuration option 1
+- Configuration option 2
+
+#### Step 3: Execution
+- Run the main command
+- Verify results
+
+### Advanced Options
+
+#### Option 1: Custom Configuration
+```bash
+# Advanced usage
+\```
+
+#### Option 2: Integration
+```bash
+# Integration steps
+\```
+
+---
+
+## Level 4: Reference (Rarely Needed)
+
+### Troubleshooting
+
+#### Issue: Common Problem
+**Symptoms**: What you see
+**Cause**: Why it happens
+**Solution**: How to fix
+```bash
+# Fix command
+\```
+
+#### Issue: Another Problem
+**Solution**: Steps to resolve
+
+### Complete API Reference
+See [API_REFERENCE.md](docs/API_REFERENCE.md)
+
+### Examples
+See [examples/](resources/examples/)
+
+### Related Skills
+- [Related Skill 1](#)
+- [Related Skill 2](#)
+
+### Resources
+- [External Link 1](https://example.com)
+- [Documentation](https://docs.example.com)
+```
+
+---
+
+## Content Best Practices
+
+### Writing Effective Descriptions
+
+**Front-Load Keywords**:
+```yaml
+# GOOD: Keywords first
+description: "Generate TypeScript interfaces from JSON schema. Use when converting schemas, creating types, or building API clients."
+
+# BAD: Keywords buried
+description: "This skill helps developers who need to work with JSON schemas by providing a way to generate TypeScript interfaces."
+```
+
+**Include Trigger Conditions**:
+```yaml
+# GOOD: Clear "when" clause
+description: "Debug React performance issues using Chrome DevTools. Use when components re-render unnecessarily, investigating slow updates, or optimizing bundle size."
+
+# BAD: No trigger conditions
+description: "Helps with React performance debugging."
+```
+
+**Be Specific**:
+```yaml
+# GOOD: Specific technologies
+description: "Create Express.js REST endpoints with Joi validation, Swagger docs, and Jest tests. Use when building new APIs or adding endpoints."
+
+# BAD: Too generic
+description: "Build API endpoints with proper validation and testing."
+```
+
+### Progressive Disclosure Writing
+
+**Keep Level 1 Brief** (Overview):
+```markdown
+## What This Skill Does
+Creates production-ready React components with TypeScript, hooks, and tests in 3 steps.
+```
+
+**Level 2 for Common Paths** (Quick Start):
+```markdown
+## Quick Start
+```bash
+# Most common use case (80% of users)
+generate-component MyComponent
+\```
+```
+
+**Level 3 for Details** (Step-by-Step):
+```markdown
+## Step-by-Step Guide
+
+### Creating a Basic Component
+1. Run generator
+2. Choose template
+3. Customize options
+[Detailed explanations]
+```
+
+**Level 4 for Edge Cases** (Reference):
+```markdown
+## Advanced Configuration
+For complex scenarios like HOCs, render props, or custom hooks, see [ADVANCED.md](docs/ADVANCED.md).
+```
diff --git a/.claude/skills/skill-builder/references/progressive-disclosure.md b/.claude/skills/skill-builder/references/progressive-disclosure.md
new file mode 100644 (file)
index 0000000..26834c9
--- /dev/null
@@ -0,0 +1,67 @@
+# Progressive Disclosure Architecture
+
+Claude Code uses a **3-level progressive disclosure system** to scale to 100+ skills without context penalty.
+
+---
+
+## Level 1: Metadata (Name + Description)
+
+**Loaded**: At Claude Code startup, always
+**Size**: ~200 chars per skill
+**Purpose**: Enable autonomous skill matching
+**Context**: Loaded into system prompt for ALL skills
+
+```yaml
+---
+name: "API Builder"                   # 11 chars
+description: "Creates REST APIs..."   # ~50 chars
+---
+# Total: ~61 chars per skill
+# 100 skills = ~6KB context (minimal!)
+```
+
+---
+
+## Level 2: SKILL.md Body
+
+**Loaded**: When skill is triggered/matched
+**Size**: ~1-10KB typically
+**Purpose**: Main instructions and procedures
+**Context**: Only loaded for ACTIVE skills
+
+```markdown
+# API Builder
+
+## What This Skill Does
+[Main instructions - loaded only when skill is active]
+
+## Quick Start
+[Basic procedures]
+
+## Step-by-Step Guide
+[Detailed instructions]
+```
+
+---
+
+## Level 3+: Referenced Files
+
+**Loaded**: On-demand as Claude navigates
+**Size**: Variable (KB to MB)
+**Purpose**: Deep reference, examples, schemas
+**Context**: Loaded only when Claude accesses specific files
+
+```markdown
+# In SKILL.md
+See [Advanced Configuration](docs/ADVANCED.md) for complex scenarios.
+See [API Reference](docs/API_REFERENCE.md) for complete documentation.
+Use template: `resources/templates/api-template.js`
+
+# Claude will load these files ONLY if needed
+```
+
+---
+
+## Key Benefit
+
+Install 100+ skills with ~6KB context overhead. Only the active skill's content (1-10KB) enters context. Referenced files load on-demand.
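+
+A rough context budget, using the sizes quoted above (illustrative numbers, not measurements):
+
+```
+100 skills × ~61 chars metadata      ≈ 6 KB    (always loaded)
+1 active SKILL.md body               ≈ 1-10 KB (loaded on trigger)
+referenced files                       0 KB    (until Claude opens them)
+------------------------------------------------------------------
+typical working overhead             ≈ 7-16 KB
+```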
diff --git a/.claude/skills/skill-builder/references/scripts-and-resources.md b/.claude/skills/skill-builder/references/scripts-and-resources.md
new file mode 100644 (file)
index 0000000..8504639
--- /dev/null
@@ -0,0 +1,107 @@
+# Scripts and Resources
+
+## Scripts Directory
+
+**Purpose**: Executable scripts that Claude can run
+**Location**: `scripts/` in skill directory
+**Usage**: Referenced from SKILL.md
+
+### Example Layout
+
+```
+scripts/
+├── setup.sh          # Initialization script
+├── validate.js       # Validation logic
+├── generate.py       # Code generation
+└── deploy.sh         # Deployment script
+```
+
+### Reference from SKILL.md
+
+```markdown
+## Setup
+Run the setup script:
+```bash
+./scripts/setup.sh
+\```
+
+## Validation
+Validate your configuration:
+```bash
+node scripts/validate.js config.json
+\```
+```
+
+---
+
+## Resources Directory
+
+**Purpose**: Templates, examples, schemas, static files
+**Location**: `resources/` in skill directory
+**Usage**: Referenced or copied by scripts
+
+### Example Layout
+
+```
+resources/
+├── templates/
+│   ├── component.tsx.template
+│   ├── test.spec.ts.template
+│   └── story.stories.tsx.template
+├── examples/
+│   ├── basic-example/
+│   ├── advanced-example/
+│   └── integration-example/
+└── schemas/
+    ├── config.schema.json
+    └── output.schema.json
+```
+
+### Reference from SKILL.md
+
+```markdown
+## Templates
+Use the component template:
+```bash
+cp resources/templates/component.tsx.template src/components/MyComponent.tsx
+\```
+
+## Examples
+See working examples in `resources/examples/`:
+- `basic-example/` - Simple component
+- `advanced-example/` - With hooks and context
+```
+
+---
+
+## File References and Navigation
+
+Claude can navigate to referenced files automatically. Use these patterns:
+
+### Markdown Links
+
+```markdown
+See [Advanced Configuration](docs/ADVANCED.md) for complex scenarios.
+See [Troubleshooting Guide](docs/TROUBLESHOOTING.md) if you encounter errors.
+```
+
+### Relative File Paths
+
+```markdown
+Use the template located at `resources/templates/api-template.js`
+See examples in `resources/examples/basic-usage/`
+```
+
+### Inline File Content
+
+```markdown
+## Example Configuration
+See `resources/examples/config.json`:
+```json
+{
+  "option": "value"
+}
+\```
+```
+
+**Best Practice**: Keep SKILL.md lean (~2-5KB). Move lengthy content to separate files and reference them. Claude loads only what is needed.
diff --git a/.claude/skills/skill-builder/references/validation-checklist.md b/.claude/skills/skill-builder/references/validation-checklist.md
new file mode 100644 (file)
index 0000000..7c47336
--- /dev/null
@@ -0,0 +1,45 @@
+# Validation Checklist
+
+Before publishing a skill, verify every item below.
+
+---
+
+## YAML Frontmatter
+
+- [ ] Starts with `---`
+- [ ] Contains `name` field (max 64 chars)
+- [ ] Contains `description` field (max 1024 chars)
+- [ ] Description includes "what" and "when"
+- [ ] Ends with `---`
+- [ ] No YAML syntax errors
+
+## File Structure
+
+- [ ] SKILL.md exists in skill directory
+- [ ] Directory is DIRECTLY in `~/.claude/skills/[skill-name]/` or `.claude/skills/[skill-name]/`
+- [ ] Uses clear, descriptive directory name
+- [ ] **NO nested subdirectories** (Claude Code requires top-level structure)
+
+## Content Quality
+
+- [ ] Level 1 (Overview) is brief and clear
+- [ ] Level 2 (Quick Start) shows common use case
+- [ ] Level 3 (Details) provides step-by-step guide
+- [ ] Level 4 (Reference) links to advanced content
+- [ ] Examples are concrete and runnable
+- [ ] Troubleshooting section addresses common issues
+
+## Progressive Disclosure
+
+- [ ] Core instructions in SKILL.md (~2-5KB)
+- [ ] Advanced content in separate docs/
+- [ ] Large resources in resources/ directory
+- [ ] Clear navigation between levels
+
+## Testing
+
+- [ ] Skill appears in Claude's skill list
+- [ ] Description triggers on relevant queries
+- [ ] Instructions are clear and actionable
+- [ ] Scripts execute successfully (if included)
+- [ ] Examples work as documented
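+
+---
+
+## Quick Check Example
+
+A frontmatter block that satisfies the checklist items above might look like this (the skill name and description are illustrative):
+
+```yaml
+---
+name: "CSV Report Generator"
+description: "Generate CSV summary reports from log files. Use when aggregating logs, producing weekly reports, or exporting metrics."
+---
+```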
diff --git a/.claude/skills/skill-builder/references/yaml-frontmatter.md b/.claude/skills/skill-builder/references/yaml-frontmatter.md
new file mode 100644 (file)
index 0000000..01865da
--- /dev/null
@@ -0,0 +1,77 @@
+# YAML Frontmatter Specification
+
+Every SKILL.md **must** start with YAML frontmatter containing exactly two required fields:
+
+```yaml
+---
+name: "Skill Name"                    # REQUIRED: Max 64 chars
+description: "What this skill does and when Claude should use it."  # REQUIRED: Max 1024 chars; include BOTH what & when
+---
+```
+
+---
+
+## Field Requirements
+
+### `name` (REQUIRED)
+
+- **Type**: String
+- **Max Length**: 64 characters
+- **Format**: Human-friendly display name
+- **Usage**: Shown in skill lists, UI, and loaded into Claude's system prompt
+- **Best Practice**: Use Title Case, be concise and descriptive
+- **Examples**:
+  - "API Documentation Generator"
+  - "React Component Builder"
+  - "Database Schema Designer"
+  - ~~"skill-1"~~ (not descriptive)
+  - ~~"This is a very long skill name that exceeds sixty-four characters"~~ (too long)
+
+### `description` (REQUIRED)
+
+- **Type**: String
+- **Max Length**: 1024 characters
+- **Format**: Plain text or minimal markdown
+- **Content**: MUST include:
+  1. **What** the skill does (functionality)
+  2. **When** Claude should invoke it (trigger conditions)
+- **Usage**: Loaded into Claude's system prompt for autonomous matching
+- **Best Practice**: Front-load key trigger words, be specific about use cases
+- **Examples**:
+  - "Generate OpenAPI 3.0 documentation from Express.js routes. Use when creating API docs, documenting endpoints, or building API specifications."
+  - "Create React functional components with TypeScript, hooks, and tests. Use when scaffolding new components or converting class components."
+  - ~~"A comprehensive guide to API documentation"~~ (no "when" clause)
+  - ~~"Documentation tool"~~ (too vague)
+
+---
+
+## YAML Formatting Rules
+
+```yaml
+---
+# CORRECT: Simple string
+name: "API Builder"
+description: "Creates REST APIs with Express and TypeScript."
+
+# CORRECT: Multi-line description (folded block scalar)
+name: "Full-Stack Generator"
+description: >-
+  Generates full-stack applications with React frontend and Node.js backend.
+  Use when starting new projects or scaffolding applications.
+
+# CORRECT: Special characters quoted
+name: "JSON:API Builder"
+description: "Creates JSON:API compliant endpoints: pagination, filtering, relationships."
+
+# WRONG: Missing quotes with special chars
+name: API: Builder  # YAML parse error! Quote it: name: "API: Builder"
+
+# WRONG: Extra fields (ignored but discouraged)
+name: "My Skill"
+description: "My description"
+version: "1.0.0"       # NOT part of spec
+author: "Me"           # NOT part of spec
+tags: ["dev", "api"]   # NOT part of spec
+---
+```
+
+**Critical**: Only `name` and `description` are used by Claude. Additional fields are ignored.
diff --git a/.claude/skills/sparc-methodology/assets/advanced-features.md b/.claude/skills/sparc-methodology/assets/advanced-features.md
new file mode 100644 (file)
index 0000000..3fb7209
--- /dev/null
@@ -0,0 +1,93 @@
+# Advanced Features
+
+Specialized capabilities for advanced SPARC usage.
+
+---
+
+## Neural Pattern Training
+
+Train patterns from successful workflows for reuse.
+
+```javascript
+mcp__claude-flow__neural_train {
+  pattern_type: "coordination",
+  training_data: "successful_tdd_workflow.json",
+  epochs: 50
+}
+```
+
+---
+
+## Cross-Session Memory
+
+Persist and restore session state across conversations.
+
+```javascript
+// Save session state
+mcp__claude-flow__memory_persist {
+  sessionId: "feature-auth-v1"
+}
+
+// Restore in new session
+mcp__claude-flow__context_restore {
+  snapshotId: "feature-auth-v1"
+}
+```
+
+---
+
+## GitHub Integration
+
+Repository analysis and pull request management.
+
+```javascript
+// Analyze repository
+mcp__claude-flow__github_repo_analyze {
+  repo: "owner/repo",
+  analysis_type: "code_quality"
+}
+
+// Manage pull requests
+mcp__claude-flow__github_pr_manage {
+  repo: "owner/repo",
+  pr_number: 123,
+  action: "review"
+}
+```
+
+---
+
+## Performance Monitoring
+
+Real-time metrics for running swarms and agents.
+
+```javascript
+// Real-time swarm monitoring
+mcp__claude-flow__swarm_monitor {
+  swarmId: "current",
+  interval: 5000
+}
+
+// Bottleneck analysis
+mcp__claude-flow__bottleneck_analyze {
+  component: "api-layer",
+  metrics: ["latency", "throughput", "errors"]
+}
+
+// Token usage tracking
+mcp__claude-flow__token_usage {
+  operation: "feature-development",
+  timeframe: "24h"
+}
+```
+
+---
+
+## Performance Benefits
+
+**Proven Results**:
+- **84.8%** SWE-Bench solve rate
+- **32.3%** token reduction through optimizations
+- **2.8-4.4x** speed improvement with parallel execution
+- **27+** neural models for pattern learning
+- **90%+** test coverage standard
diff --git a/.claude/skills/sparc-methodology/assets/common-workflows.md b/.claude/skills/sparc-methodology/assets/common-workflows.md
new file mode 100644 (file)
index 0000000..61f492b
--- /dev/null
@@ -0,0 +1,115 @@
+# Common Workflows
+
+Pre-built CLI workflows for frequent development tasks.
+
+---
+
+## Workflow 1: Feature Development
+
+```bash
+# Step 1: Research and planning
+npx claude-flow sparc run researcher "authentication patterns"
+
+# Step 2: Architecture design
+npx claude-flow sparc run architect "design auth system"
+
+# Step 3: TDD implementation
+npx claude-flow sparc tdd "user authentication feature"
+
+# Step 4: Code review
+npx claude-flow sparc run reviewer "review auth implementation"
+
+# Step 5: Documentation
+npx claude-flow sparc run documenter "document auth API"
+```
+
+---
+
+## Workflow 2: Bug Investigation
+
+```bash
+# Step 1: Analyze issue
+npx claude-flow sparc run analyzer "investigate bug #456"
+
+# Step 2: Debug systematically
+npx claude-flow sparc run debugger "fix memory leak in service X"
+
+# Step 3: Create tests
+npx claude-flow sparc run tester "regression tests for bug #456"
+
+# Step 4: Review fix
+npx claude-flow sparc run reviewer "validate bug fix"
+```
+
+---
+
+## Workflow 3: Performance Optimization
+
+```bash
+# Step 1: Profile performance
+npx claude-flow sparc run analyzer "profile API response times"
+
+# Step 2: Identify bottlenecks
+npx claude-flow sparc run optimizer "optimize database queries"
+
+# Step 3: Implement improvements
+npx claude-flow sparc run coder "implement caching layer"
+
+# Step 4: Benchmark results
+npx claude-flow sparc run tester "performance benchmarks"
+```
+
+---
+
+## Workflow 4: Complete Pipeline
+
+```bash
+# Execute full development pipeline
+npx claude-flow sparc pipeline "e-commerce checkout feature"
+
+# This automatically runs:
+# 1. researcher - Gather requirements
+# 2. architect - Design system
+# 3. coder - Implement features
+# 4. tdd - Create comprehensive tests
+# 5. reviewer - Code quality review
+# 6. optimizer - Performance tuning
+# 7. documenter - Documentation
+```
+
+---
+
+## Quick Reference: CLI Commands
+
+```bash
+# List modes
+npx claude-flow sparc modes
+
+# Run specific mode
+npx claude-flow sparc run <mode> "task"
+
+# TDD workflow
+npx claude-flow sparc tdd "feature"
+
+# Full pipeline
+npx claude-flow sparc pipeline "task"
+
+# Batch execution
+npx claude-flow sparc batch <modes> "task"
+```
+
+## Quick Reference: MCP Calls
+
+```javascript
+// Initialize swarm
+mcp__claude-flow__swarm_init { topology: "hierarchical" }
+
+// Execute mode
+mcp__claude-flow__sparc_mode { mode: "coder", task_description: "..." }
+
+// Monitor progress
+mcp__claude-flow__swarm_monitor { interval: 5000 }
+
+// Store in memory
+mcp__claude-flow__memory_usage { action: "store", key: "...", value: "..." }
+```
diff --git a/.claude/skills/sparc-methodology/assets/examples.md b/.claude/skills/sparc-methodology/assets/examples.md
new file mode 100644 (file)
index 0000000..d9c7600
--- /dev/null
@@ -0,0 +1,145 @@
+# Integration Examples
+
+Ready-to-use SPARC workflow examples for common development scenarios.
+
+---
+
+## Example 1: Full-Stack Development
+
+```javascript
+[Single Message - Parallel Agent Execution]:
+
+// Initialize swarm
+mcp__claude-flow__swarm_init {
+  topology: "hierarchical",
+  maxAgents: 10
+}
+
+// Architecture phase
+mcp__claude-flow__sparc_mode {
+  mode: "architect",
+  task_description: "design REST API with authentication",
+  options: { memory_enabled: true }
+}
+
+// Research phase
+mcp__claude-flow__sparc_mode {
+  mode: "researcher",
+  task_description: "research authentication best practices"
+}
+
+// Implementation phase
+mcp__claude-flow__sparc_mode {
+  mode: "coder",
+  task_description: "implement Express API with JWT auth",
+  options: { test_driven: true }
+}
+
+// Testing phase
+mcp__claude-flow__sparc_mode {
+  mode: "tdd",
+  task_description: "comprehensive API tests",
+  options: { coverage_target: 90 }
+}
+
+// Review phase
+mcp__claude-flow__sparc_mode {
+  mode: "reviewer",
+  task_description: "security and performance review",
+  options: { security_check: true }
+}
+
+// Batch todos
+TodoWrite {
+  todos: [
+    {content: "Design API schema", status: "completed"},
+    {content: "Research JWT implementation", status: "completed"},
+    {content: "Implement authentication", status: "in_progress"},
+    {content: "Write API tests", status: "pending"},
+    {content: "Security review", status: "pending"},
+    {content: "Performance optimization", status: "pending"},
+    {content: "API documentation", status: "pending"},
+    {content: "Deployment setup", status: "pending"}
+  ]
+}
+```
+
+---
+
+## Example 2: Research-Driven Innovation
+
+```javascript
+// Research phase
+mcp__claude-flow__sparc_mode {
+  mode: "researcher",
+  task_description: "research AI-powered search implementations",
+  options: {
+    depth: "comprehensive",
+    sources: ["academic", "industry"]
+  }
+}
+
+// Innovation phase
+mcp__claude-flow__sparc_mode {
+  mode: "innovator",
+  task_description: "propose novel search algorithm",
+  options: { memory_enabled: true }
+}
+
+// Architecture phase
+mcp__claude-flow__sparc_mode {
+  mode: "architect",
+  task_description: "design scalable search system"
+}
+
+// Implementation phase
+mcp__claude-flow__sparc_mode {
+  mode: "coder",
+  task_description: "implement search algorithm",
+  options: { test_driven: true }
+}
+
+// Documentation phase
+mcp__claude-flow__sparc_mode {
+  mode: "documenter",
+  task_description: "document search system architecture and API"
+}
+```
+
+---
+
+## Example 3: Legacy Code Refactoring
+
+```javascript
+// Analysis phase
+mcp__claude-flow__sparc_mode {
+  mode: "analyzer",
+  task_description: "analyze legacy codebase dependencies"
+}
+
+// Planning phase
+mcp__claude-flow__sparc_mode {
+  mode: "orchestrator",
+  task_description: "plan incremental refactoring strategy"
+}
+
+// Testing phase (create safety net)
+mcp__claude-flow__sparc_mode {
+  mode: "tester",
+  task_description: "create comprehensive test suite for legacy code",
+  options: { coverage_target: 80 }
+}
+
+// Refactoring phase
+mcp__claude-flow__sparc_mode {
+  mode: "coder",
+  task_description: "refactor module X with modern patterns",
+  options: { maintain_tests: true }
+}
+
+// Review phase
+mcp__claude-flow__sparc_mode {
+  mode: "reviewer",
+  task_description: "validate refactoring maintains functionality"
+}
+```
diff --git a/.claude/skills/sparc-methodology/references/activation-methods.md b/.claude/skills/sparc-methodology/references/activation-methods.md
new file mode 100644 (file)
index 0000000..bb4f7ff
--- /dev/null
@@ -0,0 +1,82 @@
+# Activation Methods
+
+Three ways to invoke SPARC modes depending on the environment.
+
+---
+
+## Method 1: MCP Tools (Preferred in Claude Code)
+
+**Best for**: Integrated Claude Code workflows with full orchestration capabilities.
+
+```javascript
+// Basic mode execution
+mcp__claude-flow__sparc_mode {
+  mode: "<mode-name>",
+  task_description: "<task description>",
+  options: {
+    // mode-specific options
+  }
+}
+
+// Initialize swarm for complex tasks
+mcp__claude-flow__swarm_init {
+  topology: "hierarchical",  // or "mesh", "ring", "star"
+  strategy: "auto",           // or "balanced", "specialized", "adaptive"
+  maxAgents: 8
+}
+
+// Spawn specialized agents
+mcp__claude-flow__agent_spawn {
+  type: "<agent-type>",
+  capabilities: ["<capability1>", "<capability2>"]
+}
+
+// Monitor execution
+mcp__claude-flow__swarm_monitor {
+  swarmId: "current",
+  interval: 5000
+}
+```
+
+---
+
+## Method 2: NPX CLI (Fallback)
+
+**Best for**: Terminal usage or when MCP tools are unavailable.
+
+```bash
+# Execute specific mode
+npx claude-flow sparc run <mode> "task description"
+
+# Use alpha features
+npx claude-flow@alpha sparc run <mode> "task description"
+
+# List all available modes
+npx claude-flow sparc modes
+
+# Get help for specific mode
+npx claude-flow sparc help <mode>
+
+# Run with options
+npx claude-flow sparc run <mode> "task" --parallel --monitor
+
+# Execute TDD workflow
+npx claude-flow sparc tdd "feature description"
+
+# Batch execution
+npx claude-flow sparc batch <mode1,mode2,mode3> "task"
+
+# Pipeline execution
+npx claude-flow sparc pipeline "task description"
+```
+
+---
+
+## Method 3: Local Installation
+
+**Best for**: Projects with local claude-flow installation.
+
+```bash
+# If claude-flow is installed locally
+./claude-flow sparc run <mode> "task description"
+```
diff --git a/.claude/skills/sparc-methodology/references/best-practices.md b/.claude/skills/sparc-methodology/references/best-practices.md
new file mode 100644 (file)
index 0000000..bd0a29e
--- /dev/null
@@ -0,0 +1,104 @@
+# Best Practices
+
+Guidelines for effective use of the SPARC methodology.
+
+---
+
+## 1. Memory Integration
+
+Always use Memory for cross-agent coordination.
+
+```javascript
+// Store architectural decisions
+mcp__claude-flow__memory_usage {
+  action: "store",
+  namespace: "architecture",
+  key: "api-design-v1",
+  value: JSON.stringify(apiDesign),
+  ttl: 86400000  // 24 hours
+}
+
+// Retrieve in subsequent agents
+mcp__claude-flow__memory_usage {
+  action: "retrieve",
+  namespace: "architecture",
+  key: "api-design-v1"
+}
+```
+
+---
+
+## 2. Parallel Operations
+
+Batch all related operations in a single message.
+
+```javascript
+// CORRECT: All operations together
+[Single Message]:
+  mcp__claude-flow__agent_spawn { type: "researcher" }
+  mcp__claude-flow__agent_spawn { type: "coder" }
+  mcp__claude-flow__agent_spawn { type: "tester" }
+  TodoWrite { todos: [8-10 todos] }
+
+// WRONG: Multiple messages
+Message 1: mcp__claude-flow__agent_spawn { type: "researcher" }
+Message 2: mcp__claude-flow__agent_spawn { type: "coder" }
+Message 3: TodoWrite { todos: [...] }
+```
+
+---
+
+## 3. Hook Integration
+
+Every SPARC mode should use hooks for lifecycle management.
+
+```bash
+# Before work
+npx claude-flow@alpha hooks pre-task --description "implement auth"
+
+# During work
+npx claude-flow@alpha hooks post-edit --file "auth.js"
+
+# After work
+npx claude-flow@alpha hooks post-task --task-id "task-123"
+```
+
+---
+
+## 4. Test Coverage
+
+Maintain minimum 90% coverage:
+
+- Unit tests for all functions
+- Integration tests for APIs
+- E2E tests for critical flows
+- Edge case coverage
+- Error path testing
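+
+If the project uses Jest, the 90% floor can be enforced in configuration rather than by convention (a sketch; adapt the keys to your test runner):
+
+```json
+{
+  "coverageThreshold": {
+    "global": {
+      "branches": 90,
+      "functions": 90,
+      "lines": 90,
+      "statements": 90
+    }
+  }
+}
+```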
+
+---
+
+## 5. Documentation
+
+Document as you build:
+
+- API documentation (OpenAPI)
+- Architecture decision records (ADR)
+- Code comments for complex logic
+- README with setup instructions
+- Changelog for version tracking
+
+---
+
+## 6. File Organization
+
+Never save to the root folder. Maintain clean structure:
+
+```
+project/
+├── src/           # Source code
+├── tests/         # Test files
+├── docs/          # Documentation
+├── config/        # Configuration
+├── scripts/       # Utility scripts
+└── examples/      # Example code
+```
diff --git a/.claude/skills/sparc-methodology/references/modes.md b/.claude/skills/sparc-methodology/references/modes.md
new file mode 100644 (file)
index 0000000..9dbdd85
--- /dev/null
@@ -0,0 +1,344 @@
+# SPARC Modes Reference
+
+All 17 specialized modes available in the SPARC methodology.
+
+---
+
+## Core Orchestration Modes
+
+### `orchestrator`
+
+Multi-agent task orchestration with TodoWrite/Task/Memory coordination.
+
+**Capabilities**:
+- Task decomposition into manageable units
+- Agent coordination and resource allocation
+- Progress tracking and result synthesis
+- Adaptive strategy selection
+- Cross-agent communication
+
+**Usage**:
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "orchestrator",
+  task_description: "coordinate feature development",
+  options: { parallel: true, monitor: true }
+}
+```
+
+### `swarm-coordinator`
+
+Specialized swarm management for complex multi-agent workflows.
+
+**Capabilities**:
+- Topology optimization (mesh, hierarchical, ring, star)
+- Agent lifecycle management
+- Dynamic scaling based on workload
+- Fault tolerance and recovery
+- Performance monitoring
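+
+**Usage** (same `sparc_mode` shape as the other modes; the option names here are illustrative, not a confirmed API):
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "swarm-coordinator",
+  task_description: "coordinate multi-agent refactoring effort",
+  options: { topology: "mesh", max_agents: 6 }
+}
+```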
+
+### `workflow-manager`
+
+Process automation and workflow orchestration.
+
+**Capabilities**:
+- Workflow definition and execution
+- Event-driven triggers
+- Sequential and parallel pipelines
+- State management
+- Error handling and retry logic
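+
+**Usage** (following the same `sparc_mode` pattern; options are illustrative):
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "workflow-manager",
+  task_description: "automate the release pipeline",
+  options: { triggers: ["on_step_complete"], retry_on_error: true }
+}
+```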
+
+### `batch-executor`
+
+Parallel task execution for high-throughput operations.
+
+**Capabilities**:
+- Concurrent file operations
+- Batch processing optimization
+- Resource pooling
+- Load balancing
+- Progress aggregation
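+
+**Usage** (same `sparc_mode` shape; the concurrency option is illustrative):
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "batch-executor",
+  task_description: "lint and format all source files",
+  options: { concurrency: 8 }
+}
+```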
+
+---
+
+## Development Modes
+
+### `coder`
+
+Autonomous code generation with batch file operations.
+
+**Capabilities**:
+- Feature implementation
+- Code refactoring
+- Bug fixes and patches
+- API development
+- Algorithm implementation
+
+**Quality Standards**:
+- ES2022+ standards
+- TypeScript type safety
+- Comprehensive error handling
+- Performance optimization
+- Security best practices
+
+**Usage**:
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "coder",
+  task_description: "implement user authentication with JWT",
+  options: {
+    test_driven: true,
+    parallel_edits: true,
+    typescript: true
+  }
+}
+```
+
+### `architect`
+
+System design with Memory-based coordination.
+
+**Capabilities**:
+- Microservices architecture
+- Event-driven design
+- Domain-driven design (DDD)
+- Hexagonal architecture
+- CQRS and Event Sourcing
+
+**Memory Integration**:
+- Store architectural decisions
+- Share component specifications
+- Maintain design consistency
+- Track architectural evolution
+
+**Design Patterns**:
+- Layered architecture
+- Microservices patterns
+- Event-driven patterns
+- Domain modeling
+- Infrastructure as Code
+
+**Usage**:
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "architect",
+  task_description: "design scalable e-commerce platform",
+  options: {
+    detailed: true,
+    memory_enabled: true,
+    patterns: ["microservices", "event-driven"]
+  }
+}
+```
+
+### `tdd`
+
+Test-driven development with comprehensive testing.
+
+**Capabilities**:
+- Test-first development
+- Red-green-refactor cycle
+- Test suite design
+- Coverage optimization (target: 90%+)
+- Continuous testing
+
+**TDD Workflow**:
+1. Write failing test (RED)
+2. Implement minimum code
+3. Make test pass (GREEN)
+4. Refactor for quality (REFACTOR)
+5. Repeat cycle
+
+**Testing Strategies**:
+- Unit testing (Jest, Mocha, Vitest)
+- Integration testing
+- End-to-end testing (Playwright, Cypress)
+- Performance testing
+- Security testing
+
+**Usage**:
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "tdd",
+  task_description: "shopping cart feature with payment integration",
+  options: {
+    coverage_target: 90,
+    test_framework: "jest",
+    e2e_framework: "playwright"
+  }
+}
+```
+
+### `reviewer`
+
+Code review using batch file analysis.
+
+**Capabilities**:
+- Code quality assessment
+- Security vulnerability detection
+- Performance analysis
+- Best practices validation
+- Documentation review
+
+**Review Criteria**:
+- Code correctness and logic
+- Design pattern adherence
+- Comprehensive error handling
+- Test coverage adequacy
+- Maintainability and readability
+- Security vulnerabilities
+- Performance bottlenecks
+
+**Batch Analysis**:
+- Parallel file review
+- Pattern detection
+- Dependency checking
+- Consistency validation
+- Automated reporting
+
+**Usage**:
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "reviewer",
+  task_description: "review authentication module PR #123",
+  options: {
+    security_check: true,
+    performance_check: true,
+    test_coverage_check: true
+  }
+}
+```
+
+---
+
+## Analysis and Research Modes
+
+### `researcher`
+
+Deep research with parallel WebSearch/WebFetch and Memory coordination.
+
+**Capabilities**:
+- Comprehensive information gathering
+- Source credibility evaluation
+- Trend analysis and forecasting
+- Competitive research
+- Technology assessment
+
+**Research Methods**:
+- Parallel web searches
+- Academic paper analysis
+- Industry report synthesis
+- Expert opinion gathering
+- Statistical data compilation
+
+**Memory Integration**:
+- Store research findings with citations
+- Build knowledge graphs
+- Track information sources
+- Cross-reference insights
+- Maintain research history
+
+**Usage**:
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "researcher",
+  task_description: "research microservices best practices 2024",
+  options: {
+    depth: "comprehensive",
+    sources: ["academic", "industry", "news"],
+    citations: true
+  }
+}
+```
+
+### `analyzer`
+
+Code and data analysis with pattern recognition.
+
+**Capabilities**:
+- Static code analysis
+- Dependency analysis
+- Performance profiling
+- Security scanning
+- Data pattern recognition
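+
+**Usage** (same `sparc_mode` shape as the other modes; option names are illustrative):
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "analyzer",
+  task_description: "analyze src/ for circular dependencies and hotspots",
+  options: { static_analysis: true, security_scan: true }
+}
+```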
+
+### `optimizer`
+
+Performance optimization and bottleneck resolution.
+
+**Capabilities**:
+- Algorithm optimization
+- Database query tuning
+- Caching strategy design
+- Bundle size reduction
+- Memory leak detection
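+
+**Usage** (same `sparc_mode` shape; the option is illustrative):
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "optimizer",
+  task_description: "reduce checkout page load time",
+  options: { profile_first: true }
+}
+```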
+
+---
+
+## Creative and Support Modes
+
+### `designer`
+
+UI/UX design with accessibility focus.
+
+**Capabilities**:
+- Interface design
+- User experience optimization
+- Accessibility compliance (WCAG 2.1)
+- Design system creation
+- Responsive layout design
+
+### `innovator`
+
+Creative problem-solving and novel solutions.
+
+**Capabilities**:
+- Brainstorming and ideation
+- Alternative approach generation
+- Technology evaluation
+- Proof of concept development
+- Innovation feasibility analysis
+
+### `documenter`
+
+Comprehensive documentation generation.
+
+**Capabilities**:
+- API documentation (OpenAPI/Swagger)
+- Architecture diagrams
+- User guides and tutorials
+- Code comments and JSDoc
+- README and changelog maintenance
+
+### `debugger`
+
+Systematic debugging and issue resolution.
+
+**Capabilities**:
+- Bug reproduction
+- Root cause analysis
+- Fix implementation
+- Regression prevention
+- Debug logging optimization
+
+### `tester`
+
+Comprehensive testing beyond TDD.
+
+**Capabilities**:
+- Test suite expansion
+- Edge case identification
+- Performance testing
+- Load testing
+- Chaos engineering
+
+### `memory-manager`
+
+Knowledge management and context preservation.
+
+**Capabilities**:
+- Cross-session memory persistence
+- Knowledge graph construction
+- Context restoration
+- Learning pattern extraction
+- Decision tracking
diff --git a/.claude/skills/sparc-methodology/references/orchestration-patterns.md b/.claude/skills/sparc-methodology/references/orchestration-patterns.md
new file mode 100644 (file)
index 0000000..d20f90a
--- /dev/null
@@ -0,0 +1,108 @@
+# Orchestration Patterns
+
+Five coordination patterns for multi-agent SPARC workflows.
+
+---
+
+## Pattern 1: Hierarchical Coordination
+
+**Best for**: Complex projects with clear delegation hierarchy.
+
+```javascript
+// Initialize hierarchical swarm
+mcp__claude-flow__swarm_init {
+  topology: "hierarchical",
+  maxAgents: 12
+}
+
+// Spawn coordinator
+mcp__claude-flow__agent_spawn {
+  type: "coordinator",
+  capabilities: ["planning", "delegation", "monitoring"]
+}
+
+// Spawn specialized workers
+mcp__claude-flow__agent_spawn { type: "architect" }
+mcp__claude-flow__agent_spawn { type: "coder" }
+mcp__claude-flow__agent_spawn { type: "tester" }
+mcp__claude-flow__agent_spawn { type: "reviewer" }
+```
+
+---
+
+## Pattern 2: Mesh Coordination
+
+**Best for**: Collaborative tasks requiring peer-to-peer communication.
+
+```javascript
+mcp__claude-flow__swarm_init {
+  topology: "mesh",
+  strategy: "balanced",
+  maxAgents: 6
+}
+```
+
+---
+
+## Pattern 3: Sequential Pipeline
+
+**Best for**: Ordered workflow execution (spec -> design -> code -> test -> review).
+
+```javascript
+mcp__claude-flow__workflow_create {
+  name: "development-pipeline",
+  steps: [
+    { mode: "researcher", task: "gather requirements" },
+    { mode: "architect", task: "design system" },
+    { mode: "coder", task: "implement features" },
+    { mode: "tdd", task: "create tests" },
+    { mode: "reviewer", task: "review code" }
+  ],
+  triggers: ["on_step_complete"]
+}
+```
+
+---
+
+## Pattern 4: Parallel Execution
+
+**Best for**: Independent tasks that can run concurrently.
+
+```javascript
+mcp__claude-flow__task_orchestrate {
+  task: "build full-stack application",
+  strategy: "parallel",
+  dependencies: {
+    backend: [],
+    frontend: [],
+    database: [],
+    tests: ["backend", "frontend"]
+  }
+}
+```
+
+---
+
+## Pattern 5: Adaptive Strategy
+
+**Best for**: Dynamic workloads with changing requirements.
+
+```javascript
+mcp__claude-flow__swarm_init {
+  topology: "hierarchical",
+  strategy: "adaptive",  // Auto-adjusts based on workload
+  maxAgents: 20
+}
+```
+
+---
+
+## Choosing a Pattern
+
+| Scenario | Recommended Pattern |
+|----------|-------------------|
+| Large feature with sub-teams | Hierarchical |
+| Peer code review / brainstorm | Mesh |
+| Waterfall-style pipeline | Sequential Pipeline |
+| Independent micro-tasks | Parallel Execution |
+| Uncertain scope / evolving reqs | Adaptive |
diff --git a/.claude/skills/sparc-methodology/references/tdd-workflows.md b/.claude/skills/sparc-methodology/references/tdd-workflows.md
new file mode 100644 (file)
index 0000000..c0cd4da
--- /dev/null
@@ -0,0 +1,103 @@
+# TDD Workflows
+
+Test-driven development workflows within the SPARC methodology.
+
+---
+
+## Complete TDD Workflow
+
+A full six-step pipeline from research through optimization.
+
+```javascript
+// Step 1: Initialize TDD swarm
+mcp__claude-flow__swarm_init {
+  topology: "hierarchical",
+  maxAgents: 8
+}
+
+// Step 2: Research and planning
+mcp__claude-flow__sparc_mode {
+  mode: "researcher",
+  task_description: "research testing best practices for feature X"
+}
+
+// Step 3: Architecture design
+mcp__claude-flow__sparc_mode {
+  mode: "architect",
+  task_description: "design testable architecture for feature X"
+}
+
+// Step 4: TDD implementation
+mcp__claude-flow__sparc_mode {
+  mode: "tdd",
+  task_description: "implement feature X with 90% coverage",
+  options: {
+    coverage_target: 90,
+    test_framework: "jest",
+    parallel_tests: true
+  }
+}
+
+// Step 5: Code review
+mcp__claude-flow__sparc_mode {
+  mode: "reviewer",
+  task_description: "review feature X implementation",
+  options: {
+    test_coverage_check: true,
+    security_check: true
+  }
+}
+
+// Step 6: Optimization
+mcp__claude-flow__sparc_mode {
+  mode: "optimizer",
+  task_description: "optimize feature X performance"
+}
+```
+
+---
+
+## Red-Green-Refactor Cycle
+
+The atomic unit of TDD: one iteration of the cycle.
+
+### RED: Write failing test
+
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "tester",
+  task_description: "create failing test for shopping cart add item",
+  options: { expect_failure: true }
+}
+```
+
+### GREEN: Minimal implementation
+
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "coder",
+  task_description: "implement minimal code to pass test",
+  options: { minimal: true }
+}
+```
+
+### REFACTOR: Improve code quality
+
+```javascript
+mcp__claude-flow__sparc_mode {
+  mode: "coder",
+  task_description: "refactor shopping cart implementation",
+  options: { maintain_tests: true }
+}
+```
+
+---
+
+## Coverage Targets
+
+| Test type | Target | Notes |
+|-----------|--------|-------|
+| Unit tests | 90%+ | All functions and branches |
+| Integration tests | 80%+ | API endpoints, DB interactions |
+| E2E tests | Critical paths | Login, checkout, payment |
+| Edge cases | All identified | Boundary values, error paths |
diff --git a/.claude/skills/stream-chain/references/advanced-use-cases.md b/.claude/skills/stream-chain/references/advanced-use-cases.md
new file mode 100644 (file)
index 0000000..3b51a95
--- /dev/null
@@ -0,0 +1,56 @@
+# Advanced Use Cases
+
+Sophisticated workflow patterns leveraging stream-chain for complex scenarios.
+
+---
+
+## Multi-Agent Coordination
+
+Chain different agent types for complex workflows:
+
+```bash
+claude-flow stream-chain run \
+  "Research best practices for API design" \
+  "Design REST API with discovered patterns" \
+  "Implement API endpoints with validation" \
+  "Generate OpenAPI specification" \
+  "Create integration tests" \
+  "Write deployment documentation"
+```
+
+## Data Transformation Pipeline
+
+Process and transform data through multiple stages:
+
+```bash
+claude-flow stream-chain run \
+  "Extract user data from CSV files" \
+  "Normalize and validate data format" \
+  "Enrich data with external API calls" \
+  "Generate analytics report" \
+  "Create visualization code"
+```
+
+## Code Migration Workflow
+
+Systematic code migration with validation:
+
+```bash
+claude-flow stream-chain run \
+  "Analyze legacy codebase dependencies" \
+  "Create migration plan with risk assessment" \
+  "Generate modernized code for high-priority modules" \
+  "Create migration tests" \
+  "Document migration steps and rollback procedures"
+```
+
+## Quality Assurance Chain
+
+Comprehensive code quality workflow by chaining multiple pipelines:
+
+```bash
+claude-flow stream-chain pipeline analysis
+claude-flow stream-chain pipeline refactor
+claude-flow stream-chain pipeline test
+claude-flow stream-chain pipeline optimize
+```
diff --git a/.claude/skills/stream-chain/references/best-practices.md b/.claude/skills/stream-chain/references/best-practices.md
new file mode 100644 (file)
index 0000000..469356f
--- /dev/null
@@ -0,0 +1,58 @@
+# Best Practices
+
+Guidelines for writing effective stream-chain prompts and designing robust pipelines.
+
+---
+
+## 1. Clear and Specific Prompts
+
+**Good:**
+```bash
+"Analyze authentication.js for SQL injection vulnerabilities"
+```
+
+**Avoid:**
+```bash
+"Check security"
+```
+
+## 2. Logical Progression
+
+Order prompts to build on previous outputs:
+
+1. "Identify the problem"
+2. "Analyze root causes"
+3. "Design solution"
+4. "Implement solution"
+5. "Verify implementation"
+
+## 3. Appropriate Timeouts
+
+| Task Type | Recommended Timeout |
+|-----------|-------------------|
+| Simple tasks | 30 seconds (default) |
+| Analysis tasks | 45-60 seconds |
+| Implementation tasks | 60-90 seconds |
+| Complex workflows | 90-120 seconds |
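+
+The timeout is set per step with the `--timeout` flag (in seconds), as in this sketch of a longer-running chain:
+
+```bash
+claude-flow stream-chain run \
+  "Analyze legacy module dependencies" \
+  "Refactor the highest-risk module" \
+  --timeout 90
+```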
+
+## 4. Verification Steps
+
+Always include validation in chains:
+```bash
+claude-flow stream-chain run \
+  "Implement feature X" \
+  "Write tests for feature X" \
+  "Verify tests pass and cover edge cases"
+```
+
+## 5. Iterative Refinement
+
+Use chains for iterative improvement:
+```bash
+claude-flow stream-chain run \
+  "Generate initial implementation" \
+  "Review and identify issues" \
+  "Refine based on issues found" \
+  "Final quality check"
+```
diff --git a/.claude/skills/stream-chain/references/custom-chain-examples.md b/.claude/skills/stream-chain/references/custom-chain-examples.md
new file mode 100644 (file)
index 0000000..d0cf983
--- /dev/null
@@ -0,0 +1,83 @@
+# Custom Chain Examples
+
+Detailed examples for `claude-flow stream-chain run` with various workflow patterns.
+
+---
+
+## Basic Development Chain
+
+```bash
+claude-flow stream-chain run \
+  "Write a user authentication function" \
+  "Add input validation and error handling" \
+  "Create unit tests with edge cases"
+```
+
+## Security Audit Workflow
+
+```bash
+claude-flow stream-chain run \
+  "Analyze authentication system for vulnerabilities" \
+  "Identify and categorize security issues by severity" \
+  "Propose fixes with implementation priority" \
+  "Generate security test cases" \
+  --timeout 45 \
+  --verbose
+```
+
+## Code Refactoring Chain
+
+```bash
+claude-flow stream-chain run \
+  "Identify code smells in src/ directory" \
+  "Create refactoring plan with specific changes" \
+  "Apply refactoring to top 3 priority items" \
+  "Verify refactored code maintains behavior" \
+  --debug
+```
+
+## Data Processing Pipeline
+
+```bash
+claude-flow stream-chain run \
+  "Extract data from API responses" \
+  "Transform data into normalized format" \
+  "Validate data against schema" \
+  "Generate data quality report"
+```
+
+## Complete Development Workflow
+
+```bash
+claude-flow stream-chain run \
+  "Analyze requirements for user profile feature" \
+  "Design database schema and API endpoints" \
+  "Implement backend with validation" \
+  "Create frontend components" \
+  "Write comprehensive tests" \
+  "Generate API documentation" \
+  --timeout 60 \
+  --verbose
+```
+
+## Code Review Pipeline
+
+```bash
+claude-flow stream-chain run \
+  "Analyze recent git changes" \
+  "Identify code quality issues" \
+  "Check for security vulnerabilities" \
+  "Verify test coverage" \
+  "Generate code review report with recommendations"
+```
+
+## Migration Assistant
+
+```bash
+claude-flow stream-chain run \
+  "Analyze current Vue 2 codebase" \
+  "Identify Vue 3 breaking changes" \
+  "Create migration checklist" \
+  "Generate migration scripts" \
+  "Provide updated code examples"
+```
diff --git a/.claude/skills/stream-chain/references/custom-pipeline-config.md b/.claude/skills/stream-chain/references/custom-pipeline-config.md
new file mode 100644 (file)
index 0000000..aa8fd41
--- /dev/null
@@ -0,0 +1,52 @@
+# Custom Pipeline Definitions
+
+Define reusable pipelines in `.claude-flow/config.json` for project-specific workflows.
+
+---
+
+## Configuration Format
+
+```json
+{
+  "streamChain": {
+    "pipelines": {
+      "security": {
+        "name": "Security Audit Pipeline",
+        "description": "Comprehensive security analysis",
+        "prompts": [
+          "Scan codebase for security vulnerabilities",
+          "Categorize issues by severity (critical/high/medium/low)",
+          "Generate fixes with priority and implementation steps",
+          "Create security test suite"
+        ],
+        "timeout": 45
+      },
+      "documentation": {
+        "name": "Documentation Generation Pipeline",
+        "prompts": [
+          "Analyze code structure and identify undocumented areas",
+          "Generate API documentation with examples",
+          "Create usage guides and tutorials",
+          "Build architecture diagrams and flow charts"
+        ]
+      }
+    }
+  }
+}
+```
+
+## Executing Custom Pipelines
+
+```bash
+claude-flow stream-chain pipeline security
+claude-flow stream-chain pipeline documentation
+```
+
+## Configuration Fields
+
+| Field | Type | Required | Description |
+|-------|------|----------|-------------|
+| `name` | string | Yes | Human-readable pipeline name |
+| `description` | string | No | Pipeline purpose |
+| `prompts` | string[] | Yes | Ordered list of step prompts |
+| `timeout` | number | No | Per-step timeout in seconds (default: 30) |
diff --git a/.claude/skills/stream-chain/references/integration.md b/.claude/skills/stream-chain/references/integration.md
new file mode 100644 (file)
index 0000000..b958942
--- /dev/null
@@ -0,0 +1,46 @@
+# Integration with Claude Flow
+
+How stream-chain integrates with other Claude Flow subsystems.
+
+---
+
+## Swarm Coordination
+
+Combine stream chains with swarm agents for parallel-then-sequential workflows:
+
+```bash
+# Initialize swarm for coordination
+claude-flow swarm init --topology mesh
+
+# Execute stream chain with swarm agents
+claude-flow stream-chain run \
+  "Agent 1: Research task" \
+  "Agent 2: Implement solution" \
+  "Agent 3: Test implementation" \
+  "Agent 4: Review and refine"
+```
+
+## Memory Integration
+
+Stream chains automatically store context in memory for cross-session persistence:
+
+```bash
+# Execute chain with memory
+claude-flow stream-chain run \
+  "Analyze requirements" \
+  "Design architecture" \
+  --verbose
+
+# Results stored in .claude-flow/memory/stream-chain/
+```
+
+## Neural Pattern Training
+
+Successful chains train neural patterns for improved performance:
+
+```bash
+# Enable neural training
+claude-flow stream-chain pipeline optimize --debug
+
+# Patterns learned and stored for future optimizations
+```
diff --git a/.claude/skills/stream-chain/references/predefined-pipelines.md b/.claude/skills/stream-chain/references/predefined-pipelines.md
new file mode 100644 (file)
index 0000000..d0881a1
--- /dev/null
@@ -0,0 +1,122 @@
+# Predefined Pipelines
+
+Complete reference for all built-in `claude-flow stream-chain pipeline` types.
+
+---
+
+## 1. Analysis Pipeline
+
+Comprehensive codebase analysis and improvement identification.
+
+```bash
+claude-flow stream-chain pipeline analysis
+```
+
+**Workflow Steps:**
+1. **Structure Analysis**: Map directory structure and identify components
+2. **Issue Detection**: Find potential improvements and problems
+3. **Recommendations**: Generate actionable improvement report
+
+**Use Cases:**
+- New codebase onboarding
+- Technical debt assessment
+- Architecture review
+- Code quality audits
+
+---
+
+## 2. Refactor Pipeline
+
+Systematic code refactoring with prioritization.
+
+```bash
+claude-flow stream-chain pipeline refactor
+```
+
+**Workflow Steps:**
+1. **Candidate Identification**: Find code needing refactoring
+2. **Prioritization**: Create ranked refactoring plan
+3. **Implementation**: Provide refactored code for top priorities
+
+**Use Cases:**
+- Technical debt reduction
+- Code quality improvement
+- Legacy code modernization
+- Design pattern implementation
+
+---
+
+## 3. Test Pipeline
+
+Comprehensive test generation with coverage analysis.
+
+```bash
+claude-flow stream-chain pipeline test
+```
+
+**Workflow Steps:**
+1. **Coverage Analysis**: Identify areas lacking tests
+2. **Test Design**: Create test cases for critical functions
+3. **Implementation**: Generate unit tests with assertions
+
+**Use Cases:**
+- Increasing test coverage
+- TDD workflow support
+- Regression test creation
+- Quality assurance
+
+---
+
+## 4. Optimize Pipeline
+
+Performance optimization with profiling and implementation.
+
+```bash
+claude-flow stream-chain pipeline optimize
+```
+
+**Workflow Steps:**
+1. **Profiling**: Identify performance bottlenecks
+2. **Strategy**: Analyze and suggest optimization approaches
+3. **Implementation**: Provide optimized code
+
+**Use Cases:**
+- Performance improvement
+- Resource optimization
+- Scalability enhancement
+- Latency reduction
+
+---
+
+## Pipeline Options
+
+| Option | Description | Default |
+|--------|-------------|---------|
+| `--verbose` | Show detailed execution | `false` |
+| `--timeout <seconds>` | Timeout per step | `30` |
+| `--debug` | Enable debug mode | `false` |
+
+## Pipeline Invocation Examples
+
+```bash
+# Quick analysis
+claude-flow stream-chain pipeline analysis
+
+# Extended refactoring
+claude-flow stream-chain pipeline refactor --timeout 60 --verbose
+
+# Debug test generation
+claude-flow stream-chain pipeline test --debug
+
+# Comprehensive optimization
+claude-flow stream-chain pipeline optimize --timeout 90 --verbose
+```
+
+## Pipeline Output
+
+Each pipeline execution provides:
+
+- **Progress**: Step-by-step execution status
+- **Results**: Success/failure per step
+- **Timing**: Total and per-step execution time
+- **Summary**: Consolidated results and recommendations
diff --git a/.claude/skills/stream-chain/references/troubleshooting.md b/.claude/skills/stream-chain/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..5a26440
--- /dev/null
@@ -0,0 +1,43 @@
+# Troubleshooting and Performance
+
+Common issues, solutions, and performance characteristics.
+
+---
+
+## Common Issues
+
+### Chain Timeout
+
+If steps time out, increase the timeout value:
+
+```bash
+claude-flow stream-chain run "complex task" --timeout 120
+```
+
+### Context Loss
+
+If context is not flowing properly, enable debug mode:
+
+```bash
+claude-flow stream-chain run "step 1" "step 2" --debug
+```
+
+### Pipeline Not Found
+
+Verify pipeline name and custom definitions:
+
+```bash
+# Check available pipelines
+grep -A 10 "streamChain" .claude-flow/config.json
+```
+
+---
+
+## Performance Characteristics
+
+| Metric | Value |
+|--------|-------|
+| Throughput | 2-5 steps per minute (varies by complexity) |
+| Context Size | Up to 100K tokens per step |
+| Memory Usage | ~50MB per active chain |
+| Concurrency | Supports parallel chain execution |
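+
+Because chains support parallel execution, independent pipelines can be launched concurrently from the shell (a plain-bash sketch):
+
+```bash
+claude-flow stream-chain pipeline analysis &
+claude-flow stream-chain pipeline test &
+wait   # block until both chains have finished
+```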
diff --git a/.claude/skills/swarm-advanced/assets/cli-reference.md b/.claude/skills/swarm-advanced/assets/cli-reference.md
new file mode 100644 (file)
index 0000000..1ce6119
--- /dev/null
@@ -0,0 +1,113 @@
+# CLI Reference for Swarm Orchestration
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Installation
+
+```bash
+# Install Claude Flow
+npm install -g claude-flow@alpha
+
+# Add MCP server (if using MCP tools)
+claude mcp add claude-flow npx claude-flow@alpha mcp start
+```
+
+## Swarm Commands
+
+### Research Swarm
+
+```bash
+npx claude-flow swarm "research AI trends in 2025" \
+  --strategy research \
+  --mode distributed \
+  --max-agents 6 \
+  --parallel \
+  --output research-report.md
+```
+
+### Development Swarm
+
+```bash
+npx claude-flow swarm "build REST API with authentication" \
+  --strategy development \
+  --mode hierarchical \
+  --monitor \
+  --output sqlite
+```
+
+### Testing Swarm
+
+```bash
+npx claude-flow swarm "test application comprehensively" \
+  --strategy testing \
+  --mode star \
+  --parallel \
+  --timeout 600
+```
+
+### Analysis Swarm
+
+```bash
+npx claude-flow swarm "analyze codebase for issues" \
+  --strategy analysis \
+  --mode mesh \
+  --max-agents 5 \
+  --output analysis-report.md
+```
+
+## Common Flags
+
+| Flag | Description | Example |
+|------|-------------|---------|
+| `--strategy` | Swarm strategy type | `research`, `development`, `testing` |
+| `--mode` | Topology mode | `mesh`, `hierarchical`, `star`, `ring` |
+| `--max-agents` | Maximum agent count | `4`, `6`, `8` |
+| `--parallel` | Enable parallel execution | (flag, no value) |
+| `--monitor` | Enable real-time monitoring | (flag, no value) |
+| `--timeout` | Execution timeout in seconds | `300`, `600` |
+| `--output` | Output format or file | `sqlite`, `report.md` |
+
+## MCP Tool Quick Reference
+
+| Tool | Purpose |
+|------|---------|
+| `swarm_init` | Initialize swarm with topology and strategy |
+| `agent_spawn` | Spawn a specialized agent |
+| `task_orchestrate` | Orchestrate a task with strategy |
+| `parallel_execute` | Run tasks in parallel |
+| `batch_process` | Batch process items |
+| `swarm_status` | Check swarm health |
+| `swarm_monitor` | Real-time monitoring |
+| `swarm_scale` | Scale agent count |
+| `memory_usage` | Store/retrieve memory |
+| `memory_search` | Search memory by pattern |
+| `memory_persist` | Persist session state |
+| `memory_backup` | Backup memory stores |
+| `workflow_create` | Create reusable workflow |
+| `workflow_execute` | Execute a workflow |
+| `pipeline_create` | Create CI/CD pipeline |
+| `quality_assess` | Assess quality metrics |
+| `pattern_recognize` | Recognize data patterns |
+| `neural_patterns` | Neural pattern operations |
+| `neural_train` | Train neural patterns |
+| `cognitive_analyze` | Cognitive analysis |
+| `benchmark_run` | Run benchmarks |
+| `bottleneck_analyze` | Analyze bottlenecks |
+| `security_scan` | Security scanning |
+| `error_analysis` | Analyze error logs |
+| `performance_report` | Generate performance report |
+| `trend_analysis` | Analyze trends over time |
+| `cost_analysis` | Analyze resource costs |
+| `health_check` | Health monitoring |
+| `metrics_collect` | Collect metrics |
+| `daa_fault_tolerance` | Configure fault tolerance |
+| `state_snapshot` | Create state snapshot |
+| `context_restore` | Restore from snapshot |
+| `topology_optimize` | Optimize topology |
+| `load_balance` | Balance load across agents |
+| `coordination_sync` | Sync agent coordination |
+| `automation_setup` | Setup automation rules |
+| `trigger_setup` | Configure event triggers |
+| `learning_adapt` | Adaptive learning |
+| `usage_stats` | Usage statistics |
+| `task_results` | Get task results |
diff --git a/.claude/skills/swarm-advanced/assets/examples.md b/.claude/skills/swarm-advanced/assets/examples.md
new file mode 100644 (file)
index 0000000..8f29d59
--- /dev/null
@@ -0,0 +1,92 @@
+# Real-World Swarm Examples
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Example 1: AI Research Project
+
+```javascript
+// Research AI trends, analyze findings, generate report
+mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 6 })
+
+// Team: 2 researchers, 2 analysts, 1 synthesizer, 1 documenter
+const agents = [
+  { type: "researcher", name: "Web Researcher", capabilities: ["web-search", "source-validation"] },
+  { type: "researcher", name: "Academic Researcher", capabilities: ["paper-analysis", "literature-review"] },
+  { type: "analyst", name: "Data Analyst", capabilities: ["statistical-analysis", "visualization"] },
+  { type: "analyst", name: "Pattern Analyzer", capabilities: ["trend-detection", "correlation-analysis"] },
+  { type: "analyst", name: "Synthesizer", capabilities: ["cross-reference", "synthesis"] },
+  { type: "documenter", name: "Report Writer", capabilities: ["technical-writing", "formatting"] }
+]
+
+// Workflow: Parallel gather -> Analyze patterns -> Synthesize -> Report
+mcp__claude-flow__parallel_execute({
+  tasks: [
+    { id: "gather-web", command: "search recent AI publications" },
+    { id: "gather-academic", command: "search academic AI databases" }
+  ]
+})
+// Then sequential: analyze -> synthesize -> generate report
+```
+
+## Example 2: Full-Stack Application
+
+```javascript
+// Build complete web application with testing
+mcp__claude-flow__swarm_init({ topology: "hierarchical", maxAgents: 8 })
+
+// Team: 1 architect, 2 devs, 1 db engineer, 2 testers, 1 reviewer, 1 devops
+const team = [
+  { type: "architect", name: "System Architect", role: "coordinator" },
+  { type: "coder", name: "Backend Dev", capabilities: ["node", "api"] },
+  { type: "coder", name: "Frontend Dev", capabilities: ["react", "ui"] },
+  { type: "coder", name: "DB Engineer", capabilities: ["sql", "optimization"] },
+  { type: "tester", name: "QA Engineer", capabilities: ["unit", "integration"] },
+  { type: "tester", name: "E2E Tester", capabilities: ["e2e", "selenium"] },
+  { type: "reviewer", name: "Code Reviewer", capabilities: ["security", "best-practices"] },
+  { type: "monitor", name: "DevOps", capabilities: ["ci-cd", "deployment"] }
+]
+
+// Workflow: Design -> Parallel implement -> Test -> Review -> Deploy
+```
+
+## Example 3: Security Audit
+
+```javascript
+// Comprehensive security analysis
+mcp__claude-flow__swarm_init({ topology: "star", maxAgents: 5 })
+
+// Team: 1 coordinator, 1 code analyzer, 1 security scanner, 1 pen tester, 1 reporter
+const securityTeam = [
+  { type: "analyst", name: "Coordinator", role: "coordinator" },
+  { type: "analyst", name: "Code Analyzer", capabilities: ["static-analysis", "dependency-audit"] },
+  { type: "monitor", name: "Security Scanner", capabilities: ["vulnerability-scanning", "config-audit"] },
+  { type: "monitor", name: "Penetration Tester", capabilities: ["penetration-testing", "exploit-detection"] },
+  { type: "documenter", name: "Security Reporter", capabilities: ["reporting", "recommendations"] }
+]
+
+// Workflow: Parallel scan -> Vulnerability analysis -> Penetration test -> Report
+```
+
+## Example 4: Performance Optimization
+
+```javascript
+// Identify and fix performance bottlenecks
+mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 4 })
+
+// Team: 1 profiler, 1 bottleneck analyzer, 1 optimizer, 1 tester
+const perfTeam = [
+  { type: "analyst", name: "Profiler", capabilities: ["profiling", "metrics-collection"] },
+  { type: "analyst", name: "Bottleneck Analyzer", capabilities: ["bottleneck-detection", "root-cause"] },
+  { type: "coder", name: "Optimizer", capabilities: ["optimization", "refactoring"] },
+  { type: "tester", name: "Benchmark Tester", capabilities: ["benchmarking", "regression-testing"] }
+]
+
+// Workflow: Profile -> Identify bottlenecks -> Optimize -> Validate
+mcp__claude-flow__parallel_execute({
+  tasks: [
+    { id: "profile", command: "run application profiling" },
+    { id: "analyze", command: "collect baseline metrics" }
+  ]
+})
+// Then sequential: identify bottlenecks -> optimize -> validate with benchmarks
+```
diff --git a/.claude/skills/swarm-advanced/references/advanced-techniques.md b/.claude/skills/swarm-advanced/references/advanced-techniques.md
new file mode 100644 (file)
index 0000000..3b90a82
--- /dev/null
@@ -0,0 +1,190 @@
+# Advanced Techniques
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Error Handling and Fault Tolerance
+
+```javascript
+// Setup fault tolerance for all agents
+mcp__claude-flow__daa_fault_tolerance({
+  "agentId": "all",
+  "strategy": "auto-recovery"
+})
+
+// Error handling pattern
+try {
+  await mcp__claude-flow__task_orchestrate({
+    "task": "complex operation",
+    "strategy": "parallel",
+    "priority": "high"
+  })
+} catch (error) {
+  // Check swarm health
+  const status = await mcp__claude-flow__swarm_status({})
+
+  // Analyze error patterns
+  await mcp__claude-flow__error_analysis({
+    "logs": [error.message]
+  })
+
+  // Auto-recovery attempt
+  if (status.healthy) {
+    await mcp__claude-flow__task_orchestrate({
+      "task": "retry failed operation",
+      "strategy": "sequential"
+    })
+  }
+}
+```
+
+## Memory and State Management
+
+```javascript
+// Cross-session persistence
+mcp__claude-flow__memory_persist({
+  "sessionId": "swarm-session-001"
+})
+
+// Namespace management for different swarms
+mcp__claude-flow__memory_namespace({
+  "namespace": "research-swarm",
+  "action": "create"
+})
+
+// Create state snapshot
+mcp__claude-flow__state_snapshot({
+  "name": "development-checkpoint-1"
+})
+
+// Restore from snapshot if needed
+mcp__claude-flow__context_restore({
+  "snapshotId": "development-checkpoint-1"
+})
+
+// Backup memory stores
+mcp__claude-flow__memory_backup({
+  "path": "/workspaces/claude-code-flow/backups/swarm-memory.json"
+})
+```
+
+## Neural Pattern Learning
+
+```javascript
+// Train neural patterns from successful workflows
+mcp__claude-flow__neural_train({
+  "pattern_type": "coordination",
+  "training_data": JSON.stringify(successfulWorkflows),
+  "epochs": 50
+})
+
+// Adaptive learning from experience
+mcp__claude-flow__learning_adapt({
+  "experience": {
+    "workflow": "research-to-report",
+    "success": true,
+    "duration": 3600,
+    "quality": 0.95
+  }
+})
+
+// Pattern recognition for optimization
+mcp__claude-flow__pattern_recognize({
+  "data": workflowMetrics,
+  "patterns": ["bottleneck", "optimization-opportunity", "efficiency-gain"]
+})
+```
+
+## Workflow Automation
+
+```javascript
+// Create reusable workflow
+mcp__claude-flow__workflow_create({
+  "name": "full-stack-development",
+  "steps": [
+    { "phase": "design", "agents": ["architect"] },
+    { "phase": "implement", "agents": ["backend-dev", "frontend-dev"], "parallel": true },
+    { "phase": "test", "agents": ["tester", "security-tester"], "parallel": true },
+    { "phase": "review", "agents": ["reviewer"] },
+    { "phase": "deploy", "agents": ["devops"] }
+  ],
+  "triggers": ["on-commit", "scheduled-daily"]
+})
+
+// Setup automation rules
+mcp__claude-flow__automation_setup({
+  "rules": [
+    {
+      "trigger": "file-changed",
+      "pattern": "*.js",
+      "action": "run-tests"
+    },
+    {
+      "trigger": "PR-created",
+      "action": "code-review-swarm"
+    }
+  ]
+})
+
+// Event-driven triggers
+mcp__claude-flow__trigger_setup({
+  "events": ["code-commit", "PR-merge", "deployment"],
+  "actions": ["test", "analyze", "document"]
+})
+```
+
+## Performance Optimization
+
+```javascript
+// Topology optimization
+mcp__claude-flow__topology_optimize({
+  "swarmId": "current-swarm"
+})
+
+// Load balancing
+mcp__claude-flow__load_balance({
+  "swarmId": "development-swarm",
+  "tasks": taskQueue
+})
+
+// Agent coordination sync
+mcp__claude-flow__coordination_sync({
+  "swarmId": "development-swarm"
+})
+
+// Auto-scaling
+mcp__claude-flow__swarm_scale({
+  "swarmId": "development-swarm",
+  "targetSize": 12
+})
+```
+
+## Monitoring and Metrics
+
+```javascript
+// Real-time swarm monitoring
+mcp__claude-flow__swarm_monitor({
+  "swarmId": "active-swarm",
+  "interval": 3000
+})
+
+// Collect comprehensive metrics
+mcp__claude-flow__metrics_collect({
+  "components": ["agents", "tasks", "memory", "performance"]
+})
+
+// Health monitoring
+mcp__claude-flow__health_check({
+  "components": ["swarm", "agents", "neural", "memory"]
+})
+
+// Usage statistics
+mcp__claude-flow__usage_stats({
+  "component": "swarm-orchestration"
+})
+
+// Trend analysis
+mcp__claude-flow__trend_analysis({
+  "metric": "agent-performance",
+  "period": "7d"
+})
+```
diff --git a/.claude/skills/swarm-advanced/references/analysis-swarm.md b/.claude/skills/swarm-advanced/references/analysis-swarm.md
new file mode 100644 (file)
index 0000000..826b781
--- /dev/null
@@ -0,0 +1,81 @@
+# Pattern: Analysis Swarm
+
+> Extracted from SKILL.md Pattern 4. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Purpose
+
+Deep code and system analysis through specialized analyzers.
+
+## Architecture
+
+```javascript
+// Initialize analysis swarm
+mcp__claude-flow__swarm_init({
+  "topology": "mesh",
+  "maxAgents": 5,
+  "strategy": "adaptive"
+})
+
+// Spawn analysis specialists
+const analysisTeam = [
+  {
+    type: "analyst",
+    name: "Code Analyzer",
+    capabilities: ["static-analysis", "complexity-analysis", "dead-code-detection"]
+  },
+  {
+    type: "analyst",
+    name: "Security Analyzer",
+    capabilities: ["security-scan", "vulnerability-detection", "dependency-audit"]
+  },
+  {
+    type: "analyst",
+    name: "Performance Analyzer",
+    capabilities: ["profiling", "bottleneck-detection", "optimization"]
+  },
+  {
+    type: "analyst",
+    name: "Architecture Analyzer",
+    capabilities: ["dependency-analysis", "coupling-detection", "modularity-assessment"]
+  },
+  {
+    type: "documenter",
+    name: "Analysis Reporter",
+    capabilities: ["reporting", "visualization", "recommendations"]
+  }
+]
+
+// Spawn all analysts
+analysisTeam.forEach(analyst => {
+  mcp__claude-flow__agent_spawn({
+    type: analyst.type,
+    name: analyst.name,
+    capabilities: analyst.capabilities
+  })
+})
+```
+
+## Analysis Workflow
+
+```javascript
+// Parallel analysis execution
+mcp__claude-flow__parallel_execute({
+  "tasks": [
+    { "id": "analyze-code", "command": "analyze codebase structure and quality" },
+    { "id": "analyze-security", "command": "scan for security vulnerabilities" },
+    { "id": "analyze-performance", "command": "identify performance bottlenecks" },
+    { "id": "analyze-architecture", "command": "assess architectural patterns" }
+  ]
+})
+
+// Generate comprehensive analysis report
+mcp__claude-flow__performance_report({
+  "format": "detailed",
+  "timeframe": "current"
+})
+
+// Cost analysis
+mcp__claude-flow__cost_analysis({
+  "timeframe": "30d"
+})
+```
diff --git a/.claude/skills/swarm-advanced/references/best-practices.md b/.claude/skills/swarm-advanced/references/best-practices.md
new file mode 100644 (file)
index 0000000..6c99dc2
--- /dev/null
@@ -0,0 +1,69 @@
+# Best Practices and Troubleshooting
+
+> Extracted from SKILL.md. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Best Practices
+
+### 1. Choosing the Right Topology
+
+- **Mesh**: Research, brainstorming, collaborative analysis
+- **Hierarchical**: Structured development, sequential workflows
+- **Star**: Testing, validation, centralized coordination
+- **Ring**: Pipeline processing, staged workflows
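The mapping above can be encoded as a small helper. This is an illustrative sketch only: the function name and workload categories are assumptions, not part of the claude-flow API.

```javascript
// Hypothetical helper encoding the topology guidance above.
function pickTopology(workload) {
  const map = {
    research: "mesh",
    brainstorming: "mesh",
    development: "hierarchical",
    testing: "star",
    pipeline: "ring",
  };
  // Default to mesh for open-ended collaborative work.
  return map[workload] ?? "mesh";
}

console.log(pickTopology("testing")); // → star
```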
+
+### 2. Agent Specialization
+
+- Assign specific capabilities to each agent.
+- Avoid overlapping responsibilities.
+- Use coordination agents for complex workflows.
+- Leverage memory for agent communication.
+
+### 3. Parallel Execution
+
+- Identify independent tasks for parallelization.
+- Use sequential execution for dependent tasks.
+- Monitor resource usage during parallel execution.
+- Implement proper error handling.
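These rules can be sketched with plain Promises as a stand-in for `parallel_execute` (the task bodies below are placeholders, not real agent work):

```javascript
// Independent tasks run in parallel; allSettled keeps one failure
// from discarding the other results (proper error handling).
async function runPhase(tasks) {
  const results = await Promise.allSettled(tasks.map((t) => t()));
  const failed = results.filter((r) => r.status === "rejected");
  if (failed.length > 0) {
    console.error(`${failed.length} task(s) failed`);
  }
  return results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
}

// Dependent work waits for the whole parallel phase to finish.
runPhase([async () => "profile", async () => "baseline"]).then((done) =>
  console.log(done.join(","))
); // → profile,baseline
```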
+
+### 4. Memory Management
+
+- Use namespaces to organize memory.
+- Set appropriate TTL values.
+- Create regular backups.
+- Implement state snapshots for checkpoints.
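A toy in-memory model of the namespace and TTL semantics (illustrative only — the real store lives behind `memory_usage`, and this class is hypothetical):

```javascript
// Namespaced key-value store with per-entry TTL, mirroring the
// namespace/ttl parameters passed to memory_usage elsewhere in this skill.
class MemorySketch {
  constructor() {
    this.entries = new Map();
  }
  store(namespace, key, value, ttlSeconds) {
    this.entries.set(`${namespace}/${key}`, {
      value,
      expiresAt: Date.now() + ttlSeconds * 1000,
    });
  }
  retrieve(namespace, key) {
    const entry = this.entries.get(`${namespace}/${key}`);
    if (!entry || Date.now() > entry.expiresAt) return null; // missing or expired
    return entry.value;
  }
}

const mem = new MemorySketch();
mem.store("research", "findings-1", { topic: "X" }, 604800); // 7-day TTL
console.log(mem.retrieve("research", "findings-1").topic); // → X
```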
+
+### 5. Monitoring and Optimization
+
+- Monitor swarm health regularly.
+- Collect and analyze metrics.
+- Optimize topology based on performance.
+- Use neural patterns to learn from success.
+
+### 6. Error Recovery
+
+- Implement fault tolerance strategies.
+- Use auto-recovery mechanisms.
+- Analyze error patterns.
+- Create fallback workflows.
+
+## Troubleshooting
+
+### Swarm agents not coordinating properly
+
+**Symptoms**: Agents produce duplicate work or miss tasks.
+**Solution**: Check topology selection, verify memory usage, enable monitoring.
+
+### Parallel execution failing
+
+**Symptoms**: Tasks time out or return partial results.
+**Solution**: Verify task dependencies, check resource limits, implement error handling.
+
+### Memory persistence not working
+
+**Symptoms**: State lost between sessions.
+**Solution**: Verify namespaces, check TTL settings, ensure backup configuration.
+
+### Performance degradation
+
+**Symptoms**: Swarm becomes slower over time.
+**Solution**: Optimize topology, reduce agent count, analyze bottlenecks.
diff --git a/.claude/skills/swarm-advanced/references/development-swarm.md b/.claude/skills/swarm-advanced/references/development-swarm.md
new file mode 100644 (file)
index 0000000..73f66de
--- /dev/null
@@ -0,0 +1,150 @@
+# Pattern: Development Swarm
+
+> Extracted from SKILL.md Pattern 2. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Purpose
+
+Full-stack development through coordinated specialist agents.
+
+## Architecture
+
+```javascript
+// Initialize development swarm with hierarchy
+mcp__claude-flow__swarm_init({
+  "topology": "hierarchical",
+  "maxAgents": 8,
+  "strategy": "balanced"
+})
+
+// Spawn development team
+const devTeam = [
+  { type: "architect", name: "System Architect", role: "coordinator" },
+  { type: "coder", name: "Backend Developer", capabilities: ["node", "api", "database"] },
+  { type: "coder", name: "Frontend Developer", capabilities: ["react", "ui", "ux"] },
+  { type: "coder", name: "Database Engineer", capabilities: ["sql", "nosql", "optimization"] },
+  { type: "tester", name: "QA Engineer", capabilities: ["unit", "integration", "e2e"] },
+  { type: "reviewer", name: "Code Reviewer", capabilities: ["security", "performance", "best-practices"] },
+  { type: "documenter", name: "Technical Writer", capabilities: ["api-docs", "guides", "tutorials"] },
+  { type: "monitor", name: "DevOps Engineer", capabilities: ["ci-cd", "deployment", "monitoring"] }
+]
+
+// Spawn all team members
+devTeam.forEach(member => {
+  mcp__claude-flow__agent_spawn({
+    type: member.type,
+    name: member.name,
+    capabilities: member.capabilities,
+    swarmId: "dev-swarm"
+  })
+})
+```
+
+## Development Workflow
+
+### Phase 1: Architecture and Design
+
+```javascript
+// System architecture design
+mcp__claude-flow__task_orchestrate({
+  "task": "design system architecture for REST API",
+  "strategy": "sequential",
+  "priority": "critical",
+  "assignTo": "System Architect"
+})
+
+// Store architecture decisions
+mcp__claude-flow__memory_usage({
+  "action": "store",
+  "key": "architecture-decisions",
+  "value": JSON.stringify(architectureDoc),
+  "namespace": "development/design"
+})
+```
+
+### Phase 2: Parallel Implementation
+
+```javascript
+// Parallel development tasks
+mcp__claude-flow__parallel_execute({
+  "tasks": [
+    {
+      "id": "backend-api",
+      "command": "implement REST API endpoints",
+      "assignTo": "Backend Developer"
+    },
+    {
+      "id": "frontend-ui",
+      "command": "build user interface components",
+      "assignTo": "Frontend Developer"
+    },
+    {
+      "id": "database-schema",
+      "command": "design and implement database schema",
+      "assignTo": "Database Engineer"
+    },
+    {
+      "id": "api-documentation",
+      "command": "create API documentation",
+      "assignTo": "Technical Writer"
+    }
+  ]
+})
+
+// Monitor development progress
+mcp__claude-flow__swarm_monitor({
+  "swarmId": "dev-swarm",
+  "interval": 5000
+})
+```
+
+### Phase 3: Testing and Validation
+
+```javascript
+// Comprehensive testing
+mcp__claude-flow__batch_process({
+  "items": [
+    { type: "unit", target: "all-modules" },
+    { type: "integration", target: "api-endpoints" },
+    { type: "e2e", target: "user-flows" },
+    { type: "performance", target: "critical-paths" }
+  ],
+  "operation": "execute-tests"
+})
+
+// Quality assessment
+mcp__claude-flow__quality_assess({
+  "target": "codebase",
+  "criteria": ["coverage", "complexity", "maintainability", "security"]
+})
+```
+
+### Phase 4: Review and Deployment
+
+```javascript
+// Code review workflow
+mcp__claude-flow__workflow_execute({
+  "workflowId": "code-review-process",
+  "params": {
+    "reviewers": ["Code Reviewer"],
+    "criteria": ["security", "performance", "best-practices"]
+  }
+})
+
+// CI/CD pipeline
+mcp__claude-flow__pipeline_create({
+  "config": {
+    "stages": ["build", "test", "security-scan", "deploy"],
+    "environment": "production"
+  }
+})
+```
+
+### CLI Fallback
+
+```bash
+npx claude-flow swarm "build REST API with authentication" \
+  --strategy development \
+  --mode hierarchical \
+  --monitor \
+  --output sqlite
+```
diff --git a/.claude/skills/swarm-advanced/references/research-swarm.md b/.claude/skills/swarm-advanced/references/research-swarm.md
new file mode 100644 (file)
index 0000000..c8632f7
--- /dev/null
@@ -0,0 +1,190 @@
+# Pattern: Research Swarm
+
+> Extracted from SKILL.md Pattern 1. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Purpose
+
+Deep research through parallel information gathering, analysis, and synthesis.
+
+## Architecture
+
+```javascript
+// Initialize research swarm
+mcp__claude-flow__swarm_init({
+  "topology": "mesh",
+  "maxAgents": 6,
+  "strategy": "adaptive"
+})
+
+// Spawn research team
+const researchAgents = [
+  {
+    type: "researcher",
+    name: "Web Researcher",
+    capabilities: ["web-search", "content-extraction", "source-validation"]
+  },
+  {
+    type: "researcher",
+    name: "Academic Researcher",
+    capabilities: ["paper-analysis", "citation-tracking", "literature-review"]
+  },
+  {
+    type: "analyst",
+    name: "Data Analyst",
+    capabilities: ["data-processing", "statistical-analysis", "visualization"]
+  },
+  {
+    type: "analyst",
+    name: "Pattern Analyzer",
+    capabilities: ["trend-detection", "correlation-analysis", "outlier-detection"]
+  },
+  {
+    type: "documenter",
+    name: "Report Writer",
+    capabilities: ["synthesis", "technical-writing", "formatting"]
+  }
+]
+
+// Spawn all agents
+researchAgents.forEach(agent => {
+  mcp__claude-flow__agent_spawn({
+    type: agent.type,
+    name: agent.name,
+    capabilities: agent.capabilities
+  })
+})
+```
+
+## Research Workflow
+
+### Phase 1: Information Gathering
+
+```javascript
+// Parallel information collection
+mcp__claude-flow__parallel_execute({
+  "tasks": [
+    {
+      "id": "web-search",
+      "command": "search recent publications and articles"
+    },
+    {
+      "id": "academic-search",
+      "command": "search academic databases and papers"
+    },
+    {
+      "id": "data-collection",
+      "command": "gather relevant datasets and statistics"
+    },
+    {
+      "id": "expert-search",
+      "command": "identify domain experts and thought leaders"
+    }
+  ]
+})
+
+// Store research findings in memory
+mcp__claude-flow__memory_usage({
+  "action": "store",
+  "key": "research-findings-" + Date.now(),
+  "value": JSON.stringify(findings),
+  "namespace": "research",
+  "ttl": 604800 // 7 days
+})
+```
+
+### Phase 2: Analysis and Validation
+
+```javascript
+// Pattern recognition in findings
+mcp__claude-flow__pattern_recognize({
+  "data": researchData,
+  "patterns": ["trend", "correlation", "outlier", "emerging-pattern"]
+})
+
+// Cognitive analysis
+mcp__claude-flow__cognitive_analyze({
+  "behavior": "research-synthesis"
+})
+
+// Quality assessment
+mcp__claude-flow__quality_assess({
+  "target": "research-sources",
+  "criteria": ["credibility", "relevance", "recency", "authority"]
+})
+
+// Cross-reference validation
+mcp__claude-flow__neural_patterns({
+  "action": "analyze",
+  "operation": "fact-checking",
+  "metadata": { "sources": sourcesArray }
+})
+```
+
+### Phase 3: Knowledge Management
+
+```javascript
+// Search existing knowledge base
+mcp__claude-flow__memory_search({
+  "pattern": "topic X",
+  "namespace": "research",
+  "limit": 20
+})
+
+// Create knowledge graph connections
+mcp__claude-flow__neural_patterns({
+  "action": "learn",
+  "operation": "knowledge-graph",
+  "metadata": {
+    "topic": "X",
+    "connections": relatedTopics,
+    "depth": 3
+  }
+})
+
+// Store connections for future use
+mcp__claude-flow__memory_usage({
+  "action": "store",
+  "key": "knowledge-graph-X",
+  "value": JSON.stringify(knowledgeGraph),
+  "namespace": "research/graphs",
+  "ttl": 2592000 // 30 days
+})
+```
+
+### Phase 4: Report Generation
+
+```javascript
+// Orchestrate report generation
+mcp__claude-flow__task_orchestrate({
+  "task": "generate comprehensive research report",
+  "strategy": "sequential",
+  "priority": "high",
+  "dependencies": ["gather", "analyze", "validate", "synthesize"]
+})
+
+// Monitor research progress
+mcp__claude-flow__swarm_status({
+  "swarmId": "research-swarm"
+})
+
+// Generate final report
+mcp__claude-flow__workflow_execute({
+  "workflowId": "research-report-generation",
+  "params": {
+    "findings": findings,
+    "format": "comprehensive",
+    "sections": ["executive-summary", "methodology", "findings", "analysis", "conclusions", "references"]
+  }
+})
+```
+
+### CLI Fallback
+
+```bash
+npx claude-flow swarm "research AI trends in 2025" \
+  --strategy research \
+  --mode distributed \
+  --max-agents 6 \
+  --parallel \
+  --output research-report.md
+```
diff --git a/.claude/skills/swarm-advanced/references/testing-swarm.md b/.claude/skills/swarm-advanced/references/testing-swarm.md
new file mode 100644 (file)
index 0000000..4d27dc7
--- /dev/null
@@ -0,0 +1,207 @@
+# Pattern: Testing Swarm
+
+> Extracted from SKILL.md Pattern 3. Return to [SKILL.md](../SKILL.md) for navigation.
+
+## Purpose
+
+Comprehensive quality assurance through distributed testing.
+
+## Architecture
+
+```javascript
+// Initialize testing swarm with star topology
+mcp__claude-flow__swarm_init({
+  "topology": "star",
+  "maxAgents": 7,
+  "strategy": "parallel"
+})
+
+// Spawn testing team
+const testingTeam = [
+  {
+    type: "tester",
+    name: "Unit Test Coordinator",
+    capabilities: ["unit-testing", "mocking", "coverage", "tdd"]
+  },
+  {
+    type: "tester",
+    name: "Integration Tester",
+    capabilities: ["integration", "api-testing", "contract-testing"]
+  },
+  {
+    type: "tester",
+    name: "E2E Tester",
+    capabilities: ["e2e", "ui-testing", "user-flows", "selenium"]
+  },
+  {
+    type: "tester",
+    name: "Performance Tester",
+    capabilities: ["load-testing", "stress-testing", "benchmarking"]
+  },
+  {
+    type: "monitor",
+    name: "Security Tester",
+    capabilities: ["security-testing", "penetration-testing", "vulnerability-scanning"]
+  },
+  {
+    type: "analyst",
+    name: "Test Analyst",
+    capabilities: ["coverage-analysis", "test-optimization", "reporting"]
+  },
+  {
+    type: "documenter",
+    name: "Test Documenter",
+    capabilities: ["test-documentation", "test-plans", "reports"]
+  }
+]
+
+// Spawn all testers
+testingTeam.forEach(tester => {
+  mcp__claude-flow__agent_spawn({
+    type: tester.type,
+    name: tester.name,
+    capabilities: tester.capabilities,
+    swarmId: "testing-swarm"
+  })
+})
+```
+
+## Testing Workflow
+
+### Phase 1: Test Planning
+
+```javascript
+// Analyze test coverage requirements
+mcp__claude-flow__quality_assess({
+  "target": "test-coverage",
+  "criteria": [
+    "line-coverage",
+    "branch-coverage",
+    "function-coverage",
+    "edge-cases"
+  ]
+})
+
+// Identify test scenarios
+mcp__claude-flow__pattern_recognize({
+  "data": testScenarios,
+  "patterns": [
+    "edge-case",
+    "boundary-condition",
+    "error-path",
+    "happy-path"
+  ]
+})
+
+// Store test plan
+mcp__claude-flow__memory_usage({
+  "action": "store",
+  "key": "test-plan-" + Date.now(),
+  "value": JSON.stringify(testPlan),
+  "namespace": "testing/plans"
+})
+```
+
+### Phase 2: Parallel Test Execution
+
+```javascript
+// Execute all test suites in parallel
+mcp__claude-flow__parallel_execute({
+  "tasks": [
+    {
+      "id": "unit-tests",
+      "command": "npm run test:unit",
+      "assignTo": "Unit Test Coordinator"
+    },
+    {
+      "id": "integration-tests",
+      "command": "npm run test:integration",
+      "assignTo": "Integration Tester"
+    },
+    {
+      "id": "e2e-tests",
+      "command": "npm run test:e2e",
+      "assignTo": "E2E Tester"
+    },
+    {
+      "id": "performance-tests",
+      "command": "npm run test:performance",
+      "assignTo": "Performance Tester"
+    },
+    {
+      "id": "security-tests",
+      "command": "npm run test:security",
+      "assignTo": "Security Tester"
+    }
+  ]
+})
+
+// Batch process test suites
+mcp__claude-flow__batch_process({
+  "items": testSuites,
+  "operation": "execute-test-suite"
+})
+```
+
+### Phase 3: Performance and Security
+
+```javascript
+// Run performance benchmarks
+mcp__claude-flow__benchmark_run({
+  "suite": "comprehensive-performance"
+})
+
+// Bottleneck analysis
+mcp__claude-flow__bottleneck_analyze({
+  "component": "application",
+  "metrics": ["response-time", "throughput", "memory", "cpu"]
+})
+
+// Security scanning
+mcp__claude-flow__security_scan({
+  "target": "application",
+  "depth": "comprehensive"
+})
+
+// Vulnerability analysis
+mcp__claude-flow__error_analysis({
+  "logs": securityScanLogs
+})
+```
+
+### Phase 4: Monitoring and Reporting
+
+```javascript
+// Real-time test monitoring
+mcp__claude-flow__swarm_monitor({
+  "swarmId": "testing-swarm",
+  "interval": 2000
+})
+
+// Generate comprehensive test report
+mcp__claude-flow__performance_report({
+  "format": "detailed",
+  "timeframe": "current-run"
+})
+
+// Get test results
+mcp__claude-flow__task_results({
+  "taskId": "test-execution-001"
+})
+
+// Trend analysis
+mcp__claude-flow__trend_analysis({
+  "metric": "test-coverage",
+  "period": "30d"
+})
+```
+
+### CLI Fallback
+
+```bash
+npx claude-flow swarm "test application comprehensively" \
+  --strategy testing \
+  --mode star \
+  --parallel \
+  --timeout 600
+```
diff --git a/.claude/skills/verification-quality/references/configuration.md b/.claude/skills/verification-quality/references/configuration.md
new file mode 100644 (file)
index 0000000..021ba73
--- /dev/null
@@ -0,0 +1,66 @@
+# Configuration — Reference
+
+## Default Configuration
+
+Set verification preferences in `.claude-flow/config.json`:
+
+```json
+{
+  "verification": {
+    "threshold": 0.95,
+    "autoRollback": true,
+    "gitIntegration": true,
+    "hooks": {
+      "preCommit": true,
+      "preTask": true,
+      "postEdit": true
+    },
+    "checks": {
+      "codeCorrectness": true,
+      "security": true,
+      "performance": true,
+      "documentation": true,
+      "bestPractices": true
+    }
+  },
+  "truth": {
+    "defaultFormat": "table",
+    "defaultPeriod": "24h",
+    "warningThreshold": 0.85,
+    "criticalThreshold": 0.75,
+    "autoExport": {
+      "enabled": true,
+      "path": ".claude-flow/metrics/truth-daily.json"
+    }
+  }
+}
+```
+
+## Threshold Configuration
+
+### Adjust Verification Strictness
+
+```bash
+# Strict mode (99% accuracy required)
+npx claude-flow@alpha verify check --threshold 0.99
+
+# Lenient mode (90% acceptable)
+npx claude-flow@alpha verify check --threshold 0.90
+
+# Set default threshold
+npx claude-flow@alpha config set verification.threshold 0.98
+```
+
+### Per-environment Thresholds
+
+```json
+{
+  "verification": {
+    "thresholds": {
+      "production": 0.99,
+      "staging": 0.95,
+      "development": 0.90
+    }
+  }
+}
+```
diff --git a/.claude/skills/verification-quality/references/integrations.md b/.claude/skills/verification-quality/references/integrations.md
new file mode 100644 (file)
index 0000000..e403ee7
--- /dev/null
@@ -0,0 +1,148 @@
+# Integration Examples — Reference
+
+## CI/CD Integration
+
+### GitHub Actions
+
+```yaml
+name: Quality Verification
+
+on: [push, pull_request]
+
+jobs:
+  verify:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Install Dependencies
+        run: npm install
+
+      - name: Run Verification
+        run: |
+          npx claude-flow@alpha verify check --json > verification.json
+
+      - name: Check Truth Score
+        run: |
+          score=$(jq '.overallScore' verification.json)
+          if (( $(echo "$score < 0.95" | bc -l) )); then
+            echo "Truth score too low: $score"
+            exit 1
+          fi
+
+      - name: Upload Report
+        uses: actions/upload-artifact@v4
+        with:
+          name: verification-report
+          path: verification.json
+```
+
+### GitLab CI
+
+```yaml
+verify:
+  stage: test
+  script:
+    - npx claude-flow@alpha verify check --threshold 0.95 --json > verification.json
+    - |
+      score=$(jq '.overallScore' verification.json)
+      if [ $(echo "$score < 0.95" | bc) -eq 1 ]; then
+        echo "Verification failed with score: $score"
+        exit 1
+      fi
+  artifacts:
+    paths:
+      - verification.json
+```
+
+## Swarm Integration
+
+Run verification automatically during swarm operations:
+
+```bash
+# Swarm with verification enabled
+npx claude-flow@alpha swarm --verify --threshold 0.98
+
+# Hive Mind with auto-rollback
+npx claude-flow@alpha hive-mind --verify --rollback-on-fail
+
+# Training pipeline with verification
+npx claude-flow@alpha train --verify --threshold 0.99
+```
+
+## Pair Programming Integration
+
+Enable real-time verification during collaborative development:
+
+```bash
+# Pair with verification
+npx claude-flow@alpha pair --verify --real-time
+
+# Pair with custom threshold
+npx claude-flow@alpha pair --verify --threshold 0.97 --auto-fix
+```
+
+## Continuous Verification
+
+Monitor codebase continuously during development:
+
+```bash
+# Watch directory for changes
+npx claude-flow@alpha verify watch --directory src/
+
+# Watch with auto-fix
+npx claude-flow@alpha verify watch --directory src/ --auto-fix
+
+# Watch with notifications
+npx claude-flow@alpha verify watch --notify --threshold 0.95
+```
+
+## Monitoring Integration
+
+Send metrics to external monitoring systems:
+
+```bash
+# Export to Prometheus
+npx claude-flow@alpha truth --format json | \
+  curl -X POST https://pushgateway.example.com/metrics/job/claude-flow \
+  -d @-
+
+# Send to DataDog
+npx claude-flow@alpha verify report --format json | \
+  curl -X POST "https://api.datadoghq.com/api/v1/series?api_key=${DD_API_KEY}" \
+  -H "Content-Type: application/json" \
+  -d @-
+
+# Custom webhook
+npx claude-flow@alpha truth --format json | \
+  curl -X POST https://metrics.example.com/api/truth \
+  -H "Content-Type: application/json" \
+  -d @-
+```
+
+## Pre-commit Hooks
+
+Automatically verify before commits:
+
+```bash
+# Install pre-commit hook
+npx claude-flow@alpha verify install-hook --pre-commit
+```
+
+Example `.git/hooks/pre-commit`:
+
+```bash
+#!/bin/bash
+npx claude-flow@alpha verify check --threshold 0.95 --json > /tmp/verify.json
+
+score=$(jq '.overallScore' /tmp/verify.json)
+if (( $(echo "$score < 0.95" | bc -l) )); then
+  echo "Verification failed with score: $score"
+  echo "Run 'npx claude-flow@alpha verify check --verbose' for details"
+  exit 1
+fi
+
+echo "Verification passed with score: $score"
+```
diff --git a/.claude/skills/verification-quality/references/reports-dashboard.md b/.claude/skills/verification-quality/references/reports-dashboard.md
new file mode 100644 (file)
index 0000000..1d5eb3d
--- /dev/null
@@ -0,0 +1,79 @@
+# Reports & Dashboard — Reference
+
+## Generate Reports
+
+Create detailed verification reports with metrics and visualizations.
+
+### Report Formats
+
+```bash
+# JSON report
+npx claude-flow@alpha verify report --format json
+
+# HTML report with charts
+npx claude-flow@alpha verify report --export metrics.html --format html
+
+# CSV for data analysis
+npx claude-flow@alpha verify report --format csv --export metrics.csv
+
+# Markdown summary
+npx claude-flow@alpha verify report --format markdown
+```
+
+### Time-based Reports
+
+```bash
+# Last 24 hours
+npx claude-flow@alpha verify report --period 24h
+
+# Last 7 days
+npx claude-flow@alpha verify report --period 7d
+
+# Last 30 days with trends
+npx claude-flow@alpha verify report --period 30d --include-trends
+
+# Custom date range
+npx claude-flow@alpha verify report --from 2025-01-01 --to 2025-01-31
+```
+
+### Report Content
+
+Each report includes:
+
+- Overall truth scores
+- Per-agent performance metrics
+- Task completion quality
+- Verification pass/fail rates
+- Rollback frequency
+- Quality improvement trends
+- Statistical confidence intervals
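A JSON export can be consumed programmatically. In the sketch below, only `overallScore` comes from the CI examples elsewhere in this skill; the rest of the payload shape is assumed.

```javascript
// Evaluate an exported report against a threshold.
// Field names beyond overallScore are hypothetical placeholders.
const report = { overallScore: 0.97, period: "24h" }; // e.g. parsed from `verify report --format json`
const threshold = 0.95;

const verdict = report.overallScore >= threshold ? "pass" : "fail";
console.log(`${verdict} (${report.overallScore} vs ${threshold})`); // → pass (0.97 vs 0.95)
```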
+
+## Interactive Dashboard
+
+### Launch Dashboard
+
+Launch the interactive web-based verification dashboard with real-time updates.
+
+```bash
+# Launch dashboard on default port (3000)
+npx claude-flow@alpha verify dashboard
+
+# Custom port
+npx claude-flow@alpha verify dashboard --port 8080
+
+# Export dashboard data
+npx claude-flow@alpha verify dashboard --export
+
+# Dashboard with auto-refresh
+npx claude-flow@alpha verify dashboard --refresh 5s
+```
+
+### Dashboard Features
+
+- Real-time truth score updates (WebSocket)
+- Interactive charts and graphs
+- Agent performance comparison
+- Task history timeline
+- Rollback history viewer
+- Export to PDF/HTML
+- Filter by time period/agent/score
diff --git a/.claude/skills/verification-quality/references/troubleshooting.md b/.claude/skills/verification-quality/references/troubleshooting.md
new file mode 100644 (file)
index 0000000..2b3f1cc
--- /dev/null
@@ -0,0 +1,94 @@
+# Troubleshooting, Performance & Best Practices — Reference
+
+## Performance Metrics
+
+### Verification Speed
+
+| Operation | Typical Latency |
+|-----------|----------------|
+| Single file check | < 100 ms |
+| Directory scan (per 100 files) | < 500 ms |
+| Full codebase analysis | < 5 s |
+| Truth score calculation | < 50 ms |
+
+### Rollback Speed
+
+| Operation | Typical Latency |
+|-----------|----------------|
+| Git-based rollback | < 1 s |
+| Selective file rollback | < 500 ms |
+| Backup creation | < 2 s |
+
+### Dashboard Performance
+
+| Metric | Target |
+|--------|--------|
+| Initial load | < 1 s |
+| Real-time updates (WebSocket) | < 100 ms latency |
+| Chart rendering | 60 FPS |
+
+## Troubleshooting
+
+### Low Truth Scores
+
+```bash
+# Get detailed breakdown
+npx claude-flow@alpha truth --verbose --threshold 0.0
+
+# Check specific criteria
+npx claude-flow@alpha verify check --verbose
+
+# View agent-specific issues
+npx claude-flow@alpha truth --agent <agent-name> --format json
+```
+
+### Rollback Failures
+
+```bash
+# Check git status
+git status
+
+# View rollback history
+npx claude-flow@alpha verify rollback --history
+
+# Manual rollback
+git reset --hard HEAD~1
+```
+
+### Verification Timeouts
+
+```bash
+# Increase timeout
+npx claude-flow@alpha verify check --timeout 60s
+
+# Verify in batches
+npx claude-flow@alpha verify batch --batch-size 10
+```
+
+## Exit Codes
+
+| Code | Meaning |
+|------|---------|
+| `0` | Verification passed (score >= threshold) |
+| `1` | Verification failed (score < threshold) |
+| `2` | Error during verification (invalid input, system error) |
+
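+These exit codes make it easy to gate a CI step. A minimal sketch (the `verify` invocation is shown commented out; `code` is hard-coded here purely for illustration):
+
+```bash
+#!/bin/sh
+# Gate a CI step on the verifier's exit code (see the table above).
+# In a real pipeline:  npx claude-flow@alpha verify check; code=$?
+code=1   # hard-coded for illustration
+case "$code" in
+  0) echo "verification passed" ;;
+  1) echo "verification failed: score below threshold" ;;
+  2) echo "verification error: invalid input or system error" ;;
+esac
+# A real gate would then end with:  exit "$code"
+```
+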
+## Best Practices
+
+1. **Set Appropriate Thresholds**: Use 0.99 for critical code, 0.95 for standard, 0.90 for experimental.
+2. **Enable Auto-rollback**: Prevent bad code from persisting.
+3. **Monitor Trends**: Track improvement over time, not just current scores.
+4. **Integrate with CI/CD**: Make verification part of the pipeline.
+5. **Use Watch Mode**: Get immediate feedback during development.
+6. **Export Metrics**: Track quality metrics in your monitoring system.
+7. **Review Rollbacks**: Understand why changes were rejected.
+8. **Train Agents**: Use verification feedback to improve agent performance.
+
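+Practice 1 can be wired into a pipeline with a small wrapper that picks the threshold from the branch being built. The branch names and mapping below are illustrative assumptions, not a claude-flow convention:
+
+```bash
+#!/bin/sh
+# Pick a verification threshold per branch (illustrative mapping only).
+BRANCH="${BRANCH:-feature/experiment}"
+case "$BRANCH" in
+  main|master) threshold=0.99 ;;  # critical code
+  release/*)   threshold=0.95 ;;  # standard
+  *)           threshold=0.90 ;;  # experimental
+esac
+echo "using threshold=$threshold"
+# Then:  npx claude-flow@alpha verify check --threshold "$threshold"
+```
+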
+## Related Commands
+
+| Command | Purpose |
+|---------|---------|
+| `npx claude-flow@alpha pair` | Collaborative development with verification |
+| `npx claude-flow@alpha train` | Training with verification feedback |
+| `npx claude-flow@alpha swarm` | Multi-agent coordination with quality checks |
+| `npx claude-flow@alpha report` | Generate comprehensive project reports |
diff --git a/.claude/skills/verification-quality/references/truth-scoring.md b/.claude/skills/verification-quality/references/truth-scoring.md
new file mode 100644 (file)
index 0000000..bc4c3bf
--- /dev/null
@@ -0,0 +1,99 @@
+# Truth Scoring System — Reference
+
+## View Truth Metrics
+
+Display comprehensive quality and reliability metrics for the codebase and agent tasks.
+
+### Basic Usage
+
+```bash
+# View current truth scores (default: table format)
+npx claude-flow@alpha truth
+
+# View scores for specific time period
+npx claude-flow@alpha truth --period 7d
+
+# View scores for specific agent
+npx claude-flow@alpha truth --agent coder --period 24h
+
+# Find files/tasks below threshold
+npx claude-flow@alpha truth --threshold 0.8
+```
+
+### Output Formats
+
+```bash
+# Table format (default)
+npx claude-flow@alpha truth --format table
+
+# JSON for programmatic access
+npx claude-flow@alpha truth --format json
+
+# CSV for spreadsheet analysis
+npx claude-flow@alpha truth --format csv
+
+# HTML report with visualizations
+npx claude-flow@alpha truth --format html --export report.html
+```
+
+### Real-time Monitoring
+
+```bash
+# Watch mode with live updates
+npx claude-flow@alpha truth --watch
+
+# Export metrics automatically
+npx claude-flow@alpha truth --export .claude-flow/metrics/truth-$(date +%Y%m%d).json
+```
+
+## Dashboard
+
+Example dashboard output:
+
+```
+📊 Truth Metrics Dashboard
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+Overall Truth Score: 0.947 ✅
+Trend: ↗️ +2.3% (7d)
+
+Top Performers:
+  verification-agent   0.982 ⭐
+  code-analyzer       0.971 ⭐
+  test-generator      0.958 ✅
+
+Needs Attention:
+  refactor-agent      0.821 ⚠️
+  docs-generator      0.794 ⚠️
+
+Recent Tasks:
+  task-456  0.991 ✅  "Implement auth"
+  task-455  0.967 ✅  "Add tests"
+  task-454  0.743 ❌  "Refactor API"
+```
+
+## Metrics Explained
+
+### Truth Scores (0.0-1.0)
+
+| Range | Rating | Meaning |
+|-------|--------|---------|
+| 0.95-1.00 | Excellent ⭐ | Production-ready |
+| 0.85-0.94 | Good ✅ | Acceptable quality |
+| 0.75-0.84 | Warning ⚠️ | Needs attention |
+| < 0.75 | Critical ❌ | Requires immediate action |
+
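+The bands above can be reproduced with a tiny helper (a sketch; the emoji labels are dropped):
+
+```bash
+# Classify a truth score into the bands from the table above.
+rate() {
+  awk -v s="$1" 'BEGIN {
+    if      (s >= 0.95) print "Excellent"
+    else if (s >= 0.85) print "Good"
+    else if (s >= 0.75) print "Warning"
+    else                print "Critical"
+  }'
+}
+rate 0.947   # prints "Good"
+```
+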
+### Trend Indicators
+
+| Symbol | Meaning |
+|--------|---------|
+| ↗️ | Improving (positive trend) |
+| → | Stable (consistent performance) |
+| ↘️ | Declining (quality regression detected) |
+
+### Statistics
+
+- **Mean Score**: Average truth score across all measurements.
+- **Median Score**: Middle value (less affected by outliers).
+- **Standard Deviation**: Consistency of scores (lower = more consistent).
+- **Confidence Interval**: Statistical reliability of measurements.
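+
+For reference, the mean and median can be recomputed from raw scores, e.g. the five agent scores in the dashboard example above:
+
+```bash
+# Mean and median of a score sample (scores from the dashboard example).
+printf '%s\n' 0.982 0.971 0.958 0.821 0.794 | sort -n | awk '
+  { a[NR] = $1; sum += $1 }
+  END {
+    printf "mean=%.3f\n", sum / NR
+    m = int((NR + 1) / 2)
+    printf "median=%.3f\n", (NR % 2) ? a[m] : (a[m] + a[m+1]) / 2
+  }'
+```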
diff --git a/.claude/skills/verification-quality/references/verification-checks.md b/.claude/skills/verification-quality/references/verification-checks.md
new file mode 100644 (file)
index 0000000..927037c
--- /dev/null
@@ -0,0 +1,116 @@
+# Verification Checks — Reference
+
+## Run Verification
+
+Execute comprehensive verification checks on code, tasks, or agent outputs.
+
+### File Verification
+
+```bash
+# Verify single file
+npx claude-flow@alpha verify check --file src/app.js
+
+# Verify directory recursively
+npx claude-flow@alpha verify check --directory src/
+
+# Verify with auto-fix enabled
+npx claude-flow@alpha verify check --file src/utils.js --auto-fix
+
+# Verify current working directory
+npx claude-flow@alpha verify check
+```
+
+### Task Verification
+
+```bash
+# Verify specific task output
+npx claude-flow@alpha verify check --task task-123
+
+# Verify with custom threshold
+npx claude-flow@alpha verify check --task task-456 --threshold 0.99
+
+# Verbose output for debugging
+npx claude-flow@alpha verify check --task task-789 --verbose
+```
+
+### Batch Verification
+
+```bash
+# Verify multiple files in parallel
+npx claude-flow@alpha verify batch --files "*.js" --parallel
+
+# Verify with pattern matching
+npx claude-flow@alpha verify batch --pattern "src/**/*.ts"
+
+# Integration test suite
+npx claude-flow@alpha verify integration --test-suite full
+```
+
+## Verification Criteria
+
+The verification system evaluates five dimensions:
+
+### 1. Code Correctness
+
+- Syntax validation
+- Type checking (TypeScript)
+- Logic flow analysis
+- Error handling completeness
+
+### 2. Best Practices
+
+- Code style adherence
+- SOLID principles
+- Design patterns usage
+- Modularity and reusability
+
+### 3. Security
+
+- Vulnerability scanning
+- Secret detection
+- Input validation
+- Authentication/authorization checks
+
+### 4. Performance
+
+- Algorithmic complexity
+- Memory usage patterns
+- Database query optimization
+- Bundle size impact
+
+### 5. Documentation
+
+- JSDoc/TypeDoc completeness
+- README accuracy
+- API documentation
+- Code comments quality
+
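+How the five dimension scores combine into the overall score is not specified here; one plausible aggregation is a weighted mean. The weights and scores below are illustrative assumptions, not claude-flow's actual formula:
+
+```bash
+# Illustrative only: weighted mean over the five dimensions.
+# Both weights and per-dimension scores are made-up example values.
+awk 'BEGIN {
+  score["correctness"]   = 0.98; weight["correctness"]   = 0.30
+  score["practices"]     = 0.95; weight["practices"]     = 0.20
+  score["security"]      = 0.91; weight["security"]      = 0.25
+  score["performance"]   = 0.93; weight["performance"]   = 0.15
+  score["documentation"] = 0.88; weight["documentation"] = 0.10
+  for (k in score) overall += score[k] * weight[k]
+  printf "overall=%.3f\n", overall
+}'
+```
+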
+## JSON Output for CI/CD
+
+```bash
+# Get structured JSON output
+npx claude-flow@alpha verify check --json > verification.json
+```
+
+Example JSON structure:
+
+```json
+{
+  "overallScore": 0.947,
+  "passed": true,
+  "threshold": 0.95,
+  "checks": [
+    {
+      "name": "code-correctness",
+      "score": 0.98,
+      "passed": true
+    },
+    {
+      "name": "security",
+      "score": 0.91,
+      "passed": false,
+      "issues": ["..."]
+    }
+  ]
+}
+```
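+
+In a pipeline, this structure can be consumed with a short Python snippet (`jq` works equally well). A sketch, with the file contents mirroring the example above:
+
+```bash
+# List checks that did not pass, given the JSON layout shown above.
+cat > verification.json <<'EOF'
+{"overallScore": 0.947, "passed": true, "threshold": 0.95,
+ "checks": [{"name": "code-correctness", "score": 0.98, "passed": true},
+            {"name": "security", "score": 0.91, "passed": false, "issues": ["..."]}]}
+EOF
+python3 -c '
+import json
+data = json.load(open("verification.json"))
+failed = [c["name"] for c in data["checks"] if not c["passed"]]
+print("failed checks:", ", ".join(failed) or "none")
+'
+```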