chore(agent): configure AI agents and tools

Add configuration for BMad, Claude, OpenCode, and other AI agent tools and workflows.
commit ad8e2e313e
parent 9c2181f743
Date: 2026-02-11 11:51:23 +08:00
977 changed files with 157625 additions and 0 deletions


@@ -0,0 +1,14 @@
---
name: 'agent-builder'
description: 'agent-builder agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmb/agents/agent-builder.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>
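Each of the agent stubs above is just frontmatter plus a body that points at the real agent definition. A minimal sketch of how a launcher might parse such a stub and resolve the referenced file — `parse_stub` and this parsing approach are illustrative assumptions, not part of BMad itself:

```python
import re

def parse_stub(text: str) -> dict:
    """Split a command stub into its '---'-delimited frontmatter and body."""
    match = re.match(r"---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip("'")
    return {"meta": meta, "body": match.group(2)}

stub = """---
name: 'agent-builder'
description: 'agent-builder agent'
---
LOAD the FULL agent file from @_bmad/bmb/agents/agent-builder.md
"""

parsed = parse_stub(stub)
print(parsed["meta"]["name"])  # agent-builder

# The body names the real agent file to load (the @-prefixed path):
ref = re.search(r"@(\S+\.md)", parsed["body"]).group(1)
print(ref)  # _bmad/bmb/agents/agent-builder.md
```

The stubs themselves carry no behavior; everything lives in the referenced `.md` file, which is why every stub repeats the same activation instructions verbatim.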


@@ -0,0 +1,14 @@
---
name: 'module-builder'
description: 'module-builder agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmb/agents/module-builder.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'workflow-builder'
description: 'workflow-builder agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmb/agents/workflow-builder.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'analyst'
description: 'analyst agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/analyst.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'architect'
description: 'architect agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/architect.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'dev'
description: 'dev agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/dev.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'pm'
description: 'pm agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/pm.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'quick-flow-solo-dev'
description: 'quick-flow-solo-dev agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/quick-flow-solo-dev.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'sm'
description: 'sm agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/sm.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'tea'
description: 'tea agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/tea.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'tech-writer'
description: 'tech-writer agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/tech-writer.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'ux-designer'
description: 'ux-designer agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/bmm/agents/ux-designer.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'brainstorming-coach'
description: 'brainstorming-coach agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/cis/agents/brainstorming-coach.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'creative-problem-solver'
description: 'creative-problem-solver agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/cis/agents/creative-problem-solver.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'design-thinking-coach'
description: 'design-thinking-coach agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/cis/agents/design-thinking-coach.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'innovation-strategist'
description: 'innovation-strategist agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/cis/agents/innovation-strategist.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'presentation-master'
description: 'presentation-master agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/cis/agents/presentation-master.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'storyteller'
description: 'storyteller agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/cis/agents/storyteller/storyteller.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,14 @@
---
name: 'bmad-master'
description: 'bmad-master agent'
---
You must fully embody this agent's persona and follow all activation instructions exactly as specified. NEVER break character until given an exit command.
<agent-activation CRITICAL="TRUE">
1. LOAD the FULL agent file from @_bmad/core/agents/bmad-master.md
2. READ its entire contents - this contains the complete agent persona, menu, and instructions
3. Execute ALL activation steps exactly as written in the agent file
4. Follow the agent's persona and menu system precisely
5. Stay in character throughout the session
</agent-activation>


@@ -0,0 +1,5 @@
---
description: 'Tri-modal workflow for creating, editing, and validating BMAD Core compliant agents'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmb/workflows/agent/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Quad-modal workflow for creating BMAD modules (Brief + Create + Edit + Validate)'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmb/workflows/module/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Create structured standalone workflows using markdown-based step architecture (tri-modal: create, validate, edit)'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmb/workflows/workflow/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Critical validation workflow that assesses PRD, Architecture, and Epics & Stories for completeness and alignment before implementation. Uses adversarial review approach to find gaps and issues.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/3-solutioning/check-implementation-readiness/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,13 @@
---
description: 'Perform an ADVERSARIAL Senior Developer code review that finds 3-10 specific problems in every story. Challenges everything: code quality, test coverage, architecture compliance, security, performance. NEVER accepts `looks good` - must find a minimum number of issues and can auto-fix with user approval.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/code-review/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/code-review/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>
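The five steps above describe a single dispatch pattern used by every workflow command in this commit: one shared engine (`workflow.xml`) is executed with the per-workflow YAML path passed in as a `workflow-config` parameter. A hypothetical sketch of that shape — the `run_workflow`/`execute` names are assumptions for illustration, not a BMad API:

```python
def run_workflow(engine_source: str, config_path: str, execute):
    """Model the dispatch described in steps 1-4.

    Steps 1-2: the caller has loaded _bmad/core/tasks/workflow.xml
    into `engine_source`. Step 3: the workflow's YAML path is handed
    to the engine as the 'workflow-config' parameter. Step 4: the
    engine (`execute` here) interprets both.
    """
    return execute(engine_source, workflow_config=config_path)

calls = []

def fake_execute(engine, workflow_config):
    # Stand-in for whatever actually interprets workflow.xml.
    calls.append(workflow_config)
    return "done"

result = run_workflow(
    "<workflow>...</workflow>",  # stands in for workflow.xml contents
    "_bmad/bmm/workflows/4-implementation/code-review/workflow.yaml",
    fake_execute,
)
print(result)    # done
print(calls[0])  # _bmad/bmm/workflows/4-implementation/code-review/workflow.yaml
```

Because only the YAML path varies, the remaining `<steps>` blocks in this commit are the same template instantiated for different workflow configs.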


@@ -0,0 +1,13 @@
---
description: 'Navigate significant changes during sprint execution by analyzing impact, proposing solutions, and routing for implementation'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/correct-course/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/correct-course/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,5 @@
---
description: 'Collaborative architectural decision facilitation for AI-agent consistency. Replaces template-driven architecture with intelligent, adaptive conversation that produces a decision-focused architecture document optimized for preventing agent conflicts.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/3-solutioning/create-architecture/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Transform PRD requirements and Architecture decisions into comprehensive stories organized by user value. This workflow requires completed PRD + Architecture documents (UX recommended if UI exists) and breaks down requirements into implementation-ready epics and user stories that incorporate all available technical and design context. Creates detailed, actionable stories with complete acceptance criteria for development teams.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/3-solutioning/create-epics-and-stories/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,13 @@
---
description: 'Create data flow diagrams (DFD) in Excalidraw format'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/excalidraw-diagrams/create-dataflow/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/excalidraw-diagrams/create-dataflow/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Create system architecture diagrams, ERDs, UML diagrams, or general technical diagrams in Excalidraw format'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/excalidraw-diagrams/create-diagram/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/excalidraw-diagrams/create-diagram/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Create a flowchart visualization in Excalidraw format for processes, pipelines, or logic flows'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/excalidraw-diagrams/create-flowchart/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/excalidraw-diagrams/create-flowchart/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Create website or app wireframes in Excalidraw format'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/excalidraw-diagrams/create-wireframe/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/excalidraw-diagrams/create-wireframe/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,5 @@
---
description: 'Create comprehensive product briefs through collaborative step-by-step discovery, acting as a creative Business Analyst working with the user as a peer.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/1-analysis/create-product-brief/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,13 @@
---
description: 'Create the next user story from epics+stories with enhanced context analysis and direct ready-for-dev marking'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/create-story/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/create-story/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,5 @@
---
description: 'Work with a peer UX Design expert to plan the UX patterns, look, and feel of your application.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/2-plan-workflows/create-ux-design/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,13 @@
---
description: 'Execute a story by implementing tasks/subtasks, writing tests, validating, and updating the story file per acceptance criteria'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Analyzes and documents brownfield projects by scanning the codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/document-project/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/document-project/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,5 @@
---
description: 'Creates a concise project-context.md file with critical rules and patterns that AI agents must follow when implementing code. Optimized for LLM context efficiency.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/generate-project-context/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'PRD tri-modal workflow - Create, Validate, or Edit comprehensive PRDs'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/2-plan-workflows/prd/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Flexible development - execute tech-specs OR direct instructions with optional planning.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/bmad-quick-flow/quick-dev/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Conversational spec engineering - ask questions, investigate code, produce implementation-ready tech-spec.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/bmad-quick-flow/quick-spec/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,5 @@
---
description: 'Conduct comprehensive research across multiple domains using current web data and verified sources - Market, Technical, Domain and other research types.'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/bmm/workflows/1-analysis/research/workflow.md, READ its entire contents and follow its directions exactly!


@@ -0,0 +1,13 @@
---
description: 'Run after epic completion to review overall success, extract lessons learned, and explore if new information emerged that might impact the next epic'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/retrospective/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/retrospective/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Generate and manage the sprint status tracking file for Phase 4 implementation, extracting all epics and stories from epic files and tracking their status through the development lifecycle'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/sprint-planning/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/sprint-planning/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Summarize sprint-status.yaml, surface risks, and route to the right implementation workflow.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/4-implementation/sprint-status/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/4-implementation/sprint-status/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Generate failing acceptance tests before implementation using TDD red-green-refactor cycle'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/atdd/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/atdd/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Expand test automation coverage after implementation or analyze existing codebase to generate comprehensive test suite'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/automate/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/automate/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Scaffold CI/CD quality pipeline with test execution, burn-in loops, and artifact collection'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/ci/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/ci/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Initialize production-ready test framework architecture (Playwright or Cypress) with fixtures, helpers, and configuration'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/framework/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/framework/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>


@@ -0,0 +1,13 @@
---
description: 'Assess non-functional requirements (performance, security, reliability, maintainability) before release with evidence-based validation'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/nfr-assess/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/nfr-assess/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Dual-mode workflow: (1) System-level testability review in Solutioning phase, or (2) Epic-level test planning in Implementation phase. Auto-detects mode based on project phase.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/test-design/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/test-design/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Review test quality using comprehensive knowledge base and best practices validation'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/test-review/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/test-review/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Generate requirements-to-tests traceability matrix, analyze coverage, and make quality gate decision (PASS/CONCERNS/FAIL/WAIVED)'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/testarch/trace/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/testarch/trace/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Initialize a new BMM project by determining level, type, and creating workflow path'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/workflow-status/init/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/workflow-status/init/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Lightweight status checker - answers "what should I do now?" for any agent. Reads YAML status file for workflow tracking. Use workflow-init for new projects.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/bmm/workflows/workflow-status/workflow.yaml
3. Pass the yaml path _bmad/bmm/workflows/workflow-status/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Guide human-centered design processes using empathy-driven methodologies. This workflow walks through the design thinking phases - Empathize, Define, Ideate, Prototype, and Test - to create solutions deeply rooted in user needs.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/cis/workflows/design-thinking/workflow.yaml
3. Pass the yaml path _bmad/cis/workflows/design-thinking/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Identify disruption opportunities and architect business model innovation. This workflow guides strategic analysis of markets, competitive dynamics, and business model innovation to uncover sustainable competitive advantages and breakthrough opportunities.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/cis/workflows/innovation-strategy/workflow.yaml
3. Pass the yaml path _bmad/cis/workflows/innovation-strategy/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Apply systematic problem-solving methodologies to crack complex challenges. This workflow guides through problem diagnosis, root cause analysis, creative solution generation, evaluation, and implementation planning using proven frameworks.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/cis/workflows/problem-solving/workflow.yaml
3. Pass the yaml path _bmad/cis/workflows/problem-solving/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,13 @@
---
description: 'Craft compelling narratives using proven story frameworks and techniques. This workflow guides users through structured narrative development, applying appropriate story frameworks to create emotionally resonant and engaging stories for any purpose.'
---
IT IS CRITICAL THAT YOU FOLLOW THESE STEPS - while staying in character as the current agent persona you may have loaded:
<steps CRITICAL="TRUE">
1. Always LOAD the FULL @_bmad/core/tasks/workflow.xml
2. READ its entire contents - this is the CORE OS for EXECUTING the specific workflow-config @_bmad/cis/workflows/storytelling/workflow.yaml
3. Pass the yaml path _bmad/cis/workflows/storytelling/workflow.yaml as 'workflow-config' parameter to the workflow.xml instructions
4. Follow workflow.xml instructions EXACTLY as written to process and follow the specific workflow config and its instructions
5. Save outputs after EACH section when generating any documents from templates
</steps>

View File

@@ -0,0 +1,5 @@
---
description: 'Facilitate interactive brainstorming sessions using diverse creative techniques and ideation methods'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/core/workflows/brainstorming/workflow.md, READ its entire contents and follow its directions exactly!

View File

@@ -0,0 +1,5 @@
---
description: 'Orchestrates group discussions between all installed BMAD agents, enabling natural multi-agent conversations'
---
IT IS CRITICAL THAT YOU FOLLOW THIS COMMAND: LOAD the FULL @_bmad/core/workflows/party-mode/workflow.md, READ its entire contents and follow its directions exactly!

View File

@@ -0,0 +1,9 @@
---
description: 'Generates or updates an index.md of all documents in the specified directory'
---
# Index Docs
LOAD and execute the task at: _bmad/core/tasks/index-docs.xml
Follow all instructions in the task file exactly as written.

View File

@@ -0,0 +1,9 @@
---
description: 'Splits large markdown documents into smaller, organized files based on level 2 (default) sections'
---
# Shard Document
LOAD and execute the task at: _bmad/core/tasks/shard-doc.xml
Follow all instructions in the task file exactly as written.

1
.opencode/skill Symbolic link
View File

@@ -0,0 +1 @@
../.agent/skills

View File

@@ -0,0 +1,125 @@
---
name: Confidence Check
description: Pre-implementation confidence assessment (≥90% required). Use before starting any implementation to verify readiness with duplicate check, architecture compliance, official docs verification, OSS references, and root cause identification.
allowed-tools: Read, Grep, Glob, WebFetch, WebSearch
---
# Confidence Check Skill
## Purpose
Prevents wrong-direction execution by assessing confidence **BEFORE** starting implementation.
**Requirement**: ≥90% confidence to proceed with implementation.
**Test Results** (2025-10-21):
- Precision: 1.000 (no false positives)
- Recall: 1.000 (no false negatives)
- 8/8 test cases passed
## When to Use
Use this skill BEFORE implementing any task to ensure:
- No duplicate implementations exist
- Architecture compliance verified
- Official documentation reviewed
- Working OSS implementations found
- Root cause properly identified
## Confidence Assessment Criteria
Calculate confidence score (0.0 - 1.0) based on 5 checks:
### 1. No Duplicate Implementations? (25%)
**Check**: Search codebase for existing functionality
```bash
# Use Grep to search for similar functions
# Use Glob to find related modules
```
✅ Pass if no duplicates found
❌ Fail if similar implementation exists
### 2. Architecture Compliance? (25%)
**Check**: Verify tech stack alignment
- Read `CLAUDE.md`, `PLANNING.md`
- Confirm existing patterns used
- Avoid reinventing existing solutions
✅ Pass if uses existing tech stack (e.g., Supabase, UV, pytest)
❌ Fail if introduces new dependencies unnecessarily
### 3. Official Documentation Verified? (20%)
**Check**: Review official docs before implementation
- Use Context7 MCP for official docs
- Use WebFetch for documentation URLs
- Verify API compatibility
✅ Pass if official docs reviewed
❌ Fail if relying on assumptions
### 4. Working OSS Implementations Referenced? (15%)
**Check**: Find proven implementations
- Use Tavily MCP or WebSearch
- Search GitHub for examples
- Verify working code samples
✅ Pass if OSS reference found
❌ Fail if no working examples
### 5. Root Cause Identified? (15%)
**Check**: Understand the actual problem
- Analyze error messages
- Check logs and stack traces
- Identify underlying issue
✅ Pass if root cause clear
❌ Fail if symptoms unclear
## Confidence Score Calculation
```
Total = Check1 (25%) + Check2 (25%) + Check3 (20%) + Check4 (15%) + Check5 (15%)
If Total >= 0.90: ✅ Proceed with implementation
If Total >= 0.70: ⚠️ Present alternatives, ask questions
If Total < 0.70: ❌ STOP - Request more context
```
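The weighted sum above can be sketched as a standalone function. This is an illustrative sketch only; the actual implementation lives in `confidence.ts`, and the field names below are hypothetical shorthand for the five checks:

```typescript
// Minimal standalone sketch of the weighted confidence score.
// Field names are illustrative; see confidence.ts for the real Context flags.
type Flags = {
  noDuplicates: boolean;   // Check 1 (25%)
  architectureOk: boolean; // Check 2 (25%)
  officialDocs: boolean;   // Check 3 (20%)
  ossReference: boolean;   // Check 4 (15%)
  rootCause: boolean;      // Check 5 (15%)
};

function scoreConfidence(f: Flags): number {
  return (
    (f.noDuplicates ? 0.25 : 0) +
    (f.architectureOk ? 0.25 : 0) +
    (f.officialDocs ? 0.2 : 0) +
    (f.ossReference ? 0.15 : 0) +
    (f.rootCause ? 0.15 : 0)
  );
}

// Four of five checks pass: 0.25 + 0.25 + 0.20 + 0.15 = 0.85, below the 0.90 bar,
// so the skill stops short of implementation and keeps investigating.
const score = scoreConfidence({
  noDuplicates: true,
  architectureOk: true,
  officialDocs: true,
  ossReference: true,
  rootCause: false,
});
console.log(score >= 0.9 ? "✅ Proceed" : "⚠️ Keep investigating");
```

Note that a single failed 25% check is enough to drop below the threshold, which is the intended behavior: duplicates and architecture violations are the most expensive mistakes to unwind.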
## Output Format
```
📋 Confidence Checks:
✅ No duplicate implementations found
✅ Uses existing tech stack
✅ Official documentation verified
✅ Working OSS implementation found
✅ Root cause identified
📊 Confidence: 1.00 (100%)
✅ High confidence - Proceeding to implementation
```
## Implementation Details
The TypeScript implementation is available in `confidence.ts` for reference, containing:
- `confidenceCheck(context)` - Main assessment function
- Detailed check implementations
- Context interface definitions
## ROI
**Token Savings**: Spend 100-200 tokens on confidence check to save 5,000-50,000 tokens on wrong-direction work.
**Success Rate**: 100% precision and recall in production testing.

View File

@@ -0,0 +1,171 @@
/**
* Confidence Check - Pre-implementation confidence assessment
*
* Prevents wrong-direction execution by assessing confidence BEFORE starting.
* Requires ≥90% confidence to proceed with implementation.
*
* Test Results (2025-10-21):
* - Precision: 1.000 (no false positives)
* - Recall: 1.000 (no false negatives)
* - 8/8 test cases passed
*/
export interface Context {
task?: string;
duplicate_check_complete?: boolean;
architecture_check_complete?: boolean;
official_docs_verified?: boolean;
oss_reference_complete?: boolean;
root_cause_identified?: boolean;
confidence_checks?: string[];
[key: string]: any;
}
/**
* Assess confidence level (0.0 - 1.0)
*
* Investigation Phase Checks:
* 1. No duplicate implementations? (25%)
* 2. Architecture compliance? (25%)
* 3. Official documentation verified? (20%)
* 4. Working OSS implementations referenced? (15%)
* 5. Root cause identified? (15%)
*
* @param context - Task context with investigation flags
* @returns Confidence score (0.0 = no confidence, 1.0 = absolute certainty)
*/
export async function confidenceCheck(context: Context): Promise<number> {
let score = 0.0;
const checks: string[] = [];
// Check 1: No duplicate implementations (25%)
if (noDuplicates(context)) {
score += 0.25;
checks.push("✅ No duplicate implementations found");
} else {
checks.push("❌ Check for existing implementations first");
}
// Check 2: Architecture compliance (25%)
if (architectureCompliant(context)) {
score += 0.25;
checks.push("✅ Uses existing tech stack (e.g., Supabase)");
} else {
checks.push("❌ Verify architecture compliance (avoid reinventing)");
}
// Check 3: Official documentation verified (20%)
if (hasOfficialDocs(context)) {
score += 0.2;
checks.push("✅ Official documentation verified");
} else {
checks.push("❌ Read official docs first");
}
// Check 4: Working OSS implementations referenced (15%)
if (hasOssReference(context)) {
score += 0.15;
checks.push("✅ Working OSS implementation found");
} else {
checks.push("❌ Search for OSS implementations");
}
// Check 5: Root cause identified (15%)
if (rootCauseIdentified(context)) {
score += 0.15;
checks.push("✅ Root cause identified");
} else {
checks.push("❌ Continue investigation to identify root cause");
}
// Store check results
context.confidence_checks = checks;
// Display checks
console.log("📋 Confidence Checks:");
checks.forEach((check) => console.log(` ${check}`));
console.log("");
return score;
}
/**
* Check for duplicate implementations
*
* Before implementing, verify:
* - No existing similar functions/modules (Glob/Grep)
* - No helper functions that solve the same problem
* - No libraries that provide this functionality
*/
function noDuplicates(context: Context): boolean {
return context.duplicate_check_complete ?? false;
}
/**
* Check architecture compliance
*
* Verify solution uses existing tech stack:
* - Supabase project → Use Supabase APIs (not custom API)
* - Next.js project → Use Next.js patterns (not custom routing)
* - Turborepo → Use workspace patterns (not manual scripts)
*/
function architectureCompliant(context: Context): boolean {
return context.architecture_check_complete ?? false;
}
/**
* Check if official documentation verified
*
* For testing: uses context flag 'official_docs_verified'
* For production: checks for README.md, CLAUDE.md, docs/ directory
*/
function hasOfficialDocs(context: Context): boolean {
// Check context flag (for testing and runtime)
if ("official_docs_verified" in context) {
return context.official_docs_verified ?? false;
}
// Fallback: check for documentation files (production)
// This would require filesystem access in Node.js
return false;
}
/**
* Check if working OSS implementations referenced
*
* Search for:
* - Similar open-source solutions
* - Reference implementations in popular projects
* - Community best practices
*/
function hasOssReference(context: Context): boolean {
return context.oss_reference_complete ?? false;
}
/**
* Check if root cause is identified with high certainty
*
* Verify:
* - Problem source pinpointed (not guessing)
* - Solution addresses root cause (not symptoms)
* - Fix verified against official docs/OSS patterns
*/
function rootCauseIdentified(context: Context): boolean {
return context.root_cause_identified ?? false;
}
/**
* Get recommended action based on confidence level
*
* @param confidence - Confidence score (0.0 - 1.0)
* @returns Recommended action
*/
export function getRecommendation(confidence: number): string {
if (confidence >= 0.9) {
return "✅ High confidence (≥90%) - Proceed with implementation";
}
if (confidence >= 0.7) {
return "⚠️ Medium confidence (70-89%) - Continue investigation, DO NOT implement yet";
}
return "❌ Low confidence (<70%) - STOP and continue investigation loop";
}

View File

@@ -0,0 +1,356 @@
---
name: agent-browser
description: Automates browser interactions for web testing, form filling, screenshots, and data extraction. Use when the user needs to navigate websites, interact with web pages, fill forms, take screenshots, test web applications, or extract information from web pages.
allowed-tools: Bash(agent-browser:*)
---
# Browser Automation with agent-browser
## Quick start
```bash
agent-browser open <url> # Navigate to page
agent-browser snapshot -i # Get interactive elements with refs
agent-browser click @e1 # Click element by ref
agent-browser fill @e2 "text" # Fill input by ref
agent-browser close # Close browser
```
## Core workflow
1. Navigate: `agent-browser open <url>`
2. Snapshot: `agent-browser snapshot -i` (returns elements with refs like `@e1`, `@e2`)
3. Interact using refs from the snapshot
4. Re-snapshot after navigation or significant DOM changes
## Commands
### Navigation
```bash
agent-browser open <url> # Navigate to URL (aliases: goto, navigate)
# Supports: https://, http://, file://, about:, data://
# Auto-prepends https:// if no protocol given
agent-browser back # Go back
agent-browser forward # Go forward
agent-browser reload # Reload page
agent-browser close # Close browser (aliases: quit, exit)
agent-browser connect 9222 # Connect to browser via CDP port
```
### Snapshot (page analysis)
```bash
agent-browser snapshot # Full accessibility tree
agent-browser snapshot -i # Interactive elements only (recommended)
agent-browser snapshot -c # Compact output
agent-browser snapshot -d 3 # Limit depth to 3
agent-browser snapshot -s "#main" # Scope to CSS selector
```
### Interactions (use @refs from snapshot)
```bash
agent-browser click @e1 # Click
agent-browser dblclick @e1 # Double-click
agent-browser focus @e1 # Focus element
agent-browser fill @e2 "text" # Clear and type
agent-browser type @e2 "text" # Type without clearing
agent-browser press Enter # Press key (alias: key)
agent-browser press Control+a # Key combination
agent-browser keydown Shift # Hold key down
agent-browser keyup Shift # Release key
agent-browser hover @e1 # Hover
agent-browser check @e1 # Check checkbox
agent-browser uncheck @e1 # Uncheck checkbox
agent-browser select @e1 "value" # Select dropdown option
agent-browser select @e1 "a" "b" # Select multiple options
agent-browser scroll down 500 # Scroll page (default: down 300px)
agent-browser scrollintoview @e1 # Scroll element into view (alias: scrollinto)
agent-browser drag @e1 @e2 # Drag and drop
agent-browser upload @e1 file.pdf # Upload files
```
### Get information
```bash
agent-browser get text @e1 # Get element text
agent-browser get html @e1 # Get innerHTML
agent-browser get value @e1 # Get input value
agent-browser get attr @e1 href # Get attribute
agent-browser get title # Get page title
agent-browser get url # Get current URL
agent-browser get count ".item" # Count matching elements
agent-browser get box @e1 # Get bounding box
agent-browser get styles @e1 # Get computed styles (font, color, bg, etc.)
```
### Check state
```bash
agent-browser is visible @e1 # Check if visible
agent-browser is enabled @e1 # Check if enabled
agent-browser is checked @e1 # Check if checked
```
### Screenshots & PDF
```bash
agent-browser screenshot # Save to a temporary directory
agent-browser screenshot path.png # Save to a specific path
agent-browser screenshot --full # Full page
agent-browser pdf output.pdf # Save as PDF
```
### Video recording
```bash
agent-browser record start ./demo.webm # Start recording (uses current URL + state)
agent-browser click @e1 # Perform actions
agent-browser record stop # Stop and save video
agent-browser record restart ./take2.webm # Stop current + start new recording
```
Recording creates a fresh context but preserves cookies/storage from your session. If no URL is provided, it automatically returns to your current page. For smooth demos, explore first, then start recording.
### Wait
```bash
agent-browser wait @e1 # Wait for element
agent-browser wait 2000 # Wait milliseconds
agent-browser wait --text "Success" # Wait for text (or -t)
agent-browser wait --url "**/dashboard" # Wait for URL pattern (or -u)
agent-browser wait --load networkidle # Wait for network idle (or -l)
agent-browser wait --fn "window.ready" # Wait for JS condition (or -f)
```
### Mouse control
```bash
agent-browser mouse move 100 200 # Move mouse
agent-browser mouse down left # Press button
agent-browser mouse up left # Release button
agent-browser mouse wheel 100 # Scroll wheel
```
### Semantic locators (alternative to refs)
```bash
agent-browser find role button click --name "Submit"
agent-browser find text "Sign In" click
agent-browser find text "Sign In" click --exact # Exact match only
agent-browser find label "Email" fill "user@test.com"
agent-browser find placeholder "Search" type "query"
agent-browser find alt "Logo" click
agent-browser find title "Close" click
agent-browser find testid "submit-btn" click
agent-browser find first ".item" click
agent-browser find last ".item" click
agent-browser find nth 2 "a" hover
```
### Browser settings
```bash
agent-browser set viewport 1920 1080 # Set viewport size
agent-browser set device "iPhone 14" # Emulate device
agent-browser set geo 37.7749 -122.4194 # Set geolocation (alias: geolocation)
agent-browser set offline on # Toggle offline mode
agent-browser set headers '{"X-Key":"v"}' # Extra HTTP headers
agent-browser set credentials user pass # HTTP basic auth (alias: auth)
agent-browser set media dark # Emulate color scheme
agent-browser set media light reduced-motion # Light mode + reduced motion
```
### Cookies & Storage
```bash
agent-browser cookies # Get all cookies
agent-browser cookies set name value # Set cookie
agent-browser cookies clear # Clear cookies
agent-browser storage local # Get all localStorage
agent-browser storage local key # Get specific key
agent-browser storage local set k v # Set value
agent-browser storage local clear # Clear all
```
### Network
```bash
agent-browser network route <url> # Intercept requests
agent-browser network route <url> --abort # Block requests
agent-browser network route <url> --body '{}' # Mock response
agent-browser network unroute [url] # Remove routes
agent-browser network requests # View tracked requests
agent-browser network requests --filter api # Filter requests
```
### Tabs & Windows
```bash
agent-browser tab # List tabs
agent-browser tab new [url] # New tab
agent-browser tab 2 # Switch to tab by index
agent-browser tab close # Close current tab
agent-browser tab close 2 # Close tab by index
agent-browser window new # New window
```
### Frames
```bash
agent-browser frame "#iframe" # Switch to iframe
agent-browser frame main # Back to main frame
```
### Dialogs
```bash
agent-browser dialog accept [text] # Accept dialog
agent-browser dialog dismiss # Dismiss dialog
```
### JavaScript
```bash
agent-browser eval "document.title" # Run JavaScript
```
## Global options
```bash
agent-browser --session <name> ... # Isolated browser session
agent-browser --json ... # JSON output for parsing
agent-browser --headed ... # Show browser window (not headless)
agent-browser --full ... # Full page screenshot (-f)
agent-browser --cdp <port> ... # Connect via Chrome DevTools Protocol
agent-browser -p <provider> ... # Cloud browser provider (--provider)
agent-browser --proxy <url> ... # Use proxy server
agent-browser --headers <json> ... # HTTP headers scoped to URL's origin
agent-browser --executable-path <p> # Custom browser executable
agent-browser --extension <path> ... # Load browser extension (repeatable)
agent-browser --help # Show help (-h)
agent-browser --version # Show version (-V)
agent-browser <command> --help # Show detailed help for a command
```
### Proxy support
```bash
agent-browser --proxy http://proxy.com:8080 open example.com
agent-browser --proxy http://user:pass@proxy.com:8080 open example.com
agent-browser --proxy socks5://proxy.com:1080 open example.com
```
## Environment variables
```bash
AGENT_BROWSER_SESSION="mysession" # Default session name
AGENT_BROWSER_EXECUTABLE_PATH="/path/chrome" # Custom browser path
AGENT_BROWSER_EXTENSIONS="/ext1,/ext2" # Comma-separated extension paths
AGENT_BROWSER_PROVIDER="your-cloud-browser-provider" # Cloud browser provider ("browseruse" or "browserbase")
AGENT_BROWSER_STREAM_PORT="9223" # WebSocket streaming port
AGENT_BROWSER_HOME="/path/to/agent-browser" # Custom install location (for daemon.js)
```
## Example: Form submission
```bash
agent-browser open https://example.com/form
agent-browser snapshot -i
# Output shows: textbox "Email" [ref=e1], textbox "Password" [ref=e2], button "Submit" [ref=e3]
agent-browser fill @e1 "user@example.com"
agent-browser fill @e2 "password123"
agent-browser click @e3
agent-browser wait --load networkidle
agent-browser snapshot -i # Check result
```
## Example: Authentication with saved state
```bash
# Login once
agent-browser open https://app.example.com/login
agent-browser snapshot -i
agent-browser fill @e1 "username"
agent-browser fill @e2 "password"
agent-browser click @e3
agent-browser wait --url "**/dashboard"
agent-browser state save auth.json
# Later sessions: load saved state
agent-browser state load auth.json
agent-browser open https://app.example.com/dashboard
```
## Sessions (parallel browsers)
```bash
agent-browser --session test1 open site-a.com
agent-browser --session test2 open site-b.com
agent-browser session list
```
## JSON output (for parsing)
Add `--json` for machine-readable output:
```bash
agent-browser snapshot -i --json
agent-browser get text @e1 --json
```
## Debugging
```bash
agent-browser --headed open example.com # Show browser window
agent-browser --cdp 9222 snapshot # Connect via CDP port
agent-browser connect 9222 # Alternative: connect command
agent-browser console # View console messages
agent-browser console --clear # Clear console
agent-browser errors # View page errors
agent-browser errors --clear # Clear errors
agent-browser highlight @e1 # Highlight element
agent-browser trace start # Start recording trace
agent-browser trace stop trace.zip # Stop and save trace
agent-browser record start ./debug.webm # Record video from current page
agent-browser record stop # Save recording
```
## Deep-dive documentation
For detailed patterns and best practices, see:
| Reference | Description |
|-----------|-------------|
| [references/snapshot-refs.md](references/snapshot-refs.md) | Ref lifecycle, invalidation rules, troubleshooting |
| [references/session-management.md](references/session-management.md) | Parallel sessions, state persistence, concurrent scraping |
| [references/authentication.md](references/authentication.md) | Login flows, OAuth, 2FA handling, state reuse |
| [references/video-recording.md](references/video-recording.md) | Recording workflows for debugging and documentation |
| [references/proxy-support.md](references/proxy-support.md) | Proxy configuration, geo-testing, rotating proxies |
## Ready-to-use templates
Executable workflow scripts for common patterns:
| Template | Description |
|----------|-------------|
| [templates/form-automation.sh](templates/form-automation.sh) | Form filling with validation |
| [templates/authenticated-session.sh](templates/authenticated-session.sh) | Login once, reuse state |
| [templates/capture-workflow.sh](templates/capture-workflow.sh) | Content extraction with screenshots |
Usage:
```bash
./templates/form-automation.sh https://example.com/form
./templates/authenticated-session.sh https://app.example.com/login
./templates/capture-workflow.sh https://example.com ./output
```
## HTTPS Certificate Errors
For sites with self-signed or invalid certificates:
```bash
agent-browser open https://localhost:8443 --ignore-https-errors
```

View File

@@ -0,0 +1,188 @@
# Authentication Patterns
Patterns for handling login flows, session persistence, and authenticated browsing.
## Basic Login Flow
```bash
# Navigate to login page
agent-browser open https://app.example.com/login
agent-browser wait --load networkidle
# Get form elements
agent-browser snapshot -i
# Output: @e1 [input type="email"], @e2 [input type="password"], @e3 [button] "Sign In"
# Fill credentials
agent-browser fill @e1 "user@example.com"
agent-browser fill @e2 "password123"
# Submit
agent-browser click @e3
agent-browser wait --load networkidle
# Verify login succeeded
agent-browser get url # Should be dashboard, not login
```
## Saving Authentication State
After logging in, save state for reuse:
```bash
# Login first (see above)
agent-browser open https://app.example.com/login
agent-browser snapshot -i
agent-browser fill @e1 "user@example.com"
agent-browser fill @e2 "password123"
agent-browser click @e3
agent-browser wait --url "**/dashboard"
# Save authenticated state
agent-browser state save ./auth-state.json
```
## Restoring Authentication
Skip login by loading saved state:
```bash
# Load saved auth state
agent-browser state load ./auth-state.json
# Navigate directly to protected page
agent-browser open https://app.example.com/dashboard
# Verify authenticated
agent-browser snapshot -i
```
## OAuth / SSO Flows
For OAuth redirects:
```bash
# Start OAuth flow
agent-browser open https://app.example.com/auth/google
# Handle redirects automatically
agent-browser wait --url "**/accounts.google.com**"
agent-browser snapshot -i
# Fill Google credentials
agent-browser fill @e1 "user@gmail.com"
agent-browser click @e2 # Next button
agent-browser wait 2000
agent-browser snapshot -i
agent-browser fill @e3 "password"
agent-browser click @e4 # Sign in
# Wait for redirect back
agent-browser wait --url "**/app.example.com**"
agent-browser state save ./oauth-state.json
```
## Two-Factor Authentication
Handle 2FA with manual intervention:
```bash
# Login with credentials
agent-browser open https://app.example.com/login --headed # Show browser
agent-browser snapshot -i
agent-browser fill @e1 "user@example.com"
agent-browser fill @e2 "password123"
agent-browser click @e3
# Wait for user to complete 2FA manually
echo "Complete 2FA in the browser window..."
agent-browser wait --url "**/dashboard" --timeout 120000
# Save state after 2FA
agent-browser state save ./2fa-state.json
```
## HTTP Basic Auth
For sites using HTTP Basic Authentication:
```bash
# Set credentials before navigation
agent-browser set credentials username password
# Navigate to protected resource
agent-browser open https://protected.example.com/api
```
## Cookie-Based Auth
Manually set authentication cookies:
```bash
# Set auth cookie
agent-browser cookies set session_token "abc123xyz"
# Navigate to protected page
agent-browser open https://app.example.com/dashboard
```
## Token Refresh Handling
For sessions with expiring tokens:
```bash
#!/bin/bash
# Wrapper that handles token refresh
STATE_FILE="./auth-state.json"
# Try loading existing state
if [[ -f "$STATE_FILE" ]]; then
agent-browser state load "$STATE_FILE"
agent-browser open https://app.example.com/dashboard
# Check if session is still valid
URL=$(agent-browser get url)
if [[ "$URL" == *"/login"* ]]; then
echo "Session expired, re-authenticating..."
# Perform fresh login
agent-browser snapshot -i
agent-browser fill @e1 "$USERNAME"
agent-browser fill @e2 "$PASSWORD"
agent-browser click @e3
agent-browser wait --url "**/dashboard"
agent-browser state save "$STATE_FILE"
fi
else
# First-time login
agent-browser open https://app.example.com/login
# ... login flow ...
fi
```
## Security Best Practices
1. **Never commit state files** - They contain session tokens
```bash
echo "*-state.json" >> .gitignore  # matches auth-state.json, oauth-state.json, 2fa-state.json
```
2. **Use environment variables for credentials**
```bash
agent-browser fill @e1 "$APP_USERNAME"
agent-browser fill @e2 "$APP_PASSWORD"
```
3. **Clean up after automation**
```bash
agent-browser cookies clear
rm -f ./auth-state.json
```
4. **Use short-lived sessions for CI/CD**
```bash
# Don't persist state in CI
agent-browser open https://app.example.com/login
# ... login and perform actions ...
agent-browser close # Session ends, nothing persisted
```

View File

@@ -0,0 +1,175 @@
# Proxy Support
Configure proxy servers for browser automation, useful for geo-location testing, rate-limit avoidance, and corporate environments.
## Basic Proxy Configuration
Set proxy via environment variable before starting:
```bash
# HTTP proxy
export HTTP_PROXY="http://proxy.example.com:8080"
agent-browser open https://example.com
# HTTPS proxy
export HTTPS_PROXY="https://proxy.example.com:8080"
agent-browser open https://example.com
# Both
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
agent-browser open https://example.com
```
## Authenticated Proxy
For proxies requiring authentication:
```bash
# Include credentials in URL
export HTTP_PROXY="http://username:password@proxy.example.com:8080"
agent-browser open https://example.com
```
## SOCKS Proxy
```bash
# SOCKS5 proxy
export ALL_PROXY="socks5://proxy.example.com:1080"
agent-browser open https://example.com
# SOCKS5 with auth
export ALL_PROXY="socks5://user:pass@proxy.example.com:1080"
agent-browser open https://example.com
```
## Proxy Bypass
Skip proxy for specific domains:
```bash
# Bypass proxy for local addresses
export NO_PROXY="localhost,127.0.0.1,.internal.company.com"
agent-browser open https://internal.company.com # Direct connection
agent-browser open https://external.com # Via proxy
```
## Common Use Cases
### Geo-Location Testing
```bash
#!/bin/bash
# Test site from different regions using geo-located proxies
PROXIES=(
"http://us-proxy.example.com:8080"
"http://eu-proxy.example.com:8080"
"http://asia-proxy.example.com:8080"
)
for proxy in "${PROXIES[@]}"; do
export HTTP_PROXY="$proxy"
export HTTPS_PROXY="$proxy"
region=$(echo "$proxy" | sed -E 's|https?://([a-z]+)-.*|\1|')  # "us", "eu", "asia"
echo "Testing from: $region"
agent-browser --session "$region" open https://example.com
agent-browser --session "$region" screenshot "./screenshots/$region.png"
agent-browser --session "$region" close
done
```
### Rotating Proxies for Scraping
```bash
#!/bin/bash
# Rotate through proxy list to avoid rate limiting
PROXY_LIST=(
"http://proxy1.example.com:8080"
"http://proxy2.example.com:8080"
"http://proxy3.example.com:8080"
)
URLS=(
"https://site.com/page1"
"https://site.com/page2"
"https://site.com/page3"
)
for i in "${!URLS[@]}"; do
proxy_index=$((i % ${#PROXY_LIST[@]}))
export HTTP_PROXY="${PROXY_LIST[$proxy_index]}"
export HTTPS_PROXY="${PROXY_LIST[$proxy_index]}"
agent-browser open "${URLS[$i]}"
agent-browser get text body > "output-$i.txt"
agent-browser close
sleep 1 # Polite delay
done
```
### Corporate Network Access
```bash
#!/bin/bash
# Access internal sites via corporate proxy
export HTTP_PROXY="http://corpproxy.company.com:8080"
export HTTPS_PROXY="http://corpproxy.company.com:8080"
export NO_PROXY="localhost,127.0.0.1,.company.com"
# External sites go through proxy
agent-browser open https://external-vendor.com
# Internal sites bypass proxy
agent-browser open https://intranet.company.com
```
## Verifying Proxy Connection
```bash
# Check your apparent IP
agent-browser open https://httpbin.org/ip
agent-browser get text body
# Should show proxy's IP, not your real IP
```
## Troubleshooting
### Proxy Connection Failed
```bash
# Test proxy connectivity first
curl -x http://proxy.example.com:8080 https://httpbin.org/ip
# Check if proxy requires auth
export HTTP_PROXY="http://user:pass@proxy.example.com:8080"
```
### SSL/TLS Errors Through Proxy
Some proxies perform SSL inspection. If you encounter certificate errors:
```bash
# For testing only - not recommended for production
agent-browser open https://example.com --ignore-https-errors
```
### Slow Performance
```bash
# Use proxy only when necessary
export NO_PROXY=".cdn.com,.static.com" # Direct CDN access (leading dot matches subdomains)
```
## Best Practices
1. **Use environment variables** - Don't hardcode proxy credentials
2. **Set NO_PROXY appropriately** - Avoid routing local traffic through proxy
3. **Test proxy before automation** - Verify connectivity with simple requests
4. **Handle proxy failures gracefully** - Implement retry logic for unstable proxies
5. **Rotate proxies for large scraping jobs** - Distribute load and avoid bans
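Practice 4 can be sketched as a small retry helper. The function name, backoff values, and attempt limit below are illustrative choices, not part of agent-browser itself:

```shell
# Retry helper for flaky proxies (a sketch; tune max attempts and backoff to taste)
open_with_retry() {
  local url="$1" attempts=0 max=3
  until agent-browser open "$url"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max" ]; then
      echo "Giving up on $url after $max attempts" >&2
      return 1
    fi
    sleep "$attempts"  # simple linear backoff between attempts
  done
}
```

Call it as `open_with_retry https://example.com`; on repeated failures it returns non-zero so the calling script can fall back to another proxy.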

View File

@@ -0,0 +1,181 @@
# Session Management
Run multiple isolated browser sessions concurrently with state persistence.
## Named Sessions
Use `--session` flag to isolate browser contexts:
```bash
# Session 1: Authentication flow
agent-browser --session auth open https://app.example.com/login
# Session 2: Public browsing (separate cookies, storage)
agent-browser --session public open https://example.com
# Commands are isolated by session
agent-browser --session auth fill @e1 "user@example.com"
agent-browser --session public get text body
```
## Session Isolation Properties
Each session has independent:
- Cookies
- LocalStorage / SessionStorage
- IndexedDB
- Cache
- Browsing history
- Open tabs
## Session State Persistence
### Save Session State
```bash
# Save cookies, storage, and auth state
agent-browser state save /path/to/auth-state.json
```
### Load Session State
```bash
# Restore saved state
agent-browser state load /path/to/auth-state.json
# Continue with authenticated session
agent-browser open https://app.example.com/dashboard
```
### State File Contents
```json
{
"cookies": [...],
"localStorage": {...},
"sessionStorage": {...},
"origins": [...]
}
```
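To sanity-check a saved state file before reusing it, the JSON can be inspected directly. This is a sketch that assumes the shape shown above and that `python3` is available:

```shell
# Count saved cookies in a state file before deciding to reuse it
STATE=./auth-state.json
if [ -f "$STATE" ]; then
  python3 -c 'import json,sys; d=json.load(open(sys.argv[1])); print("cookies:", len(d.get("cookies", [])))' "$STATE"
else
  echo "No state file yet - log in and run: agent-browser state save $STATE"
fi
```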
## Common Patterns
### Authenticated Session Reuse
```bash
#!/bin/bash
# Save login state once, reuse many times
STATE_FILE="/tmp/auth-state.json"
# Check if we have saved state
if [[ -f "$STATE_FILE" ]]; then
agent-browser state load "$STATE_FILE"
agent-browser open https://app.example.com/dashboard
else
# Perform login
agent-browser open https://app.example.com/login
agent-browser snapshot -i
agent-browser fill @e1 "$USERNAME"
agent-browser fill @e2 "$PASSWORD"
agent-browser click @e3
agent-browser wait --load networkidle
# Save for future use
agent-browser state save "$STATE_FILE"
fi
```
### Concurrent Scraping
```bash
#!/bin/bash
# Scrape multiple sites concurrently
# Start all sessions
agent-browser --session site1 open https://site1.com &
agent-browser --session site2 open https://site2.com &
agent-browser --session site3 open https://site3.com &
wait
# Extract from each
agent-browser --session site1 get text body > site1.txt
agent-browser --session site2 get text body > site2.txt
agent-browser --session site3 get text body > site3.txt
# Cleanup
agent-browser --session site1 close
agent-browser --session site2 close
agent-browser --session site3 close
```
### A/B Testing Sessions
```bash
# Test different user experiences
agent-browser --session variant-a open "https://app.com?variant=a"
agent-browser --session variant-b open "https://app.com?variant=b"
# Compare
agent-browser --session variant-a screenshot /tmp/variant-a.png
agent-browser --session variant-b screenshot /tmp/variant-b.png
```
## Default Session
When `--session` is omitted, commands use the default session:
```bash
# These use the same default session
agent-browser open https://example.com
agent-browser snapshot -i
agent-browser close # Closes default session
```
## Session Cleanup
```bash
# Close specific session
agent-browser --session auth close
# List active sessions
agent-browser session list
```
## Best Practices
### 1. Name Sessions Semantically
```bash
# GOOD: Clear purpose
agent-browser --session github-auth open https://github.com
agent-browser --session docs-scrape open https://docs.example.com
# AVOID: Generic names
agent-browser --session s1 open https://github.com
```
### 2. Always Clean Up
```bash
# Close sessions when done
agent-browser --session auth close
agent-browser --session scrape close
```
### 3. Handle State Files Securely
```bash
# Don't commit state files (contain auth tokens!)
echo "*-state.json" >> .gitignore  # pattern must actually match the files you save
# Delete after use
rm /tmp/auth-state.json
```
### 4. Timeout Long Sessions
```bash
# Set timeout for automated scripts
timeout 60 agent-browser --session long-task get text body
```

View File

@@ -0,0 +1,186 @@
# Snapshot + Refs Workflow
The core innovation of agent-browser: compact element references that dramatically reduce context usage for AI agents.
## How It Works
### The Problem
Traditional browser automation sends full DOM to AI agents:
```
Full DOM/HTML sent → AI parses → Generates CSS selector → Executes action
~3000-5000 tokens per interaction
```
### The Solution
agent-browser uses compact snapshots with refs:
```
Compact snapshot → @refs assigned → Direct ref interaction
~200-400 tokens per interaction
```
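The savings can be sanity-checked locally with a rough heuristic of ~4 characters per token. The filenames here are placeholders for a saved full page and a saved snapshot:

```shell
# Rough token estimate: compare a saved full page against a saved snapshot
for f in page.html snap.txt; do
  if [ -f "$f" ]; then
    echo "$f: ~$(( $(wc -c < "$f") / 4 )) tokens"
  else
    echo "$f: not found"
  fi
done
```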
## The Snapshot Command
```bash
# Basic snapshot (shows page structure)
agent-browser snapshot
# Interactive snapshot (-i flag) - RECOMMENDED
agent-browser snapshot -i
```
### Snapshot Output Format
```
Page: Example Site - Home
URL: https://example.com
@e1 [header]
@e2 [nav]
@e3 [a] "Home"
@e4 [a] "Products"
@e5 [a] "About"
@e6 [button] "Sign In"
@e7 [main]
@e8 [h1] "Welcome"
@e9 [form]
@e10 [input type="email"] placeholder="Email"
@e11 [input type="password"] placeholder="Password"
@e12 [button type="submit"] "Log In"
@e13 [footer]
@e14 [a] "Privacy Policy"
```
## Using Refs
Once you have refs, interact directly:
```bash
# Click the "Sign In" button
agent-browser click @e6
# Fill email input
agent-browser fill @e10 "user@example.com"
# Fill password
agent-browser fill @e11 "password123"
# Submit the form
agent-browser click @e12
```
## Ref Lifecycle
**IMPORTANT**: Refs are invalidated when the page changes!
```bash
# Get initial snapshot
agent-browser snapshot -i
# @e1 [button] "Next"
# Click triggers page change
agent-browser click @e1
# MUST re-snapshot to get new refs!
agent-browser snapshot -i
# @e1 [h1] "Page 2" ← Different element now!
```
## Best Practices
### 1. Always Snapshot Before Interacting
```bash
# CORRECT
agent-browser open https://example.com
agent-browser snapshot -i # Get refs first
agent-browser click @e1 # Use ref
# WRONG
agent-browser open https://example.com
agent-browser click @e1 # Ref doesn't exist yet!
```
### 2. Re-Snapshot After Navigation
```bash
agent-browser click @e5 # Navigates to new page
agent-browser snapshot -i # Get new refs
agent-browser click @e1 # Use new refs
```
### 3. Re-Snapshot After Dynamic Changes
```bash
agent-browser click @e1 # Opens dropdown
agent-browser snapshot -i # See dropdown items
agent-browser click @e7 # Select item
```
### 4. Snapshot Specific Regions
For complex pages, snapshot specific areas:
```bash
# Snapshot just the form
agent-browser snapshot @e9
```
## Ref Notation Details
```
@e1 [tag type="value"] "text content" placeholder="hint"
│ │ │ │ │
│ │ │ │ └─ Additional attributes
│ │ │ └─ Visible text
│ │ └─ Key attributes shown
│ └─ HTML tag name
└─ Unique ref ID
```
### Common Patterns
```
@e1 [button] "Submit" # Button with text
@e2 [input type="email"] # Email input
@e3 [input type="password"] # Password input
@e4 [a href="/page"] "Link Text" # Anchor link
@e5 [select] # Dropdown
@e6 [textarea] placeholder="Message" # Text area
@e7 [div class="modal"] # Container (when relevant)
@e8 [img alt="Logo"] # Image
@e9 [checkbox] checked # Checked checkbox
@e10 [radio] selected # Selected radio
```
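Because the notation is line-oriented, saved snapshots are easy to post-process with standard tools. For example, assuming a snapshot was saved with `agent-browser snapshot -i > snap.txt`, the fillable fields can be filtered out:

```shell
# List only form fields from a saved snapshot (input/textarea/select lines)
grep -E '^@e[0-9]+ \[(input|textarea|select)' snap.txt 2>/dev/null || echo "No form fields found"
```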
## Troubleshooting
### "Ref not found" Error
```bash
# Ref may have changed - re-snapshot
agent-browser snapshot -i
```
### Element Not Visible in Snapshot
```bash
# Scroll to reveal element
agent-browser scroll --bottom
agent-browser snapshot -i
# Or wait for dynamic content
agent-browser wait 1000
agent-browser snapshot -i
```
### Too Many Elements
```bash
# Snapshot specific container
agent-browser snapshot @e5
# Or use get text for content-only extraction
agent-browser get text @e5
```

View File

@@ -0,0 +1,162 @@
# Video Recording
Capture browser automation sessions as video for debugging, documentation, or verification.
## Basic Recording
```bash
# Start recording
agent-browser record start ./demo.webm
# Perform actions
agent-browser open https://example.com
agent-browser snapshot -i
agent-browser click @e1
agent-browser fill @e2 "test input"
# Stop and save
agent-browser record stop
```
## Recording Commands
```bash
# Start recording to file
agent-browser record start ./output.webm
# Stop current recording
agent-browser record stop
# Restart with new file (stops current + starts new)
agent-browser record restart ./take2.webm
```
## Use Cases
### Debugging Failed Automation
```bash
#!/bin/bash
# Record automation for debugging
agent-browser record start ./debug-$(date +%Y%m%d-%H%M%S).webm
# Run your automation
agent-browser open https://app.example.com
agent-browser snapshot -i
agent-browser click @e1 || {
echo "Click failed - check recording"
agent-browser record stop
exit 1
}
agent-browser record stop
```
### Documentation Generation
```bash
#!/bin/bash
# Record workflow for documentation
agent-browser record start ./docs/how-to-login.webm
agent-browser open https://app.example.com/login
agent-browser wait 1000 # Pause for visibility
agent-browser snapshot -i
agent-browser fill @e1 "demo@example.com"
agent-browser wait 500
agent-browser fill @e2 "password"
agent-browser wait 500
agent-browser click @e3
agent-browser wait --load networkidle
agent-browser wait 1000 # Show result
agent-browser record stop
```
### CI/CD Test Evidence
```bash
#!/bin/bash
# Record E2E test runs for CI artifacts
TEST_NAME="${1:-e2e-test}"
RECORDING_DIR="./test-recordings"
mkdir -p "$RECORDING_DIR"
agent-browser record start "$RECORDING_DIR/$TEST_NAME-$(date +%s).webm"
# Run test
if run_e2e_test; then
echo "Test passed"
else
echo "Test failed - recording saved"
fi
agent-browser record stop
```
## Best Practices
### 1. Add Pauses for Clarity
```bash
# Slow down for human viewing
agent-browser click @e1
agent-browser wait 500 # Let viewer see result
```
### 2. Use Descriptive Filenames
```bash
# Include context in filename
agent-browser record start ./recordings/login-flow-2024-01-15.webm
agent-browser record start ./recordings/checkout-test-run-42.webm
```
### 3. Handle Recording in Error Cases
```bash
#!/bin/bash
set -e
cleanup() {
agent-browser record stop 2>/dev/null || true
agent-browser close 2>/dev/null || true
}
trap cleanup EXIT
agent-browser record start ./automation.webm
# ... automation steps ...
```
### 4. Combine with Screenshots
```bash
# Record video AND capture key frames
agent-browser record start ./flow.webm
agent-browser open https://example.com
agent-browser screenshot ./screenshots/step1-homepage.png
agent-browser click @e1
agent-browser screenshot ./screenshots/step2-after-click.png
agent-browser record stop
```
## Output Format
- Default format: WebM (VP8/VP9 codec)
- Compatible with all modern browsers and video players
- Compressed but high quality
## Limitations
- Recording adds slight overhead to automation
- Large recordings can consume significant disk space
- Some headless environments may have codec limitations
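To keep disk usage bounded in long-running setups, old recordings can be pruned on a rolling basis. The directory and retention count below are placeholders, and the `ls | xargs` approach assumes simple filenames like the timestamped ones used above:

```shell
# Keep only the 10 most recent recordings, delete the rest (a sketch)
DIR=./test-recordings
if [ -d "$DIR" ]; then
  ls -t "$DIR"/*.webm | tail -n +11 | xargs -r rm --
fi
```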

View File

@@ -0,0 +1,91 @@
#!/bin/bash
# Template: Authenticated Session Workflow
# Login once, save state, reuse for subsequent runs
#
# Usage:
# ./authenticated-session.sh <login-url> [state-file]
#
# Setup:
# 1. Run once to see your form structure
# 2. Note the @refs for your fields
# 3. Uncomment LOGIN FLOW section and update refs
set -euo pipefail
LOGIN_URL="${1:?Usage: $0 <login-url> [state-file]}"
STATE_FILE="${2:-./auth-state.json}"
echo "Authentication workflow for: $LOGIN_URL"
# ══════════════════════════════════════════════════════════════
# SAVED STATE: Skip login if we have valid saved state
# ══════════════════════════════════════════════════════════════
if [[ -f "$STATE_FILE" ]]; then
echo "Loading saved authentication state..."
agent-browser state load "$STATE_FILE"
agent-browser open "$LOGIN_URL"
agent-browser wait --load networkidle
CURRENT_URL=$(agent-browser get url)
if [[ "$CURRENT_URL" != *"login"* ]] && [[ "$CURRENT_URL" != *"signin"* ]]; then
echo "Session restored successfully!"
agent-browser snapshot -i
exit 0
fi
echo "Session expired, performing fresh login..."
rm -f "$STATE_FILE"
fi
# ══════════════════════════════════════════════════════════════
# DISCOVERY MODE: Show form structure (remove after setup)
# ══════════════════════════════════════════════════════════════
echo "Opening login page..."
agent-browser open "$LOGIN_URL"
agent-browser wait --load networkidle
echo ""
echo "┌─────────────────────────────────────────────────────────┐"
echo "│ LOGIN FORM STRUCTURE │"
echo "├─────────────────────────────────────────────────────────┤"
agent-browser snapshot -i
echo "└─────────────────────────────────────────────────────────┘"
echo ""
echo "Next steps:"
echo " 1. Note refs: @e? = username, @e? = password, @e? = submit"
echo " 2. Uncomment LOGIN FLOW section below"
echo " 3. Replace @e1, @e2, @e3 with your refs"
echo " 4. Delete this DISCOVERY MODE section"
echo ""
agent-browser close
exit 0
# ══════════════════════════════════════════════════════════════
# LOGIN FLOW: Uncomment and customize after discovery
# ══════════════════════════════════════════════════════════════
# : "${APP_USERNAME:?Set APP_USERNAME environment variable}"
# : "${APP_PASSWORD:?Set APP_PASSWORD environment variable}"
#
# agent-browser open "$LOGIN_URL"
# agent-browser wait --load networkidle
# agent-browser snapshot -i
#
# # Fill credentials (update refs to match your form)
# agent-browser fill @e1 "$APP_USERNAME"
# agent-browser fill @e2 "$APP_PASSWORD"
# agent-browser click @e3
# agent-browser wait --load networkidle
#
# # Verify login succeeded
# FINAL_URL=$(agent-browser get url)
# if [[ "$FINAL_URL" == *"login"* ]] || [[ "$FINAL_URL" == *"signin"* ]]; then
# echo "ERROR: Login failed - still on login page"
# agent-browser screenshot /tmp/login-failed.png
# agent-browser close
# exit 1
# fi
#
# # Save state for future runs
# echo "Saving authentication state to: $STATE_FILE"
# agent-browser state save "$STATE_FILE"
# echo "Login successful!"
# agent-browser snapshot -i

View File

@@ -0,0 +1,68 @@
#!/bin/bash
# Template: Content Capture Workflow
# Extract content from web pages with optional authentication
set -euo pipefail
TARGET_URL="${1:?Usage: $0 <url> [output-dir]}"
OUTPUT_DIR="${2:-.}"
echo "Capturing content from: $TARGET_URL"
mkdir -p "$OUTPUT_DIR"
# Optional: Load authentication state if needed
# if [[ -f "./auth-state.json" ]]; then
# agent-browser state load "./auth-state.json"
# fi
# Navigate to target page
agent-browser open "$TARGET_URL"
agent-browser wait --load networkidle
# Get page metadata
echo "Page title: $(agent-browser get title)"
echo "Page URL: $(agent-browser get url)"
# Capture full page screenshot
agent-browser screenshot --full "$OUTPUT_DIR/page-full.png"
echo "Screenshot saved: $OUTPUT_DIR/page-full.png"
# Get page structure
agent-browser snapshot -i > "$OUTPUT_DIR/page-structure.txt"
echo "Structure saved: $OUTPUT_DIR/page-structure.txt"
# Extract main content
# Adjust selector based on target site structure
# agent-browser get text @e1 > "$OUTPUT_DIR/main-content.txt"
# Extract specific elements (uncomment as needed)
# agent-browser get text "article" > "$OUTPUT_DIR/article.txt"
# agent-browser get text "main" > "$OUTPUT_DIR/main.txt"
# agent-browser get text ".content" > "$OUTPUT_DIR/content.txt"
# Get full page text
agent-browser get text body > "$OUTPUT_DIR/page-text.txt"
echo "Text content saved: $OUTPUT_DIR/page-text.txt"
# Optional: Save as PDF
agent-browser pdf "$OUTPUT_DIR/page.pdf"
echo "PDF saved: $OUTPUT_DIR/page.pdf"
# Optional: Capture with scrolling for infinite scroll pages
# scroll_and_capture() {
# local count=0
# while [[ $count -lt 5 ]]; do
# agent-browser scroll down 1000
# agent-browser wait 1000
# count=$((count + 1))  # note: ((count++)) would exit under set -e when count is 0
# done
# agent-browser screenshot --full "$OUTPUT_DIR/page-scrolled.png"
# }
# scroll_and_capture
# Cleanup
agent-browser close
echo ""
echo "Capture complete! Files saved to: $OUTPUT_DIR"
ls -la "$OUTPUT_DIR"

View File

@@ -0,0 +1,64 @@
#!/bin/bash
# Template: Form Automation Workflow
# Fills and submits web forms with validation
set -euo pipefail
FORM_URL="${1:?Usage: $0 <form-url>}"
echo "Automating form at: $FORM_URL"
# Navigate to form page
agent-browser open "$FORM_URL"
agent-browser wait --load networkidle
# Get interactive snapshot to identify form fields
echo "Analyzing form structure..."
agent-browser snapshot -i
# Example: Fill common form fields
# Uncomment and modify refs based on snapshot output
# Text inputs
# agent-browser fill @e1 "John Doe" # Name field
# agent-browser fill @e2 "user@example.com" # Email field
# agent-browser fill @e3 "+1-555-123-4567" # Phone field
# Password fields
# agent-browser fill @e4 "SecureP@ssw0rd!"
# Dropdowns
# agent-browser select @e5 "Option Value"
# Checkboxes
# agent-browser check @e6 # Check
# agent-browser uncheck @e7 # Uncheck
# Radio buttons
# agent-browser click @e8 # Select radio option
# Text areas
# agent-browser fill @e9 "Multi-line text content here"
# File uploads
# agent-browser upload @e10 /path/to/file.pdf
# Submit form
# agent-browser click @e11 # Submit button
# Wait for response
# agent-browser wait --load networkidle
# agent-browser wait --url "**/success" # Or wait for redirect
# Verify submission
echo "Form submission result:"
agent-browser get url
agent-browser snapshot -i
# Take screenshot of result
agent-browser screenshot /tmp/form-result.png
# Cleanup
agent-browser close
echo "Form automation complete"

View File

@@ -0,0 +1,287 @@
---
name: agent-md-refactor
description: Refactor bloated AGENTS.md, CLAUDE.md, or similar agent instruction files to follow progressive disclosure principles. Splits monolithic files into organized, linked documentation.
license: MIT
---
# Agent MD Refactor
Refactor bloated agent instruction files (AGENTS.md, CLAUDE.md, COPILOT.md, etc.) to follow **progressive disclosure principles** - keeping essentials at root and organizing the rest into linked, categorized files.
---
## Triggers
Use this skill when:
- "refactor my AGENTS.md" / "refactor my CLAUDE.md"
- "split my agent instructions"
- "organize my CLAUDE.md file"
- "my AGENTS.md is too long"
- "progressive disclosure for my instructions"
- "clean up my agent config"
---
## Quick Reference
| Phase | Action | Output |
|-------|--------|--------|
| 1. Analyze | Find contradictions | List of conflicts to resolve |
| 2. Extract | Identify essentials | Core instructions for root file |
| 3. Categorize | Group remaining instructions | Logical categories |
| 4. Structure | Create file hierarchy | Root + linked files |
| 5. Prune | Flag for deletion | Redundant/vague instructions |
---
## Process
### Phase 1: Find Contradictions
Identify any instructions that conflict with each other.
**Look for:**
- Contradictory style guidelines (e.g., "use semicolons" vs "no semicolons")
- Conflicting workflow instructions
- Incompatible tool preferences
- Mutually exclusive patterns
**For each contradiction found:**
```markdown
## Contradiction Found
**Instruction A:** [quote]
**Instruction B:** [quote]
**Question:** Which should take precedence, or should both be conditional?
```
Ask the user to resolve before proceeding.
---
### Phase 2: Identify the Essentials
Extract ONLY what belongs in the root agent file. The root should be minimal - information that applies to **every single task**.
**Essential content (keep in root):**
| Category | Example |
|----------|---------|
| Project description | One sentence: "A React dashboard for analytics" |
| Package manager | Only if not npm (e.g., "Uses pnpm") |
| Non-standard commands | Custom build/test/typecheck commands |
| Critical overrides | Things that MUST override defaults |
| Universal rules | Applies to 100% of tasks |
**NOT essential (move to linked files):**
- Language-specific conventions
- Testing guidelines
- Code style details
- Framework patterns
- Documentation standards
- Git workflow details
---
### Phase 3: Group the Rest
Organize remaining instructions into logical categories.
**Common categories:**
| Category | Contents |
|----------|----------|
| `typescript.md` | TS conventions, type patterns, strict mode rules |
| `testing.md` | Test frameworks, coverage, mocking patterns |
| `code-style.md` | Formatting, naming, comments, structure |
| `git-workflow.md` | Commits, branches, PRs, reviews |
| `architecture.md` | Patterns, folder structure, dependencies |
| `api-design.md` | REST/GraphQL conventions, error handling |
| `security.md` | Auth patterns, input validation, secrets |
| `performance.md` | Optimization rules, caching, lazy loading |
**Grouping rules:**
1. Each file should be self-contained for its topic
2. Aim for 3-8 files (not too granular, not too broad)
3. Name files clearly: `{topic}.md`
4. Include only actionable instructions
---
### Phase 4: Create the File Structure
**Output structure:**
```
project-root/
├── CLAUDE.md (or AGENTS.md) # Minimal root with links
└── .claude/ # Or docs/agent-instructions/
├── typescript.md
├── testing.md
├── code-style.md
├── git-workflow.md
└── architecture.md
```
**Root file template:**
```markdown
# Project Name
One-sentence description of the project.
## Quick Reference
- **Package Manager:** pnpm
- **Build:** `pnpm build`
- **Test:** `pnpm test`
- **Typecheck:** `pnpm typecheck`
## Detailed Instructions
For specific guidelines, see:
- [TypeScript Conventions](.claude/typescript.md)
- [Testing Guidelines](.claude/testing.md)
- [Code Style](.claude/code-style.md)
- [Git Workflow](.claude/git-workflow.md)
- [Architecture Patterns](.claude/architecture.md)
```
**Each linked file template:**
```markdown
# {Topic} Guidelines
## Overview
Brief context for when these guidelines apply.
## Rules
### Rule Category 1
- Specific, actionable instruction
- Another specific instruction
### Rule Category 2
- Specific, actionable instruction
## Examples
### Good
\`\`\`typescript
// Example of correct pattern
\`\`\`
### Avoid
\`\`\`typescript
// Example of what not to do
\`\`\`
```
---
### Phase 5: Flag for Deletion
Identify instructions that should be removed entirely.
**Delete if:**
| Criterion | Example | Why Delete |
|-----------|---------|------------|
| Redundant | "Use TypeScript" (in a .ts project) | Agent already knows |
| Too vague | "Write clean code" | Not actionable |
| Overly obvious | "Don't introduce bugs" | Wastes context |
| Default behavior | "Use descriptive variable names" | Standard practice |
| Outdated | References deprecated APIs | No longer applies |
**Output format:**
```markdown
## Flagged for Deletion
| Instruction | Reason |
|-------------|--------|
| "Write clean, maintainable code" | Too vague to be actionable |
| "Use TypeScript" | Redundant - project is already TS |
| "Don't commit secrets" | Agent already knows this |
| "Follow best practices" | Meaningless without specifics |
```
---
## Execution Checklist
```
[ ] Phase 1: All contradictions identified and resolved
[ ] Phase 2: Root file contains ONLY essentials
[ ] Phase 3: All remaining instructions categorized
[ ] Phase 4: File structure created with proper links
[ ] Phase 5: Redundant/vague instructions removed
[ ] Verify: Each linked file is self-contained
[ ] Verify: Root file is under 50 lines
[ ] Verify: All links work correctly
```
---
## Anti-Patterns
| Avoid | Why | Instead |
|-------|-----|---------|
| Keeping everything in root | Bloated, hard to maintain | Split into linked files |
| Too many categories | Fragmentation | Consolidate related topics |
| Vague instructions | Wastes tokens, no value | Be specific or delete |
| Duplicating defaults | Agent already knows | Only override when needed |
| Deep nesting | Hard to navigate | Flat structure with links |
---
## Examples
### Before (Bloated Root)
```markdown
# CLAUDE.md
This is a React project.
## Code Style
- Use 2 spaces
- Use semicolons
- Prefer const over let
- Use arrow functions
... (200 more lines)
## Testing
- Use Jest
- Coverage > 80%
... (100 more lines)
## TypeScript
- Enable strict mode
... (150 more lines)
```
### After (Progressive Disclosure)
```markdown
# CLAUDE.md
React dashboard for real-time analytics visualization.
## Commands
- `pnpm dev` - Start development server
- `pnpm test` - Run tests with coverage
- `pnpm build` - Production build
## Guidelines
- [Code Style](.claude/code-style.md)
- [Testing](.claude/testing.md)
- [TypeScript](.claude/typescript.md)
```
---
## Verification
After refactoring, verify:
1. **Root file is minimal** - Under 50 lines, only universal info
2. **Links work** - All referenced files exist
3. **No contradictions** - Instructions are consistent
4. **Actionable content** - Every instruction is specific
5. **Complete coverage** - No instructions were lost (unless flagged for deletion)
6. **Self-contained files** - Each linked file stands alone
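Checks 1 and 2 are mechanical enough to script. This is a sketch that assumes the root file is `CLAUDE.md` and that links use the relative `(.claude/...)` form shown above:

```shell
# Verify root file size and that every relative .md link resolves
ROOT="CLAUDE.md"
if [ -f "$ROOT" ]; then
  lines=$(wc -l < "$ROOT")
  [ "$lines" -le 50 ] && echo "Root OK ($lines lines)" || echo "Root too long ($lines lines)"
  grep -oE '\(\.[^)]+\.md\)' "$ROOT" | tr -d '()' | while read -r f; do
    [ -f "$f" ] || echo "Broken link: $f"
  done
else
  echo "No $ROOT found"
fi
```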
---

View File

@@ -0,0 +1,320 @@
---
name: astro-cloudflare-deploy
description: Deploy Astro 6 frontend applications to Cloudflare Workers. This skill should be used when deploying an Astro project to Cloudflare, whether as a static site, hybrid rendering, or full SSR. Handles setup of @astrojs/cloudflare adapter, wrangler.jsonc configuration, environment variables, and CI/CD deployment workflows.
---
# Astro 6 to Cloudflare Workers Deployment
## Overview
This skill provides a complete workflow for deploying Astro 6 applications to Cloudflare Workers. It covers static sites, hybrid rendering, and full SSR deployments using the official @astrojs/cloudflare adapter.
**Key Requirements:**
- Astro 6.x (requires Node.js 22.12.0+)
- @astrojs/cloudflare adapter v13+
- Wrangler CLI v4+
## Deployment Decision Tree
First, determine the deployment mode based on project requirements:
```
┌─────────────────────────────────────────────────────────────────┐
│ DEPLOYMENT MODE DECISION │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Static Site? │
│ └─ Marketing sites, blogs, documentation │
│ └─ No server-side rendering needed │
│ └─ Go to: Static Deployment │
│ │
│ 2. Mixed static + dynamic pages? │
│ └─ Some pages need SSR (dashboard, user-specific content) │
│ └─ Most pages are static │
│ └─ Go to: Hybrid Deployment │
│ │
│ 3. All pages need server rendering? │
│ └─ Web app with authentication, dynamic content │
│ └─ Real-time data on all pages │
│ └─ Go to: Full SSR Deployment │
│ │
└─────────────────────────────────────────────────────────────────┘
```
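The decision tree above can be encoded as a small helper; the type and field names here are illustrative, not part of any Astro API.

```typescript
// Illustrative encoding of the deployment decision tree (names are assumptions).
type DeploymentMode = 'static' | 'hybrid' | 'server';

interface ProjectNeeds {
  anyServerRenderedPages: boolean; // e.g. dashboards, user-specific content
  allPagesServerRendered: boolean; // e.g. auth or live data on every page
}

function chooseDeploymentMode(needs: ProjectNeeds): DeploymentMode {
  if (needs.allPagesServerRendered) return 'server'; // Full SSR Deployment
  if (needs.anyServerRenderedPages) return 'hybrid'; // Hybrid Deployment
  return 'static'; // Static Deployment
}
```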
## Step 1: Verify Prerequisites
Before deployment, verify the following:
```bash
# Check Node.js version (must be 22.12.0+)
node --version
# If Node.js is outdated, upgrade to v22 LTS or latest
# Check Astro version
npm list astro
# If upgrading to Astro 6:
npx @astrojs/upgrade@beta
```
**Important:** Astro 6 requires Node.js 22.12.0 or higher. Verify both local and CI/CD environments meet this requirement.
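The version gate can also be checked programmatically in tooling. A minimal sketch, using plain numeric comparison with no prerelease handling:

```typescript
// Compare a `node --version` string (e.g. "v22.12.0") against a minimum version.
function meetsNodeRequirement(version: string, minimum = '22.12.0'): boolean {
  const parse = (v: string): number[] => v.replace(/^v/, '').split('.').map(Number);
  const [actual, required] = [parse(version), parse(minimum)];
  for (let i = 0; i < 3; i++) {
    if ((actual[i] ?? 0) !== (required[i] ?? 0)) return (actual[i] ?? 0) > (required[i] ?? 0);
  }
  return true; // versions are equal
}
```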
## Step 2: Install Dependencies
Install the Cloudflare adapter and Wrangler:
```bash
# Automated installation (recommended)
npx astro add cloudflare
# Manual installation
npm install @astrojs/cloudflare wrangler --save-dev
```
The automated command will:
- Install `@astrojs/cloudflare`
- Update `astro.config.mjs` with the adapter
- Prompt for deployment mode selection
## Step 3: Configure Astro
Edit `astro.config.mjs` or `astro.config.ts` based on the deployment mode.
### Static Deployment
For purely static sites (no adapter needed):
```javascript
import { defineConfig } from 'astro/config';
export default defineConfig({
output: 'static',
});
```
### Hybrid Deployment (Recommended for Most Projects)
```javascript
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
  output: 'static', // Astro 5+ merged 'hybrid' into 'static'; pages opt into SSR individually
adapter: cloudflare({
imageService: 'passthrough', // or 'compile' for optimization
platformProxy: {
enabled: true,
configPath: './wrangler.jsonc',
},
}),
});
```
Mark specific pages for SSR with `export const prerender = false`.
### Full SSR Deployment
```javascript
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
output: 'server',
adapter: cloudflare({
mode: 'directory', // or 'standalone' for single worker
imageService: 'passthrough',
platformProxy: {
enabled: true,
configPath: './wrangler.jsonc',
},
}),
});
```
## Step 4: Create wrangler.jsonc
Cloudflare now recommends `wrangler.jsonc` (JSON with comments) over `wrangler.toml`. Use the template in `assets/wrangler.jsonc` as a starting point.
Key configuration:
```jsonc
{
"$schema": "./node_modules/wrangler/config-schema.json",
"name": "your-app-name",
"compatibility_date": "2025-01-19",
"assets": {
"directory": "./dist",
"binding": "ASSETS"
}
}
```
**Copy the template from:**
```
assets/wrangler-static.jsonc - For static sites
assets/wrangler-hybrid.jsonc - For hybrid rendering
assets/wrangler-ssr.jsonc - For full SSR
```
## Step 5: Configure TypeScript Types
For TypeScript projects, create or update `src/env.d.ts`:
```typescript
/// <reference path="../.astro/types.d.ts" />
interface Env {
// Add your Cloudflare bindings here
MY_KV_NAMESPACE: KVNamespace;
MY_D1_DATABASE: D1Database;
API_URL: string;
}
type Runtime = import('@astrojs/cloudflare').Runtime<Env>;
declare namespace App {
interface Locals extends Runtime {}
}
```
Update `tsconfig.json`:
```json
{
"compilerOptions": {
"types": ["@cloudflare/workers-types"]
}
}
```
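To see the typed bindings in use, here is a minimal sketch of a GET endpoint reading `API_URL`. The context type is stubbed locally so the snippet stands alone; in a real project the handler lives under `src/pages/`, is exported, and is typed with Astro's `APIRoute`.

```typescript
// Stubbed context shape standing in for Astro's APIContext (assumption for this sketch).
type Ctx = { locals: { runtime: { env: { API_URL: string } } } };

async function GET({ locals }: Ctx): Promise<Response> {
  const apiUrl = locals.runtime.env.API_URL; // typed via the Env interface above
  return new Response(JSON.stringify({ apiUrl }), {
    headers: { 'content-type': 'application/json' },
  });
}
```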
## Step 6: Deploy
### Local Development
```bash
# Build the project
npm run build
# Local development with Wrangler
npx wrangler dev
# Remote development (test against production environment)
npx wrangler dev --remote
```
### Production Deployment
```bash
# Deploy to Cloudflare Workers
npx wrangler deploy
# Deploy to specific environment
npx wrangler deploy --env staging
```
### Using GitHub Actions
See `assets/github-actions-deploy.yml` for a complete CI/CD workflow template.
## Step 7: Configure Bindings (Optional)
For advanced features, add bindings in `wrangler.jsonc`:
```jsonc
{
"kv_namespaces": [
{ "binding": "MY_KV", "id": "your-kv-id" }
],
"d1_databases": [
{ "binding": "DB", "database_name": "my-db", "database_id": "your-d1-id" }
],
"r2_buckets": [
{ "binding": "BUCKET", "bucket_name": "my-bucket" }
]
}
```
Access bindings in Astro code:
```astro
---
const kv = Astro.locals.runtime.env.MY_KV;
const value = await kv.get("key");
---
```
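A common pattern on top of a KV binding is cache-aside. This sketch mocks the namespace behind a minimal interface so it runs outside Workers; `KVLike` is an assumption covering only the subset of `KVNamespace` used here.

```typescript
// Minimal stand-in for the subset of KVNamespace used below (assumption).
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Cache-aside: return the cached value, or load it and store it with a 1h TTL.
async function getCached(kv: KVLike, key: string, load: () => Promise<string>): Promise<string> {
  const hit = await kv.get(key);
  if (hit !== null) return hit;
  const fresh = await load();
  await kv.put(key, fresh, { expirationTtl: 3600 });
  return fresh;
}
```

Inside a page or endpoint, `Astro.locals.runtime.env.MY_KV` satisfies this interface.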
## Environment Variables
### Non-Sensitive Variables
Define in `wrangler.jsonc`:
```jsonc
{
"vars": {
"API_URL": "https://api.example.com",
"ENVIRONMENT": "production"
}
}
```
### Sensitive Secrets
```bash
# Add a secret (encrypted, not stored in config)
npx wrangler secret put API_KEY
# Add environment-specific secret
npx wrangler secret put API_KEY --env staging
# List all secrets
npx wrangler secret list
```
### Local Development Secrets
Create `.dev.vars` (add to `.gitignore`):
```bash
API_KEY=local_dev_key
DATABASE_URL=postgresql://localhost:5432/mydb
```
## Troubleshooting
Refer to `references/troubleshooting.md` for common issues and solutions.
Common problems:
1. **"MessageChannel is not defined"** - React 19 compatibility issue
- Solution: See troubleshooting guide
2. **Build fails with Node.js version error**
- Solution: Upgrade to Node.js 22.12.0+
3. **Styling lost in Astro 6 beta dev mode**
- Solution: Known bug, check GitHub issue status
4. **404 errors on deployment**
- Solution: Check `_routes.json` configuration
## Resources
### references/
- `troubleshooting.md` - Common issues and solutions
- `configuration-guide.md` - Detailed configuration options
- `upgrade-guide.md` - Migrating from older versions
### assets/
- `wrangler-static.jsonc` - Static site configuration template
- `wrangler-hybrid.jsonc` - Hybrid rendering configuration template
- `wrangler-ssr.jsonc` - Full SSR configuration template
- `github-actions-deploy.yml` - CI/CD workflow template
- `dev.vars.example` - Local secrets template
## Official Documentation
- [Astro Cloudflare Adapter](https://docs.astro.build/en/guides/integrations-guide/cloudflare/)
- [Cloudflare Workers Documentation](https://developers.cloudflare.com/workers/)
- [Wrangler CLI Reference](https://developers.cloudflare.com/workers/wrangler/)
- [Astro 6 Beta Announcement](https://astro.build/blog/astro-6-beta/)


@@ -0,0 +1,40 @@
// Hybrid rendering configuration - Recommended for most projects
// Static pages by default, SSR where needed with `export const prerender = false`
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
  output: 'static', // Astro 5+ merged 'hybrid' into 'static'; pages opt into SSR individually
adapter: cloudflare({
// Mode: 'directory' (default) = separate function per route
// 'standalone' = single worker for all routes
mode: 'directory',
// Image service: 'passthrough' (default) or 'compile'
imageService: 'passthrough',
// Platform proxy for local development with Cloudflare bindings
platformProxy: {
enabled: true,
configPath: './wrangler.jsonc',
},
}),
// Optional: Add integrations
// integrations: [
// tailwind(),
// react(),
// sitemap(),
// ],
vite: {
build: {
chunkSizeWarningLimit: 1000,
},
},
});
// Usage: Add to pages that need SSR:
// export const prerender = false;


@@ -0,0 +1,35 @@
// Full SSR configuration - All routes server-rendered
// Use this for web apps with authentication, dynamic content on all pages
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
output: 'server',
adapter: cloudflare({
mode: 'directory',
imageService: 'passthrough',
platformProxy: {
enabled: true,
configPath: './wrangler.jsonc',
},
}),
// Optional: Add integrations
// integrations: [
// tailwind(),
// react(),
// viewTransitions(),
// ],
vite: {
build: {
chunkSizeWarningLimit: 1000,
},
},
});
// All pages are server-rendered by default.
// Access Cloudflare bindings with:
// const env = Astro.locals.runtime.env;


@@ -0,0 +1,22 @@
// Static site configuration - No adapter needed
// Use this for purely static sites (blogs, marketing sites, documentation)
import { defineConfig } from 'astro/config';
export default defineConfig({
output: 'static',
// Optional: Add integrations
// integrations: [
// tailwind(),
// sitemap(),
// ],
// Vite configuration
vite: {
build: {
// Adjust chunk size warning limit
chunkSizeWarningLimit: 1000,
},
},
});


@@ -0,0 +1,26 @@
# .dev.vars - Local development secrets
# Copy this file to .dev.vars and fill in your values
# IMPORTANT: Add .dev.vars to .gitignore!
# Cloudflare Account
CLOUDFLARE_ACCOUNT_ID=your-account-id-here
# API Keys
API_KEY=your-local-api-key
API_SECRET=your-local-api-secret
# Database URLs
DATABASE_URL=postgresql://localhost:5432/mydb
REDIS_URL=redis://localhost:6379
# Third-party Services
STRIPE_SECRET_KEY=sk_test_your_key
SENDGRID_API_KEY=your_sendgrid_key
# OAuth (if using authentication)
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
# Feature Flags
ENABLE_ANALYTICS=false
ENABLE_BETA_FEATURES=true


@@ -0,0 +1,40 @@
/// <reference path="../.astro/types.d.ts" />
// TypeScript type definitions for Cloudflare bindings
// Update this file with your actual binding names
interface Env {
// Environment Variables (from wrangler.jsonc vars section)
ENVIRONMENT: string;
PUBLIC_SITE_URL: string;
API_URL?: string;
// Cloudflare Bindings (configure in wrangler.jsonc)
CACHE?: KVNamespace;
DB?: D1Database;
STORAGE?: R2Bucket;
// Add your custom bindings here
// MY_KV_NAMESPACE: KVNamespace;
// MY_D1_DATABASE: D1Database;
// MY_R2_BUCKET: R2Bucket;
// Sensitive secrets (use wrangler secret put)
API_KEY?: string;
DATABASE_URL?: string;
}
// Runtime type for Astro
type Runtime = import('@astrojs/cloudflare').Runtime<Env>;
// Extend Astro's interfaces
declare namespace App {
interface Locals extends Runtime {}
}
// Note: avoid `export` statements in this file - exporting turns it into a
// module and stops the `App` namespace above from augmenting Astro's global types.


@@ -0,0 +1,94 @@
name: Deploy to Cloudflare Workers
on:
  push:
    branches:
      - main
      - staging
  pull_request:
    branches:
      - main
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    name: Build and Deploy
    # Only deploy from pushes to main; pull requests build via the test job.
    if: github.ref == 'refs/heads/main' && github.event_name != 'pull_request'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Install Wrangler
run: npm install -g wrangler@latest
- name: Build Astro
run: npm run build
env:
# Build-time environment variables
NODE_ENV: production
- name: Deploy to Cloudflare Workers
run: wrangler deploy
env:
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
deploy-staging:
runs-on: ubuntu-latest
name: Deploy to Staging
if: github.ref == 'refs/heads/staging'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Install Wrangler
run: npm install -g wrangler@latest
- name: Build Astro
run: npm run build
- name: Deploy to Staging
run: wrangler deploy --env staging
env:
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
# Optional: Run tests before deployment
test:
runs-on: ubuntu-latest
name: Run Tests
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm test


@@ -0,0 +1,52 @@
{
"$schema": "./node_modules/wrangler/config-schema.json",
"// Comment": "Hybrid rendering configuration for Astro on Cloudflare Workers",
"name": "your-app-name",
"compatibility_date": "2025-01-19",
"compatibility_flags": ["nodejs_compat"],
"assets": {
"directory": "./dist",
"binding": "ASSETS"
},
"vars": {
"ENVIRONMENT": "production",
"PUBLIC_SITE_URL": "https://your-app-name.workers.dev"
},
"// Comment env": "Environment-specific configurations",
"env": {
"staging": {
"name": "your-app-name-staging",
"vars": {
"ENVIRONMENT": "staging",
"PUBLIC_SITE_URL": "https://staging-your-app-name.workers.dev"
}
},
"production": {
"name": "your-app-name-production",
"vars": {
"ENVIRONMENT": "production",
"PUBLIC_SITE_URL": "https://your-app-name.workers.dev"
}
}
},
"// Comment bindings_examples": "Uncomment and configure as needed",
"// kv_namespaces": [
// {
// "binding": "MY_KV",
// "id": "your-kv-namespace-id"
// }
// ],
"// d1_databases": [
// {
// "binding": "DB",
// "database_name": "my-database",
// "database_id": "your-d1-database-id"
// }
// ],
"// r2_buckets": [
// {
// "binding": "BUCKET",
// "bucket_name": "my-bucket"
// }
// ]
}


@@ -0,0 +1,54 @@
{
"$schema": "./node_modules/wrangler/config-schema.json",
"// Comment": "Full SSR configuration for Astro on Cloudflare Workers",
"name": "your-app-name",
"compatibility_date": "2025-01-19",
"compatibility_flags": ["nodejs_compat", "disable_nodejs_process_v2"],
"assets": {
"directory": "./dist",
"binding": "ASSETS"
},
"vars": {
"ENVIRONMENT": "production",
"PUBLIC_SITE_URL": "https://your-app-name.workers.dev",
"API_URL": "https://api.example.com"
},
"env": {
"staging": {
"name": "your-app-name-staging",
"vars": {
"ENVIRONMENT": "staging",
"PUBLIC_SITE_URL": "https://staging-your-app-name.workers.dev",
"API_URL": "https://staging-api.example.com"
}
},
"production": {
"name": "your-app-name-production",
"vars": {
"ENVIRONMENT": "production",
"PUBLIC_SITE_URL": "https://your-app-name.workers.dev",
"API_URL": "https://api.example.com"
}
}
},
"// Comment bindings": "Configure Cloudflare bindings for your SSR app",
"kv_namespaces": [
{
"binding": "CACHE",
"id": "your-kv-namespace-id"
}
],
"d1_databases": [
{
"binding": "DB",
"database_name": "my-database",
"database_id": "your-d1-database-id"
}
],
"r2_buckets": [
{
"binding": "STORAGE",
"bucket_name": "my-storage-bucket"
}
]
}


@@ -0,0 +1,20 @@
{
"$schema": "./node_modules/wrangler/config-schema.json",
"// Comment": "Static site deployment configuration for Astro on Cloudflare Workers",
"name": "your-app-name",
"compatibility_date": "2025-01-19",
"// Comment assets": "Static assets configuration",
"assets": {
"directory": "./dist",
"binding": "ASSETS",
"// Comment html_handling": "Options: none, force-trailing-slash, strip-trailing-slash",
"html_handling": "none",
"// Comment not_found_handling": "Options: none, 404-page, spa-fallback",
"not_found_handling": "none"
},
"// Comment vars": "Non-sensitive environment variables",
"vars": {
"ENVIRONMENT": "production",
"PUBLIC_SITE_URL": "https://your-app-name.workers.dev"
}
}


@@ -0,0 +1,407 @@
# Configuration Guide
Complete reference for all configuration options when deploying Astro to Cloudflare Workers.
## Table of Contents
1. [wrangler.jsonc Reference](#wranglerjsonc-reference)
2. [Astro Configuration](#astro-configuration)
3. [Environment-Specific Configuration](#environment-specific-configuration)
4. [Bindings Configuration](#bindings-configuration)
5. [Advanced Options](#advanced-options)
---
## wrangler.jsonc Reference
### Core Fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Worker/Project name |
| `compatibility_date` | string (YYYY-MM-DD) | Yes | Runtime API version |
| `$schema` | string | No | Path to JSON schema for validation |
| `main` | string | No | Entry point file (auto-detected for Astro) |
| `account_id` | string | No | Cloudflare account ID |
### Assets Configuration
```jsonc
{
"assets": {
"directory": "./dist",
"binding": "ASSETS",
"html_handling": "force-trailing-slash",
"not_found_handling": "404-page"
}
}
```
| Option | Values | Default | Description |
|--------|--------|---------|-------------|
| `directory` | path | `"./dist"` | Build output directory |
| `binding` | string | `"ASSETS"` | Name to access assets in code |
| `html_handling` | `"auto-trailing-slash"`, `"force-trailing-slash"`, `"drop-trailing-slash"`, `"none"` | `"auto-trailing-slash"` | URL handling behavior |
| `not_found_handling` | `"none"`, `"404-page"`, `"single-page-application"` | `"none"` | 404 error behavior |
### Compatibility Flags
```jsonc
{
"compatibility_flags": ["nodejs_compat", "disable_nodejs_process_v2"]
}
```
| Flag | Purpose |
|------|---------|
| `nodejs_compat` | Enable Node.js APIs in Workers |
| `disable_nodejs_process_v2` | Use legacy process global (for some packages) |
---
## Astro Configuration
### Adapter Options
```javascript
// astro.config.mjs
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
adapter: cloudflare({
// Mode: how routes are deployed
mode: 'directory', // 'directory' (default) or 'standalone'
// Image service handling
imageService: 'passthrough', // 'passthrough' (default) or 'compile'
// Platform proxy for local development
platformProxy: {
enabled: true,
configPath: './wrangler.jsonc',
persist: {
path: './.cache/wrangler/v3',
},
},
}),
});
```
### Mode Comparison
| Mode | Description | Use Case |
|------|-------------|----------|
| `directory` | Separate function per route | Most projects, better caching |
| `standalone` | Single worker for all routes | Simple apps, shared state |
### Image Service Options
| Option | Description |
|--------|-------------|
| `passthrough` | Images pass through unchanged (default) |
| `compile` | Images optimized at build time using Sharp |
---
## Environment-Specific Configuration
### Multiple Environments
```jsonc
{
"name": "my-app",
"vars": {
"ENVIRONMENT": "production",
"API_URL": "https://api.example.com"
},
"env": {
"staging": {
"name": "my-app-staging",
"vars": {
"ENVIRONMENT": "staging",
"API_URL": "https://staging-api.example.com"
}
},
"production": {
"name": "my-app-production",
"vars": {
"ENVIRONMENT": "production",
"API_URL": "https://api.example.com"
}
}
}
}
```
### Deploying to Environment
```bash
# Deploy to staging
npx wrangler deploy --env staging
# Deploy to production
npx wrangler deploy --env production
```
---
## Bindings Configuration
### KV Namespace
```jsonc
{
"kv_namespaces": [
{
"binding": "MY_KV",
"id": "your-kv-namespace-id",
"preview_id": "your-preview-kv-id"
}
]
}
```
**Usage in Astro:**
```javascript
const kv = Astro.locals.runtime.env.MY_KV;
const value = await kv.get("key");
await kv.put("key", "value", { expirationTtl: 3600 });
```
**Creating KV:**
```bash
npx wrangler kv namespace create MY_KV
```
### D1 Database
```jsonc
{
"d1_databases": [
{
"binding": "DB",
"database_name": "my-database",
"database_id": "your-d1-database-id"
}
]
}
```
**Usage in Astro:**
```javascript
const db = Astro.locals.runtime.env.DB;
const result = await db.prepare("SELECT * FROM users").all();
```
**Creating D1:**
```bash
npx wrangler d1 create my-database
npx wrangler d1 execute my-database --file=./schema.sql
```
### R2 Storage
```jsonc
{
"r2_buckets": [
{
"binding": "BUCKET",
"bucket_name": "my-bucket"
}
]
}
```
**Usage in Astro:**
```javascript
const bucket = Astro.locals.runtime.env.BUCKET;
await bucket.put("file.txt", "Hello World");
const object = await bucket.get("file.txt");
```
**Creating R2:**
```bash
npx wrangler r2 bucket create my-bucket
```
### Durable Objects
```jsonc
{
"durable_objects": {
"bindings": [
{
"name": "MY_DURABLE_OBJECT",
"class_name": "MyDurableObject",
"script_name": "durable-object-worker"
}
]
}
}
```
---
## Advanced Options
### Custom Routing
Create `_routes.json` in project root for advanced routing control:
```json
{
"version": 1,
"include": ["/*"],
"exclude": ["/api/*", "/admin/*"]
}
```
- **include**: Patterns to route to Worker
- **exclude**: Patterns to serve as static assets
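The include/exclude behavior can be approximated with a tiny matcher. This sketch only handles exact paths and the trailing `/*` glob form shown above, and ignores the spec's more detailed precedence rules.

```typescript
// Simplified _routes.json-style matcher: exact paths or trailing "/*" globs only.
function routedToWorker(path: string, include: string[], exclude: string[]): boolean {
  const matches = (pattern: string): boolean =>
    pattern.endsWith('/*') ? path.startsWith(pattern.slice(0, -1)) : path === pattern;
  if (exclude.some(matches)) return false; // excluded paths are served as static assets
  return include.some(matches);
}
```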
### Scheduled Tasks (Cron Triggers)
```jsonc
{
  "triggers": {
    "crons": ["0 * * * *", "0 0 * * *"]
  }
}
```
Cron triggers invoke the Worker's `scheduled()` handler rather than an HTTP route, so handling them in an Astro project requires a custom worker entry that implements `scheduled()` alongside the generated `fetch()` handler.
### Routes and CPU Limits
```jsonc
{
"routes": [
{
"pattern": "api.example.com/*",
"zone_name": "example.com"
}
],
"limits": {
"cpu_ms": 50
}
}
```
### Logging and Monitoring
```jsonc
{
"logpush": true,
"placement": {
"mode": "smart"
}
}
```
**View logs in real-time:**
```bash
npx wrangler tail
```
---
## TypeScript Configuration
### Complete tsconfig.json
```json
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"allowJs": true,
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"types": ["@cloudflare/workers-types"],
"jsx": "react-jsx",
"jsxImportSource": "react"
},
"include": ["src"],
"exclude": ["node_modules", "dist"]
}
```
### Environment Type Definition
```typescript
// src/env.d.ts
/// <reference path="../.astro/types.d.ts" />
interface Env {
// Cloudflare bindings
MY_KV: KVNamespace;
DB: D1Database;
BUCKET: R2Bucket;
// Environment variables
API_URL: string;
ENVIRONMENT: string;
SECRET_VALUE?: string;
}
type Runtime = import('@astrojs/cloudflare').Runtime<Env>;
declare namespace App {
interface Locals extends Runtime {}
}
```
---
## Build Configuration
### package.json Scripts
```json
{
"scripts": {
"dev": "astro dev",
"build": "astro build",
"preview": "wrangler dev",
"deploy": "npm run build && wrangler deploy",
"deploy:staging": "npm run build && wrangler deploy --env staging",
"cf:dev": "wrangler dev",
"cf:dev:remote": "wrangler dev --remote",
"cf:tail": "wrangler tail"
}
}
```
### Vite Configuration
```javascript
// vite.config.js (if needed)
import { defineConfig } from 'vite';
export default defineConfig({
build: {
// Adjust chunk size warnings
chunkSizeWarningLimit: 1000,
},
});
```


@@ -0,0 +1,376 @@
# Troubleshooting Guide
This guide covers common issues when deploying Astro 6 to Cloudflare Workers.
## Table of Contents
1. [Build Errors](#build-errors)
2. [Runtime Errors](#runtime-errors)
3. [Deployment Issues](#deployment-issues)
4. [Performance Issues](#performance-issues)
5. [Development Server Issues](#development-server-issues)
---
## Build Errors
### "MessageChannel is not defined"
**Symptoms:**
- Build fails with reference to `MessageChannel`
- Occurs when using React 19 with Cloudflare adapter
**Cause:**
React 19 uses `MessageChannel` which is not available in the Cloudflare Workers runtime by default.
**Solutions:**
1. **Add compatibility flag** in `wrangler.jsonc`:
```jsonc
{
"compatibility_flags": ["nodejs_compat"]
}
```
2. **Use React 18** temporarily if the issue persists:
```bash
npm install react@18 react-dom@18
```
3. **Check for related GitHub issues:**
- [Astro Issue #12824](https://github.com/withastro/astro/issues/12824)
### "Cannot find module '@astrojs/cloudflare'"
**Symptoms:**
- Import error in `astro.config.mjs`
- Type errors in TypeScript
**Solutions:**
1. **Install the adapter:**
```bash
npm install @astrojs/cloudflare
```
2. **Verify installation:**
```bash
npm list @astrojs/cloudflare
```
3. **For Astro 6, ensure v13+:**
```bash
npm install @astrojs/cloudflare@beta
```
### "Too many files for webpack"
**Symptoms:**
- Build fails with file limit error
- Occurs in large projects
**Solution:**
The Cloudflare adapter uses Vite, not webpack. If you see this error, check:
1. **Ensure adapter is properly configured:**
```javascript
// astro.config.mjs
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
adapter: cloudflare(),
});
```
2. **Check for legacy configuration:**
- Remove any `@astrojs/vercel` or other adapter references
- Ensure `output` mode is set correctly
---
## Runtime Errors
### 404 Errors on Specific Routes
**Symptoms:**
- Some routes return 404 after deployment
- Static assets not found
**Solutions:**
1. **Check `_routes.json` configuration** (for advanced routing):
```json
{
"version": 1,
"include": ["/*"],
"exclude": ["/api/*"]
}
```
2. **Verify build output:**
```bash
npm run build
ls -la dist/
```
3. **Check wrangler.jsonc assets directory:**
```jsonc
{
"assets": {
"directory": "./dist",
"binding": "ASSETS"
}
}
```
### "env is not defined" or "runtime is not defined"
**Symptoms:**
- Cannot access Cloudflare bindings in Astro code
- Runtime errors in server components
**Solutions:**
1. **Ensure TypeScript types are configured:**
```typescript
// src/env.d.ts
type Runtime = import('@astrojs/cloudflare').Runtime<Env>;
declare namespace App {
interface Locals extends Runtime {}
}
```
2. **Access bindings correctly:**
```astro
---
// Correct
const env = Astro.locals.runtime.env;
const kv = env.MY_KV_NAMESPACE;
// Incorrect
const kv = Astro.locals.env.MY_KV_NAMESPACE;
---
```
3. **Verify platformProxy is enabled:**
```javascript
// astro.config.mjs
adapter: cloudflare({
platformProxy: {
enabled: true,
},
})
```
---
## Deployment Issues
### "Authentication required" or "Not logged in"
**Symptoms:**
- `wrangler deploy` fails with authentication error
- CI/CD deployment fails
**Solutions:**
1. **Authenticate locally:**
```bash
npx wrangler login
```
2. **For CI/CD, create API token:**
- Go to Cloudflare Dashboard → My Profile → API Tokens
- Create token with "Edit Cloudflare Workers" template
- Set as `CLOUDFLARE_API_TOKEN` in GitHub/GitLab secrets
3. **Set account ID:**
```bash
# Get account ID
npx wrangler whoami
# Add to wrangler.jsonc or environment
export CLOUDFLARE_ACCOUNT_ID=your-account-id
```
### "Project name already exists"
**Symptoms:**
- Deployment fails due to naming conflict
**Solutions:**
1. **Change project name in wrangler.jsonc:**
```jsonc
{
"name": "my-app-production"
}
```
2. **Or use environments:**
```jsonc
{
"env": {
"staging": {
"name": "my-app-staging"
}
}
}
```
### Deployment succeeds but site doesn't update
**Symptoms:**
- `wrangler deploy` reports success
- Old version still served
**Solutions:**
1. **Clear browser cache** (Ctrl+Shift+R or Cmd+Shift+R)
2. **Verify deployment:**
```bash
npx wrangler deployments list
```
3. **Check for cached versions:**
```bash
npx wrangler versions list
```
4. **Force deployment:**
```bash
npx wrangler deploy --compatibility-date 2025-01-19
```
---
## Performance Issues
### Slow initial page load
**Symptoms:**
- First Contentful Paint (FCP) > 2 seconds
- Large Time to First Byte (TTFB)
**Solutions:**
1. **Use hybrid or static output:**
```javascript
// Pre-render static pages where possible
export const prerender = true;
```
2. **Enable image optimization:**
```javascript
adapter: cloudflare({
imageService: 'compile',
})
```
3. **Pre-render dynamic routes at build time:**
```javascript
export async function getStaticPaths() {
return [{
params: { id: '1' },
props: { data: await fetchData() },
}];
}
```
### High cold start latency
**Symptoms:**
- First request after inactivity is slow
- Subsequent requests are fast
**Solutions:**
1. **Use mode: 'directory'** for better caching:
```javascript
adapter: cloudflare({
mode: 'directory',
})
```
2. **Keep bundle size small** - avoid heavy dependencies
3. **Use Cloudflare KV** for frequently accessed data:
```javascript
const cached = await env.KV.get('key');
if (!cached) {
  // Hypothetical origin endpoint - replace with your actual data source
  const res = await fetch('https://api.example.com/data');
  const data = await res.text();
  await env.KV.put('key', data, { expirationTtl: 3600 });
}
```
---
## Development Server Issues
### Styling not applied in dev mode (Astro 6 Beta)
**Symptoms:**
- CSS not loading in `astro dev`
- Works in production but not locally
**Status:** Known bug in Astro 6 beta
**Workarounds:**
1. **Use production build locally:**
```bash
npm run build
npx wrangler dev --local
```
2. **Check GitHub issue for updates:**
- [Astro Issue #15194](https://github.com/withastro/astro/issues/15194)
### Cannot test bindings locally
**Symptoms:**
- `Astro.locals.runtime.env` is undefined locally
- Cloudflare bindings don't work in dev
**Solutions:**
1. **Ensure platformProxy is enabled:**
```javascript
adapter: cloudflare({
platformProxy: {
enabled: true,
configPath: './wrangler.jsonc',
},
})
```
2. **Create .dev.vars for local secrets:**
```bash
API_KEY=local_key
DATABASE_URL=postgresql://localhost:5432/db
```
3. **Use remote development:**
```bash
npx wrangler dev --remote
```
---
## Getting Help
If issues persist:
1. **Check official documentation:**
- [Astro Cloudflare Guide](https://docs.astro.build/en/guides/deploy/cloudflare/)
- [Cloudflare Workers Docs](https://developers.cloudflare.com/workers/)
2. **Search existing issues:**
- [Astro GitHub Issues](https://github.com/withastro/astro/issues)
- [Cloudflare Workers Discussions](https://github.com/cloudflare/workers-sdk/discussions)
3. **Join community:**
- [Astro Discord](https://astro.build/chat)
- [Cloudflare Discord](https://discord.gg/cloudflaredev)


@@ -0,0 +1,329 @@
# Upgrade Guide
Migrating existing Astro projects to deploy on Cloudflare Workers.
## Table of Contents
1. [From Astro 5 to Astro 6](#from-astro-5-to-astro-6)
2. [From Other Platforms to Cloudflare](#from-other-platforms-to-cloudflare)
3. [Adapter Migration](#adapter-migration)
4. [Breaking Changes](#breaking-changes)
---
## From Astro 5 to Astro 6
### Prerequisites Check
Astro 6 requires:
| Requirement | Minimum Version | Check Command |
|-------------|-----------------|---------------|
| Node.js | 22.12.0+ | `node --version` |
| Astro | 6.0.0 | `npm list astro` |
| Cloudflare Adapter | 13.0.0+ | `npm list @astrojs/cloudflare` |
### Upgrade Steps
1. **Backup current state:**
```bash
git commit -am "Pre-upgrade commit"
```
2. **Run automated upgrade:**
```bash
npx @astrojs/upgrade@beta
```
3. **Update adapter:**
```bash
npm install @astrojs/cloudflare@beta
```
4. **Update Node.js** if needed:
```bash
# Using nvm
nvm install 22
nvm use 22
# Or download from nodejs.org
```
5. **Update CI/CD Node.js version:**
```yaml
# .github/workflows/deploy.yml
- uses: actions/setup-node@v4
with:
node-version: '22'
```
6. **Test locally:**
```bash
npm install
npm run dev
npm run build
npx wrangler dev
```
### Breaking Changes
#### 1. Vite 7.0
Vite has been upgraded to Vite 7.0. Check plugin compatibility:
```bash
# Check for outdated plugins
npm outdated
# Update Vite-specific plugins
npm update @vitejs/plugin-react
```
#### 2. Hybrid Output Removed
The `hybrid` output mode was merged into `static` in Astro 5:
```javascript
// Old (Astro 4): output: 'hybrid' with per-page prerender overrides
// New (Astro 5/6): output: 'static' plus an adapter; opt pages into SSR with:
export const prerender = false;
```
#### 3. Development Server
The new dev server runs on the production runtime:
```javascript
// Old: Vite dev server
// New: workerd runtime (same as production)
// Update your code if it relied on Vite-specific behavior
```
---
## From Other Platforms to Cloudflare
### From Vercel
**Remove Vercel adapter:**
```bash
npm uninstall @astrojs/vercel
```
**Install Cloudflare adapter:**
```bash
npm install @astrojs/cloudflare wrangler --save-dev
```
**Update astro.config.mjs:**
```javascript
// Before
import vercel from '@astrojs/vercel';
export default defineConfig({
adapter: vercel(),
});
// After
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
adapter: cloudflare(),
});
```
**Update environment variables:**
- Vercel: `process.env.VARIABLE`
- Cloudflare: `Astro.locals.runtime.env.VARIABLE` or `env.VARIABLE` in endpoints
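During migration it can help to funnel both shapes through one accessor. `readEnvVar` below is a hypothetical helper, not part of either platform:

```typescript
// Hypothetical migration helper: prefer the Cloudflare runtime env, fall back
// to a Node-style env object (e.g. process.env on Vercel).
function readEnvVar(
  name: string,
  runtimeEnv: Record<string, string | undefined> | undefined,
  nodeEnv: Record<string, string | undefined> = {},
): string | undefined {
  return runtimeEnv?.[name] ?? nodeEnv[name];
}
```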
### From Netlify
**Remove Netlify adapter:**
```bash
npm uninstall @astrojs/netlify
```
**Install Cloudflare adapter:**
```bash
npm install @astrojs/cloudflare wrangler --save-dev
```
**Update netlify.toml to wrangler.jsonc:**
```toml
# netlify.toml (old)
[build]
command = "astro build"
publish = "dist"
[functions]
node_bundler = "esbuild"
```
```jsonc
// wrangler.jsonc (new)
{
"name": "my-app",
"compatibility_date": "2025-01-19",
"assets": {
"directory": "./dist"
}
}
```
### From Node.js Server
**Before (Express/Fastify server):**
```javascript
// server.js
import express from 'express';
app.use(express.static('dist'));
app.listen(3000);
```
**After (Cloudflare Workers):**
```javascript
// astro.config.mjs
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'server',
  adapter: cloudflare(),
});
```
Then deploy:
```bash
npx wrangler deploy
```
---
## Adapter Migration
### From Astro 4 to 5/6
**Old adapter syntax:**
```javascript
// Astro 4
adapter: cloudflare({
functionPerRoute: true,
})
```
**New adapter syntax:**
```javascript
// Astro 5/6
adapter: cloudflare({
mode: 'directory', // equivalent to functionPerRoute: true
})
```
### Mode Migration Guide
| Old Option | New Option | Notes |
|------------|------------|-------|
| `functionPerRoute: true` | `mode: 'directory'` | Recommended |
| `functionPerRoute: false` | `mode: 'standalone'` | Single worker |
---
## Breaking Changes
### Removed APIs
1. **`Astro.locals` changes:**
```javascript
// Old
const env = Astro.locals.env;
// New
const env = Astro.locals.runtime.env;
```
2. **Endpoint API changes:**
```javascript
// Old
export async function get({ locals }) {
const { env } = locals;
}
// New
export async function GET({ locals }) {
const env = locals.runtime.env;
}
```
### TypeScript Changes
```typescript
// Old type imports
import type { Runtime } from '@astrojs/cloudflare';
// New type imports
import type { Runtime } from '@astrojs/cloudflare/virtual';
// Or use the adapter export
import cloudflare from '@astrojs/cloudflare';
type Runtime = typeof cloudflare.Runtime;
```
---
## Rollback Procedures
### If Deployment Fails
1. **Keep old version deployed:**
```bash
npx wrangler versions list
npx wrangler versions rollback <version-id>
```
2. **Or rollback git changes:**
```bash
git revert HEAD
npx wrangler deploy
```
### If Build Fails
1. **Clear cache:**
```bash
rm -rf node_modules .astro dist
npm install
npm run build
```
2. **Check for incompatible dependencies:**
```bash
npm ls
```
3. **Temporarily pin to previous version:**
```bash
npm install astro@5
npm install @astrojs/cloudflare@12
```
---
## Verification Checklist
After upgrading, verify:
- [ ] Local dev server starts without errors
- [ ] Build completes successfully
- [ ] `wrangler dev` works locally
- [ ] Static assets load correctly
- [ ] SSR routes render properly
- [ ] Environment variables are accessible
- [ ] Cloudflare bindings (KV/D1/R2) work
- [ ] TypeScript types are correct
- [ ] CI/CD pipeline succeeds
- [ ] Production deployment works
---
## Getting Help
- [Astro Discord](https://astro.build/chat)
- [Cloudflare Discord](https://discord.gg/cloudflaredev)
- [Astro GitHub Issues](https://github.com/withastro/astro/issues)


@@ -0,0 +1,88 @@
---
name: astro
description: Skill for using Astro projects. Includes CLI commands, project structure, core config options, and adapters. Use this skill when the user needs to work with Astro or when the user mentions Astro.
license: MIT
metadata:
authors: "Astro Team"
version: "0.0.1"
---
# Astro Usage Guide
**Always consult [docs.astro.build](https://docs.astro.build) for code examples and latest API.**
Astro is the web framework for content-driven websites.
---
## Quick Reference
### File Location
The CLI looks for `astro.config.js`, `astro.config.mjs`, `astro.config.cjs`, and `astro.config.ts` in the project root (`./`). Use `--config` for a custom path.
### CLI Commands
- `npx astro dev` - Start the development server.
- `npx astro build` - Build your project and write it to disk.
- `npx astro check` - Check your project for errors.
- `npx astro add` - Add an integration.
- `npx astro sync` - Generate TypeScript types for all Astro modules.
**Re-run `astro sync` after adding or changing integrations.**
### Project Structure
Astro leverages an opinionated folder layout for your project. Every Astro project root should include some directories and files. Reference [project structure docs](https://docs.astro.build/en/basics/project-structure).
- `src/*` - Your project source code (components, pages, styles, images, etc.)
- `src/pages` - Required sub-directory in your Astro project. Without it, your site will have no pages or routes!
- `src/components` - It is common to group and organize all of your project components together in this folder. This is a common convention in Astro projects, but it is not required. Feel free to organize your components however you like!
- `src/layouts` - Just like `src/components`, this directory is a common convention but not required.
- `src/styles` - It is a common convention to store your CSS or Sass files here, but this is not required. As long as your styles live somewhere in the src/ directory and are imported correctly, Astro will handle and optimize them.
- `public/*` - Your non-code, unprocessed assets (fonts, icons, etc.). Files in this folder are copied into the build output untouched when your site is built.
- `package.json` - A project manifest.
- `astro.config.{js,mjs,cjs,ts}` - An Astro configuration file. (recommended)
- `tsconfig.json` - A TypeScript configuration file. (recommended)
---
## Core Config Options
| Option | Notes |
|--------|-------|
| `site` | Your final, deployed URL. Astro uses this full URL to generate your sitemap and canonical URLs in your final build. |
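A minimal config showing `site` might look like the following sketch (the URL is a placeholder):

```javascript
// astro.config.mjs — replace the placeholder URL with your deployed domain
import { defineConfig } from 'astro/config';

export default defineConfig({
  site: 'https://www.example.com', // used for sitemap and canonical URL generation
});
```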
---
## Adapters
Deploy to your favorite server, serverless, or edge host with build adapters. Use an adapter to enable on-demand rendering in your Astro project.
**Add [Node.js](https://docs.astro.build/en/guides/integrations-guide/node) adapter using astro add:**
```bash
npx astro add node --yes
```
**Add [Cloudflare](https://docs.astro.build/en/guides/integrations-guide/cloudflare) adapter using astro add:**
```bash
npx astro add cloudflare --yes
```
**Add [Netlify](https://docs.astro.build/en/guides/integrations-guide/netlify) adapter using astro add:**
```bash
npx astro add netlify --yes
```
**Add [Vercel](https://docs.astro.build/en/guides/integrations-guide/vercel) adapter using astro add:**
```bash
npx astro add vercel --yes
```
[Other Community adapters](https://astro.build/integrations/2/?search=&categories%5B%5D=adapters)
## Resources
- [Docs](https://docs.astro.build)
- [Config Reference](https://docs.astro.build/en/reference/configuration-reference/)
- [llms.txt](https://docs.astro.build/llms.txt)
- [GitHub](https://github.com/withastro/astro)


@@ -0,0 +1,172 @@
---
name: design-md
description: Analyze Stitch projects and synthesize a semantic design system into DESIGN.md files
allowed-tools:
- "stitch*:*"
- "Read"
- "Write"
- "web_fetch"
---
# Stitch DESIGN.md Skill
You are an expert Design Systems Lead. Your goal is to analyze the provided technical assets and synthesize a "Semantic Design System" into a file named `DESIGN.md`.
## Overview
This skill helps you create `DESIGN.md` files that serve as the "source of truth" for prompting Stitch to generate new screens that align perfectly with existing design language. Stitch interprets design through "Visual Descriptions" supported by specific color values.
## Prerequisites
- Access to the Stitch MCP Server
- A Stitch project with at least one designed screen
- Access to the Stitch Effective Prompting Guide: https://stitch.withgoogle.com/docs/learn/prompting/
## The Goal
The `DESIGN.md` file will serve as the "source of truth" for prompting Stitch to generate new screens that align perfectly with the existing design language. Stitch interprets design through "Visual Descriptions" supported by specific color values.
## Retrieval and Networking
To analyze a Stitch project, you must retrieve screen metadata and design assets using the Stitch MCP Server tools:
1. **Namespace discovery**: Run `list_tools` to find the Stitch MCP prefix. Use this prefix (e.g., `mcp_stitch:`) for all subsequent calls.
2. **Project lookup** (if Project ID is not provided):
- Call `[prefix]:list_projects` with `filter: "view=owned"` to retrieve all user projects
- Identify the target project by title or URL pattern
- Extract the Project ID from the `name` field (e.g., `projects/13534454087919359824`)
3. **Screen lookup** (if Screen ID is not provided):
- Call `[prefix]:list_screens` with the `projectId` (just the numeric ID, not the full path)
- Review screen titles to identify the target screen (e.g., "Home", "Landing Page")
- Extract the Screen ID from the screen's `name` field
4. **Metadata fetch**:
- Call `[prefix]:get_screen` with both `projectId` and `screenId` (both as numeric IDs only)
- This returns the complete screen object including:
- `screenshot.downloadUrl` - Visual reference of the design
- `htmlCode.downloadUrl` - Full HTML/CSS source code
- `width`, `height`, `deviceType` - Screen dimensions and target platform
- Project metadata including `designTheme` with color and style information
5. **Asset download**:
- Use `web_fetch` or `read_url_content` to download the HTML code from `htmlCode.downloadUrl`
- Optionally download the screenshot from `screenshot.downloadUrl` for visual reference
- Parse the HTML to extract Tailwind classes, custom CSS, and component patterns
6. **Project metadata extraction**:
- Call `[prefix]:get_project` with the project `name` (full path: `projects/{id}`) to get:
- `designTheme` object with color mode, fonts, roundness, custom colors
- Project-level design guidelines and descriptions
- Device type preferences and layout principles
## Analysis & Synthesis Instructions
### 1. Extract Project Identity (JSON)
- Locate the Project Title
- Locate the specific Project ID (e.g., from the `name` field in the JSON)
### 2. Define the Atmosphere (Image/HTML)
Evaluate the screenshot and HTML structure to capture the overall "vibe." Use evocative adjectives to describe the mood (e.g., "Airy," "Dense," "Minimalist," "Utilitarian").
### 3. Map the Color Palette (Tailwind Config/JSON)
Identify the key colors in the system. For each color, provide:
- A descriptive, natural language name that conveys its character (e.g., "Deep Muted Teal-Navy")
- The specific hex code in parentheses for precision (e.g., "#294056")
- Its specific functional role (e.g., "Used for primary actions")
### 4. Translate Geometry & Shape (CSS/Tailwind)
Convert technical `border-radius` and layout values into physical descriptions:
- Describe `rounded-full` as "Pill-shaped"
- Describe `rounded-lg` as "Subtly rounded corners"
- Describe `rounded-none` as "Sharp, squared-off edges"
### 5. Describe Depth & Elevation
Explain how the UI handles layers. Describe the presence and quality of shadows (e.g., "Flat," "Whisper-soft diffused shadows," or "Heavy, high-contrast drop shadows").
## Output Guidelines
- **Language:** Use descriptive design terminology and natural language exclusively
- **Format:** Generate a clean Markdown file following the structure below
- **Precision:** Include exact hex codes for colors while using descriptive names
- **Context:** Explain the "why" behind design decisions, not just the "what"
## Output Format (DESIGN.md Structure)
```markdown
# Design System: [Project Title]
**Project ID:** [Insert Project ID Here]
## 1. Visual Theme & Atmosphere
(Description of the mood, density, and aesthetic philosophy.)
## 2. Color Palette & Roles
(List colors by Descriptive Name + Hex Code + Functional Role.)
## 3. Typography Rules
(Description of font family, weight usage for headers vs. body, and letter-spacing character.)
## 4. Component Stylings
* **Buttons:** (Shape description, color assignment, behavior).
* **Cards/Containers:** (Corner roundness description, background color, shadow depth).
* **Inputs/Forms:** (Stroke style, background).
## 5. Layout Principles
(Description of whitespace strategy, margins, and grid alignment.)
```
## Usage Example
To use this skill for the Furniture Collection project:
1. **Retrieve project information:**
```
Use the Stitch MCP Server to get the Furniture Collection project
```
2. **Get the Home page screen details:**
```
Retrieve the Home page screen's code, image, and screen object information
```
3. **Reference best practices:**
```
Review the Stitch Effective Prompting Guide at:
https://stitch.withgoogle.com/docs/learn/prompting/
```
4. **Analyze and synthesize:**
- Extract all relevant design tokens from the screen
- Translate technical values into descriptive language
- Organize information according to the DESIGN.md structure
5. **Generate the file:**
- Create `DESIGN.md` in the project directory
- Follow the prescribed format exactly
- Ensure all color codes are accurate
- Use evocative, designer-friendly language
## Best Practices
- **Be Descriptive:** Avoid generic terms like "blue" or "rounded." Use "Ocean-deep Cerulean (#0077B6)" or "Gently curved edges"
- **Be Functional:** Always explain what each design element is used for
- **Be Consistent:** Use the same terminology throughout the document
- **Be Visual:** Help readers visualize the design through your descriptions
- **Be Precise:** Include exact values (hex codes, pixel values) in parentheses after natural language descriptions
## Tips for Success
1. **Start with the big picture:** Understand the overall aesthetic before diving into details
2. **Look for patterns:** Identify consistent spacing, sizing, and styling patterns
3. **Think semantically:** Name colors by their purpose, not just their appearance
4. **Consider hierarchy:** Document how visual weight and importance are communicated
5. **Reference the guide:** Use language and patterns from the Stitch Effective Prompting Guide
## Common Pitfalls to Avoid
- ❌ Using technical jargon without translation (e.g., "rounded-xl" instead of "generously rounded corners")
- ❌ Omitting color codes or using only descriptive names
- ❌ Forgetting to explain functional roles of design elements
- ❌ Being too vague in atmosphere descriptions
- ❌ Ignoring subtle design details like shadows or spacing patterns


@@ -0,0 +1,154 @@
# Design System: Furniture Collections List
**Project ID:** 13534454087919359824
## 1. Visual Theme & Atmosphere
The Furniture Collections List embodies a **sophisticated, minimalist sanctuary** that marries the pristine simplicity of Scandinavian design with the refined visual language of luxury editorial presentation. The interface feels **spacious and tranquil**, prioritizing breathing room and visual clarity above all else. The design philosophy is gallery-like and photography-first, allowing each furniture piece to command attention as an individual art object.
The overall mood is **airy yet grounded**, creating an aspirational aesthetic that remains approachable and welcoming. The interface feels **utilitarian in its restraint** but elegant in its execution, with every element serving a clear purpose while maintaining visual sophistication. The atmosphere evokes the serene ambiance of a high-end furniture showroom where customers can browse thoughtfully without visual overwhelm.
**Key Characteristics:**
- Expansive whitespace creating generous breathing room between elements
- Clean, architectural grid system with structured content blocks
- Photography-first presentation with minimal UI interference
- Whisper-soft visual hierarchy that guides without shouting
- Refined, understated interactive elements
- Professional yet inviting editorial tone
## 2. Color Palette & Roles
### Primary Foundation
- **Warm Barely-There Cream** (#FCFAFA) Primary background color. Creates an almost imperceptible warmth that feels more inviting than pure white, serving as the serene canvas for the entire experience.
- **Crisp Very Light Gray** (#F5F5F5) Secondary surface color used for card backgrounds and content areas. Provides subtle visual separation while maintaining the airy, ethereal quality.
### Accent & Interactive
- **Deep Muted Teal-Navy** (#294056) The sole vibrant accent in the palette. Used exclusively for primary call-to-action buttons (e.g., "Shop Now", "View all products"), active navigation links, selected filter states, and subtle interaction highlights. This sophisticated anchor color creates visual focus points without disrupting the serene neutral foundation.
### Typography & Text Hierarchy
- **Charcoal Near-Black** (#2C2C2C) Primary text color for headlines and product names. Provides strong readable contrast while being softer and more refined than pure black.
- **Soft Warm Gray** (#6B6B6B) Secondary text used for body copy, product descriptions, and supporting metadata. Creates clear typographic hierarchy without harsh contrast.
- **Ultra-Soft Silver Gray** (#E0E0E0) Tertiary color for borders, dividers, and subtle structural elements. Creates separation so gentle it's almost imperceptible.
### Functional States (Reserved for system feedback)
- **Success Moss** (#10B981) Stock availability, confirmation states, positive indicators
- **Alert Terracotta** (#EF4444) Low stock warnings, error states, critical alerts
- **Informational Slate** (#64748B) Neutral system messages, informational callouts
## 3. Typography Rules
**Primary Font Family:** Manrope
**Character:** Modern, geometric sans-serif with gentle humanist warmth. Slightly rounded letterforms that feel contemporary yet approachable.
### Hierarchy & Weights
- **Display Headlines (H1):** Semi-bold weight (600), generous letter-spacing (0.02em for elegance), 2.75-3.5rem size. Used sparingly for hero sections and major page titles.
- **Section Headers (H2):** Semi-bold weight (600), subtle letter-spacing (0.01em), 2-2.5rem size. Establishes clear content zones and featured collections.
- **Subsection Headers (H3):** Medium weight (500), normal letter-spacing, 1.5-1.75rem size. Product names and category labels.
- **Body Text:** Regular weight (400), relaxed line-height (1.7), 1rem size. Descriptions and supporting content prioritize comfortable readability.
- **Small Text/Meta:** Regular weight (400), slightly tighter line-height (1.5), 0.875rem size. Prices, availability, and metadata remain legible but visually recessive.
- **CTA Buttons:** Medium weight (500), subtle letter-spacing (0.01em), 1rem size. Balanced presence without visual aggression.
### Spacing Principles
- Headers use slightly expanded letter-spacing for refined elegance
- Body text maintains generous line-height (1.7) for effortless reading
- Consistent vertical rhythm with 2-3rem between related text blocks
- Large margins (4-6rem) between major sections to reinforce spaciousness
## 4. Component Stylings
### Buttons
- **Shape:** Subtly rounded corners (8px/0.5rem radius) approachable and modern without appearing playful or childish
- **Primary CTA:** Deep Muted Teal-Navy (#294056) background with pure white text, comfortable padding (0.875rem vertical, 2rem horizontal)
- **Hover State:** Subtle darkening to deeper navy, smooth 250ms ease-in-out transition
- **Focus State:** Soft outer glow in the primary color for keyboard navigation accessibility
- **Secondary CTA (if needed):** Outlined style with Deep Muted Teal-Navy border, transparent background, hover fills with whisper-soft teal tint
### Cards & Product Containers
- **Corner Style:** Gently rounded corners (12px/0.75rem radius) creating soft, refined edges
- **Background:** Alternates between Warm Barely-There Cream and Crisp Very Light Gray based on layering needs
- **Shadow Strategy:** Flat by default. On hover, whisper-soft diffused shadow appears (`0 2px 8px rgba(0,0,0,0.06)`) creating subtle depth
- **Border:** Optional hairline border (1px) in Ultra-Soft Silver Gray for delicate definition when shadows aren't present
- **Internal Padding:** Generous 2-2.5rem creating comfortable breathing room for content
- **Image Treatment:** Full-bleed at the top of cards, square or 4:3 ratio, seamless edge-to-edge presentation
### Navigation
- **Style:** Clean horizontal layout with generous spacing (2-3rem) between menu items
- **Typography:** Medium weight (500), subtle uppercase, expanded letter-spacing (0.06em) for refined sophistication
- **Default State:** Charcoal Near-Black text
- **Active/Hover State:** Smooth 200ms color transition to Deep Muted Teal-Navy
- **Active Indicator:** Thin underline (2px) in Deep Muted Teal-Navy appearing below current section
- **Mobile:** Converts to elegant hamburger menu with sliding drawer
### Inputs & Forms
- **Stroke Style:** Refined 1px border in Soft Warm Gray
- **Background:** Warm Barely-There Cream with transition to Crisp Very Light Gray on focus
- **Corner Style:** Matching button roundness (8px/0.5rem) for visual consistency
- **Focus State:** Border color shifts to Deep Muted Teal-Navy with subtle outer glow
- **Padding:** Comfortable 0.875rem vertical, 1.25rem horizontal for touch-friendly targets
- **Placeholder Text:** Ultra-Soft Silver Gray, elegant and unobtrusive
### Product Cards (Specific Pattern)
- **Image Area:** Square (1:1) or landscape (4:3) ratio filling card width completely
- **Content Stack:** Product name (H3), brief descriptor, material/finish, price
- **Price Display:** Emphasized with semi-bold weight (600) in Charcoal Near-Black
- **Hover Behavior:** Gentle lift effect (translateY -4px) combined with enhanced shadow
- **Spacing:** Consistent 1.5rem internal padding below image
## 5. Layout Principles
### Grid & Structure
- **Max Content Width:** 1440px for optimal readability and visual balance on large displays
- **Grid System:** Responsive 12-column grid with fluid gutters (24px mobile, 32px desktop)
- **Product Grid:** 4 columns on large desktop, 3 on desktop, 2 on tablet, 1 on mobile
- **Breakpoints:**
- Mobile: <768px
- Tablet: 768-1024px
- Desktop: 1024-1440px
- Large Desktop: >1440px
### Whitespace Strategy (Critical to the Design)
- **Base Unit:** 8px for micro-spacing, 16px for component spacing
- **Vertical Rhythm:** Consistent 2rem (32px) base unit between related elements
- **Section Margins:** Generous 5-8rem (80-128px) between major sections creating dramatic breathing room
- **Edge Padding:** 1.5rem (24px) mobile, 3rem (48px) tablet/desktop for comfortable framing
- **Hero Sections:** Extra-generous top/bottom padding (8-12rem) for impactful presentation
### Alignment & Visual Balance
- **Text Alignment:** Left-aligned for body and navigation (optimal readability), centered for hero headlines and featured content
- **Image to Text Ratio:** Heavily weighted toward imagery (70-30 split) reinforcing photography-first philosophy
- **Asymmetric Balance:** Large hero images offset by compact, refined text blocks
- **Visual Weight Distribution:** Strategic use of whitespace to draw eyes to hero products and primary CTAs
- **Reading Flow:** Clear top-to-bottom, left-to-right pattern with intentional focal points
### Responsive Behavior & Touch
- **Mobile-First Foundation:** Core experience designed and perfected for smallest screens first
- **Progressive Enhancement:** Additional columns, imagery, and details added gracefully at larger breakpoints
- **Touch Targets:** Minimum 44x44px for all interactive elements (WCAG AAA compliant)
- **Image Optimization:** Responsive images with appropriate resolutions for each breakpoint, lazy-loading for performance
- **Collapsing Strategy:** Navigation collapses to hamburger, grid reduces columns, padding scales proportionally
## 6. Design System Notes for Stitch Generation
When creating new screens for this project using Stitch, reference these specific instructions:
### Language to Use
- **Atmosphere:** "Sophisticated minimalist sanctuary with gallery-like spaciousness"
- **Button Shapes:** "Subtly rounded corners" (not "rounded-md" or "8px")
- **Shadows:** "Whisper-soft diffused shadows on hover" (not "shadow-sm")
- **Spacing:** "Generous breathing room" and "expansive whitespace"
### Color References
Always use the descriptive names with hex codes:
- Primary CTA: "Deep Muted Teal-Navy (#294056)"
- Backgrounds: "Warm Barely-There Cream (#FCFAFA)" or "Crisp Very Light Gray (#F5F5F5)"
- Text: "Charcoal Near-Black (#2C2C2C)" or "Soft Warm Gray (#6B6B6B)"
### Component Prompts
- "Create a product card with gently rounded corners, full-bleed square product image, and whisper-soft shadow on hover"
- "Design a primary call-to-action button in Deep Muted Teal-Navy (#294056) with subtle rounded corners and comfortable padding"
- "Add a navigation bar with generous spacing between items, using medium-weight Manrope with subtle uppercase and expanded letter-spacing"
### Incremental Iteration
When refining existing screens:
1. Focus on ONE component at a time (e.g., "Update the product grid cards")
2. Be specific about what to change (e.g., "Increase the internal padding of product cards from 1.5rem to 2rem")
3. Reference this design system language consistently


@@ -0,0 +1,82 @@
---
name: docker-build-push
description: Build Docker images and push to Docker Hub for Coolify deployment. Use when the user needs to (1) build a Docker image locally, (2) push an image to Docker Hub, (3) deploy to Coolify via Docker image, or (4) set up CI/CD for Docker-based deployments with Gitea Actions.
---
# Docker Build and Push
Build Docker images locally and push to Docker Hub for Coolify deployment.
## Prerequisites
1. Docker installed and running
2. Docker Hub account
3. Logged in to Docker Hub: `docker login`
## Build and Push Workflow
### 1. Build the Image
```bash
docker build -t DOCKERHUB_USERNAME/IMAGE_NAME:latest .
```
Optional version tag:
```bash
docker build -t DOCKERHUB_USERNAME/IMAGE_NAME:v1.0.0 .
```
### 2. Test Locally (Optional)
```bash
docker run -p 3000:3000 DOCKERHUB_USERNAME/IMAGE_NAME:latest
```
### 3. Push to Docker Hub
```bash
docker push DOCKERHUB_USERNAME/IMAGE_NAME:latest
```
## Coolify Deployment
In Coolify dashboard:
1. Create/edit service → Select **Docker Image** as source
2. Enter image: `DOCKERHUB_USERNAME/IMAGE_NAME:latest`
3. Configure environment variables
4. Deploy
## Automated Deployment with Gitea Actions
Create `.gitea/workflows/deploy.yaml`:
```yaml
name: Deploy to Coolify
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Trigger Coolify Deployment
run: |
curl -X POST "${{ secrets.COOLIFY_WEBHOOK_URL }}"
```
### Setup:
1. **Get Coolify Webhook URL**: Service settings → Webhooks → Copy URL
2. **Add to Gitea Secrets**: Settings → Secrets → Add `COOLIFY_WEBHOOK_URL`
### Full Workflow:
1. Build and push locally
2. Push code to Gitea (triggers workflow)
3. Gitea notifies Coolify
4. Coolify pulls latest image and redeploys


@@ -0,0 +1,196 @@
---
name: docker-optimizer
description: Reviews Dockerfiles for best practices, security issues, and image size optimizations including multi-stage builds and layer caching. Use when working with Docker, containers, or deployment.
allowed-tools: Read, Grep, Glob, Write, Edit
---
# Docker Optimizer
Analyzes and optimizes Dockerfiles for performance, security, and best practices.
## When to Use
- User working with Docker or containers
- Dockerfile optimization needed
- Container image too large
- User mentions "Docker", "container", "image size", or "deployment"
## Instructions
### 1. Find Dockerfiles
Search for: `Dockerfile`, `Dockerfile.*`, `*.dockerfile`
### 2. Check Best Practices
**Use specific base image versions:**
```dockerfile
# Bad
FROM node:latest
# Good
FROM node:18-alpine
```
**Minimize layers:**
```dockerfile
# Bad
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git
# Good
RUN apt-get update && \
apt-get install -y curl git && \
rm -rf /var/lib/apt/lists/*
```
**Order instructions by change frequency:**
```dockerfile
# Dependencies change less than code
COPY package*.json ./
RUN npm install
COPY . .
```
**Use .dockerignore:**
```
node_modules
.git
.env
*.md
```
### 3. Multi-Stage Builds
Reduce final image size:
```dockerfile
# Build stage
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```
### 4. Security Issues
**Don't run as root:**
```dockerfile
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
```
**No secrets in image:**
```dockerfile
# Bad: Hardcoded secret
ENV API_KEY=secret123
# Good: Use build args or runtime env
ARG BUILD_ENV
ENV NODE_ENV=${BUILD_ENV}
```
**Scan for vulnerabilities:**
```bash
docker scan image:tag
trivy image image:tag
```
### 5. Size Optimization
**Use Alpine images:**
- `node:18-alpine` vs `node:18` (900MB → 170MB)
- `python:3.11-alpine` vs `python:3.11` (900MB → 50MB)
**Remove unnecessary files:**
```dockerfile
RUN npm install --production && \
npm cache clean --force
```
**Use specific COPY:**
```dockerfile
# Bad: Copies everything
COPY . .
# Good: Copy only what's needed
COPY package*.json ./
COPY src ./src
```
### 6. Caching Strategy
Layer caching optimization:
```dockerfile
# Install dependencies first (cached if package.json unchanged)
COPY package*.json ./
RUN npm install
# Copy source (changes more frequently)
COPY . .
RUN npm run build
```
### 7. Health Checks
```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node healthcheck.js
```
### 8. Generate Optimized Dockerfile
Provide improved version with:
- Multi-stage build
- Appropriate base image
- Security improvements
- Layer optimization
- Build caching
- .dockerignore file
### 9. Build Commands
**Efficient build:**
```bash
# Use BuildKit
DOCKER_BUILDKIT=1 docker build -t app:latest .
# Build with cache from registry
docker build --cache-from myregistry/app:latest -t app:latest .
```
### 10. Dockerfile Checklist
- [ ] Specific base image tag (not `latest`)
- [ ] Multi-stage build if applicable
- [ ] Non-root user
- [ ] Minimal layers (combined RUN commands)
- [ ] .dockerignore present
- [ ] No secrets in image
- [ ] Proper layer ordering for caching
- [ ] Alpine or slim variant used
- [ ] Cleanup in same RUN layer
- [ ] HEALTHCHECK defined
## Security Best Practices
- Scan images regularly
- Use official base images
- Keep base images updated
- Minimize attack surface (fewer packages)
- Run as non-root user
- Use read-only filesystem where possible
## Supporting Files
- `templates/Dockerfile.optimized`: Optimized multi-stage Dockerfile example
- `templates/.dockerignore`: Common .dockerignore patterns


@@ -0,0 +1,190 @@
{
"schema_version": "2.0",
"meta": {
"generated_at": "2026-01-10T12:49:08.788Z",
"slug": "crazydubya-docker-optimizer",
"source_url": "https://github.com/CrazyDubya/claude-skills/tree/main/docker-optimizer",
"source_ref": "main",
"model": "claude",
"analysis_version": "2.0.0",
"source_type": "community",
"content_hash": "91e122d5cb5f029f55f8ef0d0271eb27a36814091d8749886a847b682f5d5156",
"tree_hash": "67892c5573ebf65b1bc8bc3227aa00dd785c102b1874e665c8e5b2d78a3079a0"
},
"skill": {
"name": "docker-optimizer",
"description": "Reviews Dockerfiles for best practices, security issues, and image size optimizations including multi-stage builds and layer caching. Use when working with Docker, containers, or deployment.",
"summary": "Reviews Dockerfiles for best practices, security issues, and image size optimizations including mult...",
"icon": "🐳",
"version": "1.0.0",
"author": "CrazyDubya",
"license": "MIT",
"category": "devops",
"tags": [
"docker",
"containers",
"optimization",
"security",
"devops"
],
"supported_tools": [
"claude",
"codex",
"claude-code"
],
"risk_factors": []
},
"security_audit": {
"risk_level": "safe",
"is_blocked": false,
"safe_to_publish": true,
"summary": "This is a legitimate Docker optimization tool with strong security practices. It contains documentation and templates that promote secure containerization practices without any executable code or network operations.",
"risk_factor_evidence": [],
"critical_findings": [],
"high_findings": [],
"medium_findings": [],
"low_findings": [],
"dangerous_patterns": [],
"files_scanned": 3,
"total_lines": 317,
"audit_model": "claude",
"audited_at": "2026-01-10T12:49:08.788Z"
},
"content": {
"user_title": "Optimize Dockerfiles for Security and Performance",
"value_statement": "Docker images are often bloated and insecure. This skill analyzes your Dockerfiles and provides optimized versions with multi-stage builds, security hardening, and size reduction techniques.",
"seo_keywords": [
"docker optimization",
"dockerfile best practices",
"container security",
"multi-stage builds",
"docker image size",
"claude docker",
"codex containers",
"claude-code devops",
"docker layer caching",
"container optimization"
],
"actual_capabilities": [
"Analyzes Dockerfiles for security vulnerabilities and best practice violations",
"Recommends specific base image versions and multi-stage build patterns",
"Provides optimized .dockerignore templates to prevent sensitive data exposure",
"Suggests layer caching strategies to speed up builds",
"Generates production-ready Dockerfile examples with non-root users"
],
"limitations": [
"Only analyzes Dockerfile syntax and structure, not runtime behavior",
"Requires manual implementation of recommended changes",
"Cannot scan existing Docker images for vulnerabilities",
"Limited to Node.js examples in provided templates"
],
"use_cases": [
{
"target_user": "DevOps Engineers",
"title": "Production Deployment Optimization",
"description": "Reduce Docker image sizes by 80% and improve security posture for production deployments with hardened configurations."
},
{
"target_user": "Developers",
"title": "Development Workflow Enhancement",
"description": "Speed up local development with optimized layer caching and multi-stage builds that separate build dependencies from runtime."
},
{
"target_user": "Security Teams",
"title": "Container Security Auditing",
"description": "Identify security anti-patterns in Dockerfiles like running as root, exposing secrets, or using vulnerable base images."
}
],
"prompt_templates": [
{
"title": "Basic Dockerfile Review",
"scenario": "First-time Docker user needs guidance",
"prompt": "Review this Dockerfile and tell me what's wrong: [paste Dockerfile content]. I'm new to Docker and want to follow best practices."
},
{
"title": "Image Size Optimization",
"scenario": "Large image slowing down deployments",
"prompt": "My Docker image is 2GB and takes forever to build. Here's my Dockerfile: [paste content]. How can I make it smaller and faster?"
},
{
"title": "Security Hardening",
"scenario": "Production security requirements",
"prompt": "I need to secure this Dockerfile for production use: [paste content]. Please check for security issues and provide a hardened version."
},
{
"title": "Multi-Stage Build Conversion",
"scenario": "Complex application with build dependencies",
"prompt": "Convert this single-stage Dockerfile to use multi-stage builds to separate build dependencies from the runtime image: [paste content]"
}
],
"output_examples": [
{
"input": "Review my Node.js Dockerfile for best practices",
"output": [
"✓ Found 3 optimization opportunities:",
"• Use specific base image version (node:18-alpine instead of node:latest)",
"• Add multi-stage build to reduce final image size by 70%",
"• Create non-root user for security (currently running as root)",
"• Move dependencies copy before source code for better caching",
"• Add .dockerignore to exclude 15 unnecessary files",
"• Include HEALTHCHECK instruction for container health monitoring"
]
}
],
"best_practices": [
"Always use specific base image tags instead of 'latest' for reproducible builds",
"Implement multi-stage builds to keep production images minimal and secure",
"Create and use non-root users to limit container privileges"
],
"anti_patterns": [
"Never hardcode secrets or API keys directly in Dockerfiles using ENV instructions",
"Avoid copying entire source directories when only specific files are needed",
"Don't run package managers without cleaning caches in the same layer"
],
"faq": [
{
"question": "Which base images should I use?",
"answer": "Use Alpine variants for smaller sizes (node:18-alpine, python:3.11-alpine) or distroless images for maximum security."
},
{
"question": "How much can this reduce my image size?",
"answer": "Typically 60-80% reduction through multi-stage builds and Alpine base images. A 2GB Node.js image can become 200-400MB."
},
{
"question": "Does this work with all programming languages?",
"answer": "Yes, the optimization principles apply to all languages. Examples cover Node.js, Python, Go, Java, and Ruby Dockerfiles."
},
{
"question": "Is my code safe when using this skill?",
"answer": "Yes, this skill only reads and analyzes your Dockerfile. It doesn't execute code or make network calls."
},
{
"question": "What if my build breaks after optimization?",
"answer": "The skill provides gradual optimization steps. Test each change separately and keep your original Dockerfile as backup."
},
{
"question": "How does this compare to Docker's best practices documentation?",
"answer": "This skill provides actionable, specific recommendations based on your actual Dockerfile rather than generic guidelines."
}
]
},
"file_structure": [
{
"name": "templates",
"type": "dir",
"path": "templates",
"children": [
{
"name": "Dockerfile.optimized",
"type": "file",
"path": "templates/Dockerfile.optimized"
}
]
},
{
"name": "SKILL.md",
"type": "file",
"path": "SKILL.md"
}
]
}

View File

@@ -0,0 +1,49 @@
# Multi-stage Dockerfile Example (Node.js)
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
# Copy dependency files
COPY package*.json ./
# Install all dependencies (dev dependencies are needed for the build step)
RUN npm ci
# Copy source code
COPY . .
# Build, then prune dev dependencies so the production stage copies
# runtime-only node_modules
RUN npm run build && \
    npm prune --omit=dev && \
    npm cache clean --force
# Production stage
FROM node:18-alpine
WORKDIR /app
# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init
# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Copy built application from build stage
COPY --from=build --chown=appuser:appgroup /app/dist ./dist
COPY --from=build --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --chown=appuser:appgroup package*.json ./
# Switch to non-root user
USER appuser
# Expose port
EXPOSE 3000
# Health check (assumes healthcheck.js is present in /app of the final image,
# e.g. emitted by the build or copied explicitly; adjust the path to your project)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node healthcheck.js || exit 1
# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]
# Start application
CMD ["node", "dist/index.js"]

View File

@@ -0,0 +1,86 @@
---
name: git-commit
description: Use when creating git commits to ensure commit messages follow project standards. Applies the 7 rules for great commit messages with focus on conciseness and imperative mood.
---
# Git Commit Guidelines
Follow these rules when creating commits for this repository.
## The 7 Rules
1. **Separate subject from body with a blank line**
2. **Limit the subject line to 50 characters**
3. **Capitalize the subject line**
4. **Do not end the subject line with a period**
5. **Use the imperative mood** ("Add feature" not "Added feature")
6. **Wrap the body at 72 characters**
7. **Use the body to explain what and why vs. how**
## Key Principles
**Be concise, not verbose.** Every word should add value. Avoid unnecessary details about implementation mechanics - focus on what changed and why it matters.
**Subject line should stand alone** - don't require reading the body to understand the change. Body is optional and only needed for non-obvious context.
**Focus on the change, not how it was discovered** - never reference "review feedback", "PR comments", or "code review" in commit messages. Describe what the change does and why, not that someone asked for it.
**Avoid bullet points** - write prose, not lists. If you need bullets to explain a change, you're either committing too much at once or over-explaining implementation details.
## Format
Always use a HEREDOC to ensure proper formatting:
```bash
git commit -m "$(cat <<'EOF'
Subject line here

Optional body paragraph explaining what and why.
EOF
)"
```
## Good Examples
```
Add session isolation for concurrent executions
```
```
Fix encoding parameter handling in file operations

The encoding parameter wasn't properly passed through the validation
layer, causing base64 content to be treated as UTF-8.
```
## Bad Examples
```
Update files
Changes some things related to sessions and also fixes a bug.
```
Problem: Vague subject, doesn't explain what changed
```
Add file operations support
Implements FileClient with read/write methods and adds FileService
in the container with a validation layer. Includes comprehensive test
coverage for edge cases and supports both UTF-8 text and base64 binary
encodings. Uses proper error handling with custom error types from the
shared package for consistency across the SDK.
```
Problem: Over-explains implementation details, uses too many words
## Checklist Before Committing
- [ ] Subject is ≤50 characters
- [ ] Subject uses imperative mood
- [ ] Subject is capitalized, no period at end
- [ ] Body (if present) explains why, not how
- [ ] No references to review feedback or PR comments
- [ ] No bullet points in body
- [ ] Not committing sensitive files (.env, credentials)
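Rules 2–4 are mechanical and can be sanity-checked in a few lines. A sketch in TypeScript against one of the good examples above (the script itself is illustrative, not part of any tooling this repo ships):

```typescript
// Mechanical checks for the subject-line rules (length, capitalization, period).
const subject = "Add session isolation for concurrent executions";

const checks = {
  lengthOk: subject.length <= 50,      // rule 2: ≤50 characters
  capitalized: /^[A-Z]/.test(subject), // rule 3: capitalized
  noPeriod: !subject.endsWith("."),    // rule 4: no trailing period
};

console.log(checks); // { lengthOk: true, capitalized: true, noPeriod: true }
```

Imperative mood (rule 5) is the one rule that still needs human judgment.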

View File

@@ -0,0 +1,227 @@
---
name: parallel-execution
description: Patterns for parallel subagent execution using Task tool with run_in_background. Use when coordinating multiple independent tasks, spawning dynamic subagents, or implementing features that can be parallelized.
---
# Parallel Execution Patterns
## Core Concept
Parallel execution spawns multiple subagents simultaneously using the Task tool with `run_in_background: true`. This enables N tasks to run concurrently, dramatically reducing total execution time.
**Critical Rule**: ALL Task calls MUST be in a SINGLE assistant message for true parallelism. If Task calls are in separate messages, they run sequentially.
## Execution Protocol
### Step 1: Identify Parallelizable Tasks
Before spawning, verify tasks are independent:
- No task depends on another's output
- Tasks target different files or concerns
- Can run simultaneously without conflicts
### Step 2: Prepare Dynamic Subagent Prompts
Each subagent receives a custom prompt defining its role:
```
You are a [ROLE] specialist for this specific task.
Task: [CLEAR DESCRIPTION]
Context:
[RELEVANT CONTEXT ABOUT THE CODEBASE/PROJECT]
Files to work with:
[SPECIFIC FILES OR PATTERNS]
Output format:
[EXPECTED OUTPUT STRUCTURE]
Focus areas:
- [PRIORITY 1]
- [PRIORITY 2]
```
### Step 3: Launch All Tasks in ONE Message
**CRITICAL**: Make ALL Task calls in the SAME assistant message:
```
I'm launching N parallel subagents:
[Task 1]
description: "Subagent A - [brief purpose]"
prompt: "[detailed instructions for subagent A]"
run_in_background: true
[Task 2]
description: "Subagent B - [brief purpose]"
prompt: "[detailed instructions for subagent B]"
run_in_background: true
[Task 3]
description: "Subagent C - [brief purpose]"
prompt: "[detailed instructions for subagent C]"
run_in_background: true
```
### Step 4: Retrieve Results with TaskOutput
After launching, retrieve each result:
```
[Wait for completion, then retrieve]
TaskOutput: task_1_id
TaskOutput: task_2_id
TaskOutput: task_3_id
```
### Step 5: Synthesize Results
Combine all subagent outputs into unified result:
- Merge related findings
- Resolve conflicts between recommendations
- Prioritize by severity/importance
- Create actionable summary
## Dynamic Subagent Patterns
### Pattern 1: Task-Based Parallelization
When you have N tasks to implement, spawn N subagents:
```
Plan:
1. Implement auth module
2. Create API endpoints
3. Add database schema
4. Write unit tests
5. Update documentation
Spawn 5 subagents (one per task):
- Subagent 1: Implements auth module
- Subagent 2: Creates API endpoints
- Subagent 3: Adds database schema
- Subagent 4: Writes unit tests
- Subagent 5: Updates documentation
```
### Pattern 2: Directory-Based Parallelization
Analyze multiple directories simultaneously:
```
Directories: src/auth, src/api, src/db
Spawn 3 subagents:
- Subagent 1: Analyzes src/auth
- Subagent 2: Analyzes src/api
- Subagent 3: Analyzes src/db
```
### Pattern 3: Perspective-Based Parallelization
Review from multiple angles simultaneously:
```
Perspectives: Security, Performance, Testing, Architecture
Spawn 4 subagents:
- Subagent 1: Security review
- Subagent 2: Performance analysis
- Subagent 3: Test coverage review
- Subagent 4: Architecture assessment
```
## TodoWrite Integration
When using parallel execution, TodoWrite behavior differs:
**Sequential execution**: Only ONE task `in_progress` at a time
**Parallel execution**: MULTIPLE tasks can be `in_progress` simultaneously
```
# Before launching parallel tasks
todos = [
{ content: "Task A", status: "in_progress" },
{ content: "Task B", status: "in_progress" },
{ content: "Task C", status: "in_progress" },
{ content: "Synthesize results", status: "pending" }
]
# After each TaskOutput retrieval, mark as completed
todos = [
{ content: "Task A", status: "completed" },
{ content: "Task B", status: "completed" },
{ content: "Task C", status: "completed" },
{ content: "Synthesize results", status: "in_progress" }
]
```
## When to Use Parallel Execution
**Good candidates:**
- Multiple independent analyses (code review, security, tests)
- Multi-file processing where files are independent
- Exploratory tasks with different perspectives
- Verification tasks with different checks
- Feature implementation with independent components
**Avoid parallelization when:**
- Tasks have dependencies (Task B needs Task A's output)
- Sequential workflows are required (commit -> push -> PR)
- Tasks modify the same files (risk of conflicts)
- Order matters for correctness
## Performance Benefits
| Approach | 5 Tasks @ 30s each | Total Time |
|----------|-------------------|------------|
| Sequential | 30s + 30s + 30s + 30s + 30s | ~150s |
| Parallel | All 5 run simultaneously | ~30s |
Parallel execution is approximately Nx faster where N is the number of independent tasks.
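The Nx claim can be sketched outside the agent runtime. This illustrative TypeScript uses `Promise.all` as a stand-in for concurrently launched subagents (task names and delays are invented for the demo):

```typescript
// Each "task" stands in for an independent subagent that takes `ms` to finish.
const task = (name: string, ms = 200): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(`${name} done`), ms));

const names = ["A", "B", "C"];

// Sequential: each task is awaited before the next starts (~names.length * ms).
let start = Date.now();
const sequential: string[] = [];
for (const n of names) sequential.push(await task(n));
const seqTime = Date.now() - start;

// Parallel: all tasks are started together (~ms total), like Task calls
// issued in a single message.
start = Date.now();
const parallel = await Promise.all(names.map((n) => task(n)));
const parTime = Date.now() - start;

console.log(sequential.join(", ")); // A done, B done, C done
console.log(parallel.join(", "));   // A done, B done, C done
console.log(parTime < seqTime);     // true
```

The results are identical either way; only the wall-clock time differs, which is the whole point of batching Task calls into one message.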
## Example: Feature Implementation
**User request**: "Implement user authentication with login, registration, and password reset"
**Orchestrator creates plan**:
1. Implement login endpoint
2. Implement registration endpoint
3. Implement password reset endpoint
4. Add authentication middleware
5. Write integration tests
**Parallel execution**:
```
Launching 5 subagents in parallel:
[Task 1] Login endpoint implementation
[Task 2] Registration endpoint implementation
[Task 3] Password reset endpoint implementation
[Task 4] Auth middleware implementation
[Task 5] Integration test writing
All tasks run simultaneously...
[Collect results via TaskOutput]
[Synthesize into cohesive implementation]
```
## Troubleshooting
**Tasks running sequentially?**
- Verify ALL Task calls are in SINGLE message
- Check `run_in_background: true` is set for each
**Results not available?**
- Use TaskOutput with correct task IDs
- Wait for tasks to complete before retrieving
**Conflicts in output?**
- Ensure tasks don't modify same files
- Add conflict resolution in synthesis step

File diff suppressed because it is too large

View File

@@ -0,0 +1,351 @@
---
name: payload-cms
description: >
Use when working with Payload CMS projects (payload.config.ts, collections, fields, hooks, access control, Payload API).
Triggers on tasks involving: collection definitions, field configurations, hooks, access control, database queries,
custom endpoints, authentication, file uploads, drafts/versions, live preview, or plugin development.
Also use when debugging validation errors, security issues, relationship queries, transactions, or hook behavior.
author: payloadcms
version: 1.0.0
---
# Payload CMS Development
Payload is a Next.js native CMS with TypeScript-first architecture. This skill transfers expert knowledge for building collections, hooks, access control, and queries the right way.
## Mental Model
Think of Payload as **three interconnected layers**:
1. **Config Layer** → Collections, globals, fields define your schema
2. **Hook Layer** → Lifecycle events transform and validate data
3. **Access Layer** → Functions control who can do what
Every operation flows through: `Config → Access Check → Hook Chain → Database → Response Hooks`
## Quick Reference
| Task | Solution | Details |
|------|----------|---------|
| Auto-generate slugs | `slugField()` or beforeChange hook | [references/fields.md#slug-field] |
| Restrict by user | Access control with query constraint | [references/access-control.md] |
| Local API with auth | `user` + `overrideAccess: false` | [references/queries.md#local-api] |
| Draft/publish | `versions: { drafts: true }` | [references/collections.md#drafts] |
| Computed fields | `virtual: true` with afterRead hook | [references/fields.md#virtual] |
| Conditional fields | `admin.condition` | [references/fields.md#conditional] |
| Filter relationships | `filterOptions` on field | [references/fields.md#relationship] |
| Prevent hook loops | `req.context` flag | [references/hooks.md#context] |
| Transactions | Pass `req` to all operations | [references/hooks.md#transactions] |
| Background jobs | Jobs queue with tasks | [references/advanced.md#jobs] |
## Quick Start
```bash
npx create-payload-app@latest my-app
cd my-app
pnpm dev
```
### Minimal Config
```ts
import { buildConfig } from 'payload'
import { mongooseAdapter } from '@payloadcms/db-mongodb'
import { lexicalEditor } from '@payloadcms/richtext-lexical'
export default buildConfig({
admin: { user: 'users' },
collections: [Users, Media, Posts],
editor: lexicalEditor(),
secret: process.env.PAYLOAD_SECRET || '',
typescript: { outputFile: 'payload-types.ts' },
db: mongooseAdapter({ url: process.env.DATABASE_URL }),
})
```
## Core Patterns
### Collection Definition
```ts
import type { CollectionConfig } from 'payload'
export const Posts: CollectionConfig = {
slug: 'posts',
admin: {
useAsTitle: 'title',
defaultColumns: ['title', 'author', 'status', 'createdAt'],
},
fields: [
{ name: 'title', type: 'text', required: true },
{ name: 'slug', type: 'text', unique: true, index: true },
{ name: 'content', type: 'richText' },
{ name: 'author', type: 'relationship', relationTo: 'users' },
{ name: 'status', type: 'select', options: ['draft', 'published'], defaultValue: 'draft' },
],
timestamps: true,
}
```
### Hook Pattern (Auto-slug)
```ts
export const Posts: CollectionConfig = {
slug: 'posts',
hooks: {
beforeChange: [
async ({ data, operation }) => {
if (operation === 'create' && data.title) {
data.slug = data.title.toLowerCase().replace(/\s+/g, '-')
}
return data
},
],
},
fields: [{ name: 'title', type: 'text', required: true }],
}
```
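The transform inside that hook is easy to check in isolation (extracted here purely for illustration):

```typescript
// Same transform as the beforeChange hook above, as a standalone function.
const slugify = (title: string): string =>
  title.toLowerCase().replace(/\s+/g, "-");

console.log(slugify("My First Post"));      // my-first-post
console.log(slugify("  Spaces   Galore ")); // -spaces-galore-
```

The second example shows a limitation of the naive regex (leading/trailing hyphens on untrimmed input); a production slug helper would also trim and strip punctuation.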
### Access Control Pattern
```ts
import type { Access } from 'payload'
// Type-safe: admin-only access
export const adminOnly: Access = ({ req }) => {
return req.user?.roles?.includes('admin') ?? false
}
// Row-level: users see only their own posts
export const ownPostsOnly: Access = ({ req }) => {
if (!req.user) return false
if (req.user.roles?.includes('admin')) return true
return { author: { equals: req.user.id } }
}
```
### Query Pattern
```ts
// Local API with access control
const posts = await payload.find({
collection: 'posts',
where: {
status: { equals: 'published' },
'author.name': { contains: 'john' },
},
depth: 2,
limit: 10,
sort: '-createdAt',
user: req.user,
overrideAccess: false, // CRITICAL: enforce permissions
})
```
## Critical Security Rules
### 1. Local API Access Control
**Default behavior bypasses ALL access control.** This is the #1 security mistake.
```ts
// ❌ SECURITY BUG: Access control bypassed even with user
await payload.find({ collection: 'posts', user: someUser })
// ✅ SECURE: Explicitly enforce permissions
await payload.find({
collection: 'posts',
user: someUser,
overrideAccess: false, // REQUIRED
})
```
**Rule:** Use `overrideAccess: false` for any operation acting on behalf of a user.
### 2. Transaction Integrity
**Operations without `req` run in separate transactions.**
```ts
// ❌ DATA CORRUPTION: Separate transaction
hooks: {
afterChange: [async ({ doc, req }) => {
await req.payload.create({
collection: 'audit-log',
data: { docId: doc.id },
// Missing req - breaks atomicity!
})
}]
}
// ✅ ATOMIC: Same transaction
hooks: {
afterChange: [async ({ doc, req }) => {
await req.payload.create({
collection: 'audit-log',
data: { docId: doc.id },
req, // Maintains transaction
})
}]
}
```
**Rule:** Always pass `req` to nested operations in hooks.
### 3. Infinite Hook Loops
**Hooks triggering themselves create infinite loops.**
```ts
// ❌ INFINITE LOOP
hooks: {
afterChange: [async ({ doc, req }) => {
await req.payload.update({
collection: 'posts',
id: doc.id,
data: { views: doc.views + 1 },
req,
}) // Triggers afterChange again!
}]
}
// ✅ SAFE: Context flag breaks the loop
hooks: {
afterChange: [async ({ doc, req, context }) => {
if (context.skipViewUpdate) return
await req.payload.update({
collection: 'posts',
id: doc.id,
data: { views: doc.views + 1 },
req,
context: { skipViewUpdate: true },
})
}]
}
```
## Project Structure
```
src/
├── app/
│ ├── (frontend)/page.tsx
│ └── (payload)/admin/[[...segments]]/page.tsx
├── collections/
│ ├── Posts.ts
│ ├── Media.ts
│ └── Users.ts
├── globals/Header.ts
├── hooks/slugify.ts
└── payload.config.ts
```
## Type Generation
Generate types after schema changes:
```ts
// payload.config.ts
export default buildConfig({
  typescript: { outputFile: 'payload-types.ts' },
})
// Regenerate after schema changes:
//   npx payload generate:types
// Usage
import type { Post, User } from '@/payload-types'
```
## Getting Payload Instance
```ts
// In API routes
import { getPayload } from 'payload'
import config from '@payload-config'
export async function GET() {
const payload = await getPayload({ config })
const posts = await payload.find({ collection: 'posts' })
return Response.json(posts)
}
// In Server Components
export default async function Page() {
const payload = await getPayload({ config })
const { docs } = await payload.find({ collection: 'posts' })
return <div>{docs.map(p => <h1 key={p.id}>{p.title}</h1>)}</div>
}
```
## Common Field Types
```ts
// Text
{ name: 'title', type: 'text', required: true }
// Relationship
{ name: 'author', type: 'relationship', relationTo: 'users' }
// Rich text
{ name: 'content', type: 'richText' }
// Select
{ name: 'status', type: 'select', options: ['draft', 'published'] }
// Upload
{ name: 'image', type: 'upload', relationTo: 'media' }
// Array
{
name: 'tags',
type: 'array',
fields: [{ name: 'tag', type: 'text' }],
}
// Blocks (polymorphic content)
{
name: 'layout',
type: 'blocks',
blocks: [HeroBlock, ContentBlock, CTABlock],
}
```
## Decision Framework
**When choosing between approaches:**
| Scenario | Approach |
|----------|----------|
| Data transformation before save | `beforeChange` hook |
| Data transformation after read | `afterRead` hook |
| Enforce business rules | Access control function |
| Complex validation | `validate` function on field |
| Computed display value | Virtual field with `afterRead` |
| Related docs list | `join` field type |
| Side effects (email, webhook) | `afterChange` hook with context guard |
| Database-level constraint | Field with `unique: true` or `index: true` |
## Quality Checks
Good Payload code:
- [ ] All Local API calls with user context use `overrideAccess: false`
- [ ] All hook operations pass `req` for transaction integrity
- [ ] Recursive hooks use `context` flags
- [ ] Types generated and imported from `payload-types.ts`
- [ ] Access control functions are typed with `Access` type
- [ ] Collections have meaningful `admin.useAsTitle` set
## Reference Documentation
For detailed patterns, see:
- **[references/fields.md](references/fields.md)** - All field types, validation, conditional logic
- **[references/collections.md](references/collections.md)** - Auth, uploads, drafts, live preview
- **[references/hooks.md](references/hooks.md)** - Hook lifecycle, context, patterns
- **[references/access-control.md](references/access-control.md)** - RBAC, row-level, field-level
- **[references/queries.md](references/queries.md)** - Operators, Local/REST/GraphQL APIs
- **[references/advanced.md](references/advanced.md)** - Jobs, plugins, localization
## Resources
- Docs: https://payloadcms.com/docs
- LLM Context: https://payloadcms.com/llms-full.txt
- GitHub: https://github.com/payloadcms/payload
- Templates: https://github.com/payloadcms/payload/tree/main/templates

View File

@@ -0,0 +1,242 @@
# Access Control Reference
## Overview
Access control functions determine WHO can do WHAT with documents:
```ts
type Access = (args: AccessArgs) => boolean | Where | Promise<boolean | Where>
```
Returns:
- `true` - Full access
- `false` - No access
- `Where` query - Filtered access (row-level security)
## Collection-Level Access
```ts
export const Posts: CollectionConfig = {
slug: 'posts',
access: {
create: isLoggedIn,
read: isPublishedOrAdmin,
update: isAdminOrAuthor,
delete: isAdmin,
},
fields: [...],
}
```
## Common Patterns
### Public Read, Admin Write
```ts
const isAdmin: Access = ({ req }) => {
return req.user?.roles?.includes('admin') ?? false
}
const isLoggedIn: Access = ({ req }) => {
return !!req.user
}
access: {
create: isLoggedIn,
read: () => true, // Public
update: isAdmin,
delete: isAdmin,
}
```
### Row-Level Security (User's Own Documents)
```ts
const ownDocsOnly: Access = ({ req }) => {
if (!req.user) return false
// Admins see everything
if (req.user.roles?.includes('admin')) return true
// Others see only their own
return {
author: { equals: req.user.id },
}
}
access: {
read: ownDocsOnly,
update: ownDocsOnly,
delete: ownDocsOnly,
}
```
### Complex Queries
```ts
const publishedOrOwn: Access = ({ req }) => {
// Not logged in: published only
if (!req.user) {
return { status: { equals: 'published' } }
}
// Admin: see all
if (req.user.roles?.includes('admin')) return true
// Others: published OR own drafts
return {
or: [
{ status: { equals: 'published' } },
{ author: { equals: req.user.id } },
],
}
}
```
## Field-Level Access
Control access to specific fields:
```ts
{
name: 'internalNotes',
type: 'textarea',
access: {
    read: ({ req }) => req.user?.roles?.includes('admin') ?? false,
    update: ({ req }) => req.user?.roles?.includes('admin') ?? false,
},
}
```
### Hide Field Completely
```ts
{
name: 'secretKey',
type: 'text',
access: {
read: () => false, // Never returned in API
    update: ({ req }) => req.user?.roles?.includes('admin') ?? false,
},
}
```
## Access Control Arguments
```ts
type AccessArgs = {
req: PayloadRequest
id?: string | number // Document ID (for update/delete)
data?: Record<string, unknown> // Incoming data (for create/update)
}
```
## RBAC (Role-Based Access Control)
```ts
// Define roles
type Role = 'admin' | 'editor' | 'author' | 'subscriber'
// Helper functions
const hasRole = (req: PayloadRequest, role: Role): boolean => {
return req.user?.roles?.includes(role) ?? false
}
const hasAnyRole = (req: PayloadRequest, roles: Role[]): boolean => {
return roles.some(role => hasRole(req, role))
}
// Use in access control
const canEdit: Access = ({ req }) => {
return hasAnyRole(req, ['admin', 'editor'])
}
const canPublish: Access = ({ req }) => {
return hasAnyRole(req, ['admin', 'editor'])
}
const canDelete: Access = ({ req }) => {
return hasRole(req, 'admin')
}
```
## Multi-Tenant Access
```ts
// Users belong to organizations
const sameOrgOnly: Access = ({ req }) => {
if (!req.user) return false
// Super admin sees all
if (req.user.roles?.includes('super-admin')) return true
// Others see only their org's data
return {
organization: { equals: req.user.organization },
}
}
// Apply to collection
access: {
create: ({ req }) => !!req.user,
read: sameOrgOnly,
update: sameOrgOnly,
delete: sameOrgOnly,
}
```
## Global Access
For singleton documents:
```ts
export const Settings: GlobalConfig = {
slug: 'settings',
access: {
read: () => true,
  update: ({ req }) => req.user?.roles?.includes('admin') ?? false,
},
fields: [...],
}
```
## Important: Local API Access Control
**Local API bypasses access control by default!**
```ts
// ❌ SECURITY BUG: Access control bypassed
await payload.find({
collection: 'posts',
user: someUser,
})
// ✅ SECURE: Explicitly enforce access control
await payload.find({
collection: 'posts',
user: someUser,
overrideAccess: false, // REQUIRED
})
```
## Access Control with req.context
Share state between access checks and hooks:
```ts
const conditionalAccess: Access = ({ req }) => {
// Check context set by middleware or previous operation
if (req.context?.bypassAuth) return true
return req.user?.roles?.includes('admin')
}
```
## Best Practices
1. **Default to restrictive** - Start with `false`, add permissions
2. **Use query constraints for row-level** - More efficient than filtering after
3. **Keep logic in reusable functions** - DRY across collections
4. **Test with different user types** - Admin, regular user, anonymous
5. **Remember Local API default** - Always use `overrideAccess: false` for user-facing operations
6. **Document your access rules** - Complex logic needs comments
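Because access functions are plain functions of `{ req }`, they are straightforward to unit-test with a mocked request. A minimal sketch, with the request and user shapes simplified for illustration:

```typescript
// Simplified stand-ins for Payload's request/user types (illustrative only).
type MockUser = { id: string; roles?: string[] } | null;
type MockReq = { user: MockUser };

// Same row-level pattern as ownDocsOnly above.
const ownDocsOnly = ({ req }: { req: MockReq }) => {
  if (!req.user) return false;
  if (req.user.roles?.includes("admin")) return true;
  return { author: { equals: req.user.id } };
};

console.log(ownDocsOnly({ req: { user: null } }));                           // false
console.log(ownDocsOnly({ req: { user: { id: "u1", roles: ["admin"] } } })); // true
console.log(ownDocsOnly({ req: { user: { id: "u2", roles: ["author"] } } }));
// { author: { equals: 'u2' } }
```

Exercising all three user types (anonymous, admin, regular) per practice 4 catches most access-control regressions before they reach the API.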

Some files were not shown because too many files have changed in this diff