SwissArmyHammer
The MCP server for managing prompts as markdown files
SwissArmyHammer solves the problem of AI prompt management by providing a comprehensive solution that treats prompts as first-class citizens in your development workflow. Unlike scattered prompt files or hard-coded templates, SwissArmyHammer offers a structured, versioned, and collaborative approach to prompt engineering.
The Problem
As AI becomes central to development workflows, developers and teams face growing challenges with prompt management:
The Prompt Chaos Problem:
- Scattered Everywhere: Prompts live in text files, notes apps, chat histories, and code comments - impossible to find when you need them
- No Version Control: Critical prompt changes disappear without trace, making it impossible to understand what worked and why
- Copy-Paste Proliferation: The same prompt gets duplicated and slightly tweaked across projects, creating maintenance nightmares
- Team Isolation: Valuable prompts remain locked in individual workflows, preventing knowledge sharing and collaboration
- Format Anarchy: Every project reinvents prompt organization, making it hard to move between teams or onboard new members
The Cost of Disorganization: Without proper prompt management, teams waste hours recreating existing prompts, struggle to maintain consistency across projects, and lose valuable prompt engineering knowledge when team members leave.
The Solution
SwissArmyHammer transforms prompt chaos into an organized, collaborative workflow with a comprehensive management system:
Unified Prompt Organization: Replace scattered prompt files with a structured, hierarchical system. Store prompts as markdown files with YAML metadata, organizing them from global built-ins to project-specific customizations.
Git-Native Workflow: Because prompts are plain markdown files, they integrate seamlessly with your existing Git workflow. Track changes, collaborate through pull requests, and maintain a complete history of your prompt evolution.
Powerful Template Engine: Stop copy-pasting similar prompts. Use Liquid templating with custom filters to create dynamic, reusable prompts that adapt to different contexts and requirements.
Claude Code Integration: Access your entire prompt library directly in Claude Code through native MCP protocol support. No more switching between tools or hunting for that perfect prompt.
Developer-First Tooling: Rich CLI with instant search, validation, testing, and diagnostics ensures your prompts are always discoverable, reliable, and maintainable.
The Result: Teams report 5x faster prompt iteration, zero lost prompts, and dramatically improved prompt quality through systematic organization and collaboration.
How SwissArmyHammer Works
SwissArmyHammer transforms your prompt workflow through:
- File-based prompt management - Store prompts as markdown files with YAML front matter
- Live reloading - Changes to prompt files are automatically detected and reloaded
- Template variables - Use `{{variable}}` syntax for dynamic prompt customization
- MCP integration - Works seamlessly with Claude Code and other MCP clients
- Organized hierarchy - Support for built-in, user, and local prompt directories
- Developer-friendly - Rich CLI with diagnostics and shell completions
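The hierarchy of built-in, user, and local prompt directories implies an override order. A minimal sketch of how such precedence could resolve - note that the exact order shown here (local over user over built-in) is an assumption based on "global built-ins to project-specific customizations" above, not something this page states explicitly:

```python
# Hypothetical prompt sources; later sources override earlier ones by name.
BUILTIN = {"code-review": "builtin version"}
USER = {"helper": "user version", "code-review": "user override"}
LOCAL = {"code-review": "local override"}

def resolve_prompts(*sources):
    """Merge prompt sources; later dicts win on name collisions."""
    merged = {}
    for source in sources:
        merged.update(source)
    return merged

prompts = resolve_prompts(BUILTIN, USER, LOCAL)
```

With this ordering, a project-local `code-review.md` shadows both the user and built-in versions, while unshadowed prompts like `helper` remain visible.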
Quick Start
Installation
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
Basic Usage
- Create a prompt directory:
mkdir ~/.swissarmyhammer/prompts
- Configure Claude Code: Add SwissArmyHammer to your MCP configuration
- Create your first prompt: Use the simple markdown + YAML format
- Start using prompts: Available immediately in Claude Code
Key Benefits
- Zero Configuration: Works out of the box with sensible defaults
- Cross-Platform: Runs on macOS, Linux, and Windows
- Real-Time Updates: File changes are automatically detected and reloaded
- Type Safe: Rust implementation provides reliability and performance
- Community Driven: Open source with active development and contributions
Simple Prompt Format
Create prompts using familiar markdown with YAML front matter:
---
title: Code Review Helper
description: Helps review code for best practices and potential issues
arguments:
  - name: code
    description: The code to review
    required: true
  - name: language
    description: Programming language
    required: false
    default: "auto-detect"
---
# Code Review
Please review the following {{language}} code:
{{code}}
Focus on:
- Code quality and readability
- Potential bugs or security issues
- Performance considerations
- Best practices adherence
Template Variables
Use template variables to make prompts dynamic and reusable:
- `{{variable}}` - Required variables
- `{{variable:default}}` - Optional variables with defaults
- Support for strings, numbers, booleans, and JSON objects
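As an illustration only (SwissArmyHammer's actual engine is Liquid, not this regex), the substitution behavior described above can be sketched in a few lines:

```python
import re

def render(template, values, defaults=None):
    """Replace {{name}} placeholders; fall back to defaults, else leave as-is."""
    defaults = defaults or {}
    def sub(match):
        name = match.group(1)
        return str(values.get(name, defaults.get(name, match.group(0))))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

out = render("Review this {{language}} code: {{code}}",
             {"code": "print('hi')"}, defaults={"language": "auto-detect"})
```

Here `{{code}}` is filled from the provided value, `{{language}}` falls back to its default, and an unknown placeholder would pass through untouched.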
Built-in Diagnostics
The `doctor` command helps troubleshoot setup issues:
swissarmyhammer doctor
Who Should Use SwissArmyHammer?
Development Teams
- Standardize prompts across projects and team members
- Version control prompt changes with Git integration
- Code review prompt modifications like any other code
- Share libraries of tested, proven prompts
Individual Developers
- Organize personal prompts in a structured hierarchy
- Reuse prompts across different projects and contexts
- Build expertise through curated prompt collections
- Integrate seamlessly with existing development workflows
Content Creators & Researchers
- Manage specialized prompts for specific domains
- Create template libraries for common content types
- Collaborate effectively on prompt development
- Maintain quality through validation and testing
Students & Educators
- Learn prompt engineering through structured examples
- Build knowledge bases of educational prompts
- Share resources with classmates and colleagues
- Track progress through versioned prompt evolution
Architecture
SwissArmyHammer follows a simple but powerful architecture:
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Claude Code   │────▶│ SwissArmyHammer  │────▶│  Prompt Files   │
│  (MCP Client)   │     │   (MCP Server)   │     │   (.md files)   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                │
                                ▼
                        ┌──────────────────┐
                        │   File Watcher   │
                        │  (Auto-reload)   │
                        └──────────────────┘
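The auto-reload path in the diagram can be approximated with simple mtime polling. This is illustrative only - the real server presumably uses OS-level file-watching APIs rather than polling:

```python
import os
import tempfile

class PromptStore:
    """Reload a prompt file whenever its modification time changes."""
    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.content = None

    def refresh(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self.mtime:          # file changed on disk
            self.mtime = mtime
            with open(self.path) as f:
                self.content = f.read()
            return True                  # reloaded
        return False                     # unchanged

with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
    f.write("# v1")
    path = f.name

store = PromptStore(path)
first = store.refresh()
```

The first `refresh()` loads the file; subsequent calls are no-ops until the file is modified again.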
Next Steps
- Install SwissArmyHammer - Get up and running quickly
- Create Your First Prompt - Learn the basics
- Integrate with Claude Code - Connect to your AI assistant
- Explore Advanced Features - Unlock the full potential
Why Choose SwissArmyHammer?
Proven Architecture: Built on well-tested technologies like Rust, Liquid templating, and the MCP protocol.
Active Development: Regular updates, bug fixes, and new features based on community feedback.
Comprehensive Documentation: Detailed guides, examples, and API reference to get you productive quickly.
Open Source: MIT licensed with a welcoming community for contributions and feedback.
Join the Community
- GitHub Repository - Source code, issues, and discussions
- Contributing Guide - How to contribute to the project
- Issue Tracker - Report bugs and request features
- Discussions - Community Q&A and sharing
License
SwissArmyHammer is open source software licensed under the MIT License. See the License page for details.
Installation
SwissArmyHammer can be installed in several ways depending on your needs and platform.
Pre-built Binaries
Currently, SwissArmyHammer does not provide pre-built binaries for download. This is a planned feature for future releases. For now, please use the Cargo installation method below.
Quick Install (Recommended)
Install or update to the latest version from the git repository:
# Install or update from the git repository
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli --force
The `--force` flag overwrites an existing installation when updating.
Clone and Build
Prerequisites:
- Rust 1.70 or later - Install from rustup.rs
- Git - For cloning the repository
If you want to build from source or contribute to development:
# Clone the repository
git clone https://github.com/wballard/swissarmyhammer.git
cd swissarmyhammer
# Build the CLI (debug mode for development)
cargo build
# Build optimized release version
cargo build --release
# Install from the local source
cargo install --path swissarmyhammer-cli
# Or run directly without installing
cargo run --bin swissarmyhammer -- --help
Verification
After installation, verify that SwissArmyHammer is working correctly:
# Check version
swissarmyhammer --version
# Run diagnostics
swissarmyhammer doctor
# Show help
swissarmyhammer --help
The `doctor` command will check your installation and provide helpful diagnostics if anything needs attention.
Shell Completions
Generate and install shell completions for better CLI experience:
Bash
# Linux/macOS (user-specific)
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# macOS with Homebrew bash-completion
swissarmyhammer completion bash > $(brew --prefix)/etc/bash_completion.d/swissarmyhammer
# Alternative location (ensure directory exists)
mkdir -p ~/.bash_completion.d
swissarmyhammer completion bash > ~/.bash_completion.d/swissarmyhammer
Zsh
# User-specific (ensure ~/.zfunc is in your fpath)
mkdir -p ~/.zfunc
swissarmyhammer completion zsh > ~/.zfunc/_swissarmyhammer
# Add to ~/.zshrc if not already present:
# fpath=(~/.zfunc $fpath)
# autoload -U compinit && compinit
# System-wide (with appropriate permissions)
swissarmyhammer completion zsh > /usr/local/share/zsh/site-functions/_swissarmyhammer
Fish
# User-specific (create the directory first)
mkdir -p ~/.config/fish/completions
swissarmyhammer completion fish > ~/.config/fish/completions/swissarmyhammer.fish
PowerShell
# Create the profile directory if it doesn't exist
New-Item -ItemType Directory -Path (Split-Path $PROFILE) -Force
# Append completions to your PowerShell profile
swissarmyhammer completion powershell >> $PROFILE
Remember to reload your shell or start a new terminal session for completions to take effect.
Next Steps
Once installed, continue to the Quick Start guide to set up SwissArmyHammer with Claude Code and create your first prompt.
Troubleshooting
Common Issues
Command not found: Make sure `~/.cargo/bin` is in your PATH.
Build failures: Ensure you have Rust 1.70+ installed and try updating Rust:
rustup update
Permission errors: Don't use `sudo` with cargo install - it installs to your user directory.
For more help, check the Troubleshooting guide or run:
swissarmyhammer doctor
Quick Start
Get up and running with SwissArmyHammer in just a few minutes.
Prerequisites
Before you begin, make sure you have:
- SwissArmyHammer installed (see Installation)
- Claude Code (or another MCP-compatible client)
Step 1: Verify Installation
First, check that SwissArmyHammer is properly installed:
swissarmyhammer --version
Run the doctor command to check your setup:
swissarmyhammer doctor
This will check your system and provide recommendations if anything needs attention.
Step 2: Configure Claude Code
Add SwissArmyHammer to your Claude Code MCP configuration:
Find Your Config File
The Claude Code configuration file is located at:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
Add the Configuration
Create or edit the configuration file with the following content:
{
"mcpServers": {
"swissarmyhammer": {
"command": "swissarmyhammer",
"args": ["serve"]
}
}
}
If you already have other MCP servers configured, just add the `swissarmyhammer` entry to your existing `mcpServers` object.
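Merging the entry into an existing config can also be scripted. A sketch using only the standard library - the pre-existing server shown here is purely illustrative:

```python
import json

# Hypothetical existing config with another MCP server already registered.
existing = {"mcpServers": {"other-server": {"command": "other", "args": []}}}

# Add the swissarmyhammer entry without disturbing existing servers.
existing.setdefault("mcpServers", {})["swissarmyhammer"] = {
    "command": "swissarmyhammer",
    "args": ["serve"],
}
config_text = json.dumps(existing, indent=2)
```

In practice you would read and write `claude_desktop_config.json` at the path for your platform instead of an in-memory dict.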
Step 3: Create Your Prompt Directory
Create a directory for your prompts:
mkdir -p ~/.swissarmyhammer/prompts
This is where you'll store your custom prompts. SwissArmyHammer will automatically watch this directory for changes.
Step 4: Create Your First Prompt
Create a simple prompt file:
cat > ~/.swissarmyhammer/prompts/helper.md << 'EOF'
---
title: General Helper
description: A helpful assistant for various tasks
arguments:
- name: task
description: What you need help with
required: true
- name: style
description: How to approach the task
required: false
default: "friendly and concise"
---
# Task Helper
Please help me with: {{task}}
Approach this in a {{style}} manner. Provide clear, actionable advice.
EOF
Step 5: Test the Setup
1. Restart Claude Code to pick up the new MCP server configuration.
2. Open Claude Code and start a new conversation.
3. Try using your prompt: In Claude Code, you should now see SwissArmyHammer prompts available in the prompt picker.
4. Use the built-in prompts: SwissArmyHammer comes with several built-in prompts you can try right away:
   - `help` - Get help with using SwissArmyHammer
   - `debug-error` - Debug error messages
   - `code-review` - Review code for issues
   - `docs-readme` - Generate README files
Step 6: Verify Everything Works
Test that SwissArmyHammer is working correctly:
# Verify the serve command is available (prints its options)
swissarmyhammer serve --help
# Run diagnostics again to see the updated status
swissarmyhammer doctor
The doctor command should now show that Claude Code configuration is found and prompts are loading correctly.
What's Next?
Now that you have SwissArmyHammer set up, you can:
- Explore built-in prompts - See what's available out of the box
- Create more prompts - Build your own prompt library
- Learn advanced features - Template variables, prompt organization, etc.
Recommended Next Steps
- Create Your First Custom Prompt
- Learn about Template Variables
- Explore Built-in Prompts
- Advanced Prompt Techniques
Troubleshooting
If something isn't working:
- Run the doctor: `swissarmyhammer doctor`
- Check Claude Code logs: Look for any error messages
- Verify file permissions: Make sure SwissArmyHammer can read your prompt files
- Restart Claude Code: Sometimes a restart is needed after configuration changes
For more detailed troubleshooting, see the Troubleshooting guide.
Getting Help
If you need help:
- Check the Troubleshooting guide
- Look at Examples for inspiration
- Ask questions in GitHub Discussions
- Report bugs in GitHub Issues
Your First Prompt
Let's create your first custom prompt with SwissArmyHammer! This guide will walk you through creating a useful code review prompt.
Understanding Prompt Structure
SwissArmyHammer prompts are markdown files with YAML front matter. Here's the basic structure:
---
title: Your Prompt Title
description: What this prompt does
arguments:
- name: argument_name
description: What this argument is for
required: true/false
default: "optional default value"
---
# Your Prompt Content
Use {{argument_name}} to insert variables into your prompt.
Creating a Code Review Prompt
Let's create a practical code review prompt step by step.
Step 1: Create the File
First, create a new prompt file in your prompts directory:
# Create the file
touch ~/.swissarmyhammer/prompts/code-review.md
# Or create a category directory
mkdir -p ~/.swissarmyhammer/prompts/development
touch ~/.swissarmyhammer/prompts/development/code-review.md
Step 2: Add the YAML Front Matter
Open the file in your favorite editor and add the front matter:
---
title: Code Review Assistant
description: Comprehensive code review with focus on best practices, security, and performance
arguments:
  - name: code
    description: The code to review (can be a function, class, or entire file)
    required: true
  - name: language
    description: Programming language (helps with language-specific advice)
    required: false
    default: "auto-detect"
  - name: focus
    description: Areas to focus on (security, performance, readability, etc.)
    required: false
    default: "general best practices"
---
Step 3: Write the Prompt Content
Below the front matter, add the prompt content:
````markdown
# Code Review
I need a thorough code review for the following {{language}} code.

## Code to Review
```{{language}}
{{code}}
```

## Review Focus
Please focus on: {{focus}}

## Review Criteria
Please analyze the code for:

### Security
- Potential security vulnerabilities
- Input validation issues
- Authentication/authorization concerns

### Performance
- Inefficient algorithms or operations
- Memory usage concerns
- Potential bottlenecks

### Readability & Maintainability
- Code clarity and organization
- Naming conventions
- Documentation needs

### Testing & Reliability
- Error handling
- Edge cases
- Testability

### Architecture & Design
- SOLID principles adherence
- Design patterns usage
- Code structure

## Output Format
Please provide:
1. Overall Assessment - Brief summary of code quality
2. Specific Issues - List each issue with:
   - Severity (High/Medium/Low)
   - Location (line numbers if applicable)
   - Explanation of the problem
   - Suggested fix
3. Positive Aspects - What's done well
4. Recommendations - Broader suggestions for improvement

Focus especially on {{focus}} in your analysis.
````
Step 4: Complete File Example
Here's the complete prompt file:
````markdown
---
title: Code Review Assistant
description: Comprehensive code review with focus on best practices, security, and performance
arguments:
  - name: code
    description: The code to review (can be a function, class, or entire file)
    required: true
  - name: language
    description: Programming language (helps with language-specific advice)
    required: false
    default: "auto-detect"
  - name: focus
    description: Areas to focus on (security, performance, readability, etc.)
    required: false
    default: "general best practices"
---
# Code Review
I need a thorough code review for the following {{language}} code.

## Code to Review
```{{language}}
{{code}}
```

## Review Focus
Please focus on: {{focus}}

## Review Criteria
Please analyze the code for:

### Security
- Potential security vulnerabilities
- Input validation issues
- Authentication/authorization concerns

### Performance
- Inefficient algorithms or operations
- Memory usage concerns
- Potential bottlenecks

### Readability & Maintainability
- Code clarity and organization
- Naming conventions
- Documentation needs

### Testing & Reliability
- Error handling
- Edge cases
- Testability

### Architecture & Design
- SOLID principles adherence
- Design patterns usage
- Code structure

## Output Format
Please provide:
1. Overall Assessment - Brief summary of code quality
2. Specific Issues - List each issue with:
   - Severity (High/Medium/Low)
   - Location (line numbers if applicable)
   - Explanation of the problem
   - Suggested fix
3. Positive Aspects - What's done well
4. Recommendations - Broader suggestions for improvement

Focus especially on {{focus}} in your analysis.
````
Step 5: Test Your Prompt
Save the file and test that SwissArmyHammer can load it:
```bash
# Check if your prompt loads correctly
swissarmyhammer doctor
```
The `doctor` command will validate your YAML syntax and confirm the prompt is loaded.
Step 6: Use Your Prompt
- Open Claude Code
- Start a new conversation
- Look for your prompt in the prompt picker - it should appear as "Code Review Assistant"
- Fill in the parameters:
  - `code`: Paste some code you want reviewed
  - `language`: Specify the programming language (optional)
  - `focus`: Specify what to focus on (optional)
Understanding What Happened
When you created this prompt, SwissArmyHammer:
- Detected the new file using its file watcher
- Parsed the YAML front matter to understand the prompt structure
- Made it available to Claude Code via the MCP protocol
- Prepared for template substitution when the prompt is used
Best Practices for Your First Prompt
Do's
- Use descriptive titles and descriptions
- Document your arguments clearly
- Provide sensible defaults for optional arguments
- Structure your prompt content with clear sections
- Use template variables to make prompts flexible
Don'ts
- Don't use required arguments unless necessary
- Don't make prompts too rigid - allow for flexibility
- Don't forget to test your YAML syntax
- Don't use overly complex template logic in your first prompts
Next Steps
Now that you've created your first prompt, you can:
- Create more prompts for different use cases
- Organize prompts into directories by category
- Learn advanced template features like conditionals and loops
- Share prompts with your team or the community
Recommended Reading
- Creating Prompts - Comprehensive guide to prompt creation
- Template Variables - Advanced template features
- Prompt Organization - How to organize your prompt library
- Built-in Prompts - Examples from the built-in library
Troubleshooting
If your prompt isn't working:
- Check YAML syntax - Make sure your front matter is valid YAML
- Run doctor - `swissarmyhammer doctor` will catch common issues
- Check file permissions - Make sure SwissArmyHammer can read the file
- Restart Claude Code - Sometimes needed after creating new prompts
Common issues:
- YAML indentation errors - Use spaces, not tabs
- Missing required fields - Title and description are required
- Invalid argument structure - Check the argument format
- File encoding - Use UTF-8 encoding for your markdown files
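A quick self-check for two of the common issues listed above (tabs in the front matter, missing title/description) can be sketched without any YAML library. This is an illustrative lint, not the actual validation `swissarmyhammer doctor` performs:

```python
def lint_front_matter(text):
    """Return a list of problems found in a prompt file's front matter."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing front matter delimiter"]
    try:
        end = lines[1:].index("---") + 1   # locate the closing ---
    except ValueError:
        return ["unterminated front matter"]
    block = lines[1:end]
    if any("\t" in line for line in block):
        problems.append("tabs in front matter (use spaces)")
    for field in ("title:", "description:"):
        if not any(line.lstrip().startswith(field) for line in block):
            problems.append(f"missing required field {field[:-1]}")
    return problems

issues = lint_front_matter("---\ntitle: Demo\n---\nbody")
```

Running it on a file with a title but no description reports exactly that one gap.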
Creating Prompts
SwissArmyHammer prompts are markdown files with YAML front matter that define reusable AI prompts. This guide walks you through creating effective prompts.
Basic Structure
Every prompt file has two parts:
- YAML Front Matter - Metadata about the prompt
- Markdown Content - The actual prompt template
---
name: my-prompt
title: My Awesome Prompt
description: Does something useful
arguments:
- name: input
description: What to process
required: true
---
# My Prompt
Please help me with {{input}}.
Provide a detailed response.
YAML Front Matter
The YAML front matter defines the prompt's metadata:
Required Fields
- `name` - Unique identifier for the prompt
- `title` - Human-readable name
- `description` - What the prompt does
Optional Fields
- `category` - Group related prompts (e.g., "development", "writing")
- `tags` - List of keywords for discovery
- `arguments` - Input parameters (see Arguments)
- `author` - Prompt creator
- `version` - Version number
- `license` - License information
Example
---
name: code-review
title: Code Review Assistant
description: Reviews code for best practices, bugs, and improvements
category: development
tags: ["code", "review", "quality", "best-practices"]
author: SwissArmyHammer Team
version: 1.0.0
arguments:
  - name: code
    description: The code to review
    required: true
  - name: language
    description: Programming language
    required: false
    default: auto-detect
  - name: focus
    description: Specific areas to focus on
    required: false
    default: all aspects
---
Arguments
Arguments define the inputs your prompt accepts. Each argument has:
Argument Properties
- `name` - Parameter name (used in template as `{{name}}`)
- `description` - What this parameter is for
- `required` - Whether the argument is mandatory
- `default` - Default value if not provided
- `type_hint` - Expected data type (documentation only)
Example Arguments
arguments:
- name: text
description: Text to analyze
required: true
type_hint: string
- name: format
description: Output format
required: false
default: markdown
type_hint: string
- name: max_length
description: Maximum response length
required: false
default: 500
type_hint: integer
- name: include_examples
description: Include code examples
required: false
default: true
type_hint: boolean
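The required/default semantics above can be sketched as a small resolver. This is for intuition only, not the actual implementation:

```python
# Argument spec mirroring the example above (type_hint omitted; it is
# documentation-only and does not affect resolution).
SPEC = [
    {"name": "text", "required": True},
    {"name": "format", "required": False, "default": "markdown"},
    {"name": "max_length", "required": False, "default": 500},
]

def resolve_args(spec, provided):
    """Apply defaults and reject missing required arguments."""
    resolved = {}
    for arg in spec:
        if arg["name"] in provided:
            resolved[arg["name"]] = provided[arg["name"]]
        elif arg.get("required"):
            raise ValueError(f"missing required argument: {arg['name']}")
        else:
            resolved[arg["name"]] = arg.get("default")
    return resolved

args = resolve_args(SPEC, {"text": "hello"})
```

Supplying only the required `text` argument yields `format` and `max_length` filled in from their defaults; omitting `text` raises an error.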
Template Content
The markdown content is your prompt template using Liquid templating.
Basic Variables
Use `{{variable}}` to insert argument values:
Please review this {{language}} code:
```{{code}}```
Focus on {{focus}}.
Conditional Logic
Use `{% if %}` blocks for conditional content:
{% if language == "python" %}
Pay special attention to:
- PEP 8 style guidelines
- Type hints
- Error handling
{% elsif language == "javascript" %}
Pay special attention to:
- ESLint rules
- Async/await usage
- Error handling
{% endif %}
Loops
Use `{% for %}` to iterate over lists:
{% if tags %}
Tags: {% for tag in tags %}#{{tag}}{% unless forloop.last %}, {% endunless %}{% endfor %}
{% endif %}
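The `forloop.last` pattern above just builds a comma-separated tag list. In plain Python the same output is a join, shown here only for intuition about what the loop renders:

```python
tags = ["python", "debugging", "error-handling"]

# Equivalent of:
# {% for tag in tags %}#{{tag}}{% unless forloop.last %}, {% endunless %}{% endfor %}
rendered = ", ".join("#" + tag for tag in tags)
```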
Filters
Apply filters to transform data:
Language: {{language | capitalize}}
Code length: {{code | length}} characters
Summary: {{description | truncate: 100}}
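These filters behave roughly like the following Python equivalents - an approximation for intuition, not Liquid's exact semantics (Liquid's `truncate`, for instance, counts the appended ellipsis toward the length, which the sketch mimics):

```python
def capitalize(s):
    """Roughly Liquid's capitalize: uppercase the first character."""
    return s[:1].upper() + s[1:] if s else s

def truncate(s, length):
    """Roughly Liquid's truncate: cut to length, ellipsis included."""
    return s if len(s) <= length else s[: length - 3] + "..."

line = f"Language: {capitalize('python')}, length: {len('print()')} chars"
```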
Organization
Directory Structure
Organize prompts in logical directories:
prompts/
├── development/
│   ├── code-review.md
│   ├── debug-helper.md
│   └── api-docs.md
├── writing/
│   ├── blog-post.md
│   ├── email-draft.md
│   └── summary.md
└── analysis/
    ├── data-insights.md
    └── competitor-analysis.md
Naming Conventions
- Use kebab-case for filenames: `code-review.md`
- Make names descriptive: `debug-python-errors.md` not `debug.md`
- Include the category in the path, not the filename
Categories and Tags
Use categories for broad groupings:
- `development` - Code-related prompts
- `writing` - Content creation
- `analysis` - Data and research
- `productivity` - Task management
Use tags for specific features:
- `["python", "debugging", "error-handling"]`
- `["marketing", "email", "b2b"]`
- `["data", "visualization", "charts"]`
Best Practices
1. Write Clear Descriptions
# Good
description: Reviews Python code for PEP 8 compliance, type hints, and common bugs
# Bad
description: Code review
2. Provide Helpful Defaults
arguments:
- name: style_guide
description: Coding style guide to follow
required: false
default: PEP 8 # Sensible default
3. Use Descriptive Variable Names
# Good
Please analyze this {{source_code}} for {{security_vulnerabilities}}.
# Bad
Please analyze this {{input}} for {{stuff}}.
4. Include Examples in Descriptions
arguments:
- name: format
description: Output format (markdown, json, html)
required: false
default: markdown
5. Structure Your Prompts
Use clear sections and formatting:
# Code Review
## Overview
Please review the following {{language}} code for:
## Focus Areas
1. **Best Practices** - Follow {{style_guide}} guidelines
2. **Security** - Identify potential vulnerabilities
3. **Performance** - Suggest optimizations
4. **Maintainability** - Assess code clarity
## Code to Review
```{{code}}```
## Instructions
{% if include_suggestions %}
Please provide specific improvement suggestions.
{% endif %}
6. Test Your Prompts
Use the CLI to test prompts:
# Test with required arguments
swissarmyhammer test code-review --code "def hello(): print('hi')" --language python
# Test with defaults
swissarmyhammer test code-review --code "function hello() { console.log('hi'); }"
Common Patterns
Code Analysis
---
name: analyze-code
title: Code Analyzer
description: Analyzes code for issues and improvements
arguments:
- name: code
description: Code to analyze
required: true
- name: language
description: Programming language
required: false
default: auto-detect
---
# Code Analysis
Analyze this {{language}} code:
```{{code}}```
Provide feedback on:
- Code quality and best practices
- Potential bugs or issues
- Performance optimizations
- Readability improvements
Document Generation
---
name: api-docs
title: API Documentation Generator
description: Generates API documentation from code
arguments:
- name: code
description: API code to document
required: true
- name: format
description: Documentation format
required: false
default: markdown
---
# API Documentation
Generate {{format}} documentation for this API:
```{{code}}```
Include:
- Endpoint descriptions
- Parameter details
- Response examples
- Error codes
Text Processing
---
name: summarize
title: Text Summarizer
description: Creates concise summaries of text
arguments:
- name: text
description: Text to summarize
required: true
- name: length
description: Target summary length
required: false
default: 3 sentences
---
# Text Summary
Create a {{length}} summary of this text:
{{text}}
Focus on the key points and main ideas.
Next Steps
- Learn about YAML Front Matter in detail
- Explore Template Variables and Liquid syntax
- Check out Custom Filters for advanced transformations
- See Examples for real-world prompt templates
- Read about Prompt Organization strategies
YAML Front Matter
YAML front matter is the metadata section at the beginning of each prompt file. It defines the prompt's properties, arguments, and other configuration details.
Structure
Front matter appears between triple dashes (`---`) at the start of your markdown file:
---
name: my-prompt
title: My Prompt
description: What this prompt does
---
# Prompt content starts here
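Structurally, a loader only has to split the file on those delimiters. A minimal sketch (illustrative; the real parser is in Rust and handles more edge cases):

```python
def split_front_matter(text):
    """Split a prompt file into (front_matter, body) on the --- delimiters."""
    if not text.startswith("---\n"):
        return "", text
    front, _, body = text[4:].partition("\n---\n")
    return front, body.lstrip("\n")

front, body = split_front_matter(
    "---\ntitle: My Prompt\n---\n# Prompt content starts here\n"
)
```

The front-matter string would then be handed to a YAML parser, while the body becomes the prompt template.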
Required Fields
Every prompt must have these fields:
name
- Type: String
- Description: Unique identifier for the prompt
- Format: kebab-case recommended
- Example: `code-review`, `debug-helper`, `api-docs`
name: code-review
title
- Type: String
- Description: Human-readable name displayed in UIs
- Format: Title Case
- Example: `Code Review Assistant`, `Debug Helper`
title: Code Review Assistant
description
- Type: String
- Description: Brief explanation of what the prompt does
- Format: Sentence or short paragraph
- Example: Clear, actionable description
description: Reviews code for best practices, bugs, and potential improvements
Optional Fields
category
- Type: String
- Description: Groups related prompts together
- Common Values: `development`, `writing`, `analysis`, `productivity`
category: development
tags
- Type: Array of strings
- Description: Keywords for discovery and filtering
- Format: Lowercase, descriptive terms
tags: ["python", "debugging", "error-handling"]
arguments
- Type: Array of argument objects
- Description: Input parameters the prompt accepts
- See: Arguments section below
arguments:
- name: code
description: Code to review
required: true
author
- Type: String
- Description: Creator of the prompt
- Format: Name or organization
author: SwissArmyHammer Team
version
- Type: String
- Description: Version number for tracking changes
- Format: Semantic versioning recommended
version: 1.2.0
license
- Type: String
- Description: License for the prompt
- Common Values: `MIT`, `Apache-2.0`, `GPL-3.0`
license: MIT
created
- Type: Date string
- Description: When the prompt was created
- Format: ISO 8601 date
created: 2024-01-15
updated
- Type: Date string
- Description: Last modification date
- Format: ISO 8601 date
updated: 2024-03-20
keywords
- Type: Array of strings
- Description: Alternative to tags for SEO/discovery
- Note: Similar to tags but more formal
keywords: ["code quality", "static analysis", "best practices"]
Arguments
Arguments define the inputs your prompt can accept. Each argument is an object with these properties:
Argument Properties
name (required)
- Type: String
- Description: Parameter name used in template
- Format: snake_case or kebab-case
- Usage: Referenced as `{{name}}` in template
arguments:
- name: source_code
# Used as {{source_code}} in template
description (required)
- Type: String
- Description: What this argument is for
- Format: Clear, helpful explanation
arguments:
- name: language
description: Programming language of the code
required (optional, default: false)
- Type: Boolean
- Description: Whether this argument must be provided
- Default: `false`
arguments:
- name: code
description: Code to analyze
required: true # Must be provided
- name: style_guide
description: Coding style to follow
required: false # Optional
default (optional)
- Type: String
- Description: Default value if not provided
- Note: Only used when `required: false`
arguments:
- name: format
description: Output format
required: false
default: markdown
type_hint (optional)
- Type: String
- Description: Expected data type (documentation only)
- Common Values: `string`, `integer`, `boolean`, `array`, `object`
arguments:
- name: max_length
description: Maximum output length
required: false
default: 500
type_hint: integer
Argument Examples
Simple Text Input
arguments:
- name: text
description: Text to process
required: true
type_hint: string
Optional with Default
arguments:
- name: format
description: Output format (markdown, json, html)
required: false
default: markdown
type_hint: string
Boolean Flag
arguments:
- name: include_examples
description: Include code examples in output
required: false
default: true
type_hint: boolean
Multiple Arguments
arguments:
  - name: code
    description: Source code to review
    required: true
    type_hint: string
  - name: language
    description: Programming language
    required: false
    default: auto-detect
    type_hint: string
  - name: severity
    description: Minimum issue severity to report
    required: false
    default: medium
    type_hint: string
  - name: include_suggestions
    description: Include improvement suggestions
    required: false
    default: true
    type_hint: boolean
Complete Example
Here's a comprehensive example showing all available fields:
---
name: comprehensive-code-review
title: Comprehensive Code Review Assistant
description: Performs detailed code review focusing on best practices, security, and performance
category: development
tags: ["code-review", "security", "performance", "best-practices"]
author: SwissArmyHammer Team
version: 2.1.0
license: MIT
created: 2024-01-15
updated: 2024-03-20
keywords: ["static analysis", "code quality", "security audit"]
arguments:
  - name: code
    description: Source code to review
    required: true
    type_hint: string
  - name: language
    description: Programming language (python, javascript, rust, etc.)
    required: false
    default: auto-detect
    type_hint: string
  - name: focus_areas
    description: Specific areas to focus on (security, performance, style)
    required: false
    default: all
    type_hint: string
  - name: severity_threshold
    description: Minimum severity level to report (low, medium, high)
    required: false
    default: medium
    type_hint: string
  - name: include_examples
    description: Include code examples in suggestions
    required: false
    default: true
    type_hint: boolean
  - name: max_suggestions
    description: Maximum number of suggestions to provide
    required: false
    default: 10
    type_hint: integer
---
# Comprehensive Code Review
I'll perform a detailed review of your {{language}} code, focusing on {{focus_areas}}.
## Code Analysis
```{{code}}```
## Review Criteria
I'll evaluate the code for:
{% if focus_areas contains "security" or focus_areas == "all" %}
- **Security vulnerabilities** and best practices
{% endif %}
{% if focus_areas contains "performance" or focus_areas == "all" %}
- **Performance optimizations** and efficiency
{% endif %}
{% if focus_areas contains "style" or focus_areas == "all" %}
- **Code style** and formatting consistency
{% endif %}
{% if language != "auto-detect" %}
- **{{language | capitalize}}-specific** best practices and idioms
{% endif %}
## Reporting
- Minimum severity: {{severity_threshold}}
- Maximum suggestions: {{max_suggestions}}
{% if include_examples %}
- Including code examples and fixes
{% endif %}
Please provide detailed feedback with specific line references where applicable.
Validation Rules
SwissArmyHammer validates your YAML front matter:
Required Field Validation
- `name`, `title`, and `description` must be present
- `name` must be unique within the prompt library
- `name` must not contain spaces or special characters
Argument Validation
- Each argument must have `name` and `description`
- Argument names must be valid template variables
- Required arguments cannot have default values
- Argument names must be unique within the prompt
Type Validation
- Arrays must contain valid elements
- Dates must be in ISO format
- Booleans must be `true` or `false`
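As an illustration of these rules, here is a minimal validation sketch in Python. It is not SwissArmyHammer's actual validator (which is written in Rust); the function name and error messages are hypothetical:

```python
import re

REQUIRED_FIELDS = ("name", "title", "description")

def validate_front_matter(meta: dict) -> list:
    """Check a parsed front-matter dict against the rules above."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in meta:
            errors.append(f"missing required field: {field}")
    name = meta.get("name", "")
    # Names must not contain spaces or special characters.
    if name and not re.fullmatch(r"[A-Za-z0-9_/-]+", name):
        errors.append("name must not contain spaces or special characters")
    seen = set()
    for arg in meta.get("arguments", []):
        if "name" not in arg or "description" not in arg:
            errors.append("each argument needs a name and a description")
            continue
        if arg["name"] in seen:
            errors.append(f"duplicate argument name: {arg['name']}")
        seen.add(arg["name"])
        # Required arguments cannot have default values.
        if arg.get("required") and "default" in arg:
            errors.append(f"required argument '{arg['name']}' cannot have a default")
    return errors
```

For example, a required argument that also declares a `default` would produce one error, while a minimal valid front matter passes cleanly.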
Best Practices
1. Use Descriptive Names
# Good
name: python-code-review
title: Python Code Review Assistant
# Bad
name: review
title: Review
2. Write Clear Descriptions
# Good
description: Analyzes Python code for PEP 8 compliance, type hints, security issues, and performance optimizations
# Bad
description: Reviews code
3. Organize with Categories and Tags
category: development
tags: ["python", "pep8", "security", "performance", "code-quality"]
4. Provide Sensible Defaults
arguments:
  - name: style_guide
    description: Python style guide to follow
    required: false
    default: PEP 8  # Most common choice
5. Use Type Hints
arguments:
  - name: max_issues
    description: Maximum number of issues to report
    required: false
    default: 20
    type_hint: integer  # Helps users understand expected format
6. Keep Versions Updated
version: 1.2.0 # Update when you make changes
updated: 2024-03-20 # Track modification date
Common Patterns
Code Processing Prompt
---
name: code-processor
title: Code Processor
description: Processes and transforms code
category: development
arguments:
  - name: code
    description: Source code to process
    required: true
  - name: language
    description: Programming language
    required: false
    default: auto-detect
  - name: output_format
    description: Desired output format
    required: false
    default: markdown
---
Text Analysis Prompt
---
name: text-analyzer
title: Text Analyzer
description: Analyzes text for various metrics
category: analysis
arguments:
  - name: text
    description: Text to analyze
    required: true
  - name: analysis_type
    description: Type of analysis to perform
    required: false
    default: comprehensive
  - name: include_stats
    description: Include statistical analysis
    required: false
    default: true
    type_hint: boolean
---
Document Generator
---
name: doc-generator
title: Document Generator
description: Generates documentation from code or specifications
category: documentation
arguments:
  - name: source
    description: Source material to document
    required: true
  - name: format
    description: Output format
    required: false
    default: markdown
  - name: include_examples
    description: Include usage examples
    required: false
    default: true
    type_hint: boolean
---
Troubleshooting
Common Errors
Invalid YAML Syntax
# Error: Missing quotes around string with special characters
description: This won't work: because of the colon
# Fixed: Quote strings with special characters
description: "This works: because it's quoted"
Missing Required Fields
# Error: Missing required fields
title: My Prompt
# Fixed: Include all required fields
name: my-prompt
title: My Prompt
description: What this prompt does
Invalid Argument Structure
# Error: Missing required argument properties
arguments:
  - name: code

# Fixed: Include required properties
arguments:
  - name: code
    description: Code to process
    required: true
Validation Tips
- Use a YAML validator - Many online tools can check syntax
- Test with the CLI - Use `swissarmyhammer test` to validate prompts
- Check the logs - SwissArmyHammer provides detailed error messages
- Start simple - Begin with minimal front matter and add complexity
Next Steps
- Learn about Template Variables to use your arguments
- Explore Custom Filters for advanced text processing
- See Creating Prompts for the complete workflow
- Check Examples for real-world YAML configurations
Template Variables
SwissArmyHammer uses the Liquid template engine for processing prompts. This provides powerful templating features including variables, conditionals, loops, and filters.
Basic Variable Substitution
Variables are inserted using double curly braces:
Hello {{ name }}!
Your email is {{ email }}.
For backward compatibility, variables without spaces also work:
Hello {{name}}!
Conditionals
If Statements
Use `if` statements to conditionally include content:
{% if user_type == "admin" %}
Welcome, administrator!
{% elsif user_type == "moderator" %}
Welcome, moderator!
{% else %}
Welcome, user!
{% endif %}
Unless Statements
`unless` is the opposite of `if`:
{% unless error_count == 0 %}
Warning: {{ error_count }} errors found.
{% endunless %}
Comparison Operators
- `==` - equals
- `!=` - not equals
- `>` - greater than
- `<` - less than
- `>=` - greater or equal
- `<=` - less or equal
- `contains` - string/array contains
- `and` - logical AND
- `or` - logical OR
Example:
{% if age >= 18 and country == "US" %}
You are eligible to vote.
{% endif %}
{% if tags contains "urgent" %}
🚨 This is urgent!
{% endif %}
Case Statements
For multiple conditions, use `case`:
{% case status %}
{% when "pending" %}
  ⏳ Waiting for approval
{% when "approved" %}
  ✅ Approved and ready
{% when "rejected" %}
  ❌ Rejected
{% else %}
  ❓ Unknown status
{% endcase %}
Loops
Basic For Loops
Iterate over arrays:
{% for item in items %}
- {{ item }}
{% endfor %}
Range Loops
Loop over a range of numbers:
{% for i in (1..5) %}
Step {{ i }} of 5
{% endfor %}
Loop Variables
Inside loops, you have access to special variables:
{% for item in items %}
{% if forloop.first %}First item: {% endif %}
{{ forloop.index }}. {{ item }}
{% if forloop.last %}(last item){% endif %}
{% endfor %}
Available loop variables:
- `forloop.index` - current iteration (1-based)
- `forloop.index0` - current iteration (0-based)
- `forloop.first` - true on first iteration
- `forloop.last` - true on last iteration
- `forloop.length` - total number of items
Loop Control
Use `break` and `continue` for flow control:
{% for item in items %}
{% if item == "skip" %}
{% continue %}
{% endif %}
{% if item == "stop" %}
{% break %}
{% endif %}
Processing: {{ item }}
{% endfor %}
Cycle
Alternate between values:
{% for row in data %}
<tr class="{% cycle 'odd', 'even' %}">
<td>{{ row }}</td>
</tr>
{% endfor %}
Filters
Filters modify variables using the pipe (`|`) character:
String Filters
{{ name | upcase }} # ALICE
{{ name | downcase }} # alice
{{ name | capitalize }} # Alice
{{ text | strip }} # removes whitespace
{{ text | truncate: 20 }} # truncates to 20 chars
{{ text | truncate: 20, "..." }} # custom ellipsis
{{ text | append: "!" }} # adds to end
{{ text | prepend: "Hello " }} # adds to beginning
{{ text | remove: "bad" }} # removes all occurrences
{{ text | replace: "old", "new" }} # replaces all
{{ text | split: "," }} # splits into array
Array Filters
{{ array | first }} # first element
{{ array | last }} # last element
{{ array | join: ", " }} # joins with delimiter
{{ array | sort }} # sorts array
{{ array | reverse }} # reverses array
{{ array | size }} # number of elements
{{ array | uniq }} # removes duplicates
Math Filters
{{ number | plus: 5 }} # addition
{{ number | minus: 3 }} # subtraction
{{ number | times: 2 }} # multiplication
{{ number | divided_by: 4 }} # division
{{ number | modulo: 3 }} # remainder
{{ number | ceil }} # round up
{{ number | floor }} # round down
{{ number | round }} # round to nearest
{{ number | round: 2 }} # round to 2 decimals
{{ number | abs }} # absolute value
Default Filter
Provide fallback values:
Hello {{ name | default: "Guest" }}!
Score: {{ score | default: 0 }}
Date Filters
{{ date | date: "%Y-%m-%d" }} # 2024-01-15
{{ date | date: "%B %d, %Y" }} # January 15, 2024
{{ "now" | date: "%Y" }} # current year
Advanced Features
Comments
Comments are not rendered in output:
{% comment %}
This is a comment that won't appear in the output.
Useful for documentation or temporarily disabling code.
{% endcomment %}
Raw Blocks
Prevent Liquid processing:
{% raw %}
This {{ variable }} won't be processed.
Useful for showing Liquid syntax examples.
{% endraw %}
Assign Variables
Create new variables:
{% assign full_name = first_name | append: " " | append: last_name %}
Welcome, {{ full_name }}!
{% assign item_count = items | size %}
You have {{ item_count }} items.
Capture Blocks
Capture content into a variable:
{% capture greeting %}
{% if time_of_day == "morning" %}
Good morning
{% elsif time_of_day == "evening" %}
Good evening
{% else %}
Hello
{% endif %}
{% endcapture %}
{{ greeting }}, {{ name }}!
Environment Variables
Access environment variables through the `env` object:
Current user: {{ env.USER }}
Home directory: {{ env.HOME }}
Custom setting: {{ env.MY_APP_CONFIG | default: "not set" }}
Object Access
Access nested objects and arrays:
{{ user.name }}
{{ user.address.city }}
{{ items[0] }}
{{ items[index] }}
{{ data["dynamic_key"] }}
Truthy and Falsy Values
In Liquid conditions:
- Falsy: `false`, `nil`
- Truthy: everything else (including `0`, `""`, `[]`)
{% if value %}
This shows unless value is false or nil
{% endif %}
Error Handling
When a variable is undefined:
- In backward-compatible mode: `{{ undefined }}` renders as `{{ undefined }}`
- With validation: an error is raised for missing required arguments

Use the `default` filter to handle missing values gracefully:
{{ optional_var | default: "fallback value" }}
Partials
SwissArmyHammer supports partials (template fragments) that can be included in other templates. This allows you to create reusable components and organize your templates better.
Creating Partials
To create a partial, add the `{% partial %}` tag at the beginning of your template file:
{% partial %}
## Code Review Guidelines
Please review the following code for:
- Syntax errors
- Logic issues
- Best practices
- Performance concerns
Using Partials
Use the `{% render %}` tag to include partials in your templates:
# Main Template
{% render "code-review-guidelines" %}
## Code to Review
```{{ language }}
{{ code }}
```

Please focus on {{ focus_area }}.
### Partial Resolution
SwissArmyHammer automatically resolves partials based on your prompt library:
1. **Exact name match**: `{% render "my-partial" %}` looks for `my-partial`
2. **With extensions**: Also tries `my-partial.md`, `my-partial.liquid`, `my-partial.md.liquid`, `my-partial.liquid.markdown`
3. **Without extensions**: If you have `my-partial.md.liquid`, you can reference it as `{% render "my-partial" %}`
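The resolution order above can be sketched as a simple loop over candidate extensions. This is an illustrative Python sketch of the lookup, not SwissArmyHammer's actual (Rust) implementation:

```python
from pathlib import Path
from typing import Optional

# Candidate suffixes tried in order when resolving {% render "name" %}.
CANDIDATE_EXTENSIONS = ("", ".md", ".liquid", ".md.liquid", ".liquid.markdown")

def resolve_partial(library_root: str, name: str) -> Optional[Path]:
    """Return the first existing file matching the partial name, or None."""
    root = Path(library_root)
    for ext in CANDIDATE_EXTENSIONS:
        candidate = root / (name + ext)
        if candidate.is_file():
            return candidate
    return None
```

With this sketch, `resolve_partial(prompts_dir, "my-partial")` would find `my-partial.md.liquid` even though the reference omits the extension.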
### Supported File Extensions
Partials can use any of these extensions:
- `.md` - Markdown files
- `.liquid` - Liquid template files
- `.md.liquid` - Markdown with Liquid processing
- `.liquid.markdown` - Liquid with Markdown processing
### Organizing Partials
You can organize partials in subdirectories:
prompts/
├── main-template.md.liquid
├── partials/
│   ├── header.liquid
│   ├── footer.liquid
│   └── code-review/
│       ├── guidelines.md.liquid
│       └── checklist.liquid
└── shared/
    └── common-footer.md
Reference them with their relative path:
```liquid
{% render "partials/header" %}
{% render "partials/code-review/guidelines" %}
{% render "shared/common-footer" %}
```
Partials with Context
Partials have access to the same variables as the parent template:
Parent template:
{% assign project_name = "SwissArmyHammer" %}
{% assign language = "Rust" %}
{% render "project-info" %}
Partial (project-info.liquid):
{% partial %}
## Project: {{ project_name }}
This {{ language }} project follows strict coding standards.
Examples
Basic Partial Usage
footer.liquid:
{% partial %}
---
Generated by SwissArmyHammer
Report issues at: https://github.com/project/issues
main-template.md.liquid:
# Code Review for {{ file_name }}
Please review the following code:
```{{ language }}
{{ code }}
```

{% render "footer" %}
Conditional Partials
{% if include_security_check %}
{% render "security-checklist" %}
{% endif %}
{% if language == "rust" %}
{% render "rust-specific-guidelines" %}
{% elsif language == "python" %}
{% render "python-specific-guidelines" %}
{% endif %}
Nested Partials
Partials can include other partials:
main-header.liquid:
{% partial %}
{% render "project-info" %}
{% render "timestamp" %}
---
project-info.liquid:
{% partial %}
# {{ project_name | default: "Unknown Project" }}
Version: {{ version | default: "1.0.0" }}
Partials in Loops
{% for task in tasks %}
{% render "task-template" %}
{% endfor %}
task-template.liquid:
{% partial %}
## Task: {{ task.title }}
Status: {{ task.status }}
Priority: {{ task.priority | default: "normal" }}
{% if task.description %}
Description: {{ task.description }}
{% endif %}
Best Practices
- Use the `{% partial %}` tag: Always start partial files with `{% partial %}` to clearly mark them as partials
- Meaningful names: Use descriptive names for partials (`code-review-guidelines` instead of `guidelines`)
- Organize by function: Group related partials in subdirectories
- Keep partials focused: Each partial should have a single responsibility
- Document dependencies: If a partial expects certain variables, document them
- Test partials: Test partials with different variable contexts
Common Use Cases
Shared Headers and Footers
Create consistent headers and footers across multiple prompts:
{% render "shared/header" %}
{{ main_content }}
{% render "shared/footer" %}
Language-Specific Templates
{% case language %}
{% when "rust" %}
{% render "languages/rust-template" %}
{% when "python" %}
{% render "languages/python-template" %}
{% when "javascript" %}
{% render "languages/js-template" %}
{% else %}
{% render "languages/generic-template" %}
{% endcase %}
Conditional Content
{% if debug_mode %}
{% render "debug-info" %}
{% endif %}
{% if include_examples %}
{% render "code-examples" %}
{% endif %}
Troubleshooting
Partial not found: Check that the partial file exists in your prompt library and that the name matches exactly.
Variables not available: Partials use the same context as the parent template. Make sure variables are defined before rendering the partial.
Infinite recursion: Avoid having partials that include themselves or create circular dependencies.
Migration from Basic Templates
If you're migrating from basic `{{variable}}` syntax:
- Your existing templates still work - backward compatibility is maintained
- Add spaces for clarity: `{{var}}` → `{{ var }}`
- Use filters for transformation: `{{ name | upcase }}` instead of post-processing
- Add conditions for dynamic content: Use `{% if %}` blocks
- Use loops for repetitive content: Replace manual duplication with `{% for %}`
Migration Examples
Before: Basic Variable Substitution
Please review the {{language}} code in {{file}}.
Focus on {{focus_area}}.
After: Enhanced with Liquid Features
Please review the {{ language | capitalize }} code in {{ file }}.
{% if focus_area %}
Focus on {{ focus_area }}.
{% else %}
Perform a general code review.
{% endif %}
{% if language == "python" %}
Pay special attention to PEP 8 compliance.
{% elsif language == "javascript" %}
Check for ESLint rule violations.
{% endif %}
Before: Manual List Creation
Files to review:
- {{file1}}
- {{file2}}
- {{file3}}
After: Dynamic Lists with Loops
Files to review:
{% for file in files %}
- {{ file }}{% if forloop.last %} (final file){% endif %}
{% endfor %}
Total: {{ files | size }} files
Before: Fixed Templates
Status: {{status}}
After: Conditional Formatting
Status: {% case status %}
{% when "success" %}✅ {{ status | upcase }}
{% when "error" %}❌ {{ status | upcase }}
{% when "warning" %}⚠️ {{ status | capitalize }}
{% else %}{{ status }}
{% endcase %}
Differences from Handlebars/Mustache
If you're familiar with Handlebars or Mustache templating:
| Feature | Handlebars/Mustache | Liquid |
|---|---|---|
| Variables | `{{variable}}` | `{{ variable }}` |
| Conditionals | `{{#if}}...{{/if}}` | `{% if %}...{% endif %}` |
| Loops | `{{#each}}...{{/each}}` | `{% for %}...{% endfor %}` |
| Comments | `{{! comment }}` | `{% comment %}...{% endcomment %}` |
| Filters | Limited | Extensive built-in filters |
| Logic | Minimal | Full comparison operators |
Common Migration Patterns
1. Variable with Default
   - Before: Handle missing variables in code
   - After: `{{ variable | default: "fallback" }}`
2. Conditional Sections
   - Before: Generate different templates
   - After: Single template with `{% if %}` blocks
3. Repeated Content
   - Before: Manual duplication
   - After: `{% for %}` loops with `forloop` variables
4. String Transformation
   - Before: Transform in application code
   - After: Use Liquid filters directly
Backward Compatibility Notes
- Simple `{{variable}}` syntax continues to work
- Undefined variables are preserved as `{{ variable }}` in output
- No breaking changes to existing templates
- Gradual migration is supported - mix old and new syntax
Examples
Dynamic Code Review
{% if language == "python" %}
Please review this Python code for PEP 8 compliance.
{% elsif language == "javascript" %}
Please review this JavaScript code for ESLint rules.
{% else %}
Please review this {{ language }} code for best practices.
{% endif %}
{% if include_security %}
Also check for security vulnerabilities.
{% endif %}
Formatted List
{% for item in tasks %}
{{ forloop.index }}. {{ item.title }}
{% if item.completed %}✅{% else %}❌{% endif %}
Priority: {{ item.priority | default: "normal" }}
{% unless item.completed %}
Due: {{ item.due_date | date: "%B %d" }}
{% endunless %}
{% endfor %}
Conditional Debugging
{% if debug_mode %}
=== Debug Information ===
Variables: {{ arguments | json }}
Environment: {{ env.NODE_ENV | default: "development" }}
{% for key in api_keys %}
{{ key }}: {{ key | truncate: 8 }}...
{% endfor %}
{% endif %}
Best Practices
- Use meaningful variable names: `{{ user_email }}` instead of `{{ ue }}`
- Provide defaults: `{{ value | default: "N/A" }}` for optional values
- Format output: Use filters to ensure consistent formatting
- Comment complex logic: Use `{% comment %}` blocks
- Test edge cases: Empty arrays, nil values, missing variables
- Keep it readable: Break complex templates into sections
Custom Filters
SwissArmyHammer includes specialized custom filters designed for prompt engineering:
Code Filters
{{ code | format_lang: "rust" }} # Format code with language
{{ code | extract_functions }} # Extract function signatures
{{ path | basename }} # Get filename from path
{{ path | dirname }} # Get directory from path
{{ text | count_lines }} # Count number of lines
{{ code | dedent }} # Remove common indentation
Text Processing Filters
{{ text | extract_urls }} # Extract URLs from text
{{ title | slugify }} # Convert to URL-friendly slug
{{ text | word_wrap: 80 }} # Wrap text at 80 characters
{{ text | indent: 2 }} # Indent all lines by 2 spaces
{{ items | bullet_list }} # Convert array to bullet list
{{ text | highlight: "keyword" }} # Highlight specific terms
Data Transformation Filters
{{ json_string | from_json }} # Parse JSON string
{{ data | to_json }} # Convert to JSON string
{{ csv_string | from_csv }} # Parse CSV string
{{ array | to_csv }} # Convert to CSV string
{{ yaml_string | from_yaml }} # Parse YAML string
{{ data | to_yaml }} # Convert to YAML string
Utility Filters
{{ text | md5 }} # Generate MD5 hash
{{ text | sha1 }} # Generate SHA1 hash
{{ text | sha256 }} # Generate SHA256 hash
{{ number | ordinal }} # Convert to ordinal (1st, 2nd, 3rd)
{{ 100 | lorem_words }} # Generate lorem ipsum words
{{ date | format_date: "%Y-%m-%d" }} # Advanced date formatting
For complete documentation of custom filters, see the Custom Filters Reference.
Limitations
- No custom tags: Only standard Liquid tags (plus SwissArmyHammer's `{% partial %}` and `{% render %}`) are supported
- Performance: Very large loops may impact performance
Further Reading
- Official Liquid Documentation
- Liquid Playground - Test templates online
- Liquid Cheat Sheet
Custom Filters Reference
SwissArmyHammer includes a comprehensive set of custom Liquid filters designed specifically for prompt engineering and content processing.
Code Filters
format_lang
Formats code with language-specific syntax highlighting hints.
{{ code | format_lang: "rust" }}
{{ code | format_lang: language_var }}
Arguments:
- `language` - Programming language identifier (e.g., "rust", "python", "javascript")
Example:
<!-- Input -->
{{ "fn main() { println!(\"Hello\"); }" | format_lang: "rust" }}
<!-- Output -->
```rust
fn main() { println!("Hello"); }
```

extract_functions
Extracts function signatures and definitions from code.
```liquid
{{ code | extract_functions }}
{{ code | extract_functions: "detailed" }}
```
Arguments:
- `mode` (optional) - "signatures" (default) or "detailed" for full function bodies
Example:
<!-- Input -->
{{ rust_code | extract_functions }}
<!-- Output -->
- fn main()
- fn calculate(x: i32, y: i32) -> i32
- fn process_data(data: &Vec<String>) -> Result<(), Error>
basename
Extracts the filename from a file path.
{{ path | basename }}
Example:
<!-- Input -->
{{ "/usr/local/bin/swissarmyhammer" | basename }}
<!-- Output -->
swissarmyhammer
dirname
Extracts the directory path from a file path.
{{ path | dirname }}
Example:
<!-- Input -->
{{ "/usr/local/bin/swissarmyhammer" | dirname }}
<!-- Output -->
/usr/local/bin
count_lines
Counts the number of lines in text.
{{ text | count_lines }}
Example:
<!-- Input -->
{{ "line 1\nline 2\nline 3" | count_lines }}
<!-- Output -->
3
dedent
Removes common leading whitespace from all lines.
{{ code | dedent }}
Example:
<!-- Input -->
{{ " fn main() {\n println!(\"Hello\");\n }" | dedent }}
<!-- Output -->
fn main() {
    println!("Hello");
}
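This behavior matches Python's standard `textwrap.dedent`, which can be used to double-check expected output:

```python
import textwrap

# The same Rust snippet as above, with 4 spaces of common indentation.
code = '    fn main() {\n        println!("Hello");\n    }'
print(textwrap.dedent(code))
```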
Text Processing Filters
extract_urls
Extracts all URLs from text.
{{ text | extract_urls }}
{{ text | extract_urls: "list" }}
Arguments:
- `format` (optional) - "array" (default) or "list" for a bullet-point list
Example:
<!-- Input -->
{{ "Visit https://example.com and https://github.com" | extract_urls }}
<!-- Output -->
["https://example.com", "https://github.com"]
<!-- With list format -->
{{ "Visit https://example.com and https://github.com" | extract_urls: "list" }}
<!-- Output -->
- https://example.com
- https://github.com
slugify
Converts text to a URL-friendly slug.
{{ title | slugify }}
Example:
<!-- Input -->
{{ "Advanced Code Review Helper!" | slugify }}
<!-- Output -->
advanced-code-review-helper
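A typical slugify implementation lowercases the text, collapses runs of non-alphanumeric characters into hyphens, and trims leading and trailing hyphens. A Python sketch of that logic (SwissArmyHammer's exact rules, e.g. for Unicode, may differ):

```python
import re

def slugify(text: str) -> str:
    # Lowercase, collapse non-alphanumeric runs into hyphens, trim the edges.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

print(slugify("Advanced Code Review Helper!"))  # advanced-code-review-helper
```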
word_wrap
Wraps text at specified column width.
{{ text | word_wrap: 80 }}
{{ text | word_wrap: width_var }}
Arguments:
- `width` - Column width for wrapping (default: 80)
Example:
<!-- Input -->
{{ "This is a very long line that should be wrapped at a specific column width to ensure readability." | word_wrap: 30 }}
<!-- Output -->
This is a very long line that
should be wrapped at a specific
column width to ensure
readability.
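Python's `textwrap.fill` performs the same kind of greedy wrapping and is handy for sanity-checking (its exact break points may differ slightly from SwissArmyHammer's filter):

```python
import textwrap

text = ("This is a very long line that should be wrapped at a "
        "specific column width to ensure readability.")
# Greedy word wrap: no output line exceeds the requested width.
print(textwrap.fill(text, width=30))
```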
indent
Indents all lines by specified number of spaces.
{{ text | indent: 4 }}
{{ text | indent: spaces_var }}
Arguments:
- `spaces` - Number of spaces to indent (default: 2)
Example:
<!-- Input -->
{{ "line 1\nline 2" | indent: 4 }}
<!-- Output -->
    line 1
    line 2
bullet_list
Converts an array to a bullet point list.
{{ array | bullet_list }}
{{ array | bullet_list: "*" }}
Arguments:
- `bullet` (optional) - Bullet character (default: "-")
Example:
<!-- Input -->
{{ ["Item 1", "Item 2", "Item 3"] | bullet_list }}
<!-- Output -->
- Item 1
- Item 2
- Item 3
<!-- With custom bullet -->
{{ ["Item 1", "Item 2"] | bullet_list: "*" }}
<!-- Output -->
* Item 1
* Item 2
highlight
Highlights specific terms in text.
{{ text | highlight: "keyword" }}
{{ text | highlight: term_var }}
Arguments:
- `term` - Term to highlight
Example:
<!-- Input -->
{{ "This is important text with keywords" | highlight: "important" }}
<!-- Output -->
This is **important** text with keywords
Data Transformation Filters
from_json
Parses JSON string into object/array.
{{ json_string | from_json }}
Example:
<!-- Input -->
{% assign data = '{"name": "John", "age": 30}' | from_json %}
Name: {{ data.name }}
Age: {{ data.age }}
<!-- Output -->
Name: John
Age: 30
to_json
Converts object/array to JSON string.
{{ data | to_json }}
{{ data | to_json: "pretty" }}
Arguments:
- `format` (optional) - "compact" (default) or "pretty" for formatted output
Example:
<!-- Input -->
{% assign user = { "name": "John", "age": 30 } %}
{{ user | to_json: "pretty" }}
<!-- Output -->
{
"name": "John",
"age": 30
}
from_csv
Parses CSV string into array of objects.
{{ csv_string | from_csv }}
{{ csv_string | from_csv: ";" }}
Arguments:
- `delimiter` (optional) - Field delimiter (default: ",")
Example:
<!-- Input -->
{% assign data = "name,age\nJohn,30\nJane,25" | from_csv %}
{% for row in data %}
- {{ row.name }} is {{ row.age }} years old
{% endfor %}
<!-- Output -->
- John is 30 years old
- Jane is 25 years old
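The same parse can be reproduced with Python's standard `csv.DictReader`, which likewise yields one mapping per row keyed by the header:

```python
import csv
import io

csv_data = "name,age\nJohn,30\nJane,25"
# DictReader uses the first row as field names.
rows = list(csv.DictReader(io.StringIO(csv_data)))
for row in rows:
    print(f"- {row['name']} is {row['age']} years old")
```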
to_csv
Converts array of objects to CSV string.
{{ array | to_csv }}
{{ array | to_csv: ";" }}
Arguments:
- `delimiter` (optional) - Field delimiter (default: ",")
Example:
<!-- Input -->
{% assign users = [{"name": "John", "age": 30}, {"name": "Jane", "age": 25}] %}
{{ users | to_csv }}
<!-- Output -->
name,age
John,30
Jane,25
from_yaml
Parses YAML string into object/array.
{{ yaml_string | from_yaml }}
Example:
<!-- Input -->
{% assign config = "database:\n host: localhost\n port: 5432" | from_yaml %}
Host: {{ config.database.host }}
Port: {{ config.database.port }}
<!-- Output -->
Host: localhost
Port: 5432
to_yaml
Converts object/array to YAML string.
{{ data | to_yaml }}
Example:
<!-- Input -->
{% assign config = {"database": {"host": "localhost", "port": 5432}} %}
{{ config | to_yaml }}
<!-- Output -->
database:
host: localhost
port: 5432
Hash Filters
md5
Generates MD5 hash of input text.
{{ text | md5 }}
Example:
<!-- Input -->
{{ "hello world" | md5 }}
<!-- Output -->
5eb63bbbe01eeed093cb22bb8f5acdc3
sha1
Generates SHA1 hash of input text.
{{ text | sha1 }}
Example:
<!-- Input -->
{{ "hello world" | sha1 }}
<!-- Output -->
2aae6c35c94fcfb415dbe95f408b9ce91ee846ed
sha256
Generates SHA256 hash of input text.
{{ text | sha256 }}
Example:
<!-- Input -->
{{ "hello world" | sha256 }}
<!-- Output -->
b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
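These digests are standard, so you can verify the expected outputs with Python's `hashlib`:

```python
import hashlib

text = b"hello world"
# Hex digests of the exact input bytes (no trailing newline).
print(hashlib.md5(text).hexdigest())
print(hashlib.sha1(text).hexdigest())
print(hashlib.sha256(text).hexdigest())
```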
Utility Filters
ordinal
Converts number to ordinal format (1st, 2nd, 3rd, etc.).
{{ number | ordinal }}
Example:
<!-- Input -->
{{ 1 | ordinal }} item
{{ 22 | ordinal }} place
{{ 103 | ordinal }} attempt
<!-- Output -->
1st item
22nd place
103rd attempt
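The ordinal logic is short enough to sketch in Python; the only subtlety is that 11-13 take "th" regardless of their last digit:

```python
def ordinal(n: int) -> str:
    # 11th, 12th, 13th are irregular, so check the tens place first.
    if 10 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

print(ordinal(1), ordinal(22), ordinal(103), ordinal(11))  # 1st 22nd 103rd 11th
```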
lorem_words
Generates lorem ipsum text with specified number of words.
{{ count | lorem_words }}
Arguments:
- `count` - Number of words to generate
Example:
<!-- Input -->
{{ 10 | lorem_words }}
<!-- Output -->
Lorem ipsum dolor sit amet consectetur adipiscing elit sed do
format_date
Advanced date formatting with custom format strings.
{{ date | format_date: "%Y-%m-%d %H:%M:%S" }}
{{ "now" | format_date: "%B %d, %Y" }}
Arguments:
- `format` - Date format string (uses strftime format)
Common format codes:
- `%Y` - 4-digit year (2024)
- `%m` - Month as number (01-12)
- `%d` - Day of month (01-31)
- `%H` - Hour (00-23)
- `%M` - Minute (00-59)
- `%S` - Second (00-59)
- `%B` - Full month name (January)
- `%b` - Abbreviated month (Jan)
- `%A` - Full weekday name (Monday)
- `%a` - Abbreviated weekday (Mon)
Example:
<!-- Input -->
{{ "2024-01-15T10:30:00Z" | format_date: "%B %d, %Y at %I:%M %p" }}
{{ "now" | format_date: "%A, %Y-%m-%d" }}
<!-- Output -->
January 15, 2024 at 10:30 AM
Monday, 2024-01-15
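These format strings follow strftime conventions, so the same output can be reproduced with Python's `datetime.strftime` (month and weekday names assume an English locale):

```python
from datetime import datetime

# January 15, 2024 at 10:30 - matches the example input above.
ts = datetime(2024, 1, 15, 10, 30)
print(ts.strftime("%B %d, %Y at %I:%M %p"))  # January 15, 2024 at 10:30 AM
print(ts.strftime("%A, %Y-%m-%d"))           # Monday, 2024-01-15
```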
Filter Chaining
Filters can be chained together for complex transformations:
{{ code | dedent | format_lang: "python" | highlight: "def" }}
{{ user_input | strip | truncate: 100 | capitalize }}
{{ data | to_json | indent: 2 }}
{{ filename | basename | slugify | append: ".md" }}
Error Handling
Custom filters handle errors gracefully:
- Invalid input: Returns original value or empty string
- Missing arguments: Uses sensible defaults
- Type mismatches: Attempts conversion or returns original value
Performance Notes
- Hash filters (md5, sha1, sha256) are computationally expensive for large inputs
- Data transformation filters (JSON, CSV, YAML) may consume memory for large datasets
- Text processing filters are optimized for typical prompt content sizes
- Code filters use efficient parsing algorithms
Integration Examples
Code Review Prompt
# Code Review: {{ filename | basename }}
## File Information
- **Path**: {{ filepath }}
- **Lines**: {{ code | count_lines }}
- **Language**: {{ language | capitalize }}
## Code to Review
{{ code | dedent | format_lang: language }}
## Functions Found
{{ code | extract_functions | bullet_list }}
## Review Checklist
{% assign hash = code | sha256 | truncate: 8 %}
- [ ] Security review (ID: {{ hash }})
- [ ] Performance analysis
- [ ] Style compliance
Data Analysis Prompt
# Data Analysis Report
## Dataset Summary
{% assign data = csv_data | from_csv %}
- **Records**: {{ data | size }}
- **Generated**: {{ "now" | format_date: "%Y-%m-%d %H:%M" }}
## Sample Data
{% for row in data limit:3 %}
{{ forloop.index | ordinal }} record: {{ row | to_json }}
{% endfor %}
## Field Analysis
{% assign fields = data[0] | keys %}
Available fields: {{ fields | bullet_list }}
Documentation Generator
# API Documentation
## Endpoints
{% for endpoint in api_endpoints %}
### {{ endpoint.method | upcase }} {{ endpoint.path }}
{{ endpoint.description | word_wrap: 80 }}
{% if endpoint.parameters %}
**Parameters:**
{{ endpoint.parameters | to_yaml | indent: 2 }}
{% endif %}
**Example:**
```{{ endpoint.language | default: "bash" }}
{{ endpoint.example | dedent }}
```
{% endfor %}
Generated on {{ "now" | format_date: "%B %d, %Y" }}
## See Also
- [Template Variables](./template-variables.md) - Basic Liquid syntax
- [Advanced Prompts](./advanced-prompts.md) - Using filters in complex templates
- [Testing Guide](./testing-guide.md) - Testing templates with custom filters
Prompt Organization
Effective prompt organization is crucial for maintaining a scalable and manageable prompt library. This guide covers best practices for organizing your SwissArmyHammer prompts.
Directory Structure
Recommended Hierarchy
~/.swissarmyhammer/prompts/
├── development/
│   ├── languages/
│   │   ├── python/
│   │   ├── javascript/
│   │   └── rust/
│   ├── frameworks/
│   │   ├── react/
│   │   ├── django/
│   │   └── fastapi/
│   └── tools/
│       ├── git/
│       ├── docker/
│       └── ci-cd/
├── writing/
│   ├── technical/
│   ├── business/
│   └── creative/
├── data/
│   ├── analysis/
│   ├── transformation/
│   └── visualization/
├── productivity/
│   ├── planning/
│   ├── automation/
│   └── workflows/
└── _shared/
    ├── components/
    ├── templates/
    └── utilities/
Directory Purposes
- development/ - Programming and technical prompts
- writing/ - Content creation and documentation
- data/ - Data processing and analysis
- productivity/ - Task management and workflows
- _shared/ - Reusable components and utilities
Naming Conventions
File Names
Use consistent, descriptive file names:
# Good examples
code-review-python.md
api-documentation-generator.md
git-commit-message.md
database-migration-planner.md
# Avoid
review.md
doc.md
prompt1.md
temp.md
Naming Rules
- Use kebab-case for file names
- Be descriptive - Include the purpose and context
- Add type suffix when multiple variants exist
- Keep under 40 characters for readability
Prompt Names (in YAML)
# Good - matches file name pattern
name: code-review-python
# Also good - hierarchical naming
name: development/code-review/python
# Avoid - too generic
name: review
Categories and Tags
Using Categories
Categories provide broad groupings:
---
name: api-security-scanner
title: API Security Scanner
category: development
subcategory: security
---
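Tools that organize prompts by category need to read this front matter. A minimal stdlib-only sketch of that lookup follows; a real implementation would use a YAML parser to handle nested keys and lists:

```python
# Minimal front-matter reader (stdlib only). Handles flat "key: value"
# pairs; real YAML (nested keys, lists) needs a proper parser.
def read_front_matter(text: str) -> dict:
    if not text.startswith("---"):
        return {}
    _, fm, _ = text.split("---", 2)
    meta = {}
    for line in fm.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

prompt = """---
name: api-security-scanner
category: development
subcategory: security
---
# API Security Scanner
"""
meta = read_front_matter(prompt)
print(meta["category"], meta["subcategory"])  # development security
```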
Effective Tagging
Tags enable fine-grained discovery:
---
name: react-component-generator
tags:
- react
- javascript
- frontend
- component
- generator
- boilerplate
---
Category vs Tags
- Categories: Single, broad classification
- Tags: Multiple, specific attributes
# Category for primary classification
category: development
# Tags for detailed attributes
tags:
- python
- testing
- pytest
- unit-tests
- tdd
Modular Design
Base Templates
Create base templates in `_shared/templates/`:
<!-- _shared/templates/code-review-base.md -->
---
name: code-review-base
title: Base Code Review Template
abstract: true # Indicates this is a template
---
# Code Review
## Overview
Review the following code for:
- Best practices
- Potential bugs
- Performance issues
- Security concerns
## Code
```{{language}}
{{code}}
```
## Analysis
{{analysis_content}}
### Extending Base Templates
<!-- development/languages/python/code-review-python.md -->
---
name: code-review-python
title: Python Code Review
extends: code-review-base
---
{% capture analysis_content %}
### Python-Specific Checks
- PEP 8 compliance
- Type hints usage
- Pythonic idioms
- Import organization
{% endcapture %}
{% include "code-review-base" %}
Shared Components
Store reusable components in `_shared/components/`:
<!-- _shared/components/security-checks.md -->
---
name: security-checks-component
component: true
---
### Security Analysis
- Input validation
- SQL injection risks
- XSS vulnerabilities
- Authentication flaws
- Data exposure
Use in prompts:
---
name: web-app-review
---
# Web Application Review
{{code}}
{% include "_shared/components/security-checks.md" %}
## Additional Checks
...
Versioning
Version in Metadata
Track prompt versions:
---
name: api-generator
version: 2.1.0
updated: 2024-03-20
changelog:
- 2.1.0: Added GraphQL support
- 2.0.0: Breaking change - new argument structure
- 1.0.0: Initial release
---
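Version strings in this scheme compare numerically, not lexically, so tooling that picks the newest entry should split on dots. A sketch assuming plain MAJOR.MINOR.PATCH values with no pre-release tags:

```python
# Sketch: pick the newest version from a changelog list like the one above.
# Assumes plain MAJOR.MINOR.PATCH strings (no pre-release tags).
def version_key(version: str):
    return tuple(int(part) for part in version.split("."))

changelog = ["2.1.0", "2.0.0", "1.0.0"]
latest = max(changelog, key=version_key)
print(latest)  # 2.1.0

# Numeric comparison matters: "10.0.0" > "9.0.0" here, which a plain
# string comparison would get wrong.
```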
Version Directories
For major versions with breaking changes:
prompts/
└── development/
    └── api-generator/
        ├── v1/
        │   └── api-generator.md
        ├── v2/
        │   └── api-generator.md
        └── latest -> v2/api-generator.md
Migration Guides
Document version changes:
<!-- development/api-generator/MIGRATION.md -->
# API Generator Migration Guide
## v1 to v2
### Breaking Changes
- `endpoint_list` argument renamed to `endpoints`
- `auth_method` now requires specific values
### Migration Steps
1. Update argument names in your scripts
2. Validate auth_method values
3. Test with new version
Collections
Prompt Collections
Group related prompts:
<!-- collections/fullstack-development.md -->
---
name: fullstack-collection
title: Full-Stack Development Collection
type: collection
---
# Full-Stack Development Prompts
## Frontend
- `frontend/react-component` - React component generator
- `frontend/vue-template` - Vue.js templates
- `frontend/css-optimizer` - CSS optimization
## Backend
- `backend/api-design` - API design assistant
- `backend/database-schema` - Schema designer
- `backend/auth-implementation` - Authentication setup
## DevOps
- `devops/docker-config` - Docker configuration
- `devops/ci-pipeline` - CI/CD pipeline setup
- `devops/deployment-guide` - Deployment strategies
Collection Metadata
---
name: data-science-toolkit
type: collection
prompts:
- data/eda-assistant
- data/feature-engineering
- data/model-evaluation
- data/visualization-guide
dependencies:
- python
- pandas
- scikit-learn
---
Search and Discovery
Metadata for Search
Optimize prompts for discovery:
---
name: code-documenter
title: Intelligent Code Documentation Generator
description: |
Generates comprehensive documentation for code including:
- Function/method documentation
- Class documentation
- Module overview
- Usage examples
- API references
keywords:
- documentation
- docstring
- comments
- api docs
- code docs
- jsdoc
- sphinx
- rustdoc
search_terms:
- "generate documentation"
- "add comments to code"
- "create api docs"
- "document functions"
---
Aliases
Support multiple names:
---
name: git-commit-message
aliases:
- commit-message
- git-message
- commit-generator
---
Related Prompts
Link related prompts:
---
name: code-review-security
related:
- code-review-general
- security-audit
- vulnerability-scanner
- penetration-test-guide
---
Team Collaboration
Shared Conventions
Document team conventions in `CONVENTIONS.md`:
# Prompt Conventions
## Naming
- Use `project-` prefix for project-specific prompts
- Use `team-` prefix for team-wide prompts
- Use `personal-` prefix for individual prompts
## Categories
- `project` - Project-specific
- `team` - Team standards
- `experimental` - Under development
## Required Metadata
All prompts must include:
- name
- title
- description
- author
- created date
- category
Ownership
Track prompt ownership:
---
name: deployment-checklist
author: jane.doe@company.com
team: platform-engineering
maintainers:
- jane.doe@company.com
- john.smith@company.com
review_required: true
last_reviewed: 2024-03-15
---
Review Process
Implement prompt review:
---
name: api-contract-generator
status: draft # draft, review, approved, deprecated
reviewers:
- senior-dev-team
approved_by: tech-lead@company.com
approval_date: 2024-03-10
---
Import/Export Strategies
Partial Exports
Export specific categories:
# Export only development prompts
swissarmyhammer export dev-prompts.tar.gz --filter "category:development"
# Export by tags
swissarmyhammer export python-prompts.tar.gz --filter "tag:python"
# Export by date
swissarmyhammer export recent-prompts.tar.gz --filter "updated:>2024-01-01"
Collection Bundles
Create installable bundles:
# bundle.yaml
name: web-development-bundle
version: 1.0.0
description: Complete web development prompt collection
prompts:
include:
- category: frontend
- category: backend
- tag: web
exclude:
- tag: experimental
dependencies:
- swissarmyhammer: ">=0.1.0"
install_to: ~/.swissarmyhammer/prompts/bundles/web-dev/
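The `include`/`exclude` rules can be resolved with a two-pass filter: keep prompts matching any include rule, then drop those matching any exclude rule. The prompt records and field names below are hypothetical, chosen only to illustrate the semantics:

```python
# Sketch of bundle resolution: include by category or tag, then drop excludes.
# The prompt records below are hypothetical, not real library data.
prompts = [
    {"name": "react-component", "category": "frontend", "tags": ["web"]},
    {"name": "api-design", "category": "backend", "tags": ["web"]},
    {"name": "wild-idea", "category": "frontend", "tags": ["experimental"]},
]

include = [{"category": "frontend"}, {"category": "backend"}, {"tag": "web"}]
exclude = [{"tag": "experimental"}]

def matches(prompt, rule):
    if "category" in rule:
        return prompt["category"] == rule["category"]
    if "tag" in rule:
        return rule["tag"] in prompt["tags"]
    return False

bundle = [p for p in prompts
          if any(matches(p, r) for r in include)
          and not any(matches(p, r) for r in exclude)]
print([p["name"] for p in bundle])  # ['react-component', 'api-design']
```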
Sync Strategies
Keep prompts synchronized:
#!/bin/bash
# sync-prompts.sh
# Backup local changes
swissarmyhammer export local-backup-$(date +%Y%m%d).tar.gz
# Pull team prompts
git pull origin main
# Import team updates
swissarmyhammer import team-prompts.tar.gz --merge
# Export for distribution
swissarmyhammer export team-bundle.tar.gz --filter "team:approved"
Best Practices
1. Start Simple
Begin with basic organization:
prompts/
├── work/
├── personal/
└── learning/
Then evolve as needed.
2. Use Meaningful Hierarchies
# Good - clear hierarchy
development/testing/unit-test-generator.md
development/testing/integration-test-builder.md
# Avoid - flat structure
unit-test-generator.md
integration-test-builder.md
3. Document Your System
Create `prompts/README.md`:
# Prompt Library Organization
## Structure
- `development/` - Programming prompts
- `data/` - Data analysis prompts
- `writing/` - Content creation
- `_shared/` - Reusable components
## Naming Convention
- Files: `purpose-context-type.md`
- Prompts: Match file names
## How to Contribute
1. Choose appropriate category
2. Follow naming conventions
3. Include all required metadata
4. Test before committing
4. Regular Maintenance
# Find unused prompts
swissarmyhammer list --unused --days 90
# Find duplicates
swissarmyhammer list --duplicates
# Validate all prompts
swissarmyhammer doctor --check prompts
5. Progressive Enhancement
Start with basic prompts and enhance:
# Version 1 - Basic
name: code-review
description: Reviews code
# Version 2 - Enhanced
name: code-review
description: Reviews code for quality, security, and performance
category: development
tags: [review, quality, security]
version: 2.0.0
Examples
Enterprise Setup
company-prompts/
├── departments/
│   ├── engineering/
│   │   ├── standards/
│   │   ├── templates/
│   │   └── tools/
│   ├── product/
│   │   ├── specs/
│   │   ├── research/
│   │   └── documentation/
│   └── data-science/
│       ├── analysis/
│       ├── models/
│       └── reporting/
├── projects/
│   ├── project-alpha/
│   ├── project-beta/
│   └── _archived/
├── shared/
│   ├── components/
│   ├── templates/
│   └── utilities/
└── personal/
    └── [username]/
Open Source Project
oss-prompts/
├── contribution/
│   ├── issue-templates/
│   ├── pr-templates/
│   └── code-review/
├── documentation/
│   ├── api-docs/
│   ├── user-guide/
│   └── examples/
├── maintenance/
│   ├── release/
│   ├── changelog/
│   └── security/
└── community/
    ├── support/
    └── onboarding/
Automation
Auto-Organization Script
#!/usr/bin/env python3
# organize-prompts.py
import yaml
from pathlib import Path

def organize_prompts(source_dir, target_dir):
    """Auto-organize prompts based on metadata."""
    for prompt_file in Path(source_dir).glob("**/*.md"):
        with open(prompt_file) as f:
            content = f.read()

        # Extract front matter (skip files without one)
        if content.startswith("---"):
            _, fm, _ = content.split("---", 2)
            metadata = yaml.safe_load(fm)

            # Determine target path
            category = metadata.get("category", "uncategorized")
            subcategory = metadata.get("subcategory", "")
            target_path = Path(target_dir) / category
            if subcategory:
                target_path = target_path / subcategory

            # Move file
            target_path.mkdir(parents=True, exist_ok=True)
            target_file = target_path / prompt_file.name
            prompt_file.rename(target_file)
            print(f"Moved {prompt_file} -> {target_file}")
Validation Script
#!/bin/bash
# validate-organization.sh
echo "Validating prompt organization..."
# Check for prompts without categories
echo "Prompts without categories:"
grep -L "category:" prompts/**/*.md
# Check for duplicate names
echo "Duplicate prompt names:"
swissarmyhammer list --format json | jq -r '.[] | .name' | sort | uniq -d
# Check naming conventions
echo "Files not following naming convention:"
find prompts -name "*.md" | grep -v -E '/[a-z0-9-]+\.md$'
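The duplicate check in the script shells out to `jq`; the same logic is a short Python snippet, sketched here against a hypothetical list of prompt names:

```python
# Sketch: find duplicate prompt names, the same check the jq pipeline above
# performs on `swissarmyhammer list --format json` output.
from collections import Counter

names = ["code-review", "debug", "code-review", "docs"]  # hypothetical names
duplicates = sorted(n for n, count in Counter(names).items() if count > 1)
print(duplicates)  # ['code-review']
```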
Next Steps
- See Creating Prompts for prompt creation guidelines
- Learn about Advanced Prompts for complex scenarios
- Explore Examples for organization patterns
- Read Configuration for system-wide settings
Claude Code Integration
SwissArmyHammer integrates seamlessly with Claude Code through the Model Context Protocol (MCP). This guide shows you how to set up and use SwissArmyHammer with Claude Code.
What is MCP?
The Model Context Protocol allows AI assistants like Claude to access external tools and data sources. SwissArmyHammer acts as an MCP server, providing Claude Code with access to your prompt library.
Installation
1. Install SwissArmyHammer
See the Installation Guide for detailed instructions. The quickest method:
# Install using the install script
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
2. Verify Installation
swissarmyhammer --version
3. Test the MCP Server
swissarmyhammer serve --help
Configuration
Add to Claude Code
Configure Claude Code to use SwissArmyHammer as an MCP server:
claude mcp add --scope user swissarmyhammer swissarmyhammer serve
This command:
- Adds `swissarmyhammer` as the MCP server name
- Uses `swissarmyhammer serve` as the command to start the server
- Sets user scope (available for your user account)
Alternative Configuration Methods
Manual Configuration
If you prefer to configure manually, add this to your Claude Code MCP configuration:
{
"mcpServers": {
"swissarmyhammer": {
"command": "swissarmyhammer",
"args": ["serve"]
}
}
}
Project-Specific Configuration
For project-specific prompts, create a local configuration:
# In your project directory
claude mcp add --scope project swissarmyhammer_local swissarmyhammer serve --prompts ./prompts
Verification
Check MCP Configuration
List your configured MCP servers:
claude mcp list
You should see `swissarmyhammer` in the output.
Test the Connection
Start Claude Code and verify SwissArmyHammer is connected:
- Open Claude Code
- Look for SwissArmyHammer prompts in the available tools
- Try using a built-in prompt like `help` or `plan`
Debug Connection Issues
If SwissArmyHammer doesn't appear in Claude Code:
# Check if the server starts correctly
swissarmyhammer serve --debug
# Verify prompts are loaded
swissarmyhammer list
# Check configuration
claude mcp list
Usage in Claude Code
Available Features
Once configured, SwissArmyHammer provides these features in Claude Code:
1. Prompt Library Access
All your prompts become available as tools in Claude Code:
- Built-in prompts (code review, debugging, documentation)
- User prompts (in `~/.swissarmyhammer/prompts/`)
- Local prompts (in current project directory)
2. Dynamic Arguments
Prompts with arguments become interactive forms in Claude Code:
---
name: code-review
title: Code Review
description: Reviews code for best practices, bugs, and improvements
arguments:
- name: code
description: Code to review
required: true
type_hint: string
- name: language
description: Programming language
required: false
default: auto-detect
type_hint: string
---
3. Live Reloading
Changes to prompt files are automatically detected and reloaded.
Using Prompts
Basic Usage
- Select a Prompt: Choose from available SwissArmyHammer prompts
- Fill Arguments: Provide required and optional parameters
- Execute: Claude runs the prompt with your arguments
Example Workflow
1. Code Review:
   - Select the `code-review` prompt
   - Paste your code in the `code` field
   - Set `language` if needed
   - Execute to get detailed code analysis
2. Debug Helper:
   - Select the `debug` prompt
   - Describe your error in the `error` field
   - Get step-by-step debugging guidance
3. Documentation:
   - Select the `docs` prompt
   - Provide code or specifications
   - Generate comprehensive documentation
Advanced Usage
Prompt Chaining
Use multiple prompts in sequence:
1. Use `analyze-code` to understand the codebase
2. Use `plan` to create implementation strategy
3. Use `code-review` on the new code
4. Use `docs` to generate documentation
Custom Workflows
Create project-specific prompt workflows:
# Project prompts in ./.swissarmyhammer/prompts/
## development/
- project-setup.md - Initialize new features
- code-standards.md - Apply project coding standards
- deployment.md - Deploy to staging/production
## documentation/
- api-docs.md - Generate API documentation
- user-guide.md - Create user-facing documentation
- changelog.md - Generate release notes
Configuration Options
Server Configuration
The `swissarmyhammer serve` command accepts several options:
swissarmyhammer serve [OPTIONS]
Common Options
- `--port <PORT>` - Port for MCP communication (default: auto)
- `--host <HOST>` - Host to bind to (default: localhost)
- `--prompts <DIR>` - Additional prompt directories
- `--builtin <BOOL>` - Include built-in prompts (default: true)
- `--watch <BOOL>` - Enable file watching (default: true)
- `--debug` - Enable debug logging
Examples
# Basic server
swissarmyhammer serve
# Custom prompt directory
swissarmyhammer serve --prompts /path/to/prompts
# Multiple prompt directories
swissarmyhammer serve --prompts ./prompts --prompts ~/.custom-prompts
# Debug mode
swissarmyhammer serve --debug
# Disable built-in prompts
swissarmyhammer serve --builtin false
Claude Code Configuration
Server Arguments
Pass arguments to the SwissArmyHammer server:
# Add with custom options
claude mcp add swissarmyhammer_custom swissarmyhammer serve --prompts ./project-prompts --debug
Environment Variables
Configure through environment variables:
export SWISSARMYHAMMER_PROMPTS_DIR=/path/to/prompts
export SWISSARMYHAMMER_DEBUG=true
claude mcp add swissarmyhammer swissarmyhammer serve
Prompt Organization
Directory Structure
Organize prompts for easy discovery in Claude Code:
~/.swissarmyhammer/prompts/
├── development/
│   ├── code-review.md
│   ├── debug-helper.md
│   ├── refactor.md
│   └── testing.md
├── writing/
│   ├── blog-post.md
│   ├── documentation.md
│   └── email.md
├── analysis/
│   ├── data-insights.md
│   └── research.md
└── productivity/
    ├── task-planning.md
    └── meeting-notes.md
Naming Conventions
Use clear, descriptive names that work well in Claude Code:
# Good - Clear and specific
code-review-python.md
debug-javascript-async.md
documentation-api.md
# Bad - Too generic
review.md
debug.md
docs.md
Categories and Tags
Use categories and tags for better organization in Claude Code:
---
name: python-code-review
title: Python Code Review
description: Reviews Python code for PEP 8, security, and performance
category: development
tags: ["python", "code-review", "pep8", "security"]
---
Troubleshooting
Common Issues
SwissArmyHammer Not Available
Symptoms: SwissArmyHammer prompts don't appear in Claude Code
Solutions:
- Verify installation: `swissarmyhammer --version`
- Check MCP configuration: `claude mcp list`
- Test server manually: `swissarmyhammer serve --debug`
- Restart Claude Code
Connection Errors
Symptoms: Error messages about MCP connection
Solutions:
- Check if the port is available: `swissarmyhammer serve --port 8080`
- Verify permissions: run with `--debug` to see detailed logs
- Check firewall settings
- Try a different host: `--host 127.0.0.1`
Prompts Not Loading
Symptoms: Some prompts missing or outdated
Solutions:
- Check prompt syntax: `swissarmyhammer test <prompt-name>`
- Verify file permissions in prompt directories
- Check for YAML syntax errors
- Restart the MCP server: Restart Claude Code
Performance Issues
Symptoms: Slow prompt loading or execution
Solutions:
- Reduce prompt directory size
- Disable file watching: `--watch false`
- Use specific prompt directories: `--prompts ./specific-dir`
- Check system resources
Debug Mode
Enable debug mode for detailed troubleshooting:
swissarmyhammer serve --debug
Debug mode provides:
- Detailed logging of MCP communication
- Prompt loading information
- Error stack traces
- Performance metrics
Logs and Diagnostics
Server Logs
SwissArmyHammer logs to standard output:
# Save logs to file
swissarmyhammer serve --debug > swissarmyhammer.log 2>&1
Claude Code Logs
Check Claude Code logs for MCP-related issues:
# Location varies by platform
# macOS: ~/Library/Logs/Claude Code/
# Linux: ~/.local/share/claude-code/logs/
# Windows: %APPDATA%/Claude Code/logs/
Health Check
Use the doctor command to check configuration:
swissarmyhammer doctor
This checks:
- Installation status
- Configuration validity
- Prompt directory accessibility
- MCP server functionality
Best Practices
1. Organize Prompts Logically
Structure prompts by workflow rather than just topic:
prompts/
├── workflows/
│   ├── code-review-workflow.md
│   ├── feature-development.md
│   └── bug-fixing.md
└── utilities/
    ├── format-code.md
    ├── generate-tests.md
    └── extract-docs.md
2. Use Descriptive Metadata
Make prompts discoverable with good metadata:
---
name: comprehensive-code-review
title: Comprehensive Code Review
description: Deep analysis of code for security, performance, and maintainability
category: development
tags: ["security", "performance", "maintainability", "best-practices"]
keywords: ["static analysis", "code quality", "peer review"]
---
3. Test Prompts Regularly
Validate prompts before using in Claude Code:
# Test basic functionality
swissarmyhammer test code-review --code "def hello(): print('hi')"
# Test all prompts
swissarmyhammer test --all
4. Use Project-Specific Configurations
Create project-specific prompt collections:
# Per-project MCP server
cd my-project
claude mcp add project_prompts swissarmyhammer serve --prompts ./prompts
5. Keep Prompts Updated
Maintain prompt quality:
---
name: my-prompt
version: 1.2.0
updated: 2024-03-20 # Track changes
---
Examples
Complete Setup Example
Here's a complete example of setting up SwissArmyHammer for a development project:
# 1. Install SwissArmyHammer
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
# 2. Create project prompts
mkdir -p ./.swissarmyhammer/prompts/development
cat > ./.swissarmyhammer/prompts/development/project-review.md << 'EOF'
---
name: project-code-review
title: Project Code Review
description: Reviews code according to our project standards
category: development
arguments:
- name: code
description: Code to review
required: true
- name: component
description: Which component this code belongs to
required: false
default: general
---
# Project Code Review
Please review this {{component}} code according to our project standards:
```{{code}}```
Check for:
- Adherence to our coding standards
- Security best practices
- Performance considerations
- Documentation completeness
- Test coverage
Provide specific, actionable feedback.
EOF
# 3. Configure Claude Code
claude mcp add project_sah swissarmyhammer serve --prompts ./.swissarmyhammer/prompts
# 4. Test the setup
swissarmyhammer test project-code-review --code "print('hello')" --component "utility"
# 5. Start using in Claude Code
echo "Setup complete! Restart Claude Code to use your prompts."
Next Steps
- Explore Built-in Prompts to see what's available
- Learn Creating Prompts to build custom prompts
- Read about Prompt Organization strategies
- Check the CLI Reference for all available commands
- See Troubleshooting for additional help
Command Line Interface
SwissArmyHammer provides a comprehensive command-line interface for managing prompts, running the MCP server, and integrating with your development workflow.
Installation
# Install from Git repository (requires Rust)
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
# Ensure ~/.cargo/bin is in your PATH
export PATH="$HOME/.cargo/bin:$PATH"
Basic Usage
swissarmyhammer [COMMAND] [OPTIONS]
Global Options
- `--help, -h` - Display help information
- `--version, -V` - Display version information
Commands Overview
| Command | Description |
|---|---|
| `serve` | Run as MCP server for Claude Code integration |
| `search` | Search and discover prompts with powerful filtering |
| `test` | Interactively test prompts with arguments |
| `doctor` | Diagnose configuration and setup issues |
| `completion` | Generate shell completion scripts |
Quick Examples
Start MCP Server
# Run as MCP server (for Claude Code)
swissarmyhammer serve
Search for Prompts
# Search for code-related prompts
swissarmyhammer search code
# Search with regex in descriptions
swissarmyhammer search --regex "test.*unit" --in description
Test a Prompt
# Interactively test a prompt
swissarmyhammer test code-review
# Test with predefined arguments
swissarmyhammer test code-review --arg code="fn main() { println!(\"Hello\"); }"
Check Setup
# Diagnose any configuration issues
swissarmyhammer doctor
Generate Shell Completions
# Generate Bash completions
swissarmyhammer completion bash > ~/.bash_completion.d/swissarmyhammer
# Generate Zsh completions
swissarmyhammer completion zsh > ~/.zfunc/_swissarmyhammer
Exit Codes
- `0` - Success
- `1` - General error
- `2` - Command line usage error
- `3` - Configuration error
- `4` - Prompt not found
- `5` - Template rendering error
Configuration
SwissArmyHammer looks for prompts in these directories (in order):
- Built-in prompts (embedded in the binary)
- User prompts: `~/.swissarmyhammer/prompts/`
- Local prompts: `./.swissarmyhammer/prompts/` (current directory)
For detailed command documentation, see the individual command pages linked in the table above.
serve Command
The `serve` command starts SwissArmyHammer as a Model Context Protocol (MCP) server, making your prompts available to Claude Code and other MCP clients.
Usage
swissarmyhammer serve [OPTIONS]
Overview
The serve command:
- Starts an MCP server that provides access to your prompt library
- Loads prompts from various directories (built-in, user, local)
- Watches for file changes and automatically reloads prompts
- Provides real-time access to prompts for Claude Code integration
Options
--port <PORT>
- Description: Port number for MCP communication
- Default: Automatically assigned by the system
- Example: `--port 8080`
swissarmyhammer serve --port 8080
--host <HOST>
- Description: Host address to bind the server to
- Default: `localhost`
- Example: `--host 127.0.0.1`
swissarmyhammer serve --host 127.0.0.1
--prompts <DIRECTORY>
- Description: Additional directories to load prompts from
- Default: Standard locations (`~/.swissarmyhammer/prompts`, `./.swissarmyhammer/prompts`)
- Repeatable: Can be used multiple times
- Example: `--prompts ./custom-prompts`
# Single custom directory
swissarmyhammer serve --prompts ./project-prompts
# Multiple directories
swissarmyhammer serve --prompts ./prompts --prompts ~/.custom-prompts
--builtin <BOOLEAN>
- Description: Include built-in prompts in the library
- Default: `true`
- Values: `true`, `false`
- Example: `--builtin false`
# Disable built-in prompts
swissarmyhammer serve --builtin false
# Explicitly enable built-in prompts
swissarmyhammer serve --builtin true
--watch <BOOLEAN>
- Description: Enable file watching for automatic prompt reloading
- Default: `true`
- Values: `true`, `false`
- Example: `--watch false`
# Disable file watching (for performance)
swissarmyhammer serve --watch false
--debug
- Description: Enable debug logging for troubleshooting
- Default: Disabled
- Output: Detailed logs to stdout
swissarmyhammer serve --debug
--config <FILE>
- Description: Path to configuration file
- Default: `~/.swissarmyhammer/config.toml`
- Example: `--config ./custom-config.toml`
swissarmyhammer serve --config ./project-config.toml
Examples
Basic Server
Start a basic MCP server with default settings:
swissarmyhammer serve
This loads:
- Built-in prompts
- User prompts from `~/.swissarmyhammer/prompts/`
- Local prompts from `./.swissarmyhammer/prompts/` (if it exists)

It also enables file watching by default.
Development Server
For development with debug logging:
swissarmyhammer serve --debug
Output includes:
- Prompt loading details
- MCP protocol messages
- File watching events
- Error stack traces
Custom Prompt Directory
Serve prompts from a specific directory:
swissarmyhammer serve --prompts /path/to/my/prompts
Multiple Directories
Load prompts from multiple locations:
swissarmyhammer serve \
--prompts ./project-prompts \
--prompts ~/.shared-prompts \
--prompts /team/common-prompts
Project-Only Prompts
Serve only local project prompts (no built-in or user prompts):
swissarmyhammer serve \
--prompts ./prompts \
--builtin false
Performance-Optimized
For large prompt collections, disable file watching:
swissarmyhammer serve \
--watch false \
--prompts ./large-prompt-collection
Custom Port and Host
Specify network settings:
swissarmyhammer serve \
--host 0.0.0.0 \
--port 9000
Prompt Loading Order
SwissArmyHammer loads prompts in this order:
1. Built-in prompts (if `--builtin true`)
   - Located in the binary
   - Categories: development, writing, analysis, etc.
2. User prompts (always loaded)
   - Location: `~/.swissarmyhammer/prompts/`
   - Your personal prompt library
3. Custom directories (from `--prompts` flags)
   - Processed in order specified
   - Can override earlier prompts with the same name
4. Local prompts (always checked)
   - Location: `./.swissarmyhammer/prompts/` in the current directory
   - Project-specific prompts
Prompt Override Behavior
When prompts have the same name:
- Later sources override earlier ones
- Local prompts have highest priority
- Built-in prompts have lowest priority
Example hierarchy:
./.swissarmyhammer/prompts/code-review.md (highest priority)
~/.custom/code-review.md (from --prompts ~/.custom)
~/.swissarmyhammer/prompts/code-review.md (user prompts)
built-in:code-review (lowest priority)
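This precedence rule amounts to loading sources from lowest to highest priority into a single map, letting later entries win. A sketch with illustrative prompt bodies:

```python
# Sketch of the override rule: load sources from lowest to highest priority
# so that later inserts replace earlier ones. Names/bodies are illustrative.
sources = [
    ("built-in",       {"code-review": "built-in version"}),
    ("user prompts",   {"code-review": "user version", "debug": "user debug"}),
    ("--prompts dirs", {"code-review": "custom version"}),
    ("local prompts",  {"code-review": "local version"}),
]

library = {}
for _source, prompts in sources:
    library.update(prompts)  # later (higher-priority) sources win

print(library["code-review"])  # local version
print(library["debug"])        # user debug
```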
File Watching
When file watching is enabled (`--watch true`), the server automatically:
Detects Changes
- New prompt files added
- Existing prompt files modified
- Prompt files deleted
- Directory structure changes
Reloads Prompts
- Parses updated files
- Validates YAML front matter
- Updates the prompt library
- Notifies connected MCP clients
Handles Errors
- Invalid YAML syntax
- Missing required fields
- Template compilation errors
- Logs errors without stopping the server
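This detect-and-reload cycle can be approximated by polling modification times, as in the sketch below. The real server presumably uses OS file-system notifications; this only illustrates the three change categories:

```python
# Polling sketch of the watch/reload cycle described above: scan a prompt
# directory, then diff two scans to see what was added, removed, or modified.
from pathlib import Path

def scan(prompt_dir: str) -> dict:
    """Map each prompt file to its last modification time."""
    return {p: p.stat().st_mtime for p in Path(prompt_dir).glob("**/*.md")}

def diff(before: dict, after: dict):
    added = [p for p in after if p not in before]
    removed = [p for p in before if p not in after]
    modified = [p for p in after if p in before and after[p] != before[p]]
    return added, removed, modified

# Usage: call scan() periodically and reload only what diff() reports.
```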
Performance Considerations
File watching uses system resources:
- Memory: Stores file metadata
- CPU: Processes file system events
- Disk I/O: Reads modified files
For large prompt collections (1000+ files), consider:
# Disable watching for better performance
swissarmyhammer serve --watch false
MCP Protocol Details
Server Capabilities
SwissArmyHammer advertises these MCP capabilities:
{
"capabilities": {
"prompts": {
"listChanged": true
},
"tools": {
"listChanged": false
}
}
}
Prompt Exposure
Each prompt becomes an MCP prompt with:
- Name: From the prompt's `name` field
- Description: From the prompt's `description` field
- Arguments: From the prompt's `arguments` array
Example MCP Prompt
A SwissArmyHammer prompt:
---
name: code-review
title: Code Review Assistant
description: Reviews code for best practices and issues
arguments:
- name: code
description: Code to review
required: true
- name: language
description: Programming language
required: false
default: auto-detect
---
Becomes this MCP prompt:
{
"name": "code-review",
"description": "Reviews code for best practices and issues",
"arguments": [
{
"name": "code",
"description": "Code to review",
"required": true
},
{
"name": "language",
"description": "Programming language",
"required": false
}
]
}
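The mapping is mechanical: `name` and `description` are copied through, and each argument keeps its `name`, `description`, and `required` flag (note that `default` does not appear in the MCP argument shown above). A sketch of that transformation, not the server's actual code:

```python
# Sketch of the front-matter -> MCP prompt mapping shown above. In the
# example output, `default` does not survive into the MCP argument shape.
def to_mcp_prompt(meta: dict) -> dict:
    return {
        "name": meta["name"],
        "description": meta["description"],
        "arguments": [
            {"name": a["name"],
             "description": a["description"],
             "required": a.get("required", False)}
            for a in meta.get("arguments", [])
        ],
    }

meta = {
    "name": "code-review",
    "title": "Code Review Assistant",
    "description": "Reviews code for best practices and issues",
    "arguments": [
        {"name": "code", "description": "Code to review", "required": True},
        {"name": "language", "description": "Programming language",
         "required": False, "default": "auto-detect"},
    ],
}
mcp = to_mcp_prompt(meta)
print(mcp["arguments"][1])
```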
Integration with Claude Code
Configuration
Add SwissArmyHammer to Claude Code's MCP configuration:
claude mcp add swissarmyhammer swissarmyhammer serve
Custom Configuration
Add with specific options:
claude mcp add project_sah swissarmyhammer serve --prompts ./project-prompts --debug
Multiple Servers
Run different SwissArmyHammer instances:
# Global prompts
claude mcp add sah_global swissarmyhammer serve
# Project-specific prompts
claude mcp add sah_project swissarmyhammer serve --prompts ./prompts --builtin false
Logging and Output
Standard Output
Normal operation logs:
2024-03-20T10:30:00Z INFO SwissArmyHammer MCP Server starting
2024-03-20T10:30:00Z INFO Loaded 25 prompts from 3 directories
2024-03-20T10:30:00Z INFO Server listening on localhost:8080
2024-03-20T10:30:00Z INFO MCP client connected
Debug Output
With the `--debug` flag:
2024-03-20T10:30:00Z DEBUG Loading prompts from: ~/.swissarmyhammer/prompts
2024-03-20T10:30:00Z DEBUG Found prompt file: code-review.md
2024-03-20T10:30:00Z DEBUG Parsed prompt: code-review (Code Review Assistant)
2024-03-20T10:30:00Z DEBUG MCP request: prompts/list
2024-03-20T10:30:00Z DEBUG MCP response: 25 prompts returned
Error Handling
The server continues running even with errors:
2024-03-20T10:30:00Z ERROR Failed to parse prompt: invalid-prompt.md
2024-03-20T10:30:00Z ERROR YAML error: missing required field 'description'
2024-03-20T10:30:00Z INFO Continuing with 24 valid prompts
Troubleshooting
Server Won't Start
Check port availability:
# Try a specific port
swissarmyhammer serve --port 8080
# Check if port is in use
lsof -i :8080 # macOS/Linux
netstat -an | findstr 8080 # Windows
Check permissions:
# Run with debug to see detailed errors
swissarmyhammer serve --debug
Prompts Not Loading
Verify directories exist:
# Check default directories
ls -la ~/.swissarmyhammer/prompts
ls -la ./.swissarmyhammer/prompts
# Check custom directories
ls -la /path/to/custom/prompts
Validate prompt syntax:
# Test individual prompts
swissarmyhammer test prompt-name
# Validate all prompts
swissarmyhammer doctor
Performance Issues
Large prompt collections:
# Disable file watching
swissarmyhammer serve --watch false
# Limit to specific directories
swissarmyhammer serve --prompts ./essential-prompts --builtin false
Memory usage:
# Monitor memory usage
top -p $(pgrep swissarmyhammer) # Linux
top -pid $(pgrep swissarmyhammer)  # macOS
Connection Issues
MCP client can't connect:
# Check server is running
ps aux | grep swissarmyhammer
# Test with different host/port
swissarmyhammer serve --host 127.0.0.1 --port 8080
# Check firewall settings
Debug MCP communication:
# Enable debug logging
swissarmyhammer serve --debug
# Save logs to file
swissarmyhammer serve --debug > server.log 2>&1
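With logs saved to a file in the format shown earlier, the failing prompt files are easy to extract afterwards. A sketch using a sample log in the documented format:

```shell
# Sample server.log in the documented log format
cat > server.log <<'EOF'
2024-03-20T10:30:00Z ERROR Failed to parse prompt: invalid-prompt.md
2024-03-20T10:30:00Z ERROR YAML error: missing required field 'description'
2024-03-20T10:30:00Z INFO Continuing with 24 valid prompts
EOF

# Extract the file names of prompts that failed to parse
grep 'Failed to parse prompt' server.log | sed 's/.*: //'
```

This prints `invalid-prompt.md`, the one file that needs fixing.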
Configuration File
Create a configuration file for persistent settings:
# ~/.swissarmyhammer/config.toml
[server]
host = "localhost"
port = 8080
debug = false
[prompts]
builtin = true
watch = true
directories = [
  "~/.swissarmyhammer/prompts",
  "./.swissarmyhammer/prompts",
  "/team/shared-prompts"
]
Use with:
swissarmyhammer serve --config ~/.swissarmyhammer/config.toml
Environment Variables
Configure through environment variables:
export SWISSARMYHAMMER_HOST=localhost
export SWISSARMYHAMMER_PORT=8080
export SWISSARMYHAMMER_DEBUG=true
export SWISSARMYHAMMER_PROMPTS_DIR=/custom/prompts
swissarmyhammer serve
Best Practices
1. Use Consistent Directory Structure
~/.swissarmyhammer/prompts/
├── development/
├── writing/
├── analysis/
└── productivity/
2. Enable Debug During Development
swissarmyhammer serve --debug
3. Use Project-Specific Servers
# In each project
claude mcp add project_sah swissarmyhammer serve --prompts ./prompts
4. Monitor Performance
# For large collections
swissarmyhammer serve --watch false --debug
5. Version Control Integration
# .gitignore
.swissarmyhammer/cache/
.swissarmyhammer/logs/
# Keep prompts in version control
git add prompts/
Next Steps
- Learn about Claude Code Integration setup
- Explore Configuration options
- See Troubleshooting for common issues
- Check Built-in Prompts reference
search - Search and Discover Prompts
The search command provides powerful functionality to find prompts in your collection using various search strategies and filters.
Synopsis
swissarmyhammer search [OPTIONS] [QUERY]
Description
Search through your prompt collection using fuzzy matching, regular expressions, or exact text matching. The search can target specific fields and provides relevance-ranked results.
Arguments
QUERY
- Search term or pattern (optional if using filters)
Options
Search Strategy
--case-sensitive, -c
- Enable case-sensitive matching
--regex, -r
- Use regular expressions instead of fuzzy matching
--fuzzy
- Use fuzzy string matching (default for simple queries)
--semantic
- Use AI-powered semantic search with embeddings
--hybrid
- Combine fuzzy, full-text, and semantic search results
--full, -f
- Show full prompt content in results
Field Targeting
--in FIELD
- Search in specific field (title, description, content, all)
title
- Search only in prompt titles
description
- Search only in prompt descriptions
content
- Search only in prompt content/body
all
- Search in all fields (default)
Filtering
--source SOURCE
- Filter by prompt source (builtin, user, local)
--has-arg ARG
- Show prompts that have a specific argument
--no-args
- Show prompts with no arguments
--language LANG
- Filter by programming language (for semantic search)
Semantic Search Options
--threshold FLOAT
- Similarity threshold for semantic search (0.0-1.0)
--model MODEL
- Embedding model to use for semantic search
--include-structure
- Include code structure in semantic analysis
--include-docs
- Include documentation and comments in search
--code-only
- Search only code content, exclude comments
Output Control
--limit, -l N
- Limit results to N prompts (default: 20)
--json
- Output results in JSON format
Examples
Basic Search
# Find prompts containing "code"
swissarmyhammer search code
# Case-sensitive search
swissarmyhammer search --case-sensitive "Code Review"
Field-Specific Search
# Search only in titles
swissarmyhammer search --in title "review"
# Search only in descriptions
swissarmyhammer search --in description "debugging"
# Search in content/body
swissarmyhammer search --in content "TODO"
Regular Expression Search
# Find prompts with "test" followed by any word
swissarmyhammer search --regex "test\s+\w+"
# Find prompts starting with specific pattern
swissarmyhammer search --regex "^(debug|fix|analyze)"
Advanced Filtering
# Find built-in prompts only
swissarmyhammer search --source builtin
# Find prompts with "code" argument
swissarmyhammer search --has-arg code
# Find prompts without any arguments
swissarmyhammer search --no-args
# Combine filters
swissarmyhammer search review --source user --has-arg language
Output Options
# Show full content of matching prompts
swissarmyhammer search code --full
# Limit to 5 results
swissarmyhammer search --limit 5 test
# Get JSON output for scripting
swissarmyhammer search --json "data analysis"
Semantic Search Examples
# Basic semantic search
swissarmyhammer search --semantic "error handling patterns"
# Language-specific semantic search
swissarmyhammer search --semantic "async functions" --language rust
# High-precision semantic search
swissarmyhammer search --semantic "database connection" --threshold 0.8
# Hybrid search combining all strategies
swissarmyhammer search --hybrid "authentication middleware"
# Semantic search with specific model
swissarmyhammer search --semantic "testing patterns" --model all-mpnet-base-v2
# Code-only semantic search
swissarmyhammer search --semantic "sorting algorithm" --code-only
Output Format
Default Output
Found 3 prompts matching "code":
code-review (builtin)
Review code for best practices and potential issues
Arguments: code, language (optional)
debug-code (user)
Help debug programming issues and errors
Arguments: error, context (optional)
analyze-performance (local)
Analyze code performance and suggest optimizations
Arguments: code, language, metrics (optional)
JSON Output
{
  "query": "code",
  "results": [
    {
      "id": "code-review",
      "title": "Code Review Helper",
      "description": "Review code for best practices and potential issues",
      "source": "builtin",
      "path": "/builtin/review/code.md",
      "arguments": [
        {"name": "code", "required": true},
        {"name": "language", "required": false, "default": "auto-detect"}
      ],
      "score": 0.95
    }
  ],
  "total_found": 3
}
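Because relevance scores are included, the JSON output lends itself to filtering in scripts. A sketch using jq on an abridged sample in the format above:

```shell
# Abridged search output in the documented JSON format
cat > results.json <<'EOF'
{
  "query": "code",
  "results": [
    {"id": "code-review", "source": "builtin", "score": 0.95},
    {"id": "debug-code", "source": "user", "score": 0.61}
  ],
  "total_found": 2
}
EOF

# Keep only results scoring 0.8 or higher
jq -r '.results[] | select(.score >= 0.8) | .id' results.json
```

Here only `code-review` clears the threshold.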
Search Scoring
Results are ranked by relevance using these factors:
- Exact matches score higher than partial matches
- Title matches score higher than description or content matches
- Multiple field matches increase the overall score
- Argument name matches are considered for relevance
Performance
- Search is optimized with an in-memory index
- Fuzzy matching uses efficient algorithms
- Results are cached for repeated queries
- Large prompt collections are handled efficiently
Integration with Other Commands
Search integrates well with other SwissArmyHammer commands:
# Find and test a prompt
PROMPT=$(swissarmyhammer search --json code | jq -r '.results[0].id')
swissarmyhammer test "$PROMPT"
# Export search results
swissarmyhammer search debug --limit 5 | \
grep -o '\w\+-\w\+' | \
xargs swissarmyhammer export
See Also
- test - Test prompts found through search
- export - Export specific prompts
- Search Guide - Advanced search strategies
- Search Architecture - Technical architecture details
- Index Management - Managing search indices
test - Interactive Prompt Testing
The test command allows you to test prompts interactively, providing argument values and seeing the rendered output before using them with AI models.
Synopsis
swissarmyhammer test [OPTIONS] <PROMPT_ID>
Description
Test prompts interactively by providing arguments and viewing the rendered output. This is essential for debugging template issues, validating arguments, and refining prompts before deployment.
Arguments
PROMPT_ID
- The ID of the prompt to test (required)
Options
Argument Specification
--arg KEY=VALUE
- Provide argument values directly (can be used multiple times)
--set KEY=VALUE
- Set liquid template variables (can be used multiple times)
Output Control
--raw
- Show raw template without rendering
--copy
- Copy rendered result to clipboard
--save FILE
- Save rendered result to file
--debug
- Show detailed debug information including variable resolution
Interactive Mode
When no --arg options are provided, the command enters interactive mode:
- Prompt Selection: If prompt ID is ambiguous, presents a fuzzy selector
- Argument Collection: Prompts for each required and optional argument
- Template Rendering: Shows the rendered output
- Actions: Offers to copy to clipboard or save to file
Examples
Interactive Testing
# Test a prompt interactively
swissarmyhammer test code-review
# The command will prompt for arguments:
# ? Enter value for 'code' (required): fn main() { println!("Hello"); }
# ? Enter value for 'language' (optional, default: auto-detect): rust
#
# [Rendered output shows here]
#
# ? What would you like to do?
# > View output
# Copy to clipboard
# Save to file
# Test with different arguments
# Exit
Non-Interactive Testing
# Test with predefined arguments
swissarmyhammer test code-review \
--arg code="fn main() { println!(\"Hello\"); }" \
--arg language="rust"
# Test and copy to clipboard
swissarmyhammer test debug-helper \
--arg error="compiler error" \
--copy
# Test and save output
swissarmyhammer test api-docs \
--arg code="$(cat src/api.rs)" \
--save generated-docs.md
Debug Mode
# Show debug information
swissarmyhammer test template-complex --debug
# Output includes:
# Variables resolved:
# user_input: "example text"
# timestamp: "2024-01-15T10:30:00Z"
#
# Template processing:
# Line 5: Variable 'user_input' resolved to "example text"
# Line 12: Filter 'capitalize' applied
# Line 18: Conditional block evaluated to true
#
# Final output:
# [rendered template]
Raw Template View
# View the raw template without rendering
swissarmyhammer test email-template --raw
# Shows:
# ---
# title: Email Template
# arguments:
# - name: recipient
# required: true
# ---
#
# Dear {{recipient | capitalize}},
#
# {% if urgent %}
# **URGENT:**
# {% endif %}
# {{message}}
Output Format
Default Output
Testing prompt: code-review
Arguments:
code: "fn main() { println!(\"Hello\"); }"
language: "rust" (default: auto-detect)
Rendered Output:
┌──────────────────────────────────────────┐
│ # Code Review                            │
│                                          │
│ Please review the following rust code:   │
│                                          │
│ ```rust                                  │
│ fn main() { println!("Hello"); }         │
│ ```                                      │
│                                          │
│ Focus on:                                │
│ - Code quality and readability           │
│ - Potential bugs or security issues      │
│ - Performance considerations             │
│ - Best practices adherence               │
└──────────────────────────────────────────┘
✓ Template rendered successfully (247 characters)
Debug Output
Testing prompt: code-review (debug mode)
Prompt loaded from: ~/.swissarmyhammer/prompts/review/code.md
Arguments defined: 2 (1 required, 1 optional)
Argument Resolution:
✓ code: "fn main() { println!(\"Hello\"); }" [user provided]
✓ language: "rust" [user provided, overrides default "auto-detect"]
Template Processing:
✓ Line 8: Variable 'language' resolved and capitalized
✓ Line 12-14: Code block with 'code' variable substitution
✓ Line 16-20: Static bullet list rendered
Filters Applied:
- capitalize: "rust" → "Rust"
Rendered Output:
[... same as above ...]
Performance:
- Template parsing: 2ms
- Variable resolution: 1ms
- Rendering: 3ms
- Total: 6ms
Error Handling
The test command provides helpful error messages for common issues:
Missing Arguments
$ swissarmyhammer test code-review
Error: Missing required argument 'code'
Available arguments:
code (required) - The code to review
language (optional) - Programming language (default: auto-detect)
Use --arg KEY=VALUE to provide arguments, or run without --arg for interactive mode.
Template Errors
$ swissarmyhammer test broken-template --arg data="test"
Error: Template rendering failed at line 15
13 | {% for item in items %}
14 | - {{item.name}}
> 15 | - {{item.invalid_field | unknown_filter}}
| ^^^^^^^^^^^^^^
16 | {% endfor %}
Unknown filter: unknown_filter
Available filters: capitalize, lower, upper, truncate, ...
Fix the template and try again.
Integration with Development Workflow
Testing Before Deployment
# Test a prompt before adding to Claude Code
swissarmyhammer test new-prompt --debug
# Validate all prompts in a directory
for prompt in prompts/*.md; do
  swissarmyhammer test "$(basename "${prompt%.md}")" --arg placeholder="test"
done
Clipboard Integration
# Test and copy for immediate use
swissarmyhammer test quick-note \
--arg content="Meeting notes" \
--copy
# Now paste into your editor or Claude Code
Script Integration
#!/bin/bash
# test-and-deploy.sh
PROMPT_ID="$1"
if swissarmyhammer test "$PROMPT_ID" --arg test="validation"; then
  echo "✓ Prompt test passed, deploying..."
  swissarmyhammer export "$PROMPT_ID" --format directory deployment/
else
  echo "✗ Prompt test failed, fix issues before deploying"
  exit 1
fi
Template Variables with --set
The --set parameter allows you to provide additional liquid template variables beyond the prompt's defined arguments. This is useful for:
- Providing metadata like author names, versions, or timestamps
- Overriding default values with template-level variables
- Testing liquid template features without modifying the prompt definition
Variable Precedence
When both --arg and --set provide the same variable name, --set takes precedence. This allows you to override argument values using template variables.
Examples
# Basic usage with template variables
swissarmyhammer test code-review \
--arg code="main.rs" \
--set author="John Doe" \
--set version="1.0"
# Override an argument value with --set
swissarmyhammer test greeting \
--arg name="World" \
--set name="Universe" # This takes precedence, output will use "Universe"
# Use template variables in liquid templates
# If your template contains: {{author | default: "Anonymous"}}
swissarmyhammer test my-prompt --set author="Jane Smith"
# Combine with other features
swissarmyhammer test complex-prompt \
--arg input="data" \
--set debug="true" \
--set timestamp="2024-01-15" \
--debug \
--save output.md
Use in Templates
Template variables set with --set can be used in liquid templates just like arguments:
---
title: Example Prompt
arguments:
- name: content
required: true
---
# {{title | default: "Document"}}
Author: {{author | default: "Unknown"}}
Version: {{version | default: "1.0"}}
{{content}}
Running this with:
swissarmyhammer test example \
--arg content="Main content here" \
--set author="Alice" \
--set title="My Document"
Would produce:
# My Document
Author: Alice
Version: 1.0
Main content here
See Also
- search - Find prompts to test
- Template Variables - Template syntax reference
- Testing Guide - Advanced testing strategies
- Custom Filters - Available template filters
doctor Command
The doctor command performs comprehensive health checks on your SwissArmyHammer installation and configuration. It identifies issues and provides recommendations for optimal operation.
Usage
swissarmyhammer doctor [OPTIONS]
Overview
The doctor command checks:
- Installation integrity and version compatibility
- Configuration file validity
- Prompt directory accessibility and structure
- Prompt file syntax and metadata
- MCP server functionality
- System dependencies and environment
Options
--verbose
- Description: Enable detailed output with additional diagnostic information
- Default: Disabled
- Example: Shows file paths, configuration details, and system information
swissarmyhammer doctor --verbose
--json
- Description: Output results in JSON format for programmatic use
- Default: Human-readable text output
- Example: Useful for scripts and automation
swissarmyhammer doctor --json
--fix
- Description: Automatically fix issues when possible
- Default: Report-only mode
- Example: Creates missing directories, fixes permissions
swissarmyhammer doctor --fix
--check <CATEGORY>
- Description: Run specific check categories only
- Values: installation, config, prompts, mcp, system
- Repeatable: Can specify multiple categories
# Check only prompt-related issues
swissarmyhammer doctor --check prompts
# Check multiple categories
swissarmyhammer doctor --check config --check prompts
Check Categories
Installation Checks
Verifies SwissArmyHammer installation:
Binary Location
- Checks if swissarmyhammer is in PATH
- Verifies executable permissions
- Confirms version compatibility
Dependencies
- Validates system requirements
- Checks for required libraries
- Verifies runtime dependencies
Example Output
✓ SwissArmyHammer binary found: /usr/local/bin/swissarmyhammer
✓ Version: 0.1.0 (latest)
✓ Executable permissions: OK
✓ System dependencies: All present
Configuration Checks
Validates configuration files and settings:
Configuration File
- Checks for valid TOML syntax
- Validates configuration schema
- Identifies deprecated settings
Directory Structure
- Verifies default directories exist
- Checks directory permissions
- Validates custom prompt directories
Environment Variables
- Lists relevant environment variables
- Checks for conflicts or inconsistencies
- Validates variable values
Example Output
✓ Configuration file: ~/.swissarmyhammer/config.toml
✓ Configuration syntax: Valid TOML
✓ Default directories: Created and accessible
⚠ Custom directory not found: /nonexistent/prompts
✓ Environment variables: No conflicts
Prompt Checks
Analyzes prompt files and library structure:
Directory Scanning
- Scans all configured prompt directories
- Counts prompt files by category
- Identifies orphaned or miscategorized files
File Validation
- Validates YAML front matter syntax
- Checks required fields presence
- Verifies argument specifications
Content Analysis
- Validates Liquid template syntax
- Checks for common template errors
- Identifies missing or broken references
Duplicate Detection
- Finds prompts with identical names
- Shows override hierarchy
- Warns about potential conflicts
Example Output
✓ Prompt directories: 3 found, all accessible
✓ Prompt files: 47 total, 45 valid
✗ Invalid prompts: 2 files with errors
- debug-helper.md: Missing required field 'description'
- code-review.md: Invalid YAML syntax on line 8
⚠ Duplicate names: 1 conflict found
- 'help' defined in both builtin and ~/.swissarmyhammer/prompts/
✓ Template syntax: All valid
MCP Checks
Tests Model Context Protocol functionality:
Server Startup
- Attempts to start MCP server
- Tests port binding
- Verifies server responds to requests
Protocol Compliance
- Validates MCP protocol responses
- Checks capability advertisements
- Tests prompt exposure format
Integration Status
- Checks Claude Code configuration
- Tests end-to-end connectivity
- Validates prompt accessibility
Example Output
✓ MCP server startup: Success on port 8080
✓ Protocol compliance: All tests passed
✓ Prompt exposure: 45 prompts available
⚠ Claude Code integration: Not configured
Run: claude mcp add swissarmyhammer swissarmyhammer serve
System Checks
Analyzes system environment and performance:
Operating System
- Identifies OS and version
- Checks compatibility
- Validates system requirements
File System
- Tests file watching capabilities
- Checks disk space availability
- Validates permissions
Performance
- Measures prompt loading time
- Tests file watching responsiveness
- Checks memory usage patterns
Example Output
✓ Operating system: macOS 14.0 (supported)
✓ File system: APFS with file watching support
✓ Disk space: 15.2 GB available
✓ Performance: Prompt loading < 100ms
⚠ Memory usage: High with 1000+ prompts (consider --watch false)
Common Issues and Solutions
Installation Issues
SwissArmyHammer Not Found
✗ SwissArmyHammer binary: Not found in PATH
Solutions:
- Install SwissArmyHammer:
curl -sSL https://install.sh | bash
- Add to PATH:
export PATH="$HOME/.local/bin:$PATH"
- Verify installation:
which swissarmyhammer
Permission Denied
✗ Executable permissions: Permission denied
Solutions:
# Fix permissions
chmod +x $(which swissarmyhammer)
# Or reinstall
curl -sSL https://install.sh | bash
Configuration Issues
Invalid Configuration File
✗ Configuration syntax: Invalid TOML at line 15
Solutions:
- Validate TOML syntax online
- Check for missing quotes or brackets
- Reset to defaults:
swissarmyhammer doctor --fix
Missing Directories
⚠ Prompt directory not accessible: /custom/prompts
Solutions:
# Create missing directory
mkdir -p /custom/prompts
# Fix automatically
swissarmyhammer doctor --fix
Prompt Issues
Invalid YAML Front Matter
✗ Invalid prompts: 3 files with YAML errors
- code-review.md: missing required field 'name'
Solutions:
- Add missing required fields
- Validate YAML syntax
- Use swissarmyhammer test <prompt> for detailed errors
Duplicate Prompt Names
⚠ Duplicate names: 'help' defined in multiple locations
Solutions:
- Rename one of the conflicting prompts
- Use different directories for different contexts
- Check prompt override hierarchy
Template Syntax Errors
✗ Template errors: 2 prompts with Liquid syntax issues
- debug.md: Unknown filter 'unknownfilter'
Solutions:
- Fix Liquid template syntax
- Check available filters: see Custom Filters
- Test templates:
swissarmyhammer test <prompt>
MCP Issues
Server Won't Start
✗ MCP server startup: Failed to bind to port 8080
Solutions:
# Try different port
swissarmyhammer serve --port 8081
# Check what's using the port
lsof -i :8080 # macOS/Linux
netstat -an | findstr 8080 # Windows
Claude Code Not Configured
⚠ Claude Code integration: Not configured
Solutions:
# Add to Claude Code
claude mcp add swissarmyhammer swissarmyhammer serve
# Verify configuration
claude mcp list
Performance Issues
Slow Prompt Loading
⚠ Performance: Prompt loading > 1000ms
Solutions:
- Reduce prompt directory size
- Disable file watching:
--watch false
- Use SSDs for prompt storage
- Split large libraries into categories
High Memory Usage
⚠ Memory usage: 2.1 GB with file watching enabled
Solutions:
# Disable file watching
swissarmyhammer serve --watch false
# Limit prompt directories
swissarmyhammer serve --prompts ./essential-prompts
Automated Fixes
With the --fix flag, doctor can automatically resolve:
Directory Issues
- Creates missing prompt directories
- Sets appropriate permissions
- Creates default configuration file
Configuration Issues
- Repairs malformed TOML files
- Sets missing default values
- Removes deprecated settings
Permission Issues
- Fixes file and directory permissions
- Makes binaries executable
- Sets appropriate ownership
Example Auto-Fix
swissarmyhammer doctor --fix
# Output:
Fixed: Created missing directory ~/.swissarmyhammer/prompts
Fixed: Set executable permission on swissarmyhammer binary
Fixed: Created default configuration file
Warning: Could not fix invalid YAML in code-review.md (manual intervention required)
Output Formats
Human-Readable (Default)
SwissArmyHammer Doctor Report
============================
Installation Checks:
✓ Binary found and executable
✓ Version 0.1.0 (latest)
✓ Dependencies satisfied
Configuration Checks:
✓ Configuration file valid
⚠ Custom directory not found: /tmp/prompts
Prompt Checks:
✓ 45 prompts loaded successfully
✗ 2 prompts with errors (see details below)
MCP Checks:
✓ Server starts successfully
✓ Protocol compliance verified
System Checks:
✓ OS compatibility confirmed
⚠ High memory usage detected
Summary: 3 warnings, 1 error found
JSON Format
{
  "timestamp": "2024-03-20T10:30:00Z",
  "version": "0.1.0",
  "checks": {
    "installation": {
      "status": "passed",
      "details": [
        {
          "check": "binary_found",
          "status": "passed",
          "message": "Binary found at /usr/local/bin/swissarmyhammer"
        }
      ]
    },
    "configuration": {
      "status": "warning",
      "details": [
        {
          "check": "custom_directory",
          "status": "warning",
          "message": "Directory not found: /tmp/prompts",
          "fixable": true
        }
      ]
    }
  },
  "summary": {
    "total_checks": 15,
    "passed": 12,
    "warnings": 2,
    "errors": 1
  }
}
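The fixable flag on each detail record is useful for automation: you can list exactly what doctor --fix would attempt to repair. A sketch using jq on an abridged report in the format above:

```shell
# Abridged doctor report in the documented JSON format
cat > health-report.json <<'EOF'
{
  "checks": {
    "configuration": {
      "status": "warning",
      "details": [
        {"check": "custom_directory", "status": "warning",
         "message": "Directory not found: /tmp/prompts", "fixable": true}
      ]
    }
  },
  "summary": {"total_checks": 15, "passed": 12, "warnings": 2, "errors": 1}
}
EOF

# List messages for findings that doctor could repair with --fix
jq -r '.checks[].details[] | select(.fixable == true) | .message' health-report.json
```

This prints the one fixable finding, the missing /tmp/prompts directory.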
Integration with CI/CD
Use doctor in automated workflows:
GitHub Actions
- name: Check SwissArmyHammer Health
  run: |
    swissarmyhammer doctor --json > health-report.json
    if [ $(jq '.summary.errors' health-report.json) -gt 0 ]; then
      echo "Health check failed"
      exit 1
    fi
Pre-commit Hook
#!/bin/bash
# .git/hooks/pre-commit
swissarmyhammer doctor --check prompts
if [ $? -ne 0 ]; then
  echo "Prompt validation failed. Fix issues before committing."
  exit 1
fi
Development Script
#!/bin/bash
# dev-setup.sh
echo "Setting up development environment..."
swissarmyhammer doctor --fix --verbose
echo "Health check complete. Run 'swissarmyhammer serve' to start."
Best Practices
Regular Health Checks
# Weekly health check
swissarmyhammer doctor --verbose
# Before important deployments
swissarmyhammer doctor --check mcp --check prompts
Monitoring Integration
# Check and alert on issues
errors=$(swissarmyhammer doctor --json | jq -r '.summary.errors')
if [ "$errors" -gt 0 ]; then
  echo "Alert: SwissArmyHammer errors detected"
fi
Development Workflow
# After making prompt changes
swissarmyhammer doctor --check prompts --fix
# Before committing
swissarmyhammer doctor --check prompts
Troubleshooting Doctor Issues
Doctor Command Not Found
# Verify installation
which swissarmyhammer
# Reinstall if needed
curl -sSL https://install.sh | bash
Doctor Hangs or Crashes
# Run with timeout
timeout 30s swissarmyhammer doctor --verbose
# Check specific categories
swissarmyhammer doctor --check installation
False Positives
# Skip problematic checks
swissarmyhammer doctor --check config --check prompts
# Use verbose mode for details
swissarmyhammer doctor --verbose
Next Steps
- Fix any issues identified by doctor
- Set up regular health monitoring
- Configure automated fixes where appropriate
- See Troubleshooting for detailed problem resolution
- Check Configuration for advanced settings
completion
The completion command generates shell completion scripts for SwissArmyHammer, enabling tab completion for commands, options, and arguments in your shell.
Usage
swissarmyhammer completion <SHELL>
Arguments
<SHELL>
- The shell to generate completions for (bash, zsh, fish, powershell, elvish)
Supported Shells
Bash
Generate and install Bash completions:
# Generate completion script
swissarmyhammer completion bash > swissarmyhammer.bash
# Install for current user
mkdir -p ~/.local/share/bash-completion/completions
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Or install system-wide (requires sudo)
swissarmyhammer completion bash | sudo tee /usr/share/bash-completion/completions/swissarmyhammer > /dev/null
# Source in current session
source ~/.local/share/bash-completion/completions/swissarmyhammer
Add to ~/.bashrc for permanent installation:
# Add SwissArmyHammer completions
if [ -f ~/.local/share/bash-completion/completions/swissarmyhammer ]; then
  source ~/.local/share/bash-completion/completions/swissarmyhammer
fi
Zsh
Generate and install Zsh completions:
# Generate completion script
swissarmyhammer completion zsh > _swissarmyhammer
# Install to Zsh completions directory
# First, add custom completion directory to fpath in ~/.zshrc:
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
# Create directory and install
mkdir -p ~/.zsh/completions
swissarmyhammer completion zsh > ~/.zsh/completions/_swissarmyhammer
# Reload completions
autoload -U compinit && compinit
For Oh My Zsh users:
# Install to Oh My Zsh custom plugins
mkdir -p ~/.oh-my-zsh/custom/plugins/swissarmyhammer
swissarmyhammer completion zsh > ~/.oh-my-zsh/custom/plugins/swissarmyhammer/_swissarmyhammer
Fish
Generate and install Fish completions:
# Generate and install in one command
swissarmyhammer completion fish > ~/.config/fish/completions/swissarmyhammer.fish
# Completions are automatically loaded in new shells
PowerShell
Generate and install PowerShell completions:
# Generate completion script
swissarmyhammer completion powershell > SwissArmyHammer.ps1
# Add to PowerShell profile
Add-Content $PROFILE "`n. $(pwd)\SwissArmyHammer.ps1"
# Or install to modules directory
$modulePath = "$env:USERPROFILE\Documents\PowerShell\Modules\SwissArmyHammer"
New-Item -ItemType Directory -Force -Path $modulePath
swissarmyhammer completion powershell > "$modulePath\SwissArmyHammer.psm1"
Elvish
Generate and install Elvish completions:
# Generate and install
mkdir -p ~/.elvish/lib
swissarmyhammer completion elvish > ~/.elvish/lib/swissarmyhammer.elv
# Add to rc.elv
echo "use swissarmyhammer" >> ~/.elvish/rc.elv
What Gets Completed
The completion system provides intelligent suggestions for:
Commands
swissarmyhammer <TAB>
# Suggests: serve, list, doctor, export, import, completion, config
Command Options
swissarmyhammer serve --<TAB>
# Suggests: --port, --host, --debug, --watch, --prompts, etc.
Prompt Names
swissarmyhammer get <TAB>
# Suggests available prompt names from your library
File Paths
swissarmyhammer import <TAB>
# Suggests .tar.gz files in current directory
swissarmyhammer export output<TAB>
# Suggests: output.tar.gz
Configuration Keys
swissarmyhammer config set <TAB>
# Suggests: server.port, server.host, prompts.directories, etc.
Advanced Features
Dynamic Completions
Some completions are generated dynamically based on context:
# Completes with actual prompt names from your library
swissarmyhammer get code-<TAB>
# Suggests: code-review, code-documentation, code-optimizer
# Completes with valid categories
swissarmyhammer list --category <TAB>
# Suggests: development, writing, data, productivity
Nested Completions
Completions work with nested commands:
swissarmyhammer config <TAB>
# Suggests: get, set, list, validate
swissarmyhammer config set server.<TAB>
# Suggests: server.port, server.host, server.debug
Alias Support
If you create shell aliases, completions still work:
# In .bashrc or .zshrc
alias sah='swissarmyhammer'
# Completions work with alias
sah serve --<TAB>
Troubleshooting
Completions Not Working
1. Check Installation Location
# Bash
ls ~/.local/share/bash-completion/completions/
# Zsh
echo $fpath
ls ~/.zsh/completions/
# Fish
ls ~/.config/fish/completions/
2. Reload Shell Configuration
# Bash
source ~/.bashrc
# Zsh
source ~/.zshrc
# Fish
source ~/.config/fish/config.fish
3. Check Completion System
# Bash
complete -p | grep swissarmyhammer
# Zsh
print -l ${(ok)_comps} | grep swissarmyhammer
Slow Completions
If completions are slow:
1. Enable Caching (Zsh)
# Add to ~/.zshrc
zstyle ':completion:*' use-cache on
zstyle ':completion:*' cache-path ~/.zsh/cache
2. Reduce Dynamic Lookups
# Set static prompt directory
export SWISSARMYHAMMER_PROMPTS_DIR=~/.swissarmyhammer/prompts
Missing Completions
If some completions are missing:
# Regenerate completions after updates
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Check SwissArmyHammer version
swissarmyhammer --version
Environment Variables
Completions respect environment variables:
# Complete with custom prompt directories
export SWISSARMYHAMMER_PROMPTS_DIRECTORIES="/opt/prompts,~/my-prompts"
swissarmyhammer list <TAB>
Integration with Tools
fzf Integration
Combine with fzf for fuzzy completion:
# Add to .bashrc/.zshrc
_swissarmyhammer_fzf_complete() {
  swissarmyhammer list --format simple | fzf
}
# Use with Ctrl+T
bind '"\C-t": "$(_swissarmyhammer_fzf_complete)\e\C-e\er"'
IDE Integration
Most IDEs can use shell completions:
VS Code
{
  "terminal.integrated.shellIntegration.enabled": true,
  "terminal.integrated.shellIntegration.suggestEnabled": true
}
JetBrains IDEs
- Terminal automatically sources shell configuration
- Completions work in integrated terminal
Custom Completions
Adding Custom Completions
Create wrapper scripts with additional completions:
#!/bin/bash
# my-swissarmyhammer-completions.bash
# Source original completions
source ~/.local/share/bash-completion/completions/swissarmyhammer
# Add custom completions
_my_custom_prompts() {
COMPREPLY=($(compgen -W "my-prompt-1 my-prompt-2 my-prompt-3" -- ${COMP_WORDS[COMP_CWORD]}))
}
# Override completion for the swissarmyhammer command
# (bash registers completions per command name, not per subcommand)
complete -F _my_custom_prompts swissarmyhammer
Project-Specific Completions
Add project-specific completions:
# .envrc (direnv) or project script
_project_prompts() {
ls ./.swissarmyhammer/prompts/*.md 2>/dev/null | xargs -n1 basename | sed 's/\.md$//'
}
# Export for use in completions
export SWISSARMYHAMMER_PROJECT_PROMPTS=$(_project_prompts)
Best Practices
- Keep Completions Updated
# Regenerate completions after SwissArmyHammer updates
# (bash shown; write to your shell's completion directory)
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
- Test Completions
# Test completion generation
swissarmyhammer completion bash | head -20
- Document Custom Completions
# Add comments in completion files
# Custom completions for project XYZ
# Generated: $(date)
# Version: $(swissarmyhammer --version)
Examples
Complete Workflow
# Install completions
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Use completions
swissarmyhammer li<TAB> # Completes to: list
swissarmyhammer list --for<TAB> # Completes to: --format
swissarmyhammer list --format j<TAB> # Completes to: json
# Get specific prompt
swissarmyhammer get code-r<TAB> # Completes to: code-review
# Export with completion
swissarmyhammer export my-prompts<TAB> # Suggests: my-prompts.tar.gz
Script Integration
#!/bin/bash
# setup-completions.sh
SHELL_NAME=$(basename "$SHELL")
case "$SHELL_NAME" in
bash)
COMPLETION_DIR="$HOME/.local/share/bash-completion/completions"
;;
zsh)
COMPLETION_DIR="$HOME/.zsh/completions"
;;
fish)
COMPLETION_DIR="$HOME/.config/fish/completions"
;;
*)
echo "Unsupported shell: $SHELL_NAME"
exit 1
;;
esac
mkdir -p "$COMPLETION_DIR"
swissarmyhammer completion "$SHELL_NAME" > "$COMPLETION_DIR/swissarmyhammer"
echo "Completions installed to $COMPLETION_DIR"
See Also
- CLI Reference - Complete command documentation
- Configuration - Configuration options
- Getting Started - Initial setup guide
Memoranda CLI Reference
SwissArmyHammer's memoranda system provides command-line tools for managing structured text memos with automatic timestamping, unique identifiers, and full-text search capabilities.
Overview
The memoranda CLI enables you to:
- Create structured memos with titles and content
- List and browse all stored memos with previews
- Search memos by keywords across titles and content
- Retrieve specific memos by their unique identifiers
- Update memo content while preserving metadata
- Delete memos when no longer needed
- Export all memos as formatted context for AI assistants
All memos are stored in ./.swissarmyhammer/memos/ with ULID identifiers for chronological ordering.
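ULIDs begin with a timestamp component, so sorting IDs lexicographically also sorts them chronologically. A quick illustration with two sample IDs:

```shell
# ULIDs encode creation time in their leading characters, so a plain
# lexicographic sort yields chronological order (sample IDs for illustration).
printf '%s\n' 01BRZ3NDEKTSV4RRFFQ69G5FAW 01ARZ3NDEKTSV4RRFFQ69G5FAV | sort
```

The earlier-created ID (01ARZ...) sorts first, which is why `list` can order memos by creation time without reading file metadata.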
Basic Usage
swissarmyhammer memo [SUBCOMMAND] [OPTIONS]
Subcommands
Subcommand | Description |
---|---|
create | Create a new memo with title and content |
list | List all memos with previews |
get | Retrieve a specific memo by ID |
update | Update memo content by ID |
delete | Delete a memo by ID |
search | Search memos by query string |
context | Get all memo context for AI consumption |
create
Creates a new memo with a title and content.
Usage
swissarmyhammer memo create <TITLE> [OPTIONS]
Arguments
<TITLE>
- Brief title or subject for the memo (required)
Options
-c, --content <CONTENT>
- Memo content (optional). If omitted, prompts for interactive input
- Use -c - to read from stdin
- Use -c "content text" for direct input
Examples
Interactive Content Input
# Create memo with interactive content input
swissarmyhammer memo create "Meeting Notes"
# Prompts: Enter memo content, press Ctrl+D when finished
Direct Content Input
# Create memo with inline content
swissarmyhammer memo create "Daily Standup" -c "- Completed user auth\n- Working on API endpoints\n- Next: Database schema"
Stdin Content Input
# Create memo from file or piped content
cat meeting_notes.md | swissarmyhammer memo create "Team Meeting" -c -
# Or from heredoc
swissarmyhammer memo create "Project Notes" -c - << EOF
# Project Planning Session
## Attendees
- Alice, Bob, Charlie
## Decisions
- Use React for frontend
- PostgreSQL for database
- Deploy on AWS
EOF
Markdown Content
swissarmyhammer memo create "Code Review Checklist" -c "# Code Review Checklist
## Security
- [ ] Input validation
- [ ] Authentication checks
- [ ] No hardcoded secrets
## Performance
- [ ] Database queries optimized
- [ ] Caching implemented
- [ ] Memory usage acceptable
## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Edge cases covered"
Output
✅ Created memo: Meeting Notes
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Created: 2024-01-15 14:30:25 UTC
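In scripts it can be useful to capture the new memo's ID from this output. A sketch with the sample output inlined, assuming IDs are uppercase 26-character ULIDs:

```shell
# Capture the ULID printed by `memo create` for later use. The sample output
# is inlined here; in a real script you would capture the command itself:
#   OUTPUT=$(swissarmyhammer memo create "Meeting Notes" -c "...")
OUTPUT='✅ Created memo: Meeting Notes
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Created: 2024-01-15 14:30:25 UTC'
MEMO_ID=$(printf '%s\n' "$OUTPUT" | grep -oE '01[A-Z0-9]{24}' | head -n1)
echo "$MEMO_ID"
```

The pattern matches "01" plus 24 more base32 characters (26 total); `head -n1` guards against any later accidental match.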
list
Lists all available memos with metadata and content previews.
Usage
swissarmyhammer memo list
Options
None.
Examples
# List all memos
swissarmyhammer memo list
Output
Found 3 memos

01ARZ3NDEKTSV4RRFFQ69G5FAV
Meeting Notes
2024-01-15 14:30:25 UTC
# Team Meeting 2024-01-15\n\n- Discussed Q1 roadmap\n- Assigned tasks for sprint...

01BRZ3NDEKTSV4RRFFQ69G5FAW
Project Ideas
2024-01-14 09:15:42 UTC
## New Features\n\n1. Dark mode toggle\n2. Export functionality\n3. Advanced search...

01CRZ3NDEKTSV4RRFFQ69G5FAX
Code Snippets
2024-01-13 16:22:18 UTC
```rust\nfn fibonacci(n: u32) -> u32 {\n match n {\n 0 => 0,\n 1 => 1...
Empty Collection:
ℹ️ No memos found
Content Preview
- Shows first 100 characters of memo content
- Newlines replaced with spaces for compact display
- Truncated content indicated with "…"
- Memos sorted by creation time (newest first)
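The preview rules above can be approximated in shell; this is a sketch of the documented behavior, not the tool's actual implementation:

```shell
# Approximate the list preview: collapse newlines to spaces and keep the
# first 100 characters. (The real tool also appends "…" when it truncates.)
preview() { printf '%s' "$1" | tr '\n' ' ' | cut -c1-100; }
preview '# Team Meeting 2024-01-15
- Discussed Q1 roadmap
- Assigned tasks for sprint'
```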
get
Retrieves and displays a specific memo by its ULID identifier.
Usage
swissarmyhammer memo get <ID>
Arguments
<ID>
- ULID identifier of the memo to retrieve (required)
Examples
# Get memo by ID
swissarmyhammer memo get 01ARZ3NDEKTSV4RRFFQ69G5FAV
Output
Memo: Meeting Notes
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Created: 2024-01-15 14:30:25 UTC
Updated: 2024-01-15 14:30:25 UTC
Content:
# Team Meeting 2024-01-15
## Attendees
- Alice (PM)
- Bob (Engineering)
- Charlie (Design)
## Agenda Items
1. Q1 Roadmap Review
2. Sprint Planning
3. Technical Debt Discussion
## Action Items
- [ ] Alice: Schedule follow-up with stakeholders
- [ ] Bob: Research database migration options
- [ ] Charlie: Create wireframes for new features
## Next Meeting
- Date: 2024-01-22
- Time: 2:00 PM UTC
- Location: Conference Room B
Error Handling
Invalid ULID format:
swissarmyhammer memo get invalid-id
# Error: Invalid memo ID format: 'invalid-id'. Expected a valid ULID...
Memo not found:
swissarmyhammer memo get 01XYZ3NDEKTSV4RRFFQ69G5FAV
# Error: Memo not found with ID: 01XYZ3NDEKTSV4RRFFQ69G5FAV
update
Updates the content of an existing memo while preserving its title and metadata.
Usage
swissarmyhammer memo update <ID> [OPTIONS]
Arguments
<ID>
- ULID identifier of the memo to update (required)
Options
-c, --content <CONTENT>
- New content for the memo (optional). If omitted, prompts for interactive input
- Use -c - to read from stdin
- Use -c "content text" for direct input
Examples
Interactive Update
# Update memo with interactive content input
swissarmyhammer memo update 01ARZ3NDEKTSV4RRFFQ69G5FAV
# Prompts: Enter memo content, press Ctrl+D when finished
Direct Update
# Update memo with inline content
swissarmyhammer memo update 01ARZ3NDEKTSV4RRFFQ69G5FAV -c "# Updated Meeting Notes
Added action items and next steps from today's discussion."
Stdin Update
# Update memo from file
cat updated_notes.md | swissarmyhammer memo update 01ARZ3NDEKTSV4RRFFQ69G5FAV -c -
Output
✅ Updated memo: Meeting Notes
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Updated: 2024-01-15 16:45:30 UTC
Content:
# Updated Meeting Notes
Added action items and next steps from today's discussion.
Behavior
- Title preserved - Only content is updated, title remains unchanged
- Updated timestamp - updated_at field refreshed to current time
- Created timestamp - created_at field remains unchanged
- ID preserved - ULID identifier never changes
delete
Permanently deletes a memo from storage.
Usage
swissarmyhammer memo delete <ID>
Arguments
<ID>
- ULID identifier of the memo to delete (required)
Examples
# Delete memo by ID
swissarmyhammer memo delete 01ARZ3NDEKTSV4RRFFQ69G5FAV
Output
🗑️ Deleted memo: Meeting Notes
ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV
Important Notes
⚠️ Warning: Deletion is permanent and cannot be undone. The memo file is immediately removed from the filesystem.
Best Practices:
- Verify the memo ID before deletion using the get command
- Consider archiving important memos instead of deleting
- Use search to ensure you're deleting the correct memo
search
Searches memos by query string across titles and content with advanced highlighting and relevance scoring.
Usage
swissarmyhammer memo search <QUERY>
Arguments
<QUERY>
- Search query to match against memo titles and content (required)
Examples
Basic Search
# Search for memos containing "meeting"
swissarmyhammer memo search "meeting"
Multi-word Search
# Search for multiple keywords
swissarmyhammer memo search "project roadmap timeline"
Empty Query
# Empty query returns all memos
swissarmyhammer memo search ""
Output
Successful Search
Found 2 memos matching 'meeting'

01ARZ3NDEKTSV4RRFFQ69G5FAV
Meeting Notes
2024-01-15 14:30:25 UTC
95.5% relevance
# Team **Meeting** 2024-01-15\n\n- Discussed Q1 roadmap\n- Assigned tasks for sprint\n- Next **meeting**: 2024-01-22...

01DRZ3NDEKTSV4RRFFQ69G5FAY
Sprint Planning
2024-01-10 11:20:15 UTC
78.2% relevance
Planning **meeting** for next sprint. Need to review backlog and assign story points...
No Results
ℹ️ No memos found matching 'nonexistent'
Search Features
- Case-insensitive - Search matches regardless of case
- Partial matching - Finds substrings within words
- Title and content - Searches across both memo titles and content
- Relevance scoring - Results sorted by relevance (0-100%)
- Highlighted matches - Query terms highlighted in results
- Advanced search engine - Uses advanced search when available, falls back to basic search
- Result previews - Shows 150 characters of content around matches
Search Tips
- Use specific keywords for better results
- Combine related terms to narrow results
- Search for unique identifiers or project names
- Use common words to find broader categories
context
Exports all memo content formatted for AI assistant consumption.
Usage
swissarmyhammer memo context
Options
None.
Examples
# Get all memo context
swissarmyhammer memo context
Output
All memo context (2 memos)
## Meeting Notes (ID: 01ARZ3NDEKTSV4RRFFQ69G5FAV)
Created: 2024-01-15 14:30:25 UTC
Updated: 2024-01-15 16:45:30 UTC
# Team Meeting 2024-01-15
## Attendees
- Alice (PM)
- Bob (Engineering)
- Charlie (Design)
## Action Items
- [ ] Schedule follow-up meeting
- [ ] Research database options
- [ ] Create wireframes
===
## Project Ideas (ID: 01BRZ3NDEKTSV4RRFFQ69G5FAW)
Created: 2024-01-14 09:15:42 UTC
Updated: 2024-01-14 09:15:42 UTC
## New Features
1. Dark mode toggle
2. Export functionality
3. Advanced search with filters
4. Collaborative editing
===
Output Format
- Sorted by updated time - Most recently updated memos first
- Full content - Complete memo content without truncation
- Metadata included - Creation and update timestamps
- Clear separators - === between memos for easy parsing
- AI-friendly format - Optimized for AI assistant consumption
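The `===` separator lines make the export easy to process mechanically; for example, counting exported memos (sample export inlined in place of the real command's output):

```shell
# Count memos in a context export by counting `===` separator lines.
# Sample export inlined; in practice you would pipe the real command:
#   swissarmyhammer memo context | grep -c '^===$'
printf '%s\n' '## Memo A' 'body A' '===' '## Memo B' 'body B' '===' \
  | grep -c '^===$'
```

The same anchor (`/^===$/`) works with awk or csplit to break the export into per-memo chunks.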
Use Cases
- AI Context Loading - Provide memo knowledge base to AI assistants
- Backup Creation - Export all memos for archival
- Content Analysis - Analyze memo collection patterns
- Documentation Generation - Convert memos to documentation format
Common Workflows
Daily Note-Taking
# Morning: Create daily journal
swissarmyhammer memo create "Daily Journal $(date +%Y-%m-%d)" -c "# Goals for today:
- Review PR #123
- Complete API documentation
- Team standup at 10am"
# Throughout the day: Search for related memos
swissarmyhammer memo search "API documentation"
# Evening: Update with accomplishments
swissarmyhammer memo update $MEMO_ID -c "# Daily Journal $(date +%Y-%m-%d)
## Completed:
- ✅ Reviewed PR #123 - approved with minor comments
- ✅ Completed API documentation for auth endpoints
- ✅ Team standup - discussed upcoming sprint
## Tomorrow:
- Implement user profile endpoints
- Review database migration scripts"
Meeting Notes Management
# Before meeting: Create template
swissarmyhammer memo create "Weekly Team Meeting $(date +%Y-%m-%d)" -c "# Weekly Team Meeting
## Attendees
-
## Agenda
1. Sprint progress review
2. Blockers discussion
3. Next week planning
## Action Items
- [ ]
## Next Meeting
- Date:
- Topics: "
# During meeting: Get memo ID for quick updates
MEMO_ID=$(swissarmyhammer memo list | grep "Weekly Team Meeting" | grep -o '01[A-Z0-9]\{24\}' | head -1)
# After meeting: Update with notes
swissarmyhammer memo update $MEMO_ID -c "$(cat meeting_notes_final.md)"
Project Documentation
# Create project overview
swissarmyhammer memo create "Project Alpha Overview" -c "# Project Alpha
## Objectives
- Improve user onboarding flow
- Reduce support tickets by 30%
- Increase user retention
## Tech Stack
- Frontend: React + TypeScript
- Backend: Node.js + Express
- Database: PostgreSQL
- Deployment: Docker + AWS
## Team
- PM: Alice
- Engineering: Bob, Carol
- Design: David
- QA: Eve"
# Document decisions
swissarmyhammer memo create "Architecture Decisions" -c "# Architecture Decision Records
## ADR-001: Database Choice
**Date**: $(date +%Y-%m-%d)
**Status**: Accepted
**Decision**: Use PostgreSQL for primary database
**Rationale**: Better JSON support, mature ecosystem"
# Search related documentation
swissarmyhammer memo search "project alpha database"
Code Snippet Collection
# Save useful code snippets
swissarmyhammer memo create "Rust Error Handling Patterns" -c '# Rust Error Handling
## Custom Error Types
```rust
#[derive(Debug)]
pub enum MyError {
Io(std::io::Error),
Parse(String),
NotFound,
}
impl std::fmt::Display for MyError {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
match self {
MyError::Io(err) => write!(f, "IO error: {}", err),
MyError::Parse(msg) => write!(f, "Parse error: {}", msg),
MyError::NotFound => write!(f, "Not found"),
}
}
}
impl std::error::Error for MyError {}
```
## Result Helpers
```rust
type Result<T> = std::result::Result<T, MyError>;
fn might_fail() -> Result<String> {
Ok("success".to_string())
}
```'
# Search for code examples
swissarmyhammer memo search "rust error"
swissarmyhammer memo search "```rust"
Storage and File Management
Storage Location
Memos are stored in:
./.swissarmyhammer/memos/
βββ Meeting Notes.md
βββ Project Planning.md
βββ Technical Documentation.md
File Format
Each memo is stored as a markdown file with the memo title as the filename:
Example: Meeting Notes.md
# Team Meeting 2024-01-15
- Discussed roadmap and next quarter priorities
- Reviewed architectural decisions
- Action items assigned to team members
## Key Decisions
- Migrate to new storage format
- Implement user feedback system
File Properties:
- Filename: Sanitized version of memo title (spaces preserved, special characters removed)
- Content: Pure markdown content without metadata wrapper
- Timestamps: Derived from filesystem metadata (creation and modification times)
- ID: Based on filename for human-readable organization
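A rough shell sketch of that filename sanitization; the tool's exact rules may differ:

```shell
# Illustrative title-to-filename sanitization: keep letters, digits, spaces,
# and a few safe punctuation characters; drop everything else.
# (An approximation of the documented behavior, not the tool's actual code.)
sanitize() { printf '%s' "$1" | tr -cd 'A-Za-z0-9 ._-'; }
sanitize 'Meeting Notes: Q1/Q2?'
```

Here the colon, slash, and question mark are stripped while spaces survive, matching the "spaces preserved, special characters removed" rule above.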
Backup and Restore
Backup Memos
# Copy memo directory
cp -r .swissarmyhammer/memos/ backup/memos-$(date +%Y%m%d)/
# Or create archive
tar -czf memos-backup-$(date +%Y%m%d).tar.gz .swissarmyhammer/memos/
# Export as context for external storage
swissarmyhammer memo context > all-memos-$(date +%Y%m%d).md
Restore Memos
# Restore from backup directory
cp -r backup/memos-20240115/ .swissarmyhammer/memos/
# Or extract from archive
tar -xzf memos-backup-20240115.tar.gz
Migration Between Projects
# Export from project A
cd /path/to/project-a
swissarmyhammer memo context > project-a-memos.md
# In project B, manually recreate important memos
cd /path/to/project-b
swissarmyhammer memo create "Imported from Project A" -c - < project-a-memos.md
Performance Considerations
Response Times
Command | Typical Time | Notes |
---|---|---|
create | 10-50ms | File I/O dependent |
list | 50-200ms | Scales with memo count |
get | 5-20ms | Single file read |
update | 15-60ms | File write + timestamp |
delete | 10-30ms | File deletion |
search | 100-500ms | Content size dependent |
context | 200ms-2s | All memos loaded |
Optimization Tips
- Large Collections - Consider archiving old memos
- Search Performance - Use specific keywords rather than broad terms
- Context Export - Run sparingly for large memo collections
- File System - Use SSD storage for better I/O performance
Limitations
- No pagination - list and context load all memos
- Memory usage - Large memos consume proportional memory
- Search complexity - Basic string matching only
- Concurrent access - No locking mechanism for simultaneous operations
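Because search is basic string matching over plain markdown files, `grep` over the storage directory gives a comparable, scriptable result (demonstrated here on a temporary directory standing in for a real memo store):

```shell
# grep equivalent of the built-in basic search: case-insensitive, lists files
# whose content matches. Demo data in a temp dir stands in for real memos;
# against a real store you would grep .swissarmyhammer/memos/ instead.
dir=$(mktemp -d)
printf '# Team Meeting\nroadmap notes\n' > "$dir/Meeting Notes.md"
printf 'shopping list\n' > "$dir/Errands.md"
grep -ril 'meeting' "$dir"    # lists only the matching file
rm -r "$dir"
```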
Troubleshooting
Common Issues
Permission Denied
# Error: Permission denied (os error 13)
# Solution: Check directory permissions
ls -la .swissarmyhammer/
chmod 755 .swissarmyhammer/
chmod 644 .swissarmyhammer/memos/*
Invalid ULID Format
# Error: Invalid memo ID format
# Solution: Use complete 26-character ULID
swissarmyhammer memo get 01ARZ3NDEKTSV4RRFFQ69G5FAV  # ✅ Correct
swissarmyhammer memo get 01ARZ3                      # ❌ Too short
Storage Directory Missing
# Error: No such file or directory
# Solution: Directory created automatically, but check parent permissions
mkdir -p .swissarmyhammer/memos
Large Content Issues
# For very large content, use file input instead of command line
# Command line arguments have length limits
cat large_document.md | swissarmyhammer memo create "Large Doc" -c -
Debug Mode
Enable debug logging to troubleshoot issues:
RUST_LOG=debug swissarmyhammer memo list
Integration with Other Tools
Shell Scripts
#!/bin/bash
# daily-standup.sh - Create daily standup note
DATE=$(date +%Y-%m-%d)
TITLE="Daily Standup $DATE"
swissarmyhammer memo create "$TITLE" -c "# Daily Standup $DATE
## Yesterday
-
## Today
-
## Blockers
-
## Notes
- "
echo "Created daily standup note for $DATE"
Git Hooks
#!/bin/bash
# post-commit hook - Create memo for significant commits
COMMIT_MSG=$(git log -1 --pretty=%B)
COMMIT_HASH=$(git rev-parse --short HEAD)
if [[ "$COMMIT_MSG" == *"BREAKING CHANGE"* || "$COMMIT_MSG" == *"feat:"* ]]; then
swissarmyhammer memo create "Git Commit $COMMIT_HASH" -c "# Significant Commit
**Hash**: $COMMIT_HASH
**Message**: $COMMIT_MSG
**Changes**:
$(git show --stat $COMMIT_HASH)
**Files Modified**:
$(git show --name-only $COMMIT_HASH)"
fi
Editor Integration
Vim/Neovim
" Add to .vimrc or init.vim
command! MemoCreate :!swissarmyhammer memo create <q-args>
command! MemoList :!swissarmyhammer memo list
command! MemoSearch :!swissarmyhammer memo search <q-args>
" Quick memo creation from current buffer
nnoremap <leader>mc :MemoCreate
nnoremap <leader>ml :MemoList<CR>
nnoremap <leader>ms :MemoSearch
VS Code
Create tasks in .vscode/tasks.json:
{
"version": "2.0.0",
"tasks": [
{
"label": "Create Memo",
"type": "shell",
"command": "swissarmyhammer",
"args": ["memo", "create", "${input:memoTitle}"],
"group": "build",
"presentation": {
"echo": true,
"reveal": "always"
}
}
],
"inputs": [
{
"id": "memoTitle",
"description": "Memo title",
"default": "Quick Note",
"type": "promptString"
}
]
}
See Also
- MCP Memoranda Tools - MCP integration for AI assistants
- Getting Started Guide - Step-by-step tutorial
- Advanced Usage Examples - Complex workflows
- API Reference - Programmatic usage
- Troubleshooting - Common issues and solutions
Rust Library Guide
SwissArmyHammer is available as a Rust library (swissarmyhammer) that you can integrate into your own applications. This guide covers installation, basic usage, and integration patterns.
Installation
Add SwissArmyHammer to your Cargo.toml:
[dependencies]
swissarmyhammer = { git = "https://github.com/wballard/swissarmyhammer" }
Quick Start
Basic Prompt Library
use swissarmyhammer::{PromptLibrary, ArgumentSpec, Prompt};
use std::collections::HashMap;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a new prompt library
let mut library = PromptLibrary::new();
// Create a simple prompt
let prompt = Prompt::new("greet", "Hello {{name}}!")
.with_description("A greeting prompt")
.add_argument(ArgumentSpec {
name: "name".to_string(),
description: Some("Name to greet".to_string()),
required: true,
default: None,
type_hint: Some("string".to_string()),
});
// Add prompt to library
library.add(prompt)?;
// Add prompts from a directory
if std::path::Path::new("./.swissarmyhammer/prompts").exists() {
let count = library.add_directory("./.swissarmyhammer/prompts")?;
println!("Loaded {} prompts from directory", count);
}
// List available prompts
let prompts = library.list()?;
for prompt in &prompts {
println!("Available prompt: {}", prompt.name);
}
// Get a specific prompt
let prompt = library.get("greet")?;
println!("Name: {}", prompt.name);
if let Some(description) = &prompt.description {
println!("Description: {}", description);
}
// Prepare arguments
let mut args = HashMap::new();
args.insert("name".to_string(), "World".to_string());
// Render the prompt
let rendered = prompt.render(&args)?;
println!("Rendered prompt:\n{}", rendered);
Ok(())
}
Custom Prompt Creation
use swissarmyhammer::{Prompt, ArgumentSpec};
use std::collections::HashMap;
fn create_custom_prompt() -> Result<Prompt, Box<dyn std::error::Error>> {
let template = r#"
Code Review: {{ focus | capitalize }}
Please review this code:
{{ code }}
{% if focus == "security" %}
Focus specifically on security vulnerabilities and best practices.
{% elsif focus == "performance" %}
Focus on performance optimizations and efficiency.
{% else %}
Perform a general code review covering style, bugs, and maintainability.
{% endif %}
"#;
let prompt = Prompt::new("custom-code-review", template)
.with_description("A custom code review prompt")
.with_category("development")
.add_argument(ArgumentSpec {
name: "code".to_string(),
description: Some("Code to review".to_string()),
required: true,
default: None,
type_hint: Some("string".to_string()),
})
.add_argument(ArgumentSpec {
name: "focus".to_string(),
description: Some("Review focus area".to_string()),
required: false,
default: Some("general".to_string()),
type_hint: Some("string".to_string()),
});
Ok(prompt)
}
// Test the custom prompt
fn main() -> Result<(), Box<dyn std::error::Error>> {
let prompt = create_custom_prompt()?;
let mut args = HashMap::new();
args.insert("code".to_string(), "fn main() { println!(\"Hello\"); }".to_string());
args.insert("focus".to_string(), "security".to_string());
let rendered = prompt.render(&args)?;
println!("{}", rendered);
Ok(())
}
Core Components
PromptLibrary
The main interface for managing collections of prompts.
use swissarmyhammer::PromptLibrary;
// Create a new library
let mut library = PromptLibrary::new();
// Add prompts from various sources
library.add_directory("./.swissarmyhammer/prompts")?;
// Query prompts
let prompts = library.list()?;
let prompt = library.get("prompt-name")?;
// Search prompts
let results = library.search("code review")?;
// Render a prompt directly
let mut args = std::collections::HashMap::new();
args.insert("key".to_string(), "value".to_string());
let rendered = library.render_prompt("prompt-name", &args)?;
Prompt
Individual prompt with metadata and template.
use swissarmyhammer::{Prompt, PromptLoader};
use std::collections::HashMap;
// Load from file using PromptLoader
let loader = PromptLoader::new();
let prompt = loader.load_file("./.swissarmyhammer/prompts/review.md")?;
// Access metadata
println!("Name: {}", prompt.name);
if let Some(description) = &prompt.description {
println!("Description: {}", description);
}
for arg in &prompt.arguments {
println!("Argument: {} (required: {})", arg.name, arg.required);
}
// Render with arguments
let mut args = HashMap::new();
args.insert("code".to_string(), "example code".to_string());
let rendered = prompt.render(&args)?;
Template Engine
Template processing with Liquid syntax.
use swissarmyhammer::template::Template;
use std::collections::HashMap;
let template = Template::new("Hello {{ name | capitalize }}! Today is {{ date }}.")?;
let mut variables = HashMap::new();
variables.insert("name".to_string(), "alice".to_string());
variables.insert("date".to_string(), "2024-01-15".to_string());
let result = template.render(&variables)?;
println!("{}", result); // "Hello Alice! Today is 2024-01-15."
Advanced Usage
Loading Prompts from String
use swissarmyhammer::PromptLoader;
let loader = PromptLoader::new();
let content = r#"---
title: My Prompt
description: A custom prompt
arguments:
- name: input
description: The input text
required: true
---
Process this input: {{ input }}
"#;
let prompt = loader.load_from_string("my-prompt", content)?;
Search and Filter
use swissarmyhammer::PromptLibrary;
let mut library = PromptLibrary::new();
library.add_directory("./.swissarmyhammer/prompts")?;
// Search for prompts
let results = library.search("code review")?;
for prompt in results {
println!("Found: {} - {}", prompt.name,
prompt.description.unwrap_or_default());
}
Integration Examples
Simple CLI Tool
use clap::{Arg, Command};
use swissarmyhammer::PromptLibrary;
use std::collections::HashMap;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let matches = Command::new("my-prompt-tool")
.arg(Arg::new("prompt")
.help("Prompt ID to render")
.required(true)
.index(1))
.arg(Arg::new("args")
.help("Template arguments as key=value pairs")
.action(clap::ArgAction::Append)
.short('a')
.long("arg"))
.get_matches();
let mut library = PromptLibrary::new();
library.add_directory("./.swissarmyhammer/prompts")?;
let prompt_id = matches.get_one::<String>("prompt")
.expect("Prompt ID is required");
let prompt = library.get(prompt_id)?;
let mut args = HashMap::new();
if let Some(arg_values) = matches.get_many::<String>("args") {
for arg in arg_values {
if let Some((key, value)) = arg.split_once('=') {
args.insert(key.to_string(), value.to_string());
}
}
}
let rendered = prompt.render(&args)?;
println!("{}", rendered);
Ok(())
}
Configuration Management
use serde::{Deserialize, Serialize};
use swissarmyhammer::PromptLibrary;
use std::collections::HashMap;
#[derive(Serialize, Deserialize)]
struct AppConfig {
prompt_directories: Vec<String>,
default_arguments: HashMap<String, String>,
}
impl Default for AppConfig {
fn default() -> Self {
Self {
prompt_directories: vec!["./.swissarmyhammer/prompts".to_string()],
default_arguments: HashMap::new(),
}
}
}
fn setup_library(config: &AppConfig) -> Result<PromptLibrary, Box<dyn std::error::Error>> {
let mut library = PromptLibrary::new();
for dir in &config.prompt_directories {
library.add_directory(dir)?;
}
Ok(library)
}
Error Handling
SwissArmyHammer uses comprehensive error types:
use swissarmyhammer::{SwissArmyHammerError, PromptLibrary};
let library = PromptLibrary::new();
match library.get("nonexistent") {
Ok(prompt) => {
// Handle success
println!("Found prompt: {}", prompt.name);
}
Err(SwissArmyHammerError::PromptNotFound(id)) => {
eprintln!("Prompt '{}' not found", id);
}
Err(SwissArmyHammerError::TemplateError(msg)) => {
eprintln!("Template error: {}", msg);
}
Err(SwissArmyHammerError::IoError(io_err)) => {
eprintln!("I/O error: {}", io_err);
}
Err(e) => {
eprintln!("Unexpected error: {}", e);
}
}
Working with Workflow System
use swissarmyhammer::{Workflow, WorkflowRun, State};
// The workflow system is available but requires more complex setup
// Refer to the workflow module documentation for detailed examples
Working with Issues and Memoranda
use swissarmyhammer::{
CreateMemoRequest, Memo,
issues::filesystem::FileSystemIssueStorage
};
// These modules provide issue tracking and memo functionality
// See the respective module documentation for usage examples
Best Practices
Memory Usage
- Prompt libraries cache parsed templates in memory
- For large collections, consider periodically reloading
- Use the search functionality to find specific prompts efficiently
Error Handling
- Always handle SwissArmyHammerError variants appropriately
- Use the ? operator for error propagation in functions returning Result
- Log errors appropriately for debugging
Performance
- Template rendering is generally fast
- Cache commonly used prompts in your application
- Use batch operations when working with multiple prompts
Testing
#[cfg(test)]
mod tests {
use super::*;
use swissarmyhammer::{PromptLibrary, Prompt, ArgumentSpec};
use std::collections::HashMap;
#[test]
fn test_prompt_rendering() {
let prompt = Prompt::new("test", "Hello {{name}}!")
.add_argument(ArgumentSpec {
name: "name".to_string(),
description: Some("Name to greet".to_string()),
required: true,
default: None,
type_hint: Some("string".to_string()),
});
let mut args = HashMap::new();
args.insert("name".to_string(), "World".to_string());
let result = prompt.render(&args).unwrap();
assert_eq!(result, "Hello World!");
}
}
See Also
- Built-in Prompts - Available built-in prompts
- Creating Prompts - How to create custom prompts
- CLI Reference - Command-line interface documentation
API Guide
This guide provides comprehensive usage patterns and examples for the SwissArmyHammer Rust API.
Overview
SwissArmyHammer provides a rich Rust API for programmatic prompt management, template rendering, workflow orchestration, and memoranda handling. The API is designed with both synchronous and asynchronous patterns in mind.
Core Components
- PromptLibrary - Main interface for prompt management
- Prompt - Individual prompt representation
- PromptResolver - Advanced prompt loading and resolution
- TemplateEngine - Low-level template rendering
- Workflows - State-based execution workflows
- Memoranda - Structured note management
- Storage - Pluggable storage backends
Quick Start
use swissarmyhammer::{PromptLibrary, ArgumentSpec, Prompt};
use std::collections::HashMap;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a library and load prompts
let mut library = PromptLibrary::new();
library.add_directory("./.swissarmyhammer/prompts")?;
// Use a prompt
let prompt = library.get("code-review")?;
let mut args = HashMap::new();
args.insert("language".to_string(), "rust".to_string());
let result = prompt.render(&args)?;
println!("{}", result);
Ok(())
}
PromptLibrary
The PromptLibrary is the primary interface for managing collections of prompts.
Creation and Initialization
use swissarmyhammer::PromptLibrary;
// Create an empty library
let mut library = PromptLibrary::new();
// Load from directories (common pattern)
if std::path::Path::new("./.swissarmyhammer/prompts").exists() {
let count = library.add_directory("./.swissarmyhammer/prompts")?;
println!("Loaded {} prompts", count);
}
// Load from multiple sources
library.add_directory("./global/prompts")?;
library.add_directory("./project/prompts")?;
Adding Prompts Programmatically
use swissarmyhammer::{Prompt, ArgumentSpec};
let prompt = Prompt::new("greeting", "Hello {{name}}!")
.with_description("A simple greeting")
.add_argument(ArgumentSpec {
name: "name".to_string(),
description: Some("Person's name".to_string()),
required: true,
default: Some("World".to_string()),
type_hint: Some("string".to_string()),
});
library.add(prompt)?;
Retrieving and Using Prompts
use std::collections::HashMap;
// Get by name
let prompt = library.get("greeting")?;
// Render with arguments
let mut args = HashMap::new();
args.insert("name".to_string(), "Alice".to_string());
let result = prompt.render(&args)?;
Listing and Discovery
// List all prompts
for prompt in library.list()? {
println!("{}: {}", prompt.name,
prompt.description.as_deref().unwrap_or("No description"));
}
// Filter by category
for prompt in library.list()? {
if prompt.category.as_deref() == Some("development") {
println!("Dev prompt: {}", prompt.name);
}
}
// Search by content
let matches = library.search("code review")?;
for prompt in matches {
println!("Found: {}", prompt.name);
}
Prompt
Individual prompts encapsulate template content and metadata.
Creating Prompts
use swissarmyhammer::{Prompt, ArgumentSpec};
let prompt = Prompt::new("code-review", r#"
Review this {{language}} code:
```{{language}}
{{code}}
```
Focus on:
- Best practices
- Potential bugs
- Performance
"#)
.with_description("Comprehensive code review prompt")
.with_category("development")
.with_tags(vec!["code".to_string(), "review".to_string()])
.add_argument(ArgumentSpec {
    name: "language".to_string(),
    description: Some("Programming language".to_string()),
    required: true,
    default: None,
    type_hint: Some("string".to_string()),
})
.add_argument(ArgumentSpec {
    name: "code".to_string(),
    description: Some("Code to review".to_string()),
    required: true,
    default: None,
    type_hint: Some("text".to_string()),
});
Argument Validation
// Check required arguments
if let Err(missing) = prompt.validate_arguments(&args) {
eprintln!("Missing arguments: {:?}", missing);
return Ok(());
}
// Get argument specifications
for arg in &prompt.arguments {
println!("Arg: {} (required: {})", arg.name, arg.required);
if let Some(default) = &arg.default {
println!(" Default: {}", default);
}
}
Template Features
// Basic variable substitution
let template = "Hello {{name}}!";
// Conditionals
let template = r#"
{% if urgent %}
**URGENT**: {{message}}
{% else %}
{{message}}
{% endif %}
"#;
// Loops
let template = r#"
{% for item in items %}
- {{item.name}}: {{item.description}}
{% endfor %}
"#;
// Custom filters (if registered)
let template = "{{text | upper | truncate: 50}}";
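The variable substitution shown above can be sketched in plain Rust. This is an illustration of the replacement semantics only; the hypothetical `render_simple` helper is not part of the library, which uses a full Liquid engine with conditionals, loops, and filters:

```rust
use std::collections::HashMap;

// Naive {{var}} substitution: replaces each "{{name}}" with its value.
// Illustration only -- the real engine parses templates with Liquid.
fn render_simple(template: &str, args: &HashMap<String, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in args {
        // format! produces the literal "{{key}}" marker to search for.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    let mut args = HashMap::new();
    args.insert("name".to_string(), "Alice".to_string());
    println!("{}", render_simple("Hello {{name}}!", &args));
}
```

A real engine also handles missing variables, escaping, and nested access (`{{user.name}}`), which a simple string replace cannot.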
PromptResolver
For advanced prompt loading and resolution scenarios.
Basic Usage
use swissarmyhammer::PromptResolver;
let resolver = PromptResolver::new();
// Get all available prompts from standard locations
let prompts = resolver.resolve_all()?;
// Resolve specific prompt by name
if let Some(prompt) = resolver.resolve("code-review")? {
println!("Found prompt: {}", prompt.name);
}
Custom Search Paths
use swissarmyhammer::{PromptResolver, FileSource};
let mut resolver = PromptResolver::new();
resolver.add_source(FileSource::from_directory("./custom/prompts")?);
// Resolve from custom sources
let prompts = resolver.resolve_all()?;
Source Priority
// Sources are resolved in order, later sources override earlier ones
resolver.add_source(FileSource::from_directory("./global")?); // Lower priority
resolver.add_source(FileSource::from_directory("./project")?); // Medium priority
resolver.add_source(FileSource::from_directory("./local")?); // Higher priority
// "code-review" from ./local will override ./project and ./global
let prompt = resolver.resolve("code-review")?;
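The last-wins precedence described above can be modeled with an ordinary map, where later inserts overwrite earlier entries under the same prompt name. The `resolve_sources` helper below is purely illustrative, not a library API:

```rust
use std::collections::HashMap;

// Sources are applied in registration order; a later source's entry
// for the same name overwrites an earlier one (global -> project -> local).
fn resolve_sources(sources: &[Vec<(&str, &str)>]) -> HashMap<String, String> {
    let mut resolved = HashMap::new();
    for source in sources {
        for (name, body) in source {
            resolved.insert(name.to_string(), body.to_string());
        }
    }
    resolved
}

fn main() {
    let global = vec![("code-review", "global version")];
    let local = vec![("code-review", "local version")];
    let resolved = resolve_sources(&[global, local]);
    assert_eq!(resolved["code-review"], "local version");
}
```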
TemplateEngine
Low-level template rendering with custom filters and context.
Basic Rendering
use swissarmyhammer::TemplateEngine;
use std::collections::HashMap;
let engine = TemplateEngine::new();
let template = engine.parse("Hello {{name}}!")?;
let mut context = HashMap::new();
context.insert("name".to_string(), "World".to_string());
let result = template.render(&context)?;
Custom Filters
use swissarmyhammer::{TemplateEngine, CustomLiquidFilter};
use liquid::ValueView;
struct UppercaseFilter;
impl CustomLiquidFilter for UppercaseFilter {
fn name(&self) -> &'static str {
"upper"
}
fn filter(&self, input: &dyn ValueView) -> Result<String, Box<dyn std::error::Error>> {
Ok(input.to_kstr().to_uppercase())
}
}
let mut engine = TemplateEngine::new();
engine.register_filter(Box::new(UppercaseFilter))?;
let template = engine.parse("{{name | upper}}")?;
// Renders: "ALICE" from input "alice"
Advanced Context
use serde_json::json;
let context = json!({
"user": {
"name": "Alice",
"role": "developer"
},
"items": [
{"name": "Task 1", "done": true},
{"name": "Task 2", "done": false}
]
});
let template = r#"
User: {{user.name}} ({{user.role}})
Pending tasks:
{% for item in items %}
{% unless item.done %}
- {{item.name}}
{% endunless %}
{% endfor %}
"#;
Workflows
State-based execution for complex multi-step processes.
Defining Workflows
use swissarmyhammer::{Workflow, State, Transition};
let workflow = Workflow::new("code-review-process")
.with_description("Complete code review workflow")
.add_state(State::new("start")
.with_prompt("code-review-request"))
.add_state(State::new("review")
.with_prompt("perform-code-review"))
.add_state(State::new("feedback")
.with_prompt("format-feedback"))
.add_state(State::new("complete"))
.add_transition(Transition::new("start", "review")
.with_condition("review_requested"))
.add_transition(Transition::new("review", "feedback")
.with_condition("review_complete"))
.add_transition(Transition::new("feedback", "complete"));
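The transitions declared above amount to a lookup table keyed by (current state, condition). This sketch models that table with a plain `HashMap`; the `advance` helper is illustrative, not the library's actual executor:

```rust
use std::collections::HashMap;

// (current state, condition) -> next state.
type Transitions = HashMap<(String, String), String>;

// Look up the next state; None means no transition is defined,
// mirroring the builder calls above.
fn advance(transitions: &Transitions, state: &str, condition: &str) -> Option<String> {
    transitions
        .get(&(state.to_string(), condition.to_string()))
        .cloned()
}

fn main() {
    let mut t: Transitions = HashMap::new();
    t.insert(("start".into(), "review_requested".into()), "review".into());
    t.insert(("review".into(), "review_complete".into()), "feedback".into());

    assert_eq!(advance(&t, "start", "review_requested"), Some("review".to_string()));
    assert_eq!(advance(&t, "feedback", "anything"), None);
}
```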
Executing Workflows
use std::collections::HashMap;
// Start a workflow run
let mut context = HashMap::new();
context.insert("code".to_string(), "fn main() {}".to_string());
context.insert("language".to_string(), "rust".to_string());
let mut run = workflow.start_run(context)?;
// Execute step by step
while !run.is_complete() {
let current_state = run.current_state();
println!("Current state: {:?}", current_state);
// Get the prompt for this state
if let Some(prompt_name) = current_state.prompt_name() {
let prompt = library.get(prompt_name)?;
let result = prompt.render(run.context())?;
// Process result and advance
run.set_variable("result", result);
run.advance("next")?;
}
}
let final_result = run.get_variable("result");
Workflow Persistence
use swissarmyhammer::{WorkflowRun, WorkflowRunStatus};
// Save workflow state
let run_id = run.id();
let status = run.status(); // InProgress, Completed, Failed
let state_data = run.serialize()?;
// Later, restore workflow
let restored_run = WorkflowRun::deserialize(&state_data)?;
Memoranda
Structured note and memo management.
Creating Memos
use swissarmyhammer::{Memo, CreateMemoRequest};
let memo_request = CreateMemoRequest {
title: "Meeting Notes".to_string(),
content: r#"
Team Meeting 2024-01-15
# Attendees
- Alice, Bob, Charlie
# Action Items
- [ ] Review PR #123
- [ ] Update documentation
"#.to_string(),
};
// This would typically be used with MCP or storage layer
Searching Memos
use swissarmyhammer::SearchMemosRequest;
let search_request = SearchMemosRequest {
query: "meeting action items".to_string(),
};
// Returns SearchMemosResponse with matching memos
// Implementation depends on storage backend
Storage
Pluggable storage backends for prompts and data.
File System Storage
use swissarmyhammer::{PromptStorage, StorageBackend};
use std::path::PathBuf;
// Default file system storage
let storage = PromptStorage::new_filesystem(PathBuf::from("./.swissarmyhammer"))?;
// Store and retrieve prompts
storage.store_prompt(&prompt)?;
let retrieved = storage.get_prompt("greeting")?;
// List all stored prompts
let all_prompts = storage.list_prompts()?;
Custom Storage Backend
use swissarmyhammer::{StorageBackend, Prompt};
use async_trait::async_trait;
struct DatabaseStorage {
connection: DatabaseConnection,
}
#[async_trait]
impl StorageBackend for DatabaseStorage {
async fn store_prompt(&self, prompt: &Prompt) -> Result<(), SwissArmyHammerError> {
// Store in database
Ok(())
}
async fn get_prompt(&self, name: &str) -> Result<Option<Prompt>, SwissArmyHammerError> {
// Retrieve from database
Ok(None)
}
async fn list_prompts(&self) -> Result<Vec<Prompt>, SwissArmyHammerError> {
// List all prompts
Ok(vec![])
}
}
Error Handling
SwissArmyHammer uses a comprehensive error system.
Error Types
use swissarmyhammer::{SwissArmyHammerError, Result};
fn handle_errors() -> Result<()> {
match library.get("nonexistent") {
Ok(prompt) => println!("Found: {}", prompt.name),
Err(SwissArmyHammerError::PromptNotFound(name)) => {
eprintln!("Prompt '{}' not found", name);
}
Err(SwissArmyHammerError::Template(msg)) => {
eprintln!("Template error: {}", msg);
}
Err(SwissArmyHammerError::Io(err)) => {
eprintln!("IO error: {}", err);
}
Err(err) => {
eprintln!("Other error: {}", err);
}
}
Ok(())
}
Result Chaining
// Chain operations safely
let result = library
.get("code-review")?
.render(&args)?;
// Or with explicit error handling
let prompt = library.get("code-review").map_err(|e| {
eprintln!("Failed to get prompt: {}", e);
e
})?;
Advanced Patterns
Plugin System
use swissarmyhammer::{SwissArmyHammerPlugin, PluginRegistry};
struct CustomPlugin;
impl SwissArmyHammerPlugin for CustomPlugin {
fn name(&self) -> &'static str {
"custom-plugin"
}
fn initialize(&self, registry: &mut PluginRegistry) -> Result<()> {
// Register custom filters, templates, etc.
Ok(())
}
}
let mut registry = PluginRegistry::new();
registry.register(Box::new(CustomPlugin))?;
Configuration Management
use swissarmyhammer::Config;
let config = Config::builder()
.prompt_directories(vec!["./prompts", "./shared/prompts"])
.template_cache_size(1000)
.enable_file_watching(true)
.build()?;
let library = PromptLibrary::with_config(config)?;
Async Patterns
use swissarmyhammer::PromptResolver;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let resolver = PromptResolver::new();
// Arguments shared across the concurrent render tasks below
let args: std::collections::HashMap<String, String> = std::collections::HashMap::new();
// Async prompt resolution
let prompts = resolver.resolve_all_async().await?;
// Concurrent prompt rendering
let tasks: Vec<_> = prompts.into_iter()
.map(|prompt| {
let args = args.clone();
tokio::spawn(async move {
prompt.render(&args)
})
})
.collect();
let results = futures::future::join_all(tasks).await;
Ok(())
}
Performance Considerations
Caching
// PromptLibrary caches loaded prompts automatically
let library = PromptLibrary::new();
library.add_directory("./prompts")?; // Loads once
// Multiple gets use cached versions
let prompt1 = library.get("greeting")?; // From cache
let prompt2 = library.get("greeting")?; // From cache
Memory Management
use std::sync::Arc;
// Share prompts across threads
let prompt = Arc::new(library.get("greeting")?);
let handles: Vec<_> = (0..4).map(|_| {
let prompt = Arc::clone(&prompt);
let args = args.clone();
std::thread::spawn(move || {
prompt.render(&args)
})
}).collect();
Batch Operations
// Process multiple prompts efficiently
let prompt_names = vec!["greeting", "farewell", "code-review"];
let results: Result<Vec<_>, _> = prompt_names
.iter()
.map(|name| library.get(name)?.render(&args))
.collect();
Testing
Unit Testing Prompts
#[cfg(test)]
mod tests {
use super::*;
use std::collections::HashMap;
#[test]
fn test_greeting_prompt() {
let prompt = Prompt::new("greeting", "Hello {{name}}!");
let mut args = HashMap::new();
args.insert("name".to_string(), "World".to_string());
let result = prompt.render(&args).unwrap();
assert_eq!(result, "Hello World!");
}
#[test]
fn test_missing_argument() {
let prompt = Prompt::new("greeting", "Hello {{name}}!")
.add_argument(ArgumentSpec {
name: "name".to_string(),
required: true,
..Default::default()
});
let args = HashMap::new(); // Missing 'name'
assert!(prompt.render(&args).is_err());
}
}
Integration Testing
#[cfg(test)]
mod integration_tests {
use super::*;
use tempfile::TempDir;
#[test]
fn test_library_from_directory() {
let temp_dir = TempDir::new().unwrap();
let prompt_content = r#"---
title: Test Prompt
---
Hello {{name}}!"#;
std::fs::write(
temp_dir.path().join("test.md"),
prompt_content
).unwrap();
let mut library = PromptLibrary::new();
library.add_directory(temp_dir.path()).unwrap();
let prompt = library.get("test").unwrap();
assert_eq!(prompt.title.unwrap(), "Test Prompt");
}
}
Best Practices
1. Error Handling
- Always use Result&lt;T&gt; return types
- Provide meaningful error messages
- Chain operations with the ? operator
2. Resource Management
- Cache PromptLibrary instances when possible
- Use Arc&lt;&gt; for sharing across threads
- Clean up temporary resources
3. Template Design
- Keep templates focused and reusable
- Use clear argument names
- Provide default values where appropriate
- Document expected arguments
4. Performance
- Load prompts once, use many times
- Consider async patterns for I/O intensive operations
- Use batch operations for multiple prompts
5. Testing
- Unit test individual prompts
- Integration test library loading
- Mock storage backends for testing
See Also
- Library Examples - Practical usage examples
- API Reference - Complete API documentation
- rustdoc Documentation - Generated API docs
- MCP Protocol - Model Context Protocol integration
Library API Reference
This document provides comprehensive API documentation for the SwissArmyHammer Rust library.
Core Types
Prompt
The Prompt struct represents a single prompt with metadata and template content.
pub struct Prompt {
pub name: String,
pub content: String,
pub description: Option<String>,
pub category: Option<String>,
pub tags: Vec<String>,
pub arguments: Vec<ArgumentSpec>,
pub file_path: Option<PathBuf>,
}
Methods
- new(name: &str, content: &str) -> Self - Create a new prompt
- with_description(self, description: &str) -> Self - Add a description (builder pattern)
- with_category(self, category: &str) -> Self - Add a category (builder pattern)
- add_tag(self, tag: &str) -> Self - Add a tag (builder pattern)
- add_argument(self, arg: ArgumentSpec) -> Self - Add an argument specification
- render(&self, args: &HashMap<String, String>) -> Result<String> - Render the prompt with arguments
- validate_arguments(&self, args: &HashMap<String, String>) -> Result<()> - Validate provided arguments
Example
use swissarmyhammer::{Prompt, ArgumentSpec};
use std::collections::HashMap;
let prompt = Prompt::new("greet", "Hello {{name}}!")
.with_description("A greeting prompt")
.add_argument(ArgumentSpec {
name: "name".to_string(),
description: Some("Name to greet".to_string()),
required: true,
default: None,
type_hint: Some("string".to_string()),
});
let mut args = HashMap::new();
args.insert("name".to_string(), "World".to_string());
let result = prompt.render(&args)?;
// result: "Hello World!"
ArgumentSpec
Defines the specification for a prompt argument.
pub struct ArgumentSpec {
pub name: String,
pub description: Option<String>,
pub required: bool,
pub default: Option<String>,
pub type_hint: Option<String>,
}
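One plausible way `required` and `default` interact during validation, sketched against a local mirror of the struct. The `apply_arguments` helper is hypothetical and simplified, not the library's actual logic:

```rust
use std::collections::HashMap;

// Minimal local mirror of ArgumentSpec, for illustration only.
struct ArgumentSpec {
    name: String,
    required: bool,
    default: Option<String>,
}

// Missing arguments fall back to their default when one exists;
// required arguments with no default are collected as errors.
fn apply_arguments(
    specs: &[ArgumentSpec],
    args: &HashMap<String, String>,
) -> Result<HashMap<String, String>, Vec<String>> {
    let mut resolved = args.clone();
    let mut missing = Vec::new();
    for spec in specs {
        if !resolved.contains_key(&spec.name) {
            match &spec.default {
                Some(default) => {
                    resolved.insert(spec.name.clone(), default.clone());
                }
                None if spec.required => missing.push(spec.name.clone()),
                None => {}
            }
        }
    }
    if missing.is_empty() { Ok(resolved) } else { Err(missing) }
}

fn main() {
    let specs = vec![
        ArgumentSpec { name: "name".into(), required: true, default: None },
        ArgumentSpec { name: "greeting".into(), required: false, default: Some("Hello".into()) },
    ];
    let mut args = HashMap::new();
    args.insert("name".to_string(), "Alice".to_string());
    let resolved = apply_arguments(&specs, &args).unwrap();
    assert_eq!(resolved["greeting"], "Hello");
}
```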
PromptLibrary
The main interface for managing collections of prompts.
pub struct PromptLibrary {
// internal fields...
}
Methods
- new() -> Self - Create a new empty library
- add_directory<P: AsRef<Path>>(&mut self, path: P) -> Result<()> - Load prompts from a directory
- add_prompt(&mut self, prompt: Prompt) - Add a single prompt
- get(&self, name: &str) -> Result<&Prompt> - Get a prompt by name
- list_prompts(&self) -> Vec<&Prompt> - List all prompts
- find_by_category(&self, category: &str) -> Vec<&Prompt> - Find prompts by category
- find_by_tag(&self, tag: &str) -> Vec<&Prompt> - Find prompts by tag
- remove(&mut self, name: &str) -> Option<Prompt> - Remove a prompt
Example
use swissarmyhammer::PromptLibrary;
let mut library = PromptLibrary::new();
library.add_directory("./.swissarmyhammer/prompts")?;
let prompt = library.get("code-review")?;
let rendered = prompt.render(&args)?;
PromptLoader
Handles loading prompts from various sources.
pub struct PromptLoader {
// internal fields...
}
Methods
- new() -> Self - Create a new loader
- load_file<P: AsRef<Path>>(&self, path: P) -> Result<Prompt> - Load a single prompt file
- load_directory<P: AsRef<Path>>(&self, path: P) -> Result<Vec<Prompt>> - Load all prompts from a directory
- load_string(&self, name: &str, content: &str) -> Result<Prompt> - Load a prompt from a string
Template Engine
Template
Wrapper for Liquid templates with custom filters.
pub struct Template {
// internal fields...
}
Methods
- new(template_str: &str) -> Result<Self> - Create a template from a string
- render(&self, args: &HashMap<String, String>) -> Result<String> - Render with arguments
- raw(&self) -> &str - Get the raw template string
TemplateEngine
Manages template parsing and custom filters.
pub struct TemplateEngine {
// internal fields...
}
Methods
- new() -> Self - Create a new engine
- default_parser() -> Parser - Get the default Liquid parser with custom filters
- register_filter<F>(&mut self, name: &str, filter: F) - Register a custom filter
Storage
PromptStorage
High-level storage interface for prompts.
pub trait PromptStorage {
fn store_prompt(&mut self, prompt: &Prompt) -> Result<()>;
fn load_prompt(&self, name: &str) -> Result<Prompt>;
fn list_prompts(&self) -> Result<Vec<String>>;
fn delete_prompt(&mut self, name: &str) -> Result<()>;
}
StorageBackend
Low-level storage abstraction.
pub trait StorageBackend {
fn read(&self, key: &str) -> Result<Vec<u8>>;
fn write(&mut self, key: &str, data: &[u8]) -> Result<()>;
fn delete(&mut self, key: &str) -> Result<()>;
fn list(&self) -> Result<Vec<String>>;
}
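A minimal in-memory implementation of this byte-level interface is handy for tests. The trait below is a simplified local stand-in (using `String` as the error type) rather than the library's exact definition:

```rust
use std::collections::HashMap;

// Simplified stand-in for the StorageBackend trait above.
trait StorageBackend {
    fn read(&self, key: &str) -> Result<Vec<u8>, String>;
    fn write(&mut self, key: &str, data: &[u8]) -> Result<(), String>;
    fn delete(&mut self, key: &str) -> Result<(), String>;
    fn list(&self) -> Result<Vec<String>, String>;
}

// In-memory backend backed by a HashMap.
struct MemoryBackend {
    entries: HashMap<String, Vec<u8>>,
}

impl StorageBackend for MemoryBackend {
    fn read(&self, key: &str) -> Result<Vec<u8>, String> {
        self.entries
            .get(key)
            .cloned()
            .ok_or_else(|| format!("not found: {key}"))
    }
    fn write(&mut self, key: &str, data: &[u8]) -> Result<(), String> {
        self.entries.insert(key.to_string(), data.to_vec());
        Ok(())
    }
    fn delete(&mut self, key: &str) -> Result<(), String> {
        self.entries
            .remove(key)
            .map(|_| ())
            .ok_or_else(|| format!("not found: {key}"))
    }
    fn list(&self) -> Result<Vec<String>, String> {
        Ok(self.entries.keys().cloned().collect())
    }
}

fn main() {
    let mut backend = MemoryBackend { entries: HashMap::new() };
    backend.write("greeting", b"Hello {{name}}!").unwrap();
    assert_eq!(backend.read("greeting").unwrap(), b"Hello {{name}}!".to_vec());
}
```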
Search
Available with the search feature.
SearchEngine
Full-text search functionality for prompts.
pub struct SearchEngine {
// internal fields...
}
Methods
- new() -> Result<Self> - Create a new search engine
- index_prompt(&mut self, prompt: &Prompt) -> Result<()> - Add a prompt to the search index
- search(&self, query: &str) -> Result<Vec<SearchResult>> - Search for prompts
SearchResult
Represents a search result.
pub struct SearchResult {
pub name: String,
pub score: f32,
pub snippet: Option<String>,
}
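Callers typically order results by descending `score`. A self-contained sketch using a local copy of the struct (note that `f32` is not `Ord`, so `total_cmp` is used for sorting):

```rust
// Local mirror of SearchResult for a self-contained example.
struct SearchResult {
    name: String,
    score: f32,
}

// Sort best match first; total_cmp gives a total order over f32.
fn rank(mut results: Vec<SearchResult>) -> Vec<SearchResult> {
    results.sort_by(|a, b| b.score.total_cmp(&a.score));
    results
}

fn main() {
    let ranked = rank(vec![
        SearchResult { name: "greeting".into(), score: 0.2 },
        SearchResult { name: "code-review".into(), score: 0.9 },
    ]);
    assert_eq!(ranked[0].name, "code-review");
}
```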
MCP Integration
Available with the mcp feature.
McpServer
Model Context Protocol server implementation.
pub struct McpServer {
// internal fields...
}
Methods
- new(library: PromptLibrary) -> Self - Create a server with a prompt library
- run(&mut self) -> Result<()> - Start the MCP server
Plugin System
SwissArmyHammerPlugin
Trait for creating plugins.
pub trait SwissArmyHammerPlugin {
fn name(&self) -> &str;
fn filters(&self) -> Vec<Box<dyn CustomLiquidFilter>>;
}
CustomLiquidFilter
Trait for custom Liquid template filters.
pub trait CustomLiquidFilter {
fn name(&self) -> &str;
fn filter(&self, input: &str, args: &[&str]) -> Result<String>;
}
PluginRegistry
Manages registered plugins and filters.
pub struct PluginRegistry {
// internal fields...
}
Methods
- new() -> Self - Create a new registry
- register_plugin<P: SwissArmyHammerPlugin>(&mut self, plugin: P) - Register a plugin
- get_filters(&self) -> Vec<&dyn CustomLiquidFilter> - Get all registered filters
Error Handling
SwissArmyHammerError
Main error type for the library.
pub enum SwissArmyHammerError {
Io(std::io::Error),
Template(String),
PromptNotFound(String),
Config(String),
Storage(String),
Serialization(serde_yaml::Error),
Other(String),
}
Result Type
Convenient result type alias.
pub type Result<T> = std::result::Result<T, SwissArmyHammerError>;
Feature Flags
The library supports several optional features:
- search - Enables full-text search functionality
- mcp - Enables Model Context Protocol server support
Enable features in your Cargo.toml:
[dependencies]
swissarmyhammer = { version = "0.1", features = ["search", "mcp"] }
Complete Example
use swissarmyhammer::{PromptLibrary, ArgumentSpec, Result};
use std::collections::HashMap;
fn main() -> Result<()> {
// Create library and load prompts
let mut library = PromptLibrary::new();
library.add_directory("./.swissarmyhammer/prompts")?;
// Get a prompt
let prompt = library.get("code-review")?;
// Prepare arguments
let mut args = HashMap::new();
args.insert("code".to_string(), "fn main() { println!(\"Hello\"); }".to_string());
args.insert("language".to_string(), "rust".to_string());
// Render the prompt
let rendered = prompt.render(&args)?;
println!("{}", rendered);
Ok(())
}
Advanced Usage
Custom Filters
Create custom Liquid filters for domain-specific transformations:
use swissarmyhammer::{CustomLiquidFilter, PluginRegistry, TemplateEngine};
struct UppercaseFilter;
impl CustomLiquidFilter for UppercaseFilter {
fn name(&self) -> &str { "uppercase" }
fn filter(&self, input: &str, _args: &[&str]) -> Result<String> {
Ok(input.to_uppercase())
}
}
let mut registry = PluginRegistry::new();
registry.register_filter("uppercase", Box::new(UppercaseFilter));
Storage Backends
Implement custom storage backends:
use swissarmyhammer::{StorageBackend, Result};
struct DatabaseBackend {
// database connection...
}
impl StorageBackend for DatabaseBackend {
fn read(&self, key: &str) -> Result<Vec<u8>> {
// Read from database
todo!()
}
fn write(&mut self, key: &str, data: &[u8]) -> Result<()> {
// Write to database
todo!()
}
// ... implement other methods
}
For more examples and advanced usage patterns, see the Library Examples page.
Library Examples
This guide provides practical examples of using SwissArmyHammer as a Rust library in your applications.
Basic Usage
Adding to Your Project
Add SwissArmyHammer to your Cargo.toml:
[dependencies]
swissarmyhammer = { git = "https://github.com/wballard/swissarmyhammer.git" }
tokio = { version = "1", features = ["full"] }
serde_json = "1"
Simple Example
use swissarmyhammer::{PromptManager, PromptArgument};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a prompt manager
let manager = PromptManager::new()?;
// Load prompts from default directories
manager.load_prompts().await?;
// Get a specific prompt
let prompt = manager.get_prompt("code-review")?;
// Prepare arguments
let mut args = HashMap::new();
args.insert("code".to_string(), r#"
def calculate_sum(a, b):
return a + b
"#.to_string());
args.insert("language".to_string(), "python".to_string());
// Render the prompt
let rendered = prompt.render(&args)?;
println!("Rendered prompt:\n{}", rendered);
Ok(())
}
Advanced Examples
Custom Prompt Directories
use swissarmyhammer::{PromptManager, Config};
use std::path::PathBuf;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create custom configuration
let mut config = Config::default();
config.prompt_directories.push(PathBuf::from("./my-prompts"));
config.prompt_directories.push(PathBuf::from("/opt/company/prompts"));
// Create manager with custom config
let manager = PromptManager::with_config(config)?;
// Load prompts from all directories
manager.load_prompts().await?;
// List all available prompts
for prompt in manager.list_prompts() {
println!("Found prompt: {} - {}", prompt.name, prompt.title);
}
Ok(())
}
Watching for Changes
use swissarmyhammer::{PromptManager, WatchEvent};
use tokio::sync::mpsc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let manager = PromptManager::new()?;
// Create a channel for watch events
let (tx, mut rx) = mpsc::channel(100);
// Start watching for changes
manager.watch(tx).await?;
// Handle watch events
tokio::spawn(async move {
while let Some(event) = rx.recv().await {
match event {
WatchEvent::PromptAdded(name) => {
println!("New prompt added: {}", name);
}
WatchEvent::PromptModified(name) => {
println!("Prompt modified: {}", name);
}
WatchEvent::PromptRemoved(name) => {
println!("Prompt removed: {}", name);
}
}
}
});
// Keep the program running
tokio::signal::ctrl_c().await?;
println!("Shutting down...");
Ok(())
}
MCP Server Implementation
use swissarmyhammer::{PromptManager, MCPServer, MCPRequest, MCPResponse};
use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create prompt manager
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create MCP server
let server = MCPServer::new(manager);
// Listen on TCP socket
let listener = TcpListener::bind("127.0.0.1:3333").await?;
println!("MCP server listening on 127.0.0.1:3333");
loop {
let (mut socket, addr) = listener.accept().await?;
let server = server.clone();
// Handle each connection
tokio::spawn(async move {
let mut buffer = vec![0; 1024];
loop {
let n = match socket.read(&mut buffer).await {
Ok(n) if n == 0 => return,
Ok(n) => n,
Err(e) => {
eprintln!("Error reading from {}: {}", addr, e);
return;
}
};
// Parse request
if let Ok(request) = serde_json::from_slice::<MCPRequest>(&buffer[..n]) {
// Handle request
let response = server.handle_request(request).await;
// Send response
let response_bytes = serde_json::to_vec(&response).unwrap();
if let Err(e) = socket.write_all(&response_bytes).await {
eprintln!("Error writing to {}: {}", addr, e);
return;
}
}
}
});
}
}
Custom Template Filters
use swissarmyhammer::{PromptManager, TemplateEngine, FilterFunction};
use liquid::ValueView;
use std::collections::HashMap;
fn create_custom_filters() -> Vec<(&'static str, FilterFunction)> {
vec![
// Custom filter to convert to snake_case
("snake_case", Box::new(|input: &dyn ValueView, _args: &[liquid::model::Value]| {
let s = input.to_kstr().to_string();
let snake = s.chars().fold(String::new(), |mut acc, ch| {
if ch.is_uppercase() && !acc.is_empty() {
acc.push('_');
}
acc.push(ch.to_lowercase().next().unwrap());
acc
});
Ok(liquid::model::Value::scalar(snake))
})),
// Custom filter to add line numbers
("line_numbers", Box::new(|input: &dyn ValueView, _args: &[liquid::model::Value]| {
let s = input.to_kstr().to_string();
let numbered = s.lines()
.enumerate()
.map(|(i, line)| format!("{:4}: {}", i + 1, line))
.collect::<Vec<_>>()
.join("\n");
Ok(liquid::model::Value::scalar(numbered))
})),
]
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create template engine with custom filters
let mut engine = TemplateEngine::new();
for (name, filter) in create_custom_filters() {
engine.register_filter(name, filter);
}
// Create prompt manager with custom engine
let manager = PromptManager::with_engine(engine)?;
// Use prompts with custom filters
let template = r#"
Function name: {{ function_name | snake_case }}
Code with line numbers:
{{ code | line_numbers }}
"#;
let mut args = HashMap::new();
args.insert("function_name", "calculateTotalPrice");
args.insert("code", "def hello():\n print('Hello')\n return True");
let rendered = engine.render_str(template, &args)?;
println!("{}", rendered);
Ok(())
}
Prompt Validation
use swissarmyhammer::{PromptManager, PromptValidator, ValidationRule};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create custom validation rules
let rules = vec![
ValidationRule::RequiredFields(vec!["name", "title", "description"]),
ValidationRule::ArgumentTypes(HashMap::from([
("max_length", "integer"),
("temperature", "float"),
("enabled", "boolean"),
])),
ValidationRule::TemplatePatterns(vec![
r"\{\{[^}]+\}\}", // Must use double braces
]),
];
// Create validator
let validator = PromptValidator::new(rules);
// Create manager with validator
let manager = PromptManager::with_validator(validator)?;
// Load and validate prompts
match manager.load_prompts().await {
Ok(_) => println!("All prompts validated successfully"),
Err(e) => eprintln!("Validation errors: {}", e),
}
// Validate a specific prompt file
let prompt_content = std::fs::read_to_string("my-prompt.md")?;
match manager.validate_prompt_content(&prompt_content) {
Ok(prompt) => println!("Prompt '{}' is valid", prompt.name),
Err(errors) => {
println!("Validation errors:");
for error in errors {
println!(" - {}", error);
}
}
}
Ok(())
}
Batch Processing
use swissarmyhammer::{PromptManager, BatchProcessor};
use futures::stream::StreamExt;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create batch processor
let processor = BatchProcessor::new(manager, 10); // 10 concurrent tasks
// Prepare batch jobs
let jobs = vec![
("code-review", HashMap::from([
("code", "def add(a, b): return a + b"),
("language", "python"),
])),
("api-docs", HashMap::from([
("api_spec", r#"{"endpoints": ["/users", "/posts"]}"#),
("format", "markdown"),
])),
("test-writer", HashMap::from([
("code", "class Calculator { add(a, b) { return a + b; } }"),
("framework", "jest"),
])),
];
// Process in parallel
let results = processor.process_batch(jobs).await;
// Handle results
for (index, result) in results.iter().enumerate() {
match result {
Ok(rendered) => {
println!("Job {} completed:", index + 1);
println!("{}\n", rendered);
}
Err(e) => {
eprintln!("Job {} failed: {}", index + 1, e);
}
}
}
Ok(())
}
Integration with AI Services
use swissarmyhammer::{PromptManager, AIServiceClient};
use async_trait::async_trait;
use std::collections::HashMap;
// Custom AI service implementation
struct OpenAIClient {
api_key: String,
client: reqwest::Client,
}
#[async_trait]
impl AIServiceClient for OpenAIClient {
async fn complete(&self, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
let response = self.client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(&self.api_key)
.json(&serde_json::json!({
"model": "gpt-4",
"messages": [{"role": "user", "content": prompt}],
"temperature": 0.7,
}))
.send()
.await?;
let data: serde_json::Value = response.json().await?;
let content = data["choices"][0]["message"]["content"]
.as_str()
.unwrap_or("");
Ok(content.to_string())
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Setup prompt manager
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create AI client
let ai_client = OpenAIClient {
api_key: std::env::var("OPENAI_API_KEY")?,
client: reqwest::Client::new(),
};
// Get and render prompt
let prompt = manager.get_prompt("code-review")?;
let args = HashMap::from([
("code", "def factorial(n): return 1 if n <= 1 else n * factorial(n-1)"),
("language", "python"),
]);
let rendered = prompt.render(&args)?;
// Send to AI service
println!("Sending prompt to AI service...");
let response = ai_client.complete(&rendered).await?;
println!("AI Response:\n{}", response);
Ok(())
}
Web Server Integration
use swissarmyhammer::PromptManager;
use axum::{
routing::{get, post},
Router, Json, Extension,
response::IntoResponse,
http::StatusCode,
};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::Arc;
#[derive(Deserialize)]
struct RenderRequest {
prompt_name: String,
arguments: HashMap<String, String>,
}
#[derive(Serialize)]
struct RenderResponse {
rendered: String,
}
async fn list_prompts(
Extension(manager): Extension<Arc<PromptManager>>
) -> impl IntoResponse {
let prompts = manager.list_prompts();
Json(prompts)
}
async fn render_prompt(
Extension(manager): Extension<Arc<PromptManager>>,
Json(request): Json<RenderRequest>,
) -> impl IntoResponse {
match manager.get_prompt(&request.prompt_name) {
Ok(prompt) => match prompt.render(&request.arguments) {
Ok(rendered) => Ok(Json(RenderResponse { rendered })),
Err(e) => Err((StatusCode::BAD_REQUEST, e.to_string())),
},
Err(e) => Err((StatusCode::NOT_FOUND, e.to_string())),
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Setup prompt manager
let manager = Arc::new(PromptManager::new()?);
manager.load_prompts().await?;
// Build web app
let app = Router::new()
.route("/prompts", get(list_prompts))
.route("/render", post(render_prompt))
.layer(Extension(manager));
// Start server
let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
println!("Web server listening on http://0.0.0.0:8080");
axum::serve(listener, app).await?;
Ok(())
}
Testing Utilities
use swissarmyhammer::{PromptManager, TestHarness, TestCase};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create test harness
let harness = TestHarness::new(manager);
// Define test cases
let test_cases = vec![
TestCase {
prompt_name: "code-review",
arguments: HashMap::from([
("code", "def divide(a, b): return a / b"),
("language", "python"),
]),
expected_contains: vec!["division by zero", "error handling"],
expected_not_contains: vec!["syntax error"],
},
TestCase {
prompt_name: "api-docs",
arguments: HashMap::from([
("api_spec", r#"{"version": "1.0"}"#),
]),
expected_contains: vec!["API Documentation", "version"],
expected_not_contains: vec!["error", "invalid"],
},
];
// Run tests
let results = harness.run_tests(test_cases).await;
// Report results
for (test, result) in results {
match result {
Ok(_) => println!("✓ {} passed", test.prompt_name),
Err(e) => println!("✗ {} failed: {}", test.prompt_name, e),
}
}
Ok(())
}
Error Handling
Comprehensive Error Handling
use swissarmyhammer::{PromptManager, SwissArmyHammerError};
use std::collections::HashMap;
#[tokio::main]
async fn main() {
match run_app().await {
Ok(_) => println!("Application completed successfully"),
Err(e) => {
eprintln!("Application error: {}", e);
std::process::exit(1);
}
}
}
async fn run_app() -> Result<(), SwissArmyHammerError> {
let manager = PromptManager::new()
.map_err(|e| SwissArmyHammerError::Initialization(e.to_string()))?;
// Handle different error types
match manager.load_prompts().await {
Ok(_) => println!("Prompts loaded successfully"),
Err(SwissArmyHammerError::IoError(e)) => {
eprintln!("File system error: {}", e);
return Err(SwissArmyHammerError::IoError(e));
}
Err(SwissArmyHammerError::ParseError(e)) => {
eprintln!("Prompt parsing error: {}", e);
// Continue with partial prompts
}
Err(e) => return Err(e),
}
// Safely get and render prompt
let prompt_name = "code-review";
let prompt = manager.get_prompt(prompt_name)
.map_err(|_| SwissArmyHammerError::PromptNotFound(prompt_name.to_string()))?;
let args = HashMap::from([("code", "print('hello')")]);
let rendered = prompt.render(&args)
.map_err(|e| SwissArmyHammerError::RenderError(e.to_string()))?;
println!("Rendered: {}", rendered);
Ok(())
}
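If you are defining your own error type to wrap these cases, a minimal `std`-only sketch looks like this (`HammerError` is a stand-in for illustration, not the crate's actual `SwissArmyHammerError` definition):

```rust
use std::fmt;

// Sketch of an error enum with per-variant messages; the variant names
// mirror the example above, but the type itself is illustrative.
#[derive(Debug)]
enum HammerError {
    Initialization(String),
    PromptNotFound(String),
    RenderError(String),
}

impl fmt::Display for HammerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            HammerError::Initialization(m) => write!(f, "initialization failed: {}", m),
            HammerError::PromptNotFound(n) => write!(f, "prompt not found: {}", n),
            HammerError::RenderError(m) => write!(f, "render failed: {}", m),
        }
    }
}

impl std::error::Error for HammerError {}

fn main() {
    let err = HammerError::PromptNotFound("code-review".to_string());
    println!("{}", err);
}
```

Because the enum implements `std::error::Error`, it composes with `Box<dyn Error>` and the `?` operator just like the examples above.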
Performance Optimization
Caching and Pooling
use swissarmyhammer::{PromptManager, CacheConfig, CacheStrategy, ConnectionPool};
use std::collections::HashMap;
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Configure caching
let cache_config = CacheConfig {
max_size: 100_000_000, // 100MB
ttl: Duration::from_secs(3600),
strategy: CacheStrategy::LRU,
};
// Create connection pool for MCP
let pool = ConnectionPool::builder()
.max_connections(100)
.connection_timeout(Duration::from_secs(5))
.idle_timeout(Duration::from_secs(60))
.build()?;
// Create optimized manager
let manager = PromptManager::builder()
.cache_config(cache_config)
.connection_pool(pool)
.parallel_load(true)
.build()?;
// Benchmark loading
let start = std::time::Instant::now();
manager.load_prompts().await?;
println!("Loaded prompts in {:?}", start.elapsed());
// Benchmark rendering with cache
let mut total_time = Duration::ZERO;
for i in 0..1000 {
let start = std::time::Instant::now();
let prompt = manager.get_prompt("code-review")?;
let args = HashMap::from([("code", format!("test {}", i))]);
let _ = prompt.render(&args)?;
total_time += start.elapsed();
}
println!("Average render time: {:?}", total_time / 1000);
Ok(())
}
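`CacheConfig` and `ConnectionPool` above are builder-level knobs; the underlying idea is plain memoization of render results keyed by prompt and arguments. A minimal sketch, with `RenderCache` as an illustrative name:

```rust
use std::collections::HashMap;

// Sketch of render-result memoization: cache the rendered string keyed
// by the (prompt, arguments) input so repeated renders skip the work.
struct RenderCache {
    entries: HashMap<String, String>,
    hits: usize,
}

impl RenderCache {
    fn new() -> Self {
        RenderCache { entries: HashMap::new(), hits: 0 }
    }

    fn render(&mut self, key: &str, render_fn: impl Fn() -> String) -> String {
        // Return the cached result if this exact input was rendered before.
        if let Some(cached) = self.entries.get(key) {
            self.hits += 1;
            return cached.clone();
        }
        let value = render_fn();
        self.entries.insert(key.to_string(), value.clone());
        value
    }
}

fn main() {
    let mut cache = RenderCache::new();
    for _ in 0..3 {
        cache.render("code-review:test", || "rendered output".to_string());
    }
    println!("cache hits: {}", cache.hits); // 2 hits after the first miss
}
```

A real cache would also bound its size and expire entries (the `max_size` and `ttl` settings above); this sketch only shows the keyed-lookup core.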
Next Steps
- Review the Library API reference
- Learn about Library Usage patterns
- See Integration Examples for more use cases
- Check the API Documentation for detailed information
- Rustdoc API Documentation
- docs.rs API Reference
Advanced Prompt Techniques
This guide covers advanced techniques for creating sophisticated and powerful prompts with SwissArmyHammer.
Composable Prompts
Prompt Chaining
Chain multiple prompts together for complex workflows:
---
name: full-analysis
title: Complete Code Analysis Pipeline
description: Runs multiple analysis steps on code
arguments:
- name: file_path
description: File to analyze
required: true
- name: output_format
description: Format for results
default: markdown
---
# Complete Analysis for {{file_path}}
## Step 1: Code Review
{% capture review_output %}
Run code review on {{file_path}} focusing on:
- Code quality
- Best practices
- Potential bugs
{% endcapture %}
## Step 2: Security Analysis
{% capture security_output %}
Analyze {{file_path}} for security vulnerabilities:
- Input validation
- Authentication issues
- Data exposure risks
{% endcapture %}
## Step 3: Performance Analysis
{% capture performance_output %}
Check {{file_path}} for performance issues:
- Algorithm complexity
- Resource usage
- Optimization opportunities
{% endcapture %}
{% if output_format == "markdown" %}
## Analysis Results
### Code Review
{{ review_output }}
### Security
{{ security_output }}
### Performance
{{ performance_output }}
{% elsif output_format == "json" %}
{
"code_review": "{{ review_output | escape }}",
"security": "{{ security_output | escape }}",
"performance": "{{ performance_output | escape }}"
}
{% endif %}
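In library code, the same chaining idea — render one prompt, then feed its output into the next prompt's arguments — can be sketched with a naive `{{var}}` substitution (the `render` helper below is a stand-in, not the real Liquid engine):

```rust
use std::collections::HashMap;

// Naive {{var}} substitution, standing in for the real template engine.
fn render(template: &str, args: &HashMap<&str, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in args {
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    // Step 1: the review prompt produces output...
    let mut args = HashMap::new();
    args.insert("file_path", "src/main.rs".to_string());
    let review = render("Review {{file_path}} for quality issues.", &args);

    // Step 2: ...which becomes an argument to the summary prompt.
    let mut args2 = HashMap::new();
    args2.insert("review_output", review);
    let summary = render("Summarize: {{review_output}}", &args2);
    println!("{}", summary);
}
```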
Modular Prompt Components
Create reusable prompt components:
---
name: code-analyzer-base
title: Base Code Analyzer
description: Reusable base for code analysis prompts
arguments:
- name: code
description: Code to analyze
required: true
- name: analysis_type
description: Type of analysis
required: true
---
{% comment %} Base analysis template {% endcomment %}
{% assign lines = code | split: "\n" %}
{% assign line_count = lines | size %}
# {{analysis_type | capitalize}} Analysis
## Code Metrics
- Lines of code: {{line_count}}
- Language: {% if code contains "def " %}Python{% elsif code contains "function" %}JavaScript{% else %}Unknown{% endif %}
## Analysis Focus
{% case analysis_type %}
{% when "security" %}
{% include "security-checks.liquid" %}
{% when "performance" %}
{% include "performance-checks.liquid" %}
{% when "style" %}
{% include "style-checks.liquid" %}
{% endcase %}
## Detailed Analysis
Analyze the following code for {{analysis_type}} issues:
{{code}}
Advanced Templating
Dynamic Content Generation
Generate content based on complex conditions:
---
name: api-documentation-generator
title: Dynamic API Documentation
description: Generates API docs with dynamic sections
arguments:
- name: api_spec
description: API specification (JSON)
required: true
- name: include_examples
description: Include code examples
default: "true"
- name: languages
description: Example languages (comma-separated)
default: "curl,python,javascript"
---
{% assign api = api_spec | parse_json %}
{% assign lang_list = languages | split: "," %}
# {{api.title}} API Documentation
{{api.description}}
Base URL: `{{api.base_url}}`
Version: {{api.version}}
## Authentication
{% if api.auth.type == "bearer" %}
This API uses Bearer token authentication. Include your API token in the Authorization header:
Authorization: Bearer YOUR_API_TOKEN
{% elsif api.auth.type == "oauth2" %}
This API uses OAuth 2.0. See [Authentication Guide](#auth-guide) for details.
{% endif %}
## Endpoints
{% for endpoint in api.endpoints %}
### {{endpoint.method}} {{endpoint.path}}
{{endpoint.description}}
{% if endpoint.parameters.size > 0 %}
#### Parameters
| Name | Type | Required | Description |
|------|------|----------|-------------|
{% for param in endpoint.parameters %}
| {{param.name}} | {{param.type}} | {{param.required | default: false}} | {{param.description}} |
{% endfor %}
{% endif %}
{% if include_examples == "true" %}
#### Examples
{% for lang in lang_list %}
{% case lang %}
{% when "curl" %}
```bash
curl -X {{endpoint.method}} \
{{api.base_url}}{{endpoint.path}} \
{% if api.auth.type == "bearer" %}-H "Authorization: Bearer $API_TOKEN" \{% endif %}
{% for param in endpoint.parameters %}{% if param.in == "header" %}-H "{{param.name}}: value" \{% endif %}{% endfor %}
{% if endpoint.method == "POST" or endpoint.method == "PUT" %}-H "Content-Type: application/json" \
-d '{"key": "value"}'{% endif %}
```
{% when "python" %}
```python
import requests
response = requests.{{endpoint.method | downcase}}(
    "{{api.base_url}}{{endpoint.path}}",
    {% if api.auth.type == "bearer" %}headers={"Authorization": f"Bearer {api_token}"},{% endif %}
    {% if endpoint.method == "POST" or endpoint.method == "PUT" %}json={"key": "value"}{% endif %}
)
print(response.json())
```
{% when "javascript" %}
```javascript
const response = await fetch('{{api.base_url}}{{endpoint.path}}', {
  method: '{{endpoint.method}}',
  {% if api.auth.type == "bearer" %}headers: {
    'Authorization': `Bearer ${apiToken}`,
    {% if endpoint.method == "POST" or endpoint.method == "PUT" %}'Content-Type': 'application/json'{% endif %}
  },{% endif %}
  {% if endpoint.method == "POST" or endpoint.method == "PUT" %}body: JSON.stringify({ key: 'value' }){% endif %}
});
const data = await response.json();
```
{% endcase %}
{% endfor %}
{% endif %}
{% endfor %}
### Complex Conditionals
Use advanced conditional logic:
---
name: smart-optimizer
title: Smart Code Optimizer
description: Applies context-aware optimizations
arguments:
- name: code
description: Code to optimize
required: true
- name: metrics
description: Performance metrics (JSON)
required: false
- name: constraints
description: Optimization constraints
default: "balanced"
---
{% if metrics %}
{% assign perf = metrics | parse_json %}
{% assign needs_memory_opt = false %}
{% assign needs_cpu_opt = false %}
{% if perf.memory_usage > 80 %}
{% assign needs_memory_opt = true %}
{% endif %}
{% if perf.cpu_usage > 70 %}
{% assign needs_cpu_opt = true %}
{% endif %}
{% endif %}
# Optimization Analysis
{% if needs_memory_opt and needs_cpu_opt %}
## Critical: Both Memory and CPU Optimization Needed
Your code is experiencing both memory and CPU pressure. This requires careful optimization to balance both concerns.
### Recommended Strategy: Hybrid Optimization
1. Profile to identify hotspots
2. Optimize algorithms first (reduces both CPU and memory)
3. Implement caching strategically
4. Consider async processing
{% elsif needs_memory_opt %}
## Memory Optimization Required
Current memory usage: {{perf.memory_usage}}%
### Memory Optimization Strategies:
1. Reduce object allocation
2. Use object pooling
3. Implement lazy loading
4. Clear unused references
{% elsif needs_cpu_opt %}
## CPU Optimization Required
Current CPU usage: {{perf.cpu_usage}}%
### CPU Optimization Strategies:
1. Algorithm optimization
2. Parallel processing
3. Caching computed results
4. Reduce unnecessary operations
{% else %}
## Performance is Acceptable
No immediate optimization needed. Consider:
- Code maintainability improvements
- Preemptive optimization for scale
- Documentation updates
{% endif %}
## Code Analysis
{{code}}
{% case constraints %}
{% when "memory-first" %}
Focus on reducing memory footprint, even at slight CPU cost.
{% when "cpu-first" %}
Optimize for CPU performance, memory usage is secondary.
{% when "balanced" %}
Balance both memory and CPU optimizations.
{% endcase %}
State Management
Using Captures for State
Manage complex state across prompt sections:
---
name: migration-planner
title: Database Migration Planner
description: Plans complex database migrations
arguments:
- name: current_schema
description: Current database schema
required: true
- name: target_schema
description: Target database schema
required: true
- name: strategy
description: Migration strategy
default: "safe"
---
{% comment %} Analyze schemas and capture findings {% endcomment %}
{% capture added_tables %}
{% assign current_tables = current_schema | parse_json | map: "name" %}
{% assign target_tables = target_schema | parse_json | map: "name" %}
{% for table in target_tables %}
{% unless current_tables contains table %}
- {{table}}
{% endunless %}
{% endfor %}
{% endcapture %}
{% capture removed_tables %}
{% for table in current_tables %}
{% unless target_tables contains table %}
- {{table}}
{% endunless %}
{% endfor %}
{% endcapture %}
{% capture migration_risk %}
{% if removed_tables contains "users" or removed_tables contains "auth" %}
HIGH - Critical tables being removed
{% elsif added_tables.size > 5 %}
MEDIUM - Large number of new tables
{% else %}
LOW - Minimal structural changes
{% endif %}
{% endcapture %}
# Database Migration Plan
## Risk Assessment: {{migration_risk | strip}}
## Changes Summary
### New Tables
{{added_tables | default: "None"}}
### Removed Tables
{{removed_tables | default: "None"}}
## Migration Strategy: {{strategy | upcase}}
{% if strategy == "safe" %}
### Safe Migration Steps
1. Create backup
2. Add new tables first
3. Migrate data with validation
4. Update application code
5. Remove old tables after verification
{% elsif strategy == "fast" %}
### Fast Migration Steps
1. Quick backup
2. Execute all changes in transaction
3. Minimal validation
4. Quick rollback if needed
{% elsif strategy == "zero-downtime" %}
### Zero-Downtime Migration Steps
1. Create new tables alongside old
2. Implement dual-write logic
3. Backfill data progressively
4. Switch reads to new tables
5. Remove old tables after stabilization
{% endif %}
{% if migration_risk contains "HIGH" %}
## ⚠️ High Risk Mitigation
Due to the high risk nature of this migration:
1. Schedule during maintenance window
2. Have rollback plan ready
3. Test in staging environment first
4. Monitor closely after deployment
{% endif %}
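The capture blocks above are computing a set difference over table names. The equivalent logic in Rust, as a sketch (`diff_tables` is an illustrative helper, not library API):

```rust
use std::collections::HashSet;

// Sketch of the added/removed-table diff the template computes with
// captures: plain set differences over table names.
fn diff_tables<'a>(
    current: &[&'a str],
    target: &[&'a str],
) -> (Vec<&'a str>, Vec<&'a str>) {
    let current_set: HashSet<_> = current.iter().copied().collect();
    let target_set: HashSet<_> = target.iter().copied().collect();
    let mut added: Vec<_> = target
        .iter()
        .copied()
        .filter(|t| !current_set.contains(t))
        .collect();
    let mut removed: Vec<_> = current
        .iter()
        .copied()
        .filter(|t| !target_set.contains(t))
        .collect();
    added.sort();
    removed.sort();
    (added, removed)
}

fn main() {
    let (added, removed) = diff_tables(&["users", "orders"], &["users", "audit_log"]);
    println!("added: {:?}, removed: {:?}", added, removed);
    // A removed "users" or "auth" table would flag HIGH migration risk.
}
```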
Performance Optimization
Lazy Evaluation
Use lazy evaluation for expensive operations:
---
name: smart-analyzer
title: Smart Performance Analyzer
description: Analyzes code with lazy evaluation
arguments:
- name: code
description: Code to analyze
required: true
- name: quick_check
description: Perform quick check only
default: "false"
---
# Code Analysis
{% if quick_check == "true" %}
## Quick Analysis
- Lines: {{code | split: "\n" | size}}
- Complexity: {{code | size | divided_by: 100}} (estimated)
{% else %}
{% comment %} Full analysis only when needed {% endcomment %}
{% capture complexity_analysis %}
{% assign lines = code | split: "\n" %}
{% assign complexity = 0 %}
{% for line in lines %}
{% if line contains "if " or line contains "for " or line contains "while " %}
{% assign complexity = complexity | plus: 1 %}
{% endif %}
{% endfor %}
Cyclomatic Complexity: {{complexity}}
{% endcapture %}
{% capture pattern_analysis %}
{% if code contains "TODO" or code contains "FIXME" %}
- Contains pending work items
{% endif %}
{% if code contains "console.log" or code contains "print(" %}
- Contains debug output
{% endif %}
{% endcapture %}
## Full Analysis
### Metrics
{{complexity_analysis}}
### Code Patterns
{{pattern_analysis | default: "No issues found"}}
### Detailed Review
Analyze the code for:
1. Performance bottlenecks
2. Security vulnerabilities
3. Best practice violations
{{code}}
{% endif %}
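The complexity loop above counts lines containing branch keywords. The same estimate in Rust, for reference (a rough heuristic, just like the template's):

```rust
// Sketch of the keyword-counting complexity estimate used in the
// template: count lines containing branch or loop keywords.
fn estimate_complexity(code: &str) -> usize {
    code.lines()
        .filter(|line| {
            line.contains("if ") || line.contains("for ") || line.contains("while ")
        })
        .count()
}

fn main() {
    let code = "for i in items:\n    if i > 0:\n        print(i)\n";
    println!("estimated complexity: {}", estimate_complexity(code));
}
```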
Caching Computed Values
Cache expensive computations:
---
name: data-processor
title: Efficient Data Processor
description: Processes data with caching
arguments:
- name: data
description: Data to process (CSV or JSON)
required: true
- name: operations
description: Operations to perform
required: true
---
{% comment %} Cache parsed data {% endcomment %}
{% assign is_json = false %}
{% assign is_csv = false %}
{% if data contains "{" and data contains "}" %}
{% assign is_json = true %}
{% assign parsed_data = data | parse_json %}
{% elsif data contains "," %}
{% assign is_csv = true %}
{% comment %} Cache row count {% endcomment %}
{% assign rows = data | split: "\n" %}
{% assign row_count = rows | size %}
{% endif %}
# Data Processing
## Data Format: {% if is_json %}JSON{% elsif is_csv %}CSV ({{row_count}} rows){% else %}Unknown{% endif %}
{% comment %} Reuse cached values {% endcomment %}
{% for operation in operations %}
{% case operation %}
{% when "count" %}
- Count: {% if is_json %}{{parsed_data | size}}{% else %}{{row_count}}{% endif %}
{% when "validate" %}
- Validation: {% if is_json %}Valid JSON{% elsif is_csv %}Valid CSV{% endif %}
{% endcase %}
{% endfor %}
Error Handling
Graceful Degradation
Handle errors gracefully:
---
name: robust-analyzer
title: Robust Code Analyzer
description: Analyzes code with error handling
arguments:
- name: code
description: Code to analyze
required: true
- name: language
description: Programming language
default: "auto"
---
# Code Analysis
{% comment %} Safe language detection {% endcomment %}
{% assign detected_language = "unknown" %}
{% if language == "auto" %}
{% if code contains "def " and code contains ":" %}
{% assign detected_language = "python" %}
{% elsif code contains "function" or code contains "const " %}
{% assign detected_language = "javascript" %}
{% elsif code contains "fn " and code contains "->" %}
{% assign detected_language = "rust" %}
{% endif %}
{% else %}
{% assign detected_language = language %}
{% endif %}
## Language: {{detected_language | capitalize}}
{% comment %} Safe parsing with fallbacks {% endcomment %}
{% assign parse_success = false %}
{% capture parsed_structure %}
{% if detected_language == "python" %}
{% comment %} Python-specific parsing {% endcomment %}
{% assign functions = code | split: "def " | size | minus: 1 %}
{% assign classes = code | split: "class " | size | minus: 1 %}
Functions: {{functions}}, Classes: {{classes}}
{% assign parse_success = true %}
{% elsif detected_language == "javascript" %}
{% comment %} JavaScript-specific parsing {% endcomment %}
{% assign functions = code | split: "function" | size | minus: 1 %}
{% assign arrows = code | split: "=>" | size | minus: 1 %}
Functions: {{functions | plus: arrows}}
{% assign parse_success = true %}
{% endif %}
{% endcapture %}
{% if parse_success %}
## Structure Analysis
{{parsed_structure}}
{% else %}
## Basic Analysis
Unable to parse structure for {{detected_language}}.
Falling back to general analysis:
- Lines: {{code | split: "\n" | size}}
- Characters: {{code | size}}
{% endif %}
## Code Review
Analyze the following {{detected_language}} code:
```{{detected_language}}
{{code}}
```
### Input Validation
Validate and sanitize inputs:
---
name: secure-processor
title: Secure Input Processor
description: Processes inputs with validation
arguments:
- name: user_input
description: User-provided input
required: true
- name: input_type
description: Expected input type
required: true
- name: max_length
description: Maximum allowed length
default: "1000"
---
{% comment %} Input validation {% endcomment %}
{% assign is_valid = true %}
{% assign validation_errors = "" %}
{% comment %} Length check {% endcomment %}
{% assign input_length = user_input | size %}
{% if input_length > max_length %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Input exceeds maximum length ({{input_length}} > {{max_length}}){% endcapture %}
{% endif %}
{% comment %} Type validation {% endcomment %}
{% case input_type %}
{% when "email" %}
{% unless user_input contains "@" and user_input contains "." %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Invalid email format{% endcapture %}
{% endunless %}
{% when "number" %}
{% assign test_number = user_input | plus: 0 %}
{% if test_number == 0 and user_input != "0" %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Input is not a valid number{% endcapture %}
{% endif %}
{% when "json" %}
{% capture json_test %}{{user_input | parse_json}}{% endcapture %}
{% unless json_test %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Invalid JSON format{% endcapture %}
{% endunless %}
{% endcase %}
# Input Processing Result
## Validation: {% if is_valid %}✅ Passed{% else %}❌ Failed{% endif %}
{% unless is_valid %}
## Validation Errors:
{{validation_errors}}
{% endunless %}
{% if is_valid %}
## Processing Input
Type: {{input_type}}
Length: {{input_length}} characters
### Sanitized Input:
{{user_input | strip | escape}}
### Next Steps:
Process the validated {{input_type}} input according to business logic.
{% else %}
## Cannot Process Invalid Input
Please fix the validation errors and try again.
{% endif %}
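The validation rules above (a length cap plus a type-specific format check) translate directly to host-language code. A sketch in Rust, with `validate_input` as an illustrative helper and the same deliberately simple heuristics as the template:

```rust
// Sketch of the template's validation rules: length cap plus a
// type-specific format check. Heuristics only, mirroring the Liquid logic.
fn validate_input(input: &str, input_type: &str, max_length: usize) -> Vec<String> {
    let mut errors = Vec::new();
    if input.len() > max_length {
        errors.push(format!(
            "input exceeds maximum length ({} > {})",
            input.len(),
            max_length
        ));
    }
    match input_type {
        "email" => {
            // Same crude check as the template: must contain '@' and '.'
            if !(input.contains('@') && input.contains('.')) {
                errors.push("invalid email format".to_string());
            }
        }
        "number" => {
            if input.parse::<f64>().is_err() {
                errors.push("input is not a valid number".to_string());
            }
        }
        _ => {}
    }
    errors
}

fn main() {
    let errors = validate_input("not-an-email", "email", 1000);
    println!("valid: {}, errors: {:?}", errors.is_empty(), errors);
}
```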
Integration Patterns
External Tool Integration
Integrate with external tools and services:
---
name: ci-cd-analyzer
title: CI/CD Pipeline Analyzer
description: Analyzes CI/CD configurations
arguments:
- name: pipeline_config
description: CI/CD configuration file
required: true
- name: platform
description: CI/CD platform (github, gitlab, jenkins)
required: true
- name: recommendations
description: Include recommendations
default: "true"
---
# CI/CD Pipeline Analysis
Platform: {{platform | capitalize}}
{% assign config = pipeline_config %}
## Pipeline Structure
{% case platform %}
{% when "github" %}
{% if config contains "on:" %}
### Triggers
- Configured triggers found
{% if config contains "push:" %}✓ Push events{% endif %}
{% if config contains "pull_request:" %}✓ Pull request events{% endif %}
{% if config contains "schedule:" %}✓ Scheduled runs{% endif %}
{% endif %}
{% if config contains "jobs:" %}
### Jobs
{% assign job_count = config | split: "jobs:" | last | split: ":" | size %}
- Number of jobs: ~{{job_count}}
{% endif %}
{% when "gitlab" %}
{% if config contains "stages:" %}
### Stages
- Pipeline stages defined
{% endif %}
{% if config contains "before_script:" %}
### Global Configuration
- Global before_script found
{% endif %}
{% when "jenkins" %}
{% if config contains "pipeline {" %}
### Pipeline Type
- Declarative pipeline
{% elsif config contains "node {" %}
- Scripted pipeline
{% endif %}
{% endcase %}
## Security Analysis
{% capture security_issues %}
{% if config contains "secrets." or config contains "${{" %}
- ✓ Uses secure secret management
{% endif %}
{% if config contains "password" or config contains "api_key" %}
- ⚠️ Possible hardcoded credentials
{% endif %}
{% if platform == "github" and config contains "actions/checkout" %}
{% unless config contains "actions/checkout@v" %}
- ⚠️ Using unpinned actions
{% endunless %}
{% endif %}
{% endcapture %}
{{security_issues | default: "No security issues found"}}
{% if recommendations == "true" %}
## Recommendations
{% case platform %}
{% when "github" %}
1. Use specific action versions (e.g., `actions/checkout@v3`)
2. Implement job dependencies for efficiency
3. Use matrix builds for multiple versions
4. Cache dependencies for faster builds
{% when "gitlab" %}
1. Use DAG for job dependencies
2. Implement proper stage dependencies
3. Use artifacts for job communication
4. Enable pipeline caching
{% when "jenkins" %}
1. Use declarative pipeline syntax
2. Implement proper error handling
3. Use Jenkins shared libraries
4. Enable pipeline visualization
{% endcase %}
### General Best Practices
- Implement proper testing stages
- Add security scanning steps
- Use parallel execution where possible
- Monitor pipeline metrics
{% endif %}
## Raw Configuration
```yaml
{{pipeline_config}}
```
## Advanced Examples
### Multi-Stage Document Generator
---
name: tech-doc-generator
title: Technical Documentation Generator
description: Generates comprehensive technical documentation
arguments:
- name: project_info
description: Project information (JSON)
required: true
- name: doc_sections
description: Sections to include (comma-separated)
default: "overview,architecture,api,deployment"
- name: audience
description: Target audience
default: "developers"
---
{% assign project = project_info | parse_json %}
{% assign sections = doc_sections | split: "," %}
# {{project.name}} Technical Documentation
Version: {{project.version}}
Last Updated: {% assign date = 'now' | date: "%B %d, %Y" %}{{date}}
{% for section in sections %}
{% case section | strip %}
{% when "overview" %}
## Overview
{{project.description}}
### Key Features
{% for feature in project.features %}
- **{{feature.name}}**: {{feature.description}}
{% endfor %}
### Technology Stack
{% for tech in project.stack %}
- {{tech.name}} ({{tech.version}}) - {{tech.purpose}}
{% endfor %}
{% when "architecture" %}
## Architecture
### System Components
{% for component in project.components %}
#### {{component.name}}
- **Type**: {{component.type}}
- **Responsibility**: {{component.responsibility}}
- **Dependencies**: {% for dep in component.dependencies %}{{dep}}{% unless forloop.last %}, {% endunless %}{% endfor %}
{% endfor %}
### Data Flow
{% for flow in project.dataflows %}
- {{flow.source}} → {{flow.destination}}: {{flow.description}}
{% endfor %}
{% when "api" %}
## API Reference
Base URL: `{{project.api.base_url}}`
### Authentication
{{project.api.auth.description}}
### Endpoints
{% for endpoint in project.api.endpoints %}
#### {{endpoint.method}} {{endpoint.path}}
{{endpoint.description}}
**Parameters:**
{% for param in endpoint.parameters %}
- `{{param.name}}` ({{param.type}}{% if param.required %}, required{% endif %}) - {{param.description}}
{% endfor %}
**Response:** {{endpoint.response.description}}
{% endfor %}
{% when "deployment" %}
## Deployment Guide
### Prerequisites
{% for prereq in project.deployment.prerequisites %}
- {{prereq}}
{% endfor %}
### Environment Variables
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
{% for env in project.deployment.env_vars %}
| {{env.name}} | {{env.description}} | {{env.required}} | {{env.default | default: "none"}} |
{% endfor %}
### Deployment Steps
{% for step in project.deployment.steps %}
{{forloop.index}}. {{step.description}}
```bash
{{step.command}}
```
{% endfor %}
{% endcase %}
{% endfor %}
Generated for {{audience}} by SwissArmyHammer
### Intelligent Code Refactoring Assistant
---
name: refactoring-assistant
title: Intelligent Refactoring Assistant
description: Provides context-aware refactoring suggestions
arguments:
- name: code
description: Code to refactor
required: true
- name: code_metrics
description: Code metrics (JSON)
required: false
- name: refactor_goals
description: Refactoring goals (comma-separated)
default: "readability,maintainability,performance"
- name: preserve_behavior
description: Ensure behavior preservation
default: "true"
---
{% if code_metrics %}
{% assign metrics = code_metrics | parse_json %}
{% endif %}
# Refactoring Analysis
## Current Code Metrics
{% if metrics %}
- Complexity: {{metrics.complexity}}
- Lines: {{metrics.lines}}
- Duplication: {{metrics.duplication}}%
- Test Coverage: {{metrics.coverage}}%
{% else %}
- Lines: {{code | split: "\n" | size}}
{% endif %}
## Refactoring Goals
{% assign goals = refactor_goals | split: "," %}
{% for goal in goals %}
- {{goal | strip | capitalize}}
{% endfor %}
## Analysis
{{code}}
{% capture refactoring_plan %}
{% for goal in goals %}
{% case goal | strip %}
{% when "readability" %}
### Readability Improvements
1. Extract complex conditionals into well-named functions
2. Replace magic numbers with named constants
3. Improve variable and function names
4. Add clarifying comments for complex logic
{% when "maintainability" %}
### Maintainability Enhancements
1. Apply SOLID principles
2. Reduce coupling between components
3. Extract reusable components
4. Improve error handling
{% when "performance" %}
### Performance Optimizations
1. Identify and optimize bottlenecks
2. Reduce unnecessary iterations
3. Implement caching where appropriate
4. Optimize data structures
{% when "testability" %}
### Testability Improvements
1. Extract pure functions
2. Reduce dependencies
3. Implement dependency injection
4. Separate business logic from I/O
{% endcase %}
{% endfor %}
{% endcapture %}
{{refactoring_plan}}
{% if preserve_behavior == "true" %}
## Behavior Preservation Strategy
To ensure the refactoring preserves behavior:
1. **Write characterization tests** before refactoring
2. **Refactor in small steps** with tests passing
3. **Use automated refactoring tools** where possible
4. **Compare outputs** before and after changes
### Suggested Test Cases
Based on the code analysis, ensure tests cover:
- Edge cases and boundary conditions
- Error handling paths
- Main business logic flows
- Integration points
{% endif %}
## Refactoring Priority
{% if metrics %}
{% if metrics.complexity > 10 %}
**High Priority**: Reduce complexity first - current complexity of {{metrics.complexity}} is too high
{% elsif metrics.duplication > 20 %}
**High Priority**: Address code duplication - {{metrics.duplication}}% duplication detected
{% elsif metrics.coverage < 60 %}
**High Priority**: Improve test coverage before refactoring - only {{metrics.coverage}}% covered
{% else %}
**Normal Priority**: Code is in reasonable shape for refactoring
{% endif %}
{% else %}
Based on initial analysis, focus on readability and structure improvements.
{% endif %}
## Next Steps
1. Review the refactoring plan
2. Set up safety nets (tests, version control)
3. Apply refactorings incrementally
4. Validate behavior preservation
5. Update documentation
Best Practices
1. Use Meaningful Variable Names
{% comment %} Bad {% endcomment %}
{% assign x = data | split: "," %}
{% comment %} Good {% endcomment %}
{% assign csv_rows = data | split: "," %}
2. Cache Expensive Operations
{% comment %} Cache parsed data {% endcomment %}
{% assign parsed_json = data | parse_json %}
{% comment %} Reuse parsed_json multiple times {% endcomment %}
3. Provide Fallbacks
{{variable | default: "No value provided"}}
4. Use Comments for Complex Logic
{% comment %}
Check if the code is Python by looking for specific syntax
This is more reliable than file extension alone
{% endcomment %}
{% if code contains "def " and code contains ":" %}
{% assign language = "python" %}
{% endif %}
5. Modularize with Captures
{% capture header %}
# {{title}}
Generated on: {{date}}
{% endcapture %}
{% comment %} Reuse header in multiple places {% endcomment %}
{{header}}
Next Steps
- Explore Custom Filters for extending functionality
- Learn about Prompt Organization for managing complex prompts
- See Examples for more real-world scenarios
- Read Template Variables for Liquid syntax reference
Issue Management User Guide
This guide covers the complete workflow for managing issues in SwissArmyHammer.
Overview
SwissArmyHammer's issue management system provides a lightweight, git-friendly way to track work items directly in your repository. Issues are stored as markdown files with sequential numbering and can be managed through both MCP tools and CLI commands.
Table of Contents
- Overview
- Directory Structure
- Workflow Patterns
- MCP Tools Reference
- CLI Commands Reference
- Best Practices
- Troubleshooting
- Integration Examples
- Advanced Usage
- Configuration
- API Reference
- Performance Optimization
- Migration Guide
- Contributing
- License
Directory Structure
your-project/
├── issues/
│   ├── 000001_implement_auth.md    # Active issue
│   ├── 000002_fix_bug.md           # Active issue
│   └── complete/
│       └── 000003_add_tests.md     # Completed issue
├── .git/
└── your-code/
Workflow Patterns
Basic Workflow
- Create Issue: Define what needs to be done
- Work on Issue: Switch to dedicated branch
- Update Progress: Document work as you go
- Mark Complete: Signal work is finished
- Merge: Integrate work into main branch
Advanced Workflow
- Project Planning: Create multiple issues for features
- Priority Management: Work on issues in order
- Progress Tracking: Regular updates and status checks
- Team Coordination: Use issue status for team awareness
- Historical Record: Completed issues provide project history
MCP Tools Reference
issue_create
Create a new issue with automatic numbering.
Parameters:
- `name` (required): Issue name used for the filename
- `content` (required): Markdown content
Example:
{
"tool": "issue_create",
"arguments": {
"name": "implement_dashboard",
"content": "# Dashboard Implementation\n\nCreate a user dashboard with the following features:\n- User profile display\n- Activity feed\n- Quick actions"
}
}
issue_work
Start working on an issue by creating and switching to a work branch.
Parameters:
- `number` (required): Issue number to work on
Example:
{
"tool": "issue_work",
"arguments": {
"number": 1
}
}
issue_update
Update an existing issue's content.
Parameters:
- `number` (required): Issue number to update
- `content` (required): New or additional content
- `append` (optional): If true, append to existing content
Example:
{
"tool": "issue_update",
"arguments": {
"number": 1,
"content": "## Progress Update\n\nCompleted user profile display component.",
"append": true
}
}
issue_mark_complete
Mark an issue as complete, moving it to the completed directory.
Parameters:
- `number` (required): Issue number to complete
Example:
{
"tool": "issue_mark_complete",
"arguments": {
"number": 1
}
}
issue_current
Get the current issue based on the active git branch.
Parameters:
- `branch` (optional): Specific branch to check
Example:
{
"tool": "issue_current",
"arguments": {}
}
issue_all_complete
Check if all issues in the project are completed.
Parameters: None
Example:
{
"tool": "issue_all_complete",
"arguments": {}
}
issue_merge
Merge completed issue work back to the main branch.
Parameters:
- `number` (required): Issue number to merge
Example:
{
"tool": "issue_merge",
"arguments": {
"number": 1
}
}
CLI Commands Reference
Create Issues
# Create with inline content
swissarmyhammer issue create "fix_login_bug" --content "Fix the login validation issue"
# Create with content from file
swissarmyhammer issue create "add_feature" --file feature_spec.md
# Create with content from stdin
echo "Issue content" | swissarmyhammer issue create "my_issue" --content -
List Issues
# List all issues
swissarmyhammer issue list
# List only active issues
swissarmyhammer issue list --active
# List only completed issues
swissarmyhammer issue list --completed
# Output as JSON
swissarmyhammer issue list --format json
# Output as Markdown
swissarmyhammer issue list --format markdown
Show Issue Details
# Show formatted issue details
swissarmyhammer issue show 1
# Show raw content only
swissarmyhammer issue show 1 --raw
Update Issues
# Replace content
swissarmyhammer issue update 1 --content "New content"
# Append to existing content
swissarmyhammer issue update 1 --content "Additional notes" --append
# Update from file
swissarmyhammer issue update 1 --file update.md --append
Work on Issues
# Start working on an issue
swissarmyhammer issue work 1
# Check current issue
swissarmyhammer issue current
# Show project status
swissarmyhammer issue status
Complete and Merge
# Mark issue complete
swissarmyhammer issue complete 1
# Merge to main branch
swissarmyhammer issue merge 1
# Merge and keep branch
swissarmyhammer issue merge 1 --keep-branch
Best Practices
Issue Naming
- Use descriptive, action-oriented names
- Use underscores instead of spaces
- Keep names under 50 characters
- Examples: implement_auth, fix_login_bug, add_user_dashboard
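The naming conventions above are easy to automate. A small sketch (the helper name `slugify_issue_name` is illustrative, not part of the CLI):

```shell
# Turn a free-form title into a compliant issue name:
# lowercase, underscores instead of spaces/punctuation, max 50 characters.
slugify_issue_name() {
  echo "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9]/_/g; s/__*/_/g; s/^_//; s/_$//' \
    | cut -c1-50
}

slugify_issue_name "Fix Login Bug!"
# -> fix_login_bug
```

This can then feed `swissarmyhammer issue create "$(slugify_issue_name "$title")" ...` so issue names stay consistent across a team.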
Issue Content
- Use markdown for formatting
- Include acceptance criteria
- Add relevant links and references
- Update with progress notes
- Keep content focused and organized
Git Workflow
- Always work on issue branches
- Make atomic commits with clear messages
- Keep branches up to date with main
- Complete issues before merging
- Use descriptive commit messages
Team Coordination
- Check issue status before starting work
- Update issues with progress regularly
- Use issue comments for team communication
- Mark issues complete when work is done
- Review completed issues in team meetings
Troubleshooting
Git Repository Issues
Problem: "Not in a git repository" Solution: Initialize a git repository in your project directory:
git init
git add .
git commit -m "Initial commit"
Problem: "Uncommitted changes prevent operation" Solution: Commit or stash your changes:
git add .
git commit -m "Work in progress"
# or
git stash
Issue Management Issues
Problem: "Issue not found" Solution: Check available issues:
swissarmyhammer issue list
Problem: "Branch already exists" Solution: Switch to the existing branch or delete it:
git checkout issue/000001_issue_name
# or
git branch -D issue/000001_issue_name
Performance Issues
Problem: Slow issue operations with many issues Solution: Use filtering and pagination:
swissarmyhammer issue list --active
swissarmyhammer issue list --format json | jq '.[] | select(.completed == false)'
Integration Examples
With Claude Code
Human: Create an issue to implement user authentication
Assistant: I'll create an issue to track implementing user authentication.
issue_create name="implement_auth" content="# User Authentication Implementation
## Requirements
- JWT-based authentication
- Login/logout endpoints
- Password hashing with bcrypt
- Session management
- Role-based access control
## Acceptance Criteria
- [ ] User registration endpoint
- [ ] Login endpoint with JWT token generation
- [ ] Password validation and hashing
- [ ] Protected routes middleware
- [ ] Role-based permissions
- [ ] Session expiration handling
- [ ] Unit tests for all auth components
"
With GitHub Actions
name: Issue Management Workflow
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
check-issues:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup SwissArmyHammer
run: |
# Install SwissArmyHammer
curl -sSL https://install.swissarmyhammer.dev | bash
- name: Check issue status
run: |
swissarmyhammer issue status
- name: Validate issue files
run: |
swissarmyhammer validate --path issues/
With VS Code
Create a .vscode/tasks.json file:
{
"version": "2.0.0",
"tasks": [
{
"label": "Create Issue",
"type": "shell",
"command": "swissarmyhammer",
"args": ["issue", "create", "${input:issueName}", "--content", "${input:issueContent}"],
"group": "build",
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared"
}
},
{
"label": "List Issues",
"type": "shell",
"command": "swissarmyhammer",
"args": ["issue", "list"],
"group": "build"
},
{
"label": "Current Issue",
"type": "shell",
"command": "swissarmyhammer",
"args": ["issue", "current"],
"group": "build"
}
],
"inputs": [
{
"id": "issueName",
"description": "Issue name",
"default": "new_issue",
"type": "promptString"
},
{
"id": "issueContent",
"description": "Issue content",
"default": "# New Issue\n\nDescribe the issue here.",
"type": "promptString"
}
]
}
Advanced Usage
Batch Operations
# Create multiple issues from a CSV file
cat issues.csv | while IFS=',' read -r name content; do
swissarmyhammer issue create "$name" --content "$content"
done
# Update all active issues with a common note
swissarmyhammer issue list --active --format json | \
jq -r '.[] | .number' | \
while read -r number; do
swissarmyhammer issue update "$number" --content "\n\n**Updated:** $(date)" --append
done
Custom Workflows
#!/bin/bash
# smart-issue-create.sh - Smart issue creation with templates
ISSUE_NAME="$1"
ISSUE_TYPE="$2"
case "$ISSUE_TYPE" in
"feature")
TEMPLATE="# Feature: $ISSUE_NAME
## Overview
[Brief description of the feature]
## Requirements
- [ ] Requirement 1
- [ ] Requirement 2
## Acceptance Criteria
- [ ] Criteria 1
- [ ] Criteria 2
## Implementation Notes
[Technical notes and considerations]
"
;;
"bug")
TEMPLATE="# Bug Fix: $ISSUE_NAME
## Problem
[Describe the bug]
## Steps to Reproduce
1. Step 1
2. Step 2
3. Step 3
## Expected Behavior
[What should happen]
## Actual Behavior
[What actually happens]
## Fix Strategy
[How to fix it]
"
;;
*)
TEMPLATE="# $ISSUE_NAME
[Issue description]
"
;;
esac
swissarmyhammer issue create "$ISSUE_NAME" --content "$TEMPLATE"
Issue Analytics
#!/bin/bash
# issue-analytics.sh - Generate issue statistics
echo "=== Issue Analytics ==="
echo ""
# Count active issues
ACTIVE_COUNT=$(swissarmyhammer issue list --active --format json | jq length)
echo "Active Issues: $ACTIVE_COUNT"
# Count completed issues
COMPLETED_COUNT=$(swissarmyhammer issue list --completed --format json | jq length)
echo "Completed Issues: $COMPLETED_COUNT"
# Calculate completion rate
TOTAL_COUNT=$((ACTIVE_COUNT + COMPLETED_COUNT))
if [ $TOTAL_COUNT -gt 0 ]; then
COMPLETION_RATE=$((COMPLETED_COUNT * 100 / TOTAL_COUNT))
echo "Completion Rate: $COMPLETION_RATE%"
fi
# Show recent activity
echo ""
echo "=== Recent Activity ==="
swissarmyhammer issue list --completed --format json | \
jq -r '.[] | select(.completed_at != null) | "\(.number): \(.name) (completed: \(.completed_at))"' | \
sort -k3 -r | head -5
Configuration
Global Configuration
Create ~/.swissarmyhammer/config.toml:
[issues]
# Default issue directory
directory = "./issues"
# Default branch prefix for issue work
branch_prefix = "issue"
# Auto-delete branches after merge
auto_delete_branches = true
# Default issue template
template = """
# {name}
## Overview
[Brief description]
## Tasks
- [ ] Task 1
- [ ] Task 2
## Notes
[Additional notes]
"""
[git]
# Default commit message template
commit_template = "{action}: {issue_name} - {description}"
# Auto-commit issue updates
auto_commit = false
Project Configuration
Create .swissarmyhammer/config.toml in your project:
[issues]
# Project-specific issue directory
directory = "./project-issues"
# Custom issue number format
number_format = "PRJ-{:06d}"
# Issue categories
categories = ["bug", "feature", "enhancement", "documentation"]
# Required fields
required_fields = ["assignee", "priority", "category"]
[workflow]
# Required before marking complete
completion_checklist = [
"Code written and tested",
"Documentation updated",
"Tests passing",
"Code reviewed"
]
API Reference
Issue File Format
Issues are stored as markdown files with optional YAML frontmatter:
---
title: "Fix login bug"
assignee: "john@example.com"
priority: "high"
category: "bug"
created_at: "2024-01-15T10:30:00Z"
updated_at: "2024-01-15T14:45:00Z"
---
# Fix Login Bug
## Problem
Users cannot log in with special characters in their passwords.
## Root Cause
Password validation is too strict and rejects valid special characters.
## Solution
Update the password validation regex to allow all printable ASCII characters.
## Implementation
- [ ] Update validation regex in `auth.rs:45`
- [ ] Add unit tests for special character passwords
- [ ] Update documentation
## Testing
- [ ] Test with various special characters
- [ ] Test with existing passwords
- [ ] Test edge cases
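Because issues are plain markdown with YAML front matter, standard text tools can read the metadata directly. A hedged sketch using only `sed` and `grep` (the file path and field values here are illustrative):

```shell
# Write an example issue file (content mirrors the format shown above).
cat > /tmp/example_issue.md <<'EOF'
---
title: "Fix login bug"
priority: "high"
category: "bug"
---
# Fix Login Bug
EOF

# Print the front matter block, then pull out a single field's value.
sed -n '/^---$/,/^---$/p' /tmp/example_issue.md \
  | grep '^priority:' \
  | sed 's/^priority:[[:space:]]*//; s/"//g'
# -> high
```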
Exit Codes
| Code | Description |
|------|-------------|
| 0 | Success |
| 1 | General error |
| 2 | Invalid arguments |
| 3 | File not found |
| 4 | Git error |
| 5 | Permission denied |
| 10 | Issue not found |
| 11 | Issue already exists |
| 12 | Invalid issue state |
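These codes can drive error handling in wrapper scripts. A minimal sketch (the `describe_exit_code` helper is illustrative, not part of the CLI; the mapping mirrors the table above):

```shell
# Map a SwissArmyHammer exit code to a human-readable message.
describe_exit_code() {
  case "$1" in
    0)  echo "Success" ;;
    1)  echo "General error" ;;
    2)  echo "Invalid arguments" ;;
    3)  echo "File not found" ;;
    4)  echo "Git error" ;;
    5)  echo "Permission denied" ;;
    10) echo "Issue not found" ;;
    11) echo "Issue already exists" ;;
    12) echo "Invalid issue state" ;;
    *)  echo "Unknown exit code: $1" ;;
  esac
}

describe_exit_code 10
# -> Issue not found
```

In a script you would typically capture `$?` right after a `swissarmyhammer` call and pass it to a helper like this.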
Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| SWISSARMYHAMMER_ISSUES_DIR | Issue directory | ./issues |
| SWISSARMYHAMMER_BRANCH_PREFIX | Branch prefix | issue |
| SWISSARMYHAMMER_AUTO_DELETE_BRANCHES | Auto-delete branches | true |
| SWISSARMYHAMMER_EDITOR | Default editor | $EDITOR |
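To override the defaults for a single shell session, export the variables before running the CLI (variable names from the table above; the values here are illustrative):

```shell
# Point issue storage at a custom directory and change the branch prefix.
export SWISSARMYHAMMER_ISSUES_DIR="./tracking/issues"
export SWISSARMYHAMMER_BRANCH_PREFIX="task"
export SWISSARMYHAMMER_AUTO_DELETE_BRANCHES=false

echo "$SWISSARMYHAMMER_ISSUES_DIR"
# -> ./tracking/issues
```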
Rust API Types
For programmatic access to issue management functionality, the following types are available:
Core Types
- Issue - Represents a single issue with metadata
- IssueState - Issue lifecycle state (Active, Complete, etc.)
- IssueStorage - Storage abstraction for issues
Storage Implementations
- FileSystemIssueStorage - File-based storage backend
- InstrumentedIssueStorage - Instrumented wrapper with metrics
Utility Types
- ProjectStatus - Project-wide issue status information
- IssueBranchResult - Result of branch operations
- IssueMergeResult - Result of merge operations
- PerformanceMetrics - Performance monitoring data
Utility Functions
- get_current_issue_from_branch - Extract the issue from a branch name
- work_on_issue - Switch to an issue work branch
- merge_issue_branch - Merge issue work back to main
- get_next_issue - Find the next issue to work on
See Also:
- API Guide - Comprehensive API usage guide
- Library Examples - Practical code examples
- Rustdoc API Documentation - Complete API reference
Performance Optimization
Large Projects
For projects with many issues, consider:
- Use filtering: Always use the --active or --completed flags
- JSON output: Use --format json for programmatic access
- Batch operations: Group multiple operations together
- Index optimization: Keep issue files small and focused
Memory Usage
# Monitor memory usage during large operations
time swissarmyhammer issue list --format json > /dev/null
# Use streaming for large datasets
swissarmyhammer issue list --format json | jq -c '.[]' | while read -r issue; do
# Process each issue individually
echo "$issue" | jq -r '.name'
done
Migration Guide
From GitHub Issues
#!/bin/bash
# migrate-from-github.sh - Migrate GitHub issues to SwissArmyHammer
REPO="owner/repo"
TOKEN="your-github-token"
# Export GitHub issues
gh issue list --repo "$REPO" --state all --json number,title,body,state | \
jq -r '.[] | @base64' | \
while read -r issue; do
# Decode issue data
ISSUE_DATA=$(echo "$issue" | base64 -d)
NUMBER=$(echo "$ISSUE_DATA" | jq -r '.number')
TITLE=$(echo "$ISSUE_DATA" | jq -r '.title')
BODY=$(echo "$ISSUE_DATA" | jq -r '.body')
STATE=$(echo "$ISSUE_DATA" | jq -r '.state')
# Create SwissArmyHammer issue
ISSUE_NAME=$(echo "$TITLE" | sed 's/[^a-zA-Z0-9]/_/g' | tr '[:upper:]' '[:lower:]')
CONTENT="# $TITLE
$BODY
---
*Migrated from GitHub Issue #$NUMBER*
"
swissarmyhammer issue create "$ISSUE_NAME" --content "$CONTENT"
# Mark completed issues as complete
if [ "$STATE" = "closed" ]; then
ISSUE_NUM=$(swissarmyhammer issue list --format json | jq -r '.[] | select(.name == "'$ISSUE_NAME'") | .number')
swissarmyhammer issue complete "$ISSUE_NUM"
fi
done
From Jira
#!/bin/bash
# migrate-from-jira.sh - Migrate Jira issues to SwissArmyHammer
JIRA_URL="https://your-domain.atlassian.net"
PROJECT="YOUR-PROJECT"
EMAIL="your-email@example.com"
TOKEN="your-jira-token"
# Export Jira issues
curl -u "$EMAIL:$TOKEN" \
"$JIRA_URL/rest/api/2/search?jql=project=$PROJECT&maxResults=1000" | \
jq -r '.issues[] | @base64' | \
while read -r issue; do
# Decode and process issue data
ISSUE_DATA=$(echo "$issue" | base64 -d)
KEY=$(echo "$ISSUE_DATA" | jq -r '.key')
SUMMARY=$(echo "$ISSUE_DATA" | jq -r '.fields.summary')
DESCRIPTION=$(echo "$ISSUE_DATA" | jq -r '.fields.description // ""')
STATUS=$(echo "$ISSUE_DATA" | jq -r '.fields.status.name')
# Create SwissArmyHammer issue
ISSUE_NAME=$(echo "$SUMMARY" | sed 's/[^a-zA-Z0-9]/_/g' | tr '[:upper:]' '[:lower:]')
CONTENT="# $SUMMARY
$DESCRIPTION
---
*Migrated from Jira Issue $KEY*
"
swissarmyhammer issue create "$ISSUE_NAME" --content "$CONTENT"
# Mark completed issues as complete
if [ "$STATUS" = "Done" ] || [ "$STATUS" = "Resolved" ]; then
ISSUE_NUM=$(swissarmyhammer issue list --format json | jq -r '.[] | select(.name == "'$ISSUE_NAME'") | .number')
swissarmyhammer issue complete "$ISSUE_NUM"
fi
done
Contributing
To contribute to SwissArmyHammer's issue management system:
- Report Issues: Use SwissArmyHammer itself to track bugs and features
- Submit PRs: Follow the standard GitHub workflow
- Write Tests: Ensure all new features have comprehensive tests
- Update Documentation: Keep this guide up to date
Development Setup
# Clone the repository
git clone https://github.com/wballard/swissarmyhammer.git
cd swissarmyhammer
# Install dependencies
cargo build
# Run tests
cargo test
# Create a development issue
./target/debug/swissarmyhammer issue create "my_feature" --content "# My Feature
Description of what I want to implement.
"
# Start working on it
./target/debug/swissarmyhammer issue work 1
License
This issue management system is part of SwissArmyHammer and is licensed under the MIT License. See LICENSE for details.
Workflows
SwissArmyHammer provides a powerful workflow system that allows you to define and execute complex multi-step processes using Mermaid state diagrams. This guide covers creating, running, and managing workflows.
Overview
Workflows in SwissArmyHammer are defined using Mermaid state diagrams in markdown files. Each workflow consists of states (actions) and transitions that control the flow of execution. Workflows can:
- Execute prompts or other workflows
- Make decisions based on outputs
- Run actions in parallel
- Handle errors gracefully
- Resume from failures
Creating Workflows
Workflows are stored in .swissarmyhammer/workflows/ directories and use the .md file extension. Each workflow file consists of:
- YAML Front Matter - Metadata about the workflow
- Mermaid State Diagram - The workflow structure
- Actions Section - Mappings of states to their actions
Here's a basic workflow structure:
---
name: my-workflow
title: My Example Workflow
description: A workflow that demonstrates basic functionality
category: user
tags:
- example
- automation
---
# My Example Workflow
This workflow processes data through multiple stages.
For a complete example, see: [Simple Workflow](../examples/workflows/simple-workflow.md)
## Actions
- Start: Execute prompt "setup" with input="${data}"
- Process: Execute prompt "main-task"
- Success: Log "Task completed successfully"
- Failure: Log error "Task failed: ${error}"
Workflow Components
Front Matter
The YAML front matter contains workflow metadata:
---
name: workflow-id # Unique identifier for the workflow
title: Workflow Title # Human-readable title
description: Description # What the workflow does
category: builtin # Category (builtin, user, etc.)
tags: # Tags for organization
- automation
- data-processing
---
States
States represent steps in your workflow. They are defined in the Mermaid diagram and their actions are specified in the Actions section:
- Execute a prompt: Execute prompt "prompt-name" with var="value"
- Run another workflow: Run workflow "workflow-name" with data="${input}"
- Set variables: Set result="${output}"
- Log messages: Log "Processing complete"
- Wait: Wait 5 seconds
Actions Section
The Actions section maps state names to their actions using the format:
## Actions
- StateName: Action description
- AnotherState: Execute prompt "example" with param="value"
Transitions
Transitions control flow between states:
- Always: Unconditional transition
- OnSuccess: Transition when action succeeds
- OnFailure: Transition when action fails
- Conditional: Based on regex matching or CEL expressions
  - Regex patterns: "pattern" or /regex/
  - CEL expressions: Complex conditions like var.startsWith('Hello') or is_error == true
Special States
- [*]: Start and end states
- Fork (<<fork>>) and Join (<<join>>): For parallel execution
Mermaid Syntax Guide
SwissArmyHammer uses standard Mermaid state diagram syntax. The diagram defines the workflow structure, while actions are defined separately in the Actions section:
Basic Flow
stateDiagram-v2
[*] --> StateA
StateA --> StateB
StateB --> [*]
With corresponding actions:
## Actions
- StateA: Log "Starting process"
- StateB: Execute prompt "process-data"
Conditional Branching
stateDiagram-v2
[*] --> Check
Check --> OptionA: "pattern_a"
Check --> OptionB: "pattern_b"
Check --> Default: Always
Choice State Detection
SwissArmyHammer automatically detects choice states based on their transition patterns. A state is identified as a choice state when it has:
- Multiple outgoing transitions
- At least one transition with a condition (regex pattern or CEL expression)
- Different transition types (e.g., conditional and "always" transitions)
This automatic detection ensures that branching logic works correctly without requiring explicit choice state declarations in the Mermaid diagram.
Parallel Execution
See: Parallel Workflow
Action Reference
Execute Prompt
Execute a SwissArmyHammer prompt:
Execute prompt "prompt-name" with var1="value1" var2="${variable}"
Run Workflow
Delegate to another workflow:
Run workflow "workflow-name" with input="${data}"
Set Variable
Store values for later use:
Set variable_name="value"
Set result="${output}"
Log Messages
Output information:
Log "Information message"
Log error "Error message"
Log warning "Warning message"
Wait
Pause execution:
Wait 5 seconds
Wait 1 minute
System Commands
Execute shell commands:
Execute command "ls -la"
Execute command "npm test"
Variables and Context
Workflows have access to:
- Input variables passed via --var
- Template variables passed via --set (for liquid template rendering)
- Variables set in previous states
- Output from executed prompts (${output})
- Error messages (${error})
- Workflow metadata (${workflow_name}, ${run_id})
Variable Interpolation
Use the ${variable_name} syntax to reference workflow variables:
Execute prompt "analyze" with file="${input_file}"
Set result="Analysis of ${input_file}: ${output}"
Liquid Template Support
Workflows support Liquid template rendering in action strings when using the --set parameter. This allows dynamic parameterization of workflows at runtime:
## Actions
- start: Log "Starting workflow for {{ user_name | default: 'Guest' }}"
- greet: Execute prompt "say-hello" with name="{{ name }}" language="{{ language | default: 'English' }}"
- process: Set message="{{ greeting_type }} for {{ user_name }}!"
- farewell: Log "Goodbye, {{ name }}!"
Run the workflow with template variables:
# Pass template variables with --set
swissarmyhammer flow run greeting --set name=Alice --set language=French
# Template variables with default values
swissarmyhammer flow run greeting --set name=Bob
# The language will default to 'English'
# Complex template variables
swissarmyhammer flow run data-processor --set user.name=Alice --set user.role=admin
Template Features in Workflows
You can use all Liquid template features in workflow action strings:
Filters:
- log_user: Log "Processing user: {{ username | upcase }}"
- set_path: Set output_file="/tmp/{{ filename | slugify }}.json"
Conditionals:
- notify: Log "{% if priority == 'high' %}🚨 URGENT: {% endif %}{{ message }}"
Default Values:
- configure: Set timeout="{{ timeout | default: '30' }}"
- log_mode: Log "Running in {{ mode | default: 'development' }} mode"
Complex Objects:
- process_user: Execute prompt "user-handler" with name="{{ user.name }}" role="{{ user.role }}"
Combining --var and --set
You can use both --var (for workflow variables) and --set (for template variables) together:
swissarmyhammer flow run my-workflow \
--var input_file=data.json \
--var output_dir=/tmp \
--set user_name=Alice \
--set environment=production
- --var variables are available as ${variable} in the workflow
- --set variables are available as {{ variable }} in liquid templates
Template Rendering Behavior
- Templates are rendered before action parsing
- If a template variable is not provided, the original template syntax is preserved
- Template rendering errors are logged as warnings but don't stop workflow execution
- Use the default filter to provide fallback values for optional variables
Error Handling
Workflows provide robust error handling:
Retry Logic
Note: Retry logic is handled automatically by Claude's infrastructure. SwissArmyHammer workflows do not need application-level retry mechanisms; Claude retries failed requests according to its built-in policies, so there is no need to implement retries in your workflow logic.
Try-Catch Pattern
stateDiagram-v2
[*] --> Try
Try --> Success: OnSuccess
Try --> Catch: OnFailure
Catch --> Recovery
Success --> [*]
Recovery --> [*]
## Actions
- Try: Execute prompt "risky-operation"
- Catch: Log error "Operation failed: ${error}"
- Recovery: Execute prompt "cleanup"
- Success: Log "Operation completed successfully"
Abort Error Handling
Workflows support immediate termination through abort errors. When a prompt action's result begins with ABORT ERROR:, the workflow immediately exits all the way back to the root workflow with an error.
How Abort Errors Work
- Detection: When a prompt returns a response starting with ABORT ERROR:, it triggers immediate termination
- Propagation: The error bypasses all normal error handling (retries, compensation, transitions)
- Root Exit: In nested workflows, the abort error propagates through all parent workflows to the root
Example Usage
See: User Confirmation Workflow
## Actions
- UserConfirmation: Execute prompt "confirm-destructive-action"
- ProcessData: Execute prompt "process-user-data"
- Complete: Log "Processing completed"
If the confirm-destructive-action prompt returns ABORT ERROR: User cancelled the operation, the workflow immediately terminates without executing the ProcessData or Complete states.
Use Cases
- User Cancellation: Allow users to cancel long-running operations
- Critical Failures: Immediately stop on unrecoverable errors
- Safety Checks: Abort when safety conditions are not met
Important Notes
- Abort errors only trigger when the response starts with ABORT ERROR: (case-sensitive)
- The error message after ABORT ERROR: is propagated in the error
- Abort errors cannot be caught or handled within the workflow
- In sub-workflows, abort errors bubble up to terminate the parent workflow
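The detection rule above is just a case-sensitive prefix match. A hedged sketch of the same check in shell (the response string is illustrative; this models the behavior, it is not the engine's implementation):

```shell
# A prompt response that should trigger an abort.
response="ABORT ERROR: User cancelled the operation"

case "$response" in
  "ABORT ERROR:"*)
    # Everything after the prefix becomes the propagated error message.
    echo "abort: ${response#ABORT ERROR: }"
    ;;
  *)
    echo "continue"
    ;;
esac
# -> abort: User cancelled the operation
```

Note that a lowercase `abort error:` would fall through to the `continue` branch, matching the case-sensitivity rule above.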
Best Practices
1. Keep States Focused
Each state should perform one clear action:
Good:
ValidateInput: Execute prompt "validate-json" with file="${input}"
Bad:
DoEverything: Execute prompt "validate-and-transform-and-save"
2. Use Meaningful State Names
State names should describe what happens:
Good: ValidateConfiguration, ProcessUserData, GenerateReport
Bad: Step1, Step2, DoStuff
3. Handle All Paths
Ensure all states have clear exit paths:
stateDiagram-v2
[*] --> Process
Process --> Success: OnSuccess
Process --> Failure: OnFailure
Success --> [*]
Failure --> Cleanup: Always
Cleanup --> [*]
4. Use Variables Effectively
Pass data between states using variables:
ExtractData: Execute prompt "parse-file" with file="${input_file}"
ProcessData: Execute prompt "transform" with data="${output}"
SaveResults: Execute prompt "save" with content="${output}" path="${output_file}"
5. Document Complex Logic
Add comments to explain complex workflows:
stateDiagram-v2
%% This workflow processes user uploads
%% It validates, transforms, and stores the data
[*] --> Validate
%% Validation ensures file format is correct
Validate --> Transform: OnSuccess
Complete Example
Here's a complete workflow file showing all components, including liquid template support:
---
name: data-processor
title: Data Processing Workflow
description: Validates and processes incoming data files
category: user
tags:
- data-processing
- validation
- automation
---
# Data Processing Workflow
This workflow validates incoming data files, transforms them to the required format,
and stores the results. It includes error handling and retry logic.
```mermaid
stateDiagram-v2
[*] --> Initialize
Initialize --> ValidateFormat
ValidateFormat --> Transform: OnSuccess
ValidateFormat --> LogError: OnFailure
Transform --> StoreData: OnSuccess
Transform --> RetryTransform: OnFailure
RetryTransform --> Transform
StoreData --> NotifyComplete
LogError --> NotifyError
NotifyComplete --> [*]
NotifyError --> [*]
```

## Actions
- Initialize: Log "Starting {{ environment | default: 'development' }} data processing for file: ${input_file}"
- ValidateFormat: Execute prompt "validate-json-schema" with file="${input_file}" schema="{{ schema | default: 'default-schema.json' }}"
- Transform: Execute prompt "transform-data" with input="${output}" format="{{ format | default: 'json' }}"
- StoreData: Execute prompt "store-to-database" with data="${output}" table="{{ db_table | default: 'processed_data' }}"
- RetryTransform: Wait {{ retry_delay | default: '5' }} seconds
- NotifyComplete: Log "Successfully processed ${input_file} in {{ environment }} environment"
- LogError: Log error "Validation failed for ${input_file}: ${error}"
- NotifyError: Execute prompt "send-notification" with message="Processing failed: ${error}" channel="{{ alert_channel | default: 'errors' }}"
### Running the Example
```bash
# Basic run with defaults
swissarmyhammer flow run data-processor --var input_file=data.json
# Production run with custom settings
swissarmyhammer flow run data-processor \
--var input_file=data.json \
--set environment=production \
--set schema=production-schema.json \
--set db_table=prod_data \
--set alert_channel=prod-alerts
# Development run with custom retry
swissarmyhammer flow run data-processor \
--var input_file=test.json \
--set environment=development \
  --set retry_delay=10
```
Running Workflows
Execute workflows using the flow command:
# Run a workflow
swissarmyhammer flow run workflow-name
# Pass variables
swissarmyhammer flow run workflow-name --var input_file=data.json --var mode=production
# Resume from failure
swissarmyhammer flow run workflow-name --resume <run_id>
Monitoring and Debugging
View Workflow Runs
# List recent runs
swissarmyhammer flow list
# Show run details
swissarmyhammer flow show <run_id>
Debug Output
Workflows create detailed logs in .swissarmyhammer/workflows/runs/<run_id>.jsonl:
- State entries and exits
- Variable values
- Prompt outputs
- Error messages
- Timing information
Visualization
Generate workflow diagrams:
# Visualize workflow structure
swissarmyhammer flow visualize workflow-name
# Show run path
swissarmyhammer flow visualize workflow-name --run <run_id>
Advanced Features
Nested Workflows
Workflows can call other workflows, enabling modular design:
stateDiagram-v2
[*] --> Initialize
Initialize --> RunSubWorkflow
RunSubWorkflow --> ProcessResults: OnSuccess
ProcessResults --> [*]
## Actions
- Initialize: Log "Starting main workflow"
- RunSubWorkflow: Run workflow "data-processor" with input="${raw_data}"
- ProcessResults: Log "Processed ${output}"
Dynamic Workflow Selection
Choose workflows at runtime:
stateDiagram-v2
[*] --> DetermineType
DetermineType --> RunTypeA: "type:A"
DetermineType --> RunTypeB: "type:B"
RunTypeA --> [*]
RunTypeB --> [*]
## Actions
- DetermineType: Execute prompt "detect-type" with data="${input}"
- RunTypeA: Run workflow "process-type-a" with data="${input}"
- RunTypeB: Run workflow "process-type-b" with data="${input}"
CEL Expression Branching
Use CEL expressions for complex conditional logic:
stateDiagram-v2
[*] --> BranchDecision
BranchDecision --> ProcessNormal: example_var.startsWith('Hello')
BranchDecision --> ProcessError: is_error == true
BranchDecision --> ProcessSpecial: count > 10 && status == 'active'
BranchDecision --> ProcessDefault: always
ProcessNormal --> [*]
ProcessError --> [*]
ProcessSpecial --> [*]
ProcessDefault --> [*]
## Actions
- BranchDecision: Execute prompt "analyze-data" with input="${data}"
- ProcessNormal: Log "Processing normal flow for ${example_var}"
- ProcessError: Execute prompt "handle-error" with error="${error}"
- ProcessSpecial: Execute prompt "special-handler" with count="${count}"
- ProcessDefault: Log "No special conditions met"
The choice state (BranchDecision) is automatically detected and will evaluate each condition in order, taking the first matching transition.
Parallel Processing
Process multiple items concurrently:
stateDiagram-v2
[*] --> Split
Split --> fork1: Always
state fork1 <<fork>>
fork1 --> ProcessItem1
fork1 --> ProcessItem2
fork1 --> ProcessItem3
ProcessItem1 --> join1: Always
ProcessItem2 --> join1: Always
ProcessItem3 --> join1: Always
state join1 <<join>>
join1 --> Aggregate
Aggregate --> [*]
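A matching Actions section for the fork/join diagram above might look like the following (the prompt name and item parameters are illustrative; the branches between the fork and join run concurrently):

## Actions
- Split: Log "Splitting work into three parallel branches"
- ProcessItem1: Execute prompt "process-item" with item="1"
- ProcessItem2: Execute prompt "process-item" with item="2"
- ProcessItem3: Execute prompt "process-item" with item="3"
- Aggregate: Log "All items processed"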
Troubleshooting
Common Issues
- Workflow not found: Ensure the workflow is in .swissarmyhammer/workflows/
- Variable undefined: Check variable names and initialization
- Infinite loops: Add proper exit conditions
- Prompt not found: Verify prompt paths and names
Validation
Always validate workflows before running:
swissarmyhammer validate
This checks:
- Mermaid syntax
- State connectivity
- Action syntax
- Variable usage
- Circular dependencies
Next Steps
- Explore Example Workflows for practical patterns
- Read about Workflow Patterns for common solutions
- Check the CLI Reference for all flow commands
- Learn about Testing Workflows
Workflow Examples
This guide showcases practical workflow examples demonstrating various features and patterns available in SwissArmyHammer's workflow system.
Example Workflows
Example workflows are located in:
- builtin/workflows/ - Example workflows demonstrating various features and patterns
All workflows can be run directly or used as templates for your own workflows.
Basic Examples
Hello World Workflow
File: builtin/workflows/hello-world.md
Type: Simple linear workflow
The simplest possible workflow demonstrating basic functionality:
swissarmyhammer flow run hello-world
Features demonstrated:
- Basic state transitions
- Prompt execution with result capture
- Using variables in log messages
Example Actions Workflow
File: builtin/workflows/example-actions.md
Type: Action reference workflow
Demonstrates all available workflow actions:
swissarmyhammer flow run example-actions
Features demonstrated:
- All action types (Log, Execute prompt, Set variable, Wait)
- Different log levels (info, warning, error)
- Variable substitution
- Sub-workflow execution
Greeting Workflow
File: builtin/workflows/greeting.md
Type: Template variable demonstration
A simple workflow that demonstrates using template variables in workflows:
swissarmyhammer flow run greeting --set name=John --set language=English
Features demonstrated:
- Template variables with --set parameters
- Variable defaults with liquid templates
- Linear state progression
- Prompt execution with variable substitution
Use cases:
- Learning template variable usage
- Building parameterized workflows
- Dynamic workflow customization
Running Example Workflows
Basic Execution
Run any example workflow:
swissarmyhammer flow run <workflow-name>
With Custom Variables
Customize workflows with variables:
swissarmyhammer flow run greeting \
--set name="Your Name" \
--set language="French"
Resume from Failure
If a workflow fails, you can resume from where it left off:
swissarmyhammer flow run example-actions --resume <run_id>
List Available Workflows
See all available workflows:
swissarmyhammer flow list
Learning from Examples
How to Study the Examples
- Read the workflow definition: Understand the state flow and transitions
- Examine the actions: See how different action types are used
- Look at error handling: Note how errors are caught and handled
- Study variable usage: See how data flows through the workflow
- Run with debugging: Use `--debug` to see detailed execution logs
Adapting Examples
To create your own workflow based on an example:
- Copy the example workflow to your prompts directory
- Modify the metadata (name, description, tags)
- Adjust the state diagram to match your needs
- Update actions and variables
- Test incrementally with `--dry-run`
Common Modifications
Adding error handling to workflows:
## Actions
- ProcessData: Execute prompt "process-data" with path="${data_path}"
- HandleError: Log error "Data processing failed: ${error}"
- Retry: Log "Retrying with different parameters"
And add retry logic in the Mermaid diagram:
ProcessData --> HandleError: OnFailure
HandleError --> Retry
Retry --> ProcessData
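Assembled into one diagram (using the state names above; the `Done` state is illustrative), the retry loop reads:

```mermaid
stateDiagram-v2
    [*] --> ProcessData
    ProcessData --> Done: OnSuccess
    ProcessData --> HandleError: OnFailure
    HandleError --> Retry
    Retry --> ProcessData
    Done --> [*]
```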
Adding template variables to customize workflows:
Pass variables when running any workflow:
swissarmyhammer flow run greeting \
--set name="Your Name" \
--set language="French"
Adding notifications to workflows:
## Actions
- CompleteTask: Log "Task completed successfully"
- NotifyUser: Execute prompt "send-notification" with message="Workflow completed: ${result}"
Best Practices from Examples
1. State Naming
- Use descriptive, action-oriented names
- Keep names concise but clear
- Use consistent naming patterns
2. Error Handling
- Always include error states for critical operations
- Provide meaningful error messages
- Design rollback paths for reversible operations
3. Variable Management
- Define sensible defaults
- Document variable purposes
- Validate inputs early in the workflow
4. User Interaction
- Provide clear choice descriptions
- Include help text for complex decisions
- Allow bypassing interaction for automation
5. Performance
- Use parallel execution where possible
- Set appropriate timeouts
- Design for idempotency
Complete Workflow Example
Here's a complete workflow file showing all the components:
---
name: automated-testing
title: Automated Testing Workflow
description: Runs tests, analyzes results, and generates reports
category: user
tags:
- testing
- ci
- automation
---
# Automated Testing Workflow
This workflow runs the test suite, analyzes failures, and generates comprehensive reports.
```mermaid
stateDiagram-v2
[*] --> Initialize
Initialize --> RunTests
RunTests --> AnalyzeResults: OnSuccess
RunTests --> HandleFailure: OnFailure
AnalyzeResults --> GenerateReport
HandleFailure --> RetryTests: "flaky"
HandleFailure --> GenerateFailureReport: "real_failure"
RetryTests --> RunTests
GenerateReport --> NotifySuccess
GenerateFailureReport --> NotifyFailure
NotifySuccess --> [*]
NotifyFailure --> [*]
```
## Actions
- Initialize: Log "Starting test run for ${branch_name}"
- RunTests: Execute prompt "run-test-suite" with suite="${test_suite}" parallel="${parallel_tests}"
- AnalyzeResults: Execute prompt "analyze-test-results" with results="${output}"
- HandleFailure: Execute prompt "categorize-failures" with failures="${error}"
- RetryTests: Wait 30 seconds
- GenerateReport: Execute prompt "generate-test-report" with data="${output}" format="html"
- GenerateFailureReport: Execute prompt "generate-failure-report" with failures="${output}"
- NotifySuccess: Execute prompt "send-notification" with status="success" message="All tests passed!"
- NotifyFailure: Execute prompt "send-notification" with status="failure" message="Tests failed: ${error}"
## Troubleshooting Examples
### Common Issues
1. **Workflow not found**:
   ```bash
   swissarmyhammer flow list  # Check available workflows
   ```
2. **Variable errors**:
   ```bash
   swissarmyhammer flow show greeting  # View workflow details
   ```
3. **State transition failures**:
   - Check condition syntax
   - Verify variable values
   - Use `--debug` for detailed logs
4. **Action failures**:
   - Ensure referenced prompts exist
   - Check variable interpolation
   - Verify action syntax
### Debugging Tips
- Use verbose output:
  ```bash
  swissarmyhammer flow run example-actions -v
  ```
- Enable debug mode:
  ```bash
  swissarmyhammer flow run greeting --debug
  ```
- Test with custom variables:
  ```bash
  swissarmyhammer flow run greeting \
    --set name="Test User" \
    --set language="Spanish"
  ```
Next Steps
- Explore Workflow Patterns for advanced techniques
- Read the main Workflows Documentation for detailed reference
- Create your own workflows based on these examples
- Share your workflows with the community
Workflow Patterns
Search and Discovery Guide
SwissArmyHammer provides powerful search capabilities to help you discover and find prompts in your collection. This guide covers search strategies, advanced filtering, and integration workflows.
Search Strategies Overview
SwissArmyHammer provides three complementary search strategies:
- Fuzzy Search: Fast approximate matching for typos and partial matches
- Full-Text Search: Precise term-based search with boolean operators
- Semantic Search: AI-powered semantic similarity using vector embeddings
Each strategy has different strengths and use cases. You can use them individually or in combination for optimal results.
Basic Search
Simple Text Search
The most basic way to search is with a simple text query:
# Search for prompts containing "code"
swissarmyhammer search code
# Search for multiple terms
swissarmyhammer search "code review"
# Search with partial matches
swissarmyhammer search debug
Search Results Format
```
Found 3 prompts matching "code":

  code-review (builtin)
    Review code for best practices and potential issues
    Arguments: code, language (optional)

  debug-helper (user)
    Help debug programming issues and errors
    Arguments: error, context (optional)

  analyze-performance (local)
    Analyze code performance and suggest optimizations
    Arguments: code, language, metrics (optional)
```
Each result shows:
- Name: Prompt identifier
- Source: Where the prompt is stored (builtin, user, or local)
- Description: Brief description of the prompt's purpose
- Arguments: Required and optional parameters
Field-Specific Search
Search in Titles Only
# Find prompts with "review" in the title
swissarmyhammer search --in title review
# Case-sensitive title search
swissarmyhammer search --in title --case-sensitive "Code Review"
Search in Descriptions
# Find prompts about debugging in descriptions
swissarmyhammer search --in description debug
# Find prompts mentioning specific technologies
swissarmyhammer search --in description "python javascript"
Search in Content
# Find prompts that use specific template variables
swissarmyhammer search --in content "{{code}}"
# Find prompts with specific instructions
swissarmyhammer search --in content "best practices"
Search All Fields
# Search across titles, descriptions, and content (default)
swissarmyhammer search --in all "security"
# Explicit all-field search
swissarmyhammer search "API documentation"
Advanced Search Techniques
Regular Expression Search
Use regex patterns for powerful pattern matching:
# Find prompts with "test" followed by any word
swissarmyhammer search --regex "test\s+\w+"
# Find prompts starting with specific words
swissarmyhammer search --regex "^(debug|fix|analyze)"
# Find prompts with email patterns
swissarmyhammer search --regex "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b"
# Case-sensitive regex
swissarmyhammer search --regex --case-sensitive "^Code"
Search by Source
Filter prompts by their source location:
# Find only built-in prompts
swissarmyhammer search --source builtin
# Find only user-created prompts
swissarmyhammer search --source user
# Find only local project prompts
swissarmyhammer search --source local
# Combine with text search
swissarmyhammer search review --source user
Search by Arguments
Find prompts based on their argument requirements:
# Find prompts that accept a "code" argument
swissarmyhammer search --has-arg code
# Find prompts with no arguments (simple prompts)
swissarmyhammer search --no-args
# Find prompts with specific argument combinations
swissarmyhammer search --has-arg code --has-arg language
# Combine with text search
swissarmyhammer search debug --has-arg error
Search Strategies
Discovery Workflows
Finding Prompts for a Task
# 1. Start broad
swissarmyhammer search "code review"
# 2. Narrow down by context
swissarmyhammer search "code review" --source user
# 3. Check argument requirements
swissarmyhammer search "code review" --has-arg language
# 4. Examine specific matches
swissarmyhammer search --in title "Advanced Code Review"
Exploring Available Prompts
# See all available prompts
swissarmyhammer search --limit 50 ""
# Browse by category/topic
swissarmyhammer search documentation
swissarmyhammer search testing
swissarmyhammer search refactoring
# Find simple prompts (no arguments)
swissarmyhammer search --no-args
Finding Template Examples
# Find prompts using loops
swissarmyhammer search --in content "{% for"
# Find prompts with conditionals
swissarmyhammer search --in content "{% if"
# Find prompts using specific filters
swissarmyhammer search --in content "| capitalize"
Search Optimization
Performance Tips
# Limit results for faster response
swissarmyhammer search --limit 10 query
# Use specific fields to reduce search scope
swissarmyhammer search --in title query # faster than all fields
# Use source filtering to narrow search space
swissarmyhammer search --source user query
Precision vs. Recall
# High precision (exact matches)
swissarmyhammer search --case-sensitive --regex "^exact pattern$"
# High recall (find everything related)
swissarmyhammer search --in all "broad topic"
# Balanced approach
swissarmyhammer search "specific terms" --limit 20
Integration with Other Commands
Search and Test Workflow
# Find debugging prompts
swissarmyhammer search debug
# Test a specific one
swissarmyhammer test debug-helper
# Test with specific arguments
swissarmyhammer test debug-helper --arg error="TypeError: undefined"
Search and Export Workflow
# Find all review-related prompts
swissarmyhammer search review --limit 20
# Export specific ones found
swissarmyhammer export code-review security-review design-review output.tar.gz
# Or export all matching a pattern
# (manual selection based on search results)
Scripted Search
#!/bin/bash
# find-and-test.sh
QUERY="$1"
if [ -z "$QUERY" ]; then
echo "Usage: $0 <search-query>"
exit 1
fi
echo "Searching for: $QUERY"
PROMPTS=$(swissarmyhammer search --json "$QUERY" | jq -r '.results[].id')
if [ -z "$PROMPTS" ]; then
echo "No prompts found"
exit 1
fi
echo "Found prompts:"
echo "$PROMPTS"
echo "Select a prompt to test:"
select PROMPT in $PROMPTS; do
if [ -n "$PROMPT" ]; then
swissarmyhammer test "$PROMPT"
break
fi
done
JSON Output for Scripting
Basic JSON Search
swissarmyhammer search --json "code review"
{
"query": "code review",
"total_found": 3,
"results": [
{
"id": "code-review",
"title": "Code Review Helper",
"description": "Review code for best practices and potential issues",
"source": "builtin",
"path": "/builtin/review/code.md",
"arguments": [
{"name": "code", "required": true},
{"name": "language", "required": false, "default": "auto-detect"}
],
"score": 0.95
}
]
}
Processing JSON Results
# Extract prompt IDs
swissarmyhammer search --json query | jq -r '.results[].id'
# Get highest scoring result
swissarmyhammer search --json query | jq -r '.results[0].id'
# Filter by score threshold
swissarmyhammer search --json query | jq '.results[] | select(.score > 0.8)'
# Count results by source
swissarmyhammer search --json "" --limit 100 | jq '.results | group_by(.source) | map({source: .[0].source, count: length})'
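When a pipeline outgrows what is comfortable in jq, the same filtering takes only a few lines of Python. This is a sketch over sample data shaped like the JSON payload above, not real command output:

```python
import json

# Sample payload in the shape shown above (not real command output).
raw = """
{
  "query": "code review",
  "total_found": 3,
  "results": [
    {"id": "code-review", "source": "builtin", "score": 0.95},
    {"id": "security-review", "source": "user", "score": 0.81},
    {"id": "design-review", "source": "user", "score": 0.64}
  ]
}
"""

def filter_by_score(payload: dict, threshold: float) -> list:
    """Return prompt ids whose score exceeds the threshold."""
    return [r["id"] for r in payload["results"] if r["score"] > threshold]

data = json.loads(raw)
print(filter_by_score(data, 0.8))  # ['code-review', 'security-review']
```

In a real script, `raw` would come from `swissarmyhammer search --json` via `subprocess` or a pipe.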
Search Index Management
Understanding the Search Index
SwissArmyHammer automatically maintains a search index that includes:
- Prompt titles - Weighted heavily in scoring
- Descriptions - Medium weight
- Content text - Lower weight
- Argument names - Considered for relevance
- File paths - Used for source filtering
Index Updates
The search index is automatically updated when:
- Prompts are added to the library
- Existing prompts are modified
- The `serve` command starts (full rebuild)
- File watching detects changes
Performance Characteristics
- Index size: Proportional to prompt collection size
- Search speed: Sub-second for collections up to 10,000 prompts
- Memory usage: Moderate (index kept in memory)
- Update speed: Fast incremental updates
Troubleshooting Search Issues
No Results Found
# Check if prompts exist
swissarmyhammer search --limit 100 ""
# Verify prompt sources
swissarmyhammer search --source builtin
swissarmyhammer search --source user
swissarmyhammer search --source local
# Try broader search
swissarmyhammer search --in all "partial terms"
Too Many Results
# Use more specific terms
swissarmyhammer search "specific exact phrase"
# Limit by source
swissarmyhammer search broad-term --source user
# Use field-specific search
swissarmyhammer search --in title specific-title
# Limit result count
swissarmyhammer search broad-term --limit 5
Unexpected Results
# Check what's being matched
swissarmyhammer search --full query
# Use exact matching
swissarmyhammer search --regex "^exact term$"
# Search in specific field
swissarmyhammer search --in description query
Best Practices
Effective Search Terms
- Use specific terms: "REST API documentation" vs. "API"
- Include context: "Python debugging" vs. "debugging"
- Try synonyms: "review", "analyze", "examine"
- Use argument names: Search for "code", "error", "data" to find relevant prompts
Search Workflow Patterns
- Start broad, narrow down: Begin with general terms, add filters
- Use multiple strategies: Try both fuzzy and regex search
- Check all sources: Don't assume prompts are only in one location
- Combine with testing: Always test prompts before using
Organization for Searchability
- Clear titles: Use descriptive, searchable titles
- Good descriptions: Include keywords and use cases
- Consistent naming: Use standard terms across prompts
- Tag with arguments: Use predictable argument names
Advanced Examples
Finding Template Patterns
# Find prompts using custom filters
swissarmyhammer search --in content "format_lang"
# Find prompts with error handling
swissarmyhammer search --in content "default:"
# Find prompts with loops
swissarmyhammer search --in content "{% for"
Building Prompt Collections
# Find all code-related prompts
swissarmyhammer search --regex "(code|programming|software)" --limit 50
# Find all documentation prompts
swissarmyhammer search --regex "(doc|documentation|readme|guide)" --limit 30
# Find all analysis prompts
swissarmyhammer search --regex "(analy|review|audit|inspect)" --limit 20
Quality Assurance
# Find prompts without descriptions
swissarmyhammer search --in description "^$" --regex
# Find prompts with no arguments (might need descriptions)
swissarmyhammer search --no-args --limit 50
# Find prompts with many arguments (might be complex)
swissarmyhammer search --json "" --limit 100 | \
jq '.results[] | select(.arguments | length > 5)'
Semantic Search
Overview
Semantic search uses AI embeddings to find code and prompts based on meaning rather than exact text matches. This is particularly powerful for finding conceptually similar code even when the exact keywords differ.
Basic Semantic Search
# Find code semantically similar to a concept
swissarmyhammer search --semantic "error handling patterns"
# Search for code functionality
swissarmyhammer search --semantic "database connection pooling"
# Find similar algorithms or approaches
swissarmyhammer search --semantic "sorting algorithms implementation"
Language-Specific Semantic Search
# Find Rust-specific patterns
swissarmyhammer search --semantic "async error handling" --language rust
# Python-specific search
swissarmyhammer search --semantic "decorator patterns" --language python
# JavaScript/TypeScript patterns
swissarmyhammer search --semantic "promise chain handling" --language typescript
Semantic Search with Thresholds
# High-precision semantic search (only very similar results)
swissarmyhammer search --semantic "REST API client" --threshold 0.8
# Broader semantic search (more results, less similar)
swissarmyhammer search --semantic "authentication" --threshold 0.6
# Maximum recall (find anything remotely related)
swissarmyhammer search --semantic "testing" --threshold 0.4
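A threshold is simply a cutoff on the similarity score between the query embedding and each indexed chunk, typically cosine similarity. A toy sketch with hypothetical 3-dimensional vectors (real embeddings have hundreds of dimensions, and the chunk names here are made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" (hypothetical; real models use 384+ dims).
query = [1.0, 0.0, 1.0]
chunks = {
    "retry_with_backoff": [0.9, 0.1, 0.8],
    "parse_config_file": [0.1, 1.0, 0.0],
}

threshold = 0.8
scores = {name: cosine_similarity(query, v) for name, v in chunks.items()}
matches = [name for name, score in scores.items() if score >= threshold]
print(matches)  # ['retry_with_backoff']
```

Raising the threshold keeps only chunks whose embeddings point in nearly the same direction as the query; lowering it trades precision for recall, exactly as in the commands above.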
Code Similarity Detection
# Find code similar to a specific file
swissarmyhammer search --semantic-file path/to/example.rs
# Find duplicated or similar functions
swissarmyhammer search --semantic "function findUser(id)" --limit 10
# Detect architectural patterns
swissarmyhammer search --semantic "observer pattern implementation"
Multi-Modal Semantic Search
# Combine text and code structure
swissarmyhammer search --semantic "error handling" --include-structure
# Search including comments and documentation
swissarmyhammer search --semantic "caching strategy" --include-docs
# Focus on specific code constructs
swissarmyhammer search --semantic "async functions" --code-only
Semantic Search Performance
Semantic search characteristics:
- Latency: 50-200ms for typical queries
- Memory: Scales with result set size and embedding dimensions
- Accuracy: High for conceptual matches, lower for exact syntax
- Best for: Finding similar algorithms, patterns, and architectural concepts
When to Use Semantic Search
Use semantic search when:
- Looking for conceptually similar code patterns
- Searching across different programming languages
- Finding architectural patterns and design solutions
- Exploring similar algorithms or approaches
- Discovering duplicated or near-duplicate code
Use traditional search when:
- Looking for exact variable names or function signatures
- Searching for specific syntax or language constructs
- Finding exact string matches
- Performance is critical (semantic search is slower)
See Also
- `search` command - Command reference
- `test` command - Testing found prompts
- Prompt Organization - Organizing for discoverability
Search Architecture
SwissArmyHammer implements a multi-tiered search architecture that combines traditional text search with modern semantic search capabilities. This document provides a deep dive into the search system's architecture, indexing strategies, and performance characteristics.
Architecture Overview
```
┌──────────────────────────────────────────────────────────────┐
│                       Search Frontend                        │
├──────────────────────────────────────────────────────────────┤
│   CLI Interface   │    MCP Server     │     Library API      │
└─────────────────┬─────────────────────┬──────────────────────┘
                  │                     │
┌─────────────────┴─────────────────────┴──────────────────────┐
│                        Search Engine                         │
├──────────────────────────────────────────────────────────────┤
│   Hybrid Search Controller                                   │
│   ├─ Query Routing Logic                                     │
│   ├─ Result Aggregation                                      │
│   └─ Score Normalization                                     │
└─────────────────┬─────────────────────┬──────────────────────┘
                  │                     │
┌─────────────────┴─────────────────────┴──────────────────────┐
│                       Search Backends                        │
├──────────────────────────────────────────────────────────────┤
│   Fuzzy Search    │    Full-Text      │   Semantic Search    │
│   (SkimMatcher)   │    (Tantivy)      │  (Embeddings+DuckDB) │
└─────────────────┬─────────────────────┬──────────────────────┘
                  │                     │
┌─────────────────┴─────────────────────┴──────────────────────┐
│                        Storage Layer                         │
├──────────────────────────────────────────────────────────────┤
│   In-Memory       │   File System     │   Vector Database    │
│   (RAM Index)     │   (Persistent)    │   (DuckDB + VSS)     │
└──────────────────────────────────────────────────────────────┘
```
Search Engines
1. Fuzzy Search Engine
Implementation: search.rs::SearchEngine::fuzzy_search
Technology Stack:
- Matcher: `skim_matcher_v2` (fuzzy string matching)
- Storage: In-memory prompt collection
- Indexing: None (searches on-demand)
Architecture:
pub struct SearchEngine {
fuzzy_matcher: SkimMatcherV2,
// Other fields...
}
Performance Characteristics:
- Time Complexity: O(n*m) where n=prompts, m=avg prompt length
- Space Complexity: O(1) additional memory
- Latency: 1-5ms for typical collections
- Strengths: Fast, handles typos, no indexing overhead
- Weaknesses: No semantic understanding, limited scalability
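The essence of fuzzy matching can be sketched as an in-order subsequence check. This is a deliberate simplification of SkimMatcherV2, which additionally scores and ranks each match, but it shows why abbreviated or partial queries still find their targets:

```python
def fuzzy_match(query: str, candidate: str) -> bool:
    """True if every character of `query` appears in `candidate` in order.
    A simplified stand-in for SkimMatcherV2 (which also scores matches)."""
    it = iter(candidate.lower())
    # `ch in it` advances the iterator, so characters must match in order.
    return all(ch in it for ch in query.lower())

prompts = ["code-review", "debug-helper", "analyze-performance"]
print([p for p in prompts if fuzzy_match("cdrev", p)])  # ['code-review']
print([p for p in prompts if fuzzy_match("dbg", p)])    # ['debug-helper']
```

Because every candidate is scanned on each query, this approach is O(n*m) with no index, matching the characteristics listed above.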
2. Full-Text Search Engine
Implementation: search.rs::SearchEngine::search
Technology Stack:
- Search Engine: Apache Tantivy
- Storage: RAM index with optional persistence
- Query Language: Lucene-compatible syntax
Architecture:
pub struct SearchEngine {
index: Index,
writer: IndexWriter,
name_field: Field,
description_field: Field,
category_field: Field,
tags_field: Field,
template_field: Field,
}
Index Schema:
- `name`: Prompt name (TEXT | STORED)
- `description`: Prompt description (TEXT | STORED)
- `category`: Prompt category (TEXT | STORED)
- `tags`: Space-separated tags (TEXT | STORED)
- `template`: Prompt content (TEXT only)
Performance Characteristics:
- Indexing: O(n log n) build time, O(log n) updates
- Query: O(log n) typical case
- Memory: ~50MB writer buffer + index size
- Strengths: Boolean queries, exact matching, fast retrieval
- Weaknesses: No semantic understanding, requires exact terms
3. Semantic Search Engine
Implementation: `semantic/` modules
Technology Stack:
- Embeddings: ONNX Runtime with transformer models
- Vector Storage: DuckDB with vector similarity search extension
- Code Parsing: TreeSitter multi-language parser
- File Processing: Async I/O with tokio
Architecture:
pub struct SemanticSearcher {
storage: VectorStorage,
embedding_engine: EmbeddingEngine,
config: SemanticConfig,
}
pub struct VectorStorage {
db: Connection,
embeddings_table: String,
chunks_table: String,
}
pub struct EmbeddingEngine {
session: Session,
tokenizer: Tokenizer,
model_config: ModelConfig,
}
Database Schema:
-- Code chunks table
CREATE TABLE chunks (
id TEXT PRIMARY KEY,
file_path TEXT NOT NULL,
content TEXT NOT NULL,
language TEXT,
start_line INTEGER,
end_line INTEGER,
chunk_type TEXT,
created_at TIMESTAMP DEFAULT now()
);
-- Vector embeddings table
CREATE TABLE embeddings (
chunk_id TEXT PRIMARY KEY,
embedding FLOAT[],
embedding_model TEXT,
created_at TIMESTAMP DEFAULT now(),
FOREIGN KEY (chunk_id) REFERENCES chunks(id)
);
Performance Characteristics:
- Indexing: O(n*k) where k=embedding dimension (384-1536)
- Query: O(n) similarity calculation with HNSW optimization
- Memory: Model size (100MB-2GB) + embeddings cache
- Strengths: Semantic understanding, cross-language search
- Weaknesses: High memory usage, slower than text search
Indexing Strategies
Text Index Management
Index Creation:
// In-memory index (default)
let engine = SearchEngine::new()?;
// Persistent index
let engine = SearchEngine::with_directory("/path/to/index")?;
Index Updates:
- Incremental: Add/remove individual prompts
- Batch: Bulk updates with commit optimization
- Rebuild: Full index reconstruction
Index Persistence:
- Memory-mapped files for fast loading
- Atomic commits to prevent corruption
- Configurable buffer sizes for performance tuning
Vector Index Management
Embedding Generation:
// Code chunk processing pipeline
CodeFile -> TreeSitter Parse -> Chunks -> Embeddings -> Storage
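As a rough illustration of the chunking step, here is a naive line-based chunker that splits on function headers. The real pipeline uses TreeSitter, which understands actual syntax; this regex-based stand-in is only a sketch:

```python
import re

def chunk_by_function(source: str) -> list:
    """Naive function-level chunker: split on `fn`/`def` headers.
    A stand-in for the TreeSitter pass, which parses real syntax."""
    chunks, current, start = [], [], 1
    header = re.compile(r"^(fn |def |pub fn )")
    for lineno, line in enumerate(source.splitlines(), start=1):
        if header.match(line) and current:
            # Close the previous chunk before starting a new one.
            chunks.append({"start_line": start, "end_line": lineno - 1,
                           "content": "\n".join(current)})
            current, start = [], lineno
        current.append(line)
    if current:  # flush the trailing chunk
        chunks.append({"start_line": start,
                       "end_line": start + len(current) - 1,
                       "content": "\n".join(current)})
    return chunks

code = "def first():\n    return 1\ndef second():\n    return 2"
print(len(chunk_by_function(code)))  # 2
```

Each resulting chunk carries the `file_path`-relative line span that ends up in the `chunks` table, and its content is what gets embedded.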
Vector Storage:
- Chunking Strategy: Function/class-level granularity
- Embedding Models: Configurable ONNX models
- Similarity Metrics: Cosine similarity (default)
- Index Types: HNSW for approximate nearest neighbor
Update Strategies:
- Lazy Updates: Generate embeddings on first search
- Eager Updates: Pre-compute all embeddings
- Incremental: Update only changed files
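The incremental strategy hinges on the `content_hash` column in the chunks table: a file is re-embedded only when its hash changes. A minimal sketch of that decision (the in-memory `index` dict stands in for the database):

```python
import hashlib

def content_hash(text: str) -> str:
    """Stable fingerprint of a file's content."""
    return hashlib.sha256(text.encode()).hexdigest()

def needs_reindex(path: str, new_content: str, index: dict) -> bool:
    """Re-embed only when the stored hash differs (or the file is new)."""
    return index.get(path) != content_hash(new_content)

# Hypothetical stored state after a previous indexing run.
index = {"src/lib.rs": content_hash("fn main() {}")}

print(needs_reindex("src/lib.rs", "fn main() {}", index))       # False
print(needs_reindex("src/lib.rs", "fn main() { run(); }", index))  # True
print(needs_reindex("src/new.rs", "mod new;", index))            # True
```

Unchanged files are skipped entirely, which is what makes incremental updates cheap compared with eagerly re-embedding the whole tree.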
Query Processing Pipeline
1. Query Analysis
enum QueryType {
Simple(String),
Regex(String),
Boolean(BooleanQuery),
Semantic(SemanticQuery),
}
2. Strategy Selection
impl SearchEngine {
fn select_strategy(&self, query: &Query) -> SearchStrategy {
match query {
Query { regex: true, .. } => SearchStrategy::Regex,
Query { semantic: true, .. } => SearchStrategy::Semantic,
Query { fuzzy: true, .. } => SearchStrategy::Fuzzy,
_ => SearchStrategy::Hybrid,
}
}
}
3. Result Aggregation
pub fn hybrid_search(&self, query: &str, prompts: &[Prompt]) -> Result<Vec<SearchResult>> {
let mut results = HashMap::new();
// Combine multiple search strategies
let text_results = self.search(query, prompts)?;
let fuzzy_results = self.fuzzy_search(query, prompts);
// Merge and deduplicate results
// Score normalization and ranking
Ok(final_results)
}
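The merge step has to reconcile scores from backends with incompatible scales (Tantivy's text relevance scores versus fuzzy match scores). One common approach, shown here as a Python sketch of the idea rather than the actual Rust implementation, is min-max normalization followed by a weighted sum:

```python
def normalize(scores: dict) -> dict:
    """Min-max normalize scores into [0, 1] so backends are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {k: 1.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def merge(text_scores: dict, fuzzy_scores: dict, weight: float = 0.5) -> list:
    """Weighted combination; items found by both backends rank highest."""
    t, f = normalize(text_scores), normalize(fuzzy_scores)
    ids = set(t) | set(f)  # union deduplicates across backends
    combined = {i: weight * t.get(i, 0.0) + (1 - weight) * f.get(i, 0.0)
                for i in ids}
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical raw scores on very different scales.
text = {"code-review": 12.0, "debug-helper": 8.0, "greeting": 4.0}
fuzzy = {"code-review": 80, "analyze-performance": 60}
print(merge(text, fuzzy))
```

`code-review` tops the ranking because both backends agree on it, which is the behavior hybrid search is after.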
Performance Optimization
Caching Strategies
Query Result Caching:
- LRU cache for frequent queries
- TTL-based invalidation
- Configurable cache size limits
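Combining the three policies above (LRU eviction, TTL invalidation, size limit) is straightforward with an ordered map. A minimal sketch, not the actual implementation:

```python
import time
from collections import OrderedDict

class TtlLruCache:
    """Query-result cache: evicts least-recently-used entries beyond
    `max_size` and treats entries older than `ttl` seconds as expired."""

    def __init__(self, max_size=128, ttl=60.0):
        self.max_size, self.ttl = max_size, ttl
        self._data = OrderedDict()  # key -> (value, timestamp)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, stamp = entry
        if time.monotonic() - stamp > self.ttl:
            del self._data[key]      # TTL-based invalidation
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value

    def put(self, key, value):
        self._data[key] = (value, time.monotonic())
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict the LRU entry

cache = TtlLruCache(max_size=2, ttl=60.0)
cache.put("code review", ["code-review"])
cache.put("debug", ["debug-helper"])
cache.get("code review")               # touch: "debug" is now LRU
cache.put("testing", ["test-runner"])  # evicts "debug"
print(cache.get("debug"))              # None
```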
Index Caching:
- Memory-mapped index files
- Lazy loading of index segments
- Background index warming
Embedding Caching:
- Persistent embedding storage
- Model result caching
- Batch processing optimization
Memory Management
Memory Usage Patterns:
Component | Memory Usage
------------------------|----------------------------------
Fuzzy Search | O(1) - no additional memory
Full-Text Index | O(index_size) ~ 10-20% of data
Semantic Embeddings | O(chunks * dimensions) ~ 1-5GB
Model Loading | 100MB - 2GB depending on model
Result Sets | O(result_count * prompt_size)
Optimization Strategies:
- Stream processing for large result sets
- Configurable result limits
- Memory-mapped storage for large indices
- Model quantization for embedding models
Performance Tuning
Configuration Parameters:
pub struct SearchConfig {
// Full-text search
pub tantivy_writer_buffer_size: usize,
pub tantivy_merge_policy: MergePolicy,
// Semantic search
pub embedding_batch_size: usize,
pub similarity_threshold: f32,
pub max_results_per_query: usize,
// Hybrid search
pub score_combination_strategy: ScoreStrategy,
pub result_deduplication: bool,
}
Benchmarking Results:
Search Type | Latency (p50) | Latency (p99) | Throughput
----------------|---------------|---------------|------------
Fuzzy Search | 2ms | 8ms | 500 qps
Full-Text | 5ms | 15ms | 200 qps
Semantic | 50ms | 150ms | 20 qps
Hybrid | 25ms | 80ms | 40 qps
Scalability Considerations
Data Size Limits
Recommended Limits:
- Prompts: Up to 100,000 prompts
- Code Files: Up to 1 million files
- Index Size: Up to 10GB total
- Concurrent Users: Up to 100 simultaneous searches
Scaling Strategies
Horizontal Scaling:
- Distributed search with result merging
- Sharded indices by category/source
- Load balancing across search nodes
Vertical Scaling:
- Memory optimization for large datasets
- SSD storage for persistent indices
- GPU acceleration for semantic search
Integration Points
MCP Server Integration
pub struct McpSearchHandler {
search_engine: Arc<SearchEngine>,
semantic_searcher: Arc<SemanticSearcher>,
}
CLI Integration
pub async fn handle_search_command(
args: SearchArgs,
prompt_library: &PromptLibrary,
) -> Result<()> {
// Route to appropriate search engine
// Format results for terminal display
}
Library API
pub trait SearchProvider {
fn search(&self, query: &SearchQuery) -> Result<Vec<SearchResult>>;
fn suggest(&self, partial: &str) -> Result<Vec<String>>;
fn similar(&self, item_id: &str) -> Result<Vec<SearchResult>>;
}
Future Enhancements
Planned Features
- Vector Database: Migration to specialized vector DB (Qdrant/Weaviate)
- Hybrid Retrieval: BM25 + vector search combination
- Query Expansion: Automatic query term expansion
- Personalization: User-specific result ranking
- Real-time Updates: Streaming index updates
Research Directions
- Multi-modal Embeddings: Code + documentation + comments
- Graph-based Search: Code dependency graph traversal
- Federated Search: Cross-repository search capabilities
- Explainable Rankings: Search result explanations
Troubleshooting
Common Issues
Index Corruption:
# Rebuild text index
rm -rf ~/.swissarmyhammer/index
swissarmyhammer search --rebuild-index
# Rebuild semantic index
swissarmyhammer search --rebuild-embeddings
Memory Issues:
# Reduce memory usage
export SWISSARMYHAMMER_MAX_RESULTS=100
export SWISSARMYHAMMER_EMBEDDING_BATCH_SIZE=10
Performance Problems:
# Enable search timing
swissarmyhammer search --timing "query"
# Profile search performance
swissarmyhammer search --profile "detailed query"
Monitoring
Metrics to Track:
- Query latency percentiles
- Index size growth
- Memory usage patterns
- Cache hit rates
- Error rates by search type
Logging Configuration:
// Enable debug logging for search
RUST_LOG=swissarmyhammer::search=debug cargo run
See Also
- Search Guide - User guide for search features
- CLI Search Reference - Command-line interface
- Performance Tuning - Optimization guidelines
- Index Management - Index maintenance guide
Index Management Guide
This guide covers the management, maintenance, and optimization of SwissArmyHammer's search indices. The system maintains multiple types of indices to support different search strategies, each requiring specific management approaches.
Index Types Overview
SwissArmyHammer maintains three distinct index types:
- Text Index: Tantivy-based full-text search index
- Vector Index: DuckDB-based semantic embeddings storage
- In-Memory Cache: Runtime fuzzy search acceleration
Text Index Management
Index Location and Structure
Default Locations:
# User data directory
~/.swissarmyhammer/index/ # Text indices
~/.swissarmyhammer/cache/ # Temporary cache files
# Project-specific indices
.swissarmyhammer/index/ # Local project indices
.swissarmyhammer/semantic.db # Semantic database
Index Structure:
index/
βββ meta.json # Index metadata
βββ segments/ # Tantivy segments
β βββ segment_0/
β βββ segment_1/
β βββ ...
βββ .managed # Management marker
Index Creation and Rebuilding
Automatic Index Creation:
# Index is created automatically on first search
swissarmyhammer search "query"
# Force index rebuild
swissarmyhammer search --rebuild-index
Manual Index Management:
# Check index status
swissarmyhammer doctor --check-indices
# Rebuild all indices
swissarmyhammer index rebuild --all
# Rebuild specific index type
swissarmyhammer index rebuild --text
swissarmyhammer index rebuild --semantic
Programmatic Index Creation:
use swissarmyhammer::search::SearchEngine;
// Create in-memory index
let mut engine = SearchEngine::new()?;
// Create persistent index
let mut engine = SearchEngine::with_directory("/path/to/index")?;
// Index prompts
engine.index_prompts(&prompts)?;
engine.commit()?;
Index Updates and Maintenance
Incremental Updates:
// Add new prompt to index
engine.index_prompt(&new_prompt)?;
engine.commit()?;
// Batch updates for better performance
for prompt in new_prompts {
engine.index_prompt(&prompt)?;
}
engine.commit()?; // Single commit at the end
Index Optimization:
# Optimize index segments
swissarmyhammer index optimize
# Force merge segments
swissarmyhammer index merge --force
# Compact index storage
swissarmyhammer index compact
Configuration Options:
// Tantivy writer configuration
let writer = index.writer_with_num_threads(2, 50_000_000)?;
// Custom merge policy
use tantivy::merge_policy::LogMergePolicy;
let merge_policy = LogMergePolicy::default()
.set_min_merge_size(8)
.set_min_layer_size(10_000);
Text Index Monitoring
Index Statistics:
# Get index information
swissarmyhammer index stats
# Detailed segment information
swissarmyhammer index stats --detailed
Performance Metrics:
pub struct IndexStats {
pub total_documents: usize,
pub total_segments: usize,
pub index_size_bytes: u64,
pub last_commit_timestamp: DateTime<Utc>,
pub avg_query_time_ms: f64,
}
Vector Index Management
Semantic Database Structure
Database Schema:
-- Code chunks metadata
CREATE TABLE chunks (
id TEXT PRIMARY KEY,
file_path TEXT NOT NULL,
content TEXT NOT NULL,
language TEXT,
start_line INTEGER NOT NULL,
end_line INTEGER NOT NULL,
chunk_type TEXT NOT NULL,
content_hash TEXT NOT NULL,
created_at TIMESTAMP DEFAULT now(),
updated_at TIMESTAMP DEFAULT now()
);
-- Vector embeddings
CREATE TABLE embeddings (
chunk_id TEXT PRIMARY KEY,
embedding FLOAT[] NOT NULL,
embedding_model TEXT NOT NULL,
model_version TEXT NOT NULL,
created_at TIMESTAMP DEFAULT now(),
FOREIGN KEY (chunk_id) REFERENCES chunks(id) ON DELETE CASCADE
);
-- Index metadata
CREATE TABLE index_metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TIMESTAMP DEFAULT now()
);
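The `content_hash` column is what makes incremental updates cheap: a chunk is re-embedded only when its stored hash no longer matches the current content. A minimal Python sketch of that check (the actual hash algorithm SwissArmyHammer uses is an assumption; SHA-256 stands in here):

```python
import hashlib
from typing import Optional

def chunk_hash(content: str) -> str:
    # Stable digest of the chunk text, playing the role of the
    # content_hash column (algorithm choice is an assumption).
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def needs_reembedding(content: str, stored_hash: Optional[str]) -> bool:
    # Re-embed only when the chunk is new or its content changed.
    return stored_hash is None or chunk_hash(content) != stored_hash
```

Because embedding generation dominates indexing cost, skipping unchanged chunks this way is what makes `semantic update` fast on large codebases.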
Semantic Index Operations
Database Initialization:
# Initialize semantic database
swissarmyhammer semantic init
# Check database schema
swissarmyhammer semantic schema --verify
# Migrate database schema
swissarmyhammer semantic migrate
Embedding Generation:
# Generate embeddings for codebase
swissarmyhammer semantic index /path/to/code
# Incremental embedding updates
swissarmyhammer semantic update --since last-commit
# Regenerate embeddings for specific files
swissarmyhammer semantic index --files "*.rs,*.py"
Vector Index Configuration:
pub struct SemanticConfig {
pub database_path: PathBuf,
pub embedding_model: String,
pub batch_size: usize,
pub max_chunk_size: usize,
pub similarity_threshold: f32,
pub index_update_strategy: UpdateStrategy,
}
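The `similarity_threshold` field determines which vector matches count as hits. A small Python sketch of threshold filtering with cosine similarity (function names are illustrative, not part of the SwissArmyHammer API):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def filter_hits(query_vec, candidates, similarity_threshold=0.7):
    # Keep candidates at or above the configured threshold, best first.
    scored = [(cid, cosine_similarity(query_vec, vec)) for cid, vec in candidates]
    return sorted(
        [(cid, s) for cid, s in scored if s >= similarity_threshold],
        key=lambda t: t[1],
        reverse=True,
    )
```

Raising the threshold trades recall for precision; 0.7 is a common default for sentence-transformer embeddings.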
Embedding Model Management
Model Configuration:
# List available embedding models
swissarmyhammer semantic models list
# Download and cache model
swissarmyhammer semantic models download sentence-transformers/all-MiniLM-L6-v2
# Set default model
swissarmyhammer semantic models set-default all-MiniLM-L6-v2
# Model information
swissarmyhammer semantic models info all-MiniLM-L6-v2
Model Switching:
# Switch to different model (requires re-indexing)
swissarmyhammer semantic models switch all-mpnet-base-v2 --reindex
# Compare model performance
swissarmyhammer semantic benchmark --models all-MiniLM-L6-v2,all-mpnet-base-v2
Vector Index Maintenance
Database Maintenance:
# Vacuum database
swissarmyhammer semantic vacuum
# Analyze query performance
swissarmyhammer semantic analyze
# Repair corrupted database
swissarmyhammer semantic repair --backup
Embedding Validation:
# Validate embedding integrity
swissarmyhammer semantic validate
# Check for orphaned embeddings
swissarmyhammer semantic cleanup --dry-run
swissarmyhammer semantic cleanup --execute
# Verify embedding model consistency
swissarmyhammer semantic verify-models
Performance Optimization
Index Performance Tuning
Text Index Optimization:
// Writer configuration for performance
pub struct IndexConfig {
pub writer_buffer_size: usize, // Default: 50MB
pub merge_policy: MergePolicy, // LogMergePolicy recommended
pub num_threads: usize, // CPU cores
pub commit_interval: Duration, // Auto-commit frequency
}
// Performance settings
let config = IndexConfig {
writer_buffer_size: 100_000_000, // 100MB for large datasets
num_threads: num_cpus::get(), // Use all CPU cores
commit_interval: Duration::from_secs(30), // Commit every 30s
..Default::default()
};
Vector Index Optimization:
pub struct VectorIndexConfig {
pub batch_size: usize, // Embedding batch size
pub connection_pool_size: usize, // DB connection pool
pub cache_size: usize, // Result cache size
pub similarity_cache_ttl: Duration, // Cache expiration
}
Memory Management
Memory Usage Patterns:
# Monitor memory usage
swissarmyhammer index stats --memory
# Set memory limits
export SWISSARMYHAMMER_MAX_MEMORY=4GB
export SWISSARMYHAMMER_INDEX_CACHE_SIZE=1GB
Memory Optimization Strategies:
// Streaming index updates for large datasets
// (`chunks` on an arbitrary iterator comes from the itertools crate)
use itertools::Itertools;
pub fn stream_index_updates<I>(
engine: &mut SearchEngine,
prompts: I,
batch_size: usize,
) -> Result<()>
where
I: Iterator<Item = Prompt>,
{
for batch in &prompts.chunks(batch_size) {
for prompt in batch {
engine.index_prompt(&prompt)?;
}
engine.commit()?; // Commit each batch
}
Ok(())
}
Storage Optimization
Disk Usage Management:
# Check storage usage
swissarmyhammer index disk-usage
# Clean temporary files
swissarmyhammer index clean --temp
# Archive old indices
swissarmyhammer index archive --older-than 30d
Compression Settings:
// Enable index compression
let index = Index::builder()
.compression(Compression::Lz4)
.block_size(16384)
.build()?;
Index Backup and Recovery
Backup Strategies
Automated Backups:
# Create full backup
swissarmyhammer backup create --type full --output backup.tar.gz
# Create incremental backup
swissarmyhammer backup create --type incremental --since last-backup
# Schedule regular backups
swissarmyhammer backup schedule --daily --retention 7d
Manual Backup:
# Backup text indices
tar -czf text-index-backup.tar.gz ~/.swissarmyhammer/index/
# Backup semantic database
cp ~/.swissarmyhammer/semantic.db semantic-backup.db
# Backup configuration
cp -r ~/.swissarmyhammer/config/ config-backup/
Recovery Procedures
Index Recovery:
# Restore from backup
swissarmyhammer backup restore backup.tar.gz
# Rebuild from source
swissarmyhammer index rebuild --from-source
# Partial recovery
swissarmyhammer index recover --text-only
swissarmyhammer index recover --semantic-only
Corruption Recovery:
# Detect corruption
swissarmyhammer index verify
# Repair text index
swissarmyhammer index repair --text
# Repair semantic database
swissarmyhammer semantic repair --auto-fix
# Recovery from corruption
swissarmyhammer index recover --rebuild-corrupted
Troubleshooting
Common Issues
Index Corruption:
# Symptoms
# - Search results incomplete
# - Index verification failures
# - Crash during search operations
# Diagnosis
swissarmyhammer index verify --verbose
swissarmyhammer doctor --check-indices
# Resolution
swissarmyhammer index rebuild --force
Performance Issues:
# Symptoms
# - Slow search responses
# - High memory usage
# - Long indexing times
# Diagnosis
swissarmyhammer index stats --performance
swissarmyhammer index analyze --slow-queries
# Resolution
swissarmyhammer index optimize --aggressive
swissarmyhammer index config --tune-performance
Storage Issues:
# Symptoms
# - Disk space errors
# - Cannot create index
# - Write permission errors
# Diagnosis
swissarmyhammer index disk-usage --detailed
swissarmyhammer doctor --check-permissions
# Resolution
swissarmyhammer index clean --aggressive
sudo chown -R $USER ~/.swissarmyhammer/
Diagnostic Commands
Index Health Check:
# Comprehensive health check
swissarmyhammer doctor --indices
# Specific checks
swissarmyhammer index health --text
swissarmyhammer index health --semantic
swissarmyhammer index health --all
Performance Analysis:
# Query performance profiling
swissarmyhammer search --profile "test query"
# Index performance metrics
swissarmyhammer index benchmark
# Memory usage analysis
swissarmyhammer index memory-profile
Debug Information:
# Enable debug logging
RUST_LOG=swissarmyhammer::search=debug swissarmyhammer search query
# Export debug information
swissarmyhammer debug export --indices
# Generate diagnostic report
swissarmyhammer doctor --report diagnostic-report.txt
Best Practices
Index Management Best Practices
- Regular Maintenance:
  - Schedule weekly index optimization
  - Monitor index size growth
  - Clean temporary files regularly
- Performance Monitoring:
  - Track query response times
  - Monitor memory usage patterns
  - Set up alerts for index corruption
- Backup Strategy:
  - Daily incremental backups
  - Weekly full backups
  - Test recovery procedures regularly
- Configuration Management:
  - Version control index configuration
  - Document performance tuning changes
  - Test configuration changes in development
Development Workflow Integration
CI/CD Integration:
# Example GitHub Actions workflow
- name: Update Search Index
run: |
swissarmyhammer index update --check-changes
swissarmyhammer index optimize
swissarmyhammer index verify
Pre-commit Hooks:
#!/bin/bash
# .git/hooks/pre-commit
swissarmyhammer index update --incremental
swissarmyhammer index verify --quick
Configuration Reference
Environment Variables
# Index locations
export SWISSARMYHAMMER_INDEX_DIR="/custom/index/path"
export SWISSARMYHAMMER_SEMANTIC_DB="/custom/semantic.db"
# Performance tuning
export SWISSARMYHAMMER_INDEX_BUFFER_SIZE="100MB"
export SWISSARMYHAMMER_MAX_THREADS="8"
export SWISSARMYHAMMER_CACHE_SIZE="1GB"
# Maintenance settings
export SWISSARMYHAMMER_AUTO_OPTIMIZE="true"
export SWISSARMYHAMMER_BACKUP_RETENTION="30d"
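Values like `100MB` and `1GB` are human-readable byte sizes. A Python sketch of how such strings can be parsed (the exact grammar the CLI accepts is an assumption; this handles the forms shown above):

```python
_UNITS = {"B": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size(value: str) -> int:
    # Parse "50MB", "1GB", or a bare byte count into bytes.
    value = value.strip().upper()
    for unit in ("GB", "MB", "KB", "B"):  # longest suffixes first
        if value.endswith(unit):
            return int(float(value[: -len(unit)]) * _UNITS[unit])
    return int(value)  # no suffix: already a byte count
```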
Configuration File
# ~/.swissarmyhammer/config.toml
[index]
buffer_size = "50MB"
auto_optimize = true
backup_enabled = true
[index.text]
merge_policy = "log"
commit_interval = "30s"
[index.semantic]
model = "all-MiniLM-L6-v2"
batch_size = 32
similarity_threshold = 0.7
[performance]
max_results = 1000
cache_ttl = "1h"
memory_limit = "4GB"
See Also
- Search Architecture - System architecture overview
- Search Guide - User guide for search features
- Performance Tuning - Advanced optimization
- Troubleshooting - General troubleshooting guide
Testing and Debugging Guide
This guide covers testing strategies, debugging techniques, and best practices for working with SwissArmyHammer prompts.
Interactive Testing
Basic Testing Workflow
The `test` command provides an interactive environment for testing prompts:
# Start interactive testing
swissarmyhammer test code-review
This will:
- Load the specified prompt
- Prompt for required arguments
- Show optional arguments with defaults
- Render the template
- Display the result
- Offer additional actions (copy, save, retry)
Testing with Predefined Arguments
# Test with known arguments
swissarmyhammer test code-review \
--arg code="fn main() { println!(\"Hello\"); }" \
--arg language="rust"
# Copy result directly to clipboard
swissarmyhammer test email-template \
--arg recipient="John" \
--arg subject="Meeting" \
--copy
Debugging Template Issues
Common Template Problems
Missing Variables
<!-- Problem: undefined variable -->
Hello {{name}}
<!-- Solution: provide default -->
Hello {{ name | default: "Guest" }}
Type Mismatches
<!-- Problem: trying to use string methods on numbers -->
{{ count | upcase }}
<!-- Solution: convert types -->
{{ count | append: " items" }}
Loop Issues
<!-- Problem: not checking for empty arrays -->
{% for item in items %}
- {{ item }}
{% endfor %}
<!-- Solution: check array exists and has items -->
{% if items and items.size > 0 %}
{% for item in items %}
- {{ item }}
{% endfor %}
{% else %}
No items found.
{% endif %}
Debug Mode
Use debug mode to see detailed template processing:
swissarmyhammer test prompt-name --debug
Debug output includes:
- Variable resolution steps
- Filter application results
- Conditional evaluation
- Loop iteration details
- Performance timing
Validation Strategies
Argument Validation
Test with different argument combinations:
# Test required arguments only
swissarmyhammer test prompt-name --arg required_arg="value"
# Test with all arguments
swissarmyhammer test prompt-name \
--arg required_arg="value" \
--arg optional_arg="optional_value"
# Test with edge cases
swissarmyhammer test prompt-name \
--arg text="" \
--arg number="0" \
--arg array="[]"
Template Edge Cases
Create test cases for common scenarios:
- Empty inputs
- Very long inputs
- Special characters
- Unicode content
- Null/undefined values
Automated Testing
For prompt libraries, create test scripts:
#!/bin/bash
# test-all-prompts.sh
PROMPTS=$(swissarmyhammer search --json "" --limit 100 | jq -r '.results[].id')
for prompt in $PROMPTS; do
echo "Testing $prompt..."
if swissarmyhammer test "$prompt" --arg placeholder="test" 2>/dev/null; then
echo "✓ $prompt"
else
echo "✗ $prompt"
fi
done
Performance Testing
Measuring Render Time
# Time a complex template
time swissarmyhammer test complex-template \
--arg large_data="$(cat large-file.json)"
# Use debug mode for detailed timing
swissarmyhammer test template-name --debug | grep "Performance:"
Memory Usage Testing
For large templates or data:
# Monitor memory usage during rendering
/usr/bin/time -v swissarmyhammer test large-template \
--arg big_data="$(cat massive-dataset.json)"
Best Practices
Writing Testable Prompts
- Provide sensible defaults for optional arguments
- Handle empty/null inputs gracefully
- Use meaningful argument names
- Include example values in descriptions
- Test with realistic data sizes
Testing Workflow
- Start simple: Test with minimal arguments
- Add complexity: Test with full argument sets
- Test edge cases: Empty, null, large inputs
- Validate output: Ensure rendered content makes sense
- Performance check: Verify reasonable render times
Debugging Tips
- Use debug mode for complex templates
- Test filters individually in simple templates
- Validate JSON/YAML with external tools
- Check argument types match expectations
- Use raw mode to see unprocessed templates
Integration with Development
IDE Integration
Many editors support SwissArmyHammer testing:
# VS Code task example
{
"version": "2.0.0",
"tasks": [
{
"label": "Test Current Prompt",
"type": "shell",
"command": "swissarmyhammer",
"args": ["test", "${fileBasenameNoExtension}"],
"group": "test",
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared"
}
}
]
}
Continuous Integration
Add prompt testing to CI pipelines:
# .github/workflows/test-prompts.yml
name: Test Prompts
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
- name: Test all prompts
run: |
for prompt in prompts/*.md; do
name=$(basename "$prompt" .md)
echo "Testing $name..."
swissarmyhammer test "$name" --arg test="ci_validation"
done
See Also
- `test` command - Command reference
- Template Variables - Template syntax
- Custom Filters - Filter reference
Sharing and Collaboration
This guide covers how to share SwissArmyHammer prompts with your team, collaborate on prompt development, and manage shared prompt libraries.
Overview
SwissArmyHammer supports multiple collaboration workflows:
- File Sharing - Share prompt files directly
- Git Integration - Version control for prompts
- Team Directories - Shared network folders
- Package Management - Distribute as packages
Sharing Methods
Direct File Sharing
Single Prompt Sharing
Share individual prompt files:
# Send a single prompt
cp ~/.swissarmyhammer/prompts/code-review.md /shared/prompts/
# Share via email/chat
# Attach the .md file directly
Recipients install by copying to their prompt directory:
# Install shared prompt
cp /downloads/code-review.md ~/.swissarmyhammer/prompts/
Prompt Collections
Share multiple related prompts:
# Create a collection
mkdir python-toolkit
cp ~/.swissarmyhammer/prompts/python-*.md python-toolkit/
cp ~/.swissarmyhammer/prompts/pytest-*.md python-toolkit/
# Share as zip
zip -r python-toolkit.zip python-toolkit/
Git-Based Collaboration
Repository Structure
Organize prompts in a Git repository:
prompt-library/
├── .git/
├── README.md
├── prompts/
│   ├── development/
│   │   ├── languages/
│   │   ├── frameworks/
│   │   └── tools/
│   ├── data/
│   │   ├── analysis/
│   │   └── visualization/
│   └── writing/
│       ├── technical/
│       └── creative/
├── shared/
│   ├── components/
│   └── templates/
├── scripts/
│   ├── validate.sh
│   └── install.sh
└── .github/
    └── workflows/
        └── validate-prompts.yml
Team Workflow
Initial Setup
# Create prompt repository
git init prompt-library
cd prompt-library
# Add initial structure
mkdir -p prompts/{development,data,writing}
mkdir -p shared/{components,templates}
# Add README
cat > README.md << 'EOF'
# Team Prompt Library
Shared SwissArmyHammer prompts for our team.
## Installation
```bash
git clone https://github.com/ourteam/prompt-library
./scripts/install.sh
```
## Contributing
See CONTRIBUTING.md for guidelines.
EOF
# Initial commit
git add .
git commit -m "Initial prompt library structure"
Installation Script
Create `scripts/install.sh`:
#!/bin/bash
# install.sh - Install team prompts
PROMPT_DIR="$HOME/.swissarmyhammer/prompts/team"
# Create team namespace
mkdir -p "$PROMPT_DIR"
# Copy prompts
cp -r prompts/* "$PROMPT_DIR/"
# Copy shared components
cp -r shared/* "$HOME/.swissarmyhammer/prompts/_shared/"
echo "Team prompts installed to $PROMPT_DIR"
echo "Run 'swissarmyhammer list' to see available prompts"
Contributing Prompts
# Clone repository
git clone https://github.com/ourteam/prompt-library
cd prompt-library
# Create feature branch
git checkout -b add-docker-prompts
# Add new prompts
mkdir -p prompts/development/docker
vim prompts/development/docker/dockerfile-optimizer.md
# Test locally
swissarmyhammer doctor --check prompts
# Commit and push
git add prompts/development/docker/
git commit -m "Add Docker optimization prompts"
git push origin add-docker-prompts
# Create pull request
gh pr create --title "Add Docker optimization prompts" \
--body "Adds prompts for Dockerfile optimization and best practices"
Version Control Best Practices
Branching Strategy
# Main branches
main # Stable, tested prompts
develop # Integration branch
feature/* # New prompts
fix/* # Bug fixes
experimental/* # Experimental prompts
Commit Messages
Follow conventional commits:
# Adding prompts
git commit -m "feat(python): add async code review prompt"
# Fixing prompts
git commit -m "fix(api-design): correct OpenAPI template syntax"
# Updating prompts
git commit -m "refactor(test-writer): improve test case generation"
# Documentation
git commit -m "docs: add prompt writing guidelines"
Pull Request Template
`.github/pull_request_template.md`:
## Description
Brief description of the prompts being added/modified
## Type of Change
- [ ] New prompt(s)
- [ ] Bug fix
- [ ] Enhancement
- [ ] Documentation
## Testing
- [ ] Tested with `swissarmyhammer doctor`
- [ ] Validated template syntax
- [ ] Checked for duplicates
## Checklist
- [ ] Follows naming conventions
- [ ] Includes required metadata
- [ ] Has meaningful description
- [ ] Includes usage examples
Automated Validation
GitHub Actions Workflow
`.github/workflows/validate-prompts.yml`:
name: Validate Prompts
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: |
curl -sSL https://install.swissarmyhammer.dev | sh
- name: Validate Prompts
run: |
swissarmyhammer doctor --check prompts
- name: Check Duplicates
run: |
swissarmyhammer list --format json | \
jq -r '.[].name' | sort | uniq -d > duplicates.txt
if [ -s duplicates.txt ]; then
echo "Duplicate prompts found:"
cat duplicates.txt
exit 1
fi
- name: Lint Markdown
uses: DavidAnson/markdownlint-cli2-action@v11
with:
globs: 'prompts/**/*.md'
Team Directories
Network Share Setup
Windows Network Share
# Create shared folder
New-Item -Path "\\server\prompts" -ItemType Directory
# Set permissions
$acl = Get-Acl "\\server\prompts"
$permission = "Domain\TeamMembers","ReadAndExecute","Allow"
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule $permission
$acl.SetAccessRule($accessRule)
Set-Acl "\\server\prompts" $acl
Configure SwissArmyHammer:
# ~/.swissarmyhammer/config.toml
[prompts]
directories = [
"~/.swissarmyhammer/prompts",
"//server/prompts"
]
NFS Share (Linux/Mac)
# Server setup
sudo mkdir -p /srv/prompts
sudo chown -R :team /srv/prompts
sudo chmod -R 775 /srv/prompts
# /etc/exports
/srv/prompts 192.168.1.0/24(ro,sync,no_subtree_check)
# Client mount
sudo mkdir -p /mnt/team-prompts
sudo mount -t nfs server:/srv/prompts /mnt/team-prompts
Syncing Strategies
rsync Method
#!/bin/bash
# sync-prompts.sh
REMOTE="server:/srv/prompts"
LOCAL="$HOME/.swissarmyhammer/prompts/team"
# Sync from server (read-only)
rsync -avz --delete "$REMOTE/" "$LOCAL/"
# Watch for changes
while inotifywait -r -e modify,create,delete "$LOCAL"; do
echo "Changes detected, syncing..."
rsync -avz "$LOCAL/" "$REMOTE/"
done
Cloud Storage Sync
Using rclone:
# Configure rclone
rclone config
# Sync from cloud
rclone sync dropbox:team-prompts ~/.swissarmyhammer/prompts/team
# Bidirectional sync
rclone bisync dropbox:team-prompts ~/.swissarmyhammer/prompts/team
Package Management
Creating Packages
NPM Package
`package.json`:
{
"name": "@company/swissarmyhammer-prompts",
"version": "1.0.0",
"description": "Company SwissArmyHammer prompts",
"files": [
"prompts/**/*.md",
"install.js"
],
"scripts": {
"postinstall": "node install.js"
},
"keywords": ["swissarmyhammer", "prompts", "ai"],
"repository": {
"type": "git",
"url": "https://github.com/company/prompts.git"
}
}
`install.js`:
const fs = require('fs');
const path = require('path');
const os = require('os');
const sourceDir = path.join(__dirname, 'prompts');
const targetDir = path.join(
os.homedir(),
'.swissarmyhammer',
'prompts',
'company'
);
// Copy prompts to user directory
fs.cpSync(sourceDir, targetDir, { recursive: true });
console.log(`Prompts installed to ${targetDir}`);
Python Package
`setup.py`:
from setuptools import setup, find_packages
import os
from pathlib import Path
def get_prompt_files():
"""Get all prompt files for packaging."""
prompt_files = []
for root, dirs, files in os.walk('prompts'):
for file in files:
if file.endswith('.md'):
prompt_files.append(os.path.join(root, file))
return prompt_files
setup(
name='company-swissarmyhammer-prompts',
version='1.0.0',
packages=find_packages(),
data_files=[
(f'.swissarmyhammer/prompts/company/{os.path.dirname(f)}', [f])
for f in get_prompt_files()
],
install_requires=[],
entry_points={
'console_scripts': [
'install-company-prompts=scripts.install:main',
],
},
)
Distribution Channels
Internal Package Registry
# Publish to internal registry
npm publish --registry https://npm.company.com
# Install from registry
npm install @company/swissarmyhammer-prompts --registry https://npm.company.com
Container Registry
`Dockerfile`:
FROM alpine:latest
# Install prompts
COPY prompts /prompts
# Create tarball
RUN tar -czf /prompts.tar.gz -C / prompts
# Export as artifact
FROM scratch
COPY --from=0 /prompts.tar.gz /
# Build and push
docker build -t registry.company.com/prompts:latest .
docker push registry.company.com/prompts:latest
# Pull and extract
docker create --name temp registry.company.com/prompts:latest
docker cp temp:/prompts.tar.gz .
docker rm temp
tar -xzf prompts.tar.gz -C ~/.swissarmyhammer/
Access Control
Git-Based Permissions
# Separate repositories by access level
prompt-library-public/ # All team members
prompt-library-internal/ # Internal team only
prompt-library-sensitive/ # Restricted access
File System Permissions
# Create group-based access
sudo groupadd prompt-readers
sudo groupadd prompt-writers
# Set permissions
sudo chown -R :prompt-readers /srv/prompts
sudo chmod -R 750 /srv/prompts
sudo chmod -R 770 /srv/prompts/contributions
# Add users to groups
sudo usermod -a -G prompt-readers alice
sudo usermod -a -G prompt-writers bob
Prompt Metadata
Mark prompts with access levels:
---
name: sensitive-data-analyzer
title: Sensitive Data Analysis
access: restricted
allowed_users:
- security-team
- data-governance
tags:
- sensitive
- compliance
- restricted
---
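Enforcement of these access levels is left to the consuming tooling. A Python sketch of reading the front matter and applying the `access`/`allowed_users` fields (the parser is a deliberately minimal YAML subset for flat keys and string lists; the enforcement semantics are an assumption):

```python
def parse_front_matter(text):
    # Minimal parser for the flat "key: value" and "- item" forms shown
    # above; a real implementation would use a YAML library.
    lines = [l.rstrip() for l in text.strip().splitlines()]
    end = lines.index("---", 1)  # closing front-matter delimiter
    meta, current_key = {}, None
    for raw in lines[1:end]:
        line = raw.strip()
        if line.startswith("- ") and current_key:
            meta.setdefault(current_key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            current_key = key.strip()
            if value.strip():
                meta[current_key] = value.strip()
    return meta

def may_access(meta, user_groups):
    # Unrestricted prompts are visible to everyone; restricted prompts
    # only to the listed groups (assumed semantics).
    if meta.get("access") != "restricted":
        return True
    allowed = meta.get("allowed_users", [])
    return any(g in allowed for g in user_groups)
```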
Collaboration Tools
Prompt Development Environment
VS Code workspace settings:
`.vscode/settings.json`:
{
"files.associations": {
"*.md": "markdown"
},
"markdown.validate.enabled": true,
"markdown.validate.rules": {
"yaml-front-matter": true
},
"files.exclude": {
"**/.git": true,
"**/.DS_Store": true
},
"search.exclude": {
"**/node_modules": true,
"**/.git": true
}
}
Team Guidelines
Create `CONTRIBUTING.md`:
# Contributing to Team Prompts
## Prompt Standards
### Naming Conventions
- Use kebab-case: `code-review-security.md`
- Be descriptive: `python-async-optimizer.md`
- Include context: `react-component-generator.md`
### Required Metadata
All prompts must include:
- `name` - Unique identifier
- `title` - Human-readable title
- `description` - What the prompt does
- `author` - Your email
- `category` - Primary category
- `tags` - At least 3 relevant tags
### Template Quality
- Use clear, concise language
- Include usage examples
- Test with various inputs
- Document edge cases
## Review Process
1. Create feature branch
2. Add/modify prompts
3. Run validation: `swissarmyhammer doctor`
4. Submit pull request
5. Address review feedback
6. Merge when approved
## Testing
Before submitting:
```bash
# Validate syntax
swissarmyhammer doctor --check prompts
# Test rendering
swissarmyhammer get your-prompt --args key=value
# Check for conflicts
swissarmyhammer list --format json | jq '.[] | select(.name=="your-prompt")'
```
Communication
Slack Integration
// slack-bot.js
const { WebClient } = require('@slack/web-api');
const { exec } = require('child_process');
const slack = new WebClient(process.env.SLACK_TOKEN);
// Notify on new prompts
async function notifyNewPrompt(promptName, author) {
await slack.chat.postMessage({
channel: '#prompt-library',
text: `New prompt added: *${promptName}* by ${author}`,
attachments: [{
color: 'good',
fields: [{
title: 'View Prompt',
value: `\`swissarmyhammer get ${promptName}\``,
short: false
}]
}]
});
}
Email Notifications
#!/bin/bash
# notify-updates.sh
RECIPIENTS="team@company.com"
SUBJECT="Prompt Library Updates"
# Get recent changes
CHANGES=$(git log --oneline --since="1 week ago" --grep="^feat\|^fix")
# Send email
echo "Weekly prompt library updates:
$CHANGES
To update your local prompts:
git pull origin main
./scripts/install.sh
" | mail -s "$SUBJECT" $RECIPIENTS
Best Practices
1. Establish Standards
Define clear guidelines:
- Naming conventions
- Required metadata
- Quality standards
- Review process
- Version strategy
2. Use Namespaces
Organize prompts by team/project:
~/.swissarmyhammer/prompts/
├── personal/ # Your prompts
├── team/ # Team shared
├── company/ # Company wide
└── community/ # Open source
3. Document Everything
- README for each category
- Usage examples in prompts
- Change logs for versions
- Migration guides
4. Automate Validation
- Pre-commit hooks
- CI/CD validation
- Automated testing
- Quality metrics
5. Regular Maintenance
- Review unused prompts
- Update outdated content
- Consolidate duplicates
- Archive deprecated prompts
Examples
Team Onboarding
Create onboarding bundle:
#!/bin/bash
# create-onboarding-bundle.sh
# Create directory structure
mkdir -p onboarding-prompts/prompts
# Copy essential prompts
cp ~/.swissarmyhammer/prompts/*onboarding*.md onboarding-prompts/prompts/
cp ~/.swissarmyhammer/prompts/*essential*.md onboarding-prompts/prompts/
# Add setup script
cat > onboarding-prompts/setup.sh << 'EOF'
#!/bin/bash
echo "Welcome to the team! Setting up your prompts..."
cp -r prompts/* ~/.swissarmyhammer/prompts/
echo "Run 'swissarmyhammer list' to see your new prompts!"
EOF
# Create welcome package
tar -czf welcome-pack.tar.gz onboarding-prompts/
Project Templates
Share project-specific prompts:
# project-manifest.yaml
name: microservice-toolkit
version: 1.0.0
description: Prompts for microservice development
prompts:
- api-design
- openapi-generator
- dockerfile-creator
- k8s-manifest-builder
- test-suite-generator
dependencies:
- base-toolkit: ">=1.0.0"
install_script: |
mkdir -p ~/.swissarmyhammer/prompts/projects/microservices
cp prompts/*.md ~/.swissarmyhammer/prompts/projects/microservices/
Next Steps
- Read Prompt Organization for structure best practices
- See Contributing for contribution guidelines
- Explore Git Integration for version control workflows
- Learn about Configuration for team setup
MCP Protocol
SwissArmyHammer implements the Model Context Protocol (MCP) to provide prompts to AI assistants like Claude. This guide covers the protocol details and implementation specifics.
Overview
The Model Context Protocol (MCP) is a standardized protocol for communication between AI assistants and context providers. SwissArmyHammer acts as an MCP server, providing prompt templates as resources and tools.
┌─────────────┐         MCP          ┌──────────────────┐
│   Claude    │◄────────────────────►│  SwissArmyHammer │
│  (Client)   │     JSON-RPC 2.0     │     (Server)     │
└─────────────┘                      └──────────────────┘
Protocol Basics
Transport
MCP uses JSON-RPC 2.0 over various transports:
- stdio - Standard input/output (default for Claude Code)
- HTTP - REST API endpoints
- WebSocket - Persistent connections
Message Format
All messages follow JSON-RPC 2.0 format:
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {},
"id": 1
}
Response format:
{
"jsonrpc": "2.0",
"result": {
"prompts": [...]
},
"id": 1
}
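Constructing and validating these envelopes is mechanical. A Python sketch (helper names are illustrative, not part of any SwissArmyHammer API):

```python
import json

def make_request(method, params, req_id):
    # Build a JSON-RPC 2.0 request as used by MCP.
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    )

def parse_response(raw):
    # A response carries either "result" or "error" (never both)
    # and echoes the request id.
    msg = json.loads(raw)
    assert msg.get("jsonrpc") == "2.0"
    assert ("result" in msg) != ("error" in msg)
    return msg
```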
MCP Methods
Initialize
Establishes connection and capabilities:
// Request
{
"jsonrpc": "2.0",
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {
"prompts": {},
"resources": {}
},
"clientInfo": {
"name": "claude-code",
"version": "1.0.0"
}
},
"id": 1
}
// Response
{
"jsonrpc": "2.0",
"result": {
"protocolVersion": "2024-11-05",
"capabilities": {
"prompts": {
"listChanged": true
},
"resources": {
"subscribe": true,
"listChanged": true
}
},
"serverInfo": {
"name": "swissarmyhammer",
"version": "0.1.0"
}
},
"id": 1
}
List Prompts
Get available prompts:
// Request
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {
"cursor": null
},
"id": 2
}
// Response
{
"jsonrpc": "2.0",
"result": {
"prompts": [
{
"id": "code-review",
"name": "Code Review",
"description": "Reviews code for best practices and issues",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
},
{
"name": "language",
"description": "Programming language",
"required": false
}
]
}
],
"nextCursor": null
},
"id": 2
}
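A client typically inspects the `arguments` array to learn what it must supply before calling `prompts/get`. A Python sketch (treating a missing `required` flag as optional is an assumption):

```python
def required_arguments(prompt):
    # Names of arguments a client must provide; arguments without a
    # truthy "required" flag are treated as optional.
    return [a["name"] for a in prompt.get("arguments", []) if a.get("required")]
```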
Get Prompt
Retrieve a specific prompt with arguments:
// Request
{
"jsonrpc": "2.0",
"method": "prompts/get",
"params": {
"promptId": "code-review",
"arguments": {
"code": "def add(a, b):\n return a + b",
"language": "python"
}
},
"id": 3
}
// Response
{
"jsonrpc": "2.0",
"result": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "Please review this python code:\n\n```python\ndef add(a, b):\n return a + b\n```\n\nFocus on:\n- Code quality\n- Best practices\n- Potential issues"
}
}
]
},
"id": 3
}
List Resources
Get available resources (prompt source files):
// Request
{
"jsonrpc": "2.0",
"method": "resources/list",
"params": {
"cursor": null
},
"id": 4
}
// Response
{
"jsonrpc": "2.0",
"result": {
"resources": [
{
"uri": "prompt://code-review",
"name": "code-review.md",
"description": "Code review prompt source",
"mimeType": "text/markdown"
}
],
"nextCursor": null
},
"id": 4
}
Read Resource
Get resource content:
// Request
{
"jsonrpc": "2.0",
"method": "resources/read",
"params": {
"uri": "prompt://code-review"
},
"id": 5
}
// Response
{
"jsonrpc": "2.0",
"result": {
"contents": [
{
"uri": "prompt://code-review",
"mimeType": "text/markdown",
"text": "---\nname: code-review\ntitle: Code Review\n---\n\n# Code Review\n..."
}
]
},
"id": 5
}
Notifications
Prompt List Changed
Sent when prompts are added/removed/modified:
{
"jsonrpc": "2.0",
"method": "notifications/prompts/list_changed",
"params": {}
}
Resource List Changed
Sent when resources change:
{
"jsonrpc": "2.0",
"method": "notifications/resources/list_changed",
"params": {}
}
Error Handling
MCP defines standard error codes:
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params",
"data": {
"details": "Missing required argument 'code'"
}
},
"id": 6
}
Standard error codes:
- `-32700` - Parse error
- `-32600` - Invalid request
- `-32601` - Method not found
- `-32602` - Invalid params
- `-32603` - Internal error
Custom error codes:
- `1001` - Prompt not found
- `1002` - Invalid prompt format
- `1003` - Template render error
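A server-side helper for producing well-formed error responses might look like this Python sketch (the code-to-message table mirrors the lists above; the helper itself is illustrative):

```python
JSONRPC_ERRORS = {
    -32700: "Parse error",
    -32600: "Invalid request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
    # SwissArmyHammer-specific codes
    1001: "Prompt not found",
    1002: "Invalid prompt format",
    1003: "Template render error",
}

def error_response(code, req_id, details=None):
    # Build a JSON-RPC 2.0 error envelope with optional detail payload.
    err = {"code": code, "message": JSONRPC_ERRORS.get(code, "Unknown error")}
    if details:
        err["data"] = {"details": details}
    return {"jsonrpc": "2.0", "error": err, "id": req_id}
```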
SwissArmyHammer Extensions
Pagination
For large prompt collections:
// Request with pagination
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {
"cursor": "eyJvZmZzZXQiOjUwfQ==",
"limit": 50
},
"id": 7
}
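The cursor is opaque to clients, but the example value above happens to be base64-encoded JSON: `eyJvZmZzZXQiOjUwfQ==` decodes to `{"offset":50}`. A Python sketch of the round trip (the encoding is an implementation detail and should not be relied on):

```python
import base64
import json

def decode_cursor(cursor):
    # Recover the pagination state carried inside the opaque cursor.
    return json.loads(base64.b64decode(cursor))

def encode_cursor(state):
    # Serialize pagination state back into an opaque cursor string.
    return base64.b64encode(
        json.dumps(state, separators=(",", ":")).encode()
    ).decode()
```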
Filtering
Filter prompts by criteria:
// Request with filters
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {
"filter": {
"category": "development",
"tags": ["python", "testing"]
}
},
"id": 8
}
Metadata
Extended prompt metadata:
{
"id": "code-review",
"name": "Code Review",
"description": "Reviews code for best practices",
"metadata": {
"author": "SwissArmyHammer Team",
"version": "1.0.0",
"category": "development",
"tags": ["code", "review", "quality"],
"lastModified": "2024-01-15T10:30:00Z"
}
}
Implementation Details
Server Lifecycle
- Initialization
// Server startup sequence
let server = MCPServer::new();
server.load_prompts()?;
server.start_file_watcher()?;
server.listen(transport)?;
- Request Handling
match request.method.as_str() {
"initialize" => handle_initialize(params),
"prompts/list" => handle_list_prompts(params),
"prompts/get" => handle_get_prompt(params),
"resources/list" => handle_list_resources(params),
"resources/read" => handle_read_resource(params),
_ => Err(MethodNotFound),
}
- Change Detection
// File watcher triggers notifications
watcher.on_change(|event| {
server.reload_prompts();
server.notify_clients("prompts/list_changed");
});
Transport Implementations
stdio Transport
Default for Claude Code integration:
// Read from stdin, write to stdout
let stdin = io::stdin();
let stdout = io::stdout();
loop {
let request = read_json_rpc(&mut stdin)?;
let response = server.handle_request(request)?;
write_json_rpc(&mut stdout, response)?;
}
HTTP Transport
For web integrations:
// HTTP endpoint handler
async fn handle_mcp(Json(request): Json<MCPRequest>) -> Json<MCPResponse> {
let response = server.handle_request(request).await;
Json(response)
}
WebSocket Transport
For real-time updates:
// WebSocket handler
async fn handle_websocket(ws: WebSocket, server: Arc<MCPServer>) {
let (tx, rx) = ws.split();
// Handle incoming messages
rx.for_each(|msg| async {
if let Ok(request) = parse_json_rpc(msg) {
let response = server.handle_request(request).await;
tx.send(serialize_json_rpc(response)).await;
}
}).await;
}
Security Considerations
Authentication
MCP doesn't specify authentication, but SwissArmyHammer supports:
// With API key
{
"jsonrpc": "2.0",
"method": "initialize",
"params": {
"authentication": {
"type": "bearer",
"token": "sk-..."
}
},
"id": 1
}
Rate Limiting
Prevent abuse:
// Rate limit configuration
rate_limiter: {
requests_per_minute: 100,
burst_size: 20,
per_method: {
"prompts/get": 50,
"resources/read": 30
}
}
Input Validation
All inputs are validated:
// Validate prompt arguments
fn validate_arguments(args: &HashMap<String, Value>) -> Result<()> {
// Check required fields
// Validate data types
// Sanitize inputs
// Check size limits
}
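The validation steps in the skeleton above can be sketched concretely. This is an assumed Python rendering for illustration (the real validator is Rust); the `spec` shape and function signature are hypothetical.

```python
def validate_arguments(args: dict, spec: dict, max_size: int = 10_000) -> list:
    """Collect validation errors: required fields, expected types, size limits."""
    errors = []
    for name, rules in spec.items():
        if name not in args:
            if rules.get("required"):
                errors.append(f"Missing required argument '{name}'")
            continue
        value = args[name]
        expected = rules.get("type", str)
        if not isinstance(value, expected):
            errors.append(f"Argument '{name}' must be {expected.__name__}")
        elif isinstance(value, str) and len(value) > max_size:
            errors.append(f"Argument '{name}' exceeds {max_size} characters")
    return errors

spec = {"code": {"required": True, "type": str}, "language": {"type": str}}
print(validate_arguments({}, spec))  # ["Missing required argument 'code'"]
```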
Performance Optimization
Caching
Responses are cached for efficiency:
// Cache configuration
cache: {
prompts_list: {
ttl: 300, // 5 minutes
max_size: 1000
},
rendered_prompts: {
ttl: 3600, // 1 hour
max_size: 10000
}
}
Streaming
Large responses support streaming:
// Streaming response
{
"jsonrpc": "2.0",
"result": {
"stream": true,
"chunks": [
{"index": 0, "data": "First chunk..."},
{"index": 1, "data": "Second chunk..."},
{"index": 2, "data": "Final chunk", "final": true}
]
},
"id": 9
}
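A client consuming such a stream reorders chunks by index and checks that the final chunk arrived before joining. A minimal sketch, assuming the chunk shape shown above:

```python
def assemble_stream(chunks: list) -> str:
    """Reorder chunks by index, verify completeness, and concatenate."""
    ordered = sorted(chunks, key=lambda c: c["index"])
    if not ordered or not ordered[-1].get("final"):
        raise ValueError("stream is incomplete")
    return "".join(c["data"] for c in ordered)

# Chunks may arrive out of order:
chunks = [
    {"index": 1, "data": "Second chunk..."},
    {"index": 0, "data": "First chunk..."},
    {"index": 2, "data": "Final chunk", "final": True},
]
print(assemble_stream(chunks))  # First chunk...Second chunk...Final chunk
```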
Testing MCP
Manual Testing
Test with curl:
# Test initialize
echo '{"jsonrpc":"2.0","method":"initialize","params":{},"id":1}' | \
swissarmyhammer serve --transport stdio
# Test prompts list
curl -X POST http://localhost:3333/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"prompts/list","params":{},"id":2}'
Automated Testing
#[test]
async fn test_mcp_protocol() {
let server = MCPServer::new_test();
// Test initialize
let response = server.handle_request(json!({
"jsonrpc": "2.0",
"method": "initialize",
"params": {},
"id": 1
})).await;
assert_eq!(response["result"]["serverInfo"]["name"], "swissarmyhammer");
}
Debugging
Enable Debug Logging
logging:
modules:
swissarmyhammer::mcp: debug
Request/Response Logging
// Log all MCP traffic
middleware: {
log_requests: true,
log_responses: true,
log_errors: true,
pretty_print: true
}
Protocol Inspector
# Inspect MCP traffic
swissarmyhammer mcp-inspector --port 3334
# Connect through inspector
export MCP_PROXY=http://localhost:3334
Best Practices
- Always validate inputs - Never trust client data
- Handle errors gracefully - Return proper error codes
- Implement timeouts - Prevent hanging requests
- Cache when possible - Reduce computation
- Log important events - Aid debugging
- Version your changes - Maintain compatibility
- Document extensions - Help client implementers
Next Steps
- Implement Claude Code Integration
- Review API Reference for details
- See Troubleshooting for common issues
- Check Examples for implementation patterns
Configuration
SwissArmyHammer offers flexible configuration options through configuration files, environment variables, and command-line arguments. This guide covers all configuration methods and settings.
Configuration File
Location
SwissArmyHammer looks for configuration files in this order:
1. ./swissarmyhammer.toml (current directory)
2. ~/.swissarmyhammer/config.toml (user directory)
3. /etc/swissarmyhammer/config.toml (system-wide)
Format
Configuration uses TOML format:
# ~/.swissarmyhammer/config.toml
# Server configuration
[server]
host = "localhost"
port = 8080
debug = false
timeout = 30000 # milliseconds
# Prompt directories
[prompts]
directories = [
"~/.swissarmyhammer/prompts",
"./.swissarmyhammer/prompts",
"/opt/company/prompts"
]
builtin = true
watch = true
# File watching configuration
[watch]
enabled = true
poll_interval = 1000 # milliseconds
max_depth = 5
ignore_patterns = [
"*.tmp",
"*.swp",
".git/*",
"__pycache__/*"
]
# Logging configuration
[log]
level = "info" # debug, info, warn, error
file = "~/.swissarmyhammer/logs/server.log"
rotate = true
max_size = "10MB"
max_age = 30 # days
# Cache configuration
[cache]
enabled = true
directory = "~/.swissarmyhammer/cache"
ttl = 3600 # seconds
max_size = "100MB"
# Template engine configuration
[template]
strict_variables = false
strict_filters = false
custom_filters_path = "~/.swissarmyhammer/filters"
# Security settings
[security]
allow_file_access = false
allow_network_access = false
sandbox_mode = true
allowed_domains = ["api.company.com", "github.com"]
# MCP specific settings
[mcp]
protocol_version = "1.0"
capabilities = ["prompts", "notifications"]
max_prompt_size = 1048576 # 1MB
compression = true
Environment Variables
All configuration options can be set via environment variables:
Naming Convention
- Prefix: SWISSARMYHAMMER_
- Nested values use underscores: SECTION_KEY
- Arrays use comma separation
Examples
# Server settings
export SWISSARMYHAMMER_SERVER_HOST=0.0.0.0
export SWISSARMYHAMMER_SERVER_PORT=9000
export SWISSARMYHAMMER_SERVER_DEBUG=true
# Prompt directories (comma-separated)
export SWISSARMYHAMMER_PROMPTS_DIRECTORIES="/opt/prompts,~/my-prompts"
export SWISSARMYHAMMER_PROMPTS_BUILTIN=false
# Logging
export SWISSARMYHAMMER_LOG_LEVEL=debug
export SWISSARMYHAMMER_LOG_FILE=/var/log/swissarmyhammer.log
# File watching
export SWISSARMYHAMMER_WATCH_ENABLED=true
export SWISSARMYHAMMER_WATCH_POLL_INTERVAL=2000
# Security
export SWISSARMYHAMMER_SECURITY_SANDBOX_MODE=true
export SWISSARMYHAMMER_SECURITY_ALLOWED_DOMAINS="api.example.com,cdn.example.com"
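The naming convention can be sketched as a parse step that maps variables onto nested config keys. This is an illustrative reimplementation of the convention described above, not the actual loader; the function name is an assumption.

```python
PREFIX = "SWISSARMYHAMMER_"

def env_overrides(environ: dict) -> dict:
    """Map SWISSARMYHAMMER_SECTION_KEY variables onto {section: {key: value}};
    comma-separated values become arrays."""
    config = {}
    for key, value in environ.items():
        if not key.startswith(PREFIX):
            continue
        # First underscore separates the section from the (possibly compound) key.
        section, _, setting = key[len(PREFIX):].lower().partition("_")
        if "," in value:
            value = value.split(",")
        config.setdefault(section, {})[setting] = value
    return config

env = {
    "SWISSARMYHAMMER_SERVER_PORT": "9000",
    "SWISSARMYHAMMER_PROMPTS_DIRECTORIES": "/opt/prompts,~/my-prompts",
    "PATH": "/usr/bin",  # ignored: no prefix
}
print(env_overrides(env))
```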
Precedence
Configuration precedence (highest to lowest):
- Command-line arguments
- Environment variables
- Configuration files
- Default values
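The precedence order above amounts to layering sources so that later ones win. A minimal sketch (shallow merge for brevity; real merging may be nested and per-key):

```python
def effective_config(defaults: dict, file_cfg: dict, env_cfg: dict, cli_cfg: dict) -> dict:
    """Later layers win: defaults < config file < environment < command line."""
    merged = {}
    for layer in (defaults, file_cfg, env_cfg, cli_cfg):
        merged.update(layer)
    return merged

print(effective_config(
    {"host": "localhost", "port": 8080},  # defaults
    {"port": 9000},                       # configuration file
    {},                                   # environment variables
    {"port": 7000},                       # command-line arguments
))  # {'host': 'localhost', 'port': 7000}
```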
Command-Line Options
Global Options
swissarmyhammer [GLOBAL_OPTIONS] <COMMAND> [COMMAND_OPTIONS]
Global Options:
--config <FILE> Use specific configuration file
--verbose Enable verbose output
--quiet Suppress non-error output
--no-color Disable colored output
--json Output in JSON format
--help Show help information
--version Show version information
Per-Command Configuration
Override configuration for specific commands:
# Override server settings
swissarmyhammer serve --host 0.0.0.0 --port 9000 --debug
# Override prompt directories
swissarmyhammer serve --prompts /custom/prompts --no-builtin
# Override logging
swissarmyhammer serve --log-level debug --log-file server.log
Configuration Sections
Server Configuration
Controls the MCP server behavior:
[server]
# Network binding
host = "localhost" # IP address or hostname
port = 8080 # Port number (0 for auto-assign)
# Performance
workers = 4 # Number of worker threads
max_connections = 100 # Maximum concurrent connections
timeout = 30000 # Request timeout in milliseconds
# Debugging
debug = false # Enable debug mode
trace = false # Enable trace logging
metrics = true # Enable metrics collection
Prompt Configuration
Manages prompt loading and directories:
[prompts]
# Directories to load prompts from
directories = [
"~/.swissarmyhammer/prompts", # User prompts
"./.swissarmyhammer/prompts", # Project prompts
"/opt/shared/prompts" # Shared prompts
]
# Loading behavior
builtin = true # Include built-in prompts
watch = true # Enable file watching
recursive = true # Scan directories recursively
follow_symlinks = false # Follow symbolic links
# Filtering
include_patterns = ["*.md", "*.markdown"]
exclude_patterns = ["*.draft.md", "test-*"]
# Validation
strict_validation = true # Fail on invalid prompts
required_fields = ["name", "description"]
max_file_size = "1MB" # Maximum prompt file size
File Watching
Configure file system monitoring:
[watch]
enabled = true # Enable/disable watching
strategy = "efficient" # efficient, aggressive, polling
# Polling strategy settings
poll_interval = 1000 # Milliseconds between polls
poll_timeout = 100 # Polling timeout
# Watch behavior
debounce = 500 # Milliseconds to wait for changes to settle
max_depth = 10 # Maximum directory depth
batch_events = true # Batch multiple changes
# Ignore patterns
ignore_patterns = [
"*.tmp",
"*.swp",
"*.bak",
".git/**",
".svn/**",
"__pycache__/**",
"node_modules/**"
]
# Performance
max_watches = 10000 # Maximum number of watches
event_buffer_size = 1000 # Event queue size
Logging
Configure logging behavior:
[log]
# Log level: trace, debug, info, warn, error
level = "info"
# Console output
console = true
console_format = "pretty" # pretty, json, compact
console_colors = true
# File logging
file = "~/.swissarmyhammer/logs/server.log"
file_format = "json"
rotate = true
max_size = "10MB"
max_files = 5
max_age = 30 # days
# Log filtering
include_modules = ["server", "prompts"]
exclude_modules = ["watcher"]
# Performance
buffer_size = 8192
async = true
Cache Configuration
Control caching behavior:
[cache]
enabled = true
directory = "~/.swissarmyhammer/cache"
# Cache strategy
strategy = "lru" # lru, lfu, ttl
max_entries = 1000
max_size = "100MB"
# Time-based settings
ttl = 3600 # Default TTL in seconds
refresh_ahead = 300 # Refresh cache 5 minutes before expiry
# Cache categories
[cache.prompts]
enabled = true
ttl = 7200
[cache.templates]
enabled = true
ttl = 3600
[cache.search]
enabled = false # Disable search result caching
Template Engine
Configure Liquid template processing:
[template]
# Parsing
strict_variables = false # Error on undefined variables
strict_filters = false # Error on undefined filters
error_mode = "warn" # warn, error, ignore
# Custom extensions
custom_filters_path = "~/.swissarmyhammer/filters"
custom_tags_path = "~/.swissarmyhammer/tags"
# Security
allow_includes = true
include_paths = ["~/.swissarmyhammer/includes"]
max_render_depth = 10
max_iterations = 1000
# Performance
cache_templates = true
compile_cache = "~/.swissarmyhammer/template_cache"
Security Settings
Control security features:
[security]
# Sandboxing
sandbox_mode = true # Enable security sandbox
allow_file_access = false # Allow template file access
allow_network_access = false # Allow network requests
allow_system_access = false # Allow system commands
# Network security
allowed_domains = [
"api.company.com",
"cdn.company.com",
"github.com"
]
blocked_domains = [
"malicious.site"
]
# File security
allowed_paths = [
"~/Documents/projects",
"/opt/shared/data"
]
blocked_paths = [
"/etc",
"/sys",
"~/.ssh"
]
# Content security
max_input_size = "10MB"
max_output_size = "50MB"
sanitize_html = true
Configuration Profiles
Using Profiles
Define multiple configuration profiles:
# Default configuration
[default]
server.host = "localhost"
server.port = 8080
log.level = "info"
# Development profile
[profiles.development]
server.debug = true
log.level = "debug"
cache.enabled = false
template.strict_variables = true
# Production profile
[profiles.production]
server.host = "0.0.0.0"
server.workers = 8
log.level = "warn"
security.sandbox_mode = true
# Testing profile
[profiles.test]
server.port = 0 # Auto-assign
log.file = "/tmp/test.log"
prompts.directories = ["./test/fixtures"]
Activating Profiles
# Via environment variable
export SWISSARMYHAMMER_PROFILE=production
swissarmyhammer serve
# Via command line
swissarmyhammer --profile development serve
# Multiple profiles (later overrides earlier)
swissarmyhammer --profile production --profile custom serve
Advanced Configuration
Dynamic Configuration
Load configuration from external sources:
[config]
# Load additional config from URL
remote_config = "https://config.company.com/swissarmyhammer"
remote_check_interval = 300 # seconds
# Load from environment-specific file
env_config = "/etc/swissarmyhammer/config.${ENV}.toml"
# Merge strategy
merge_strategy = "deep" # deep, shallow, replace
Hooks Configuration
Configure lifecycle hooks:
[hooks]
# Startup hooks
pre_start = [
"~/scripts/pre-start.sh",
"/opt/swissarmyhammer/hooks/validate.py"
]
post_start = [
"~/scripts/notify-start.sh"
]
# Shutdown hooks
pre_stop = [
"~/scripts/save-state.sh"
]
post_stop = [
"~/scripts/cleanup.sh"
]
# Prompt hooks
pre_load = "~/scripts/validate-prompt.sh"
post_load = "~/scripts/index-prompt.sh"
# Error hooks
on_error = "~/scripts/error-handler.sh"
# Hook configuration
[hooks.config]
timeout = 30 # seconds
fail_on_error = false # Continue if hook fails
environment = { CUSTOM_VAR = "value" }
Performance Tuning
Optimize for different scenarios:
[performance]
# Threading
thread_pool_size = 8
async_workers = 4
io_threads = 2
# Memory
max_memory = "2GB"
gc_interval = 300 # seconds
cache_pressure = 0.8 # Evict cache at 80% memory
# Network
connection_pool_size = 50
keep_alive = true
tcp_nodelay = true
socket_timeout = 30
# File I/O
read_buffer_size = 8192
write_buffer_size = 8192
use_mmap = true # Memory-mapped files
# Optimizations
lazy_loading = true
parallel_parsing = true
compress_cache = true
Monitoring Configuration
Enable monitoring and metrics:
[monitoring]
enabled = true
# Metrics collection
[monitoring.metrics]
enabled = true
interval = 60 # seconds
retention = 7 # days
# Metrics to collect
collect = [
"cpu_usage",
"memory_usage",
"prompt_count",
"request_rate",
"error_rate",
"cache_hit_rate"
]
# Export metrics
[monitoring.export]
format = "prometheus" # prometheus, json, statsd
endpoint = "http://metrics.company.com:9090"
labels = { service = "swissarmyhammer", environment = "production" }
# Health checks
[monitoring.health]
enabled = true
endpoint = "/health"
checks = [
"server_status",
"prompt_loading",
"file_watcher",
"cache_status"
]
Configuration Examples
Minimal Configuration
# Minimal working configuration
[server]
host = "localhost"
port = 8080
[prompts]
directories = ["~/.swissarmyhammer/prompts"]
Development Configuration
# Development-optimized configuration
[server]
host = "localhost"
port = 8080
debug = true
[prompts]
directories = [
"./.swissarmyhammer/prompts",
"~/.swissarmyhammer/prompts"
]
watch = true
[log]
level = "debug"
console = true
[cache]
enabled = false # Disable caching for development
[template]
strict_variables = true # Catch template errors early
Production Configuration
# Production-optimized configuration
[server]
host = "0.0.0.0"
port = 80
workers = 8
timeout = 60000
[prompts]
directories = [
"/opt/swissarmyhammer/prompts",
"/var/lib/swissarmyhammer/prompts"
]
builtin = true
watch = false # Disable for performance
[log]
level = "warn"
file = "/var/log/swissarmyhammer/server.log"
rotate = true
max_size = "100MB"
max_files = 10
[cache]
enabled = true
strategy = "lru"
max_size = "1GB"
[security]
sandbox_mode = true
allow_file_access = false
allow_network_access = false
[monitoring]
enabled = true
metrics.enabled = true
health.enabled = true
High-Performance Configuration
# Optimized for high load
[server]
workers = 16
max_connections = 1000
timeout = 120000
[performance]
thread_pool_size = 32
async_workers = 16
connection_pool_size = 200
lazy_loading = true
parallel_parsing = true
[cache]
enabled = true
strategy = "lfu"
max_size = "4GB"
refresh_ahead = 600
[watch]
enabled = false # Disable for performance
[log]
level = "error" # Minimize logging overhead
async = true
buffer_size = 65536
Configuration Validation
Validate Configuration
# Validate configuration file
swissarmyhammer config validate
# Validate specific file
swissarmyhammer config validate --file custom-config.toml
# Show effective configuration
swissarmyhammer config show
# Show configuration with sources
swissarmyhammer config show --sources
Configuration Schema
# Generate configuration schema
swissarmyhammer config schema > config-schema.json
# Validate against schema
swissarmyhammer config validate --schema config-schema.json
Best Practices
1. Use Profiles
Separate configurations for different environments:
[profiles.local]
server.debug = true
[profiles.staging]
server.host = "staging.company.com"
[profiles.production]
server.host = "0.0.0.0"
security.sandbox_mode = true
2. Secure Sensitive Data
Never store secrets in configuration files:
# Bad
api_key = "sk-1234567890abcdef"
# Good - use environment variables
api_key = "${API_KEY}"
3. Document Configuration
Add comments explaining non-obvious settings:
# Increase timeout for slow network environments
timeout = 60000 # 1 minute
# Disable caching during development to see changes immediately
[cache]
enabled = false # Set to true in production for better performance
4. Version Control
Track configuration changes:
# .gitignore
config.local.toml
config.production.toml
# Track example configuration
config.example.toml
5. Validate Changes
Always validate configuration changes:
# Before deploying
swissarmyhammer config validate --file new-config.toml
# Test with dry run
swissarmyhammer serve --config new-config.toml --dry-run
Troubleshooting
Configuration Not Loading
- Check file exists and is readable
- Validate TOML syntax
- Check environment variable names
- Review precedence order
Performance Issues
- Disable file watching in production
- Tune cache settings
- Adjust worker counts
- Enable performance monitoring
Security Warnings
- Review security settings
- Enable sandbox mode
- Restrict file and network access
- Update allowed domains
Next Steps
- See CLI Reference for command-line options
- Learn about File Watching configuration
- Explore Troubleshooting for common issues
- Read Security for security best practices
File Watching
SwissArmyHammer includes a powerful file watching system that automatically detects and reloads prompt changes without restarting the server.
How It Works
The file watcher monitors your prompt directories for changes and automatically:
- Detects new, modified, or deleted prompt files
- Validates changed files for syntax errors
- Reloads prompts into memory
- Notifies connected clients of updates
- Maintains state during the reload process
┌─────────────┐      ┌──────────────┐      ┌──────────────┐
│ File System │─────>│ File Watcher │─────>│ Prompt Cache │
└─────────────┘      └──────────────┘      └──────────────┘
                            │                     │
                            ▼                     ▼
                     ┌──────────────┐      ┌──────────────┐
                     │  Validator   │      │ MCP Clients  │
                     └──────────────┘      └──────────────┘
Configuration
Basic Settings
Configure file watching in your config.yaml:
watch:
# Enable/disable file watching
enabled: true
# Check interval in milliseconds
interval: 1000
# Debounce delay to batch rapid changes
debounce: 500
# Maximum files to process per cycle
batch_size: 100
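The debounce setting above can be pictured as trailing-edge debouncing: a reload fires only once a burst of change events has gone quiet for the configured window. A sketch over event timestamps (illustrative only; the semantics are assumed from the "batch rapid changes" description, and this is not the actual watcher code):

```python
def reload_times(event_times_ms: list, debounce_ms: int = 500) -> list:
    """A reload fires debounce_ms after the last event in each burst of changes."""
    reloads = []
    for i, t in enumerate(event_times_ms):
        is_last = i + 1 == len(event_times_ms)
        if is_last or event_times_ms[i + 1] - t >= debounce_ms:
            reloads.append(t + debounce_ms)
    return reloads

# Three rapid saves collapse into one reload; a later save triggers another.
print(reload_times([0, 100, 200, 2000]))  # [700, 2500]
```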
Advanced Options
watch:
# File patterns to watch
patterns:
- "**/*.md"
- "**/*.markdown"
- "**/prompts.yaml"
# Patterns to ignore
ignore:
- "**/node_modules/**"
- "**/.git/**"
- "**/target/**"
- "**/*.swp"
- "**/*~"
- "**/.DS_Store"
# Watch strategy
strategy: efficient # efficient, aggressive, polling
# Platform-specific settings
platform:
# macOS FSEvents
macos:
use_fsevents: true
latency: 0.1
# Linux inotify
linux:
use_inotify: true
max_watches: 8192
# Windows
windows:
use_polling: false
poll_interval: 1000
Watch Strategies
Efficient (Default)
Best for most use cases:
watch:
strategy: efficient
# Uses native OS file watching APIs
# Low CPU usage
# May have slight delay on some systems
Aggressive
For development with frequent changes:
watch:
strategy: aggressive
interval: 100 # Check every 100ms
debounce: 50 # Minimal debounce
# Higher CPU usage
# Near-instant updates
Polling
Fallback for compatibility:
watch:
strategy: polling
interval: 2000 # Poll every 2 seconds
# Works everywhere
# Higher CPU usage
# Slower updates
File Events
Supported Events
The watcher handles these file system events:
- Created - New prompt files added
- Modified - Existing files changed
- Deleted - Files removed
- Renamed - Files moved or renamed
- Metadata - Permission or timestamp changes
Event Processing
# Event processing configuration
watch:
events:
# Process creation events
create:
enabled: true
validate: true
# Process modification events
modify:
enabled: true
validate: true
reload_delay: 100 # ms
# Process deletion events
delete:
enabled: true
cleanup_cache: true
# Process rename events
rename:
enabled: true
track_moves: true
Validation
Automatic Validation
Files are validated before reload:
watch:
validation:
# Enable validation
enabled: true
# Validation rules
rules:
# Check YAML front matter
yaml_syntax: true
# Validate required fields
required_fields:
- name
- title
- description
# Check template syntax
template_syntax: true
# Maximum file size
max_size: 1MB
# What to do on validation failure
on_failure: warn # warn, ignore, stop
Validation Errors
When validation fails:
[WARN] Validation failed for prompts/invalid.md:
- Line 5: Invalid YAML syntax
- Missing required field: 'title'
- Template error: Unclosed tag '{% if'
File will not be loaded. Fix errors and save again.
Performance
Optimization Tips
- Exclude unnecessary paths:

watch:
  ignore:
    - "**/backup/**"
    - "**/archive/**"
    - "**/*.log"

- Tune intervals for your workflow:

# For active development
watch:
  interval: 500
  debounce: 250

# For production
watch:
  interval: 5000
  debounce: 2000

- Limit watch scope:

watch:
  # Only watch specific directories
  directories:
    - ./.swissarmyhammer/prompts
    - ~/.swissarmyhammer/prompts
  # Don't watch subdirectories
  recursive: false
Resource Usage
Monitor watcher resource usage:
# Check watcher status
swissarmyhammer doctor --watch
# Show watcher statistics
swissarmyhammer status --verbose
# Output:
File Watcher Status:
Strategy: efficient
Files watched: 156
Directories: 12
CPU usage: 0.1%
Memory: 2.4MB
Events processed: 1,234
Last reload: 2 minutes ago
Debugging
Enable Debug Logging
logging:
modules:
swissarmyhammer::watcher: debug
Common Issues
- Changes not detected:

# Check if watching is enabled
swissarmyhammer config get watch.enabled

# Test file watching
swissarmyhammer test --watch

- High CPU usage:

# Increase intervals
watch:
  interval: 2000
  debounce: 1000

# Use efficient strategy
watch:
  strategy: efficient

- Too many open files:

# Linux: Increase inotify watches
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# macOS: Usually not an issue with FSEvents
# Windows: Use polling fallback
Platform-Specific Notes
macOS
Uses FSEvents for efficient watching:
watch:
platform:
macos:
use_fsevents: true
# FSEvents latency in seconds
latency: 0.1
# Ignore events older than
ignore_older_than: 10 # seconds
Linux
Uses inotify with automatic limits:
watch:
platform:
linux:
use_inotify: true
# Will warn if approaching limits
warn_threshold: 0.8
# Fallback to polling if needed
auto_fallback: true
Windows
Uses ReadDirectoryChangesW:
watch:
platform:
windows:
# Buffer size for changes
buffer_size: 65536
# Watch subtree
watch_subtree: true
# Notification filters
filters:
- file_name
- last_write
- size
Integration
Client Notifications
Clients are notified of changes:
// MCP client receives notification
client.on('prompt.changed', (event) => {
console.log(`Prompt ${event.name} was ${event.type}`);
// Refresh UI, clear caches, etc.
});
Hooks
Run commands on file changes:
watch:
hooks:
# Before processing changes
pre_reload:
- echo "Reloading prompts..."
# After successful reload
post_reload:
- ./scripts/notify-team.sh
- ./scripts/update-index.sh
# On reload failure
on_error:
- ./scripts/alert-admin.sh
API Access
Query watcher status via API:
# Get watcher status
curl http://localhost:3333/api/watcher/status
# Get recent events
curl http://localhost:3333/api/watcher/events
# Trigger manual reload
curl -X POST http://localhost:3333/api/watcher/reload
Best Practices
Development
- Use aggressive watching for immediate feedback
- Enable validation to catch errors early
- Watch only active directories to reduce overhead
- Use debug logging to troubleshoot issues
Production
- Use efficient strategy for lower resource usage
- Increase intervals to reduce CPU load
- Disable watching if prompts rarely change
- Monitor resource usage regularly
Large Projects
- Exclude build directories and dependencies
- Use specific patterns instead of wildcards
- Consider splitting prompts across multiple directories
- Implement caching to reduce reload impact
Manual Control
CLI Commands
Control file watching manually:
# Pause file watching
swissarmyhammer watch pause
# Resume file watching
swissarmyhammer watch resume
# Force reload all prompts
swissarmyhammer watch reload
# Show watch status
swissarmyhammer watch status
Environment Variables
Override watch settings:
# Disable watching
export SWISSARMYHAMMER_WATCH_ENABLED=false
# Change interval
export SWISSARMYHAMMER_WATCH_INTERVAL=5000
# Force polling strategy
export SWISSARMYHAMMER_WATCH_STRATEGY=polling
Troubleshooting
Diagnostic Commands
# Run watcher diagnostics
swissarmyhammer doctor --watch
# Test file detection
echo "test" >> prompts/test.md
swissarmyhammer watch test
# Monitor events in real-time
swissarmyhammer watch monitor
Common Solutions
- Linux: Increase inotify limits
- macOS: Grant full disk access
- Windows: Run as administrator
- All: Check file permissions
- All: Verify ignore patterns
Next Steps
- Configure watching in Configuration
- Learn about Prompt Organization
- Understand Prompt Overrides
- Read Troubleshooting for more help
Prompt Overrides
SwissArmyHammer supports a hierarchical override system that allows you to customize prompts at different levels without modifying the original files.
Override Hierarchy
Prompts are loaded and merged in this order (later overrides earlier):
1. Built-in prompts (system)
      ↓
2. User prompts (~/.swissarmyhammer/prompts)
      ↓
3. Project prompts (./.swissarmyhammer/prompts)
      ↓
4. Runtime overrides (CLI/API)
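The merge order above can be sketched as layering name-to-prompt maps, with later levels shadowing earlier ones. An illustrative model, not the real loader:

```python
def resolve_prompts(builtin: dict, user: dict, project: dict, runtime: dict) -> dict:
    """Each level maps prompt name -> prompt; later levels shadow earlier ones."""
    resolved = {}
    for level in (builtin, user, project, runtime):
        resolved.update(level)
    return resolved

resolved = resolve_prompts(
    {"code-review": "built-in version", "debug": "built-in version"},
    {"code-review": "user version"},
    {"code-review": "project version"},
    {},
)
print(resolved["code-review"])  # project version
print(resolved["debug"])        # built-in version
```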
How Overrides Work
Complete Override
Replace an entire prompt by using the same name:
<!-- Built-in: /usr/share/swissarmyhammer/prompts/code-review.md -->
---
name: code-review
title: Code Review
description: Reviews code for issues
arguments:
- name: code
required: true
---
Please review this code.
<!-- User override: ~/.swissarmyhammer/prompts/code-review.md -->
---
name: code-review
title: Enhanced Code Review
description: Comprehensive code analysis with security focus
arguments:
- name: code
required: true
- name: security_check
required: false
default: true
---
Perform a detailed security-focused code review.
Partial Override
Override specific fields while inheriting others:
# ~/.swissarmyhammer/overrides/code-review.yaml
name: code-review
extends: true # Inherit from lower level
title: "Code Review (Company Standards)"
# Only override specific arguments
arguments:
merge: true # Merge with parent arguments
items:
- name: style_guide
default: "company-style-guide.md"
Override Methods
1. File-Based Overrides
Create a prompt file with the same name at a higher level:
# Original
/usr/share/swissarmyhammer/prompts/development/python-analyzer.md
# User override
~/.swissarmyhammer/prompts/development/python-analyzer.md
# Project override
./.swissarmyhammer/prompts/development/python-analyzer.md
2. Override Configuration
Use override files to modify prompts without duplicating:
# ~/.swissarmyhammer/overrides.yaml
overrides:
- name: code-review
# Override just the description
description: "Code review with company standards"
- name: api-generator
# Add new arguments
arguments:
append:
- name: auth_type
default: "oauth2"
- name: test-writer
# Modify template content
template:
prepend: |
# Company Test Standards
Follow these guidelines:
- Use pytest exclusively
- Include docstrings
append: |
## Additional Requirements
- Minimum 80% coverage
- Include integration tests
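The prepend/append behavior can be modeled as wrapping the original template text. This is an assumed rendering of the override semantics for illustration; the function name and exact whitespace handling are hypothetical.

```python
def apply_template_override(template: str, override: dict) -> str:
    """Wrap the original template with optional prepend/append blocks."""
    parts = []
    if "prepend" in override:
        parts.append(override["prepend"].rstrip("\n"))
    parts.append(template)
    if "append" in override:
        parts.append(override["append"].rstrip("\n"))
    return "\n\n".join(parts)

result = apply_template_override(
    "Write unit tests for the given code.",
    {"prepend": "# Company Test Standards", "append": "- Minimum 80% coverage"},
)
print(result)
```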
3. Runtime Overrides
Override prompts at runtime via CLI or API:
# Override prompt arguments
swissarmyhammer test code-review \
--override title="Security Review" \
--override description="Focus on security vulnerabilities"
# Override template content
swissarmyhammer test api-docs \
--template-override prepend="# CONFIDENTIAL\n\n" \
--template-override append="\n\n© 2024 Acme Corp"
Advanced Override Patterns
Inheritance Chain
Create a chain of inherited prompts:
# base-analyzer.yaml
name: base-analyzer
abstract: true # Can't be used directly
title: Base Code Analyzer
arguments:
- name: code
required: true
- name: language
required: false
# python-analyzer.yaml
name: python-analyzer
extends: base-analyzer
title: Python Code Analyzer
arguments:
merge: true
items:
- name: check_types
default: true
# security-python-analyzer.yaml
name: security-python-analyzer
extends: python-analyzer
title: Security-Focused Python Analyzer
template:
inherit: true
prepend: |
## Security Analysis
Focus on OWASP Top 10 vulnerabilities.
Conditional Overrides
Apply overrides based on conditions:
# overrides.yaml
conditional_overrides:
- condition:
environment: production
overrides:
- name: all
arguments:
- name: verbose
default: false
- condition:
user: qa-team
overrides:
- name: test-generator
template:
append: |
Include edge case testing.
- condition:
project_type: web
overrides:
- name: security-scan
arguments:
- name: check_xss
default: true
Template Merging
Control how templates are merged:
# Override with template merging
name: api-docs
extends: true
template_merge:
strategy: smart # smart, prepend, append, replace
sections:
- match: "## Authentication"
action: replace
content: |
## Authentication
Use OAuth 2.0 with PKCE flow.
- match: "## Error Handling"
action: append
content: |
### Company Error Codes
- 4001: Invalid API key
- 4002: Rate limit exceeded
Project-Specific Overrides
Directory Structure
Organize project overrides:
.swissarmyhammer/
├── prompts/              # Complete prompt overrides
│   └── code-review.md
├── overrides.yaml        # Partial overrides
├── templates/            # Template snippets
│   ├── header.md
│   └── footer.md
└── config.yaml           # Override configuration
Override Configuration
Configure override behavior:
# .swissarmyhammer/config.yaml
overrides:
# Enable/disable overrides
enabled: true
# Override precedence
precedence:
- runtime # Highest priority
- project
- user
- system # Lowest priority
# Merge strategies
merge:
arguments: deep # deep, shallow, replace
template: smart # smart, simple, replace
metadata: shallow # deep, shallow, replace
# Validation
validation:
strict: true
require_base: false
allow_new_fields: true
Use Cases
1. Company Standards
Enforce company-wide standards:
# ~/.swissarmyhammer/company-overrides.yaml
global_overrides:
all_prompts:
template:
prepend: |
# {{company}} Standards
This output follows {{company}} guidelines.
globals:
company: "Acme Corp"
support_email: "ai-support@acme.com"
2. Environment-Specific
Different behavior per environment:
# Development overrides
development:
overrides:
- name: code-review
arguments:
- name: verbose
default: true
- name: include_suggestions
default: true
# Production overrides
production:
overrides:
- name: code-review
arguments:
- name: verbose
default: false
- name: security_scan
default: true
3. Team Customization
Team-specific modifications:
# Frontend team overrides
team: frontend
overrides:
- pattern: "*-component"
template:
prepend: |
Use React 18+ features.
Follow Material-UI guidelines.
- name: test-writer
arguments:
- name: framework
default: "jest"
- name: include_snapshots
default: true
Override Resolution
Name Matching
How prompts are matched for override:
- Exact match: `code-review` matches `code-review`
- Pattern match: `*-review` matches `code-review`, `security-review`
- Category match: `category:development` matches all development prompts
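The three matching rules can be sketched with Python's standard `fnmatch` globbing. `override_applies` is a hypothetical helper, not the tool's real matcher:

```python
from fnmatch import fnmatch

def override_applies(rule: dict, prompt_name: str, prompt_category: str) -> bool:
    """Return True if an override rule matches a prompt (illustrative)."""
    if "name" in rule:      # exact match
        return rule["name"] == prompt_name
    if "pattern" in rule:   # glob-style pattern match
        return fnmatch(prompt_name, rule["pattern"])
    if "category" in rule:  # category match
        return rule["category"] == prompt_category
    return False

assert override_applies({"name": "code-review"}, "code-review", "development")
assert override_applies({"pattern": "*-review"}, "security-review", "security")
assert not override_applies({"pattern": "*-review"}, "commit", "vcs")
```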
Conflict Resolution
When multiple overrides apply:
```yaml
# Resolution rules
conflict_resolution:
  # Strategy: first, last, merge, error
  strategy: merge

  # Priority (higher wins)
  priorities:
    exact_match: 100
    pattern_match: 50
    category_match: 10
    global: 1
```
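Under these priorities, picking the winning override reduces to a lookup. A minimal sketch using the priority table above; `pick_override` is a hypothetical helper showing the "highest priority wins" case:

```python
# Priority table from the configuration above (higher wins)
PRIORITIES = {"exact_match": 100, "pattern_match": 50, "category_match": 10, "global": 1}

def pick_override(candidates):
    """candidates: list of (match_kind, override_dict) tuples.

    With strategy 'merge', lower-priority overrides would instead be applied
    first so higher-priority fields win; shown here for the simple case where
    a single highest-priority override is selected.
    """
    return max(candidates, key=lambda c: PRIORITIES[c[0]])[1]

winner = pick_override([
    ("global", {"verbose": False}),
    ("pattern_match", {"verbose": True}),
    ("exact_match", {"verbose": True, "strict": True}),
])
```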
Debugging Overrides
See what overrides are applied:
```bash
# Show override chain for a prompt
swissarmyhammer debug code-review --show-overrides

# Output:
# Override chain for 'code-review':
#   1. System:  /usr/share/swissarmyhammer/prompts/code-review.md
#   2. User:    ~/.swissarmyhammer/prompts/code-review.md (extends)
#   3. Project: ./.swissarmyhammer/overrides.yaml (partial)
#   4. Runtime: --override title="Custom Review"

# Test with override preview
swissarmyhammer test code-review --preview-overrides
```
Best Practices
1. Minimal Overrides
Override only what needs to change:
```yaml
# Good: Override specific fields
name: code-review
extends: true
description: "Code review with security focus"
```

```yaml
# Avoid: Duplicating entire prompt
name: code-review
title: Code Review  # Unchanged
description: "Code review with security focus"  # Only this changed
arguments: [...]  # Duplicated
template: |  # Duplicated
  ...
```
2. Document Overrides
Always document why overrides exist:
```yaml
# overrides.yaml
overrides:
  - name: api-generator
    # OVERRIDE REASON: Company requires OAuth2 for all APIs
    # JIRA: SECURITY-123
    # Date: 2024-01-15
    arguments:
      - name: auth_type
        default: "oauth2"
        locked: true  # Prevent further overrides
```
3. Version Control
Track override changes:
```gitignore
# .swissarmyhammer/.gitignore
# Don't ignore override files
!overrides.yaml
!prompts/
```

```bash
# Track override history
git add .swissarmyhammer/overrides.yaml
git commit -m "Add security requirements to code-review prompt"
```
4. Testing Overrides
Test overrides thoroughly:
```bash
# Test override application
swissarmyhammer test code-review --test-overrides

# Compare with and without overrides
swissarmyhammer test code-review --no-overrides > without.txt
swissarmyhammer test code-review > with.txt
diff without.txt with.txt
```
Security Considerations
Lock Overrides
Prevent certain overrides:
```yaml
# System prompt with locked fields
---
name: security-scan
locked_fields:
  - title
  - core_checks
no_override: false  # Set to true to prevent the prompt from being overridden at all
---
```
Validate Overrides
Ensure overrides meet requirements:
```yaml
# Override validation rules
validation:
  rules:
    - field: arguments
      required_items:
        - name: code
        - name: language
    - field: template
      must_contain:
        - "SECURITY WARNING"
        - "Confidential"
    - field: description
      min_length: 50
      pattern: ".*security.*"
```
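Conceptually, each rule inspects one field of the resolved prompt. A minimal sketch for the string-valued rules above (`must_contain`, `min_length`, `pattern`); `validate_prompt` is a hypothetical helper and ignores list-valued rules such as `required_items`:

```python
import re

def validate_prompt(prompt: dict, rules: list) -> list:
    """Check string fields of a resolved prompt against rules (illustrative).

    Returns a list of error messages; an empty list means the prompt passes.
    """
    errors = []
    for rule in rules:
        value = prompt.get(rule["field"], "")
        for needle in rule.get("must_contain", []):
            if needle not in value:
                errors.append(f"{rule['field']} must contain {needle!r}")
        if len(value) < rule.get("min_length", 0):
            errors.append(f"{rule['field']} is shorter than {rule['min_length']} chars")
        if "pattern" in rule and not re.search(rule["pattern"], value):
            errors.append(f"{rule['field']} does not match {rule['pattern']!r}")
    return errors
```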
Troubleshooting
Common Issues
- Override not applying:

  ```bash
  # Check override precedence
  swissarmyhammer config get overrides.precedence
  # Verify file locations
  swissarmyhammer debug --show-paths
  ```

- Merge conflicts:

  ```bash
  # Show merge details
  swissarmyhammer debug code-review --trace-merge
  ```

- Validation errors:

  ```bash
  # Validate overrides
  swissarmyhammer validate --overrides
  ```
Next Steps
- Learn about Prompt Organization
- Understand Configuration options
- Read about Testing override scenarios
- See Examples of override patterns
Built-in Prompts
SwissArmyHammer includes a comprehensive set of built-in prompts designed to assist with various development tasks. These prompts use Liquid templating for dynamic, customizable assistance and are organized by category for easy discovery.
Overview
All built-in prompts:
- Support customizable arguments with sensible defaults
- Use Liquid syntax for variable substitution and control flow
- Are organized into logical categories for easy discovery
- Follow a standardized YAML front matter format
Categories
System Management
are_issues_complete
Check if the plan is complete.
Arguments: None
Example:
swissarmyhammer prompt test are_issues_complete
are_reviews_done
Check if all the code review items are complete.
Arguments: None
Example:
swissarmyhammer prompt test are_reviews_done
are_tests_passing
Check if all tests are passing.
Arguments: None
Example:
swissarmyhammer prompt test are_tests_passing
Issue Management
branch (issue_branch)
Create an issue work branch for the next issue to work on.
Arguments: None
Example:
swissarmyhammer prompt test branch
issue_complete
Mark an issue as complete
Arguments: None
Example:
swissarmyhammer prompt test issue_complete
Code Development
code/issue (do_issue)
Code up an issue
Arguments: None
Example:
swissarmyhammer prompt test code/issue
code/review
Do Code Review
Arguments: None
Example:
swissarmyhammer prompt test code/review
Version Control
commit
Commit your work to git.
Arguments: None
Example:
swissarmyhammer prompt test commit
merge
Merge your work into the main branch.
Arguments: None
Example:
swissarmyhammer prompt test merge
Testing & Quality
coverage
Improve coverage.
Arguments: None
Example:
swissarmyhammer prompt test coverage
test
Iterate to correct test failures in the codebase.
Arguments: None
Example:
swissarmyhammer prompt test test
Debugging
debug/error
Analyze error messages and provide debugging guidance with potential solutions.
Arguments:
- `error_message` (required) - The error message or stack trace to analyze
- `language` (default: "auto-detect") - The programming language
- `context` (optional) - Additional context about when the error occurs
Example:
swissarmyhammer prompt test debug/error --arg error_message="TypeError: cannot read property 'name' of undefined" --arg language="javascript"
debug/logs
Analyze log files to identify issues and patterns.
Arguments:
- `log_content` (required) - The log content to analyze
- `issue_description` (default: "general analysis") - Description of the issue you're investigating
- `time_range` (default: "all") - Specific time range to focus on
- `log_format` (default: "auto-detect") - Log format (json, plaintext, syslog, etc.)
Example:
swissarmyhammer prompt test debug/logs --arg log_content="$(cat application.log)" --arg issue_description="API timeout errors"
Documentation
document
Create documentation for the project
Arguments: None
Example:
swissarmyhammer prompt test document
docs/comments
Add comprehensive comments and documentation to code.
Arguments:
- `code` (required) - The code to document
- `comment_style` (default: "auto-detect") - Comment style (inline, block, jsdoc, docstring, rustdoc)
- `detail_level` (default: "standard") - Level of detail (minimal, standard, comprehensive)
- `audience` (default: "developers") - Target audience for the comments
Example:
swissarmyhammer prompt test docs/comments --arg code="$(cat utils.js)" --arg comment_style="jsdoc" --arg detail_level="comprehensive"
docs/readme
Create comprehensive README documentation for a project.
Arguments:
project_name
(required) - Name of the projectproject_description
(required) - Brief description of what the project doeslanguage
(default: βauto-detectβ) - Primary programming languagefeatures
(default: ββ) - Key features of the project (comma-separated)target_audience
(default: βdevelopersβ) - Who this project is for
Example:
swissarmyhammer prompt test docs/readme --arg project_name="MyLib" --arg project_description="A library for awesome things" --arg features="fast,reliable,easy"
Planning
plan
Generate a step by step development plan from a specification.
Arguments: None
Example:
swissarmyhammer prompt test plan
Code Review
review/accessibility
Review code for accessibility compliance and best practices.
Arguments:
- `code` (required) - The UI/frontend code to review
- `wcag_level` (default: "AA") - WCAG compliance level target (A, AA, AAA)
- `component_type` (default: "general") - Type of component (form, navigation, content, interactive)
- `target_users` (default: "all users") - Specific user needs to consider
Example:
swissarmyhammer prompt test review/accessibility --arg code="$(cat form.html)" --arg component_type="form" --arg wcag_level="AA"
review/branch
Review code
Arguments: None
Example:
swissarmyhammer prompt test review/branch
review/code
Review code for quality, bugs, and improvements.
Arguments: None
Example:
swissarmyhammer prompt test review/code
review/documentation
Review documentation
Arguments: None
Example:
swissarmyhammer prompt test review/documentation
review/patterns
Perform a comprehensive review of the code to improve pattern use.
Arguments: None
Example:
swissarmyhammer prompt test review/patterns
review/security
Perform a comprehensive security review of code to identify vulnerabilities.
Arguments:
- `code` (required) - The code to review for security issues
- `context` (default: "general purpose code") - Context about the code
- `language` (default: "auto-detect") - Programming language
- `severity_threshold` (default: "low") - Minimum severity to report (critical, high, medium, low)
Example:
swissarmyhammer prompt test review/security --arg code="$(cat login.php)" --arg context="handles user authentication"
Prompt Management
prompts/create
Help create effective prompts for swissarmyhammer.
Arguments:
- `purpose` (required) - What the prompt should accomplish
- `category` (default: "general") - Category for the prompt
- `inputs_needed` (default: "") - What information the prompt needs from users
- `complexity` (default: "moderate") - Complexity level (simple, moderate, advanced)
Example:
swissarmyhammer prompt test prompts/create --arg purpose="Generate database migrations" --arg category="database"
prompts/improve
Analyze and enhance existing prompts for better effectiveness.
Arguments:
- `prompt_content` (required) - The current prompt content (including YAML front matter)
- `improvement_goals` (default: "overall enhancement") - What aspects to improve
- `user_feedback` (default: "") - Any feedback or issues users have reported
Example:
swissarmyhammer prompt test prompts/improve --arg prompt_content="$(cat my-prompt.md)" --arg improvement_goals="clarity,flexibility"
General Purpose
help
A prompt for providing helpful assistance and guidance to users.
Arguments:
- `topic` (default: "general assistance") - The topic to get help about
- `detail_level` (default: "normal") - How detailed the help should be (basic, normal, detailed)
Example:
swissarmyhammer prompt test help --arg topic="git workflows" --arg detail_level="detailed"
example
An example prompt for testing.
Arguments:
- `topic` (default: "general topic") - The topic to ask about
Example:
swissarmyhammer prompt test example --arg topic="testing prompts"
say-hello
A simple greeting prompt that can be customized with name and language.
Arguments:
- `name` (default: "friend") - Name to greet
- `language` (default: "english") - Language for the greeting
Example:
swissarmyhammer prompt test say-hello --arg name="Alice" --arg language="spanish"
Template Partials
These `.md` files are partial templates used by other prompts for consistency:
- `code.md` - Partial template for reuse in other prompts
- `coding_standards.md` - Coding standards template
- `documentation.md` - Documentation template
- `empty.md` - Empty template
- `principals.md` - Principals template
- `review_format.md` - Review format template
- `todo.md` - Todo template
Usage Patterns
Basic Usage
```bash
# Use a prompt with default arguments
swissarmyhammer prompt test review/code

# Specify custom arguments
swissarmyhammer prompt test debug/error --arg error_message="NullPointerException" --arg language="java"
```
Piping Content
```bash
# Use command substitution to pass file content
swissarmyhammer prompt test docs/comments --arg code="$(cat utils.py)" --arg comment_style="docstring"

# Pipe log content for analysis
cat error.log | xargs -I {} swissarmyhammer prompt test debug/logs --arg log_content="{}" --arg issue_description="memory leak"
```
Non-Interactive Mode
```bash
# Run prompts non-interactively with all arguments specified
swissarmyhammer prompt test help --arg topic="git" --arg detail_level="detailed"
```
Debugging Prompts
```bash
# Use debug mode to see template processing
swissarmyhammer prompt test debug/error --debug --arg error_message="Sample error"
```
Best Practices
- Choose the Right Prompt - Select prompts that match your specific task
- Provide Context - Use optional arguments to give more context when available
- Combine Prompts - Use multiple prompts in sequence for comprehensive workflows
- Test Arguments - Use the test command to verify prompt behavior before using in workflows
- Review Output - Always review and validate generated content before using it
Creating Custom Prompts
If the built-in prompts don't meet your needs:
- Use the `prompts/create` prompt to generate a template
- Save it in your `~/.swissarmyhammer/prompts/` directory
- Follow the YAML front matter format for consistency
- Test with various inputs to ensure reliability
For more information on creating custom prompts, see Creating Prompts.
Examples
This page provides real-world examples of using SwissArmyHammer for various development tasks. Each example demonstrates practical usage patterns and best practices for prompt engineering.
Overview
SwissArmyHammer excels at organizing and managing prompts for:
- Development workflows - Code review, testing, debugging
- Documentation tasks - API docs, user guides, technical writing
- Security analysis - Code audits, vulnerability assessment
- Content creation - Blog posts, marketing copy, educational materials
- Data processing - Analysis, transformation, reporting
New Advanced Prompts
AI Code Assistant
Intelligent code assistance with context awareness:
---
title: AI Code Assistant
description: Intelligent code assistance with context awareness
category: development
tags: ["coding", "assistance", "refactoring", "optimization"]
arguments:
  - name: code
    description: The code to analyze and improve
    required: true
  - name: language
    description: Programming language
    required: true
  - name: task
    description: Specific task to perform
    required: true
    default: "improve"
  - name: context
    description: Additional context about the code
    required: false
---
# AI Code Assistant
You are an expert {{ language }} developer. Please help me with the following task: **{{ task }}**
## Code to analyze:
```{{ language }}
{{ code }}
```

{% if context %}
Additional context:
{{ context }}
{% endif %}
Instructions:
{% if task == "improve" %}
Please improve this code by:
- Identifying potential issues or inefficiencies
- Suggesting optimizations
- Ensuring best practices are followed
- Improving readability and maintainability
{% elsif task == "refactor" %}
Please refactor this code to:
- Improve structure and organization
- Extract reusable components
- Reduce complexity
- Follow {{ language }} conventions
{% elsif task == "optimize" %}
Please optimize this code for:
- Performance improvements
- Memory efficiency
- Algorithmic complexity
- Resource usage
{% elsif task == "debug" %}
Please help debug this code by:
- Identifying potential bugs
- Analyzing logic errors
- Suggesting fixes
- Explaining the root cause
{% elsif task == "document" %}
Please document this code by:
- Adding clear docstrings/comments
- Explaining complex logic
- Providing usage examples
- Describing parameters and return values
{% else %}
Please analyze this code and provide assistance with: {{ task }}
{% endif %}
Response format:
- Analysis: What you found in the code
- Recommendations: Specific improvements to make
- Updated Code: The improved version
- Explanation: Why these changes improve the code
### Security Audit
Comprehensive security analysis following OWASP guidelines:
---
title: Security Audit
description: Comprehensive security analysis and recommendations
category: security
tags: ["security", "audit", "vulnerability", "analysis"]
arguments:
  - name: code
    description: Code to audit for security issues
    required: true
  - name: language
    description: Programming language
    required: true
  - name: context
    description: Application context and environment
    required: false
  - name: compliance
    description: Compliance standards to check against
    required: false
    default: "OWASP"
  - name: severity_level
    description: Minimum severity level to report
    required: false
    default: "medium"
---
# Security Audit
You are a cybersecurity expert conducting a comprehensive security audit of {{ language }} code.
## Code to Audit
```{{ language }}
{{ code }}
```

{% if context %}
Application Context

{{ context }}
{% endif %}
Audit Scope
Security Standards
{% if compliance == "OWASP" %}
Evaluate against OWASP Top 10 security risks:
- A01:2021 - Broken Access Control
- A02:2021 - Cryptographic Failures
- A03:2021 - Injection
- A04:2021 - Insecure Design
- A05:2021 - Security Misconfiguration
- A06:2021 - Vulnerable and Outdated Components
- A07:2021 - Identification and Authentication Failures
- A08:2021 - Software and Data Integrity Failures
- A09:2021 - Security Logging and Monitoring Failures
- A10:2021 - Server-Side Request Forgery (SSRF)
{% else %}
Evaluate against {{ compliance }} security standards and requirements.
{% endif %}
{{ language | capitalize }}-Specific Security Concerns

{% if language == "javascript" or language == "typescript" %}
- XSS Prevention: Input sanitization and output encoding
- CSRF Protection: Token validation and SameSite cookies
- Prototype Pollution: Object property manipulation
- Dependency Vulnerabilities: Third-party package security
- Client-Side Security: Browser-specific vulnerabilities
{% elsif language == "python" %}
- SQL Injection: Query parameterization and ORM usage
- Code Injection: eval() and exec() usage
- Pickle Vulnerabilities: Insecure deserialization
- Path Traversal: File system access controls
- Dependency Management: Package security
{% elsif language == "java" %}
- Deserialization Vulnerabilities: Object deserialization
- XML External Entity (XXE): XML parser configuration
- Path Traversal: File system access
- SQL Injection: PreparedStatement usage
- Reflection Attacks: Dynamic code execution
{% elsif language == "rust" %}
- Memory Safety: While Rust prevents many issues, check for unsafe blocks
- Integer Overflow: Arithmetic operations
- Dependency Security: Cargo.toml vulnerabilities
- Error Handling: Information disclosure
{% elsif language == "go" %}
- SQL Injection: Query parameterization
- Command Injection: exec.Command usage
- Path Traversal: Filepath handling
- Goroutine Security: Concurrent access patterns
{% else %}
- Input Validation: Data sanitization and validation
- Authentication: Access control mechanisms
- Cryptography: Secure implementations
- Error Handling: Information disclosure
{% endif %}
Audit Report Format
Executive Summary
- Overall security posture
- Critical findings count
- Risk level assessment
Detailed Findings
For each vulnerability found:
Finding #N: [Vulnerability Name]
- Severity: {% if severity_level == "critical" %}CRITICAL{% elsif severity_level == "high" %}HIGH{% elsif severity_level == "medium" %}MEDIUM{% else %}LOW{% endif %} | HIGH | MEDIUM | LOW
- Category: [OWASP Category or Security Domain]
- Location: [File/Function/Line number]
- Description: Detailed explanation of the vulnerability
- Impact: Potential consequences if exploited
- Evidence: Code snippets demonstrating the issue
- Remediation: Specific steps to fix the vulnerability
- Resources: Links to relevant security guidelines
Recommendations
- Immediate Actions: Critical issues requiring immediate attention
- Short-term Improvements: High-priority security enhancements
- Long-term Strategy: Ongoing security practices and policies
- Security Tools: Recommended tools for ongoing monitoring
Secure Code Examples
Provide corrected code examples that demonstrate:
- Proper input validation
- Secure authentication/authorization
- Safe data handling
- Error handling best practices
Compliance Check
{% if compliance %} Verify compliance with {{ compliance }} requirements and note any gaps. {% endif %}
Security Best Practices
Include relevant security best practices for {{ language }} development:
- Secure coding guidelines
- Regular security testing
- Dependency management
- Security monitoring
Report only findings at {{ severity_level }} severity level and above.
### Technical Writer
Professional technical documentation generation:
```markdown
---
title: Technical Writer
description: Generate technical documentation with proper structure
category: documentation
tags: ["documentation", "technical-writing", "api", "guides"]
arguments:
  - name: topic
    description: What to document
    required: true
  - name: audience
    description: Target audience level
    required: true
    default: "developers"
  - name: format
    description: Documentation format
    required: false
    default: "markdown"
  - name: sections
    description: Specific sections to include
    required: false
  - name: examples
    description: Include code examples
    required: false
    default: "true"
---
# Technical Writer
You are a skilled technical writer specializing in {{ format }} documentation for {{ audience }}.
## Task
Create comprehensive documentation for: **{{ topic }}**
## Target Audience
{{ audience | capitalize }} - adjust complexity and terminology accordingly.
## Documentation Requirements
### Structure
{% if format == "markdown" %}
Use clear Markdown formatting with:
- Proper heading hierarchy (# ## ###)
- Code blocks with syntax highlighting
- Tables for structured data
- Lists for step-by-step instructions
{% elsif format == "rst" %}
Use reStructuredText formatting with:
- Proper heading hierarchy (===, ---, ...)
- Code blocks with language specification
- Tables and lists for organization
- Cross-references where appropriate
{% else %}
Use {{ format }} best practices for formatting and structure.
{% endif %}
### Content Guidelines
1. **Clear and Concise**: Use simple, direct language
2. **Logical Flow**: Organize information in a logical sequence
3. **Actionable**: Provide specific, actionable instructions
4. **Complete**: Cover all necessary information
5. **Accessible**: Make content accessible to {{ audience }}
{% if sections %}
### Required Sections
Include these specific sections:
{{ sections }}
{% else %}
### Standard Sections
Include these standard sections:
- **Overview**: Brief introduction and purpose
- **Prerequisites**: What users need before starting
- **Getting Started**: Basic setup and first steps
- **Usage**: Detailed usage instructions
- **Examples**: Practical examples and use cases
- **Troubleshooting**: Common issues and solutions
- **Reference**: Detailed API or configuration reference
{% endif %}
{% if examples == "true" %}
### Code Examples
Include practical code examples that:
- Demonstrate real-world usage
- Are complete and runnable
- Include expected outputs
- Cover common use cases
- Follow best practices
{% endif %}
## Quality Standards
- **Accuracy**: Ensure all information is correct and up-to-date
- **Completeness**: Cover all aspects users need to know
- **Clarity**: Use clear, unambiguous language
- **Consistency**: Maintain consistent style and terminology
- **Usability**: Make documentation easy to navigate and use
## Output Format
Provide well-structured {{ format }} documentation that {{ audience }} can immediately use and understand.
```
Basic Prompt Usage
Simple Code Review
```bash
# Review a Python file
swissarmyhammer test review/code --file_path "src/main.py"

# Review with specific focus
swissarmyhammer test review/code --file_path "api/auth.py" --context "focus on security and error handling"
```
Generate Unit Tests
```bash
# Generate tests for a function
swissarmyhammer test test/unit --code "$(cat calculator.py)" --framework "pytest"

# Generate tests with high coverage target
swissarmyhammer test test/unit --code "$(cat utils.js)" --framework "jest" --coverage_target "95"
```
Debug an Error
```bash
# Analyze an error message
swissarmyhammer test debug/error \
  --error_message "TypeError: Cannot read property 'name' of undefined" \
  --language "javascript" \
  --context "Happens when user submits form"
```
Creating Custom Prompts
Basic Prompt Structure
Create `~/.swissarmyhammer/prompts/my-prompt.md`:
---
name: git-commit-message
title: Git Commit Message Generator
description: Generate conventional commit messages from changes
arguments:
  - name: changes
    description: Description of changes made
    required: true
  - name: type
    description: Type of change (feat, fix, docs, etc.)
    required: false
    default: feat
  - name: scope
    description: Scope of the change
    required: false
    default: ""
---
# Git Commit Message
Based on the changes: {{changes}}
Generate a conventional commit message:
Type: {{type}}
{% if scope %}Scope: {{scope}}{% endif %}
Format: `{{type}}{% if scope %}({{scope}}){% endif %}: <subject>`
Subject should be:
- 50 characters or less
- Present tense
- No period at the end
- Clear and descriptive
Use it:

```bash
swissarmyhammer test git-commit-message \
  --changes "Added user authentication with OAuth2" \
  --type "feat" \
  --scope "auth"
```
Advanced Template with Conditionals
Create `~/.swissarmyhammer/prompts/database-query.md`:
---
name: database-query-optimizer
title: Database Query Optimizer
description: Optimize SQL queries for better performance
arguments:
  - name: query
    description: The SQL query to optimize
    required: true
  - name: database
    description: Database type (postgres, mysql, sqlite)
    required: false
    default: postgres
  - name: table_sizes
    description: Approximate table sizes (small, medium, large)
    required: false
    default: medium
  - name: indexes
    description: Available indexes (comma-separated)
    required: false
    default: ""
---
# SQL Query Optimization
## Original Query
```sql
{{query}}
```

Database: {{database | capitalize}}
{% if database == "postgres" %}
PostgreSQL Specific Optimizations
- Consider using EXPLAIN ANALYZE
- Check for missing indexes on JOIN columns
- Use CTEs for complex queries
- Consider partial indexes for WHERE conditions
{% elsif database == "mysql" %}
MySQL Specific Optimizations
- Use EXPLAIN to check execution plan
- Consider covering indexes
- Optimize GROUP BY queries
- Check buffer pool size
{% else %}
SQLite Specific Optimizations
- Use EXPLAIN QUERY PLAN
- Consider table order in JOINs
- Minimize use of LIKE with wildcards
{% endif %}

Table Size Considerations
{% case table_sizes %}
{% when "small" %}
- Full table scans might be acceptable
- Focus on query simplicity
{% when "large" %}
- Indexes are critical
- Consider partitioning
- Avoid SELECT *
{% else %}
- Balance between indexes and write performance
- Monitor query execution time
{% endcase %}

{% if indexes %}
Available Indexes
{% assign index_list = indexes | split: "," %}
{% for index in index_list %}
- {{ index | strip }}
{% endfor %}
{% endif %}
Provide:
- Optimized query
- Explanation of changes
- Expected performance improvement
- Additional index recommendations
### Using Arrays and Loops
Create `~/.swissarmyhammer/prompts/api-client.md`:
```markdown
---
name: api-client-generator
title: API Client Generator
description: Generate API client code from endpoint specifications
arguments:
  - name: endpoints
    description: Comma-separated list of endpoints (method:path)
    required: true
  - name: base_url
    description: Base URL for the API
    required: true
  - name: language
    description: Target language for the client
    required: false
    default: javascript
  - name: auth_type
    description: Authentication type (none, bearer, basic, apikey)
    required: false
    default: none
---
# API Client Generator
Generate a {{language}} API client for:
- Base URL: {{base_url}}
- Authentication: {{auth_type}}
## Endpoints
{% assign endpoint_list = endpoints | split: "," %}
{% for endpoint in endpoint_list %}
{% assign parts = endpoint | split: ":" %}
{% assign method = parts[0] | strip | upcase %}
{% assign path = parts[1] | strip %}
- {{method}} {{path}}
{% endfor %}
{% if language == "javascript" %}
Generate a modern JavaScript client using:
- Fetch API for requests
- Async/await syntax
- Proper error handling
- TypeScript interfaces if applicable
{% elsif language == "python" %}
Generate a Python client using:
- requests library
- Type hints
- Proper exception handling
- Docstrings for all methods
{% endif %}
{% if auth_type != "none" %}
Include authentication handling for {{auth_type}}:
{% case auth_type %}
{% when "bearer" %}
- Accept token in constructor
- Add Authorization: Bearer header
{% when "basic" %}
- Accept username/password
- Encode credentials properly
{% when "apikey" %}
- Accept API key
- Add to headers or query params as needed
{% endcase %}
{% endif %}
Include:
1. Complete client class
2. Error handling
3. Usage examples
4. Any necessary types/interfaces
```
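The Liquid `split`, `strip`, and `upcase` pipeline in this template parses entries like `get:/users` into a method and a path. The same parsing can be sketched in Python; `parse_endpoints` is an illustrative helper, not part of SwissArmyHammer:

```python
def parse_endpoints(spec: str) -> list[tuple[str, str]]:
    """Parse a comma-separated 'method:path' list, mirroring the Liquid template.

    Each entry is split on the first ':', trimmed, and the method upcased.
    """
    endpoints = []
    for entry in spec.split(","):
        method, _, path = entry.strip().partition(":")
        endpoints.append((method.strip().upper(), path.strip()))
    return endpoints

# Example: the template's `endpoints` argument
print(parse_endpoints("get:/users, post:/users, get:/users/{id}"))
```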
Complex Workflows
Multi-Step Code Analysis
```bash
#!/bin/bash
# analyze-codebase.sh

# Step 1: Get overview of the codebase
echo "=== Codebase Overview ==="
swissarmyhammer test help --topic "codebase structure" --detail_level "detailed" > analysis/overview.md

# Step 2: Review critical files
echo "=== Security Review ==="
for file in auth.py payment.py user.py; do
  echo "Reviewing $file..."
  swissarmyhammer test review/security \
    --code "$(cat src/$file)" \
    --context "handles sensitive data" \
    --severity_threshold "medium" > "analysis/security-$file.md"
done

# Step 3: Generate tests for uncovered code
echo "=== Test Generation ==="
swissarmyhammer test test/unit \
  --code "$(cat src/utils.py)" \
  --framework "pytest" \
  --style "BDD" \
  --coverage_target "90" > tests/test_utils_generated.py

# Step 4: Create documentation
echo "=== Documentation ==="
swissarmyhammer test docs/api \
  --code "$(cat src/api.py)" \
  --api_type "REST" \
  --format "openapi" > docs/api-spec.yaml

echo "Analysis complete! Check the analysis/ directory for results."
```
Automated PR Review
```bash
#!/bin/bash
# pr-review.sh

# Get changed files
CHANGED_FILES=$(git diff --name-only main...HEAD)

echo "# Pull Request Review" > pr-review.md
echo "" >> pr-review.md

for file in $CHANGED_FILES; do
  if [[ $file == *.py ]] || [[ $file == *.js ]] || [[ $file == *.ts ]]; then
    echo "## Review: $file" >> pr-review.md

    # Dynamic code review
    swissarmyhammer test review/code-dynamic \
      --file_path "$file" \
      --language "${file##*.}" \
      --focus_areas "bugs,security,performance" \
      --severity_level "info" >> pr-review.md

    echo "" >> pr-review.md
  fi
done

# Check for accessibility issues in UI files
for file in $CHANGED_FILES; do
  if [[ $file == *.html ]] || [[ $file == *.jsx ]] || [[ $file == *.tsx ]]; then
    echo "## Accessibility: $file" >> pr-review.md
    swissarmyhammer test review/accessibility \
      --code "$(cat $file)" \
      --wcag_level "AA" >> pr-review.md
    echo "" >> pr-review.md
  fi
done

echo "Review complete! See pr-review.md"
```
Project Setup Automation
```bash
#!/bin/bash
# setup-project.sh

PROJECT_NAME=$1
PROJECT_TYPE=$2  # api, webapp, library

# Create project structure
mkdir -p $PROJECT_NAME/{src,tests,docs}
cd $PROJECT_NAME

# Generate README
swissarmyhammer test docs/readme \
  --project_name "$PROJECT_NAME" \
  --project_description "A $PROJECT_TYPE project" \
  --language "$PROJECT_TYPE" > README.md

# Create initial prompts
mkdir -p prompts/project

# Generate project-specific code review prompt
cat > prompts/project/code-review.md << 'EOF'
---
name: project-code-review
title: Project Code Review
description: Review code according to our project standards
arguments:
  - name: file_path
    description: File to review
    required: true
---
Review {{file_path}} for:
- Our naming conventions (camelCase for JS, snake_case for Python)
- Error handling patterns we use
- Project-specific security requirements
- Performance considerations for our scale
EOF

# Configure SwissArmyHammer for this project
claude mcp add ${PROJECT_NAME}_sah swissarmyhammer serve --prompts ./prompts

echo "Project $PROJECT_NAME setup complete!"
```
Integration Examples
Git Hooks
`.git/hooks/pre-commit`:
```bash
#!/bin/bash
# Check code quality before commit

STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(py|js|ts)$')

if [ -z "$STAGED_FILES" ]; then
  exit 0
fi

echo "Running pre-commit checks..."

for FILE in $STAGED_FILES; do
  # Run security review on staged content
  git show ":$FILE" | swissarmyhammer test review/security \
    --code "$(cat)" \
    --severity_threshold "high" \
    --language "${FILE##*.}"

  if [ $? -ne 0 ]; then
    echo "Security issues found in $FILE"
    exit 1
  fi
done

echo "Pre-commit checks passed!"
```
CI/CD Integration
`.github/workflows/code-quality.yml`:
```yaml
name: Code Quality
on: [push, pull_request]

jobs:
  quality-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install SwissArmyHammer
        run: |
          curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
          echo "$HOME/.local/bin" >> $GITHUB_PATH

      - name: Run Code Reviews
        run: |
          for file in $(find src -name "*.py"); do
            swissarmyhammer test review/code-dynamic \
              --file_path "$file" \
              --language "python" \
              --focus_areas "bugs,security" \
              --severity_level "warning"
          done

      - name: Generate Missing Tests
        run: |
          swissarmyhammer test test/unit \
            --code "$(cat src/core.py)" \
            --framework "pytest" \
            --coverage_target "80" > tests/test_core_generated.py

      - name: Update Documentation
        run: |
          swissarmyhammer test docs/api \
            --code "$(cat src/api.py)" \
            --api_type "REST" \
            --format "markdown" > docs/api.md
```
VS Code Task
.vscode/tasks.json:
{
"version": "2.0.0",
"tasks": [
{
"label": "Review Current File",
"type": "shell",
"command": "swissarmyhammer",
"args": [
"test",
"review/code",
"--file_path",
"${file}"
],
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
}
},
{
"label": "Generate Tests",
"type": "shell",
"command": "swissarmyhammer",
"args": [
"test",
"test/unit",
"--code",
"$(cat ${file})",
"--framework",
"auto-detect"
],
"group": "test"
}
]
}
Real-World Scenarios
Onboarding New Team Members
Create an interactive onboarding workflow:
# Create onboarding checklist
swissarmyhammer test onboarding/checklist \
--team_name "Backend Team" \
--role "Senior Engineer" \
--project_stack "Rust, PostgreSQL, Docker"
# Generate personalized learning path
swissarmyhammer test onboarding/learning-path \
--experience_level "senior" \
--background "Go, MongoDB" \
--target_skills "Rust, async programming"
Code Migration Project
Systematic approach to migrating codebases:
# Analyze legacy code
swissarmyhammer test migration/analyze \
--source_language "Python 2.7" \
--target_language "Python 3.11" \
--codebase_size "50k LOC"
# Generate migration plan
swissarmyhammer test migration/plan \
--analysis_results "$(cat analysis.md)" \
--timeline "3 months" \
--team_size "4 developers"
# Create migration checklist per module
for module in $(find src -name "*.py"); do
swissarmyhammer test migration/module-checklist \
--module_path "$module" \
--complexity "$(wc -l < $module)" \
>> migration-plan.md
done
Technical Debt Assessment
Comprehensive debt analysis workflow:
# Assess technical debt across codebase
swissarmyhammer test debt/assessment \
--project_age "2 years" \
--team_turnover "high" \
--test_coverage "$(pytest --cov=. --cov-report=term | grep TOTAL | awk '{print $4}')"
# Prioritize debt items
swissarmyhammer test debt/prioritize \
--business_impact "high" \
--development_velocity "slowing" \
--upcoming_features "user dashboard, payments"
Performance Optimization Campaign
Systematic performance improvement:
# Identify bottlenecks
swissarmyhammer test performance/analyze \
--profile_data "$(cat profile.json)" \
--target_improvement "50% faster" \
--budget "2 weeks"
# Generate optimization roadmap
swissarmyhammer test performance/roadmap \
--current_metrics "$(cat metrics.json)" \
--constraints "no breaking changes" \
--priority "database queries, API latency"
Advanced Patterns
Team Collaboration Workflows
# Daily standup preparation
swissarmyhammer test standup/prepare \
--yesterday_commits "$(git log --oneline --since='1 day ago' --author="$(git config user.email)")" \
--current_branch "$(git branch --show-current)" \
--blockers "waiting for API keys"
# Sprint retrospective insights
swissarmyhammer test retro/insights \
--sprint_goals "$(cat sprint-goals.md)" \
--completed_stories "8/12" \
--team_feedback "$(cat feedback.json)"
Dynamic Prompt Selection
#!/bin/bash
# smart-review.sh
FILE=$1
EXTENSION="${FILE##*.}"
case $EXTENSION in
py)
PROMPT="review/code-dynamic"
ARGS="--language python --focus_areas style,typing"
;;
js|ts)
PROMPT="review/code-dynamic"
ARGS="--language javascript --focus_areas async,security"
;;
html)
PROMPT="review/accessibility"
ARGS="--wcag_level AA"
;;
sql)
PROMPT="database-query-optimizer"
ARGS="--database postgres"
;;
*)
PROMPT="review/code"
ARGS=""
;;
esac
swissarmyhammer test $PROMPT --file_path "$FILE" $ARGS
Batch Processing
#!/usr/bin/env python3
# batch_analyze.py
import subprocess
import json
import glob
def analyze_file(filepath):
"""Run SwissArmyHammer analysis on a file."""
result = subprocess.run([
'swissarmyhammer', 'test', 'review/code',
'--file_path', filepath,
'--context', 'batch analysis'
], capture_output=True, text=True)
return {
'file': filepath,
'output': result.stdout,
'errors': result.stderr
}
# Analyze all Python files
files = glob.glob('**/*.py', recursive=True)
results = [analyze_file(f) for f in files]
# Save results
with open('analysis_results.json', 'w') as f:
json.dump(results, f, indent=2)
print(f"Analyzed {len(files)} files. Results saved to analysis_results.json")
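The batch script writes a JSON array of per-file results; a small helper can summarize how many runs produced errors. The file name and record shape below match the script above; the function name is illustrative:

```python
import json

def count_with_errors(path):
    """Count batch results whose 'errors' field (captured stderr) was non-empty."""
    with open(path) as f:
        results = json.load(f)
    return sum(1 for r in results if r["errors"])
```

For example, `count_with_errors('analysis_results.json')` after a batch run reports how many files need attention.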
Enterprise Integration Patterns
# Compliance audit preparation
swissarmyhammer test compliance/audit \
--standards "SOC2, GDPR, HIPAA" \
--audit_date "2024-03-15" \
--evidence_path "./compliance-docs"
# Risk assessment for new features
swissarmyhammer test risk/assessment \
--feature_description "$(cat feature-spec.md)" \
--security_requirements "PII handling, payment processing" \
--timeline "Q2 2024"
Multi-Repository Management
# Synchronize standards across repos
for repo in frontend backend mobile; do
cd "../$repo"
swissarmyhammer test standards/sync \
--repo_type "$repo" \
--base_standards "$(cat ../standards/base.md)" \
--output_file "CODING_STANDARDS.md"
done
# Generate cross-repo dependency analysis
swissarmyhammer test deps/analyze \
--repositories "frontend,backend,mobile" \
--focus "security,performance,maintainability"
Custom Filter Integration
Create a prompt that uses custom filters:
---
name: data-transformer
title: Data Transformation Pipeline
description: Transform data using custom filters
arguments:
- name: data
description: Input data (JSON or CSV)
required: true
- name: transformations
description: Comma-separated list of transformations
required: true
---
# Data Transformation
Input data:
{{data}}
Apply transformations: {{transformations}}
{% assign transform_list = transformations | split: "," %}
{% for transform in transform_list %}
{% case transform | strip %}
{% when "uppercase" %}
- Convert all text fields to uppercase
{% when "normalize" %}
- Normalize whitespace and formatting
{% when "validate" %}
- Validate data types and constraints
{% when "aggregate" %}
- Aggregate numeric fields
{% endcase %}
{% endfor %}
Provide:
1. Transformed data
2. Transformation log
3. Any validation errors
4. Summary statistics
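For reference, the Liquid `split: ","` plus `strip` pipeline above amounts to the following parsing logic, sketched here in Python (the set of known transformations is taken from the `case` branches; the function name is illustrative):

```python
def parse_transformations(spec):
    """Split a comma-separated spec, trim whitespace, and keep known names only."""
    known = {"uppercase", "normalize", "validate", "aggregate"}
    return [t.strip() for t in spec.split(",") if t.strip() in known]
```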
Workflow Automation with Issue Management
# Create issues for code quality improvements
swissarmyhammer test quality/issues \
--analysis_report "$(cat code-analysis.json)" \
--severity_threshold "medium" | \
while read -r issue_title; do
swissarmyhammer issue create quality \
--content "$(printf '# Code Quality Issue\n\n%s\n\n## Analysis\n%s\n' "$issue_title" "$(cat details.md)")"
done
# Generate release notes from completed issues
swissarmyhammer test release/notes \
--version "v2.1.0" \
--completed_issues "$(ls issues/complete/*.md)" \
--target_audience "technical users"
Tips and Best Practices
1. Use Command Substitution
# Good - passes file content directly
swissarmyhammer test review/code --code "$(cat main.py)"
# Less efficient - requires file path handling
swissarmyhammer test review/code --file_path main.py
2. Chain Commands
# Review then test
swissarmyhammer test review/code --file_path app.py && \
swissarmyhammer test test/unit --code "$(cat app.py)"
3. Save Common Workflows
Create ~/.swissarmyhammer/scripts/full-review.sh:
#!/bin/bash
FILE=$1
echo "=== Code Review ==="
swissarmyhammer test review/code --file_path "$FILE"
echo -e "\n=== Security Check ==="
swissarmyhammer test review/security --code "$(cat "$FILE")"
echo -e "\n=== Test Generation ==="
swissarmyhammer test test/unit --code "$(cat "$FILE")"
4. Use Environment Variables
export SAH_DEFAULT_LANGUAGE=python
export SAH_DEFAULT_FRAMEWORK=pytest
# Now these defaults apply
swissarmyhammer test test/unit --code "$(cat app.py)"
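Assuming the SAH_* variables act as simple fallbacks when a flag is omitted, the lookup order can be sketched as follows (the function name is illustrative, not part of SwissArmyHammer):

```python
import os

def resolve_arg(cli_value, env_var, fallback=None):
    """An explicit CLI value wins, then the SAH_* environment default, then a fallback."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, fallback)
```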
5. Create Project Templates
Store in ~/.swissarmyhammer/templates/:
# Create new project with templates
cp -r ~/.swissarmyhammer/templates/webapp-template my-new-app
cd my-new-app
swissarmyhammer test docs/readme \
--project_name "my-new-app" \
--project_description "My awesome web app"
Next Steps
- Explore Built-in Prompts for more capabilities
- Learn about Creating Prompts for custom workflows
- Check CLI Reference for all available commands
- See Library Usage for programmatic integration
Troubleshooting
This guide helps you resolve common issues with SwissArmyHammer. For additional support, check the GitHub Issues.
Quick Diagnostics
Run the doctor command for automated diagnosis:
swissarmyhammer doctor --verbose
Installation Issues
Command Not Found
Problem: swissarmyhammer: command not found
Solutions:
- Verify installation:
ls -la ~/.local/bin/swissarmyhammer  # or: ls -la /usr/local/bin/swissarmyhammer
- Add to PATH:
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
- Reinstall:
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
Permission Denied
Problem: Permission denied when running swissarmyhammer
Solutions:
# Make executable
chmod +x $(which swissarmyhammer)
# If installed system-wide, use sudo
sudo chmod +x /usr/local/bin/swissarmyhammer
Installation Script Fails
Problem: Install script errors or hangs
Solutions:
- Manual installation:
# Download the binary directly
curl -L https://github.com/wballard/swissarmyhammer/releases/latest/download/swissarmyhammer-linux-x64 -o swissarmyhammer
chmod +x swissarmyhammer
sudo mv swissarmyhammer /usr/local/bin/
- Build from source:
git clone https://github.com/wballard/swissarmyhammer.git
cd swissarmyhammer
cargo build --release
sudo cp target/release/swissarmyhammer /usr/local/bin/
MCP Server Issues
Server Won't Start
Problem: swissarmyhammer serve fails to start
Solutions:
- Check port availability:
# Default port
lsof -i :8080
# Try a different port
swissarmyhammer serve --port 8081
- Debug mode:
swissarmyhammer --debug serve
- Check permissions:
# Ensure read access to prompt directories
ls -la ~/.swissarmyhammer/prompts
Claude Code Connection Issues
Problem: SwissArmyHammer doesn't appear in Claude Code
Solutions:
- Verify the MCP configuration:
claude mcp list
- Re-add the server:
claude mcp remove swissarmyhammer
claude mcp add swissarmyhammer swissarmyhammer serve
- Check the server is running:
# In another terminal
ps aux | grep swissarmyhammer
- Restart Claude Code:
- Close Claude Code completely
- Start Claude Code
- Check that MCP servers are connected
MCP Protocol Errors
Problem: Protocol errors in Claude Code logs
Solutions:
- Update SwissArmyHammer:
# Check the version
swissarmyhammer --version
# Update to the latest release
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
- Check logs:
# Enable debug logging
swissarmyhammer --debug serve > debug.log 2>&1
- Validate prompt syntax:
swissarmyhammer validate
Prompt Issues
Prompts Not Loading
Problem: Prompts don't appear or are outdated
Solutions:
- Check directories:
# List prompt directories
ls -la ~/.swissarmyhammer/prompts
ls -la ./.swissarmyhammer/prompts
- Validate prompts:
swissarmyhammer prompt test <prompt-name>
swissarmyhammer validate --verbose
- Force reload:
# Restart the server: Ctrl+C to stop, then:
swissarmyhammer serve
Invalid YAML Front Matter
Problem: YAML parsing errors
Common Issues:
- Missing quotes:
# Bad
description: This won't work: because of the colon
# Good
description: "This works: because it's quoted"
- Incorrect indentation:
# Bad
arguments:
- name: test
description: Test argument
# Good
arguments:
  - name: test
    description: Test argument
- Missing required fields:
# Must have name, title, description
---
name: my-prompt
title: My Prompt
description: What this prompt does
---
Template Rendering Errors
Problem: Liquid template errors
Common Issues:
- Undefined variables:
# Error: undefined variable 'foo'
{{ foo }}
# Fix: check that the variable exists
{% if foo %}{{ foo }}{% endif %}
- Invalid filter:
# Error: unknown filter
{{ text | invalid_filter }}
# Fix: use a valid filter
{{ text | capitalize }}
- Syntax errors:
# Error: unclosed tag
{% if condition %}
# Fix: close all tags
{% if condition %}...{% endif %}
Duplicate Prompt Names
Problem: Multiple prompts with same name
Solutions:
- Check the override hierarchy:
swissarmyhammer prompt list --verbose | grep "prompt-name"
- Rename conflicts:
- Local prompts override user prompts
- User prompts override built-in prompts
- Rename one to avoid confusion
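The precedence above (local over user over built-in) amounts to a first-match lookup across directories; a sketch in Python, with illustrative directory arguments:

```python
from pathlib import Path

def resolve_prompt(name, local_dir, user_dir, builtin_dir):
    """Return the first matching prompt file under local > user > built-in precedence."""
    for directory in (local_dir, user_dir, builtin_dir):
        candidate = Path(directory) / f"{name}.md"
        if candidate.exists():
            return candidate
    return None
```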
Performance Issues
Slow Prompt Loading
Problem: Server takes long to start or reload
Solutions:
- Disable file watching:
swissarmyhammer serve --watch false
- Limit prompt directories:
swissarmyhammer serve --prompts ./essential-prompts --builtin false
- Check the directory size:
find ~/.swissarmyhammer/prompts -type f | wc -l
High Memory Usage
Problem: Excessive memory consumption
Solutions:
- Monitor usage:
top | grep swissarmyhammer
- Optimize configuration:
# Disable file watching
swissarmyhammer serve --watch false
# Reduce the prompt count: move unused prompts to an archive
- System limits:
# Check ulimits
ulimit -a
# Increase if needed
ulimit -n 4096
File System Issues
Permission Errors
Problem: Cannot read/write prompt files
Solutions:
- Fix directory permissions:
chmod -R 755 ~/.swissarmyhammer
chmod -R 644 ~/.swissarmyhammer/prompts/*.md
- Check ownership:
ls -la ~/.swissarmyhammer/
# Fix if needed
chown -R $USER:$USER ~/.swissarmyhammer
File Watching Not Working
Problem: Changes to prompts not detected
Solutions:
- Check file system support:
# macOS
fs_usage | grep swissarmyhammer
# Linux
inotifywait -m ~/.swissarmyhammer/prompts
- Increase watch limits (Linux):
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
- Manual reload:
- Restart the server
- Or disable watching with --watch false
CLI Command Issues
Test Command Fails
Problem: swissarmyhammer prompt test errors
Solutions:
- Check the prompt exists:
swissarmyhammer prompt list | grep "prompt-name"
- Validate arguments:
# Show required arguments
swissarmyhammer prompt test prompt-name --help
- Debug mode:
swissarmyhammer prompt test prompt-name --debug
Export/Import Errors
Problem: Cannot export or import prompts
Solutions:
- Check file permissions:
# For export
touch test-export.tar.gz
# For import
ls -la import-file.tar.gz
- Validate the archive:
tar -tzf archive.tar.gz
- Manual export:
tar -czf prompts.tar.gz -C ~/.swissarmyhammer prompts/
Environment-Specific Issues
macOS Issues
Problem: Security warnings or quarantine
Solutions:
# Remove quarantine attribute
xattr -d com.apple.quarantine /usr/local/bin/swissarmyhammer
# Allow in Security & Privacy settings
# System Preferences > Security & Privacy > General
Linux Issues
Problem: Library dependencies missing
Solutions:
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install libssl-dev
# Fedora
sudo dnf install openssl-devel
# Check dependencies
ldd $(which swissarmyhammer)
Windows Issues
Problem: Path or execution issues
Solutions:
- Use PowerShell as Administrator
- Add to PATH:
$env:Path += ";C:\Program Files\swissarmyhammer"
[Environment]::SetEnvironmentVariable("Path", $env:Path, [EnvironmentVariableTarget]::User)
- Windows Defender:
- Add an exclusion for swissarmyhammer.exe
- Check Windows Security logs
Debug Techniques
Enable Verbose Logging
# Server debug mode
swissarmyhammer --debug serve
# Redirect to file
swissarmyhammer --debug serve > debug.log 2>&1
# CLI debug
RUST_LOG=debug swissarmyhammer prompt test prompt-name
Check Configuration
# Run comprehensive checks
swissarmyhammer doctor --verbose
# Check specific areas
swissarmyhammer validate --check mcp
# Auto-fix issues
swissarmyhammer doctor --fix
Trace MCP Communication
# Save MCP messages
swissarmyhammer --debug serve | grep MCP > mcp-trace.log
# Monitor in real-time
swissarmyhammer --debug serve | grep -E "(request|response)"
Getting Help
Documentation
- Check this troubleshooting guide first
- Read the CLI Reference
- Review Configuration options
Community Support
- GitHub Issues
- Discussions
- Discord/Slack Community (if available)
Reporting Issues
When reporting issues, include:
- System information:
swissarmyhammer doctor --json > diagnosis.json
- Steps to reproduce
- Error messages and logs
- Expected vs actual behavior
Debug Information Script
Save this as debug-info.sh:
#!/bin/bash
echo "=== SwissArmyHammer Debug Information ==="
echo "Date: $(date)"
echo "Version: $(swissarmyhammer --version)"
echo "OS: $(uname -a)"
echo ""
echo "=== Doctor Report ==="
swissarmyhammer doctor --verbose
echo ""
echo "=== Configuration ==="
cat ~/.swissarmyhammer/config.toml 2>/dev/null || echo "No config file"
echo ""
echo "=== Prompt Directories ==="
ls -la ~/.swissarmyhammer/prompts 2>/dev/null || echo "No user prompts"
ls -la ./.swissarmyhammer/prompts 2>/dev/null || echo "No local prompts"
echo ""
echo "=== Process Check ==="
ps aux | grep swissarmyhammer | grep -v grep
Run and save output:
bash debug-info.sh > debug-info.txt
Common Error Messages
"Failed to bind to address"
- Port already in use
- Try: --port 8081
"Permission denied"
- File or directory permissions issue
- Try chmod +x or check ownership
"YAML parse error"
- Invalid YAML syntax in a prompt
- Check indentation and special characters
"Template compilation failed"
- Liquid syntax error
- Check that tags are closed and filters exist
"Prompt not found"
- Prompt name doesn't exist
- Check: swissarmyhammer prompt list
"Connection refused"
- MCP server not running
- Start the server: swissarmyhammer serve
Prevention Tips
- Regular maintenance:
# Weekly health check
swissarmyhammer doctor
# Update regularly
swissarmyhammer --version
- Backup prompts:
# Regular backups
swissarmyhammer export ~/.swissarmyhammer/backups/prompts-$(date +%Y%m%d).tar.gz
- Test changes:
# Before committing
swissarmyhammer prompt test new-prompt
swissarmyhammer validate
- Monitor logs:
# Keep logs for debugging
swissarmyhammer --debug serve > server.log 2>&1 &
Contributing
Thank you for your interest in contributing to SwissArmyHammer! This guide will help you get started with contributing to the project.
Overview
SwissArmyHammer welcomes contributions in many forms:
- Code contributions - Features, bug fixes, optimizations
- Prompt contributions - New built-in prompts
- Documentation - Improvements, examples, translations
- Bug reports - Issues and reproducible test cases
- Feature requests - Ideas and suggestions
Getting Started
Prerequisites
- Rust 1.70+ (check with rustc --version)
- Git
- GitHub account
- Basic familiarity with:
- Rust programming
- Model Context Protocol (MCP)
- Liquid templating
Fork and Clone
- Fork the repository on GitHub
- Clone your fork:
git clone https://github.com/YOUR_USERNAME/swissarmyhammer.git
cd swissarmyhammer
- Add the upstream remote:
git remote add upstream https://github.com/wballard/swissarmyhammer.git
Development Setup
- Install the Rust toolchain:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
- Install development tools:
# Format checker
rustup component add rustfmt
# Linter
rustup component add clippy
# Documentation
cargo install mdbook
- Build the project:
cargo build
cargo test
Development Workflow
Branch Strategy
- main - stable release branch
- develop - development branch
- feature/* - feature branches
- fix/* - bug fix branches
- docs/* - documentation branches
Creating a Feature Branch
# Update your fork
git checkout main
git pull upstream main
git push origin main
# Create feature branch
git checkout -b feature/your-feature-name
# Or for fixes
git checkout -b fix/issue-description
Making Changes
- Write code following our style guide
- Add tests for new functionality
- Update documentation as needed
- Run checks before committing
Running Checks
# Format code
cargo fmt
# Run linter
cargo clippy -- -D warnings
# Run tests
cargo test
# Build documentation
cargo doc --no-deps --open
# Check everything
./scripts/check-all.sh
Code Style Guide
Rust Code
Follow Rust standard style with these additions:
// Good: Clear module organization
pub mod prompts;
pub mod template;
pub mod mcp;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};
// Good: Descriptive names
pub struct PromptManager {
prompts: HashMap<String, Prompt>,
directories: Vec<PathBuf>,
watcher: Option<FileWatcher>,
}
// Good: Clear error handling
impl PromptManager {
pub fn load_prompt(&mut self, path: &Path) -> Result<()> {
let content = std::fs::read_to_string(path)
.with_context(|| format!("Failed to read prompt file: {}", path.display()))?;
let prompt = Prompt::parse(&content)
.with_context(|| format!("Failed to parse prompt: {}", path.display()))?;
self.prompts.insert(prompt.name.clone(), prompt);
Ok(())
}
}
// Good: Comprehensive tests
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_load_prompt() {
let mut manager = PromptManager::new();
let result = manager.load_prompt(Path::new("test.md"));
assert!(result.is_ok());
}
}
Documentation
- Use /// for public API documentation
- Include examples in doc comments
- Keep comments concise and helpful
/// Manages a collection of prompts and provides MCP server functionality.
///
/// # Examples
///
/// ```
/// use swissarmyhammer::PromptManager;
///
/// let mut manager = PromptManager::new();
/// manager.load_prompts()?;
/// ```
pub struct PromptManager {
// Implementation details...
}
Error Messages
Make errors helpful and actionable:
// Good
bail!("Prompt '{}' not found in directories: {:?}", name, self.directories);
// Good with context
.with_context(|| format!("Failed to parse YAML front matter in {}", path.display()))?;
// Bad
bail!("Error");
Contributing Prompts
Built-in Prompt Guidelines
- Location: builtin/prompts/
- Categories: Place in the appropriate subdirectory
- Quality: Must be generally useful
- Testing: Include test cases
Prompt Standards
---
name: descriptive-name
title: Human Readable Title
description: |
Clear description of what this prompt does.
Include use cases and examples.
category: development
tags:
- relevant
- searchable
- tags
author: your-email@example.com
version: 1.0.0
arguments:
- name: required_arg
description: What this argument is for
required: true
- name: optional_arg
description: Optional parameter
default: "default value"
---
# Prompt Title
Clear instructions using the arguments:
- {{required_arg}}
- {{optional_arg}}
## Section Headers
Organize the prompt logically...
Testing Prompts
Add a test file at builtin/prompts/tests/your-prompt.test.md:
name: test-your-prompt
cases:
- name: basic usage
arguments:
required_arg: "test value"
expected_contains:
- "test value"
- "expected output"
expected_not_contains:
- "error"
- name: edge case
arguments:
required_arg: ""
optional_arg: "custom"
expected_error: "required_arg cannot be empty"
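The expected_contains and expected_not_contains fields above boil down to substring assertions over the rendered output; the check logic, sketched in Python (the function name is illustrative):

```python
def check_case(output, expected_contains=(), expected_not_contains=()):
    """Return a list of failed expectations for one rendered test case."""
    failures = [f"missing: {s}" for s in expected_contains if s not in output]
    failures += [f"unexpected: {s}" for s in expected_not_contains if s in output]
    return failures
```

An empty return value means the case passes.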
Documentation
Documentation Structure
doc/
├── src/
│   ├── SUMMARY.md    # Table of contents
│   ├── chapter-1.md  # Content files
│   └── images/       # Images and diagrams
└── book.toml         # mdbook configuration
Writing Documentation
- Be clear and concise
- Include examples
- Use proper markdown
- Test all code examples
Building Documentation
cd doc
mdbook build
mdbook serve # Preview at http://localhost:3000
Testing
Test Organization
tests/
├── integration/  # Integration tests
├── fixtures/     # Test data
└── common/       # Shared test utilities
Writing Tests
#[test]
fn test_prompt_loading() {
let temp_dir = tempdir().unwrap();
let prompt_file = temp_dir.path().join("test.md");
std::fs::write(&prompt_file, r#"---
name: test-prompt
title: Test
---
Content"#).unwrap();
let mut manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts().unwrap();
assert!(manager.get_prompt("test-prompt").is_some());
}
Running Tests
# All tests
cargo test
# Specific test
cargo test test_prompt_loading
# With output
cargo test -- --nocapture
# Integration tests only
cargo test --test integration
Submitting Changes
Commit Messages
Follow Conventional Commits:
# Features
git commit -m "feat: add prompt validation API"
git commit -m "feat(mcp): implement notification support"
# Bug fixes
git commit -m "fix: correct template escaping issue"
git commit -m "fix(watcher): handle symlink changes"
# Documentation
git commit -m "docs: add prompt writing guide"
git commit -m "docs(api): document PromptManager methods"
# Performance
git commit -m "perf: optimize prompt loading"
git commit -m "perf(cache): implement LRU cache"
# Refactoring
git commit -m "refactor: simplify error handling"
git commit -m "refactor(template): extract common logic"
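A Conventional Commits header has the shape type(scope): description, with the scope optional; a minimal parser sketch for reference (not part of SwissArmyHammer's tooling):

```python
import re

def parse_commit(header):
    """Parse 'type(scope): description'; returns None if the header doesn't conform."""
    m = re.match(r"^(\w+)(?:\(([^)]+)\))?: (.+)$", header)
    if not m:
        return None
    return {"type": m.group(1), "scope": m.group(2), "description": m.group(3)}
```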
Pull Request Process
- Update your branch:
git fetch upstream
git rebase upstream/main
- Push to your fork:
git push origin feature/your-feature-name
- Create the pull request:
- Use clear, descriptive title
- Reference any related issues
- Describe what changes do
- Include test results
- Add screenshots if UI changes
PR Template
## Description
Brief description of changes
## Related Issue
Fixes #123
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed
## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests added/updated
- [ ] Breaking changes documented
Code Review Process
What We Look For
- Correctness - Does it work as intended?
- Tests - Are changes adequately tested?
- Documentation - Is it documented?
- Style - Does it follow conventions?
- Performance - Any performance impacts?
- Security - Any security concerns?
Review Timeline
- Initial response: 2-3 days
- Full review: Within a week
- Follow-ups: As needed
Addressing Feedback
# Make requested changes
git add -A
git commit -m "address review feedback"
# Or amend if small change
git commit --amend
# Force push to your branch
git push -f origin feature/your-feature-name
Release Process
Version Numbering
We use Semantic Versioning:
- MAJOR: Breaking API changes
- MINOR: New features, backward compatible
- PATCH: Bug fixes, backward compatible
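The three bump rules can be stated as a tiny function (illustrative only; in practice release tooling automates this):

```python
def bump(version, change):
    """Apply a SemVer bump: 'major' for breaking changes, 'minor' for features, 'patch' for fixes."""
    major, minor, patch = (int(x) for x in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```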
Release Checklist
- Update the version in Cargo.toml
- Update CHANGELOG.md
- Run the full test suite
- Build and test binaries
- Update documentation
- Create a release PR
- Tag the release after merge
- Publish to crates.io
Community
Getting Help
- GitHub Issues - Bug reports and features
- Discussions - Questions and ideas
- Discord - Real-time chat (if available)
Code of Conduct
We follow the Rust Code of Conduct:
- Be respectful and inclusive
- Welcome newcomers
- Focus on what's best for the community
- Show empathy towards others
Recognition
Contributors are recognized in:
- The CONTRIBUTORS.md file
- Release notes
- Documentation credits
Quick Reference
Common Commands
# Development
cargo build # Build project
cargo test # Run tests
cargo fmt # Format code
cargo clippy # Lint code
cargo doc # Build docs
# Documentation
cd doc && mdbook serve # Preview docs
# Prompts
cargo run -- list # List prompts
cargo run -- doctor # Validate prompts
# Release
cargo publish --dry-run # Test publishing
Useful Resources
Thank You!
Your contributions make SwissArmyHammer better for everyone. Whether it's fixing a typo, adding a feature, or improving documentation, every contribution is valued.
Happy contributing! π
Development Setup
This guide covers setting up a development environment for working on SwissArmyHammer.
Prerequisites
Required Tools
- Rust 1.70 or later
- Git 2.0 or later
- Cargo (comes with Rust)
- A code editor (VS Code recommended)
Optional Tools
- Docker - For testing container builds
- mdBook - For documentation development
- Node.js - For web-based tooling
- Python - For utility scripts
Environment Setup
Installing Rust
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Follow the installation prompts, then:
source $HOME/.cargo/env
# Verify installation
rustc --version
cargo --version
Setting Up the Repository
# Clone the repository
git clone https://github.com/wballard/swissarmyhammer.git
cd swissarmyhammer
# Install development dependencies
cargo install cargo-watch cargo-edit cargo-outdated
# Install formatting and linting tools
rustup component add rustfmt clippy
# Install documentation tools
cargo install mdbook mdbook-linkcheck mdbook-mermaid
VS Code Setup
Install recommended extensions:
{
"recommendations": [
"rust-lang.rust-analyzer",
"vadimcn.vscode-lldb",
"serayuzgur.crates",
"tamasfe.even-better-toml",
"streetsidesoftware.code-spell-checker",
"yzhang.markdown-all-in-one"
]
}
Settings for .vscode/settings.json:
{
"editor.formatOnSave": true,
"rust-analyzer.cargo.features": "all",
"rust-analyzer.checkOnSave.command": "clippy",
"rust-analyzer.inlayHints.enable": true,
"rust-analyzer.inlayHints.typeHints.enable": true,
"rust-analyzer.inlayHints.parameterHints.enable": true,
"[rust]": {
"editor.defaultFormatter": "rust-lang.rust-analyzer"
}
}
Project Structure
swissarmyhammer/
├── src/
│   ├── main.rs      # CLI entry point
│   ├── lib.rs       # Library entry point
│   ├── cli/         # CLI commands
│   ├── mcp/         # MCP server implementation
│   ├── prompts/     # Prompt management
│   ├── template/    # Template engine
│   └── utils/       # Utilities
├── tests/
│   ├── integration/ # Integration tests
│   └── fixtures/    # Test data
├── doc/
│   └── src/         # Documentation source
├── benches/         # Benchmarks
└── Cargo.toml       # Project manifest
Building the Project
Development Build
# Quick build (debug mode)
cargo build
# Run tests
cargo test
# Run with debug output
RUST_LOG=debug cargo run -- serve
# Watch for changes and rebuild
cargo watch -x build
Release Build
# Optimized build
cargo build --release
# Run release binary
./target/release/swissarmyhammer --version
# Build with all features
cargo build --release --all-features
Cross-Compilation
# Install cross-compilation tools
cargo install cross
# Build for different targets
cross build --target x86_64-pc-windows-gnu
cross build --target aarch64-apple-darwin
cross build --target x86_64-unknown-linux-musl
Development Workflow
Running Tests
# All tests
cargo test
# Unit tests only
cargo test --lib
# Integration tests only
cargo test --test '*'
# Specific test
cargo test test_prompt_loading
# With output
cargo test -- --show-output
# With specific features
cargo test --features "experimental"
Code Quality
# Format code
cargo fmt
# Check formatting
cargo fmt -- --check
# Run linter
cargo clippy
# Strict linting
cargo clippy -- -D warnings
# Check for security issues
cargo audit
# Update dependencies
cargo update
cargo outdated
Documentation
# Build API documentation
cargo doc --no-deps --open
# Build user documentation
cd doc
mdbook build
mdbook serve
# Check documentation examples
cargo test --doc
Debugging
VS Code Debug Configuration
.vscode/launch.json:
{
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug CLI",
"cargo": {
"args": ["build", "--bin=swissarmyhammer"],
"filter": {
"name": "swissarmyhammer",
"kind": "bin"
}
},
"args": ["serve", "--debug"],
"cwd": "${workspaceFolder}",
"env": {
"RUST_LOG": "debug",
"RUST_BACKTRACE": "1"
}
},
{
"type": "lldb",
"request": "launch",
"name": "Debug Tests",
"cargo": {
"args": ["test", "--no-run"],
"filter": {
"name": "swissarmyhammer",
"kind": "lib"
}
},
"args": [],
"cwd": "${workspaceFolder}"
}
]
}
Command Line Debugging
# Enable debug logging
export RUST_LOG=swissarmyhammer=debug
# Enable backtrace
export RUST_BACKTRACE=1
# Run with debugging
cargo run -- serve --debug
# Use GDB
rust-gdb target/debug/swissarmyhammer
# Use LLDB
rust-lldb target/debug/swissarmyhammer
Logging
Add debug logging to your code:
use log::{debug, info, warn, error};
fn process_prompt(name: &str) -> Result<()> {
debug!("Processing prompt: {}", name);
if let Some(prompt) = self.get_prompt(name) {
info!("Found prompt: {}", prompt.title);
Ok(())
} else {
error!("Prompt not found: {}", name);
Err(anyhow!("Prompt not found"))
}
}
Performance Profiling
Benchmarking
Create benchmarks in benches/:
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use swissarmyhammer::PromptManager;
fn bench_prompt_loading(c: &mut Criterion) {
c.bench_function("load 100 prompts", |b| {
b.iter(|| {
let manager = PromptManager::new();
manager.load_prompts()
});
});
}
criterion_group!(benches, bench_prompt_loading);
criterion_main!(benches);
Run benchmarks:
cargo bench
# Compare benchmarks
cargo bench -- --save-baseline before
# Make changes
cargo bench -- --baseline before
CPU Profiling
# Install profiling tools
cargo install flamegraph
# Generate flamegraph
cargo flamegraph --bin swissarmyhammer -- serve
# Using perf (Linux)
perf record --call-graph=dwarf cargo run --release -- serve
perf report
Memory Profiling
# Install valgrind (Linux/macOS)
# macOS: brew install valgrind
# Linux: apt-get install valgrind
# Run with valgrind
valgrind --leak-check=full \
--show-leak-kinds=all \
target/debug/swissarmyhammer serve
# Use heaptrack (Linux)
heaptrack cargo run -- serve
heaptrack_gui heaptrack.*.gz
Testing Strategies
Unit Testing
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_prompt_parsing() {
let content = r#"---
name: test
title: Test Prompt
---
Content"#;
let prompt = Prompt::parse(content).unwrap();
assert_eq!(prompt.name, "test");
assert_eq!(prompt.title, "Test Prompt");
}
}
Integration Testing
In tests/integration/:
use swissarmyhammer::PromptManager;
use tempfile::tempdir;
#[test]
fn test_full_workflow() {
let temp_dir = tempdir().unwrap();
// Create test prompts
std::fs::write(
temp_dir.path().join("test.md"),
"---\nname: test\n---\nContent"
).unwrap();
// Test loading
let mut manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts().unwrap();
// Test retrieval
assert!(manager.get_prompt("test").is_some());
}
Property Testing
Using proptest:
use proptest::prelude::*;
proptest! {
#[test]
fn test_prompt_name_validation(name in "[a-z][a-z0-9-]*") {
assert!(is_valid_prompt_name(&name));
}
}
Common Development Tasks
Adding a New Command
-
Create command module in src/cli/:
// src/cli/new_command.rs
use clap::Args;

#[derive(Args)]
pub struct NewCommand {
    #[arg(short, long)]
    option: String,
}

impl NewCommand {
    pub fn run(&self) -> Result<()> {
        // Implementation
        Ok(())
    }
}
-
Add to CLI enum:
// src/cli/mod.rs
#[derive(Subcommand)]
pub enum Commands {
    NewCommand(NewCommand),
    // ...
}
Adding a Feature
-
Define feature in Cargo.toml:
[features]
experimental = ["dep:experimental-lib"]
-
Conditionally compile code:
#[cfg(feature = "experimental")]
pub mod experimental {
    // Experimental features
}
Updating Dependencies
# Check outdated dependencies
cargo outdated
# Update specific dependency
cargo update -p serde
# Update all dependencies
cargo update
# Edit dependency version (requires cargo-edit)
cargo upgrade serde --version 1.0.150
Troubleshooting
Common Issues
Compilation Errors
# Clean build artifacts
cargo clean
# Check for missing dependencies
cargo check
# Verify toolchain
rustup show
Test Failures
# Run single test with output
cargo test test_name -- --nocapture
# Run tests serially
cargo test -- --test-threads=1
# Skip slow tests
cargo test --lib
Performance Issues
# Build with debug symbols in release (set debug = true under [profile.release] in Cargo.toml)
cargo build --release
# Check binary size
cargo bloat --release
# Analyze dependencies
cargo tree --duplicates
CI/CD Integration
GitHub Actions
.github/workflows/ci.yml:
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
rust: [stable, beta]
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.rust }}
components: rustfmt, clippy
- name: Cache
uses: Swatinem/rust-cache@v2
- name: Check
run: cargo check --all-features
- name: Test
run: cargo test --all-features
- name: Clippy
run: cargo clippy -- -D warnings
- name: Format
run: cargo fmt -- --check
Pre-commit Hooks
.pre-commit-config.yaml:
repos:
- repo: local
hooks:
- id: fmt
name: Format
entry: cargo fmt -- --check
language: system
types: [rust]
pass_filenames: false
- id: clippy
name: Clippy
entry: cargo clippy -- -D warnings
language: system
types: [rust]
pass_filenames: false
- id: test
name: Test
entry: cargo test
language: system
types: [rust]
pass_filenames: false
Next Steps
- Read Contributing for contribution guidelines
- Check Testing for detailed testing practices
- See Release Process for release procedures
- Review Architecture for system design
Testing
This guide covers testing practices and strategies for SwissArmyHammer development.
Overview
SwissArmyHammer uses a comprehensive testing approach:
- Unit tests - Test individual components
- Integration tests - Test component interactions
- End-to-end tests - Test complete workflows
- Property tests - Test with generated inputs
- Benchmark tests - Test performance
Test Organization
swissarmyhammer/
├── src/
│   └── *.rs              # Unit tests in source files
├── tests/
│   ├── integration/      # Integration test files
│   ├── common/           # Shared test utilities
│   └── fixtures/         # Test data files
├── benches/              # Benchmark tests
└── examples/             # Example code (also tested)
Unit Testing
Basic Unit Tests
Place unit tests in the same file as the code:
// src/prompts/prompt.rs
pub struct Prompt {
pub name: String,
pub title: String,
pub content: String,
}
impl Prompt {
pub fn parse(content: &str) -> Result<Self> {
// Implementation
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_valid_prompt() {
let content = r#"---
name: test
title: Test Prompt
---
Content here"#;
let prompt = Prompt::parse(content).unwrap();
assert_eq!(prompt.name, "test");
assert_eq!(prompt.title, "Test Prompt");
assert!(prompt.content.contains("Content here"));
}
#[test]
fn test_parse_missing_name() {
let content = r#"---
title: Test Prompt
---
Content"#;
let result = Prompt::parse(content);
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("name"));
}
}
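For reference, the front-matter handling these tests exercise can be sketched with the standard library alone. This is a simplified stand-in, not the actual Prompt::parse implementation, and it only handles flat key: value pairs rather than full YAML:

```rust
// Simplified front-matter parsing sketch (assumption: the real parser uses a YAML library).
fn parse_front_matter(content: &str) -> Option<(String, String, String)> {
    // Expect a leading "---" delimiter, then YAML, then a closing "---".
    let rest = content.strip_prefix("---")?;
    let (yaml, body) = rest.split_once("---")?;
    let mut name = String::new();
    let mut title = String::new();
    for line in yaml.lines() {
        if let Some((key, value)) = line.split_once(':') {
            match key.trim() {
                "name" => name = value.trim().to_string(),
                "title" => title = value.trim().to_string(),
                _ => {}
            }
        }
    }
    Some((name, title, body.trim().to_string()))
}

fn main() {
    let content = "---\nname: test\ntitle: Test Prompt\n---\nContent here";
    let (name, title, body) = parse_front_matter(content).unwrap();
    assert_eq!(name, "test");
    assert_eq!(title, "Test Prompt");
    assert_eq!(body, "Content here");
    println!("ok");
}
```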
Testing Private Functions
#[cfg(test)]
mod tests {
use super::*;
// Unit tests in a child module can call the parent module's private functions directly
#[test]
fn test_private_helper() {
// Can access private functions within the module
let result = validate_prompt_name("test-name");
assert!(result);
}
}
Mock Dependencies
#[cfg(test)]
mod tests {
use super::*;
use mockall::*;
#[automock]
trait FileSystem {
fn read_file(&self, path: &Path) -> io::Result<String>;
}
#[test]
fn test_with_mock_filesystem() {
let mut mock = MockFileSystem::new();
mock.expect_read_file()
.returning(|_| Ok("file content".to_string()));
let result = process_with_fs(&mock, "test.md");
assert!(result.is_ok());
}
}
Integration Testing
Basic Integration Test
Create files in tests/integration/:
// tests/integration/prompt_loading.rs
use swissarmyhammer::{PromptManager, Config};
use tempfile::tempdir;
use std::fs;
#[test]
fn test_load_prompts_from_directory() {
// Create temporary directory
let temp_dir = tempdir().unwrap();
let prompt_path = temp_dir.path().join("test.md");
// Write test prompt
fs::write(&prompt_path, r#"---
name: test-prompt
title: Test Prompt
---
Test content"#).unwrap();
// Test loading
let mut config = Config::default();
config.prompt_directories.push(temp_dir.path().to_path_buf());
let manager = PromptManager::with_config(config).unwrap();
manager.load_prompts().unwrap();
// Verify
let prompt = manager.get_prompt("test-prompt").unwrap();
assert_eq!(prompt.title, "Test Prompt");
}
Testing MCP Server
// tests/integration/mcp_server.rs
use swissarmyhammer::mcp::{MCPServer, MCPRequest, MCPResponse};
use serde_json::json;
#[tokio::test]
async fn test_mcp_initialize() {
let server = MCPServer::new();
let request = MCPRequest {
jsonrpc: "2.0".to_string(),
method: "initialize".to_string(),
params: json!({}),
id: Some(json!(1)),
};
let response = server.handle_request(request).await.unwrap();
assert_eq!(response.jsonrpc, "2.0");
assert!(response.result.is_some());
assert!(response.result.unwrap()["serverInfo"]["name"]
.as_str()
.unwrap()
.contains("swissarmyhammer"));
}
#[tokio::test]
async fn test_mcp_list_prompts() {
let server = setup_test_server().await;
let request = MCPRequest {
jsonrpc: "2.0".to_string(),
method: "prompts/list".to_string(),
params: json!({}),
id: Some(json!(2)),
};
let response = server.handle_request(request).await.unwrap();
let prompts = &response.result.unwrap()["prompts"];
assert!(prompts.is_array());
assert!(!prompts.as_array().unwrap().is_empty());
}
Testing CLI Commands
// tests/integration/cli_commands.rs
use assert_cmd::Command;
use predicates::prelude::*;
use tempfile::tempdir;
#[test]
fn test_list_command() {
let mut cmd = Command::cargo_bin("swissarmyhammer").unwrap();
cmd.arg("list")
.assert()
.success()
.stdout(predicate::str::contains("Available prompts:"));
}
#[test]
fn test_serve_command_help() {
let mut cmd = Command::cargo_bin("swissarmyhammer").unwrap();
cmd.arg("serve")
.arg("--help")
.assert()
.success()
.stdout(predicate::str::contains("Start the MCP server"));
}
#[test]
fn test_export_import_workflow() {
let temp_dir = tempdir().unwrap();
let export_path = temp_dir.path().join("export.tar.gz");
// Export
Command::cargo_bin("swissarmyhammer").unwrap()
.arg("export")
.arg(&export_path)
.assert()
.success();
// Import
Command::cargo_bin("swissarmyhammer").unwrap()
.arg("import")
.arg(&export_path)
.arg("--dry-run")
.assert()
.success()
.stdout(predicate::str::contains("Would import"));
}
Property Testing
Using Proptest
// src/validation.rs
use proptest::prelude::*;
fn is_valid_prompt_name(name: &str) -> bool {
!name.is_empty()
&& name.chars().all(|c| c.is_alphanumeric() || c == '-')
&& name.chars().next().unwrap().is_alphabetic()
}
#[cfg(test)]
mod tests {
use super::*;
use proptest::prelude::*;
proptest! {
#[test]
fn test_valid_names_accepted(name in "[a-z][a-z0-9-]{0,50}") {
assert!(is_valid_prompt_name(&name));
}
#[test]
fn test_invalid_names_rejected(name in "[^a-z].*|.*[^a-z0-9-].*") {
// Names starting with non-letter or containing invalid chars
if !name.chars().next().unwrap().is_alphabetic()
|| name.chars().any(|c| !c.is_alphanumeric() && c != '-') {
assert!(!is_valid_prompt_name(&name));
}
}
}
}
Testing Template Rendering
use proptest::prelude::*;
proptest! {
    #[test]
    fn test_template_escaping(user_input in any::<String>()) {
        let template = "Hello {{name}}!";
        let mut args = HashMap::new();
        args.insert("name", &user_input);
        let result = render_template(template, &args).unwrap();
        // Should not contain raw HTML
        if user_input.contains('<') {
            assert!(!result.contains('<'));
        }
    }
}
Testing Async Code
Basic Async Tests
#[tokio::test]
async fn test_async_prompt_loading() {
let manager = PromptManager::new();
let result = manager.load_prompts_async().await;
assert!(result.is_ok());
let prompts = manager.list_prompts().await;
assert!(!prompts.is_empty());
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_concurrent_access() {
let manager = Arc::new(PromptManager::new());
let handle1 = {
let mgr = Arc::clone(&manager);
tokio::spawn(async move {
mgr.get_prompt("test1").await
})
};
let handle2 = {
let mgr = Arc::clone(&manager);
tokio::spawn(async move {
mgr.get_prompt("test2").await
})
};
let (result1, result2) = tokio::join!(handle1, handle2);
assert!(result1.is_ok());
assert!(result2.is_ok());
}
Testing Timeouts
#[tokio::test]
async fn test_operation_timeout() {
let manager = PromptManager::new();
let result = tokio::time::timeout(
Duration::from_secs(5),
manager.slow_operation()
).await;
assert!(result.is_ok(), "Operation should complete within timeout");
}
Test Fixtures
Using Test Data
Create reusable test data in tests/fixtures/:
// tests/common/mod.rs
use std::path::PathBuf;
pub fn test_prompt_content() -> &'static str {
r#"---
name: test-prompt
title: Test Prompt
description: A prompt for testing
arguments:
- name: input
description: Test input
required: true
---
Process this input: {{input}}"#
}
pub fn fixtures_dir() -> PathBuf {
PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("tests")
.join("fixtures")
}
pub fn load_fixture(name: &str) -> String {
std::fs::read_to_string(fixtures_dir().join(name))
.expect("Failed to load fixture")
}
Test Builders
// tests/common/builders.rs
pub struct PromptBuilder {
name: String,
title: String,
content: String,
arguments: Vec<ArgumentSpec>,
}
impl PromptBuilder {
pub fn new(name: &str) -> Self {
Self {
name: name.to_string(),
title: format!("{} Title", name),
content: "Default content".to_string(),
arguments: vec![],
}
}
pub fn with_argument(mut self, name: &str, required: bool) -> Self {
self.arguments.push(ArgumentSpec {
name: name.to_string(),
required,
..Default::default()
});
self
}
pub fn build(self) -> String {
// Generate YAML front matter and content
format!(r#"---
name: {}
title: {}
arguments:
{}
---
{}"#, self.name, self.title,
self.arguments.iter()
.map(|a| format!(" - name: {}\n required: {}", a.name, a.required))
.collect::<Vec<_>>()
.join("\n"),
self.content)
}
}
// Usage in tests
#[test]
fn test_with_builder() {
let prompt_content = PromptBuilder::new("test")
.with_argument("input", true)
.with_argument("format", false)
.build();
let prompt = Prompt::parse(&prompt_content).unwrap();
assert_eq!(prompt.arguments.len(), 2);
}
Performance Testing
Benchmarks
Create benchmarks in benches/:
// benches/prompt_loading.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion, BenchmarkId};
use swissarmyhammer::PromptManager;
fn benchmark_prompt_loading(c: &mut Criterion) {
let mut group = c.benchmark_group("prompt_loading");
for size in [10, 100, 1000].iter() {
group.bench_with_input(
BenchmarkId::from_parameter(size),
size,
|b, &size| {
let temp_dir = create_test_prompts(size);
b.iter(|| {
let manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts()
});
},
);
}
group.finish();
}
fn benchmark_template_rendering(c: &mut Criterion) {
c.bench_function("render_simple_template", |b| {
let template = "Hello {{name}}, welcome to {{place}}!";
let mut args = HashMap::new();
args.insert("name", "Alice");
args.insert("place", "Wonderland");
b.iter(|| {
black_box(render_template(template, &args))
});
});
}
criterion_group!(benches, benchmark_prompt_loading, benchmark_template_rendering);
criterion_main!(benches);
Profiling Tests
#[test]
#[ignore] // Run with cargo test -- --ignored
fn profile_large_prompt_set() {
let temp_dir = create_test_prompts(10000);
let start = Instant::now();
let manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts().unwrap();
let duration = start.elapsed();
println!("Loaded 10000 prompts in {:?}", duration);
assert!(duration < Duration::from_secs(5), "Loading too slow");
}
Test Coverage
Generating Coverage Reports
# Install tarpaulin
cargo install cargo-tarpaulin
# Generate coverage report
cargo tarpaulin --out Html --output-dir coverage
# With specific features
cargo tarpaulin --features "experimental" --out Lcov
# Exclude test code from coverage
cargo tarpaulin --exclude-files "*/tests/*" --exclude-files "*/benches/*"
Coverage Configuration
.tarpaulin.toml:
[default]
exclude-files = ["*/tests/*", "*/benches/*", "*/examples/*"]
ignored = false
timeout = "600s"
features = "all"
[report]
out = ["Html", "Lcov"]
output-dir = "coverage"
Test Utilities
Custom Assertions
// tests/common/assertions.rs
pub trait PromptAssertions {
fn assert_valid_prompt(&self);
fn assert_has_argument(&self, name: &str);
fn assert_renders_with(&self, args: &HashMap<String, String>);
}
impl PromptAssertions for Prompt {
fn assert_valid_prompt(&self) {
assert!(!self.name.is_empty(), "Prompt name is empty");
assert!(!self.title.is_empty(), "Prompt title is empty");
assert!(is_valid_prompt_name(&self.name), "Invalid prompt name");
}
fn assert_has_argument(&self, name: &str) {
assert!(
self.arguments.iter().any(|a| a.name == name),
"Prompt missing expected argument: {}", name
);
}
fn assert_renders_with(&self, args: &HashMap<String, String>) {
let result = self.render(args);
assert!(result.is_ok(), "Failed to render: {:?}", result.err());
assert!(!result.unwrap().is_empty(), "Rendered output is empty");
}
}
Test Helpers
// tests/common/helpers.rs
use std::sync::Once;
static INIT: Once = Once::new();
pub fn init_test_logging() {
INIT.call_once(|| {
env_logger::builder()
.filter_level(log::LevelFilter::Debug)
.is_test(true)
.init();
});
}
pub fn with_test_env<F>(vars: Vec<(&str, &str)>, test: F)
where
F: FnOnce() + std::panic::UnwindSafe,
{
let _guards: Vec<_> = vars.into_iter()
.map(|(k, v)| {
env::set_var(k, v);
defer::defer(move || env::remove_var(k))
})
.collect();
test();
}
// Usage
#[test]
fn test_with_env_vars() {
with_test_env(vec![
("SWISSARMYHAMMER_DEBUG", "true"),
("SWISSARMYHAMMER_PORT", "9999"),
], || {
let config = Config::from_env();
assert!(config.debug);
assert_eq!(config.port, 9999);
});
}
Debugging Tests
Debug Output
#[test]
fn test_with_debug_output() {
init_test_logging();
log::debug!("Starting test");
let result = some_operation();
// Print debug info on failure
if result.is_err() {
eprintln!("Operation failed: {:?}", result);
eprintln!("Current state: {:?}", get_debug_state());
}
assert!(result.is_ok());
}
Test Isolation
#[test]
fn test_isolated_state() {
// Use a unique test ID to avoid conflicts
let test_id = uuid::Uuid::new_v4();
let test_dir = temp_dir().join(format!("test-{}", test_id));
// Ensure cleanup even on panic
let _guard = defer::defer(|| {
let _ = fs::remove_dir_all(&test_dir);
});
// Run test with isolated state
run_test_in_dir(&test_dir);
}
CI Testing
GitHub Actions Test Matrix
name: Test
on: [push, pull_request]
jobs:
test:
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
rust: [stable, beta, nightly]
features: ["", "all", "experimental"]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.rust }}
- name: Test
run: cargo test --features "${{ matrix.features }}"
- name: Test Examples
run: cargo test --examples
- name: Doc Tests
run: cargo test --doc
Best Practices
1. Test Organization
- Keep unit tests with the code
- Use integration tests for workflows
- Group related tests
- Share common utilities
2. Test Naming
#[test]
fn test_parse_valid_prompt() { } // Clear what's being tested
#[test]
fn test_render_with_missing_arg() { } // Clear expected outcome
#[test]
fn test_concurrent_access_safety() { } // Clear test scenario
3. Test Independence
- Each test should be independent
- Use temporary directories
- Clean up resources
- Don't rely on test order
4. Test Coverage
- Aim for >80% coverage
- Test edge cases
- Test error paths
- Test concurrent scenarios
5. Performance
- Keep tests fast (<100ms each)
- Use #[ignore] for slow tests
- Run slow tests in CI only
- Mock expensive operations
Next Steps
- Read Development Setup for environment setup
- See Contributing for contribution guidelines
- Check CI/CD for automated testing
- Review Benchmarking for performance testing
Release Process
This guide documents the release process for SwissArmyHammer, including versioning, testing, building, and publishing.
Overview
SwissArmyHammer follows a structured release process:
- Version Planning - Determine version number and scope
- Pre-release Testing - Comprehensive testing
- Release Preparation - Update version, changelog
- Building - Create release artifacts
- Publishing - Release to crates.io and GitHub
- Post-release - Announcements and documentation
Versioning
Semantic Versioning
We follow Semantic Versioning:
MAJOR.MINOR.PATCH
1.0.0
│ │ └── Patch: Bug fixes, no API changes
│ └──── Minor: New features, backward compatible
└────── Major: Breaking changes
Version Guidelines
Patch Release (0.0.X)
- Bug fixes
- Documentation improvements
- Performance improvements (no API change)
- Security patches
Minor Release (0.X.0)
- New features
- New commands
- New configuration options
- Deprecations (with warnings)
Major Release (X.0.0)
- Breaking API changes
- Removal of deprecated features
- Major architectural changes
- Incompatible configuration changes
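The precedence implied by these guidelines can be checked mechanically. A minimal sketch using standard-library tuples (real tooling should use the semver crate, which also handles pre-release and build metadata):

```rust
// Minimal semantic-version parsing sketch; ignores pre-release and build metadata.
fn parse_semver(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.trim_start_matches('v').splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

fn main() {
    // Tuples compare lexicographically, which matches semver precedence here:
    // numeric comparison per component, so 1.10.0 > 1.2.3.
    assert!(parse_semver("v1.2.3").unwrap() < parse_semver("1.10.0").unwrap());
    assert!(parse_semver("2.0.0").unwrap() > parse_semver("1.99.99").unwrap());
    assert_eq!(parse_semver("1.2.3"), Some((1, 2, 3)));
    println!("ok");
}
```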
Release Checklist
Pre-release Checklist
## Pre-release Checklist
- [ ] All tests passing on main branch
- [ ] No outstanding security issues
- [ ] Documentation updated
- [ ] CHANGELOG.md updated
- [ ] Version numbers updated
- [ ] Release branch created
- [ ] Release PR approved
Release Build Checklist
## Build Checklist
- [ ] Clean build on all platforms
- [ ] All features compile
- [ ] Binary size acceptable
- [ ] Performance benchmarks acceptable
- [ ] Security audit passing
Release Preparation
1. Create Release Branch
# Create release branch from main
git checkout main
git pull origin main
git checkout -b release/v1.2.3
# Or for release candidates
git checkout -b release/v1.2.3-rc1
2. Update Version Numbers
Update version in multiple files:
# Cargo.toml
[package]
name = "swissarmyhammer"
version = "1.2.3" # Update this
# Update lock file
cargo update -p swissarmyhammer
3. Update Changelog
Edit CHANGELOG.md:
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [1.2.3] - 2024-03-15
### Added
- New `validate` command for prompt validation
- Support for YAML anchors in prompts
- Performance monitoring dashboard
### Changed
- Improved error messages for template rendering
- Updated minimum Rust version to 1.70
### Fixed
- Fixed file watcher memory leak (#123)
- Corrected prompt loading on Windows (#124)
### Security
- Updated dependencies to patch CVE-2024-XXXXX
[1.2.3]: https://github.com/wballard/swissarmyhammer/compare/v1.2.2...v1.2.3
4. Update Documentation
# Update version in documentation
find doc -name "*.md" -exec sed -i 's/0\.1\.0/1.2.3/g' {} \;
# Rebuild documentation
cd doc
mdbook build
# Update README if needed
vim README.md
5. Run Pre-release Tests
# Full test suite
cargo test --all-features
# Test on different platforms
cargo test --target x86_64-pc-windows-gnu
cargo test --target x86_64-apple-darwin
# Integration tests
cargo test --test '*' -- --test-threads=1
# Benchmarks
cargo bench
# Security audit
cargo audit
# Check for unused dependencies
cargo machete
Building Release Artifacts
Local Build Script
Create scripts/build-release.sh:
#!/bin/bash
set -e
VERSION=$1
if [ -z "$VERSION" ]; then
echo "Usage: $0 <version>"
exit 1
fi
echo "Building SwissArmyHammer v$VERSION"
# Clean previous builds
cargo clean
rm -rf target/release-artifacts
mkdir -p target/release-artifacts
# Build for multiple platforms
PLATFORMS=(
"x86_64-unknown-linux-gnu"
"x86_64-apple-darwin"
"aarch64-apple-darwin"
"x86_64-pc-windows-gnu"
)
for platform in "${PLATFORMS[@]}"; do
echo "Building for $platform..."
if [[ "$platform" == *"windows"* ]]; then
ext=".exe"
else
ext=""
fi
# Build
cross build --release --target "$platform"
# Package
cp "target/$platform/release/swissarmyhammer$ext" \
"target/release-artifacts/swissarmyhammer-$VERSION-$platform$ext"
# Create tarball/zip
if [[ "$platform" == *"windows"* ]]; then
cd target/release-artifacts
zip "swissarmyhammer-$VERSION-$platform.zip" \
"swissarmyhammer-$VERSION-$platform.exe"
rm "swissarmyhammer-$VERSION-$platform.exe"
cd ../..
else
cd target/release-artifacts
tar -czf "swissarmyhammer-$VERSION-$platform.tar.gz" \
"swissarmyhammer-$VERSION-$platform"
rm "swissarmyhammer-$VERSION-$platform"
cd ../..
fi
done
# Generate checksums
cd target/release-artifacts
shasum -a 256 * > checksums.sha256
cd ../..
echo "Release artifacts built in target/release-artifacts/"
GitHub Actions Release
.github/workflows/release.yml:
name: Release
on:
push:
tags:
- 'v*'
permissions:
contents: write
jobs:
create-release:
runs-on: ubuntu-latest
outputs:
upload_url: ${{ steps.create_release.outputs.upload_url }}
steps:
- uses: actions/checkout@v3
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: true
prerelease: false
body_path: RELEASE_NOTES.md
build-release:
needs: create-release
strategy:
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
artifact: swissarmyhammer
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: swissarmyhammer
- os: macos-latest
target: x86_64-apple-darwin
artifact: swissarmyhammer
- os: macos-latest
target: aarch64-apple-darwin
artifact: swissarmyhammer
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: swissarmyhammer.exe
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
with:
targets: ${{ matrix.target }}
- name: Build
run: cargo build --release --target ${{ matrix.target }}
- name: Package (Unix)
if: matrix.os != 'windows-latest'
run: |
cd target/${{ matrix.target }}/release
tar -czf swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.tar.gz swissarmyhammer
mv *.tar.gz ../../../
- name: Package (Windows)
if: matrix.os == 'windows-latest'
run: |
cd target/${{ matrix.target }}/release
7z a swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.zip swissarmyhammer.exe
mv *.zip ../../../
- name: Upload Release Asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.create-release.outputs.upload_url }}
asset_path: ./swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.${{ matrix.os == 'windows-latest' && 'zip' || 'tar.gz' }}
asset_name: swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.${{ matrix.os == 'windows-latest' && 'zip' || 'tar.gz' }}
asset_content_type: ${{ matrix.os == 'windows-latest' && 'application/zip' || 'application/gzip' }}
Publishing
1. Publish to crates.io
# Dry run first
cargo publish --dry-run
# Verify package contents
cargo package --list
# Publish
cargo publish
# Note: You need to be logged in
cargo login <token>
2. Create GitHub Release
# Push release branch
git push origin release/v1.2.3
# Create and merge PR
gh pr create --title "Release v1.2.3" \
--body "Release version 1.2.3. See CHANGELOG.md for details."
# After PR merged, create tag
git checkout main
git pull origin main
git tag -a v1.2.3 -m "Release version 1.2.3"
git push origin v1.2.3
3. Update GitHub Release
After CI builds artifacts:
# Edit release notes
gh release edit v1.2.3 --notes-file RELEASE_NOTES.md
# Publish release (remove draft status)
gh release edit v1.2.3 --draft=false
Post-release
1. Update Documentation
# Update stable docs
git checkout gh-pages
cp -r doc/book/* .
git add .
git commit -m "Update documentation for v1.2.3"
git push origin gh-pages
2. Announcements
Create announcement template:
# SwissArmyHammer v1.2.3 Released!
We're excited to announce the release of SwissArmyHammer v1.2.3!
## Highlights
- New validate command for prompt validation
- Performance monitoring dashboard
- Fixed file watcher memory leak
- Security updates
## Installation
```bash
# Install with cargo
cargo install swissarmyhammer
# Or download binaries
https://github.com/wballard/swissarmyhammer/releases/tag/v1.2.3
```

## What's Changed

## Thank You

Thanks to all contributors who made this release possible!
Post to:
- GitHub Discussions
- Discord/Slack channels
- Twitter/Social media
- Dev.to/Medium article
3. Update Homebrew Formula
If maintaining Homebrew formula:
class Swissarmyhammer < Formula
desc "MCP server for prompt management"
homepage "https://github.com/wballard/swissarmyhammer"
version "1.2.3"
if OS.mac? && Hardware::CPU.intel?
url "https://github.com/wballard/swissarmyhammer/releases/download/v1.2.3/swissarmyhammer-v1.2.3-x86_64-apple-darwin.tar.gz"
sha256 "HASH_HERE"
elsif OS.mac? && Hardware::CPU.arm?
url "https://github.com/wballard/swissarmyhammer/releases/download/v1.2.3/swissarmyhammer-v1.2.3-aarch64-apple-darwin.tar.gz"
sha256 "HASH_HERE"
elsif OS.linux?
url "https://github.com/wballard/swissarmyhammer/releases/download/v1.2.3/swissarmyhammer-v1.2.3-x86_64-unknown-linux-gnu.tar.gz"
sha256 "HASH_HERE"
end
def install
bin.install "swissarmyhammer"
end
end
4. Monitor Release
# Check crates.io
open https://crates.io/crates/swissarmyhammer
# Monitor GitHub issues
gh issue list --label "v1.2.3"
# Check download stats
gh api repos/wballard/swissarmyhammer/releases/tags/v1.2.3
Hotfix Process
For critical fixes:
# Create hotfix branch from tag
git checkout -b hotfix/v1.2.4 v1.2.3
# Make fixes
# ...
# Update version to 1.2.4
vim Cargo.toml
# Fast-track release
cargo test
cargo publish
git tag -a v1.2.4 -m "Hotfix: Critical bug in prompt loading"
git push origin v1.2.4
Release Automation
Release Script
scripts/prepare-release.sh:
#!/bin/bash
set -e
VERSION=$1
TYPE=${2:-patch} # patch, minor, major
if [ -z "$VERSION" ]; then
echo "Usage: $0 <version> [patch|minor|major]"
exit 1
fi
echo "Preparing release v$VERSION ($TYPE)"
# Update version
sed -i "s/^version = .*/version = \"$VERSION\"/" Cargo.toml
# Update lock file
cargo update -p swissarmyhammer
# Run tests
echo "Running tests..."
cargo test --all-features
# Update changelog
echo "Updating CHANGELOG.md..."
# Auto-generate from commits
git log --pretty=format:"- %s (%h)" v$(cargo pkgid | cut -d# -f2)..HEAD >> CHANGELOG_NEW.md
# Build documentation
echo "Building documentation..."
cd doc && mdbook build && cd ..
# Create release notes
echo "# Release v$VERSION" > RELEASE_NOTES.md
echo "" >> RELEASE_NOTES.md
cat CHANGELOG_NEW.md >> RELEASE_NOTES.md
echo "Release preparation complete!"
echo "Next steps:"
echo "1. Review and edit CHANGELOG.md and RELEASE_NOTES.md"
echo "2. Commit changes"
echo "3. Create PR for release/v$VERSION"
echo "4. After merge, tag and push"
Rollback Procedure
If issues are found after release:
-
Yank from crates.io (if critical):
cargo yank --vers 1.2.3
-
Update GitHub Release:
gh release edit v1.2.3 --prerelease
-
Communicate:
- Post in announcements
- Create issue for tracking
- Prepare hotfix
-
Fix and Re-release:
# Create fix
git checkout -b fix/critical-issue
# ... make fixes ...
# New version
cargo release patch
Best Practices
-
Test Thoroughly
- Run full test suite
- Test on all platforms
- Manual smoke tests
-
Document Changes
- Update CHANGELOG.md
- Write clear release notes
- Update migration guides
-
Communicate Clearly
- Announce deprecations early
- Provide migration paths
- Respond to feedback quickly
-
Automate When Possible
- Use CI for builds
- Automate version updates
- Script repetitive tasks
Next Steps
- Review Contributing for development workflow
- See Testing for test requirements
- Check Development for build setup
- Read Changelog for version history
Changelog
All notable changes to SwissArmyHammer will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Unreleased
Added
- Comprehensive documentation with mdBook
- GitHub Pages deployment for documentation
- Enhanced error messages with context
- Validation for prompt arguments
- Support for YAML anchors in prompts
- Performance benchmarks
Changed
- BREAKING: The validate command no longer accepts custom workflow directories via --workflow-dir. Workflows are now only loaded from standard locations: builtin, user (~/.swissarmyhammer/workflows), and local (./.swissarmyhammer/workflows)
- Improved template rendering performance
- Better error handling in MCP server
- Enhanced file watching efficiency
- Validation error paths for workflows now include source location (e.g., workflow:builtin:example instead of workflow:example)
Fixed
- Memory leak in file watcher
- Prompt loading on Windows paths
- Template escaping for special characters
0.2.0 - 2024-03-01
Added
- MCP (Model Context Protocol) server implementation
- File watching for automatic prompt reloading
- Doctor command for system health checks
- Liquid template engine integration
- Support for prompt arguments and validation
- Recursive directory scanning
- YAML front matter parsing
Changed
- Migrated from simple templates to Liquid engine
- Improved prompt discovery algorithm
- Enhanced CLI output formatting
- Better error messages and diagnostics
Fixed
- Cross-platform path handling
- Unicode support in prompts
- Memory usage optimization
Security
- Added input sanitization for templates
- Implemented secure file access controls
0.1.0 - 2024-01-15
Added
- Initial release
- Basic prompt management functionality
- CLI interface with subcommands
- List command to show available prompts
- Serve command for MCP integration
- Simple template substitution
- Configuration file support
- Basic documentation
Changed
- N/A (initial release)
Fixed
- N/A (initial release)
Deprecated
- N/A (initial release)
Removed
- N/A (initial release)
Security
- N/A (initial release)
Version History
Versioning Policy
SwissArmyHammer follows Semantic Versioning:
- MAJOR version for incompatible API changes
- MINOR version for backwards-compatible functionality additions
- PATCH version for backwards-compatible bug fixes
Pre-1.0 Versions
During the 0.x series:
- Minor version bumps may include breaking changes
- The API is considered unstable
- Features may be experimental
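The versioning policy above can be sketched as a small decision rule. This is an illustrative Python sketch (not part of SwissArmyHammer), including the pre-1.0 caveat that breaking changes may ship in a minor bump:

```python
def bump(version, change):
    """Return the next version for a given change type under SemVer."""
    major, minor, patch = map(int, version.split("."))
    if change == "breaking":
        # During the 0.x series, breaking changes may ship in a minor bump.
        if major == 0:
            return f"0.{minor + 1}.0"
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # bug fix

print(bump("0.1.5", "breaking"))  # 0.2.0
print(bump("1.4.2", "feature"))   # 1.5.0
print(bump("1.4.2", "fix"))       # 1.4.3
```

Note how a breaking change on 0.1.x lands on 0.2.0, which matches the migration guide below.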
Migration Guides
0.1.x to 0.2.x
Breaking Changes:
1. Template Engine Change
   - Old: Simple `{variable}` substitution
   - New: Liquid templates with `{{variable}}`
   - Migration: Update all prompts to use double braces
2. Configuration Format
   - Old: JSON configuration
   - New: TOML configuration
   - Migration: Convert config.json to config.toml
3. Prompt Metadata
   - Old: Optional metadata
   - New: Required YAML front matter
   - Migration: Add minimal front matter to all prompts
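The configuration-format change above can be scripted. The sketch below converts a flat JSON config to TOML using only the standard library (Python has no built-in TOML writer); the keys shown are hypothetical, so check your actual config.json for the real ones:

```python
import json

def to_toml(d):
    """Write a flat dict of scalar values as TOML lines."""
    lines = []
    for key, value in d.items():
        if isinstance(value, bool):
            # TOML booleans are lowercase, unlike Python's repr.
            lines.append(f"{key} = {str(value).lower()}")
        elif isinstance(value, str):
            lines.append(f'{key} = "{value}"')
        else:
            lines.append(f"{key} = {value}")
    return "\n".join(lines) + "\n"

# Hypothetical keys -- your actual config.json may differ.
old_config = json.loads('{"prompt_dir": "./prompts", "watch": true, "port": 8080}')
print(to_toml(old_config))
```

Nested tables and arrays would need a real TOML library; this only covers flat scalar settings.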
Example Migration:
Old prompt (0.1.x):

```markdown
# Code Review

Review this {language} code:

{code}
```

New prompt (0.2.x):

```markdown
---
name: code-review
title: Code Review
arguments:
  - name: language
    required: true
  - name: code
    required: true
---

# Code Review

Review this {{language}} code:

{{code}}
```
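The brace upgrade in the example above is mechanical enough to script. This is a hypothetical helper, not a SwissArmyHammer tool: it assumes placeholders are simple identifiers, leaves already-doubled braces alone, and does not add the required YAML front matter, which you would still write by hand:

```python
import re

def upgrade_placeholders(text):
    """Rewrite 0.1.x-style {variable} placeholders as Liquid {{variable}}."""
    # Match a single-braced identifier; the lookarounds skip braces
    # that are already doubled, so the function is safe to re-run.
    return re.sub(r"(?<!\{)\{([A-Za-z_][A-Za-z0-9_]*)\}(?!\})", r"{{\1}}", text)

old_body = "# Code Review\nReview this {language} code:\n{code}"
print(upgrade_placeholders(old_body))
```

Running it twice is a no-op, which makes it safe to apply across a whole prompt directory.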
Release Schedule
- Patch releases: As needed for bug fixes
- Minor releases: Monthly with new features
- Major releases: When breaking changes are necessary
Support Policy
- Latest version: Full support
- Previous minor version: Security fixes only
- Older versions: No support
Contributing to Changelog
When contributing, please:
- Add entries under "Unreleased"
- Use the appropriate section
- Reference issue/PR numbers
- Keep descriptions concise
- Sort entries by importance
Example entry:

```markdown
### Fixed
- Fix memory leak in file watcher (#123)
```
License
SwissArmyHammer is distributed under the MIT License, a permissive open-source license that allows for commercial use, modification, distribution, and private use.
MIT License
MIT License
Copyright (c) 2024 SwissArmyHammer Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
What This Means
You CAN:
- ✅ Commercial Use - Use SwissArmyHammer in commercial projects and products
- ✅ Modify - Make changes to the source code for your needs
- ✅ Distribute - Share the software with others
- ✅ Private Use - Use for private purposes without sharing modifications
- ✅ Sublicense - Include SwissArmyHammer in software with different licensing

You MUST:

- 📄 Include License - Include the copyright notice and license in copies
- 📄 Include Copyright - Keep the copyright notice intact

You CANNOT:

- ❌ Hold Liable - Hold contributors liable for damages
- ❌ Use Trademark - Use SwissArmyHammer name/logo without permission
Dependencies
SwissArmyHammer uses various open-source dependencies, each with their own licenses:
Core Dependencies
- tokio (MIT) - Async runtime
- serde (MIT/Apache-2.0) - Serialization framework
- clap (MIT/Apache-2.0) - Command line parsing
- liquid (MIT/Apache-2.0) - Template engine
- anyhow (MIT/Apache-2.0) - Error handling
Full Dependency List
To view all dependencies and their licenses:

```bash
cargo license
```

This requires the `cargo-license` subcommand (install it with `cargo install cargo-license`). Alternatively, check the `Cargo.lock` file for a complete list.
Contributing
By contributing to SwissArmyHammer, you agree that your contributions will be licensed under the MIT License. See Contributing for more details.
Prompt Content
Your Prompts
Prompts you create and store in SwissArmyHammer remain your intellectual property. The MIT License applies only to the SwissArmyHammer software itself, not to the content you create with it.
Built-in Prompts
Built-in prompts distributed with SwissArmyHammer are covered by the same MIT License as the software.
Third-Party Content
Some documentation and examples may include references to third-party services or tools. These references are for illustration only and don't imply endorsement. Third-party tools and services are subject to their own licenses and terms.
Warranty Disclaimer
SwissArmyHammer is provided "as is" without warranty of any kind. The full disclaimer is included in the license text above. In particular:
- No warranty of merchantability
- No warranty of fitness for a particular purpose
- No warranty against defects
- Use at your own risk
Questions
If you have questions about the license:
- Read the MIT License FAQ
- Consult with a legal professional for specific situations
- Open an issue on GitHub for clarification about SwissArmyHammer's use of the license
License Changes
The project maintainers reserve the right to release future versions under a different license. However:
- Existing versions remain under their released license
- Contributors will be notified of proposed changes
- Significant changes require community discussion
Compliance
To comply with the MIT License when using SwissArmyHammer:
- In Source Distribution: Include the LICENSE file
- In Binary Distribution: Include license text in documentation
- In Modified Versions: Note your changes but keep original copyright
- In Products: Include attribution in your credits/about section
Example attribution:

```text
This product includes software developed by the SwissArmyHammer project
(https://github.com/wballard/swissarmyhammer), licensed under the MIT License.
```