SwissArmyHammer
The MCP server for managing prompts as markdown files
SwissArmyHammer is a powerful Model Context Protocol (MCP) server that lets you manage AI prompts as simple markdown files. It seamlessly integrates with Claude Code and other MCP-compatible tools, providing a flexible and organized way to work with AI prompts.
What is SwissArmyHammer?
SwissArmyHammer transforms how you work with AI prompts by:
- File-based prompt management - Store prompts as markdown files with YAML front matter
- Live reloading - Changes to prompt files are automatically detected and reloaded
- Template variables - Use {{variable}} syntax for dynamic prompt customization
- MCP integration - Works seamlessly with Claude Code and other MCP clients
- Organized hierarchy - Support for built-in, user, and local prompt directories
- Developer-friendly - Rich CLI with diagnostics and shell completions
Key Features
Quick Setup
Get started by installing with Cargo:
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
This requires Rust 1.70+ to be installed. Get Rust from rustup.rs.
Simple Prompt Format
Create prompts using familiar markdown with YAML front matter:
---
title: Code Review Helper
description: Helps review code for best practices and potential issues
arguments:
- name: code
description: The code to review
required: true
- name: language
description: Programming language
required: false
default: "auto-detect"
---
# Code Review
Please review the following {{language}} code:
{{code}}
Focus on:
- Code quality and readability
- Potential bugs or security issues
- Performance considerations
- Best practices adherence
Template Variables
Use template variables to make prompts dynamic and reusable:
- {{variable}} - Required variables
- {{variable:default}} - Optional variables with defaults
- Support for strings, numbers, booleans, and JSON objects
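To make the substitution idea concrete, here is a minimal Python sketch of how {{variable}} and {{variable:default}} placeholders can be filled in. This is an illustration of the concept only, not SwissArmyHammer's actual engine (which uses Liquid templating):

```python
import re

def render(template: str, values: dict) -> str:
    """Sketch of {{variable}} / {{variable:default}} substitution.

    Illustrative only - SwissArmyHammer's real renderer is Liquid-based.
    """
    def substitute(match):
        name = match.group(1)
        default = match.group(2)
        if name in values:
            return str(values[name])
        if default is not None:
            return default  # optional variable: fall back to its default
        raise KeyError(f"missing required variable: {name}")

    # Matches {{name}} or {{name:default}}
    return re.sub(r"\{\{(\w+)(?::([^}]*))?\}\}", substitute, template)

print(render("Review this {{language:auto-detect}} code: {{code}}",
             {"code": "print('hi')"}))
```

With no language supplied, the default kicks in and the output is "Review this auto-detect code: print('hi')"; omitting the required code value raises an error instead.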
Built-in Diagnostics
The doctor command helps troubleshoot setup issues:
swissarmyhammer doctor
Use Cases
SwissArmyHammer is perfect for:
- Development Teams - Share and standardize AI prompts across your team
- Individual Developers - Organize your personal prompt library
- Content Creators - Manage writing and editing prompts
- Researchers - Organize domain-specific prompts and templates
- Students - Build a learning-focused prompt collection
Architecture
SwissArmyHammer follows a simple but powerful architecture:
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│   Claude Code   │─────▶│ SwissArmyHammer  │─────▶│  Prompt Files   │
│  (MCP Client)   │      │   (MCP Server)   │      │   (.md files)   │
└─────────────────┘      └──────────────────┘      └─────────────────┘
                                  │
                                  ▼
                          ┌──────────────────┐
                          │   File Watcher   │
                          │  (Auto-reload)   │
                          └──────────────────┘
Getting Started
Ready to get started? Check out our Installation Guide or jump straight to creating Your First Prompt.
For integration with Claude Code, see our Claude Code Integration guide.
Community
- GitHub: github.com/wballard/swissarmyhammer
- Issues: Report bugs and request features
- Discussions: Community Q&A and sharing
- Contributing: See our Contributing Guide
License
SwissArmyHammer is open source software licensed under the MIT License. See the License page for details.
Installation
Quick Install (Recommended)
Unix-like Systems (Linux/macOS)
curl -fsSL https://raw.githubusercontent.com/swissarmyhammer/swissarmyhammer/main/install.sh | sh
This script will:
- Detect your platform automatically
- Download the latest release
- Install to /usr/local/bin
- Verify the installation
Clone and Build
Installing from source requires:
- Rust 1.70 or later - Install from rustup.rs
- Git - For cloning the repository
If you want to build from source or contribute to development:
# Clone the repository
git clone https://github.com/wballard/swissarmyhammer.git
cd swissarmyhammer
# Build the CLI (debug mode for development)
cargo build
# Build optimized release version
cargo build --release
# Install from the local source
cargo install --path swissarmyhammer-cli
# Or run directly without installing
cargo run --bin swissarmyhammer -- --help
Future Installation Methods
Pre-built binaries and package manager support are planned for future releases:
- macOS: Homebrew formula
- Linux: DEB and RPM packages
- Windows: MSI installer and Chocolatey package
- crates.io: Published crate for cargo install swissarmyhammer-cli
Check the releases page for updates.
Verification
After installation, verify that SwissArmyHammer is working correctly:
# Check version
swissarmyhammer --version
# Run diagnostics
swissarmyhammer doctor
# Show help
swissarmyhammer --help
# List available commands
swissarmyhammer list
The doctor command will check your installation and provide helpful diagnostics if anything needs attention.
Shell Completions
Generate and install shell completions for better CLI experience:
# Bash
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Zsh (add to fpath)
swissarmyhammer completion zsh > ~/.zfunc/_swissarmyhammer
# Fish
swissarmyhammer completion fish > ~/.config/fish/completions/swissarmyhammer.fish
# PowerShell
swissarmyhammer completion powershell >> $PROFILE
Remember to reload your shell or start a new terminal session for completions to take effect.
Updating
To update SwissArmyHammer to the latest version:
# Update from git repository
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli --force
The --force flag will overwrite the existing installation.
Next Steps
Once installed, continue to the Quick Start guide to set up SwissArmyHammer with Claude Code and create your first prompt.
Troubleshooting
Common Issues
Command not found: Make sure ~/.cargo/bin is in your PATH.
Build failures: Ensure you have Rust 1.70+ installed and try updating Rust:
rustup update
Permission errors: Don't use sudo with cargo install - it installs to your user directory.
For more help, check the Troubleshooting guide or run:
swissarmyhammer doctor
Quick Start
Get up and running with SwissArmyHammer in just a few minutes.
Prerequisites
Before you begin, make sure you have:
- SwissArmyHammer installed (see Installation)
- Claude Code (or another MCP-compatible client)
Step 1: Verify Installation
First, check that SwissArmyHammer is properly installed:
swissarmyhammer --version
Run the doctor command to check your setup:
swissarmyhammer doctor
This will check your system and provide recommendations if anything needs attention.
Step 2: Configure Claude Code
Add SwissArmyHammer to your Claude Code MCP configuration:
Find Your Config File
The Claude Code configuration file is located at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Add the Configuration
Create or edit the configuration file with the following content:
{
"mcpServers": {
"swissarmyhammer": {
"command": "swissarmyhammer",
"args": ["serve"]
}
}
}
If you already have other MCP servers configured, just add the swissarmyhammer entry to your existing mcpServers object.
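For example, alongside a hypothetical existing filesystem server (the name and command here are placeholders, not a real configuration), the merged file might look like:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "some-other-mcp-server",
      "args": []
    },
    "swissarmyhammer": {
      "command": "swissarmyhammer",
      "args": ["serve"]
    }
  }
}
```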
Step 3: Create Your Prompt Directory
Create a directory for your prompts:
mkdir -p ~/.swissarmyhammer/prompts
This is where you'll store your custom prompts. SwissArmyHammer will automatically watch this directory for changes.
Step 4: Create Your First Prompt
Create a simple prompt file:
cat > ~/.swissarmyhammer/prompts/helper.md << 'EOF'
---
title: General Helper
description: A helpful assistant for various tasks
arguments:
- name: task
description: What you need help with
required: true
- name: style
description: How to approach the task
required: false
default: "friendly and concise"
---
# Task Helper
Please help me with: {{task}}
Approach this in a {{style}} manner. Provide clear, actionable advice.
EOF
Step 5: Test the Setup
- Restart Claude Code to pick up the new MCP server configuration.
- Open Claude Code and start a new conversation.
- Try using your prompt: In Claude Code, you should now see SwissArmyHammer prompts available in the prompt picker.
- Use the built-in prompts: SwissArmyHammer comes with several built-in prompts you can try right away:
  - help - Get help with using SwissArmyHammer
  - debug-error - Debug error messages
  - code-review - Review code for issues
  - docs-readme - Generate README files
Step 6: Verify Everything Works
Test that SwissArmyHammer is working correctly:
# Check if Claude Code can connect (this will show server info)
swissarmyhammer serve --help
# Run diagnostics again to see the updated status
swissarmyhammer doctor
The doctor command should now show that Claude Code configuration is found and prompts are loading correctly.
What's Next?
Now that you have SwissArmyHammer set up, you can:
- Explore built-in prompts - See what's available out of the box
- Create more prompts - Build your own prompt library
- Learn advanced features - Template variables, prompt organization, etc.
Recommended Next Steps
- Create Your First Custom Prompt
- Learn about Template Variables
- Explore Built-in Prompts
- Advanced Prompt Techniques
Troubleshooting
If something isn't working:
- Run the doctor: swissarmyhammer doctor
- Check Claude Code logs: Look for any error messages
- Verify file permissions: Make sure SwissArmyHammer can read your prompt files
- Restart Claude Code: Sometimes a restart is needed after configuration changes
For more detailed troubleshooting, see the Troubleshooting guide.
Getting Help
If you need help:
- Check the Troubleshooting guide
- Look at Examples for inspiration
- Ask questions in GitHub Discussions
- Report bugs in GitHub Issues
Your First Prompt
Let's create your first custom prompt with SwissArmyHammer! This guide will walk you through creating a useful code review prompt.
Understanding Prompt Structure
SwissArmyHammer prompts are markdown files with YAML front matter. Here's the basic structure:
---
title: Your Prompt Title
description: What this prompt does
arguments:
- name: argument_name
description: What this argument is for
required: true/false
default: "optional default value"
---
# Your Prompt Content
Use {{argument_name}} to insert variables into your prompt.
Creating a Code Review Prompt
Let's create a practical code review prompt step by step.
Step 1: Create the File
First, create a new prompt file in your prompts directory:
# Create the file
touch ~/.swissarmyhammer/prompts/code-review.md
# Or create a category directory
mkdir -p ~/.swissarmyhammer/prompts/development
touch ~/.swissarmyhammer/prompts/development/code-review.md
Step 2: Add the YAML Front Matter
Open the file in your favorite editor and add the front matter:
---
title: Code Review Assistant
description: Comprehensive code review with focus on best practices, security, and performance
arguments:
- name: code
description: The code to review (can be a function, class, or entire file)
required: true
- name: language
description: Programming language (helps with language-specific advice)
required: false
default: "auto-detect"
- name: focus
description: Areas to focus on (security, performance, readability, etc.)
required: false
default: "general best practices"
---
Step 3: Write the Prompt Content
Below the front matter, add the prompt content:
# Code Review
I need a thorough code review for the following {{language}} code.
## Code to Review
```{{language}}
{{code}}
```
## Review Focus
Please focus on: {{focus}}
## Review Criteria
Please analyze the code for:
### Security
- Potential security vulnerabilities
- Input validation issues
- Authentication/authorization concerns
### Performance
- Inefficient algorithms or operations
- Memory usage concerns
- Potential bottlenecks
### Readability & Maintainability
- Code clarity and organization
- Naming conventions
- Documentation needs
### Testing & Reliability
- Error handling
- Edge cases
- Testability
### Architecture & Design
- SOLID principles adherence
- Design patterns usage
- Code structure
## Output Format
Please provide:
- Overall Assessment - Brief summary of code quality
- Specific Issues - List each issue with:
  - Severity (High/Medium/Low)
  - Location (line numbers if applicable)
  - Explanation of the problem
  - Suggested fix
- Positive Aspects - What's done well
- Recommendations - Broader suggestions for improvement
Focus especially on {{focus}} in your analysis.
Step 4: Complete File Example
Here's the complete prompt file:
---
title: Code Review Assistant
description: Comprehensive code review with focus on best practices, security, and performance
arguments:
- name: code
description: The code to review (can be a function, class, or entire file)
required: true
- name: language
description: Programming language (helps with language-specific advice)
required: false
default: "auto-detect"
- name: focus
description: Areas to focus on (security, performance, readability, etc.)
required: false
default: "general best practices"
---
# Code Review
I need a thorough code review for the following {{language}} code.
## Code to Review
```{{language}}
{{code}}
```
## Review Focus
Please focus on: {{focus}}
## Review Criteria
Please analyze the code for:
### Security
- Potential security vulnerabilities
- Input validation issues
- Authentication/authorization concerns
### Performance
- Inefficient algorithms or operations
- Memory usage concerns
- Potential bottlenecks
### Readability & Maintainability
- Code clarity and organization
- Naming conventions
- Documentation needs
### Testing & Reliability
- Error handling
- Edge cases
- Testability
### Architecture & Design
- SOLID principles adherence
- Design patterns usage
- Code structure
## Output Format
Please provide:
- Overall Assessment - Brief summary of code quality
- Specific Issues - List each issue with:
  - Severity (High/Medium/Low)
  - Location (line numbers if applicable)
  - Explanation of the problem
  - Suggested fix
- Positive Aspects - What's done well
- Recommendations - Broader suggestions for improvement
Focus especially on {{focus}} in your analysis.
Step 5: Test Your Prompt
Save the file and test that SwissArmyHammer can load it:
# Check if your prompt loads correctly
swissarmyhammer doctor
The doctor command will validate your YAML syntax and confirm the prompt is loaded.
Step 6: Use Your Prompt
- Open Claude Code
- Start a new conversation
- Look for your prompt in the prompt picker - it should appear as βCode Review Assistantβ
- Fill in the parameters:
  - code: Paste some code you want reviewed
  - language: Specify the programming language (optional)
  - focus: Specify what to focus on (optional)
Understanding What Happened
When you created this prompt, SwissArmyHammer:
- Detected the new file using its file watcher
- Parsed the YAML front matter to understand the prompt structure
- Made it available to Claude Code via the MCP protocol
- Prepared for template substitution when the prompt is used
Best Practices for Your First Prompt
Do's
- Use descriptive titles and descriptions
- Document your arguments clearly
- Provide sensible defaults for optional arguments
- Structure your prompt content with clear sections
- Use template variables to make prompts flexible
Don'ts
- Don't use required arguments unless necessary
- Don't make prompts too rigid - allow for flexibility
- Don't forget to test your YAML syntax
- Don't use overly complex template logic in your first prompts
Next Steps
Now that you've created your first prompt, you can:
- Create more prompts for different use cases
- Organize prompts into directories by category
- Learn advanced template features like conditionals and loops
- Share prompts with your team or the community
Recommended Reading
- Creating Prompts - Comprehensive guide to prompt creation
- Template Variables - Advanced template features
- Prompt Organization - How to organize your prompt library
- Built-in Prompts - Examples from the built-in library
Troubleshooting
If your prompt isn't working:
- Check YAML syntax - Make sure your front matter is valid YAML
- Run doctor - swissarmyhammer doctor will catch common issues
- Check file permissions - Make sure SwissArmyHammer can read the file
- Restart Claude Code - Sometimes needed after creating new prompts
Common issues:
- YAML indentation errors - Use spaces, not tabs
- Missing required fields - Title and description are required
- Invalid argument structure - Check the argument format
- File encoding - Use UTF-8 encoding for your markdown files
Creating Prompts
SwissArmyHammer prompts are markdown files with YAML front matter that define reusable AI prompts. This guide walks you through creating effective prompts.
Basic Structure
Every prompt file has two parts:
- YAML Front Matter - Metadata about the prompt
- Markdown Content - The actual prompt template
---
name: my-prompt
title: My Awesome Prompt
description: Does something useful
arguments:
- name: input
description: What to process
required: true
---
# My Prompt
Please help me with {{input}}.
Provide a detailed response.
YAML Front Matter
The YAML front matter defines the prompt's metadata:
Required Fields
- name - Unique identifier for the prompt
- title - Human-readable name
- description - What the prompt does
Optional Fields
- category - Group related prompts (e.g., "development", "writing")
- tags - List of keywords for discovery
- arguments - Input parameters (see Arguments)
- author - Prompt creator
- version - Version number
- license - License information
Example
---
name: code-review
title: Code Review Assistant
description: Reviews code for best practices, bugs, and improvements
category: development
tags: ["code", "review", "quality", "best-practices"]
author: SwissArmyHammer Team
version: 1.0.0
arguments:
- name: code
description: The code to review
required: true
- name: language
description: Programming language
required: false
default: auto-detect
- name: focus
description: Specific areas to focus on
required: false
default: all aspects
---
Arguments
Arguments define the inputs your prompt accepts. Each argument has:
Argument Properties
- name - Parameter name (used in template as {{name}})
- description - What this parameter is for
- required - Whether the argument is mandatory
- default - Default value if not provided
- type_hint - Expected data type (documentation only)
Example Arguments
arguments:
- name: text
description: Text to analyze
required: true
type_hint: string
- name: format
description: Output format
required: false
default: markdown
type_hint: string
- name: max_length
description: Maximum response length
required: false
default: 500
type_hint: integer
- name: include_examples
description: Include code examples
required: false
default: true
type_hint: boolean
Template Content
The markdown content is your prompt template using Liquid templating.
Basic Variables
Use {{variable}} to insert argument values:
Please review this {{language}} code:
```{{code}}```
Focus on {{focus}}.
Conditional Logic
Use {% if %} blocks for conditional content:
{% if language == "python" %}
Pay special attention to:
- PEP 8 style guidelines
- Type hints
- Error handling
{% elsif language == "javascript" %}
Pay special attention to:
- ESLint rules
- Async/await usage
- Error handling
{% endif %}
Loops
Use {% for %} to iterate over lists:
{% if tags %}
Tags: {% for tag in tags %}#{{tag}}{% unless forloop.last %}, {% endunless %}{% endfor %}
{% endif %}
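The loop above renders a comma-separated tag list, using forloop.last to suppress the trailing separator. The equivalent output, sketched in Python for illustration:

```python
tags = ["python", "debugging", "error-handling"]

# Mirrors the Liquid loop: one "#tag" entry per tag, joined with ", ",
# with no separator after the final item (what forloop.last guards against).
rendered = ", ".join(f"#{tag}" for tag in tags)
print(f"Tags: {rendered}")  # Tags: #python, #debugging, #error-handling
```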
Filters
Apply filters to transform data:
Language: {{language | capitalize}}
Code length: {{code | length}} characters
Summary: {{description | truncate: 100}}
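Conceptually, a filter is just a function applied to the value before it is inserted. A minimal Python mimic of two of the filters above (simplified, not the Liquid implementation):

```python
def capitalize(s: str) -> str:
    # Simplified mimic of Liquid's capitalize: uppercase the first character.
    return s[:1].upper() + s[1:]

def truncate(s: str, n: int) -> str:
    # Mimic of Liquid's truncate: shorten to n characters total,
    # the appended "..." counts toward that length.
    return s if len(s) <= n else s[: n - 3] + "..."

language = "python"
description = "Reviews code for best practices and potential issues"
print(f"Language: {capitalize(language)}")        # Language: Python
print(f"Summary: {truncate(description, 30)}")
```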
Organization
Directory Structure
Organize prompts in logical directories:
prompts/
prompts/
├── development/
│   ├── code-review.md
│   ├── debug-helper.md
│   └── api-docs.md
├── writing/
│   ├── blog-post.md
│   ├── email-draft.md
│   └── summary.md
└── analysis/
    ├── data-insights.md
    └── competitor-analysis.md
Naming Conventions
- Use kebab-case for filenames: code-review.md
- Make names descriptive: debug-python-errors.md, not debug.md
- Include the category in the path, not the filename
Categories and Tags
Use categories for broad groupings:
- development - Code-related prompts
- writing - Content creation
- analysis - Data and research
- productivity - Task management
Use tags for specific features:
["python", "debugging", "error-handling"]
["marketing", "email", "b2b"]
["data", "visualization", "charts"]
Best Practices
1. Write Clear Descriptions
# Good
description: Reviews Python code for PEP 8 compliance, type hints, and common bugs
# Bad
description: Code review
2. Provide Helpful Defaults
arguments:
- name: style_guide
description: Coding style guide to follow
required: false
default: PEP 8 # Sensible default
3. Use Descriptive Variable Names
# Good
Please analyze this {{source_code}} for {{security_vulnerabilities}}.
# Bad
Please analyze this {{input}} for {{stuff}}.
4. Include Examples in Descriptions
arguments:
- name: format
description: Output format (markdown, json, html)
required: false
default: markdown
5. Structure Your Prompts
Use clear sections and formatting:
# Code Review
## Overview
Please review the following {{language}} code for:
## Focus Areas
1. **Best Practices** - Follow {{style_guide}} guidelines
2. **Security** - Identify potential vulnerabilities
3. **Performance** - Suggest optimizations
4. **Maintainability** - Assess code clarity
## Code to Review
```{{code}}```
## Instructions
{% if include_suggestions %}
Please provide specific improvement suggestions.
{% endif %}
6. Test Your Prompts
Use the CLI to test prompts:
# Test with required arguments
swissarmyhammer test code-review --code "def hello(): print('hi')" --language python
# Test with defaults
swissarmyhammer test code-review --code "function hello() { console.log('hi'); }"
Common Patterns
Code Analysis
---
name: analyze-code
title: Code Analyzer
description: Analyzes code for issues and improvements
arguments:
- name: code
description: Code to analyze
required: true
- name: language
description: Programming language
required: false
default: auto-detect
---
# Code Analysis
Analyze this {{language}} code:
```{{code}}```
Provide feedback on:
- Code quality and best practices
- Potential bugs or issues
- Performance optimizations
- Readability improvements
Document Generation
---
name: api-docs
title: API Documentation Generator
description: Generates API documentation from code
arguments:
- name: code
description: API code to document
required: true
- name: format
description: Documentation format
required: false
default: markdown
---
# API Documentation
Generate {{format}} documentation for this API:
```{{code}}```
Include:
- Endpoint descriptions
- Parameter details
- Response examples
- Error codes
Text Processing
---
name: summarize
title: Text Summarizer
description: Creates concise summaries of text
arguments:
- name: text
description: Text to summarize
required: true
- name: length
description: Target summary length
required: false
default: 3 sentences
---
# Text Summary
Create a {{length}} summary of this text:
{{text}}
Focus on the key points and main ideas.
Next Steps
- Learn about YAML Front Matter in detail
- Explore Template Variables and Liquid syntax
- Check out Custom Filters for advanced transformations
- See Examples for real-world prompt templates
- Read about Prompt Organization strategies
YAML Front Matter
YAML front matter is the metadata section at the beginning of each prompt file. It defines the prompt's properties, arguments, and other configuration details.
Structure
Front matter appears between triple dashes (---) at the start of your markdown file:
---
name: my-prompt
title: My Prompt
description: What this prompt does
---
# Prompt content starts here
Required Fields
Every prompt must have these fields:
name
- Type: String
- Description: Unique identifier for the prompt
- Format: kebab-case recommended
- Example: code-review, debug-helper, api-docs
name: code-review
title
- Type: String
- Description: Human-readable name displayed in UIs
- Format: Title Case
- Example: Code Review Assistant, Debug Helper
title: Code Review Assistant
description
- Type: String
- Description: Brief explanation of what the prompt does
- Format: Sentence or short paragraph
- Example: Clear, actionable description
description: Reviews code for best practices, bugs, and potential improvements
Optional Fields
category
- Type: String
- Description: Groups related prompts together
- Common Values: development, writing, analysis, productivity
category: development
tags
- Type: Array of strings
- Description: Keywords for discovery and filtering
- Format: Lowercase, descriptive terms
tags: ["python", "debugging", "error-handling"]
arguments
- Type: Array of argument objects
- Description: Input parameters the prompt accepts
- See: Arguments section below
arguments:
- name: code
description: Code to review
required: true
author
- Type: String
- Description: Creator of the prompt
- Format: Name or organization
author: SwissArmyHammer Team
version
- Type: String
- Description: Version number for tracking changes
- Format: Semantic versioning recommended
version: 1.2.0
license
- Type: String
- Description: License for the prompt
- Common Values: MIT, Apache-2.0, GPL-3.0
license: MIT
created
- Type: Date string
- Description: When the prompt was created
- Format: ISO 8601 date
created: 2024-01-15
updated
- Type: Date string
- Description: Last modification date
- Format: ISO 8601 date
updated: 2024-03-20
keywords
- Type: Array of strings
- Description: Alternative to tags for SEO/discovery
- Note: Similar to tags but more formal
keywords: ["code quality", "static analysis", "best practices"]
Arguments
Arguments define the inputs your prompt can accept. Each argument is an object with these properties:
Argument Properties
name (required)
- Type: String
- Description: Parameter name used in template
- Format: snake_case or kebab-case
- Usage: Referenced as {{name}} in template
arguments:
- name: source_code
# Used as {{source_code}} in template
description (required)
- Type: String
- Description: What this argument is for
- Format: Clear, helpful explanation
arguments:
- name: language
description: Programming language of the code
required (optional, default: false)
- Type: Boolean
- Description: Whether this argument must be provided
- Default: false
arguments:
- name: code
description: Code to analyze
required: true # Must be provided
- name: style_guide
description: Coding style to follow
required: false # Optional
default (optional)
- Type: String
- Description: Default value if not provided
- Note: Only used when required: false
arguments:
- name: format
description: Output format
required: false
default: markdown
type_hint (optional)
- Type: String
- Description: Expected data type (documentation only)
- Common Values: string, integer, boolean, array, object
arguments:
- name: max_length
description: Maximum output length
required: false
default: 500
type_hint: integer
Argument Examples
Simple Text Input
arguments:
- name: text
description: Text to process
required: true
type_hint: string
Optional with Default
arguments:
- name: format
description: Output format (markdown, json, html)
required: false
default: markdown
type_hint: string
Boolean Flag
arguments:
- name: include_examples
description: Include code examples in output
required: false
default: true
type_hint: boolean
Multiple Arguments
arguments:
- name: code
description: Source code to review
required: true
type_hint: string
- name: language
description: Programming language
required: false
default: auto-detect
type_hint: string
- name: severity
description: Minimum issue severity to report
required: false
default: medium
type_hint: string
- name: include_suggestions
description: Include improvement suggestions
required: false
default: true
type_hint: boolean
Complete Example
Here's a comprehensive example showing all available fields:
---
name: comprehensive-code-review
title: Comprehensive Code Review Assistant
description: Performs detailed code review focusing on best practices, security, and performance
category: development
tags: ["code-review", "security", "performance", "best-practices"]
author: SwissArmyHammer Team
version: 2.1.0
license: MIT
created: 2024-01-15
updated: 2024-03-20
keywords: ["static analysis", "code quality", "security audit"]
arguments:
- name: code
description: Source code to review
required: true
type_hint: string
- name: language
description: Programming language (python, javascript, rust, etc.)
required: false
default: auto-detect
type_hint: string
- name: focus_areas
description: Specific areas to focus on (security, performance, style)
required: false
default: all
type_hint: string
- name: severity_threshold
description: Minimum severity level to report (low, medium, high)
required: false
default: medium
type_hint: string
- name: include_examples
description: Include code examples in suggestions
required: false
default: true
type_hint: boolean
- name: max_suggestions
description: Maximum number of suggestions to provide
required: false
default: 10
type_hint: integer
---
# Comprehensive Code Review
I'll perform a detailed review of your {{language}} code, focusing on {{focus_areas}}.
## Code Analysis
```{{code}}```
## Review Criteria
I'll evaluate the code for:
{% if focus_areas contains "security" or focus_areas == "all" %}
- **Security vulnerabilities** and best practices
{% endif %}
{% if focus_areas contains "performance" or focus_areas == "all" %}
- **Performance optimizations** and efficiency
{% endif %}
{% if focus_areas contains "style" or focus_areas == "all" %}
- **Code style** and formatting consistency
{% endif %}
{% if language != "auto-detect" %}
- **{{language | capitalize}}-specific** best practices and idioms
{% endif %}
## Reporting
- Minimum severity: {{severity_threshold}}
- Maximum suggestions: {{max_suggestions}}
{% if include_examples %}
- Including code examples and fixes
{% endif %}
Please provide detailed feedback with specific line references where applicable.
Validation Rules
SwissArmyHammer validates your YAML front matter:
Required Field Validation
- name, title, and description must be present
- name must be unique within the prompt library
- name must not contain spaces or special characters
Argument Validation
- Each argument must have name and description
- Argument names must be valid template variables
- Required arguments cannot have default values
- Argument names must be unique within the prompt
Type Validation
- Arrays must contain valid elements
- Dates must be in ISO format
- Booleans must be true or false
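The validation rules above can be sketched as a small checking function. This is an illustration of the rules as stated in this guide, not SwissArmyHammer's actual validator:

```python
def validate_front_matter(meta: dict) -> list[str]:
    """Check a parsed front-matter dict against the rules above.

    Illustrative sketch only; the real validation happens inside
    SwissArmyHammer when prompts are loaded.
    """
    errors = []
    # Required field validation: name, title, description must be present.
    for field in ("name", "title", "description"):
        if field not in meta:
            errors.append(f"missing required field: {field}")
    if " " in meta.get("name", ""):
        errors.append("name must not contain spaces")
    # Argument validation.
    seen = set()
    for arg in meta.get("arguments", []):
        arg_name = arg.get("name")
        if not arg_name or "description" not in arg:
            errors.append("each argument needs a name and description")
            continue
        if arg_name in seen:
            errors.append(f"duplicate argument name: {arg_name}")
        seen.add(arg_name)
        # Required arguments cannot have default values.
        if arg.get("required") and "default" in arg:
            errors.append(f"required argument {arg_name} cannot have a default")
    return errors

ok = {"name": "code-review", "title": "Code Review", "description": "Reviews code",
      "arguments": [{"name": "code", "description": "Code to review", "required": True}]}
print(validate_front_matter(ok))  # []
```

A missing name or a required argument carrying a default would each add an entry to the returned error list.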
Best Practices
1. Use Descriptive Names
# Good
name: python-code-review
title: Python Code Review Assistant
# Bad
name: review
title: Review
2. Write Clear Descriptions
# Good
description: Analyzes Python code for PEP 8 compliance, type hints, security issues, and performance optimizations
# Bad
description: Reviews code
3. Organize with Categories and Tags
category: development
tags: ["python", "pep8", "security", "performance", "code-quality"]
4. Provide Sensible Defaults
arguments:
- name: style_guide
description: Python style guide to follow
required: false
default: PEP 8 # Most common choice
5. Use Type Hints
arguments:
- name: max_issues
description: Maximum number of issues to report
required: false
default: 20
type_hint: integer # Helps users understand expected format
6. Keep Versions Updated
version: 1.2.0 # Update when you make changes
updated: 2024-03-20 # Track modification date
Common Patterns
Code Processing Prompt
---
name: code-processor
title: Code Processor
description: Processes and transforms code
category: development
arguments:
- name: code
description: Source code to process
required: true
- name: language
description: Programming language
required: false
default: auto-detect
- name: output_format
description: Desired output format
required: false
default: markdown
---
Text Analysis Prompt
---
name: text-analyzer
title: Text Analyzer
description: Analyzes text for various metrics
category: analysis
arguments:
- name: text
description: Text to analyze
required: true
- name: analysis_type
description: Type of analysis to perform
required: false
default: comprehensive
- name: include_stats
description: Include statistical analysis
required: false
default: true
type_hint: boolean
---
Document Generator
---
name: doc-generator
title: Document Generator
description: Generates documentation from code or specifications
category: documentation
arguments:
- name: source
description: Source material to document
required: true
- name: format
description: Output format
required: false
default: markdown
- name: include_examples
description: Include usage examples
required: false
default: true
type_hint: boolean
---
Troubleshooting
Common Errors
Invalid YAML Syntax
# Error: Missing quotes around string with special characters
description: This won't work: because of the colon
# Fixed: Quote strings with special characters
description: "This works: because it's quoted"
Missing Required Fields
# Error: Missing required fields
title: My Prompt
# Fixed: Include all required fields
name: my-prompt
title: My Prompt
description: What this prompt does
Invalid Argument Structure
# Error: Missing required argument properties
arguments:
- name: code
# Fixed: Include required properties
arguments:
- name: code
description: Code to process
required: true
Validation Tips
- Use a YAML validator - Many online tools can check syntax
- Test with the CLI - Use `swissarmyhammer test` to validate prompts
- Check the logs - SwissArmyHammer provides detailed error messages
- Start simple - Begin with minimal front matter and add complexity
Next Steps
- Learn about Template Variables to use your arguments
- Explore Custom Filters for advanced text processing
- See Creating Prompts for the complete workflow
- Check Examples for real-world YAML configurations
Template Variables
SwissArmyHammer uses the Liquid template engine for processing prompts. This provides powerful templating features including variables, conditionals, loops, and filters.
Basic Variable Substitution
Variables are inserted using double curly braces:
Hello {{ name }}!
Your email is {{ email }}.
For backward compatibility, variables without spaces also work:
Hello {{name}}!
Conditionals
If Statements
Use `if` statements to conditionally include content:
{% if user_type == "admin" %}
Welcome, administrator!
{% elsif user_type == "moderator" %}
Welcome, moderator!
{% else %}
Welcome, user!
{% endif %}
Unless Statements
`unless` is the opposite of `if`:
{% unless error_count == 0 %}
Warning: {{ error_count }} errors found.
{% endunless %}
Comparison Operators
- `==` - equals
- `!=` - not equals
- `>` - greater than
- `<` - less than
- `>=` - greater or equal
- `<=` - less or equal
- `contains` - string/array contains
- `and` - logical AND
- `or` - logical OR
Example:
{% if age >= 18 and country == "US" %}
You are eligible to vote.
{% endif %}
{% if tags contains "urgent" %}
🚨 This is urgent!
{% endif %}
Case Statements
For multiple conditions, use `case`:
{% case status %}
{% when "pending" %}
⏳ Waiting for approval
{% when "approved" %}
✅ Approved and ready
{% when "rejected" %}
❌ Rejected
{% else %}
❓ Unknown status
{% endcase %}
Loops
Basic For Loops
Iterate over arrays:
{% for item in items %}
- {{ item }}
{% endfor %}
Range Loops
Loop over a range of numbers:
{% for i in (1..5) %}
Step {{ i }} of 5
{% endfor %}
Loop Variables
Inside loops, you have access to special variables:
{% for item in items %}
{% if forloop.first %}First item: {% endif %}
{{ forloop.index }}. {{ item }}
{% if forloop.last %}(last item){% endif %}
{% endfor %}
Available loop variables:
- `forloop.index` - current iteration (1-based)
- `forloop.index0` - current iteration (0-based)
- `forloop.first` - true on first iteration
- `forloop.last` - true on last iteration
- `forloop.length` - total number of items
Loop Control
Use `break` and `continue` for flow control:
{% for item in items %}
{% if item == "skip" %}
{% continue %}
{% endif %}
{% if item == "stop" %}
{% break %}
{% endif %}
Processing: {{ item }}
{% endfor %}
Cycle
Alternate between values:
{% for row in data %}
<tr class="{% cycle 'odd', 'even' %}">
<td>{{ row }}</td>
</tr>
{% endfor %}
Filters
Filters modify variables using the pipe (`|`) character:
String Filters
{{ name | upcase }} # ALICE
{{ name | downcase }} # alice
{{ name | capitalize }} # Alice
{{ text | strip }} # removes whitespace
{{ text | truncate: 20 }} # truncates to 20 chars
{{ text | truncate: 20, "..." }} # custom ellipsis
{{ text | append: "!" }} # adds to end
{{ text | prepend: "Hello " }} # adds to beginning
{{ text | remove: "bad" }} # removes all occurrences
{{ text | replace: "old", "new" }} # replaces all
{{ text | split: "," }} # splits into array
Array Filters
{{ array | first }} # first element
{{ array | last }} # last element
{{ array | join: ", " }} # joins with delimiter
{{ array | sort }} # sorts array
{{ array | reverse }} # reverses array
{{ array | size }} # number of elements
{{ array | uniq }} # removes duplicates
Math Filters
{{ number | plus: 5 }} # addition
{{ number | minus: 3 }} # subtraction
{{ number | times: 2 }} # multiplication
{{ number | divided_by: 4 }} # division
{{ number | modulo: 3 }} # remainder
{{ number | ceil }} # round up
{{ number | floor }} # round down
{{ number | round }} # round to nearest
{{ number | round: 2 }} # round to 2 decimals
{{ number | abs }} # absolute value
Default Filter
Provide fallback values:
Hello {{ name | default: "Guest" }}!
Score: {{ score | default: 0 }}
Date Filters
{{ date | date: "%Y-%m-%d" }} # 2024-01-15
{{ date | date: "%B %d, %Y" }} # January 15, 2024
{{ "now" | date: "%Y" }} # current year
Advanced Features
Comments
Comments are not rendered in output:
{% comment %}
This is a comment that won't appear in the output.
Useful for documentation or temporarily disabling code.
{% endcomment %}
Raw Blocks
Prevent Liquid processing:
{% raw %}
This {{ variable }} won't be processed.
Useful for showing Liquid syntax examples.
{% endraw %}
Assign Variables
Create new variables:
{% assign full_name = first_name | append: " " | append: last_name %}
Welcome, {{ full_name }}!
{% assign item_count = items | size %}
You have {{ item_count }} items.
Capture Blocks
Capture content into a variable:
{% capture greeting %}
{% if time_of_day == "morning" %}
Good morning
{% elsif time_of_day == "evening" %}
Good evening
{% else %}
Hello
{% endif %}
{% endcapture %}
{{ greeting }}, {{ name }}!
Environment Variables
Access environment variables through the `env` object:
Current user: {{ env.USER }}
Home directory: {{ env.HOME }}
Custom setting: {{ env.MY_APP_CONFIG | default: "not set" }}
Object Access
Access nested objects and arrays:
{{ user.name }}
{{ user.address.city }}
{{ items[0] }}
{{ items[index] }}
{{ data["dynamic_key"] }}
Truthy and Falsy Values
In Liquid conditions:
- Falsy: `false`, `nil`
- Truthy: everything else (including `0`, `""`, `[]`)
{% if value %}
This shows unless value is false or nil
{% endif %}
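Because `0` and empty strings are truthy, numeric flags need explicit comparisons — a common gotcha, shown here with a hypothetical `count` variable:

```liquid
{% comment %} count is 0, yet 0 is truthy in Liquid {% endcomment %}
{% if count %}This still renders when count is 0.{% endif %}
{% if count != 0 %}Use an explicit comparison instead.{% endif %}
```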
Error Handling
When a variable is undefined:
- In backward-compatible mode: `{{ undefined }}` renders as `{{ undefined }}`
- With validation: an error is raised for missing required arguments

Use the `default` filter to handle missing values gracefully:
{{ optional_var | default: "fallback value" }}
Migration from Basic Templates
If you're migrating from basic `{{variable}}` syntax:
- Your existing templates still work - backward compatibility is maintained
- Add spaces for clarity: `{{var}}` → `{{ var }}`
- Use filters for transformation: `{{ name | upcase }}` instead of post-processing
- Add conditions for dynamic content: use `{% if %}` blocks
- Use loops for repetitive content: replace manual duplication with `{% for %}`
Migration Examples
Before: Basic Variable Substitution
Please review the {{language}} code in {{file}}.
Focus on {{focus_area}}.
After: Enhanced with Liquid Features
Please review the {{ language | capitalize }} code in {{ file }}.
{% if focus_area %}
Focus on {{ focus_area }}.
{% else %}
Perform a general code review.
{% endif %}
{% if language == "python" %}
Pay special attention to PEP 8 compliance.
{% elsif language == "javascript" %}
Check for ESLint rule violations.
{% endif %}
Before: Manual List Creation
Files to review:
- {{file1}}
- {{file2}}
- {{file3}}
After: Dynamic Lists with Loops
Files to review:
{% for file in files %}
- {{ file }}{% if forloop.last %} (final file){% endif %}
{% endfor %}
Total: {{ files | size }} files
Before: Fixed Templates
Status: {{status}}
After: Conditional Formatting
Status: {% case status %}
{% when "success" %}✅ {{ status | upcase }}
{% when "error" %}❌ {{ status | upcase }}
{% when "warning" %}⚠️ {{ status | capitalize }}
{% else %}{{ status }}
{% endcase %}
Differences from Handlebars/Mustache
If you're familiar with Handlebars or Mustache templating:
| Feature | Handlebars/Mustache | Liquid |
|---|---|---|
| Variables | `{{variable}}` | `{{ variable }}` |
| Conditionals | `{{#if}}...{{/if}}` | `{% if %}...{% endif %}` |
| Loops | `{{#each}}...{{/each}}` | `{% for %}...{% endfor %}` |
| Comments | `{{! comment }}` | `{% comment %}...{% endcomment %}` |
| Filters | Limited | Extensive built-in filters |
| Logic | Minimal | Full comparison operators |
Common Migration Patterns
- Variable with Default
  - Before: Handle missing variables in code
  - After: `{{ variable | default: "fallback" }}`
- Conditional Sections
  - Before: Generate different templates
  - After: Single template with `{% if %}` blocks
- Repeated Content
  - Before: Manual duplication
  - After: `{% for %}` loops with `forloop` variables
- String Transformation
  - Before: Transform in application code
  - After: Use Liquid filters directly
Backward Compatibility Notes
- Simple `{{variable}}` syntax continues to work
- Undefined variables are preserved as `{{ variable }}` in output
- No breaking changes to existing templates
- Gradual migration is supported - mix old and new syntax
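Mixing old and new syntax during a gradual migration might look like this (the variable names are illustrative):

```liquid
Review the {{language}} code in {{ file | default: "the attached file" }}.
{% if strict %}Apply strict linting rules.{% endif %}
```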
Examples
Dynamic Code Review
{% if language == "python" %}
Please review this Python code for PEP 8 compliance.
{% elsif language == "javascript" %}
Please review this JavaScript code for ESLint rules.
{% else %}
Please review this {{ language }} code for best practices.
{% endif %}
{% if include_security %}
Also check for security vulnerabilities.
{% endif %}
Formatted List
{% for item in tasks %}
{{ forloop.index }}. {{ item.title }}
{% if item.completed %}✅{% else %}❌{% endif %}
Priority: {{ item.priority | default: "normal" }}
{% unless item.completed %}
Due: {{ item.due_date | date: "%B %d" }}
{% endunless %}
{% endfor %}
Conditional Debugging
{% if debug_mode %}
=== Debug Information ===
Variables: {{ arguments | json }}
Environment: {{ env.NODE_ENV | default: "development" }}
{% for key in api_keys %}
{{ key }}: {{ key | truncate: 8 }}...
{% endfor %}
{% endif %}
Best Practices
- Use meaningful variable names: `{{ user_email }}` instead of `{{ ue }}`
- Provide defaults: `{{ value | default: "N/A" }}` for optional values
- Format output: Use filters to ensure consistent formatting
- Comment complex logic: Use `{% comment %}` blocks
- Test edge cases: Empty arrays, nil values, missing variables
- Keep it readable: Break complex templates into sections
Custom Filters
SwissArmyHammer includes specialized custom filters designed for prompt engineering:
Code Filters
{{ code | format_lang: "rust" }} # Format code with language
{{ code | extract_functions }} # Extract function signatures
{{ path | basename }} # Get filename from path
{{ path | dirname }} # Get directory from path
{{ text | count_lines }} # Count number of lines
{{ code | dedent }} # Remove common indentation
Text Processing Filters
{{ text | extract_urls }} # Extract URLs from text
{{ title | slugify }} # Convert to URL-friendly slug
{{ text | word_wrap: 80 }} # Wrap text at 80 characters
{{ text | indent: 2 }} # Indent all lines by 2 spaces
{{ items | bullet_list }} # Convert array to bullet list
{{ text | highlight: "keyword" }} # Highlight specific terms
Data Transformation Filters
{{ json_string | from_json }} # Parse JSON string
{{ data | to_json }} # Convert to JSON string
{{ csv_string | from_csv }} # Parse CSV string
{{ array | to_csv }} # Convert to CSV string
{{ yaml_string | from_yaml }} # Parse YAML string
{{ data | to_yaml }} # Convert to YAML string
Utility Filters
{{ text | md5 }} # Generate MD5 hash
{{ text | sha1 }} # Generate SHA1 hash
{{ text | sha256 }} # Generate SHA256 hash
{{ number | ordinal }} # Convert to ordinal (1st, 2nd, 3rd)
{{ 100 | lorem_words }} # Generate lorem ipsum words
{{ date | format_date: "%Y-%m-%d" }} # Advanced date formatting
For complete documentation of custom filters, see the Custom Filters Reference.
Limitations
- No includes: Cannot include other template files
- No custom tags: Only standard Liquid tags are supported
- Performance: Very large loops may impact performance
Further Reading
- Official Liquid Documentation
- Liquid Playground - Test templates online
- Liquid Cheat Sheet
Custom Filters Reference
SwissArmyHammer includes a comprehensive set of custom Liquid filters designed specifically for prompt engineering and content processing.
Code Filters
format_lang
Formats code with language-specific syntax highlighting hints.
{{ code | format_lang: "rust" }}
{{ code | format_lang: language_var }}
Arguments:
- `language` - Programming language identifier (e.g., "rust", "python", "javascript")
Example:
<!-- Input -->
{{ "fn main() { println!(\"Hello\"); }" | format_lang: "rust" }}
<!-- Output -->
```rust
fn main() { println!("Hello"); }
```
extract_functions
Extracts function signatures and definitions from code.
{{ code | extract_functions }}
{{ code | extract_functions: "detailed" }}
Arguments:
- `mode` (optional) - "signatures" (default) or "detailed" for full function bodies
Example:
<!-- Input -->
{{ rust_code | extract_functions }}
<!-- Output -->
- fn main()
- fn calculate(x: i32, y: i32) -> i32
- fn process_data(data: &Vec<String>) -> Result<(), Error>
basename
Extracts the filename from a file path.
{{ path | basename }}
Example:
<!-- Input -->
{{ "/usr/local/bin/swissarmyhammer" | basename }}
<!-- Output -->
swissarmyhammer
dirname
Extracts the directory path from a file path.
{{ path | dirname }}
Example:
<!-- Input -->
{{ "/usr/local/bin/swissarmyhammer" | dirname }}
<!-- Output -->
/usr/local/bin
count_lines
Counts the number of lines in text.
{{ text | count_lines }}
Example:
<!-- Input -->
{{ "line 1\nline 2\nline 3" | count_lines }}
<!-- Output -->
3
dedent
Removes common leading whitespace from all lines.
{{ code | dedent }}
Example:
<!-- Input -->
{{ " fn main() {\n println!(\"Hello\");\n }" | dedent }}
<!-- Output -->
fn main() {
println!("Hello");
}
Text Processing Filters
extract_urls
Extracts all URLs from text.
{{ text | extract_urls }}
{{ text | extract_urls: "list" }}
Arguments:
- `format` (optional) - "array" (default) or "list" for a bullet point list
Example:
<!-- Input -->
{{ "Visit https://example.com and https://github.com" | extract_urls }}
<!-- Output -->
["https://example.com", "https://github.com"]
<!-- With list format -->
{{ "Visit https://example.com and https://github.com" | extract_urls: "list" }}
<!-- Output -->
- https://example.com
- https://github.com
slugify
Converts text to a URL-friendly slug.
{{ title | slugify }}
Example:
<!-- Input -->
{{ "Advanced Code Review Helper!" | slugify }}
<!-- Output -->
advanced-code-review-helper
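The behavior can be approximated in plain Python — a sketch of the expected output, not SwissArmyHammer's implementation:

```python
import re

def slugify(text):
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and trim leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

print(slugify("Advanced Code Review Helper!"))  # advanced-code-review-helper
```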
word_wrap
Wraps text at specified column width.
{{ text | word_wrap: 80 }}
{{ text | word_wrap: width_var }}
Arguments:
- `width` - Column width for wrapping (default: 80)
Example:
<!-- Input -->
{{ "This is a very long line that should be wrapped at a specific column width to ensure readability." | word_wrap: 30 }}
<!-- Output -->
This is a very long line that
should be wrapped at a specific
column width to ensure
readability.
indent
Indents all lines by specified number of spaces.
{{ text | indent: 4 }}
{{ text | indent: spaces_var }}
Arguments:
- `spaces` - Number of spaces to indent (default: 2)
Example:
<!-- Input -->
{{ "line 1\nline 2" | indent: 4 }}
<!-- Output -->
line 1
line 2
bullet_list
Converts an array to a bullet point list.
{{ array | bullet_list }}
{{ array | bullet_list: "*" }}
Arguments:
- `bullet` (optional) - Bullet character (default: "-")
Example:
<!-- Input -->
{{ ["Item 1", "Item 2", "Item 3"] | bullet_list }}
<!-- Output -->
- Item 1
- Item 2
- Item 3
<!-- With custom bullet -->
{{ ["Item 1", "Item 2"] | bullet_list: "*" }}
<!-- Output -->
* Item 1
* Item 2
highlight
Highlights specific terms in text.
{{ text | highlight: "keyword" }}
{{ text | highlight: term_var }}
Arguments:
- `term` - Term to highlight
Example:
<!-- Input -->
{{ "This is important text with keywords" | highlight: "important" }}
<!-- Output -->
This is **important** text with keywords
Data Transformation Filters
from_json
Parses JSON string into object/array.
{{ json_string | from_json }}
Example:
<!-- Input -->
{% assign data = '{"name": "John", "age": 30}' | from_json %}
Name: {{ data.name }}
Age: {{ data.age }}
<!-- Output -->
Name: John
Age: 30
to_json
Converts object/array to JSON string.
{{ data | to_json }}
{{ data | to_json: "pretty" }}
Arguments:
- `format` (optional) - "compact" (default) or "pretty" for formatted output
Example:
<!-- Input -->
{% assign user = { "name": "John", "age": 30 } %}
{{ user | to_json: "pretty" }}
<!-- Output -->
{
"name": "John",
"age": 30
}
from_csv
Parses CSV string into array of objects.
{{ csv_string | from_csv }}
{{ csv_string | from_csv: ";" }}
Arguments:
- `delimiter` (optional) - Field delimiter (default: ",")
Example:
<!-- Input -->
{% assign data = "name,age\nJohn,30\nJane,25" | from_csv %}
{% for row in data %}
- {{ row.name }} is {{ row.age }} years old
{% endfor %}
<!-- Output -->
- John is 30 years old
- Jane is 25 years old
to_csv
Converts array of objects to CSV string.
{{ array | to_csv }}
{{ array | to_csv: ";" }}
Arguments:
- `delimiter` (optional) - Field delimiter (default: ",")
Example:
<!-- Input -->
{% assign users = [{"name": "John", "age": 30}, {"name": "Jane", "age": 25}] %}
{{ users | to_csv }}
<!-- Output -->
name,age
John,30
Jane,25
from_yaml
Parses YAML string into object/array.
{{ yaml_string | from_yaml }}
Example:
<!-- Input -->
{% assign config = "database:\n host: localhost\n port: 5432" | from_yaml %}
Host: {{ config.database.host }}
Port: {{ config.database.port }}
<!-- Output -->
Host: localhost
Port: 5432
to_yaml
Converts object/array to YAML string.
{{ data | to_yaml }}
Example:
<!-- Input -->
{% assign config = {"database": {"host": "localhost", "port": 5432}} %}
{{ config | to_yaml }}
<!-- Output -->
database:
host: localhost
port: 5432
Hash Filters
md5
Generates MD5 hash of input text.
{{ text | md5 }}
Example:
<!-- Input -->
{{ "hello world" | md5 }}
<!-- Output -->
5d41402abc4b2a76b9719d911017c592
sha1
Generates SHA1 hash of input text.
{{ text | sha1 }}
Example:
<!-- Input -->
{{ "hello world" | sha1 }}
<!-- Output -->
2aae6c35c94fcfb415dbe95f408b9ce91ee846ed
sha256
Generates SHA256 hash of input text.
{{ text | sha256 }}
Example:
<!-- Input -->
{{ "hello world" | sha256 }}
<!-- Output -->
b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
Utility Filters
ordinal
Converts number to ordinal format (1st, 2nd, 3rd, etc.).
{{ number | ordinal }}
Example:
<!-- Input -->
{{ 1 | ordinal }} item
{{ 22 | ordinal }} place
{{ 103 | ordinal }} attempt
<!-- Output -->
1st item
22nd place
103rd attempt
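Ordinal suffix logic is compact enough to sketch in Python — again an approximation of the filter's behavior, not its actual source:

```python
def ordinal(n):
    # 11th-13th are special cases; otherwise the suffix follows the last digit
    # (1st, 2nd, 3rd, everything else th).
    if 10 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

print(ordinal(1), ordinal(22), ordinal(103))  # 1st 22nd 103rd
```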
lorem_words
Generates lorem ipsum text with specified number of words.
{{ count | lorem_words }}
Arguments:
- `count` - Number of words to generate
Example:
<!-- Input -->
{{ 10 | lorem_words }}
<!-- Output -->
Lorem ipsum dolor sit amet consectetur adipiscing elit sed do
format_date
Advanced date formatting with custom format strings.
{{ date | format_date: "%Y-%m-%d %H:%M:%S" }}
{{ "now" | format_date: "%B %d, %Y" }}
Arguments:
- `format` - Date format string (uses strftime format)
Common format codes:
- `%Y` - 4-digit year (2024)
- `%m` - Month as number (01-12)
- `%d` - Day of month (01-31)
- `%H` - Hour (00-23)
- `%M` - Minute (00-59)
- `%S` - Second (00-59)
- `%B` - Full month name (January)
- `%b` - Abbreviated month (Jan)
- `%A` - Full weekday name (Monday)
- `%a` - Abbreviated weekday (Mon)
Example:
<!-- Input -->
{{ "2024-01-15T10:30:00Z" | format_date: "%B %d, %Y at %I:%M %p" }}
{{ "now" | format_date: "%A, %Y-%m-%d" }}
<!-- Output -->
January 15, 2024 at 10:30 AM
Monday, 2024-01-15
Filter Chaining
Filters can be chained together for complex transformations:
{{ code | dedent | format_lang: "python" | highlight: "def" }}
{{ user_input | strip | truncate: 100 | capitalize }}
{{ data | to_json | indent: 2 }}
{{ filename | basename | slugify | append: ".md" }}
Error Handling
Custom filters handle errors gracefully:
- Invalid input: Returns original value or empty string
- Missing arguments: Uses sensible defaults
- Type mismatches: Attempts conversion or returns original value
Performance Notes
- Hash filters (md5, sha1, sha256) are computationally expensive for large inputs
- Data transformation filters (JSON, CSV, YAML) may consume memory for large datasets
- Text processing filters are optimized for typical prompt content sizes
- Code filters use efficient parsing algorithms
Integration Examples
Code Review Prompt
# Code Review: {{ filename | basename }}
## File Information
- **Path**: {{ filepath }}
- **Lines**: {{ code | count_lines }}
- **Language**: {{ language | capitalize }}
## Code to Review
{{ code | dedent | format_lang: language }}
## Functions Found
{{ code | extract_functions | bullet_list }}
## Review Checklist
{% assign hash = code | sha256 | truncate: 8 %}
- [ ] Security review (ID: {{ hash }})
- [ ] Performance analysis
- [ ] Style compliance
Data Analysis Prompt
# Data Analysis Report
## Dataset Summary
{% assign data = csv_data | from_csv %}
- **Records**: {{ data | size }}
- **Generated**: {{ "now" | format_date: "%Y-%m-%d %H:%M" }}
## Sample Data
{% for row in data limit:3 %}
{{ forloop.index | ordinal }} record: {{ row | to_json }}
{% endfor %}
## Field Analysis
{% assign fields = data[0] | keys %}
Available fields: {{ fields | bullet_list }}
Documentation Generator
# API Documentation
## Endpoints
{% for endpoint in api_endpoints %}
### {{ endpoint.method | upcase }} {{ endpoint.path }}
{{ endpoint.description | word_wrap: 80 }}
{% if endpoint.parameters %}
**Parameters:**
{{ endpoint.parameters | to_yaml | indent: 2 }}
{% endif %}
**Example:**
```{{ endpoint.language | default: "bash" }}
{{ endpoint.example | dedent }}
```
{% endfor %}
Generated on {{ "now" | format_date: "%B %d, %Y" }}
## See Also
- [Template Variables](./template-variables.md) - Basic Liquid syntax
- [Advanced Prompts](./advanced-prompts.md) - Using filters in complex templates
- [Testing Guide](./testing-guide.md) - Testing templates with custom filters
Prompt Organization
Effective prompt organization is crucial for maintaining a scalable and manageable prompt library. This guide covers best practices for organizing your SwissArmyHammer prompts.
Directory Structure
Recommended Hierarchy
~/.swissarmyhammer/prompts/
├── development/
│   ├── languages/
│   │   ├── python/
│   │   ├── javascript/
│   │   └── rust/
│   ├── frameworks/
│   │   ├── react/
│   │   ├── django/
│   │   └── fastapi/
│   └── tools/
│       ├── git/
│       ├── docker/
│       └── ci-cd/
├── writing/
│   ├── technical/
│   ├── business/
│   └── creative/
├── data/
│   ├── analysis/
│   ├── transformation/
│   └── visualization/
├── productivity/
│   ├── planning/
│   ├── automation/
│   └── workflows/
└── _shared/
    ├── components/
    ├── templates/
    └── utilities/
Directory Purposes
- development/ - Programming and technical prompts
- writing/ - Content creation and documentation
- data/ - Data processing and analysis
- productivity/ - Task management and workflows
- _shared/ - Reusable components and utilities
Naming Conventions
File Names
Use consistent, descriptive file names:
# Good examples
code-review-python.md
api-documentation-generator.md
git-commit-message.md
database-migration-planner.md
# Avoid
review.md
doc.md
prompt1.md
temp.md
Naming Rules
- Use kebab-case for file names
- Be descriptive - Include the purpose and context
- Add type suffix when multiple variants exist
- Keep under 40 characters for readability
Prompt Names (in YAML)
# Good - matches file name pattern
name: code-review-python
# Also good - hierarchical naming
name: development/code-review/python
# Avoid - too generic
name: review
Categories and Tags
Using Categories
Categories provide broad groupings:
---
name: api-security-scanner
title: API Security Scanner
category: development
subcategory: security
---
Effective Tagging
Tags enable fine-grained discovery:
---
name: react-component-generator
tags:
- react
- javascript
- frontend
- component
- generator
- boilerplate
---
Category vs Tags
- Categories: Single, broad classification
- Tags: Multiple, specific attributes
# Category for primary classification
category: development
# Tags for detailed attributes
tags:
- python
- testing
- pytest
- unit-tests
- tdd
Modular Design
Base Templates
Create base templates in `_shared/templates/`:
<!-- _shared/templates/code-review-base.md -->
---
name: code-review-base
title: Base Code Review Template
abstract: true # Indicates this is a template
---
# Code Review
## Overview
Review the following code for:
- Best practices
- Potential bugs
- Performance issues
- Security concerns
## Code
```{{language}}
{{code}}
```
## Analysis
{{analysis_content}}
Extending Base Templates
<!-- development/languages/python/code-review-python.md -->
---
name: code-review-python
title: Python Code Review
extends: code-review-base
---
{% capture analysis_content %}
### Python-Specific Checks
- PEP 8 compliance
- Type hints usage
- Pythonic idioms
- Import organization
{% endcapture %}
{% include "code-review-base" %}
Shared Components
Store reusable components in `_shared/components/`:
<!-- _shared/components/security-checks.md -->
---
name: security-checks-component
component: true
---
### Security Analysis
- Input validation
- SQL injection risks
- XSS vulnerabilities
- Authentication flaws
- Data exposure
Use in prompts:
---
name: web-app-review
---
# Web Application Review
{{code}}
{% include "_shared/components/security-checks.md" %}
## Additional Checks
...
Versioning
Version in Metadata
Track prompt versions:
---
name: api-generator
version: 2.1.0
updated: 2024-03-20
changelog:
- 2.1.0: Added GraphQL support
- 2.0.0: Breaking change - new argument structure
- 1.0.0: Initial release
---
Version Directories
For major versions with breaking changes:
prompts/
└── development/
    └── api-generator/
        ├── v1/
        │   └── api-generator.md
        ├── v2/
        │   └── api-generator.md
        └── latest -> v2/api-generator.md
Migration Guides
Document version changes:
<!-- development/api-generator/MIGRATION.md -->
# API Generator Migration Guide
## v1 to v2
### Breaking Changes
- `endpoint_list` argument renamed to `endpoints`
- `auth_method` now requires specific values
### Migration Steps
1. Update argument names in your scripts
2. Validate auth_method values
3. Test with new version
Collections
Prompt Collections
Group related prompts:
<!-- collections/fullstack-development.md -->
---
name: fullstack-collection
title: Full-Stack Development Collection
type: collection
---
# Full-Stack Development Prompts
## Frontend
- `frontend/react-component` - React component generator
- `frontend/vue-template` - Vue.js templates
- `frontend/css-optimizer` - CSS optimization
## Backend
- `backend/api-design` - API design assistant
- `backend/database-schema` - Schema designer
- `backend/auth-implementation` - Authentication setup
## DevOps
- `devops/docker-config` - Docker configuration
- `devops/ci-pipeline` - CI/CD pipeline setup
- `devops/deployment-guide` - Deployment strategies
Collection Metadata
---
name: data-science-toolkit
type: collection
prompts:
- data/eda-assistant
- data/feature-engineering
- data/model-evaluation
- data/visualization-guide
dependencies:
- python
- pandas
- scikit-learn
---
Search and Discovery
Metadata for Search
Optimize prompts for discovery:
---
name: code-documenter
title: Intelligent Code Documentation Generator
description: |
Generates comprehensive documentation for code including:
- Function/method documentation
- Class documentation
- Module overview
- Usage examples
- API references
keywords:
- documentation
- docstring
- comments
- api docs
- code docs
- jsdoc
- sphinx
- rustdoc
search_terms:
- "generate documentation"
- "add comments to code"
- "create api docs"
- "document functions"
---
Aliases
Support multiple names:
---
name: git-commit-message
aliases:
- commit-message
- git-message
- commit-generator
---
Related Prompts
Link related prompts:
---
name: code-review-security
related:
- code-review-general
- security-audit
- vulnerability-scanner
- penetration-test-guide
---
Team Collaboration
Shared Conventions
Document team conventions in `CONVENTIONS.md`:
# Prompt Conventions
## Naming
- Use `project-` prefix for project-specific prompts
- Use `team-` prefix for team-wide prompts
- Use `personal-` prefix for individual prompts
## Categories
- `project` - Project-specific
- `team` - Team standards
- `experimental` - Under development
## Required Metadata
All prompts must include:
- name
- title
- description
- author
- created date
- category
Ownership
Track prompt ownership:
---
name: deployment-checklist
author: jane.doe@company.com
team: platform-engineering
maintainers:
- jane.doe@company.com
- john.smith@company.com
review_required: true
last_reviewed: 2024-03-15
---
Review Process
Implement prompt review:
---
name: api-contract-generator
status: draft # draft, review, approved, deprecated
reviewers:
- senior-dev-team
approved_by: tech-lead@company.com
approval_date: 2024-03-10
---
Import/Export Strategies
Partial Exports
Export specific categories:
# Export only development prompts
swissarmyhammer export dev-prompts.tar.gz --filter "category:development"
# Export by tags
swissarmyhammer export python-prompts.tar.gz --filter "tag:python"
# Export by date
swissarmyhammer export recent-prompts.tar.gz --filter "updated:>2024-01-01"
Collection Bundles
Create installable bundles:
# bundle.yaml
name: web-development-bundle
version: 1.0.0
description: Complete web development prompt collection
prompts:
include:
- category: frontend
- category: backend
- tag: web
exclude:
- tag: experimental
dependencies:
- swissarmyhammer: ">=0.1.0"
install_to: ~/.swissarmyhammer/prompts/bundles/web-dev/
Sync Strategies
Keep prompts synchronized:
#!/bin/bash
# sync-prompts.sh
# Backup local changes
swissarmyhammer export local-backup-$(date +%Y%m%d).tar.gz
# Pull team prompts
git pull origin main
# Import team updates
swissarmyhammer import team-prompts.tar.gz --merge
# Export for distribution
swissarmyhammer export team-bundle.tar.gz --filter "team:approved"
Best Practices
1. Start Simple
Begin with basic organization:
prompts/
├── work/
├── personal/
└── learning/
Then evolve as needed.
2. Use Meaningful Hierarchies
# Good - clear hierarchy
development/testing/unit-test-generator.md
development/testing/integration-test-builder.md
# Avoid - flat structure
unit-test-generator.md
integration-test-builder.md
3. Document Your System
Create `prompts/README.md`:
# Prompt Library Organization
## Structure
- `development/` - Programming prompts
- `data/` - Data analysis prompts
- `writing/` - Content creation
- `_shared/` - Reusable components
## Naming Convention
- Files: `purpose-context-type.md`
- Prompts: Match file names
## How to Contribute
1. Choose appropriate category
2. Follow naming conventions
3. Include all required metadata
4. Test before committing
4. Regular Maintenance
# Find unused prompts
swissarmyhammer list --unused --days 90
# Find duplicates
swissarmyhammer list --duplicates
# Validate all prompts
swissarmyhammer doctor --check prompts
5. Progressive Enhancement
Start with basic prompts and enhance:
# Version 1 - Basic
name: code-review
description: Reviews code
# Version 2 - Enhanced
name: code-review
description: Reviews code for quality, security, and performance
category: development
tags: [review, quality, security]
version: 2.0.0
Examples
Enterprise Setup
company-prompts/
├── departments/
│   ├── engineering/
│   │   ├── standards/
│   │   ├── templates/
│   │   └── tools/
│   ├── product/
│   │   ├── specs/
│   │   ├── research/
│   │   └── documentation/
│   └── data-science/
│       ├── analysis/
│       ├── models/
│       └── reporting/
├── projects/
│   ├── project-alpha/
│   ├── project-beta/
│   └── _archived/
├── shared/
│   ├── components/
│   ├── templates/
│   └── utilities/
└── personal/
    └── [username]/
Open Source Project
oss-prompts/
├── contribution/
│   ├── issue-templates/
│   ├── pr-templates/
│   └── code-review/
├── documentation/
│   ├── api-docs/
│   ├── user-guide/
│   └── examples/
├── maintenance/
│   ├── release/
│   ├── changelog/
│   └── security/
└── community/
    ├── support/
    └── onboarding/
Automation
Auto-Organization Script
#!/usr/bin/env python3
# organize-prompts.py
import sys
import yaml
from pathlib import Path

def organize_prompts(source_dir, target_dir):
    """Auto-organize prompts based on front-matter metadata."""
    for prompt_file in Path(source_dir).glob("**/*.md"):
        with open(prompt_file) as f:
            content = f.read()
        # Skip files without front matter
        if not content.startswith("---"):
            continue
        # Extract front matter
        _, fm, _ = content.split("---", 2)
        metadata = yaml.safe_load(fm) or {}
        # Determine target path
        category = metadata.get("category", "uncategorized")
        subcategory = metadata.get("subcategory", "")
        target_path = Path(target_dir) / category
        if subcategory:
            target_path = target_path / subcategory
        # Move file
        target_path.mkdir(parents=True, exist_ok=True)
        target_file = target_path / prompt_file.name
        prompt_file.rename(target_file)
        print(f"Moved {prompt_file} -> {target_file}")

if __name__ == "__main__":
    organize_prompts(sys.argv[1], sys.argv[2])
Validation Script
#!/bin/bash
# validate-organization.sh
shopt -s globstar nullglob
echo "Validating prompt organization..."
# Check for prompts without categories
echo "Prompts without categories:"
grep -L "category:" prompts/**/*.md
# Check for duplicate names
echo "Duplicate prompt names:"
swissarmyhammer list --format json | jq -r '.[] | .name' | sort | uniq -d
# Check naming conventions (lowercase letters, digits, and hyphens only)
echo "Files not following naming convention:"
find prompts -name "*.md" | grep -vE '/[a-z0-9-]+\.md$'
Next Steps
- See Creating Prompts for prompt creation guidelines
- Learn about Advanced Prompts for complex scenarios
- Explore Examples for organization patterns
- Read Configuration for system-wide settings
Claude Code Integration
SwissArmyHammer integrates seamlessly with Claude Code through the Model Context Protocol (MCP). This guide shows you how to set up and use SwissArmyHammer with Claude Code.
What is MCP?
The Model Context Protocol allows AI assistants like Claude to access external tools and data sources. SwissArmyHammer acts as an MCP server, providing Claude Code with access to your prompt library.
Installation
1. Install SwissArmyHammer
See the Installation Guide for detailed instructions. The quickest method:
# Install using the install script
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
2. Verify Installation
swissarmyhammer --version
3. Test the MCP Server
swissarmyhammer serve --help
Configuration
Add to Claude Code
Configure Claude Code to use SwissArmyHammer as an MCP server:
claude mcp add --scope user swissarmyhammer swissarmyhammer serve
This command:
- Adds `swissarmyhammer` as the MCP server name
- Uses `swissarmyhammer serve` as the command to start the server
- Sets user scope (available for your user account)
Alternative Configuration Methods
Manual Configuration
If you prefer to configure manually, add this to your Claude Code MCP configuration:
{
"mcpServers": {
"swissarmyhammer": {
"command": "swissarmyhammer",
"args": ["serve"]
}
}
}
Project-Specific Configuration
For project-specific prompts, create a local configuration:
# In your project directory
claude mcp add --scope project swissarmyhammer_local swissarmyhammer serve --prompts ./prompts
Verification
Check MCP Configuration
List your configured MCP servers:
claude mcp list
You should see `swissarmyhammer` in the output.
Test the Connection
Start Claude Code and verify SwissArmyHammer is connected:
- Open Claude Code
- Look for SwissArmyHammer prompts in the available tools
- Try using a built-in prompt like `help` or `plan`
Debug Connection Issues
If SwissArmyHammer doesn't appear in Claude Code:
# Check if the server starts correctly
swissarmyhammer serve --debug
# Verify prompts are loaded
swissarmyhammer list
# Check configuration
claude mcp list
Usage in Claude Code
Available Features
Once configured, SwissArmyHammer provides these features in Claude Code:
1. Prompt Library Access
All your prompts become available as tools in Claude Code:
- Built-in prompts (code review, debugging, documentation)
- User prompts (in `~/.swissarmyhammer/prompts/`)
- Local prompts (in the current project directory)
2. Dynamic Arguments
Prompts with arguments become interactive forms in Claude Code:
---
name: code-review
arguments:
- name: code
description: Code to review
required: true
- name: language
description: Programming language
required: false
default: auto-detect
---
3. Live Reloading
Changes to prompt files are automatically detected and reloaded.
Using Prompts
Basic Usage
- Select a Prompt: Choose from available SwissArmyHammer prompts
- Fill Arguments: Provide required and optional parameters
- Execute: Claude runs the prompt with your arguments
Example Workflow
1. Code Review:
   - Select the `code-review` prompt
   - Paste your code in the `code` field
   - Set `language` if needed
   - Execute to get detailed code analysis
2. Debug Helper:
   - Select the `debug` prompt
   - Describe your error in the `error` field
   - Get step-by-step debugging guidance
3. Documentation:
   - Select the `docs` prompt
   - Provide code or specifications
   - Generate comprehensive documentation
Advanced Usage
Prompt Chaining
Use multiple prompts in sequence:
1. Use `analyze-code` to understand the codebase
2. Use `plan` to create implementation strategy
3. Use `code-review` on the new code
4. Use `docs` to generate documentation
Custom Workflows
Create project-specific prompt workflows:
# Project prompts in ./prompts/
## development/
- project-setup.md - Initialize new features
- code-standards.md - Apply project coding standards
- deployment.md - Deploy to staging/production
## documentation/
- api-docs.md - Generate API documentation
- user-guide.md - Create user-facing documentation
- changelog.md - Generate release notes
Configuration Options
Server Configuration
The `swissarmyhammer serve` command accepts several options:
swissarmyhammer serve [OPTIONS]
Common Options
- `--port <PORT>` - Port for MCP communication (default: auto)
- `--host <HOST>` - Host to bind to (default: localhost)
- `--prompts <DIR>` - Additional prompt directories
- `--builtin <BOOL>` - Include built-in prompts (default: true)
- `--watch <BOOL>` - Enable file watching (default: true)
- `--debug` - Enable debug logging
Examples
# Basic server
swissarmyhammer serve
# Custom prompt directory
swissarmyhammer serve --prompts /path/to/prompts
# Multiple prompt directories
swissarmyhammer serve --prompts ./prompts --prompts ~/.custom-prompts
# Debug mode
swissarmyhammer serve --debug
# Disable built-in prompts
swissarmyhammer serve --builtin false
Claude Code Configuration
Server Arguments
Pass arguments to the SwissArmyHammer server:
# Add with custom options
claude mcp add swissarmyhammer_custom swissarmyhammer serve --prompts ./project-prompts --debug
Environment Variables
Configure through environment variables:
export SWISSARMYHAMMER_PROMPTS_DIR=/path/to/prompts
export SWISSARMYHAMMER_DEBUG=true
claude mcp add swissarmyhammer swissarmyhammer serve
Prompt Organization
Directory Structure
Organize prompts for easy discovery in Claude Code:
~/.swissarmyhammer/prompts/
├── development/
│   ├── code-review.md
│   ├── debug-helper.md
│   ├── refactor.md
│   └── testing.md
├── writing/
│   ├── blog-post.md
│   ├── documentation.md
│   └── email.md
├── analysis/
│   ├── data-insights.md
│   └── research.md
└── productivity/
    ├── task-planning.md
    └── meeting-notes.md
Naming Conventions
Use clear, descriptive names that work well in Claude Code:
# Good - Clear and specific
code-review-python.md
debug-javascript-async.md
documentation-api.md
# Bad - Too generic
review.md
debug.md
docs.md
Categories and Tags
Use categories and tags for better organization in Claude Code:
---
name: python-code-review
title: Python Code Review
description: Reviews Python code for PEP 8, security, and performance
category: development
tags: ["python", "code-review", "pep8", "security"]
---
Troubleshooting
Common Issues
SwissArmyHammer Not Available
Symptoms: SwissArmyHammer prompts don't appear in Claude Code
Solutions:
- Verify installation: `swissarmyhammer --version`
- Check MCP configuration: `claude mcp list`
- Test the server manually: `swissarmyhammer serve --debug`
- Restart Claude Code
Connection Errors
Symptoms: Error messages about MCP connection
Solutions:
- Check if the port is available: `swissarmyhammer serve --port 8080`
- Verify permissions: run with `--debug` to see detailed logs
- Check firewall settings
- Try a different host: `--host 127.0.0.1`
Prompts Not Loading
Symptoms: Some prompts missing or outdated
Solutions:
- Check prompt syntax: `swissarmyhammer test <prompt-name>`
- Verify file permissions in prompt directories
- Check for YAML syntax errors
- Restart the MCP server by restarting Claude Code
Performance Issues
Symptoms: Slow prompt loading or execution
Solutions:
- Reduce prompt directory size
- Disable file watching: `--watch false`
- Use specific prompt directories: `--prompts ./specific-dir`
- Check system resources
Debug Mode
Enable debug mode for detailed troubleshooting:
swissarmyhammer serve --debug
Debug mode provides:
- Detailed logging of MCP communication
- Prompt loading information
- Error stack traces
- Performance metrics
Logs and Diagnostics
Server Logs
SwissArmyHammer logs to standard output:
# Save logs to file
swissarmyhammer serve --debug > swissarmyhammer.log 2>&1
Claude Code Logs
Check Claude Code logs for MCP-related issues:
# Location varies by platform
# macOS: ~/Library/Logs/Claude Code/
# Linux: ~/.local/share/claude-code/logs/
# Windows: %APPDATA%/Claude Code/logs/
Health Check
Use the doctor command to check configuration:
swissarmyhammer doctor
This checks:
- Installation status
- Configuration validity
- Prompt directory accessibility
- MCP server functionality
Best Practices
1. Organize Prompts Logically
Structure prompts by workflow rather than just topic:
prompts/
├── workflows/
│   ├── code-review-workflow.md
│   ├── feature-development.md
│   └── bug-fixing.md
└── utilities/
    ├── format-code.md
    ├── generate-tests.md
    └── extract-docs.md
2. Use Descriptive Metadata
Make prompts discoverable with good metadata:
---
name: comprehensive-code-review
title: Comprehensive Code Review
description: Deep analysis of code for security, performance, and maintainability
category: development
tags: ["security", "performance", "maintainability", "best-practices"]
keywords: ["static analysis", "code quality", "peer review"]
---
3. Test Prompts Regularly
Validate prompts before using in Claude Code:
# Test basic functionality
swissarmyhammer test code-review --code "def hello(): print('hi')"
# Test all prompts
swissarmyhammer test --all
4. Use Project-Specific Configurations
Create project-specific prompt collections:
# Per-project MCP server
cd my-project
claude mcp add project_prompts swissarmyhammer serve --prompts ./prompts
5. Keep Prompts Updated
Maintain prompt quality:
---
name: my-prompt
version: 1.2.0
updated: 2024-03-20 # Track changes
---
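A small script can act on that `updated` field. This is an illustrative sketch (not a built-in command), assuming the front matter has already been parsed into dicts:

```python
# Illustrative sketch: flag prompts whose front-matter `updated` date is
# missing or older than a cutoff, so they can be scheduled for review.
from datetime import date, timedelta

def stale_prompts(metadata_by_file, max_age_days=90, today=None):
    """Return files whose `updated` date is missing or older than max_age_days."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    stale = []
    for path, meta in metadata_by_file.items():
        updated = meta.get("updated")
        if updated is None or date.fromisoformat(str(updated)) < cutoff:
            stale.append(path)
    return stale

library = {
    "my-prompt.md": {"version": "1.2.0", "updated": "2024-03-20"},
    "old-prompt.md": {"version": "0.1.0", "updated": "2023-01-01"},
    "undated.md": {"version": "1.0.0"},
}
print(stale_prompts(library, today=date(2024, 4, 1)))
# ['old-prompt.md', 'undated.md']
```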
Examples
Complete Setup Example
Here's a complete example of setting up SwissArmyHammer for a development project:
# 1. Install SwissArmyHammer
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
# 2. Create project prompts
mkdir -p ./prompts/development
cat > ./prompts/development/project-review.md << 'EOF'
---
name: project-code-review
title: Project Code Review
description: Reviews code according to our project standards
category: development
arguments:
- name: code
description: Code to review
required: true
- name: component
description: Which component this code belongs to
required: false
default: general
---
# Project Code Review
Please review this {{component}} code according to our project standards:
```{{code}}```
Check for:
- Adherence to our coding standards
- Security best practices
- Performance considerations
- Documentation completeness
- Test coverage
Provide specific, actionable feedback.
EOF
# 3. Configure Claude Code
claude mcp add project_sah swissarmyhammer serve --prompts ./prompts
# 4. Test the setup
swissarmyhammer test project-code-review --code "print('hello')" --component "utility"
# 5. Start using in Claude Code
echo "Setup complete! Restart Claude Code to use your prompts."
Next Steps
- Explore Built-in Prompts to see what's available
- Learn Creating Prompts to build custom prompts
- Read about Prompt Organization strategies
- Check the CLI Reference for all available commands
- See Troubleshooting for additional help
Command Line Interface
SwissArmyHammer provides a comprehensive command-line interface for managing prompts, running the MCP server, and integrating with your development workflow.
Installation
# Install from Git repository (requires Rust)
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
# Ensure ~/.cargo/bin is in your PATH
export PATH="$HOME/.cargo/bin:$PATH"
Basic Usage
swissarmyhammer [COMMAND] [OPTIONS]
Global Options
- `--help, -h` - Display help information
- `--version, -V` - Display version information
Commands Overview
| Command | Description |
|---------|-------------|
| `serve` | Run as MCP server for Claude Code integration |
| `search` | Search and discover prompts with powerful filtering |
| `test` | Interactively test prompts with arguments |
| `doctor` | Diagnose configuration and setup issues |
| `completion` | Generate shell completion scripts |
Quick Examples
Start MCP Server
# Run as MCP server (for Claude Code)
swissarmyhammer serve
Search for Prompts
# Search for code-related prompts
swissarmyhammer search code
# Search with regex in descriptions
swissarmyhammer search --regex "test.*unit" --in description
Test a Prompt
# Interactively test a prompt
swissarmyhammer test code-review
# Test with predefined arguments
swissarmyhammer test code-review --arg code="fn main() { println!(\"Hello\"); }"
Check Setup
# Diagnose any configuration issues
swissarmyhammer doctor
Generate Shell Completions
# Generate Bash completions
swissarmyhammer completion bash > ~/.bash_completion.d/swissarmyhammer
# Generate Zsh completions
swissarmyhammer completion zsh > ~/.zfunc/_swissarmyhammer
Exit Codes
- `0` - Success
- `1` - General error
- `2` - Command line usage error
- `3` - Configuration error
- `4` - Prompt not found
- `5` - Template rendering error
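When wrapping the CLI in automation, these codes can be mapped back to their documented meanings. A minimal sketch; the `subprocess` call in the comment is illustrative only:

```python
# Sketch for scripting around the CLI: translate the documented exit codes
# into human-readable messages when wrapping swissarmyhammer in automation.
EXIT_CODES = {
    0: "Success",
    1: "General error",
    2: "Command line usage error",
    3: "Configuration error",
    4: "Prompt not found",
    5: "Template rendering error",
}

def describe_exit(code):
    """Translate an exit status into its documented meaning."""
    return EXIT_CODES.get(code, f"Unknown exit code {code}")

# e.g. after `subprocess.run(["swissarmyhammer", "test", "missing"])` returns 4:
print(describe_exit(4))  # Prompt not found
```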
Configuration
SwissArmyHammer looks for prompts in these directories (in order):
- Built-in prompts (embedded in the binary)
- User prompts: `~/.swissarmyhammer/prompts/`
- Local prompts: `./prompts/` (current directory)
For detailed command documentation, see the individual command pages linked in the table above.
serve Command
The `serve` command starts SwissArmyHammer as a Model Context Protocol (MCP) server, making your prompts available to Claude Code and other MCP clients.
Usage
swissarmyhammer serve [OPTIONS]
Overview
The serve command:
- Starts an MCP server that provides access to your prompt library
- Loads prompts from various directories (built-in, user, local)
- Watches for file changes and automatically reloads prompts
- Provides real-time access to prompts for Claude Code integration
Options
--port <PORT>
- Description: Port number for MCP communication
- Default: Automatically assigned by the system
- Example: `--port 8080`
swissarmyhammer serve --port 8080
--host <HOST>
- Description: Host address to bind the server to
- Default: `localhost`
- Example: `--host 127.0.0.1`
swissarmyhammer serve --host 127.0.0.1
--prompts <DIRECTORY>
- Description: Additional directories to load prompts from
- Default: Standard locations (`~/.swissarmyhammer/prompts`, `./prompts`)
- Repeatable: Can be used multiple times
- Example: `--prompts ./custom-prompts`
# Single custom directory
swissarmyhammer serve --prompts ./project-prompts
# Multiple directories
swissarmyhammer serve --prompts ./prompts --prompts ~/.custom-prompts
--builtin <BOOLEAN>
- Description: Include built-in prompts in the library
- Default: `true`
- Values: `true`, `false`
- Example: `--builtin false`
# Disable built-in prompts
swissarmyhammer serve --builtin false
# Explicitly enable built-in prompts
swissarmyhammer serve --builtin true
--watch <BOOLEAN>
- Description: Enable file watching for automatic prompt reloading
- Default: `true`
- Values: `true`, `false`
- Example: `--watch false`
# Disable file watching (for performance)
swissarmyhammer serve --watch false
--debug
- Description: Enable debug logging for troubleshooting
- Default: Disabled
- Output: Detailed logs to stdout
swissarmyhammer serve --debug
--config <FILE>
- Description: Path to configuration file
- Default: `~/.swissarmyhammer/config.toml`
- Example: `--config ./custom-config.toml`
swissarmyhammer serve --config ./project-config.toml
Examples
Basic Server
Start a basic MCP server with default settings:
swissarmyhammer serve
This loads:
- Built-in prompts
- User prompts from `~/.swissarmyhammer/prompts/`
- Local prompts from `./prompts/` (if it exists)
- File watching enabled
Development Server
For development with debug logging:
swissarmyhammer serve --debug
Output includes:
- Prompt loading details
- MCP protocol messages
- File watching events
- Error stack traces
Custom Prompt Directory
Serve prompts from a specific directory:
swissarmyhammer serve --prompts /path/to/my/prompts
Multiple Directories
Load prompts from multiple locations:
swissarmyhammer serve \
--prompts ./project-prompts \
--prompts ~/.shared-prompts \
--prompts /team/common-prompts
Project-Only Prompts
Serve only local project prompts (no built-in or user prompts):
swissarmyhammer serve \
--prompts ./prompts \
--builtin false
Performance-Optimized
For large prompt collections, disable file watching:
swissarmyhammer serve \
--watch false \
--prompts ./large-prompt-collection
Custom Port and Host
Specify network settings:
swissarmyhammer serve \
--host 0.0.0.0 \
--port 9000
Prompt Loading Order
SwissArmyHammer loads prompts in this order:
1. Built-in prompts (if `--builtin true`)
   - Located in the binary
   - Categories: development, writing, analysis, etc.
2. User prompts (always loaded)
   - Location: `~/.swissarmyhammer/prompts/`
   - Your personal prompt library
3. Custom directories (from `--prompts` flags)
   - Processed in the order specified
   - Can override earlier prompts with the same name
4. Local prompts (always checked)
   - Location: `./prompts/` in the current directory
   - Project-specific prompts
Prompt Override Behavior
When prompts have the same name:
- Later sources override earlier ones
- Local prompts have highest priority
- Built-in prompts have lowest priority
Example hierarchy:
./prompts/code-review.md (highest priority)
~/.custom/code-review.md (from --prompts ~/.custom)
~/.swissarmyhammer/prompts/code-review.md (user prompts)
built-in:code-review (lowest priority)
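The override rule can be sketched as a dictionary merge across ordered sources. This illustrates the documented behavior only; it is not the server's actual code:

```python
# Sketch of the documented override rule: sources are processed from lowest
# to highest priority, and a later source replaces an earlier prompt with
# the same name.

def resolve(sources):
    """sources: list of (source_name, {prompt_name: path}), lowest priority first."""
    resolved = {}
    for source, prompts in sources:
        for name, path in prompts.items():
            resolved[name] = (source, path)  # later (higher-priority) sources win
    return resolved

sources = [
    ("builtin", {"code-review": "built-in:code-review"}),
    ("user", {"code-review": "~/.swissarmyhammer/prompts/code-review.md"}),
    ("custom", {"code-review": "~/.custom/code-review.md"}),
    ("local", {"code-review": "./prompts/code-review.md"}),
]
print(resolve(sources)["code-review"])  # ('local', './prompts/code-review.md')
```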
File Watching
When file watching is enabled (`--watch true`), the server automatically:
Detects Changes
- New prompt files added
- Existing prompt files modified
- Prompt files deleted
- Directory structure changes
Reloads Prompts
- Parses updated files
- Validates YAML front matter
- Updates the prompt library
- Notifies connected MCP clients
Handles Errors
- Invalid YAML syntax
- Missing required fields
- Template compilation errors
- Logs errors without stopping the server
Performance Considerations
File watching uses system resources:
- Memory: Stores file metadata
- CPU: Processes file system events
- Disk I/O: Reads modified files
For large prompt collections (1000+ files), consider:
# Disable watching for better performance
swissarmyhammer serve --watch false
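For intuition, change detection can be approximated with a modification-time snapshot. The real server uses native file-system events; this polling sketch is only illustrative:

```python
# Illustrative mtime-based change detection: the kind of comparison a file
# watcher performs. A real watcher subscribes to OS events instead of polling.
from pathlib import Path

def snapshot(directory):
    """Map each .md file under the directory to its last-modified time."""
    return {p: p.stat().st_mtime for p in Path(directory).glob("**/*.md")}

def changed_files(before, after):
    """Files that were added or modified between two snapshots."""
    return [p for p, mtime in after.items() if before.get(p) != mtime]
```

A reload loop would take a snapshot, sleep, take another, and re-parse only the paths `changed_files` reports.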
MCP Protocol Details
Server Capabilities
SwissArmyHammer advertises these MCP capabilities:
{
"capabilities": {
"prompts": {
"listChanged": true
},
"tools": {
"listChanged": false
}
}
}
Prompt Exposure
Each prompt becomes an MCP prompt with:
- Name: From the prompt's `name` field
- Description: From the prompt's `description` field
- Arguments: From the prompt's `arguments` array
Example MCP Prompt
A SwissArmyHammer prompt:
---
name: code-review
title: Code Review Assistant
description: Reviews code for best practices and issues
arguments:
- name: code
description: Code to review
required: true
- name: language
description: Programming language
required: false
default: auto-detect
---
Becomes this MCP prompt:
{
"name": "code-review",
"description": "Reviews code for best practices and issues",
"arguments": [
{
"name": "code",
"description": "Code to review",
"required": true
},
{
"name": "language",
"description": "Programming language",
"required": false
}
]
}
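The mapping above can be sketched as a small function. This mirrors the documented JSON shape (note that argument `default` values are not part of the MCP structure), not the server's internals:

```python
# Sketch of the front-matter -> MCP prompt mapping shown above; the server's
# actual implementation differs, but the output shape matches the docs.

def to_mcp_prompt(front_matter):
    """Build the MCP prompt structure from parsed YAML front matter."""
    return {
        "name": front_matter["name"],
        "description": front_matter.get("description", ""),
        "arguments": [
            {
                "name": arg["name"],
                "description": arg.get("description", ""),
                "required": arg.get("required", False),
            }
            for arg in front_matter.get("arguments", [])
        ],
    }

fm = {
    "name": "code-review",
    "title": "Code Review Assistant",
    "description": "Reviews code for best practices and issues",
    "arguments": [
        {"name": "code", "description": "Code to review", "required": True},
        {"name": "language", "description": "Programming language",
         "required": False, "default": "auto-detect"},
    ],
}
mcp = to_mcp_prompt(fm)
print(mcp["arguments"][0])
# {'name': 'code', 'description': 'Code to review', 'required': True}
```

Note how `title` and the `default` value are dropped: they are used elsewhere (display and rendering) rather than in the MCP prompt listing.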
Integration with Claude Code
Configuration
Add SwissArmyHammer to Claude Code's MCP configuration:
claude mcp add swissarmyhammer swissarmyhammer serve
Custom Configuration
Add with specific options:
claude mcp add project_sah swissarmyhammer serve --prompts ./project-prompts --debug
Multiple Servers
Run different SwissArmyHammer instances:
# Global prompts
claude mcp add sah_global swissarmyhammer serve
# Project-specific prompts
claude mcp add sah_project swissarmyhammer serve --prompts ./prompts --builtin false
Logging and Output
Standard Output
Normal operation logs:
2024-03-20T10:30:00Z INFO SwissArmyHammer MCP Server starting
2024-03-20T10:30:00Z INFO Loaded 25 prompts from 3 directories
2024-03-20T10:30:00Z INFO Server listening on localhost:8080
2024-03-20T10:30:00Z INFO MCP client connected
Debug Output
With the `--debug` flag:
2024-03-20T10:30:00Z DEBUG Loading prompts from: ~/.swissarmyhammer/prompts
2024-03-20T10:30:00Z DEBUG Found prompt file: code-review.md
2024-03-20T10:30:00Z DEBUG Parsed prompt: code-review (Code Review Assistant)
2024-03-20T10:30:00Z DEBUG MCP request: prompts/list
2024-03-20T10:30:00Z DEBUG MCP response: 25 prompts returned
Error Handling
The server continues running even with errors:
2024-03-20T10:30:00Z ERROR Failed to parse prompt: invalid-prompt.md
2024-03-20T10:30:00Z ERROR YAML error: missing required field 'description'
2024-03-20T10:30:00Z INFO Continuing with 24 valid prompts
Troubleshooting
Server Won't Start
Check port availability:
# Try a specific port
swissarmyhammer serve --port 8080
# Check if port is in use
lsof -i :8080 # macOS/Linux
netstat -an | findstr 8080 # Windows
Check permissions:
# Run with debug to see detailed errors
swissarmyhammer serve --debug
Prompts Not Loading
Verify directories exist:
# Check default directories
ls -la ~/.swissarmyhammer/prompts
ls -la ./prompts
# Check custom directories
ls -la /path/to/custom/prompts
Validate prompt syntax:
# Test individual prompts
swissarmyhammer test prompt-name
# Validate all prompts
swissarmyhammer doctor
Performance Issues
Large prompt collections:
# Disable file watching
swissarmyhammer serve --watch false
# Limit to specific directories
swissarmyhammer serve --prompts ./essential-prompts --builtin false
Memory usage:
# Monitor memory usage
top -p $(pgrep swissarmyhammer) # Linux
top | grep swissarmyhammer # macOS
Connection Issues
MCP client can't connect:
# Check server is running
ps aux | grep swissarmyhammer
# Test with different host/port
swissarmyhammer serve --host 127.0.0.1 --port 8080
# Check firewall settings
Debug MCP communication:
# Enable debug logging
swissarmyhammer serve --debug
# Save logs to file
swissarmyhammer serve --debug > server.log 2>&1
Configuration File
Create a configuration file for persistent settings:
# ~/.swissarmyhammer/config.toml
[server]
host = "localhost"
port = 8080
debug = false
[prompts]
builtin = true
watch = true
directories = [
"~/.swissarmyhammer/prompts",
"./prompts",
"/team/shared-prompts"
]
Use with:
swissarmyhammer serve --config ~/.swissarmyhammer/config.toml
Environment Variables
Configure through environment variables:
export SWISSARMYHAMMER_HOST=localhost
export SWISSARMYHAMMER_PORT=8080
export SWISSARMYHAMMER_DEBUG=true
export SWISSARMYHAMMER_PROMPTS_DIR=/custom/prompts
swissarmyhammer serve
Best Practices
1. Use Consistent Directory Structure
~/.swissarmyhammer/prompts/
├── development/
├── writing/
├── analysis/
└── productivity/
2. Enable Debug During Development
swissarmyhammer serve --debug
3. Use Project-Specific Servers
# In each project
claude mcp add project_sah swissarmyhammer serve --prompts ./prompts
4. Monitor Performance
# For large collections
swissarmyhammer serve --watch false --debug
5. Version Control Integration
# .gitignore
.swissarmyhammer/cache/
.swissarmyhammer/logs/
# Keep prompts in version control
git add prompts/
Next Steps
- Learn about Claude Code Integration setup
- Explore Configuration options
- See Troubleshooting for common issues
- Check Built-in Prompts reference
search - Search and Discover Prompts
The `search` command provides powerful functionality to find prompts in your collection using various search strategies and filters.
Synopsis
swissarmyhammer search [OPTIONS] [QUERY]
Description
Search through your prompt collection using fuzzy matching, regular expressions, or exact text matching. The search can target specific fields and provides relevance-ranked results.
Arguments
`QUERY` - Search term or pattern (optional if using filters)
Options
Search Strategy
- `--case-sensitive, -c` - Enable case-sensitive matching
- `--regex, -r` - Use regular expressions instead of fuzzy matching
- `--full, -f` - Show full prompt content in results
Field Targeting
- `--in FIELD` - Search in a specific field (title, description, content, all)
  - `title` - Search only in prompt titles
  - `description` - Search only in prompt descriptions
  - `content` - Search only in prompt content/body
  - `all` - Search in all fields (default)
Filtering
- `--source SOURCE` - Filter by prompt source (builtin, user, local)
- `--has-arg ARG` - Show prompts that have a specific argument
- `--no-args` - Show prompts with no arguments
Output Control
- `--limit, -l N` - Limit results to N prompts (default: 20)
- `--json` - Output results in JSON format
Examples
Basic Search
# Find prompts containing "code"
swissarmyhammer search code
# Case-sensitive search
swissarmyhammer search --case-sensitive "Code Review"
Field-Specific Search
# Search only in titles
swissarmyhammer search --in title "review"
# Search only in descriptions
swissarmyhammer search --in description "debugging"
# Search in content/body
swissarmyhammer search --in content "TODO"
Regular Expression Search
# Find prompts with "test" followed by any word
swissarmyhammer search --regex "test\s+\w+"
# Find prompts starting with specific pattern
swissarmyhammer search --regex "^(debug|fix|analyze)"
Advanced Filtering
# Find built-in prompts only
swissarmyhammer search --source builtin
# Find prompts with "code" argument
swissarmyhammer search --has-arg code
# Find prompts without any arguments
swissarmyhammer search --no-args
# Combine filters
swissarmyhammer search review --source user --has-arg language
Output Options
# Show full content of matching prompts
swissarmyhammer search code --full
# Limit to 5 results
swissarmyhammer search --limit 5 test
# Get JSON output for scripting
swissarmyhammer search --json "data analysis"
Output Format
Default Output
Found 3 prompts matching "code":
code-review (builtin)
  Review code for best practices and potential issues
  Arguments: code, language (optional)

debug-code (user)
  Help debug programming issues and errors
  Arguments: error, context (optional)

analyze-performance (local)
  Analyze code performance and suggest optimizations
  Arguments: code, language, metrics (optional)
JSON Output
{
"query": "code",
"results": [
{
"id": "code-review",
"title": "Code Review Helper",
"description": "Review code for best practices and potential issues",
"source": "builtin",
"path": "/builtin/review/code.md",
"arguments": [
{"name": "code", "required": true},
{"name": "language", "required": false, "default": "auto-detect"}
],
"score": 0.95
}
],
"total_found": 3
}
Search Scoring
Results are ranked by relevance using these factors:
- Exact matches score higher than partial matches
- Title matches score higher than description or content matches
- Multiple field matches increase the overall score
- Argument name matches are considered for relevance
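These factors can be illustrated with a toy scorer. The weights and formula here are invented for illustration and will not match the CLI's actual ranking:

```python
# Toy illustration of the ranking factors above: exact matches beat partial
# matches, and title matches outweigh description or content matches.
FIELD_WEIGHTS = {"title": 3.0, "description": 2.0, "content": 1.0}

def score(query, prompt):
    """Sum weighted field matches; an exact field match scores double."""
    q = query.lower()
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        text = prompt.get(field, "").lower()
        if text == q:
            total += 2 * weight      # exact match
        elif q in text:
            total += weight          # partial match
    return total

prompts = [
    {"name": "code-review", "title": "code review", "description": "review code"},
    {"name": "notes", "title": "meeting notes", "description": "summarize a meeting"},
]
ranked = sorted(prompts, key=lambda p: score("code review", p), reverse=True)
print(ranked[0]["name"])  # code-review
```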
Performance
- Search is optimized with an in-memory index
- Fuzzy matching uses efficient algorithms
- Results are cached for repeated queries
- Large prompt collections are handled efficiently
Integration with Other Commands
Search integrates well with other SwissArmyHammer commands:
# Find and test a prompt
PROMPT=$(swissarmyhammer search --json code | jq -r '.results[0].id')
swissarmyhammer test "$PROMPT"
# Export search results
swissarmyhammer search debug --limit 5 | \
grep -o '\w\+-\w\+' | \
xargs swissarmyhammer export
See Also
- `test` - Test prompts found through search
- `export` - Export specific prompts
- Search Guide - Advanced search strategies
test - Interactive Prompt Testing
The `test` command allows you to test prompts interactively, providing argument values and seeing the rendered output before using them with AI models.
Synopsis
swissarmyhammer test [OPTIONS] <PROMPT_ID>
Description
Test prompts interactively by providing arguments and viewing the rendered output. This is essential for debugging template issues, validating arguments, and refining prompts before deployment.
Arguments
`PROMPT_ID` - The ID of the prompt to test (required)
Options
Argument Specification
`--arg KEY=VALUE` - Provide argument values directly (can be used multiple times)
Output Control
- `--raw` - Show the raw template without rendering
- `--copy` - Copy the rendered result to the clipboard
- `--save FILE` - Save the rendered result to a file
- `--debug` - Show detailed debug information, including variable resolution
Interactive Mode
When no `--arg` options are provided, the command enters interactive mode:
- Prompt Selection: If prompt ID is ambiguous, presents a fuzzy selector
- Argument Collection: Prompts for each required and optional argument
- Template Rendering: Shows the rendered output
- Actions: Offers to copy to clipboard or save to file
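Step 2 (argument collection) amounts to merging provided values with declared defaults. A minimal sketch, with the argument-spec shape assumed from the front-matter format:

```python
# Sketch of the argument-collection step: merge user-supplied values with
# declared defaults and report any required argument that is still missing.

def collect_args(spec, provided):
    """spec: list of {name, required, default?}; provided: dict of user values."""
    values, missing = {}, []
    for arg in spec:
        name = arg["name"]
        if name in provided:
            values[name] = provided[name]
        elif "default" in arg:
            values[name] = arg["default"]
        elif arg.get("required"):
            missing.append(name)
    return values, missing

spec = [
    {"name": "code", "required": True},
    {"name": "language", "required": False, "default": "auto-detect"},
]
values, missing = collect_args(spec, {"code": "fn main() {}"})
print(values, missing)
# {'code': 'fn main() {}', 'language': 'auto-detect'} []
```

In interactive mode the CLI would prompt for each name in `missing` instead of returning it.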
Examples
Interactive Testing
# Test a prompt interactively
swissarmyhammer test code-review
# The command will prompt for arguments:
# ? Enter value for 'code' (required): fn main() { println!("Hello"); }
# ? Enter value for 'language' (optional, default: auto-detect): rust
#
# [Rendered output shows here]
#
# ? What would you like to do?
# > View output
# Copy to clipboard
# Save to file
# Test with different arguments
# Exit
Non-Interactive Testing
# Test with predefined arguments
swissarmyhammer test code-review \
--arg code="fn main() { println!(\"Hello\"); }" \
--arg language="rust"
# Test and copy to clipboard
swissarmyhammer test debug-helper \
--arg error="compiler error" \
--copy
# Test and save output
swissarmyhammer test api-docs \
--arg code="$(cat src/api.rs)" \
--save generated-docs.md
Debug Mode
# Show debug information
swissarmyhammer test template-complex --debug
# Output includes:
# Variables resolved:
# user_input: "example text"
# timestamp: "2024-01-15T10:30:00Z"
#
# Template processing:
# Line 5: Variable 'user_input' resolved to "example text"
# Line 12: Filter 'capitalize' applied
# Line 18: Conditional block evaluated to true
#
# Final output:
# [rendered template]
Raw Template View
# View the raw template without rendering
swissarmyhammer test email-template --raw
# Shows:
# ---
# title: Email Template
# arguments:
# - name: recipient
# required: true
# ---
#
# Dear {{recipient | capitalize}},
#
# {% if urgent %}
# **URGENT:**
# {% endif %}
# {{message}}
Output Format
Default Output
Testing prompt: code-review
Arguments:
code: "fn main() { println!(\"Hello\"); }"
language: "rust" (default: auto-detect)
Rendered Output:
───────────────────────────────────────────────────────────────
# Code Review

Please review the following rust code:

```rust
fn main() { println!("Hello"); }
```

Focus on:
- Code quality and readability
- Potential bugs or security issues
- Performance considerations
- Best practices adherence
───────────────────────────────────────────────────────────────
✓ Template rendered successfully (247 characters)
Debug Output
Testing prompt: code-review (debug mode)
Prompt loaded from: ~/.swissarmyhammer/prompts/review/code.md
Arguments defined: 2 (1 required, 1 optional)
Argument Resolution:
✓ code: "fn main() { println!(\"Hello\"); }" [user provided]
✓ language: "rust" [user provided, overrides default "auto-detect"]
Template Processing:
✓ Line 8: Variable 'language' resolved and capitalized
✓ Line 12-14: Code block with 'code' variable substitution
✓ Line 16-20: Static bullet list rendered
Filters Applied:
- capitalize: "rust" → "Rust"
Rendered Output:
[... same as above ...]
Performance:
- Template parsing: 2ms
- Variable resolution: 1ms
- Rendering: 3ms
- Total: 6ms
Error Handling
The test command provides helpful error messages for common issues:
Missing Arguments
$ swissarmyhammer test code-review
Error: Missing required argument 'code'
Available arguments:
code (required) - The code to review
language (optional) - Programming language (default: auto-detect)
Use --arg KEY=VALUE to provide arguments, or run without --arg for interactive mode.
Template Errors
$ swissarmyhammer test broken-template --arg data="test"
Error: Template rendering failed at line 15
13 | {% for item in items %}
14 | - {{item.name}}
> 15 | - {{item.invalid_field | unknown_filter}}
| ^^^^^^^^^^^^^^
16 | {% endfor %}
Unknown filter: unknown_filter
Available filters: capitalize, lower, upper, truncate, ...
Fix the template and try again.
Integration with Development Workflow
Testing Before Deployment
# Test a prompt before adding to Claude Code
swissarmyhammer test new-prompt --debug
# Validate all prompts in a directory
for prompt in prompts/*.md; do
  swissarmyhammer test "$(basename "$prompt" .md)" --arg placeholder="test"
done
Clipboard Integration
# Test and copy for immediate use
swissarmyhammer test quick-note \
--arg content="Meeting notes" \
--copy
# Now paste into your editor or Claude Code
Script Integration
#!/bin/bash
# test-and-deploy.sh
PROMPT_ID="$1"
if swissarmyhammer test "$PROMPT_ID" --arg test="validation"; then
echo "✓ Prompt test passed, deploying..."
swissarmyhammer export "$PROMPT_ID" --format directory deployment/
else
echo "✗ Prompt test failed, fix issues before deploying"
exit 1
fi
See Also
- search - Find prompts to test
- Template Variables - Template syntax reference
- Testing Guide - Advanced testing strategies
- Custom Filters - Available template filters
doctor Command
The doctor command performs comprehensive health checks on your SwissArmyHammer installation and configuration. It identifies issues and provides recommendations for optimal operation.
Usage
swissarmyhammer doctor [OPTIONS]
Overview
The doctor command checks:
- Installation integrity and version compatibility
- Configuration file validity
- Prompt directory accessibility and structure
- Prompt file syntax and metadata
- MCP server functionality
- System dependencies and environment
Options
--verbose
- Description: Enable detailed output with additional diagnostic information
- Default: Disabled
- Example: Shows file paths, configuration details, and system information
swissarmyhammer doctor --verbose
--json
- Description: Output results in JSON format for programmatic use
- Default: Human-readable text output
- Example: Useful for scripts and automation
swissarmyhammer doctor --json
--fix
- Description: Automatically fix issues when possible
- Default: Report-only mode
- Example: Creates missing directories, fixes permissions
swissarmyhammer doctor --fix
--check <CATEGORY>
- Description: Run specific check categories only
- Values: installation, config, prompts, mcp, system
- Repeatable: Can specify multiple categories
# Check only prompt-related issues
swissarmyhammer doctor --check prompts
# Check multiple categories
swissarmyhammer doctor --check config --check prompts
Check Categories
Installation Checks
Verifies SwissArmyHammer installation:
Binary Location
- Checks if swissarmyhammer is in PATH
- Verifies executable permissions
- Confirms version compatibility
Dependencies
- Validates system requirements
- Checks for required libraries
- Verifies runtime dependencies
Example Output
✓ SwissArmyHammer binary found: /usr/local/bin/swissarmyhammer
✓ Version: 0.1.0 (latest)
✓ Executable permissions: OK
✓ System dependencies: All present
Configuration Checks
Validates configuration files and settings:
Configuration File
- Checks for valid TOML syntax
- Validates configuration schema
- Identifies deprecated settings
Directory Structure
- Verifies default directories exist
- Checks directory permissions
- Validates custom prompt directories
Environment Variables
- Lists relevant environment variables
- Checks for conflicts or inconsistencies
- Validates variable values
Example Output
✓ Configuration file: ~/.swissarmyhammer/config.toml
✓ Configuration syntax: Valid TOML
✓ Default directories: Created and accessible
⚠ Custom directory not found: /nonexistent/prompts
✓ Environment variables: No conflicts
Prompt Checks
Analyzes prompt files and library structure:
Directory Scanning
- Scans all configured prompt directories
- Counts prompt files by category
- Identifies orphaned or miscategorized files
File Validation
- Validates YAML front matter syntax
- Checks required fields presence
- Verifies argument specifications
Content Analysis
- Validates Liquid template syntax
- Checks for common template errors
- Identifies missing or broken references
Duplicate Detection
- Finds prompts with identical names
- Shows override hierarchy
- Warns about potential conflicts
Example Output
✓ Prompt directories: 3 found, all accessible
✓ Prompt files: 47 total, 45 valid
✗ Invalid prompts: 2 files with errors
- debug-helper.md: Missing required field 'description'
- code-review.md: Invalid YAML syntax on line 8
⚠ Duplicate names: 1 conflict found
- 'help' defined in both builtin and ~/.swissarmyhammer/prompts/
✓ Template syntax: All valid
MCP Checks
Tests Model Context Protocol functionality:
Server Startup
- Attempts to start MCP server
- Tests port binding
- Verifies server responds to requests
Protocol Compliance
- Validates MCP protocol responses
- Checks capability advertisements
- Tests prompt exposure format
Integration Status
- Checks Claude Code configuration
- Tests end-to-end connectivity
- Validates prompt accessibility
Example Output
✓ MCP server startup: Success on port 8080
✓ Protocol compliance: All tests passed
✓ Prompt exposure: 45 prompts available
⚠ Claude Code integration: Not configured
Run: claude mcp add swissarmyhammer swissarmyhammer serve
System Checks
Analyzes system environment and performance:
Operating System
- Identifies OS and version
- Checks compatibility
- Validates system requirements
File System
- Tests file watching capabilities
- Checks disk space availability
- Validates permissions
Performance
- Measures prompt loading time
- Tests file watching responsiveness
- Checks memory usage patterns
Example Output
✓ Operating system: macOS 14.0 (supported)
✓ File system: APFS with file watching support
✓ Disk space: 15.2 GB available
✓ Performance: Prompt loading < 100ms
⚠ Memory usage: High with 1000+ prompts (consider --watch false)
Common Issues and Solutions
Installation Issues
SwissArmyHammer Not Found
✗ SwissArmyHammer binary: Not found in PATH
Solutions:
- Install SwissArmyHammer:
cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
- Add to PATH:
export PATH="$HOME/.local/bin:$PATH"
- Verify installation:
which swissarmyhammer
Permission Denied
✗ Executable permissions: Permission denied
Solutions:
# Fix permissions
chmod +x $(which swissarmyhammer)
# Or reinstall
cargo install --force --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
Configuration Issues
Invalid Configuration File
✗ Configuration syntax: Invalid TOML at line 15
Solutions:
- Validate TOML syntax online
- Check for missing quotes or brackets
- Reset to defaults:
swissarmyhammer doctor --fix
Missing Directories
✗ Prompt directory not accessible: /custom/prompts
Solutions:
# Create missing directory
mkdir -p /custom/prompts
# Fix automatically
swissarmyhammer doctor --fix
Prompt Issues
Invalid YAML Front Matter
✗ Invalid prompts: 3 files with YAML errors
- code-review.md: missing required field 'name'
Solutions:
- Add missing required fields
- Validate YAML syntax
- Use swissarmyhammer test <prompt> for detailed errors
Duplicate Prompt Names
⚠ Duplicate names: 'help' defined in multiple locations
Solutions:
- Rename one of the conflicting prompts
- Use different directories for different contexts
- Check prompt override hierarchy
Template Syntax Errors
✗ Template errors: 2 prompts with Liquid syntax issues
- debug.md: Unknown filter 'unknownfilter'
Solutions:
- Fix Liquid template syntax
- Check available filters: see Custom Filters
- Test templates:
swissarmyhammer test <prompt>
MCP Issues
Server Won't Start
✗ MCP server startup: Failed to bind to port 8080
Solutions:
# Try different port
swissarmyhammer serve --port 8081
# Check what's using the port
lsof -i :8080 # macOS/Linux
netstat -an | findstr 8080 # Windows
Claude Code Not Configured
⚠ Claude Code integration: Not configured
Solutions:
# Add to Claude Code
claude mcp add swissarmyhammer swissarmyhammer serve
# Verify configuration
claude mcp list
Performance Issues
Slow Prompt Loading
⚠ Performance: Prompt loading > 1000ms
Solutions:
- Reduce prompt directory size
- Disable file watching:
--watch false
- Use SSDs for prompt storage
- Split large libraries into categories
High Memory Usage
⚠ Memory usage: 2.1 GB with file watching enabled
Solutions:
# Disable file watching
swissarmyhammer serve --watch false
# Limit prompt directories
swissarmyhammer serve --prompts ./essential-prompts
Automated Fixes
With the --fix flag, doctor can automatically resolve:
Directory Issues
- Creates missing prompt directories
- Sets appropriate permissions
- Creates default configuration file
Configuration Issues
- Repairs malformed TOML files
- Sets missing default values
- Removes deprecated settings
Permission Issues
- Fixes file and directory permissions
- Makes binaries executable
- Sets appropriate ownership
Example Auto-Fix
swissarmyhammer doctor --fix
# Output:
Fixed: Created missing directory ~/.swissarmyhammer/prompts
Fixed: Set executable permission on swissarmyhammer binary
Fixed: Created default configuration file
Warning: Could not fix invalid YAML in code-review.md (manual intervention required)
Output Formats
Human-Readable (Default)
SwissArmyHammer Doctor Report
============================
Installation Checks:
✓ Binary found and executable
✓ Version 0.1.0 (latest)
✓ Dependencies satisfied
Configuration Checks:
✓ Configuration file valid
⚠ Custom directory not found: /tmp/prompts
Prompt Checks:
✓ 45 prompts loaded successfully
✗ 2 prompts with errors (see details below)
MCP Checks:
✓ Server starts successfully
✓ Protocol compliance verified
System Checks:
✓ OS compatibility confirmed
⚠ High memory usage detected
Summary: 3 warnings, 1 error found
JSON Format
{
"timestamp": "2024-03-20T10:30:00Z",
"version": "0.1.0",
"checks": {
"installation": {
"status": "passed",
"details": [
{
"check": "binary_found",
"status": "passed",
"message": "Binary found at /usr/local/bin/swissarmyhammer"
}
]
},
"configuration": {
"status": "warning",
"details": [
{
"check": "custom_directory",
"status": "warning",
"message": "Directory not found: /tmp/prompts",
"fixable": true
}
]
}
},
"summary": {
"total_checks": 15,
"passed": 12,
"warnings": 2,
"errors": 1
}
}
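The JSON report above is easy to post-process. When jq is not available, the summary counts can be pulled out with standard text tools; a rough sketch that assumes the "errors" field appears once, as in the sample report:

```shell
#!/bin/sh
# Extract the error count from a `doctor --json` report without jq.
cat > health-report.json <<'EOF'
{ "summary": { "total_checks": 15, "passed": 12, "warnings": 2, "errors": 1 } }
EOF

errors=$(sed -n 's/.*"errors": *\([0-9][0-9]*\).*/\1/p' health-report.json)

if [ "$errors" -gt 0 ]; then
  echo "doctor reported $errors error(s)"
fi
```

The same exit-status pattern can gate a CI step, as in the GitHub Actions example below.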
Integration with CI/CD
Use doctor in automated workflows:
GitHub Actions
- name: Check SwissArmyHammer Health
run: |
swissarmyhammer doctor --json > health-report.json
if [ $(jq '.summary.errors' health-report.json) -gt 0 ]; then
echo "Health check failed"
exit 1
fi
Pre-commit Hook
#!/bin/bash
# .git/hooks/pre-commit
swissarmyhammer doctor --check prompts
if [ $? -ne 0 ]; then
echo "Prompt validation failed. Fix issues before committing."
exit 1
fi
Development Script
#!/bin/bash
# dev-setup.sh
echo "Setting up development environment..."
swissarmyhammer doctor --fix --verbose
echo "Health check complete. Run 'swissarmyhammer serve' to start."
Best Practices
Regular Health Checks
# Weekly health check
swissarmyhammer doctor --verbose
# Before important deployments
swissarmyhammer doctor --check mcp --check prompts
Monitoring Integration
# Check and alert on issues
errors=$(swissarmyhammer doctor --json | jq -r '.summary.errors')
if [ "$errors" -gt 0 ]; then echo "Alert: SwissArmyHammer errors detected"; fi
Development Workflow
# After making prompt changes
swissarmyhammer doctor --check prompts --fix
# Before committing
swissarmyhammer doctor --check prompts
Troubleshooting Doctor Issues
Doctor Command Not Found
# Verify installation
which swissarmyhammer
# Reinstall if needed
cargo install --force --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
Doctor Hangs or Crashes
# Run with timeout
timeout 30s swissarmyhammer doctor --verbose
# Check specific categories
swissarmyhammer doctor --check installation
False Positives
# Skip problematic checks
swissarmyhammer doctor --check config --check prompts
# Use verbose mode for details
swissarmyhammer doctor --verbose
Next Steps
- Fix any issues identified by doctor
- Set up regular health monitoring
- Configure automated fixes where appropriate
- See Troubleshooting for detailed problem resolution
- Check Configuration for advanced settings
completion
The completion command generates shell completion scripts for SwissArmyHammer, enabling tab completion for commands, options, and arguments in your shell.
Usage
swissarmyhammer completion <SHELL>
Arguments
<SHELL>
- The shell to generate completions for (bash, zsh, fish, powershell, elvish)
Supported Shells
Bash
Generate and install Bash completions:
# Generate completion script
swissarmyhammer completion bash > swissarmyhammer.bash
# Install for current user
mkdir -p ~/.local/share/bash-completion/completions
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Or install system-wide (requires sudo)
swissarmyhammer completion bash | sudo tee /usr/share/bash-completion/completions/swissarmyhammer > /dev/null
# Source in current session
source ~/.local/share/bash-completion/completions/swissarmyhammer
Add to ~/.bashrc for permanent installation:
# Add SwissArmyHammer completions
if [ -f ~/.local/share/bash-completion/completions/swissarmyhammer ]; then
source ~/.local/share/bash-completion/completions/swissarmyhammer
fi
Zsh
Generate and install Zsh completions:
# Generate completion script
swissarmyhammer completion zsh > _swissarmyhammer
# Install to Zsh completions directory
# First, add custom completion directory to fpath in ~/.zshrc:
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
# Create directory and install
mkdir -p ~/.zsh/completions
swissarmyhammer completion zsh > ~/.zsh/completions/_swissarmyhammer
# Reload completions
autoload -U compinit && compinit
For Oh My Zsh users:
# Install to Oh My Zsh custom plugins
mkdir -p ~/.oh-my-zsh/custom/plugins/swissarmyhammer
swissarmyhammer completion zsh > ~/.oh-my-zsh/custom/plugins/swissarmyhammer/_swissarmyhammer
Fish
Generate and install Fish completions:
# Generate and install in one command
swissarmyhammer completion fish > ~/.config/fish/completions/swissarmyhammer.fish
# Completions are automatically loaded in new shells
PowerShell
Generate and install PowerShell completions:
# Generate completion script
swissarmyhammer completion powershell > SwissArmyHammer.ps1
# Add to PowerShell profile
Add-Content $PROFILE "`n. $(pwd)\SwissArmyHammer.ps1"
# Or install to modules directory
$modulePath = "$env:USERPROFILE\Documents\PowerShell\Modules\SwissArmyHammer"
New-Item -ItemType Directory -Force -Path $modulePath
swissarmyhammer completion powershell > "$modulePath\SwissArmyHammer.psm1"
Elvish
Generate and install Elvish completions:
# Generate and install
mkdir -p ~/.elvish/lib
swissarmyhammer completion elvish > ~/.elvish/lib/swissarmyhammer.elv
# Add to rc.elv
echo "use swissarmyhammer" >> ~/.elvish/rc.elv
What Gets Completed
The completion system provides intelligent suggestions for:
Commands
swissarmyhammer <TAB>
# Suggests: serve, list, doctor, export, import, completion, config
Command Options
swissarmyhammer serve --<TAB>
# Suggests: --port, --host, --debug, --watch, --prompts, etc.
Prompt Names
swissarmyhammer get <TAB>
# Suggests available prompt names from your library
File Paths
swissarmyhammer import <TAB>
# Suggests .tar.gz files in current directory
swissarmyhammer export output<TAB>
# Suggests: output.tar.gz
Configuration Keys
swissarmyhammer config set <TAB>
# Suggests: server.port, server.host, prompts.directories, etc.
Advanced Features
Dynamic Completions
Some completions are generated dynamically based on context:
# Completes with actual prompt names from your library
swissarmyhammer get code-<TAB>
# Suggests: code-review, code-documentation, code-optimizer
# Completes with valid categories
swissarmyhammer list --category <TAB>
# Suggests: development, writing, data, productivity
Nested Completions
Completions work with nested commands:
swissarmyhammer config <TAB>
# Suggests: get, set, list, validate
swissarmyhammer config set server.<TAB>
# Suggests: server.port, server.host, server.debug
Alias Support
If you create shell aliases, completions can still work. Zsh and Fish resolve aliases automatically; in Bash you must register the completion for the alias as well:
# In .bashrc or .zshrc
alias sah='swissarmyhammer'
# Bash only: find the function name with `complete -p swissarmyhammer`, then:
complete -F _swissarmyhammer sah
# Completions now work with the alias
sah serve --<TAB>
Troubleshooting
Completions Not Working
- Check Installation Location
# Bash
ls ~/.local/share/bash-completion/completions/
# Zsh
echo $fpath
ls ~/.zsh/completions/
# Fish
ls ~/.config/fish/completions/
- Reload Shell Configuration
# Bash
source ~/.bashrc
# Zsh
source ~/.zshrc
# Fish
source ~/.config/fish/config.fish
- Check Completion System
# Bash
complete -p | grep swissarmyhammer
# Zsh
print -l ${(ok)_comps} | grep swissarmyhammer
Slow Completions
If completions are slow:
- Enable Caching (Zsh)
# Add to ~/.zshrc
zstyle ':completion:*' use-cache on
zstyle ':completion:*' cache-path ~/.zsh/cache
- Reduce Dynamic Lookups
# Set static prompt directory
export SWISSARMYHAMMER_PROMPTS_DIR=~/.swissarmyhammer/prompts
Missing Completions
If some completions are missing:
# Regenerate completions after updates
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Check SwissArmyHammer version
swissarmyhammer --version
Environment Variables
Completions respect environment variables:
# Complete with custom prompt directories
export SWISSARMYHAMMER_PROMPTS_DIRECTORIES="/opt/prompts,~/my-prompts"
swissarmyhammer list <TAB>
Integration with Tools
fzf Integration
Combine with fzf for fuzzy completion:
# Add to .bashrc/.zshrc
_swissarmyhammer_fzf_complete() {
swissarmyhammer list --format simple | fzf
}
# Use with Ctrl+T
bind '"\C-t": "$(_swissarmyhammer_fzf_complete)\e\C-e\er"'
IDE Integration
Most IDEs can use shell completions:
VS Code
{
"terminal.integrated.shellIntegration.enabled": true,
"terminal.integrated.shellIntegration.suggestEnabled": true
}
JetBrains IDEs
- Terminal automatically sources shell configuration
- Completions work in integrated terminal
Custom Completions
Adding Custom Completions
Create wrapper scripts with additional completions:
#!/bin/bash
# my-swissarmyhammer-completions.bash
# Source original completions
source ~/.local/share/bash-completion/completions/swissarmyhammer
# Add custom completions
_my_custom_prompts() {
COMPREPLY=($(compgen -W "my-prompt-1 my-prompt-2 my-prompt-3" -- ${COMP_WORDS[COMP_CWORD]}))
}
# Override prompt name completion
complete -F _my_custom_prompts swissarmyhammer get
Project-Specific Completions
Add project-specific completions:
# .envrc (direnv) or project script
_project_prompts() {
  for f in ./prompts/*.md; do
    [ -e "$f" ] && basename "$f" .md
  done
}
# Export for use in completions
export SWISSARMYHAMMER_PROJECT_PROMPTS=$(_project_prompts)
Best Practices
- Keep Completions Updated
# Update completions after SwissArmyHammer updates
swissarmyhammer completion $(basename "$SHELL") > ~/.local/share/bash-completion/completions/swissarmyhammer
- Test Completions
# Test completion generation
swissarmyhammer completion bash | head -20
- Document Custom Completions
# Add comments in completion files
# Custom completions for project XYZ
# Generated: $(date)
# Version: $(swissarmyhammer --version)
Examples
Complete Workflow
# Install completions
swissarmyhammer completion bash > ~/.local/share/bash-completion/completions/swissarmyhammer
# Use completions
swissarmyhammer li<TAB> # Completes to: list
swissarmyhammer list --for<TAB> # Completes to: --format
swissarmyhammer list --format j<TAB> # Completes to: json
# Get specific prompt
swissarmyhammer get code-r<TAB> # Completes to: code-review
# Export with completion
swissarmyhammer export my-prompts<TAB> # Suggests: my-prompts.tar.gz
Script Integration
#!/bin/bash
# setup-completions.sh
SHELL_NAME=$(basename "$SHELL")
case "$SHELL_NAME" in
bash)
COMPLETION_DIR="$HOME/.local/share/bash-completion/completions"
FILE="swissarmyhammer"
;;
zsh)
COMPLETION_DIR="$HOME/.zsh/completions"
FILE="_swissarmyhammer"   # Zsh autoloads completions from underscore-prefixed files
;;
fish)
COMPLETION_DIR="$HOME/.config/fish/completions"
FILE="swissarmyhammer.fish"
;;
*)
echo "Unsupported shell: $SHELL_NAME"
exit 1
;;
esac
mkdir -p "$COMPLETION_DIR"
swissarmyhammer completion "$SHELL_NAME" > "$COMPLETION_DIR/$FILE"
echo "Completions installed to $COMPLETION_DIR"
See Also
- CLI Reference - Complete command documentation
- Configuration - Configuration options
- Getting Started - Initial setup guide
Rust Library Guide
SwissArmyHammer is available as a Rust library (swissarmyhammer) that you can integrate into your own applications. This guide covers installation, basic usage, and advanced integration patterns.
Installation
Add SwissArmyHammer to your Cargo.toml:
[dependencies]
swissarmyhammer = { git = "https://github.com/wballard/swissarmyhammer", features = ["full"] }
Feature Flags
Control which functionality to include:
[dependencies]
swissarmyhammer = { git = "https://github.com/wballard/swissarmyhammer", features = ["prompts", "templates", "search", "mcp"] }
Available features:
- prompts - Core prompt management (always enabled)
- templates - Liquid template engine with custom filters
- search - Full-text search capabilities
- mcp - Model Context Protocol server support
- storage - Advanced storage backends
- full - All features enabled
Quick Start
Basic Prompt Library
use swissarmyhammer::{PromptLibrary, ArgumentSpec};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a new prompt library
let mut library = PromptLibrary::new();
// Add prompts from a directory
library.add_directory("./prompts").await?;
// List available prompts
for prompt_id in library.list_prompts() {
println!("Available prompt: {}", prompt_id);
}
// Get a specific prompt
let prompt = library.get("code-review")?;
println!("Title: {}", prompt.title());
println!("Description: {}", prompt.description());
// Prepare arguments
let mut args = HashMap::new();
args.insert("code".to_string(), "fn main() { println!(\"Hello\"); }".to_string());
args.insert("language".to_string(), "rust".to_string());
// Render the prompt
let rendered = prompt.render(&args)?;
println!("Rendered prompt:\n{}", rendered);
Ok(())
}
Custom Prompt Creation
use swissarmyhammer::{Prompt, ArgumentSpec, PromptMetadata};
fn create_custom_prompt() -> Result<Prompt, Box<dyn std::error::Error>> {
let metadata = PromptMetadata {
title: "Custom Code Review".to_string(),
description: "A custom code review prompt".to_string(),
arguments: vec![
ArgumentSpec {
name: "code".to_string(),
description: "Code to review".to_string(),
required: true,
default: None,
},
ArgumentSpec {
name: "focus".to_string(),
description: "Review focus area".to_string(),
required: false,
default: Some("general".to_string()),
},
],
};
let template = r#"
Code Review: {{ focus | capitalize }}
Please review this code:
{{ code }}
{% if focus == "security" %}
Focus specifically on security vulnerabilities and best practices.
{% elsif focus == "performance" %}
Focus on performance optimizations and efficiency.
{% else %}
Perform a general code review covering style, bugs, and maintainability.
{% endif %}
"#;
Prompt::from_parts(metadata, template)
}
Core Components
PromptLibrary
The main interface for managing collections of prompts.
use swissarmyhammer::PromptLibrary;
let mut library = PromptLibrary::new();
// Add prompts from various sources
library.add_directory("./prompts").await?;
library.add_file("./special-prompt.md").await?;
library.add_builtin_prompts();
// Query prompts
let prompts = library.list_prompts();
let prompt = library.get("prompt-id")?;
let filtered = library.filter_by_category("review");
// Search prompts
let results = library.search("code review")?;
Prompt
Individual prompt with metadata and template.
use swissarmyhammer::Prompt;
// Load from file
let prompt = Prompt::from_file("./prompts/review.md").await?;
// Access metadata
println!("Title: {}", prompt.title());
println!("Description: {}", prompt.description());
for arg in prompt.arguments() {
println!("Argument: {} (required: {})", arg.name, arg.required);
}
// Render with arguments
let mut args = HashMap::new();
args.insert("code".to_string(), "example code".to_string());
let rendered = prompt.render(&args)?;
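Under the hood, loading a prompt file means separating the YAML front matter from the template body. A std-only sketch of that split (illustrative only; the library's actual parser is more forgiving):

```rust
/// Split a prompt source into (front_matter, template_body).
/// Minimal sketch: expects the file to open with a `---` line and the
/// front matter to be closed by the next `---` line.
fn split_front_matter(source: &str) -> Option<(&str, &str)> {
    let rest = source.strip_prefix("---\n")?;
    let end = rest.find("\n---\n")?;
    Some((&rest[..end], &rest[end + 5..]))
}

fn main() {
    let source = "---\ntitle: Code Review\n---\n# Code Review\n";
    if let Some((meta, body)) = split_front_matter(source) {
        println!("front matter: {meta:?}");
        println!("body: {body:?}");
    }
}
```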
Template Engine
Advanced template processing with custom filters.
use swissarmyhammer::template::{TemplateEngine, TemplateContext};
let engine = TemplateEngine::new();
let template = "Hello {{ name | capitalize }}! Today is {{ 'now' | format_date: '%Y-%m-%d' }}.";
let mut context = TemplateContext::new();
context.insert("name", "alice");
let result = engine.render(template, &context)?;
println!("{}", result); // "Hello Alice! Today is 2024-01-15."
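The engine's `{{ variable }}` substitution can be illustrated with a toy std-only version (no filters or conditionals; the real engine is Liquid-based):

```rust
use std::collections::HashMap;

/// Replace every `{{ name }}` placeholder with its value from `vars`.
/// Toy illustration only: no filters, conditionals, or error reporting.
fn render_simple(template: &str, vars: &HashMap<String, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // Accept both `{{name}}` and `{{ name }}` spellings.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
        out = out.replace(&format!("{{{{ {} }}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("name".to_string(), "Alice".to_string());
    println!("{}", render_simple("Hello {{ name }}!", &vars));
}
```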
Advanced Usage
Custom Storage Backend
Implement your own storage for prompts:
use swissarmyhammer::storage::{StorageBackend, PromptSource};
use async_trait::async_trait;
struct DatabaseStorage {
// Your database connection
db: Database,
}
#[async_trait]
impl StorageBackend for DatabaseStorage {
async fn list_prompts(&self) -> Result<Vec<String>, StorageError> {
// Implement database query
todo!()
}
async fn get_prompt(&self, id: &str) -> Result<PromptSource, StorageError> {
// Implement database retrieval
todo!()
}
async fn save_prompt(&mut self, id: &str, source: &PromptSource) -> Result<(), StorageError> {
// Implement database storage
todo!()
}
}
// Use custom storage
let storage = DatabaseStorage::new(db);
let mut library = PromptLibrary::with_storage(storage);
Search Integration
Advanced search capabilities:
use swissarmyhammer::search::{SearchEngine, SearchQuery, SearchResult};
let mut search_engine = SearchEngine::new();
// Index prompts
search_engine.index_prompt("code-review", &prompt).await?;
// Perform searches
let query = SearchQuery::new("code review")
.with_field("title")
.with_limit(10)
.case_sensitive(false);
let results: Vec<SearchResult> = search_engine.search(&query)?;
for result in results {
println!("Found: {} (score: {:.2})", result.id, result.score);
}
MCP Server Integration
Embed MCP server functionality:
use swissarmyhammer::mcp::{McpServer, McpConfig};
let config = McpConfig {
name: "my-prompt-server".to_string(),
version: "1.0.0".to_string(),
// ... other config
};
let mut library = PromptLibrary::new();
library.add_directory("./prompts").await?;
let server = McpServer::new(config, library);
// Run MCP server
server.serve().await?;
Event System
React to library events:
use swissarmyhammer::events::{EventHandler, PromptEvent};
struct MyEventHandler;
impl EventHandler for MyEventHandler {
fn handle_prompt_added(&self, id: &str) {
println!("Prompt added: {}", id);
}
fn handle_prompt_updated(&self, id: &str) {
println!("Prompt updated: {}", id);
}
fn handle_prompt_removed(&self, id: &str) {
println!("Prompt removed: {}", id);
}
}
let mut library = PromptLibrary::new();
library.add_event_handler(Box::new(MyEventHandler));
File Watching
Automatically reload prompts when files change:
use swissarmyhammer::watcher::{FileWatcher, FileEvent};
let mut library = PromptLibrary::new();
library.add_directory("./prompts").await?;
// Start watching for file changes
let _watcher = FileWatcher::new("./prompts", move |event| {
match event {
FileEvent::Created(path) => {
if let Err(e) = library.reload_file(&path) {
eprintln!("Failed to load {}: {}", path.display(), e);
}
}
FileEvent::Modified(path) => {
if let Err(e) = library.reload_file(&path) {
eprintln!("Failed to reload {}: {}", path.display(), e);
}
}
FileEvent::Deleted(path) => {
library.remove_file(&path);
}
}
});
// Keep the watcher alive
std::thread::sleep(std::time::Duration::from_secs(60));
Integration Examples
Web Server Integration
use axum::{extract::Path, http::StatusCode, response::Json, routing::get, Router};
use swissarmyhammer::PromptLibrary;
use std::sync::Arc;
use tokio::sync::RwLock;
type SharedLibrary = Arc<RwLock<PromptLibrary>>;
async fn list_prompts(library: SharedLibrary) -> Json<Vec<String>> {
let lib = library.read().await;
Json(lib.list_prompts())
}
async fn get_prompt(
Path(id): Path<String>,
library: SharedLibrary,
) -> Result<Json<String>, StatusCode> {
let lib = library.read().await;
match lib.get(&id) {
Ok(prompt) => Ok(Json(prompt.title().to_string())),
Err(_) => Err(StatusCode::NOT_FOUND),
}
}
#[tokio::main]
async fn main() {
let mut library = PromptLibrary::new();
library.add_directory("./prompts").await.unwrap();
let shared_library = Arc::new(RwLock::new(library));
let app = Router::new()
.route("/prompts", get({
let lib = shared_library.clone();
move || list_prompts(lib)
}))
.route("/prompts/:id", get({
let lib = shared_library.clone();
move |path| get_prompt(path, lib)
}));
axum::Server::bind(&"0.0.0.0:3000".parse().unwrap())
.serve(app.into_make_service())
.await
.unwrap();
}
CLI Tool Integration
use clap::{Arg, Command};
use swissarmyhammer::PromptLibrary;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let matches = Command::new("my-prompt-tool")
.arg(Arg::new("prompt")
.help("Prompt ID to render")
.required(true)
.index(1))
.arg(Arg::new("args")
.help("Template arguments as key=value pairs")
.multiple_values(true)
.short('a')
.long("arg"))
.get_matches();
let mut library = PromptLibrary::new();
library.add_directory("./prompts").await?;
let prompt_id = matches.value_of("prompt").unwrap();
let prompt = library.get(prompt_id)?;
let mut args = std::collections::HashMap::new();
if let Some(arg_values) = matches.values_of("args") {
for arg in arg_values {
if let Some((key, value)) = arg.split_once('=') {
args.insert(key.to_string(), value.to_string());
}
}
}
let rendered = prompt.render(&args)?;
println!("{}", rendered);
Ok(())
}
Configuration Management
use serde::{Deserialize, Serialize};
use swissarmyhammer::{PromptLibrary, storage::FileSystemStorage};
#[derive(Serialize, Deserialize)]
struct AppConfig {
prompt_directories: Vec<String>,
default_arguments: std::collections::HashMap<String, String>,
search_enabled: bool,
}
impl Default for AppConfig {
fn default() -> Self {
Self {
prompt_directories: vec!["./prompts".to_string()],
default_arguments: std::collections::HashMap::new(),
search_enabled: true,
}
}
}
async fn setup_library(config: &AppConfig) -> Result<PromptLibrary, Box<dyn std::error::Error>> {
let mut library = PromptLibrary::new();
for dir in &config.prompt_directories {
library.add_directory(dir).await?;
}
if config.search_enabled {
library.enable_search()?;
}
Ok(library)
}
Error Handling
SwissArmyHammer uses comprehensive error types:
use swissarmyhammer::error::{SwissArmyHammerError, PromptError, TemplateError};
match library.get("nonexistent") {
Ok(prompt) => {
// Handle success
}
Err(SwissArmyHammerError::PromptNotFound(id)) => {
eprintln!("Prompt '{}' not found", id);
}
Err(SwissArmyHammerError::Template(TemplateError::RenderError(msg))) => {
eprintln!("Template rendering failed: {}", msg);
}
Err(SwissArmyHammerError::Io(io_err)) => {
eprintln!("I/O error: {}", io_err);
}
Err(e) => {
eprintln!("Unexpected error: {}", e);
}
}
Testing
SwissArmyHammer provides testing utilities:
use swissarmyhammer::testing::{MockPromptLibrary, PromptTestCase};
#[tokio::test]
async fn test_prompt_rendering() {
let mut library = MockPromptLibrary::new();
let test_case = PromptTestCase::new("test-prompt")
.with_template("Hello {{ name }}!")
.with_argument("name", "World")
.expect_output("Hello World!");
library.add_test_prompt(test_case);
let prompt = library.get("test-prompt").unwrap();
let mut args = std::collections::HashMap::new();
args.insert("name".to_string(), "World".to_string());
let result = prompt.render(&args).unwrap();
assert_eq!(result, "Hello World!");
}
Performance Considerations
Memory Usage
- Prompt libraries cache parsed templates in memory
- Large collections may require custom storage backends
- Use lazy loading for better memory efficiency
Concurrency
- `PromptLibrary` is `Send + Sync` when used with appropriate storage
- Template rendering is thread-safe
- Consider using `Arc<RwLock<PromptLibrary>>` for shared access
Best Practices
- Prefer batch operations for multiple prompts
- Cache rendered templates when arguments don't change
- Use feature flags to include only needed functionality
- Implement proper error handling for production use
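The `Arc<RwLock<…>>` shared-access pattern recommended above can be sketched with std types alone; a plain `HashMap` stands in for `PromptLibrary` here so the example compiles without the crate:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Stand-in for PromptLibrary: a map from prompt name to template text.
type Library = HashMap<String, String>;

fn make_shared() -> Arc<RwLock<Library>> {
    let mut lib = Library::new();
    lib.insert("greet".to_string(), "Hello {{name}}!".to_string());
    Arc::new(RwLock::new(lib))
}

// Read access: many threads can hold the read lock concurrently.
fn lookup(lib: &Arc<RwLock<Library>>, name: &str) -> Option<String> {
    lib.read().unwrap().get(name).cloned()
}

fn main() {
    let shared = make_shared();

    // Spawn several concurrent readers.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let lib = Arc::clone(&shared);
            thread::spawn(move || lookup(&lib, "greet"))
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap().as_deref(), Some("Hello {{name}}!"));
    }

    // A writer takes exclusive access to mutate the library.
    shared
        .write()
        .unwrap()
        .insert("bye".to_string(), "Bye {{name}}!".to_string());
    assert_eq!(shared.read().unwrap().len(), 2);
    println!("ok");
}
```

The same shape applies with the real `PromptLibrary` in place of the `HashMap`: take the read lock to render, the write lock to reload or add prompts.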
Migration from CLI
If you're migrating from using the CLI to the library:
// CLI equivalent: swissarmyhammer search "code review"
let results = library.search("code review")?;
// CLI equivalent: swissarmyhammer test prompt-id --arg key=value
let prompt = library.get("prompt-id")?;
let mut args = HashMap::new();
args.insert("key".to_string(), "value".to_string());
let rendered = prompt.render(&args)?;
// CLI equivalent: swissarmyhammer export --all output.tar.gz
library.export_all("output.tar.gz", ExportFormat::TarGz)?;
See Also
- Library API Reference - Complete API documentation
- Integration Examples - More integration patterns
- Custom Filters - Template customization
- Advanced Prompts - Complex template patterns
Library API Reference
This document provides comprehensive API documentation for the SwissArmyHammer Rust library.
Core Types
Prompt
The `Prompt` struct represents a single prompt with metadata and template content.
pub struct Prompt {
pub name: String,
pub content: String,
pub description: Option<String>,
pub category: Option<String>,
pub tags: Vec<String>,
pub arguments: Vec<ArgumentSpec>,
pub file_path: Option<PathBuf>,
}
Methods
- `new(name: &str, content: &str) -> Self` - Create a new prompt
- `with_description(self, description: &str) -> Self` - Add a description (builder pattern)
- `with_category(self, category: &str) -> Self` - Add a category (builder pattern)
- `add_tag(self, tag: &str) -> Self` - Add a tag (builder pattern)
- `add_argument(self, arg: ArgumentSpec) -> Self` - Add an argument specification
- `render(&self, args: &HashMap<String, String>) -> Result<String>` - Render the prompt with arguments
- `validate_arguments(&self, args: &HashMap<String, String>) -> Result<()>` - Validate provided arguments
Example
use swissarmyhammer::{Prompt, ArgumentSpec};
use std::collections::HashMap;
let prompt = Prompt::new("greet", "Hello {{name}}!")
.with_description("A greeting prompt")
.add_argument(ArgumentSpec {
name: "name".to_string(),
description: Some("Name to greet".to_string()),
required: true,
default: None,
type_hint: Some("string".to_string()),
});
let mut args = HashMap::new();
args.insert("name".to_string(), "World".to_string());
let result = prompt.render(&args)?;
// result: "Hello World!"
ArgumentSpec
Defines the specification for a prompt argument.
pub struct ArgumentSpec {
pub name: String,
pub description: Option<String>,
pub required: bool,
pub default: Option<String>,
pub type_hint: Option<String>,
}
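The `required` and `default` fields interact in the obvious way during validation: a missing argument is filled from its default if one exists, and is an error only when it is required with no default. The sketch below illustrates that rule; `resolve_args` is a hypothetical helper written for this example, not part of the crate's API, and `ArgumentSpec` is reproduced locally so the snippet stands alone:

```rust
use std::collections::HashMap;

// ArgumentSpec as defined above, reproduced so the sketch compiles on its own.
pub struct ArgumentSpec {
    pub name: String,
    pub description: Option<String>,
    pub required: bool,
    pub default: Option<String>,
    pub type_hint: Option<String>,
}

// Fill in defaults and reject missing required arguments.
fn resolve_args(
    specs: &[ArgumentSpec],
    provided: &HashMap<String, String>,
) -> Result<HashMap<String, String>, String> {
    let mut resolved = provided.clone();
    for spec in specs {
        if !resolved.contains_key(&spec.name) {
            match (&spec.default, spec.required) {
                // Absent but has a default: use it.
                (Some(default), _) => {
                    resolved.insert(spec.name.clone(), default.clone());
                }
                // Absent, no default, required: error.
                (None, true) => {
                    return Err(format!("missing required argument '{}'", spec.name));
                }
                // Absent, no default, optional: leave unset.
                (None, false) => {}
            }
        }
    }
    Ok(resolved)
}

fn main() {
    let specs = vec![
        ArgumentSpec {
            name: "name".into(),
            description: None,
            required: true,
            default: None,
            type_hint: None,
        },
        ArgumentSpec {
            name: "language".into(),
            description: None,
            required: false,
            default: Some("auto-detect".into()),
            type_hint: None,
        },
    ];

    let mut args = HashMap::new();
    args.insert("name".to_string(), "World".to_string());

    let resolved = resolve_args(&specs, &args).unwrap();
    assert_eq!(resolved["language"], "auto-detect");
    assert!(resolve_args(&specs, &HashMap::new()).is_err());
    println!("ok");
}
```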
PromptLibrary
The main interface for managing collections of prompts.
pub struct PromptLibrary {
// internal fields...
}
Methods
- `new() -> Self` - Create a new empty library
- `add_directory<P: AsRef<Path>>(&mut self, path: P) -> Result<()>` - Load prompts from a directory
- `add_prompt(&mut self, prompt: Prompt)` - Add a single prompt
- `get(&self, name: &str) -> Result<&Prompt>` - Get a prompt by name
- `list_prompts(&self) -> Vec<&Prompt>` - List all prompts
- `find_by_category(&self, category: &str) -> Vec<&Prompt>` - Find prompts by category
- `find_by_tag(&self, tag: &str) -> Vec<&Prompt>` - Find prompts by tag
- `remove(&mut self, name: &str) -> Option<Prompt>` - Remove a prompt
Example
use swissarmyhammer::PromptLibrary;
let mut library = PromptLibrary::new();
library.add_directory("./prompts")?;
let prompt = library.get("code-review")?;
let rendered = prompt.render(&args)?;
PromptLoader
Handles loading prompts from various sources.
pub struct PromptLoader {
// internal fields...
}
Methods
- `new() -> Self` - Create a new loader
- `load_file<P: AsRef<Path>>(&self, path: P) -> Result<Prompt>` - Load a single prompt file
- `load_directory<P: AsRef<Path>>(&self, path: P) -> Result<Vec<Prompt>>` - Load all prompts from a directory
- `load_string(&self, name: &str, content: &str) -> Result<Prompt>` - Load a prompt from a string
Template Engine
Template
Wrapper for Liquid templates with custom filters.
pub struct Template {
// internal fields...
}
Methods
- `new(template_str: &str) -> Result<Self>` - Create a template from a string
- `render(&self, args: &HashMap<String, String>) -> Result<String>` - Render with arguments
- `raw(&self) -> &str` - Get the raw template string
TemplateEngine
Manages template parsing and custom filters.
pub struct TemplateEngine {
// internal fields...
}
Methods
- `new() -> Self` - Create a new engine
- `default_parser() -> Parser` - Get the default Liquid parser with custom filters
- `register_filter<F>(&mut self, name: &str, filter: F)` - Register a custom filter
Storage
PromptStorage
High-level storage interface for prompts.
pub trait PromptStorage {
fn store_prompt(&mut self, prompt: &Prompt) -> Result<()>;
fn load_prompt(&self, name: &str) -> Result<Prompt>;
fn list_prompts(&self) -> Result<Vec<String>>;
fn delete_prompt(&mut self, name: &str) -> Result<()>;
}
StorageBackend
Low-level storage abstraction.
pub trait StorageBackend {
fn read(&self, key: &str) -> Result<Vec<u8>>;
fn write(&mut self, key: &str, data: &[u8]) -> Result<()>;
fn delete(&mut self, key: &str) -> Result<()>;
fn list(&self) -> Result<Vec<String>>;
}
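Because `StorageBackend` works in raw bytes, an in-memory implementation is just a map. The sketch below is a minimal illustration, not a crate-provided type: the trait is reproduced locally with its error type simplified to `String` so the snippet compiles on its own.

```rust
use std::collections::HashMap;

// StorageBackend as defined above, reproduced with a simplified error type
// so the sketch stands alone.
pub trait StorageBackend {
    fn read(&self, key: &str) -> Result<Vec<u8>, String>;
    fn write(&mut self, key: &str, data: &[u8]) -> Result<(), String>;
    fn delete(&mut self, key: &str) -> Result<(), String>;
    fn list(&self) -> Result<Vec<String>, String>;
}

// A minimal in-memory backend, handy for tests.
#[derive(Default)]
pub struct MemoryBackend {
    entries: HashMap<String, Vec<u8>>,
}

impl StorageBackend for MemoryBackend {
    fn read(&self, key: &str) -> Result<Vec<u8>, String> {
        self.entries
            .get(key)
            .cloned()
            .ok_or_else(|| format!("key '{}' not found", key))
    }
    fn write(&mut self, key: &str, data: &[u8]) -> Result<(), String> {
        self.entries.insert(key.to_string(), data.to_vec());
        Ok(())
    }
    fn delete(&mut self, key: &str) -> Result<(), String> {
        self.entries
            .remove(key)
            .map(|_| ())
            .ok_or_else(|| format!("key '{}' not found", key))
    }
    fn list(&self) -> Result<Vec<String>, String> {
        Ok(self.entries.keys().cloned().collect())
    }
}

fn main() {
    let mut backend = MemoryBackend::default();
    backend.write("greet", b"Hello {{name}}!").unwrap();
    assert_eq!(backend.read("greet").unwrap(), b"Hello {{name}}!");
    assert_eq!(backend.list().unwrap(), vec!["greet".to_string()]);
    backend.delete("greet").unwrap();
    assert!(backend.read("greet").is_err());
    println!("ok");
}
```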
Search
Available with the `search` feature.
SearchEngine
Full-text search functionality for prompts.
pub struct SearchEngine {
// internal fields...
}
Methods
- `new() -> Result<Self>` - Create a new search engine
- `index_prompt(&mut self, prompt: &Prompt) -> Result<()>` - Add a prompt to the search index
- `search(&self, query: &str) -> Result<Vec<SearchResult>>` - Search for prompts
SearchResult
Represents a search result.
pub struct SearchResult {
pub name: String,
pub score: f32,
pub snippet: Option<String>,
}
MCP Integration
Available with the `mcp` feature.
McpServer
Model Context Protocol server implementation.
pub struct McpServer {
// internal fields...
}
Methods
- `new(library: PromptLibrary) -> Self` - Create a server with a prompt library
- `run(&mut self) -> Result<()>` - Start the MCP server
Plugin System
SwissArmyHammerPlugin
Trait for creating plugins.
pub trait SwissArmyHammerPlugin {
fn name(&self) -> &str;
fn filters(&self) -> Vec<Box<dyn CustomLiquidFilter>>;
}
CustomLiquidFilter
Trait for custom Liquid template filters.
pub trait CustomLiquidFilter {
fn name(&self) -> &str;
fn filter(&self, input: &str, args: &[&str]) -> Result<String>;
}
PluginRegistry
Manages registered plugins and filters.
pub struct PluginRegistry {
// internal fields...
}
Methods
- `new() -> Self` - Create a new registry
- `register_plugin<P: SwissArmyHammerPlugin>(&mut self, plugin: P)` - Register a plugin
- `get_filters(&self) -> Vec<&dyn CustomLiquidFilter>` - Get all registered filters
Error Handling
SwissArmyHammerError
Main error type for the library.
pub enum SwissArmyHammerError {
Io(std::io::Error),
Template(String),
PromptNotFound(String),
Config(String),
Storage(String),
Serialization(serde_yaml::Error),
Other(String),
}
Result Type
Convenient result type alias.
pub type Result<T> = std::result::Result<T, SwissArmyHammerError>;
Feature Flags
The library supports several optional features:
- `search` - Enables full-text search functionality
- `mcp` - Enables Model Context Protocol server support
Enable features in your `Cargo.toml`:
[dependencies]
swissarmyhammer = { version = "0.1", features = ["search", "mcp"] }
Complete Example
use swissarmyhammer::{PromptLibrary, ArgumentSpec, Result};
use std::collections::HashMap;
fn main() -> Result<()> {
// Create library and load prompts
let mut library = PromptLibrary::new();
library.add_directory("./prompts")?;
// Get a prompt
let prompt = library.get("code-review")?;
// Prepare arguments
let mut args = HashMap::new();
args.insert("code".to_string(), "fn main() { println!(\"Hello\"); }".to_string());
args.insert("language".to_string(), "rust".to_string());
// Render the prompt
let rendered = prompt.render(&args)?;
println!("{}", rendered);
Ok(())
}
Advanced Usage
Custom Filters
Create custom Liquid filters for domain-specific transformations:
use swissarmyhammer::{CustomLiquidFilter, PluginRegistry, Result, TemplateEngine};
struct UppercaseFilter;
impl CustomLiquidFilter for UppercaseFilter {
fn name(&self) -> &str { "uppercase" }
fn filter(&self, input: &str, _args: &[&str]) -> Result<String> {
Ok(input.to_uppercase())
}
}
let mut registry = PluginRegistry::new();
registry.register_filter("uppercase", Box::new(UppercaseFilter));
Storage Backends
Implement custom storage backends:
use swissarmyhammer::{StorageBackend, Result};
struct DatabaseBackend {
// database connection...
}
impl StorageBackend for DatabaseBackend {
fn read(&self, key: &str) -> Result<Vec<u8>> {
// Read from database
todo!()
}
fn write(&mut self, key: &str, data: &[u8]) -> Result<()> {
// Write to database
todo!()
}
// ... implement other methods
}
For more examples and advanced usage patterns, see the Library Examples page.
Library Examples
This guide provides practical examples of using SwissArmyHammer as a Rust library in your applications.
Basic Usage
Adding to Your Project
Add SwissArmyHammer to your `Cargo.toml`:
[dependencies]
swissarmyhammer = { git = "https://github.com/wballard/swissarmyhammer.git" }
tokio = { version = "1", features = ["full"] }
serde_json = "1"
Simple Example
use swissarmyhammer::{PromptManager, PromptArgument};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a prompt manager
let manager = PromptManager::new()?;
// Load prompts from default directories
manager.load_prompts().await?;
// Get a specific prompt
let prompt = manager.get_prompt("code-review")?;
// Prepare arguments
let mut args = HashMap::new();
args.insert("code".to_string(), r#"
def calculate_sum(a, b):
return a + b
"#.to_string());
args.insert("language".to_string(), "python".to_string());
// Render the prompt
let rendered = prompt.render(&args)?;
println!("Rendered prompt:\n{}", rendered);
Ok(())
}
Advanced Examples
Custom Prompt Directories
use swissarmyhammer::{PromptManager, Config};
use std::path::PathBuf;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create custom configuration
let mut config = Config::default();
config.prompt_directories.push(PathBuf::from("./my-prompts"));
config.prompt_directories.push(PathBuf::from("/opt/company/prompts"));
// Create manager with custom config
let manager = PromptManager::with_config(config)?;
// Load prompts from all directories
manager.load_prompts().await?;
// List all available prompts
for prompt in manager.list_prompts() {
println!("Found prompt: {} - {}", prompt.name, prompt.title);
}
Ok(())
}
Watching for Changes
use swissarmyhammer::{PromptManager, WatchEvent};
use tokio::sync::mpsc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let manager = PromptManager::new()?;
// Create a channel for watch events
let (tx, mut rx) = mpsc::channel(100);
// Start watching for changes
manager.watch(tx).await?;
// Handle watch events
tokio::spawn(async move {
while let Some(event) = rx.recv().await {
match event {
WatchEvent::PromptAdded(name) => {
println!("New prompt added: {}", name);
}
WatchEvent::PromptModified(name) => {
println!("Prompt modified: {}", name);
}
WatchEvent::PromptRemoved(name) => {
println!("Prompt removed: {}", name);
}
}
}
});
// Keep the program running
tokio::signal::ctrl_c().await?;
println!("Shutting down...");
Ok(())
}
MCP Server Implementation
use swissarmyhammer::{PromptManager, MCPServer, MCPRequest, MCPResponse};
use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create prompt manager
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create MCP server
let server = MCPServer::new(manager);
// Listen on TCP socket
let listener = TcpListener::bind("127.0.0.1:3333").await?;
println!("MCP server listening on 127.0.0.1:3333");
loop {
let (mut socket, addr) = listener.accept().await?;
let server = server.clone();
// Handle each connection
tokio::spawn(async move {
let mut buffer = vec![0; 1024];
loop {
let n = match socket.read(&mut buffer).await {
Ok(n) if n == 0 => return,
Ok(n) => n,
Err(e) => {
eprintln!("Error reading from {}: {}", addr, e);
return;
}
};
// Parse request
if let Ok(request) = serde_json::from_slice::<MCPRequest>(&buffer[..n]) {
// Handle request
let response = server.handle_request(request).await;
// Send response
let response_bytes = serde_json::to_vec(&response).unwrap();
if let Err(e) = socket.write_all(&response_bytes).await {
eprintln!("Error writing to {}: {}", addr, e);
return;
}
}
}
});
}
}
Custom Template Filters
use swissarmyhammer::{PromptManager, TemplateEngine, FilterFunction};
use liquid::ValueView;
use std::collections::HashMap;
fn create_custom_filters() -> Vec<(&'static str, FilterFunction)> {
vec![
// Custom filter to convert to snake_case
("snake_case", Box::new(|input: &dyn ValueView, _args: &[liquid::model::Value]| {
let s = input.to_kstr().to_string();
let snake = s.chars().fold(String::new(), |mut acc, ch| {
if ch.is_uppercase() && !acc.is_empty() {
acc.push('_');
}
acc.push(ch.to_lowercase().next().unwrap());
acc
});
Ok(liquid::model::Value::scalar(snake))
})),
// Custom filter to add line numbers
("line_numbers", Box::new(|input: &dyn ValueView, _args: &[liquid::model::Value]| {
let s = input.to_kstr().to_string();
let numbered = s.lines()
.enumerate()
.map(|(i, line)| format!("{:4}: {}", i + 1, line))
.collect::<Vec<_>>()
.join("\n");
Ok(liquid::model::Value::scalar(numbered))
})),
]
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create template engine with custom filters
let mut engine = TemplateEngine::new();
for (name, filter) in create_custom_filters() {
engine.register_filter(name, filter);
}
// Create prompt manager with custom engine
let manager = PromptManager::with_engine(engine)?;
// Use prompts with custom filters
let template = r#"
Function name: {{ function_name | snake_case }}
Code with line numbers:
{{ code | line_numbers }}
"#;
let mut args = HashMap::new();
args.insert("function_name", "calculateTotalPrice");
args.insert("code", "def hello():\n print('Hello')\n return True");
let rendered = engine.render_str(template, &args)?;
println!("{}", rendered);
Ok(())
}
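The `snake_case` fold used by the filter above can be exercised on its own. Extracted as a free function for illustration (note it is deliberately naive: consecutive capitals each get their own underscore, so acronyms are not handled specially):

```rust
// The snake_case conversion from the custom filter, as a standalone function.
fn to_snake_case(s: &str) -> String {
    s.chars().fold(String::new(), |mut acc, ch| {
        // Insert an underscore before every uppercase letter (except at the start).
        if ch.is_uppercase() && !acc.is_empty() {
            acc.push('_');
        }
        acc.extend(ch.to_lowercase());
        acc
    })
}

fn main() {
    assert_eq!(to_snake_case("calculateTotalPrice"), "calculate_total_price");
    assert_eq!(to_snake_case("hello"), "hello");
    // Naive acronym behavior:
    assert_eq!(to_snake_case("API"), "a_p_i");
    println!("ok");
}
```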
Prompt Validation
use swissarmyhammer::{PromptManager, PromptValidator, ValidationRule};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create custom validation rules
let rules = vec![
ValidationRule::RequiredFields(vec!["name", "title", "description"]),
ValidationRule::ArgumentTypes(HashMap::from([
("max_length", "integer"),
("temperature", "float"),
("enabled", "boolean"),
])),
ValidationRule::TemplatePatterns(vec![
r"\{\{[^}]+\}\}", // Must use double braces
]),
];
// Create validator
let validator = PromptValidator::new(rules);
// Create manager with validator
let manager = PromptManager::with_validator(validator)?;
// Load and validate prompts
match manager.load_prompts().await {
Ok(_) => println!("All prompts validated successfully"),
Err(e) => eprintln!("Validation errors: {}", e),
}
// Validate a specific prompt file
let prompt_content = std::fs::read_to_string("my-prompt.md")?;
match manager.validate_prompt_content(&prompt_content) {
Ok(prompt) => println!("Prompt '{}' is valid", prompt.name),
Err(errors) => {
println!("Validation errors:");
for error in errors {
println!(" - {}", error);
}
}
}
Ok(())
}
Batch Processing
use swissarmyhammer::{PromptManager, BatchProcessor};
use futures::stream::StreamExt;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create batch processor
let processor = BatchProcessor::new(manager, 10); // 10 concurrent tasks
// Prepare batch jobs
let jobs = vec![
("code-review", HashMap::from([
("code", "def add(a, b): return a + b"),
("language", "python"),
])),
("api-docs", HashMap::from([
("api_spec", r#"{"endpoints": ["/users", "/posts"]}"#),
("format", "markdown"),
])),
("test-writer", HashMap::from([
("code", "class Calculator { add(a, b) { return a + b; } }"),
("framework", "jest"),
])),
];
// Process in parallel
let results = processor.process_batch(jobs).await;
// Handle results
for (index, result) in results.iter().enumerate() {
match result {
Ok(rendered) => {
println!("Job {} completed:", index + 1);
println!("{}\n", rendered);
}
Err(e) => {
eprintln!("Job {} failed: {}", index + 1, e);
}
}
}
Ok(())
}
Integration with AI Services
use swissarmyhammer::{PromptManager, AIServiceClient};
use async_trait::async_trait;
use std::collections::HashMap;
// Custom AI service implementation
struct OpenAIClient {
api_key: String,
client: reqwest::Client,
}
#[async_trait]
impl AIServiceClient for OpenAIClient {
async fn complete(&self, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
let response = self.client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(&self.api_key)
.json(&serde_json::json!({
"model": "gpt-4",
"messages": [{"role": "user", "content": prompt}],
"temperature": 0.7,
}))
.send()
.await?;
let data: serde_json::Value = response.json().await?;
let content = data["choices"][0]["message"]["content"]
.as_str()
.unwrap_or("");
Ok(content.to_string())
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Setup prompt manager
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create AI client
let ai_client = OpenAIClient {
api_key: std::env::var("OPENAI_API_KEY")?,
client: reqwest::Client::new(),
};
// Get and render prompt
let prompt = manager.get_prompt("code-review")?;
let args = HashMap::from([
("code", "def factorial(n): return 1 if n <= 1 else n * factorial(n-1)"),
("language", "python"),
]);
let rendered = prompt.render(&args)?;
// Send to AI service
println!("Sending prompt to AI service...");
let response = ai_client.complete(&rendered).await?;
println!("AI Response:\n{}", response);
Ok(())
}
Web Server Integration
use swissarmyhammer::PromptManager;
use axum::{
routing::{get, post},
Router, Json, Extension,
response::IntoResponse,
http::StatusCode,
};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use std::collections::HashMap;
#[derive(Deserialize)]
struct RenderRequest {
prompt_name: String,
arguments: HashMap<String, String>,
}
#[derive(Serialize)]
struct RenderResponse {
rendered: String,
}
async fn list_prompts(
Extension(manager): Extension<Arc<PromptManager>>
) -> impl IntoResponse {
let prompts = manager.list_prompts();
Json(prompts)
}
async fn render_prompt(
Extension(manager): Extension<Arc<PromptManager>>,
Json(request): Json<RenderRequest>,
) -> impl IntoResponse {
match manager.get_prompt(&request.prompt_name) {
Ok(prompt) => match prompt.render(&request.arguments) {
Ok(rendered) => Ok(Json(RenderResponse { rendered })),
Err(e) => Err((StatusCode::BAD_REQUEST, e.to_string())),
},
Err(e) => Err((StatusCode::NOT_FOUND, e.to_string())),
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Setup prompt manager
let manager = Arc::new(PromptManager::new()?);
manager.load_prompts().await?;
// Build web app
let app = Router::new()
.route("/prompts", get(list_prompts))
.route("/render", post(render_prompt))
.layer(Extension(manager));
// Start server
let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
println!("Web server listening on http://0.0.0.0:8080");
axum::serve(listener, app).await?;
Ok(())
}
Testing Utilities
use swissarmyhammer::{PromptManager, TestHarness, TestCase};
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let manager = PromptManager::new()?;
manager.load_prompts().await?;
// Create test harness
let harness = TestHarness::new(manager);
// Define test cases
let test_cases = vec![
TestCase {
prompt_name: "code-review",
arguments: HashMap::from([
("code", "def divide(a, b): return a / b"),
("language", "python"),
]),
expected_contains: vec!["division by zero", "error handling"],
expected_not_contains: vec!["syntax error"],
},
TestCase {
prompt_name: "api-docs",
arguments: HashMap::from([
("api_spec", r#"{"version": "1.0"}"#),
]),
expected_contains: vec!["API Documentation", "version"],
expected_not_contains: vec!["error", "invalid"],
},
];
// Run tests
let results = harness.run_tests(test_cases).await;
// Report results
for (test, result) in results {
match result {
Ok(_) => println!("✓ {} passed", test.prompt_name),
Err(e) => println!("✗ {} failed: {}", test.prompt_name, e),
}
}
Ok(())
}
Error Handling
Comprehensive Error Handling
use swissarmyhammer::{PromptManager, SwissArmyHammerError};
use std::collections::HashMap;
#[tokio::main]
async fn main() {
match run_app().await {
Ok(_) => println!("Application completed successfully"),
Err(e) => {
eprintln!("Application error: {}", e);
std::process::exit(1);
}
}
}
async fn run_app() -> Result<(), SwissArmyHammerError> {
let manager = PromptManager::new()
.map_err(|e| SwissArmyHammerError::Initialization(e.to_string()))?;
// Handle different error types
match manager.load_prompts().await {
Ok(_) => println!("Prompts loaded successfully"),
Err(SwissArmyHammerError::IoError(e)) => {
eprintln!("File system error: {}", e);
return Err(SwissArmyHammerError::IoError(e));
}
Err(SwissArmyHammerError::ParseError(e)) => {
eprintln!("Prompt parsing error: {}", e);
// Continue with partial prompts
}
Err(e) => return Err(e),
}
// Safely get and render prompt
let prompt_name = "code-review";
let prompt = manager.get_prompt(prompt_name)
.map_err(|_| SwissArmyHammerError::PromptNotFound(prompt_name.to_string()))?;
let args = HashMap::from([("code", "print('hello')")]);
let rendered = prompt.render(&args)
.map_err(|e| SwissArmyHammerError::RenderError(e.to_string()))?;
println!("Rendered: {}", rendered);
Ok(())
}
Performance Optimization
Caching and Pooling
use swissarmyhammer::{PromptManager, CacheConfig, CacheStrategy, ConnectionPool};
use std::collections::HashMap;
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Configure caching
let cache_config = CacheConfig {
max_size: 100_000_000, // 100MB
ttl: Duration::from_secs(3600),
strategy: CacheStrategy::LRU,
};
// Create connection pool for MCP
let pool = ConnectionPool::builder()
.max_connections(100)
.connection_timeout(Duration::from_secs(5))
.idle_timeout(Duration::from_secs(60))
.build()?;
// Create optimized manager
let manager = PromptManager::builder()
.cache_config(cache_config)
.connection_pool(pool)
.parallel_load(true)
.build()?;
// Benchmark loading
let start = std::time::Instant::now();
manager.load_prompts().await?;
println!("Loaded prompts in {:?}", start.elapsed());
// Benchmark rendering with cache
let mut total_time = Duration::ZERO;
for i in 0..1000 {
let start = std::time::Instant::now();
let prompt = manager.get_prompt("code-review")?;
let args = HashMap::from([("code", format!("test {}", i))]);
let _ = prompt.render(&args)?;
total_time += start.elapsed();
}
println!("Average render time: {:?}", total_time / 1000);
Ok(())
}
Next Steps
- Review the Library API reference
- Learn about Library Usage patterns
- See Integration Examples for more use cases
- Check the API Documentation for detailed information
π Rustdoc API Documentation
Advanced Prompt Techniques
This guide covers advanced techniques for creating sophisticated and powerful prompts with SwissArmyHammer.
Composable Prompts
Prompt Chaining
Chain multiple prompts together for complex workflows:
---
name: full-analysis
title: Complete Code Analysis Pipeline
description: Runs multiple analysis steps on code
arguments:
- name: file_path
description: File to analyze
required: true
- name: output_format
description: Format for results
default: markdown
---
# Complete Analysis for {{file_path}}
## Step 1: Code Review
{% capture review_output %}
Run code review on {{file_path}} focusing on:
- Code quality
- Best practices
- Potential bugs
{% endcapture %}
## Step 2: Security Analysis
{% capture security_output %}
Analyze {{file_path}} for security vulnerabilities:
- Input validation
- Authentication issues
- Data exposure risks
{% endcapture %}
## Step 3: Performance Analysis
{% capture performance_output %}
Check {{file_path}} for performance issues:
- Algorithm complexity
- Resource usage
- Optimization opportunities
{% endcapture %}
{% if output_format == "markdown" %}
## Analysis Results
### Code Review
{{ review_output }}
### Security
{{ security_output }}
### Performance
{{ performance_output }}
{% elsif output_format == "json" %}
{
"code_review": "{{ review_output | escape }}",
"security": "{{ security_output | escape }}",
"performance": "{{ performance_output | escape }}"
}
{% endif %}
Modular Prompt Components
Create reusable prompt components:
---
name: code-analyzer-base
title: Base Code Analyzer
description: Reusable base for code analysis prompts
arguments:
- name: code
description: Code to analyze
required: true
- name: analysis_type
description: Type of analysis
required: true
---
{% comment %} Base analysis template {% endcomment %}
{% assign lines = code | split: "\n" %}
{% assign line_count = lines | size %}
# {{analysis_type | capitalize}} Analysis
## Code Metrics
- Lines of code: {{line_count}}
- Language: {% if code contains "def " %}Python{% elsif code contains "function" %}JavaScript{% else %}Unknown{% endif %}
## Analysis Focus
{% case analysis_type %}
{% when "security" %}
{% include "security-checks.liquid" %}
{% when "performance" %}
{% include "performance-checks.liquid" %}
{% when "style" %}
{% include "style-checks.liquid" %}
{% endcase %}
## Detailed Analysis
Analyze the following code for {{analysis_type}} issues:
{{code}}
Advanced Templating
Dynamic Content Generation
Generate content based on complex conditions:
---
name: api-documentation-generator
title: Dynamic API Documentation
description: Generates API docs with dynamic sections
arguments:
- name: api_spec
description: API specification (JSON)
required: true
- name: include_examples
description: Include code examples
default: "true"
- name: languages
description: Example languages (comma-separated)
default: "curl,python,javascript"
---
{% assign api = api_spec | parse_json %}
{% assign lang_list = languages | split: "," %}
# {{api.title}} API Documentation
{{api.description}}
Base URL: `{{api.base_url}}`
Version: {{api.version}}
## Authentication
{% if api.auth.type == "bearer" %}
This API uses Bearer token authentication. Include your API token in the Authorization header:
Authorization: Bearer YOUR_API_TOKEN
{% elsif api.auth.type == "oauth2" %}
This API uses OAuth 2.0. See [Authentication Guide](#auth-guide) for details.
{% endif %}
## Endpoints
{% for endpoint in api.endpoints %}
### {{endpoint.method}} {{endpoint.path}}
{{endpoint.description}}
{% if endpoint.parameters.size > 0 %}
#### Parameters
| Name | Type | Required | Description |
|------|------|----------|-------------|
{% for param in endpoint.parameters %}
| {{param.name}} | {{param.type}} | {{param.required | default: false}} | {{param.description}} |
{% endfor %}
{% endif %}
{% if include_examples == "true" %}
#### Examples
{% for lang in lang_list %}
{% case lang %}
{% when "curl" %}
```bash
curl -X {{endpoint.method}} \
{{api.base_url}}{{endpoint.path}} \
{% if api.auth.type == "bearer" %}-H "Authorization: Bearer $API_TOKEN" \{% endif %}
{% for param in endpoint.parameters %}{% if param.in == "header" %}-H "{{param.name}}: value" \{% endif %}{% endfor %}
{% if endpoint.method == "POST" or endpoint.method == "PUT" %}-H "Content-Type: application/json" \
-d '{"key": "value"}'{% endif %}
{% when "python" %}
import requests
response = requests.{{endpoint.method | downcase}}(
"{{api.base_url}}{{endpoint.path}}",
{% if api.auth.type == "bearer" %}headers={"Authorization": f"Bearer {api_token}"},{% endif %}
{% if endpoint.method == "POST" or endpoint.method == "PUT" %}json={"key": "value"}{% endif %}
)
print(response.json())
{% when "javascript" %}
const response = await fetch('{{api.base_url}}{{endpoint.path}}', {
method: '{{endpoint.method}}',
{% if api.auth.type == "bearer" %}headers: {
'Authorization': `Bearer ${apiToken}`,
{% if endpoint.method == "POST" or endpoint.method == "PUT" %}'Content-Type': 'application/json'{% endif %}
},{% endif %}
{% if endpoint.method == "POST" or endpoint.method == "PUT" %}body: JSON.stringify({ key: 'value' }){% endif %}
});
const data = await response.json();
{% endcase %} {% endfor %} {% endif %}
{% endfor %}
### Complex Conditionals
Use advanced conditional logic:
```markdown
---
name: smart-optimizer
title: Smart Code Optimizer
description: Applies context-aware optimizations
arguments:
- name: code
description: Code to optimize
required: true
- name: metrics
description: Performance metrics (JSON)
required: false
- name: constraints
description: Optimization constraints
default: "balanced"
---
{% if metrics %}
{% assign perf = metrics | parse_json %}
{% assign needs_memory_opt = false %}
{% assign needs_cpu_opt = false %}
{% if perf.memory_usage > 80 %}
{% assign needs_memory_opt = true %}
{% endif %}
{% if perf.cpu_usage > 70 %}
{% assign needs_cpu_opt = true %}
{% endif %}
{% endif %}
# Optimization Analysis
{% if needs_memory_opt and needs_cpu_opt %}
## Critical: Both Memory and CPU Optimization Needed
Your code is experiencing both memory and CPU pressure. This requires careful optimization to balance both concerns.
### Recommended Strategy: Hybrid Optimization
1. Profile to identify hotspots
2. Optimize algorithms first (reduces both CPU and memory)
3. Implement caching strategically
4. Consider async processing
{% elsif needs_memory_opt %}
## Memory Optimization Required
Current memory usage: {{perf.memory_usage}}%
### Memory Optimization Strategies:
1. Reduce object allocation
2. Use object pooling
3. Implement lazy loading
4. Clear unused references
{% elsif needs_cpu_opt %}
## CPU Optimization Required
Current CPU usage: {{perf.cpu_usage}}%
### CPU Optimization Strategies:
1. Algorithm optimization
2. Parallel processing
3. Caching computed results
4. Reduce unnecessary operations
{% else %}
## Performance is Acceptable
No immediate optimization needed. Consider:
- Code maintainability improvements
- Preemptive optimization for scale
- Documentation updates
{% endif %}
## Code Analysis
{{code}}
{% case constraints %}
{% when "memory-first" %}
Focus on reducing memory footprint, even at slight CPU cost.
{% when "cpu-first" %}
Optimize for CPU performance, memory usage is secondary.
{% when "balanced" %}
Balance both memory and CPU optimizations.
{% endcase %}
```
State Management
Using Captures for State
Manage complex state across prompt sections:
---
name: migration-planner
title: Database Migration Planner
description: Plans complex database migrations
arguments:
- name: current_schema
description: Current database schema
required: true
- name: target_schema
description: Target database schema
required: true
- name: strategy
description: Migration strategy
default: "safe"
---
{% comment %} Analyze schemas and capture findings {% endcomment %}
{% capture added_tables %}
{% assign current_tables = current_schema | parse_json | map: "name" %}
{% assign target_tables = target_schema | parse_json | map: "name" %}
{% for table in target_tables %}
{% unless current_tables contains table %}
- {{table}}
{% endunless %}
{% endfor %}
{% endcapture %}
{% capture removed_tables %}
{% for table in current_tables %}
{% unless target_tables contains table %}
- {{table}}
{% endunless %}
{% endfor %}
{% endcapture %}
{% capture migration_risk %}
{% if removed_tables contains "users" or removed_tables contains "auth" %}
HIGH - Critical tables being removed
{% elsif added_tables.size > 5 %}
MEDIUM - Large number of new tables
{% else %}
LOW - Minimal structural changes
{% endif %}
{% endcapture %}
# Database Migration Plan
## Risk Assessment: {{migration_risk | strip}}
## Changes Summary
### New Tables
{{added_tables | default: "None"}}
### Removed Tables
{{removed_tables | default: "None"}}
## Migration Strategy: {{strategy | upcase}}
{% if strategy == "safe" %}
### Safe Migration Steps
1. Create backup
2. Add new tables first
3. Migrate data with validation
4. Update application code
5. Remove old tables after verification
{% elsif strategy == "fast" %}
### Fast Migration Steps
1. Quick backup
2. Execute all changes in transaction
3. Minimal validation
4. Quick rollback if needed
{% elsif strategy == "zero-downtime" %}
### Zero-Downtime Migration Steps
1. Create new tables alongside old
2. Implement dual-write logic
3. Backfill data progressively
4. Switch reads to new tables
5. Remove old tables after stabilization
{% endif %}
{% if migration_risk contains "HIGH" %}
## ⚠️ High Risk Mitigation
Due to the high risk nature of this migration:
1. Schedule during maintenance window
2. Have rollback plan ready
3. Test in staging environment first
4. Monitor closely after deployment
{% endif %}
## Performance Optimization

### Lazy Evaluation

Use lazy evaluation for expensive operations:
---
name: smart-analyzer
title: Smart Performance Analyzer
description: Analyzes code with lazy evaluation
arguments:
- name: code
description: Code to analyze
required: true
- name: quick_check
description: Perform quick check only
default: "false"
---
# Code Analysis
{% if quick_check == "true" %}
## Quick Analysis
- Lines: {{code | split: "\n" | size}}
- Complexity: {{code | size | divided_by: 100}} (estimated)
{% else %}
{% comment %} Full analysis only when needed {% endcomment %}
{% capture complexity_analysis %}
{% assign lines = code | split: "\n" %}
{% assign complexity = 0 %}
{% for line in lines %}
{% if line contains "if " or line contains "for " or line contains "while " %}
{% assign complexity = complexity | plus: 1 %}
{% endif %}
{% endfor %}
Cyclomatic Complexity: {{complexity}}
{% endcapture %}
{% capture pattern_analysis %}
{% if code contains "TODO" or code contains "FIXME" %}
- Contains pending work items
{% endif %}
{% if code contains "console.log" or code contains "print(" %}
- Contains debug output
{% endif %}
{% endcapture %}
## Full Analysis
### Metrics
{{complexity_analysis}}
### Code Patterns
{{pattern_analysis | default: "No issues found"}}
### Detailed Review
Analyze the code for:
1. Performance bottlenecks
2. Security vulnerabilities
3. Best practice violations
{{code}}
{% endif %}
### Caching Computed Values

Cache expensive computations:
---
name: data-processor
title: Efficient Data Processor
description: Processes data with caching
arguments:
- name: data
description: Data to process (CSV or JSON)
required: true
- name: operations
description: Operations to perform
required: true
---
{% comment %} Cache parsed data {% endcomment %}
{% assign is_json = false %}
{% assign is_csv = false %}
{% if data contains "{" and data contains "}" %}
{% assign is_json = true %}
{% assign parsed_data = data | parse_json %}
{% elsif data contains "," %}
{% assign is_csv = true %}
{% comment %} Cache row count {% endcomment %}
{% assign rows = data | split: "\n" %}
{% assign row_count = rows | size %}
{% endif %}
# Data Processing
## Data Format: {% if is_json %}JSON{% elsif is_csv %}CSV ({{row_count}} rows){% else %}Unknown{% endif %}
{% comment %} Reuse cached values {% endcomment %}
{% assign op_list = operations | split: "," %}
{% for operation in op_list %}
{% case operation %}
{% when "count" %}
- Count: {% if is_json %}{{parsed_data | size}}{% else %}{{row_count}}{% endif %}
{% when "validate" %}
- Validation: {% if is_json %}Valid JSON{% elsif is_csv %}Valid CSV{% endif %}
{% endcase %}
{% endfor %}
## Error Handling

### Graceful Degradation

Handle errors gracefully:
---
name: robust-analyzer
title: Robust Code Analyzer
description: Analyzes code with error handling
arguments:
- name: code
description: Code to analyze
required: true
- name: language
description: Programming language
default: "auto"
---
# Code Analysis
{% comment %} Safe language detection {% endcomment %}
{% assign detected_language = "unknown" %}
{% if language == "auto" %}
{% if code contains "def " and code contains ":" %}
{% assign detected_language = "python" %}
{% elsif code contains "function" or code contains "const " %}
{% assign detected_language = "javascript" %}
{% elsif code contains "fn " and code contains "->" %}
{% assign detected_language = "rust" %}
{% endif %}
{% else %}
{% assign detected_language = language %}
{% endif %}
## Language: {{detected_language | capitalize}}
{% comment %} Safe parsing with fallbacks {% endcomment %}
{% assign parse_success = false %}
{% capture parsed_structure %}
{% if detected_language == "python" %}
{% comment %} Python-specific parsing {% endcomment %}
{% assign functions = code | split: "def " | size | minus: 1 %}
{% assign classes = code | split: "class " | size | minus: 1 %}
Functions: {{functions}}, Classes: {{classes}}
{% assign parse_success = true %}
{% elsif detected_language == "javascript" %}
{% comment %} JavaScript-specific parsing {% endcomment %}
{% assign functions = code | split: "function" | size | minus: 1 %}
{% assign arrows = code | split: "=>" | size | minus: 1 %}
Functions: {{functions | plus: arrows}}
{% assign parse_success = true %}
{% endif %}
{% endcapture %}
{% if parse_success %}
## Structure Analysis
{{parsed_structure}}
{% else %}
## Basic Analysis
Unable to parse structure for {{detected_language}}.
Falling back to general analysis:
- Lines: {{code | split: "\n" | size}}
- Characters: {{code | size}}
{% endif %}
## Code Review
Analyze the following {{detected_language}} code:
```{{detected_language}}
{{code}}
```
### Input Validation
Validate and sanitize inputs:
```markdown
---
name: secure-processor
title: Secure Input Processor
description: Processes inputs with validation
arguments:
- name: user_input
description: User-provided input
required: true
- name: input_type
description: Expected input type
required: true
- name: max_length
description: Maximum allowed length
default: "1000"
---
{% comment %} Input validation {% endcomment %}
{% assign is_valid = true %}
{% assign validation_errors = "" %}
{% comment %} Length check {% endcomment %}
{% assign input_length = user_input | size %}
{% if input_length > max_length %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Input exceeds maximum length ({{input_length}} > {{max_length}}){% endcapture %}
{% endif %}
{% comment %} Type validation {% endcomment %}
{% case input_type %}
{% when "email" %}
{% unless user_input contains "@" and user_input contains "." %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Invalid email format{% endcapture %}
{% endunless %}
{% when "number" %}
{% assign test_number = user_input | plus: 0 %}
{% if test_number == 0 and user_input != "0" %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Input is not a valid number{% endcapture %}
{% endif %}
{% when "json" %}
{% capture json_test %}{{user_input | parse_json}}{% endcapture %}
{% unless json_test %}
{% assign is_valid = false %}
{% capture validation_errors %}{{validation_errors}}
- Invalid JSON format{% endcapture %}
{% endunless %}
{% endcase %}
# Input Processing Result
## Validation: {% if is_valid %}✅ Passed{% else %}❌ Failed{% endif %}
{% unless is_valid %}
## Validation Errors:
{{validation_errors}}
{% endunless %}
{% if is_valid %}
## Processing Input
Type: {{input_type}}
Length: {{input_length}} characters
### Sanitized Input:
{{user_input | strip | escape}}
### Next Steps:
Process the validated {{input_type}} input according to business logic.
{% else %}
## Cannot Process Invalid Input
Please fix the validation errors and try again.
{% endif %}
```
## Integration Patterns

### External Tool Integration

Integrate with external tools and services:
---
name: ci-cd-analyzer
title: CI/CD Pipeline Analyzer
description: Analyzes CI/CD configurations
arguments:
- name: pipeline_config
description: CI/CD configuration file
required: true
- name: platform
description: CI/CD platform (github, gitlab, jenkins)
required: true
- name: recommendations
description: Include recommendations
default: "true"
---
# CI/CD Pipeline Analysis
Platform: {{platform | capitalize}}
{% assign config = pipeline_config %}
## Pipeline Structure
{% case platform %}
{% when "github" %}
{% if config contains "on:" %}
### Triggers
- Configured triggers found
{% if config contains "push:" %}✓ Push events{% endif %}
{% if config contains "pull_request:" %}✓ Pull request events{% endif %}
{% if config contains "schedule:" %}✓ Scheduled runs{% endif %}
{% endif %}
{% if config contains "jobs:" %}
### Jobs
{% assign job_count = config | split: "jobs:" | last | split: ":" | size %}
- Number of jobs: ~{{job_count}}
{% endif %}
{% when "gitlab" %}
{% if config contains "stages:" %}
### Stages
- Pipeline stages defined
{% endif %}
{% if config contains "before_script:" %}
### Global Configuration
- Global before_script found
{% endif %}
{% when "jenkins" %}
{% if config contains "pipeline {" %}
### Pipeline Type
- Declarative pipeline
{% elsif config contains "node {" %}
- Scripted pipeline
{% endif %}
{% endcase %}
## Security Analysis
{% capture security_issues %}
{% if config contains "secrets." or config contains "${{" %}
- ✅ Uses secure secret management
{% endif %}
{% if config contains "password" or config contains "api_key" %}
- ⚠️ Possible hardcoded credentials
{% endif %}
{% if platform == "github" and config contains "actions/checkout" %}
{% unless config contains "actions/checkout@v" %}
- ⚠️ Using unpinned actions
{% endunless %}
{% endif %}
{% endcapture %}
{{security_issues | default: "No security issues found"}}
{% if recommendations == "true" %}
## Recommendations
{% case platform %}
{% when "github" %}
1. Use specific action versions (e.g., `actions/checkout@v3`)
2. Implement job dependencies for efficiency
3. Use matrix builds for multiple versions
4. Cache dependencies for faster builds
{% when "gitlab" %}
1. Use DAG for job dependencies
2. Implement proper stage dependencies
3. Use artifacts for job communication
4. Enable pipeline caching
{% when "jenkins" %}
1. Use declarative pipeline syntax
2. Implement proper error handling
3. Use Jenkins shared libraries
4. Enable pipeline visualization
{% endcase %}
### General Best Practices
- Implement proper testing stages
- Add security scanning steps
- Use parallel execution where possible
- Monitor pipeline metrics
{% endif %}
## Raw Configuration
```yaml
{{pipeline_config}}
```
## Advanced Examples
### Multi-Stage Document Generator
```markdown
---
name: tech-doc-generator
title: Technical Documentation Generator
description: Generates comprehensive technical documentation
arguments:
- name: project_info
description: Project information (JSON)
required: true
- name: doc_sections
description: Sections to include (comma-separated)
default: "overview,architecture,api,deployment"
- name: audience
description: Target audience
default: "developers"
---
{% assign project = project_info | parse_json %}
{% assign sections = doc_sections | split: "," %}
# {{project.name}} Technical Documentation
Version: {{project.version}}
Last Updated: {% assign date = 'now' | date: "%B %d, %Y" %}{{date}}
{% for section in sections %}
{% case section | strip %}
{% when "overview" %}
## Overview
{{project.description}}
### Key Features
{% for feature in project.features %}
- **{{feature.name}}**: {{feature.description}}
{% endfor %}
### Technology Stack
{% for tech in project.stack %}
- {{tech.name}} ({{tech.version}}) - {{tech.purpose}}
{% endfor %}
{% when "architecture" %}
## Architecture
### System Components
{% for component in project.components %}
#### {{component.name}}
- **Type**: {{component.type}}
- **Responsibility**: {{component.responsibility}}
- **Dependencies**: {% for dep in component.dependencies %}{{dep}}{% unless forloop.last %}, {% endunless %}{% endfor %}
{% endfor %}
### Data Flow
{% for flow in project.dataflows %}
- {{flow.source}} → {{flow.destination}}: {{flow.description}}
{% endfor %}
{% when "api" %}
## API Reference
Base URL: `{{project.api.base_url}}`
### Authentication
{{project.api.auth.description}}
### Endpoints
{% for endpoint in project.api.endpoints %}
#### {{endpoint.method}} {{endpoint.path}}
{{endpoint.description}}
**Parameters:**
{% for param in endpoint.parameters %}
- `{{param.name}}` ({{param.type}}{% if param.required %}, required{% endif %}) - {{param.description}}
{% endfor %}
**Response:** {{endpoint.response.description}}
{% endfor %}
{% when "deployment" %}
## Deployment Guide
### Prerequisites
{% for prereq in project.deployment.prerequisites %}
- {{prereq}}
{% endfor %}
### Environment Variables
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
{% for env in project.deployment.env_vars %}
| {{env.name}} | {{env.description}} | {{env.required}} | {{env.default | default: "none"}} |
{% endfor %}
### Deployment Steps
{% for step in project.deployment.steps %}
{{forloop.index}}. {{step.description}}
   `{{step.command}}`
{% endfor %}
{% endcase %}
{% endfor %}
Generated for {{audience}} by SwissArmyHammer
```
### Intelligent Code Refactoring Assistant
```markdown
---
name: refactoring-assistant
title: Intelligent Refactoring Assistant
description: Provides context-aware refactoring suggestions
arguments:
- name: code
description: Code to refactor
required: true
- name: code_metrics
description: Code metrics (JSON)
required: false
- name: refactor_goals
description: Refactoring goals (comma-separated)
default: "readability,maintainability,performance"
- name: preserve_behavior
description: Ensure behavior preservation
default: "true"
---
{% if code_metrics %}
{% assign metrics = code_metrics | parse_json %}
{% endif %}
# Refactoring Analysis
## Current Code Metrics
{% if metrics %}
- Complexity: {{metrics.complexity}}
- Lines: {{metrics.lines}}
- Duplication: {{metrics.duplication}}%
- Test Coverage: {{metrics.coverage}}%
{% else %}
- Lines: {{code | split: "\n" | size}}
{% endif %}
## Refactoring Goals
{% assign goals = refactor_goals | split: "," %}
{% for goal in goals %}
- {{goal | strip | capitalize}}
{% endfor %}
## Analysis
{{code}}
{% capture refactoring_plan %}
{% for goal in goals %}
{% case goal | strip %}
{% when "readability" %}
### Readability Improvements
1. Extract complex conditionals into well-named functions
2. Replace magic numbers with named constants
3. Improve variable and function names
4. Add clarifying comments for complex logic
{% when "maintainability" %}
### Maintainability Enhancements
1. Apply SOLID principles
2. Reduce coupling between components
3. Extract reusable components
4. Improve error handling
{% when "performance" %}
### Performance Optimizations
1. Identify and optimize bottlenecks
2. Reduce unnecessary iterations
3. Implement caching where appropriate
4. Optimize data structures
{% when "testability" %}
### Testability Improvements
1. Extract pure functions
2. Reduce dependencies
3. Implement dependency injection
4. Separate business logic from I/O
{% endcase %}
{% endfor %}
{% endcapture %}
{{refactoring_plan}}
{% if preserve_behavior == "true" %}
## Behavior Preservation Strategy
To ensure the refactoring preserves behavior:
1. **Write characterization tests** before refactoring
2. **Refactor in small steps** with tests passing
3. **Use automated refactoring tools** where possible
4. **Compare outputs** before and after changes
### Suggested Test Cases
Based on the code analysis, ensure tests cover:
- Edge cases and boundary conditions
- Error handling paths
- Main business logic flows
- Integration points
{% endif %}
## Refactoring Priority
{% if metrics %}
{% if metrics.complexity > 10 %}
**High Priority**: Reduce complexity first - current complexity of {{metrics.complexity}} is too high
{% elsif metrics.duplication > 20 %}
**High Priority**: Address code duplication - {{metrics.duplication}}% duplication detected
{% elsif metrics.coverage < 60 %}
**High Priority**: Improve test coverage before refactoring - only {{metrics.coverage}}% covered
{% else %}
**Normal Priority**: Code is in reasonable shape for refactoring
{% endif %}
{% else %}
Based on initial analysis, focus on readability and structure improvements.
{% endif %}
## Next Steps
1. Review the refactoring plan
2. Set up safety nets (tests, version control)
3. Apply refactorings incrementally
4. Validate behavior preservation
5. Update documentation
```
## Best Practices

### 1. Use Meaningful Variable Names

```liquid
{% comment %} Bad {% endcomment %}
{% assign x = data | split: "," %}

{% comment %} Good {% endcomment %}
{% assign csv_rows = data | split: "," %}
```
### 2. Cache Expensive Operations

```liquid
{% comment %} Cache parsed data {% endcomment %}
{% assign parsed_json = data | parse_json %}
{% comment %} Reuse parsed_json multiple times {% endcomment %}
```
### 3. Provide Fallbacks

```liquid
{{variable | default: "No value provided"}}
```
### 4. Use Comments for Complex Logic

```liquid
{% comment %}
Check if the code is Python by looking for specific syntax
This is more reliable than file extension alone
{% endcomment %}
{% if code contains "def " and code contains ":" %}
{% assign language = "python" %}
{% endif %}
```
### 5. Modularize with Captures

```liquid
{% capture header %}
# {{title}}
Generated on: {{date}}
{% endcapture %}
{% comment %} Reuse header in multiple places {% endcomment %}
{{header}}
```
## Next Steps
- Explore Custom Filters for extending functionality
- Learn about Prompt Organization for managing complex prompts
- See Examples for more real-world scenarios
- Read Template Variables for Liquid syntax reference
# Search and Discovery Guide

SwissArmyHammer provides powerful search capabilities to help you discover and find prompts in your collection. This guide covers search strategies, advanced filtering, and integration workflows.

## Basic Search

### Simple Text Search

The most basic way to search is with a simple text query:

```bash
# Search for prompts containing "code"
swissarmyhammer search code

# Search for multiple terms
swissarmyhammer search "code review"

# Search with partial matches
swissarmyhammer search debug
```
### Search Results Format

```
Found 3 prompts matching "code":

📦 code-review (builtin)
   Review code for best practices and potential issues
   Arguments: code, language (optional)

👤 debug-helper (user)
   Help debug programming issues and errors
   Arguments: error, context (optional)

📁 analyze-performance (local)
   Analyze code performance and suggest optimizations
   Arguments: code, language, metrics (optional)
```

Each result shows:

- Icon: Indicates prompt type (📦 builtin, 👤 user, 📁 local)
- Name: Prompt identifier
- Source: Where the prompt is stored
- Description: Brief description of the prompt's purpose
- Arguments: Required and optional parameters
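For scripted use, the same information is available via the `--json` flag documented later in this guide. A minimal Python sketch (the payload below is an illustrative sample mirroring the documented shape, not live output):

```python
import json

# Illustrative sample of `swissarmyhammer search --json` output; the shape
# mirrors the example shown in "JSON Output for Scripting".
raw = """
{
  "query": "code review",
  "total_found": 2,
  "results": [
    {"id": "code-review", "source": "builtin", "score": 0.95,
     "arguments": [{"name": "code", "required": true},
                   {"name": "language", "required": false}]},
    {"id": "debug-helper", "source": "user", "score": 0.80,
     "arguments": [{"name": "error", "required": true}]}
  ]
}
"""

data = json.loads(raw)

# Print the same fields the plain-text listing shows: name, source,
# and the required arguments, highest score first.
for result in sorted(data["results"], key=lambda r: r["score"], reverse=True):
    required = [a["name"] for a in result["arguments"] if a["required"]]
    print(f"{result['id']} ({result['source']}) - requires: {', '.join(required)}")
```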
## Field-Specific Search

### Search in Titles Only

```bash
# Find prompts with "review" in the title
swissarmyhammer search --in title review

# Case-sensitive title search
swissarmyhammer search --in title --case-sensitive "Code Review"
```

### Search in Descriptions

```bash
# Find prompts about debugging in descriptions
swissarmyhammer search --in description debug

# Find prompts mentioning specific technologies
swissarmyhammer search --in description "python javascript"
```

### Search in Content

```bash
# Find prompts that use specific template variables
swissarmyhammer search --in content "{{code}}"

# Find prompts with specific instructions
swissarmyhammer search --in content "best practices"
```

### Search All Fields

```bash
# Search across titles, descriptions, and content (default)
swissarmyhammer search --in all "security"

# Explicit all-field search
swissarmyhammer search "API documentation"
```
## Advanced Search Techniques

### Regular Expression Search

Use regex patterns for powerful pattern matching:

```bash
# Find prompts with "test" followed by any word
swissarmyhammer search --regex "test\s+\w+"

# Find prompts starting with specific words
swissarmyhammer search --regex "^(debug|fix|analyze)"

# Find prompts with email patterns
swissarmyhammer search --regex "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b"

# Case-sensitive regex
swissarmyhammer search --regex --case-sensitive "^Code"
```

### Search by Source

Filter prompts by their source location:

```bash
# Find only built-in prompts
swissarmyhammer search --source builtin

# Find only user-created prompts
swissarmyhammer search --source user

# Find only local project prompts
swissarmyhammer search --source local

# Combine with text search
swissarmyhammer search review --source user
```

### Search by Arguments

Find prompts based on their argument requirements:

```bash
# Find prompts that accept a "code" argument
swissarmyhammer search --has-arg code

# Find prompts with no arguments (simple prompts)
swissarmyhammer search --no-args

# Find prompts with specific argument combinations
swissarmyhammer search --has-arg code --has-arg language

# Combine with text search
swissarmyhammer search debug --has-arg error
```
## Search Strategies

### Discovery Workflows

#### Finding Prompts for a Task

```bash
# 1. Start broad
swissarmyhammer search "code review"

# 2. Narrow down by context
swissarmyhammer search "code review" --source user

# 3. Check argument requirements
swissarmyhammer search "code review" --has-arg language

# 4. Examine specific matches
swissarmyhammer search --in title "Advanced Code Review"
```

#### Exploring Available Prompts

```bash
# See all available prompts
swissarmyhammer search --limit 50 ""

# Browse by category/topic
swissarmyhammer search documentation
swissarmyhammer search testing
swissarmyhammer search refactoring

# Find simple prompts (no arguments)
swissarmyhammer search --no-args
```

#### Finding Template Examples

```bash
# Find prompts using loops
swissarmyhammer search --in content "{% for"

# Find prompts with conditionals
swissarmyhammer search --in content "{% if"

# Find prompts using specific filters
swissarmyhammer search --in content "| capitalize"
```
## Search Optimization

### Performance Tips

```bash
# Limit results for faster response
swissarmyhammer search --limit 10 query

# Use specific fields to reduce search scope
swissarmyhammer search --in title query  # faster than all fields

# Use source filtering to narrow search space
swissarmyhammer search --source user query
```

### Precision vs. Recall

```bash
# High precision (exact matches)
swissarmyhammer search --case-sensitive --regex "^exact pattern$"

# High recall (find everything related)
swissarmyhammer search --in all "broad topic"

# Balanced approach
swissarmyhammer search "specific terms" --limit 20
```
## Integration with Other Commands

### Search and Test Workflow

```bash
# Find debugging prompts
swissarmyhammer search debug

# Test a specific one
swissarmyhammer test debug-helper

# Test with specific arguments
swissarmyhammer test debug-helper --arg error="TypeError: undefined"
```

### Search and Export Workflow

```bash
# Find all review-related prompts
swissarmyhammer search review --limit 20

# Export specific ones found
swissarmyhammer export code-review security-review design-review output.tar.gz

# Or export all matching a pattern
# (manual selection based on search results)
```
### Scripted Search

```bash
#!/bin/bash
# find-and-test.sh

QUERY="$1"
if [ -z "$QUERY" ]; then
    echo "Usage: $0 <search-query>"
    exit 1
fi

echo "Searching for: $QUERY"
PROMPTS=$(swissarmyhammer search --json "$QUERY" | jq -r '.results[].id')

if [ -z "$PROMPTS" ]; then
    echo "No prompts found"
    exit 1
fi

echo "Found prompts:"
echo "$PROMPTS"

echo "Select a prompt to test:"
select PROMPT in $PROMPTS; do
    if [ -n "$PROMPT" ]; then
        swissarmyhammer test "$PROMPT"
        break
    fi
done
```
## JSON Output for Scripting

### Basic JSON Search

```bash
swissarmyhammer search --json "code review"
```

```json
{
  "query": "code review",
  "total_found": 3,
  "results": [
    {
      "id": "code-review",
      "title": "Code Review Helper",
      "description": "Review code for best practices and potential issues",
      "source": "builtin",
      "path": "/builtin/review/code.md",
      "arguments": [
        {"name": "code", "required": true},
        {"name": "language", "required": false, "default": "auto-detect"}
      ],
      "score": 0.95
    }
  ]
}
```

### Processing JSON Results

```bash
# Extract prompt IDs
swissarmyhammer search --json query | jq -r '.results[].id'

# Get highest scoring result
swissarmyhammer search --json query | jq -r '.results[0].id'

# Filter by score threshold
swissarmyhammer search --json query | jq '.results[] | select(.score > 0.8)'

# Count results by source
swissarmyhammer search --json "" --limit 100 | jq '.results | group_by(.source) | map({source: .[0].source, count: length})'
```
## Search Index Management

### Understanding the Search Index
SwissArmyHammer automatically maintains a search index that includes:
- Prompt titles - Weighted heavily in scoring
- Descriptions - Medium weight
- Content text - Lower weight
- Argument names - Considered for relevance
- File paths - Used for source filtering
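The exact ranking algorithm is internal, but the effect of field weighting can be sketched as follows (the weights and scoring function here are invented for illustration, not SwissArmyHammer's actual values):

```python
# Hypothetical field weights: titles count most, content least.
FIELD_WEIGHTS = {"title": 3.0, "description": 2.0, "content": 1.0}

def score(prompt: dict, query: str) -> float:
    """Sum occurrences of the query in each field, scaled by field weight."""
    q = query.lower()
    return sum(
        weight * prompt.get(field, "").lower().count(q)
        for field, weight in FIELD_WEIGHTS.items()
    )

prompts = [
    {"id": "code-review", "title": "Code Review Helper",
     "description": "Review code for best practices",
     "content": "Please review the following code"},
    {"id": "debug-helper", "title": "Debug Helper",
     "description": "Help debug code errors",
     "content": "Explain this error"},
]

# A title match outranks the same match buried in content.
ranked = sorted(prompts, key=lambda p: score(p, "code"), reverse=True)
print([p["id"] for p in ranked])
```

This is why searching for a word that appears in a prompt's title surfaces that prompt above one that only mentions the word in its body.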
### Index Updates

The search index is automatically updated when:

- Prompts are added to the library
- Existing prompts are modified
- The `serve` command starts (full rebuild)
- File watching detects changes
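Incremental updates depend on noticing which prompt files changed. As an illustration only (the real server reacts to native file-system events rather than comparing modification times like this sketch):

```python
import os
import tempfile
import time

def snapshot(directory):
    """Map each .md prompt file to its last-modification time."""
    return {
        name: os.path.getmtime(os.path.join(directory, name))
        for name in os.listdir(directory)
        if name.endswith(".md")
    }

def changed_files(before, after):
    """Files that are new or whose mtime differs need reindexing."""
    return sorted(name for name, mtime in after.items() if before.get(name) != mtime)

with tempfile.TemporaryDirectory() as prompts_dir:
    path = os.path.join(prompts_dir, "code-review.md")
    with open(path, "w") as f:
        f.write("---\ntitle: Code Review\n---\n")
    before = snapshot(prompts_dir)

    # Simulate an edit by bumping the file's modification time.
    later = time.time() + 60
    os.utime(path, (later, later))

    print(changed_files(before, snapshot(prompts_dir)))
```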
### Performance Characteristics
- Index size: Proportional to prompt collection size
- Search speed: Sub-second for collections up to 10,000 prompts
- Memory usage: Moderate (index kept in memory)
- Update speed: Fast incremental updates
## Troubleshooting Search Issues

### No Results Found

```bash
# Check if prompts exist
swissarmyhammer search --limit 100 ""

# Verify prompt sources
swissarmyhammer search --source builtin
swissarmyhammer search --source user
swissarmyhammer search --source local

# Try broader search
swissarmyhammer search --in all "partial terms"
```

### Too Many Results

```bash
# Use more specific terms
swissarmyhammer search "specific exact phrase"

# Limit by source
swissarmyhammer search broad-term --source user

# Use field-specific search
swissarmyhammer search --in title specific-title

# Limit result count
swissarmyhammer search broad-term --limit 5
```

### Unexpected Results

```bash
# Check what's being matched
swissarmyhammer search --full query

# Use exact matching
swissarmyhammer search --regex "^exact term$"

# Search in specific field
swissarmyhammer search --in description query
```
## Best Practices

### Effective Search Terms

- Use specific terms: "REST API documentation" vs. "API"
- Include context: "Python debugging" vs. "debugging"
- Try synonyms: "review", "analyze", "examine"
- Use argument names: Search for "code", "error", "data" to find relevant prompts

### Search Workflow Patterns

- Start broad, narrow down: Begin with general terms, add filters
- Use multiple strategies: Try both fuzzy and regex search
- Check all sources: Don't assume prompts are only in one location
- Combine with testing: Always test prompts before using

### Organization for Searchability

- Clear titles: Use descriptive, searchable titles
- Good descriptions: Include keywords and use cases
- Consistent naming: Use standard terms across prompts
- Tag with arguments: Use predictable argument names
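Put together, a searchable prompt's front matter might look like this (a made-up example):

```markdown
---
title: Python Security Review
description: Reviews Python code for security issues such as injection, unsafe deserialization, and secrets in source
arguments:
- name: code
  description: The Python code to review
  required: true
---
```

The title names the language and task, the description repeats likely search keywords, and the argument uses the conventional name `code`.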
## Advanced Examples

### Finding Template Patterns

```bash
# Find prompts using custom filters
swissarmyhammer search --in content "format_lang"

# Find prompts with error handling
swissarmyhammer search --in content "default:"

# Find prompts with loops
swissarmyhammer search --in content "{% for"
```

### Building Prompt Collections

```bash
# Find all code-related prompts
swissarmyhammer search --regex "(code|programming|software)" --limit 50

# Find all documentation prompts
swissarmyhammer search --regex "(doc|documentation|readme|guide)" --limit 30

# Find all analysis prompts
swissarmyhammer search --regex "(analy|review|audit|inspect)" --limit 20
```

### Quality Assurance

```bash
# Find prompts without descriptions
swissarmyhammer search --in description "^$" --regex

# Find prompts with no arguments (might need descriptions)
swissarmyhammer search --no-args --limit 50

# Find prompts with many arguments (might be complex)
swissarmyhammer search --json "" --limit 100 | \
  jq '.results[] | select(.arguments | length > 5)'
```
## See Also

- `search` command - Command reference
- `test` command - Testing found prompts
- Prompt Organization - Organizing for discoverability
# Testing and Debugging Guide
This guide covers testing strategies, debugging techniques, and best practices for working with SwissArmyHammer prompts.
## Interactive Testing

### Basic Testing Workflow

The `test` command provides an interactive environment for testing prompts:

```bash
# Start interactive testing
swissarmyhammer test code-review
```
This will:
- Load the specified prompt
- Prompt for required arguments
- Show optional arguments with defaults
- Render the template
- Display the result
- Offer additional actions (copy, save, retry)
### Testing with Predefined Arguments

```bash
# Test with known arguments
swissarmyhammer test code-review \
  --arg code="fn main() { println!(\"Hello\"); }" \
  --arg language="rust"

# Copy result directly to clipboard
swissarmyhammer test email-template \
  --arg recipient="John" \
  --arg subject="Meeting" \
  --copy
```
## Debugging Template Issues

### Common Template Problems

#### Missing Variables

```liquid
<!-- Problem: undefined variable -->
Hello {{name}}

<!-- Solution: provide default -->
Hello {{ name | default: "Guest" }}
```

#### Type Mismatches

```liquid
<!-- Problem: trying to use string methods on numbers -->
{{ count | upcase }}

<!-- Solution: convert types -->
{{ count | append: " items" }}
```

#### Loop Issues

```liquid
<!-- Problem: not checking for empty arrays -->
{% for item in items %}
- {{ item }}
{% endfor %}

<!-- Solution: check array exists and has items -->
{% if items and items.size > 0 %}
{% for item in items %}
- {{ item }}
{% endfor %}
{% else %}
No items found.
{% endif %}
```
### Debug Mode

Use debug mode to see detailed template processing:

```bash
swissarmyhammer test prompt-name --debug
```
Debug output includes:
- Variable resolution steps
- Filter application results
- Conditional evaluation
- Loop iteration details
- Performance timing
## Validation Strategies

### Argument Validation

Test with different argument combinations:

```bash
# Test required arguments only
swissarmyhammer test prompt-name --arg required_arg="value"

# Test with all arguments
swissarmyhammer test prompt-name \
  --arg required_arg="value" \
  --arg optional_arg="optional_value"

# Test with edge cases
swissarmyhammer test prompt-name \
  --arg text="" \
  --arg number="0" \
  --arg array="[]"
```
### Template Edge Cases

Create test cases for common scenarios:
- Empty inputs
- Very long inputs
- Special characters
- Unicode content
- Null/undefined values
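One way to exercise these cases systematically is to generate the test commands from a table of edge-case values. In this sketch the prompt name `my-prompt` and argument `text` are placeholders; printing the commands keeps the script runnable without the CLI installed (pipe the output to `sh` to actually run them):

```python
import shlex

# Edge-case inputs drawn from the checklist above.
EDGE_CASES = {
    "empty": "",
    "very_long": "x" * 10_000,
    "special_chars": "a'b\"c $(echo hi)",
    "unicode": "héllo wörld 你好",
    "null_like": "null",
}

# Print one safely quoted command per case.
for label, value in EDGE_CASES.items():
    cmd = ["swissarmyhammer", "test", "my-prompt", "--arg", f"text={value}"]
    print(f"# edge case: {label}")
    print(" ".join(shlex.quote(part) for part in cmd))
```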
### Automated Testing

For prompt libraries, create test scripts:

```bash
#!/bin/bash
# test-all-prompts.sh

PROMPTS=$(swissarmyhammer search --json "" --limit 100 | jq -r '.results[].id')

for prompt in $PROMPTS; do
    echo "Testing $prompt..."
    if swissarmyhammer test "$prompt" --arg placeholder="test" 2>/dev/null; then
        echo "✓ $prompt"
    else
        echo "✗ $prompt"
    fi
done
```
## Performance Testing

### Measuring Render Time

```bash
# Time a complex template
time swissarmyhammer test complex-template \
  --arg large_data="$(cat large-file.json)"

# Use debug mode for detailed timing
swissarmyhammer test template-name --debug | grep "Performance:"
```

### Memory Usage Testing

For large templates or data:

```bash
# Monitor memory usage during rendering
/usr/bin/time -v swissarmyhammer test large-template \
  --arg big_data="$(cat massive-dataset.json)"
```
## Best Practices

### Writing Testable Prompts
- Provide sensible defaults for optional arguments
- Handle empty/null inputs gracefully
- Use meaningful argument names
- Include example values in descriptions
- Test with realistic data sizes
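These practices combined, in a small hypothetical prompt that defaults its optional argument and degrades gracefully on empty input:

```markdown
---
title: Text Summarizer
description: Summarizes text such as meeting notes or articles
arguments:
- name: text
  description: The text to summarize, e.g. a meeting transcript
  required: true
- name: style
  description: Output style, e.g. "bullets" or "paragraph"
  required: false
  default: "bullets"
---
{% if text == blank %}
No text was provided; nothing to summarize.
{% else %}
Summarize the following as {{ style | default: "bullets" }}:

{{ text }}
{% endif %}
```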
### Testing Workflow
- Start simple: Test with minimal arguments
- Add complexity: Test with full argument sets
- Test edge cases: Empty, null, large inputs
- Validate output: Ensure rendered content makes sense
- Performance check: Verify reasonable render times
Debugging Tips
- Use debug mode for complex templates
- Test filters individually in simple templates
- Validate JSON/YAML with external tools
- Check argument types match expectations
- Use raw mode to see unprocessed templates
Integration with Development
IDE Integration
Many editors support SwissArmyHammer testing:
# VS Code task example
{
"version": "2.0.0",
"tasks": [
{
"label": "Test Current Prompt",
"type": "shell",
"command": "swissarmyhammer",
"args": ["test", "${fileBasenameNoExtension}"],
"group": "test",
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared"
}
}
]
}
Continuous Integration
Add prompt testing to CI pipelines:
# .github/workflows/test-prompts.yml
name: Test Prompts
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: cargo install --git https://github.com/wballard/swissarmyhammer.git swissarmyhammer-cli
- name: Test all prompts
run: |
for prompt in prompts/*.md; do
name=$(basename "$prompt" .md)
echo "Testing $name..."
swissarmyhammer test "$name" --arg test="ci_validation"
done
See Also
- test command - Command reference
- Template Variables - Template syntax
- Custom Filters - Filter reference
Sharing and Collaboration
This guide covers how to share SwissArmyHammer prompts with your team, collaborate on prompt development, and manage shared prompt libraries.
Overview
SwissArmyHammer supports multiple collaboration workflows:
- File Sharing - Share prompt files directly
- Git Integration - Version control for prompts
- Team Directories - Shared network folders
- Package Management - Distribute as packages
Sharing Methods
Direct File Sharing
Single Prompt Sharing
Share individual prompt files:
# Send a single prompt
cp ~/.swissarmyhammer/prompts/code-review.md /shared/prompts/
# Share via email/chat
# Attach the .md file directly
Recipients install by copying to their prompt directory:
# Install shared prompt
cp /downloads/code-review.md ~/.swissarmyhammer/prompts/
Prompt Collections
Share multiple related prompts:
# Create a collection
mkdir python-toolkit
cp ~/.swissarmyhammer/prompts/python-*.md python-toolkit/
cp ~/.swissarmyhammer/prompts/pytest-*.md python-toolkit/
# Share as zip
zip -r python-toolkit.zip python-toolkit/
Git-Based Collaboration
Repository Structure
Organize prompts in a Git repository:
prompt-library/
├── .git/
├── README.md
├── prompts/
│   ├── development/
│   │   ├── languages/
│   │   ├── frameworks/
│   │   └── tools/
│   ├── data/
│   │   ├── analysis/
│   │   └── visualization/
│   └── writing/
│       ├── technical/
│       └── creative/
├── shared/
│   ├── components/
│   └── templates/
├── scripts/
│   ├── validate.sh
│   └── install.sh
└── .github/
    └── workflows/
        └── validate-prompts.yml
Team Workflow
Initial Setup
# Create prompt repository
git init prompt-library
cd prompt-library
# Add initial structure
mkdir -p prompts/{development,data,writing}
mkdir -p shared/{components,templates}
# Add README
cat > README.md << 'EOF'
# Team Prompt Library
Shared SwissArmyHammer prompts for our team.
## Installation
```bash
git clone https://github.com/ourteam/prompt-library
./scripts/install.sh
```

## Contributing
See CONTRIBUTING.md for guidelines.
EOF

# Initial commit
git add .
git commit -m "Initial prompt library structure"
Installation Script
Create `scripts/install.sh`:
```bash
#!/bin/bash
# install.sh - Install team prompts
PROMPT_DIR="$HOME/.swissarmyhammer/prompts/team"
# Create team namespace
mkdir -p "$PROMPT_DIR"
# Copy prompts
cp -r prompts/* "$PROMPT_DIR/"
# Copy shared components
cp -r shared/* "$HOME/.swissarmyhammer/prompts/_shared/"
echo "Team prompts installed to $PROMPT_DIR"
echo "Run 'swissarmyhammer list' to see available prompts"
```
Contributing Prompts
# Clone repository
git clone https://github.com/ourteam/prompt-library
cd prompt-library
# Create feature branch
git checkout -b add-docker-prompts
# Add new prompts
mkdir -p prompts/development/docker
vim prompts/development/docker/dockerfile-optimizer.md
# Test locally
swissarmyhammer doctor --check prompts
# Commit and push
git add prompts/development/docker/
git commit -m "Add Docker optimization prompts"
git push origin add-docker-prompts
# Create pull request
gh pr create --title "Add Docker optimization prompts" \
--body "Adds prompts for Dockerfile optimization and best practices"
Version Control Best Practices
Branching Strategy
# Main branches
main # Stable, tested prompts
develop # Integration branch
feature/* # New prompts
fix/* # Bug fixes
experimental/* # Experimental prompts
Commit Messages
Follow conventional commits:
# Adding prompts
git commit -m "feat(python): add async code review prompt"
# Fixing prompts
git commit -m "fix(api-design): correct OpenAPI template syntax"
# Updating prompts
git commit -m "refactor(test-writer): improve test case generation"
# Documentation
git commit -m "docs: add prompt writing guidelines"
Pull Request Template
`.github/pull_request_template.md`:
## Description
Brief description of the prompts being added/modified
## Type of Change
- [ ] New prompt(s)
- [ ] Bug fix
- [ ] Enhancement
- [ ] Documentation
## Testing
- [ ] Tested with `swissarmyhammer doctor`
- [ ] Validated template syntax
- [ ] Checked for duplicates
## Checklist
- [ ] Follows naming conventions
- [ ] Includes required metadata
- [ ] Has meaningful description
- [ ] Includes usage examples
Automated Validation
GitHub Actions Workflow
`.github/workflows/validate-prompts.yml`:
name: Validate Prompts
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: |
curl -sSL https://install.swissarmyhammer.dev | sh
- name: Validate Prompts
run: |
swissarmyhammer doctor --check prompts
- name: Check Duplicates
run: |
swissarmyhammer list --format json | \
jq -r '.[].name' | sort | uniq -d > duplicates.txt
if [ -s duplicates.txt ]; then
echo "Duplicate prompts found:"
cat duplicates.txt
exit 1
fi
- name: Lint Markdown
uses: DavidAnson/markdownlint-cli2-action@v11
with:
globs: 'prompts/**/*.md'
Team Directories
Network Share Setup
Windows Network Share
# Create shared folder
New-Item -Path "\\server\prompts" -ItemType Directory
# Set permissions
$acl = Get-Acl "\\server\prompts"
$permission = "Domain\TeamMembers","ReadAndExecute","Allow"
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule $permission
$acl.SetAccessRule($accessRule)
Set-Acl "\\server\prompts" $acl
Configure SwissArmyHammer:
# ~/.swissarmyhammer/config.toml
[prompts]
directories = [
"~/.swissarmyhammer/prompts",
"//server/prompts"
]
NFS Share (Linux/Mac)
# Server setup
sudo mkdir -p /srv/prompts
sudo chown -R :team /srv/prompts
sudo chmod -R 775 /srv/prompts
# /etc/exports
/srv/prompts 192.168.1.0/24(ro,sync,no_subtree_check)
# Client mount
sudo mkdir -p /mnt/team-prompts
sudo mount -t nfs server:/srv/prompts /mnt/team-prompts
Syncing Strategies
rsync Method
#!/bin/bash
# sync-prompts.sh
REMOTE="server:/srv/prompts"
LOCAL="$HOME/.swissarmyhammer/prompts/team"
# Sync from server (read-only)
rsync -avz --delete "$REMOTE/" "$LOCAL/"
# Watch for changes
while inotifywait -r -e modify,create,delete "$LOCAL"; do
echo "Changes detected, syncing..."
rsync -avz "$LOCAL/" "$REMOTE/"
done
Cloud Storage Sync
Using rclone:
# Configure rclone
rclone config
# Sync from cloud
rclone sync dropbox:team-prompts ~/.swissarmyhammer/prompts/team
# Bidirectional sync
rclone bisync dropbox:team-prompts ~/.swissarmyhammer/prompts/team
Package Management
Creating Packages
NPM Package
`package.json`:
{
"name": "@company/swissarmyhammer-prompts",
"version": "1.0.0",
"description": "Company SwissArmyHammer prompts",
"files": [
"prompts/**/*.md",
"install.js"
],
"scripts": {
"postinstall": "node install.js"
},
"keywords": ["swissarmyhammer", "prompts", "ai"],
"repository": {
"type": "git",
"url": "https://github.com/company/prompts.git"
}
}
`install.js`:
const fs = require('fs');
const path = require('path');
const os = require('os');
const sourceDir = path.join(__dirname, 'prompts');
const targetDir = path.join(
os.homedir(),
'.swissarmyhammer',
'prompts',
'company'
);
// Copy prompts to user directory
fs.cpSync(sourceDir, targetDir, { recursive: true });
console.log(`Prompts installed to ${targetDir}`);
Python Package
`setup.py`:
from setuptools import setup, find_packages
import os
from pathlib import Path
def get_prompt_files():
"""Get all prompt files for packaging."""
prompt_files = []
for root, dirs, files in os.walk('prompts'):
for file in files:
if file.endswith('.md'):
prompt_files.append(os.path.join(root, file))
return prompt_files
setup(
name='company-swissarmyhammer-prompts',
version='1.0.0',
packages=find_packages(),
data_files=[
(f'.swissarmyhammer/prompts/company/{os.path.dirname(f)}', [f])
for f in get_prompt_files()
],
install_requires=[],
entry_points={
'console_scripts': [
'install-company-prompts=scripts.install:main',
],
},
)
Distribution Channels
Internal Package Registry
# Publish to internal registry
npm publish --registry https://npm.company.com
# Install from registry
npm install @company/swissarmyhammer-prompts --registry https://npm.company.com
Container Registry
`Dockerfile`:
FROM alpine:latest
# Install prompts
COPY prompts /prompts
# Create tarball
RUN tar -czf /prompts.tar.gz -C / prompts
# Export as artifact
FROM scratch
COPY --from=0 /prompts.tar.gz /
# Build and push
docker build -t registry.company.com/prompts:latest .
docker push registry.company.com/prompts:latest
# Pull and extract
docker create --name temp registry.company.com/prompts:latest
docker cp temp:/prompts.tar.gz .
docker rm temp
tar -xzf prompts.tar.gz -C ~/.swissarmyhammer/
Access Control
Git-Based Permissions
# Separate repositories by access level
prompt-library-public/ # All team members
prompt-library-internal/ # Internal team only
prompt-library-sensitive/ # Restricted access
File System Permissions
# Create group-based access
sudo groupadd prompt-readers
sudo groupadd prompt-writers
# Set permissions
sudo chown -R :prompt-readers /srv/prompts
sudo chmod -R 750 /srv/prompts
sudo chmod -R 770 /srv/prompts/contributions
# Add users to groups
sudo usermod -a -G prompt-readers alice
sudo usermod -a -G prompt-writers bob
Prompt Metadata
Mark prompts with access levels:
---
name: sensitive-data-analyzer
title: Sensitive Data Analysis
access: restricted
allowed_users:
- security-team
- data-governance
tags:
- sensitive
- compliance
- restricted
---
Collaboration Tools
Prompt Development Environment
VS Code workspace settings:
`.vscode/settings.json`:
{
"files.associations": {
"*.md": "markdown"
},
"markdown.validate.enabled": true,
"markdown.validate.rules": {
"yaml-front-matter": true
},
"files.exclude": {
"**/.git": true,
"**/.DS_Store": true
},
"search.exclude": {
"**/node_modules": true,
"**/.git": true
}
}
Team Guidelines
Create `CONTRIBUTING.md`:
# Contributing to Team Prompts
## Prompt Standards
### Naming Conventions
- Use kebab-case: `code-review-security.md`
- Be descriptive: `python-async-optimizer.md`
- Include context: `react-component-generator.md`
### Required Metadata
All prompts must include:
- `name` - Unique identifier
- `title` - Human-readable title
- `description` - What the prompt does
- `author` - Your email
- `category` - Primary category
- `tags` - At least 3 relevant tags
### Template Quality
- Use clear, concise language
- Include usage examples
- Test with various inputs
- Document edge cases
## Review Process
1. Create feature branch
2. Add/modify prompts
3. Run validation: `swissarmyhammer doctor`
4. Submit pull request
5. Address review feedback
6. Merge when approved
## Testing
Before submitting:
```bash
# Validate syntax
swissarmyhammer doctor --check prompts
# Test rendering
swissarmyhammer get your-prompt --args key=value
# Check for conflicts
swissarmyhammer list --format json | jq '.[] | select(.name=="your-prompt")'
```
Communication
Slack Integration
```javascript
// slack-bot.js
const { WebClient } = require('@slack/web-api');
const { exec } = require('child_process');
const slack = new WebClient(process.env.SLACK_TOKEN);
// Notify on new prompts
async function notifyNewPrompt(promptName, author) {
await slack.chat.postMessage({
channel: '#prompt-library',
text: `New prompt added: *${promptName}* by ${author}`,
attachments: [{
color: 'good',
fields: [{
title: 'View Prompt',
value: `\`swissarmyhammer get ${promptName}\``,
short: false
}]
}]
});
}
```
Email Notifications
#!/bin/bash
# notify-updates.sh
RECIPIENTS="team@company.com"
SUBJECT="Prompt Library Updates"
# Get recent changes
CHANGES=$(git log --oneline --since="1 week ago" --grep="^feat\|^fix")
# Send email
echo "Weekly prompt library updates:
$CHANGES
To update your local prompts:
git pull origin main
./scripts/install.sh
" | mail -s "$SUBJECT" $RECIPIENTS
Best Practices
1. Establish Standards
Define clear guidelines:
- Naming conventions
- Required metadata
- Quality standards
- Review process
- Version strategy
2. Use Namespaces
Organize prompts by team/project:
~/.swissarmyhammer/prompts/
├── personal/    # Your prompts
├── team/        # Team shared
├── company/     # Company wide
└── community/   # Open source
3. Document Everything
- README for each category
- Usage examples in prompts
- Change logs for versions
- Migration guides
4. Automate Validation
- Pre-commit hooks
- CI/CD validation
- Automated testing
- Quality metrics
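As a sketch of a pre-commit hook, the check below verifies that each prompt file opens with a YAML front matter delimiter. The directory name is illustrative and the file is created just to make the sketch self-contained; a real hook would run `swissarmyhammer doctor --check prompts` over the actual repository instead:

```shell
# Sketch of a .git/hooks/pre-commit check: every prompt file must start
# with the YAML front matter delimiter "---". Paths here are illustrative.
mkdir -p demo-prompts
printf -- '---\ntitle: Demo\n---\n# Demo\n' > demo-prompts/demo.md
fail=0
for f in demo-prompts/*.md; do
  head -n 1 "$f" | grep -q '^---$' || { echo "missing front matter: $f"; fail=1; }
done
rm -r demo-prompts
echo "fail=$fail"
```

Exiting nonzero when `fail=1` would block the commit.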
5. Regular Maintenance
- Review unused prompts
- Update outdated content
- Consolidate duplicates
- Archive deprecated prompts
Examples
Team Onboarding
Create onboarding bundle:
#!/bin/bash
# create-onboarding-bundle.sh
# Create directory structure
mkdir -p onboarding-prompts/prompts
# Copy essential prompts
cp ~/.swissarmyhammer/prompts/*onboarding*.md onboarding-prompts/prompts/
cp ~/.swissarmyhammer/prompts/*essential*.md onboarding-prompts/prompts/
# Add setup script
cat > onboarding-prompts/setup.sh << 'EOF'
#!/bin/bash
echo "Welcome to the team! Setting up your prompts..."
cp -r prompts/* ~/.swissarmyhammer/prompts/
echo "Run 'swissarmyhammer list' to see your new prompts!"
EOF
# Create welcome package
tar -czf welcome-pack.tar.gz onboarding-prompts/
Project Templates
Share project-specific prompts:
# project-manifest.yaml
name: microservice-toolkit
version: 1.0.0
description: Prompts for microservice development
prompts:
- api-design
- openapi-generator
- dockerfile-creator
- k8s-manifest-builder
- test-suite-generator
dependencies:
- base-toolkit: ">=1.0.0"
install_script: |
mkdir -p ~/.swissarmyhammer/prompts/projects/microservices
cp prompts/*.md ~/.swissarmyhammer/prompts/projects/microservices/
Next Steps
- Read Prompt Organization for structure best practices
- See Contributing for contribution guidelines
- Explore Git Integration for version control workflows
- Learn about Configuration for team setup
MCP Protocol
SwissArmyHammer implements the Model Context Protocol (MCP) to provide prompts to AI assistants like Claude. This guide covers the protocol details and implementation specifics.
Overview
The Model Context Protocol (MCP) is a standardized protocol for communication between AI assistants and context providers. SwissArmyHammer acts as an MCP server, providing prompt templates as resources and tools.
┌─────────────┐       MCP        ┌──────────────────┐
│   Claude    │◄────────────────►│  SwissArmyHammer │
│  (Client)   │   JSON-RPC 2.0   │     (Server)     │
└─────────────┘                  └──────────────────┘
Protocol Basics
Transport
MCP uses JSON-RPC 2.0 over various transports:
- stdio - Standard input/output (default for Claude Code)
- HTTP - REST API endpoints
- WebSocket - Persistent connections
Message Format
All messages follow JSON-RPC 2.0 format:
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {},
"id": 1
}
Response format:
{
"jsonrpc": "2.0",
"result": {
"prompts": [...]
},
"id": 1
}
MCP Methods
Initialize
Establishes connection and capabilities:
// Request
{
"jsonrpc": "2.0",
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {
"prompts": {},
"resources": {}
},
"clientInfo": {
"name": "claude-code",
"version": "1.0.0"
}
},
"id": 1
}
// Response
{
"jsonrpc": "2.0",
"result": {
"protocolVersion": "2024-11-05",
"capabilities": {
"prompts": {
"listChanged": true
},
"resources": {
"subscribe": true,
"listChanged": true
}
},
"serverInfo": {
"name": "swissarmyhammer",
"version": "0.1.0"
}
},
"id": 1
}
List Prompts
Get available prompts:
// Request
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {
"cursor": null
},
"id": 2
}
// Response
{
"jsonrpc": "2.0",
"result": {
"prompts": [
{
"id": "code-review",
"name": "Code Review",
"description": "Reviews code for best practices and issues",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
},
{
"name": "language",
"description": "Programming language",
"required": false
}
]
}
],
"nextCursor": null
},
"id": 2
}
Get Prompt
Retrieve a specific prompt with arguments:
// Request
{
"jsonrpc": "2.0",
"method": "prompts/get",
"params": {
"promptId": "code-review",
"arguments": {
"code": "def add(a, b):\n return a + b",
"language": "python"
}
},
"id": 3
}
// Response
{
"jsonrpc": "2.0",
"result": {
"messages": [
{
"role": "user",
"content": {
"type": "text",
"text": "Please review this python code:\n\n```python\ndef add(a, b):\n return a + b\n```\n\nFocus on:\n- Code quality\n- Best practices\n- Potential issues"
}
}
]
},
"id": 3
}
List Resources
Get available resources (prompt source files):
// Request
{
"jsonrpc": "2.0",
"method": "resources/list",
"params": {
"cursor": null
},
"id": 4
}
// Response
{
"jsonrpc": "2.0",
"result": {
"resources": [
{
"uri": "prompt://code-review",
"name": "code-review.md",
"description": "Code review prompt source",
"mimeType": "text/markdown"
}
],
"nextCursor": null
},
"id": 4
}
Read Resource
Get resource content:
// Request
{
"jsonrpc": "2.0",
"method": "resources/read",
"params": {
"uri": "prompt://code-review"
},
"id": 5
}
// Response
{
"jsonrpc": "2.0",
"result": {
"contents": [
{
"uri": "prompt://code-review",
"mimeType": "text/markdown",
"text": "---\nname: code-review\ntitle: Code Review\n---\n\n# Code Review\n..."
}
]
},
"id": 5
}
Notifications
Prompt List Changed
Sent when prompts are added/removed/modified:
{
"jsonrpc": "2.0",
"method": "notifications/prompts/list_changed",
"params": {}
}
Resource List Changed
Sent when resources change:
{
"jsonrpc": "2.0",
"method": "notifications/resources/list_changed",
"params": {}
}
Error Handling
MCP defines standard error codes:
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params",
"data": {
"details": "Missing required argument 'code'"
}
},
"id": 6
}
Standard error codes:
- `-32700` - Parse error
- `-32600` - Invalid request
- `-32601` - Method not found
- `-32602` - Invalid params
- `-32603` - Internal error
Custom error codes:
- `1001` - Prompt not found
- `1002` - Invalid prompt format
- `1003` - Template render error
SwissArmyHammer Extensions
Pagination
For large prompt collections:
// Request with pagination
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {
"cursor": "eyJvZmZzZXQiOjUwfQ==",
"limit": 50
},
"id": 7
}
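For illustration, the cursor in the request above happens to be base64-encoded JSON; decoding it shows the offset it carries. This is an assumption drawn from the example value: MCP cursors are opaque, and clients should never parse or construct them:

```shell
# Decode the example cursor from the request above (illustration only;
# cursors are opaque and clients must not rely on their contents).
cursor='eyJvZmZzZXQiOjUwfQ=='
decoded=$(printf '%s' "$cursor" | base64 -d)
echo "$decoded"   # {"offset":50}
```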
Filtering
Filter prompts by criteria:
// Request with filters
{
"jsonrpc": "2.0",
"method": "prompts/list",
"params": {
"filter": {
"category": "development",
"tags": ["python", "testing"]
}
},
"id": 8
}
Metadata
Extended prompt metadata:
{
"id": "code-review",
"name": "Code Review",
"description": "Reviews code for best practices",
"metadata": {
"author": "SwissArmyHammer Team",
"version": "1.0.0",
"category": "development",
"tags": ["code", "review", "quality"],
"lastModified": "2024-01-15T10:30:00Z"
}
}
Implementation Details
Server Lifecycle
1. Initialization
   // Server startup sequence
   let server = MCPServer::new();
   server.load_prompts()?;
   server.start_file_watcher()?;
   server.listen(transport)?;
2. Request Handling
   match request.method.as_str() {
       "initialize" => handle_initialize(params),
       "prompts/list" => handle_list_prompts(params),
       "prompts/get" => handle_get_prompt(params),
       "resources/list" => handle_list_resources(params),
       "resources/read" => handle_read_resource(params),
       _ => Err(MethodNotFound),
   }
3. Change Detection
   // File watcher triggers notifications
   watcher.on_change(|event| {
       server.reload_prompts();
       server.notify_clients("prompts/list_changed");
   });
Transport Implementations
stdio Transport
Default for Claude Code integration:
// Read from stdin, write to stdout
let stdin = io::stdin();
let stdout = io::stdout();
loop {
let request = read_json_rpc(&mut stdin)?;
let response = server.handle_request(request)?;
write_json_rpc(&mut stdout, response)?;
}
HTTP Transport
For web integrations:
// HTTP endpoint handler
async fn handle_mcp(Json(request): Json<MCPRequest>) -> Json<MCPResponse> {
let response = server.handle_request(request).await;
Json(response)
}
WebSocket Transport
For real-time updates:
// WebSocket handler
async fn handle_websocket(ws: WebSocket, server: Arc<MCPServer>) {
let (tx, rx) = ws.split();
// Handle incoming messages
rx.for_each(|msg| async {
if let Ok(request) = parse_json_rpc(msg) {
let response = server.handle_request(request).await;
tx.send(serialize_json_rpc(response)).await;
}
}).await;
}
Security Considerations
Authentication
MCP doesn't specify authentication, but SwissArmyHammer supports:
// With API key
{
"jsonrpc": "2.0",
"method": "initialize",
"params": {
"authentication": {
"type": "bearer",
"token": "sk-..."
}
},
"id": 1
}
Rate Limiting
Prevent abuse:
// Rate limit configuration
rate_limiter: {
requests_per_minute: 100,
burst_size: 20,
per_method: {
"prompts/get": 50,
"resources/read": 30
}
}
Input Validation
All inputs are validated:
// Validate prompt arguments
fn validate_arguments(args: &HashMap<String, Value>) -> Result<()> {
// Check required fields
// Validate data types
// Sanitize inputs
// Check size limits
}
Performance Optimization
Caching
Responses are cached for efficiency:
// Cache configuration
cache: {
prompts_list: {
ttl: 300, // 5 minutes
max_size: 1000
},
rendered_prompts: {
ttl: 3600, // 1 hour
max_size: 10000
}
}
Streaming
Large responses support streaming:
// Streaming response
{
"jsonrpc": "2.0",
"result": {
"stream": true,
"chunks": [
{"index": 0, "data": "First chunk..."},
{"index": 1, "data": "Second chunk..."},
{"index": 2, "data": "Final chunk", "final": true}
]
},
"id": 9
}
Testing MCP
Manual Testing
Test from the command line:
# Test initialize
echo '{"jsonrpc":"2.0","method":"initialize","params":{},"id":1}' | \
swissarmyhammer serve --transport stdio
# Test prompts list
curl -X POST http://localhost:3333/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"prompts/list","params":{},"id":2}'
Automated Testing
#[tokio::test]
async fn test_mcp_protocol() {
let server = MCPServer::new_test();
// Test initialize
let response = server.handle_request(json!({
"jsonrpc": "2.0",
"method": "initialize",
"params": {},
"id": 1
})).await;
assert_eq!(response["result"]["serverInfo"]["name"], "swissarmyhammer");
}
Debugging
Enable Debug Logging
logging:
modules:
swissarmyhammer::mcp: debug
Request/Response Logging
// Log all MCP traffic
middleware: {
log_requests: true,
log_responses: true,
log_errors: true,
pretty_print: true
}
Protocol Inspector
# Inspect MCP traffic
swissarmyhammer mcp-inspector --port 3334
# Connect through inspector
export MCP_PROXY=http://localhost:3334
Best Practices
- Always validate inputs - Never trust client data
- Handle errors gracefully - Return proper error codes
- Implement timeouts - Prevent hanging requests
- Cache when possible - Reduce computation
- Log important events - Aid debugging
- Version your changes - Maintain compatibility
- Document extensions - Help client implementers
Next Steps
- Implement Claude Code Integration
- Review API Reference for details
- See Troubleshooting for common issues
- Check Examples for implementation patterns
Configuration
SwissArmyHammer offers flexible configuration options through configuration files, environment variables, and command-line arguments. This guide covers all configuration methods and settings.
Configuration File
Location
SwissArmyHammer looks for configuration files in this order:
1. `./swissarmyhammer.toml` (current directory)
2. `~/.swissarmyhammer/config.toml` (user directory)
3. `/etc/swissarmyhammer/config.toml` (system-wide)
Format
Configuration uses TOML format:
# ~/.swissarmyhammer/config.toml
# Server configuration
[server]
host = "localhost"
port = 8080
debug = false
timeout = 30000 # milliseconds
# Prompt directories
[prompts]
directories = [
"~/.swissarmyhammer/prompts",
"./prompts",
"/opt/company/prompts"
]
builtin = true
watch = true
# File watching configuration
[watch]
enabled = true
poll_interval = 1000 # milliseconds
max_depth = 5
ignore_patterns = [
"*.tmp",
"*.swp",
".git/*",
"__pycache__/*"
]
# Logging configuration
[log]
level = "info" # debug, info, warn, error
file = "~/.swissarmyhammer/logs/server.log"
rotate = true
max_size = "10MB"
max_age = 30 # days
# Cache configuration
[cache]
enabled = true
directory = "~/.swissarmyhammer/cache"
ttl = 3600 # seconds
max_size = "100MB"
# Template engine configuration
[template]
strict_variables = false
strict_filters = false
custom_filters_path = "~/.swissarmyhammer/filters"
# Security settings
[security]
allow_file_access = false
allow_network_access = false
sandbox_mode = true
allowed_domains = ["api.company.com", "github.com"]
# MCP specific settings
[mcp]
protocol_version = "1.0"
capabilities = ["prompts", "notifications"]
max_prompt_size = 1048576 # 1MB
compression = true
Environment Variables
All configuration options can be set via environment variables:
Naming Convention
- Prefix: `SWISSARMYHAMMER_`
- Nested values use underscores: `SECTION_KEY`
- Arrays use comma separation
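Under these rules, the environment variable for any config key can be derived mechanically; a small sketch:

```shell
# Derive the environment variable name for a config key:
# section.key -> SWISSARMYHAMMER_SECTION_KEY
key="server.port"
env_name="SWISSARMYHAMMER_$(printf '%s' "$key" | tr '[:lower:]' '[:upper:]' | tr '.' '_')"
echo "$env_name"   # SWISSARMYHAMMER_SERVER_PORT
```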
Examples
# Server settings
export SWISSARMYHAMMER_SERVER_HOST=0.0.0.0
export SWISSARMYHAMMER_SERVER_PORT=9000
export SWISSARMYHAMMER_SERVER_DEBUG=true
# Prompt directories (comma-separated)
export SWISSARMYHAMMER_PROMPTS_DIRECTORIES="/opt/prompts,~/my-prompts"
export SWISSARMYHAMMER_PROMPTS_BUILTIN=false
# Logging
export SWISSARMYHAMMER_LOG_LEVEL=debug
export SWISSARMYHAMMER_LOG_FILE=/var/log/swissarmyhammer.log
# File watching
export SWISSARMYHAMMER_WATCH_ENABLED=true
export SWISSARMYHAMMER_WATCH_POLL_INTERVAL=2000
# Security
export SWISSARMYHAMMER_SECURITY_SANDBOX_MODE=true
export SWISSARMYHAMMER_SECURITY_ALLOWED_DOMAINS="api.example.com,cdn.example.com"
Precedence
Configuration precedence (highest to lowest):
- Command-line arguments
- Environment variables
- Configuration files
- Default values
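This chain can be sketched with shell parameter fallbacks; the variable names below are illustrative, not part of the CLI:

```shell
# Resolve an effective value: CLI flag beats env var beats config file beats default.
default_port=8080
config_port=8080                              # as parsed from config.toml
env_port="${SWISSARMYHAMMER_SERVER_PORT:-}"   # from the environment, if set
cli_port=""                                   # from --port, if given
port="${cli_port:-${env_port:-${config_port:-$default_port}}}"
echo "effective port: $port"
```

With neither a flag nor an environment variable set, the config file value wins.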
Command-Line Options
Global Options
swissarmyhammer [GLOBAL_OPTIONS] <COMMAND> [COMMAND_OPTIONS]
Global Options:
--config <FILE> Use specific configuration file
--verbose Enable verbose output
--quiet Suppress non-error output
--no-color Disable colored output
--json Output in JSON format
--help Show help information
--version Show version information
Per-Command Configuration
Override configuration for specific commands:
# Override server settings
swissarmyhammer serve --host 0.0.0.0 --port 9000 --debug
# Override prompt directories
swissarmyhammer serve --prompts /custom/prompts --no-builtin
# Override logging
swissarmyhammer serve --log-level debug --log-file server.log
Configuration Sections
Server Configuration
Controls the MCP server behavior:
[server]
# Network binding
host = "localhost" # IP address or hostname
port = 8080 # Port number (0 for auto-assign)
# Performance
workers = 4 # Number of worker threads
max_connections = 100 # Maximum concurrent connections
timeout = 30000 # Request timeout in milliseconds
# Debugging
debug = false # Enable debug mode
trace = false # Enable trace logging
metrics = true # Enable metrics collection
Prompt Configuration
Manages prompt loading and directories:
[prompts]
# Directories to load prompts from
directories = [
"~/.swissarmyhammer/prompts", # User prompts
"./prompts", # Project prompts
"/opt/shared/prompts" # Shared prompts
]
# Loading behavior
builtin = true # Include built-in prompts
watch = true # Enable file watching
recursive = true # Scan directories recursively
follow_symlinks = false # Follow symbolic links
# Filtering
include_patterns = ["*.md", "*.markdown"]
exclude_patterns = ["*.draft.md", "test-*"]
# Validation
strict_validation = true # Fail on invalid prompts
required_fields = ["name", "description"]
max_file_size = "1MB" # Maximum prompt file size
File Watching
Configure file system monitoring:
[watch]
enabled = true # Enable/disable watching
strategy = "efficient" # efficient, aggressive, polling
# Polling strategy settings
poll_interval = 1000 # Milliseconds between polls
poll_timeout = 100 # Polling timeout
# Watch behavior
debounce = 500 # Milliseconds to wait for changes to settle
max_depth = 10 # Maximum directory depth
batch_events = true # Batch multiple changes
# Ignore patterns
ignore_patterns = [
"*.tmp",
"*.swp",
"*.bak",
".git/**",
".svn/**",
"__pycache__/**",
"node_modules/**"
]
# Performance
max_watches = 10000 # Maximum number of watches
event_buffer_size = 1000 # Event queue size
Logging
Configure logging behavior:
[log]
# Log level: trace, debug, info, warn, error
level = "info"
# Console output
console = true
console_format = "pretty" # pretty, json, compact
console_colors = true
# File logging
file = "~/.swissarmyhammer/logs/server.log"
file_format = "json"
rotate = true
max_size = "10MB"
max_files = 5
max_age = 30 # days
# Log filtering
include_modules = ["server", "prompts"]
exclude_modules = ["watcher"]
# Performance
buffer_size = 8192
async = true
Cache Configuration
Control caching behavior:
[cache]
enabled = true
directory = "~/.swissarmyhammer/cache"
# Cache strategy
strategy = "lru" # lru, lfu, ttl
max_entries = 1000
max_size = "100MB"
# Time-based settings
ttl = 3600 # Default TTL in seconds
refresh_ahead = 300 # Refresh cache 5 minutes before expiry
# Cache categories
[cache.prompts]
enabled = true
ttl = 7200
[cache.templates]
enabled = true
ttl = 3600
[cache.search]
enabled = false # Disable search result caching
Template Engine
Configure Liquid template processing:
[template]
# Parsing
strict_variables = false # Error on undefined variables
strict_filters = false # Error on undefined filters
error_mode = "warn" # warn, error, ignore
# Custom extensions
custom_filters_path = "~/.swissarmyhammer/filters"
custom_tags_path = "~/.swissarmyhammer/tags"
# Security
allow_includes = true
include_paths = ["~/.swissarmyhammer/includes"]
max_render_depth = 10
max_iterations = 1000
# Performance
cache_templates = true
compile_cache = "~/.swissarmyhammer/template_cache"
Security Settings
Control security features:
[security]
# Sandboxing
sandbox_mode = true # Enable security sandbox
allow_file_access = false # Allow template file access
allow_network_access = false # Allow network requests
allow_system_access = false # Allow system commands
# Network security
allowed_domains = [
"api.company.com",
"cdn.company.com",
"github.com"
]
blocked_domains = [
"malicious.site"
]
# File security
allowed_paths = [
"~/Documents/projects",
"/opt/shared/data"
]
blocked_paths = [
"/etc",
"/sys",
"~/.ssh"
]
# Content security
max_input_size = "10MB"
max_output_size = "50MB"
sanitize_html = true
Configuration Profiles
Using Profiles
Define multiple configuration profiles:
# Default configuration
[default]
server.host = "localhost"
server.port = 8080
log.level = "info"
# Development profile
[profiles.development]
server.debug = true
log.level = "debug"
cache.enabled = false
template.strict_variables = true
# Production profile
[profiles.production]
server.host = "0.0.0.0"
server.workers = 8
log.level = "warn"
security.sandbox_mode = true
# Testing profile
[profiles.test]
server.port = 0 # Auto-assign
log.file = "/tmp/test.log"
prompts.directories = ["./test/fixtures"]
Activating Profiles
# Via environment variable
export SWISSARMYHAMMER_PROFILE=production
swissarmyhammer serve
# Via command line
swissarmyhammer --profile development serve
# Multiple profiles (later overrides earlier)
swissarmyhammer --profile production --profile custom serve
Advanced Configuration
Dynamic Configuration
Load configuration from external sources:
[config]
# Load additional config from URL
remote_config = "https://config.company.com/swissarmyhammer"
remote_check_interval = 300 # seconds
# Load from environment-specific file
env_config = "/etc/swissarmyhammer/config.${ENV}.toml"
# Merge strategy
merge_strategy = "deep" # deep, shallow, replace
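The difference between deep and shallow merging matters when a remote and a local config define the same table. A rough Python sketch of the two strategies (illustrative only; the server's real merge logic may differ):

```python
# Sketch: "deep" vs "shallow" merging of configuration tables.
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; nested tables are combined."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

def shallow_merge(base: dict, override: dict) -> dict:
    """Top-level keys from override replace base keys wholesale."""
    return {**base, **override}

base = {"server": {"host": "localhost", "port": 8080}}
override = {"server": {"port": 9090}}

deep = deep_merge(base, override)        # host survives, port replaced
shallow = shallow_merge(base, override)  # entire "server" table replaced
```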
Hooks Configuration
Configure lifecycle hooks:
[hooks]
# Startup hooks
pre_start = [
"~/scripts/pre-start.sh",
"/opt/swissarmyhammer/hooks/validate.py"
]
post_start = [
"~/scripts/notify-start.sh"
]
# Shutdown hooks
pre_stop = [
"~/scripts/save-state.sh"
]
post_stop = [
"~/scripts/cleanup.sh"
]
# Prompt hooks
pre_load = "~/scripts/validate-prompt.sh"
post_load = "~/scripts/index-prompt.sh"
# Error hooks
on_error = "~/scripts/error-handler.sh"
# Hook configuration
[hooks.config]
timeout = 30 # seconds
fail_on_error = false # Continue if hook fails
environment = { CUSTOM_VAR = "value" } # TOML inline tables must stay on one line
Performance Tuning
Optimize for different scenarios:
[performance]
# Threading
thread_pool_size = 8
async_workers = 4
io_threads = 2
# Memory
max_memory = "2GB"
gc_interval = 300 # seconds
cache_pressure = 0.8 # Evict cache at 80% memory
# Network
connection_pool_size = 50
keep_alive = true
tcp_nodelay = true
socket_timeout = 30
# File I/O
read_buffer_size = 8192
write_buffer_size = 8192
use_mmap = true # Memory-mapped files
# Optimizations
lazy_loading = true
parallel_parsing = true
compress_cache = true
Monitoring Configuration
Enable monitoring and metrics:
[monitoring]
enabled = true
# Metrics collection
[monitoring.metrics]
enabled = true
interval = 60 # seconds
retention = 7 # days
# Metrics to collect
collect = [
"cpu_usage",
"memory_usage",
"prompt_count",
"request_rate",
"error_rate",
"cache_hit_rate"
]
# Export metrics
[monitoring.export]
format = "prometheus" # prometheus, json, statsd
endpoint = "http://metrics.company.com:9090"
labels = { service = "swissarmyhammer", environment = "production" } # TOML inline tables must stay on one line
# Health checks
[monitoring.health]
enabled = true
endpoint = "/health"
checks = [
"server_status",
"prompt_loading",
"file_watcher",
"cache_status"
]
Configuration Examples
Minimal Configuration
# Minimal working configuration
[server]
host = "localhost"
port = 8080
[prompts]
directories = ["~/.swissarmyhammer/prompts"]
Development Configuration
# Development-optimized configuration
[server]
host = "localhost"
port = 8080
debug = true
[prompts]
directories = [
"./prompts",
"~/.swissarmyhammer/prompts"
]
watch = true
[log]
level = "debug"
console = true
[cache]
enabled = false # Disable caching for development
[template]
strict_variables = true # Catch template errors early
Production Configuration
# Production-optimized configuration
[server]
host = "0.0.0.0"
port = 80
workers = 8
timeout = 60000
[prompts]
directories = [
"/opt/swissarmyhammer/prompts",
"/var/lib/swissarmyhammer/prompts"
]
builtin = true
watch = false # Disable for performance
[log]
level = "warn"
file = "/var/log/swissarmyhammer/server.log"
rotate = true
max_size = "100MB"
max_files = 10
[cache]
enabled = true
strategy = "lru"
max_size = "1GB"
[security]
sandbox_mode = true
allow_file_access = false
allow_network_access = false
[monitoring]
enabled = true
metrics.enabled = true
health.enabled = true
High-Performance Configuration
# Optimized for high load
[server]
workers = 16
max_connections = 1000
timeout = 120000
[performance]
thread_pool_size = 32
async_workers = 16
connection_pool_size = 200
lazy_loading = true
parallel_parsing = true
[cache]
enabled = true
strategy = "lfu"
max_size = "4GB"
refresh_ahead = 600
[watch]
enabled = false # Disable for performance
[log]
level = "error" # Minimize logging overhead
async = true
buffer_size = 65536
Configuration Validation
Validate Configuration
# Validate configuration file
swissarmyhammer config validate
# Validate specific file
swissarmyhammer config validate --file custom-config.toml
# Show effective configuration
swissarmyhammer config show
# Show configuration with sources
swissarmyhammer config show --sources
Configuration Schema
# Generate configuration schema
swissarmyhammer config schema > config-schema.json
# Validate against schema
swissarmyhammer config validate --schema config-schema.json
Best Practices
1. Use Profiles
Separate configurations for different environments:
[profiles.local]
server.debug = true
[profiles.staging]
server.host = "staging.company.com"
[profiles.production]
server.host = "0.0.0.0"
security.sandbox_mode = true
2. Secure Sensitive Data
Never store secrets in configuration files:
# Bad
api_key = "sk-1234567890abcdef"
# Good - use environment variables
api_key = "${API_KEY}"
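Placeholder expansion like this can be sketched in a few lines. expand_env below is a hypothetical helper for illustration, not part of SwissArmyHammer:

```python
# Sketch: expanding ${VAR} placeholders in configuration values from the
# process environment, so secrets never live in the file itself.
import os
import re

def expand_env(value: str) -> str:
    """Replace ${NAME} with os.environ['NAME']; fail loudly if unset."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"Environment variable {name} is not set")
        return os.environ[name]
    return re.sub(r"\$\{(\w+)\}", repl, value)

os.environ["API_KEY"] = "example-key"  # for demonstration only
expanded = expand_env("${API_KEY}")
```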
3. Document Configuration
Add comments explaining non-obvious settings:
# Increase timeout for slow network environments
timeout = 60000 # 1 minute
# Disable caching during development to see changes immediately
[cache]
enabled = false # TODO: Enable for production
4. Version Control
Track configuration changes:
# .gitignore
config.local.toml
config.production.toml
# Track example configuration
config.example.toml
5. Validate Changes
Always validate configuration changes:
# Before deploying
swissarmyhammer config validate --file new-config.toml
# Test with dry run
swissarmyhammer serve --config new-config.toml --dry-run
Troubleshooting
Configuration Not Loading
- Check file exists and is readable
- Validate TOML syntax
- Check environment variable names
- Review precedence order
Performance Issues
- Disable file watching in production
- Tune cache settings
- Adjust worker counts
- Enable performance monitoring
Security Warnings
- Review security settings
- Enable sandbox mode
- Restrict file and network access
- Update allowed domains
Next Steps
- See CLI Reference for command-line options
- Learn about File Watching configuration
- Explore Troubleshooting for common issues
- Read Security for security best practices
File Watching
SwissArmyHammer includes a powerful file watching system that automatically detects and reloads prompt changes without restarting the server.
How It Works
The file watcher monitors your prompt directories for changes and automatically:
- Detects new, modified, or deleted prompt files
- Validates changed files for syntax errors
- Reloads prompts into memory
- Notifies connected clients of updates
- Maintains state during the reload process
┌─────────────┐     ┌──────────────┐     ┌──────────────┐
│ File System │────>│ File Watcher │────>│ Prompt Cache │
└─────────────┘     └──────────────┘     └──────────────┘
                           │                    │
                           ▼                    ▼
                    ┌──────────────┐     ┌──────────────┐
                    │  Validator   │     │ MCP Clients  │
                    └──────────────┘     └──────────────┘
Configuration
Basic Settings
Configure file watching in your config.yaml:
watch:
# Enable/disable file watching
enabled: true
# Check interval in milliseconds
interval: 1000
# Debounce delay to batch rapid changes
debounce: 500
# Maximum files to process per cycle
batch_size: 100
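Debouncing groups bursts of events into a single reload, so one save that triggers several filesystem writes causes one reload rather than several. A small sketch of the idea (the debounce function and event format are invented for illustration):

```python
# Sketch: a debounce window batches rapid file events. Events closer
# together than `window` seconds are processed as one batch.
def debounce(events: list[tuple[float, str]], window: float) -> list[list[str]]:
    """Group (timestamp, path) events; a gap > window starts a new batch."""
    batches: list[list[str]] = []
    last_time = None
    for timestamp, path in sorted(events):
        if last_time is None or timestamp - last_time > window:
            batches.append([])
        batches[-1].append(path)
        last_time = timestamp
    return batches

# Three rapid writes and one later write -> two reloads, not four.
events = [(0.00, "a.md"), (0.10, "a.md"), (0.20, "b.md"), (5.00, "c.md")]
batches = debounce(events, window=0.5)
```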
Advanced Options
watch:
# File patterns to watch
patterns:
- "**/*.md"
- "**/*.markdown"
- "**/prompts.yaml"
# Patterns to ignore
ignore:
- "**/node_modules/**"
- "**/.git/**"
- "**/target/**"
- "**/*.swp"
- "**/*~"
- "**/.DS_Store"
# Watch strategy
strategy: efficient # efficient, aggressive, polling
# Platform-specific settings
platform:
# macOS FSEvents
macos:
use_fsevents: true
latency: 0.1
# Linux inotify
linux:
use_inotify: true
max_watches: 8192
# Windows
windows:
use_polling: false
poll_interval: 1000
Watch Strategies
Efficient (Default)
Best for most use cases:
watch:
strategy: efficient
# Uses native OS file watching APIs
# Low CPU usage
# May have slight delay on some systems
Aggressive
For development with frequent changes:
watch:
strategy: aggressive
interval: 100 # Check every 100ms
debounce: 50 # Minimal debounce
# Higher CPU usage
# Near-instant updates
Polling
Fallback for compatibility:
watch:
strategy: polling
interval: 2000 # Poll every 2 seconds
# Works everywhere
# Higher CPU usage
# Slower updates
File Events
Supported Events
The watcher handles these file system events:
- Created - New prompt files added
- Modified - Existing files changed
- Deleted - Files removed
- Renamed - Files moved or renamed
- Metadata - Permission or timestamp changes
Event Processing
# Event processing configuration
watch:
events:
# Process creation events
create:
enabled: true
validate: true
# Process modification events
modify:
enabled: true
validate: true
reload_delay: 100 # ms
# Process deletion events
delete:
enabled: true
cleanup_cache: true
# Process rename events
rename:
enabled: true
track_moves: true
Validation
Automatic Validation
Files are validated before reload:
watch:
validation:
# Enable validation
enabled: true
# Validation rules
rules:
# Check YAML front matter
yaml_syntax: true
# Validate required fields
required_fields:
- name
- title
- description
# Check template syntax
template_syntax: true
# Maximum file size
max_size: 1MB
# What to do on validation failure
on_failure: warn # warn, ignore, stop
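The validation rules above amount to: parse the YAML front matter, then check the required fields. A simplified sketch (validate_prompt is hypothetical, and the front matter parsing here is deliberately minimal, handling only top-level key: value lines):

```python
# Sketch: validating a prompt file against required front matter fields.
# The error strings are illustrative, not the server's actual messages.
def validate_prompt(text: str, required=("name", "title", "description")) -> list[str]:
    """Return a list of validation errors (empty means the file is valid)."""
    errors = []
    parts = text.split("---\n")
    if len(parts) < 3 or parts[0].strip():
        return ["Missing YAML front matter"]
    # Minimal front matter parse: top-level "key: value" lines only.
    fields = {}
    for line in parts[1].splitlines():
        if ":" in line and not line.startswith((" ", "\t", "-")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    for field in required:
        if field not in fields:
            errors.append(f"Missing required field: '{field}'")
    return errors

good = "---\nname: demo\ntitle: Demo\ndescription: A demo prompt\n---\nBody"
bad = "---\nname: demo\n---\nBody"
```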
Validation Errors
When validation fails:
[WARN] Validation failed for prompts/invalid.md:
- Line 5: Invalid YAML syntax
- Missing required field: 'title'
- Template error: Unclosed tag '{% if'
File will not be loaded. Fix errors and save again.
Performance
Optimization Tips
- Exclude unnecessary paths:
watch:
  ignore:
    - "**/backup/**"
    - "**/archive/**"
    - "**/*.log"
- Tune intervals for your workflow:
# For active development
watch:
  interval: 500
  debounce: 250

# For production
watch:
  interval: 5000
  debounce: 2000
- Limit watch scope:
watch:
  # Only watch specific directories
  directories:
    - ./prompts
    - ~/.swissarmyhammer/prompts
  # Don't watch subdirectories
  recursive: false
Resource Usage
Monitor watcher resource usage:
# Check watcher status
swissarmyhammer doctor --watch
# Show watcher statistics
swissarmyhammer status --verbose
# Output:
File Watcher Status:
Strategy: efficient
Files watched: 156
Directories: 12
CPU usage: 0.1%
Memory: 2.4MB
Events processed: 1,234
Last reload: 2 minutes ago
Debugging
Enable Debug Logging
logging:
modules:
swissarmyhammer::watcher: debug
Common Issues
- Changes not detected:
# Check if watching is enabled
swissarmyhammer config get watch.enabled
# Test file watching
swissarmyhammer test --watch
- High CPU usage:
# Increase intervals
watch:
  interval: 2000
  debounce: 1000
# Use efficient strategy
watch:
  strategy: efficient
- Too many open files:
# Linux: Increase inotify watches
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# macOS: Usually not an issue with FSEvents
# Windows: Use polling fallback
Platform-Specific Notes
macOS
Uses FSEvents for efficient watching:
watch:
platform:
macos:
use_fsevents: true
# FSEvents latency in seconds
latency: 0.1
# Ignore events older than
ignore_older_than: 10 # seconds
Linux
Uses inotify with automatic limits:
watch:
platform:
linux:
use_inotify: true
# Will warn if approaching limits
warn_threshold: 0.8
# Fallback to polling if needed
auto_fallback: true
Windows
Uses ReadDirectoryChangesW:
watch:
platform:
windows:
# Buffer size for changes
buffer_size: 65536
# Watch subtree
watch_subtree: true
# Notification filters
filters:
- file_name
- last_write
- size
Integration
Client Notifications
Clients are notified of changes:
// MCP client receives notification
client.on('prompt.changed', (event) => {
console.log(`Prompt ${event.name} was ${event.type}`);
// Refresh UI, clear caches, etc.
});
Hooks
Run commands on file changes:
watch:
hooks:
# Before processing changes
pre_reload:
- echo "Reloading prompts..."
# After successful reload
post_reload:
- ./scripts/notify-team.sh
- ./scripts/update-index.sh
# On reload failure
on_error:
- ./scripts/alert-admin.sh
API Access
Query watcher status via API:
# Get watcher status
curl http://localhost:3333/api/watcher/status
# Get recent events
curl http://localhost:3333/api/watcher/events
# Trigger manual reload
curl -X POST http://localhost:3333/api/watcher/reload
Best Practices
Development
- Use aggressive watching for immediate feedback
- Enable validation to catch errors early
- Watch only active directories to reduce overhead
- Use debug logging to troubleshoot issues
Production
- Use efficient strategy for lower resource usage
- Increase intervals to reduce CPU load
- Disable watching if prompts rarely change
- Monitor resource usage regularly
Large Projects
- Exclude build directories and dependencies
- Use specific patterns instead of wildcards
- Consider splitting prompts across multiple directories
- Implement caching to reduce reload impact
Manual Control
CLI Commands
Control file watching manually:
# Pause file watching
swissarmyhammer watch pause
# Resume file watching
swissarmyhammer watch resume
# Force reload all prompts
swissarmyhammer watch reload
# Show watch status
swissarmyhammer watch status
Environment Variables
Override watch settings:
# Disable watching
export SWISSARMYHAMMER_WATCH_ENABLED=false
# Change interval
export SWISSARMYHAMMER_WATCH_INTERVAL=5000
# Force polling strategy
export SWISSARMYHAMMER_WATCH_STRATEGY=polling
Troubleshooting
Diagnostic Commands
# Run watcher diagnostics
swissarmyhammer doctor --watch
# Test file detection
echo "test" >> prompts/test.md
swissarmyhammer watch test
# Monitor events in real-time
swissarmyhammer watch monitor
Common Solutions
- Linux: Increase inotify limits
- macOS: Grant full disk access
- Windows: Run as administrator
- All: Check file permissions
- All: Verify ignore patterns
Next Steps
- Configure watching in Configuration
- Learn about Prompt Organization
- Understand Prompt Overrides
- Read Troubleshooting for more help
Prompt Overrides
SwissArmyHammer supports a hierarchical override system that allows you to customize prompts at different levels without modifying the original files.
Override Hierarchy
Prompts are loaded and merged in this order (later overrides earlier):
1. Built-in prompts (system)
        ↓
2. User prompts (~/.swissarmyhammer/prompts)
        ↓
3. Project prompts (./.swissarmyhammer/prompts)
        ↓
4. Runtime overrides (CLI/API)
How Overrides Work
Complete Override
Replace an entire prompt by using the same name:
<!-- Built-in: /usr/share/swissarmyhammer/prompts/code-review.md -->
---
name: code-review
title: Code Review
description: Reviews code for issues
arguments:
- name: code
required: true
---
Please review this code.
<!-- User override: ~/.swissarmyhammer/prompts/code-review.md -->
---
name: code-review
title: Enhanced Code Review
description: Comprehensive code analysis with security focus
arguments:
- name: code
required: true
- name: security_check
required: false
default: true
---
Perform a detailed security-focused code review.
Partial Override
Override specific fields while inheriting others:
# ~/.swissarmyhammer/overrides/code-review.yaml
name: code-review
extends: true # Inherit from lower level
title: "Code Review (Company Standards)"
# Only override specific arguments
arguments:
merge: true # Merge with parent arguments
items:
- name: style_guide
default: "company-style-guide.md"
Override Methods
1. File-Based Overrides
Create a prompt file with the same name at a higher level:
# Original
/usr/share/swissarmyhammer/prompts/development/python-analyzer.md
# User override
~/.swissarmyhammer/prompts/development/python-analyzer.md
# Project override
./.swissarmyhammer/prompts/development/python-analyzer.md
2. Override Configuration
Use override files to modify prompts without duplicating:
# ~/.swissarmyhammer/overrides.yaml
overrides:
- name: code-review
# Override just the description
description: "Code review with company standards"
- name: api-generator
# Add new arguments
arguments:
append:
- name: auth_type
default: "oauth2"
- name: test-writer
# Modify template content
template:
prepend: |
# Company Test Standards
Follow these guidelines:
- Use pytest exclusively
- Include docstrings
append: |
## Additional Requirements
- Minimum 80% coverage
- Include integration tests
3. Runtime Overrides
Override prompts at runtime via CLI or API:
# Override prompt arguments
swissarmyhammer test code-review \
--override title="Security Review" \
--override description="Focus on security vulnerabilities"
# Override template content
swissarmyhammer test api-docs \
--template-override prepend="# CONFIDENTIAL\n\n" \
--template-override append="\n\n© 2024 Acme Corp"
Advanced Override Patterns
Inheritance Chain
Create a chain of inherited prompts:
# base-analyzer.yaml
name: base-analyzer
abstract: true # Can't be used directly
title: Base Code Analyzer
arguments:
- name: code
required: true
- name: language
required: false
# python-analyzer.yaml
name: python-analyzer
extends: base-analyzer
title: Python Code Analyzer
arguments:
merge: true
items:
- name: check_types
default: true
# security-python-analyzer.yaml
name: security-python-analyzer
extends: python-analyzer
title: Security-Focused Python Analyzer
template:
inherit: true
prepend: |
## Security Analysis
Focus on OWASP Top 10 vulnerabilities.
Conditional Overrides
Apply overrides based on conditions:
# overrides.yaml
conditional_overrides:
- condition:
environment: production
overrides:
- name: all
arguments:
- name: verbose
default: false
- condition:
user: qa-team
overrides:
- name: test-generator
template:
append: |
Include edge case testing.
- condition:
project_type: web
overrides:
- name: security-scan
arguments:
- name: check_xss
default: true
Template Merging
Control how templates are merged:
# Override with template merging
name: api-docs
extends: true
template_merge:
strategy: smart # smart, prepend, append, replace
sections:
- match: "## Authentication"
action: replace
content: |
## Authentication
Use OAuth 2.0 with PKCE flow.
- match: "## Error Handling"
action: append
content: |
### Company Error Codes
- 4001: Invalid API key
- 4002: Rate limit exceeded
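Section-level merging can be approximated with a regular expression over "## " headings: a section runs from its heading to the next heading or the end of the document. A simplified sketch of the replace and append actions (the real "smart" strategy is more involved, and apply_section is invented for illustration):

```python
# Sketch: section-level template merging — replace one "## Heading"
# section, append to another. Simplified, illustrative only.
import re

def apply_section(template: str, match: str, action: str, content: str) -> str:
    """Replace or append to the section starting at the heading `match`."""
    # A section runs from its heading to the next "## " heading or EOF.
    pattern = re.compile(rf"({re.escape(match)}\n)(.*?)(?=^## |\Z)", re.S | re.M)
    if action == "replace":
        return pattern.sub(content.rstrip() + "\n", template)
    if action == "append":
        return pattern.sub(lambda m: m.group(0).rstrip() + "\n" + content, template)
    raise ValueError(f"Unknown action: {action}")

doc = "## Authentication\nUse basic auth.\n## Error Handling\nReturn 500.\n"
doc = apply_section(doc, "## Authentication", "replace",
                    "## Authentication\nUse OAuth 2.0 with PKCE flow.\n")
doc = apply_section(doc, "## Error Handling", "append",
                    "### Company Error Codes\n- 4001: Invalid API key\n")
```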
Project-Specific Overrides
Directory Structure
Organize project overrides:
.swissarmyhammer/
├── prompts/           # Complete prompt overrides
│   └── code-review.md
├── overrides.yaml     # Partial overrides
├── templates/         # Template snippets
│   ├── header.md
│   └── footer.md
└── config.yaml        # Override configuration
Override Configuration
Configure override behavior:
# .swissarmyhammer/config.yaml
overrides:
# Enable/disable overrides
enabled: true
# Override precedence
precedence:
- runtime # Highest priority
- project
- user
- system # Lowest priority
# Merge strategies
merge:
arguments: deep # deep, shallow, replace
template: smart # smart, simple, replace
metadata: shallow # deep, shallow, replace
# Validation
validation:
strict: true
require_base: false
allow_new_fields: true
Use Cases
1. Company Standards
Enforce company-wide standards:
# ~/.swissarmyhammer/company-overrides.yaml
global_overrides:
all_prompts:
template:
prepend: |
# {{company}} Standards
This output follows {{company}} guidelines.
globals:
company: "Acme Corp"
support_email: "ai-support@acme.com"
2. Environment-Specific
Different behavior per environment:
# Development overrides
development:
overrides:
- name: code-review
arguments:
- name: verbose
default: true
- name: include_suggestions
default: true
# Production overrides
production:
overrides:
- name: code-review
arguments:
- name: verbose
default: false
- name: security_scan
default: true
3. Team Customization
Team-specific modifications:
# Frontend team overrides
team: frontend
overrides:
- pattern: "*-component"
template:
prepend: |
Use React 18+ features.
Follow Material-UI guidelines.
- name: test-writer
arguments:
- name: framework
default: "jest"
- name: include_snapshots
default: true
Override Resolution
Name Matching
How prompts are matched for override:
- Exact match: code-review matches code-review
- Pattern match: *-review matches code-review, security-review
- Category match: category:development matches all development prompts
Conflict Resolution
When multiple overrides apply:
# Resolution rules
conflict_resolution:
# Strategy: first, last, merge, error
strategy: merge
# Priority (higher wins)
priorities:
exact_match: 100
pattern_match: 50
category_match: 10
global: 1
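Choosing a winner among competing overrides reduces to scoring each match type and taking the maximum. A sketch using the priorities above (score is illustrative; fnmatch stands in for the pattern matcher):

```python
# Sketch: score competing overrides by match specificity; highest wins.
from fnmatch import fnmatch

PRIORITIES = {"exact": 100, "pattern": 50, "category": 10, "global": 1}

def score(override: dict, prompt: str, category: str) -> int:
    """Return the priority of an override for a given prompt, or 0."""
    if override.get("name") == prompt:
        return PRIORITIES["exact"]
    if "pattern" in override and fnmatch(prompt, override["pattern"]):
        return PRIORITIES["pattern"]
    if override.get("category") == category:
        return PRIORITIES["category"]
    if override.get("global"):
        return PRIORITIES["global"]
    return 0

overrides = [
    {"name": "code-review", "title": "Exact"},
    {"pattern": "*-review", "title": "Pattern"},
    {"global": True, "title": "Global"},
]
best = max(overrides, key=lambda o: score(o, "code-review", "development"))
```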
Debugging Overrides
See what overrides are applied:
# Show override chain for a prompt
swissarmyhammer debug code-review --show-overrides
# Output:
Override chain for 'code-review':
1. System: /usr/share/swissarmyhammer/prompts/code-review.md
2. User: ~/.swissarmyhammer/prompts/code-review.md (extends)
3. Project: ./.swissarmyhammer/overrides.yaml (partial)
4. Runtime: --override title="Custom Review"
# Test with override preview
swissarmyhammer test code-review --preview-overrides
Best Practices
1. Minimal Overrides
Override only what needs to change:
# Good: Override specific fields
name: code-review
extends: true
description: "Code review with security focus"
# Avoid: Duplicating entire prompt
name: code-review
title: Code Review # Unchanged
description: "Code review with security focus" # Only this changed
arguments: [...] # Duplicated
template: | # Duplicated
...
2. Document Overrides
Always document why overrides exist:
# overrides.yaml
overrides:
- name: api-generator
# OVERRIDE REASON: Company requires OAuth2 for all APIs
# JIRA: SECURITY-123
# Date: 2024-01-15
arguments:
- name: auth_type
default: "oauth2"
locked: true # Prevent further overrides
3. Version Control
Track override changes:
# .swissarmyhammer/.gitignore
# Don't ignore override files
!overrides.yaml
!prompts/
# Track override history
git add .swissarmyhammer/overrides.yaml
git commit -m "Add security requirements to code-review prompt"
4. Testing Overrides
Test overrides thoroughly:
# Test override application
swissarmyhammer test code-review --test-overrides
# Compare with and without overrides
swissarmyhammer test code-review --no-overrides > without.txt
swissarmyhammer test code-review > with.txt
diff without.txt with.txt
Security Considerations
Lock Overrides
Prevent certain overrides:
# System prompt with locked fields
---
name: security-scan
locked_fields:
- title
- core_checks
no_override: false # Set to true to block all overrides of this prompt
Validate Overrides
Ensure overrides meet requirements:
# Override validation rules
validation:
rules:
- field: arguments
required_items:
- name: code
- name: language
- field: template
must_contain:
- "SECURITY WARNING"
- "Confidential"
- field: description
min_length: 50
pattern: ".*security.*"
Troubleshooting
Common Issues
- Override not applying:
# Check override precedence
swissarmyhammer config get overrides.precedence
# Verify file locations
swissarmyhammer debug --show-paths
- Merge conflicts:
# Show merge details
swissarmyhammer debug code-review --trace-merge
- Validation errors:
# Validate overrides
swissarmyhammer validate --overrides
Next Steps
- Learn about Prompt Organization
- Understand Configuration options
- Read about Testing override scenarios
- See Examples of override patterns
Built-in Prompts
SwissArmyHammer includes a comprehensive set of built-in prompts designed to assist with various development tasks. These prompts are organized by category and leverage Liquid templating for dynamic, customizable assistance.
Overview
All built-in prompts:
- Support customizable arguments with sensible defaults
- Use Liquid syntax for variable substitution and control flow
- Are organized into logical categories for easy discovery
- Follow a standardized YAML front matter format
Categories
Analysis
statistics-calculator
Calculate statistics on numeric data using math filters and array operations.
Arguments:
- numbers (required) - Comma-separated list of numbers
- precision (default: "2") - Decimal precision for calculations
- show_outliers (default: "true") - Identify outliers in the dataset
- percentiles (default: "25,50,75") - Calculate percentiles (comma-separated)
- visualization (default: "true") - Show ASCII visualization
Example:
swissarmyhammer test statistics-calculator --numbers "10,20,30,40,50" --percentiles "10,50,90"
Communication
email-composer
Compose professional emails with dynamic content using capture blocks.
Arguments:
- recipient_name (required) - Name of the email recipient
- sender_name (required) - Name of the sender
- email_type (required) - Type of email (welcome, followup, reminder, thank_you)
- context (default: "") - Additional context for the email
- formal (default: "false") - Use formal tone
- include_signature (default: "true") - Include email signature
- time_of_day (default: "morning") - Current time of day
Example:
swissarmyhammer test email-composer --recipient_name "John Doe" --sender_name "Jane Smith" --email_type "followup" --formal "true"
Data Processing
array-processor
Process arrays with flexible filtering and loop control.
Arguments:
- items (required) - Comma-separated list of items to process
- skip_pattern (default: "") - Pattern to skip items containing this text
- stop_pattern (default: "") - Pattern to stop processing
- max_items (default: "100") - Maximum number of items to process
- show_skipped (default: "false") - Show skipped items separately
- format (default: "list") - Output format (list, table, json)
Example:
swissarmyhammer test array-processor --items "apple,banana,cherry" --skip_pattern "berry" --format "table"
Debug
debug/error
Analyze error messages and provide debugging guidance with potential solutions.
Arguments:
- error_message (required) - The error message or stack trace to analyze
- language (default: "auto-detect") - The programming language
- context (default: "") - Additional context about when the error occurs
Example:
swissarmyhammer test debug/error --error_message "TypeError: cannot read property 'name' of undefined" --language "javascript"
debug/logs
Analyze log files to identify issues and patterns.
Arguments:
- log_content (required) - The log content to analyze
- issue_description (default: "general analysis") - Description of the issue you're investigating
- time_range (default: "all") - Specific time range to focus on
- log_format (default: "auto-detect") - Log format (json, plaintext, syslog, etc.)
Example:
swissarmyhammer test debug/logs --log_content "$(cat application.log)" --issue_description "API timeout errors"
debug/performance
Analyze performance problems and suggest optimization strategies.
Arguments:
- problem_description (required) - Description of the performance issue
- metrics (default: "not provided") - Performance metrics
- code_snippet (default: "") - Relevant code that might be causing the issue
- environment (default: "development") - Environment details
Example:
swissarmyhammer test debug/performance --problem_description "Database queries taking 5+ seconds" --environment "production"
Documentation
docs/api
Create comprehensive API documentation from code.
Arguments:
- code (required) - The API code to document
- api_type (default: "REST") - Type of API (REST, GraphQL, gRPC, library)
- format (default: "markdown") - Documentation format (markdown, openapi, swagger)
- include_examples (default: "true") - Whether to include usage examples
Example:
swissarmyhammer test docs/api --code "$(cat api.py)" --api_type "REST" --include_examples "true"
docs/comments
Add comprehensive comments and documentation to code.
Arguments:
- code (required) - The code to document
- comment_style (default: "auto-detect") - Comment style (inline, block, jsdoc, docstring, rustdoc)
- detail_level (default: "standard") - Level of detail (minimal, standard, comprehensive)
- audience (default: "developers") - Target audience for the comments
Example:
swissarmyhammer test docs/comments --code "$(cat utils.js)" --comment_style "jsdoc" --detail_level "comprehensive"
docs/readme
Create comprehensive README documentation for a project.
Arguments:
- project_name (required) - Name of the project
- project_description (required) - Brief description of what the project does
- language (default: "auto-detect") - Primary programming language
- features (default: "") - Key features of the project (comma-separated)
- target_audience (default: "developers") - Who this project is for
Example:
swissarmyhammer test docs/readme --project_name "MyLib" --project_description "A library for awesome things" --features "fast,reliable,easy"
Formatting
table-generator
Generate formatted tables with alternating row styles.
Arguments:
- headers (required) - Comma-separated list of table headers
- rows (required) - Semicolon-separated rows, with comma-separated values
- style (default: "markdown") - Table style (markdown, html, ascii)
- zebra (default: "true") - Enable zebra striping for rows
- row_numbers (default: "false") - Add row numbers
Example:
swissarmyhammer test table-generator --headers "Name,Age,City" --rows "John,30,NYC;Jane,25,LA" --style "markdown"
Planning & Productivity
plan
Create structured plans and break down complex tasks.
Arguments:
- task (required) - The task to plan for
- context (default: "") - Additional context for the planning
- constraints (default: "none") - Any constraints or limitations to consider
Example:
swissarmyhammer test plan --task "Implement user authentication" --constraints "Must use OAuth2"
task-formatter
Format and organize tasks with priorities and grouping.
Arguments:
- tasks (required) - Comma-separated list of tasks
- group_by (default: "none") - How to group tasks (priority, status, category, none)
- show_index (default: "true") - Show task numbers
- show_status (default: "true") - Include status checkboxes
- date_format (default: "%B %d, %Y") - Date format for due dates
Example:
swissarmyhammer test task-formatter --tasks "Write tests,Fix bug,Update docs" --group_by "priority"
Prompt Management
prompts/create
Help create effective prompts for SwissArmyHammer.
Arguments:
- purpose (required) - What the prompt should accomplish
- category (default: "general") - Category for the prompt
- inputs_needed (default: "") - What information the prompt needs from users
- complexity (default: "moderate") - Complexity level (simple, moderate, advanced)
Example:
swissarmyhammer test prompts/create --purpose "Generate database migrations" --category "database"
prompts/improve
Analyze and enhance existing prompts for better effectiveness.
Arguments:
- prompt_content (required) - The current prompt content (including YAML front matter)
- improvement_goals (default: "overall enhancement") - What aspects to improve
- user_feedback (default: "") - Any feedback or issues users have reported
Example:
swissarmyhammer test prompts/improve --prompt_content "$(cat my-prompt.md)" --improvement_goals "clarity,flexibility"
Refactoring
refactor/clean
Refactor code for better readability, maintainability, and adherence to best practices.
Arguments:
- code (required) - The code to refactor
- language (default: "auto-detect") - Programming language
- focus_areas (default: "all") - Specific areas to focus on
- style_guide (default: "language defaults") - Specific style guide to follow
Example:
swissarmyhammer test refactor/clean --code "$(cat messy_code.py)" --focus_areas "naming,complexity"
refactor/extract
Extract code into well-named, reusable methods or functions.
Arguments:
- code (required) - The code containing logic to extract
- extract_purpose (required) - What the extracted method should do
- method_name (default: "auto-suggest") - Suggested name for the extracted method
- scope (default: "method") - Scope for the extraction (method, function, class, module)
Example:
swissarmyhammer test refactor/extract --code "$(cat complex.js)" --extract_purpose "validate user input"
refactor/patterns
Refactor code to match a target pattern or improve structure.
Arguments:
- code (required) - The code to refactor
- target_pattern (required) - The pattern or style to refactor towards
Example:
swissarmyhammer test refactor/patterns --code "$(cat service.py)" --target_pattern "Repository pattern"
Code Review
review/code
Review code for quality, bugs, and improvements.
Arguments:
- file_path (required) - Path to the file being reviewed
- context (default: "general review") - Additional context about the code review focus
Example:
swissarmyhammer test review/code --file_path "src/auth.py" --context "focus on security"
review/code-dynamic
Language-specific code review with conditional logic.
Arguments:
- `file_path` (required) - Path to the file being reviewed
- `language` (required) - Programming language (python, javascript, rust, etc.)
- `focus_areas` (default: "style,bugs,performance") - Comma-separated list of areas to focus on
- `severity_level` (default: "warning") - Minimum severity level to report (info, warning, error)
- `include_suggestions` (default: "true") - Include code improvement suggestions
Example:
swissarmyhammer test review/code-dynamic --file_path "app.js" --language "javascript" --focus_areas "security,performance"
review/security
Perform a comprehensive security review of code to identify vulnerabilities.
Arguments:
- `code` (required) - The code to review for security issues
- `context` (default: "general purpose code") - Context about the code
- `language` (default: "auto-detect") - Programming language
- `severity_threshold` (default: "low") - Minimum severity to report (critical, high, medium, low)
Example:
swissarmyhammer test review/security --code "$(cat login.php)" --context "handles user authentication"
review/accessibility
Review code for accessibility compliance and best practices.
Arguments:
- `code` (required) - The UI/frontend code to review
- `wcag_level` (default: "AA") - WCAG compliance level target (A, AA, AAA)
- `component_type` (default: "general") - Type of component (form, navigation, content, interactive)
- `target_users` (default: "all users") - Specific user needs to consider
Example:
swissarmyhammer test review/accessibility --code "$(cat form.html)" --component_type "form" --wcag_level "AA"
Testing
test/unit
Create comprehensive unit tests for code with good coverage.
Arguments:
- `code` (required) - The code to generate tests for
- `framework` (default: "auto-detect") - Testing framework to use
- `style` (default: "BDD") - Testing style (BDD, TDD, classical)
- `coverage_target` (default: "80") - Target test coverage percentage
Example:
swissarmyhammer test test/unit --code "$(cat calculator.py)" --framework "pytest" --style "BDD"
test/integration
Create integration tests to verify component interactions.
Arguments:
- `system_description` (required) - Description of the system/components to test
- `test_scenarios` (default: "basic flow") - Specific scenarios to test (comma-separated)
- `framework` (default: "auto-detect") - Testing framework to use
- `environment` (default: "local") - Test environment setup requirements
Example:
swissarmyhammer test test/integration --system_description "User service and database" --test_scenarios "user creation,user update"
test/property
Create property-based tests to find edge cases automatically.
Arguments:
- `code` (required) - The code to test with properties
- `framework` (default: "auto-detect") - Property testing framework
- `properties_to_test` (default: "common properties") - Specific properties or invariants to verify
- `num_examples` (default: "100") - Number of random examples to generate
Example:
swissarmyhammer test test/property --code "$(cat sort.js)" --properties_to_test "output length equals input length"
General Purpose
help
A prompt for providing helpful assistance and guidance to users.
Arguments:
- `topic` (default: "general assistance") - The topic to get help about
- `detail_level` (default: "normal") - How detailed the help should be
Example:
swissarmyhammer test help --topic "git workflows" --detail_level "detailed"
example
An example prompt for testing.
Arguments:
- `topic` (default: "general topic") - The topic to ask about
Example:
swissarmyhammer test example --topic "testing prompts"
Usage Patterns
Basic Usage
# Use a prompt with default arguments
swissarmyhammer test review/code --file_path "main.py"
# Specify custom arguments
swissarmyhammer test test/unit --code "$(cat utils.js)" --framework "jest" --coverage_target "90"
Piping Content
# Pipe file content to a prompt
cat error.log | xargs -I {} swissarmyhammer test debug/logs --log_content "{}" --issue_description "memory leak"
# Use command substitution
swissarmyhammer test docs/api --code "$(cat api.py)" --api_type "REST"
Combining Multiple Prompts
# First analyze the code
swissarmyhammer test review/code --file_path "service.py" > review.md
# Then generate tests based on the review
swissarmyhammer test test/unit --code "$(cat service.py)" --style "TDD"
Custom Workflows
# Create a security-focused workflow
swissarmyhammer test review/security --code "$(cat auth.js)" --severity_threshold "medium" > security.md
swissarmyhammer test test/unit --code "$(cat auth.js)" --focus "security edge cases"
Best Practices
- Choose the Right Prompt - Select prompts that match your specific task
- Provide Context - Use optional arguments to give more context
- Combine Prompts - Use multiple prompts in sequence for comprehensive workflows
- Customize Arguments - Override defaults when you need specific behavior
- Review Output - Always review and validate generated content before using it
Creating Custom Prompts
If the built-in prompts don't meet your needs:
- Use the `prompts/create` prompt to generate a template
- Save it in your `~/.swissarmyhammer/prompts/` directory
- Follow the YAML front matter format for consistency
- Test with various inputs to ensure reliability
For more information on creating custom prompts, see Creating Prompts.
Custom Filters Reference
Examples
This page provides real-world examples of using SwissArmyHammer for various development tasks.
Basic Prompt Usage
Simple Code Review
# Review a Python file
swissarmyhammer test review/code --file_path "src/main.py"
# Review with specific focus
swissarmyhammer test review/code --file_path "api/auth.py" --context "focus on security and error handling"
Generate Unit Tests
# Generate tests for a function
swissarmyhammer test test/unit --code "$(cat calculator.py)" --framework "pytest"
# Generate tests with high coverage target
swissarmyhammer test test/unit --code "$(cat utils.js)" --framework "jest" --coverage_target "95"
Debug an Error
# Analyze an error message
swissarmyhammer test debug/error \
--error_message "TypeError: Cannot read property 'name' of undefined" \
--language "javascript" \
--context "Happens when user submits form"
Creating Custom Prompts
Basic Prompt Structure
Create `~/.swissarmyhammer/prompts/my-prompt.md`:
---
name: git-commit-message
title: Git Commit Message Generator
description: Generate conventional commit messages from changes
arguments:
- name: changes
description: Description of changes made
required: true
- name: type
description: Type of change (feat, fix, docs, etc.)
required: false
default: feat
- name: scope
description: Scope of the change
required: false
default: ""
---
# Git Commit Message
Based on the changes: {{changes}}
Generate a conventional commit message:
Type: {{type}}
{% if scope %}Scope: {{scope}}{% endif %}
Format: `{{type}}{% if scope %}({{scope}}){% endif %}: <subject>`
Subject should be:
- 50 characters or less
- Present tense
- No period at the end
- Clear and descriptive
Use it:
swissarmyhammer test git-commit-message \
--changes "Added user authentication with OAuth2" \
--type "feat" \
--scope "auth"
Advanced Template with Conditionals
Create `~/.swissarmyhammer/prompts/database-query.md`:
---
name: database-query-optimizer
title: Database Query Optimizer
description: Optimize SQL queries for better performance
arguments:
- name: query
description: The SQL query to optimize
required: true
- name: database
description: Database type (postgres, mysql, sqlite)
required: false
default: postgres
- name: table_sizes
description: Approximate table sizes (small, medium, large)
required: false
default: medium
- name: indexes
description: Available indexes (comma-separated)
required: false
default: ""
---
# SQL Query Optimization
## Original Query
```sql
{{query}}
```

Database: {{database | capitalize}}

{% if database == "postgres" %}
## PostgreSQL Specific Optimizations
- Consider using EXPLAIN ANALYZE
- Check for missing indexes on JOIN columns
- Use CTEs for complex queries
- Consider partial indexes for WHERE conditions
{% elsif database == "mysql" %}
## MySQL Specific Optimizations
- Use EXPLAIN to check execution plan
- Consider covering indexes
- Optimize GROUP BY queries
- Check buffer pool size
{% else %}
## SQLite Specific Optimizations
- Use EXPLAIN QUERY PLAN
- Consider table order in JOINs
- Minimize use of LIKE with wildcards
{% endif %}

## Table Size Considerations
{% case table_sizes %}
{% when "small" %}
- Full table scans might be acceptable
- Focus on query simplicity
{% when "large" %}
- Indexes are critical
- Consider partitioning
- Avoid SELECT *
{% else %}
- Balance between indexes and write performance
- Monitor query execution time
{% endcase %}

{% if indexes %}
## Available Indexes
{% assign index_list = indexes | split: "," %}
{% for index in index_list %}
- {{ index | strip }}
{% endfor %}
{% endif %}

Provide:
- Optimized query
- Explanation of changes
- Expected performance improvement
- Additional index recommendations
### Using Arrays and Loops
Create `~/.swissarmyhammer/prompts/api-client.md`:
```markdown
---
name: api-client-generator
title: API Client Generator
description: Generate API client code from endpoint specifications
arguments:
- name: endpoints
description: Comma-separated list of endpoints (method:path)
required: true
- name: base_url
description: Base URL for the API
required: true
- name: language
description: Target language for the client
required: false
default: javascript
- name: auth_type
description: Authentication type (none, bearer, basic, apikey)
required: false
default: none
---
# API Client Generator
Generate a {{language}} API client for:
- Base URL: {{base_url}}
- Authentication: {{auth_type}}
## Endpoints
{% assign endpoint_list = endpoints | split: "," %}
{% for endpoint in endpoint_list %}
{% assign parts = endpoint | split: ":" %}
{% assign method = parts[0] | strip | upcase %}
{% assign path = parts[1] | strip %}
- {{method}} {{path}}
{% endfor %}
{% if language == "javascript" %}
Generate a modern JavaScript client using:
- Fetch API for requests
- Async/await syntax
- Proper error handling
- TypeScript interfaces if applicable
{% elsif language == "python" %}
Generate a Python client using:
- requests library
- Type hints
- Proper exception handling
- Docstrings for all methods
{% endif %}
{% if auth_type != "none" %}
Include authentication handling for {{auth_type}}:
{% case auth_type %}
{% when "bearer" %}
- Accept token in constructor
- Add Authorization: Bearer header
{% when "basic" %}
- Accept username/password
- Encode credentials properly
{% when "apikey" %}
- Accept API key
- Add to headers or query params as needed
{% endcase %}
{% endif %}
Include:
1. Complete client class
2. Error handling
3. Usage examples
4. Any necessary types/interfaces
```
Complex Workflows
Multi-Step Code Analysis
#!/bin/bash
# analyze-codebase.sh
# Step 1: Get overview of the codebase
echo "=== Codebase Overview ==="
swissarmyhammer test help --topic "codebase structure" --detail_level "detailed" > analysis/overview.md
# Step 2: Review critical files
echo "=== Security Review ==="
for file in auth.py payment.py user.py; do
echo "Reviewing $file..."
swissarmyhammer test review/security \
--code "$(cat src/$file)" \
--context "handles sensitive data" \
--severity_threshold "medium" > "analysis/security-$file.md"
done
# Step 3: Generate tests for uncovered code
echo "=== Test Generation ==="
swissarmyhammer test test/unit \
--code "$(cat src/utils.py)" \
--framework "pytest" \
--style "BDD" \
--coverage_target "90" > tests/test_utils_generated.py
# Step 4: Create documentation
echo "=== Documentation ==="
swissarmyhammer test docs/api \
--code "$(cat src/api.py)" \
--api_type "REST" \
--format "openapi" > docs/api-spec.yaml
echo "Analysis complete! Check the analysis/ directory for results."
Automated PR Review
#!/bin/bash
# pr-review.sh
# Get changed files
CHANGED_FILES=$(git diff --name-only main...HEAD)
echo "# Pull Request Review" > pr-review.md
echo "" >> pr-review.md
for file in $CHANGED_FILES; do
if [[ $file == *.py ]] || [[ $file == *.js ]] || [[ $file == *.ts ]]; then
echo "## Review: $file" >> pr-review.md
# Dynamic code review
swissarmyhammer test review/code-dynamic \
--file_path "$file" \
--language "${file##*.}" \
--focus_areas "bugs,security,performance" \
--severity_level "info" >> pr-review.md
echo "" >> pr-review.md
fi
done
# Check for accessibility issues in UI files
for file in $CHANGED_FILES; do
if [[ $file == *.html ]] || [[ $file == *.jsx ]] || [[ $file == *.tsx ]]; then
echo "## Accessibility: $file" >> pr-review.md
swissarmyhammer test review/accessibility \
--code "$(cat $file)" \
--wcag_level "AA" >> pr-review.md
echo "" >> pr-review.md
fi
done
echo "Review complete! See pr-review.md"
Project Setup Automation
#!/bin/bash
# setup-project.sh
PROJECT_NAME=$1
PROJECT_TYPE=$2 # api, webapp, library
# Create project structure
mkdir -p $PROJECT_NAME/{src,tests,docs}
cd $PROJECT_NAME
# Generate README
swissarmyhammer test docs/readme \
--project_name "$PROJECT_NAME" \
--project_description "A $PROJECT_TYPE project" \
--language "$PROJECT_TYPE" > README.md
# Create initial prompts
mkdir -p prompts/project
# Generate project-specific code review prompt
cat > prompts/project/code-review.md << 'EOF'
---
name: project-code-review
title: Project Code Review
description: Review code according to our project standards
arguments:
- name: file_path
description: File to review
required: true
---
Review {{file_path}} for:
- Our naming conventions (camelCase for JS, snake_case for Python)
- Error handling patterns we use
- Project-specific security requirements
- Performance considerations for our scale
EOF
# Configure SwissArmyHammer for this project
claude mcp add ${PROJECT_NAME}_sah swissarmyhammer serve --prompts ./prompts
echo "Project $PROJECT_NAME setup complete!"
Integration Examples
Git Hooks
Create `.git/hooks/pre-commit`:
#!/bin/bash
# Check code quality before commit
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(py|js|ts)$')
if [ -z "$STAGED_FILES" ]; then
exit 0
fi
echo "Running pre-commit checks..."
for FILE in $STAGED_FILES; do
# Run security review on staged content
  swissarmyhammer test review/security \
    --code "$(git show ":$FILE")" \
    --severity_threshold "high" \
    --language "${FILE##*.}"
if [ $? -ne 0 ]; then
echo "Security issues found in $FILE"
exit 1
fi
done
echo "Pre-commit checks passed!"
CI/CD Integration
Create `.github/workflows/code-quality.yml`:
name: Code Quality
on: [push, pull_request]
jobs:
quality-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install SwissArmyHammer
run: |
curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Run Code Reviews
run: |
for file in $(find src -name "*.py"); do
swissarmyhammer test review/code-dynamic \
--file_path "$file" \
--language "python" \
--focus_areas "bugs,security" \
--severity_level "warning"
done
- name: Generate Missing Tests
run: |
swissarmyhammer test test/unit \
--code "$(cat src/core.py)" \
--framework "pytest" \
--coverage_target "80" > tests/test_core_generated.py
- name: Update Documentation
run: |
swissarmyhammer test docs/api \
--code "$(cat src/api.py)" \
--api_type "REST" \
--format "markdown" > docs/api.md
VS Code Task
Create `.vscode/tasks.json`:
{
"version": "2.0.0",
"tasks": [
{
"label": "Review Current File",
"type": "shell",
"command": "swissarmyhammer",
"args": [
"test",
"review/code",
"--file_path",
"${file}"
],
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
}
},
{
"label": "Generate Tests",
"type": "shell",
"command": "swissarmyhammer",
"args": [
"test",
"test/unit",
"--code",
"$(cat ${file})",
"--framework",
"auto-detect"
],
"group": "test"
}
]
}
Advanced Patterns
Dynamic Prompt Selection
#!/bin/bash
# smart-review.sh
FILE=$1
EXTENSION="${FILE##*.}"
case $EXTENSION in
py)
PROMPT="review/code-dynamic"
ARGS="--language python --focus_areas style,typing"
;;
js|ts)
PROMPT="review/code-dynamic"
ARGS="--language javascript --focus_areas async,security"
;;
html)
PROMPT="review/accessibility"
ARGS="--wcag_level AA"
;;
sql)
PROMPT="database-query-optimizer"
ARGS="--database postgres"
;;
*)
PROMPT="review/code"
ARGS=""
;;
esac
swissarmyhammer test $PROMPT --file_path "$FILE" $ARGS
Batch Processing
#!/usr/bin/env python3
# batch_analyze.py
import subprocess
import json
import glob
def analyze_file(filepath):
"""Run SwissArmyHammer analysis on a file."""
result = subprocess.run([
'swissarmyhammer', 'test', 'review/code',
'--file_path', filepath,
'--context', 'batch analysis'
], capture_output=True, text=True)
return {
'file': filepath,
'output': result.stdout,
'errors': result.stderr
}
# Analyze all Python files
files = glob.glob('**/*.py', recursive=True)
results = [analyze_file(f) for f in files]
# Save results
with open('analysis_results.json', 'w') as f:
json.dump(results, f, indent=2)
print(f"Analyzed {len(files)} files. Results saved to analysis_results.json")
Custom Filter Integration
Create a prompt that uses custom filters:
---
name: data-transformer
title: Data Transformation Pipeline
description: Transform data using custom filters
arguments:
- name: data
description: Input data (JSON or CSV)
required: true
- name: transformations
description: Comma-separated list of transformations
required: true
---
# Data Transformation
Input data:
{{data}}
Apply transformations: {{transformations}}
{% assign transform_list = transformations | split: "," %}
{% for transform in transform_list %}
{% case transform | strip %}
{% when "uppercase" %}
- Convert all text fields to uppercase
{% when "normalize" %}
- Normalize whitespace and formatting
{% when "validate" %}
- Validate data types and constraints
{% when "aggregate" %}
- Aggregate numeric fields
{% endcase %}
{% endfor %}
Provide:
1. Transformed data
2. Transformation log
3. Any validation errors
4. Summary statistics
Tips and Best Practices
1. Use Command Substitution
# Good - passes file content directly
swissarmyhammer test review/code --code "$(cat main.py)"
# Less efficient - requires file path handling
swissarmyhammer test review/code --file_path main.py
2. Chain Commands
# Review then test
swissarmyhammer test review/code --file_path app.py && \
swissarmyhammer test test/unit --code "$(cat app.py)"
3. Save Common Workflows
Create `~/.swissarmyhammer/scripts/full-review.sh`:
#!/bin/bash
FILE=$1
echo "=== Code Review ==="
swissarmyhammer test review/code --file_path "$FILE"
echo -e "\n=== Security Check ==="
swissarmyhammer test review/security --code "$(cat $FILE)"
echo -e "\n=== Test Generation ==="
swissarmyhammer test test/unit --code "$(cat $FILE)"
4. Use Environment Variables
export SAH_DEFAULT_LANGUAGE=python
export SAH_DEFAULT_FRAMEWORK=pytest
# Now these defaults apply
swissarmyhammer test test/unit --code "$(cat app.py)"
5. Create Project Templates
Store in `~/.swissarmyhammer/templates/`:
# Create new project with templates
cp -r ~/.swissarmyhammer/templates/webapp-template my-new-app
cd my-new-app
swissarmyhammer test docs/readme \
--project_name "my-new-app" \
--project_description "My awesome web app"
Next Steps
- Explore Built-in Prompts for more capabilities
- Learn about Creating Prompts for custom workflows
- Check CLI Reference for all available commands
- See Library Usage for programmatic integration
Troubleshooting
This guide helps you resolve common issues with SwissArmyHammer. For additional support, check the GitHub Issues.
Quick Diagnostics
Run the doctor command for automated diagnosis:
swissarmyhammer doctor --verbose
Installation Issues
Command Not Found
Problem: swissarmyhammer: command not found
Solutions:
- Verify installation:
  ls -la ~/.local/bin/swissarmyhammer
  # or
  ls -la /usr/local/bin/swissarmyhammer
- Add to PATH:
  echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
  source ~/.bashrc
- Reinstall:
  curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
Permission Denied
Problem: Permission denied when running swissarmyhammer
Solutions:
# Make executable
chmod +x $(which swissarmyhammer)
# If installed system-wide, use sudo
sudo chmod +x /usr/local/bin/swissarmyhammer
Installation Script Fails
Problem: Install script errors or hangs
Solutions:
- Manual installation:
  # Download binary directly
  curl -L https://github.com/wballard/swissarmyhammer/releases/latest/download/swissarmyhammer-linux-x64 -o swissarmyhammer
  chmod +x swissarmyhammer
  sudo mv swissarmyhammer /usr/local/bin/
- Build from source:
  git clone https://github.com/wballard/swissarmyhammer.git
  cd swissarmyhammer
  cargo build --release
  sudo cp target/release/swissarmyhammer /usr/local/bin/
MCP Server Issues
Server Won't Start
Problem: swissarmyhammer serve fails to start
Solutions:
- Check port availability:
  # Default port
  lsof -i :8080
  # Try a different port
  swissarmyhammer serve --port 8081
- Debug mode:
  swissarmyhammer serve --debug
- Check permissions:
  # Ensure read access to prompt directories
  ls -la ~/.swissarmyhammer/prompts
Claude Code Connection Issues
Problem: SwissArmyHammer doesn't appear in Claude Code
Solutions:
- Verify MCP configuration:
  claude mcp list
- Re-add server:
  claude mcp remove swissarmyhammer
  claude mcp add swissarmyhammer swissarmyhammer serve
- Check server is running:
  # In another terminal
  ps aux | grep swissarmyhammer
- Restart Claude Code:
  - Close Claude Code completely
  - Start Claude Code
  - Check MCP servers are connected
MCP Protocol Errors
Problem: Protocol errors in Claude Code logs
Solutions:
- Update SwissArmyHammer:
  # Check version
  swissarmyhammer --version
  # Update to latest
  curl -sSL https://raw.githubusercontent.com/wballard/swissarmyhammer/main/install.sh | bash
- Check logs:
  # Enable debug logging
  swissarmyhammer serve --debug > debug.log 2>&1
- Validate prompt syntax:
  swissarmyhammer doctor --check prompts
Prompt Issues
Prompts Not Loading
Problem: Prompts don't appear or are outdated
Solutions:
- Check directories:
  # List prompt directories
  ls -la ~/.swissarmyhammer/prompts
  ls -la ./prompts
- Validate prompts:
  swissarmyhammer test <prompt-name>
  swissarmyhammer doctor --check prompts --verbose
- Force reload:
  # Restart the server: Ctrl+C to stop, then:
  swissarmyhammer serve
Invalid YAML Front Matter
Problem: YAML parsing errors
Common Issues:
- Missing quotes:
  # Bad
  description: This won't work: because of the colon
  # Good
  description: "This works: because it's quoted"
- Incorrect indentation:
  # Bad
  arguments:
  - name: test
      description: Test argument
  # Good
  arguments:
    - name: test
      description: Test argument
- Missing required fields:
  # Must have name, title, description
  ---
  name: my-prompt
  title: My Prompt
  description: What this prompt does
  ---
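You can catch these mistakes before the server does with a quick sanity check. A rough stdlib-only sketch (it only verifies that the front matter block and the required top-level keys exist; it is no substitute for a real YAML parser):

```python
import re

REQUIRED_FIELDS = ("name", "title", "description")

def check_front_matter(text: str) -> list:
    """Return a list of problems found in a prompt file's front matter."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["missing '---' front matter block"]
    # Top-level keys are unindented "key: value" lines
    keys = {line.split(":", 1)[0] for line in match.group(1).splitlines()
            if line and not line[0].isspace() and ":" in line}
    for field in REQUIRED_FIELDS:
        if field not in keys:
            problems.append("missing required field: " + field)
    return problems

good = "---\nname: my-prompt\ntitle: My Prompt\ndescription: What this prompt does\n---\n# Body\n"
print(check_front_matter(good))                 # []
print(check_front_matter("# no front matter"))  # ["missing '---' front matter block"]
```

Running a check like this over your prompt directory turns a cryptic server-side parse error into a named missing field.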
Template Rendering Errors
Problem: Liquid template errors
Common Issues:
- Undefined variables:
  # Error: undefined variable 'foo'
  {{ foo }}
  # Fix: check that the variable exists
  {% if foo %}{{ foo }}{% endif %}
- Invalid filter:
  # Error: unknown filter
  {{ text | invalid_filter }}
  # Fix: use a valid filter
  {{ text | capitalize }}
- Syntax errors:
  # Error: unclosed tag
  {% if condition %}
  # Fix: close all tags
  {% if condition %}...{% endif %}
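Unclosed block tags in particular can be caught with a quick balance check. A rough sketch (it only pairs `{% if %}`/`{% endif %}`-style block tags and ignores the rest of Liquid's grammar):

```python
import re

# Liquid block tags and their closers (a subset, for illustration)
BLOCK_TAGS = {"if": "endif", "for": "endfor", "case": "endcase",
              "unless": "endunless", "capture": "endcapture"}
CLOSERS = {close: open_ for open_, close in BLOCK_TAGS.items()}

def unclosed_tags(template: str) -> list:
    """Return block tags left open (or mismatched closers) in a template."""
    stack = []
    for tag in re.findall(r"{%-?\s*(\w+)", template):
        if tag in BLOCK_TAGS:
            stack.append(tag)
        elif tag in CLOSERS:
            if stack and stack[-1] == CLOSERS[tag]:
                stack.pop()
            else:
                stack.append("unmatched " + tag)
    return stack

print(unclosed_tags("{% if x %}{{ x }}{% endif %}"))  # []
print(unclosed_tags("{% if x %}{{ x }}"))             # ['if']
```

An empty result means every block tag it recognizes was closed in order; anything left on the stack points at the offending tag.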
Duplicate Prompt Names
Problem: Multiple prompts with same name
Solutions:
- Check override hierarchy:
  swissarmyhammer list --verbose | grep "prompt-name"
- Rename conflicts:
  - Local prompts override user prompts
  - User prompts override built-in prompts
  - Rename one to avoid confusion
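To see the override behavior in action, you can shadow a built-in prompt with a local one of the same name. A sketch, assuming `help` is the built-in prompt shown earlier and `./prompts` is a project-local directory passed via `--prompts`:

```shell
# Create a local prompt named "help" that shadows the built-in one
mkdir -p ./prompts
cat > ./prompts/help.md << 'EOF'
---
name: help
title: Project Help
description: Project-specific help that overrides the built-in help prompt
arguments:
  - name: topic
    description: The topic to get help about
    required: false
    default: "this project"
---
Give project-specific guidance about {{topic}}.
EOF

# The local file now takes precedence when the server loads ./prompts
grep "^title:" ./prompts/help.md
```

Renaming either prompt removes the ambiguity; leaving both in place means the local copy silently wins.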
Performance Issues
Slow Prompt Loading
Problem: Server takes long to start or reload
Solutions:
- Disable file watching:
  swissarmyhammer serve --watch false
- Limit prompt directories:
  swissarmyhammer serve --prompts ./essential-prompts --builtin false
- Check directory size:
  find ~/.swissarmyhammer/prompts -type f | wc -l
High Memory Usage
Problem: Excessive memory consumption
Solutions:
- Monitor usage:
  top | grep swissarmyhammer
- Optimize configuration:
  # Disable file watching
  swissarmyhammer serve --watch false
  # Reduce prompt count: move unused prompts to an archive
- System limits:
  # Check ulimits
  ulimit -a
  # Increase if needed
  ulimit -n 4096
File System Issues
Permission Errors
Problem: Cannot read/write prompt files
Solutions:
- Fix directory permissions:
  chmod -R 755 ~/.swissarmyhammer
  chmod 644 ~/.swissarmyhammer/prompts/*.md
- Check ownership:
  ls -la ~/.swissarmyhammer/
  # Fix if needed
  chown -R $USER:$USER ~/.swissarmyhammer
File Watching Not Working
Problem: Changes to prompts not detected
Solutions:
- Check file system support:
  # macOS
  fs_usage | grep swissarmyhammer
  # Linux
  inotifywait -m ~/.swissarmyhammer/prompts
- Increase watch limits (Linux):
  echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
  sudo sysctl -p
- Manual reload:
  - Restart the server
  - Or disable watching: `--watch false`
CLI Command Issues
Test Command Fails
Problem: swissarmyhammer test errors
Solutions:
- Check the prompt exists:
  swissarmyhammer list | grep "prompt-name"
- Validate arguments:
  # Show required arguments
  swissarmyhammer test prompt-name --help
- Debug mode:
  swissarmyhammer test prompt-name --debug
Export/Import Errors
Problem: Cannot export or import prompts
Solutions:
- Check file permissions:
  # For export
  touch test-export.tar.gz
  # For import
  ls -la import-file.tar.gz
- Validate the archive:
  tar -tzf archive.tar.gz
- Manual export:
  tar -czf prompts.tar.gz -C ~/.swissarmyhammer prompts/
Environment-Specific Issues
macOS Issues
Problem: Security warnings or quarantine
Solutions:
# Remove quarantine attribute
xattr -d com.apple.quarantine /usr/local/bin/swissarmyhammer
# Allow in Security & Privacy settings
# System Preferences > Security & Privacy > General
Linux Issues
Problem: Library dependencies missing
Solutions:
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install libssl-dev
# Fedora
sudo dnf install openssl-devel
# Check dependencies
ldd $(which swissarmyhammer)
Windows Issues
Problem: Path or execution issues
Solutions:
- Use PowerShell as Administrator
- Add to PATH:
  $env:Path += ";C:\Program Files\swissarmyhammer"
  [Environment]::SetEnvironmentVariable("Path", $env:Path, [EnvironmentVariableTarget]::User)
- Windows Defender:
  - Add an exclusion for swissarmyhammer.exe
  - Check Windows Security logs
Debug Techniques
Enable Verbose Logging
# Server debug mode
swissarmyhammer serve --debug
# Redirect to file
swissarmyhammer serve --debug > debug.log 2>&1
# CLI debug
RUST_LOG=debug swissarmyhammer test prompt-name
Check Configuration
# Run comprehensive checks
swissarmyhammer doctor --verbose
# Check specific areas
swissarmyhammer doctor --check prompts --check mcp
# Auto-fix issues
swissarmyhammer doctor --fix
Trace MCP Communication
# Save MCP messages
swissarmyhammer serve --debug | grep MCP > mcp-trace.log
# Monitor in real-time
swissarmyhammer serve --debug | grep -E "(request|response)"
Getting Help
Documentation
- Check this troubleshooting guide first
- Read the CLI Reference
- Review Configuration options
Community Support
- GitHub Issues
- Discussions
- Discord/Slack Community (if available)
Reporting Issues
When reporting issues, include:
- System information:
  swissarmyhammer doctor --json > diagnosis.json
- Steps to reproduce
- Error messages and logs
- Expected vs actual behavior
Debug Information Script
Save this as `debug-info.sh`:
#!/bin/bash
echo "=== SwissArmyHammer Debug Information ==="
echo "Date: $(date)"
echo "Version: $(swissarmyhammer --version)"
echo "OS: $(uname -a)"
echo ""
echo "=== Doctor Report ==="
swissarmyhammer doctor --verbose
echo ""
echo "=== Configuration ==="
cat ~/.swissarmyhammer/config.toml 2>/dev/null || echo "No config file"
echo ""
echo "=== Prompt Directories ==="
ls -la ~/.swissarmyhammer/prompts 2>/dev/null || echo "No user prompts"
ls -la ./prompts 2>/dev/null || echo "No local prompts"
echo ""
echo "=== Process Check ==="
ps aux | grep swissarmyhammer | grep -v grep
Run and save output:
bash debug-info.sh > debug-info.txt
Common Error Messages
"Failed to bind to address"
- Port already in use
- Try: `--port 8081`
"Permission denied"
- File/directory permissions issue
- Try: `chmod +x` or check ownership
"YAML parse error"
- Invalid YAML syntax in the prompt
- Check indentation and special characters
"Template compilation failed"
- Liquid syntax error
- Check that tags are closed and filters exist
"Prompt not found"
- Prompt name doesn't exist
- Check: `swissarmyhammer list`
"Connection refused"
- MCP server not running
- Start the server: `swissarmyhammer serve`
Prevention Tips
- Regular maintenance:
  # Weekly health check
  swissarmyhammer doctor
  # Update regularly
  swissarmyhammer --version
- Backup prompts:
  # Regular backups
  swissarmyhammer export ~/.swissarmyhammer/backups/prompts-$(date +%Y%m%d).tar.gz
- Test changes:
  # Before committing
  swissarmyhammer test new-prompt
  swissarmyhammer doctor --check prompts
- Monitor logs:
  # Keep logs for debugging
  swissarmyhammer serve --debug > server.log 2>&1 &
Contributing
Thank you for your interest in contributing to SwissArmyHammer! This guide will help you get started with contributing to the project.
Overview
SwissArmyHammer welcomes contributions in many forms:
- Code contributions - Features, bug fixes, optimizations
- Prompt contributions - New built-in prompts
- Documentation - Improvements, examples, translations
- Bug reports - Issues and reproducible test cases
- Feature requests - Ideas and suggestions
Getting Started
Prerequisites
- Rust 1.70+ (check with `rustc --version`)
- Git
- GitHub account
- Basic familiarity with:
- Rust programming
- Model Context Protocol (MCP)
- Liquid templating
Fork and Clone
1. Fork the repository on GitHub
2. Clone your fork:
   git clone https://github.com/YOUR_USERNAME/swissarmyhammer.git
   cd swissarmyhammer
3. Add the upstream remote:
   git remote add upstream https://github.com/wballard/swissarmyhammer.git
Development Setup
1. Install the Rust toolchain:
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
2. Install development tools:
   # Format checker
   rustup component add rustfmt
   # Linter
   rustup component add clippy
   # Documentation
   cargo install mdbook
3. Build the project:
   cargo build
   cargo test
Development Workflow
Branch Strategy
- `main` - Stable release branch
- `develop` - Development branch
- `feature/*` - Feature branches
- `fix/*` - Bug fix branches
- `docs/*` - Documentation branches
Creating a Feature Branch
# Update your fork
git checkout main
git pull upstream main
git push origin main
# Create feature branch
git checkout -b feature/your-feature-name
# Or for fixes
git checkout -b fix/issue-description
Making Changes
- Write code following our style guide
- Add tests for new functionality
- Update documentation as needed
- Run checks before committing
Running Checks
# Format code
cargo fmt
# Run linter
cargo clippy -- -D warnings
# Run tests
cargo test
# Build documentation
cargo doc --no-deps --open
# Check everything
./scripts/check-all.sh
Code Style Guide
Rust Code
Follow Rust standard style with these additions:
// Good: Clear module organization
pub mod prompts;
pub mod template;
pub mod mcp;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};
// Good: Descriptive names
pub struct PromptManager {
prompts: HashMap<String, Prompt>,
directories: Vec<PathBuf>,
watcher: Option<FileWatcher>,
}
// Good: Clear error handling
impl PromptManager {
pub fn load_prompt(&mut self, path: &Path) -> Result<()> {
let content = std::fs::read_to_string(path)
.with_context(|| format!("Failed to read prompt file: {}", path.display()))?;
let prompt = Prompt::parse(&content)
.with_context(|| format!("Failed to parse prompt: {}", path.display()))?;
self.prompts.insert(prompt.name.clone(), prompt);
Ok(())
}
}
// Good: Comprehensive tests
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_load_prompt() {
let mut manager = PromptManager::new();
let result = manager.load_prompt(Path::new("test.md"));
assert!(result.is_ok());
}
}
Documentation
- Use `///` for public API documentation
- Include examples in doc comments
- Keep comments concise and helpful
/// Manages a collection of prompts and provides MCP server functionality.
///
/// # Examples
///
/// ```
/// use swissarmyhammer::PromptManager;
///
/// let mut manager = PromptManager::new();
/// manager.load_prompts()?;
/// ```
pub struct PromptManager {
// Implementation details...
}
Error Messages
Make errors helpful and actionable:
// Good
bail!("Prompt '{}' not found in directories: {:?}", name, self.directories);
// Good with context
.with_context(|| format!("Failed to parse YAML front matter in {}", path.display()))?;
// Bad
bail!("Error");
Contributing Prompts
Built-in Prompt Guidelines
- Location: `src/prompts/builtin/`
- Categories: Place in the appropriate subdirectory
- Quality: Must be generally useful
- Testing: Include test cases
Prompt Standards
---
name: descriptive-name
title: Human Readable Title
description: |
Clear description of what this prompt does.
Include use cases and examples.
category: development
tags:
- relevant
- searchable
- tags
author: your-email@example.com
version: 1.0.0
arguments:
- name: required_arg
description: What this argument is for
required: true
- name: optional_arg
description: Optional parameter
default: "default value"
---
# Prompt Title
Clear instructions using the arguments:
- {{required_arg}}
- {{optional_arg}}
## Section Headers
Organize the prompt logically...
Testing Prompts
Add a test file at `src/prompts/builtin/tests/your-prompt.test.md`:
name: test-your-prompt
cases:
- name: basic usage
arguments:
required_arg: "test value"
expected_contains:
- "test value"
- "expected output"
expected_not_contains:
- "error"
- name: edge case
arguments:
required_arg: ""
optional_arg: "custom"
expected_error: "required_arg cannot be empty"
Documentation
Documentation Structure
doc/
├── src/
│   ├── SUMMARY.md       # Table of contents
│   ├── chapter-1.md     # Content files
│   └── images/          # Images and diagrams
└── book.toml            # mdbook configuration
Writing Documentation
- Be clear and concise
- Include examples
- Use proper markdown
- Test all code examples
Building Documentation
cd doc
mdbook build
mdbook serve # Preview at http://localhost:3000
Testing
Test Organization
tests/
├── integration/   # Integration tests
├── fixtures/      # Test data
└── common/        # Shared test utilities
Writing Tests
#[test]
fn test_prompt_loading() {
let temp_dir = tempdir().unwrap();
let prompt_file = temp_dir.path().join("test.md");
std::fs::write(&prompt_file, r#"---
name: test-prompt
title: Test
---
Content"#).unwrap();
let mut manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts().unwrap();
assert!(manager.get_prompt("test-prompt").is_some());
}
Running Tests
# All tests
cargo test
# Specific test
cargo test test_prompt_loading
# With output
cargo test -- --nocapture
# Integration tests only
cargo test --test integration
Submitting Changes
Commit Messages
Follow Conventional Commits:
# Features
git commit -m "feat: add prompt validation API"
git commit -m "feat(mcp): implement notification support"
# Bug fixes
git commit -m "fix: correct template escaping issue"
git commit -m "fix(watcher): handle symlink changes"
# Documentation
git commit -m "docs: add prompt writing guide"
git commit -m "docs(api): document PromptManager methods"
# Performance
git commit -m "perf: optimize prompt loading"
git commit -m "perf(cache): implement LRU cache"
# Refactoring
git commit -m "refactor: simplify error handling"
git commit -m "refactor(template): extract common logic"
Pull Request Process
1. Update your branch:
git fetch upstream
git rebase upstream/main
2. Push to your fork:
git push origin feature/your-feature-name
3. Create a pull request:
- Use clear, descriptive title
- Reference any related issues
- Describe what changes do
- Include test results
- Add screenshots if UI changes
PR Template
## Description
Brief description of changes
## Related Issue
Fixes #123
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed
## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
- [ ] Tests added/updated
- [ ] Breaking changes documented
Code Review Process
What We Look For
- Correctness - Does it work as intended?
- Tests - Are changes adequately tested?
- Documentation - Is it documented?
- Style - Does it follow conventions?
- Performance - Any performance impacts?
- Security - Any security concerns?
Review Timeline
- Initial response: 2-3 days
- Full review: Within a week
- Follow-ups: As needed
Addressing Feedback
# Make requested changes
git add -A
git commit -m "address review feedback"
# Or amend if small change
git commit --amend
# Force push to your branch
git push -f origin feature/your-feature-name
Release Process
Version Numbering
We use Semantic Versioning:
- MAJOR: Breaking API changes
- MINOR: New features, backward compatible
- PATCH: Bug fixes, backward compatible
Release Checklist
- Update `Cargo.toml` version
- Update `CHANGELOG.md`
- Run full test suite
- Build and test binaries
- Update documentation
- Create release PR
- Tag release after merge
- Publish to crates.io
Community
Getting Help
- GitHub Issues - Bug reports and features
- Discussions - Questions and ideas
- Discord - Real-time chat (if available)
Code of Conduct
We follow the Rust Code of Conduct:
- Be respectful and inclusive
- Welcome newcomers
- Focus on what's best for the community
- Show empathy towards others
Recognition
Contributors are recognized in:
- `CONTRIBUTORS.md` file
- Release notes
- Documentation credits
Quick Reference
Common Commands
# Development
cargo build # Build project
cargo test # Run tests
cargo fmt # Format code
cargo clippy # Lint code
cargo doc # Build docs
# Documentation
cd doc && mdbook serve # Preview docs
# Prompts
cargo run -- list # List prompts
cargo run -- doctor # Validate prompts
# Release
cargo publish --dry-run # Test publishing
Useful Resources
Thank You!
Your contributions make SwissArmyHammer better for everyone. Whether it's fixing a typo, adding a feature, or improving documentation, every contribution is valued.
Happy contributing! π
Development Setup
This guide covers setting up a development environment for working on SwissArmyHammer.
Prerequisites
Required Tools
- Rust 1.70 or later
- Git 2.0 or later
- Cargo (comes with Rust)
- A code editor (VS Code recommended)
Optional Tools
- Docker - For testing container builds
- mdBook - For documentation development
- Node.js - For web-based tooling
- Python - For utility scripts
Environment Setup
Installing Rust
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Follow the installation prompts, then:
source $HOME/.cargo/env
# Verify installation
rustc --version
cargo --version
Setting Up the Repository
# Clone the repository
git clone https://github.com/wballard/swissarmyhammer.git
cd swissarmyhammer
# Install development dependencies
cargo install cargo-watch cargo-edit cargo-outdated
# Install formatting and linting tools
rustup component add rustfmt clippy
# Install documentation tools
cargo install mdbook mdbook-linkcheck mdbook-mermaid
VS Code Setup
Install recommended extensions:
{
"recommendations": [
"rust-lang.rust-analyzer",
"vadimcn.vscode-lldb",
"serayuzgur.crates",
"tamasfe.even-better-toml",
"streetsidesoftware.code-spell-checker",
"yzhang.markdown-all-in-one"
]
}
Settings for `.vscode/settings.json`:
{
"editor.formatOnSave": true,
"rust-analyzer.cargo.features": "all",
"rust-analyzer.checkOnSave.command": "clippy",
"rust-analyzer.inlayHints.enable": true,
"rust-analyzer.inlayHints.typeHints.enable": true,
"rust-analyzer.inlayHints.parameterHints.enable": true,
"[rust]": {
"editor.defaultFormatter": "rust-lang.rust-analyzer"
}
}
Project Structure
swissarmyhammer/
├── src/
│   ├── main.rs        # CLI entry point
│   ├── lib.rs         # Library entry point
│   ├── cli/           # CLI commands
│   ├── mcp/           # MCP server implementation
│   ├── prompts/       # Prompt management
│   ├── template/      # Template engine
│   └── utils/         # Utilities
├── tests/
│   ├── integration/   # Integration tests
│   └── fixtures/      # Test data
├── doc/
│   └── src/           # Documentation source
├── examples/          # Example code
├── benches/           # Benchmarks
└── Cargo.toml         # Project manifest
Building the Project
Development Build
# Quick build (debug mode)
cargo build
# Run tests
cargo test
# Run with debug output
RUST_LOG=debug cargo run -- serve
# Watch for changes and rebuild
cargo watch -x build
Release Build
# Optimized build
cargo build --release
# Run release binary
./target/release/swissarmyhammer --version
# Build with all features
cargo build --release --all-features
Cross-Compilation
# Install cross-compilation tools
cargo install cross
# Build for different targets
cross build --target x86_64-pc-windows-gnu
cross build --target aarch64-apple-darwin
cross build --target x86_64-unknown-linux-musl
Development Workflow
Running Tests
# All tests
cargo test
# Unit tests only
cargo test --lib
# Integration tests only
cargo test --test '*'
# Specific test
cargo test test_prompt_loading
# With output
cargo test -- --show-output
# With specific features
cargo test --features "experimental"
Code Quality
# Format code
cargo fmt
# Check formatting
cargo fmt -- --check
# Run linter
cargo clippy
# Strict linting
cargo clippy -- -D warnings
# Check for security issues
cargo audit
# Update dependencies
cargo update
cargo outdated
Documentation
# Build API documentation
cargo doc --no-deps --open
# Build user documentation
cd doc
mdbook build
mdbook serve
# Check documentation examples
cargo test --doc
Debugging
VS Code Debug Configuration
`.vscode/launch.json`:
{
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug CLI",
"cargo": {
"args": ["build", "--bin=swissarmyhammer"],
"filter": {
"name": "swissarmyhammer",
"kind": "bin"
}
},
"args": ["serve", "--debug"],
"cwd": "${workspaceFolder}",
"env": {
"RUST_LOG": "debug",
"RUST_BACKTRACE": "1"
}
},
{
"type": "lldb",
"request": "launch",
"name": "Debug Tests",
"cargo": {
"args": ["test", "--no-run"],
"filter": {
"name": "swissarmyhammer",
"kind": "lib"
}
},
"args": [],
"cwd": "${workspaceFolder}"
}
]
}
Command Line Debugging
# Enable debug logging
export RUST_LOG=swissarmyhammer=debug
# Enable backtrace
export RUST_BACKTRACE=1
# Run with debugging
cargo run -- serve --debug
# Use GDB
rust-gdb target/debug/swissarmyhammer
# Use LLDB
rust-lldb target/debug/swissarmyhammer
Logging
Add debug logging to your code:
use log::{debug, info, warn, error};
fn process_prompt(name: &str) -> Result<()> {
debug!("Processing prompt: {}", name);
if let Some(prompt) = self.get_prompt(name) {
info!("Found prompt: {}", prompt.title);
Ok(())
} else {
error!("Prompt not found: {}", name);
Err(anyhow!("Prompt not found"))
}
}
Performance Profiling
Benchmarking
Create benchmarks in `benches/`:
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use swissarmyhammer::PromptManager;
fn bench_prompt_loading(c: &mut Criterion) {
c.bench_function("load 100 prompts", |b| {
b.iter(|| {
let manager = PromptManager::new();
manager.load_prompts()
});
});
}
criterion_group!(benches, bench_prompt_loading);
criterion_main!(benches);
Run benchmarks:
cargo bench
# Compare benchmarks
cargo bench -- --save-baseline before
# Make changes
cargo bench -- --baseline before
CPU Profiling
# Install profiling tools
cargo install flamegraph
# Generate flamegraph
cargo flamegraph --bin swissarmyhammer -- serve
# Using perf (Linux)
perf record --call-graph=dwarf cargo run --release -- serve
perf report
Memory Profiling
# Install valgrind (Linux/macOS)
# macOS: brew install valgrind
# Linux: apt-get install valgrind
# Run with valgrind
valgrind --leak-check=full \
--show-leak-kinds=all \
target/debug/swissarmyhammer serve
# Use heaptrack (Linux)
heaptrack cargo run -- serve
heaptrack_gui heaptrack.*.gz
Testing Strategies
Unit Testing
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_prompt_parsing() {
let content = r#"---
name: test
title: Test Prompt
---
Content"#;
let prompt = Prompt::parse(content).unwrap();
assert_eq!(prompt.name, "test");
assert_eq!(prompt.title, "Test Prompt");
}
}
Integration Testing
In `tests/integration/`:
use swissarmyhammer::PromptManager;
use tempfile::tempdir;
#[test]
fn test_full_workflow() {
let temp_dir = tempdir().unwrap();
// Create test prompts
std::fs::write(
temp_dir.path().join("test.md"),
"---\nname: test\n---\nContent"
).unwrap();
// Test loading
let mut manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts().unwrap();
// Test retrieval
assert!(manager.get_prompt("test").is_some());
}
Property Testing
Using `proptest`:
use proptest::prelude::*;
proptest! {
#[test]
fn test_prompt_name_validation(name in "[a-z][a-z0-9-]*") {
assert!(is_valid_prompt_name(&name));
}
}
Common Development Tasks
Adding a New Command
1. Create a command module in `src/cli/`:

// src/cli/new_command.rs
use clap::Args;

#[derive(Args)]
pub struct NewCommand {
    #[arg(short, long)]
    option: String,
}

impl NewCommand {
    pub fn run(&self) -> Result<()> {
        // Implementation
        Ok(())
    }
}

2. Add it to the CLI enum:

// src/cli/mod.rs
#[derive(Subcommand)]
pub enum Commands {
    NewCommand(NewCommand),
    // ...
}
Adding a Feature
1. Define the feature in `Cargo.toml`:

[features]
experimental = ["dep:experimental-lib"]

2. Conditionally compile code:

#[cfg(feature = "experimental")]
pub mod experimental {
    // Experimental features
}
Updating Dependencies
# Check outdated dependencies
cargo outdated
# Update specific dependency
cargo update -p serde
# Update all dependencies
cargo update
# Edit dependency version
cargo upgrade serde --version 1.0.150
Troubleshooting
Common Issues
Compilation Errors
# Clean build artifacts
cargo clean
# Check for missing dependencies
cargo check
# Verify toolchain
rustup show
Test Failures
# Run single test with output
cargo test test_name -- --nocapture
# Run tests serially
cargo test -- --test-threads=1
# Skip slow tests
cargo test --lib
Performance Issues
# Build with debug symbols in release (set `debug = true` under [profile.release])
cargo build --release
# Check binary size
cargo bloat --release
# Analyze dependencies
cargo tree --duplicates
CI/CD Integration
GitHub Actions
`.github/workflows/ci.yml`:
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
rust: [stable, beta]
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.rust }}
components: rustfmt, clippy
- name: Cache
uses: Swatinem/rust-cache@v2
- name: Check
run: cargo check --all-features
- name: Test
run: cargo test --all-features
- name: Clippy
run: cargo clippy -- -D warnings
- name: Format
run: cargo fmt -- --check
Pre-commit Hooks
`.pre-commit-config.yaml`:
repos:
- repo: local
hooks:
- id: fmt
name: Format
entry: cargo fmt -- --check
language: system
types: [rust]
pass_filenames: false
- id: clippy
name: Clippy
entry: cargo clippy -- -D warnings
language: system
types: [rust]
pass_filenames: false
- id: test
name: Test
entry: cargo test
language: system
types: [rust]
pass_filenames: false
Next Steps
- Read Contributing for contribution guidelines
- Check Testing for detailed testing practices
- See Release Process for release procedures
- Review Architecture for system design
Testing
This guide covers testing practices and strategies for SwissArmyHammer development.
Overview
SwissArmyHammer uses a comprehensive testing approach:
- Unit tests - Test individual components
- Integration tests - Test component interactions
- End-to-end tests - Test complete workflows
- Property tests - Test with generated inputs
- Benchmark tests - Test performance
Test Organization
swissarmyhammer/
├── src/
│   └── *.rs           # Unit tests in source files
├── tests/
│   ├── integration/   # Integration test files
│   ├── common/        # Shared test utilities
│   └── fixtures/      # Test data files
├── benches/           # Benchmark tests
└── examples/          # Example code (also tested)
Unit Testing
Basic Unit Tests
Place unit tests in the same file as the code:
// src/prompts/prompt.rs
pub struct Prompt {
pub name: String,
pub title: String,
pub content: String,
}
impl Prompt {
pub fn parse(content: &str) -> Result<Self> {
// Implementation
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_valid_prompt() {
let content = r#"---
name: test
title: Test Prompt
---
Content here"#;
let prompt = Prompt::parse(content).unwrap();
assert_eq!(prompt.name, "test");
assert_eq!(prompt.title, "Test Prompt");
assert!(prompt.content.contains("Content here"));
}
#[test]
fn test_parse_missing_name() {
let content = r#"---
title: Test Prompt
---
Content"#;
let result = Prompt::parse(content);
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("name"));
}
}
Testing Private Functions
#[cfg(test)]
mod tests {
use super::*;
// Test private functions by making them pub(crate) in test mode
#[test]
fn test_private_helper() {
// Can access private functions within the module
let result = validate_prompt_name("test-name");
assert!(result);
}
}
Mock Dependencies
#[cfg(test)]
mod tests {
use super::*;
use mockall::*;
#[automock]
trait FileSystem {
fn read_file(&self, path: &Path) -> io::Result<String>;
}
#[test]
fn test_with_mock_filesystem() {
let mut mock = MockFileSystem::new();
mock.expect_read_file()
.returning(|_| Ok("file content".to_string()));
let result = process_with_fs(&mock, "test.md");
assert!(result.is_ok());
}
}
Integration Testing
Basic Integration Test
Create files in `tests/integration/`:
// tests/integration/prompt_loading.rs
use swissarmyhammer::{PromptManager, Config};
use tempfile::tempdir;
use std::fs;
#[test]
fn test_load_prompts_from_directory() {
// Create temporary directory
let temp_dir = tempdir().unwrap();
let prompt_path = temp_dir.path().join("test.md");
// Write test prompt
fs::write(&prompt_path, r#"---
name: test-prompt
title: Test Prompt
---
Test content"#).unwrap();
// Test loading
let mut config = Config::default();
config.prompt_directories.push(temp_dir.path().to_path_buf());
let manager = PromptManager::with_config(config).unwrap();
manager.load_prompts().unwrap();
// Verify
let prompt = manager.get_prompt("test-prompt").unwrap();
assert_eq!(prompt.title, "Test Prompt");
}
Testing MCP Server
// tests/integration/mcp_server.rs
use swissarmyhammer::mcp::{MCPServer, MCPRequest, MCPResponse};
use serde_json::json;
#[tokio::test]
async fn test_mcp_initialize() {
let server = MCPServer::new();
let request = MCPRequest {
jsonrpc: "2.0".to_string(),
method: "initialize".to_string(),
params: json!({}),
id: Some(json!(1)),
};
let response = server.handle_request(request).await.unwrap();
assert_eq!(response.jsonrpc, "2.0");
assert!(response.result.is_some());
assert!(response.result.unwrap()["serverInfo"]["name"]
.as_str()
.unwrap()
.contains("swissarmyhammer"));
}
#[tokio::test]
async fn test_mcp_list_prompts() {
let server = setup_test_server().await;
let request = MCPRequest {
jsonrpc: "2.0".to_string(),
method: "prompts/list".to_string(),
params: json!({}),
id: Some(json!(2)),
};
let response = server.handle_request(request).await.unwrap();
let prompts = &response.result.unwrap()["prompts"];
assert!(prompts.is_array());
assert!(!prompts.as_array().unwrap().is_empty());
}
Testing CLI Commands
// tests/integration/cli_commands.rs
use assert_cmd::Command;
use predicates::prelude::*;
use tempfile::tempdir;
#[test]
fn test_list_command() {
let mut cmd = Command::cargo_bin("swissarmyhammer").unwrap();
cmd.arg("list")
.assert()
.success()
.stdout(predicate::str::contains("Available prompts:"));
}
#[test]
fn test_serve_command_help() {
let mut cmd = Command::cargo_bin("swissarmyhammer").unwrap();
cmd.arg("serve")
.arg("--help")
.assert()
.success()
.stdout(predicate::str::contains("Start the MCP server"));
}
#[test]
fn test_export_import_workflow() {
let temp_dir = tempdir().unwrap();
let export_path = temp_dir.path().join("export.tar.gz");
// Export
Command::cargo_bin("swissarmyhammer").unwrap()
.arg("export")
.arg(&export_path)
.assert()
.success();
// Import
Command::cargo_bin("swissarmyhammer").unwrap()
.arg("import")
.arg(&export_path)
.arg("--dry-run")
.assert()
.success()
.stdout(predicate::str::contains("Would import"));
}
Property Testing
Using Proptest
// src/validation.rs
use proptest::prelude::*;
fn is_valid_prompt_name(name: &str) -> bool {
!name.is_empty()
&& name.chars().all(|c| c.is_alphanumeric() || c == '-')
&& name.chars().next().unwrap().is_alphabetic()
}
#[cfg(test)]
mod tests {
use super::*;
use proptest::prelude::*;
proptest! {
#[test]
fn test_valid_names_accepted(name in "[a-z][a-z0-9-]{0,50}") {
assert!(is_valid_prompt_name(&name));
}
#[test]
fn test_invalid_names_rejected(name in "[^a-z].*|.*[^a-z0-9-].*") {
// Names starting with non-letter or containing invalid chars
if !name.chars().next().unwrap().is_alphabetic()
|| name.chars().any(|c| !c.is_alphanumeric() && c != '-') {
assert!(!is_valid_prompt_name(&name));
}
}
}
}
Testing Template Rendering
use proptest::prelude::*;
proptest! {
#[test]
fn test_template_escaping(user_input in any::<String>()) {
let template = "Hello {{name}}!";
let mut args = HashMap::new();
args.insert("name", &user_input);
let result = render_template(&template, &args).unwrap();
// Should not contain raw HTML
if user_input.contains('<') {
assert!(!result.contains('<'));
}
}
}
Testing Async Code
Basic Async Tests
#[tokio::test]
async fn test_async_prompt_loading() {
let manager = PromptManager::new();
let result = manager.load_prompts_async().await;
assert!(result.is_ok());
let prompts = manager.list_prompts().await;
assert!(!prompts.is_empty());
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn test_concurrent_access() {
let manager = Arc::new(PromptManager::new());
let handle1 = {
let mgr = Arc::clone(&manager);
tokio::spawn(async move {
mgr.get_prompt("test1").await
})
};
let handle2 = {
let mgr = Arc::clone(&manager);
tokio::spawn(async move {
mgr.get_prompt("test2").await
})
};
let (result1, result2) = tokio::join!(handle1, handle2);
assert!(result1.is_ok());
assert!(result2.is_ok());
}
Testing Timeouts
#[tokio::test]
async fn test_operation_timeout() {
let manager = PromptManager::new();
let result = tokio::time::timeout(
Duration::from_secs(5),
manager.slow_operation()
).await;
assert!(result.is_ok(), "Operation should complete within timeout");
}
Test Fixtures
Using Test Data
Create reusable test data in `tests/fixtures/` and load it via helpers in `tests/common/`:
// tests/common/mod.rs
use std::path::PathBuf;
pub fn test_prompt_content() -> &'static str {
r#"---
name: test-prompt
title: Test Prompt
description: A prompt for testing
arguments:
- name: input
description: Test input
required: true
---
Process this input: {{input}}"#
}
pub fn fixtures_dir() -> PathBuf {
PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("tests")
.join("fixtures")
}
pub fn load_fixture(name: &str) -> String {
std::fs::read_to_string(fixtures_dir().join(name))
.expect("Failed to load fixture")
}
Test Builders
// tests/common/builders.rs
pub struct PromptBuilder {
name: String,
title: String,
content: String,
arguments: Vec<ArgumentSpec>,
}
impl PromptBuilder {
pub fn new(name: &str) -> Self {
Self {
name: name.to_string(),
title: format!("{} Title", name),
content: "Default content".to_string(),
arguments: vec![],
}
}
pub fn with_argument(mut self, name: &str, required: bool) -> Self {
self.arguments.push(ArgumentSpec {
name: name.to_string(),
required,
..Default::default()
});
self
}
pub fn build(self) -> String {
// Generate YAML front matter and content
format!(r#"---
name: {}
title: {}
arguments:
{}
---
{}"#, self.name, self.title,
self.arguments.iter()
.map(|a| format!(" - name: {}\n required: {}", a.name, a.required))
.collect::<Vec<_>>()
.join("\n"),
self.content)
}
}
// Usage in tests
#[test]
fn test_with_builder() {
let prompt_content = PromptBuilder::new("test")
.with_argument("input", true)
.with_argument("format", false)
.build();
let prompt = Prompt::parse(&prompt_content).unwrap();
assert_eq!(prompt.arguments.len(), 2);
}
Performance Testing
Benchmarks
Create benchmarks in `benches/`:
// benches/prompt_loading.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion, BenchmarkId};
use swissarmyhammer::PromptManager;
fn benchmark_prompt_loading(c: &mut Criterion) {
let mut group = c.benchmark_group("prompt_loading");
for size in [10, 100, 1000].iter() {
group.bench_with_input(
BenchmarkId::from_parameter(size),
size,
|b, &size| {
let temp_dir = create_test_prompts(size);
b.iter(|| {
let manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts()
});
},
);
}
group.finish();
}
fn benchmark_template_rendering(c: &mut Criterion) {
c.bench_function("render_simple_template", |b| {
let template = "Hello {{name}}, welcome to {{place}}!";
let mut args = HashMap::new();
args.insert("name", "Alice");
args.insert("place", "Wonderland");
b.iter(|| {
black_box(render_template(template, &args))
});
});
}
criterion_group!(benches, benchmark_prompt_loading, benchmark_template_rendering);
criterion_main!(benches);
Profiling Tests
#[test]
#[ignore] // Run with cargo test -- --ignored
fn profile_large_prompt_set() {
let temp_dir = create_test_prompts(10000);
let start = Instant::now();
let manager = PromptManager::new();
manager.add_directory(temp_dir.path());
manager.load_prompts().unwrap();
let duration = start.elapsed();
println!("Loaded 10000 prompts in {:?}", duration);
assert!(duration < Duration::from_secs(5), "Loading too slow");
}
Test Coverage
Generating Coverage Reports
# Install tarpaulin
cargo install cargo-tarpaulin
# Generate coverage report
cargo tarpaulin --out Html --output-dir coverage
# With specific features
cargo tarpaulin --features "experimental" --out Lcov
# Exclude test code from coverage
cargo tarpaulin --exclude-files "*/tests/*" --exclude-files "*/benches/*"
Coverage Configuration
`.tarpaulin.toml`:
[default]
exclude-files = ["*/tests/*", "*/benches/*", "*/examples/*"]
ignored = false
timeout = "600s"
features = "all"
[report]
out = ["Html", "Lcov"]
output-dir = "coverage"
Test Utilities
Custom Assertions
// tests/common/assertions.rs
pub trait PromptAssertions {
fn assert_valid_prompt(&self);
fn assert_has_argument(&self, name: &str);
fn assert_renders_with(&self, args: &HashMap<String, String>);
}
impl PromptAssertions for Prompt {
fn assert_valid_prompt(&self) {
assert!(!self.name.is_empty(), "Prompt name is empty");
assert!(!self.title.is_empty(), "Prompt title is empty");
assert!(is_valid_prompt_name(&self.name), "Invalid prompt name");
}
fn assert_has_argument(&self, name: &str) {
assert!(
self.arguments.iter().any(|a| a.name == name),
"Prompt missing expected argument: {}", name
);
}
fn assert_renders_with(&self, args: &HashMap<String, String>) {
let result = self.render(args);
assert!(result.is_ok(), "Failed to render: {:?}", result.err());
assert!(!result.unwrap().is_empty(), "Rendered output is empty");
}
}
Test Helpers
// tests/common/helpers.rs
use std::sync::Once;
static INIT: Once = Once::new();
pub fn init_test_logging() {
INIT.call_once(|| {
env_logger::builder()
.filter_level(log::LevelFilter::Debug)
.is_test(true)
.init();
});
}
pub fn with_test_env<F>(vars: Vec<(&str, &str)>, test: F)
where
F: FnOnce() + std::panic::UnwindSafe,
{
let _guards: Vec<_> = vars.into_iter()
.map(|(k, v)| {
env::set_var(k, v);
defer::defer(move || env::remove_var(k))
})
.collect();
test();
}
// Usage
#[test]
fn test_with_env_vars() {
with_test_env(vec![
("SWISSARMYHAMMER_DEBUG", "true"),
("SWISSARMYHAMMER_PORT", "9999"),
], || {
let config = Config::from_env();
assert!(config.debug);
assert_eq!(config.port, 9999);
});
}
Debugging Tests
Debug Output
#[test]
fn test_with_debug_output() {
init_test_logging();
log::debug!("Starting test");
let result = some_operation();
// Print debug info on failure
if result.is_err() {
eprintln!("Operation failed: {:?}", result);
eprintln!("Current state: {:?}", get_debug_state());
}
assert!(result.is_ok());
}
Test Isolation
#[test]
fn test_isolated_state() {
// Use a unique test ID to avoid conflicts
let test_id = uuid::Uuid::new_v4();
let test_dir = temp_dir().join(format!("test-{}", test_id));
// Ensure cleanup even on panic
let _guard = defer::defer(|| {
let _ = fs::remove_dir_all(&test_dir);
});
// Run test with isolated state
run_test_in_dir(&test_dir);
}
CI Testing
GitHub Actions Test Matrix
name: Test
on: [push, pull_request]
jobs:
test:
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
rust: [stable, beta, nightly]
features: ["", "all", "experimental"]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ matrix.rust }}
- name: Test
run: cargo test --features "${{ matrix.features }}"
- name: Test Examples
run: cargo test --examples
- name: Doc Tests
run: cargo test --doc
Best Practices
1. Test Organization
- Keep unit tests with the code
- Use integration tests for workflows
- Group related tests
- Share common utilities
2. Test Naming
#[test]
fn test_parse_valid_prompt() { } // Clear what's being tested
#[test]
fn test_render_with_missing_arg() { } // Clear expected outcome
#[test]
fn test_concurrent_access_safety() { } // Clear test scenario
3. Test Independence
- Each test should be independent
- Use temporary directories
- Clean up resources
- Don't rely on test order
4. Test Coverage
- Aim for >80% coverage
- Test edge cases
- Test error paths
- Test concurrent scenarios
5. Performance
- Keep tests fast (<100ms each)
- Use `#[ignore]` for slow tests
- Run slow tests in CI only
- Mock expensive operations
Next Steps
- Read Development Setup for environment setup
- See Contributing for contribution guidelines
- Check CI/CD for automated testing
- Review Benchmarking for performance testing
Release Process
This guide documents the release process for SwissArmyHammer, including versioning, testing, building, and publishing.
Overview
SwissArmyHammer follows a structured release process:
- Version Planning - Determine version number and scope
- Pre-release Testing - Comprehensive testing
- Release Preparation - Update version, changelog
- Building - Create release artifacts
- Publishing - Release to crates.io and GitHub
- Post-release - Announcements and documentation
Versioning
Semantic Versioning
We follow Semantic Versioning:
MAJOR.MINOR.PATCH
1.0.0
│ │ └── Patch: Bug fixes, no API changes
│ └──── Minor: New features, backward compatible
└────── Major: Breaking changes
Version Guidelines
Patch Release (0.0.X)
- Bug fixes
- Documentation improvements
- Performance improvements (no API change)
- Security patches
Minor Release (0.X.0)
- New features
- New commands
- New configuration options
- Deprecations (with warnings)
Major Release (X.0.0)
- Breaking API changes
- Removal of deprecated features
- Major architectural changes
- Incompatible configuration changes
Release Checklist
Pre-release Checklist
## Pre-release Checklist
- [ ] All tests passing on main branch
- [ ] No outstanding security issues
- [ ] Documentation updated
- [ ] CHANGELOG.md updated
- [ ] Version numbers updated
- [ ] Release branch created
- [ ] Release PR approved
Release Build Checklist
## Build Checklist
- [ ] Clean build on all platforms
- [ ] All features compile
- [ ] Binary size acceptable
- [ ] Performance benchmarks acceptable
- [ ] Security audit passing
Release Preparation
1. Create Release Branch
# Create release branch from main
git checkout main
git pull origin main
git checkout -b release/v1.2.3
# Or for release candidates
git checkout -b release/v1.2.3-rc1
2. Update Version Numbers
Update version in multiple files:
# Cargo.toml
[package]
name = "swissarmyhammer"
version = "1.2.3" # Update this
# Update lock file
cargo update -p swissarmyhammer
3. Update Changelog
Edit `CHANGELOG.md`:
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [1.2.3] - 2024-03-15
### Added
- New `validate` command for prompt validation
- Support for YAML anchors in prompts
- Performance monitoring dashboard
### Changed
- Improved error messages for template rendering
- Updated minimum Rust version to 1.70
### Fixed
- Fixed file watcher memory leak (#123)
- Corrected prompt loading on Windows (#124)
### Security
- Updated dependencies to patch CVE-2024-XXXXX
[1.2.3]: https://github.com/wballard/swissarmyhammer/compare/v1.2.2...v1.2.3
4. Update Documentation
# Update version in documentation
find doc -name "*.md" -exec sed -i 's/0\.1\.0/1.2.3/g' {} \;
# Rebuild documentation
cd doc
mdbook build
# Update README if needed
vim README.md
5. Run Pre-release Tests
# Full test suite
cargo test --all-features
# Verify other platforms still compile (cross-target tests build but cannot execute locally)
cargo check --target x86_64-pc-windows-gnu
cargo check --target x86_64-apple-darwin
# Integration tests
cargo test --test '*' -- --test-threads=1
# Benchmarks
cargo bench
# Security audit
cargo audit
# Check for unused dependencies
cargo machete
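These gates can be chained into a single fail-fast run. A hedged sketch of a generic runner (`run_checks` is illustrative, not a project script):

```shell
#!/bin/bash
# Illustrative fail-fast runner: execute each check in order, stop at the first failure.
run_checks() {
  local cmd
  for cmd in "$@"; do
    echo "Running: $cmd"
    if ! eval "$cmd"; then
      echo "FAILED: $cmd" >&2
      return 1
    fi
  done
  echo "All pre-release checks passed"
}

# The real gate would list the commands above, e.g.:
# run_checks "cargo test --all-features" "cargo audit" "cargo machete"
```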
Building Release Artifacts
Local Build Script
Create scripts/build-release.sh:
#!/bin/bash
set -e
VERSION=$1
if [ -z "$VERSION" ]; then
echo "Usage: $0 <version>"
exit 1
fi
echo "Building SwissArmyHammer v$VERSION"
# Clean previous builds
cargo clean
rm -rf target/release-artifacts
mkdir -p target/release-artifacts
# Build for multiple platforms
PLATFORMS=(
"x86_64-unknown-linux-gnu"
"x86_64-apple-darwin"
"aarch64-apple-darwin"
"x86_64-pc-windows-gnu"
)
for platform in "${PLATFORMS[@]}"; do
echo "Building for $platform..."
if [[ "$platform" == *"windows"* ]]; then
ext=".exe"
else
ext=""
fi
# Build
cross build --release --target "$platform"
# Package
cp "target/$platform/release/swissarmyhammer$ext" \
"target/release-artifacts/swissarmyhammer-$VERSION-$platform$ext"
# Create tarball/zip
if [[ "$platform" == *"windows"* ]]; then
cd target/release-artifacts
zip "swissarmyhammer-$VERSION-$platform.zip" \
"swissarmyhammer-$VERSION-$platform.exe"
rm "swissarmyhammer-$VERSION-$platform.exe"
cd ../..
else
cd target/release-artifacts
tar -czf "swissarmyhammer-$VERSION-$platform.tar.gz" \
"swissarmyhammer-$VERSION-$platform"
rm "swissarmyhammer-$VERSION-$platform"
cd ../..
fi
done
# Generate checksums
cd target/release-artifacts
shasum -a 256 * > checksums.sha256
cd ../..
echo "Release artifacts built in target/release-artifacts/"
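The checksum step at the end is what lets users verify their downloads. A standalone sketch of producing and verifying checksums.sha256 (this demo uses coreutils `sha256sum`; the script above uses the equivalent `shasum -a 256`, and the file name here is a stand-in for a real artifact):

```shell
#!/bin/bash
# Standalone demo in a throwaway directory.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'demo artifact contents' > swissarmyhammer-1.2.3-demo.tar.gz
sha256sum *.tar.gz > checksums.sha256     # produce, as the build script does
sha256sum --status -c checksums.sha256    # verify: exits 0 when files are intact
echo "checksums verified"
```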
GitHub Actions Release
.github/workflows/release.yml:
name: Release
on:
push:
tags:
- 'v*'
permissions:
contents: write
jobs:
create-release:
runs-on: ubuntu-latest
outputs:
upload_url: ${{ steps.create_release.outputs.upload_url }}
steps:
- uses: actions/checkout@v3
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: true
prerelease: false
body_path: RELEASE_NOTES.md
build-release:
needs: create-release
strategy:
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
artifact: swissarmyhammer
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: swissarmyhammer
- os: macos-latest
target: x86_64-apple-darwin
artifact: swissarmyhammer
- os: macos-latest
target: aarch64-apple-darwin
artifact: swissarmyhammer
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: swissarmyhammer.exe
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@stable
with:
targets: ${{ matrix.target }}
- name: Build
run: cargo build --release --target ${{ matrix.target }}
- name: Package (Unix)
if: matrix.os != 'windows-latest'
run: |
cd target/${{ matrix.target }}/release
tar -czf swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.tar.gz swissarmyhammer
mv *.tar.gz ../../../
- name: Package (Windows)
if: matrix.os == 'windows-latest'
run: |
cd target/${{ matrix.target }}/release
7z a swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.zip swissarmyhammer.exe
mv *.zip ../../../
- name: Upload Release Asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.create-release.outputs.upload_url }}
asset_path: ./swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.${{ matrix.os == 'windows-latest' && 'zip' || 'tar.gz' }}
asset_name: swissarmyhammer-${{ github.ref_name }}-${{ matrix.target }}.${{ matrix.os == 'windows-latest' && 'zip' || 'tar.gz' }}
asset_content_type: ${{ matrix.os == 'windows-latest' && 'application/zip' || 'application/gzip' }}
Publishing
1. Publish to crates.io
# Dry run first
cargo publish --dry-run
# Verify package contents
cargo package --list
# Log in first (one-time setup)
cargo login <token>
# Publish
cargo publish
2. Create GitHub Release
# Push release branch
git push origin release/v1.2.3
# Create and merge PR
gh pr create --title "Release v1.2.3" \
--body "Release version 1.2.3. See CHANGELOG.md for details."
# After PR merged, create tag
git checkout main
git pull origin main
git tag -a v1.2.3 -m "Release version 1.2.3"
git push origin v1.2.3
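The -a flag creates an annotated tag, which stores a message and tagger and is what git describe and most release tooling expect. A throwaway-repo sketch (identity is configured locally so no global git config is needed):

```shell
#!/bin/bash
# Demo in a temporary repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "release commit"
git tag -a v1.2.3 -m "Release version 1.2.3"
git describe --tags   # -> v1.2.3
```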
3. Update GitHub Release
After CI builds artifacts:
# Edit release notes
gh release edit v1.2.3 --notes-file RELEASE_NOTES.md
# Publish release (remove draft status)
gh release edit v1.2.3 --draft=false
Post-release
1. Update Documentation
# Update stable docs
git checkout gh-pages
cp -r doc/book/* .
git add .
git commit -m "Update documentation for v1.2.3"
git push origin gh-pages
2. Announcements
Create announcement template:
# SwissArmyHammer v1.2.3 Released!
We're excited to announce the release of SwissArmyHammer v1.2.3!
## Highlights
- New validate command for prompt validation
- Performance monitoring dashboard
- Fixed file watcher memory leak
- Security updates
## Installation
# Install with cargo
cargo install swissarmyhammer
# Or download binaries
https://github.com/wballard/swissarmyhammer/releases/tag/v1.2.3
What's Changed
Thank You
Thanks to all contributors who made this release possible!
Post to:
- GitHub Discussions
- Discord/Slack channels
- Twitter/Social media
- Dev.to/Medium article
3. Update Homebrew Formula
If maintaining a Homebrew formula:
class Swissarmyhammer < Formula
desc "MCP server for prompt management"
homepage "https://github.com/wballard/swissarmyhammer"
version "1.2.3"
if OS.mac? && Hardware::CPU.intel?
url "https://github.com/wballard/swissarmyhammer/releases/download/v1.2.3/swissarmyhammer-v1.2.3-x86_64-apple-darwin.tar.gz"
sha256 "HASH_HERE"
elsif OS.mac? && Hardware::CPU.arm?
url "https://github.com/wballard/swissarmyhammer/releases/download/v1.2.3/swissarmyhammer-v1.2.3-aarch64-apple-darwin.tar.gz"
sha256 "HASH_HERE"
elsif OS.linux?
url "https://github.com/wballard/swissarmyhammer/releases/download/v1.2.3/swissarmyhammer-v1.2.3-x86_64-unknown-linux-gnu.tar.gz"
sha256 "HASH_HERE"
end
def install
bin.install "swissarmyhammer"
end
end
4. Monitor Release
# Check crates.io
open https://crates.io/crates/swissarmyhammer
# Monitor GitHub issues
gh issue list --label "v1.2.3"
# Check download stats
gh api repos/wballard/swissarmyhammer/releases/tags/v1.2.3
Hotfix Process
For critical fixes:
# Create hotfix branch from tag
git checkout -b hotfix/v1.2.4 v1.2.3
# Make fixes
# ...
# Update version to 1.2.4
vim Cargo.toml
# Fast-track release
cargo test
cargo publish
git tag -a v1.2.4 -m "Hotfix: Critical bug in prompt loading"
git push origin v1.2.4
Release Automation
Release Script
scripts/prepare-release.sh:
#!/bin/bash
set -e
VERSION=$1
TYPE=${2:-patch} # patch, minor, major
if [ -z "$VERSION" ]; then
echo "Usage: $0 <version> [patch|minor|major]"
exit 1
fi
echo "Preparing release v$VERSION ($TYPE)"
# Update version
sed -i "s/^version = .*/version = \"$VERSION\"/" Cargo.toml
# Update lock file
cargo update -p swissarmyhammer
# Run tests
echo "Running tests..."
cargo test --all-features
# Update changelog
echo "Updating CHANGELOG.md..."
# Auto-generate from commits
git log --pretty=format:"- %s (%h)" v$(cargo pkgid | cut -d# -f2)..HEAD >> CHANGELOG_NEW.md
# Build documentation
echo "Building documentation..."
cd doc && mdbook build && cd ..
# Create release notes
echo "# Release v$VERSION" > RELEASE_NOTES.md
echo "" >> RELEASE_NOTES.md
cat CHANGELOG_NEW.md >> RELEASE_NOTES.md
echo "Release preparation complete!"
echo "Next steps:"
echo "1. Review and edit CHANGELOG.md and RELEASE_NOTES.md"
echo "2. Commit changes"
echo "3. Create PR for release/v$VERSION"
echo "4. After merge, tag and push"
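The cargo pkgid | cut -d# -f2 step in the script depends on the pkgid output format, which has varied across cargo versions (e.g. path+file:///…/swissarmyhammer#1.2.3 or …#swissarmyhammer@1.2.3). A more tolerant pure-shell extraction (a sketch; the sample pkgid strings are illustrative):

```shell
#!/bin/bash
# Strip everything up to the last '#' or '@', covering both pkgid shapes.
extract_version() {
  local pkgid=$1
  echo "${pkgid##*[#@]}"
}

extract_version "path+file:///home/user/swissarmyhammer#1.2.3"                  # -> 1.2.3
extract_version "path+file:///home/user/swissarmyhammer#swissarmyhammer@1.2.3"  # -> 1.2.3
```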
Rollback Procedure
If issues are found after release:
1. Yank from crates.io (if critical):
cargo yank --vers 1.2.3
2. Update GitHub Release:
gh release edit v1.2.3 --prerelease
3. Communicate:
- Post in announcements
- Create issue for tracking
- Prepare hotfix
4. Fix and Re-release:
# Create fix
git checkout -b fix/critical-issue
# ... make fixes ...
# New version
cargo release patch
Best Practices
1. Test Thoroughly
- Run full test suite
- Test on all platforms
- Manual smoke tests
2. Document Changes
- Update CHANGELOG.md
- Write clear release notes
- Update migration guides
3. Communicate Clearly
- Announce deprecations early
- Provide migration paths
- Respond to feedback quickly
4. Automate When Possible
- Use CI for builds
- Automate version updates
- Script repetitive tasks
Next Steps
- Review Contributing for development workflow
- See Testing for test requirements
- Check Development for build setup
- Read Changelog for version history
Changelog
All notable changes to SwissArmyHammer will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Unreleased
Added
- Comprehensive documentation with mdBook
- GitHub Pages deployment for documentation
- Enhanced error messages with context
- Validation for prompt arguments
- Support for YAML anchors in prompts
- Performance benchmarks
Changed
- Improved template rendering performance
- Better error handling in MCP server
- Enhanced file watching efficiency
Fixed
- Memory leak in file watcher
- Prompt loading on Windows paths
- Template escaping for special characters
0.2.0 - 2024-03-01
Added
- MCP (Model Context Protocol) server implementation
- File watching for automatic prompt reloading
- Doctor command for system health checks
- Liquid template engine integration
- Support for prompt arguments and validation
- Recursive directory scanning
- YAML front matter parsing
Changed
- Migrated from simple templates to Liquid engine
- Improved prompt discovery algorithm
- Enhanced CLI output formatting
- Better error messages and diagnostics
Fixed
- Cross-platform path handling
- Unicode support in prompts
- Memory usage optimization
Security
- Added input sanitization for templates
- Implemented secure file access controls
0.1.0 - 2024-01-15
Added
- Initial release
- Basic prompt management functionality
- CLI interface with subcommands
- List command to show available prompts
- Serve command for MCP integration
- Simple template substitution
- Configuration file support
- Basic documentation
Changed
- N/A (initial release)
Fixed
- N/A (initial release)
Deprecated
- N/A (initial release)
Removed
- N/A (initial release)
Security
- N/A (initial release)
Version History
Versioning Policy
SwissArmyHammer follows Semantic Versioning:
- MAJOR version for incompatible API changes
- MINOR version for backwards-compatible functionality additions
- PATCH version for backwards-compatible bug fixes
Pre-1.0 Versions
During the 0.x series:
- Minor version bumps may include breaking changes
- The API is considered unstable
- Features may be experimental
Migration Guides
0.1.x to 0.2.x
Breaking Changes:
1. Template Engine Change
- Old: Simple {variable} substitution
- New: Liquid templates with {{variable}}
- Migration: Update all prompts to use double braces
2. Configuration Format
- Old: JSON configuration
- New: TOML configuration
- Migration: Convert config.json to config.toml
3. Prompt Metadata
- Old: Optional metadata
- New: Required YAML front matter
- Migration: Add minimal front matter to all prompts
Example Migration:
Old prompt (0.1.x):
# Code Review
Review this {language} code:
{code}
New prompt (0.2.x):
---
name: code-review
title: Code Review
arguments:
- name: language
required: true
- name: code
required: true
---
# Code Review
Review this {{language}} code:
{{code}}
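The brace change can be bulk-applied with a one-liner, though the results should be reviewed by hand. A blunt sketch (not a project-provided migration tool; it assumes lowercase/underscore placeholder names):

```shell
#!/bin/bash
# Wrap single-brace placeholders in double braces.
migrated=$(echo 'Review this {language} code: {code}' \
  | sed -E 's/\{([a-z_]+)\}/{{\1}}/g')
echo "$migrated"   # -> Review this {{language}} code: {{code}}
```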
Release Schedule
- Patch releases: As needed for bug fixes
- Minor releases: Monthly with new features
- Major releases: When breaking changes are necessary
Support Policy
- Latest version: Full support
- Previous minor version: Security fixes only
- Older versions: No support
Contributing to Changelog
When contributing, please:
- Add entries under "Unreleased"
- Use the appropriate section
- Reference issue/PR numbers
- Keep descriptions concise
- Sort entries by importance
Example entry:
### Fixed
- Fix memory leak in file watcher (#123)
License
SwissArmyHammer is distributed under the MIT License, a permissive open-source license that allows for commercial use, modification, distribution, and private use.
MIT License
MIT License
Copyright (c) 2024 SwissArmyHammer Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
What This Means
You CAN:
✅ Commercial Use - Use SwissArmyHammer in commercial projects and products
✅ Modify - Make changes to the source code for your needs
✅ Distribute - Share the software with others
✅ Private Use - Use for private purposes without sharing modifications
✅ Sublicense - Include SwissArmyHammer in software with different licensing
You MUST:
📄 Include License - Include the copyright notice and license in copies
📄 Include Copyright - Keep the copyright notice intact
You CANNOT:
❌ Hold Liable - Hold contributors liable for damages
❌ Use Trademark - Use SwissArmyHammer name/logo without permission
Dependencies
SwissArmyHammer uses various open-source dependencies, each with their own licenses:
Core Dependencies
- tokio (MIT) - Async runtime
- serde (MIT/Apache-2.0) - Serialization framework
- clap (MIT/Apache-2.0) - Command line parsing
- liquid (MIT/Apache-2.0) - Template engine
- anyhow (MIT/Apache-2.0) - Error handling
Full Dependency List
To view all dependencies and their licenses:
cargo license
Or check the Cargo.lock file for a complete list.
Contributing
By contributing to SwissArmyHammer, you agree that your contributions will be licensed under the MIT License. See Contributing for more details.
Prompt Content
Your Prompts
Prompts you create and store in SwissArmyHammer remain your intellectual property. The MIT License applies only to the SwissArmyHammer software itself, not to the content you create with it.
Built-in Prompts
Built-in prompts distributed with SwissArmyHammer are covered by the same MIT License as the software.
Third-Party Content
Some documentation and examples may include references to third-party services or tools. These references are for illustration only and don't imply endorsement. Third-party tools and services are subject to their own licenses and terms.
Warranty Disclaimer
SwissArmyHammer is provided "as is" without warranty of any kind. The full disclaimer is included in the license text above. In particular:
- No warranty of merchantability
- No warranty of fitness for a particular purpose
- No warranty against defects
- Use at your own risk
Questions
If you have questions about the license:
- Read the MIT License FAQ
- Consult with a legal professional for specific situations
- Open an issue on GitHub for clarification about SwissArmyHammer's use of the license
License Changes
The project maintainers reserve the right to release future versions under a different license. However:
- Existing versions remain under their released license
- Contributors will be notified of proposed changes
- Significant changes require community discussion
Compliance
To comply with the MIT License when using SwissArmyHammer:
- In Source Distribution: Include the LICENSE file
- In Binary Distribution: Include license text in documentation
- In Modified Versions: Note your changes but keep original copyright
- In Products: Include attribution in your credits/about section
Example attribution:
This product includes software developed by the SwissArmyHammer project
(https://github.com/wballard/swissarmyhammer), licensed under the MIT License.