Claude Prompts MCP Server
The Universal Model Context Protocol Server for Any MCP Client
Supercharge your AI workflows with battle-tested prompt engineering, intelligent orchestration, and lightning-fast hot-reload capabilities. Works seamlessly with Claude Desktop, Cursor, Windsurf, and any MCP-compatible client.
Quick Start • Features • Docs • Advanced
What Makes This Special? (v1.3.0 - "Consolidated Architecture with Systematic Framework Application")
- Three-Tier Execution Model – Routes between prompts (lightning-fast), templates (framework-enhanced), and chains (LLM-driven) based on file structure
- Structural Analysis Engine – File-structure analysis detects the execution type (with optional, work-in-progress LLM-powered semantic enhancement)
- Three-Tier Performance – From instant variable substitution to comprehensive methodology-guided processing
- Unified Creation Tools – Create prompts or templates with type-specific optimization
- Intelligent Quality Gates – Framework-aware validation with conditional injection based on execution tier
- Configurable Analysis – Structural analysis with optional semantic enhancement and manual methodology selection
- Intelligent Hot-Reload System – Update prompts instantly without restarts
- Advanced Template Engine – Nunjucks-powered with conditionals, loops, and dynamic data
- Multi-Phase Orchestration – Robust startup sequence with comprehensive health monitoring
- Universal MCP Compatibility – Works flawlessly with Claude Desktop, Cursor, Windsurf, and any MCP client
Transform your AI assistant experience with a three-tier execution architecture that routes between lightning-fast prompts, framework-enhanced templates, and LLM-driven chains based on file structure analysis across any MCP-compatible platform.
Revolutionary Interactive Prompt Management
The Future is Here: Manage Your AI's Capabilities FROM WITHIN the AI Conversation
This isn't just another prompt server – it's a living, breathing prompt ecosystem that evolves through natural conversation with your AI assistant. Imagine being able to:
# Universal prompt execution with intelligent type detection
prompt_engine >>code_formatter language="Python" style="PEP8"
→ System detects execution tier, applies appropriate processing automatically

# Create and manage prompts with intelligent analysis
prompt_manager create name="code_reviewer" type="template" \
  content="Analyze {{code}} for security, performance, and maintainability"
→ Creates framework-enhanced template with CAGEERF methodology integration

# Analyze existing prompts for execution optimization
prompt_manager analyze_type prompt_id="my_prompt"
→ Shows: "Type: template, Framework: CAGEERF, Confidence: 85%, Gates: enabled"

# System control and framework management
system_control switch_framework framework="ReACT" reason="Problem-solving focus"
→ Switches active methodology with performance monitoring

# Execute with full three-tier intelligence
prompt_engine >>analysis_chain input="complex research data" llm_driven_execution=true
→ LLM-driven chain execution with step-by-step coordination (requires semantic LLM integration)
Why This Architecture Matters:
- Structural Intelligence: File structure analysis provides reliable execution routing with minimal configuration
- Dynamic Capability Building: Build and extend your AI assistant's capabilities through conversational prompt management
- Reduced Friction: Minimal configuration required – execution type is detected from file structure
- Systematic Workflow: Create → structure-based routing → framework application in a reliable flow
- Intelligent Command Routing: Built-in command detection with multi-strategy parsing and automatic tool routing
- Sophisticated Methodology System: Four proven thinking frameworks (CAGEERF, ReACT, 5W1H, SCAMPER) with manual selection and conditional application
This is what well-architected AI infrastructure looks like – where systematic analysis and proven methodologies enhance your AI interactions through structured approaches rather than magic.
Advanced Framework System
Revolutionary Methodology Integration
The server features a sophisticated framework system that brings structured thinking methodologies to your AI interactions:
Four Intelligent Methodologies
- CAGEERF: Comprehensive structured approach (Context, Analysis, Goals, Execution, Evaluation, Refinement, Framework)
- ReACT: Reasoning and Acting pattern for systematic problem-solving
- 5W1H: Who, What, When, Where, Why, How systematic analysis
- SCAMPER: Creative problem-solving (Substitute, Combine, Adapt, Modify, Put to other uses, Eliminate, Reverse)
Intelligent Framework Features
- Manual Selection: Choose the optimal methodology manually based on your needs, with sophisticated conditional application
- Runtime Switching: Change the active framework with performance monitoring and seamless transitions
- Conditional Injection: Framework enhancement applied only when beneficial (bypassed for simple prompts)
- Switching Performance: Monitor framework switching mechanics and performance
# Switch methodology for different thinking approaches
system_control switch_framework framework="ReACT" reason="Problem-solving focus"
→ Switches to ReACT methodology with performance monitoring

# Monitor framework performance and usage
system_control analytics show_details=true
→ View framework switching history and performance metrics

# Get current framework status
system_control status
→ Shows active framework, available methodologies, and system health

The Result: Your AI conversations become more structured, thoughtful, and effective through proven thinking methodologies applied systematically based on your chosen framework.
Analysis System Capabilities
What the System Actually Does:
- Structural Analysis: Detects the execution type by examining template variables ({{variable}}), chain steps, and file structure
- Framework Application: Applies the manually selected framework methodology (CAGEERF, ReACT, 5W1H, SCAMPER) based on execution tier
- Routing Logic: Routes to the appropriate execution tier (prompt/template/chain) based on structural characteristics (see the sketch below)
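To make the routing heuristic concrete, here is a minimal TypeScript sketch. It is not the server's actual code; the field names (userMessageTemplate, chainSteps) are assumptions based on the examples shown elsewhere in this README, and only the decision logic described above is modeled.

```typescript
// Minimal sketch of the structural routing heuristic described above.
// Field names are assumptions, not the server's real API.

type ExecutionTier = "prompt" | "template" | "chain";

interface PromptRecord {
  id: string;
  userMessageTemplate: string; // markdown body of the prompt file (assumed field name)
  chainSteps?: unknown[];      // present only for chain definitions
}

const TEMPLATE_VARIABLE = /\{\{\s*[\w.]+\s*\}\}/; // matches {{variable}} placeholders

function detectExecutionTier(prompt: PromptRecord): ExecutionTier {
  // Chains are identified by the presence of chainSteps in the definition.
  if (prompt.chainSteps && prompt.chainSteps.length > 0) return "chain";
  // Templates are identified by Nunjucks-style {{variable}} placeholders.
  if (TEMPLATE_VARIABLE.test(prompt.userMessageTemplate)) return "template";
  // Everything else runs as a plain, lightning-fast prompt.
  return "prompt";
}
```

In this sketch, plain prompts skip framework injection entirely, which matches the conditional-injection behavior described above.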
Optional Semantic Enhancement:
- LLM Integration: When enabled, provides true semantic understanding of prompt content
- Advanced Analysis: Intelligent methodology recommendations and complexity assessment
- Default Mode: Structural analysis only – honest about its limitations without LLM access
Manual Framework Control:
# Framework selection is manual, not automatic
system_control switch_framework framework="ReACT" reason="Problem-solving focus"
Features & Reliability
[Feature overview table: Developer Experience · Enterprise Architecture]
Consolidated MCP Tools Suite (87.5% Reduction: 24+ → 3 Tools)
[Feature overview table: Intelligent Features]
One-Command Installation
Get your AI command center running in under a minute:
# Clone → Install → Launch → Profit!
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build && npm start
Universal MCP Client Integration
Claude Desktop
Drop this into your claude_desktop_config.json:
{
"mcpServers": {
"claude-prompts-mcp": {
"command": "node",
"args": ["E:\\path\\to\\claude-prompts-mcp\\server\\dist\\index.js"],
"env": {
"MCP_PROMPTS_CONFIG_PATH": "E:\\path\\to\\claude-prompts-mcp\\server\\prompts\\promptsConfig.json"
}
}
}
}
Cursor, Windsurf & Other MCP Clients
Configure your MCP client to connect via the STDIO transport:
- Command: node
- Args: ["path/to/claude-prompts-mcp/server/dist/index.js"]
- Environment (optional): MCP_PROMPTS_CONFIG_PATH=path/to/prompts/promptsConfig.json
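Many MCP clients (Cursor and Windsurf among them) accept an mcpServers-style JSON block similar to the Claude Desktop example above. The snippet below is a plausible starting point rather than client-specific documentation – check your client's MCP settings for the exact file location and key names, and substitute your real install path:

```json
{
  "mcpServers": {
    "claude-prompts-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/claude-prompts-mcp/server/dist/index.js"],
      "env": {
        "MCP_PROMPTS_CONFIG_PATH": "/absolute/path/to/claude-prompts-mcp/server/prompts/promptsConfig.json"
      }
    }
  }
}
```

If your client cannot locate the server, the absolute-path advice in the tip below usually resolves it.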
Claude Code CLI Installation
For Claude Code CLI users, use the one-command installation:
claude mcp add-json claude-prompts-mcp '{"type":"stdio","command":"node","args":["path/to/claude-prompts-mcp/server/dist/index.js"],"env":{}}'
Pro Tip: Environment variables are optional – the server auto-detects paths in 99% of cases. Use absolute paths for guaranteed compatibility across all MCP clients!
Start Building Immediately (v1.3.0 Consolidated Architecture)
Your AI command arsenal is ready with enhanced reliability:
# Discover your intelligent superpowers
prompt_manager list filter="category:analysis"
→ Intelligent filtering shows relevant prompts with usage examples

# Structural execution routing – the system detects the execution type from file structure
prompt_engine >>friendly_greeting name="Developer"
→ Detected as a template (has {{variables}}), returns a framework-enhanced greeting

prompt_engine >>content_analysis input="my research data"
→ Detected as a template (structural analysis), applies framework injection, executes with quality gates

prompt_engine >>analysis_chain input="my content" llm_driven_execution=true
→ Detected as a chain (has chainSteps), provides LLM-driven step-by-step execution (requires semantic LLM integration)

# Monitor intelligent detection performance
system_control analytics include_history=true
→ See how accurately the system detects prompt types and applies gates

# Create prompts that just work (zero configuration)
"Create a prompt called 'bug_analyzer' that finds and explains code issues"
→ Prompt created via conversation; the system detects the execution type from structure and applies the active framework

# Refine prompts through conversation (intelligence improves)
"Make the bug_analyzer prompt also suggest performance improvements"
→ Prompt updated; the system re-analyzes and updates the detection profile automatically

# Build LLM-driven chain workflows
"Create a prompt chain that reviews code, validates output, tests it, then documents it"
→ Chain created; each step is auto-analyzed and appropriate gates are assigned automatically

# Manual override when needed (but rarely necessary)
prompt_engine >>content_analysis input="sensitive data" step_confirmation=true gate_validation=true
→ Force step confirmation for sensitive analysis

The Architecture: Your prompt library becomes a structured extension of your workflow, organized and enhanced through systematic methodology application.
Why Developers Choose This Server
Lightning-Fast Hot-Reload – Edit prompts, see changes instantly
Our sophisticated orchestration engine monitors your files and reloads everything seamlessly (a simplified sketch of the pattern follows below):
# Edit any prompt file → Server detects → Reloads automatically → Zero downtime
- Instant Updates: Change templates, arguments, descriptions in real-time
- Zero Restart Required: Advanced hot-reload system keeps everything running
- Smart Dependency Tracking: Only reloads what actually changed
- Graceful Error Recovery: Invalid changes don't crash the server
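The TypeScript sketch below illustrates the basic hot-reload loop only: watch the prompts directory, reload just the file that changed, and treat bad edits as recoverable. It is not the server's actual implementation, and the prompts directory path and registry shape are illustrative assumptions.

```typescript
import { watch } from "node:fs";
import { readFile } from "node:fs/promises";

// In-memory registry of loaded prompt bodies, keyed by file name (illustrative only).
const promptRegistry = new Map<string, string>();

function watchPrompts(promptsDir: string): void {
  watch(promptsDir, { recursive: true }, async (_event, fileName) => {
    if (!fileName || !fileName.endsWith(".md")) return; // only react to prompt files
    try {
      const body = await readFile(`${promptsDir}/${fileName}`, "utf8");
      promptRegistry.set(fileName, body); // reload just the changed prompt
      console.log(`Reloaded prompt: ${fileName}`);
    } catch (err) {
      // Graceful error recovery: an invalid or half-written file is logged, not fatal.
      console.error(`Skipping ${fileName}:`, err);
    }
  });
}

watchPrompts("./prompts");
```

The real server layers dependency tracking and validation on top of this pattern, which is how it reloads only what actually changed.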
Next-Gen Template Engine – Nunjucks-powered dynamic prompts
Go beyond simple text replacement with a full template engine:
Analyze {{content}} for {% if focus_area %}{{focus_area}}{% else %}general{% endif %} insights.
{% for requirement in requirements %}
- Consider: {{requirement}}
{% endfor %}
{% if previous_context %}
Build upon: {{previous_context}}
{% endif %}
- Conditional Logic: Smart prompts that adapt based on input
- Loops & Iteration: Handle arrays and complex data structures
- Template Inheritance: Reuse and extend prompt patterns
- Real-Time Processing: Templates render with live data injection
Enterprise-Grade Orchestration – Multi-phase startup with health monitoring
Built like production software with comprehensive architecture (a minimal sketch of the phase sequence follows the list below):
Phase 1: Foundation → Config, logging, core services
Phase 2: Data Loading → Prompts, categories, validation
Phase 3: Module Init → Tools, executors, managers
Phase 4: Server Launch → Transport, API, diagnostics
- Dependency Management: Modules start in correct order with validation
- Health Monitoring: Real-time status of all components
- Performance Metrics: Memory usage, uptime, connection tracking
- Diagnostic Tools: Built-in troubleshooting and debugging
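The following TypeScript sketch shows the multi-phase startup idea under stated assumptions: the phase names come from the list above, but the function and field names are hypothetical, not the server's real API.

```typescript
// Illustrative startup orchestrator: run phases in order, fail fast, and expose
// a health snapshot. Hypothetical names; not the server's actual implementation.

interface StartupPhase {
  name: string;
  run: () => Promise<void>;
}

const phases: StartupPhase[] = [
  { name: "Foundation",    run: async () => { /* load config, set up logging, core services */ } },
  { name: "Data Loading",  run: async () => { /* load prompts and categories, validate them  */ } },
  { name: "Module Init",   run: async () => { /* register tools, executors, managers         */ } },
  { name: "Server Launch", run: async () => { /* start transport, API endpoints, diagnostics */ } },
];

const health: Record<string, boolean> = {};

async function startServer(): Promise<void> {
  for (const phase of phases) {
    await phase.run();         // phases depend on each other, so they run strictly in order
    health[phase.name] = true; // each completed phase is reflected in the health snapshot
  }
}

startServer().catch((err) => {
  console.error("Startup failed:", err, "health so far:", health);
  process.exit(1);
});
```

Running the phases strictly in order is what backs the dependency guarantees listed above.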
Intelligent Prompt Chains – Multi-step AI workflows
Create sophisticated workflows where each step builds on the previous:
{
"id": "content_analysis_chain",
"name": "Content Analysis Chain",
"isChain": true,
"executionMode": "chain",
"chainSteps": [
{
"stepName": "Extract Key Points",
"promptId": "extract_key_points",
"inputMapping": { "content": "original_content" },
"outputMapping": { "key_points": "extracted_points" },
"executionType": "template"
},
{
"stepName": "Analyze Sentiment",
"promptId": "sentiment_analysis",
"inputMapping": { "text": "extracted_points" },
"outputMapping": { "sentiment": "analysis_result" },
"executionType": "template"
}
]
}
- Visual Step Planning: See your workflow before execution
- Input/Output Mapping: Data flows between steps through explicit mappings (see the sketch after this list)
- Error Recovery: Failed steps don't crash the entire chain
- Flexible Execution: Run chains or individual steps as needed
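To show how inputMapping and outputMapping move data between steps, here is a small TypeScript sketch modeled on the chain JSON above. The runPrompt function is a placeholder for the server's actual prompt execution, and the real engine (quality gates, LLM-driven coordination, error recovery) is considerably richer than this.

```typescript
// Illustrative chain runner based on the chainSteps structure shown above.

interface ChainStep {
  stepName: string;
  promptId: string;
  inputMapping: Record<string, string>;  // step input key -> shared context key
  outputMapping: Record<string, string>; // step output key -> shared context key
}

type Context = Record<string, unknown>;

async function runChain(
  steps: ChainStep[],
  initialContext: Context,
  runPrompt: (promptId: string, args: Context) => Promise<Context>,
): Promise<Context> {
  const context: Context = { ...initialContext };
  for (const step of steps) {
    // Build the step's arguments by pulling mapped values out of the shared context.
    const args: Context = {};
    for (const [inputKey, contextKey] of Object.entries(step.inputMapping)) {
      args[inputKey] = context[contextKey];
    }
    const result = await runPrompt(step.promptId, args);
    // Write the step's outputs back into the context under the mapped names.
    for (const [outputKey, contextKey] of Object.entries(step.outputMapping)) {
      context[contextKey] = result[outputKey];
    }
  }
  return context;
}
```

The shared-context pattern is what lets a later step such as sentiment_analysis read values produced earlier (extracted_points) under the names it expects.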
System Architecture
graph TB
A[Claude Desktop] -->|MCP Protocol| B[Transport Layer]
B --> C[Orchestration Engine]
C --> D[Prompt Manager]
C --> E[MCP Tools Manager]
C --> F[Config Manager]
D --> G[Template Engine]
E --> H[Management Tools]
F --> I[Hot Reload System]
style C fill:#ff6b35
style D fill:#00ff88
style E fill:#0066cc
MCP Client Compatibility
This server implements the Model Context Protocol (MCP) standard and works with any compatible client:
[Compatibility overview table: Tested & Verified clients · Transport Support · Integration Features]
Developer Note: As MCP adoption grows, this server will work with any new MCP-compatible AI assistant or development environment without modification.
Advanced Configuration
Server Powerhouse (config.json)
Fine-tune your server's behavior:
{
"server": {
"name": "Claude Custom Prompts MCP Server",
"version": "1.0.0",
"port": 9090
},
"prompts": {
"file": "promptsConfig.json",
"registrationMode": "name"
},
"transports": {
"default": "stdio",
"sse": { "enabled": false },
"stdio": { "enabled": true }
}
}
Prompt Organization (promptsConfig.json)
Structure your AI command library:
{
"categories": [
{
"id": "development",
"name": "๐ง Development",
"description": "Code review, debugging, and development workflows"
},
{
"id": "analysis",
"name": "๐ Analysis",
"description": "Content analysis and research prompts"
},
{
"id": "creative",
"name": "๐จ Creative",
"description": "Content creation and creative writing"
}
],
"imports": [
"prompts/development/prompts.json",
"prompts/analysis/prompts.json",
"prompts/creative/prompts.json"
]
}
Advanced Features
Multi-Step Prompt Chains – Build sophisticated AI workflows
Create complex workflows that chain multiple prompts together:
# Research Analysis Chain
## User Message Template
Research {{topic}} and provide {{analysis_type}} analysis.
## Chain Configuration
Steps: research → extract → analyze → summarize
Input Mapping: {topic} → {content} → {key_points} → {insights}
Output Format: Structured report with executive summary
Capabilities:
- Sequential Processing: Each step uses output from previous step
- Parallel Execution: Run multiple analysis streams simultaneously
- Error Recovery: Graceful handling of failed steps
- Custom Logic: Conditional branching based on intermediate results
Advanced Template Features – Dynamic, intelligent prompts
Leverage the full power of Nunjucks templating:
# {{ title | title }} Analysis
## Context
{% if previous_analysis %}
Building upon previous analysis: {{ previous_analysis | summary }}
{% endif %}
## Requirements
{% for req in requirements %}
{{loop.index}}. **{{req.priority | upper}}**: {{req.description}}
{% if req.examples %}
Examples: {% for ex in req.examples %}{{ex}}{% if not loop.last %}, {% endif %}{% endfor %}
{% endif %}
{% endfor %}
## Focus Areas
{% set focus_areas = focus.split(',') %}
{% for area in focus_areas %}
- {{ area | trim | title }}
{% endfor %}
Template Features:
- Filters & Functions: Transform data on-the-fly
- Conditional Logic: Smart branching based on input
- Loops & Iteration: Handle complex data structures
- Template Inheritance: Build reusable prompt components (see the sketch below)
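Nunjucks itself supports template inheritance through {% extends %} and {% block %}. The example below is a generic Nunjucks sketch of a reusable base prompt plus a specialized child; the file names are hypothetical, and whether the server resolves inheritance across separate prompt files should be verified against the Prompt Format Guide.

```nunjucks
{# base_analysis.njk – shared skeleton for analysis prompts (hypothetical file name) #}
# {{ title }} Analysis

## Instructions
{% block instructions %}
Provide a balanced, well-structured analysis of {{ content }}.
{% endblock %}

## Output Format
{% block output_format %}
Summary first, then detailed findings as a bulleted list.
{% endblock %}
```

```nunjucks
{# security_review.njk – child template that only overrides the instructions block #}
{% extends "base_analysis.njk" %}

{% block instructions %}
Review {{ content }} strictly for security issues: injection risks, unsafe deserialization, and secrets handling.
{% endblock %}
```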
Real-Time Management Tools – Hot management without downtime
Manage your prompts dynamically while the server runs:
# Update prompts with intelligent re-analysis
prompt_manager update id="analysis_prompt" content="new template"
→ System re-analyzes execution type and framework requirements

# Modify specific sections with validation
prompt_manager modify id="research" section="examples" content="new examples"
→ Section updated with automatic template validation

# Hot-reload with comprehensive validation
system_control reload reason="updated templates"
→ Full system reload with health monitoring
Management Capabilities:
- Live Updates: Change prompts without server restart
- Section Editing: Modify specific parts of prompts
- Bulk Operations: Update multiple prompts at once
- Rollback Support: Undo changes when things go wrong
Production Monitoring – Enterprise-grade observability
Built-in monitoring and diagnostics for production environments:
// Health Check Response
{
healthy: true,
modules: {
foundation: true,
dataLoaded: true,
modulesInitialized: true,
serverRunning: true
},
performance: {
uptime: 86400,
memoryUsage: { rss: 45.2, heapUsed: 23.1 },
promptsLoaded: 127,
categoriesLoaded: 8
}
}
Monitoring Features:
- Real-Time Health Checks: All modules continuously monitored
- Performance Metrics: Memory, uptime, connection tracking
- Diagnostic Tools: Comprehensive troubleshooting information
- Error Tracking: Graceful error handling with detailed logging
Documentation Hub
| Guide | Description |
| --- | --- |
| Installation Guide | Complete setup walkthrough with troubleshooting |
| Troubleshooting Guide | Common issues, diagnostic tools, and solutions |
| Architecture Overview | A deep dive into the orchestration engine, modules, and data flow |
| Prompt Format Guide | Master prompt creation with examples |
| Chain Execution Guide | Build complex multi-step workflows |
| Prompt Management | Dynamic management and hot-reload features |
| MCP Tools Reference | Complete MCP tools documentation |
| Roadmap & TODO | Planned features and development roadmap |
| Contributing | Join our development community |
Contributing
We're building the future of AI prompt engineering! Join our community:
- Found a bug? Open an issue
- Have an idea? Start a discussion
- Want to contribute? Check our Contributing Guide
- Need help? Visit our Documentation
License
Released under the MIT License – see the LICENSE file for details.
Star this repo if it's transforming your AI workflow!
Report Bug • Request Feature • View Docs
Built with love for the AI development community