
Supercharge Your MCP Servers with Pre-Built Prompts: A Complete Guide

Stop letting users reinvent the wheel. Give them battle-tested prompts that deliver consistent, high-quality results every time.
If you're building MCP (Model Context Protocol) servers, you already know the power of giving Claude access to custom tools and resources. But there's a third capability that often gets overlooked: prompts.
Prompts in MCP servers let you define pre-built, high-quality instructions that clients can use instead of writing their own prompts from scratch. Think of them as carefully crafted templates that give better results than what users might come up with on their own.
Why Use Prompts?
Let's say you want Claude to reformat a document into markdown. A user could just type "convert report.pdf to markdown" and it would work fine. But they'd probably get much better results with a thoroughly tested prompt that includes specific instructions about formatting, structure, and output requirements.
The key insight: While users can accomplish these tasks on their own, they'll get more consistent and higher-quality results when using prompts that have been carefully developed and tested by the MCP server authors.
It's the difference between asking someone to "make dinner" versus handing them a detailed recipe from a professional chef. Both might work, but one produces reliably excellent results.
How Prompts Work
Prompts in MCP define a set of user and assistant messages that can be used by the client. These prompts should be high quality, well-tested, and relevant to the overall purpose of the MCP server.
The basic structure looks like this:
- Define prompts using the `@mcp.prompt()` decorator on the server
- Add a name and description for each prompt
- Return a list of messages that form the complete prompt
- Ensure quality: these prompts should be well-tested and relevant to your MCP server's purpose
Building a Format Command (Server Side)
Here's how to implement a document formatting prompt. First, you'll need to import `Field` from Pydantic and the base message types:

```python
from pydantic import Field
from mcp.server.fastmcp.prompts import base
```
Then define your prompt function:
```python
@mcp.prompt(
    name="format",
    description="Rewrites the contents of the document in Markdown format.",
)
def format_document(
    doc_id: str = Field(description="Id of the document to format"),
) -> list[base.Message]:
    prompt = f"""
    Your goal is to reformat a document to be written with markdown syntax.

    The id of the document you need to reformat is:
    {doc_id}

    Add in headers, bullet points, tables, etc as necessary.
    Feel free to add in extra formatting.

    Use the 'edit_document' tool to edit the document.
    After the document has been reformatted...
    """
    return [
        base.UserMessage(prompt),
    ]
```
What's happening here:
- The `@mcp.prompt()` decorator registers this function as an available prompt
- The `doc_id` parameter lets users specify which document to format
- The function returns a `UserMessage` containing the detailed instructions
- The prompt references the `edit_document` tool, creating a seamless workflow
Implementing Prompts in Your MCP Client
Now let's look at the client side. To use prompts, you need to implement two key methods in your MCP client.
Listing Available Prompts
The first step is implementing the list_prompts method in your MCP client. This method retrieves all available prompts from the server:
```python
async def list_prompts(self) -> list[types.Prompt]:
    result = await self.session().list_prompts()
    return result.prompts
```
This simple implementation calls the session's list_prompts method and returns the prompts array from the result. This gives your client a catalog of all available prompt templates.
Getting Individual Prompts
The get_prompt method retrieves a specific prompt with arguments interpolated into it. When you request a prompt, you provide arguments that get passed to the prompt function as keyword arguments:
```python
async def get_prompt(self, prompt_name, args: dict[str, str]):
    result = await self.session().get_prompt(prompt_name, args)
    return result.messages
```
The method returns the messages from the result, which form a conversation that can be fed directly into Claude.
How Prompt Arguments Work
When you define a prompt function on the server side, it can accept parameters. For example, a document formatting prompt might expect a doc_id parameter:
```python
def format_document(doc_id: str):
    # The doc_id gets interpolated into the prompt
    ...
```
When the client calls get_prompt, the arguments dictionary should contain the expected keys:
```python
# Client-side call
messages = await client.get_prompt("format", {"doc_id": "doc_12345"})
```
The MCP server will pass these as keyword arguments to the prompt function, allowing dynamic content to be inserted into the prompt template.
Testing Prompts in the CLI
Once implemented, you can test prompts through the command-line interface. When you type a forward slash, the available prompts appear as commands. Selecting one may ask you to choose from available options (like document IDs), and then the complete prompt is sent to Claude.
The workflow looks like this:
- User selects a prompt (like "format")
- System prompts for required arguments (like which document to format)
- The prompt gets sent to Claude with the interpolated values
- Claude can then use tools to fetch additional data and complete the task
This creates a seamless experience where users can invoke complex, multi-step workflows with just a few keystrokes.
Testing Your Prompts
You can test prompts using the MCP Inspector. Navigate to the Prompts section, select your prompt, and provide any required parameters. The inspector will show you the generated messages that would be sent to Claude.
This lets you verify that your prompt:
- Interpolates variables correctly
- Produces the expected message structure
- Works as intended before deploying to production
Pro tip: Test with edge cases — empty documents, very long documents, documents with special characters — to ensure your prompt handles them gracefully.
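Those edge cases can also be pinned down with plain unit tests. The sketch below inlines a simplified copy of the earlier prompt body so it stays self-contained; in a real project you would import and call `format_document` directly:

```python
def render_format_prompt(doc_id: str) -> str:
    # Simplified stand-in for the format_document prompt body above
    return (
        "Your goal is to reformat a document to be written with markdown syntax.\n"
        f"The id of the document you need to reformat is:\n{doc_id}\n"
    )

def test_special_characters_survive_interpolation():
    # Edge case: IDs with quotes and spaces must appear verbatim
    tricky = 'report "Q3 2024" (final).pdf'
    assert tricky in render_format_prompt(tricky)

def test_empty_id_still_yields_instructions():
    # Edge case: an empty ID should not break the template
    assert "reformat" in render_format_prompt("")
```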
Best Practices
When creating prompts for your MCP server:
✅ Make them relevant to your server's purpose — don't create prompts for generic tasks users can already do well on their own
✅ Test them thoroughly before deployment — what works in development might behave differently in production
✅ Use clear, specific instructions rather than vague requests — the more context and structure you provide, the better the results
✅ Design them to work well with your available tools — prompts should orchestrate your server's capabilities seamlessly
✅ Consider what arguments users will need to provide — make parameter names intuitive and descriptions helpful
✅ Write detailed descriptions so users understand what each prompt does and when to use it
✅ Version your prompts — as you improve them, consider maintaining backward compatibility or clearly documenting breaking changes
Key insight: Prompts bridge the gap between predefined functionality and dynamic user needs, giving Claude structured starting points for complex tasks while maintaining flexibility through parameterization.
Real-World Example: CRM Enrichment Prompt
Here's a more advanced example for a CRM MCP server:
```python
@mcp.prompt(
    name="enrich_lead",
    description="Enriches a lead record with research from LinkedIn, company website, and news sources.",
)
def enrich_lead(
    lead_id: str = Field(description="CRM lead ID to enrich"),
    focus_areas: str = Field(
        default="company size, recent funding, tech stack, pain points",
        description="Comma-separated areas to research",
    ),
) -> list[base.Message]:
    prompt = f"""
    You are enriching a B2B sales lead. Your goal is to gather actionable
    intelligence that will help the sales team personalize their outreach.

    **Lead ID:** {lead_id}
    **Research focus areas:** {focus_areas}

    **Instructions:**
    1. Use the 'get_lead' tool to fetch the current lead data
    2. Use 'web_search' to research the company's recent news, funding, and market position
    3. Use 'linkedin_get_company_profile' to gather employee count, industry, and key personnel
    4. Use 'apollo_enrich_company' to get tech stack and firmographic data
    5. Synthesize your findings into a structured summary
    6. Use 'update_lead' to save the enriched data back to the CRM

    **Output format:**
    - Company overview (2-3 sentences)
    - Key decision makers and their roles
    - Recent company developments (funding, product launches, hiring)
    - Potential pain points based on industry and tech stack
    - Recommended talking points for outreach

    Begin by fetching the lead data.
    """
    return [
        base.UserMessage(prompt),
    ]
```
This prompt:
- Orchestrates multiple tools in a logical sequence
- Provides clear output requirements
- Gives Claude the context it needs to do high-quality research
- Saves the user from having to remember all the steps
Client usage:
```python
# Enrich a specific lead with custom focus areas
messages = await client.get_prompt(
    "enrich_lead",
    {
        "lead_id": "lead_789",
        "focus_areas": "recent acquisitions, executive changes, expansion plans",
    },
)
```
When to Use Prompts vs. Tools
Use prompts when:
- You want to guide Claude through a multi-step workflow
- The task requires specific formatting or structure
- You've developed domain expertise that users would struggle to replicate
- You want to ensure consistent quality across all users
Use tools when:
- You need to execute a specific action (API call, database query, file operation)
- The task is deterministic and doesn't require LLM reasoning
- You want to give Claude atomic capabilities it can combine flexibly
The magic happens when you combine them: Prompts that reference your server's tools create powerful, guided workflows that are both flexible and reliable.
Conclusion
Prompts are the secret weapon of well-designed MCP servers. They let you package your expertise into reusable templates that deliver consistent, high-quality results.
Remember: prompts are meant to provide value that users couldn't easily get on their own — they should represent your expertise in the domain your MCP server covers.
Start simple with one or two core prompts, test them thoroughly, and iterate based on user feedback. Over time, you'll build a library of battle-tested prompts that make your MCP server indispensable.
Quick Reference: Prompt Implementation Checklist
Server Side:
- Import `base` from `mcp.server.fastmcp.prompts`
- Define the prompt function with the `@mcp.prompt()` decorator
- Add a clear name and description
- Define parameters with `Field` descriptions
- Return a list of `base.Message` objects
- Test with the MCP Inspector
Client Side:
- Implement the `list_prompts()` method
- Implement the `get_prompt(prompt_name, args)` method
- Handle argument interpolation
- Test CLI integration
- Verify messages are properly formatted for Claude
Ready to add prompts to your MCP server? Start with the task your users do most often, write a detailed prompt for it, and test it in the MCP Inspector. You'll be amazed at the difference a well-crafted prompt makes.
Have questions about building MCP servers or want to share your prompt templates? Drop a comment below or reach out on the Anablock community forum.