Tools: Giving the LLM Superpowers
Tools are executable functions that the LLM can discover and invoke to perform actions. Registering one with the Python SDK is as simple as decorating a typed function:

```python
# In your server.py
from typing import Literal

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("CalculatorServer", version="1.0.0")

@mcp.tool()
def calculate(
    x: float,
    y: float,
    operation: Literal["add", "subtract", "multiply", "divide"],
) -> float:
    """Performs a basic arithmetic operation on two numbers."""
    if operation == "add":
        return x + y
    elif operation == "subtract":
        return x - y
    elif operation == "multiply":
        return x * y
    elif operation == "divide":
        if y == 0:
            # This error is caught by the framework and sent back as a structured error response.
            raise ValueError("Cannot divide by zero.")
        return x / y
```
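Once the tool is registered, the server needs an entry point; with FastMCP that is a single call (stdio is the default transport):

```python
# In your server.py
if __name__ == "__main__":
    mcp.run()
```

FastMCP derives the tool's name, description, and parameter schema from the function signature and docstring, so the `Literal` type above is advertised to clients as an enum of the four allowed operations.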
Exposing and Managing Resources: Providing Context
Resources are read-only data sources that an LLM can access to gain context for its tasks. They are analogous to `GET` endpoints in a REST API and should be designed to be free of side effects.
Static Resource: Exposing a fixed piece of data, like a project's `README.md` file.
```python
# In your server.py
@mcp.resource("docs://readme", title="Project README")
def get_readme() -> str:
    """Returns the content of the project's README file."""
    with open("README.md", "r") as f:
        return f.read()
```
Dynamic (Templated) Resource: Exposing data that changes based on parameters in the URI. This allows the client to request specific pieces of information.
```python
# In your server.py
# This assumes a dictionary `user_profiles` exists.
user_profiles = {
    "123": {"name": "Alice", "role": "Engineer"},
    "456": {"name": "Bob", "role": "Designer"},
}

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> dict:
    """Returns the profile for a given user ID."""
    return user_profiles.get(user_id, {})
```
In this example, a client can request `users://123/profile` to get Alice's data.
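For concreteness, here is a minimal client-side sketch, assuming the server above is saved as `server.py` and launched over stdio:

```python
# client.py -- minimal sketch for reading the templated resource
import asyncio

from pydantic import AnyUrl

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.read_resource(AnyUrl("users://123/profile"))
            print(result.contents)  # Alice's profile data

asyncio.run(main())
```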
Integration Pattern: Managing Persistent Connections
A common and critical real-world use case is providing access to a database. Managing the database connection lifecycle (opening it on startup and closing it gracefully on shutdown) is essential for a robust server. The Python SDK provides a `lifespan` context manager for this exact purpose.

This example demonstrates how to connect to a SQLite database and expose its data through a tool.
```python
# In your server.py
import sqlite3
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass

from mcp.server.fastmcp import FastMCP

# Define a context object to hold the database connection
@dataclass
class AppContext:
    db_conn: sqlite3.Connection

# Define the lifespan context manager
@asynccontextmanager
async def db_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manages the database connection lifecycle."""
    print("Connecting to database...")
    conn = sqlite3.connect("my_database.db")
    try:
        # Yield the connection to be used by handlers
        yield AppContext(db_conn=conn)
    finally:
        # Ensure the connection is closed on shutdown
        print("Closing database connection...")
        conn.close()

# Initialize the server with the lifespan manager
mcp = FastMCP("DatabaseServer", version="1.0.0", lifespan=db_lifespan)

# Define a tool to query the database
@mcp.tool()
def query_products(max_price: float) -> list[dict]:
    """Queries the products table for items at or below a maximum price."""
    # Get the request context, which carries the lifespan context
    ctx = mcp.get_context()
    # Access the database connection from the lifespan context
    db_conn = ctx.request_context.lifespan_context.db_conn
    cursor = db_conn.cursor()
    cursor.execute("SELECT id, name, price FROM products WHERE price <= ?", (max_price,))
    return [
        {"id": row[0], "name": row[1], "price": row[2]}
        for row in cursor.fetchall()
    ]
```
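Note that the tool assumes `my_database.db` already contains a `products` table. For local testing, a one-off setup script along these lines (illustrative, not part of the server) will create and seed it:

```python
# setup_db.py -- create and seed the table the example expects
import sqlite3

conn = sqlite3.connect("my_database.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT, price REAL)"
)
conn.executemany(
    "INSERT INTO products (name, price) VALUES (?, ?)",
    [("Widget", 9.99), ("Gadget", 24.50)],
)
conn.commit()
conn.close()
```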
Creating Reusable Prompts: Standardizing Workflows
Prompts are user-controlled, parameterized templates that guide the LLM to perform a task in a specific, optimized way. They are useful for encapsulating complex instruction sets or standardizing common workflows.

The Python SDK allows you to register prompts with parameters, much as you register tools:
```python
# In your server.py
from typing import Literal

from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base

mcp = FastMCP("PromptServer", version="1.0.0")

@mcp.prompt(title="Code Review Assistant")
def review_code_prompt(
    code: str,
    focus: Literal["style", "performance", "security"],
) -> list[base.Message]:
    """Generates a structured prompt for reviewing code with a specific focus."""
    # Build a multi-part prompt: an instruction message followed by the code itself
    return [
        base.UserMessage(f"Please review the following code, focusing on {focus}:"),
        base.UserMessage(code),
    ]
```
In this example, a client can invoke `review_code_prompt` and provide both the `code` to be reviewed and the `focus` of the review. The server then constructs a multi-part prompt to send to the LLM.
It is important to note a practical caveat: while AI agents can often discover and use Tools autonomously, some host applications require the user to explicitly select a Prompt from a list rather than letting the agent choose it automatically. This makes Prompts better suited to user-initiated, standardized tasks.