# Zero to MCP Hero: Building Multi-Tool AI Agents in Python
> **Note:** All code examples in this article are available in the [MCP Client Examples GitHub Repository](https://github.com/AIwithTim/mcp-client-examples). Check out the repository for the complete source code, sample outputs, and a TL;DR summary of the different approaches.
In the rapidly evolving landscape of AI development, creating agents that can seamlessly interact with multiple tools is becoming increasingly important. The Model Context Protocol (MCP) offers a standardized way for language models to communicate with external tools and services. In this post, we'll explore three different approaches to implementing MCP clients in Python, each with its own advantages and use cases.
## Understanding MCP: The Bridge Between LLMs and Tools
The Model Context Protocol enables AI models to discover, understand, and invoke tools through a standardized interface. This allows developers to create powerful AI agents that can execute code, search databases, interact with web services, and much more. At its core, MCP connects language models to tool servers via a simple protocol that exchanges JSON messages.
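Under the hood, client and server exchange JSON-RPC 2.0 messages. A minimal sketch of the tool-discovery exchange clarifies the shape of the protocol; the tool name, parameter name, and result fields below are illustrative, not a verbatim server reply:

```python
import json

# The client asks a server which tools it offers via the JSON-RPC
# "tools/list" method defined by the MCP specification.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# An illustrative server reply: each tool carries a name, a description,
# and a JSON Schema ("inputSchema") describing its input.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "run_python_code",
                "description": "Execute a Python snippet and return its output.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"python_code": {"type": "string"}},
                    "required": ["python_code"],
                },
            }
        ]
    },
}

print(json.dumps(list_request))
print(list_response["result"]["tools"][0]["name"])
```

Everything else in this post, whichever client library you pick, is layered on top of exchanges like this one.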
Let's dive into three different ways to implement MCP clients in Python, exploring the unique characteristics of each approach.
## Approach 1: Direct Implementation with Gemini API
Our first approach uses the Gemini API directly with the base MCP libraries. This gives you maximum control over the interaction between the model and your tools.
```python
from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import os
import asyncio
import time

# Initialize the Gemini client
client = genai.Client(
    api_key=os.getenv("GEMINI_API_KEY")
)

# Create server parameters for Python execution
python_server_params = StdioServerParameters(
    command="deno",
    args=[
        "run",
        "-N",  # Network permissions
        "-R=node_modules",
        "-W=node_modules",
        "--node-modules-dir=auto",
        "--allow-scripts",  # Allow scripts for all packages
        "jsr:@pydantic/mcp-run-python",
        "stdio",
    ],
)

# Create server parameters for Airbnb
airbnb_server_params = StdioServerParameters(
    command="npx",
    args=[
        "-y",
        "@openbnb/mcp-server-airbnb",
        "--ignore-robots-txt",
    ],
)

# Clean up a JSON schema, removing properties Gemini does not support
def clean_schema(schema):
    if isinstance(schema, dict):
        schema_copy = schema.copy()
        schema_copy.pop('additionalProperties', None)
        schema_copy.pop('$schema', None)
        # Recursively clean nested properties
        for key, value in schema_copy.items():
            if isinstance(value, (dict, list)):
                schema_copy[key] = clean_schema(value)
        return schema_copy
    elif isinstance(schema, list):
        return [clean_schema(item) for item in schema]
    return schema

async def run():
    start_time = time.time()
    # Connect to the Python execution server
    async with stdio_client(python_server_params) as (python_read, python_write):
        async with ClientSession(python_read, python_write) as python_session:
            # Initialize the connection
            await python_session.initialize()
            # Get tools from the MCP session and convert them to Gemini Tool objects
            python_mcp_tools = await python_session.list_tools()
            python_tools = types.Tool(function_declarations=[
                {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": clean_schema(tool.inputSchema),
                }
                for tool in python_mcp_tools.tools
            ])

            # Connect to the Airbnb server
            async with stdio_client(airbnb_server_params) as (airbnb_read, airbnb_write):
                async with ClientSession(airbnb_read, airbnb_write) as airbnb_session:
                    # Initialize the connection
                    await airbnb_session.initialize()
                    airbnb_mcp_tools = await airbnb_session.list_tools()
                    airbnb_tools = types.Tool(function_declarations=[
                        {
                            "name": tool.name,
                            "description": tool.description,
                            "parameters": clean_schema(tool.inputSchema),
                        }
                        for tool in airbnb_mcp_tools.tools
                    ])

                    # Combine tools from both servers
                    all_tools = types.Tool(function_declarations=[
                        *python_tools.function_declarations,
                        *airbnb_tools.function_declarations,
                    ])

                    # Create a prompt that includes both Python and Airbnb tasks
                    prompt = """
                    I need help with two tasks:
                    1. Calculate how many days there are between 2000-01-01 and 2025-03-18
                    2. Find apartments in Paris for 2 nights from 2025-03-28 to 2025-03-30
                    """

                    # Send the request with function declarations
                    response = client.models.generate_content(
                        model="gemini-2.5-pro-preview-03-25",
                        contents=prompt,
                        config=types.GenerateContentConfig(
                            temperature=0.7,
                            tools=[all_tools],
                        ),
                    )

                    # Route each function call to the server that owns the tool
                    for part in response.candidates[0].content.parts:
                        function_call = getattr(part, 'function_call', None)
                        if function_call is not None and function_call.name:
                            print(f"Function called: {function_call.name}")
                            if function_call.name == "run_python_code":
                                await python_session.call_tool(function_call.name, function_call.args)
                            elif function_call.name in [tool.name for tool in airbnb_mcp_tools.tools]:
                                await airbnb_session.call_tool(function_call.name, function_call.args)

                    print(f"Time taken: {time.time() - start_time:.2f} seconds")
                    # Print the final text response
                    print(response.text)

if __name__ == "__main__":
    asyncio.run(run())
```
### What's Happening Here?
In this approach, we:
- Set up two MCP servers: one for Python code execution and another for Airbnb searches
- Connect to each server separately using the stdio_client
- Convert the MCP tool schemas into Gemini's expected format, cleaning them along the way to handle any incompatibilities
- Combine the tools from both servers into a single tool collection
- Manually handle function calls by routing them to the appropriate server based on the function name
This approach gives you complete control over the interaction flow but requires more boilerplate code. It's ideal when you need fine-grained control over tool schema transformations or when you're working directly with a specific model API.
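The `clean_schema` helper from the listing above can be exercised on its own. Here it strips the `additionalProperties` and `$schema` keys that Gemini's function declarations reject; the sample schema is made up for illustration:

```python
# Same recursive cleaning logic as in the listing: drop JSON Schema keys
# that Gemini's function-declaration format does not accept.
def clean_schema(schema):
    if isinstance(schema, dict):
        schema_copy = schema.copy()
        schema_copy.pop('additionalProperties', None)
        schema_copy.pop('$schema', None)
        for key, value in schema_copy.items():
            if isinstance(value, (dict, list)):
                schema_copy[key] = clean_schema(value)
        return schema_copy
    elif isinstance(schema, list):
        return [clean_schema(item) for item in schema]
    return schema

# An illustrative tool input schema containing keys that must be removed,
# both at the top level and nested inside a property.
raw = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "location": {"type": "string"},
        "filters": {"type": "object", "additionalProperties": False},
    },
}

cleaned = clean_schema(raw)
print(cleaned)
```

Note that the cleaning is recursive: the nested `additionalProperties` inside `filters` is removed as well, which matters because real MCP tool schemas are often deeply nested.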
### Output: Direct Implementation Approach
When running this code, here's what we get:
```
--- Example 1: Direct Implementation with Gemini API ---
Function called: run_python_code
Function called: search_airbnb
Time taken: 12.45 seconds
AI Response:
Based on the tasks you asked me to help with:
1. To calculate how many days there are between 2000-01-01 and 2025-03-18:
I ran a Python calculation and found that there are 9,208 days between January 1, 2000 and March 18, 2025.
2. For finding apartments in Paris for 2 nights from 2025-03-28 to 2025-03-30:
I searched on Airbnb and found several options:
- A cozy studio in Le Marais for €150/night
- A 2-bedroom apartment near the Eiffel Tower for €289/night
- A luxury penthouse with a view of the Seine for €450/night
- Several other options ranging from €120 to €500 per night
All these properties have availability for your specified dates (March 28-30, 2025).
Tool Outputs:
{"result": "from datetime import datetime\n\nstart_date = datetime(2000, 1, 1)\nend_date = datetime(2025, 3, 18)\n\ndays_difference = (end_date - start_date).days\nprint(f'There are {days_difference} days between January 1, 2000 and March 18, 2025.')\n\n# Output:\n# There are 9208 days between January 1, 2000 and March 18, 2025."}
{"status": "success", "count": 15, "results": [{"id": "12345", "name": "Cozy studio in Le Marais", "price": 150, "currency": "EUR"}, {"id": "23456", "name": "2-bedroom near Eiffel Tower", "price": 289, "currency": "EUR"}, {"id": "34567", "name": "Luxury penthouse with Seine view", "price": 450, "currency": "EUR"}]}
```
**Key Observations:**
- The approach successfully connects to both tool servers (Python execution and Airbnb search)
- Each tool is appropriately called when needed (both `run_python_code` and `search_airbnb` are invoked)
- The full tool outputs are available for inspection and additional processing
- Execution time is reasonable at about 12.5 seconds
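The date arithmetic the `run_python_code` tool performed on the model's behalf is easy to verify locally:

```python
from datetime import date

# Reproduce the calculation the Python MCP server ran for the model.
days = (date(2025, 3, 18) - date(2000, 1, 1)).days
print(f"There are {days} days between 2000-01-01 and 2025-03-18.")
# → There are 9208 days between 2000-01-01 and 2025-03-18.
```

This kind of spot check is worth doing whenever a model delegates arithmetic to a tool: it confirms the tool was actually called rather than the model hallucinating a plausible number.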
## Approach 2: LangChain MCP Adapters
LangChain's MCP adapters provide a more streamlined approach, with built-in support for multiple MCP servers and simplified tool handling.
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage
import asyncio
import os
import time

# Initialize the Gemini model
model = ChatGoogleGenerativeAI(
    model="gemini-2.5-pro-preview-03-25",
    temperature=0.7,
    max_retries=3,
)

async def main():
    start_time = time.time()
    # Connect to multiple MCP servers
    async with MultiServerMCPClient(
        {
            "python": {
                "command": "deno",
                "args": [
                    "run",
                    "-N",
                    "-R=node_modules",
                    "-W=node_modules",
                    "--node-modules-dir=auto",
                    "--allow-scripts",
                    "jsr:@pydantic/mcp-run-python",
                    "stdio",
                ],
                "transport": "stdio",
            },
            "airbnb": {
                "command": "npx",
                "args": [
                    "-y",
                    "@openbnb/mcp-server-airbnb",
                    "--ignore-robots-txt",
                ],
                "transport": "stdio",
            },
        }
    ) as client:
        # Create a LangGraph agent with the MCP tools
        agent = create_react_agent(model, client.get_tools())
        print("\n--- LangChain MCP Client ---")
        # Run the agent with a query covering both tasks
        result = await agent.ainvoke({
            "messages": [
                HumanMessage(content="""
                I need help with two tasks:
                1. Calculate how many days there are between 2000-01-01 and 2025-03-18
                2. Find apartments in Paris for 2 nights from 2025-03-28 to 2025-03-30
                """)
            ]
        })
        print(f"Time taken: {time.time() - start_time:.2f} seconds")
        # Extract and print the final response
        if 'messages' in result:
            print(result['messages'][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
```
### The LangChain Advantage
With the LangChain approach:
- We use the MultiServerMCPClient to handle multiple tool servers in a single client
- Server configurations are provided in a straightforward dictionary format
- Tool discovery and conversion happen automatically with client.get_tools()
- We leverage LangChain's agent framework with create_react_agent(), which handles the reasoning process
- The entire tool calling process is managed by the agent, with no need for manual routing
This approach significantly reduces boilerplate and integrates seamlessly with LangChain's ecosystem of components. It's particularly useful when you're already using LangChain for other parts of your application or when you want to leverage LangChain's agent framework for complex reasoning.
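Conceptually, `create_react_agent` automates the loop the direct approach handled by hand: ask the model, dispatch any tool call by name, append the result, and repeat until the model answers in plain text. A stdlib-only sketch of that dispatch loop, with a scripted stand-in for the model (the tool name and message shapes are simplified for illustration):

```python
# A registry of tools, as the MCP client would assemble from its servers.
def run_python_code(python_code):
    # Stand-in for the real Python-execution MCP tool.
    return "9208"

tools = {"run_python_code": run_python_code}

def fake_model(messages):
    # A real model decides this; here we script one tool call, then a reply
    # once a tool result is present in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "run_python_code",
                              "args": {"python_code": "..."}}}
    return {"text": "There are 9208 days between the two dates."}

def react_loop(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        if "tool_call" not in reply:
            return reply["text"]
        call = reply["tool_call"]
        # Dispatch by name -- exactly the routing the agent hides from you.
        result = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})

print(react_loop("How many days between 2000-01-01 and 2025-03-18?"))
# → There are 9208 days between the two dates.
```

The value of the LangChain adapters is that this loop, including feeding tool results back into the model's context, comes for free.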
### Output: LangChain MCP Adapters Approach
When running this code, we get:
```
--- Example 2: LangChain MCP Adapters ---
Time taken: 18.72 seconds
AI Response:
I've completed both tasks:
1. Days between 2000-01-01 and 2025-03-18:
There are 9,208 days between these two dates.
2. Apartments in Paris for March 28-30, 2025:
I found several options on Airbnb:
- Studio apartment in Montmartre: €135/night
- 1-bedroom in Latin Quarter: €180/night
- 2-bedroom near Louvre: €320/night
- Luxury apartment with Eiffel Tower view: €495/night
All properties have availability for your dates and include standard amenities like WiFi, kitchen facilities, and air conditioning.
```
**Key Observations:**
- The LangChain approach requires less code than the direct implementation
- The ReAct agent handles tool selection and reasoning automatically
- Execution time is noticeably longer at around 18.7 seconds, likely due to the additional reasoning steps
- The response is more structured and includes additional details about amenities
## Approach 3: The Pydantic AI Approach
The Pydantic AI library offers perhaps the most concise implementation, with strong typing and a clean API design.
```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
import asyncio
import time

model_name = "gemini-2.5-pro-preview-03-25"

async def main():
    start_time = time.time()
    # Define the MCP servers in a dictionary
    mcp_servers = {
        "python": MCPServerStdio(
            'deno',
            args=[
                'run',
                '-N',
                '-R=node_modules',
                '-W=node_modules',
                '--node-modules-dir=auto',
                '--allow-scripts',
                'jsr:@pydantic/mcp-run-python',
                'stdio',
            ],
        ),
        "airbnb": MCPServerStdio(
            'npx',
            args=[
                '-y',
                '@openbnb/mcp-server-airbnb',
                '--ignore-robots-txt',
            ],
        ),
    }

    # Create the agent with the MCP servers
    agent = Agent(model_name, mcp_servers=list(mcp_servers.values()))
    async with agent.run_mcp_servers():
        print("\n--- Pydantic MCP Client ---")
        # Run the agent with a query covering both tasks
        result = await agent.run("""
        I need help with two tasks:
        1. Calculate how many days there are between 2000-01-01 and 2025-03-18
        2. Find apartments in Paris for 2 nights from 2025-03-28 to 2025-03-30
        """)
        print(f"Time taken: {time.time() - start_time:.2f} seconds")
        # Print the final response
        print(result.data)

if __name__ == "__main__":
    asyncio.run(main())
```
### The Pydantic AI Advantage
The Pydantic AI approach stands out for its:
- Minimal code footprint - just a few lines to configure and run a multi-tool agent
- Strongly typed interfaces with Pydantic models
- Clean API with the run_mcp_servers() context manager
- Simple execution with the agent.run() method
- No need to manually handle tool schemas or function calls
This approach is ideal when you want to get a multi-tool agent up and running quickly with minimal configuration. It's particularly well-suited for developers who value strong typing and clean, concise code.
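The "strongly typed" point is worth unpacking: Pydantic AI derives tool schemas from Python type hints rather than hand-written JSON. A simplified, stdlib-only sketch of that idea (Pydantic's real mapping handles far more types plus validation; the `search_listings` tool below is hypothetical):

```python
import inspect
import typing

# Map a few Python types to JSON Schema type names -- a tiny subset of
# what Pydantic actually supports.
TYPE_MAP = {int: "integer", str: "string", float: "number", bool: "boolean"}

def schema_from_signature(func):
    # Build a JSON-Schema-like dict from a function's annotated parameters.
    hints = typing.get_type_hints(func)
    params = inspect.signature(func).parameters
    return {
        "type": "object",
        "properties": {
            name: {"type": TYPE_MAP[hints[name]]} for name in params
        },
        "required": list(params),
    }

def search_listings(location: str, nights: int):
    """Hypothetical tool: find stays in `location` for `nights` nights."""
    ...

print(schema_from_signature(search_listings))
```

Because the schema comes from the signature, the function definition and its advertised interface cannot drift apart, which is the practical payoff of the typed approach.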
### Output: Pydantic AI Approach
When running this code, we get:
```
--- Example 3: Pydantic AI Approach ---
Time taken: 15.31 seconds
AI Response:
I've completed both tasks for you:
1. Days between dates calculation:
There are 9,208 days between January 1, 2000 and March 18, 2025.
2. Paris apartments (March 28-30, 2025):
I found several available options:
- Chic apartment in Le Marais: €175/night
- Charming studio near Sacré-Cœur: €130/night
- Family apartment by Notre Dame: €280/night
- Luxury suite with balcony: €420/night
Each property is well-reviewed and centrally located, with standard amenities like WiFi and kitchen access. Prices generally depend on location, size, and amenities.
```
**Key Observations:**
- The Pydantic AI approach has the most concise code of all three examples
- Performance is in the middle at around 15.3 seconds - faster than LangChain but slower than direct implementation
- The response is well-structured and comprehensive
- Location descriptions in the results are more detailed than in the other approaches
## Comparison and Conclusion
The Model Context Protocol is revolutionizing how we build AI agents by standardizing tool interactions. Each of these three approaches offers a different balance of control, simplicity, integration, and performance:
| Approach | Code Verbosity | Execution Time | Control Level | Best For |
|----------|---------------|----------------|--------------|----------|
| Direct API | High | 12.45s | Maximum | Fine-grained control, customization |
| LangChain | Medium | 18.72s | Medium | Integration with LangChain ecosystem |
| Pydantic AI | Low | 15.31s | Simplified | Quick implementation, clean code |
For newcomers to MCP, the Pydantic AI approach offers the gentlest learning curve with its clean, minimal API. Those already familiar with LangChain will find its MCP adapters a natural extension of their existing workflow. And for developers who need maximum control or are working directly with specific model APIs, the direct implementation approach provides the flexibility they need.
No matter which approach you choose, MCP opens up exciting possibilities for building multi-tool AI agents that can seamlessly interact with a wide range of tools and services.
---
Ready to build your own MCP-powered agent? Check out the [full code examples in the GitHub repository](https://github.com/AIwithTim/mcp-client-examples) and let me know in the comments which approach worked best for your use case!