# MCP Demo Project 🚀
A minimal demonstration of the **Model Context Protocol (MCP)** using Python. This project shows how to:
- Create an MCP server that exposes a calculator tool
- Build a client that uses OpenAI's gpt-4o-mini to intelligently call the tool
- Demonstrate model-driven tool calling through MCP
## What is MCP?
The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). It enables AI models to securely access tools, data sources, and services through a unified interface.
## Project Structure
```
mcp-demo/
├── server.py # MCP server with calculator tool
├── client.py # Client that uses OpenAI to call the tool
├── requirements.txt # Python dependencies
├── .env # Your API keys (create this)
└── README.md # This file
```
## Features
✅ **Simple Calculator Tool** - Performs add, subtract, multiply, divide operations
✅ **Proper JSON Schema** - Well-defined input validation
✅ **Structured Output** - Clean, formatted results
✅ **Lightweight Model** - Uses OpenAI's gpt-4o-mini (fast & cost-effective)
✅ **Natural Language** - Ask questions in plain English
✅ **Clean Code** - Well-commented, easy to understand
## Prerequisites
- Python 3.10 or higher
- OpenAI API key (get one at https://platform.openai.com/api-keys)
## Installation
### 1. Create a virtual environment (recommended)
```bash
python -m venv venv
source venv/bin/activate # On macOS/Linux
# or
venv\Scripts\activate # On Windows
```
### 2. Install dependencies
```bash
pip install -r requirements.txt
```
### 3. Set up your OpenAI API key
Create a `.env` file in the project root:
```bash
echo "OPENAI_API_KEY=your-api-key-here" > .env
```
Or copy the example:
```bash
cp .env.example .env
# Then edit .env and add your actual API key
```
## Usage
### Running the Demo
The client automatically starts the server and runs a demo calculation:
```bash
python client.py
```
### Expected Output
```
🚀 Starting MCP Client Demo
==================================================
✅ Connected to MCP server
📋 Available tools: 1
• calculator: Perform basic arithmetic operations: add, subtract, multiply, divide
💬 User prompt: What is 156 multiplied by 27?
🤖 Asking OpenAI gpt-4o-mini to solve the problem...
✅ Model decided to use a tool!
🔧 Tool Call Details:
Name: calculator
Arguments: {'operation': 'multiply', 'a': 156, 'b': 27}
⚙️ Executing tool via MCP server...
✨ Tool Result:
Result: 156 multiply 27 = 4212
🎯 Final Answer:
156 multiplied by 27 equals 4212.
==================================================
✅ Demo completed successfully!
```
## How It Works
### 1. MCP Server (`server.py`)
- Exposes a `calculator` tool via MCP protocol
- Defines a JSON schema for input validation
- Handles arithmetic operations: add, subtract, multiply, divide
- Returns structured results
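The arithmetic dispatch at the heart of the tool can be sketched in plain Python (an illustrative sketch; the actual `server.py` wraps this logic in an MCP `call_tool` handler):

```python
# Minimal sketch of the calculator logic (illustrative; the real
# server.py wraps this in an MCP call_tool handler).

def calculate(operation: str, a, b) -> str:
    """Perform a basic arithmetic operation and return a formatted result."""
    operations = {
        "add": lambda: a + b,
        "subtract": lambda: a - b,
        "multiply": lambda: a * b,
        "divide": lambda: a / b,
    }
    if operation not in operations:
        raise ValueError(f"Unknown operation: {operation}")
    if operation == "divide" and b == 0:
        raise ValueError("Division by zero")
    result = operations[operation]()
    return f"Result: {a} {operation} {b} = {result}"
```

Running `calculate("multiply", 156, 27)` produces the `Result: 156 multiply 27 = 4212` string seen in the demo output above.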
### 2. Client (`client.py`)
1. Connects to the MCP server via stdio
2. Discovers available tools from the server
3. Sends a natural language prompt to OpenAI
4. OpenAI analyzes the prompt and decides to call the calculator tool
5. Client executes the tool via MCP
6. Returns the result to OpenAI for a natural language response
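The round trip in steps 3-6 can be sketched with plain dictionaries. The field names follow OpenAI's Chat Completions tool-calling format; the `id` value and prompt text here are illustrative, not taken from a real API response:

```python
import json

# Illustrative message round trip for OpenAI tool calling
# (field names follow the Chat Completions API; values are examples).
messages = [
    {"role": "user", "content": "What is 156 multiplied by 27?"},
    # The model replies with a tool call instead of text:
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_1",  # example id; real ids are generated by the API
            "type": "function",
            "function": {
                "name": "calculator",
                "arguments": json.dumps(
                    {"operation": "multiply", "a": 156, "b": 27}
                ),
            },
        }],
    },
    # The client executes the tool via MCP and feeds the result back:
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "Result: 156 multiply 27 = 4212",
    },
]
# A second chat completion over these messages yields the natural-language answer.
```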
### 3. The Flow
```
User Question → OpenAI (gpt-4o-mini) → Tool Call Decision
↓
MCP Client
↓
MCP Server → Calculator Tool
↓
Result → OpenAI → Natural Answer
```
## Customization
### Try Different Questions
Edit the `user_prompt` in [client.py](client.py#L69):
```python
# Examples:
user_prompt = "What is 156 multiplied by 27?"
user_prompt = "Calculate 100 divided by 5"
user_prompt = "Add 45 and 67"
user_prompt = "What's 89 minus 23?"
```
### Add More Tools
In [server.py](server.py), add more tools to the `list_tools()` function:
```python
@app.list_tools()
async def list_tools() -> list[Tool]:
return [
Tool(name="calculator", ...),
Tool(name="weather", ...), # Add your new tool
]
```
Then handle them in `call_tool()`:
```python
@app.call_tool()
async def call_tool(name: str, arguments: dict):
if name == "calculator":
# ... existing code
elif name == "weather":
# ... your new tool logic
```
## Troubleshooting
### "OPENAI_API_KEY not found"
- Make sure you created a `.env` file
- Verify it contains: `OPENAI_API_KEY=sk-...`
### "Module not found" errors
- Activate your virtual environment
- Run `pip install -r requirements.txt`
### "Connection refused" or server issues
- The client automatically starts the server
- Make sure `server.py` is in the same directory
- Check that Python is in your PATH
## Architecture Details
### MCP Protocol
- **Transport**: stdio (standard input/output)
- **Format**: JSON-RPC 2.0
- **Tools**: JSON Schema for validation
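On the wire, a tool invocation is a JSON-RPC 2.0 request written to the server's stdin. The `tools/call` method and parameter shape follow the MCP specification; the `id` value is arbitrary:

```python
import json

# Sketch of the JSON-RPC 2.0 message the client writes to the server's
# stdin when invoking a tool (method name per the MCP specification).
request = {
    "jsonrpc": "2.0",
    "id": 1,  # arbitrary request id, echoed back in the response
    "method": "tools/call",
    "params": {
        "name": "calculator",
        "arguments": {"operation": "multiply", "a": 156, "b": 27},
    },
}
wire = json.dumps(request)
```

The MCP Python SDK handles this framing for you; the sketch is only to show what travels over stdio.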
### Model
- **Provider**: OpenAI
- **Model**: gpt-4o-mini
- **Features**: Function calling, tool use
## Learn More
- [MCP Documentation](https://modelcontextprotocol.io)
- [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)
- [OpenAI Function Calling](https://platform.openai.com/docs/guides/function-calling)
## License
MIT - Feel free to use this demo for learning and building your own MCP tools!
---
**Happy Building! 🎉**