# AI Smart Chatbot Project: LangGraph Agent and MCP Server Integration
## Project Introduction
This is an intelligent chatbot project built on LangGraph and LangChain, with tool-calling capabilities and a ReAct reasoning mode. The chatbot can call various tools to obtain real-time information, such as weather queries, historical event lookups, and web searches, in order to give users accurate and timely answers. The project specifically demonstrates how to integrate Model Context Protocol (MCP) servers into a ReAct Agent built with LangGraph, enabling large language models to call MCP tools autonomously.
## Main Features
- 🤖 **Intelligent Dialogue**: Natural language understanding and generation based on large language models
- 🔧 **Tool Calling**: Supports integration of various external tools, including weather queries, search engines, historical event queries, etc.
- 🧠 **ReAct Reasoning**: Employs a reasoning-action-observation loop mode to provide a structured thinking process
- 📊 **Token Statistics**: Real-time monitoring and statistics of API call token usage
- 💬 **Dialogue Memory**: Supports multi-turn dialogues, maintaining context coherence
- 🌐 **API Service**: Includes FastAPI backend service, supporting user information queries
- 🔌 **MCP Integration**: Supports seamless integration of Model Context Protocol server tools
- ⚡ **Synchronous/Asynchronous**: Provides both synchronous and asynchronous implementations to meet different performance needs
## Project Architecture
This project includes the following core components:
1. **LangGraph ReAct Agent** - An intelligent agent built on LangGraph, supporting tool calling and reasoning
2. **MCP Server** - A server providing various tool functions, such as BMI calculation, weather queries, etc.
3. **MCP Tool Adapter** - An adapter layer that converts MCP tools into LangChain tool format
4. **FastAPI Service** - Provides API services such as user information queries
## Project Structure
```
├── sync/                               # Synchronous implementation directory
│   ├── agent_chatbot_LCEL.py           # Agent implementation based on LCEL (synchronous)
│   ├── agent_chatbot_systemmessage.py  # Agent implementation based on system messages (synchronous)
│   ├── fastmcp_server_streamhttp.py    # MCP server implementation (synchronous)
│   ├── mcp_tools_adapter.py            # MCP tool adapter (synchronous)
│   └── user_api.py                     # User information API service (synchronous)
│
├── async/                              # Asynchronous implementation directory
│   ├── agent_chatbot_async.py          # Asynchronous version of the Agent implementation
│   ├── fastmcp_server_async.py         # Asynchronous version of the MCP server
│   ├── mcp_tools_adapter_async.py      # Asynchronous version of the MCP tool adapter
│   └── user_api_async.py               # Asynchronous version of the user information API service
│
├── .gitignore                          # Git ignore file configuration
├── requirements.txt                    # List of dependency packages
└── README.md                           # Project overview documentation (Chinese)
```
## Core Concept
The core concept of the entire project is **tool standardization**:
- All tools (local functions, API services, MCP services) must provide standardized interfaces
- Each tool must have a clear `name` (tool name)
- Each tool must have a detailed `description` (tool description)
- Each tool must define a clear parameter structure (`args`)
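For example, a standardized tool might look like the following minimal sketch, which uses LangChain's `StructuredTool`; the `get_weather` function and `WeatherArgs` schema here are illustrative and not taken from the project code:
```python
# Illustrative "standardized" tool: explicit name, description, and args schema
from pydantic import BaseModel, Field
from langchain_core.tools import StructuredTool

class WeatherArgs(BaseModel):
    city: str = Field(description="City name, e.g. 'Beijing'")
    date: str = Field(description="Date in YYYY-MM-DD format")

def get_weather(city: str, date: str) -> str:
    """Return the weather for a city on a given date."""
    return f"Weather in {city} on {date}: sunny"  # placeholder result

weather_tool = StructuredTool.from_function(
    func=get_weather,
    name="get_weather",                                                   # tool name
    description="Get weather conditions for a specified city and date",  # tool description
    args_schema=WeatherArgs,                                              # parameter structure (args)
)
```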
This standardization enables large models to understand tool functions and call them correctly, and also allows seamless integration of tools from different sources.

## Environment Requirements
The project includes two main environments:
### MCP Server Environment
- Python 3.10+ (recommended 3.10.18)
- conda environment: `mcp_env`
### LangGraph Agent Environment
- Python 3.13+ (recommended 3.13.5)
- conda environment: `langgraph_env`
## Dependency Package Installation
### MCP Server Environment
```bash
# Create MCP environment
conda create -n mcp_env python=3.10.18
conda activate mcp_env
# Install dependencies
pip install fastmcp requests python-dotenv
```
### LangGraph Agent Environment
```bash
# Create LangGraph environment
conda create -n langgraph_env python=3.13.5
conda activate langgraph_env
# Install dependencies
pip install langchain langchain-openai langchain-deepseek langchain-community
pip install langgraph fastapi uvicorn requests python-dotenv
pip install fastmcp # Used for MCP client functionality
```
### Additional Dependencies for Asynchronous Environment
```bash
# Install asynchronous support in the LangGraph environment
pip install aiohttp
```
## Environment Variable Configuration
Create a `.env` file and configure the necessary API keys:
```
DEEPSEEK_API_KEY=your_deepseek_api_key
TAVILY_API_KEY=your_tavily_api_key
OPENAI_API_KEY=your_openai_api_key # Optional
KIMI_API_KEY=your_kimi_api_key # Optional, for asynchronous version
```
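The scripts can then read these keys at startup with `python-dotenv`; a minimal sketch of such loading (the project's actual loading code may differ):
```python
# Minimal sketch: load API keys from the .env file with python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory
deepseek_api_key = os.getenv("DEEPSEEK_API_KEY")
tavily_api_key = os.getenv("TAVILY_API_KEY")
```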
## Usage Method
### Synchronous Version
#### Step 1: Start MCP Server
First, start the MCP server in the MCP environment:
```bash
conda activate mcp_env
python sync/fastmcp_server_streamhttp.py
```
The server will run on `http://127.0.0.1:8083/my-custom-path`, providing the following tools:
- `calculate_bmi` - Calculate BMI index
- `get_current_time` - Get current time
- `get_weather` - Get weather conditions for a specified city
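To confirm the server is reachable and see which tools it exposes, you can list them with the FastMCP client. This optional check is not part of the project scripts and assumes the `fastmcp` package's `Client` API:
```python
# Optional check: connect to the running MCP server and list its tools
import asyncio
from fastmcp import Client

async def main():
    async with Client("http://127.0.0.1:8083/my-custom-path") as client:
        for tool in await client.list_tools():
            print(tool.name, "-", tool.description)

asyncio.run(main())
```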
#### Step 2: Start User API Service (Optional)
If you need to use the user information query function:
```bash
conda activate langgraph_env
python sync/user_api.py
```
The service will run on `http://127.0.0.1:8000`.
#### Step 3: Start LangGraph Agent
Start the Agent in the LangGraph environment and specify the MCP server URL through command-line parameters:
```bash
conda activate langgraph_env
python sync/agent_chatbot_LCEL.py --mcp http://127.0.0.1:8083/my-custom-path
```
The Agent will automatically connect to the MCP server, obtain available tools, and add them to the tool list.
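Internally, the Agent pairs a chat model with the collected tools in a ReAct loop. The sketch below shows the general shape of that wiring using LangGraph's prebuilt ReAct agent and one local tool; it is an assumption about the structure, not a copy of `agent_chatbot_LCEL.py`:
```python
# Minimal ReAct agent sketch (assumed structure, not the project's exact code)
from datetime import date

from langchain_core.tools import tool
from langchain_deepseek import ChatDeepSeek
from langgraph.prebuilt import create_react_agent

@tool
def get_today() -> str:
    """Get the current system date in YYYY-MM-DD format."""
    return date.today().isoformat()

llm = ChatDeepSeek(model="deepseek-chat")     # reads DEEPSEEK_API_KEY from the environment
agent = create_react_agent(llm, [get_today])  # MCP tools would be appended to this list

result = agent.invoke({"messages": [("user", "What is today's date?")]})
print(result["messages"][-1].content)
```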
### Asynchronous Version
#### Step 1: Start Asynchronous MCP Server
```bash
conda activate mcp_env
python async/fastmcp_server_async.py
```
The server will run on `http://127.0.0.1:8084/my-custom-path`.
#### Step 2: Start Asynchronous User API Service (Optional)
```bash
conda activate langgraph_env
python async/user_api_async.py
```
The service will run on `http://127.0.0.1:8001`.
#### Step 3: Start Asynchronous LangGraph Agent
```bash
conda activate langgraph_env
python async/agent_chatbot_async.py
```
#### Step 4: Run Performance Test (Optional)
Compare the performance differences between synchronous and asynchronous versions:
```bash
python async/agent_chatbot_async.py --perf-test
```
### Command-Line Parameters
#### Synchronous Version Parameters
`agent_chatbot_LCEL.py` supports the following command-line parameters:
- `--mcp <url>`: Main MCP server URL
- `--mcp2 <url>`: Second MCP server URL
- `--no-mcp`: Disable all MCP tools
#### Asynchronous Version Parameters
`agent_chatbot_async.py` supports the following command-line parameters:
- `--mcp <url>`: MCP server URL
- `--no-mcp`: Disable MCP tools
- `--perf-test`: Run performance test
### Interactive Commands
In the chat interface, you can use the following commands:
- `stats`: View Token usage statistics
- `quit` or `exit`: Exit the program

## How it Works
### Integration of LangGraph and MCP
This project showcases how two complementary technologies are combined:
1. **LangGraph** - Provides powerful workflow control and state management capabilities, enabling the Agent to perform complex reasoning and decision-making
2. **MCP (Model Context Protocol)** - Provides standardized tool interfaces, enabling external tools to be seamlessly called by large models
These two technologies seem independent, but they are unified through the **MCP tool adapter**:
```
LangGraph Agent <---> MCP tool adapter <---> MCP Server
```
### MCP Tool Adapter
The `mcp_tools_adapter.py` file implements an adapter that converts MCP tools into LangChain tool format:
1. Connect to the MCP server and obtain the tool list
2. Create corresponding LangChain tools for each MCP tool
3. Handle tool calls and result conversions
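A simplified sketch of this idea follows. It wraps each MCP tool in a LangChain `StructuredTool` whose function forwards calls to the MCP server through the FastMCP client; the names and structure are assumptions, and the real `mcp_tools_adapter.py` may differ:
```python
# Simplified MCP -> LangChain adapter sketch (assumed names and structure)
import asyncio

from fastmcp import Client
from langchain_core.tools import StructuredTool
from pydantic import create_model


def _schema_to_model(name: str, schema: dict):
    """Very rough JSON-schema -> pydantic conversion (string/number/integer/boolean only)."""
    type_map = {"string": str, "number": float, "integer": int, "boolean": bool}
    fields = {
        key: (type_map.get(prop.get("type"), str), ...)
        for key, prop in (schema.get("properties") or {}).items()
    }
    return create_model(f"{name}_args", **fields)


def load_mcp_tools(mcp_url: str) -> list[StructuredTool]:
    """Connect to an MCP server and expose its tools as LangChain tools."""

    async def _list_tools():
        async with Client(mcp_url) as client:
            return await client.list_tools()

    def make_caller(tool_name: str):
        # Each wrapper opens a connection, calls the MCP tool, and returns the result
        def call_tool(**kwargs):
            async def _call():
                async with Client(mcp_url) as client:
                    return await client.call_tool(tool_name, kwargs)
            return asyncio.run(_call())
        return call_tool

    tools = []
    for mcp_tool in asyncio.run(_list_tools()):
        tools.append(
            StructuredTool.from_function(
                func=make_caller(mcp_tool.name),
                name=mcp_tool.name,
                description=mcp_tool.description or "",
                args_schema=_schema_to_model(mcp_tool.name, mcp_tool.inputSchema),
            )
        )
    return tools
```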
### Comparison of Synchronous and Asynchronous
The project provides both synchronous and asynchronous implementations, each with its own advantages:
**Synchronous Implementation**:
- Simple and intuitive code, easy to understand
- Suitable for simple scenarios and learning use
- Executes sequentially, easy to debug
**Asynchronous Implementation**:
- Higher performance, especially in I/O-intensive operations
- Can handle multiple requests simultaneously
- Reduces waiting time and improves throughput
In performance tests, the asynchronous version usually shows a significant advantage, especially when multiple network requests are involved.
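The effect is easy to see with a toy benchmark (illustrative only, not the project's `--perf-test` code): three simulated 1-second I/O calls take about 3 seconds when run one after another, but about 1 second when awaited concurrently with `asyncio.gather`:
```python
import asyncio
import time

async def fake_io_call(name: str) -> str:
    await asyncio.sleep(1)  # stand-in for a network request (tool call, API request, ...)
    return f"{name} done"

async def run_concurrently() -> list[str]:
    # All three "requests" wait at the same time instead of sequentially
    return await asyncio.gather(*(fake_io_call(f"req{i}") for i in range(3)))

start = time.perf_counter()
print(asyncio.run(run_concurrently()))
print(f"elapsed: {time.perf_counter() - start:.1f}s")  # roughly 1s instead of 3s
```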
## Detailed Explanation of Available Tools
### Local Tools
1. **Internet Search** (`internet_search_engine`)
- Use the Tavily search engine to get up-to-date information from the web
- Returns up to 2 search results
2. **Date Query** (`get_today`/`get_today_async`)
- Get the current system date
- Format: YYYY-MM-DD
3. **Historical Event Query** (`get_historical_events_on_date`/`get_historical_events_on_date_async`)
- Query historical events on a specified date
- Requires providing month and date parameters
4. **User Information Query** (`get_user_info`/`get_user_info_async`)
- Query user information from the internal API
- Requires starting the FastAPI service
### MCP Server Tools
1. **BMI Calculation** (`calculate_bmi`)
- Calculate body mass index
- Parameters: weight (kg) and height (m)
2. **Current Time** (`get_current_time`)
- Get the current system time
- No parameters required
3. **Weather Query** (`get_weather`)
- Get weather conditions for a specified city
- Parameters: city name and date
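For reference, the following is a plausible sketch of how such tools can be registered on a FastMCP server; the parameter names follow the examples in this README, and the actual `fastmcp_server_streamhttp.py` may differ:
```python
# Hypothetical FastMCP server sketch exposing two of the tools listed above
from datetime import datetime

from fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> str:
    """Calculate the body mass index (BMI) from weight in kilograms and height in meters."""
    return f"BMI: {weight_kg / (height_m ** 2):.2f}"

@mcp.tool()
def get_current_time() -> str:
    """Get the current system time."""
    return datetime.now().isoformat()

if __name__ == "__main__":
    # Streamable HTTP transport on the address used elsewhere in this README
    mcp.run(transport="streamable-http", host="127.0.0.1", port=8083, path="/my-custom-path")
```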
## ReAct Reasoning Mode
The chatbot follows a structured reasoning process:
1. **Key Concepts** - Identify the core concepts in the task
2. **Concept Decomposition** - Decompose complex concepts into queryable sub-elements
3. **Reflection** - Analyze the information obtained and the missing parts
4. **Thinking** - Analyze the dependencies and query priorities between elements
5. **Reasoning** - Develop a strategy for obtaining information
6. **Planning** - Develop specific execution steps
7. **Plan Verification** - Verify the feasibility of the plan
8. **Plan Improvement** - Optimize and adjust the execution plan
## Example Dialogues
### BMI Calculation Example
```
You: I am 1.75 meters tall and weigh 70 kilograms. What is my BMI?
ReAct Agent: **Key Concepts**: BMI index
- Height: 1.75 meters
- Weight: 70 kilograms
**Concept Decomposition**:
BMI index
- Height: 1.75 meters
- Weight: 70 kilograms
- BMI calculation formula: Weight (kg) / Height (m)²
**Reflection**:
A. I have already obtained the height and weight values, but have not yet calculated the BMI.
B. Known information: Height = 1.75 meters, Weight = 70 kilograms.
C. Missing: The specific value of the BMI index.
**Thinking**:
A. I need to obtain the value of the BMI index.
B. I can use the calculate_bmi tool, which requires height and weight as input parameters.
C. The current need is to obtain the BMI index value.
**Reasoning**:
I already have the height (1.75 meters) and weight (70 kilograms), and can directly call the calculate_bmi tool to calculate the BMI value.
**Planning**:
Call the calculate_bmi tool with parameters weight_kg=70, height_m=1.75.
**Plan Verification**:
A. Known constants: Height = 1.75 meters, Weight = 70 kilograms.
B. The current plan does not involve exhaustive file records.
C. The current plan does not depend on other unobtained sub-elements.
D. The current plan does not make assumptions about the values of sub-elements.
E. The sub-task is not completed and needs to continue.
Based on the height (1.75 meters) and weight (70 kilograms) you provided, your BMI index is 22.86.
This falls within the normal weight range (18.5-24.9), indicating a healthy weight. Maintaining good eating habits and appropriate exercise will help keep your BMI at this healthy level.
```
### Weather Query Example
```
You: What is the weather like in Beijing today?
ReAct Agent: **Key Concepts**: Today's weather conditions in Beijing
**Concept Decomposition**:
- Today's date
- Beijing's weather information
**Reflection**:
A. Need to get today's date and Beijing's weather information
B. No information has been obtained yet
C. Missing: current date, Beijing weather data
**Thinking**:
A. You can directly query the weather in Beijing without getting the date first
B. The weather query tool can directly provide real-time information
**Reasoning**: Directly call the weather query tool to get the real-time weather in Beijing
**Planning**: Call the get_weather tool with the parameter "Beijing"
The current weather in Beijing: sunny, temperature 25°C, feels like 27°C, humidity 45%. It's a nice day today! ☀️
```
## Extended Development
### Adding New MCP Tools
To add a new tool to the MCP server, simply add a new tool function in `fastmcp_server_streamhttp.py` or `fastmcp_server_async.py`:
```python
# Synchronous version
@mcp.tool()
def new_tool_function(param1: str, param2: int) -> str:
    """Tool description...

    Args:
        param1: Parameter 1 description
        param2: Parameter 2 description

    Returns:
        Return value description
    """
    # Tool implementation...
    result = f"{param1}: {param2}"
    return result


# Asynchronous version (requires `import asyncio` at the top of the server file)
@mcp.tool()
async def new_tool_function_async(param1: str, param2: int) -> str:
    """Tool description...

    Args:
        param1: Parameter 1 description
        param2: Parameter 2 description

    Returns:
        Return value description
    """
    # Asynchronous tool implementation...
    await asyncio.sleep(0.01)  # Simulate an asynchronous operation
    result = f"{param1}: {param2}"
    return result
```
After adding a new tool, restart the MCP server and then restart the Agent so that the new tool becomes available to the Agent.
### Multi-MCP Server Support
The synchronous version of the Agent supports connecting to multiple MCP servers:
```bash
python sync/agent_chatbot_LCEL.py --mcp http://server1:8083/path --mcp2 http://server2:8086/path
```
If two servers provide tools with the same name, consider adding a prefix when loading the tools to avoid conflicts.
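One simple way to do that, sketched below using the hypothetical `load_mcp_tools` helper from the adapter sketch above (this logic is not present in the project code):
```python
# Prefix tools from the second server to avoid name clashes (illustrative)
all_tools = load_mcp_tools("http://server1:8083/path")
for extra_tool in load_mcp_tools("http://server2:8086/path"):
    extra_tool.name = f"server2_{extra_tool.name}"
    all_tools.append(extra_tool)
```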
### Performance Optimization
1. **Caching Mechanism** (see the sketch after this list)
- Cache frequently called tool results
- Set cache expiration time
- Implement cache invalidation strategy
2. **Concurrency Control**
- Limit the number of concurrent requests
- Implement request queues and priorities
- Add timeout and retry mechanisms
3. **Load Balancing**
- Distribute requests among multiple MCP servers providing the same functionality
- Implement health checks and failover
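As an illustration of the caching idea in item 1, a minimal time-based result cache might look like this (not part of the project code; `cached_weather_lookup` is hypothetical):
```python
import time
from functools import wraps

def ttl_cache(seconds: float):
    """Cache a function's results for `seconds`, keyed by its positional arguments."""
    def decorator(fn):
        cache: dict = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            if args in cache and now - cache[args][0] < seconds:
                return cache[args][1]       # still fresh: return the cached result
            result = fn(*args)
            cache[args] = (now, result)     # store the result with its timestamp
            return result
        return wrapper
    return decorator

@ttl_cache(seconds=300)
def cached_weather_lookup(city: str) -> str:
    # Hypothetical expensive call, e.g. forwarding to the get_weather tool
    return f"weather for {city}"
```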
## Troubleshooting
### MCP Connection Issues
If the Agent cannot connect to the MCP server, check:
1. Whether the MCP server is running
2. Whether the URL is correct
3. Whether the network connection is normal
4. Whether proxy settings need to be adjusted
### Tool Call Failure
If the tool call fails, possible reasons include:
1. Parameter type mismatch (see the sketch after this list)
- Large models may return numbers in string form, while tools require numeric types
- Solution: Add type conversion logic in the tool adapter
2. Unclear tool description
- If the tool description is not detailed enough, the large model may not be able to use it correctly
- Solution: Provide detailed parameter descriptions, examples, and usage scenarios
3. Network connection issues
- External APIs may be temporarily unavailable
- Solution: Add error handling and retry mechanisms
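For the type-mismatch case in item 1, the coercion can be as simple as the following sketch (illustrative; the project's adapter may handle it differently):
```python
def coerce_number(value):
    """Convert numeric strings from the LLM (e.g. '70', '1.75') into int/float."""
    if isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            try:
                return float(value)
            except ValueError:
                return value  # not numeric: leave unchanged
    return value

# Example: arguments produced by the model before calling calculate_bmi
args = {"weight_kg": "70", "height_m": "1.75"}
args = {key: coerce_number(val) for key, val in args.items()}
# -> {"weight_kg": 70, "height_m": 1.75}
```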
### Performance Issues
If you encounter performance issues, consider:
1. Switching to the asynchronous version
2. Reducing unnecessary network requests
3. Optimizing tool implementation
4. Adding a caching mechanism
## Technical Architecture
- **LangGraph**: Used to build complex AI workflow graphs
- **LangChain**: Provides LLM integration and tool management
- **FastMCP**: Provides standardized tool interfaces
- **DeepSeek/OpenAI/Kimi**: Large language model providers
- **FastAPI**: Lightweight Web framework for API services
- **Asyncio/Aiohttp**: Asynchronous programming support
## Contribution Guide
Issues and Pull Requests to improve the project are welcome. Before contributing code, please ensure that:
1. The code complies with the project's coding standards
2. Add necessary comments and documentation
3. Test the correctness of new features
4. Update relevant documentation
## License
This project is released under the MIT License; see the LICENSE file for details.
## Contact Information
If you have any questions or suggestions, please reach out through one of the following channels:
- Submit a GitHub Issue: https://github.com/mr-jay-wei/my-langgraph-agent
- Email the project maintainer, Jay Wei: xiaofeng.0209@gmail.com
---
*Last updated: July 2025*