# MCPBridge
MCPBridge is a lightweight service that lets any LLM client use MCP server tools. It exposes an OpenAI-compatible HTTP interface to MCP protocol functionality, so AI clients can talk to large language models and leverage MCP tool capabilities without client-side changes.
## Key Features
- Provides an HTTP API interface compatible with OpenAI API format, supporting any LLM client integration
- Bridges your LLM client with MCP tool servers without modifying client code
- Supports interaction with large language models (currently supports DeepSeek)
- Integrates the MCP protocol, enabling standard LLM clients to utilize MCP tool functionalities
- Supports streaming responses for a better user experience
- Segmented tool calls, displaying initial responses first, followed by tool execution results
- Built-in API key validation protection to prevent unauthorized access
- Automatically initializes the MCP server and tools upon server startup
- Automatically checks and installs dependencies required for tools
- Provides a status check API for easy monitoring of server status
- Simple and user-friendly configuration method
## Installation Guide
### Prerequisites
- Python 3.8 or higher
- Node.js and npm (for supporting npx tool calls)
- Access key for the DeepSeek API
### Installation Steps
1. Clone or download the code repository
2. Install the required packages:
```bash
pip install -r requirements.txt
```
3. Copy the environment variable example file and configure it:
```bash
cp .env.example .env
```
4. Edit the `.env` file to configure necessary parameters:
- Set the local API key
- Configure the model API key
- Adjust other parameters as needed
## Configuration Instructions
The configuration file `.env` mainly contains the following sections:
### Local API Server Settings
```
API_PORT=55545 # Local server port
API_HOST=127.0.0.1 # Local server IP address
API_KEY=your_api_key_here # Local API access key for request validation
REQUEST_MODEL_ID=Seraphina # Local model ID, customizable
```
### MCP Server Settings
```
SSE_API_KEY=your_sse_api_key_here # Configure this key if using SSE server
```
### Model Settings
```
MODEL_API_KEY=your_model_api_key_here # DeepSeek API key
MODEL_API_BASE=https://api.deepseek.com/v1 # API base URL
MODEL=deepseek-chat # Model to be used
```
## Running the Server
Start the API server:
```bash
python api_server.py
```
By default, the server will run on the configured port. The MCP server and available tools will be automatically initialized upon server startup, without waiting for the first request. You can confirm the server is running by accessing `http://127.0.0.1:55545/` in your browser.
You can also check the current status of the server, including initialization status, available tools, and models, by visiting `http://127.0.0.1:55545/status`.
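The readiness check described above can be scripted. A minimal sketch using `requests` (function names are illustrative; the exact response fields are shown in the API Interface section below):

```python
import requests

def fetch_status(base_url="http://127.0.0.1:55545"):
    """GET /status and return the parsed JSON payload."""
    resp = requests.get(f"{base_url}/status", timeout=5)
    resp.raise_for_status()
    return resp.json()

def is_ready(payload):
    """True once the server reports it has finished initializing."""
    return payload.get("status") == "ready"

# Example (requires a running server):
# print(is_ready(fetch_status()))
```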
## MCP Tool Server Configuration
The MCP server configuration is located in the `servers_config.json` file and supports two types of servers:
### SSE Server
```json
"sse-server": {
  "url": "http://localhost:8000/sse",
  "headers": {
    "Authorization": "Bearer ${SSE_API_KEY}"
  },
  "timeout": 5,
  "sse_read_timeout": 300
}
```
### STDIO Server (NPX Tool)
```json
"desktop-automation": {
  "name": "Desktop Automation",
  "isActive": true,
  "command": "npx",
  "args": [
    "-y",
    "mcp-desktop-automation"
  ],
  "dependencies": [
    "mcp-desktop-automation"
  ]
}
```
For the STDIO server, the API will automatically check and install the configured dependencies. You can specify the list of dependencies in the `dependencies` field, supporting version requirements (e.g., `package@latest`).
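The entries in `dependencies` are plain npm package specifiers. As a sketch of how such a specifier might be split into name and version before installation (the actual install logic lives in the server code; these helper names are illustrative):

```python
import subprocess

def parse_dependency(spec):
    """Split an npm specifier like 'mcp-desktop-automation' or
    'package@latest' into (name, version); version is None if omitted.
    Scoped packages ('@scope/name@1.0.0') keep their leading '@'."""
    if spec.startswith("@"):
        scope_rest = spec[1:]
        if "@" in scope_rest:
            name, version = scope_rest.rsplit("@", 1)
            return "@" + name, version
        return spec, None
    if "@" in spec:
        name, version = spec.rsplit("@", 1)
        return name, version
    return spec, None

def install_dependency(spec):
    """Install one npm package via npm (hypothetical helper)."""
    subprocess.run(["npm", "install", spec], check=True)

# parse_dependency("mcp-desktop-automation")  -> ("mcp-desktop-automation", None)
# parse_dependency("package@latest")          -> ("package", "latest")
```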
## API Interface
### Status Check
- **Endpoint**: `/status`
- **Method**: GET
- **Response Example**:
```json
{
  "status": "ready",
  "servers_count": 1,
  "tools_count": 5,
  "default_model": "Seraphina",
  "available_tools": [
    {"name": "get_weather", "description": "Get weather information for a specified city"},
    {"name": "search_web", "description": "Search for information on the web"}
  ],
  "available_models": ["Seraphina"]
}
```
### Chat Interface
- **Endpoint**: `/v1/chat/completions`
- **Method**: POST
- **Request Headers**:
- `Content-Type: application/json`
- `X-API-Key: your_api_key_here`
- **Request Body Example**:
```json
{
  "model": "Seraphina",
  "messages": [
    {"role": "user", "content": "Hello, can you help me check today's weather?"}
  ],
  "stream": true
}
```
- **Response Example**:
```json
{
  "id": "chatcmpl-123456789",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "Seraphina",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I would be happy to help you check today's weather. Let me use the weather query tool...\nThe query result shows that today is sunny with a temperature of 25°C, perfect for outdoor activities."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": -1,
    "completion_tokens": -1,
    "total_tokens": -1
  },
  "tool_used": true
}
```
## Tool Invocation
The system supports tool invocation via the MCP protocol. When a model needs to use a tool, it will return a JSON response containing the tool invocation instructions in the following format:
```json
{
  "say": "I need to check the weather information",
  "use": {
    "tool": "get_weather",
    "params": {
      "city": "Beijing"
    }
  }
}
```
The server will automatically handle the tool invocation and return the results to the model to generate the final reply.
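A sketch of how a response in this format could be detected and unpacked (MCPBridge performs this step server-side; the function name here is illustrative):

```python
import json

def parse_tool_call(model_output):
    """Return (say, tool_name, params) if the model output is a tool-call
    JSON object in the format above, otherwise None."""
    try:
        payload = json.loads(model_output)
    except (json.JSONDecodeError, TypeError):
        return None
    use = payload.get("use") if isinstance(payload, dict) else None
    if not isinstance(use, dict) or "tool" not in use:
        return None
    return payload.get("say", ""), use["tool"], use.get("params", {})

# Plain text (no tool call) yields None; a tool-call object yields its parts.
```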
## Integration with Existing LLM Clients
MCPBridge is designed as a plug-and-play solution that can be easily integrated into existing LLM client systems:
1. **Standard OpenAI Client** - Simply modify the API endpoint to the MCPBridge address
2. **Custom Clients** - Just implement API calls compatible with the OpenAI format
3. **UI Interfaces** - Any ChatUI using the OpenAI interface can directly connect to MCPBridge
No extensive modifications are needed for your existing client code; just adjust the API address and key configuration to enable MCP tool capabilities in your client.
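For instance, with the official `openai` Python SDK the switch is purely client configuration. This sketch assumes the server reads the key from the `X-API-Key` header as documented above, so the SDK's own `api_key` is set to a placeholder and the real key is passed via `default_headers`:

```python
# Client settings for pointing an OpenAI-compatible SDK at MCPBridge.
# Host, port, and key values below are examples; substitute your own.
mcpbridge_kwargs = {
    "base_url": "http://127.0.0.1:55545/v1",               # MCPBridge endpoint
    "api_key": "placeholder",                              # required by the SDK, unused here
    "default_headers": {"X-API-Key": "your_api_key_here"}, # key MCPBridge validates
}

# from openai import OpenAI
# client = OpenAI(**mcpbridge_kwargs)
# reply = client.chat.completions.create(
#     model="Seraphina",
#     messages=[{"role": "user", "content": "Hello"}],
# )
```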
## Example Usage
### Sending Requests with curl
```bash
curl -X POST http://127.0.0.1:55545/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_api_key_here" \
  -d '{
    "model": "Seraphina",
    "messages": [
      {"role": "user", "content": "Hello, can you briefly introduce yourself?"}
    ]
  }'
```
### Sending Requests with Python
```python
import requests

url = "http://127.0.0.1:55545/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "X-API-Key": "your_api_key_here",
}
data = {
    "model": "Seraphina",
    "messages": [
        {"role": "user", "content": "Hello, can you briefly introduce yourself?"}
    ],
}

response = requests.post(url, json=data, headers=headers)
print(response.json())
```
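With `"stream": true` set, the response arrives as a server-sent-event stream instead of a single JSON body. A sketch of a chunk parser, assuming the common OpenAI-style `data: {...}` line format (the exact chunk shape MCPBridge emits may differ):

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from OpenAI-style SSE lines ('data: {...}')."""
    for raw in lines:
        line = raw.decode() if isinstance(raw, bytes) else raw
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

# Usage with requests (requires a running server):
# resp = requests.post(url, json={**data, "stream": True},
#                      headers=headers, stream=True)
# for piece in iter_stream_content(resp.iter_lines()):
#     print(piece, end="", flush=True)
```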
## Frequently Asked Questions
### Server Startup Failure
- Check if the port is occupied
- Ensure all necessary environment variables are correctly set
- Verify that Python dependencies are fully installed
### API Request Failure
- Confirm that the API key is correctly set and included in the request
- Check if the request format is correct
- Review server logs for detailed error information
### Tool Execution Issues
- Ensure the MCP server configuration is correct
- Check if the dependencies for related tools are correctly installed
- Review logs to understand the specific error reasons
## Development and Contribution
Contributions and feedback are welcome through Pull Requests or Issues.
## License
This project is open-sourced under the Apache-2.0 license.