# MCP on Amazon Bedrock[[English Readme](./README.en.md)]
- Demo Videos

> ChatBot is the most common application form in the era of large models, but it is limited by the model's inability to obtain timely information or operate external systems, which keeps its application scenarios relatively narrow. The later introduction of Function Calling/Tool Use allowed large models to interact with external systems, but the drawback was that the model's business logic and tool development were tightly coupled, so tool development could not scale efficiently. In late November 2024, Anthropic launched [MCP](https://www.anthropic.com/news/model-context-protocol), which breaks this situation by bringing the power of the whole community to bear on scaling tool development. Today there is a rich set of [MCP servers](https://github.com/modelcontextprotocol/servers) developed by the open-source community and various vendors that end users can integrate seamlessly into a ChatBot, greatly extending the capabilities of ChatBot UIs and pointing toward a trend of the ChatBot becoming a unified UI for many systems.
- How MCP Works

- AWS-based MCP Enterprise Architecture Design

- This project provides ChatBot interaction services based on **Bedrock** large models such as Nova and Claude, while introducing **MCP**, greatly enhancing and extending the application scenarios of ChatBot products and supporting seamless integration with local file systems, databases, development tools, internet search, and more. If a ChatBot built on a large model is the brain, then introducing MCP is like giving it arms and legs, truly enabling the large model to operate and connect with all kinds of existing systems and data.
- **Demo Architecture**

- **Core Components**

1. MCP Client (mcp_client.py)
- Responsible for managing connections to multiple MCP servers
- Handles tool calls and resource access
- Provides tool name mapping and normalization functions
2. Chat Client (chat_client.py, chat_client_stream.py)
- Interacts with the Amazon Bedrock API
- Handles user queries and model responses
- Supports streaming responses and tool calls
3. Main Service (main.py)
- Provides FastAPI service, exposing chat and MCP management APIs
- Manages user sessions and MCP server configurations
- Handles concurrent requests and resource cleanup
4. Web Interface (chatbot.py)
- User interface based on Streamlit
- Allows users to interact with the model and manage MCP servers
- Displays tool call results and thought processes
- **Technical Architecture**
1. Frontend and Backend Separation
- Backend: FastAPI service providing REST API
- Frontend: Streamlit Web interface
2. Multi-user Support
- User session isolation
- Supports concurrent access
3. MCP Server Management
- Supports dynamic addition and removal of MCP servers
- Global and user-specific MCP server configurations
- **Workflow**

1. Users send queries through the web interface
2. The backend service receives the query and forwards it to the Bedrock model
3. If the model needs to use a tool, the MCP client will call the corresponding MCP server
4. The tool call results are returned to the model, which generates the final response
5. The response is returned to the user, including the tool call process and results
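
The workflow above follows the Bedrock Converse tool-use pattern. Below is a minimal, simplified sketch of that loop using `boto3`; the actual logic lives in `chat_client.py` and `mcp_client.py`, and `call_mcp_tool` here is a hypothetical stand-in for the MCP client's tool dispatch.

```python
# Simplified sketch of the query -> model -> MCP tool -> response loop.
# Assumptions: tools discovered from the MCP servers have already been converted
# into the Bedrock Converse toolSpec format, and call_mcp_tool is a hypothetical
# placeholder for the dispatch logic in mcp_client.py.
import boto3

bedrock = boto3.client("bedrock-runtime")  # region/credentials come from the environment


def call_mcp_tool(name: str, arguments: dict) -> str:
    """Hypothetical: forward the call to the matching MCP server and return its text output."""
    raise NotImplementedError


def chat(query: str, tool_config: dict, model_id: str = "us.amazon.nova-pro-v1:0") -> str:
    messages = [{"role": "user", "content": [{"text": query}]}]
    while True:
        resp = bedrock.converse(modelId=model_id, messages=messages, toolConfig=tool_config)
        output = resp["output"]["message"]
        messages.append(output)
        if resp["stopReason"] != "tool_use":
            # No more tool requests: return the final text answer.
            return "".join(block.get("text", "") for block in output["content"])
        # The model requested tools: run them through MCP and feed the results back.
        results = []
        for block in output["content"]:
            if "toolUse" in block:
                tool = block["toolUse"]
                text = call_mcp_tool(tool["name"], tool["input"])
                results.append({"toolResult": {"toolUseId": tool["toolUseId"],
                                               "content": [{"text": text}]}})
        messages.append({"role": "user", "content": results})
```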
- This project is still being actively explored and improved, and MCP is thriving across the community. Everyone is welcome to follow along!
## 1. Project Features
- Supports both Amazon Nova Pro and Claude Sonnet models
- Fully compatible with the official Anthropic MCP standard, allowing the same approach to directly use various community [MCP servers](https://github.com/modelcontextprotocol/servers/tree/main)
- Decouples MCP capabilities from the client, encapsulating MCP capabilities on the server side and providing API services, with the chat interface compatible with OpenAI, facilitating integration with other chat clients
- Frontend and backend separation, both MCP Client and MCP Server can be deployed on the server side, allowing users to interact through the backend web service using a web browser, thus accessing LLM and MCP Server capabilities and resources
- Supports multiple users, user session isolation, and concurrent access.
- Streaming responses
- Visualization of thought processes
- Display of tool call results and Computer Use screenshots
## 2. Installation Steps
### 2.1 Dependency Installation
Currently, mainstream MCP Servers are developed based on NodeJS or Python and run on user PCs, so users need to install these dependencies.
#### 2.1.1 NodeJS
Download and install NodeJS from [here](https://nodejs.org/en). This project has been thoroughly tested with version `v22.12.0`.
#### 2.1.2 Python
Some MCP Servers are developed based on Python, so users must install [Python](https://www.python.org/downloads/). Additionally, this project code is also developed based on Python, requiring the installation of the environment and dependencies.
First, install the Python package management tool `uv`, which can be referenced in the official guide [uv](https://docs.astral.sh/uv/getting-started/installation/). This project has been thoroughly tested with version `v0.5.11`.
### 2.2 Environment Configuration
After downloading and cloning this project, navigate to the project directory to create a Python virtual environment and install dependencies:
```bash
uv sync
```
At this point, the virtual environment is created in the `.venv` directory of the project, activate it:
```bash
source .venv/bin/activate
```
### 2.3 Configuration Editing
> Tips: If you need to configure multiple account ak/sk pairs and use a polling mechanism, you can add a `credential.csv` in the `conf/` directory with column names **ak** and **sk**, filling in multiple ak/sk pairs, for example:

| ak | sk |
| ----- | ----- |
| ak 1 | sk 1 |
| ak 2 | sk 2 |
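
For illustration, credential rotation over such a file could be as simple as the sketch below (hypothetical; the project's actual polling implementation may differ):

```python
# Hypothetical sketch of round-robin selection from conf/credential.csv;
# the project's actual polling implementation may differ.
import csv
from itertools import cycle

with open("conf/credential.csv", newline="") as f:
    credentials = cycle(list(csv.DictReader(f)))


def next_credential() -> dict:
    """Return the next {"ak": ..., "sk": ...} pair in rotation."""
    return next(credentials)
```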
Project configuration should be written into the `.env` file, which should include the following configuration items (it is recommended to copy `env_dev` and modify it):
```
AWS_ACCESS_KEY_ID=(optional, not needed if credential.csv is present)<your-access-key>
AWS_SECRET_ACCESS_KEY=(optional)<your-secret-key>
AWS_REGION=<your-region>
LOG_DIR=./logs
CHATBOT_SERVICE_PORT=<chatbot-ui-service-port>
MCP_SERVICE_HOST=127.0.0.1
MCP_SERVICE_PORT=<bedrock-mcp-service-port>
API_KEY=<your-new-api-key>
MAX_TURNS=100
```
Note: This project uses **AWS Bedrock Nova/Claude** series models, so you need to register and obtain the above service access keys.
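
As a rough illustration of how these variables might be consumed (a sketch assuming `python-dotenv`; the service's own loading code may differ):

```python
# Sketch: load .env and build a Bedrock runtime client from it.
# Assumes python-dotenv is installed; the service's own loading code may differ.
import os

import boto3
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

bedrock = boto3.client(
    "bedrock-runtime",
    region_name=os.environ["AWS_REGION"],
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),          # optional if credential.csv is used
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),  # optional if credential.csv is used
)
```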
## 3. Running
### 3.1 The project includes one backend service and one Streamlit frontend, which communicate through a REST API:
- **Chat Interface Service (Bedrock+MCP)**: provides an externally accessible Chat API, hosts multiple MCP servers, supports multi-turn dialogue history as input, and includes the intermediate results of tool calls in its responses; streaming responses are supported
- **ChatBot UI**: communicates with the Chat interface service above, providing a Web UI demo for multi-turn dialogue and MCP server management
### 3.2 Chat Interface Service (Bedrock+MCP)
- The interface service can be exposed as an independent API, decoupling server-side MCP capabilities from the client
- You can view the API documentation at http://{ip}:7002/docs#/ .

- Edit the configuration file `conf/config.json`, which presets the MCP servers to start; you can edit it to add or modify MCP server parameters.
- The parameters for each MCP server follow the format shown in the example below:
```
"db_sqlite": {
    "command": "uvx",
    "args": ["mcp-server-sqlite", "--db-path", "./tmp/test.db"],
    "env": {},
    "description": "DB Sqlite CRUD - MCP Server",
    "status": 1
}
```
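Each entry of this form maps onto a stdio MCP server launch. Below is a minimal sketch using the official `mcp` Python SDK for the `db_sqlite` example above; the real connection management in `mcp_client.py` (multiple servers, tool-name mapping, cleanup) is more involved.

```python
# Sketch: launch the db_sqlite entry above as a stdio MCP server and list its tools.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uvx",
    args=["mcp-server-sqlite", "--db-path", "./tmp/test.db"],
)


async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # tool names exposed to the model


asyncio.run(main())
```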
- Start the service:
```bash
bash start_all.sh
```
- Stop the service:
```bash
bash stop_all.sh
```
- After starting, you can check the log `logs/start_mcp.log` to confirm there are no errors, then run the test script to check the Chat interface:
```bash
# The request below uses Bedrock's Amazon Nova Pro model; you can switch to other models
# The default API key is set to 123456, please change it according to your actual settings
curl http://127.0.0.1:7002/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 123456" \
  -H "X-User-ID: user123" \
  -d '{
    "model": "us.amazon.nova-pro-v1:0",
    "mcp_server_ids": ["local_fs"],
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "list files in current dir"
      }
    ]
  }'
```
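Because the interface is OpenAI-compatible, the same request can also be issued with the OpenAI Python SDK. The sketch below assumes the service accepts `mcp_server_ids` as an extra body field and the `X-User-ID` header, exactly as in the curl example:

```python
# Sketch: call the Chat interface with the OpenAI SDK instead of curl.
# Assumes the service on port 7002 accepts mcp_server_ids and X-User-ID as shown above.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:7002/v1", api_key="123456")

stream = client.chat.completions.create(
    model="us.amazon.nova-pro-v1:0",
    messages=[{"role": "user", "content": "list files in current dir"}],
    stream=True,
    extra_headers={"X-User-ID": "user123"},
    extra_body={"mcp_server_ids": ["local_fs"]},
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```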
### 3.3 (🚀 New) React UI
- 🚀 A modern frontend built on Next.js 15 and React 18, supporting Dark/Light mode
- 🎨 Aesthetic user interface implemented using Tailwind CSS and Shadcn UI component library
- 🔄 Real-time streaming responses, providing a smooth conversational experience
- 🧠 Supports "thinking" mode, showcasing the model's reasoning process
- 🛠️ MCP server management functionality, supporting the addition and configuration of servers
- 👤 User session management, maintaining conversation context
- 📊 Visualization of tool usage results, including image display
- 📱 Supports multimodal input, including image, pdf, document, and other attachment uploads
- [Installation Steps](react_ui/README.md)


#### ChatBot UI
After starting, you can check the log `logs/start_chatbot.log` to confirm there are no errors, then open the [service address](http://localhost:8502/) in a browser to experience the MCP-enhanced Bedrock ChatBot capabilities.
Since file system operations, an SQLite database, and other MCP Servers are built in, you can try the following questions in sequence to experience them:
```
show all of tables in the db
how many rows in that table
show all of rows in that table
save those rows record into a file, filename is rows.txt
list all of files in the allowed directory
read the content of rows.txt file
```
### 3.4 Adding MCP Servers
Currently, there are two ways to add MCP Servers:
1. Pre-set in `conf/config.json`, which will load the configured MCP Servers each time the Chat interface service is restarted
2. Add MCP Servers through the ChatBot UI by submitting the MCP Server parameters via a form; this only takes effect for the current session and is lost after the service restarts
Below is a demonstration of how to add an MCP Server through the ChatBot UI, using the web search provider [Exa](https://exa.ai/) as an example. The open-source community already has an available [MCP Server](https://github.com/exa-labs/exa-mcp-server) for it.
First, go to the [Exa](https://exa.ai/) official website to register an account and obtain an API Key.
Then click "Add MCP Server," fill in the following parameters in the pop-up menu, and submit:
- Method 1, directly add the MCP JSON configuration file (same as the official Anthropic format)
```json
{
  "mcpServers": {
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": {
        "EXA_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
- Method 2, add by fields
At this point, you can see the newly added item in the existing MCP Server list, and you can check it to start that MCP Server.
## 4. CDK Installation (New)
[README](cdk/README.md)
## 5. Demo Cases
### 5.1 Using MCP to Operate a Browser
- Add the JSON configuration below on the chatbot interface. Note: this [browser use](https://github.com/vinayak-mehta/mcp-browser-use) server starts the browser in headed mode by default, so it is suitable for demos deployed on a local computer. If deployed on the server side, add a phrase to the prompt: `use headless is true to initialize the browser`
**Note**: The first time you run it, you need to install the corresponding dependency package on the server: `sudo apt-get install libgbm1`
```json
{
  "mcpServers": {
    "mcp-browser": {
      "command": "uvx",
      "args": ["mcp-browser-use"],
      "env": {},
      "description": "mcp-browser"
    }
  }
}
```
- **Newly added 20250331**: Using Microsoft's official [playwright](https://mcp.so/server/playwright-mcp/microsoft) MCP server:
**Note**: If you need headless mode, add the `--headless` parameter. The first time you run it, you need to install the corresponding dependency package on the server: `npx playwright install chrome`
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest",
        "--headless"
      ]
    }
  }
}
```
- Test 1: In the chatbot interface, check the mcp-browser and local file system servers
Input task: `Help me organize an introduction to Xiaomi SU7 ultra, including performance, price, special features, with images and text, and save it as a beautiful HTML in the local directory. If other websites' images are referenced, ensure the images actually exist and are accessible.`
[Video demo](https://mp.weixin.qq.com/s/csg7N8SHoIR2WBgFOjpm6A)
[Final output file example](docs/xiaomi_su7_ultra_intro.html)
- If this is the first run, you may need to install some additional software; follow the prompts returned by the tool calls and install as needed.
- Test 2: In the chatbot interface, check the exa, mcp-browser, and local file system servers, which combine the search engine and browser to gather information and images, producing a richer report.
Input task: `I want a comprehensive analysis of Tesla's stock, including: Overview: Company profile, key metrics, performance data, and investment recommendations; Financial data: Revenue trends, profit margins, balance sheet, and cash flow analysis; Market sentiment: Analyst ratings, sentiment indicators, and news impact; Technical analysis: Price trends, technical indicators, and support/resistance levels; Asset comparison: Market share and financial metrics comparison with major competitors; Value investors: Intrinsic value, growth potential, and risk factors; Investment arguments: SWOT analysis and recommendations for different types of investors. And save it as a beautiful HTML in the local directory. If other websites' images are referenced, ensure the images actually exist and are accessible. You can use mcp-browser and exa search to obtain as rich real-time data and images as possible.`
[Final output file example](docs/tesla_stock_analysis.html)
- **Sequence Diagram 1: Using Headless Browser MCP Server**

### 5.2 Using MCP Computer Use to Operate EC2 Remote Desktop
- Download and install remote-computer-use in a separate directory
```bash
git clone https://github.com/aws-samples/aws-mcp-servers-samples.git
```
- You need to provision an EC2 instance in advance and configure a VNC remote desktop. Please refer to the [instructions](https://github.com/aws-samples/aws-mcp-servers-samples/blob/main/remote_computer_use/README.md) for installation steps.
- Once the environment is ready, add the following configuration in the MCP demo client:
```json
{
  "mcpServers": {
    "computer_use": {
      "command": "uv",
      "env": {
        "VNC_HOST": "",
        "VNC_PORT": "5901",
        "VNC_USERNAME": "ubuntu",
        "VNC_PASSWORD": "",
        "PEM_FILE": "",
        "SSH_PORT": "22",
        "DISPLAY_NUM": "1"
      },
      "args": [
        "--directory",
        "/absolute_path_to/remote_computer_use",
        "run",
        "server_claude.py"
      ]
    }
  }
}
```
- It is recommended to use the Claude 3.7 model with Computer Use and add the following system prompt
```plaintext
You are an expert research assistant with deep analytical skills.
<IMPORTANT>
* Don't assume an application's coordinates are on the screen unless you saw the screenshot. To open an application, please take a screenshot first and then find out the coordinates of the application icon.
* When using Firefox, if a startup wizard or Firefox Privacy Notice appears, IGNORE IT. Do not even click "skip this step". Instead, click on the address bar where it says "Search or enter address", and enter the appropriate search term or URL there. Maximize the Firefox browser window to get wider vision.
* If the item you are looking at is a pdf, if after taking a single screenshot of the pdf it seems that you want to read the entire document instead of trying to continue to read the pdf from your screenshots + navigation, determine the URL, use curl to download the pdf, install and use pdftotext to convert it to a text file, and then read that text file directly with your StrReplaceEditTool.
* After each step, take a screenshot and carefully evaluate if you have achieved the right outcome. Explicitly show your thinking: "I have evaluated step X..." If not correct, try again. Only when you confirm a step was executed correctly should you move on to the next one.
</IMPORTANT>
```
- **Sequence Diagram: Using Computer Use to Operate EC2 Remote Desktop**

### 5.3 Using Sequential Thinking + Search for Deep Research (mainly for Nova/Claude 3.5 models; Claude 3.7 does not require it)
- Enable both web search (refer to the EXA configuration above) and [Sequential Thinking MCP Server](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking). The Sequential Thinking MCP Server has been pre-configured in the configuration file, and after starting, you can see the server name is cot.

- Sequential Thinking provides a dynamic, structured thinking process with reflection, prompting the model through tool calls to produce a structured chain of reasoning that follows the tool's input schema.
- EXA Search provides keyword and vector retrieval over web knowledge and returns the detailed content of the pages it finds.
- Test Questions
```
1. use search tool and sequential thinking to make comparison report between different agents frameworks such as autogen, langgraph, aws multi agents orchestrator
2. use sequential thinking and search tool to make me a travel plan to visit shanghai between 3/1/2025 to 3/5/2025. I will departure from Beijing
3. use sequential thinking to research what the key breakthroughs and future impact of deepseek r1
4. search and compare the performance of the full-capacity deepseek r1 APIs offered by Volcano Engine, Alibaba Bailian, and SiliconFlow, including inference speed, TTFT, maximum context length, etc. Use the sequential thinking tool
```
- Overview of Results


- **Sequence Diagram: Using Search API MCP Server**

### 5.4 Using Amazon Knowledge Base
First, create a Bedrock Knowledge Base in the Bedrock console (or use an existing one), and note its Knowledge Base Id.
Clone the [AWS Knowledge Base Retrieval MCP Server](https://github.com/modelcontextprotocol/servers) to your local machine, and replace the file in `src/aws-kb-retrieval-server/index.ts` with the file from [assets/aws-kb-retrieval-server/index.ts](assets/aws-kb-retrieval-server/index.ts).
> The new file reads the knowledgeBaseId from an environment variable, so there is no need to pass it in the conversation.
In the newly cloned servers directory, build the image with the following command:
```sh
docker build -t mcp/aws-kb-retrieval:latest -f src/aws-kb-retrieval-server/Dockerfile .
```
Then, add this JSON file on the chatbot interface, noting that the fields in env need to be replaced with your account information and the Knowledge Base Id.
```json
{
  "mcpServers": {
    "aws-kb-retrieval": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "AWS_ACCESS_KEY_ID",
        "-e", "AWS_SECRET_ACCESS_KEY",
        "-e", "AWS_REGION",
        "-e", "knowledgeBaseId",
        "mcp/aws-kb-retrieval:latest"
      ],
      "env": {
        "AWS_ACCESS_KEY_ID": "YOUR_ACCESS_KEY_HERE",
        "AWS_SECRET_ACCESS_KEY": "YOUR_SECRET_ACCESS_KEY_HERE",
        "AWS_REGION": "YOUR_AWS_REGION_HERE",
        "knowledgeBaseId": "The knowledge base id"
      }
    }
  }
}
```
## 6. Awesome MCPs
- AWS MCP Servers Samples https://github.com/aws-samples/aws-mcp-servers-samples
- https://github.com/punkpeye/awesome-mcp-servers
- https://github.com/modelcontextprotocol/servers
- https://www.aimcp.info/en
- https://github.com/cline/mcp-marketplace
- https://github.com/xiehust/sample-mcp-servers
- https://mcp.composio.dev/
- https://smithery.ai/
- https://mcp.so/
## 7. [LICENSE](./LICENSE)