# Model Context Protocol (MCP) Quick Start Guide
[TOC]
## Introduction
The Model Context Protocol (MCP) is an innovative open-source protocol that redefines how large language models (LLMs) interact with the external world. MCP provides a standardized way for any large language model to connect to various data sources and tools, enabling seamless access to and processing of information. MCP acts like a USB-C port for AI applications, giving models a uniform way to plug into different data sources and tools.

MCP has several core features:
- Resources
- Prompts
- Tools
- Sampling
- Roots
- Transports

Since most features are primarily designed to serve the Claude client, this document focuses on developing an MCP server that works with general large language models, emphasizing "tools" while briefly explaining the other features at the end.
The transport layer of MCP supports the implementation of two protocols: stdio (standard input/output) and SSE (Server-Sent Events). Since stdio is more commonly used, this document will use stdio as an example for explanation.
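On the server side, the choice essentially comes down to the `transport` argument passed to `run()`. A minimal sketch (both variants appear in full later in this guide):
```python
from mcp.server import FastMCP

app = FastMCP('demo')

if __name__ == '__main__':
    # stdio: the client launches the server as a subprocess and talks over stdin/stdout
    app.run(transport='stdio')

    # SSE: the server listens on an HTTP port and clients connect over the network
    # app.run(transport='sse')
```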
This document will use Python version 3.11 and manage the Python project with uv. The code will be available on GitHub at the end of this document. Without further ado, let's get started!
## Developing an MCP Server
In this section, we will implement a server for web searching. First, let's initialize our project using uv.
> Official uv documentation: https://docs.astral.sh/uv/
```shell
# Initialize the project
uv init mcp_getting_started
cd mcp_getting_started
# Create a virtual environment and activate it
uv venv
.venv\Scripts\activate.bat
# Install dependencies
uv add "mcp[cli]" httpx openai
```
Next, we will create a file named `web_search.py` to implement our service. MCP provides us with two objects: `mcp.server.FastMCP` and `mcp.server.Server`. We will use the higher-level wrapper, `mcp.server.FastMCP`.
```python
import httpx
from mcp.server import FastMCP
# Initialize FastMCP server
app = FastMCP('web-search')
```
Implementing the tool itself is very simple. MCP provides an `@app.tool()` decorator, and we just need to decorate our implementation function with it. The function name becomes the tool name, the function parameters become the tool parameters, and the docstring describes the tool, its parameters, and its return value for the model.
Here, we will directly use Zhipu's web search API, which not only searches for relevant result links but also generates summaries of the corresponding pages. ~~Currently, it is free~~ (it has started charging, 0.03 yuan per request), making it very suitable for our needs.
> Official documentation: https://bigmodel.cn/dev/api/search-tool/web-search-pro
>
> API Key generation address: https://bigmodel.cn/usercenter/proj-mgmt/apikeys
```python
@app.tool()
async def web_search(query: str) -> str:
    """
    Search internet content

    Args:
        query: Content to search for

    Returns:
        Summary of search results
    """
    async with httpx.AsyncClient() as client:
        response = await client.post(
            'https://open.bigmodel.cn/api/paas/v4/tools',
            headers={'Authorization': 'Replace with your own API KEY'},
            json={
                'tool': 'web-search-pro',
                'messages': [
                    {'role': 'user', 'content': query}
                ],
                'stream': False
            }
        )

        res_data = []
        for choice in response.json()['choices']:
            for message in choice['message']['tool_calls']:
                search_results = message.get('search_result')
                if not search_results:
                    continue
                for result in search_results:
                    res_data.append(result['content'])

        return '\n\n\n'.join(res_data)
```
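Hard-coding the key is fine for a quick demo, but you may prefer to read it from an environment variable instead. A minimal sketch, where the variable name `ZHIPU_API_KEY` is just an example, use whatever name you configure:
```python
import os

# Hypothetical environment variable name; export it before starting the server,
# e.g. `set ZHIPU_API_KEY=xxx` on Windows or `export ZHIPU_API_KEY=xxx` on Linux/macOS
ZHIPU_API_KEY = os.getenv('ZHIPU_API_KEY', '')

# ...and in web_search(), pass it in the request header instead of the literal string:
#     headers={'Authorization': ZHIPU_API_KEY}
```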
Finally, let's add the code to run the server.
```python
if __name__ == "__main__":
    app.run(transport='stdio')
```
## Debugging the MCP Server
At this point, we have completed writing the MCP server. Next, we will use the official `Inspector` visualization tool to debug our server.
We can run `Inspector` in two ways:
> Please ensure that you have installed the Node.js environment.
Using npx:
```shell
npx -y @modelcontextprotocol/inspector <command> <arg1> <arg2>
```
The command to run our code is:
```shell
npx -y @modelcontextprotocol/inspector uv run web_search.py
```
Running via mcp dev:
```shell
mcp dev PYTHONFILE
```
The command to run our code is:
```shell
mcp dev web_search.py
```
When you see the following prompt, it indicates that the run was successful. If there is a connection error, it may be due to the port being occupied. You can refer to this issue for a solution: https://github.com/liaokongVFX/MCP-Chinese-Getting-Started-Guide/issues/6

Then, we open this address and click the `Connect` button on the left to connect to the service we just wrote. Next, we switch to the `Tools` tab and click the `List Tools` button to see the tool we just created, and we can start debugging!

## Developing an MCP Client
First, let's see how to call the tools from the MCP server we just developed on the client side.
```python
import asyncio

from mcp.client.stdio import stdio_client
from mcp import ClientSession, StdioServerParameters

# Create server parameters for the stdio connection
server_params = StdioServerParameters(
    # Command to start the server; here we use uv to run web_search.py
    command='uv',
    # Arguments passed to the command
    args=['run', 'web_search.py'],
    # Environment variables; None (the default) means inherit the current environment
    # env=None
)


async def main():
    # Create the stdio client
    async with stdio_client(server_params) as (stdio, write):
        # Create a ClientSession object
        async with ClientSession(stdio, write) as session:
            # Initialize the ClientSession
            await session.initialize()

            # List available tools
            response = await session.list_tools()
            print(response)

            # Call a tool
            response = await session.call_tool('web_search', {'query': 'What is the weather like in Hangzhou today?'})
            print(response)


if __name__ == '__main__':
    asyncio.run(main())
```
Since our Python script needs to run in a virtual environment, we will start our script using `uv`.
Next, let's look at a small example of how to use `DeepSeek` to call methods from our MCP server.
We will use `dotenv` to manage our related environment variables. The content of the .env file is as follows:
```shell
OPENAI_API_KEY=sk-89baxxxxxxxxxxxxxxxxxx
OPENAI_BASE_URL=https://api.deepseek.com
OPENAI_MODEL=deepseek-chat
```
First, we will write our `MCPClient` class. Note that `OpenAI()` can be constructed without arguments here because the `openai` SDK reads `OPENAI_API_KEY` and `OPENAI_BASE_URL` from the environment, which `load_dotenv()` populates from the `.env` file.
```python
import json
import asyncio
import os
from typing import Optional
from contextlib import AsyncExitStack

from openai import OpenAI
from dotenv import load_dotenv

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

load_dotenv()


class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.client = OpenAI()
```
Next, we add the `connect_to_server` method to initialize our MCP server session.
```python
    async def connect_to_server(self):
        server_params = StdioServerParameters(
            command='uv',
            args=['run', 'web_search.py'],
            env=None
        )

        stdio_transport = await self.exit_stack.enter_async_context(
            stdio_client(server_params))
        stdio, write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(
            ClientSession(stdio, write))

        await self.session.initialize()
```
Then we implement `process_query`, which handles the interaction with DeepSeek and calls the MCP server's tools when the model requests them.
```python
    async def process_query(self, query: str) -> str:
        # Constrain the model with a system prompt, otherwise it may skip the
        # tool call and answer from its own knowledge
        system_prompt = (
            "You are a helpful assistant."
            "You have the function of online search. "
            "Please MUST call web_search tool to search the Internet content before answering."
            "Please do not lose the user's question information when searching,"
            "and try to maintain the completeness of the question content as much as possible."
            "When there is a date related question in the user's question,"
            "please use the search function directly to search and PROHIBIT inserting specific time."
        )
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": query}
        ]

        # Get the list of tools from the MCP server
        response = await self.session.list_tools()

        # Convert the MCP tool list into the OpenAI-compatible function-calling format
        # (the JSON schema of each tool goes under the "parameters" key)
        available_tools = [{
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description,
                "parameters": tool.inputSchema
            }
        } for tool in response.tools]

        # Call DeepSeek; the tool descriptions are passed via the tools parameter
        response = self.client.chat.completions.create(
            model=os.getenv("OPENAI_MODEL"),
            messages=messages,
            tools=available_tools
        )

        # Process the returned content
        content = response.choices[0]
        if content.finish_reason == "tool_calls":
            # The model asked for a tool call: parse which tool and with what arguments
            tool_call = content.message.tool_calls[0]
            tool_name = tool_call.function.name
            tool_args = json.loads(tool_call.function.arguments)

            # Execute the tool via the MCP session
            result = await self.session.call_tool(tool_name, tool_args)
            print(f"\n\n[Calling tool {tool_name} with args {tool_args}]\n\n")

            # Append the assistant's tool-call message and the tool result to messages
            messages.append(content.message.model_dump())
            messages.append({
                "role": "tool",
                "content": result.content[0].text,
                "tool_call_id": tool_call.id,
            })

            # Send the tool result back to DeepSeek to generate the final answer
            response = self.client.chat.completions.create(
                model=os.getenv("OPENAI_MODEL"),
                messages=messages,
            )
            return response.choices[0].message.content

        return content.message.content
```
Next, we implement a loop for querying and clean up the session after exiting.
```python
    async def chat_loop(self):
        while True:
            try:
                query = input("\nQuery: ").strip()

                if query.lower() == 'quit':
                    break

                response = await self.process_query(query)
                print("\n" + response)
            except Exception:
                import traceback
                traceback.print_exc()

    async def cleanup(self):
        """Clean up resources"""
        await self.exit_stack.aclose()
```
Finally, we complete the code related to running this client.
```python
async def main():
    client = MCPClient()
    try:
        await client.connect_to_server()
        await client.chat_loop()
    finally:
        await client.cleanup()


if __name__ == "__main__":
    asyncio.run(main())
```
This is a minimal example: it does not implement features such as conversation context tracking, and is only meant to show how a large language model can drive the MCP server. It also connects to just a single server. To connect to multiple MCP servers, you can loop over the logic in `connect_to_server` (for example, wrapping each connection in a class), gather the tools from all servers into one large `available_tools` list, and then dispatch each tool call requested by the model to the server that owns it; a sketch of this approach follows the reference below.
> You can refer to the official example: https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/clients/simple-chatbot/mcp_simple_chatbot/main.py
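As a rough illustration of the multi-server idea, here is a minimal sketch (not part of the original guide) that connects to the two example servers from this document, prefixes each tool name with its server name, and routes tool calls back to the session that owns them; adjust the server commands and paths to your own setup:
```python
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# One StdioServerParameters entry per MCP server we want to connect to
SERVER_CONFIGS = {
    'web_search': StdioServerParameters(command='uv', args=['run', 'web_search.py']),
    'file_server': StdioServerParameters(command='uv', args=['run', 'file_server.py']),
}


class MultiServerClient:
    def __init__(self):
        self.exit_stack = AsyncExitStack()
        # Maps the prefixed tool name -> (session, original tool name)
        self.tool_routes = {}
        self.available_tools = []

    async def connect_all(self):
        for server_name, params in SERVER_CONFIGS.items():
            stdio, write = await self.exit_stack.enter_async_context(stdio_client(params))
            session = await self.exit_stack.enter_async_context(ClientSession(stdio, write))
            await session.initialize()

            # Collect every tool and remember which session owns it
            response = await session.list_tools()
            for tool in response.tools:
                full_name = f'{server_name}_{tool.name}'
                self.tool_routes[full_name] = (session, tool.name)
                self.available_tools.append({
                    'type': 'function',
                    'function': {
                        'name': full_name,
                        'description': tool.description,
                        'parameters': tool.inputSchema,
                    }
                })

    async def call(self, full_name: str, arguments: dict):
        # Dispatch the call to whichever server registered this tool
        session, tool_name = self.tool_routes[full_name]
        return await session.call_tool(tool_name, arguments)

    async def cleanup(self):
        await self.exit_stack.aclose()
```
The resulting `available_tools` list can be passed to the `tools` parameter of the chat completion request exactly as in `process_query` above, and the tool name chosen by the model is looked up in `tool_routes` to route the call to the right server.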
## Explanation of Sampling
MCP also provides us with a `Sampling` feature, which may seem confusing at first glance, but it actually gives us an interface to execute some operations before and after executing a tool. For example, when calling a tool to delete a local file, we would definitely want to confirm before proceeding with the deletion. This is where this feature can be used.
Let's implement this supervised functionality.
First, we create a mock MCP server that has the ability to delete files:
```python
# Server
from mcp.server import FastMCP
from mcp.types import SamplingMessage, TextContent

app = FastMCP('file_server')


@app.tool()
async def delete_file(file_path: str):
    # Create a SamplingMessage; this triggers the sampling callback on the client
    result = await app.get_context().session.create_message(
        messages=[
            SamplingMessage(
                role='user', content=TextContent(
                    type='text', text=f'Do you want to delete the file: {file_path}? (Y)')
            )
        ],
        max_tokens=100
    )

    # Read the value returned by the sampling callback and act on it
    if result.content.text == 'Y':
        return f'File {file_path} has been deleted!!'


if __name__ == '__main__':
    app.run(transport='stdio')
```
The key point here is to create a `SamplingMessage` type message using the `create_message` method, which will send this message to the corresponding sampling callback function.
Next, we create the client code:
```python
# Client
import asyncio

from mcp.client.stdio import stdio_client
from mcp import ClientSession, StdioServerParameters
from mcp.shared.context import RequestContext
from mcp.types import (
    TextContent,
    CreateMessageRequestParams,
    CreateMessageResult,
)

server_params = StdioServerParameters(
    command='uv',
    args=['run', 'file_server.py'],
)


async def sampling_callback(
        context: RequestContext[ClientSession, None],
        params: CreateMessageRequestParams,
):
    # Show the message sent by the tool to the user and read their answer
    input_message = input(params.messages[0].content.text)
    # Send the user's input back to the tool
    return CreateMessageResult(
        role='user',
        content=TextContent(
            type='text',
            text=input_message.strip().upper() or 'Y'
        ),
        model='user-input',
        stopReason='endTurn'
    )


async def main():
    async with stdio_client(server_params) as (stdio, write):
        async with ClientSession(
                stdio, write,
                # Register the sampling_callback for this session
                sampling_callback=sampling_callback
        ) as session:
            await session.initialize()
            res = await session.call_tool(
                'delete_file',
                {'file_path': 'C:/xxx.txt'}
            )
            # Print the final result returned by the tool
            print(res)


if __name__ == '__main__':
    asyncio.run(main())
```
It is important to note that currently, the content printed in the tool cannot be displayed in the command line window when using `stdio_client`. Therefore, for debugging, we can use `mcp.shared.memory.create_connected_server_and_client_session`.
The specific code is as follows:
```python
# Client
import asyncio

from mcp.shared.memory import (
    create_connected_server_and_client_session as create_session
)
# Import the server's app object
from file_server import app


async def sampling_callback(context, params):
    ...


async def main():
    async with create_session(
        app._mcp_server,
        sampling_callback=sampling_callback
    ) as client_session:
        await client_session.call_tool(
            'delete_file',
            {'file_path': 'C:/xxx.txt'}
        )


if __name__ == '__main__':
    asyncio.run(main())
```
## Loading MCP Server into Claude Desktop
The next two features are actually aimed at enhancing the experience for the Claude desktop client, so let's first discuss how to load our custom MCP Server into the Claude desktop client.
First, we open the configuration.

We click on the `Developer` menu, then click the `Edit Config` button to open the Claude desktop configuration file `claude_desktop_config.json`.

Next, we start adding our server. The server needs to be under the `mcpServers` hierarchy, with parameters `command`, `args`, and `env`. In fact, the parameters are the same as those used to initialize the `StdioServerParameters` object.
```json
{
  "mcpServers": {
    "web-search-server": {
      "command": "uv",
      "args": [
        "--directory",
        "D:/projects/mcp_getting_started",
        "run",
        "web_search.py"
      ]
    }
  }
}
```
Finally, after saving the file and restarting the Claude desktop client, we can see our plugin here.


Of course, we can also run the following command in our project directory to install it directly:
```shell
mcp install web_search.py
```
## Other Features
### Prompt
MCP also provides us with a feature to generate Prompt templates. It is very simple to use; just decorate it with the `prompt` decorator, as shown in the code below:
```python
from mcp.server import FastMCP

app = FastMCP('prompt_and_resources')


@app.prompt('Translation Expert')
async def translate_expert(
        target_language: str = 'Chinese',
) -> str:
    return f'You are a translation expert, skilled at translating any language into {target_language}. Please translate the following content:'


if __name__ == '__main__':
    app.run(transport='stdio')
```
Then we can add this new MCP server to the Claude desktop client using the configuration method described in the previous section, and click the icon in the lower right corner to start using it.
It will prompt us to set the parameters we pass in, and it will generate an attachment in our chat window.

### Resource
We can also allow users to select preset resources we provide on the Claude client, and it also supports custom protocols. The specific code is as follows:
```python
from mcp.server import FastMCP

app = FastMCP('prompt_and_resources')


@app.resource('echo://static')
async def echo_resource():
    # Content returned when the user uses this resource
    return 'Echo!'


@app.resource('greeting://{name}')
async def get_greeting(name):
    return f'Hello, {name}!'


if __name__ == '__main__':
    app.run(transport='stdio')
```
Next, let's check it out on the Claude desktop client.

It is important to note that currently, the Claude desktop client cannot read resource decorators set with wildcard paths like `greeting://{name}`. This will be supported in the future. However, in our client code, it can be used as a resource template, as shown in the code below:
```python
import asyncio

from pydantic import AnyUrl
from mcp.client.stdio import stdio_client
from mcp import ClientSession, StdioServerParameters

server_params = StdioServerParameters(
    command='uv',
    args=['run', 'prompt_and_resources.py'],
)


async def main():
    async with stdio_client(server_params) as (stdio, write):
        async with ClientSession(stdio, write) as session:
            await session.initialize()

            # List resources without wildcards
            res = await session.list_resources()
            print(res)

            # List resources with wildcards (resource templates)
            res = await session.list_resource_templates()
            print(res)

            # Read a resource; the URI is matched against the wildcard template
            res = await session.read_resource(AnyUrl('greeting://liming'))
            print(res)

            # List the prompt templates
            res = await session.list_prompts()
            print(res)

            # Use a prompt template
            res = await session.get_prompt(
                'Translation Expert', arguments={'target_language': 'English'})
            print(res)


if __name__ == '__main__':
    asyncio.run(main())
```
### Lifecycle
The MCP lifecycle is divided into three stages:
- Initialization
- Interaction communication
- Service shutdown

Therefore, we can perform certain actions at the beginning and end of these stages, such as creating and closing database connections, writing logs, or recording tool usage information.
Below, we will use the web search tool as an example to store the queries and results of tool calls in a global context as a cache. The complete code is as follows:
```python
import httpx
from dataclasses import dataclass
from contextlib import asynccontextmanager

from mcp.server import FastMCP
from mcp.server.fastmcp import Context


# Lifespan context object shared across the server's lifetime
@dataclass
class AppContext:
    # Field used to store the request history
    histories: dict


@asynccontextmanager
async def app_lifespan(server):
    # Runs when the MCP server starts up
    histories = {}
    try:
        # This context object is passed to tools on every request
        yield AppContext(histories=histories)
    finally:
        # Runs when the MCP server shuts down
        print(histories)


app = FastMCP(
    'web-search',
    # Register the lifespan (lifecycle) function
    lifespan=app_lifespan
)


# The context object is injected through the first parameter
@app.tool()
async def web_search(ctx: Context, query: str) -> str:
    """
    Search internet content

    Args:
        query: Content to search for

    Returns:
        Summary of search results
    """
    # If the same query was asked before, return the cached result directly
    histories = ctx.request_context.lifespan_context.histories

    if query in histories:
        return histories[query]

    async with httpx.AsyncClient() as client:
        response = await client.post(
            'https://open.bigmodel.cn/api/paas/v4/tools',
            headers={'Authorization': 'YOUR API KEY'},
            json={
                'tool': 'web-search-pro',
                'messages': [
                    {'role': 'user', 'content': query}
                ],
                'stream': False
            }
        )

        res_data = []
        for choice in response.json()['choices']:
            for message in choice['message']['tool_calls']:
                search_results = message.get('search_result')
                if not search_results:
                    continue
                for result in search_results:
                    res_data.append(result['content'])

        return_data = '\n\n\n'.join(res_data)

    # Store the query and its result in histories
    ctx.request_context.lifespan_context.histories[query] = return_data

    return return_data


if __name__ == "__main__":
    app.run()
```
## Using MCP Server in LangChain
Recently, LangChain released a new open-source project `langchain-mcp-adapters`, which makes it easy to integrate MCP servers into LangChain. Let's see how to use it:
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")

server_params = StdioServerParameters(
    command='uv',
    args=['run', 'web_search.py'],
)


async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Load the MCP tools as LangChain tools
            tools = await load_mcp_tools(session)

            # Create and invoke a ReAct agent
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({'messages': 'What is the weather like in Hangzhou today?'})
            print(agent_response)


if __name__ == '__main__':
    asyncio.run(main())
```
For more detailed usage, please refer to: https://github.com/langchain-ai/langchain-mcp-adapters
## DeepSeek + cline + Custom MCP = Image and Text Master
Finally, we will use the Cline plugin in VS Code to build an "Image and Text Master" application with DeepSeek and our custom image generation MCP server. Without further ado, let's get started.
First, let's build our image generation MCP server. We will use the `FLUX.1-schnell` model from Hugging Face, available at: https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell. We will not use the `gradio_client` library but will instead use `httpx` to implement it manually, as using `gradio_client` may lead to encoding errors. The specific code is as follows:
```python
# image_server.py
import json

import httpx
from mcp.server import FastMCP

app = FastMCP('image_server')


@app.tool()
async def image_generation(image_prompt: str):
    """
    Generate an image
    :param image_prompt: Image description, must be in English
    :return: URL of the generated image
    """
    async with httpx.AsyncClient() as client:
        data = {'data': [image_prompt, 0, True, 512, 512, 3]}

        # Create a task to generate the image
        response1 = await client.post(
            'https://black-forest-labs-flux-1-schnell.hf.space/call/infer',
            json=data,
            headers={"Content-Type": "application/json"}
        )

        # Parse the response to get the event ID
        response_data = response1.json()
        event_id = response_data.get('event_id')

        if not event_id:
            return 'Unable to retrieve event ID'

        # Retrieve the generation result as a stream
        url = f'https://black-forest-labs-flux-1-schnell.hf.space/call/infer/{event_id}'
        full_response = ''
        async with client.stream('GET', url) as response2:
            async for chunk in response2.aiter_text():
                full_response += chunk

        return json.loads(full_response.split('data: ')[-1])[0]['url']


if __name__ == '__main__':
    app.run(transport='stdio')
```
Next, we can use the following command in the virtual environment to open the MCP Inspector and debug our tool.
```shell
mcp dev image_server.py
```

Then, we install the Cline plugin in VS Code. After installing the plugin, we configure our DeepSeek API key. Next, we click the `MCP Server` button in the upper right corner to open the MCP server list.

Next, we switch to the `Installed` tab and click the `Configure MCP Servers` button to edit the custom MCP server.

The configuration is as follows:
```json
{
  "mcpServers": {
    "image_server": {
      "command": "uv",
      "args": [
        "--directory",
        "D:/projects/mcp_getting_started",
        "run",
        "image_server.py"
      ],
      "env": {},
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
After saving, if the small dot here turns green, it indicates that our server is connected, and we can start using it!

Next, we open the input box to enter the content we want to write an article about:

We can see that it correctly called our tool.

Finally, we can see the generated article.

## Deploying MCP Service to the Cloud with Serverless
The above sections discussed how to use local MCP services, but sometimes we want to deploy MCP services directly to the cloud for easier access without the hassle of local downloads and startups. In this case, we need to use the SSE protocol of MCP to achieve this.
First, let's write an MCP service using the SSE protocol. It is very simple; we just need to set the `transport` parameter in our final `run` command to `sse`. Below, we will implement this using the web search example, with the specific code as follows:
```python
# sse_web_search.py
import httpx
from mcp.server import FastMCP

app = FastMCP('web-search', port=9000)


@app.tool()
async def web_search(query: str) -> str:
    """
    Search internet content

    Args:
        query: Content to search for

    Returns:
        Summary of search results
    """
    async with httpx.AsyncClient() as client:
        response = await client.post(
            'https://open.bigmodel.cn/api/paas/v4/tools',
            headers={'Authorization': 'YOUR API KEY'},
            json={
                'tool': 'web-search-pro',
                'messages': [
                    {'role': 'user', 'content': query}
                ],
                'stream': False
            }
        )

        res_data = []
        for choice in response.json()['choices']:
            for message in choice['message']['tool_calls']:
                search_results = message.get('search_result')
                if not search_results:
                    continue
                for result in search_results:
                    res_data.append(result['content'])

        return '\n\n\n'.join(res_data)


if __name__ == "__main__":
    app.run(transport='sse')
```
In `FastMCP`, there are several parameters related to the SSE protocol that can be set (a short sketch follows this list):
- host: Service address, default is `0.0.0.0`
- port: Service port, default is 8000. In the above code, I set it to `9000`
- sse_path: SSE route, default is `/sse`
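For example, a server that makes these settings explicit might look like the following sketch; the values simply restate the defaults and the port chosen above:
```python
from mcp.server import FastMCP

app = FastMCP(
    'web-search',
    host='0.0.0.0',   # address the HTTP server binds to (default)
    port=9000,        # listening port (default is 8000)
    sse_path='/sse',  # route that SSE clients connect to (default)
)
```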
Now, we can write a client code to test it. The specific code is as follows:
```python
import asyncio

from mcp.client.sse import sse_client
from mcp import ClientSession


async def main():
    async with sse_client('http://localhost:9000/sse') as streams:
        async with ClientSession(*streams) as session:
            await session.initialize()

            res = await session.call_tool('web_search', {'query': 'What is the weather like in Hangzhou today?'})
            print(res)


if __name__ == '__main__':
    asyncio.run(main())
```
We can see that it works normally and has searched for content:

Of course, we can also test it using `mcp dev sse_web_search.py`. Here, it is important to note that the `Transport Type` needs to be changed to `SSE`, and then fill in our local service address below.

Once everything has been tested and confirmed to be working, we can deploy it to the cloud using a serverless approach. Here, we choose Alibaba Cloud's Function Compute service. First, we go to the `Functions` menu in Alibaba Cloud's `Function Compute FC 3.0` and click `Create Function` to create our service. The address is: https://fcnext.console.aliyun.com/cn-hangzhou/functions

We choose `Web Function`, and for the runtime environment, we select `Python 3.10`. For the code upload method, you can choose according to your needs; since I only have one Python file, I will directly select `Use Sample Code` and simply overwrite it with my own code later. I keep the startup command and listening port at their defaults (**the port needs to be consistent with the code**).
For environment variables, you can set the API key used in the code as an environment variable; I will not set it here. The final settings are as follows:

In the advanced settings, I enabled the logging feature for easier debugging.

After completing the settings, click create. It will redirect to the code editing section, where we can copy our previous code into it.

Once completed, we need to install the dependencies. We click on `Edit Layer` in the upper right corner. By default, there will be a default Flask layer because the initial template uses Flask; we do not need this, so we delete it and add an MCP layer. Select `Add Official Public Layer`, then search for `mcp` to find the Python version of the MCP layer, which contains all the dependencies used by MCP.

If you have other third-party dependencies, you can search to see if they are available in the public layers; if not, you can create a custom layer by providing a `requirements` list. This will not be elaborated further here.

Once all settings are complete, click deploy in the lower right corner.
Then we return to the code editing page, and click on `Deploy Code` in the upper left corner. After a few seconds, it will prompt that the code deployment was successful. At this point, our MCP service has been deployed to the cloud.

> Update on 20250409: I don't know if the official team saw this article, but now you can directly select `MCP Runtime` during runtime, so you no longer need to manually add the `MCP Layer` in the layers section.
>
> 
Next, we switch to the `Configuration` tab under `Triggers`, where we can see the URL address we will use to access it. Of course, you can also bind your own domain name.

Now we can use the client code we wrote earlier to test it.
```python
import asyncio

from mcp.client.sse import sse_client
from mcp import ClientSession


async def main():
    async with sse_client('https://mcp-test-whhergsbso.cn-hangzhou.fcapp.run/sse') as streams:
        async with ClientSession(*streams) as session:
            await session.initialize()

            res = await session.call_tool('web_search', {'query': 'What is the weather like in Hangzhou today?'})
            print(res)


if __name__ == '__main__':
    asyncio.run(main())
```
If we encounter errors in the client, we can directly check the corresponding error request in the logs by clicking `Request Logs` to view the error and fix it.

At this point, our MCP service has been deployed to the cloud, and we can use it directly from anywhere.
For example, in `Cherry-Studio`, we can set it up like this:

In `Cline`:

In `Cursor`:

```json
{
  "mcpServers": {
    "web-search": {
      "url": "https://mcp-test-whhergsbso.cn-hangzhou.fcapp.run/sse"
    }
  }
}
```
This concludes the entire MCP introductory tutorial. Further updates will be made as needed. The related code will be available in the GitHub repository: https://github.com/liaokongVFX/MCP-Chinese-Getting-Started-Guide