# Developing an SSE Type MCP Service
[MCP](https://www.claudemcp.com/) supports two transport methods, both of which format messages with `JSON-RPC 2.0`: `STDIO` (standard input/output) and `SSE` (Server-Sent Events). `STDIO` is used for local integrations, while `SSE` is used for web-based communication.
For example, to use an MCP service directly from the command line, we can use the `STDIO` transport; to use an MCP service from a web page, we can use the `SSE` transport.
Next, we will develop an intelligent shopping mall service assistant based on MCP, using the SSE type MCP service, with the following core functionalities:
- Real-time access to product information and inventory levels, supporting custom orders.
- Product recommendations based on customer preferences and available inventory.
- Real-time interaction with microservices using the MCP tool server.
- Checking real-time inventory levels when responding to product inquiries.
- Facilitating product purchases using product ID and quantity.
- Real-time updates of inventory levels.
- Providing ad-hoc analysis of order transactions through natural language queries.

> Here we use the Anthropic Claude 3.5 Sonnet model as the AI assistant for the MCP service, but other models that support tool invocation can also be chosen.
First, we need a product microservice that exposes an API for the product list, and an order microservice that exposes APIs for order creation, inventory information, and so on.
The next core component is the MCP SSE server, which exposes the product and order microservice data to the LLM as tools over the SSE protocol.
Finally, the MCP client connects to the MCP SSE server over the SSE protocol and interacts with the LLM.
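The pieces fit together roughly like this (a simplified sketch of the data flow):

```
client (CLI / web) ──── chat + tool definitions ────▶ LLM (Claude)
        │                                               │ tool_use
        ▼                                               ▼
   MCP client ── SSE ──▶ MCP SSE server ── HTTP ──▶ product & order microservices
```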
## Microservices
Next, we will start developing the product microservice and order microservice, exposing API interfaces.
First, define the types for products, inventory, and orders.
```typescript
// types/index.ts
export interface Product {
  id: number;
  name: string;
  price: number;
  description: string;
}

export interface Inventory {
  productId: number;
  quantity: number;
  product?: Product;
}

export interface Order {
  id: number;
  customerName: string;
  items: Array<{ productId: number; quantity: number }>;
  totalAmount: number;
  orderDate: string;
}
```
Then we can use Express to expose the product and order microservices as API interfaces. Since this is a demo, we use simple in-memory data and expose it directly through the following functions. (In a production environment, these services would be backed by a database.)
```typescript
// services/product-service.ts
import { Product, Inventory, Order } from "../types/index.js";

// Simulated data storage
let products: Product[] = [
  {
    id: 1,
    name: "Smart Watch Galaxy",
    price: 1299,
    description: "Health monitoring, sports tracking, supports multiple applications",
  },
  {
    id: 2,
    name: "Wireless Bluetooth Headphones Pro",
    price: 899,
    description: "Active noise cancellation, 30 hours battery life, IPX7 waterproof",
  },
  {
    id: 3,
    name: "Portable Power Bank",
    price: 299,
    description: "20000mAh large capacity, supports fast charging, slim design",
  },
  {
    id: 4,
    name: "Huawei MateBook X Pro",
    price: 1599,
    description: "14.2-inch full screen, 3:2 aspect ratio, 100% sRGB color gamut",
  },
];

// Simulated inventory data
let inventory: Inventory[] = [
  { productId: 1, quantity: 100 },
  { productId: 2, quantity: 50 },
  { productId: 3, quantity: 200 },
  { productId: 4, quantity: 150 },
];

let orders: Order[] = [];

export async function getProducts(): Promise<Product[]> {
  return products;
}

export async function getInventory(): Promise<Inventory[]> {
  return inventory.map((item) => {
    const product = products.find((p) => p.id === item.productId);
    return {
      ...item,
      product,
    };
  });
}

export async function getOrders(): Promise<Order[]> {
  return [...orders].sort(
    (a, b) => new Date(b.orderDate).getTime() - new Date(a.orderDate).getTime()
  );
}

export async function createPurchase(
  customerName: string,
  items: { productId: number; quantity: number }[]
): Promise<Order> {
  if (!customerName || !items || items.length === 0) {
    throw new Error("Invalid request: missing customer name or product");
  }

  let totalAmount = 0;

  // Validate inventory and calculate the total price
  for (const item of items) {
    const inventoryItem = inventory.find((i) => i.productId === item.productId);
    const product = products.find((p) => p.id === item.productId);

    if (!inventoryItem || !product) {
      throw new Error(`Product ID ${item.productId} does not exist`);
    }

    if (inventoryItem.quantity < item.quantity) {
      throw new Error(
        `Product ${product.name} is out of stock. Available: ${inventoryItem.quantity}`
      );
    }

    totalAmount += product.price * item.quantity;
  }

  // Create the order
  const order: Order = {
    id: orders.length + 1,
    customerName,
    items,
    totalAmount,
    orderDate: new Date().toISOString(),
  };

  // Update inventory
  items.forEach((item) => {
    const inventoryItem = inventory.find(
      (i) => i.productId === item.productId
    )!;
    inventoryItem.quantity -= item.quantity;
  });

  orders.push(order);
  return order;
}
```
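To make these service functions reachable over HTTP, each microservice wraps them in routes. Here is a rough standalone sketch of that idea, using Node's built-in `http` module to stay dependency-free (the article's actual services use Express, and the route path `/api/products` is illustrative):

```typescript
import { createServer } from "node:http";

interface Product {
  id: number;
  name: string;
  price: number;
  description: string;
}

// A trimmed copy of the simulated data above
const products: Product[] = [
  { id: 1, name: "Smart Watch Galaxy", price: 1299, description: "Health monitoring" },
  { id: 3, name: "Portable Power Bank", price: 299, description: "20000mAh large capacity" },
];

// GET /api/products returns the product list as JSON; everything else is a 404
export const productServer = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/api/products") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(products));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// In the real microservice: productServer.listen(3001)
```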
Then we can expose these API interfaces using MCP tools as follows:
```typescript
// mcp-server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import {
  getProducts,
  getInventory,
  getOrders,
  createPurchase,
} from "./services/product-service.js";

export const server = new McpServer({
  name: "mcp-sse-demo",
  version: "1.0.0",
  description: "MCP tool for product queries, inventory management, and order processing",
});

// Tool to get the product list
server.tool("getProducts", "Get all product information", {}, async () => {
  console.log("Fetching product list");
  const products = await getProducts();
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(products),
      },
    ],
  };
});

// Tool to get inventory information
server.tool(
  "getInventory",
  "Get inventory information for all products",
  {},
  async () => {
    console.log("Fetching inventory information");
    const inventory = await getInventory();
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(inventory),
        },
      ],
    };
  }
);

// Tool to get the order list
server.tool("getOrders", "Get all order information", {}, async () => {
  console.log("Fetching order list");
  const orders = await getOrders();
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(orders),
      },
    ],
  };
});

// Tool to purchase products
server.tool(
  "purchase",
  "Purchase products",
  {
    items: z
      .array(
        z.object({
          productId: z.number().describe("Product ID"),
          quantity: z.number().describe("Purchase quantity"),
        })
      )
      .describe("List of products to purchase"),
    customerName: z.string().describe("Customer name"),
  },
  async ({ items, customerName }) => {
    console.log("Processing purchase request", { items, customerName });
    try {
      const order = await createPurchase(customerName, items);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(order),
          },
        ],
      };
    } catch (error: any) {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({ error: error.message }),
          },
        ],
      };
    }
  }
);
```
Here we have defined 4 tools:
- `getProducts`: Get all product information
- `getInventory`: Get inventory information for all products
- `getOrders`: Get all order information
- `purchase`: Purchase products
If this were a `STDIO` type MCP service, we could use these tools directly from the command line. But since we need an SSE type MCP service, we still need an MCP SSE server to expose these tools.
## MCP SSE Server
Next, we will develop the MCP SSE server, which exposes the product and order microservice data as tools over the SSE protocol.
```typescript
// mcp-sse-server.ts
import express, { Request, Response, NextFunction } from "express";
import cors from "cors";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { server as mcpServer } from "./mcp-server.js"; // Renamed to avoid a naming conflict

const app = express();

app.use(
  cors({
    origin: process.env.ALLOWED_ORIGINS?.split(",") || "*",
    methods: ["GET", "POST"],
    allowedHeaders: ["Content-Type", "Authorization"],
  })
);

// Store active connections, keyed by sessionId
const connections = new Map();

// Health check endpoint
app.get("/health", (req, res) => {
  res.status(200).json({
    status: "ok",
    version: "1.0.0",
    uptime: process.uptime(),
    timestamp: new Date().toISOString(),
    connections: connections.size,
  });
});

// SSE connection establishment endpoint
app.get("/sse", async (req, res) => {
  // Instantiate the SSE transport object
  const transport = new SSEServerTransport("/messages", res);
  // Get the sessionId
  const sessionId = transport.sessionId;
  console.log(`[${new Date().toISOString()}] New SSE connection established: ${sessionId}`);

  // Register the connection
  connections.set(sessionId, transport);

  // Handle connection interruption
  req.on("close", () => {
    console.log(`[${new Date().toISOString()}] SSE connection closed: ${sessionId}`);
    connections.delete(sessionId);
  });

  // Connect the transport object to the MCP server
  await mcpServer.connect(transport);
  console.log(`[${new Date().toISOString()}] MCP server connection successful: ${sessionId}`);
});

// Endpoint that receives client messages
app.post("/messages", async (req: Request, res: Response) => {
  try {
    console.log(`[${new Date().toISOString()}] Received client message:`, req.query);
    const sessionId = req.query.sessionId as string;

    // Find the corresponding SSE connection and process the message
    const transport = connections.get(sessionId) as SSEServerTransport | undefined;
    if (!transport) {
      throw new Error("No active SSE connection");
    }
    await transport.handlePostMessage(req, res);
  } catch (error: any) {
    console.error(`[${new Date().toISOString()}] Failed to process client message:`, error);
    res.status(500).json({ error: "Failed to process message", message: error.message });
  }
});

// Gracefully close all connections
async function closeAllConnections() {
  console.log(
    `[${new Date().toISOString()}] Closing all connections (${connections.size} connections)`
  );
  for (const [id, transport] of connections.entries()) {
    try {
      // Send a close event
      transport.res.write(
        'event: server_shutdown\ndata: {"reason": "Server is shutting down"}\n\n'
      );
      transport.res.end();
      console.log(`[${new Date().toISOString()}] Closed connection: ${id}`);
    } catch (error) {
      console.error(`[${new Date().toISOString()}] Failed to close connection: ${id}`, error);
    }
  }
  connections.clear();
}

// Error handling
app.use((err: Error, req: Request, res: Response, next: NextFunction) => {
  console.error(`[${new Date().toISOString()}] Unhandled exception:`, err);
  res.status(500).json({ error: "Internal server error" });
});

// Graceful shutdown (`server` is assigned below; these handlers only run after startup)
process.on("SIGTERM", async () => {
  console.log(`[${new Date().toISOString()}] Received SIGTERM signal, preparing to shut down`);
  await closeAllConnections();
  server.close(() => {
    console.log(`[${new Date().toISOString()}] Server has shut down`);
    process.exit(0);
  });
});

process.on("SIGINT", async () => {
  console.log(`[${new Date().toISOString()}] Received SIGINT signal, preparing to shut down`);
  await closeAllConnections();
  process.exit(0);
});

// Start the server
const port = process.env.PORT || 8083;
const server = app.listen(port, () => {
  console.log(
    `[${new Date().toISOString()}] Intelligent Shopping Mall MCP SSE Server started at: http://localhost:${port}`
  );
  console.log(`- SSE connection endpoint: http://localhost:${port}/sse`);
  console.log(`- Message handling endpoint: http://localhost:${port}/messages`);
  console.log(`- Health check endpoint: http://localhost:${port}/health`);
});
```
Here we use Express to expose an SSE endpoint `/sse` for establishing connections. We use `SSEServerTransport` to create the SSE transport object, specifying `/messages` as the message-handling endpoint.
```typescript
const transport = new SSEServerTransport("/messages", res);
```
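Under the hood, as soon as the SSE stream opens, the transport writes an `endpoint` event telling the client where to POST its messages, and subsequent server-to-client JSON-RPC messages arrive as `message` events. A captured stream starts roughly like this (the `sessionId` value and payload are illustrative):

```
event: endpoint
data: /messages?sessionId=a1b2c3d4

event: message
data: {"jsonrpc":"2.0","id":1,"result":{"tools":[{"name":"getProducts"}]}}
```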
Once the transport object is created, we can connect it to the MCP server as follows:
```typescript
// Connect transport object to MCP server
await mcpServer.connect(transport);
```
This way, the server pushes messages to the client over the SSE connection established at `/sse`, while messages from the client arrive at the `/messages` endpoint. When a client message is received at `/messages`, we use the `transport` object to process it:
```typescript
// Use transport to process message
await transport.handlePostMessage(req, res);
```
These messages are the operations we commonly refer to as listing tools, invoking tools, and so on.
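Concretely, the payloads POSTed to `/messages` are JSON-RPC 2.0 requests: `tools/list` enumerates the four tools defined above, and `tools/call` invokes one of them. An invocation of the `purchase` tool might look like this (the `id` and arguments are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "purchase",
    "arguments": {
      "customerName": "Alice",
      "items": [{ "productId": 1, "quantity": 2 }]
    }
  }
}
```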
## MCP Client
Next, we will develop the MCP client to connect to the MCP SSE server and interact with the LLM. We can develop a command-line client or a web client.
We have already covered the command-line client earlier; the only difference is that we now need to connect to the MCP SSE server over the SSE protocol.
```typescript
// The SDK exports the client class as `Client`; alias it to match the code below
import { Client as McpClient } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Create the MCP client
const mcpClient = new McpClient({
  name: "mcp-sse-demo",
  version: "1.0.0",
});

// Create the SSE transport object
const transport = new SSEClientTransport(new URL(config.mcp.serverUrl));

// Connect to the MCP server
await mcpClient.connect(transport);
```
Then the remaining operations are the same as those introduced for the command-line client: list all the tools, send the user's question together with the tool definitions to the LLM, invoke the tools the LLM selects based on its response, and send the tool results along with the message history back to the LLM to obtain the final answer.
For the web client, the flow is basically the same as the command-line client, except that we now implement these processing steps behind HTTP endpoints and call those endpoints from the web page.
First, we initialize the MCP client, fetch all the tools and convert them to the array format Anthropic requires, and then create the Anthropic client.
```typescript
// Initialize the MCP client
async function initMcpClient() {
  if (mcpClient) return;

  try {
    console.log("Connecting to MCP server...");
    mcpClient = new McpClient({
      name: "mcp-client",
      version: "1.0.0",
    });
    const transport = new SSEClientTransport(new URL(config.mcp.serverUrl));
    await mcpClient.connect(transport);
    const { tools } = await mcpClient.listTools();

    // Convert the tool format to the array format required by Anthropic
    anthropicTools = tools.map((tool: any) => {
      return {
        name: tool.name,
        description: tool.description,
        input_schema: tool.inputSchema,
      };
    });

    // Create the Anthropic client
    aiClient = createAnthropicClient(config);
    console.log("MCP client and tools initialized successfully");
  } catch (error) {
    console.error("Failed to initialize MCP client:", error);
    throw error;
  }
}
```
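The tool-format conversion in `initMcpClient` is a pure mapping: MCP reports a camelCase `inputSchema`, while Anthropic's `tools` parameter expects a snake_case `input_schema`. Pulled out on its own, with a hypothetical tool definition for illustration:

```typescript
// Simplified shapes of a tool as reported by MCP and as expected by Anthropic
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

interface AnthropicTool {
  name: string;
  description?: string;
  input_schema: Record<string, unknown>;
}

export function toAnthropicTools(tools: McpTool[]): AnthropicTool[] {
  return tools.map((tool) => ({
    name: tool.name,
    description: tool.description,
    input_schema: tool.inputSchema, // rename the camelCase key
  }));
}

// Hypothetical tool definition, mirroring the getProducts tool above
const converted = toAnthropicTools([
  {
    name: "getProducts",
    description: "Get all product information",
    inputSchema: { type: "object", properties: {} },
  },
]);
console.log(converted[0].input_schema); // { type: 'object', properties: {} }
```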
Next, we can develop API interfaces based on our needs. For example, we can develop a chat interface to receive user questions, then call the MCP client's tools, sending the tool invocation results and historical messages to the LLM for processing to obtain the final result. The code is as follows:
```typescript
// API: Chat request
apiRouter.post("/chat", async (req, res) => {
  try {
    const { message, history = [] } = req.body;
    if (!message) {
      console.warn("Message in request is empty");
      return res.status(400).json({ error: "Message cannot be empty" });
    }

    // Build the message history
    const messages = [...history, { role: "user", content: message }];

    // Call the AI
    const response = await aiClient.messages.create({
      model: config.ai.defaultModel,
      messages,
      tools: anthropicTools,
      max_tokens: 1000,
    });

    // Handle tool invocations
    const hasToolUse = response.content.some(
      (item) => item.type === "tool_use"
    );

    if (hasToolUse) {
      // Handle all tool invocations
      const toolResults = [];

      for (const content of response.content) {
        if (content.type === "tool_use") {
          const name = content.name;
          const toolInput = content.input as
            | { [x: string]: unknown }
            | undefined;

          try {
            // Call the MCP tool
            if (!mcpClient) {
              console.error("MCP client not initialized");
              throw new Error("MCP client not initialized");
            }
            console.log(`Starting to call MCP tool: ${name}`);
            const toolResult = await mcpClient.callTool({
              name,
              arguments: toolInput,
            });

            toolResults.push({
              name,
              result: toolResult,
            });
          } catch (error: any) {
            console.error(`Tool invocation failed: ${name}`, error);
            toolResults.push({
              name,
              error: error.message,
            });
          }
        }
      }

      // Send the tool results back to the AI for a final reply
      const finalResponse = await aiClient.messages.create({
        model: config.ai.defaultModel,
        messages: [
          ...messages,
          {
            role: "user",
            content: JSON.stringify(toolResults),
          },
        ],
        max_tokens: 1000,
      });

      const textResponse = finalResponse.content
        .filter((c) => c.type === "text")
        .map((c) => c.text)
        .join("\n");

      res.json({
        response: textResponse,
        toolCalls: toolResults,
      });
    } else {
      // Directly return the AI reply
      const textResponse = response.content
        .filter((c) => c.type === "text")
        .map((c) => c.text)
        .join("\n");

      res.json({
        response: textResponse,
        toolCalls: [],
      });
    }
  } catch (error: any) {
    console.error("Failed to process chat request:", error);
    res.status(500).json({ error: error.message });
  }
});
```
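The request body for this chat endpoint is plain JSON, for example (an illustrative payload):

```json
{
  "message": "Which products are currently in stock?",
  "history": []
}
```

The reply carries the model's final text in `response` and any tool invocations that were made in `toolCalls`.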
The core implementation here is quite simple and essentially mirrors the command-line client; the only difference is that the processing steps are now exposed as HTTP endpoints.
## Usage
Here is an example of using the command-line client:

Of course, we can also use it in Cursor by creating a `.cursor/mcp.json` file and adding the following content:
```json
{
  "mcpServers": {
    "products-sse": {
      "url": "http://localhost:8083/sse"
    }
  }
}
```
Then we can see this MCP service on Cursor's settings page and use it from within Cursor.

Here is an example of using the web client we developed:


## Debugging
We can also use the `npx @modelcontextprotocol/inspector` command to debug our SSE service:
```bash
$ npx @modelcontextprotocol/inspector
Starting MCP inspector...
⚙️ Proxy server listening on port 6277
🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
```
Then open the above address in the browser, select SSE, and configure our SSE address to test:

## Summary
When the LLM decides to invoke one of your tools, the quality of the tool descriptions is crucial:
- **Precise Descriptions**: Ensure that each tool's description is clear and includes keywords for the LLM to correctly identify when to use that tool.
- **Avoid Conflicts**: Do not provide multiple tools with similar functions, as this may lead to incorrect selections by the LLM.
- **Testing and Validation**: Test the accuracy of tool calls using various user query scenarios before deployment.
MCP servers can be implemented using various technologies:
- Python SDK
- TypeScript/JavaScript
- Other programming languages
The choice should be based on the team's familiarity and the existing technology stack.
Additionally, integrating the AI assistant with the MCP server into the existing microservice architecture has the following advantages:
1. **Real-time Data**: Provides real-time or near-real-time updates through SSE (Server-Sent Events), which is particularly important for dynamic data such as inventory information and order status.
2. **Scalability**: Different parts of the system can be independently scaled, for example, frequently used inventory check services can be scaled separately.
3. **Resilience**: Failure of a single microservice does not affect the operation of the entire system, ensuring system stability.
4. **Flexibility**: Different teams can independently handle different parts of the system, using different technology stacks if necessary.
5. **Efficient Communication**: SSE is more efficient than continuous polling, sending updates only when data changes.
6. **Enhanced User Experience**: Real-time updates and quick responses improve customer satisfaction.
7. **Simplified Client**: Client code is cleaner, without complex polling mechanisms, only needing to listen for server events.
Of course, if we want to use this in a production environment, we also need to consider the following points:
- Conduct comprehensive testing to identify potential errors.
- Design fault recovery mechanisms.
- Implement monitoring systems to track tool invocation performance and accuracy.
- Consider adding a caching layer to reduce the load on backend services.
Through these practices, we can build an efficient and reliable MCP-based intelligent shopping mall service assistant, providing users with a real-time, personalized shopping experience.