# Wenyan MCP Server

**Wenyan** is a multi-platform typesetting and beautification tool that allows you to publish Markdown content to mainstream writing platforms such as WeChat Official Accounts, Zhihu, and Toutiao with a single click.
**Wenyan** is available in several versions:

* [macOS App Store Version](https://github.com/caol64/wenyan) - macOS desktop application
* [Windows + Linux Version](https://github.com/caol64/wenyan-pc) - Cross-platform desktop application
* [CLI Version](https://github.com/caol64/wenyan-cli) - CI/CD or script automation for publishing articles to WeChat Official Accounts
* [MCP Version](https://github.com/caol64/wenyan-mcp) - Automatically publish articles to WeChat Official Accounts using AI

Wenyan MCP Server is a Model Context Protocol (MCP) server that publishes Markdown-formatted articles to the draft box of a WeChat Official Account, typesetting them with the same theme system as [Wenyan](https://yuzhi.tech/wenyan).

https://github.com/user-attachments/assets/2c355f76-f313-48a7-9c31-f0f69e5ec207

Usage Scenarios:
- [Let AI help you manage the layout and publishing of your WeChat Official Account](https://babyno.top/posts/2025/06/let-ai-help-you-manage-your-gzh-layout-and-publishing/)
## Features
- List and select supported article themes
- Typeset Markdown content using built-in themes
- Publish articles to the draft box of WeChat Official Accounts
- Automatically upload local or online images
## Theme Effects
👉 [Built-in Theme Preview](https://yuzhi.tech/docs/wenyan/theme)

Wenyan incorporates several open-source Typora themes. Many thanks to their authors:
- [Orange Heart](https://github.com/evgo2017/typora-theme-orange-heart)
- [Rainbow](https://github.com/thezbm/typora-theme-rainbow)
- [Lapis](https://github.com/YiNNx/typora-theme-lapis)
- [Pie](https://github.com/kevinzhao2233/typora-theme-pie)
- [Maize](https://github.com/BEATREE/typora-maize-theme)
- [Purple](https://github.com/hliu202/typora-purple-theme)
- [Physics Cat - Mint](https://github.com/sumruler/typora-theme-phycat)
## Usage
### Method 1: Local Installation (Recommended)
```bash
npm install -g @wenyan-md/mcp
```
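
After installation, you can confirm the executable is on your `PATH`:

```bash
# The global npm install should expose a `wenyan-mcp` binary
which wenyan-mcp
```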
#### Integration with MCP Client
Add the following content to your MCP configuration file:
```json
{
  "mcpServers": {
    "wenyan-mcp": {
      "name": "Official Account Assistant",
      "command": "wenyan-mcp",
      "env": {
        "WECHAT_APP_ID": "your_app_id",
        "WECHAT_APP_SECRET": "your_app_secret"
      }
    }
  }
}
```
> Note:
>
> * `WECHAT_APP_ID` is the App ID of the WeChat Official Account platform
> * `WECHAT_APP_SECRET` is the App Secret of the WeChat Official Account platform
---
### Method 2: Compile and Run
#### Compilation
Ensure that you have the [Node.js](https://nodejs.org/) environment installed:
```bash
git clone https://github.com/caol64/wenyan-mcp.git
cd wenyan-mcp
npm install
npx tsc -b
```
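
Optionally, run the compiled server once by hand to confirm the build works. It communicates over stdio, so it waits silently for an MCP client (press Ctrl+C to exit):

```bash
# Smoke test: start the server with the credentials it expects (placeholders below)
WECHAT_APP_ID=your_app_id WECHAT_APP_SECRET=your_app_secret node dist/index.js
```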
#### Integration with MCP Client
Add the following content to your MCP configuration file:
```json
{
  "mcpServers": {
    "wenyan-mcp": {
      "name": "Official Account Assistant",
      "command": "node",
      "args": [
        "Your/path/to/wenyan-mcp/dist/index.js"
      ],
      "env": {
        "WECHAT_APP_ID": "your_app_id",
        "WECHAT_APP_SECRET": "your_app_secret"
      }
    }
  }
}
```
> Note:
>
> * `WECHAT_APP_ID` is the App ID of the WeChat Official Account platform
> * `WECHAT_APP_SECRET` is the App Secret of the WeChat Official Account platform
---
### Method 3: Run Using Docker (Recommended)
Suitable for deployment in server environments or integration with local AI toolchains.
#### Build the Image
```bash
docker build -t wenyan-mcp .
```
Or build with a specific `npm` registry mirror:
```bash
docker build --build-arg NPM_REGISTRY=https://mirrors.cloud.tencent.com/npm/ -t wenyan-mcp .
```
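
You can also start the container once by hand to verify the image, using the same flags the MCP client configuration below passes to `docker run` (the server waits on stdin for an MCP client; press Ctrl+C to exit):

```bash
# One-off manual run mirroring the MCP client configuration
docker run --rm -i \
  -v /your/host/image/path:/mnt/host-downloads \
  -e WECHAT_APP_ID=your_app_id \
  -e WECHAT_APP_SECRET=your_app_secret \
  -e HOST_IMAGE_PATH=/your/host/image/path \
  wenyan-mcp
```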
#### Integration with MCP Client
Add the following content to your MCP configuration file:
```json
{
  "mcpServers": {
    "wenyan-mcp": {
      "name": "Official Account Assistant",
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-v", "/your/host/image/path:/mnt/host-downloads",
        "-e", "WECHAT_APP_ID=your_app_id",
        "-e", "WECHAT_APP_SECRET=your_app_secret",
        "-e", "HOST_IMAGE_PATH=/your/host/image/path",
        "wenyan-mcp"
      ]
    }
  }
}
```
> Note:
>
> * `-v` mounts a host directory into the container so it can read local images. The mount path must match the `HOST_IMAGE_PATH` environment variable. Place every local image referenced by your `Markdown` articles in this directory; Docker maps it into the container, and the container cannot read images outside it.
> * `-e` injects environment variables into the Docker container:
>   * `WECHAT_APP_ID`: the App ID of the WeChat Official Account platform
>   * `WECHAT_APP_SECRET`: the App Secret of the WeChat Official Account platform
>   * `HOST_IMAGE_PATH`: the host image directory mounted above
## WeChat Official Account IP Whitelist
Make sure your server's IP address is added to the IP whitelist of the Official Account platform; otherwise calls to the upload API will fail. For detailed configuration instructions, see: [https://yuzhi.tech/docs/wenyan/upload](https://yuzhi.tech/docs/wenyan/upload)
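
If you are unsure which IP to whitelist, one quick way to check the public IP of the machine running the server (assuming `curl` and outbound network access; api.ipify.org is just one of many such services):

```bash
# Print this machine's public IP address (the address WeChat sees)
curl https://api.ipify.org
```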
## Configuration Instructions (Frontmatter)
To upload articles correctly, add a frontmatter section at the beginning of each Markdown article that provides the `title` and `cover` fields:
```md
---
title: Running a Large Language Model Locally (2) - Providing External Knowledge Base to the Model
cover: /Users/lei/Downloads/result_image.jpg
---
```
* `title` is the article title and is required.
* `cover` is the article cover image; both local paths and online URLs are supported:
  * if the body contains at least one image, `cover` can be omitted and one of those images will be used as the cover;
  * if the body contains no images, `cover` is required.
## About Automatic Image Upload
Images referenced in the article body are uploaded to WeChat automatically when the article is published. Supported image paths:

* Local paths (e.g., `/Users/lei/Downloads/result_image.jpg`)
* Online URLs (e.g., `https://example.com/image.jpg`)
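
Both kinds of references can be mixed in the article body; an illustrative snippet (the paths are placeholders):

```md
![Local image](/Users/lei/Downloads/result_image.jpg)
![Online image](https://example.com/image.jpg)
```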
## Example Article Format
```md
---
title: Running a Large Language Model Locally (2) - Providing External Knowledge Base to the Model
cover: /Users/lei/Downloads/result_image.jpg
---
In the [previous article](https://babyno.top/posts/2024/02/running-a-large-language-model-locally/), we demonstrated how to run a large language model locally. This article will introduce how to enable the model to retrieve customized data from an external knowledge base to improve answer accuracy and make it appear more "intelligent."
## Preparing the Model
Visit the `Ollama` model page and search for `qwen`. We will use the "[Tongyi Qianwen](https://ollama.com/library/qwen:7b)" model that supports Chinese semantics for our experiments.

```
## How to Debug
Use the Inspector for simple debugging:
```bash
npx @modelcontextprotocol/inspector
```
If started successfully, you will see a message like this:
```
🔗 Open inspector with token pre-filled:
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=761c05058aa4f84ad02280e62d7a7e52ec0430d00c4c7a61492cca59f9eac299
(Auto-open is disabled when authentication is enabled)
```
Access the above link to open the debugging page.

1. Fill in the correct startup command
2. Add the environment variables
3. Click Connect
4. Select Tools -> List Tools
5. Choose the tool to debug
6. Fill in its parameters and click Run Tool
7. View the complete parameters
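
Alternatively, the Inspector can launch the server for you in one step. A sketch, assuming the compiled output from Method 2 and the Inspector's `-e` flag for passing environment variables:

```bash
# Launch the Inspector and the server together (adjust the path to your checkout)
npx @modelcontextprotocol/inspector \
  -e WECHAT_APP_ID=your_app_id \
  -e WECHAT_APP_SECRET=your_app_secret \
  node Your/path/to/wenyan-mcp/dist/index.js
```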
## Sponsorship
If you think this project is great, you can buy some canned food for my cat. [Feed the Cat ❤️](https://yuzhi.tech/sponsor)
## License
Apache License Version 2.0