# 🐕 Yuns Desktop Assistant
<p align="center">
<img src="assets/shiba.jpg" alt="Yuns桌面助手" width="150"/>
</p>
<p align="center">
<b>Smart Desktop Pet - Multi-Model AI Dialogue Assistant + MCP Tool Invocation</b>
</p>
<p align="center">
<img src="https://img.shields.io/badge/Electron-28.0.0-47848F?logo=electron" alt="Electron"/>
<img src="https://img.shields.io/badge/License-MIT-green" alt="License"/>
<img src="https://img.shields.io/badge/Version-2.2.0-blue" alt="Version"/>
<img src="https://img.shields.io/badge/Platform-Windows-0078D6?logo=windows" alt="Platform"/>
</p>
---
## ✨ Features
### 🤖 Multi-Model AI Dialogue (Supports 10+ Providers)
**International Services**
- **DeepSeek** - DeepSeek-V3 Chat, DeepSeek-R1 Reasoning Model
- **Google Gemini** - Gemini 3 Pro/Flash, Gemini 2.5 Series (Latest)
- **OpenAI** - GPT-4o, o1/o3 Reasoning Series
- **Anthropic Claude** - Claude Sonnet 4, Claude 3.5 Series
- **Groq** - Llama 3.3 70B (Free High-Speed Inference)
**Domestic Services**
- **智谱 GLM** - GLM-4 Plus, GLM-4V Visual Model
- **月之暗面 Kimi** - Moonshot v1 Series (128K Ultra-Long Context)
- **零一万物 Yi** - Yi Lightning, Yi Large
- **硅基流动** - Qwen2.5, DeepSeek-V3 Hosted Version
**Other**
- **Custom API** - Supports any OpenAI-compatible endpoint; model IDs can be entered manually
### 👁️ Visual Analysis
- One-click screenshot and send to AI for analysis
- Supports multi-modal visual understanding
- Automatically hides the assistant window before capturing, so it never appears in the screenshot
### 🛠️ MCP Tool Invocation
- Supports Model Context Protocol (MCP) standard
- Built-in file system, terminal commands, network requests, and other presets
- AI can autonomously invoke tools to complete complex tasks
- Supports custom MCP server configuration
### 🔄 Gemini API Relay Station
- Built-in Gemini API proxy server
- Supports multi-key polling load balancing
- Automatically synchronizes Gemini Keys from API configuration
- OpenAI-compatible format, easy to use from other applications
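The multi-key polling above can be pictured as a simple round-robin selector. The sketch below is purely illustrative (the class and method names are hypothetical, not the actual `proxy-key-manager.js` implementation):

```javascript
// Minimal round-robin key selector, illustrating multi-key polling.
// A sketch only; the real logic lives in proxy-key-manager.js.
class KeyPool {
  constructor(keys) {
    this.keys = keys; // Gemini API keys synced from the API configuration
    this.index = 0;   // cursor for round-robin polling
  }

  // Return the next key, wrapping back to the start of the list.
  next() {
    const key = this.keys[this.index];
    this.index = (this.index + 1) % this.keys.length;
    return key;
  }
}

const pool = new KeyPool(['key-A', 'key-B', 'key-C']);
console.log(pool.next()); // key-A
console.log(pool.next()); // key-B
console.log(pool.next()); // key-C
console.log(pool.next()); // key-A (wraps around)
```

Each incoming relay request draws the next key, spreading load evenly across all configured keys.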
### 🌐 Network Proxy Configuration
- Supports HTTP/HTTPS proxy
- Dynamic switching without restarting the application
- Applicable to all API requests
### 🎨 UI Features
- 🐕 Cute desktop pet character
- 🌓 Supports light and dark theme switching
- 💬 Streaming output, real-time display of AI responses
- 📝 Friendly prompt message system
- 📄 Export conversations as Markdown files
- 🖼️ Responsive design with adaptive window sizing
---
## 🖼️ Interface Preview
| Desktop Pet | Dialogue Interface | Settings Interface |
|:---:|:---:|:---:|
| Cute Shiba Inu Desktop Pet | Multi-Model Intelligent Dialogue | Rich Configuration Options |
---
## 🚀 Quick Start
### Environment Requirements
- Node.js 18+
- npm or yarn
### Installation Steps
```bash
# 1. Clone the project
git clone https://github.com/JianguSheng/yuns-desktop-pet.git
cd yuns-desktop-pet
# 2. Install dependencies
npm install
# 3. Start the application
npm start
# Development mode (with developer tools)
npm run dev
```
### Build Executable File
```bash
# Build Windows version
npm run build
# Build portable version
npm run build:portable
```
After building, the executable file is located in the `dist/win-unpacked/` directory.
---
## ⚙️ Configuration Instructions
### API Configuration
Before first use, an AI model API must be configured:
1. Double-click the desktop pet to open the dialogue window
2. Click **⚙️ Settings** in the upper right corner
3. Add or edit the configuration in the **API Configuration** tab
4. Fill in the API address and key
5. Click **Test Connection** to verify the configuration
#### Supported API Providers
| Provider | Default API Address | Recommended Model |
|-------|-------------|---------|
| DeepSeek | `https://api.deepseek.com/v1/chat/completions` | deepseek-chat, deepseek-reasoner |
| Google Gemini | `https://generativelanguage.googleapis.com/v1beta/models` | gemini-3-pro-preview, gemini-2.5-flash |
| OpenAI | `https://api.openai.com/v1/chat/completions` | gpt-4o, o1, o3-mini |
| Anthropic Claude | `https://api.anthropic.com/v1/messages` | claude-sonnet-4, claude-3-5-sonnet |
| Groq (Free) | `https://api.groq.com/openai/v1/chat/completions` | llama-3.3-70b-versatile |
| 智谱 GLM | `https://open.bigmodel.cn/api/paas/v4/chat/completions` | glm-4-plus, glm-4v-plus |
| 月之暗面 Kimi | `https://api.moonshot.cn/v1/chat/completions` | moonshot-v1-128k |
| 零一万物 Yi | `https://api.lingyiwanwu.com/v1/chat/completions` | yi-lightning, yi-large |
| 硅基流动 | `https://api.siliconflow.cn/v1/chat/completions` | Qwen/Qwen2.5-72B-Instruct |
| Custom API | User-configured | Any model ID (manual entry supported) |
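The OpenAI-compatible providers in the table all accept the same request-body shape, which is why one custom-API option covers so many services. A minimal sketch of building such a body (the system prompt here is illustrative, and error handling is omitted):

```javascript
// Build an OpenAI-compatible chat request body. The same shape works
// for DeepSeek, Groq, GLM, Kimi, Yi, 硅基流动, and any custom endpoint.
function buildChatRequest(model, userMessage) {
  return {
    model, // e.g. "deepseek-chat" from the table above
    messages: [
      { role: 'system', content: 'You are a helpful desktop assistant.' },
      { role: 'user', content: userMessage },
    ],
    stream: true, // enable streaming output
  };
}

// The body is POSTed as JSON with a Bearer token, e.g.:
//   fetch('https://api.deepseek.com/v1/chat/completions', {
//     method: 'POST',
//     headers: {
//       'Content-Type': 'application/json',
//       'Authorization': `Bearer ${apiKey}`,
//     },
//     body: JSON.stringify(buildChatRequest('deepseek-chat', 'Hello!')),
//   });
const body = buildChatRequest('deepseek-chat', 'Hello!');
console.log(JSON.stringify(body, null, 2));
```

Providers with their own formats (Gemini's `generateContent`, Anthropic's `messages` API) use different request shapes, which the app handles per provider.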
### Network Proxy Configuration
If you need to access the API through a proxy:
1. Go to **Settings** → **General Settings**
2. Find the **Network Proxy** area
3. Enable the proxy and fill in the proxy address and port
4. Click **Test Proxy** to verify the connection
5. **Save Configuration** to take effect immediately without restarting
### MCP Tool Configuration
1. Go to **Settings** → **🛠️ MCP Tools**
2. Turn on **Enable MCP Tool Invocation**
3. Add an MCP server or use a preset:
- 📁 **File System** - File read and write operations
- 💻 **Terminal Command** - Execute system commands
- 🌐 **Network Request** - HTTP request
4. Check **Enable This Server** and click **Connect**
#### MCP Preset Configuration
```
# File System
command: npx
args: -y @modelcontextprotocol/server-filesystem C:/

# Terminal Command
command: npx
args: -y @anthropics/mcp-server-shell

# Network Request
command: npx
args: -y @anthropics/mcp-server-fetch
```
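A custom server entry follows the same command/args shape as the presets. As an illustration (the package name `my-mcp-server` and the field names below are hypothetical placeholders; substitute any stdio-based MCP server):

```javascript
// Illustrative shape of a custom MCP server entry, mirroring the presets.
// "my-mcp-server" is a placeholder package name, not a real server.
const customServer = {
  name: 'My Tools',
  command: 'npx',
  args: ['-y', 'my-mcp-server', '--root', 'C:/work'],
  enabled: true, // corresponds to the "Enable This Server" checkbox
};
console.log(customServer.command, customServer.args.join(' '));
```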
### Gemini Relay Station Configuration
1. Go to **Settings** → **API Relay Station**
2. Turn on the relay station service
3. Default port: `3001`
4. Access address: `http://127.0.0.1:3001/v1/chat/completions`
5. Supports OpenAI-compatible calls
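Because the relay speaks the OpenAI-compatible format, any OpenAI-style client can point at it. A minimal sketch, assuming the relay is running on the default port (the model ID is illustrative):

```javascript
// Call the local Gemini relay station with an OpenAI-style request.
// Assumes the relay is running on the default port 3001.
const RELAY_URL = 'http://127.0.0.1:3001/v1/chat/completions';

async function askViaRelay(prompt) {
  const res = await fetch(RELAY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gemini-2.5-flash', // illustrative model ID
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  return res.json();
}
```

The relay handles key selection internally, so the client does not need to supply a Gemini key of its own.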
### Custom API / Relay Station Configuration
For users of third-party API relay stations:
1. Select the provider type as **Custom API**
2. Fill in the API address provided by the relay station
3. Fill in the API Key provided by the relay station
4. Select a model from the list, or select **Manually Enter Model ID** to enter any model
---
## 📁 Project Structure
```
project/
├── main.js # Electron main process
├── preload.js # Preload script
├── config.js # Application configuration (model, window, etc.)
├── store.js # Data persistence
├── api-service.js # AI API call service
├── mcp-client.js # MCP client management
├── proxy-server.js # Gemini API relay station
├── proxy-key-manager.js # API Key management
├── renderer/ # Rendering process
│ ├── pet.html # Desktop pet window
│ ├── chat.html/js/css # Dialogue window
│ ├── settings.html/js/css # Settings window
│ └── friendly-messages.js # Friendly prompts
├── assets/ # Resource files
│ ├── shiba.jpg # Desktop pet image
│ └── icon.png # Application icon
└── dist/ # Build output
```
---
## 🔧 Advanced Features
### Conversation Saving
Conversations can be exported as Markdown files:
- Click the **💾 Save** button on the dialogue interface
- The default save path can be configured in `config.js`
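As an illustration of what such an entry might look like (the key names below are hypothetical; check the actual `config.js` for the real field names):

```javascript
// Hypothetical shape of the export setting in config.js; the key
// names here are illustrative, not the project's actual schema.
const config = {
  export: {
    defaultSavePath: 'C:/Users/me/Documents/yuns-chats', // Markdown export directory
  },
};
console.log(config.export.defaultSavePath);
```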
### Window Always on Top
The window always on top function can be enabled in **Settings** → **General Settings**.
### Pet Size Adjustment
The pet size can be adjusted in **Settings** → **Appearance Settings**:
- Small: 180x180
- Medium: 230x230 (default)
- Large: 280x280
### Custom Theme
Supports light and dark theme switching, select in **Appearance Settings**.
---
## 🛠️ Development Instructions
### Development Mode
```bash
npm run dev
```
Development mode will automatically open developer tools for easy debugging.
### Menu Control
```bash
# Hide menu
npm run menu:hide
# Minimal menu
npm run menu:minimal
# Custom menu
npm run menu:custom
```
### Dependency Instructions
| Dependency | Purpose |
|-----|-----|
| electron | Desktop application framework |
| axios | HTTP request |
| electron-store | Data persistence |
| @modelcontextprotocol/sdk | MCP protocol support |
| express | Relay station server |
| electron-builder | Application packaging |
| https-proxy-agent | Network proxy support |
---
## 📋 Update Log
### v2.2.0 (2025-12)
- ✨ Added 6 AI providers: Claude, 智谱, 月之暗面, 零一万物, 硅基流动, Groq
- ✨ Updated Gemini 3 series model support
- ✨ Added network proxy configuration function (dynamic switching, no restart required)
- ✨ Custom API supports manually entering any model ID
- 🐛 Fixed the issue that the API address was reset when editing the configuration
- 💄 Optimized settings page UI
### v2.1.0 (2024-12)
- ✨ Added MCP tool invocation function
- ✨ Added Gemini API relay station
- 🐛 Fixed several known issues
- 💄 Optimized user interface
### v2.0.0
- ✨ Multi-card configuration system
- ✨ Multi-model support
- ✨ Visual analysis function
- ✨ Streaming output
---
## 🤝 Contribution
Issues and Pull Requests are welcome!
---
## 📞 Contact Information
- **Author**: 匀升
- **Email**: qiyunsheng919@gmail.com
- **GitHub**: [JianguSheng](https://github.com/JianguSheng)
- **Project Address**: [yuns-desktop-pet](https://github.com/JianguSheng/yuns-desktop-pet)
---
## 📄 License
This project is open source under the [MIT License](LICENSE).
---
<p align="center">
Made with ❤️ by 匀升
</p>