<div align="center">
<img src="./banner/logoiconbg.png" alt="KunAvatar Logo" width="128" height="128" />
# KunAvatar
[English](./README_EN.md)
</div>
## ✨ Project Introduction
KunAvatar is a lightweight AI desktop client for local deployment, built with Ollama as its core inference engine. It retains all of Ollama's native functionality while adding several practical capabilities: an RBAC permission system, multi-tenant management, conversation memory, and support for MCP server tools and auxiliary models. It is designed for users with strict data-privacy requirements, balancing practical functionality with data security.
### 🎯 Core Advantages
- **🧠 Intelligent Memory** - Recursive memory system, allowing AI to truly "remember" conversation content
- **🎯 Model Collaboration** - Multi-model collaborative work, main model focusing on dialogue, auxiliary model optimizing experience
- **🔧 Tool Ecosystem** - Supporting MCP tool integration with multiple protocols such as SSE and Streamable HTTP
- **👥 Enterprise-level Management** - Complete RBAC permission system, supporting multi-tenancy and resource-level data management
- **🚀 Local Deployment** - Completely localized data, protecting privacy and security, and supporting LAN access
- **📦 Out-of-the-Box** - No complex configuration required, can be deployed and used locally (currently only supports Windows)
Whether for personal use, team collaboration, or enterprise deployment, KunAvatar can provide a professional, secure, and intelligent AI interaction experience.
### Document Address
- [KunAvatar Documentation](https://kunlabai.com/en/docs)
## 🎯 Main Features
### 🤖 Intelligent Dialogue System
- **Streaming Dialogue Response** - Real-time display of AI replies, providing a smooth dialogue experience
- **Multi-model Support** - Compatible with all Ollama models, supporting hot switching of models
- **Context Memory** - Intelligent memory management, maintaining the coherence of long dialogues
- **Dialogue History** - Complete dialogue records and search functions
- **Multi-modal Support** - Supporting image-based contextual dialogue
### 🧠 Advanced Memory System
- **Intelligent Context Management** - Automatic analysis of dialogue content, extracting key information
- **Recursive Memory Update** - Dynamic update and optimization of memory content
- **Memory Optimization Algorithm** - Intelligent compression and organization of historical dialogues
- **Background Memory Service** - Asynchronous processing of memory generation, without affecting dialogue fluency
- **Global Memory Settings** - Supporting user-level memory configuration and management
### 🔧 MCP Tool Integration
- **Multi-transport Protocol Support** - Supporting stdio, SSE, and Streamable HTTP connection methods
- **One-click Connection to Obtain Tools** - Automatic discovery and connection to MCP servers, quickly obtaining available tools
- **Multi-server Management** - Simultaneous management of multiple MCP servers, unified tool call interface
- **Real-time Tool Calling** - Allowing AI to have the ability to execute external tools
- **Tool Status Monitoring** - Real-time monitoring of tool connection status and execution results
- **Tool Permission Management** - Fine-grained control of tool access permissions
- **SSE Streaming Connection** - Supporting real-time communication via Server-Sent Events
- **Streamable HTTP Transport** - The officially recommended streaming transport method
- **stdio Transport** - Integrating with local tools over standard input and output
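The three transports above can be summarized with a toy classifier. This is an illustrative sketch only: KunAvatar's MCP configuration page handles transport selection internally, and the `/sse` endpoint suffix shown is a common MCP server convention, not a guarantee.

```shell
# mcp_transport ENTRY: guess which MCP transport a server entry uses.
# HTTP(S) URLs ending in /sse -> SSE; other URLs -> Streamable HTTP
# (the recommended default); anything else -> a local stdio command.
mcp_transport() {
  case "$1" in
    http://*|https://*)
      case "$1" in
        */sse) echo "sse" ;;   # Server-Sent Events endpoint
        *)     echo "http" ;;  # Streamable HTTP (recommended)
      esac ;;
    *) echo "stdio" ;;         # local command over stdin/stdout
  esac
}

# Example:
#   mcp_transport "http://localhost:8000/sse"   -> sse
#   mcp_transport "npx -y some-local-server"    -> stdio
```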
### 🎯 Auxiliary Model System
- **Multi-model Collaboration** - Main model and auxiliary model working together
- **Prompt Optimization** - A dedicated model refines user input and system prompts
- **Title and Summary Generation** - Automatic generation of dialogue titles and content summaries
- **Memory Model** - Supporting customized memory models, optimizing dialogue effects
### 🧠 Intelligent Agent System
- **Configurable Intelligent Agent** - Creating AI assistants in specific fields
- **MCP Tool Customization** - Customizing exclusive tool lists for each intelligent agent
- **System Prompt Management** - Flexible prompt configuration and optimization
- **Intelligent Agent Memory Association** - Each intelligent agent having an independent memory system
### 👥 Enterprise-level User Management
- **Resource-level Data Management** - Supporting fine-grained management of users, roles, and permissions
- **RBAC Permission Control** - Role-based access control system
- **Enterprise-level Module** - Supporting multi-tenancy, data isolation, and permission audit
- **User Status Management** - Supporting user activation, pause, and disable status control
- **Role Permission Allocation** - Flexible role creation and permission allocation mechanism
- **Data Security Isolation** - Ensuring complete isolation of different user data
## 🛠️ Technology Stack
### Front-end Technology
- **Next.js 15** - React full-stack framework, supporting App Router
- **React 19** - Latest React version, providing better performance
- **TypeScript** - Type-safe JavaScript superset
- **Tailwind CSS** - Utility-first CSS framework
- **Framer Motion** - Powerful animation library
- **three.js** - 3D graphics library
### Back-end Technology
- **Next.js API Routes** - Server-side API implementation
- **SQLite3** - Lightweight database, supporting Better-SQLite3
- **JWT** - JSON Web Token authentication
- **bcryptjs** - Password encryption
### AI Integration
- **Ollama** - Local large language model runtime
- **MCP (Model Context Protocol)** - Tool call protocol
## 🚀 Quick Start
### Environment Requirements
- **Node.js** >= 22.15.0
- **npm** >= 11.3.0
- **Ollama** >= 0.9.6 (recommended)
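To confirm your installed versions meet these minimums, a small shell helper can compare version strings. This is a sketch, not part of the project; it relies on `sort -V` from GNU coreutils, which is not strict POSIX.

```shell
# ver_ge A B: succeed when version A >= version B.
# Works by checking that B is the smaller of the two under `sort -V`.
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums listed above (assumes the tools
# are on PATH; uncomment to run):
# ver_ge "$(node --version | tr -d v)" 22.15.0 && echo "Node.js OK"
# ver_ge "$(npm --version)" 11.3.0 && echo "npm OK"
```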
### Installation Steps
1. **Clone Project**
```bash
git clone https://github.com/KunLabAI/kun-avatar.git
cd kun-avatar
```
2. **Install Dependencies**
```bash
npm run install
```
3. **Build Project**
```bash
npm run build
```
4. **Start Application**
```bash
npm start
```
### Start Development Mode
If manual startup is required, use:
```bash
cd kunavatar
npx next dev
```
If a one-click startup script is needed, use:
```bash
node start.js
```
The application will automatically:
- 🔍 Detect local IP address
- 🌐 Configure LAN access
- 🚀 Start development server
- 📱 Open the application in the browser
## 📖 Usage Guide
### Initial Configuration
1. **Install Ollama**
- Visit [Ollama Official Website](https://ollama.ai/) to download and install
- Pull the model you need: `ollama pull gemma3`
2. **Create Administrator Account**
Option 1: Create administrator account by command execution
```bash
cd kunavatar/scripts
node init-admin.js
```
Option 2: Create administrator account on the page
Note: After the application starts, visit http://localhost:3000/register to create an administrator account
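Before registering, it may be worth confirming that Ollama is actually reachable. The sketch below assumes Ollama's default port 11434 and uses its standard `/api/tags` model-listing endpoint; `curl` must be installed.

```shell
# ollama_ready [host]: succeed when the Ollama API answers on its
# default port (11434). A JSON body listing your pulled models
# (e.g. gemma3) means Ollama is ready for KunAvatar.
ollama_ready() {
  curl -sf "http://${1:-localhost}:11434/api/tags" > /dev/null
}

# Usage:
#   ollama_ready && echo "Ollama is up" || echo "Start Ollama first"
```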
### Basic Usage
1. **Start Dialogue**
- Select AI model
- Select intelligent agent (optional)
- Start dialogue with AI
2. **Manage Dialogue**
- View dialogue history
- Search historical messages
- Export dialogue records
3. **Configure Intelligent Agent**
- Create AI assistants in specific fields
- Set system prompts
- Configure model parameters
## 📁 Project Structure
```
Kun-Avatar/
├── 📄 start.js                   # Intelligent startup script
├── 📄 package.json               # Launcher configuration
└── 📁 kunavatar/                 # Main application directory
    ├── 📁 src/                   # Source code
    │   ├── 📁 app/               # Next.js pages and API
    │   │   ├── 📁 api/           # API routes
    │   │   │   ├── 📁 chat/      # Chat-related API
    │   │   │   ├── 📁 models/    # Model management API
    │   │   │   ├── 📁 mcp/       # MCP tool API
    │   │   │   └── 📁 auth/      # Authentication API
    │   │   ├── 📁 simple-chat/   # Chat interface
    │   │   ├── 📁 model-manager/ # Model management
    │   │   ├── 📁 mcp-config/    # MCP configuration
    │   │   └── 📁 agents/        # Intelligent agent management
    │   ├── 📁 components/        # Shared components
    │   ├── 📁 lib/               # Core library
    │   │   ├── 📁 database/      # Database operations
    │   │   ├── 📁 mcp/           # MCP client
    │   │   ├── 📄 ollama.ts      # Ollama API
    │   │   └── 📄 auth.ts        # Authentication service
    │   ├── 📁 hooks/             # React Hooks
    │   └── 📁 types/             # TypeScript types
    ├── 📁 scripts/               # Tool scripts
    ├── 📁 public/                # Static resources
    └── 📄 package.json           # Application dependencies
```
## 🚀 Future Plans
We are actively developing more exciting features, and the following is our development roadmap:
### 📋 Recent Plans
#### 🧠 Memory System Optimization
- **Intelligent Context Compression** - Implement more efficient dialogue context compression algorithm
- **Memory Level Management** - Support hierarchical storage of short-term, medium-term, and long-term memory
- **Memory Retrieval Optimization** - Improve memory retrieval accuracy and speed
- **Memory Visualization** - Provide visual management interface for memory content
#### 🔄 Model Management Enhancement
- **One-click Model Pull** - Directly pull and install models from Ollama official repository
- **Model Version Management** - Support model version control and rollback function
#### 💻 Desktop Client Support
- **Windows Client** - Native Windows desktop application
- **macOS Client** - Native macOS desktop application
- **Linux Client** - Support mainstream Linux distributions
- **Cross-platform Synchronization** - Real-time synchronization of data between desktop and web
- **Offline Mode** - Support completely offline AI dialogue function
#### 🌐 Multi-language Support
- **Multi-language Model Support** - Support more language models and translation functions
- **Multi-language Interface** - Provide multi-language user interface and interaction
### 💡 Contribution Ideas
We welcome community contributions of ideas and suggestions! If you have good ideas or functional requirements, please:
- 📝 Submit functional requests in [Issues](https://github.com/KunLabAI/kun-avatar/issues)
- 💬 Participate in discussions in [Discussions](https://github.com/KunLabAI/kun-avatar/discussions)
- 🔧 Submit Pull Request to contribute code
---
## 🤝 Contribution Guide
We welcome all forms of contributions! Whether it's bug reports, feature suggestions, or code contributions.
### How to Contribute
1. **Fork Project**
2. **Create Feature Branch** (`git checkout -b feature/AmazingFeature`)
3. **Commit Changes** (`git commit -m 'Add some AmazingFeature'`)
4. **Push to Branch** (`git push origin feature/AmazingFeature`)
5. **Create Pull Request**
### Development Guide
- Follow existing code style
- Add appropriate tests
- Update relevant documentation
- Ensure all tests pass
## 📄 License
This project is licensed under the Apache 2.0 license - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgements
- [Ollama](https://ollama.ai/) - Providing local AI model runtime
- [Next.js](https://nextjs.org/) - Powerful React framework
- [Model Context Protocol](https://modelcontextprotocol.io/) - Tool call protocol standard
- Support from all contributors and users
## 📞 Contact Us
- **Project Homepage**: [GitHub Repository](https://github.com/KunLabAI/kun-avatar)
- **Issue Feedback**: [Issues](https://github.com/KunLabAI/kun-avatar/issues)
- **Feature Suggestions**: [Discussions](https://github.com/KunLabAI/kun-avatar/discussions)
- **Contact Email**: [info@kunpuai.com](mailto:info@kunpuai.com)
---
<div align="center">
**If this project helps you, please give us a ⭐️**
Made with ❤️ by KunLab Team
</div>