# NagaAgent
[简体中文](README.md)|[繁體中文](README_tw.md)|[English](README_en.md)







---
## [Get tutorial videos and one-click run integration package here](https://www.pylindex.top/naga)
### Framework Linkage (QQ Robot): [Undefined QQbot](https://github.com/69gg/Undefined/)
---
## Introduction
NagaAgent is a feature-rich intelligent dialogue assistant system with the following features:
### 🎯 Core Features
- **Intelligent Dialogue System**: Supports streaming dialogue and tool calling loops
- **Multi-Agent Collaboration**: Intelligent task scheduling based on game theory
- **Knowledge Graph Memory**: GRAG system supports long-term memory and intelligent retrieval
- **Complete Voice Interaction**: Real-time voice input and output processing
- **Modern Interface**: PyQt5 GUI + Live2D virtual character
- **System Tray Integration**: Background operation and quick operation
### 🛠️ Technical Architecture
#### Overall System Architecture
```mermaid
graph TB
%% User interface layer
subgraph "UI Layer"
UI[PyQt5 GUI Interface]
Live2D[Live2D Virtual Character]
Tray[System Tray]
Chat[Chat Interface]
end
%% Core service layer
subgraph "Core Services"
API[API Server<br/>:8000]
Agent[Agent Server<br/>:8001]
MCP[MCP Server<br/>:8003]
TTS[TTS Server<br/>:5048]
end
%% Business logic layer
subgraph "Business Logic"
Game[Game Theory System<br/>Multi-Agent Collaboration]
Memory[GRAG Memory System<br/>Knowledge Graph]
Voice[Voice Processing System<br/>Real-time Voice Interaction]
Tools[Tool Calling System<br/>MCP Protocol]
end
%% Data storage layer
subgraph "Data Storage"
Neo4j[(Neo4j Graph Database<br/>Knowledge Graph Storage)]
Files[File System<br/>Configuration/Logs/Cache]
MemoryCache[Memory Cache<br/>Session State]
end
%% External service layer
subgraph "External Services"
LLM[LLM Service Provider<br/>OpenAI/Qwen, etc.]
Portal[NagaPortal<br/>Portal Service]
MQTT[IoT Devices<br/>MQTT Communication]
Web[Web Crawler<br/>Online Search]
end
%% Connections
UI --> API
UI --> Agent
UI --> MCP
UI --> TTS
API --> Game
API --> Memory
API --> Voice
API --> Tools
Agent --> Game
Agent --> Tools
MCP --> Tools
MCP --> Portal
MCP --> MQTT
MCP --> Web
TTS --> Voice
Game --> Memory
Memory --> Neo4j
Voice --> LLM
Tools --> LLM
API --> MemoryCache
Agent --> MemoryCache
MCP --> MemoryCache
%% Styles
classDef uiLayer fill:#e1f5fe
classDef coreLayer fill:#f3e5f5
classDef businessLayer fill:#e8f5e8
classDef dataLayer fill:#fff3e0
classDef externalLayer fill:#fce4ec
class UI,Live2D,Tray,Chat uiLayer
class API,Agent,MCP,TTS coreLayer
class Game,Memory,Voice,Tools businessLayer
class Neo4j,Files,MemoryCache dataLayer
class LLM,Portal,MQTT,Web externalLayer
```
#### Core Features
- **Multi-Service Parallelism**: API server (8000), Agent server (8001), MCP server (8003), TTS server (5048)
- **Modular Design**: Each service runs independently, supporting hot-swapping
- **Configuration Driven**: Configuration changes hot-reload in real time; no restart required
- **Cross-Platform Support**: Windows, macOS, Linux
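A quick way to confirm all four services are listening is a short probe script. This is an illustrative sketch, not part of NagaAgent: the `/health` path is an assumption, so substitute whatever routes your build actually exposes.

```python
import urllib.request
import urllib.error

# Default ports from config.json; adjust if you changed them.
SERVICES = {
    "API server": 8000,
    "Agent server": 8001,
    "MCP server": 8003,
    "TTS server": 5048,
}

def probe(port: int, path: str = "/health", timeout: float = 2.0) -> bool:
    """Return True if anything answers HTTP on the port (any status counts)."""
    url = f"http://127.0.0.1:{port}{path}"
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server is up; the route just isn't /health
    except OSError:
        return False  # connection refused or timed out

if __name__ == "__main__":
    for name, port in SERVICES.items():
        print(f"{name:12s} :{port}  {'up' if probe(port) else 'down'}")
```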
### 🔧 Technology Stack
#### Technology Stack Architecture
```mermaid
graph TB
%% Frontend stack
subgraph "Frontend Stack"
PyQt5[PyQt5<br/>GUI Framework]
Live2D[Live2D<br/>Virtual Character]
QSS[QSS<br/>Stylesheet]
end
%% Backend stack
subgraph "Backend Stack"
FastAPI[FastAPI<br/>Web Framework]
Uvicorn[Uvicorn<br/>ASGI Server]
AsyncIO[AsyncIO<br/>Asynchronous Programming]
end
%% Database stack
subgraph "Database Stack"
Neo4j[Neo4j<br/>Graph Database]
GRAG[GRAG<br/>Knowledge Graph]
Memory[Memory Cache<br/>Session Management]
end
%% AI stack
subgraph "AI Stack"
OpenAI[OpenAI API<br/>GPT Model]
Qwen[Qwen<br/>Multimodal Model]
MCP[MCP Protocol<br/>Tool Calling]
end
%% Voice stack
subgraph "Voice Stack"
ASR[Speech Recognition<br/>ASR]
TTS[Speech Synthesis<br/>TTS]
Realtime[Real-time Voice<br/>WebRTC]
end
%% Network stack
subgraph "Network Stack"
HTTP[HTTP/HTTPS<br/>RESTful API]
WebSocket[WebSocket<br/>Real-time Communication]
MQTT[MQTT<br/>IoT Protocol]
end
%% Styles
classDef frontend fill:#e3f2fd
classDef backend fill:#f1f8e9
classDef database fill:#fff3e0
classDef ai fill:#fce4ec
classDef voice fill:#e8f5e8
classDef network fill:#f3e5f5
class PyQt5,Live2D,QSS frontend
class FastAPI,Uvicorn,AsyncIO backend
class Neo4j,GRAG,Memory database
class OpenAI,Qwen,MCP ai
class ASR,TTS,Realtime voice
class HTTP,WebSocket,MQTT network
```
#### Core Technologies
- **Python 3.11** + PyQt5 + FastAPI
- **Neo4j Graph Database** + GRAG Knowledge Graph
- **MCP (Model Context Protocol)** Tool Calling
- **OpenAI Compatible API** + Multiple LLM Service Provider Support
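Tool calling rides on the MCP protocol, whose messages are JSON-RPC 2.0. A minimal `tools/call` request can be sketched as below; the tool name and arguments are made-up illustrations, not actual NagaAgent tools.

```python
import itertools
import json

_ids = itertools.count(1)  # JSON-RPC request ids must be unique per session

def mcp_tool_call(name: str, arguments: dict) -> str:
    """Serialise an MCP `tools/call` request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

print(mcp_tool_call("web_search", {"query": "NagaAgent"}))
```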
---
## Deployment and Running Tutorial
### Environment Requirements
- Python 3.11
- Optional: the `uv` tool (speeds up dependency installation and can provision the required Python version automatically)
### Quick Start
> If you have difficulty deploying, you can refer to the video tutorial or download the one-click run integration package.
#### 1. Install Dependencies
<details><summary>Using the setup script</summary>
```bash
# Optional: Install uv first
pip install uv
# Use setup.py to automatically initialize
python setup.py
# Or use setup.sh (Linux/macOS)
./setup.sh
# Or use setup.bat (Windows)
setup.bat
```
The initialization script will automatically:
- Detect Python version
- Create a virtual environment
- Install dependency packages
- Copy configuration file templates
- Open the configuration file for editing
</details>
<details><summary>Manual Deployment</summary>
```bash
# No uv
python -m venv .venv
# Linux/macOS
source .venv/bin/activate
# Windows
.\.venv\Scripts\activate
pip install -r requirements.txt
# Use uv
uv sync
```
</details>
#### 2. Configure LLM API
Edit the `config.json` file to configure your LLM API information:
```json
{
  "api": {
    "api_key": "your_api_key",
    "base_url": "your provider's OpenAI-compatible API endpoint",
    "model": "model name"
  }
}
```
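For reference, these values drive standard OpenAI-compatible chat requests. The sketch below shows the request shape only; the `/chat/completions` path follows the OpenAI convention, and all concrete values are placeholders.

```python
import json

def build_chat_request(config: dict, user_message: str) -> tuple[str, dict]:
    """Build the URL and JSON body for an OpenAI-compatible chat call.

    The api_key is not embedded in the body; it belongs in the
    `Authorization: Bearer <key>` header when the request is sent.
    """
    api = config["api"]
    url = api["base_url"].rstrip("/") + "/chat/completions"
    body = {
        "model": api["model"],
        "messages": [{"role": "user", "content": user_message}],
        "stream": True,  # NagaAgent uses streaming dialogue
    }
    return url, body

# Example with the same shape as config.json (placeholder values):
cfg = {"api": {"api_key": "sk-...", "base_url": "https://api.example.com/v1", "model": "gpt-4o"}}
url, body = build_chat_request(cfg, "hello")
print(url)  # https://api.example.com/v1/chat/completions
print(json.dumps(body, ensure_ascii=False))
```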
<details><summary>Optional Configuration</summary>
#### Enable Knowledge Graph Memory
Use `docker` to install `neo4j` or install `neo4j desktop` and configure Neo4j connection parameters in `config.json`:
```json
{
  "grag": {
    "enabled": true,
    "neo4j_uri": "neo4j://127.0.0.1:7687",
    "neo4j_user": "neo4j",
    "neo4j_password": "the password you set when installing Neo4j"
  }
}
```
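A small sanity check of these settings before starting can save a failed boot. The helper below is illustrative, not part of NagaAgent; an actual connection would use the official `neo4j` driver via `GraphDatabase.driver(uri, auth=(user, password))`.

```python
def neo4j_params(config: dict):
    """Extract and sanity-check the GRAG/Neo4j settings from config.json.

    Returns None when knowledge-graph memory is disabled, otherwise
    a (uri, user, password) tuple ready to hand to the neo4j driver.
    """
    grag = config.get("grag", {})
    if not grag.get("enabled"):
        return None
    uri = grag["neo4j_uri"]
    if not uri.startswith(("neo4j://", "bolt://")):
        raise ValueError(f"unexpected Neo4j URI scheme: {uri}")
    return uri, grag["neo4j_user"], grag["neo4j_password"]

cfg = {"grag": {"enabled": True, "neo4j_uri": "neo4j://127.0.0.1:7687",
                "neo4j_user": "neo4j", "neo4j_password": "secret"}}
print(neo4j_params(cfg))
```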
#### Enable Voice Output Function
```json
{
  "system": {
    "voice_enabled": true
  },
  "tts": {
    "port": 5048
  }
}
```
#### Live2D Related Configuration
```json5
"live2d": {
  "enabled": false, // whether to enable Live2D
  "model_path": "ui/live2d_local/live2d_models/重音テト/重音テト.model3.json", // Live2D model path
  "fallback_image": "ui/img/standby.png", // fallback image
  "auto_switch": true, // whether to switch automatically
  "animation_enabled": true, // whether to enable animations
  "touch_interaction": true // whether to enable touch interaction
},
```
> Other configuration items are documented via comments in the configuration file
</details>
#### 3. Start the Application
```bash
# Use the startup script
./start.sh # Linux/macOS
start.bat # Windows
# Or run the py file directly
# Linux/macOS
source .venv/bin/activate
# Windows
.\.venv\Scripts\activate
python main.py
# uv
uv run main.py
```
> On startup, the program checks the time of the last successful system check; if more than 7 days have passed, it automatically updates itself and restarts.
#### Manual Update
```bash
# Use the update script
./update.sh # Linux/macOS
update.bat # Windows
# Or run the py file directly
python update.py
```
The update script automatically runs `git pull` (detecting the current branch) and `uv sync` to synchronize dependencies.
<details><summary>Troubleshooting</summary>
1. **Python Version Incompatible**: Make sure to use Python 3.11
2. **Port Occupied**: Check if ports 8000, 8001, 8003, and 5048 are available
3. **Neo4j Connection Failed**: Make sure the Neo4j service is running
4. **JSON Parsing Error Occurs When Detecting Neo4j Connection**: Exit and restart the program
5. **Unknown Error**: Please create an issue to report
</details>
<details><summary>Environment Detection</summary>
```bash
# Run system environment detection
python main.py --check-env --force-check
# Quick detection
python main.py --quick-check
```
</details>
## License
[NagaAgent License](LICENSE)
## Contribution
Welcome to create Issues and Pull Requests!
<details><summary>Build a one-click run integration package</summary>
```bash
python build.py
```
The built files are located in the `dist/` directory
</details>
## Star History
[Star History Chart](https://www.star-history.com/#Xxiii8322766509/NagaAgent&type=date&legend=top-left)
<div align="center">
**Thank you to all the developers for their contributions to this project**
**⭐ If this project is helpful to you, please consider giving us a Star**
</div>