# DummyCode
A Python-based intelligent agent project that integrates Large Language Models (LLM), the Model Context Protocol (MCP), and Retrieval-Augmented Generation (RAG) to create an AI agent with tool-invocation and knowledge-base retrieval capabilities.
## Project Overview
This project demonstrates how to build a modern AI agent that can:
- 🤖 Invoke large language models for intelligent conversations
- 🔧 Call external tools via the MCP protocol (e.g., web scraping, file operations)
- 📚 Retrieve relevant information from a knowledge base using RAG technology
- 💾 Persist memory and manage multiple sessions independently
- 🔄 Combine tool invocation and knowledge retrieval in a single workflow
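The tool-invocation loop behind these capabilities can be sketched as follows. The `ask_llm` callable, the message shapes, and the `tool_call` field are illustrative stand-ins, not the actual `ChatOpenAI` or MCP interfaces used in `src/`:

```python
from typing import Callable

def run_agent(
    ask_llm: Callable[[list[dict]], dict],
    tools: dict[str, Callable[..., str]],
    user_input: str,
) -> str:
    """Minimal agent loop: ask the LLM, run any tool it requests, feed the result back."""
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = ask_llm(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # no tool requested: this is the final answer
        # Execute the requested tool and append its result for the next LLM turn
        result = tools[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "name": call["name"], "content": result})
```

In the real project, `tools` would be the tool list discovered from connected MCP servers, and `ask_llm` a streaming chat-completion call.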
## Core Components
| Component | File | Function Description |
| ---------------------------- | ----------------------- | ------------------------------------------ |
| **Agent** | `src/agent.py` | Intelligent agent core, coordinating LLM, MCP, and RAG |
| **MCPClient** | `src/mcp_client.py` | MCP protocol client, connecting to external tool services |
| **ChatOpenAI** | `src/chat_openai.py` | Large language model conversation wrapper, supporting streaming output and tool invocation |
| **EmbeddingRetriever** | `src/embedding.py` | Embedding retriever that vectorizes documents and queries |
| **VectorStore** | `src/vector_store.py` | Vector store, retrieving by cosine similarity |
| **Session** | `src/session.py` | Session management, supporting multi-session independent memory and persistence |
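The cosine-similarity retrieval that `VectorStore` performs can be sketched as below. Class and method names here are illustrative, not the actual `src/vector_store.py` API:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class SimpleVectorStore:
    """In-memory store: add (embedding, document) pairs, query top-k by similarity."""

    def __init__(self) -> None:
        self.items: list[tuple[list[float], str]] = []

    def add(self, embedding: list[float], document: str) -> None:
        self.items.append((embedding, document))

    def search(self, query: list[float], top_k: int = 3) -> list[str]:
        # Rank all stored documents by similarity to the query embedding
        scored = sorted(self.items, key=lambda it: cosine_similarity(query, it[0]), reverse=True)
        return [doc for _, doc in scored[:top_k]]
```

A brute-force scan like this is fine for a small knowledge base; larger corpora would use an index such as FAISS (linked in the references below).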
## Quick Start
### 1. Environment Requirements
- Python 3.10+
- Node.js and npm (for running MCP server)
- uv tool (for running fetch MCP server)
### 2. Install Dependencies
```bash
# Install Python dependencies
pip install -r requirements.txt
# Install MCP server dependencies
npm install -g @modelcontextprotocol/server-filesystem
# Install uv (if not installed)
# Windows: powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# macOS/Linux: curl -LsSf https://astral.sh/uv/install.sh | sh
```
### 3. Configure Environment Variables
Create a new `.env` file, copy the contents from `.env.example`, and fill in your API keys:
```env
# Large language model configuration (example using DeepSeek)
OPENAI_API_KEY=your_deepseek_api_key
OPENAI_BASE_URL=https://api.deepseek.com/v1
# Vector embedding model configuration (example using SiliconFlow)
EMBEDDING_KEY=your_siliconflow_api_key
EMBEDDING_BASE_URL=https://api.siliconflow.cn/v1
```
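These variables are read from `.env` at startup. The project most likely uses a library such as `python-dotenv` for this; as a self-contained illustration, loading a `.env` file amounts to roughly the following stdlib-only sketch (the `load_env` helper is hypothetical):

```python
import os

def load_env(path: str = ".env") -> dict[str, str]:
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    env: dict[str, str] = {}
    try:
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                # Split on the first '=' only, so URLs with '=' in them survive
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env present: fall back to whatever is already in the environment
    os.environ.update(env)
    return env
```

After loading, `OPENAI_API_KEY`/`OPENAI_BASE_URL` configure the chat model client and `EMBEDDING_KEY`/`EMBEDDING_BASE_URL` the embedding client, both against OpenAI-compatible endpoints.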
### 4. Run the Project
```bash
cd src
python main.py
```
## Advanced Features
### Multi-Session Management Mode
Built on the Memory MCP server, the project manages multiple dialogues, each with an independent memory space:
**Command List:**
| Command | Description |
| ------------------------------ | -------------------------------------- |
| `/new [name]` | Create a new session (optional name, default auto-naming) |
| `/list` | List all sessions (display connection status) |
| `/switch <number>` | Switch to the specified session by list number |
| `/delete <number>` | Delete the specified session by list number |
| `/clear` | Clear the conversation history of the current session |
| `/history` | View the conversation history of the current session |
| `/rag` | Switch to RAG mode (based on knowledge base retrieval) |
| `/help` | Display help information |
| `exit` / `quit` | Exit the program |
**Multi-Session Features:**
- Each session has an independent Memory MCP and knowledge graph
- Session memory isolation, no interference between sessions
- Support session persistence (memory files saved in `memory/sessions/` directory)
- Session index saved in `memory/index.json`, automatically restored at startup
- Automatic memory management: relevant memories are queried during each conversation and persisted after it ends
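The persistence described above (`memory/index.json` saved and restored at startup) reduces to serializing the session index as JSON; a minimal sketch with hypothetical helper names:

```python
import json
from pathlib import Path

def save_index(sessions: list[dict], path: str = "memory/index.json") -> None:
    """Persist the session index so sessions can be restored at startup."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # create memory/ on first run
    p.write_text(json.dumps(sessions, ensure_ascii=False, indent=2), encoding="utf-8")

def load_index(path: str = "memory/index.json") -> list[dict]:
    """Restore the session index; empty list if the file does not exist yet."""
    p = Path(path)
    if not p.exists():
        return []
    return json.loads(p.read_text(encoding="utf-8"))
```

Each entry would reference its per-session memory file under `memory/sessions/`, keeping the memory spaces isolated.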
### RAG Mode
Enter `/rag` to switch to RAG mode. In this mode:
- Answers are grounded in knowledge-base retrieval rather than session memory
- Knowledge base directory: `knowledge/`
- Enter `/rag` again to exit RAG mode
- Supports multiple file formats, including Markdown, PDF, XLSX, and CSV
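After retrieval, RAG answering amounts to stuffing the top-ranked chunks into the prompt ahead of the question. A sketch of that assembly step (the function name and prompt wording are illustrative):

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a grounded prompt: numbered knowledge-base chunks, then the question."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below. "
        "Cite chunk numbers like [1] where relevant.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Numbering the chunks is what makes the "accurate citation" scenario below possible: the model can point back at `[1]`, `[2]`, etc.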
**Applicable Scenarios:**
- Query specific domain knowledge
- Answer questions based on document content
- Scenarios requiring accurate citation of knowledge base information
### Docker Deployment
- **Dockerhub Address**: [ght2943330996/agent_1.0](https://hub.docker.com/repository/docker/ght2943330996/agent_1.0)
- **Mounting Requirements**: Mount the `.env` API key file to `/app/.env` inside the container
- **Running Method**: `docker run -it -p 8000:8000 -v [.env file path]:/app/.env ght2943330996/agent_1.0`
## Technical References
- [Model Context Protocol (MCP) Documentation](https://modelcontextprotocol.io/docs)
- [OpenAI API Documentation](https://platform.openai.com/docs)
- [DeepSeek API Documentation](https://platform.deepseek.com/)
- [SiliconFlow Documentation](https://docs.siliconflow.cn/)
- [FAISS Documentation](https://github.com/facebookresearch/faiss)
- [MCP Reference Project](https://github.com/KelvinQiu802/llm-mcp-rag)
- [Enhanced RAG Reference Documentation](https://my.feishu.cn/wiki/ADkHwYg3Vi495Sk8mKOcBd2knbc?from=from_copylink)
- [RAG Paper](https://arxiv.org/abs/2312.10997)