# MCP-Mem0-docker: Long-Term Memory for AI Agents
## Goals
- Own your memories, Own your data!
- Run locally on any computer with Docker and Ollama.
### Plus
- Use any supported LLM provider (currently OpenAI, OpenRouter, or Ollama).
- Provide a simple and effective way to manage long-term memory for AI agents using the MCP protocol.
- Enhance your AI's capabilities with persistent memory storage.
## Features
Based on [mem0.ai](https://mem0.ai/), the server provides four tools for managing long-term memory:
1. **`memory_save`**: Store any information in long-term memory.
2. **`memory_get_all`**: Retrieve stored memories in contextual format.
3. **`memory_search`**: Find memories using semantic search.
4. **`memory_delete`**: Delete memories that match a specific search query.
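As a sketch, an MCP client invokes one of these tools with a JSON-RPC `tools/call` request. The argument name `text` below is an assumption for illustration; check the server's tool schema for the exact parameter names:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_save",
    "arguments": {
      "text": "The user prefers dark mode."
    }
  }
}
```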
## Prerequisites
- Ollama running locally.
- Docker.
- Any MCP client (VS Code, Claude Desktop, Cursor, Windsurf, etc.)
## Fast Start
### Pull the LLM and embedding models with Ollama
Make sure Ollama is installed and running locally, then pull the LLM and the embedding model:
```bash
ollama serve   # skip if Ollama is already running as a service
ollama pull qwen2.5:3b
ollama pull nomic-embed-text
```
### Run the MCP Server and the database
```bash
docker compose up -d
```
### Configure the MCP Client
For example, if you are using the MCP client in VS Code, add the server to your MCP configuration file (for example `~/.mcp/config.json`):
```json
{
  "mcpServers": {
    "mem0": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}
```
#### Windsurf
Use `serverUrl` instead of `url` in your configuration.
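A Windsurf configuration analogous to the VS Code example above might look like this (same server name and endpoint assumed):

```json
{
  "mcpServers": {
    "mem0": {
      "serverUrl": "http://localhost:8050/sse"
    }
  }
}
```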
## Development
The easiest way to develop or experiment is to follow the Fast Start section and bring up the server and database with Docker Compose, then stop the MCP server container (keeping the database running) and run the server locally with uv. You will need:
- Python 3.12+
- uv
### Using uv
1. Install uv:
```bash
pip install uv
uv venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
```
2. Install dependencies:
```bash
uv pip install -e .
```
3. Create a `.env` file based on `.env.example`:
```bash
cp .env.example .env
```
4. Configure your environment variables in the `.env` file (see Configuration section)
5. Run the server locally:
```bash
uv run src/main.py
```
6. Rebuild the images without cache and restart with Docker Compose (`--no-cache` is a flag of `docker compose build`, not `docker compose up`):
```bash
docker compose build --no-cache
docker compose up -d
```
### Configuration .env
The following environment variables can be configured in your `.env` file:
| Variable | Description | Default .env | Notes |
|----------|-------------|--------------| ------|
| `TRANSPORT` | Transport protocol (`sse` or `stdio`) | `sse` | `sse` is recommended when running in Docker |
| `HOST` | Host to bind to when using SSE transport | `0.0.0.0` | This allows access from any IP address |
| `PORT` | Port to listen on when using SSE transport | `8050` | Change if needed |
| `LLM_PROVIDER` | LLM provider (openai, openrouter, or ollama) | `ollama` | Use `ollama` for local models |
| `LLM_BASE_URL` | Base URL for the LLM API | `http://localhost:11434` | Default for Ollama |
| `LLM_API_KEY` | API key for the LLM provider | Empty | Required for OpenAI and OpenRouter |
| `LLM_MODEL` | LLM model to use | `qwen2.5:3b` | Change to your desired model |
| `LLM_EMBEDDING_MODEL` | Embedding model to use | `nomic-embed-text` | Change to your desired embedding model |
| `DATABASE_URL` | PostgreSQL connection | `postgresql://user:pass@host:port/db` | Change to your PostgreSQL connection string |
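Putting it together, a local-Ollama `.env` might look like the following. The values mirror the defaults above; the PostgreSQL host, port, and database name are placeholders you should adjust to match your own database:

```bash
TRANSPORT=sse
HOST=0.0.0.0
PORT=8050
LLM_PROVIDER=ollama
LLM_BASE_URL=http://localhost:11434
LLM_API_KEY=
LLM_MODEL=qwen2.5:3b
LLM_EMBEDDING_MODEL=nomic-embed-text
DATABASE_URL=postgresql://user:pass@localhost:5432/mem0
```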
#### Note for n8n users
Use `host.docker.internal` instead of `localhost` so that the MCP server can be accessed from the n8n container. This is necessary because `localhost` in the n8n container refers to the container itself, not your host machine.
The URL in the MCP node would be: <http://host.docker.internal:8050/sse>