# ComfyUI-AnimaTool
> [!NOTE]
> **Today's Updates**
> - 🎰 **Gacha Feature** — Supports `repeat` / `batch_size`, generating multiple images at once is no longer a dream
> - 🔄 **Card Pool Switching** — You can switch UNET / CLIP / VAE models, try the new card pool
> - 📋 **Recruitment Record** — View historical generation records, supports reroll to redraw
>
> ✅ **Easier Cloud/Remote Connection** — Added `uvx` installation-free mode, one-line configuration to connect to remote ComfyUI, no local environment required
> ✅ **Tavern MCP Client Released** — [SillyTavern MCP Client](https://github.com/Moeblack/sillytavern-mcp-client), supports stdio + Streamable HTTP transports
>
> ✅ **Tool Calling Experience Fix** — [Tool Use Fix](https://github.com/Moeblack/sillytavern-tooluse-fix), merges fragmented messages, displays images directly, adds a Swipe proxy

> [!TIP]
> **Cherry Studio Now Supports MCP Image Display!**
> Our PR fixes Cherry Studio's handling of MCP `ImageContent`. Until it is merged upstream, you can use the preview build for full MCP image support:
> **Download Preview Version** → [Cherry Studio v1.7.17-preview](https://github.com/Moeblack/cherry-studio/releases/tag/v1.7.17-preview)
> Includes the following fixes:
> - Adds a per-assistant setting for whether MCP tool images are sent to the model
> - Fixes base64 image data processing for OpenAI compatible providers
> - Fixes Gemini's MCP multimodal tool result conversion
> - **Performance Fix v2**: Solves the problem of severe lag after multiple rounds of image generation — IPC excludes large slices + base64 in-place strip (fixes the issue in v1.7.16-preview2 where Zod safeParse cloning caused strip to be ineffective) — [Upstream PR #12766](https://github.com/CherryHQ/cherry-studio/pull/12766)
<p align="center">
<img src="assets/hero.webp" alt="ComfyUI-AnimaTool Demo" width="100%">
</p>
<p align="center">
<b>Let AI Agents Directly Generate Anime Pictures, Displayed Natively in Chat Windows</b>
</p>
<p align="center">
Cursor / Claude / Gemini / OpenAI → MCP / HTTP API → ComfyUI → Anima Model
</p>
---

## Documentation
- [📖 Wiki & Prompt Guide](https://github.com/Moeblack/ComfyUI-AnimaTool/wiki) - Detailed prompt guide, installation tutorial, and API documentation.
- [🤖 Cursor Skill](CURSOR_SKILL.md) - **Must-read for Cursor / Windsurf users**! Use the content of this file as an Agent Skill to teach AI how to write high-quality prompts.
## Features
- **MCP Server**: Images are natively displayed in Cursor/Claude chat windows
- **HTTP API**: Starts with ComfyUI, no additional services required
- **Structured Prompts**: Automatically concatenated according to Anima specifications
- **Multi-Aspect Ratio Support**: 21:9 to 9:21 (15 presets in total)
- **Reroll / History**: Supports regeneration based on historical records, can override some parameters (change artist, add LoRA, etc.)
- **Batch Generation**: `repeat` parameter submits multiple independent tasks (queue mode), `batch_size` generates multiple images within a single task
---
## Related Projects
### SillyTavern Suite
Want to use AnimaTool to generate images in SillyTavern? We recommend installing the following supporting plugins:
| Project | Description |
|------|------|
| [SillyTavern MCP Client](https://github.com/Moeblack/sillytavern-mcp-client) | Tavern MCP client, connects to AnimaTool and other MCP Servers, supports stdio + Streamable HTTP |
| [SillyTavern Tool Use Fix](https://github.com/Moeblack/sillytavern-tooluse-fix) | Tool call experience fix, merges fragmented messages, displays images directly in conversations |
```
ComfyUI-AnimaTool (This project, MCP Server)
↕ MCP Protocol (stdio / streamable-http)
SillyTavern MCP Client (Connection + Tool Registration)
↕ SillyTavern Tool Calling
Tool Use Fix (Merge Display + Experience Optimization)
```
### AnimaLoraToolkit - LoRA Training Tool
If you want to train your own LoRA/LoKr to use with Anima, we recommend using **[AnimaLoraToolkit](https://github.com/Moeblack/AnimaLoraToolkit)**:
- **YAML Configuration File** - Loaded via `--config`, command-line arguments can override
- **LoRA / LoKr Dual Mode** - Standard LoRA and LyCORIS LoKr
- **ComfyUI Compatible** - Output safetensors can be used directly in this tool
- **JSON Caption Support** - Structured tags, classified shuffle
- **Real-time Training Monitoring** - Web interface displays loss curves and sample images
- **Checkpoint Recovery** - Saves complete training state, supports breakpoint resume training
After training, place the LoRA in the `ComfyUI/models/loras/` directory and load it via this tool's `loras` parameter.
#### Example: Cosmic Princess Kaguya LoKr
A style + character LoKr trained using AnimaLoraToolkit, restoring the 4K theatrical style of the Netflix animated film "Cosmic Princess Kaguya!":
- **Download**: [Civitai](https://civitai.com/models/2366705)
- **Trigger Words**: `@spacetime kaguya` (style), `cosmic princess kaguya` (work)
- **Recommended Weight**: 0.8 - 1.0
---
## Installation
### Cherry Studio Users
If you are using Cherry Studio as an MCP client, you need to install our preview version to correctly display the images returned by MCP:
1. Download [Cherry Studio v1.7.17-preview](https://github.com/Moeblack/cherry-studio/releases/tag/v1.7.17-preview) (installation or portable version is fine)
2. After installation, configure according to "Method 1: MCP Server" below
3. The generated images will be displayed directly in the chat window
> The official Cherry Studio release has not yet merged this fix. With the official build, images are displayed as raw base64 text, and repeated rounds of image generation cause severe lag.
> v1.7.17-preview is based on upstream v1.7.17 and fixes the memory bloat and UI freeze issues after image generation ([details](https://github.com/CherryHQ/cherry-studio/pull/12766)).
### Method 1: ComfyUI Manager (Recommended)
1. Open ComfyUI Manager
2. Search "Anima Tool"
3. Click Install
4. Restart ComfyUI
### Method 2: Manual Install
```bash
cd ComfyUI/custom_nodes
git clone https://github.com/Moeblack/ComfyUI-AnimaTool.git
pip install -r ComfyUI-AnimaTool/requirements.txt
```
### Prerequisites
Make sure the following model files have been placed in the corresponding ComfyUI directory:
| File | Path | Description |
|------|------|------|
| `anima-preview.safetensors` | `models/diffusion_models/` | Anima UNET |
| `qwen_3_06b_base.safetensors` | `models/text_encoders/` | Qwen3 CLIP |
| `qwen_image_vae.safetensors` | `models/vae/` | VAE |
Model Download: [circlestone-labs/Anima on Hugging Face](https://huggingface.co/circlestone-labs/Anima)
---
## Usage
### Method 0: Independent MCP (Recommended for cloud/remote ComfyUI, or if you don't want to install into custom_nodes)
If you only want to connect to an already-running ComfyUI (local or cloud) and don't want to place this repository in `ComfyUI/custom_nodes/`, use the standalone PyPI package **[`comfyui-animatool`](https://github.com/Moeblack/animatool-mcp)** (installed as the `animatool-mcp` command). It works purely through the standard ComfyUI API: `/prompt`, `/history/<id>`, `/view?...`.
#### Installation
**Method 1: Use uvx (Recommended, No Installation Required)**
No need to install the Python package manually; just reference `uvx` in your Cursor configuration (requires [uv](https://github.com/astral-sh/uv) to be installed):
*(See the JSON configuration below.)*
**Method 2: Use pip**
```bash
pip install comfyui-animatool
```
**Method 3: Source Installation (For Development)**
```bash
pip install -e ./animatool-mcp
```
#### Configure Cursor
Create `.cursor/mcp.json` in the project root directory (using `uvx` as an example):
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "uvx",
      "args": ["--from", "comfyui-animatool", "animatool-mcp"],
      "env": {
        "COMFYUI_URL": "http://127.0.0.1:8188",
        "ANIMATOOL_CHECK_MODELS": "false"
      }
    }
  }
}
```
#### Cloud Authentication (Optional)
If the cloud ComfyUI requires authentication (reverse proxy/VPN/gateway, etc.), you can additionally set:
- `ANIMATOOL_BEARER_TOKEN`
- Or `ANIMATOOL_HEADERS_JSON` (custom Header JSON string)
> This method **does not depend on** installing this custom node; it works as long as `COMFYUI_URL` is reachable.
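For example, a `uvx`-based configuration for an authenticated remote instance might look like the following (the URL and token value are placeholders, not values from this project; set `ANIMATOOL_HEADERS_JSON` instead if your gateway expects custom headers, e.g. a JSON string like `{"X-Api-Key": "..."}`):

```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "uvx",
      "args": ["--from", "comfyui-animatool", "animatool-mcp"],
      "env": {
        "COMFYUI_URL": "https://comfy.example.com",
        "ANIMATOOL_BEARER_TOKEN": "your-token-here"
      }
    }
  }
}
```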
---
### Method 1: MCP Server (Recommended, Native Image Display)
#### Configure Cursor
Create `.cursor/mcp.json` in the project root directory:
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "<PATH_TO_PYTHON>",
      "args": ["<PATH_TO>/ComfyUI-AnimaTool/servers/mcp_server.py"]
    }
  }
}
```
**Example (Windows)**:
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "C:\\ComfyUI\\.venv\\Scripts\\python.exe",
      "args": ["C:\\ComfyUI\\custom_nodes\\ComfyUI-AnimaTool\\servers\\mcp_server.py"]
    }
  }
}
```
#### Install MCP Dependencies
```bash
pip install mcp
```
#### Usage
1. Make sure ComfyUI is running on `http://127.0.0.1:8188`
2. Restart Cursor to load the MCP Server
3. Let AI generate images directly:
> Draw a girl in a white dress in a garden, portrait 9:16, safe
The image will be **natively displayed** in the chat window.
---
### Method 2: ComfyUI Built-in HTTP API
After starting ComfyUI, the following routes are automatically registered:
| Route | Method | Description |
|------|------|------|
| `/anima/health` | GET | Health Check |
| `/anima/schema` | GET | Tool Schema |
| `/anima/knowledge` | GET | Expert Knowledge |
| `/anima/generate` | POST | Execute Generation (supports `repeat` batching) |
| `/anima/history` | GET | View Recent Generation History |
| `/anima/reroll` | POST | Regenerate Based on Historical Records |
#### Example Call
**PowerShell**:
```powershell
$body = @{
    aspect_ratio = "3:4"
    quality_meta_year_safe = "masterpiece, best quality, newest, year 2024, safe"
    count = "1girl"
    artist = "@fkey, @jima"
    tags = "upper body, smile, white dress"
    neg = "worst quality, low quality, blurry, bad hands, nsfw"
} | ConvertTo-Json -Depth 10
Invoke-RestMethod -Uri "http://127.0.0.1:8188/anima/generate" -Method Post -Body $body -ContentType "application/json"
```
**curl**:
```bash
curl -X POST http://127.0.0.1:8188/anima/generate \
-H "Content-Type: application/json" \
-d '{"aspect_ratio":"3:4","quality_meta_year_safe":"masterpiece, best quality, newest, year 2024, safe","count":"1girl","artist":"@fkey, @jima","tags":"upper body, smile, white dress","neg":"worst quality, low quality, blurry, bad hands, nsfw"}'
```
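The same call from Python, using only the standard library (a minimal sketch; the request itself is commented out so the snippet can be read and run without a live server):

```python
import json

# Same request as the curl example above; all five required fields are present.
payload = {
    "aspect_ratio": "3:4",
    "quality_meta_year_safe": "masterpiece, best quality, newest, year 2024, safe",
    "count": "1girl",
    "artist": "@fkey, @jima",  # artist tags must start with '@'
    "tags": "upper body, smile, white dress",
    "neg": "worst quality, low quality, blurry, bad hands, nsfw",
}
body = json.dumps(payload)

# Send it with the stdlib (uncomment once ComfyUI is running):
# from urllib.request import Request, urlopen
# req = Request("http://127.0.0.1:8188/anima/generate", data=body.encode(),
#               headers={"Content-Type": "application/json"})
# print(urlopen(req).read().decode())
```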
---
### Method 3: Independent FastAPI Server
```bash
cd ComfyUI-AnimaTool
pip install fastapi uvicorn
python -m servers.http_server
```
Visit `http://127.0.0.1:8000/docs` to view the Swagger UI.
---
## Parameters
### Required
| Parameter | Type | Description |
|------|------|------|
| `quality_meta_year_safe` | string | Quality/Year/Safety tags (must include safe/sensitive/nsfw/explicit) |
| `count` | string | Number of people (`1girl`, `2girls`, `1boy`) |
| `artist` | string | Artist, **must start with `@`** (e.g. `@fkey, @jima`) |
| `tags` | string | Danbooru tags |
| `neg` | string | Negative prompt |
### Optional
| Parameter | Type | Default | Description |
|------|------|--------|------|
| `aspect_ratio` | string | - | Aspect ratio (automatically calculates resolution) |
| `width` / `height` | int | - | Directly specify resolution |
| `character` | string | `""` | Character name |
| `series` | string | `""` | Series name |
| `appearance` | string | `""` | Appearance description |
| `style` | string | `""` | Style |
| `environment` | string | `""` | Environment/Lighting |
| `steps` | int | 25 | Steps |
| `cfg` | float | 4.5 | CFG |
| `seed` | int | Random | Seed |
| `sampler_name` | string | `er_sde` | Sampler |
| `repeat` | int | 1 | How many independent generation tasks to submit (queue mode, each with an independent random seed). Total images = repeat × batch_size |
| `batch_size` | int | 1 | How many images to generate in a single task (latent batch mode, more memory intensive) |
| `loras` | array | `[]` | Optional: Append LoRA (UNET only). `name` is the relative path under `ComfyUI/models/loras/` (can contain subdirectories), example: `[{"name":"_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors","weight":0.8}]` |
### Supported Aspect Ratios
```
Widescreen: 21:9, 2:1, 16:9, 16:10, 5:3, 3:2, 4:3
Square: 1:1
Portrait: 3:4, 2:3, 3:5, 10:16, 9:16, 1:2, 9:21
```
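The preset resolutions follow from the configured target pixel count (`ANIMATOOL_TARGET_MP`, default 1.0) and alignment step (`ANIMATOOL_ROUND_TO`, default 16). A sketch of how such a computation could work — this illustrates the "~1MP, multiple of 16" rule, and is not necessarily the exact formula the tool uses:

```python
import math

def resolution_for(ratio_w: int, ratio_h: int,
                   target_mp: float = 1.0, round_to: int = 16) -> tuple[int, int]:
    """Pick a width/height near target_mp megapixels with the given
    aspect ratio, each side rounded to a multiple of round_to."""
    target_px = target_mp * 1_000_000
    height = math.sqrt(target_px * ratio_h / ratio_w)
    width = height * ratio_w / ratio_h
    # Round each side to the nearest multiple of round_to.
    width = round(width / round_to) * round_to
    height = round(height / round_to) * round_to
    return int(width), int(height)

print(resolution_for(16, 9))  # -> (1328, 752), about 1.0 MP
print(resolution_for(1, 1))   # -> (992, 992)
```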
---
### LoRA (Optional)
> The current version injects `LoraLoaderModelOnly` in a chain between **UNETLoader → KSampler(model)**, so it **only works for UNET** (will not modify CLIP).
#### 1) Put the LoRA in ComfyUI's loras directory
Your LoRA path (example):
- `G:\\AIGC\\ComfyUICommon\\models\\loras\\_Anima\\cosmic_kaguya_lokr_epoch4_comfyui.safetensors`
The corresponding `loras[i].name` in the request should be (relative to `models/loras/`):
- Recommended: `_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors`
> Note: ComfyUI verifies that `lora_name` appears in the list returned by `GET /models/loras`.
> - On Windows, this list usually contains backslash paths (e.g., `_Anima\\cosmic_kaguya_lokr_epoch4_comfyui.safetensors`)
> - **This project automatically normalizes the separator against the `/models/loras` response** (you can use `/` or `\\`), but if you fill in `lora_name` manually in ComfyUI, copy the value returned by the API exactly.
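The separator normalization amounts to comparing both sides with a canonical separator. A simplified sketch (the real implementation matches against the live `/models/loras` response):

```python
def normalize_lora_name(requested: str, server_list: list[str]) -> str:
    """Match a user-supplied LoRA name against ComfyUI's /models/loras list,
    ignoring whether either side uses '/' or '\\' as the separator."""
    def canon(p: str) -> str:
        return p.replace("\\", "/")
    for name in server_list:
        if canon(name) == canon(requested):
            return name  # return the exact form ComfyUI expects
    raise ValueError(f"LoRA not found: {requested}")

# Windows-style server list, forward-slash request:
server = ["_Anima\\cosmic_kaguya_lokr_epoch4_comfyui.safetensors"]
print(normalize_lora_name("_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors", server))
```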
#### 2) Pass in the loras parameter during generation
You can directly refer to the example in this repository: [`examples/requests/generate_with_cosmic_kaguya_lora.json`](examples/requests/generate_with_cosmic_kaguya_lora.json)
```json
{
  "aspect_ratio": "3:4",
  "quality_meta_year_safe": "newest, year 2024, safe",
  "count": "1girl",
  "character": "kaguya",
  "series": "cosmic princess kaguya",
  "artist": "@spacetime kaguya",
  "appearance": "long hair, black hair, purple eyes",
  "tags": "school uniform, smile, standing, looking at viewer",
  "environment": "classroom, window, sunlight",
  "nltags": "A cheerful girl stands by the window.",
  "neg": "worst quality, low quality, blurry, bad hands, nsfw",
  "loras": [
    {
      "name": "_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors",
      "weight": 0.9
    }
  ]
}
```
#### 3) (Optional) Write sidecar metadata for LoRA to make it visible to MCP's list tool
To avoid exposing the entire `loras` directory to MCP clients indiscriminately, `list_anima_models(model_type="loras")` **returns only LoRAs that have a same-named `.json` sidecar metadata file**.
- LoRA file: `ComfyUI/models/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors`
- sidecar: `ComfyUI/models/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors.json`
For an example sidecar file, refer to:
- [`examples/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors.json`](examples/loras/_Anima/cosmic_kaguya_lokr_epoch4_comfyui.safetensors.json)
> The field structure of the sidecar JSON is completely customizable, and this project only requires it to be valid JSON.
> Note: For the MCP server to read sidecars, you also need to set `COMFYUI_MODELS_DIR` to the machine's **models root directory** (e.g., `C:\\ComfyUI\\models`; in the example above, `G:\\AIGC\\ComfyUICommon\\models`). With a remote ComfyUI, the local server usually cannot read the remote file system, so only passing the `loras` parameter directly is supported; listing is not.
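The sidecar filter can be sketched as follows (an illustration of the rule, demonstrated on a throwaway directory; the real server scans the loras directory under `COMFYUI_MODELS_DIR`):

```python
import json
import tempfile
from pathlib import Path

def list_loras_with_sidecar(loras_dir: Path) -> list[str]:
    """Return only .safetensors files that have a same-named .json sidecar."""
    found = []
    for f in sorted(loras_dir.rglob("*.safetensors")):
        sidecar = f.parent / (f.name + ".json")  # foo.safetensors.json
        if sidecar.exists():
            found.append(str(f.relative_to(loras_dir)))
    return found

# Demo on a temporary directory:
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "_Anima").mkdir()
    (root / "_Anima" / "a.safetensors").touch()
    (root / "_Anima" / "a.safetensors.json").write_text(json.dumps({"trigger": "@x"}))
    (root / "b.safetensors").touch()      # no sidecar, so it stays hidden
    print(list_loras_with_sidecar(root))  # only the _Anima entry is listed
```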
---
## Important Rules
1. **The artist must include `@`**: e.g., `@fkey, @jima`; without it the tag is almost ineffective
2. **Always specify a safety label**: `safe` / `sensitive` / `nsfw` / `explicit`
3. **Recommended artist combination**: `@fkey, @jima` (stable results)
4. **Keep resolution around 1MP**: the Anima preview model is most stable there
5. **Do not split prompts across lines**: join tags with commas on a single line
---
## Directory Structure
```
ComfyUI-AnimaTool/
├── __init__.py                 # ComfyUI extension (registers /anima/* routes)
├── executor/                   # Core executor
│   ├── anima_executor.py
│   ├── config.py
│   ├── history.py              # Generation history manager (memory + JSONL persistence)
│   └── workflow_template.json
├── knowledge/                  # Expert knowledge base
│   ├── anima_expert.md
│   ├── artist_list.md
│   └── prompt_examples.md
├── schemas/                    # Tool schemas
│   └── tool_schema_universal.json
├── servers/
│   ├── mcp_server.py           # MCP Server (native image return)
│   ├── http_server.py          # Standalone FastAPI
│   └── cli.py                  # Command-line tool
├── assets/                     # Screenshots and other resources
├── outputs/                    # Generated images (gitignored)
├── README.md
├── LICENSE
├── CHANGELOG.md
├── pyproject.toml
└── requirements.txt
```
---
## Configuration
### Environment Variables (Recommended)
All configurations can be overridden by environment variables without modifying the code:
#### Basic Configuration
| Environment Variable | Default Value | Description |
|----------------------|---------------|-------------|
| `COMFYUI_URL` | `http://127.0.0.1:8188` | ComfyUI service address |
| `ANIMATOOL_TIMEOUT` | `600` | Generation timeout (seconds) |
| `ANIMATOOL_DOWNLOAD_IMAGES` | `true` | Whether to save images locally |
| `ANIMATOOL_OUTPUT_DIR` | `./outputs` | Image output directory |
| `ANIMATOOL_TARGET_MP` | `1.0` | Target number of pixels (MP) |
| `ANIMATOOL_ROUND_TO` | `16` | Resolution alignment multiple |
#### Model Configuration
| Environment Variable | Default Value | Description |
|----------------------|---------------|-------------|
| `COMFYUI_MODELS_DIR` | *(Not set)* | ComfyUI models directory path, used for model pre-check; also used for **LoRA sidecar metadata reading** (`list_anima_models(model_type="loras")`) |
| `ANIMATOOL_UNET_NAME` | `anima-preview.safetensors` | UNET model file name |
| `ANIMATOOL_CLIP_NAME` | `qwen_3_06b_base.safetensors` | CLIP model file name |
| `ANIMATOOL_VAE_NAME` | `qwen_image_vae.safetensors` | VAE model file name |
| `ANIMATOOL_CHECK_MODELS` | `true` | Whether to enable model pre-check |
### Setting Environment Variables in Cursor MCP Configuration
```json
{
  "mcpServers": {
    "anima-tool": {
      "command": "C:\\ComfyUI\\.venv\\Scripts\\python.exe",
      "args": ["C:\\ComfyUI\\custom_nodes\\ComfyUI-AnimaTool\\servers\\mcp_server.py"],
      "env": {
        "COMFYUI_URL": "http://127.0.0.1:8188",
        "COMFYUI_MODELS_DIR": "C:\\ComfyUI\\models"
      }
    }
  }
}
```
### Model Pre-Check
After setting `COMFYUI_MODELS_DIR`, model files are automatically checked for existence before each generation:
```json
"env": {
  "COMFYUI_MODELS_DIR": "C:\\ComfyUI\\models"
}
```
If a model file is missing, a helpful message is returned:
```
Missing the following model files:
- unet: diffusion_models/anima-preview.safetensors
- clip: text_encoders/qwen_3_06b_base.safetensors
Please download from HuggingFace: https://huggingface.co/circlestone-labs/Anima
And place it in the corresponding subdirectory of ComfyUI/models
```
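The pre-check amounts to a few file-existence tests under the models root. A minimal sketch (the expected file names follow the defaults in the Prerequisites table; this is an illustration, not the tool's exact implementation):

```python
from pathlib import Path

# Default model files, as listed in the Prerequisites table.
EXPECTED = {
    "unet": "diffusion_models/anima-preview.safetensors",
    "clip": "text_encoders/qwen_3_06b_base.safetensors",
    "vae":  "vae/qwen_image_vae.safetensors",
}

def missing_models(models_dir: str) -> list[str]:
    """Return '<kind>: <relative path>' for every expected file not on disk."""
    root = Path(models_dir)
    return [f"{kind}: {rel}" for kind, rel in EXPECTED.items()
            if not (root / rel).exists()]

# An empty list means all models are in place:
# missing_models(r"C:\ComfyUI\models")
```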
**Remote ComfyUI Scenario**: If `COMFYUI_MODELS_DIR` is not set, the pre-check is skipped (because the remote file system cannot be accessed).
### Remote/Docker ComfyUI Configuration
If ComfyUI is not running on the local machine:
**Other computers on the local network**:
```bash
export COMFYUI_URL=http://192.168.1.100:8188
```
**Docker container accessing the host machine**:
```bash
export COMFYUI_URL=http://host.docker.internal:8188
```
**WSL accessing Windows**:
```bash
export COMFYUI_URL=http://$(grep nameserver /etc/resolv.conf | awk '{print $2}'):8188
```
---
## Troubleshooting
### Error: Failed to connect to ComfyUI
**Symptom**: `Connection refused` or `Failed to connect to ComfyUI`
**Troubleshooting Steps**:
1. Confirm that ComfyUI is started: visit `http://127.0.0.1:8188` in your browser
2. Confirm the port is correct: default is 8188, if it has been changed, you need to set `COMFYUI_URL`
3. Confirm that the firewall is not blocking (Windows Defender / enterprise firewall)
4. If ComfyUI is remote/Docker, set the correct `COMFYUI_URL`
### Error: H,W should be divisible by spatial_patch_size
**Symptom**: `H,W (xxx, xxx) should be divisible by spatial_patch_size 2`
**Reason**: The resolution is not a multiple of 16
**Solution**:
- Use the preset `aspect_ratio` (such as `16:9`, `9:16`, `1:1`)
- If you manually specify `width`/`height`, make sure it is a multiple of 16 (such as 512, 768, 1024)
### Error: Model file does not exist
**Symptom**: ComfyUI console reports `FileNotFoundError` or `Model not found`
**Solution**: Confirm that the following files exist:
| File | Location |
|------|----------|
| `anima-preview.safetensors` | `ComfyUI/models/diffusion_models/` |
| `qwen_3_06b_base.safetensors` | `ComfyUI/models/text_encoders/` |
| `qwen_image_vae.safetensors` | `ComfyUI/models/vae/` |
Download address: [circlestone-labs/Anima](https://huggingface.co/circlestone-labs/Anima)
### MCP Server not loaded?
1. **Check status**: Cursor Settings → MCP → anima-tool should show green
2. **View logs**: Click "Show Output" to view errors
3. **Confirm path**: Python and script paths must be **absolute paths**
4. **Confirm dependencies**: `pip install mcp` (using ComfyUI's Python environment)
5. **Restart Cursor**: Must restart after modifying the configuration
### Generation timeout?
**Symptom**: `TimeoutError` is reported after waiting for a long time
**Possible reasons**:
- ComfyUI is loading the model (slower for the first generation)
- Insufficient GPU memory leads to slow processing
- The number of steps `steps` is set too high
**Solution**:
- Increase the timeout: `export ANIMATOOL_TIMEOUT=1200`
- Reduce the number of steps: `steps: 25` (default value)
- Check the ComfyUI console for errors
### API call is stuck?
Make sure you are using the latest version; older versions had event-loop blocking issues.
---
## System Requirements
- **Python**: 3.10+
- **ComfyUI**: Latest version
- **GPU**: 8GB+ VRAM recommended (the Anima model is fairly large)
- **Dependencies**: `mcp` (MCP Server), `requests` (optional, HTTP requests)
---
## Credits
- **Anima Model**: [circlestone-labs/Anima](https://huggingface.co/circlestone-labs/Anima)
- **ComfyUI**: [comfyanonymous/ComfyUI](https://github.com/comfyanonymous/ComfyUI)
- **MCP Protocol**: [Anthropic Model Context Protocol](https://github.com/anthropics/anthropic-cookbook/tree/main/misc/model_context_protocol)
---
## License
AGPL-3.0 License - see [LICENSE](LICENSE) for details.
---
## Contributing
Issues and pull requests are welcome!
1. Fork this repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request