# sequential-thinking-slim
> **Sequential Thinking MCP server optimized for AI assistants** — Reduce context window tokens by 55.0% while keeping full functionality. Compatible with Claude, ChatGPT, Gemini, Cursor, and all MCP clients.
[npm](https://www.npmjs.com/package/sequential-thinking-slim)
[GitHub](https://github.com/mcpslim/mcpslim)
[Model Context Protocol](https://modelcontextprotocol.io)
## What is sequential-thinking-slim?
A **token-optimized** version of the Sequential Thinking [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server.
### The Problem
MCP tool schemas consume significant **context window tokens**. When AI assistants like Claude or ChatGPT load MCP tools, each tool definition takes up valuable context space.
The original `@modelcontextprotocol/server-sequential-thinking` loads **1 tool** consuming approximately **1,529 tokens** — that's space you could use for actual conversation.
### The Solution
`sequential-thinking-slim` trims the server's **1 tool down to 1 compact operation**, reducing token usage by **55.0%** — with **zero functionality loss**.
Your AI assistant sees a leaner schema. Every original capability remains available.
## Performance
| Metric | Original | Slim | Reduction |
|--------|----------|------|-----------|
| Tools | 1 | 1 | **0%** |
| Schema Tokens | 959 | 118 | **87.7%** |
| Claude Code (est.) | ~1,529 | ~688 | **~55.0%** |
> **Benchmark Info**
> - Original: `@modelcontextprotocol/server-sequential-thinking@2025.12.18`
> - Schema tokens measured with [tiktoken](https://github.com/openai/tiktoken) (cl100k_base)
> - Claude Code estimate includes ~570 tokens/tool overhead
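If you want to reproduce the schema-token figure yourself, you can tokenize the serialized tool definition directly. Below is a minimal sketch, assuming the `js-tiktoken` npm package and an illustrative (not verbatim) tool definition — the benchmark above may instead use the Python `tiktoken` library linked in the note:

```typescript
// Minimal sketch: count cl100k_base tokens in a serialized MCP tool schema.
// The tool definition below is illustrative, not the server's exact schema.
import { getEncoding } from "js-tiktoken";

const toolSchema = {
  name: "sequentialthinking",
  description: "A tool for dynamic, reflective problem-solving...",
  inputSchema: {
    type: "object",
    properties: { thought: { type: "string" } },
    required: ["thought"],
  },
};

const enc = getEncoding("cl100k_base");
console.log("schema tokens:", enc.encode(JSON.stringify(toolSchema)).length);
```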
## Quick Start
### One-Command Setup (Recommended)
```bash
# Claude Desktop - auto-configure
npx sequential-thinking-slim --setup claude
# Cursor - auto-configure
npx sequential-thinking-slim --setup cursor
# Interactive mode (choose your client)
npx sequential-thinking-slim --setup
```
Done! Restart your app to use sequential-thinking.
### CLI Setup (if you already have the CLI)
```bash
# Claude Code (creates .mcp.json in project root)
claude mcp add sequential-thinking -s project -- npx -y sequential-thinking-slim@latest
# Windows: use cmd /c wrapper
claude mcp add sequential-thinking -s project -- cmd /c npx -y sequential-thinking-slim@latest
# VS Code (Copilot, Cline, Roo Code)
code --add-mcp '{"name":"sequential-thinking","command":"npx","args":["-y","sequential-thinking-slim@latest"]}'
```
## Manual Setup
<details>
<summary>Click to expand manual configuration options</summary>
### Claude Desktop
Add to your `claude_desktop_config.json`:
| OS | Path |
|----|------|
| Windows | `%APPDATA%\Claude\claude_desktop_config.json` |
| macOS | `~/Library/Application Support/Claude/claude_desktop_config.json` |
```json
{
"mcpServers": {
"sequential-thinking": {
"command": "npx",
"args": ["-y", "sequential-thinking-slim@latest"]
}
}
}
```
### Cursor
Add to `.cursor/mcp.json` (global) or `<project>/.cursor/mcp.json` (project):
```json
{
"mcpServers": {
"sequential-thinking": {
"command": "npx",
"args": ["-y", "sequential-thinking-slim@latest"]
}
}
}
```
</details>
## How It Works
MCPSlim acts as a **transparent bridge** between AI models and the original MCP server:
```
┌──────────────────────────────────────────────────────────────┐
│                       Without MCPSlim                         │
│                                                                │
│  [AI Model] ──── reads 1 tool schema ────→ [Original MCP]     │
│             (~1,529 tokens loaded into context)               │
├──────────────────────────────────────────────────────────────┤
│                         With MCPSlim                          │
│                                                                │
│  [AI Model] ───→ [MCPSlim Bridge] ───→ [Original MCP]         │
│      │                  │                    │                │
│  Sees 1 grouped   Translates to       Executes actual         │
│  tool only        original call       tool & returns          │
│  (~688 tokens)                                                 │
└──────────────────────────────────────────────────────────────┘
```
### How Translation Works
1. **AI reads slim schema** — The 1 slimmed tool definition (~118 schema tokens) instead of the original 1 (~959)
2. **AI calls grouped tool** — e.g., `interaction({ action: "click", ... })`
3. **MCPSlim translates** — Converts to original: `browser_click({ ... })`
4. **Original MCP executes** — Real server processes the request
5. **Response returned** — Result passes back unchanged
**Zero functionality loss. 55.0% token savings.**
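For illustration, steps 2–4 can be sketched as a small translation layer. The tool and action names below (`interaction`, `browser_click`, etc.) are the same illustrative examples used above, not this server's actual tools, and this is a sketch rather than the real MCPSlim source:

```typescript
// Illustrative sketch of the translate-and-forward step (not the actual
// MCPSlim implementation; tool and action names are made up for the example).
type ToolCall = { name: string; arguments: Record<string, unknown> };

// (grouped tool, action) -> original tool name
const actionMap: Record<string, Record<string, string>> = {
  interaction: { click: "browser_click", hover: "browser_hover", drag: "browser_drag" },
};

function translate(call: ToolCall): ToolCall {
  const group = actionMap[call.name];
  if (!group) return call; // passthrough tools are forwarded unchanged

  // Pull out the "action" discriminator and forward the remaining arguments.
  const { action, ...rest } = call.arguments as { action: string } & Record<string, unknown>;
  return { name: group[action], arguments: rest };
}

// interaction({ action: "click", selector: "#submit" })
// is forwarded as browser_click({ selector: "#submit" }).
console.log(translate({ name: "interaction", arguments: { action: "click", selector: "#submit" } }));
```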
## Available Tool Groups
This server exposes no multi-tool groups: its single tool has nothing to be grouped with, so it is kept as **1 passthrough tool** with an optimized description.
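Because the tool is a passthrough, a call looks the same as it would against the upstream server. A hypothetical example — the parameter names are assumptions based on the upstream tool's schema, not something this README guarantees:

```typescript
// Hypothetical call shape; field names are assumptions, not verified here.
const call = {
  name: "sequentialthinking",
  arguments: {
    thought: "Break the task into sub-goals before estimating effort.",
    thoughtNumber: 1,
    totalThoughts: 3,
    nextThoughtNeeded: true,
  },
};
```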
## Compatibility
- ✅ **Full functionality** — All original `@modelcontextprotocol/server-sequential-thinking` features preserved
- ✅ **All AI assistants** — Works with Claude, ChatGPT, Gemini, Copilot, and any MCP client
- ✅ **Drop-in replacement** — Same capabilities, just use grouped action names
- ✅ **Tested** — Schema compatibility verified via automated tests
## FAQ
### Does this reduce functionality?
**No.** Every original tool is accessible. In servers with many tools, MCPSlim groups them semantically (e.g., `click`, `hover`, `drag` → `interaction`) and keeps every action available via the `action` parameter; in this server, the single tool is simply passed through with a slimmer description.
### Why do AI assistants need token optimization?
AI models have limited context windows. MCP tool schemas consume tokens that could be used for conversation, code, or documents. Reducing tool schema size means more room for actual work.
### Is this officially supported?
MCPSlim is a community project. It wraps official MCP servers transparently — the original server does all the real work.
## License
MIT
---
<p align="center">
Powered by <a href="https://github.com/mcpslim/mcpslim"><b>MCPSlim</b></a> — MCP Token Optimizer
<br>
<sub>Reduce AI context usage. Keep full functionality.</sub>
</p>