# MySearch Proxy
[English Guide](./README_EN.md)
`MySearch Proxy` is a general-purpose search project intended for public distribution.
It combines three originally separate pieces into one complete solution:
- A search `MCP` server installable into `Codex` / `Claude Code`
- A search `Skill` installable into `Codex` / `Claude Code` / `OpenClaw`
- A unified search `Proxy Console` for teams or self-built Agents
Project links:
- GitHub:
[skernelx/MySearch-Proxy](https://github.com/skernelx/MySearch-Proxy)
- Docker Hub:
[skernelx/mysearch-proxy](https://hub.docker.com/r/skernelx/mysearch-proxy)
- OpenClaw Hub Skill:
[clawhub.ai/skernelx/mysearch](https://clawhub.ai/skernelx/mysearch)
- Tavily / Firecrawl default recommended partner:
[skernelx/tavily-key-generator](https://github.com/skernelx/tavily-key-generator)

This is not a small tool that merely wraps Tavily.
Its core goal is to unify the three search capability lines of `Tavily`, `Firecrawl`, and `X / Social` into an installable, reusable, publicly releasable product, so that callers rarely need to care about differences between the underlying providers.
You can directly view the public page status on ClawHub:
- Skill page:
[clawhub.ai/skernelx/mysearch](https://clawhub.ai/skernelx/mysearch)
- The screenshot below was taken from the public page on `2026-03-17`
- For the latest status, always refer to the live ClawHub page

## What's in this repository
- [`mysearch/`](./mysearch/README.md)
  - The installable MySearch MCP server
  - Provides the unified `search`, `extract_url`, `research`, and `mysearch_health` tools
- [`proxy/`](./proxy/README.md)
  - The unified console and proxy layer
  - Manages the Tavily / Firecrawl key pool, downstream tokens, quota synchronization, and `/social/search`
- [`skill/README.md`](./skill/README.md)
  - Instructions for the MySearch skill for `Codex` / `Claude Code`
  - Includes a ready-to-use "let AI install MySearch automatically" prompt
- [`openclaw/README.md`](./openclaw/README.md)
  - Instructions for the standalone skill bundle for OpenClaw / ClawHub
  - Includes a ready-to-use "let AI install the OpenClaw skill automatically" prompt
- [`docs/mysearch-architecture.md`](./docs/mysearch-architecture.md)
  - Architecture and design boundary description
## What problem does this project solve?
Many search-oriented MCPs / Skills share a common set of problems:
- They only search web pages and cannot crawl the full text
- They work for news but not for documentation sites, GitHub, PDFs, pricing pages, or changelogs
- They only ship prompts and never solve real MCP installation and runtime
- They only provide a key panel and never solve how AI actually calls the service
- They only support the official interfaces, so the logic must be rewritten to connect your own gateway
- Once the X / Social provider is missing, the entire chain loses its value
`MySearch Proxy` does not stack yet another "universal interface" on top; instead, it divides the system into the following layers:
```text
tavily-key-generator
-> Provide Tavily / Firecrawl provider layer and aggregation API
MySearch Proxy
-> Provide MCP, Skill, OpenClaw Skill, Proxy Console, Social / X routing
Codex / Claude Code / OpenClaw / Self-built Agent
-> Directly reuse the same search capability
```
The default recommended combination is not "manually filling in official keys everywhere", but:
- `tavily-key-generator` is responsible for the provider source of Tavily / Firecrawl
- `MySearch Proxy` is responsible for unified search logic, MCP, Skill and Proxy Console
## Advantages compared with similar projects
### 1. Not a single-provider MCP
`MySearch` automatically selects a route based on the task type:
- Ordinary web pages, news, quick discovery: Tavily first
- Document sites, GitHub, PDF, pricing, changelog, full-text crawling: Firecrawl first
- X / Social: xAI or compatible `/social/search` first
This means it does not force every problem through a single provider.
### 2. Not just prompts, but a real runtime
This repository ships all of the following at once:
- MCP
- Codex / Claude Code skill
- OpenClaw skill
- Proxy Console
So it works for local development assistants as well as for OpenClaw and team gateways, without redoing everything when the runtime environment changes.
### 3. Not only "search", but also extraction and lightweight research
The core capabilities go beyond `search`:
- `extract_url`
  - Tries Firecrawl first and falls back to Tavily extract when Firecrawl fails or returns empty full text
- `research`
  - Integrates search, crawling, and evidence collection into a small research workflow
This is far more useful for real Agents than "search and throw back a few links".
### 4. Official interfaces first, but not bound to them
You can:
- Fill in official Tavily / Firecrawl / xAI keys directly
- Override `BASE_URL + PATH + AUTH_*` to point elsewhere
- Connect Tavily / Firecrawl to your own aggregation API
- Connect X / Social to any compatible `/social/search` endpoint
This matters a lot if you plan to release publicly and want compatibility with self-built gateways.
### 5. X / Social is an enhancement, not an installation threshold
Without official `xAI` or `grok2api`, the project still serves the following normally:
- `web`
- `news`
- `docs`
- `github`
- `pdf`
- `extract`
- `research`
Only the explicit `social` route degrades, instead of the whole system failing together.
## Where can it be used
### 1. The default search entry for local development assistants
Suitable for:
- `Codex`
- `Claude Code`
- Other local AI assistants that support MCP
Typical uses:
- Latest web search
- Technical documentation / GitHub / price page / changelog retrieval
- Single page full-text crawling
- Small research package
- X / Social sentiment supplement
### 2. OpenClaw's default search skill
Suitable for:
- Want to replace the old Tavily-only search skill
- Want to give OpenClaw a more complete web + docs + social search capability
- Want to publish to ClawHub for others to install directly
### 3. Team shared search gateway
Suitable for:
- Multiple downstream programs share a set of search entries
- Want to separate upstream key and downstream token
- Need to visually manage Tavily / Firecrawl / Social / X
### 4. Your own aggregation API / compatible gateway connection
Suitable for:
- You already have your own Tavily / Firecrawl aggregation API
- You have `grok2api` or other xAI-compatible services
- You want to unify the calling logic back into MySearch instead of scattering it across scripts
## How to connect by default
The most recommended complete combination is:
```text
tavily-key-generator
-> Provide Tavily / Firecrawl official provider or aggregation API
MySearch Proxy
-> Access Tavily / Firecrawl / X
-> Expose MCP, Skill, OpenClaw Skill, Proxy Console
Codex / Claude Code / OpenClaw / Self-built Agent
-> Use MySearch as a unified search entry
```
Why `tavily-key-generator` is recommended by default:
- It can be used as the provider layer of Tavily / Firecrawl
- You don't need to directly expose the official key in each downstream instance
- MySearch only needs to connect to the unified entry it exposes
If you already have official keys, you can also connect to the official interfaces directly; but for public deployment and team sharing, the `tavily-key-generator -> MySearch Proxy` chain is usually more stable.
## Provider support and degradation behavior
### Tavily
Mainly responsible for:
- Ordinary web search
- News retrieval
- Quick discovery and the default `research` discovery phase
Recommended connection methods:
- The official API
- Or the aggregation API / provider source provided by
[skernelx/tavily-key-generator](https://github.com/skernelx/tavily-key-generator)
When Tavily is not connected, `web`, `news`, and the discovery phase of the default `research` become significantly weaker, but `docs / github / pdf / extract` can still partially rely on Firecrawl.
### Firecrawl
Mainly responsible for:
- docs
- GitHub
- PDF
- pricing
- changelog
- Full-text crawling
Recommended connection methods:
- The official API
- Or the aggregation API / provider source provided by
[skernelx/tavily-key-generator](https://github.com/skernelx/tavily-key-generator)
When Firecrawl is not connected, `docs / github / pdf / pricing / changelog` and full-text crawl quality decline, but ordinary web pages and news can still be handled by Tavily, and `extract_url` will try to fall back to Tavily extract.
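The `extract_url` fallback described above can be pictured as a small sketch. The function shape and parameter names are assumptions for illustration; the real logic lives in `mysearch/`:

```python
# Illustrative sketch of the extract_url fallback; the provider callables
# are injected here for clarity and are assumptions, not the real API.
def extract_url(url, firecrawl_scrape, tavily_extract):
    """Try Firecrawl first; fall back to Tavily extract when Firecrawl
    fails or returns an empty full text."""
    try:
        text = firecrawl_scrape(url)
    except Exception:
        text = None
    if text:
        return {"provider": "firecrawl", "content": text}
    return {"provider": "tavily", "content": tavily_extract(url)}
```

Both failure modes (an exception and an empty result) take the same fallback path, which is why a missing Firecrawl degrades extraction quality rather than breaking it.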
### X / Social
Mainly responsible for:
- X / Social search
- Public opinion
- Developer discussion
Recommended connection method:
- Official xAI
- Or compatible `/social/search`
When X / Social is not connected:
- `mode="social"` is unavailable
- `research(include_social=true)` still returns web results and includes a `social_error` field
In other words, the absence of X does not stop the project from working as a general search MCP / Skill.
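On the caller side, a degraded `research` result might be handled like this. `social_error` comes from the behavior described above; the other key names are illustrative assumptions:

```python
# Hypothetical caller-side handling of a degraded research result;
# "web_results" is an illustrative key name, not a documented field.
def summarize_research(result: dict) -> str:
    lines = [f"{len(result.get('web_results', []))} web results"]
    if "social_error" in result:
        lines.append(f"social skipped: {result['social_error']}")
    return "; ".join(lines)
```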
## Installation method
You don't need to install every part at once; choose a path based on your goal.
### 0. Let AI read the docs and install automatically
The easiest way is to send the following sentence to `Codex` or `Claude Code`:
```text
Please open skill/README.md and skill/SKILL.md in this repository, and install MySearch for me according to the documentation; if MCP is not registered, execute install.sh in the root directory of the repository; after installation, run health and smoke test, and tell me the results.
```
If you send a GitHub repository link, you can also say directly:
```text
Please read the README and SKILL in https://github.com/skernelx/MySearch-Proxy/tree/main/skill, and help me automatically install and verify MySearch.
```
### 1. Install MySearch MCP to Codex / Claude Code
```bash
python3 -m venv venv                    # create the local virtualenv
cp mysearch/.env.example mysearch/.env  # copy the example config, then fill in your keys
./install.sh
```
Minimum configuration:
```env
MYSEARCH_TAVILY_API_KEY=tvly-...
MYSEARCH_FIRECRAWL_API_KEY=fc-...
```
If you are going to directly connect to
[tavily-key-generator](https://github.com/skernelx/tavily-key-generator),
you can change it to:
```env
MYSEARCH_TAVILY_BASE_URL=https://your-search-gateway.example.com
MYSEARCH_TAVILY_SEARCH_PATH=/api/search
MYSEARCH_TAVILY_EXTRACT_PATH=/api/extract
MYSEARCH_TAVILY_AUTH_MODE=bearer
MYSEARCH_TAVILY_API_KEY=your-token
MYSEARCH_FIRECRAWL_BASE_URL=https://your-search-gateway.example.com
MYSEARCH_FIRECRAWL_SEARCH_PATH=/firecrawl/v2/search
MYSEARCH_FIRECRAWL_SCRAPE_PATH=/firecrawl/v2/scrape
MYSEARCH_FIRECRAWL_AUTH_MODE=bearer
MYSEARCH_FIRECRAWL_API_KEY=your-token
```
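Conceptually, the `BASE_URL + PATH` overrides compose into the final request URL like this. The sketch below is an assumption about the pattern, and its fallback defaults are illustrative, not MySearch's documented defaults:

```python
# Sketch of the BASE_URL + PATH override pattern; the env names mirror the
# .env keys above, while the fallback defaults here are assumptions.
import os

def tavily_search_url() -> str:
    base = os.environ.get("MYSEARCH_TAVILY_BASE_URL", "https://api.tavily.com")
    path = os.environ.get("MYSEARCH_TAVILY_SEARCH_PATH", "/search")
    return base.rstrip("/") + path
```

This is why pointing both provider blocks at the same gateway host works: only the path segments distinguish the Tavily-style and Firecrawl-style routes.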
`install.sh` will:
1. Install `mysearch/requirements.txt`
2. Automatically detect and register `Claude Code`
3. Automatically detect and register `Codex`
4. Inject `MYSEARCH_*` in `mysearch/.env` into MCP configuration
The default registered is the local `stdio` MCP.
If you want to additionally expose a remote `streamableHTTP` entry, you can start it separately:
```bash
./venv/bin/python -m mysearch \
  --transport streamable-http \
  --host 0.0.0.0 \
  --port 8000 \
  --streamable-http-path /mcp
```
Default remote endpoint:
```text
http://127.0.0.1:8000/mcp
```
Two installation paths need to be distinguished clearly here:
- Local `stdio`
  - Suitable for installing directly into `Codex` / `Claude Code` on the current machine
  - Just run `./install.sh`
- Remote `streamableHTTP`
  - Suitable for running `MySearch` on a server and letting other clients access it via URL
  - Clients do not need to run `./install.sh` locally
To connect `Codex` to a remote `MySearch`, the following registration has been tested to work:
```bash
codex mcp add mysearch --url http://127.0.0.1:8000/mcp
codex mcp get mysearch
```
If there is also a reverse proxy or Bearer Token in front of your remote entry:
```bash
export MYSEARCH_MCP_BEARER_TOKEN=your-token
codex mcp add mysearch \
  --url https://mysearch.example.com/mcp \
  --bearer-token-env-var MYSEARCH_MCP_BEARER_TOKEN
codex mcp get mysearch
```
Notes:
- Codex's `--url` connection method corresponds to `streamableHTTP`
- These commands have been tested locally
- If `Claude Code` does not yet support URL-based MCP configuration, keep using the default `stdio` installation path
- `OpenClaw` uses the skill bundle in the `openclaw/` directory and does not depend on the remote `streamableHTTP` entry here
### 2. Install Codex / Claude Code skill
If you want AI to not only "see an MCP", but also know how to call it, install the skill:
```bash
bash skill/scripts/install_codex_skill.sh
```
If you want to overwrite the old version:
```bash
bash skill/scripts/install_codex_skill.sh --force
```
A better entry point for distributing to others, or handing directly to AI, is:
- [skill/README.md](./skill/README.md)
### 3. Install OpenClaw skill
If you want AI to directly help you install OpenClaw skill, the easiest way to say it is:
```text
Please open openclaw/README.md and openclaw/SKILL.md in this repository, and install MySearch OpenClaw skill for me according to the documentation; if it is a local installation, copy it to ~/.openclaw/skills/mysearch, with .env, and then run health verification, and tell me the results.
```
If you send a GitHub link, you can also say directly:
```text
Please read the README and SKILL in https://github.com/skernelx/MySearch-Proxy/tree/main/openclaw, and help me automatically install and verify MySearch OpenClaw skill.
```
Start with the public page:
- [clawhub.ai/skernelx/mysearch](https://clawhub.ai/skernelx/mysearch)
For the general ClawHub CLI installation method, defer to the official documentation; at the time of writing it reads:
```bash
clawhub search "mysearch"
clawhub install <skill-slug>
```
If you want to install from a local bundle:
```bash
cp openclaw/.env.example openclaw/.env
bash openclaw/scripts/install_openclaw_skill.sh \
  --install-to ~/.openclaw/skills/mysearch \
  --copy-env openclaw/.env
```
A better entry point for distributing to others, or handing directly to AI, is:
- [openclaw/README.md](./openclaw/README.md)
### 4. Deploy Proxy Console
Default public image:
- Docker Hub:
[skernelx/mysearch-proxy](https://hub.docker.com/r/skernelx/mysearch-proxy)
- Pull address:
`docker pull skernelx/mysearch-proxy:latest`
```bash
cd proxy
docker compose up -d
```
Or:
```bash
docker run -d \
  --name mysearch-proxy \
  --restart unless-stopped \
  -p 9874:9874 \
  -e ADMIN_PASSWORD=your-admin-password \
  -v $(pwd)/mysearch-proxy-data:/app/data \
  skernelx/mysearch-proxy:latest
```
Access after startup:
```text
http://localhost:9874
```

## How to configure X / Social
### Official xAI mode
```env
MYSEARCH_XAI_BASE_URL=https://api.x.ai/v1
MYSEARCH_XAI_RESPONSES_PATH=/responses
MYSEARCH_XAI_SEARCH_MODE=official
MYSEARCH_XAI_API_KEY=xai-...
```
### Compatible / custom `/social/search` mode
```env
MYSEARCH_XAI_BASE_URL=https://media.example.com/v1
MYSEARCH_XAI_SOCIAL_BASE_URL=https://your-social-gateway.example.com
MYSEARCH_XAI_SEARCH_MODE=compatible
MYSEARCH_XAI_API_KEY=your-social-gateway-token
```
Notes:
- `MYSEARCH_XAI_BASE_URL` points to the model or `/responses` gateway
- `MYSEARCH_XAI_SOCIAL_BASE_URL` points to the social gateway root address
- MySearch automatically appends `/social/search` by default
If your social side comes from `grok2api`, `proxy/` can also connect directly to its backend management interface, automatically inherit `app.api_key`, and read the token status.
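For a compatible gateway, the default "append `/social/search`" behavior means a request is built roughly like this. The JSON payload shape is an assumption for illustration, not the project's documented schema:

```python
# Sketch of building (not sending) a request against a compatible
# /social/search endpoint; the {"query": ...} payload is an assumption.
import json
import urllib.request

def build_social_request(base_url: str, token: str, query: str) -> urllib.request.Request:
    url = base_url.rstrip("/") + "/social/search"
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
```

Note how only the root address comes from `MYSEARCH_XAI_SOCIAL_BASE_URL`; the `/social/search` suffix is fixed, which is what makes any gateway exposing that path compatible.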
## Quick Acceptance
After MCP installation:
```bash
claude mcp list
codex mcp list
codex mcp get mysearch
```
Local smoke test:
```bash
python skill/scripts/check_mysearch.py --health-only
python skill/scripts/check_mysearch.py --web-query "OpenAI latest announcements"
python skill/scripts/check_mysearch.py --docs-query "OpenAI Responses API docs"
```
If you have configured X / Social:
```bash
python skill/scripts/check_mysearch.py --social-query "Model Context Protocol"
```
OpenClaw bundle acceptance:
```bash
python3 openclaw/scripts/mysearch_openclaw.py health
```
## What happens when a provider is missing
### No `grok2api` or official `xAI`
The project remains usable. The following still work normally:
- `web`
- `news`
- `docs`
- `github`
- `pdf`
- `extract`
- `research`
Only requests that explicitly depend on `social` will be downgraded.
### No official Tavily / Firecrawl key
The default suggestion is not to give up, but to connect first through:
- [skernelx/tavily-key-generator](https://github.com/skernelx/tavily-key-generator)
In other words, this project supports both the official interfaces and your own aggregation API out of the box.
## Subdirectory Documentation
- Root repository description:
[README.md](./README.md)
- MCP Documentation:
[mysearch/README.md](./mysearch/README.md)
- Skill Documentation:
[skill/README.md](./skill/README.md)
- OpenClaw Skill Documentation:
[openclaw/README.md](./openclaw/README.md)
- MCP English:
[mysearch/README_EN.md](./mysearch/README_EN.md)
- Proxy Documentation:
[proxy/README.md](./proxy/README.md)
- Proxy English:
[proxy/README_EN.md](./proxy/README_EN.md)
- Architecture Documentation:
[docs/mysearch-architecture.md](./docs/mysearch-architecture.md)
## Who this repository is for
This project is a good fit if you:
- Want to give `Codex` / `Claude Code` a default search MCP more stable than any single search source
- Want to provide OpenClaw with a publicly available, installable, auditable search skill
- Want to manage Tavily, Firecrawl, and X / Social from the same console
- Want downstream consumers to go through your own aggregation API by default, instead of exposing official keys to all of them
If you only need a single script to check a webpage, this repository may be heavier than necessary.
If you need an installable, publishable, reusable general-purpose search infrastructure, it is built for exactly that scenario.