# Edison Scientific MCP Server
A Model Context Protocol (MCP) server for interacting with the [Edison Scientific platform](https://platform.futurehouse.org/) (formerly FutureHouse). This server provides simple, LLM-tailored tools to access Edison Scientific's AI agents for scientific research, chemistry tasks, and literature search.
## Features
- **Chemistry Agent (PHOENIX/MOLECULES)**: Synthesis planning, molecule design, and cheminformatics
- **Quick Search Agent (CROW/LITERATURE)**: Concise scientific answers with citations
- **Precedent Search Agent (OWL/PRECEDENT)**: Determine if anyone has done something in science
- **Deep Search Agent (FALCON/LITERATURE)**: Long reports with many sources for literature reviews
- **Data Analysis Agent (FINCH/ANALYSIS)**: Analyze datasets, perform statistical analysis, and generate insights
- **Task Continuation**: Continue previous tasks with follow-up questions
- **Multiple Transports**: Support for stdio, HTTP, and SSE transports
## Tools TL;DR
| Tool Name | Model | Task Type | Description |
|-----------|-------|-----------|-------------|
| `edison_chem_agent` | PHOENIX/MOLECULES | Chemistry Tasks | Synthesis planning, molecule design, and cheminformatics analysis |
| `edison_quick_search_agent` | CROW/LITERATURE | Concise Search | Produces succinct answers citing scientific data sources |
| `edison_precedent_search_agent` | OWL/PRECEDENT | Precedent Search | Determines if anyone has done something in science |
| `edison_deep_search_agent` | FALCON/LITERATURE | Deep Search | Produces long reports with many sources for literature reviews |
| `edison_data_analysis_agent` | FINCH/ANALYSIS | Data Analysis | Analyze datasets, perform statistical analysis, and generate insights |
| `edison_continue_task` | All | Task Continuation | Continue a previous task with a follow-up question |
## Installation
```bash
# Install using uv (recommended)
uv add edison-mcp
# Or install using pip
pip install edison-mcp
```
## Configuration
### Get Your API Key
Visit [https://platform.futurehouse.org/](https://platform.futurehouse.org/) (Edison Scientific, formerly FutureHouse) to:
1. Create an account or log in
2. Navigate to your user profile
3. Generate an API key
### Environment Variables
Set your API key:
```bash
export EDISON_API_KEY="your_api_key_here"
```
For backward compatibility, `FUTUREHOUSE_API_KEY` is also supported:
```bash
export FUTUREHOUSE_API_KEY="your_api_key_here"
```
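The fallback order can be sketched as follows. This is a hypothetical helper illustrating the documented behavior, not the server's actual code:

```python
import os

def resolve_api_key() -> str:
    """Prefer EDISON_API_KEY; fall back to the legacy FUTUREHOUSE_API_KEY."""
    key = os.environ.get("EDISON_API_KEY") or os.environ.get("FUTUREHOUSE_API_KEY")
    if not key:
        raise RuntimeError(
            "Set EDISON_API_KEY (or the legacy FUTUREHOUSE_API_KEY) "
            "before starting the server."
        )
    return key
```

If both variables are set, `EDISON_API_KEY` wins.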
### MCP Client Configuration
Add to your MCP client configuration file:
#### stdio transport (recommended)
```json
{
  "mcpServers": {
    "edison-mcp": {
      "command": "uvx",
      "args": ["edison-mcp@latest", "stdio"],
      "env": {
        "EDISON_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
#### HTTP transport
```json
{
  "mcpServers": {
    "edison-mcp": {
      "command": "uvx",
      "args": ["edison-mcp@latest", "server"],
      "env": {
        "EDISON_API_KEY": "your_api_key_here"
      }
    }
  }
}
```
## Available Tools
### `edison_chem_agent`
Request PHOENIX model for chemistry tasks: synthesis planning, novel molecule design, and cheminformatics analysis.
**Parameters:**
- `query` (string): The chemistry question or task to submit
**Example:**
```text
Use edison_chem_agent with query "Show three examples of amide coupling reactions"
```
### `edison_quick_search_agent`
Request CROW model for concise scientific search: produces succinct answers citing scientific data sources.
**Parameters:**
- `query` (string): The scientific question to submit
**Example:**
```text
Use edison_quick_search_agent with query "What causes age-related macular degeneration?"
```
### `edison_precedent_search_agent`
Request OWL model for precedent search: determines if anyone has done something in science.
**Parameters:**
- `query` (string): The precedent question to submit
**Example:**
```text
Use edison_precedent_search_agent with query "Has anyone used CRISPR for malaria treatment?"
```
### `edison_deep_search_agent`
Request FALCON model for deep search: produces long reports with many sources for literature reviews.
**Parameters:**
- `query` (string): The literature review question to submit
**Example:**
```text
Use edison_deep_search_agent with query "What are the most effective treatments for Ulcerative Colitis?"
```
### `edison_data_analysis_agent`
Request FINCH model for data analysis tasks: analyze datasets, perform statistical analysis, and generate insights.
**Parameters:**
- `query` (string): The data analysis question or task to submit
**Example:**
```text
Use edison_data_analysis_agent with query "Analyze this dataset and identify key trends in the experimental data"
```
### `edison_continue_task`
Continue a previous task with a follow-up question.
**Parameters:**
- `previous_task_id` (string): ID of the previous task to continue
- `query` (string): Follow-up question or task
- `job_name` (string): Name of the job (phoenix/molecules, crow/literature, owl/precedent, falcon/literature, or finch/analysis)
**Example:**
```text
Use edison_continue_task with:
- previous_task_id: "task_123"
- query: "Tell me more about the third option"
- job_name: "phoenix"
```
## Usage Examples
### Chemistry Task
```text
Use edison_chem_agent with query:
"Propose 3 novel compounds that could inhibit DENND1A and include their SMILES notation"
```
### Scientific Literature Search
```text
Use edison_quick_search_agent with query:
"How compelling is genetic evidence for targeting PTH1R in small cell lung cancer?"
```
### Precedent Research
```text
Use edison_precedent_search_agent with query:
"Has anyone developed efficient non-CRISPR methods for modifying DNA?"
```
### Literature Review
```text
Use edison_deep_search_agent with query:
"What is the latest research on the physiological benefits of coffee consumption?"
```
### Data Analysis
```text
Use edison_data_analysis_agent with query:
"Analyze the correlation between variables in this dataset and suggest potential causal relationships"
```
### Task Continuation
```text
# First, submit a task
Use edison_quick_search_agent with query:
"What are the main causes of Alzheimer's disease?"
# Then continue with a follow-up using the task_id from the response
Use edison_continue_task with:
- previous_task_id: "task_from_previous_response"
- query: "What are the most promising treatments for these causes?"
- job_name: "crow"
```
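A continuation request boils down to pairing the follow-up query with the previous task's ID and a job name. The sketch below shows the argument shape described above; the helper and its validation set are illustrative, not the server's actual implementation:

```python
# Job names accepted by edison_continue_task (legacy and canonical forms).
VALID_JOB_NAMES = {
    "phoenix", "molecules",
    "crow", "falcon", "literature",
    "owl", "precedent",
    "finch", "analysis",
}

def build_continuation_args(previous_task_id: str, query: str, job_name: str) -> dict:
    """Assemble the arguments edison_continue_task expects (illustrative shape)."""
    name = job_name.lower()
    if name not in VALID_JOB_NAMES:
        raise ValueError(f"Unknown job name: {job_name!r}")
    return {
        "previous_task_id": previous_task_id,
        "query": query,
        "job_name": name,
    }
```

The task ID comes from the response of the initial call, so a continuation always requires one completed task first.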
## Development
### Running Locally
```bash
# Clone the repository
git clone <repository-url>
cd edison-mcp
# Install dependencies
uv sync
# Run with stdio transport
uv run python3 -m edison_mcp.server stdio
# Run with HTTP transport
uv run python3 -m edison_mcp.server main --host 0.0.0.0 --port 3011
```
### Testing
```bash
# Install development dependencies
uv sync --group dev
# Run tests
uv run pytest
# Run with coverage
uv run pytest --cov=edison_mcp
```
## API Reference
For more information about the Edison Scientific platform (formerly FutureHouse) and available models:
- **Platform Home**: [https://platform.futurehouse.org/](https://platform.futurehouse.org/)
- **Models Overview**: [https://platform.futurehouse.org/models](https://platform.futurehouse.org/models)
- **API Documentation**: [Edison Scientific Cookbook](https://futurehouse.gitbook.io/futurehouse-cookbook/futurehouse-client#quickstart)
## Edison Scientific Platform Models (formerly FutureHouse)
**Note:** The platform has introduced new canonical model names while maintaining backward compatibility with legacy names:
- **LITERATURE** (formerly CROW/FALCON) - Built with PaperQA3 (upgraded from PaperQA2)
- **PRECEDENT** (formerly OWL) - Built with PaperQA3 precedent mode
- **MOLECULES** (same as PHOENIX) - Chemistry tasks
- **ANALYSIS** (same as FINCH) - Data analysis tasks
Both old and new names are supported for backward compatibility.
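The legacy-to-canonical mapping above can be expressed as a simple lookup. This is an illustrative sketch of the naming convention, not code from the server:

```python
# Legacy model names mapped to the new canonical names (from the list above).
LEGACY_TO_CANONICAL = {
    "CROW": "LITERATURE",
    "FALCON": "LITERATURE",
    "OWL": "PRECEDENT",
    "PHOENIX": "MOLECULES",
    "FINCH": "ANALYSIS",
}

def canonical_model_name(name: str) -> str:
    """Accept a legacy or canonical model name; return the canonical form."""
    upper = name.upper()
    return LEGACY_TO_CANONICAL.get(upper, upper)
```

Note that the mapping is lossy in one direction: both CROW and FALCON map to LITERATURE, so the canonical name alone does not distinguish concise search from deep search.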
### CROW / LITERATURE (`JobNames.CROW` / `JobNames.LITERATURE`)
**Job Name:** `job-futurehouse-paperqa3`
**Task Type:** Concise Search
**Description:** Produces a succinct answer citing scientific data sources, good for API calls and specific questions. Built with PaperQA3.
**Sample Queries:**
- What are some likely mechanisms by which mutations near the HTRA1 locus in humans might be causal for age-related macular degeneration?
- How might you capture electron transfer effects using classical force fields for molecular dynamics simulations of protein-protein interactions?
- How compelling is genetic evidence for targeting PTH1R in small cell lung cancer?
- What factors limit the wavelengths of light detectable by mammalian eyes?
**LLMs (priority order):**
- `openai/gpt-4.1-2025-04-14`
- `anthropic/claude-3-7-sonnet-20250219`
- `anthropic/claude-3-5-sonnet-20241022`
- `openai/gpt-4o-2024-11-20`
**Tools the agent uses:**
- `complete` - Terminate using the last proposed answer. Do not invoke this tool in parallel with other tools or itself.
- `open_targets_search` - Search for disease-target associations from OpenTargets database. Adds new disease-target association data to the state and returns top search results.
- `collect_cited_papers_in_evidence` - Collect papers by traversing the citations of relevant papers to increase the paper count. This tool has no effect if called when paper count or relevant papers are zero. This tool will find papers that are likely to lead to more relevant evidence.
- `gather_evidence` - Gather evidence from previous papers, clinical trials, and disease-target data given a specific question. Each "evidence" is a synthesized summary from the raw sources, which can then be used in gen_answer.
- `clinical_trials_search` - Search for clinical trials, with support for repeated calls and concurrent execution. Will add new clinical trials to the state, and return metadata about the number of trials found.
- `paper_search` - Search for papers to increase the paper count. Can be called repeatedly with identical parameters to retrieve more results (max 2 repeats). This tool can be used concurrently with other tools for complementary research strategies.
- `gen_answer` - Generate an answer using current evidence. The tool may fail, indicating that better or different evidence should be found. Aim for at least five pieces of evidence from multiple sources before invoking this tool.
### FALCON / LITERATURE (`JobNames.FALCON` / `JobNames.LITERATURE`)
**Job Name:** `job-futurehouse-paperqa3`
**Task Type:** Deep Search
**Description:** Produces a long report with many sources, good for literature reviews and evaluating hypotheses. Built with PaperQA3.
**Sample Queries:**
- What is the latest research on the physiological benefits and detriments of high levels of coffee consumption?
- What genes have been most strongly implicated in causing age-related macular degeneration, and what mutations contribute to those associations?
- What have been the most empirically effective treatments for Ulcerative Colitis?
- How can RNA transcriptional history be studied in humans?
**LLMs (priority order):**
- `openai/gpt-4.1-2025-04-14`
- `anthropic/claude-3-7-sonnet-20250219`
- `anthropic/claude-3-5-sonnet-20241022`
- `openai/gpt-4o-2024-11-20`
**Tools the agent uses:**
- `complete` - Terminate using the last proposed answer. Do not invoke this tool in parallel with other tools or itself.
- `open_targets_search` - Search for disease-target associations from OpenTargets database. Adds new disease-target association data to the state and returns top search results.
- `collect_cited_papers_in_evidence` - Collect papers by traversing the citations of relevant papers to increase the paper count. This tool has no effect if called when paper count or relevant papers are zero. This tool will find papers that are likely to lead to more relevant evidence.
- `gather_evidence` - Gather evidence from previous papers, clinical trials, and disease-target data given a specific question. Each "evidence" is a synthesized summary from the raw sources, which can then be used in gen_answer.
- `clinical_trials_search` - Search for clinical trials, with support for repeated calls and concurrent execution. Will add new clinical trials to the state, and return metadata about the number of trials found.
- `paper_search` - Search for papers to increase the paper count. Can be called repeatedly with identical parameters to retrieve more results (max 2 repeats). This tool can be used concurrently with other tools for complementary research strategies.
- `gen_answer` - Generate an answer using current evidence. The tool may fail, indicating that better or different evidence should be found. Aim for at least five pieces of evidence from multiple sources before invoking this tool.
### OWL / PRECEDENT (`JobNames.OWL` / `JobNames.PRECEDENT`)
**Job Name:** `job-futurehouse-paperqa3-precedent`
**Task Type:** Precedent Search
**Description:** Formerly known as HasAnyone; good for determining whether anyone has ever done something in science. Built with PaperQA3.
**Sample Queries:**
- Has anyone developed efficient non CRISPR methods for modifying DNA?
- Has anyone studied using a RAG system to help make better diagnoses for patients?
- Has anyone ever made an all-atom autoencoder for proteins?
- Has anyone used single-molecule footprinting to examine transcription factor binding in human cells?
**LLMs (priority order):**
- `openai/gpt-4.1-2025-04-14`
- `anthropic/claude-3-7-sonnet-20250219`
- `anthropic/claude-3-5-sonnet-20241022`
- `openai/gpt-4o-2024-11-20`
**Tools the agent uses:**
- `complete` - Terminate using the last proposed answer. Do not invoke this tool in parallel with other tools or itself.
- `open_targets_search` - Search for disease-target associations from OpenTargets database. Adds new disease-target association data to the state and returns top search results.
- `collect_cited_papers_in_evidence` - Collect papers by traversing the citations of relevant papers to increase the paper count. This tool has no effect if called when paper count or relevant papers are zero. This tool will find papers that are likely to lead to more relevant evidence.
- `gather_evidence` - Gather evidence from previous papers, clinical trials, and disease-target data given a specific question. Each "evidence" is a synthesized summary from the raw sources, which can then be used in gen_answer.
- `clinical_trials_search` - Search for clinical trials, with support for repeated calls and concurrent execution. Will add new clinical trials to the state, and return metadata about the number of trials found.
- `paper_search` - Search for papers to increase the paper count. Can be called repeatedly with identical parameters to retrieve more results (max 2 repeats). This tool can be used concurrently with other tools for complementary research strategies.
- `gen_answer` - Generate an answer using current evidence. The tool may fail, indicating that better or different evidence should be found. Aim for at least five pieces of evidence from multiple sources before invoking this tool.
### PHOENIX / MOLECULES (`JobNames.PHOENIX` / `JobNames.MOLECULES`)
**Job Name:** `job-futurehouse-phoenix`
**Task Type:** Chemistry Tasks (Experimental)
**Description:** A new iteration of ChemCrow, Phoenix uses cheminformatics tools to perform chemistry tasks. Good for planning syntheses and designing new molecules.
**Sample Queries:**
- Show three examples of amide coupling reactions.
- Tell me how to synthesize safinamide & where to buy each reactant. Is it cheaper to make or buy it?
**LLMs (priority order):**
- `openai/gpt-4.1-2025-04-14`
- `anthropic/claude-3-7-sonnet-20250219`
- `anthropic/claude-3-5-sonnet-20241022`
- `openai/gpt-4o-2024-11-20`
**Tools the agent uses:**
- `submit_final_answer` - Submit final answer to the environment. Include SMILES for most discussed molecules, so that there are rendered structures. Put all SMILES or Reaction SMARTS/SMILES into XML tags of `<smiles>...</smiles>`.
- `get_cheapest_price_of_mol` - Get the N cheapest prices of a molecule from ChemSpace.
- `get_ghs` - Retrieve GHS classification of a molecule from PubChem.
- `get_ld50` - Retrieve LD50 value of a molecule from PubChem.
- `chem_wep` - Check a molecule against known databases for chemical weapons.
- `fda_approval` - Check the clintox database for FDA approval of a molecule.
- `similarity_quantifier` - Computes the Tanimoto similarity between two SMILES strings.
- `check_if_mol_is_smiles` - Check if a string is a valid SMILES.
- `query2smiles` - Converts a query to SMILES.
- `modify_mol` - Proposes small chemically accessible modifications to a compound.
- `query2name` - Converts a query to a molecule name.
- `query2cas` - Converts a query to its CAS number.
- `get_molecular_weight` - Calculate the molecular weight of a given molecule using its SMILES representation.
- `list_functional_groups` - List functional groups in a molecule given its SMILES representation.
- `reaction_info` - Get information about a reaction given the reaction smarts (including product).
- `mol_purchasable_check` - Check if a molecule is purchasable in common catalogs.
- `mol_solubility` - Predict the aqueous solubility in units of logS.
- `predict_reaction` - Predict the product of a reaction given reaction SMARTS (w/o product). Reaction SMARTS must be formatted as `reactants_smiles>reagents_smiles>`.
- `is_patented` - Returns True if the molecule is patented, False otherwise. Use this to assess novelty.
- `predict_binders` - Propose small molecule binders for a given gene symbol, up to max_binders.
- `web_search` - Search web for a keyword query and get snippets of top 3 results.
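The `predict_reaction` tool expects reaction SMARTS in the shape `reactants_smiles>reagents_smiles>`, with the product left blank. A minimal shape check for that format can be sketched as follows (a hypothetical helper; it validates only the two-`>` layout, not chemical validity):

```python
def has_predict_reaction_shape(rxn: str) -> bool:
    """Check a reaction string matches 'reactants>reagents>' (product omitted).

    The string must contain exactly two '>' separators, a non-empty
    reactant part, and an empty product part. Reagents may be empty.
    """
    parts = rxn.split(">")
    return len(parts) == 3 and bool(parts[0]) and parts[2] == ""
```

For example, `"CCO.CC(=O)O>>"` fits the expected shape, while a fully specified reaction such as `"CCO>>CCOC(C)=O"` does not, because its product part is filled in.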
### FINCH / ANALYSIS (`JobNames.FINCH` / `JobNames.ANALYSIS`)
**Job Name:** `job-futurehouse-data-analysis-crow-high`
**Task Type:** Data Analysis
**Description:** Analyze datasets, perform statistical analysis, and generate insights from data. Good for analyzing experimental results and identifying trends.
**Sample Queries:**
- Analyze this dataset and identify key trends in the experimental data
- Perform statistical analysis on the correlation between variables X and Y
- Generate insights from this experimental data and suggest potential causal relationships
- What patterns emerge from this time-series data?
**LLMs (priority order):**
- `openai/gpt-4.1-2025-04-14`
- `anthropic/claude-3-7-sonnet-20250219`
- `anthropic/claude-3-5-sonnet-20241022`
- `openai/gpt-4o-2024-11-20`
**Tools the agent uses:**
- `complete` - Terminate using the last proposed answer. Do not invoke this tool in parallel with other tools or itself.
- `gather_evidence` - Gather evidence from previous papers, clinical trials, and disease-target data given a specific question. Each "evidence" is a synthesized summary from the raw sources.
- `paper_search` - Search for papers to increase the paper count. Can be called repeatedly with identical parameters to retrieve more results.
- `gen_answer` - Generate an answer using current evidence. The tool may fail, indicating that better or different evidence should be found.
---
**Sources:**
- [Edison Scientific Platform API Documentation](https://futurehouse.gitbook.io/futurehouse-cookbook/futurehouse-client#quickstart)
- [Edison Scientific Platform Models](https://platform.futurehouse.org/models)
## Architecture
The server provides simplified, LLM-tailored access to Edison Scientific's AI agents:
- **Model-Specific Tools**: Each tool corresponds to a specific model optimized for different scientific tasks
- **Automatic Task Management**: Tasks run until completion with automatic status tracking
- **Task Continuation**: Build on previous queries by continuing tasks with follow-up questions
- **Standard Results**: All tools return consistent `EdisonResult` objects with task IDs, status, and answers
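The consistent result shape described above can be sketched as a small dataclass. The field and status names here are assumptions for illustration, not the package's actual definitions:

```python
from dataclasses import dataclass

@dataclass
class EdisonResult:
    """Sketch of the standard result shape: task ID, status, and answer."""
    task_id: str   # used as previous_task_id when continuing the task
    status: str    # e.g. "success" (assumed value)
    answer: str    # the agent's answer text

    @property
    def succeeded(self) -> bool:
        return self.status == "success"
```

Keeping `task_id` on every result is what makes `edison_continue_task` possible: a client can thread follow-up questions without any extra bookkeeping.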
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Support
For issues and questions:
- Check the [Edison Scientific Platform](https://platform.futurehouse.org/) (formerly FutureHouse) documentation
- Review the example scripts in the repository
- Open an issue on the GitHub repository
## Credits
This MCP server provides access to Edison Scientific's AI agents (formerly FutureHouse) for scientific discovery:
- **CROW/FALCON**: Built with PaperQA3 for scientific literature search
- **OWL**: Precedent search (formerly HasAnyone)
- **PHOENIX**: Chemistry agent (iteration of ChemCrow)
For more information, visit [https://platform.futurehouse.org/](https://platform.futurehouse.org/)