# MCP Server Development in 1C
Tool for creating MCP (Model Context Protocol) servers on the 1C:Enterprise platform. Allows you to develop 1C extensions that provide database data and functionality for AI assistants (chats with language models, Claude Desktop, Cursor, and other AI clients).
Detailed video about the project and its capabilities:
[VK Video](https://vkvideo.ru/video-219359576_456239024) | [YouTube](https://youtu.be/ZEla85JsfCo) | [Rutube](https://rutube.ru/video/ba1c64432d1a5b6cfd2335b83bf47071/)
The project contains a ready-made extension for 1C, which takes over all the technical routine of the MCP protocol. You only need to implement the business logic of your tools. Tools for working with 1C configuration metadata are available "out of the box".
## How it works: MCP Concept
The quality of the response of a language model (LLM) directly depends on the quality of the context provided to it. Preparing such a context manually can be laborious.
**Model Context Protocol (MCP)** is an open standard that allows the model itself to request the necessary data through special "tools" provided by your server. Thus, the context for solving the problem is collected automatically.
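As a concrete illustration, an MCP interaction is a JSON-RPC 2.0 exchange: the client first asks the server which tools exist, then calls one by name. This is a minimal sketch of the message shapes; the tool name `get_metadata` is illustrative, not necessarily a tool from this project.

```python
import json

# JSON-RPC 2.0 request an MCP client sends to discover available tools.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A possible server reply; the tool name "get_metadata" is illustrative only.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_metadata",
                "description": "Returns 1C configuration metadata",
                "inputSchema": {"type": "object", "properties": {}},
            }
        ]
    },
}

# The model then invokes a tool it selected from the list, by name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_metadata", "arguments": {}},
}

print(json.dumps(call_request, indent=2))
```

The key point is that the model drives this loop itself: it decides which tool to call and with what arguments, and the tool results become context for its answer.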
## Project components
1. **`src/1c_ext/`** - **1C Extension:** The core of the solution. Implements the MCP server and tools.
2. **`src/py_server/`** - **Python proxy:** An optional but recommended component for solving infrastructure problems.
## Quick start
### Step 1: Installing the 1C extension
Attach the ready-made, prebuilt extension file `build/MCP_Сервер.cfe` to your configuration.
### Step 2: Publishing an HTTP service
Publish the `mcp_APIBackend` HTTP service, which is located in the extension, on the web server.
> **Important:** For a direct connection of the AI client to 1C (without a proxy), the database must be published without authentication (the database credentials are "embedded" in `default.vrd`), which is insecure. A way around this is described in the "Connection options" section.
### Step 3: Connecting an AI client
Connect the MCP client (for example, Cursor) to the published HTTP service (`.../your_database/hs/mcp/`). Examples of settings for different clients are located in the [`mcp_client_settings/`](./mcp_client_settings/) folder.
## Connection options
### Option 1: Direct connection to 1C
- **How it works:** `AI client ←→ 1C HTTP service`
- **Limitations:**
- Requires publishing an HTTP service without 1C authentication.
- It is impossible to connect clients that require `stdio` transport.
- Limited compatibility with some HTTP clients due to protocol nuances.
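For a direct connection, the AI client simply POSTs JSON-RPC messages to the published endpoint. The sketch below builds (but does not send) the `initialize` request that opens an MCP session; the base URL and protocol version are placeholders you would adjust for your publication.

```python
import json
import urllib.request

# Hypothetical publication URL; substitute your own web server and database path.
BASE_URL = "http://localhost/your_database/hs/mcp"

def build_initialize_request(url: str) -> urllib.request.Request:
    """Build the JSON-RPC 'initialize' POST an MCP client sends first."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_initialize_request(BASE_URL)
print(req.full_url, req.get_method())
```

Sending this request with `urllib.request.urlopen(req)` only succeeds, of course, once the HTTP service is actually published and reachable.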
### Option 2: Connection via Python proxy (Recommended)
- **How it works:** `AI client ←→ MCP proxy (Python) ←→ 1C HTTP service`
- **Why is it needed?** The proxy solves key infrastructure problems:
- **Transport problem:** Allows you to connect clients working via `stdio`.
  - **Authentication problem:** Implements the `OAuth2` protocol (the standard authentication method in MCP) and forwards authorization to 1C via `Basic Auth`. This lets you keep 1C authentication **enabled** on the web server. The proxy supports two modes: acting on behalf of a single fixed user, or "forwarding" each user's authentication under their own credentials.
- **Proxy setup and launch:**
- Detailed instructions on requirements, installation, configuration and launch can be found in the [Python proxy documentation](./src/py_server/README.md).
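The credential-translation idea in fixed-user mode can be sketched in a few lines: whatever OAuth2 token the client presented, the proxy attaches one `Basic Auth` header when forwarding to 1C. The function and the credentials below are illustrative, not the proxy's actual code.

```python
import base64

def to_basic_auth(username: str, password: str) -> str:
    """Build the Basic Auth header value attached to requests forwarded to 1C."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Fixed-user mode: every proxied request reaches 1C under one service account.
# The credentials here are placeholders.
onec_header = to_basic_auth("mcp_service", "secret")
print(onec_header)
```

In the per-user mode, the same construction happens with each caller's own credentials instead of a fixed pair.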
### Option 3: Running a proxy in Docker
To run the proxy server in an isolated container:
```bash
# Copy the example configuration
cp .env.docker.example .env
# Edit .env (specify URL, login, password)
# Run the container
docker-compose up -d
```
> **Important:** If 1C is on the same host, use `host.docker.internal` instead of `localhost` in `MCP_ONEC_URL`.
More details in the [Python proxy documentation](./src/py_server/README.md#docker).
## Developing your own tools
Functionality is extended by adding your own tools for interacting with 1C data.
- **Step 1:** In the extension, create a new data processor and include it in the `mcp_КонтейнерыИнструментов` subsystem.
- **Step 2:** In the manager module of this data processor, implement two export methods:
```bsl
// Method for describing tools and their parameters
Процедура ДобавитьИнструменты(Инструменты) Экспорт
	// ... describe which tools this data processor provides
КонецПроцедуры

// Method for executing tool logic
Функция ВыполнитьИнструмент(ИмяИнструмента, Аргументы) Экспорт
	// ... implement the logic that the AI model will invoke
КонецФункции
```
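Whatever you register in `ДобавитьИнструменты` is ultimately exposed to the AI client as an MCP tool descriptor: a name, a description, and a JSON Schema for the arguments. The descriptor below is an illustrative shape (the tool and parameter names are invented, not from the extension), with a minimal sanity check of the kind a client performs:

```python
# An illustrative MCP tool descriptor; names and fields of the example tool
# are hypothetical, the top-level keys follow the MCP tools/list result shape.
tool_descriptor = {
    "name": "find_document",
    "description": "Finds a 1C document by its number",
    "inputSchema": {
        "type": "object",
        "properties": {
            "number": {"type": "string", "description": "Document number"},
        },
        "required": ["number"],
    },
}

def validate_descriptor(tool: dict) -> bool:
    """Minimal sanity check mirroring what an MCP client expects of a tool."""
    return (
        isinstance(tool.get("name"), str)
        and isinstance(tool.get("description"), str)
        and tool.get("inputSchema", {}).get("type") == "object"
    )

assert validate_descriptor(tool_descriptor)
```

A clear `description` and tight `inputSchema` matter in practice: they are the only signal the model has for deciding when and how to call your tool.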
A detailed development guide can be found in the [`src/1c_ext/agents.md`](./src/1c_ext/agents.md) file.
## Other MCP primitives
In addition to **Tools**, the project also supports **Resources** for providing static context (for example, files) and **Prompts** for prepared message templates.
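For orientation, these are sketches of the shapes the other two primitives take on the wire; the URIs and names are examples, not values used by this project:

```python
# A Resource entry: static context the client can read on demand.
resource = {
    "uri": "file:///docs/schema.md",
    "name": "Configuration schema",
    "mimeType": "text/markdown",
}

# A Prompt entry: a prepared message template with named arguments.
prompt = {
    "name": "summarize_document",
    "description": "Summarize a 1C document for the user",
    "arguments": [{"name": "document_ref", "required": True}],
}

print(resource["uri"], prompt["name"])
```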
## Documentation
- **[Python Proxy Documentation](./src/py_server/README.md)** — detailed description of proxy server configuration and launch.
- **[1C Development Guide](./src/1c_ext/agents.md)** — details of implementing tools, resources and prompts on the 1C side. Can be connected to code generators (AI agents).
- **[Client Settings Examples](./mcp_client_settings/)** — ready-made configurations for different AI clients.
## License
MIT License
## Support and development
The project is actively developing. Questions and suggestions for improvement are welcome via Issues.