# DingTalk Agent Development SDK
Supports integration with various MCP servers for rapid development of DingTalk AI assistants.

## Overview
This project provides a foundation for building AI assistants on DingTalk, built on the following technologies:
- OpenAI Agent SDK provides AI capabilities and reasoning
- DingTalk Streaming API enables real-time message processing
- MCP servers provide access to DingTalk organizational data and make capabilities easy to extend
- An example employee-information query Agent demonstrates the platform's capabilities
## Key Features
- **OpenAI Agent Integration**: Seamless integration with OpenAI's Agent framework
- **DingTalk Streaming Client**: Implements robust connection retry and health monitoring mechanisms
- **Message Processing Pipeline**: Well-structured message reception and processing system
- **MCP Tool Integration**: Employee query functionality via DingTalk API
- **Scalable Architecture**: Easy to add new Agents or features
## Architecture
The system consists of the following key components:
- **Streaming Client Manager**: Manages WebSocket connections to DingTalk Streaming API
- **Message Processor**: Handles incoming messages and routes them to the appropriate Agent
- **Agent Manager**: Manages OpenAI Agents and tool integrations
- **Employee Agent**: An example Agent that can query employee information
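The relationships between these components can be sketched roughly as follows. This is an illustrative outline only; the class names, method signatures, and the keyword-based routing rule are hypothetical, not the repository's actual API.

```python
class AgentManager:
    """Holds registered agents and looks them up by name."""

    def __init__(self):
        self._agents = {}

    def register(self, name, agent):
        self._agents[name] = agent

    def get(self, name):
        return self._agents[name]


class MessageProcessor:
    """Routes an incoming message to an agent held by the AgentManager."""

    def __init__(self, manager, default_agent):
        self.manager = manager
        self.default_agent = default_agent

    def route(self, text):
        # A trivial keyword rule stands in for real intent detection.
        name = "employee" if "employee" in text else self.default_agent
        return self.manager.get(name)
```

In the real system the Streaming Client Manager would feed messages into the `MessageProcessor`, which picks an agent and hands back the agent's reply.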
## Prerequisites
- Python 3.10+
- A DingTalk developer account with the required permissions
- The Client_ID and Client_Secret of your AI assistant
- Basic API call permissions enabled for the app
- An API key for LLM calls (defaults to the Qwen-max service on Alibaba Cloud Bailian; configurable)
## Installation
1. Clone the repository:
```bash
git clone git@github.com:darrenyao/dingtalk-agent-client.git
cd dingtalk-agent-client
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Create a `.env` file containing the configuration:
```
# DingTalk API Configuration
DINGTALK_CLIENT_ID=your_client_id
DINGTALK_CLIENT_SECRET=your_client_secret
# LLM API Configuration
LLM_API_KEY=your_llm_api_key
LLM_API_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1 # or your custom endpoint
LLM_API_MODEL=qwen-max # or your preferred model
```
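For reference, a `.env` file like the one above can be parsed with just the standard library. This is a minimal illustrative loader, not the mechanism the project itself uses (projects typically use a library such as `python-dotenv`); it handles `KEY=VALUE` lines and strips `#` comments, including the inline ones shown above.

```python
import os


def load_env(path=".env"):
    """Minimal .env parser: reads KEY=VALUE lines, strips '#' comments,
    stores the values in os.environ, and returns them as a dict."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if "=" in line:
                key, _, val = line.partition("=")
                values[key.strip()] = val.strip()
    os.environ.update(values)
    return values
```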
## Running the Application
Use the following command to start the application:
```bash
python main.py
```
The system will:
1. Initialize the Agent with the necessary tools
2. Connect to DingTalk's Streaming API
3. Listen for incoming messages
4. Process messages through the appropriate Agent
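The connect-then-listen loop above, including the connection retry behavior mentioned under Key Features, can be sketched as follows. The function names, retry policy, and exception type are assumptions for illustration; the actual client wraps the DingTalk Streaming SDK.

```python
import asyncio


async def run(connect, handle, retries=3, backoff=1.0):
    """Sketch of the startup loop: connect to the stream, process messages,
    and retry the connection with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            stream = await connect()          # e.g. open the Streaming API connection
            async for msg in stream:          # listen for incoming messages
                await handle(msg)             # route each one to an agent
            return
        except ConnectionError:
            await asyncio.sleep(backoff * 2 ** attempt)
    raise RuntimeError("could not connect to the streaming endpoint")
```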
## Employee Agent Example
The included `employee_agent.py` demonstrates how to:
1. Create an Agent integrated with the DingTalk organizational API
2. Use the MCP-tool to query employee information
3. Build dynamic instructions based on user context
4. Handle and return results in a conversational format
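Step 3, building dynamic instructions from user context, might look something like the sketch below. The field names and wording are illustrative assumptions, not the actual schema used by `employee_agent.py`.

```python
def build_instructions(user):
    """Compose per-user instructions for an employee-lookup agent.
    The 'name' and 'department' fields are hypothetical context keys."""
    return (
        "You are an assistant that answers employee-directory questions.\n"
        f"The current user is {user['name']} from the {user['department']} department.\n"
        "Use the employee-query tool for lookups and reply conversationally."
    )
```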
## Docker Deployment
You can also run the application using Docker:
```bash
docker-compose up -d
```
## Extending the Framework
To create a new Agent:
1. Create a new Agent file in the `app/agent` directory
2. Define the instructions and tools for the Agent
3. Register the Agent with the AgentManager
4. Update message processing to route appropriate requests to your Agent
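As a rough illustration of steps 2 and 3, a new agent module could define its instructions and tools and register itself with the manager. Again, the `Agent` and `AgentManager` shapes here are hypothetical stand-ins for the repository's actual classes.

```python
class Agent:
    """Minimal agent description: a name, instructions, and a tool list."""

    def __init__(self, name, instructions, tools=None):
        self.name = name
        self.instructions = instructions
        self.tools = tools or []


class AgentManager:
    """Keeps the set of registered agents, keyed by name."""

    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent


# Step 2: define the new agent's instructions and tools.
weather = Agent("weather", "Answer weather questions.", tools=["get_forecast"])

# Step 3: register it with the manager.
manager = AgentManager()
manager.register(weather)
```

Step 4 would then extend the message-routing logic so that weather-related requests resolve to `"weather"`.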