<div align="center" id="trendradar">
<a href="https://github.com/sansan0/TrendRadar" title="TrendRadar">
<img src="/_image/banner.webp" alt="TrendRadar Banner" width="80%">
</a>
Deploy a trending-news assistant in as little as <strong>30 seconds</strong> - say goodbye to mindless feed-scrolling and see only the news you actually care about
<a href="https://trendshift.io/repositories/14726" target="_blank"><img src="https://trendshift.io/api/badge/repositories/14726" alt="sansan0%2FTrendRadar | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
<div align="center">
**中文** | **[English](README-EN.md)**
</div>
> This project aims to be lightweight and easy to deploy
<br>
## 📑 Quick Navigation
> 💡 **Click a link below** to jump to the corresponding section. Start with '**Quick Start**' to deploy; for detailed customization, see '**Detailed Configuration**'.
<div align="center">
| | | |
|:---:|:---:|:---:|
| [🚀 **Quick Start**](#-quick-start) | [AI Intelligent Analysis](#-ai-intelligent-analysis) | [⚙️ **Detailed Configuration**](#detailed-configuration) |
| [Docker Deployment](#6-docker-deployment) | [MCP Client](#-mcp-client) | [📝 **Update Log**](#-update-log) |
| [🎯 **Core Function**](#-core-function) | [☕ **Support Projects**](#-support-projects) | [📚 **Project Related**](#-project-related) |
</div>
<br>
- Thanks to everyone who has **starred** the project - fork freely, star at will; both are the best support for the open-source spirit 😍.
<details>
<summary>👉 Click to expand: <strong>Acknowledgement List</strong> (angel-round honor roll, 🔥73+🔥 supporters)</summary>
### Early Supporters Acknowledgement
> 💡 **Special Note**:
>
> 1. **About the list**: The table below records supporters from the project's startup phase (the "angel round"). Early record-keeping was manual and cumbersome, so **omissions or incomplete entries are inevitable; none are intentional, and I hope you will understand**.
> 2. **Future Plan**: To devote my limited energy to code and feature iteration, **this list will no longer be manually maintained from now on**.
>
> Regardless of whether the names are listed, every support you provide is the foundation for TrendRadar to reach today. 🙏
### Infrastructure Support
Thanks to **GitHub** for providing free infrastructure - the biggest prerequisite for this project's convenient **one-click fork** deployment.
### Data Support
This project uses the API of the [newsnow](https://github.com/ourongxing/newsnow) project to obtain multi-platform data, and I would like to thank the author for providing the service.
After I reached out, the author said server load is not a concern - but that answer rests on goodwill and trust. Please:
- **Go to the [newsnow project](https://github.com/ourongxing/newsnow) and star it for support**
- When deploying with Docker, keep the push frequency reasonable and do not overload the free service.
### Promotion and Assistance
> Thanks to the following platforms and individuals for their recommendations (in chronological order)
- [Minor Software](https://mp.weixin.qq.com/s/fvutkJ_NPUelSW9OGK39aA) - Open-source software recommendation platform
- [LinuxDo Community](https://linux.do/) - A gathering place for technology enthusiasts
- [Ruanyifeng Weekly](https://github.com/ruanyf/weekly) - An influential weekly in the tech circle
### Audience Support
> Thanks to **those who have financially supported** us, your generosity has turned into snacks and drinks by the keyboard, accompanying the project through every iteration.
>
> **Regarding the return of "One Yuan Appreciation"**:
> With the release of version 5.0.0, the project has entered a new phase. To support the growing API costs and caffeine consumption, the "One Yuan Appreciation" channel has been reopened. Every bit of your support will be converted into tokens and motivation in the world of code. 🚀 [Support Us](#-support-the-project)
| Donor | Amount | Date | Remarks |
| :-------------------------: | :----: | :----: | :-----------------------: |
| D*5 | 1.8 * 3 | 2025.11.24 | |
| *鬼 | 1 | 2025.11.17 | |
| *超 | 10 | 2025.11.17 | |
| R*w | 10 | 2025.11.17 | This agent is awesome, brother |
| J*o | 1 | 2025.11.17 | Thanks for open-sourcing, wishing you success |
| *晨 | 8.88 | 2025.11.16 | The project is great, I'm studying and learning |
| *海 | 1 | 2025.11.15 | |
| *德 | 1.99 | 2025.11.15 | |
| *疏 | 8.8 | 2025.11.14 | Thanks for open-sourcing, the project is great, supporting |
| M*e | 10 | 2025.11.14 | Open-sourcing is not easy, thanks for your hard work |
| **柯 | 1 | 2025.11.14 | |
| *云 | 88 | 2025.11.13 | Good project, thanks for open-sourcing |
| *W | 6 | 2025.11.13 | |
| *凯 | 1 | 2025.11.13 | |
| 对*. | 1 | 2025.11.13 | Thanks for your TrendRadar |
| s*y | 1 | 2025.11.13 | |
| **翔 | 10 | 2025.11.13 | Good project, it's a pity to meet you late, thanks for open-sourcing! |
| *韦 | 9.9 | 2025.11.13 | TrendRadar is awesome, please have a cup of coffee~ |
| h*p | 5 | 2025.11.12 | Supporting China's open-source power, come on! |
| c*r | 6 | 2025.11.12 | |
| a*n | 5 | 2025.11.12 | |
| 。*c | 1 | 2025.11.12 | Thanks for sharing |
| *记 | 1 | 2025.11.11 | |
| *主 | 1 | 2025.11.10 | |
| *了 | 10 | 2025.11.09 | |
| *杰 | 5 | 2025.11.08 | |
| *点 | 8.80 | 2025.11.07 | It's not easy to develop, supporting |
| Q*Q | 6.66 | 2025.11.07 | Thanks for open-sourcing! |
| C*e | 1 | 2025.11.05 | |
| Peter Fan | 20 | 2025.10.29 | |
| M*n | 1 | 2025.10.27 | Thanks for open-sourcing |
| *许 | 8.88 | 2025.10.23 | Teacher, I'm a newbie, I've been playing around for a few days and still can't get it, seeking guidance |
| Eason | 1 | 2025.10.22 | I haven't figured it out yet, but you're doing a good thing |
| P*n | 1 | 2025.10.20 | |
| *杰 | 1 | 2025.10.19 | |
| *徐 | 1 | 2025.10.18 | |
| *志 | 1 | 2025.10.17 | |
| *😀 | 10 | 2025.10.16 | Appreciate |
| **杰 | 10 | 2025.10.16 | |
| *啸 | 10 | 2025.10.16 | |
| *纪 | 5 | 2025.10.14 | TrendRadar |
| J*d | 1 | 2025.10.14 | Thanks for your tool, it's fun... |
| *H | 1 | 2025.10.14 | |
| 那*O | 10 | 2025.10.13 | |
| *圆 | 1 | 2025.10.13 | |
| P*g | 6 | 2025.10.13 | |
| Ocean | 20 | 2025.10.12 | ...it's really great!!!even for a newbie like me... |
| **培 | 5.2 | 2025.10.2 | github-yzyf1312:open-source forever |
| *椿 | 3 | 2025.9.23 | Keep going, it's great |
| *🍍 | 10 | 2025.9.21 | |
| E*f | 1 | 2025.9.20 | |
| *记 | 1 | 2025.9.20 | |
| z*u | 2 | 2025.9.19 | |
| **昊 | 5 | 2025.9.17 | |
| *号 | 1 | 2025.9.15 | |
| T*T | 2 | 2025.9.15 | Appreciate |
| *家 | 10 | 2025.9.10 | |
| *X | 1.11 | 2025.9.3 | |
| *飙 | 20 | 2025.8.31 | From Lao Tong, thanks |
| *下 | 1 | 2025.8.30 | |
| 2*D | 88 | 2025.8.13 Afternoon | |
| 2*D | 1 | 2025.8.13 Morning | |
| S*o | 1 | 2025.8.05 | Support |
| *侠 | 10 | 2025.8.04 | |
| x*x | 2 | 2025.8.03 | trendRadar great project appreciate |
| *远 | 1 | 2025.8.01 | |
| *邪 | 5 | 2025.8.01 | |
| *梦 | 0.1 | 2025.7.30 | |
| **龙 | 10 | 2025.7.29 | Support |
</details>
<br>
## 🪄 Sponsors
<div align="center">
> **Waiting for you**
</div>
<br>
<a name="-support-the-project"></a>
### ❤️ Appreciate Us
> If TrendRadar has helped you capture value, consider giving it a boost to help it continue to evolve
>
> Any amount is welcome, and even 1 yuan is an encouragement for open-source. Feel free to leave a message when appreciating (´▽`ʃ♡ƪ)
<div align="center">
| WeChat Appreciation | Alipay Appreciation |
|:---:|:---:|
| <img src="https://cdn-1258574687.cos.ap-shanghai.myqcloud.com/img/%2F2025%2F07%2F17%2F2ae0a88d98079f7e876c2b4dc85233c6-9e8025.JPG" width="240" alt="WeChat Appreciation"> | <img src="https://cdn-1258574687.cos.ap-shanghai.myqcloud.com/img/%2F2025%2F07%2F17%2F1ed4f20ab8e35be51f8e84c94e6e239b4-fe4947.JPG" width="240" alt="Alipay Appreciation"> |
</div>
### 🤝 Secondary Development and Citation
If you have used or referenced the ideas or core code of this project in your project, **you are very welcome** to indicate the source and attach a link to this repository in your README or documentation.
This will help with the continued maintenance and development of the project, and we appreciate your respect and support! ❤️
### 💬 Communication and Feedback
- **GitHub Issues**: Suitable for specific technical issues. Please provide complete information (screenshots, error logs, etc.) to help locate the issue quickly.
- **Public Account Communication**: Comments under the relevant articles are the recommended channel. If you need to message the account directly, **liking/recommending** the article first is the best "knock on the door" - it lets me feel your sincerity (´▽`ʃ♡ƪ).
- **QQ Group Communication**: Follow the public account and reply with "**Communication Group**" to join. Whether you are an AI newbie or a hardcore developer, feel free to ask technical questions or share experience. The group runs on mutual help and inspiration: read the group announcement before joining, and describe your problem clearly with screenshots when asking. Group members will help when they have time - their hands-on experience is often faster and more comprehensive than mine 🤝
> **Friendly Reminder**:
> This project is for open-source sharing and is not a commercial product. Treating the author as a friend rather than a customer will make communication more efficient!
<div align="center">
|Public Account|
|:---:|
| <img src="_image/weixin.png" width="500" title="硅基茶水间"/> |
</div>
<br>
## 📝 Update Log
> **📌 View Latest Updates**: **[Original Repository Update Log](https://github.com/sansan0/TrendRadar?tab=readme-ov-file#-更新日志)**
- **Tip**: It is recommended to browse the historical updates below to see exactly which features each version added.
### 2026/03/28 - v6.6.0
- **HTML Report Browser Enhancement**: When opening a report in a browser, it automatically switches to a wide-screen layout. Keyword grouping and independent display areas support quick tab switching, and the search box filters news titles in real-time. The email client still displays the original narrow-screen layout.
- **Dark Mode**: One-click switching to a dark theme, automatically remembering preferences, suitable for nighttime reading.
- **One-Click News Copy**: Hover over the news serial number to copy the title and link, making it easy to share quickly.
- **Export Optimization**: Full-page and segmented screenshots are merged into a single drop-down export button; screenshots automatically revert to a clean layout.
- **Shortcut System**: Supports `W` wide-screen switching, `D` dark mode, `/` search, and `?` viewing shortcut prompts.
- **Reading Progress Bar**: The top of the page displays the reading progress in real-time.
### 2026/02/09 - mcp-v4.0.0
- **🔥 AI Message Direct Push to All Channels**: Let AI-written content be pushed to 9 channels like Feishu, DingTalk, Telegram, and email with one click. Markdown automatically adapts to each platform's format, eliminating format differences.
- **New Formatting Strategy Guide**: Added `get_channel_format_guide` tool to tell AI which format each channel supports and what limitations it has, making the generated content better formatted.
- **Intelligent Batch Sending**: Long messages are automatically split according to each channel's byte limit (Feishu 30KB, DingTalk 20KB, etc.). The configuration is read from config.yaml.
- **Fix Channel Misdetection**: ntfy is no longer misreported as "configured" due to the default address.
- **Code Reuse Optimization**: Batch processing functions directly reuse the core module of trendradar, eliminating duplicate code.
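The per-channel byte-limit splitting described above can be sketched roughly as follows (a simplified illustration, not TrendRadar's actual implementation; the limit values echo the changelog, the function name is hypothetical):

```python
def split_message(text: str, max_bytes: int) -> list[str]:
    """Split a message into chunks whose UTF-8 size stays under max_bytes.

    Splits at line boundaries so Markdown structure survives; a single
    oversized line is emitted as its own chunk.
    """
    chunks, current, current_size = [], [], 0
    for line in text.split("\n"):
        line_size = len(line.encode("utf-8")) + 1  # +1 for the newline
        if current and current_size + line_size > max_bytes:
            chunks.append("\n".join(current))
            current, current_size = [], 0
        current.append(line)
        current_size += line_size
    if current:
        chunks.append("\n".join(current))
    return chunks

# Illustrative per-channel limits taken from the changelog entry above
CHANNEL_LIMITS = {"feishu": 30 * 1024, "dingtalk": 20 * 1024}
```

In the real project these limits come from config.yaml rather than a hard-coded dict.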
<details>
<summary>👉 Click to expand: <strong>Historical Updates</strong></summary>
### 2026/03/12 - v6.5.0
- **AI Intelligent Screening System**: No need to manually set keywords! Write down the directions you care about in `ai_interests.txt` in everyday language (e.g., "I want to see AI and new energy-related news"), and AI will automatically extract tags and score each piece of news. Only push content that is truly relevant to you. If AI screening causes issues, it will automatically switch back to keyword matching, and the push will not be interrupted.
- **Different Screening Methods and Focus Directions Supported for Each Time Period**: Each time period in the Timeline can now independently set what method to use for screening and what type of news to focus on. For example: in the morning, use "tech keywords" for quick filtering, and in the evening, switch to "financial AI interest description" for in-depth screening - using the same system, different content can be viewed at different times.
- **AI Analysis Scope Independent of Push**: The data range analyzed by AI can be different from the pushed content. For example, push only sends new messages (to avoid repeated disturbance), but AI analyzes all news from the day (to see the complete trend). Each time period can also independently set the AI analysis mode.
- **AI Screening Saves Tokens Intelligently**: Already analyzed news will not consume duplicate tokens; after changing the interest description, AI automatically judges the change amplitude - small changes only update affected tags, and large changes require a full reclassification.
- **Multi-File Configuration and Tag Isolation**: Custom keyword files are placed in `config/custom/keyword/`, and AI interest files are placed in `config/custom/ai/`. Different files produce independent tags that do not interfere with each other.
- **AI Translation Precision Control**: Can control whether the hot list, RSS, and independent display areas are translated, and areas that are not enabled for display automatically skip, saving tokens.
- **Remote Storage Batch Upload**: Multiple write operations are accumulated and submitted to the cloud at once, reducing API call times.
- **Limit the Number of News Items per Group/Tag**: Control the maximum number of news items displayed per group through `max_news_per_keyword`, to prevent a single popular topic from occupying the entire push.
- **Intelligent Detection of Time Period Conflicts**: If two time periods have overlapping times, the system will automatically report an error and prompt modification to avoid unexpected behavior.
- Fix several bugs.
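Based on the description above, an `ai_interests.txt` might look like this (illustrative content only; the exact expectations are defined by the project - plain everyday language is the point):

```text
I want to see AI and new energy-related news,
especially large-model releases, chip supply chains,
and policy changes around electric vehicles.
```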
### 2026/02/09 - v6.0.0
> **Breaking Change**: Configuration file upgrade (config.yaml 2.0.0), old `push_window` and `analysis_window` configurations are no longer compatible, please refer to the new config.yaml migration.
- **Unified Scheduling System**: Added `timeline.yaml`, using a set of configurations to control "when to collect / push / AI analyze".
- **5 Predefined Templates**: `always_on` (24/7, default), `morning_evening` (morning and evening summary), `office_hours` (office hours), `night_owl` (night owl), `custom` (custom); also supports adding your own templates under `presets:`, as long as the key is not repeated, and then fill in your template name in config.yaml.
- **Flexible Time Period Configuration**: Supports workday/weekend differences, cross-midnight time periods, and per-period once deduplication.
- **Visual Configuration Editor**:
- Added `timeline.yaml` editing tab, alongside config.yaml and frequency_words.txt.
- Preset mode card selection: click to switch, automatically synchronize config.yaml's `schedule.preset`.
- Weekly timeline: 7 days × 24 hours horizontal bar, using color to distinguish push/analysis/collection status.
- Interactive controls: switches, drop-down boxes, time selectors, right-side modifications are synchronized to the left YAML.
- Weekly mapping dropdown selection: dynamically filled according to daily plans, drag and click to complete scheduling configuration.
- **AI Prompt Stability Optimization** (`ai_analysis_prompt.txt v2.0.0`):
- Format specification and independent instructions: extract line breaks/tags/serial numbers/prohibited items from JSON values as independent sections.
- JSON template simplification: field descriptions shortened to one sentence + word limit, reducing AI output format confusion.
- Remove Markdown format from system prompts, consistent with the "no Markdown" instruction.
- All JSON fields declared optional, and missing fields will not cause errors, enhancing fault tolerance.
- **New Independent Display Area AI Summary Analysis** (`ai_analysis.include_standalone`):
- New independent switch; when enabled, AI generates core summaries for each standalone source.
- AI analysis and push display decoupling: AI can independently analyze complete hot list data without enabling push display.
- Supports hot list platforms and RSS sources, including ranking/time/trajectory data.
- Trajectory analysis linked with `include_rank_timeline`: when enabled, use trajectory data for in-depth trend analysis; when disabled, based on ranking for brief judgment.
- New `standalone_summaries` JSON field (standalone source quick overview), all push channels have adapted rendering.
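A hypothetical `timeline.yaml` fragment illustrating the preset mechanism described above (only the built-in preset names, the `presets:` extension point, and `schedule.preset` come from the changelog; the inner fields are placeholders):

```yaml
# timeline.yaml - built-in presets: always_on, morning_evening,
# office_hours, night_owl, custom. Add your own under presets:
presets:
  my_schedule:        # your own key, as long as it is not already taken
    # ...time periods controlling when to collect / push / AI-analyze...

# config.yaml - then point the scheduler at your template:
# schedule:
#   preset: my_schedule
```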
### 2026/01/28 - v5.5.0
> Same functionality as the MCP module; I won't open a new repository for this small tool - it is pure front-end, so it stays here.
- Add visual configuration editor for trendradar.
</details>
### 2026/02/02 - mcp-v3.2.0
- **New Tool: read_article**: Read the full text of a single article using Jina AI Reader (Markdown format)
- **New Tool: read_articles_batch**: Batch read multiple articles (up to 5, automatic speed limit)
- **Recommended Workflow**: `search_news(query="keywords", include_url=True)` → `read_article(url=...)` to read the full text
- **Documentation Update**: README-MCP-FAQ.md and README-MCP-FAQ-EN.md added Q19-Q20 article reading related instructions
### 2026/01/10 - mcp-v3.0.0~v3.1.5
- **Breaking Change**: All tool return values unified to `{success, summary, data, error}` structure
- **Asynchronous Consistency**: All 21 tool functions use `asyncio.to_thread()` to package synchronous calls
- **MCP Resources**: Added 4 resources (platforms, rss-feeds, available-dates, keywords)
- **RSS Enhancement**: `get_latest_rss` supports multi-day query (days parameter), cross-date URL deduplication
- **Regular Expression Fix**: `get_trending_topics` supports `/pattern/` regular expression syntax and `display_name`
- **Cache Optimization**: Added `make_cache_key()` function, parameter sorting + MD5 hash to ensure consistency
- **New Tool: check_version**: Supports checking TrendRadar and MCP Server version updates simultaneously
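The unified `{success, summary, data, error}` envelope combined with the `asyncio.to_thread()` pattern can be sketched like this (a minimal illustration; the function names and sample data are hypothetical, not the MCP server's real code):

```python
import asyncio

def _fetch_platforms() -> list[str]:
    # Stand-in for a blocking, synchronous data-access call
    return ["weibo", "zhihu", "douyin"]

async def list_platforms() -> dict:
    """Wrap a synchronous call and return the unified result envelope."""
    try:
        data = await asyncio.to_thread(_fetch_platforms)
        return {"success": True,
                "summary": f"{len(data)} platforms available",
                "data": data,
                "error": None}
    except Exception as exc:
        return {"success": False, "summary": "", "data": None, "error": str(exc)}

result = asyncio.run(list_platforms())
```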
### 2026/01/23 - v5.4.0
- Added independent control function for AI analysis mode, optional follow_report | daily | current | incremental
- Added AI analysis time window control, supports custom running segments and daily frequency limits
- Added configuration file version management function
- Fixed several bugs
### 2026/01/19 - v5.3.0
> **Major Refactor: AI Module Migration to LiteLLM**
- **Unified AI Interface**: Use LiteLLM instead of manual implementation, supports 100+ AI providers
- **Simplified Configuration**: Removed `provider` field, replaced with `model: "provider/model_name"` format
- **New Features**: Automatic retry (`num_retries`), fallback models (`fallback_models`)
- **Configuration Changes**:
- `ai.provider` → removed (merged into model)
- `ai.base_url` → `ai.api_base`
- `AI_PROVIDER` environment variable → removed
- `AI_BASE_URL` environment variable → `AI_API_BASE`
- **Model Format Examples**:
- DeepSeek: `deepseek/deepseek-chat`
- OpenAI: `openai/gpt-4o`
- Gemini: `gemini/gemini-2.5-flash`
- Anthropic: `anthropic/claude-3-5-sonnet`
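A sketch of what migrating the old fields to the new LiteLLM-style `model` string might look like (a hypothetical helper for illustration; it is not shipped with the project):

```python
def migrate_ai_config(old: dict) -> dict:
    """Convert pre-v5.3.0 AI config (provider + model) to the new format."""
    new = dict(old)
    provider = new.pop("provider", None)       # ai.provider is removed
    model = new.get("model", "")
    if provider and "/" not in model:
        new["model"] = f"{provider}/{model}"   # e.g. "deepseek/deepseek-chat"
    if "base_url" in new:
        new["api_base"] = new.pop("base_url")  # ai.base_url -> ai.api_base
    return new

migrated = migrate_ai_config({"provider": "deepseek",
                              "model": "deepseek-chat",
                              "base_url": "https://api.deepseek.com"})
```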
### 2026/01/17 - v5.2.0
> See the config.yaml description for details
**🌐 AI Translation Function**
- **Multi-language Translation**: Supports translating push content into any language
- **Batch Translation**: Intelligent batch processing, reduces API call times
- **Custom Prompt Words**: Supports custom translation style
**🔧 Configuration Architecture Optimization**
- **AI Model Configuration Independent**: Analysis and translation share model configuration
- **Regional Switch Unified**: Unified management of regional display
- **Regional Sorting Customizable**: Supports custom display order of regions
**✨ AI Analysis Enhanced**
- **AI Analysis Embedded HTML**: Analysis results directly embedded into HTML reports, email notifications can be used directly
- **Rich-style AI Blocks**: Gradient blue background card layout, clear separation of analysis dimensions
- **Ranking Timeline Support**: AI can obtain precise ranking of each news at each crawl time point
- **Section Reorganization (7→4)**: Consolidated into core hotspot overview, public-opinion trends and controversy, anomalies and weak signals, and research/strategy suggestions
**🔧 Multi-model Adaptation**
- **General Parameter Pass-through**: Supports passing arbitrary advanced parameters to API
- **Gemini Adaptation**: Native parameter support, built-in security strategy relaxation
**🐛 Bug Fix**
- Fixed several known issues, improved system stability
### 2026/01/10 - v5.0.0
> **Development Interlude**:
> Tribute to the one that accompanied me for more than two years, but after renewing, it popped up `"This organization has been disabled"`
**✨ Push Content "Five Major Sections" Refactor**
This update refactors the push message into five major sections:
1. **📊 Hot List News**: Precise screening of hotspots based on your keywords.
2. **📰 RSS Subscription**: Your personalized subscription source content, supports grouping by keywords.
3. **🆕 Newly Added**: Real-time capture of new hotspots since the last run (with 🆕 label).
4. **📋 Independent Display Area**: Complete hot lists or RSS sources from specified platforms, **completely unaffected by keyword filtering**.
5. **✨ AI Analysis Section**: AI-driven deep insights, including trend overview, heat trend, and **extremely important** sentiment analysis.
**✨ AI Intelligent Analysis Push Function**
- **AI Analysis Integration**: Use large AI models to deeply analyze push content, automatically generate hotspot trend overview, keyword heat analysis, cross-platform association, potential impact assessment, etc.
- **Sentiment Tendency Analysis**: Added deep sentiment recognition, accurately capture public opinion's positive, negative, controversy, or concern emotions
- **Multi AI Provider Support**: Supports DeepSeek (default, high cost-performance), OpenAI, Google Gemini, and any OpenAI compatible interface
- **Two Push Modes**: `only_analysis` (only AI analysis), `both` (both push)
- **Custom Prompt Words**: Customize AI analysis role and output format through `config/ai_analysis_prompt.txt` file
- **Multi-dimensional Data Analysis**: AI can analyze ranking changes, heat duration, cross-platform performance, trend prediction, etc.
**📋 Independent Display Area Function**
- **Complete Hot List Display**: Complete hot list of specified platforms, displayed separately, not affected by keyword filtering
- **RSS Independent Display**: RSS source content can be displayed completely, suitable for content-limited subscription sources
- **Flexible Configuration**: Supports configuration of display platform list, RSS source list, maximum display number
**📊 Push Experience Refactor**
- **Layout Upgrade**: Redesigned and unified the statistical headers across channels and strengthened section organization for a clear message hierarchy
- **Configuration Simplification**: Optimized configuration logic for Feishu and other notification channels, easier to get started
- **Heat Trend Arrow**: Added 🔺 (up), 🔻 (down), ➖ (stable) trend indicators, intuitively display heat changes
- **General Webhook**: Supports custom Webhook URL and JSON template, easily adapt to Discord, Matrix, IFTTT, and other platforms
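A generic-webhook JSON template could work roughly like this (the `{{placeholder}}` syntax and function name are assumptions for illustration; check config.yaml for the actual template variables):

```python
import json

def render_webhook_payload(template: str, variables: dict) -> dict:
    """Fill a JSON template string with values, then parse it into a payload."""
    rendered = template
    for key, value in variables.items():
        # json.dumps escapes quotes/newlines so the result stays valid JSON
        rendered = rendered.replace("{{" + key + "}}", json.dumps(value)[1:-1])
    return json.loads(rendered)

payload = render_webhook_payload(
    '{"content": "{{title}}", "username": "TrendRadar"}',  # Discord-style body
    {"title": 'Hot topic: "AI" trends'},
)
```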
**🔧 Configuration Optimization**
- **Frequency Word Configuration Enhanced**: Added `[group alias]` syntax, supports `#` comment line, configuration clearer (thanks to [@songge8](https://github.com/sansan0/TrendRadar/issues/752) for the suggestion)
- **Environment Variable Support**: AI analysis-related configurations support environment variable coverage (`AI_API_KEY`, `AI_PROVIDER`, etc.)
> 💡 See the detailed configuration tutorial in [Let AI Analyze Hot Topics for Me](#12-让-ai-帮我分析热点)
### 2026/01/02 - v4.7.0
- **Fix RSS HTML Display**: Fixed RSS data format mismatch causing rendering issues, now correctly displayed by keyword grouping
- **Added Regular Expression Syntax**: Keyword configuration supports `/pattern/` regular expression syntax, solving English substring mis-matching issues (e.g., `ai` matching `training`)
- **Added Display Name Syntax**: Use `=> remark` to give complex regular expressions a memorable name, push message display clearer (e.g., `/\bai\b/ => AI-related`)
- **Don't Know How to Write Regular Expressions?**: README added AI-generated regular expressions guide, tell ChatGPT/Gemini/DeepSeek what you want to match, let AI help you write
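The `/pattern/ => name` syntax described above can be parsed along these lines (a simplified sketch of the idea, not the project's actual parser):

```python
import re

def parse_keyword(line: str):
    """Parse a frequency-word line into (matcher, display_name).

    Supports plain substrings, /pattern/ regexes, and an optional
    '=> remark' display name, as introduced in v4.7.0.
    """
    display = None
    if "=>" in line:
        line, display = (part.strip() for part in line.split("=>", 1))
    line = line.strip()
    if line.startswith("/") and line.endswith("/") and len(line) > 2:
        pattern = re.compile(line[1:-1], re.IGNORECASE)
        return (lambda title: bool(pattern.search(title))), display or line
    return (lambda title: line.lower() in title.lower()), display or line

matcher, name = parse_keyword(r"/\bai\b/ => AI-related")
```

Word boundaries (`\b`) are what prevent the `ai`-in-`training` false positive mentioned above.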
### 2025/12/30 - mcp-v2.0.0
- **Architecture Adjustment**: Removed TXT support, unified use of SQLite database
- **RSS Query**: Added `get_latest_rss`, `search_rss`, `get_rss_feeds_status`
- **Unified Search**: `search_news` supports `include_rss` parameter to search hot list and RSS simultaneously
### 2026/01/01 - v4.6.0
- **Fix RSS HTML Display**: Merged RSS content into hot list HTML page, displayed by source grouping
- **Added display_mode Configuration**: Supports `keyword` (grouped by keyword) and `platform` (grouped by platform) two display modes
### 2025/12/30 - v4.5.0
- **RSS Subscription Source Support**: Added RSS/Atom capture, grouped by keyword statistics (consistent with hot list format)
- **Storage Structure Refactor**: Flattened directory structure `output/{type}/{date}.db`
- **Unified Sorting Configuration**: `sort_by_position_first` affects both hot list and RSS
- **Configuration Structure Refactor**: `config.yaml` reorganized into 7 logical groups (app, report, notification, storage, platforms, rss, advanced), clearer configuration path
### 2025/12/26 - mcp-v1.2.0
**MCP Module Update - Optimized Toolset, Added Aggregation Comparison Function, Merged Redundant Tools:**
- Added `aggregate_news` tool - cross-platform news deduplication and aggregation
- Added `compare_periods` tool - period comparison analysis (week-on-week/month-on-month)
- Merged `find_similar_news` + `search_related_news_history` → `find_related_news`
- Enhanced `get_trending_topics` - added `auto_extract` mode for automatic hotspot extraction
- Fixed several bugs
- Synchronously updated README-MCP-FAQ.md Chinese and English versions (Q1-Q18)
### 2025/12/20 - v4.0.3
- Added URL standardization function, solving duplicate push issues caused by dynamic parameters (e.g., `band_rank`) on some platforms
- Fixed incremental mode detection logic, correctly identifying historical titles
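URL standardization for deduplication can be sketched as follows (`band_rank` is named in the changelog above; the rest of the volatile-parameter set is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that change between crawls without changing the article
VOLATILE_PARAMS = {"band_rank", "timestamp", "from"}  # illustrative set

def normalize_url(url: str) -> str:
    """Drop volatile query parameters so equal articles compare equal."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k not in VOLATILE_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))  # also drops the fragment

a = normalize_url("https://example.com/hot?id=42&band_rank=3")
b = normalize_url("https://example.com/hot?id=42&band_rank=7")
```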
### 2025/12/17 - v4.0.1
- StorageManager added push record proxy method
- S3 client switched to virtual-hosted style to improve compatibility (supports Tencent Cloud COS and other services)
### 2025/12/13 - mcp-v1.1.0
**MCP Module Update:**
- Adapted to v4.0.0, compatible with v3.x data
- Added storage synchronization tools: `sync_from_remote`, `get_storage_status`, `list_available_dates`
### 2025/12/13 - v4.0.0
**🎉 Major Update: Comprehensive Refactor of Storage and Core Architecture**
- **Multi-storage Backend Support**: Introduced a new storage module, supports local SQLite and remote cloud storage (S3 compatible protocol, e.g., Cloudflare R2), adapting to GitHub Actions, Docker, and local environments.
- **Database Structure Optimization**: Refactored SQLite database table structure, improving data efficiency and query capabilities.
- **Core Code Modularization**: Main program logic split into multiple modules of trendradar package, significantly improving code maintainability.
- **Enhanced Features**: Implemented date format standardization, data retention strategy, timezone configuration support, time display optimization, and fixed remote storage data persistence issues, ensuring accurate data merging.
- **Cleanup and Compatibility**: Removed most historical compatibility code, unified data storage and reading methods.
### 2025/12/03 - v3.5.0
**🎉 Core Function Enhancement**
1. **Multi-account Push Support**
- All push channels (Feishu, DingTalk, Enterprise WeChat, Telegram, ntfy, Bark, Slack) support multi-account configuration
- Use semicolon `;` to separate multiple accounts, e.g., `FEISHU_WEBHOOK_URL=url1;url2`
- Automatically verify pairing configuration (e.g., Telegram's token and chat_id) quantity consistency
2. **Push Area Configuration**
- Customize display order of regions using `display.region_order` (replaced `reverse_content_order` in v5.2.0)
- Control display of regions using `display.regions` (hot list, new hotspots, RSS, independent display area, AI analysis)
3. **Global Filtering Keywords**
- Added `[GLOBAL_FILTER]` section marker, supports global filtering of unwanted content
- Applicable scenarios: filtering ads, marketing, low-quality content, etc.
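The semicolon-separated multi-account parsing and the pairing-count check from item 1 above can be sketched like this (hypothetical helper names; the `;` separator and the Telegram token/chat_id pairing rule come from the changelog):

```python
def split_accounts(value: str) -> list[str]:
    """Split a semicolon-separated env value into individual accounts."""
    return [part.strip() for part in value.split(";") if part.strip()]

def check_pairing(tokens: str, chat_ids: str) -> list[tuple[str, str]]:
    """Pair Telegram tokens with chat_ids; counts must match."""
    token_list, chat_list = split_accounts(tokens), split_accounts(chat_ids)
    if len(token_list) != len(chat_list):
        raise ValueError(
            f"{len(token_list)} tokens but {len(chat_list)} chat_ids")
    return list(zip(token_list, chat_list))

pairs = check_pairing("token1;token2", "chat1;chat2")
```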
**🐳 Docker Double Path HTML Generation Optimization**
- **Issue Fix**: Solved `index.html` unable to synchronize to host machine in Docker environment
- **Double Path Generation**: Generate today's summary HTML in two locations
- `index.html` (project root directory): For GitHub Pages access
- `output/index.html`: Accessible through Docker Volume mounting, host machine can directly access
- **Compatibility**: Ensure Docker, GitHub Actions, and local running environments can access web reports normally
**🐳 Docker MCP Image Support**
- Added independent MCP service image `wantcat/trendradar-mcp`
- Supports Docker deployment of AI analysis function through HTTP interface (port 3333) providing service
- Dual-container architecture: news push service and MCP service run independently, can be expanded and restarted separately
- See [Docker Deployment - MCP Service](#6-docker-部署)
**🌐 Web Server Support**
- Added built-in web server, supports accessing generated reports through browser
- Control start/stop through `manage.py` command: `docker exec -it trendradar python manage.py start_webserver`
- Access address: `http://localhost:8080` (port configurable)
- Security features: static file service, directory restriction, local access
- Supports automatic startup and manual control two modes
**📖 Documentation Optimization**
- Added [How to Display Push Content?](#7-how-to-display-push-content) section: customize push style and content
- Added [When to Push?](#8-when-to-push) section: set push time period
- Added [How often to run?](#9-how-often-to-run) section: set automatic running frequency
- Added [Push to Multiple Groups/Devices](#10-push-to-multiple-groupsdevices) section: push to multiple receivers simultaneously
- Optimized configuration sections: unified "configuration location" description
- Simplified quick start configuration instructions: three core files at a glance
- Optimized [Docker Deployment](#6-docker-部署) section: added image description, recommended git clone deployment, and deployment method refactoring
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`, `config/config.yaml`
- **Multi-account Push**: New feature, default not enabled, existing single-account configuration unaffected
### 2025/11/26 - mcp-v1.0.3
**MCP Module Update:**
- Added date parsing tool resolve_date_range, solving AI model date calculation inconsistency issues
- Supports natural language date expression parsing (this week, last 7 days, last month, etc.)
- Tool count increased from 13 to 14
### 2025/11/28 - v3.4.1
**🔧 Format Optimization**
1. **Bark Push Enhancement**
- Bark now supports Markdown rendering
- Enable native Markdown format: bold, links, lists, code blocks, etc.
- Removed pure text conversion, fully utilize Bark native rendering capability
2. **Slack Format Precision**
- Use dedicated mrkdwn format to process batch content
- Improve byte size estimation accuracy (avoid message exceeding limit)
- Optimize link format: `<url|text>` and bold syntax: `*text*`
3. **Performance Improvement**
- Format conversion is completed during batch processing, avoiding secondary processing
- Accurately estimates message size, reducing the send failure rate
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`, `config.yaml`
### 2025/11/25 - v3.4.0
**🎉 Added Slack Push Support**
1. **Team Collaboration Push Channel**
- Supports Slack Incoming Webhooks (popular team collaboration tool)
- Message centralized management, suitable for team sharing hotspot information
- Supports mrkdwn format (bold, links, etc.)
2. **Multiple Deployment Methods**
- GitHub Actions: Configure `SLACK_WEBHOOK_URL` Secret
- Docker: environment variable `SLACK_WEBHOOK_URL`
- Local running: `config/config.yaml` configuration file
> 📖 **Detailed Configuration Tutorial**: [Quick Start - Slack Push](#-quick-start)
- Optimized the one-click MCP installation experience in setup-windows.bat and setup-windows-en.bat
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`, `config/config.yaml`, `.github/workflows/crawler.yml`
### 2025/11/24 - v3.3.0
**🎉 Added Bark Push Support**
1. **iOS Exclusive Push Channel**
- Supports Bark push (based on APNs, iOS platform)
- Free and open source, concise and efficient, no ad interference
- Supports both the official server and self-hosted servers
2. **Multiple Deployment Methods**
- GitHub Actions: Configure `BARK_URL` Secret
- Docker: environment variable `BARK_URL`
- Local running: `config/config.yaml` configuration file
> 📖 **Detailed Configuration Tutorial**: [Quick Start - Bark Push](#-quick-start)
**🐛 Bug Fix**
- Fixed `config.yaml` `ntfy_server_url` configuration not working issue ([#345](https://github.com/sansan0/TrendRadar/issues/345))
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`
### 2025/11/23 - v3.2.0
**🎯 Added Advanced Customization Function**
1. **Keyword Sorting Priority Configuration**
- Supports two sorting strategies: heat priority vs configuration order priority
- Satisfy different usage scenarios: hotspot tracking or personalized focus
2. **Display Quantity Precise Control**
- Global configuration: uniformly limit all keyword display quantities
- Separate configuration: use `@number` syntax to set restrictions for specific keywords
- Effectively control push length and highlight key content
> 📖 **Detailed Configuration Tutorial**: [Keyword Configuration - Advanced Configuration](#keyword-advanced-configuration)
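As an illustrative sketch of the syntax above (assuming `@number` caps how many matched items a keyword displays, as described; the keywords themselves are just placeholders):

```
AI@3
new energy@5
chip
```

Here `AI` would show at most 3 matched items and `new energy` at most 5, while `chip` would follow the global limit.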
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`, `config/config.yaml`
### 2025/11/18 - mcp-v1.0.2
**MCP Module Update:**
- Optimized today's-news queries, reducing errors where past dates were returned
### 2025/12/20 - v4.0.3
- Added URL standardization function, solving duplicate push issues caused by dynamic parameters (e.g., `band_rank`) on some platforms
- Fixed incremental mode detection logic, correctly identifying historical titles
### 2025/11/17 - v4.0.1
- StorageManager added push record proxy method
- S3 client switched to virtual-hosted style to improve compatibility (supports Tencent Cloud COS and other services)
### 2025/11/13 - mcp-v1.1.0
**MCP Module Update:**
- Adapted to v4.0.0, compatible with v3.x data
- Added storage synchronization tools: `sync_from_remote`, `get_storage_status`, `list_available_dates`
### 2025/11/13 - v4.0.0
**🎉 Major Update: Comprehensive Refactor of Storage and Core Architecture**
- **Multi-storage Backend Support**: Introduced a new storage module, supports local SQLite and remote cloud storage (S3 compatible protocol, e.g., Cloudflare R2), adapting to GitHub Actions, Docker, and local environments.
- **Database Structure Optimization**: Refactored SQLite database table structure, improving data efficiency and query capabilities.
- **Core Code Modularization**: Main program logic split into multiple modules of trendradar package, significantly improving code maintainability.
- **Enhanced Features**: Implemented date format standardization, data retention strategy, timezone configuration support, time display optimization, and fixed remote storage data persistence issues, ensuring accurate data merging.
- **Cleanup and Compatibility**: Removed most historical compatibility code, unified data storage and reading methods.
### 2025/11/03 - v3.5.0
**🎉 Core Function Enhancement**
1. **Multi-account Push Support**
- All push channels (Feishu, DingTalk, Enterprise WeChat, Telegram, ntfy, Bark, Slack) support multi-account configuration
- Use semicolon `;` to separate multiple accounts, e.g., `FEISHU_WEBHOOK_URL=url1;url2`
- Automatically verifies that paired configurations (e.g., Telegram's token and chat_id) have matching counts
2. **Push Area Configuration**
- Customize display order of regions using `display.region_order` (replaced `reverse_content_order` in v5.2.0)
- Control display of regions using `display.regions` (hot list, new hotspots, RSS, independent display area, AI analysis)
3. **Global Filtering Keywords**
- Added `[GLOBAL_FILTER]` section marker, supports global filtering of unwanted content
- Applicable scenarios: filtering ads, marketing, low-quality content, etc.
**🐳 Docker Double Path HTML Generation Optimization**
- **Issue Fix**: Solved `index.html` unable to synchronize to host machine in Docker environment
- **Double Path Generation**: Generate today's summary HTML in two locations
- `index.html` (project root directory): For GitHub Pages access
- `output/index.html`: Accessible through Docker Volume mounting, host machine can directly access
- **Compatibility**: Ensure Docker, GitHub Actions, and local running environments can access web reports normally
**🐳 Docker MCP Image Support**
- Added independent MCP service image `wantcat/trendradar-mcp`
- Supports Docker deployment of the AI analysis function, serving over an HTTP interface (port 3333)
- Dual-container architecture: the news push service and MCP service run independently and can be scaled and restarted separately
- See [Docker Deployment - MCP Service](#6-docker-deployment)
**🌐 Web Server Support**
- Added built-in web server, supports accessing generated reports through browser
- Control start/stop through `manage.py` command: `docker exec -it trendradar python manage.py start_webserver`
- Access address: `http://localhost:8080` (port configurable)
- Security features: serves static files only, restricts directory access, allows local access only
- Supports two modes: automatic startup and manual control
**📖 Documentation Optimization**
- Added [How to Display Push Content?](#7-how-to-display-push-content) section: customize push style and content
- Added [When to Push?](#8-when-to-push) section: set push time period
- Added [How often to run?](#9-how-often-to-run) section: set automatic running frequency
- Added [Push to Multiple Groups/Devices](#10-push-to-multiple-groupsdevices) section: push to multiple receivers simultaneously
- Optimized configuration sections: unified "configuration location" description
- Simplified quick start configuration instructions: three core files at a glance
- Optimized [Docker Deployment](#6-docker-deployment) section: added image description, recommended git clone deployment, and deployment method refactoring
**🔧 Upgrade Instructions**:
- **GitHub Fork Users**: Update `main.py`, `config/config.yaml`
- **Multi-account Push**: New feature, default not enabled, existing single-account configuration unaffected
### 2025/11/22 - v3.1.1
- **Fix crash caused by data exception**: Resolves the `'float' object has no attribute 'lower'` error encountered by some users in the GitHub Actions environment
- Added dual protection mechanism: Filters out invalid titles (None, float, empty string) during data acquisition and adds type checking at function calls
- Improved system stability to ensure normal operation even when data source returns abnormal format
**Upgrade Instructions** (GitHub Fork Users):
- Must update: `main.py`
- Recommended to use minor version upgrade method: copy and replace the above file
### 2025/11/20 - v3.1.0
- **Added personal WeChat push support**: Enterprise WeChat applications can push to personal WeChat without installing the Enterprise WeChat APP
- Supports two message formats: `markdown` (Enterprise WeChat group robot) and `text` (personal WeChat application)
- Added `WEWORK_MSG_TYPE` environment variable configuration, supporting multiple deployment methods such as GitHub Actions, Docker, and docker-compose
- `text` mode automatically clears Markdown syntax to provide pure text push effect
- See the "Personal WeChat Push" configuration instructions in the Quick Start for details
**Upgrade Instructions** (GitHub Fork Users):
- Must update: `main.py`, `config/config.yaml`
- Optional update: `.github/workflows/crawler.yml` (if using GitHub Actions deployment)
- Recommended to use minor version upgrade method: copy and replace the above files
### 2025/11/12 - v3.0.5
- Fixed email sending SSL/TLS port configuration logic error
- Optimized email service providers (QQ/163/126) to use port 465 (SSL) by default
- **Added Docker environment variable support**: Core configuration items (`enable_crawler`, `report_mode`, `push_window`, etc.) support overriding through environment variables, solving the issue of NAS users' modified configuration files not taking effect (see the [🐳 Docker Deployment](#-docker-deployment) section for details)
### 2025/10/26 - mcp-v1.0.1
**MCP Module Update:**
- Fixed date query parameter passing error
- Unified time parameter format for all tools
### 2025/10/31 - v3.0.4
- Solved the error caused by Feishu pushing content that is too long, and implemented batched pushing
### 2025/10/23 - v3.0.3
- Expanded the display range of ntfy error information
### 2025/10/21 - v3.0.2
- Fixed ntfy push encoding issue
### 2025/10/20 - v3.0.0
**Major Update - AI Analysis Function Launched**
- **Core Features**:
- Added AI analysis server based on MCP (Model Context Protocol)
- Supports 17 intelligent analysis tools: basic query, intelligent retrieval, advanced analysis, RSS query, system management
- Natural language interaction: query and analyze news data through conversation
- Multi-client support: Claude Desktop, Cherry Studio, Cursor, Cline, etc.
- **Analysis Capabilities**:
- Topic trend analysis (heat tracking, life cycle, explosion detection, trend prediction)
- Data insights (platform comparison, activity statistics, keyword co-occurrence)
- Sentiment analysis, similar news search, intelligent summary generation
- Historical related news retrieval, multi-mode search
- **Update Instructions**:
- This is an independent AI analysis function that does not affect existing push functions
- Can be used selectively without upgrading existing deployment
### 2025/10/15 - v2.4.4
- **Update Content**:
- Fixed ntfy push encoding issue + 1
- Fixed push time window judgment issue
- **Update Instructions**:
- Recommended to use [minor version upgrade]
### 2025/10/10 - v2.4.3
> Thanks to [nidaye996](https://github.com/sansan0/TrendRadar/issues/98) for discovering the experience issue
- **Update Content**:
- Refactored "silent push mode" to "push time window control" to improve functional understanding
- Clearly defined push time window as an optional additional function, which can be used with three push modes
- Improved comments and documentation descriptions to make functional positioning clearer
- **Update Instructions**:
- This is just a refactoring, and you don't need to upgrade
### 2025/10/8 - v2.4.2
- **Update Content**:
- Fixed ntfy push encoding issue
- Fixed configuration file missing issue
- Optimized ntfy push effect
- Added GitHub page image segment export function
- **Update Instructions**:
- Recommended to use [major version update]
### 2025/10/2 - v2.4.0
**Added ntfy push notification**
- **Core Features**:
- Supports ntfy.sh public service and self-hosted server
- **Usage Scenarios**:
- Suitable for users who pursue privacy (support self-hosting)
- Cross-platform push (iOS, Android, Desktop, Web)
- No need to register an account (public server)
- Open-source and free (MIT protocol)
- **Update Instructions**:
- Recommended to use [major version update]
### 2025/09/26 - v2.3.2
- Corrected the issue of email notification configuration check being omitted ([#88](https://github.com/sansan0/TrendRadar/issues/88))
**Fix Instructions**:
- Solved the issue where the system still prompts "no webhook configured" even if the email notification is correctly configured
### 2025/09/22 - v2.3.1
- **Added email push function**, supporting sending hot news reports to email
- **Intelligent SMTP identification**: Automatically identifies 10+ email service providers' configurations, such as Gmail, QQ mailbox, Outlook, and NetEase mailbox
- **Exquisite HTML format**: Email content uses the same HTML format as the web version, with a beautiful layout and mobile adaptation
- **Batch sending support**: Supports multiple recipients, separated by commas, and sends to multiple people at once
- **Custom SMTP**: Customizable SMTP server and port
- Fixed Docker build network connection issue
**Usage Instructions**:
- Applicable scenarios: Suitable for users who need email archiving, team sharing, and timing reports
- Supported email: Gmail, QQ mailbox, Outlook/Hotmail, 163/126 mailbox, Sina mailbox, Sohu mailbox, etc.
**Update Instructions**:
- This update contains quite a few changes; if you want to upgrade, the [major version upgrade] method is recommended
### 2025/09/17 - v2.2.0
- Added one-click save news picture function, making it easy to share hotspots
**Usage Instructions**:
- Applicable scenarios: When you enable the web version function (GitHub Pages) according to the tutorial
- Usage method: Open the webpage link with your mobile phone or computer, click the "Save as Picture" button at the top of the page
- Actual effect: The system will automatically make the current news report into a beautiful picture and save it to your mobile phone album or computer desktop
- Convenient sharing: You can directly send the picture to friends, post it on Moments, or share it to the work group, letting others see the important information you found
### 2025/09/13 - v2.1.2
- Solved the issue of DingTalk push capacity limit causing news push failure (adopted batched push)
### 2025/09/04 - v2.1.1
- Fixed the issue of Docker not running normally on certain architectures
- Officially released the official Docker image `wantcat/trendradar`, supporting multiple architectures
- Optimized Docker deployment process, no need for local build to quickly use
### 2025/08/30 - v2.1.0
**Core Improvements**:
- **Push logic optimization**: Changed from "pushing every execution" to "controllable pushing within a time window"
- **Time window control**: Can set push time range to avoid disturbing during non-working hours
- **Push frequency optional**: Supports single push or multiple pushes within a time period
**Update Instructions**:
- This function is disabled by default and needs to be manually enabled in `config.yaml`
- Upgrade requires updating both `main.py` and `config.yaml` files
### 2025/08/27 - v2.0.4
- This version is not a functional fix but an important reminder
- Please properly keep webhooks confidential, do not expose them publicly, and do not put them in `config.yaml`
- If you have exposed webhooks or put them in `config.yaml`, it is recommended to delete and regenerate them
### 2025/08/06 - v2.0.3
- Optimized the GitHub Pages web version display for mobile use
### 2025/07/28 - v2.0.2
- Refactored code
- Solved the issue of version number being easily forgotten to update
### 2025/07/27 - v2.0.1
**Fixed Issues**:
1. Docker shell script line ending character CRLF caused execution exception
2. Logic issue where an empty `frequency_words.txt` caused empty news pushes
- Fixed: when you leave `frequency_words.txt` empty, it will now **push all news**. Since push messages have size limits, please adjust as follows
- Solution 1: Disable mobile push and use only GitHub Pages deployment (the best way to get complete information, with hotspots reordered by your **custom hotspot algorithm**)
- Solution 2: Reduce push platforms, prioritizing **Enterprise WeChat** or **Telegram**, which both support batched pushing (batching hurts the push experience, but these two platforms' capacity is limited enough that I had to implement it; at least it ensures the information you get is complete)
- Solution 3: Combine with Solution 2 and select the `current` or `incremental` mode to effectively reduce the content pushed at one time
### 2025/07/17 - v2.0.0
**Major Refactor**:
- Configuration management refactor: All configurations are now managed through `config/config.yaml` file (I didn't split `main.py` to make it easy for you to copy and upgrade)
- Running mode upgrade: Supports three modes - `daily` (daily summary), `current` (current list), `incremental` (incremental monitoring)
- Docker support: Complete Docker deployment solution, supporting containerized operation
**Configuration Instructions**:
- `config/config.yaml` - Main configuration file (application settings, crawler configuration, notification configuration, platform configuration, etc.)
- `config/frequency_words.txt` - Keyword configuration (monitoring vocabulary settings)
### 2025/07/09 - v1.4.1
**New Features**: Added incremental push (configured at the head of `main.py` with `FOCUS_NEW_ONLY`), which only cares about new topics rather than continuous heat, and only sends notifications when there is new content.
**Fixed Issues**: Certain situations where news itself contains special symbols lead to occasional layout anomalies.
### 2025/06/23 - v1.3.0
Enterprise WeChat and Telegram push messages have length limits, so messages are now split and pushed in batches.
### 2025/06/21 - v1.2.1
For versions before this, not only `main.py` needs to be copied and replaced, but also `crawler.yml` needs to be copied and replaced
https://github.com/sansan0/TrendRadar/blob/master/.github/workflows/crawler.yml
### 2025/06/19 - v1.2.0
> Thanks to claude research for sorting out platform APIs, which helped me quickly complete platform adaptations (although the code is more redundant~)
1. Supports Telegram, Enterprise WeChat, and DingTalk push channels, supporting multi-channel configuration and simultaneous push
### 2025/06/18 - v1.1.0
> **200 stars⭐**, continue to help everyone~ Recently, under my "instigation," many people liked, shared, and recommended on my public account, and I saw the encouragement data for specific accounts in the background. Many became angel-round old fans (I started my public account more than a month ago, although it was registered seven or eight years ago, ha, it's early to get on the bus and late to get off).
1. Important update: Added weights; the hottest and most-watched news now appears at the top
2. Updated the usage documentation, since many functions were recently added and my previous docs were simple (see the complete ⚙️ `frequency_words.txt` configuration tutorial below)
### 2025/06/16 - v1.0.0
1. Added a new version update prompt for the project, which is on by default. If you want to turn it off, you can change `True` to `False` in `main.py` for `FEISHU_SHOW_VERSION_UPDATE`
### 2025/06/13+14
1. Removed compatibility code. If you forked earlier and copy the code directly, the display will be abnormal on that day (it returns to normal the next day)
2. Added a new news display at the bottom of Feishu and HTML
### 2025/06/09
**100 stars⭐**, write a small function to help everyone~
Added a 'must-have word' function to `frequency_words.txt` using the `+` symbol
1. Must-have word syntax:
Tang Sanzang and Zhu Bajie must both appear in the title for the news to be included in the push.
```
+Tang Sanzang
+Zhu Bajie
```
2. Filtering words have higher priority:
If a filter word matches "Tang Sanzang recites sutras," the title will not be displayed even though the must-have word Tang Sanzang appears.
```
+Tang Sanzang
!Tang Sanzang recites sutras
```
### 2025/06/02
1. **Webpage** and **Feishu message** support direct jump to news details on mobile
2. Optimized display effect + 1
### 2025/05/26
1. Optimized Feishu message display effect
<table>
<tr>
<td align="center">
Before optimization<br>
<img src="_image/before.jpg" alt="Feishu message interface - before optimization" width="400"/>
</td>
<td align="center">
After optimization<br>
<img src="_image/after.jpg" alt="Feishu message interface - after optimization" width="400"/>
</td>
</tr>
</table>
</details>
<br>
## ✨ Core Features
### **All-Network Hotspot Aggregation**
- Zhihu
- Douyin
- Bilibili hot search
- Wallstreetcn
- Tieba
- Baidu hot search
- Financial news
- Pengpai News
- Phoenix
- Toutiao
- Weibo
Default monitoring of 11 mainstream platforms, and additional platforms can be added
> 💡 For detailed configuration tutorials, see [Configuration Instructions - Platform Configuration](#1-platform-configuration)
### **RSS Subscription Source Support** (added in v4.5.0)
Supports RSS/Atom subscription source crawling, grouped by keywords (consistent with hot list format):
- **Unified format**: RSS and hot list use the same keyword matching and display format
- **Simple configuration**: Add RSS source directly in `config.yaml`
- **Merged push**: Hot list and RSS merged into one message push
- **Freshness filtering**: Automatically filter out old articles that exceed the specified number of days to avoid repeated pushes. Supports global default days and single-source independent settings
> 💡 RSS uses the same `frequency_words.txt` for keyword filtering as the hot list
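As a rough sketch of what an RSS source entry could look like in `config.yaml` (the key names below are illustrative assumptions, not the exact schema; check the comments in the shipped `config.yaml`):

```yaml
# Illustrative sketch only: key names may differ from the real config.yaml
rss:
  enabled: true
  default_freshness_days: 3        # global default: skip articles older than 3 days
  feeds:
    - name: "Hacker News"
      url: "https://news.ycombinator.com/rss"
    - name: "The Verge"
      url: "https://www.theverge.com/rss/index.xml"
      freshness_days: 1            # per-source override of the freshness filter
```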
### **Visual Configuration Editor**
Provides a web-based graphical configuration interface, no need to manually edit YAML files, and all configuration items can be modified and exported through the form.
👉 **Online experience**: [https://sansan0.github.io/TrendRadar/](https://sansan0.github.io/TrendRadar/)
<img src="/_image/editor.png" alt="Visual configuration editor" width="80%">
### **Intelligent Push Strategy**
**Three push modes**:
| Mode | Applicable scenarios | Push characteristics |
|------|---------|---------|
| **Daily Summary** (daily) | Enterprise administrators/ordinary users | Pushes all matched news at a fixed time every day (including previously pushed news) |
| **Current List** (current) | Self-media people/content creators | Pushes current list-matched news at a fixed time every day (news that continues to be on the list will appear every time) |
| **Incremental Monitoring** (incremental) | Investors/traders | Only pushes newly added content, with zero repetition |
> 💡 **Quick Selection Guide**:
> - If you don't want to see repeated news → Use `incremental` (incremental monitoring)
> - If you want to see the complete list trend → Use `current` (current list)
> - If you need a daily summary report → Use `daily` (daily summary)
>
> For detailed comparison and configuration tutorials, see [Configuration Instructions - Push Mode Instructions](#3-push-mode-instructions)
**Additional Features** (optional):
| Feature | Description | Default |
|------|------|------|
| **Scheduling System** | Arrange daily schedules from Monday to Sunday: allocate different time periods, push modes, and AI analysis strategies for each day. **Each period can have a separate filtering method (keywords/AI) and focus direction**. Built-in 5 presets (always_on / morning_evening / office_hours / night_owl / custom), and custom support. Supports workday/weekend differentiation, cross-midnight periods, per-period de-duplication, and time-slot conflict detection (v6.0.0 + v6.5.0) | morning_evening |
| **Content Order Configuration** | Adjust the display order of each area (hot list, new hotspots, RSS, independent display area, AI analysis) through `display.region_order`; control whether each area is displayed through `display.regions` (v5.2.0) | See configuration file |
| **Display Mode Switching** | `keyword`=grouped by keywords, `platform`=grouped by platform (added in v4.6.0) | keyword |
> 💡 For detailed configuration tutorials, see [How to display push content?](#7-how-to-display-push-content) and [When to push?](#8-when-to-push)
### **Accurate Content Filtering**
Set personal keywords (e.g., AI, BYD, education policy), and only push relevant hotspots, filtering out irrelevant information.
> 💡 **Basic Configuration Tutorial**: [Keyword Configuration - Basic Syntax](#keyword-basic-syntax)
>
> 💡 **Advanced Configuration Tutorial**: [Keyword Configuration - Advanced Configuration](#keyword-advanced-configuration)
>
> 💡 You can also skip filtering entirely and push all hotspots (leave `frequency_words.txt` blank)
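Putting the syntax shown in the changelog examples together (plain words match titles, `!` marks filter words; the keywords here are placeholders), a minimal `config/frequency_words.txt` could look like:

```
AI
BYD
education policy
!advertisement
```

Roughly speaking, a title containing one of the plain words is pushed, unless a filter word also matches (filter words have higher priority); see the keyword tutorial for the full grouping rules.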
### **AI Intelligent News Filtering** (added in v6.5.0)
Describe your interests in natural language, and AI automatically classifies news, replacing traditional keyword matching.
- **Natural Language Interest Description**: Write down your focus directions in daily language in `ai_interests.txt`, without learning keyword syntax.
- **Two-Stage Intelligent Processing**: AI first extracts structured labels from interest descriptions and then classifies news into labels in batches and scores them.
- **Score Threshold Control**: Precisely control the push quality through `ai_filter.min_score`, pushing only high-correlation news.
- **Automatic Fallback Guarantee**: When AI filtering fails, automatically fall back to keyword matching to ensure uninterrupted pushes.
- **Intelligent Label Update**: When interests change, AI automatically evaluates the change amplitude and decides incremental or full reclassification.
- **Flexible Switching**: `filter.method` supports `keyword` (default) and `ai` modes, and the schedule can assign different filter methods to different time periods.
```yaml
# config.yaml quick enable example
filter:
method: ai # keyword (default) | ai
ai_filter:
min_score: 6 # minimum push score threshold (1-10)
```
> 💡 AI filtering shares model configuration with AI analysis/translation, requiring only one configuration of `ai.api_key`
### **Hotspot Trend Analysis**
Real-time tracking of news heat changes, allowing you to know not only "what's on the hot search" but also "how the hotspot evolves."
- **Time Axis Tracking**: Record the complete time span of each piece of news from its first appearance to its last appearance.
- **Heat Change**: Statistics on the ranking changes and appearance frequency of news in different time periods.
- **New Detection**: Real-time identification of new hotspots, marked with 🆕 so you're alerted right away.
- **Sustainability Analysis**: Distinguish one-time hotspots from continuously fermenting in-depth news.
- **Cross-Platform Comparison**: Compare the ranking performance of the same news on different platforms to see differences in media attention.
> 💡 For push format instructions, see [Message Style Instructions](#5-what-do-i-receive)
### **Personalized Hotspot Algorithm**
Instead of being led by each platform's algorithm, TrendRadar reorganizes the hottest topics across the entire network for you.
> 💡 Three ratios can be adjusted, see [Configuration Details - Hotspot Weight Adjustment](#4-热点权重调整)
### **Multi-Channel and Multi-Account Push**
Support **Enterprise WeChat** (+ WeChat push solution), **Feishu**, **DingTalk**, **Telegram**, **Email**, **ntfy**, **Bark**, **Slack**, and **Universal Webhook** (can be connected to Discord, IFTTT, and other arbitrary platforms), with messages directly reaching mobile phones and email boxes.
> 💡 For detailed configuration tutorials, see [Push to Multiple Groups/Devices](#10-push-to-multiple-groupsdevices)
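For Docker deployments, multi-account configuration can be sketched with environment variables as below. The compose layout and the Telegram variable names are assumptions for illustration; the `FEISHU_WEBHOOK_URL` name and the `;` separator come from the changelog.

```yaml
# docker-compose.yml fragment (illustrative); ';' separates multiple accounts
services:
  trendradar:
    image: wantcat/trendradar
    environment:
      FEISHU_WEBHOOK_URL: "https://example.com/hook-a;https://example.com/hook-b"
      TELEGRAM_BOT_TOKEN: "token-a;token-b"   # paired values must have matching counts
      TELEGRAM_CHAT_ID: "chat-a;chat-b"
```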
### **AI Multi-Language Translation** (added in v5.2.0)
Translate push content into any language, breaking language barriers, whether reading domestic hotspots or subscribing to overseas information through RSS, can be easily accessed in your native language.
- **One-Click Translation**: Set `ai_translation.enabled: true` and target language in `config.yaml`
- **Multi-Language Support**: Supports English, Korean, Japanese, French, and other languages
- **Intelligent Batch Processing**: Automatic batch translation, reducing the number of API calls and saving costs
- **Customizable Style**: Customize translation style and terminology through `ai_translation_prompt.txt`
- **Shared Model Configuration**: Share model settings with AI analysis function using the `ai` configuration section
```yaml
# config.yaml quick enable example
ai_translation:
enabled: true
language: "English" # target language for translation
```
> 💡 The translation function shares model configuration with the AI analysis function, and only needs to configure `ai.api_key` once to use both functions.
**RSS Source Reference**: The following are some RSS subscription source collections that can be used as needed.
- [awesome-tech-rss](https://github.com/tuan3w/awesome-tech-rss) - Technology, entrepreneurship, and programming blogs and media
- [awesome-rss-feeds](https://github.com/plenaryapp/awesome-rss-feeds) - Mainstream news media RSS collections from various countries
> ⚠️ Some overseas media content may involve sensitive topics, and AI models may refuse to translate. It is recommended to filter subscription sources according to actual needs.
### **HTML Report Browser Enhancement** (added in v6.6.0)
Open the pushed HTML report in the browser, and automatically unlock the enhanced experience (email clients are not affected):
- **Wide-Screen Mode**: Automatically switch to 1200px wide-screen layout on desktop, fully utilizing screen space
- **Tab Quick Switch**: Both keyword grouping and independent display areas support Tab navigation, eliminating long page scrolling
- **Dark Mode**: One-click switch to dark theme, automatically remembering preferences
- **Real-Time Search**: Press `/` to activate the search box, instantly filtering news titles
- **One-Click Copy**: Hover over news serial numbers to copy titles and links
- **Shortcuts**: `W` for wide-screen, `D` for dark mode, `/` for search, and `?` to view all shortcuts
> 💡 All enhanced features are based on progressive enhancement, and email clients still display the original 600px layout, with zero regression.
### **Flexible Storage Architecture** (major update in v4.0.0)
**Multi-Storage Backend Support**:
- **Remote Cloud Storage**: Default for GitHub Actions environment, supporting S3 compatible protocol (R2/OSS/COS, etc.), with data stored in the cloud, not polluting the repository
- **Local SQLite Database**: Default for Docker/local environment, with complete data control
- **Automatic Backend Selection**: Intelligent switching of storage methods based on the running environment
> 💡 For detailed instructions, see [Where is the data stored?](#11-数据保存在哪里)
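A hedged sketch of what pointing the storage layer at an S3-compatible bucket might involve (all key names below are hypothetical; consult the project's storage documentation for the real schema):

```yaml
# Hypothetical sketch: key names are illustrative, not the actual schema
storage:
  backend: auto                # auto-select: remote S3 on GitHub Actions, SQLite elsewhere
  s3:
    endpoint: "https://<account-id>.r2.cloudflarestorage.com"   # Cloudflare R2 example
    bucket: "trendradar-data"
    access_key_id: "${S3_ACCESS_KEY_ID}"          # keep secrets in env vars / Secrets
    secret_access_key: "${S3_SECRET_ACCESS_KEY}"
```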
### **Multi-Platform Deployment**
- **GitHub Actions**: Scheduled automatic crawling + remote cloud storage (requires periodic repository activity to keep scheduled workflows enabled)
- **Docker Deployment**: Supports multi-architecture containerized operation, with local data storage
- **Local Operation**: Windows/Mac/Linux direct operation
### **AI Analysis Push** (added in v5.0.0)
Use large AI models to conduct in-depth analysis of push content, automatically generating hotspot insight reports.
- **Intelligent Analysis**: Automatically analyze hotspot trends, keyword heat, cross-platform associations, and potential impacts
- **Multi-Provider**: Based on LiteLLM unified interface, supporting 100+ AI providers (DeepSeek, OpenAI, Gemini, Anthropic, local Ollama, etc.), with automatic backup model switching
- **Independent Analysis Mode**: AI analysis scope can be different from push - push only sends new messages (avoiding disturbance), but AI can analyze all news for the day (seeing the complete trend)
- **Flexible Push**: Optional to push only original content, only AI analysis, or both
- **Customizable Prompt Words**: Customize analysis perspectives through `config/ai_analysis_prompt.txt`
> 💡 For detailed configuration tutorials, see [Let AI help me analyze hotspots](#12-让-ai-帮我分析热点)
### **Independent Display Area** (added in v5.0.0)
Provide a complete hot list display for specified platforms, unaffected by keyword filtering.
- **Complete Hot List**: Complete display of hot lists for specified platforms, suitable for users who want to see complete rankings
- **RSS Independent Display**: RSS source content can be displayed completely, without keyword restrictions
- **AI In-Depth Analysis**: Can independently enable AI trend analysis for complete hot lists, without displaying in push
- **Flexible Configuration**: Supports configuration of display platforms, RSS sources, and maximum number of entries
> 💡 For detailed configuration tutorials, see [How is the push content displayed? - Independent Display Area](#7-推送内容怎么显示)
### **AI Intelligent Analysis** (added in v3.0.0)
AI dialogue analysis system based on MCP (Model Context Protocol) protocol, allowing in-depth mining of news data using natural language.
> **💡 Usage Tips**: AI functionality requires local news data support
> - The project comes with test data, allowing immediate experience of the functionality
> - It is recommended to deploy and run the project yourself to obtain more real-time data
>
> See [AI Intelligent Analysis](#-ai-智能分析) for details.
### **Web Deployment**
After a run, a complete news report page is generated as `index.html` in the root directory.
> **Deployment Method**: Click **Use this template** to create a repository, which can be deployed to Cloudflare Pages or GitHub Pages and other static hosting platforms.
>
> **💡 Tips**: Enabling GitHub Pages can obtain an online access address. Go to the repository Settings → Pages to enable it. [Effect Preview](https://sansan0.github.io/TrendRadar/)
>
> ⚠️ The original GitHub Actions automatic storage function has been discontinued (this solution once caused high GitHub server load, affecting platform stability).
### **Reduce APP Dependence**
From being "controlled by algorithm recommendations" to "actively obtaining desired information"
**Suitable Groups**: Investors, self-media practitioners, enterprise public relations, and ordinary users concerned about current events.
**Typical Scenarios**: Stock investment monitoring, brand reputation tracking, industry dynamic attention, and life information acquisition.
| Web Effect (Email Push Effect) | Feishu Push Effect | AI Analysis Push Effect |
|:---:|:---:|:---:|
|  |  |  |
<br>
## 🚀 Quick Start
> **Reminder**: It is recommended to **[view the latest official documentation](https://github.com/sansan0/TrendRadar?tab=readme-ov-file)** to ensure configuration steps are up-to-date.
### Please Choose Your Suitable Deployment Method
#### 🅰️ Option 1: Docker Deployment (Recommended 🔥)
* **Features**: More stable than GitHub Actions, with local data storage (no need for cloud storage configuration)
* **Applicable**: For those with their own servers, NAS, or long-running computers
* **Note**: You need to read and understand the basic configuration process below, then jump to the Docker tutorial for deployment.
#### 🅱️ Option 2: GitHub Actions Deployment (content below ⬇️)
* **Features**: Serverless, with data stored in **remote cloud storage** (recommended configuration)
* **Applicable**: For users without servers, utilizing GitHub free resources
* **Note**: Cloud storage configuration is required for a complete experience, and periodic check-in renewal is needed.
### 1️⃣ Step 1: Obtain Project Code
Click the green **[Use this template]** button in the upper right corner of this repository → Select "Create a new repository".
> ⚠️ Reminder:
> - Subsequent documentation mentions "Fork" can be understood as "Use this template"
> - Using Fork may cause operational exceptions, see [Issue #606](https://github.com/sansan0/TrendRadar/issues/606)
<br>
### 2️⃣ Step 2: Set up GitHub Secrets
In your forked repository, go to `Settings` > `Secrets and variables` > `Actions` > `New repository secret`
**📌 Important Notes (please read carefully):**
- **One Name corresponds to one Secret**: For each configuration item, click the "New repository secret" button once and fill in a pair of "Name" and "Secret".
- **You won't be able to see the value after saving**: For security reasons, when you re-edit, you can only see the Name, but not the Secret content.
- **Do not create names arbitrarily**: The Name of the Secret must **strictly use** the names listed below (such as `WEWORK_WEBHOOK_URL`, `FEISHU_WEBHOOK_URL`, etc.), and cannot be modified or created randomly, otherwise the system will not be able to identify it.
- **Multiple platforms can be configured at the same time**: The system will send notifications to all configured platforms.
**Configuration Example:**
<img src="_image/secrets.png" alt="GitHub Secrets configuration example"/>
As shown in the figure above, each line is a configuration item:
- **Name**: Must use the fixed name listed in the expanded content below (such as `WEWORK_WEBHOOK_URL`)
- **Secret**: Fill in the actual content you obtained from the corresponding platform (such as Webhook address, Token, etc.)
<br>
<details>
<summary>👉 Click to expand: <strong>WeWork Robot</strong> (easiest and fastest configuration)</summary>
<br>
**GitHub Secret Configuration (⚠️ Name must be consistent):**
- **Name**: `WEWORK_WEBHOOK_URL` (please copy and paste this name, don't type it manually)
- **Secret**: Your WeWork robot Webhook address
<br>
**Robot setup steps:**
#### Mobile setup:
1. Open WeWork App → Enter the target internal group chat
2. Click the "..." button in the upper right corner → Select "Message push"
3. Click "Add" → Enter "TrendRadar"
4. Copy the Webhook address, click Save, and copy the content to configure to the GitHub Secret above
#### PC setup: the process on the desktop client is similar
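Before saving the address into GitHub Secrets, you can verify it from any terminal. The sketch below is illustrative: the URL is a placeholder, and the `msgtype`/`markdown` payload follows the WeWork group-robot webhook format.

```shell
# Placeholder: substitute the Webhook address you copied above.
WEBHOOK_URL="https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=YOUR_KEY"

# Markdown payload in the shape the WeWork group robot accepts.
PAYLOAD='{"msgtype":"markdown","markdown":{"content":"**TrendRadar** webhook test"}}'

# Uncomment to actually send once WEBHOOK_URL is real:
# curl -s -H "Content-Type: application/json" -d "$PAYLOAD" "$WEBHOOK_URL"
echo "$PAYLOAD"
```

A valid address should post the test message into the group within seconds.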
</details>
<details>
<summary>👉 Click to expand: <strong>Personal WeChat push</strong> (based on WeWork application, push to personal WeChat)</summary>
<br>
> Since this solution is based on WeWork's plugin mechanism, the push style is plain text (no markdown format), but it can be pushed directly to personal WeChat without installing WeWork App.
**GitHub Secret Configuration (⚠️ Name must be consistent):**
- **Name**: `WEWORK_WEBHOOK_URL` (please copy and paste this name, don't type it manually)
- **Secret**: Your WeWork application Webhook address
- **Name**: `WEWORK_MSG_TYPE` (please copy and paste this name, don't type it manually)
- **Secret**: `text`
<br>
**Setup steps:**
1. Complete the WeWork robot Webhook setup above
2. Add `WEWORK_MSG_TYPE` Secret, set the value to `text`
3. Follow the picture below to operate and associate personal WeChat
4. After configuration, the WeWork App on the mobile phone can be deleted
<img src="_image/wework.png" title="Personal WeChat push configuration"/>
**Description**:
- Use the same Webhook address as WeWork robot
- The difference lies in the message format: `text` is plain text, `markdown` is rich text (default)
- Plain text format will automatically remove all markdown syntax (bold, link, etc.)
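For comparison, with `WEWORK_MSG_TYPE` set to `text` the same webhook receives a plain-text payload instead of markdown. A minimal sketch with a placeholder URL:

```shell
# Placeholder: the same Webhook address as the WeWork robot above.
WEBHOOK_URL="https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=YOUR_KEY"

# With WEWORK_MSG_TYPE=text, the push body switches to msgtype "text":
TEXT_PAYLOAD='{"msgtype":"text","text":{"content":"TrendRadar hot spot test"}}'

# Uncomment to send for real:
# curl -s -H "Content-Type: application/json" -d "$TEXT_PAYLOAD" "$WEBHOOK_URL"
echo "$TEXT_PAYLOAD"
```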
</details>
<details>
<summary>👉 Click to expand: <strong>Feishu Robot</strong> (message display is relatively friendly)</summary>
<br>
If **AI analysis** is enabled, Feishu pushes are occasionally (roughly 5% of the time) delayed by several minutes (presumably the platform's compliance review of AI-generated content).
**GitHub Secret Configuration (⚠️ Name must be consistent):**
- **Name**: `FEISHU_WEBHOOK_URL` (please copy and paste this name, don't type it manually)
- **Secret**: Your Feishu robot Webhook address (the link starts with https://www.feishu.cn/flow/api/trigger-webhook/********)
<br>
There are two solutions: **Solution 1** is simple to configure, while **Solution 2** is more involved but pushes more reliably.
Solution 1 was discovered and suggested by **ziventian** (thanks!). By default it pushes to you personally; group push can also be configured, see [#97](https://github.com/sansan0/TrendRadar/issues/97).
**Solution 1:**
> Some users need an extra step, or a "system error" is reported: search for the robot in the Feishu mobile app and enable the robot application (a tip contributed by other users, for reference).
1. Open https://botbuilder.feishu.cn/home/my-command in your computer browser
2. Click "New robot command"
3. Click "Select trigger", scroll down, and click "Webhook trigger"
4. You will see the "Webhook address", copy this link to a local notepad for now, and continue with the next operation
5. Put the following content in "Parameter" and click "Complete"
```json
{
  "message_type": "text",
  "content": {
    "text": "{{content}}"
  }
}
```
6. Click "Select operation" > "Send message through official robot"
7. Fill in the message title as "TrendRadar hot spot monitoring"
8. The key part is, click the + button, select "Webhook trigger", and then arrange according to the picture below

9. After configuration, copy the Webhook address in step 4 to the `FEISHU_WEBHOOK_URL` in GitHub Secrets
<br>
**Solution 2:**
1. Open https://botbuilder.feishu.cn/home/my-app in your computer browser
2. Click "New robot application"
3. After entering the created application, click "Process design" > "Create process" > "Select trigger"
4. Scroll down and click "Webhook trigger"
5. You will see the "Webhook address", copy this link to a local notepad for now, and continue with the next operation
6. Put the following content in "Parameter" and click "Complete"
```json
{
  "message_type": "text",
  "content": {
    "text": "{{content}}"
  }
}
```
7. Click "Select operation" > "Send Feishu message", select "Group message", and then click the input box below, click "Groups I manage" (if you don't have a group, you can create a group on the Feishu app)
8. Fill in the message title as "TrendRadar hot spot monitoring"
9. The key part is, click the + button, select "Webhook trigger", and then arrange according to the picture below

10. After configuration, copy the Webhook address in step 5 to the `FEISHU_WEBHOOK_URL` in GitHub Secrets
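To sanity-check the flow before pointing TrendRadar at it, you can post to the Webhook address yourself. This is a hedged sketch: the URL is a placeholder, and since the "Parameter" template above reads `{{content}}`, the request body is assumed to carry a top-level `content` field.

```shell
# Placeholder: the Webhook address copied from the flow trigger.
FEISHU_URL="https://www.feishu.cn/flow/api/trigger-webhook/YOUR_ID"

# The flow template reads {{content}}, so we assume the POST body
# needs a matching top-level "content" field:
PAYLOAD='{"content":"TrendRadar webhook test"}'

# Uncomment to send for real:
# curl -s -H "Content-Type: application/json" -d "$PAYLOAD" "$FEISHU_URL"
echo "$PAYLOAD"
```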
</details>
<details>
<summary>👉 Click to expand: <strong>DingTalk Robot</strong></summary>
<br>
**GitHub Secret Configuration (⚠️ Name must be consistent):**
- **Name**: `DINGTALK_WEBHOOK_URL` (please copy and paste this name, don't type it manually)
- **Secret**: Your DingTalk robot Webhook address
<br>
**Robot setup steps:**
1. **Create a robot (only PC side supports)**:
- Open DingTalk PC client, enter the target group chat
- Click the group setting icon (⚙️) → Find "Robot" and click to open
- Select "Add robot" → "Custom"
2. **Configure the robot**:
- Set the robot name
- **Security settings**:
- **Custom keywords**: Set "hot spot"
3. **Complete the setup**:
- Check the service terms agreement → Click "Complete"
- Copy the obtained Webhook URL
- Configure the URL to `DINGTALK_WEBHOOK_URL` in GitHub Secrets
**Note**: The mobile client can only receive messages; robots must be created on the PC client.
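You can verify the robot from a terminal before the first scheduled run. With the "hot spot" custom keyword configured above, DingTalk only accepts messages whose body contains that keyword; the URL below is a placeholder.

```shell
# Placeholder: the Webhook URL copied when completing the setup.
DINGTALK_URL="https://oapi.dingtalk.com/robot/send?access_token=YOUR_TOKEN"

# The body must contain the security keyword ("hot spot") to pass the filter.
PAYLOAD='{"msgtype":"text","text":{"content":"hot spot: TrendRadar test"}}'

# Uncomment to send for real:
# curl -s -H "Content-Type: application/json" -d "$PAYLOAD" "$DINGTALK_URL"
echo "$PAYLOAD"
```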
</details>
<details>
<summary>👉 Click to expand: <strong>Telegram Bot</strong></summary>
<br>
**GitHub Secret Configuration (⚠️ Name must be consistent):**
- **Name**: `TELEGRAM_BOT_TOKEN` (please copy and paste this name, don't type it manually)
- **Secret**: Your Telegram Bot Token
- **Name**: `TELEGRAM_CHAT_ID` (please copy and paste this name, don't type it manually)
- **Secret**: Your Telegram Chat ID
**Description**: Telegram needs to configure **two** Secrets, please click the "New repository secret" button twice to add
<br>
**Robot setup steps:**
1. **Create a robot**:
- Search for `@BotFather` in Telegram (mind the capitalization; the official account has a blue verification badge and roughly 37,849,827 monthly users, so beware of look-alike fakes)
- Send `/newbot` command to create a new robot
- Set the robot username (it must end with "bot"; duplicates are common, so you may need to try several names)
- Get the Bot Token (format like: `123456789:AAHfiqksKZ8WmR2zSjiQ7_v4TMAKdiHm9T0`)
2. **Get Chat ID**:
**Method 1: Get through the official API**
- Send a message to your robot
- Visit: `https://api.telegram.org/bot<your Bot Token>/getUpdates`
- Find the number in `"chat":{"id":number}` in the returned JSON
**Method 2: Use a third-party tool**
- Search for `@userinfobot` and send `/start`
- Get your user ID as Chat ID
3. **Configure to GitHub**:
- `TELEGRAM_BOT_TOKEN`: Fill in the Bot Token obtained in step 1
- `TELEGRAM_CHAT_ID`: Fill in the Chat ID obtained in step 2
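The two values can be checked against the standard Bot API before saving them as Secrets. The sketch below just assembles the endpoints; the token and chat ID are placeholders.

```shell
BOT_TOKEN="123456789:YOUR_BOT_TOKEN"   # placeholder from @BotFather
CHAT_ID="123456789"                    # placeholder from getUpdates

API="https://api.telegram.org/bot${BOT_TOKEN}"

# "Method 1" above is just this endpoint (after messaging your bot once):
# curl -s "${API}/getUpdates"

# And a test push is one call to sendMessage:
# curl -s "${API}/sendMessage" -d chat_id="${CHAT_ID}" -d text="TrendRadar test"
echo "${API}/sendMessage"
```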
</details>
<details>
<summary>👉 Click to expand: <strong>Email push</strong> (support all mainstream mailboxes)</summary>
<br>
- Note: to keep the group-sending feature from being **abused**, all recipients of a group send can currently see each other's email addresses.
- If you have never configured email sending before, this channel is not recommended as a starting point.
> ⚠️ **Important configuration dependency**: Email push requires an HTML report file. Please ensure that `storage.formats.html` in `config/config.yaml` is set to `true`:
> ```yaml
> storage:
>   formats:
>     sqlite: true
>     txt: false
>     html: true  # Must be enabled, otherwise email push will fail
> ```
> If set to `false`, an error will be reported when sending an email: `Error: HTML file does not exist or is not provided: None`
<br>
**GitHub Secret Configuration (⚠️ Name must be consistent):**
- **Name**: `EMAIL_FROM` (please copy and paste this name, don't type it manually)
- **Secret**: Sender's email address
- **Name**: `EMAIL_PASSWORD` (please copy and paste this name, don't type it manually)
- **Secret**: Email password or authorization code
- **Name**: `EMAIL_TO` (please copy and paste this name, don't type it manually)
- **Secret**: Recipient's email address (multiple recipients are separated by English commas, and can also be the same as EMAIL_FROM, send to yourself)
- **Name**: `EMAIL_SMTP_SERVER` (optional configuration, please copy and paste this name)
- **Secret**: SMTP server address (can be left blank, the system will automatically identify)
- **Name**: `EMAIL_SMTP_PORT` (optional configuration, please copy and paste this name)
- **Secret**: SMTP port (can be left blank, the system will automatically identify)
**Description**: Email push requires at least **3 required** Secrets (EMAIL_FROM, EMAIL_PASSWORD, EMAIL_TO), and the last two are optional configurations
<br>
**Supported email service providers** (automatic identification of SMTP configuration):
| Email service provider | Domain | SMTP server | Port | Encryption method |
|-----------|------|------------|------|------------------|
| **Gmail** | gmail.com | smtp.gmail.com | 587 | TLS |
| **QQ mailbox** | qq.com | smtp.qq.com | 465 | SSL |
| **Outlook** | outlook.com | smtp-mail.outlook.com | 587 | TLS |
| **Hotmail** | hotmail.com | smtp-mail.outlook.com | 587 | TLS |
| **Live** | live.com | smtp-mail.outlook.com | 587 | TLS |
| **163 mailbox** | 163.com | smtp.163.com | 465 | SSL |
| **126 mailbox** | 126.com | smtp.126.com | 465 | SSL |
| **Sina mailbox** | sina.com | smtp.sina.com | 465 | SSL |
| **Sohu mailbox** | sohu.com | smtp.sohu.com | 465 | SSL |
| **Tianyi mailbox** | 189.cn | smtp.189.cn | 465 | SSL |
| **Aliyun mailbox** | aliyun.com | smtp.aliyun.com | 465 | TLS |
| **Yandex mailbox** | yandex.com | smtp.yandex.com | 465 | TLS |
| **iCloud mailbox** | icloud.com | smtp.mail.me.com | 587 | SSL |
> **Automatic identification**: When using the above mailbox, there is no need to manually configure `EMAIL_SMTP_SERVER` and `EMAIL_SMTP_PORT`, the system will automatically identify.
>
> **Feedback description**:
> - If you use **other mailboxes** to test successfully, welcome to open [Issues](https://github.com/sansan0/TrendRadar/issues) to inform, I will add it to the support list
> - If the above mailbox configuration is incorrect or cannot be used, please open [Issues](https://github.com/sansan0/TrendRadar/issues) to feedback and help improve the project
>
> **Special thanks**:
> - Thanks to [@DYZYD](https://github.com/DYZYD) for contributing Tianyi mailbox (189.cn) configuration and completing self-sending and self-receiving test ([#291](https://github.com/sansan0/TrendRadar/issues/291))
> - Thanks to [@longzhenren](https://github.com/longzhenren) for contributing Aliyun mailbox (aliyun.com) configuration and completing the test ([#344](https://github.com/sansan0/TrendRadar/issues/344))
> - Thanks to [@ACANX](https://github.com/ACANX) for contributing Yandex mailbox (yandex.com) configuration and completing the test ([#663](https://github.com/sansan0/TrendRadar/issues/663))
> - Thanks to [@Sleepy-Tianhao](https://github.com/Sleepy-Tianhao) for contributing iCloud mailbox (icloud.com) configuration and completing the test ([#728](https://github.com/sansan0/TrendRadar/issues/728))
**Common mailbox settings:**
#### QQ mailbox:
1. Log in to QQ mailbox web version → Settings → Account
2. Enable POP3/SMTP service
3. Generate authorization code (16 letters)
4. Fill in `EMAIL_PASSWORD` with authorization code, not QQ password
#### Gmail:
1. Enable two-step verification
2. Generate application-specific password
3. Fill in `EMAIL_PASSWORD` with application-specific password
#### 163/126 mailbox:
1. Log in to the web version → Settings → POP3/SMTP/IMAP
2. Enable SMTP service
3. Set client authorization code
4. Fill in `EMAIL_PASSWORD` with authorization code
<br>
**Advanced configuration**:
If automatic identification fails, you can manually configure SMTP:
- `EMAIL_SMTP_SERVER`: such as smtp.gmail.com
- `EMAIL_SMTP_PORT`: such as 587 (TLS) or 465 (SSL)
<br>
**If there are multiple recipients (note that they are separated by English commas)**:
- EMAIL_TO="user1@example.com,user2@example.com,user3@example.com"
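If you want to verify the credentials outside GitHub Actions, `curl` can speak SMTP directly. Everything below is a placeholder-filled sketch (QQ mailbox assumed as the provider); note how the comma-separated `EMAIL_TO` expands into one `--mail-rcpt` per recipient.

```shell
# Placeholders: substitute your own account and authorization code.
EMAIL_FROM="you@qq.com"
EMAIL_TO="user1@example.com,user2@example.com"
EMAIL_PASSWORD="your-authorization-code"

# curl can send mail over SMTPS; each recipient needs its own --mail-rcpt flag.
# printf 'Subject: TrendRadar test\n\nhello\n' | curl -s \
#   --url smtps://smtp.qq.com:465 \
#   --mail-from "$EMAIL_FROM" \
#   $(printf '%s' "$EMAIL_TO" | tr ',' '\n' | sed 's/^/--mail-rcpt /') \
#   --user "$EMAIL_FROM:$EMAIL_PASSWORD" -T -

# Show how the recipient list splits (one address per line):
printf '%s\n' "$EMAIL_TO" | tr ',' '\n'
```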
</details>
<details>
<summary>👉 Click to expand: <strong>ntfy push</strong> (open source and free, support self-hosting)</summary>
<br>
**Two usage methods:**
### Method 1: Free use (recommended for novices) 🆓
**Features**:
- ✅ No need to register an account, use immediately
- ✅ 250 messages per day (enough for 90% of users)
- ✅ The topic name acts as the password (so choose a name that is hard to guess)
- ⚠️ Messages are not encrypted, so it is unsuitable for sensitive information, but fine for this project's public hot-list content
**Getting Started:**
1. **Download the ntfy app**:
- Android: [Google Play](https://play.google.com/store/apps/details?id=io.heckel.ntfy) / [F-Droid](https://f-droid.org/en/packages/io.heckel.ntfy/)
- iOS: [App Store](https://apps.apple.com/us/app/ntfy/id1625396347)
- Desktop: Visit [ntfy.sh](https://ntfy.sh)
2. **Subscribe to a topic** (choose a hard-to-guess name):
```
Suggested format: trendradar-{your initials}-{random number}
No Chinese characters allowed
✅ Good example: trendradar-zs-8492
❌ Bad example: news, alerts (too easy to guess)
```
3. **Configure GitHub Secret (⚠️ Name must match exactly)**:
- **Name**: `NTFY_TOPIC` (copy and paste this name)
- **Secret**: Enter the topic name you just subscribed to
- **Name**: `NTFY_SERVER_URL` (optional, copy and paste this name)
- **Secret**: Leave blank (default is ntfy.sh)
- **Name**: `NTFY_TOKEN` (optional, copy and paste this name)
- **Secret**: Leave blank
**Note**: ntfy requires at least one required Secret (NTFY_TOPIC), the other two are optional.
4. **Test**:
```bash
curl -d "Test message" ntfy.sh/your_topic_name
```
---
### Method 2: Self-hosting (complete privacy control) 🔒
**Suitable for**: Those with servers, requiring complete privacy, and technical expertise
**Advantages**:
- ✅ Completely open-source (Apache 2.0 + GPLv2)
- ✅ Complete data control
- ✅ No limitations
- ✅ Zero cost
**One-click Docker deployment**:
```bash
docker run -d \
  --name ntfy \
  -p 80:80 \
  -v /var/cache/ntfy:/var/cache/ntfy \
  binwiederhier/ntfy \
  serve --cache-file /var/cache/ntfy/cache.db
```
**Configure TrendRadar**:
```yaml
NTFY_SERVER_URL: https://ntfy.yourdomain.com
NTFY_TOPIC: trendradar-alerts # Simple name allowed for self-hosting
NTFY_TOKEN: tk_your_token # Optional: enable access control
```
**Subscribe in the app**:
- Click "Use another server"
- Enter your server address
- Enter the topic name
- (Optional) Enter login credentials
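When the self-hosted server has access control enabled, the token is sent as a Bearer header, and the title and priority travel as headers too. A sketch using the placeholder values from the YAML above:

```shell
NTFY_SERVER_URL="https://ntfy.yourdomain.com"   # placeholder
NTFY_TOPIC="trendradar-alerts"
NTFY_TOKEN="tk_your_token"                      # placeholder

# Uncomment to send a test push to your own server:
# curl -s -H "Authorization: Bearer $NTFY_TOKEN" \
#      -H "Title: TrendRadar" -H "Priority: default" \
#      -d "self-hosted test message" "$NTFY_SERVER_URL/$NTFY_TOPIC"
echo "$NTFY_SERVER_URL/$NTFY_TOPIC"
```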
---
**Frequently Asked Questions:**
<details>
<summary><strong>Q1: Is the free version sufficient?</strong></summary>
250 messages per day are enough for most users. With a 30-minute crawl interval, that's about 48 pushes per day, which is more than sufficient.
</details>
<details>
<summary><strong>Q2: Is the topic name really secure?</strong></summary>
If you choose a random, sufficiently long name (like `trendradar-zs-8492-news`), brute-force attacks are nearly impossible:
- ntfy has strict rate limiting (1 request per second)
- 64 character choices (A-Z, a-z, 0-9, _, -)
- A 10-character random string has 64^10 possibilities (takes years to crack)
</details>
---
**Recommendations:**
| User type | Recommended solution | Reason |
|---------|---------|------|
| Ordinary users | Method 1 (free) | Simple and quick, sufficient |
| Technical users | Method 2 (self-hosting) | Complete control, no limitations |
| High-frequency users | Paid plan (ntfy Pro) | Check the official website for details |
**Related links:**
- [ntfy official documentation](https://docs.ntfy.sh/)
- [Self-hosting tutorial](https://docs.ntfy.sh/install/)
- [GitHub repository](https://github.com/binwiederhier/ntfy)
</details>
<details>
<summary>👉 Click to expand: <strong>Bark push</strong> (iOS exclusive, simple and efficient)</summary>
<br>
**GitHub Secret configuration (⚠️ Name must match exactly)**:
- **Name**: `BARK_URL` (copy and paste this name)
- **Secret**: Your Bark push URL
<br>
**Bark introduction:**
Bark is a free and open-source push tool for iOS, characterized by simplicity, speed, and no ads.
**Usage:**
### Method 1: Use the official server (recommended for beginners) 🆓
1. **Download Bark App**:
- iOS: [App Store](https://apps.apple.com/app/bark-%e7%bb%9c%e4%bd%a0%e7%9b%96%e6%89%8b%e6%9c%ba%e5%8f%91%e6%8e%a8%e9%80%81/id1403753865)
2. **Get the push URL**:
- Open the Bark App
- Copy the push URL displayed on the homepage (format like: `https://api.day.app/your_device_key`)
- Configure the URL to `BARK_URL` in GitHub Secrets
### Method 2: Self-hosting (complete privacy control) 🔒
**Suitable for**: Those with servers, requiring complete privacy, and technical expertise
**One-click Docker deployment**:
```bash
docker run -d \
  --name bark-server \
  -p 8080:8080 \
  finab/bark-server
```
**Configure TrendRadar**:
```yaml
BARK_URL: http://your-server-ip:8080/your_device_key
```
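A quick way to confirm the device key works is a test push from the terminal. Bark accepts a simple GET with title and body as path segments; the URL below is a placeholder.

```shell
BARK_URL="https://api.day.app/your_device_key"  # placeholder from the app homepage

# Uncomment to push "test message" with title "TrendRadar" to your phone:
# curl -s "$BARK_URL/TrendRadar/test%20message"
echo "$BARK_URL/TrendRadar/test%20message"
```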
---
**Notes:**
- ✅ Bark uses APNs for pushing, with a maximum message size of 4KB
- ✅ Supports automatic batching, no need to worry about long messages
- ✅ Push format is plain text (automatically removes Markdown syntax)
- ⚠️ Only supports iOS platform
**Related links:**
- [Bark official website](https://bark.day.app/)
- [Bark GitHub repository](https://github.com/Finb/Bark)
- [Bark Server self-hosting tutorial](https://github.com/Finb/bark-server)
</details>
<details>
<summary>👉 Click to expand: <strong>Slack push</strong></summary>
<br>
**GitHub Secret configuration (⚠️ Name must match exactly)**:
- **Name**: `SLACK_WEBHOOK_URL` (copy and paste this name)
- **Secret**: Your Slack Incoming Webhook URL
<br>
**Slack introduction:**
Slack is a team collaboration tool, and Incoming Webhooks can push messages to Slack channels.
**Setup steps:**
### Step 1: Create a Slack App
1. **Access the Slack API page**:
- Open https://api.slack.com/apps?new_app=1
- If not logged in, log in to your Slack workspace
2. **Choose creation method**:
- Click **"From scratch"**
3. **Fill in app information**:
- **App Name**: Enter the app name (e.g., `TrendRadar` or `Hot News Monitor`)
- **Workspace**: Select your workspace from the dropdown list
- Click **"Create App"**
### Step 2: Enable Incoming Webhooks
1. **Navigate to Incoming Webhooks**:
- Find and click **"Incoming Webhooks"** in the left menu
2. **Enable the feature**:
- Find **"Activate Incoming Webhooks"** switch
- Toggle the switch from `OFF` to `ON`
- The page will refresh to display new configuration options
### Step 3: Generate Webhook URL
1. **Add a new Webhook**:
- Scroll to the bottom of the page
- Click **"Add New Webhook to Workspace"**
2. **Select the target channel**:
- An authorization page will pop up
- Select the channel to receive messages from the dropdown list (e.g., `#hot-news`)
- ⚠️ If you want to select a private channel, you must first join the channel
3. **Authorize the app**:
- Click **"Allow"** to complete authorization
- The system will automatically jump back to the configuration page
### Step 4: Copy and save the Webhook URL
1. **View the generated URL**:
- In the "Webhook URLs for Your Workspace" section
- You will see the generated Webhook URL
- Format like: `https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX`
2. **Copy the URL**:
- Click the **"Copy"** button next to the URL
- Or manually select and copy the URL
3. **Configure to TrendRadar**:
- **GitHub Actions**: Add the URL to `SLACK_WEBHOOK_URL` in GitHub Secrets
- **Local testing**: Fill the URL into `slack_webhook_url` in `config/config.yaml`
- **Docker deployment**: Add the URL to `SLACK_WEBHOOK_URL` in `docker/.env`
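Before the first scheduled run you can confirm the URL works from a terminal. Slack's Incoming Webhooks accept a JSON body with a `text` field; the URL below uses the placeholder format from step 4.

```shell
# Placeholder: the Webhook URL copied in step 4.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"

PAYLOAD='{"text":"TrendRadar webhook test"}'

# Uncomment to post into the authorized channel:
# curl -s -X POST -H 'Content-type: application/json' -d "$PAYLOAD" "$SLACK_WEBHOOK_URL"
echo "$PAYLOAD"
```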
---
**Notes:**
- ✅ Supports Markdown format (automatically converted to Slack mrkdwn)
- ✅ Supports automatic batching (4KB per batch)
- ✅ Suitable for team collaboration, centralized message management
- ⚠️ Webhook URL contains the secret key, do not expose it
**Message format preview:**
```
*[1/2 batches]*
📊 *Hot word statistics*
🔥 *[1/3] AI ChatGPT* : 2 messages
1. [Baidu Hot Search] 🆕 ChatGPT-5 officially released *[1]* - 09:15 (1 time)
2. [Toutiao] AI chip concept stocks surge *[3]* - [08:30 ~ 10:45] (3 times)
```
**Related links:**
- [Slack Incoming Webhooks official documentation](https://api.slack.com/messaging/webhooks)
- [Slack API app management](https://api.slack.com/apps)
</details>
<details>
<summary>👉 Click to expand: <strong>Generic Webhook push</strong> (supports Discord, Matrix, IFTTT, etc.)</summary>
<br>
**GitHub Secret configuration (⚠️ Name must match exactly)**:
- **Name**: `GENERIC_WEBHOOK_URL` (copy and paste this name)
- **Secret**: Your Webhook URL
- **Name**: `GENERIC_WEBHOOK_TEMPLATE` (optional, copy and paste this name)
- **Secret**: JSON template string, supports `{title}` and `{content}` placeholders
<br>
**Generic Webhook introduction:**
Generic Webhook supports any platform that accepts HTTP POST requests, including but not limited to:
- **Discord**: Push to channels via Webhook
- **Matrix**: Bridge push via Webhook
- **IFTTT**: Trigger automation processes
- **Self-built services**: Any custom service that supports Webhook
**Configuration example:**
### Discord configuration
1. **Get the Webhook URL**:
- Go to Discord server settings → Integrations → Webhooks
- Create a new Webhook and copy the URL
2. **Configure the template**:
```json
{"content": "{content}"}
```
3. **GitHub Secret configuration**:
- `GENERIC_WEBHOOK_URL`: Discord Webhook URL
- `GENERIC_WEBHOOK_TEMPLATE`: `{"content": "{content}"}`
### Custom template
The template supports two placeholders:
- `{title}` - Message title
- `{content}` - Message content
**Template example:**
```json
# Default format (leave blank)
{"title": "{title}", "content": "{content}"}
# Discord format
{"content": "{content}"}
# Custom format
{"text": "{content}", "username": "TrendRadar"}
```
---
**Notes:**
- ✅ Supports Markdown format (consistent with WeChat format)
- ✅ Supports automatic batching
- ✅ Supports multi-account configuration (use `;` to separate)
- ⚠️ The template must be a valid JSON format
- ⚠️ Different platforms have different requirements for message format, please refer to the target platform documentation
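To see what actually gets POSTed, here is a minimal sketch of the placeholder substitution (done with `sed` for illustration; TrendRadar's own implementation may differ):

```shell
# The default template and two sample values:
TEMPLATE='{"title": "{title}", "content": "{content}"}'
TITLE="TrendRadar"
CONTENT="hot spot test"

# Replace the two supported placeholders to produce the request body:
BODY=$(printf '%s' "$TEMPLATE" | sed -e "s/{title}/$TITLE/" -e "s/{content}/$CONTENT/")
echo "$BODY"
```

Running this prints `{"title": "TrendRadar", "content": "hot spot test"}`, which is the JSON the target platform receives.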
</details>
<br>
### 3️⃣ Step 3: Manually test news pushes
> ⚠️ Reminder:
> - Complete steps 1-2, then test immediately! Test successfully before adjusting configurations (step 4)
> - Please enter your own project, not this one!
**How to find your Actions page**:
- **Method 1**: Open your forked project homepage, click the **Actions** tab at the top
- **Method 2**: Directly access `https://github.com/your_username/TrendRadar/actions`
**Example comparison**:
- ❌ Author's project: `https://github.com/sansan0/TrendRadar/actions`
- ✅ Your project: `https://github.com/your_username/TrendRadar/actions`
**Test steps**:
1. Enter your project's Actions page
2. Find **"Get Hot News"** (must be these words), click into it, and click the **"Run workflow"** button on the right
- If you don't see this, refer to [#109](https://github.com/sansan0/TrendRadar/issues/109) to solve
3. About 3 minutes later, messages will be pushed to your configured platform
<br>
> ⚠️ Reminder:
> - Don't test too frequently to avoid triggering GitHub Actions limits
> - After clicking Run workflow, you need to refresh the browser page to see the new run record
<br>
### 4️⃣ Step 4: Configuration instructions (optional)
The default configuration is already usable. If you need personalized adjustments, understand the following files:
| File | Function |
|------|------|
| `config/config.yaml` | Main configuration file: push mode, time window, platform list, hot word weight, etc. |
| `config/frequency_words.txt` | Keyword file: set concerned vocabulary, filter push content |
| `config/ai_analysis_prompt.txt` | AI prompt template: customize AI analyst role and analysis dimensions |
| `.github/workflows/crawler.yml` | Execution frequency: control how often to run (⚠️ be cautious when modifying) |
👉 **Detailed configuration tutorial**: [Configuration details](#configuration-details)
<br>
### 5️⃣ Step 5: Remote Cloud Storage & Check-in Configuration
**v4.0.0 Important Change**: Introduced an "activity detection" mechanism, GitHub Actions require periodic check-ins to maintain operation.
- **Operation Cycle**: Each check-in is valid for **7 days**; the service automatically suspends when the countdown expires.
- **Renewal Method**: Manually trigger the "Check In" workflow on the Actions page to reset the 7-day validity.
- **Operation Path**: `Actions` → `Check In` → `Run workflow`
- **Design Philosophy**:
- If you forget to check in for 7 days, perhaps this information is not essential for you. Timely suspension helps you detach from the information flow and gives your brain a breather.
- GitHub Actions are valuable public computing resources. The check-in mechanism aims to avoid idle computing power and ensure resources are allocated to active and needed users. Thanks for your understanding and support.
---
**About Remote Cloud Storage Configuration (choose according to deployment method):**
- **GitHub Actions Users**:
- **Current Status**: Actions run in a fresh environment each time, without saving files. If cloud storage is not configured, the project will run in **lightweight mode** (no incremental push, no historical tracking).
- **Recommendation**: Configure remote cloud storage for a complete experience.
- **Docker / Local Users**:
- **Current Status**: Data is saved on the local hard drive by default.
- **Recommendation**: Cloud storage is optional and can be used for off-site backup.
<details>
<summary>👉 Click to expand: <strong>Remote Cloud Storage Configuration Tutorial (using Cloudflare R2 as an example)</strong></summary>
<br>
**⚠️ Preconditions (important):**
According to Cloudflare platform rules, enabling R2 requires binding a payment method.
* **Purpose**: Only for identity verification (Verify Only), **no charges incurred**.
* **Payment**: Supports dual-currency credit cards or PayPal in China.
* **Usage**: R2's free tier (10GB storage/month) is more than enough for daily operation, so you won't actually be charged.
---
**GitHub Secret Configuration (need to add 4 items):**
| Name | Value (description) |
|-------------|-----------------|
| `S3_BUCKET_NAME` | Bucket name (e.g., `trendradar-data`) |
| `S3_ACCESS_KEY_ID` | Access key ID (Access Key ID) |
| `S3_SECRET_ACCESS_KEY` | Access key (Secret Access Key) |
| `S3_ENDPOINT_URL` | S3 API endpoint (e.g., R2: `https://<account-id>.r2.cloudflarestorage.com`) |
**Optional Configuration:**
| Name | Value (description) |
|-------------|-----------------|
| `S3_REGION` | Region (default `auto`, some service providers may require specification) |
> 💡 **More storage configuration options**: See [Where is the data saved?](#11-where-is-the-data-saved)
<br>
**Detailed operation steps (obtaining credentials):**
1. **Enter R2 Overview**:
- Log in to [Cloudflare Dashboard](https://dash.cloudflare.com/).
- Find and click `R2 object storage` in the left sidebar.
2. **Create Bucket**:
- Click `Overview`.
- Click `Create bucket` in the top right corner.
- Enter a name (e.g., `trendradar-data`) and click `Create bucket`.
3. **Create API Token**:
- Go back to the **Overview** page.
- Click `Manage` in the bottom right corner to find and click `Manage R2 API Tokens`.
- You will see `S3 API`: `https://<account-id>.r2.cloudflarestorage.com` (this is S3_ENDPOINT_URL).
- Click `Create Account API Token`.
- **⚠️ Key settings**:
- **Token name**: Fill in freely (e.g., `github-action-write`).
- **Permissions**: Choose `Admin Read & Write`.
- **Specify bucket**: For security, recommend choosing `For specified bucket only` and select your bucket (e.g., `trendradar-data`).
   - Click `Create API Token`, then **immediately copy** the displayed `Access Key ID` and `Secret Access Key` (they are shown only once!).
</details>
<br>
### 6️⃣ Step 6: Enable AI Analysis Push
This is a core feature of v5.0.0: AI summarizes and analyzes the news for you. Highly recommended.
**Configuration method:**
Add to GitHub Secrets (or `.env` / `config.yaml`):
- `AI_API_KEY`: Your API Key (supports DeepSeek, OpenAI, etc.)
- `AI_PROVIDER`: Service provider name (e.g., `deepseek`, `openai`)
That's it. No complex deployment is needed, and the next push will include an intelligent analysis report.
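For Docker or local deployments, the same values can live in `config.yaml`. The key names below are inferred from the environment-variable mapping table in the Docker section of this README; verify them against your own `config.yaml`:

```yaml
# config/config.yaml — sketch based on the env-var mapping
# (AI_API_KEY → ai.api_key, AI_PROVIDER → ai.provider, AI_ANALYSIS_ENABLED → ai_analysis.enabled)
ai:
  provider: "deepseek"   # or "openai", "gemini"
  api_key: "sk-xxx..."   # prefer GitHub Secrets / .env over committing the key
ai_analysis:
  enabled: true
```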
<br>
### 7️⃣ Step 7: 🎉 Deployment Successful!
Congratulations! You can now start enjoying the efficient information flow brought by TrendRadar.
💬 **Join the community**: Follow the official account "**[硅基茶水间](#-support-the-project)**" to share your usage experience and advanced tips.
<br>
### 8️⃣ Step 8: Advanced: Choose Your AI Assistant
TrendRadar provides two AI usage methods to meet different needs:
| Feature | ✨ AI Analysis Push | 🧠 AI Intelligent Analysis |
| :--- | :--- | :--- |
| **Mode** | **Passive reception** (daily report) | **Active dialogue** (in-depth research) |
| **Scenario** | "What's happening today?" | "Analyze the changes in the AI industry over the past week" |
| **Deployment** | Simple (fill in a Key) | Advanced (requires local run/Docker) |
| **Client** | Mobile | Computer |
👉 **Conclusion**: Start with **AI Analysis Push** to meet daily needs; if you're a data analyst or need in-depth mining, try **[AI Intelligent Analysis](#-ai-intelligent-analysis)**.
<br>
<a name="配置详解"></a>
## ⚙️ Configuration Details
> **📖 Reminder**: This section provides detailed configuration instructions. We recommend completing the basic setup in [Quick Start](#-quick-start) first, then returning here for the detailed options.
### 1. Which platforms do I want to watch?
<details id="自定义监控平台">
<summary>👉 Click to expand: <strong>Select information sources</strong></summary>
<br>
**Configuration location:** `config/config.yaml` `platforms` section
The project's data sources come from [newsnow](https://github.com/ourongxing/newsnow). You can click on the [website](https://newsnow.busiyi.world/), click on [More], and check if there are any platforms you want.
To add a platform, browse the [project source code](https://github.com/ourongxing/newsnow/tree/main/server/sources), use the file names there as platform ids, and modify the `platforms` configuration in `config/config.yaml`:
```yaml
platforms:
enabled: true # Whether to enable hot list platform crawling
sources:
- id: "toutiao"
name: "Toutiao"
- id: "baidu"
name: "Baidu Hot Search"
- id: "wallstreetcn-hot"
name: "Wall Street Insights"
# Add more platforms...
```
> 💡 **Shortcut**: If you don't want to read the source code, copy a community-sorted [platform configuration summary](https://github.com/sansan0/TrendRadar/issues/95)
> ⚠️ **Note**: More platforms is not always better; choose 10-15 core platforms. Too many platforms leads to information overload and a worse experience.
</details>
### 2. What content do I care about?
Tell the robot what you want to see in the `frequency_words.txt` file and it will monitor it for you. It supports ordinary words, must words, filter words, and more.
| Syntax type | Symbol | Effect | Example | Matching logic |
|---------|------|------|------|---------|
| **Ordinary word** | None | Basic matching | `Huawei` | Contains any one |
| **Must word** | `+` | Limited range | `+Mobile` | Must contain simultaneously |
| **Filter word** | `!` | Exclude interference | `!Advertisement` | Contains then directly exclude |
| **Quantity limit** | `@` | Control display quantity | `@10` | Display at most 10 news (added in v3.2.0) |
| **Global filter** | `[GLOBAL_FILTER]` | Global exclusion of specified content | See below | Filter in any case (added in v3.5.0) |
| **Regular expression** | `/pattern/` | Precise matching mode | `/\bai\b/` | Use regular expression matching (added in v4.7.0) |
| **Display name** | `=> Remark` | Custom display text | `/\bai\b/ => AI related` | Push and HTML display remark name (added in v4.7.0) |
#### 2.1 Basic syntax
<a name="关键词基础语法"></a>
<details>
<summary>👉 Click to expand: <strong>Basic syntax tutorial</strong></summary>
<br>
**Configuration location:** `config/frequency_words.txt`
##### 1. **Ordinary keyword** - Basic matching
```txt
Huawei
OPPO
Apple
```
**Effect:** News titles containing any of these words will be captured
##### 2. **Must word** `+Vocabulary` - Limited range
```txt
Huawei
OPPO
+Mobile
```
**Effect:** Must contain both ordinary words and must words to be captured
##### 3. **Filter word** `!Vocabulary` - Exclude interference
```txt
Apple
Huawei
!Fruit
!Price
```
**Effect:** News containing filter words will be **directly excluded**, even if containing keywords
##### 4. **Quantity limit** `@Number` - Control display quantity (v3.2.0 added)
```txt
Tesla
Musk
@5
```
**Effect:** Limit the maximum number of news articles displayed for the keyword group
**Configuration priority:** `@Number` > Global configuration > Not limited
##### 5. **Global filter** `[GLOBAL_FILTER]` - Global exclusion of specified content (v3.5.0 added)
```txt
[GLOBAL_FILTER]
Advertisement
Promotion
Marketing
Shocking
Title party
[WORD_GROUPS]
Technology
AI
Huawei
Hongmeng
!Car
```
**Effect:** Filter news containing specified words in any case, **highest priority**
**Usage scenarios:**
- Filter low-quality content: Shocking, title party, exposure, etc.
- Filter marketing content: Advertisement, promotion, sponsorship, etc.
- Filter specific topics: Entertainment, gossip (according to needs)
**Filter priority:** Global filter > Word group filter (`!`) > Word group matching
**Area description:**
- `[GLOBAL_FILTER]`: Global filter area, containing words will be filtered in any case
- `[WORD_GROUPS]`: Word group area, keeping existing syntax (`!`, `+`, `@`)
- If no area is marked, everything is treated as word groups by default (backward compatible)
**Matching example:**
```txt
[GLOBAL_FILTER]
Advertisement
[WORD_GROUPS]
Technology
AI
```
- ❌ "Advertisement: Latest technology product launch" ← Contains global filter word "Advertisement", directly reject
- ✅ "Technology company releases AI new product" ← Does not contain global filter word, match "Technology" word group
- ✅ "AI technology breakthrough attracts attention" ← Does not contain global filter word, match "Technology" word group
**Notes:**
- Use global filter words cautiously; over-filtering can drop valuable content
- Keep the global filter list to roughly 5-15 words
- For filtering specific word groups, prioritize using word group internal filter words (`!` prefix)
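To make the priority concrete, here is a minimal sketch of the documented order (global filter > word-group filter > word-group matching). The function name and signature are hypothetical, not the project's actual code:

```python
def passes_filters(title, global_filters, group_filters, group_keywords):
    """Sketch of the documented filter priority:
    global filter > word-group filter (!) > word-group matching."""
    if any(w in title for w in global_filters):   # highest priority: reject outright
        return False
    if any(w in title for w in group_filters):    # then the group's ! words
        return False
    return any(w in title for w in group_keywords)  # finally, ordinary matching

# Mirrors the matching example above
print(passes_filters("Advertisement: Latest technology product launch",
                     ["Advertisement"], [], ["Technology", "AI"]))  # False
print(passes_filters("Technology company releases AI new product",
                     ["Advertisement"], [], ["Technology", "AI"]))  # True
```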
##### 6. **Regular expression** `/pattern/` - Precise matching mode (v4.7.0 added)
Ordinary keywords use substring matching, which is convenient for Chinese but can cause mis-matches in English. For example, `ai` will match the `ai` inside `training`.
Using regular expression syntax `/pattern/` can achieve precise matching:
```txt
/(?<![a-z])ai(?![a-z])/
Artificial intelligence
Machine learning
```
**Effect:** Use regular expressions for matching, supporting all Python regular syntax
**Common regular patterns:**
| Demand | Regular writing | Description |
|------|---------|------|
| English word boundary | `/\bword\b/` | Match independent words, e.g., `/\bai\b/` matches "AI" but not "training" |
| Non-letter before and after | `/(?<![a-z])ai(?![a-z])/` | Looser boundary, suitable for Chinese-English mixed scenarios |
| Start matching | `/^breaking/` | Only match titles starting with "breaking" |
| End matching | `/发布$/` | Only match titles ending with "发布" (release) |
| Multi-select | `/苹果\|华为\|小米/` | Match any one (note escape `\|`) |
**Matching example:**
```txt
# Configuration
/(?<![a-z])ai(?![a-z])/
Artificial intelligence
```
- ✅ "AI is the future" ← Match independent "AI"
- ✅ Titles where "ai" sits between Chinese characters also match
- ✅ "Artificial intelligence development is rapid" ← Match "artificial intelligence"
- ❌ "Resistance training is important" ← "ai" in "training" does not match
- ❌ "The maid cleaned the room" ← "ai" in "maid" does not match
**Combination use:**
```txt
# Regular + ordinary word + filter word
/\bai\b/
Artificial intelligence
Machine learning
!Advertisement
```
**Notes:**
- Regular expressions automatically enable case-insensitive matching (`re.IGNORECASE`)
- JavaScript-style flags such as `/pattern/i` are accepted, but the flags are ignored (case-insensitive matching is already on by default)
- Invalid regular syntax will be treated as ordinary words
- Regular can be used for ordinary words, must words (`+`), filter words (`!`)
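You can verify a pattern's behavior before putting it in the config. The snippet below checks the lookaround pattern from the table above with Python's `re`, applying `re.IGNORECASE` as the project does by default:

```python
import re

# Lookaround pattern from the table above; IGNORECASE mirrors the
# project's default case-insensitive matching.
pattern = re.compile(r"(?<![a-z])ai(?![a-z])", re.IGNORECASE)

print(bool(pattern.search("AI is the future")))                  # standalone "AI" matches
print(bool(pattern.search("ai技术新进展")))                       # Chinese characters are not [a-z]
print(bool(pattern.search("Resistance training is important")))  # "ai" inside "training" does not
```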
**💡 Can't write regular expressions? Let AI help you generate!**
If you're not familiar with regular expressions, you can directly let ChatGPT / Gemini / DeepSeek help you generate. Just tell AI:
> I need a Python regular expression to match the English word "ai" but not "ai" in "training".
> Please directly provide the regular expression in the format `/pattern/`, no need for additional explanation.
AI will give you a result like: `/(?<![a-zA-Z])ai(?![a-zA-Z])/`
##### 7. **Display name** `=> Remark` - Custom display text (v4.7.0 added)
Raw regular expressions are not friendly to read in push messages and HTML pages. Use the `=> Remark` syntax to set a display name:
```txt
/(?<![a-zA-Z])ai(?![a-zA-Z])/ => AI related
Artificial intelligence
```
**Effect:** Push messages and HTML pages display "AI related" instead of complex regular expressions
**Syntax format:**
```txt
# Regular + display name
/pattern/ => Display name
/pattern/i => Display name # Supports flags writing (flags are ignored)
/pattern/=>Display name # spaces around => are optional
# Ordinary word + display name
deepseek => DeepSeek dynamics
```
**Matching example:**
```txt
# Configuration
/(?<![a-zA-Z])ai(?![a-zA-Z])/ => AI related
Artificial intelligence
```
| Original configuration | Push/HTML display |
|---------|---------------|
| `/(?<![a-z])ai(?![a-z])/` + `Artificial intelligence` | `(?<![a-z])ai(?![a-z]) Artificial intelligence` |
| `/(?<![a-z])ai(?![a-z])/ => AI related` + `Artificial intelligence` | **`AI related`** |
**Notes:**
- Display name only needs to be written on the first word of the group
- If multiple words in a group have display names, use the first one
- If no display name is set, the words in the group are concatenated automatically
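Splitting on the first `=>` is all the syntax needs. This is an illustrative sketch (a hypothetical helper, not the project's parser); it handles the optional spaces shown above:

```python
def parse_display_name(line):
    """Split a frequency_words line into (word, display_name).
    Hypothetical helper; spaces around => are optional."""
    if "=>" in line:
        word, _, name = line.partition("=>")
        return word.strip(), name.strip()
    return line.strip(), None

print(parse_display_name("deepseek => DeepSeek dynamics"))
print(parse_display_name("/(?<![a-zA-Z])ai(?![a-zA-Z])/=>AI related"))
```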
---
#### 🔗 Word group function - Important role of empty lines
**Core rules:** Use **empty lines** to separate different word groups, each group independently counts
##### Example configuration:
```txt
iPhone
Huawei
OPPO
+Release

A-share
Shanghai stock exchange
Shenzhen stock exchange
+Rise and fall
!Prediction

World Cup
European Cup
Asian Cup
+Match
```
##### Word group explanation and matching effect:
**Group 1 - Mobile new product class:**
- Keywords: iPhone, Huawei, OPPO
- Must word: Release
- Effect: Must contain mobile brand name and "Release"
**Matching example:**
- ✅ "iPhone 15 officially released price announced" ← Has "iPhone" + "Release"
- ✅ "Huawei Mate60 series release live" ← Has "Huawei" + "Release"
- ✅ "OPPO Find X7 release time confirmed" ← Has "OPPO" + "Release"
- ❌ "iPhone sales hit a new high" ← Has "iPhone" but lacks "Release"
**Group 2 - Stock market class:**
- Keywords: A-share, Shanghai stock exchange, Shenzhen stock exchange
- Must word: Rise and fall
- Filter word: Prediction
- Effect: Focus on stock market fluctuations, excluding predictions
**Matching example:**
- ✅ "A-share market rose sharply today" ← Has "A-share" + "Rise and fall"
- ✅ "Shanghai stock exchange hit a new high" ← Has "Shanghai stock exchange" + "Rise and fall"
- ❌ "Expert predicts A-share market trend" ← Has "A-share" + "Rise and fall" but contains "Prediction"
**Group 3 - Football match class:**
- Keywords: World Cup, European Cup, Asian Cup
- Must word: Match
- Effect: Focus on match-related news
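The group rules above can be sketched in a few lines of Python. This is illustrative only: the real parser also handles `@`, `/regex/`, `=>`, and `[GLOBAL_FILTER]`, and it assumes every must word (`+`) has to be present:

```python
def parse_groups(text):
    """Split frequency_words-style text into word groups on blank lines.
    Sketch only; assumes all must words (+) are required for a match."""
    groups = []
    for block in text.strip().split("\n\n"):
        words = [w.strip() for w in block.splitlines() if w.strip()]
        groups.append({
            "keywords": [w for w in words if w[0] not in "+!"],
            "required": [w[1:] for w in words if w.startswith("+")],
            "excluded": [w[1:] for w in words if w.startswith("!")],
        })
    return groups

def matches(title, group):
    if any(w in title for w in group["excluded"]):      # filter words win
        return False
    if not all(w in title for w in group["required"]):  # every must word needed
        return False
    return any(w in title for w in group["keywords"])   # any ordinary word

groups = parse_groups("iPhone\nHuawei\nOPPO\n+Release\n\nA-share\n+Rise and fall\n!Prediction")
print(matches("Huawei Mate60 Release live", groups[0]))                # True
print(matches("iPhone sales hit a new high", groups[0]))               # False: no "Release"
print(matches("Expert Prediction: A-share Rise and fall", groups[1]))  # False: filtered
```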
---
#### 📝 Configuration tips
##### 1. **From loose to strict**
```txt
# Step 1: Test with loose keywords
Artificial intelligence
AI
ChatGPT

# Step 2: Add must words to limit scope after finding mis-matches
Artificial intelligence
AI
ChatGPT
+Technology

# Step 3: Add filter words to exclude interference
Artificial intelligence
AI
ChatGPT
+Technology
!Advertisement
!Training
```
##### 2. **Avoid excessive complexity**
❌ **Not recommended:** A word group contains too many words
```txt
Huawei
OPPO
Apple
Samsung
Vivo
OnePlus
Meizu
+Mobile
+Release
+Sales
!Fake
!Repair
!Second-hand
```
✅ **Recommended:** Split into multiple precise word groups
```txt
Huawei
OPPO
+New product

Apple
Samsung
+Release

Mobile
Sales
+Market
```
</details>
#### 2.2 Advanced configuration (v3.2.0 added)
<a name="关键词高级配置"></a>
<details>
<summary>👉 Click to expand: <strong>Advanced configuration tutorial</strong></summary>
<br>
##### Keyword Sorting Priority
**Configuration Location:** `config/config.yaml`
```yaml
report:
sort_by_position_first: false # Sorting priority configuration
```
| Configuration Value | Sorting Rule | Applicable Scenario |
|--------|---------|---------|
| `false` (default) | Hotspot count ↓ → Configuration position ↑ | Focus on heat trend |
| `true` | Configuration position ↑ → Hotspot count ↓ | Focus on personal priority |
**Example:** Configuration order A, B, C, hotspot count A(3), B(10), C(5)
- `false`: B(10) → C(5) → A(3)
- `true`: A(3) → B(10) → C(5)
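The two orderings reduce to different sort keys. A minimal sketch (hypothetical data layout, not the project's code) reproducing the example above:

```python
# Hypothetical word groups from the example above: configured in order A, B, C
# with hotspot counts A(3), B(10), C(5). Each tuple: (name, position, count).
groups = [("A", 0, 3), ("B", 1, 10), ("C", 2, 5)]

def sort_groups(groups, sort_by_position_first):
    if sort_by_position_first:
        key = lambda g: (g[1], -g[2])   # position asc, then count desc
    else:
        key = lambda g: (-g[2], g[1])   # count desc, then position asc
    return [name for name, _, _ in sorted(groups, key=key)]

print(sort_groups(groups, False))  # ['B', 'C', 'A']
print(sort_groups(groups, True))   # ['A', 'B', 'C']
```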
##### Global Display Quantity Limit
```yaml
report:
max_news_per_keyword: 10 # Maximum 10 news per keyword (0 = no limit)
```
**Docker Environment Variables:**
```bash
SORT_BY_POSITION_FIRST=true
MAX_NEWS_PER_KEYWORD=10
```
**Comprehensive Example:**
```yaml
# config.yaml
report:
sort_by_position_first: true # Prioritize by configuration order
max_news_per_keyword: 10 # Global default 10 news per keyword
```
```txt
# frequency_words.txt
Tesla
Musk
@20 # key focus: display 20 news (overrides the global limit)

Huawei # uses the global limit: 10 news

BYD
@5 # limit to 5 news
```
**Final Effect:** Display by configuration order: Tesla (20 news) → Huawei (10 news) → BYD (5 news)
</details>
### 3. Which Push Mode to Choose?
<details>
<summary>👉 Click to expand: <strong>Detailed comparison of three push modes</strong></summary>
<br>
**Configuration Location:** `config/config.yaml` under `report.mode`
```yaml
report:
mode: "daily" # Options: "daily" | "incremental" | "current"
```
#### Detailed Comparison Table
| Mode | Suitable for | Push Timing | Display Content | Typical Use Case |
|------|----------|----------|----------|------------|
| **Daily Summary**<br/>`daily` | 📋 Enterprise managers/ordinary users | Timed push (default hourly) | All matching news for the day<br/>+ New news area | **Example**: Check all important news for the day at 6 PM<br/>**Characteristics**: See the complete trend for the day, without missing any hotspots<br/>**Reminder**: Includes previously pushed news |
| **Current Rankings**<br/>`current` | 📰 Self-media creators/content creators | Timed push (default hourly) | Current rankings matching news<br/>+ New news area | **Example**: Track "what topics are trending now"<br/>**Characteristics**: Understand current popularity ranking changes in real-time<br/>**Reminder**: News that has been on the list will appear multiple times |
| **Incremental Monitoring**<br/>`incremental` | 📈 Investors/traders | Push only when new | New matching frequency word news | **Example**: Monitor "Tesla", only notify when new news appears<br/>**Characteristics**: Zero repetition, only see first-time appearing news<br/>**Suitable for**: High-frequency monitoring, avoiding information disturbance |
#### Actual Push Effect Example
Assuming you monitor the keyword "Apple", executed hourly:
| Time | Daily Mode Push | Current Mode Push | Incremental Mode Push |
|-----|--------------|----------------|-------------------|
| 10:00 | News A, News B | News A, News B | News A, News B |
| 11:00 | News A, News B, News C | News B, News C, News D | **Only** News C |
| 12:00 | News A, News B, News C | News C, News D, News E | **Only** News D, News E |
**Explanation**:
- `daily`: Accumulate and display all news for the day (A, B, C are retained)
- `current`: Display what is on the rankings right now (rankings change: News D enters the list, News A drops off)
- `incremental`: **Only push newly appearing news** (avoid repetition)
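The difference between the modes is just what each run selects relative to what has been seen before. A minimal sketch, assuming a set of titles already seen today (hypothetical; not the project's code):

```python
# Titles already pushed/seen today (hypothetical state keeping)
seen_today = set()

def select_for_push(mode, current_ranking):
    """current_ranking: titles matched in this crawl, in ranking order."""
    new_items = [t for t in current_ranking if t not in seen_today]
    seen_today.update(current_ranking)
    if mode == "incremental":
        return new_items              # zero repetition: first-time items only
    if mode == "current":
        return current_ranking        # whatever is on the list right now
    return sorted(seen_today)         # "daily": everything seen so far today

select_for_push("incremental", ["News A", "News B"])         # 10:00 run
print(select_for_push("incremental", ["News B", "News C"]))  # 11:00 run → ['News C']
```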
#### Frequently Asked Questions
> **💡 Encountering this issue?** 👉 "Running hourly, the news pushed in the first execution still appears in the next hour's push"
> - **Cause**: You might have chosen `daily` (daily summary) or `current` (current rankings) mode
> - **Solution**: Switch to `incremental` (incremental monitoring) mode, only push new content
#### ⚠️ Important Note for Incremental Mode
> **Users who chose `incremental` (incremental monitoring) mode, please note:**
>
> 📌 **Incremental mode only pushes when new matching news appears**
>
> **If you haven't received a push for a long time, it might be because:**
> 1. There are no new hotspots matching your keywords during the current period
> 2. Keyword configuration is too strict or too broad
> 3. Monitored platform quantity is small
>
> **Solutions:**
> - Solution 1: 👉 [Optimize keyword configuration](#2-keyword-configuration) - Adjust keyword precision, add or modify monitoring words
> - Solution 2: Switch push mode - Use `current` or `daily` mode, can receive timed pushes
> - Solution 3: 👉 [Increase monitored platforms](#1-platform-configuration) - Add more news platforms, expand information sources
</details>
### 4. Adjusting Hotspot Algorithm
<details>
<summary>👉 Click to expand: <strong>Customizing hotspot weights</strong></summary>
<br>
**Configuration Location:** `config/config.yaml` under `advanced.weight`
```yaml
advanced:
weight:
rank: 0.6 # Ranking weight
frequency: 0.3 # Frequency weight
hotness: 0.1 # Hotness weight
```
The current default configuration is balanced.
#### Two Core Scenarios
**Pursue Real-time Hotspots**:
```yaml
advanced:
weight:
rank: 0.8 # Mainly look at ranking
frequency: 0.1 # Less concerned about sustainability
hotness: 0.1
```
**Suitable for**: Self-media bloggers, marketing personnel, users who want to quickly understand current hot topics
**Pursue In-depth Topics**:
```yaml
advanced:
weight:
rank: 0.4 # Moderately look at ranking
frequency: 0.5 # Emphasize daily sustainability
hotness: 0.1
```
**Suitable for**: Investors, researchers, news workers, users who need in-depth trend analysis
#### Adjustment Method
1. **Three numbers must add up to 1.0**
2. **Increase the important one**: If ranking is important, increase `rank`, if sustainability is important, increase `frequency`
3. **Suggest adjusting by 0.1-0.2 each time**, observe the effect
Core idea: Users pursuing speed and timeliness increase ranking weight, users pursuing depth and stability increase frequency weight.
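A weighted sum over the three components captures the idea. This is an illustrative sketch only; how the project normalizes each component is not shown here, so the sub-scores are assumed to be in [0, 1]:

```python
def hotspot_score(rank_score, frequency_score, hotness_score, weights=None):
    """Weighted hotspot score sketch; sub-scores assumed normalized to [0, 1]."""
    weights = weights or {"rank": 0.6, "frequency": 0.3, "hotness": 0.1}
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must add up to 1.0"
    return (weights["rank"] * rank_score
            + weights["frequency"] * frequency_score
            + weights["hotness"] * hotness_score)

# Same item under the default vs. the "real-time hotspot" profile:
# a high-ranking (0.9) but short-lived (0.2) item scores higher when rank dominates.
print(round(hotspot_score(0.9, 0.2, 0.5), 2))                                          # 0.65
print(round(hotspot_score(0.9, 0.2, 0.5,
                          {"rank": 0.8, "frequency": 0.1, "hotness": 0.1}), 2))        # 0.79
```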
</details>
### 5. What Does the Message I Receive Look Like?
<details>
<summary>👉 Click to expand: <strong>Message style preview</strong></summary>
<br>
#### Push Example
📊 Hotspot Vocabulary Statistics
🔥 [1/3] AI ChatGPT : 2 News
1. [Baidu Hot Search] 🆕 ChatGPT-5 Officially Released [**1**] - 09:15 (1 time)
2. [Toutiao] AI Chip Concept Stocks Soar [**3**] - [08:30 ~ 10:45] (3 times)
━━━━━━━━━━━━━━━━━━━
📈 [2/3] BYD Tesla : 2 News
1. [Weibo] 🆕 BYD Monthly Sales Record Broken [**2**] - 10:20 (1 time)
2. [Douyin] Tesla Price Promotion [**4**] - [07:45 ~ 09:15] (2 times)
━━━━━━━━━━━━━━━━━━━
📌 [3/3] A-share Stock Market : 1 News
1. [Wall Street News] A-share Midday Review Analysis [**5**] - [11:30 ~ 12:00] (2 times)
🆕 **Newly Added Hotspot News** (Total 2 News)
**Baidu Hot Search** (1 News):
1. ChatGPT-5 Officially Released [**1**]
**Weibo** (1 News):
1. BYD Monthly Sales Record Broken [**2**]
Update Time: 2025-01-15 12:30:15
#### Message Format Description
| Format Element | Example | Meaning | Explanation |
| ------------- | --------------------------- | ------------ | --------------------------------------- |
| 🔥📈📌 | 🔥 [1/3] AI ChatGPT | Heat Level | 🔥 High heat (≥10 news) 📈 Medium heat (5-9 news) 📌 Ordinary heat (<5 news) |
| [Serial Number/Total] | [1/3] | Sorting Position | Current word group ranking among all matching word groups |
| Frequency Word Group | AI ChatGPT | Keyword Group | Word group in configuration file, title must contain the word |
| : N News | : 2 News | Matching Quantity | Total news matched for the word group |
| [Platform Name] | [Baidu Hot Search] | Source Platform | News platform name |
| 🆕 | 🆕 ChatGPT-5 Officially Released | New Mark | Newly appeared hotspot in this round |
| [**Number**] | [**1**] | High Ranking | Ranking ≤ threshold, red and bold display |
| [Number] | [7] | Ordinary Ranking | Ranking > threshold, ordinary display |
| - Time | - 09:15 | First Time | Time when the news was first discovered |
| [Time~Time] | [08:30 ~ 10:45] | Duration | Time range from first appearance to last appearance |
| (N times) | (3 times) | Appearance Frequency | Total appearances during monitoring |
| **New Area** | 🆕 **Newly Added Hotspot News** | New Topic Summary | Separate display of newly appeared hot topics |
</details>
### 6. Docker Deployment
**Image Description:**
TrendRadar provides two independent Docker images, choose to deploy according to needs:
| Image Name | Purpose | Description |
|---------|------|------|
| `wantcat/trendradar` | News push service | Scheduled news crawling and push notifications (required) |
| `wantcat/trendradar-mcp` | AI analysis service | MCP protocol support, AI dialogue analysis (optional) |
> 💡 **Suggestion**:
> - Only need push function: Deploy `wantcat/trendradar` image only
> - Need AI analysis function: Deploy both images
<details>
<summary>👉 Click to expand: <strong>Complete Docker deployment guide</strong></summary>
<br>
#### Method 1: Using Docker Compose (Recommended)
1. **Create Project Directory and Configuration**:
```bash
# Clone the project to local
git clone https://github.com/sansan0/TrendRadar.git
cd TrendRadar
```
> 💡 **Description**: The key directory structure required for Docker deployment is as follows:
```
Current Directory/
├── config/
│ ├── config.yaml # Core function configuration (required)
│ ├── frequency_words.txt # Keyword configuration (required)
│ ├── timeline.yaml # Timeline configuration
│ ├── ai_analysis_prompt.txt # AI analysis prompt (optional)
│ ├── ai_translation_prompt.txt # AI translation prompt (optional)
│ ├── ai_interests.txt # AI interest filtering configuration (optional)
│ ├── ai_filter/ # AI filtering related prompts
│ │ ├── prompt.txt
│ │ ├── extract_prompt.txt
│ │ └── update_tags_prompt.txt
│ └── custom/ # User-defined configuration (optional)
│ ├── ai/ # Custom AI prompts
│ └── keyword/ # Custom keyword files
└── docker/
├── .env # Sensitive information + Docker-specific configuration
└── docker-compose.yml # Docker Compose orchestration file
```
2. **Configuration File Description**:
**Configuration Division Principle (v4.6.0 optimization)**:
| File | Purpose | Modification Frequency | Description |
|------|------|---------------------|------|
| `config/config.yaml` | **Core Function Configuration** | Low | Report mode, push settings, storage format, push window, AI analysis switch, platform enablement, etc. |
| `config/frequency_words.txt` | **Keyword Configuration** | High | Set hot words of concern, support grouping, regular expressions, aliases, and other advanced syntax |
| `config/timeline.yaml` | **Timeline Configuration** | Low | Control news timeline display and filtering rules |
| `config/ai_analysis_prompt.txt` | **AI Analysis Prompt** | Medium | Customize AI analysis role definition and output format (v5.0.0+) |
| `config/ai_translation_prompt.txt` | **AI Translation Prompt** | Low | Customize AI translation prompt template |
| `config/ai_interests.txt` | **AI Interest Filtering** | Medium | Define AI-based interest automatic news filtering rules |
| `config/ai_filter/` | **AI Filtering Prompt** | Low | Internal prompts for AI filtering module (generally no need to modify) |
| `config/custom/` | **User-defined Extension** | As needed | `custom/ai/` for custom AI prompts, `custom/keyword/` for custom keyword files |
| `docker/.env` | **Sensitive Information + Docker-specific Configuration** | Low | Webhook URLs, API Key, S3 credentials, scheduled tasks, etc., **not tracked by git** |
> 💡 **Division Points**:
> - **Functional Behavior** → Modify `config.yaml` (e.g., enable/disable a platform, adjust push mode)
> - **Content of Concern** → Modify `frequency_words.txt` (e.g., add new keywords of concern)
> - **AI Output Style** → Modify `ai_analysis_prompt.txt` or `ai_translation_prompt.txt`
> - **Credentials and Certificates** → Modify `docker/.env` (centralized storage for sensitive information such as API Keys and Webhook URLs)
> - **Personalized Extension** → Use `config/custom/` directory to avoid direct modification of default configuration being overwritten by upgrades
> 💡 **Configuration Modification Takes Effect**: After modifying `config.yaml`, execute `docker compose up -d` to restart the container to take effect
**⚙️ Environment Variable Override Mechanism (v3.0.5+)**
Environment variables in `.env` file will override corresponding configurations in `config.yaml`:
| Environment Variable | Corresponding Configuration | Example Value | Description |
|---------|---------|-------|------|
| `ENABLE_WEBSERVER` | - | `true` / `false` | Whether to automatically start the web server |
| `WEBSERVER_PORT` | - | `8080` | Web server port |
| `WEBSERVER_WATCHDOG` | - | `true` / `false` | Whether to enable "web service automatic recovery" (automatically restart when service is abnormal) |
| `WEBSERVER_WATCHDOG_INTERVAL` | - | `60` | Automatic recovery check interval (seconds) |
| `FEISHU_WEBHOOK_URL` | `notification.channels.feishu.webhook_url` | `https://...` | Feishu Webhook (multiple accounts separated by `;`) |
| `AI_ANALYSIS_ENABLED` | `ai_analysis.enabled` | `true` / `false` | Whether to enable AI analysis (added in v5.0.0) |
| `AI_API_KEY` | `ai.api_key` | `sk-xxx...` | AI API Key (shared by `ai_analysis` and `ai_translation`) |
| `AI_PROVIDER` | `ai.provider` | `deepseek` / `openai` / `gemini` | AI provider |
| `S3_*` | `storage.remote.*` | - | Remote storage configuration (5 parameters) |
**Configuration Priority**: Environment variables > config.yaml
**Usage**:
- Modify `.env` file and fill in required configurations
- Or add directly in the environment variable management interface of NAS/Synology Docker
- Take effect after restarting the container: `docker compose up -d`
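For reference, a minimal `docker/.env` might look like this (variable names are taken from the table above; all values are placeholders):

```bash
# docker/.env — placeholders only; this file is not tracked by git
FEISHU_WEBHOOK_URL=https://...   # multiple accounts separated by ;
AI_ANALYSIS_ENABLED=true
AI_PROVIDER=deepseek
AI_API_KEY=sk-xxx...
ENABLE_WEBSERVER=true
WEBSERVER_PORT=8080
```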
3. **Start Service**:
**Option A: Start All Services (Push + AI Analysis)**
```bash
# Pull the latest image
docker compose pull
# Start all services (trendradar + trendradar-mcp)
docker compose up -d
```
**Option B: Start News Push Service Only**
```bash
# Start trendradar only (timed crawling and pushing)
docker compose pull trendradar
docker compose up -d trendradar
```
**Option C: Start MCP AI Analysis Service Only**
```bash
# Start trendradar-mcp only (providing AI analysis interface)
docker compose pull trendradar-mcp
docker compose up -d trendradar-mcp
```
> 💡 **Hint**:
> - Most users only need to start `trendradar` to achieve news push function
> - Only when using ChatGPT/Gemini for AI dialogue analysis, you need to start `trendradar-mcp`
> - Two services are independent and can be flexibly combined according to needs
4. **View Running Status**:
```bash
# View news push service logs
docker logs -f trendradar
# View MCP AI analysis service logs
docker logs -f trendradar-mcp
# View all container status
docker ps | grep trendradar
# Stop specific service
docker compose stop trendradar # Stop push service
docker compose stop trendradar-mcp # Stop MCP service
```
#### Method 2: Local Build (Developer Option)
If you need to customize and modify the code or build your own image:
```bash
# Clone the project
git clone https://github.com/sansan0/TrendRadar.git
cd TrendRadar
# Modify configuration files
vim config/config.yaml
vim config/frequency_words.txt
# Use build version of docker compose
cd docker
cp docker-compose-build.yml docker-compose.yml
```
**Build and Start Service**:
```bash
# Option A: Build and start all services
docker compose build
docker compose up -d
# Option B: Build and start news push service only
docker compose build trendradar
docker compose up -d trendradar
# Option C: Build and start MCP AI analysis service only
docker compose build trendradar-mcp
docker compose up -d trendradar-mcp
```
> 💡 **Architecture Parameter Description**:
> - Default build `amd64` architecture image (suitable for most x86_64 servers)
> - If you need to build `arm64` architecture (Apple Silicon, Raspberry Pi, etc.), set environment variables:
> ```bash
> export DOCKER_ARCH=arm64
> docker compose build
> ```
#### Image Update
```bash
# Method 1: Manual update (crawler + MCP image)
docker pull wantcat/trendradar:latest
docker pull wantcat/trendradar-mcp:latest
docker compose down
docker compose up -d
# Method 2: Use docker compose to update
docker compose pull
docker compose up -d
```
**Available Images**:
| Image Name | Purpose | Description |
|---------|------|------|
| `wantcat/trendradar` | News push service | Timed crawling news, push notifications |
| `wantcat/trendradar-mcp` | MCP service | AI analysis function (optional) |
#### Service Management Commands
```bash
# View running status
docker exec -it trendradar python manage.py status
# Manually execute a crawler
docker exec -it trendradar python manage.py run
# View real-time logs
docker exec -it trendradar python manage.py logs
# Display current configuration
docker exec -it trendradar python manage.py config
# Display output files
docker exec -it trendradar python manage.py files
# Web server management (for browser access to generated reports)
docker exec -it trendradar python manage.py start_webserver # Start web server
docker exec -it trendradar python manage.py stop_webserver # Stop web server
docker exec -it trendradar python manage.py webserver_status # View web server status
# View help information
docker exec -it trendradar python manage.py help
# Restart container
docker restart trendradar
# Stop container
docker stop trendradar
# Delete container (retain data)
docker rm trendradar
```
> 💡 **Web Server Description**:
> - After startup, you can access `http://localhost:8080` through the browser to view the latest report
> - Access historical reports through directory navigation (e.g., `http://localhost:8080/2025-xx-xx/`)
> - The port can be configured via the `WEBSERVER_PORT` parameter in the `.env` file
> - Automatic startup: set `ENABLE_WEBSERVER=true` in `.env`
> - Automatic recovery: `WEBSERVER_WATCHDOG=true` (enabled by default) checks every `WEBSERVER_WATCHDOG_INTERVAL` seconds and restarts the web service automatically if it has gone down
> - `stop_webserver` stops the web service manually (command: `docker exec -it trendradar python manage.py stop_webserver`)
> - "Automatic pull-up" means the system restarts the web service on its own; if you stopped it manually and want it back, run `docker exec -it trendradar python manage.py start_webserver`
> - Security tip: the server only serves static files, is restricted to the output directory, and is bound to localhost
#### Data Persistence
Generated reports and data are saved in the `./output` directory by default, and data will be retained even if the container is restarted or deleted.
**📊 Web Version Report Access Path**:
The daily summary HTML reports generated by TrendRadar are saved to the following locations:
| File Location | Access Method | Applicable Scenario |
|---------|---------|---------|
| `output/index.html` | Host machine direct access | **Docker deployment** (through Volume mount, host machine visible) |
| `index.html` | Root directory access | **GitHub Pages** (repository root directory, Pages automatically identified) |
| `output/html/YYYY-MM-DD/daily_summary.html` | Historical report access | All environments (archived by date) |
**Local Access Example**:
```bash
# Method 1: Access through web server (recommended, Docker environment)
# 1. Start web server
docker exec -it trendradar python manage.py start_webserver
# 2. Access in browser
http://localhost:8080 # Access latest report (default index.html)
http://localhost:8080/html/2025-xx-xx/ # Access report for specified date
# Method 2: Directly open file (local environment)
open ./output/index.html # macOS
start ./output/index.html # Windows
xdg-open ./output/index.html # Linux
# Method 3: Access historical archive
open ./output/html/2025-xx-xx/daily_summary.html
```
**Why are there two index.html files?**
- `output/index.html`: Docker Volume mounted to host machine, can be directly opened locally
- `index.html`: GitHub Actions pushed to repository, GitHub Pages automatically deployed
> 💡 **Hint**: Two files have the same content, choose either one to access
#### Troubleshooting
```bash
# Check container status
docker inspect trendradar
# View container logs
docker logs --tail 100 trendradar
# Enter container for debugging
docker exec -it trendradar /bin/bash
# Verify configuration file
docker exec -it trendradar ls -la /app/config/
```
#### MCP Service Deployment (AI Analysis Function)
If you need to use the AI analysis function, you can deploy an independent MCP service container.
**Architecture Description**:
```mermaid
flowchart TB
  subgraph trendradar["trendradar"]
    A1[Timed news crawling]
    A2[Push notifications]
  end
  subgraph trendradar-mcp["trendradar-mcp"]
    B1[127.0.0.1:3333]
    B2[AI analysis interface]
  end
  subgraph shared["Shared Volume"]
    C1["config/ (ro)"]
    C2["output/ (ro)"]
  end
  trendradar --> shared
  trendradar-mcp --> shared
```
**Quick Start**:
If you have completed deployment according to [Method 1: Using Docker Compose](#method-1-using-docker-compose-recommended), you only need to start the MCP service:
```bash
cd TrendRadar/docker
docker compose up -d trendradar-mcp
# View running status
docker ps | grep trendradar-mcp
```
**Start MCP Service Alone** (without using docker compose):
```bash
# Linux/Mac
docker run -d --name trendradar-mcp \
  -p 127.0.0.1:3333:3333 \
  -v $(pwd)/config:/app/config:ro \
  -v $(pwd)/output:/app/output:ro \
  -e TZ=Asia/Shanghai \
  wantcat/trendradar-mcp:latest

# Windows PowerShell
docker run -d --name trendradar-mcp `
  -p 127.0.0.1:3333:3333 `
  -v ${PWD}/config:/app/config:ro `
  -v ${PWD}/output:/app/output:ro `
  -e TZ=Asia/Shanghai `
  wantcat/trendradar-mcp:latest
```
> ⚠️ **Note**: When running alone, ensure that the current directory has `config/` and `output/` folders, and contains configuration files and news data.
**Verify Service**:
```bash
# Check MCP service health status
curl http://127.0.0.1:3333/mcp

# View MCP service logs
docker logs -f trendradar-mcp
```
#### Configure in AI Client
After the MCP service starts, configure it according to your client:

**Cherry Studio** (Recommended, GUI configuration):
- Settings → MCP Server → Add
- Type: `streamableHttp`
- URL: `http://127.0.0.1:3333/mcp`

**Claude Desktop / Cline** (JSON configuration):
```json
{
  "mcpServers": {
    "trendradar": {
      "url": "http://127.0.0.1:3333/mcp",
      "type": "streamableHttp"
    }
  }
}
```
> 💡 **Tip**: MCP service only listens to local port (127.0.0.1), ensuring security. For remote access, configure reverse proxy and authentication yourself.
</details>
### 7. How is Content Displayed?
<details>
<summary>👉 Click to expand: <strong>Customizing Push Styles and Content</strong></summary>
<br>
**Configuration Location:** `config/config.yaml` in `report` and `display` sections
```yaml
report:
  mode: "daily"                  # Push mode
  display_mode: "keyword"        # Display mode (added in v4.6.0)
  rank_threshold: 5              # Ranking highlight threshold
  sort_by_position_first: false  # Sorting priority
  max_news_per_keyword: 0        # Maximum news per keyword

display:
  region_order:                  # Region display order (added in v5.2.0)
    - new_items                  # New hotspots region
    - hotlist                    # Hotlist region
    - rss                        # RSS subscription region
    - standalone                 # Independent display area
    - ai_analysis                # AI analysis region
```
#### Common Configuration Item Descriptions
| What do I want to adjust? | Which parameter to modify? | Default value | Description |
|-------------|-------------|-------|------|
| **Push Mode** | `mode` | `daily` | Determines push timing and content, see [Push Mode Details](#3-push-mode-details) |
| **Grouping Method** | `display_mode` | `keyword` | `keyword`=group by keyword (e.g., "AI"), `platform`=group by platform (e.g., "Weibo") |
| **Highlight Key Points** | `rank_threshold` | `5` | News ranking top 5 will be **bolded** for easy viewing |
| **Sorting Rule** | `sort_by_position_first` | `false` | `false`=high heat ranks first, `true`=your configured words rank first |
| **Quantity Limit** | `max_news_per_keyword` | `0` | Maximum news per keyword, `0` means no limit |
| **Display Order** | `display.region_order` | See above configuration | Adjust list order to control region display positions |
#### Grouping Method Comparison (display_mode)
Do you want to see "what news falls under this topic" or "what news is on this platform"?
| Mode | Grouping Method | Title Prefix | Applicable Scenario |
|------|---------|---------|---------|
| `keyword` (Default) | **Group by keyword** | `[Platform Name]` | I follow "AI" and want to see AI-related news from various platforms |
| `platform` | **Group by platform** | `[Keyword]` | I follow "Weibo" and want to see news about my followed keywords on Weibo |
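To make the two modes concrete, here is a minimal Python sketch of the grouping logic (illustrative only; the field names `keyword` and `platform` are assumptions, not TrendRadar's actual data model):

```python
# Illustrative sketch of the two grouping modes; field names are assumed,
# not taken from TrendRadar's actual data model.
news = [
    {"title": "GPT-5 rumor", "platform": "weibo", "keyword": "AI"},
    {"title": "AI chip ban", "platform": "zhihu", "keyword": "AI"},
    {"title": "BYD sales record", "platform": "weibo", "keyword": "BYD"},
]

def group_by(items, field):
    """Group item titles by the given field, preserving input order."""
    groups = {}
    for item in items:
        groups.setdefault(item[field], []).append(item["title"])
    return groups

by_keyword = group_by(news, "keyword")    # display_mode: keyword
by_platform = group_by(news, "platform")  # display_mode: platform
```

The same three items end up under two "AI" entries in keyword mode, but under "weibo"/"zhihu" headings in platform mode.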
#### Region Display Order (region_order)
By adjusting the `display.region_order` list order, you can control the display position of each region in the push message.
**Default Order**: New hotspots → Hotlist → RSS → Independent display area → AI analysis
**Custom Example**: Want AI analysis to be at the top?
```yaml
display:
  region_order:
    - ai_analysis  # Move to the first line
    - new_items
    - hotlist
    - rss
    - standalone
```
**Note**: A region will only be displayed if it meets two conditions:
1. It is in the `region_order` list
2. The corresponding switch in `display.regions` is `true`
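The two conditions can be sketched in a few lines of Python (a hypothetical illustration of the rule, not the project's actual code):

```python
# A region is rendered only if it appears in region_order AND its
# switch in display.regions is true (hypothetical illustration).
region_order = ["ai_analysis", "new_items", "hotlist", "rss", "standalone"]
regions = {"hotlist": True, "new_items": False, "rss": True,
           "standalone": False, "ai_analysis": True}

# Filtering preserves the order of region_order
visible = [r for r in region_order if regions.get(r, False)]
```

With the values above, only `ai_analysis`, `hotlist`, and `rss` are shown, in that order.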
#### Region Switches (regions)
Control whether each region is displayed in the push:
```yaml
display:
  regions:
    hotlist: true       # Hotlist region (keyword-matched hot news)
    new_items: false    # New hotspots region (including hotlist new + RSS new)
    rss: true           # RSS subscription region (keyword-matched RSS content)
    standalone: false   # Independent display area (complete hotlist/RSS, not filtered by keywords)
    ai_analysis: true   # AI analysis region
```
| Region | Configuration Key | Default Value | Description |
|------|--------|-------|------|
| **Hotlist** | `hotlist` | `true` | Keyword-matched hot news aggregation |
| **New Hotspots** | `new_items` | `false` | Newly appeared hot topics (including hotlist new + RSS new). Note: The 🆕 mark in the hotlist region is not affected by this switch |
| **RSS** | `rss` | `true` | Keyword-matched RSS subscription content. If disabled, RSS matching is skipped, but RSS in the independent display area is unaffected |
| **Independent Display Area** | `standalone` | `false` | Complete content display of specified platforms/RSS, not filtered by keywords |
| **AI Analysis** | `ai_analysis` | `true` | AI-generated hotspot analysis summary |
#### Sorting Priority (sort_by_position_first)
Assume you configured keywords: 1. Tesla, 2. BYD.
Actual heat: BYD (10 articles), Tesla (3 articles).
| Configuration Value | Sorting Result | Your Idea |
|-------|---------|---------|
| `false` (Default) | BYD (10 articles) → Tesla (3 articles) | "Who is hotter, who ranks first" |
| `true` | Tesla (3 articles) → BYD (10 articles) | "My configured order is the priority, regardless of heat" |
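The difference between the two values can be expressed as a tiny sort (illustrative Python, not the project's actual code):

```python
# Keywords in your configured order, paired with their matched article counts
configured = [("Tesla", 3), ("BYD", 10)]

# sort_by_position_first: false — the hotter keyword comes first
by_heat = sorted(configured, key=lambda kw: kw[1], reverse=True)

# sort_by_position_first: true — configured order wins, heat is ignored
by_position = list(configured)
```

`by_heat` puts BYD (10 articles) first; `by_position` keeps Tesla first because it is first in your configuration.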
#### Independent Display Area (standalone)
**Scenario**: Some platforms (e.g., Zhihu Hotlist, HackerNews), I want to **see the complete list**, regardless of whether it matches my keywords.
```yaml
display:
  regions:
    standalone: true  # Display the independent area in the push (closing does not affect AI analysis)
  standalone:
    platforms: ["zhihu", "weibo"]  # These platforms' hotlists are displayed in full
    rss_feeds: ["hacker-news"]     # These RSS sources' content is displayed in full
    max_items: 20                  # Maximum display items
```
> 💡 **Push Display and AI Analysis are Independent**: `regions.standalone` only controls whether the independent display area is shown in the push. Even if the push display is closed, as long as `include_standalone: true` is enabled in the AI configuration, AI will still analyze the complete data of these platforms. Suitable for users who want AI to perform in-depth analysis but do not want the push message to be too long.
</details>
### 8. When to Push?
<details>
<summary>👉 Click to expand: <strong>Set Push Timing (Scheduling System)</strong></summary>
<br>
**Configuration Location:** `config/config.yaml` in `schedule` section + `config/timeline.yaml`
#### Quick Start
Just select a preset template in `config.yaml`, no need to edit `timeline.yaml`:
```yaml
schedule:
  enabled: true
  preset: "morning_evening"  # Change here
```
#### Optional Preset Templates
| Template Name | Description | Push Behavior |
|-------|------|---------|
| `morning_evening` | Daily incremental + evening summary (recommended) | Incremental throughout the day + evening summary from 19:00-21:00 |
| `always_on` | Always-on monitoring | Push whenever there is an update, no time segment division |
| `office_hours` | Office hours | Three-segmented push during working hours (morning review → midday hotspots → end-of-work summary), weekend incremental free push |
| `night_owl` | Night owl | Afternoon review + overnight full summary (22:00-01:00, spanning midnight) |
| `custom` | Completely customizable | Edit `timeline.yaml` bottom custom section |
#### Completely Customizable
If preset templates do not meet your needs, edit the `custom` section at the bottom of `timeline.yaml` to freely define time segments, daily plans, and weekly mappings. See `timeline.yaml` file comments for details.
#### Important Reminder
> ⚠️ **Users upgrading from old versions**:
> - v6.0.0 removed old `notification.push_window` and `ai_analysis.analysis_window` configurations
> - Use new `schedule` + `timeline.yaml` scheduling system
> - Old "push once a day" can be replaced with `morning_evening` preset
> - Old "office hours push" can be replaced with `office_hours` preset
> ⚠️ **GitHub Actions Users**:
> - GitHub Actions execution time is unstable, may have ±15 minutes deviation
> - Time segment range should leave at least **2 hours**
> - For precise timing push, recommend using **Docker deployment** on a personal server
</details>
### 9. How Often to Run?
<details>
<summary>👉 Click to expand: <strong>Set Automatic Running Frequency</strong></summary>
<br>
**Configuration Location:** `.github/workflows/crawler.yml` in `schedule` section
```yaml
on:
  schedule:
    - cron: "0 * * * *" # Run once an hour
```
#### How to Modify Running Frequency?
GitHub Actions uses the cron time format. You don't need to learn it in depth; just copy and replace one of the lines below.
| What I want... | Copy this line | Description |
|-----------|------------|------|
| **Once an hour** | `- cron: "0 * * * *"` | **Default configuration**, runs at minute 0 |
| **Every 30 minutes** | `- cron: "*/30 * * * *"` | Run every 30 minutes |
| **Daily at 8:00 AM** | `- cron: "0 0 * * *"` | ⚠️ Write `0` because UTC time (0:00) = Beijing time (8:00 AM) |
| **Working hours every half hour** | `- cron: "*/30 0-14 * * *"` | Corresponds to Beijing time 8:00 - 22:00 |
| **Three meals a day** | `- cron: "0 0,6,12 * * *"` | Corresponds to Beijing time 8:00, 14:00, 20:00 |
#### ⚠️ Two Important Reminders
1. **Time difference**: GitHub's servers run on UTC time.
   - **Simple arithmetic**: your desired Beijing time **minus 8 hours** = the time to fill in.
   - *Example: to run at 20:00 Beijing time, fill in 12:00.*
2. **Don't run too frequently**: keep the interval at least 30 minutes.
   - GitHub's free Actions quota is limited; running too often may get your workflows throttled by GitHub.
   - Actions start-up is also delayed by a few minutes, so overly precise scheduling is pointless.
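The "minus 8 hours" arithmetic, including wrap-around past midnight, can be written as:

```python
def beijing_to_utc_hour(beijing_hour: int) -> int:
    """Convert a Beijing-time hour (UTC+8) to the UTC hour for the cron field."""
    return (beijing_hour - 8) % 24

# Want 20:00 Beijing time → UTC hour 12, i.e. `- cron: "0 12 * * *"`
print(beijing_to_utc_hour(20))
# Want 06:00 Beijing time → UTC hour 22 (the previous UTC day)
print(beijing_to_utc_hour(6))
```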
#### Step-by-Step Modification
1. In your GitHub repository, find `.github/workflows/crawler.yml` file
2. Click the ✏️ (Edit) button in the top right corner
3. Find the `cron: "..."` line, replace the content with the above "code"
4. Click the green **Commit changes** button to save
</details>
### 10. Push to Multiple Groups/Devices
<details>
<summary>👉 Click to expand: <strong>Push to Multiple Receivers Simultaneously</strong></summary>
> ### ⚠️ **Security First**
> **Do not write passwords/tokens directly in `config.yaml`!**
> If you upload files containing passwords to GitHub, the whole world can see them.
>
> **Correct approach**:
> - **GitHub Actions users**: Go to Settings -> Secrets and add
> - **Docker users**: Write in `.env` file (this file will not be uploaded)
#### How to push to multiple places simultaneously?
Simple: separate multiple addresses with a semicolon `;`.
**Example**:
Assume you have two Feishu groups and want to receive push notifications:
- Group 1 address: `https://.../webhook/aaa`
- Group 2 address: `https://.../webhook/bbb`
Fill in configuration:
`https://.../webhook/aaa;https://.../webhook/bbb`
#### Platforms Supporting Multiple Accounts
| Platform | Configuration Method | Notes |
|------|---------|----------|
| **Feishu/DingTalk/WeChat** | Use `;` to separate multiple Webhook URLs | Simplest, just string them together |
| **Bark (iOS)** | Use `;` to separate multiple Key URLs | Push to multiple iPhones |
| **Telegram** | Both Token and ChatID need to be separated by `;` | ⚠️ **Note correspondence**:<br>Token1 corresponds to ChatID1<br>Token2 corresponds to ChatID2 |
| **ntfy** | Topic and Token need to be separated by `;` | If a topic does not need a token, leave it blank:<br>`token1;;token3` (middle one is empty) |
#### Common Configuration Examples (GitHub Secrets / .env)
```bash
# Feishu send to 3 groups
FEISHU_WEBHOOK_URL=https://hook1...;https://hook2...;https://hook3...
# DingTalk send to 2 groups
DINGTALK_WEBHOOK_URL=https://oapi...;https://oapi...
# Telegram send to 2 people (note correspondence)
TELEGRAM_BOT_TOKEN=tokenA;tokenB
TELEGRAM_CHAT_ID=userA;userB
```
> **Tip**: To prevent abuse, by default, each platform has a limit of up to 3 accounts. If you need more, modify the `MAX_ACCOUNTS_PER_CHANNEL` configuration.
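How the `;`-separated values behave, including the empty ntfy slot, can be sketched in Python (a hypothetical illustration; `split_accounts` is not a real TrendRadar function):

```python
def split_accounts(value: str, limit: int = 3) -> list:
    """Split a ';'-separated config value, keeping empty slots, capped at `limit`."""
    return [part.strip() for part in value.split(";")][:limit]

# ntfy: the middle topic needs no token, so its slot stays empty
tokens = split_accounts("token1;;token3")  # ["token1", "", "token3"]

# Telegram: tokens and chat IDs are paired by position
bots = split_accounts("tokenA;tokenB")
chats = split_accounts("userA;userB")
pairs = list(zip(bots, chats))             # tokenA↔userA, tokenB↔userB
```

The positional pairing is why Telegram's Token1 must correspond to ChatID1.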
</details>
### 11. Where is Data Stored?
<details id="storage-config">
<summary>👉 Click to expand: <strong>Choose Data Storage Location</strong></summary>
<br>
#### Where will data be stored?
The system will automatically choose the most suitable location for you, and you usually don't need to worry:
| Your running environment | Data storage location | Description |
|-------------|-----------|------|
| **Docker / Local Run** | **Local hard drive** | Exists in the `output/` folder of the project directory, can be viewed anytime. |
| **GitHub Actions** | **Cloud storage** | Since GitHub Actions will destroy the environment after running, a cloud storage configuration is required (e.g., Cloudflare R2). |
#### How to configure cloud storage? (GitHub Actions users must read)
If you are using GitHub Actions, you need a "cloud hard drive" to store data, such as Cloudflare R2 (it has a free quota).
**Add these 5 variables in GitHub Secrets:**
| Variable name | What to fill in |
|-------|-------|
| `STORAGE_BACKEND` | `remote` |
| `S3_BUCKET_NAME` | Your storage bucket name |
| `S3_ACCESS_KEY_ID` | Your Access Key |
| `S3_SECRET_ACCESS_KEY` | Your Secret Key |
| `S3_ENDPOINT_URL` | Your R2 interface address |
> 💡 **Detailed tutorial**: How to apply for R2? See [Quick Start - Remote Storage Configuration](#-quick-start)
#### How long will data be saved?
By default, your data is never deleted automatically. If it grows too large, you can enable automatic cleanup.
**Configuration location**: `config/config.yaml`
```yaml
storage:
  local:
    retention_days: 30   # Local data kept for 30 days (0 means keep forever)
  remote:
    retention_days: 30   # Cloud data kept for 30 days
```
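The retention rule amounts to deleting date-named report directories older than `retention_days`; a hypothetical sketch (not the project's actual cleanup code):

```python
from datetime import date

def is_expired(dir_name: str, today: date, retention_days: int) -> bool:
    """Report directories are named YYYY-MM-DD; retention_days == 0 means keep forever."""
    if retention_days == 0:
        return False
    return (today - date.fromisoformat(dir_name)).days > retention_days

today = date(2025, 3, 1)
is_expired("2025-01-01", today, 30)  # older than 30 days → delete
is_expired("2025-02-20", today, 30)  # within 30 days → keep
is_expired("2020-01-01", today, 0)   # 0 disables cleanup → keep
```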
#### Push time is incorrect? (Timezone setting)
If you are overseas or find that the push time does not match your local time, you can modify the timezone.
**Configuration location**: `config/config.yaml`
```yaml
app:
  timezone: "Asia/Shanghai"  # Default is China time
```
- For example, if you are in Los Angeles, change it to: `America/Los_Angeles`
- For example, if you are in London, change it to: `Europe/London`
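You can check what a timezone change does with Python's standard `zoneinfo` module:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The same instant rendered under two different `app.timezone` settings
t = datetime(2025, 1, 1, 12, 0, tzinfo=ZoneInfo("Asia/Shanghai"))
la = t.astimezone(ZoneInfo("America/Los_Angeles"))
print(la.strftime("%Y-%m-%d %H:%M"))  # 2024-12-31 20:00 (LA is UTC-8 in winter)
```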
</details>
### 12. Let AI Analyze Hotspots for You
<details id="ai-analysis-config">
<summary>👉 Click to expand: <strong>Enable AI Intelligent Analysis</strong></summary>
<br>
#### What can AI do for me?
With this feature enabled, AI acts like a professional analyst for each batch of pushed news:
1. **Automatic reading**: reads all matched hot news
2. **Deep thinking**: analyzes individual news items and the associations between them
3. **Report writing**: appends a concise, insightful "insight report" to the end of the push message
**Includes**: Hotspot trend summary, opinion trend judgment, cross-platform association analysis, potential impact assessment, etc.
#### How to Enable AI Analysis?
The simplest method is to configure via environment variables (GitHub Secrets or `.env` recommended).
**Required Configurations**:
| Variable Name | What to Fill | Description |
|---------------|--------------|-------------|
| `AI_ANALYSIS_ENABLED` | `true` | Enable Switch |
| `AI_API_KEY` | `sk-xxxxxx` | Your API Key |
| `AI_MODEL` | `deepseek/deepseek-chat` | Model Identifier (Format: `provider/model`) |
**Supported AI Providers** (Based on LiteLLM, supports 100+ providers):
| Provider | What to Fill for AI_MODEL | Description |
|----------|---------------------------|-------------|
| **DeepSeek** (Recommended) | `deepseek/deepseek-chat` | High cost-effectiveness, suitable for high-frequency analysis |
| **OpenAI** | `openai/gpt-4o`<br>`openai/gpt-4o-mini` | GPT-4o Series |
| **Google Gemini** | `gemini/gemini-1.5-flash`<br>`gemini/gemini-1.5-pro` | Gemini Series |
| **Custom API** | Any format | Use with `AI_API_BASE` |
> 💡 **New Feature**: Now unified interface based on [LiteLLM](https://github.com/BerriAI/litellm), supports 100+ AI providers, simpler configuration, and better error handling.
**Optional Configurations**:
| Variable Name | Default Value | Description |
|---------------|---------------|-------------|
| `AI_API_BASE` | (Automatic) | Custom API Address (e.g., OneAPI, local model) |
| `AI_TEMPERATURE` | `1.0` | Sampling Temperature (0-2, higher is more random) |
| `AI_MAX_TOKENS` | `5000` | Maximum generated tokens |
| `AI_TIMEOUT` | `120` | Request timeout time (seconds) |
| `AI_NUM_RETRIES` | `2` | Number of retries on failure |
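Putting the required and optional variables together, a `.env` fragment might look like this (values are placeholders):

```shell
# .env — AI analysis configuration (placeholder values)
AI_ANALYSIS_ENABLED=true
AI_API_KEY=sk-xxxxxx                # your real API key goes here
AI_MODEL=deepseek/deepseek-chat     # format: provider/model
AI_TEMPERATURE=1.0
AI_MAX_TOKENS=5000
AI_TIMEOUT=120
AI_NUM_RETRIES=2
```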
#### Advanced Play: AI Translation
If you follow foreign RSS sources (like Hacker News), AI can help translate content into Chinese for you.
**Configuration Location**: `config/config.yaml`
```yaml
ai_translation:
  enabled: true        # Enable translation
  language: "Chinese"  # Target language (Chinese, English, Japanese...)
```
#### Advanced Play: Custom AI "Persona"
Do you find the AI's tone too formal? You can modify its prompt to fit your preferred style (e.g., "sharp-tongued commentator", "experienced investment advisor").
- **Edit File**: `config/ai_analysis_prompt.txt`
- **Edit Method**: Open with a text editor, tell AI what analysis style you want.
</details>
<br>
## ✨ AI Intelligent Analysis
TrendRadar v3.0.0 adds AI analysis capabilities based on **MCP (Model Context Protocol)**, allowing you to converse with news data in natural language for in-depth analysis.
### ⚠️ Read Before Use
**Important Note**: AI functionality requires local news data support.
AI analysis is **not** a direct query of real-time network data but analyzes your **locally accumulated news data** (stored in the `output` folder).
#### Instructions:
1. **Project comes with test data**: The `output` directory contains one week of hotlist news data (**2025-12-21 to 2025-12-27**) for a quick AI feature experience.
2. **Query Limitations**:
- ✅ Can only query data within the existing date range (December 21-27, a total of 7 days).
- ❌ Cannot query real-time news or future dates.
3. **Get Latest Data**:
- Test data is for quick experience; **recommended to deploy the project yourself** to get real-time data.
- Follow [Quick Start](#-quick-start) to deploy and run the project.
- Wait for at least 1 day to accumulate news data before querying the latest hotspots.
### 1. Quick Deployment
Cherry Studio provides a GUI configuration interface for a 5-minute quick deployment; the complex parts become a one-click installation.
**Graphic Deployment Tutorial**: Updated to my [public account](#-support-the-project), reply "mcp" to get it.
**Detailed Deployment Tutorial**: [README-Cherry-Studio.md](README-Cherry-Studio.md)
**Deployment Mode Instructions**:
- **STDIO Mode (Recommended)**: Configure once and no need to repeat; **graphic deployment tutorial** only uses this mode as an example.
- **HTTP Mode (Alternative)**: If STDIO mode configuration encounters issues, use HTTP mode. The configuration method is basically the same as STDIO, but the content to copy and paste is a single line, less prone to errors. The only difference is that you need to manually start the service before each use. For details, refer to [README-Cherry-Studio.md](README-Cherry-Studio.md) at the bottom for HTTP mode instructions.
### 2. Learning How to Chat with AI
**Detailed Dialogue Tutorial**: [README-MCP-FAQ.md](README-MCP-FAQ.md)
> 💡 **Tip**: It's not recommended to ask multiple questions at once. If the AI model you choose can't even handle sequential calls like the example below, consider switching to another.
<img src="/_image/ai4.png" alt="mcp usage effect diagram" width="600">
<br>
## 🔌 MCP Client
TrendRadar MCP service supports the standard Model Context Protocol (MCP) and can connect to various MCP-supported AI clients for intelligent analysis.
### Supported Clients
**Notes**:
- Replace `/path/to/TrendRadar` with your actual project path.
- Use double backslashes for Windows paths: `C:\\Users\\YourName\\TrendRadar`.
- Remember to restart after saving.
<details>
<summary>👉 Click to expand: <b>Cursor</b></summary>
#### Method 1: HTTP Mode
1. **Start HTTP Service**:
```bash
# Windows
start-http.bat
# Mac/Linux
./start-http.sh
```
2. **Configure Cursor**:
**Project-level Configuration** (Recommended):
Create `.cursor/mcp.json` in the project root directory:
```json
{
  "mcpServers": {
    "trendradar": {
      "url": "http://localhost:3333/mcp",
      "description": "TrendRadar News Hotspot Aggregation Analysis"
    }
  }
}
```
**Global Configuration**:
Create `~/.cursor/mcp.json` (with the same content).
3. **Usage Steps**:
- Restart Cursor after saving the configuration file.
- Check the connected tools in the "Available Tools" section of the chat interface.
- Start using: `Search for today's "AI" related news`.
#### Method 2: STDIO Mode (Recommended)
Create `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "trendradar": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/TrendRadar",
        "run",
        "python",
        "-m",
        "mcp_server.server"
      ]
    }
  }
}
```
</details>
<details>
<summary>👉 Click to expand: <b>VSCode (Cline/Continue)</b></summary>
#### Cline Configuration
Add to Cline's MCP settings:
**HTTP Mode**:
```json
{
  "trendradar": {
    "url": "http://localhost:3333/mcp",
    "type": "streamableHttp",
    "autoApprove": [],
    "disabled": false
  }
}
```
**STDIO Mode** (Recommended):
```json
{
  "trendradar": {
    "command": "uv",
    "args": [
      "--directory",
      "/path/to/TrendRadar",
      "run",
      "python",
      "-m",
      "mcp_server.server"
    ],
    "type": "stdio",
    "disabled": false
  }
}
```
#### Continue Configuration
Edit `~/.continue/config.json`:
```json
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "stdio",
          "command": "uv",
          "args": [
            "--directory",
            "/path/to/TrendRadar",
            "run",
            "python",
            "-m",
            "mcp_server.server"
          ]
        }
      }
    ]
  }
}
```
**Usage Example**:
```
Analyze the trend of "Tesla" in the last 7 days
Generate today's hotspot summary report
Search for news related to "Bitcoin" and analyze sentiment
```
</details>
<details>
<summary>👉 Click to expand: <b>MCP Inspector</b> (Debugging Tool)</summary>
<br>
MCP Inspector is an official debugging tool for testing MCP connections:
#### Usage Steps
1. **Start TrendRadar HTTP Service**:
```bash
# Windows
start-http.bat
# Mac/Linux
./start-http.sh
```
2. **Start MCP Inspector**:
```bash
npx @modelcontextprotocol/inspector
```
3. **Connect in Browser**:
- Visit: `http://localhost:3333/mcp`
- Test "Ping Server" function to verify connection.
- Check "List Tools" to see if it returns 17 tools:
- Basic Query: get_latest_news, get_news_by_date, get_trending_topics
- Intelligent Search: search_news, find_related_news
- Advanced Analysis: analyze_topic_trend, analyze_data_insights, analyze_sentiment, aggregate_news, compare_periods, generate_summary_report
- RSS Query: get_latest_rss, search_rss, get_rss_feeds_status
- System Management: get_current_config, get_system_status, resolve_date_range
</details>
<details>
<summary>👉 Click to expand: <b>Other MCP-supported Clients</b></summary>
<br>
Any client supporting the Model Context Protocol can connect to TrendRadar:
#### HTTP Mode
**Service Address**: `http://localhost:3333/mcp`
**Basic Configuration Template**:
```json
{
  "name": "trendradar",
  "url": "http://localhost:3333/mcp",
  "type": "http",
  "description": "News Hotspot Aggregation Analysis"
}
```
#### STDIO Mode (Recommended)
**Basic Configuration Template**:
```json
{
  "name": "trendradar",
  "command": "uv",
  "args": [
    "--directory",
    "/path/to/TrendRadar",
    "run",
    "python",
    "-m",
    "mcp_server.server"
  ],
  "type": "stdio"
}
```
**Notes**:
- Replace `/path/to/TrendRadar` with the actual project path.
- Use double backslashes for Windows paths: `C:\\Users\\...`.
- Ensure project dependencies are installed (run setup scripts).
</details>
### Frequently Asked Questions
<details>
<summary>👉 Click to expand: <b>Q1: HTTP Service Fails to Start?</b></summary>
<br>
**Checklist**:
1. Confirm port 3333 is not occupied:
```bash
# Windows
netstat -ano | findstr :3333
# Mac/Linux
lsof -i :3333
```
2. Check if project dependencies are installed:
```bash
# Re-run installation script
# Windows: setup-windows.bat or setup-windows-en.bat
# Mac/Linux: ./setup-mac.sh
```
3. View detailed error logs:
```bash
uv run python -m mcp_server.server --transport http --port 3333
```
4. Try custom port:
```bash
uv run python -m mcp_server.server --transport http --port 33333
```
</details>
<details>
<summary>👉 Click to expand: <b>Q2: Client Cannot Connect to MCP Service?</b></summary>
<br>
**Solutions**:
1. **STDIO Mode**:
- Confirm UV path is correct (run `which uv` or `where uv`).
- Confirm project path is correct and has no Chinese characters.
- Check client error logs.
2. **HTTP Mode**:
- Confirm service is started (visit `http://localhost:3333/mcp`).
- Check firewall settings.
- Try using 127.0.0.1 instead of localhost.
3. **General Check**:
- Restart client application.
- Check MCP service logs.
- Use MCP Inspector to test connection.
</details>
<details>
<summary>👉 Click to expand: <b>Q3: Tool Calls Fail or Return Errors?</b></summary>
<br>
**Possible Causes**:
1. **Data Does Not Exist**:
- Confirm the crawler has been run (data exists in the `output` directory).
- Check query date range for available data.
- View available dates in the output directory.
2. **Parameter Error**:
- Check date format: `YYYY-MM-DD`.
- Confirm platform ID is correct: `zhihu`, `weibo`, etc.
- View tool documentation for parameter instructions.
3. **Configuration Issue**:
- Confirm `config/config.yaml` exists.
- Confirm `config/frequency_words.txt` exists.
- Check configuration file format.
</details>
<br>
## 📚 Project Related
> **4 Articles**:
- [Leave a message at the bottom of this article for project authors to answer questions on their phone](https://mp.weixin.qq.com/s/KYEPfTPVzZNWFclZh4am_g)
- [2 months to break 1000 stars, my GitHub project promotion experience](https://mp.weixin.qq.com/s/jzn0vLiQFX408opcfpPPxQ)
- [Notes on running this project with GitHub fork](https://mp.weixin.qq.com/s/C8evK-U7onG1sTTdwdW2zg)
- [How to write articles for public accounts or news articles based on this project](https://mp.weixin.qq.com/s/8ghyfDAtQZjLrnWTQabYOQ)
>**AI Development**:
- If you have niche needs, you can develop based on my project; even those with no programming background can try.
- All my open-source projects use **AI-assisted software** to improve development efficiency; this tool is open-sourced.
- **Core Function**: Quickly feed project code to AI; you only need to supplement personal needs.
- **Project Address**: https://github.com/sansan0/ai-code-context-helper
### Other Projects
> 📍 Mao Zedong Footprint Map - Interactive dynamic display of complete trajectory from 1893-1976. Welcome comrades to contribute data.
- https://github.com/sansan0/mao-map
> Bilibili comment area data visualization analysis software.
- https://github.com/sansan0/bilibili-comment-analyzer
[](https://www.star-history.com/#sansan0/TrendRadar&Date)
<br>
## 📄 License
GPL-3.0 License
---
<div align="center">
[🔝 Back to Top](#trendradar)
</div>