<div align="center">
<h1>K8M</h1>
</div>
<div align="center">

[Trendshift](https://trendshift.io/repositories/14095) · [License](https://github.com/weibaohui/k8m/blob/master/LICENSE) · [Go Report Card](https://goreportcard.com/report/github.com/weibaohui/k8m) · [MCP Catalog](https://archestra.ai/mcp-catalog/weibaohui__k8m) · [zread](https://zread.ai/weibaohui/k8m)

</div>
[English](README_en.md) | [中文](README.md)
**k8m** is an AI-driven, lightweight Mini Kubernetes Dashboard designed to simplify cluster management. It is built on AMIS and uses [`kom`](https://github.com/weibaohui/kom) as the Kubernetes API client. **k8m** ships with built-in support for the Qwen2.5-Coder-7B and deepseek-ai/DeepSeek-R1-Distill-Qwen-7B models, and also supports connecting your own private large models (including Ollama).
### DEMO
- [DEMO](http://107.150.119.151:3618)
- [DEMO-InCluster](http://107.150.119.151:31999)
- Username/password: `demo` / `demo`
### Documentation
- For detailed configuration and usage instructions, please refer to the [documentation](docs/README.md).
- For update logs, please refer to [CHANGELOG](CHANGELOG.md).
- [Development Design Document-Chinese](https://zread.ai/weibaohui/k8m)
- [Development Design Document-English](https://deepwiki.com/weibaohui/k8m)
### Key Features
- **Miniaturized Design**: All functions are integrated into a single executable file, making deployment convenient and usage simple.
- **Easy to Use**: A friendly user interface and intuitive operation flow make Kubernetes management easier. Supports standard Kubernetes as well as AWS EKS, k3s, KinD, k0s, and other cluster types.
- **Efficient Performance**: The backend is built with Golang and the frontend is based on Baidu AMIS, ensuring efficient resource usage and fast responses.
- **AI-Driven Integration**: Implements word explanation, resource guides, automatic translation of YAML attributes, Describe information interpretation, AI log triage, and command recommendations based on ChatGPT, and integrates [k8s-gpt](https://github.com/k8sgpt-ai/k8sgpt) functionality to provide intelligent support for managing Kubernetes.
- **Functional Plug-ins**: Feature functions are plug-in based, enabled on demand, and do not occupy resources when not enabled.
- **MCP Integration**: Visual MCP management lets large models call Tools. 49 built-in multi-cluster Kubernetes MCP tools can be combined into more than 100 cluster operations, and k8m can also serve as an MCP Server for other large-model software, making it easy for a large model to manage Kubernetes. Every MCP call can be recorded in detail. Mainstream mcp.so services are supported.
- **MCP Permission Integration**: Multi-cluster management permissions are unified with MCP large-model call permissions. In short: whoever invokes the large model executes MCP calls with their own permissions, so usage is safe and cannot exceed the user's authorization.
- **Multi-Cluster Management**: Automatically detects InCluster mode when running inside a cluster; after a kubeconfig path is configured, automatically scans configuration files in the same directory and registers and manages multiple clusters simultaneously, with heartbeat detection and automatic reconnection.
- **Multi-Cluster Permission Management**: Supports authorizing users and user groups per cluster, including cluster read-only, Exec command, and cluster administrator permissions. After a user group is authorized, all users in the group receive the corresponding permissions. Namespace blacklists and whitelists can also be configured.
- **Support for the Latest k8s Features**: Supports features such as APIGateway and OpenKruise.
- **Pod File Management**: In the file tree on the left side of the Console interface, the right-click menu supports browsing, editing, uploading, downloading, and deleting files in the Pod, simplifying daily operations.
- **Pod Operation Management**: Supports viewing Pod logs in real time, downloading logs, and executing shell commands directly in the Pod. Supports Ctrl+F search with `grep -A`/`-B` style highlighted results.
- **Open API**: Supports creating API keys for access by third-party systems, with a Swagger interface management page.
- **Cluster Inspection Support**: Supports scheduled multi-cluster inspection with custom rules, including Lua script rules. Results can be sent to DingTalk, WeChat, and Feishu groups or a custom webhook, with optional AI summaries.
- **k8s Event Forwarding**: Supports forwarding multi-cluster Kubernetes Events to webhooks, filtered by cluster, keyword, namespace, name, etc., to establish multiple dedicated monitoring and forwarding channels, with optional AI summaries.
- **CRD Management**: Automatically discovers and manages CRD resources, lists all CRDs in a tree, and improves work efficiency.
- **Helm Market**: Supports freely adding Helm repositories, one-click installation, uninstallation, and upgrade of Helm applications, and automatic updates.
- **Cross-Platform Support**: Compatible with Linux, macOS, and Windows, and supports multiple architectures such as x86 and ARM, ensuring seamless operation on multiple platforms.
- **Multi-Database Support**: Supports multiple databases such as SQLite, MySQL, and PostgreSQL.
- **Fully Open Source**: All source code is open, without restrictions; it can be freely customized, extended, and used commercially.
The design concept of **k8m** is "AI-driven, lightweight and efficient, simplifying complexity", helping developers and operations personnel get started quickly and manage Kubernetes clusters with ease.

## **Run**
1. **Download**: Download the latest version from [GitHub release](https://github.com/weibaohui/k8m/releases).
2. **Run**: Use the `./k8m` command to start, visit [http://127.0.0.1:3618](http://127.0.0.1:3618).
3. **Login username and password**:
- Username: `k8m`
- Password: `k8m`
- Please change the username and password and enable two-factor verification after deployment.
4. **Parameters**:
```shell
Usage of ./k8m:
      --enable-temp-admin            Whether to enable the temporary administrator account (disabled by default)
      --admin-password string        Administrator password; takes effect when the temporary administrator account is enabled
      --admin-username string        Administrator username; takes effect when the temporary administrator account is enabled
      --print-config                 Whether to print configuration information (default false)
      --connect-cluster              Whether to automatically connect to registered clusters at startup (disabled by default)
  -d, --debug                        Debug mode
      --in-cluster                   Whether to automatically register and manage the host cluster (enabled by default)
      --jwt-token-secret string      Secret used to sign JWT tokens after login (default "your-secret-key")
  -c, --kubeconfig string            kubeconfig file path (default "/root/.kube/config")
      --kubectl-shell-image string   Kubectl Shell image; must contain the kubectl command (default "bitnami/kubectl:latest")
      --log-v int                    klog log level klog.V(2) (default 2)
      --login-type string            Login method: password, oauth, token, etc. (default "password")
      --image-pull-timeout           Node Shell / Kubectl Shell image pull timeout in seconds (default 30)
      --node-shell-image string      NodeShell image; must contain the `nsenter` command (default "alpine:latest")
  -p, --port int                     Listening port (default 3618)
  -v, --v Level                      klog log level (default 2)
```
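As a sketch, a start command combining a few of these flags might look like the following (the port and kubeconfig path are illustrative, not defaults you must use):

```shell
# Listen on port 8080, use an explicit kubeconfig, and enable debug output
./k8m --port 8080 --kubeconfig ~/.kube/config --debug
```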
You can also start it directly through docker-compose (recommended):
```yaml
services:
  k8m:
    container_name: k8m
    image: registry.cn-hangzhou.aliyuncs.com/minik8m/k8m
    restart: always
    ports:
      - "3618:3618"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - ./data:/app/data
```
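Assuming the file above is saved as `docker-compose.yaml`, the stack can be started and inspected with the standard Compose commands:

```shell
# Start in the background from the directory containing docker-compose.yaml
docker compose up -d
# Tail the k8m service logs
docker compose logs -f k8m
```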
After starting, access port `3618`. Default username: `k8m`, default password: `k8m`.
If you want to quickly spin up an experience environment online, visit: [k8m](https://cnb.cool/znb/qifei/-/tree/main/letsfly/justforfun/k8m)
## Running in a containerized k8s cluster
Use [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/) or [MiniKube](https://minikube.sigs.k8s.io/docs/start/) to install a small Kubernetes cluster.
## KinD
* Install KinD (on macOS, via Homebrew):
```shell
brew install kind
```
* Create a new Kubernetes cluster:
```shell
kind create cluster --name k8sgpt-demo
```
## Deploy k8m to the cluster for experience
### Installation script
```shell
kubectl apply -f https://raw.githubusercontent.com/weibaohui/k8m/refs/heads/main/deploy/k8m.yaml
```
* Access:
A NodePort service is used by default; visit port 31999 (http://NodePortIP:31999), or configure an Ingress yourself.
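If NodePort access is inconvenient, port-forwarding is an alternative. A minimal sketch, assuming the manifest above creates a Service named `k8m` in the `k8m` namespace (adjust the names to match your actual deployment):

```shell
# Forward local port 3618 to the k8m Service (service name and namespace are assumptions)
kubectl -n k8m port-forward svc/k8m 3618:3618
# Then open http://127.0.0.1:3618
```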
## Production deployment with the leader-election plug-in: precautions
- For single-instance deployments, do **not** add the `k8m.io/role: leader` label to the Service definition; adding it will prevent normal access.
- For multi-instance deployments, the `k8m.io/role: leader` label **must** be added to the Service definition; otherwise failover will not occur.
- The yaml for multi-instance operation is as follows:
```shell
kubectl apply -f https://raw.githubusercontent.com/weibaohui/k8m/refs/heads/main/deploy/k8m-ms.yaml
```
## **ChatGPT Configuration Guide**
### Built-in GPT
Since version v0.0.8, a GPT model is built in and requires no configuration.
If you need to use your own GPT, refer to the following documents:
- [Self-hosted/Custom Large Model Support](docs/use-self-hosted-ai.md) - How to use a self-hosted large model.
- [Ollama Configuration](docs/ollama.md) - How to configure and use the Ollama large model.
### **ChatGPT Status Debugging**
If the settings still have no effect, try running `./k8m -v 6` for more debugging information.
Check the log output to confirm whether ChatGPT is enabled.
## Development Debugging
If you want to develop and debug locally, run a front-end build first to generate the `dist` directory; the project embeds the front-end into the binary, so compilation fails without `dist`.
#### Step 1: Compile the front-end
```bash
cd ui
pnpm install
pnpm run build
```
#### Compile and debug the backend
```bash
# Download dependencies
go mod tidy
# Run with hot reload (requires air)
air
# Or run directly
go run *.go
# The backend listens on localhost:3618
```
#### Front-end hot reload
```bash
cd ui
pnpm run dev
# The Vite dev server listens on localhost:3000
# Vite proxies backend requests to port 3618
```
Visit http://localhost:3000
### HELP & SUPPORT
If you have any further questions or need additional help, please feel free to contact me!
### Special Thanks
- [zhaomingcheng01](https://github.com/zhaomingcheng01): Proposed many high-quality suggestions and made outstanding contributions to k8m's ease of use.
- [La0jin](https://github.com/La0jin): Provides online resources and maintenance, greatly improving k8m's presentation.
- [eryajf](https://github.com/eryajf): Provided very useful GitHub Actions, adding automated release and build workflows to k8m.
## Contact Me
WeChat (Daluoma's Sun): search ID `daluomadetaiyang` and include the note "k8m".
<br><img width="214" alt="Image" src="https://github.com/user-attachments/assets/166db141-42c5-42c4-9964-8e25cf12d04c" />
## WeChat Group
