mirror of
https://github.com/zhayujie/chatgpt-on-wechat.git
synced 2026-05-10 05:11:25 +08:00
Compare commits
11 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 8bb16c48c0 | |
| | c6384363f9 | |
| | 8993e8ad3e | |
| | 289989d9f7 | |
| | dc2ae0e6f1 | |
| | 9c966c152d | |
| | 4efae41048 | |
| | b8437032e9 | |
| | 2d339ca81b | |
| | d53abc9696 | |
| | 446c886d38 | |
@@ -12,7 +12,8 @@
 <p align="center">
 <a href="https://cowagent.ai/">🌐 Website</a> ·
 <a href="https://docs.cowagent.ai/">📖 Docs</a> ·
-<a href="https://docs.cowagent.ai/guide/quick-start">🚀 Quick Start</a>
+<a href="https://docs.cowagent.ai/guide/quick-start">🚀 Quick Start</a> ·
+<a href="https://link-ai.tech/cowagent/create">☁️ Try Online</a>
 </p>
@@ -26,8 +27,7 @@
 - ✅ **Skills system:** an engine for creating and running Skills, with multiple built-in skills and support for developing custom Skills through natural-language conversation
 - ✅ **Multimodal messages:** parse, process, generate, and send text, image, voice, file, and other message types
 - ✅ **Multi-model support:** works with mainstream model providers at home and abroad, including OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, and Doubao
-- ✅ **Multi-platform deployment:** runs on a local machine or server and can be integrated into web pages, Feishu, DingTalk, WeChat Official Accounts, and WeCom apps
-- ✅ **Knowledge base:** integrates enterprise knowledge-base capabilities to turn the Agent into a dedicated digital employee, built on the [LinkAI](https://link-ai.tech) platform
+- ✅ **Multi-platform deployment:** runs on a local machine or server and can be integrated into Feishu, DingTalk, WeCom, QQ, WeChat Official Accounts, and web pages

 ## Disclaimer
@@ -37,9 +37,11 @@

 ## Demo

-Usage guide (Agent mode): [CowAgent introduction](https://docs.cowagent.ai/intro/features)
+- Usage guide (Agent mode): [CowAgent introduction](https://docs.cowagent.ai/intro/features)

-Demo video (chat mode): https://cdn.link-ai.tech/doc/cow_demo.mp4
+- Try online, no deployment needed: [CowAgent](https://link-ai.tech/cowagent/create)
+- Demo video (chat mode): https://cdn.link-ai.tech/doc/cow_demo.mp4

 ## Community
@@ -51,9 +53,9 @@ Demo video (chat mode): https://cdn.link-ai.tech/doc/cow_demo.mp4

 # Enterprise Services

-<a href="https://link-ai.tech" target="_blank"><img width="720" src="https://cdn.link-ai.tech/image/link-ai-intro.jpg"></a>
+<a href="https://link-ai.tech" target="_blank"><img width="650" src="https://cdn.link-ai.tech/image/link-ai-intro.jpg"></a>

-> [LinkAI](https://link-ai.tech/) is a one-stop AI agent platform for enterprises and developers. It brings together multimodal LLMs, knowledge bases, Agent plugins, workflows, and more, supports one-click integration with mainstream platforms and their management, and offers SaaS, private deployment, and other modes.
+> [LinkAI](https://link-ai.tech/) is a one-stop AI agent platform for enterprises and individuals. It brings together multimodal LLMs, knowledge bases, skills, workflows, and more, supports one-click integration with mainstream platforms and their management, offers SaaS, private deployment, and other modes, and can run a [CowAgent assistant](https://link-ai.tech/cowagent/create) online with no deployment.
 >
 > LinkAI has built up rich AI solutions in scenarios such as intelligent customer service, private-domain operations, and enterprise productivity assistants, and has distilled best practices for LLM applications across consumer, healthcare, education, and technology-manufacturing industries, helping more enterprises and developers embrace AI productivity.
@@ -65,6 +67,8 @@

 # 🏷 Changelog

+>**2026.03.18:** [v2.0.3](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.3): added WeCom intelligent bot and QQ channels, Coding Plan support, several new models, Web-side file handling, and a memory-system upgrade.
+
 >**2026.02.27:** [v2.0.2](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.2): full Web console upgrade (streaming chat; model/skill/memory/channel/scheduled-task/log management), multiple channels running simultaneously, persistent session storage, and several new models.

 >**2026.02.13:** [v2.0.1](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.1): built-in Web Search tool, smart context-trimming strategy, dynamic runtime-info updates, Windows compatibility fixes, and fixes for scheduled-task memory loss, Feishu connection issues, and more.
@@ -86,7 +90,7 @@

 Run the following command in a terminal:

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 Script usage guide: [One-click run script](https://docs.cowagent.ai/guide/quick-start)
@@ -98,9 +102,9 @@

 The project supports model APIs from mainstream providers at home and abroad; see [Models](#models) for the available models and their configuration.

-> Note: the following models are recommended in Agent mode; choose based on quality and cost: MiniMax-M2.5, glm-5, kimi-k2.5, qwen3.5-plus, claude-sonnet-4-6, gemini-3.1-pro-preview, gpt-5.4
+> Note: the following models are recommended in Agent mode; choose based on quality and cost: MiniMax-M2.5, glm-5, kimi-k2.5, qwen3.5-plus, claude-sonnet-4-6, gemini-3.1-pro-preview, gpt-5.4, gpt-5.4-mini

-The **LinkAI platform** API is also supported, letting you switch flexibly among OpenAI, Claude, Gemini, DeepSeek, Qwen, Kimi, and other common models, with knowledge-base, workflow, and plugin Agent capabilities; see the [API docs](https://docs.link-ai.tech/platform/api).
+The **LinkAI platform** API is also supported, covering all of the models above, with knowledge-base, workflow, and plugin Agent skills; see the [API docs](https://docs.link-ai.tech/platform/api).

 ### 2. Environment Setup
@@ -161,7 +165,7 @@ pip3 install -r requirements-optional.txt
   "speech_recognition": false,  # enable speech recognition
   "group_speech_recognition": false,  # enable speech recognition in group chats
   "voice_reply_voice": false,  # reply to voice messages with voice
-  "use_linkai": false,  # use the LinkAI API; off by default, set to true to connect to the LinkAI platform API
+  "use_linkai": false,  # use the LinkAI API; off by default, set to true to connect to LinkAI platform models
   "agent": true,  # enable Agent mode: multi-turn tool decisions, long-term memory, Skills, etc.
   "agent_workspace": "~/cow",  # Agent workspace path, used to store memory, skills, system settings, etc.
   "agent_max_context_tokens": 40000,  # max context tokens in Agent mode; the oldest context is dropped when exceeded
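The configuration keys above can be sketched as a minimal loader. This is a hypothetical helper for illustration only, not the project's actual `config.py` logic; the key names and defaults are taken from the snippet above.

```python
import json
import os

# Illustrative defaults mirroring the options documented above;
# the project's real loader lives in config.py and may differ.
DEFAULTS = {
    "speech_recognition": False,
    "use_linkai": False,
    "agent": True,
    "agent_workspace": "~/cow",
    "agent_max_context_tokens": 40000,
}


def load_config(path="config.json"):
    """Merge the user's config.json over the defaults and expand the workspace path."""
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            cfg.update(json.load(f))
    cfg["agent_workspace"] = os.path.expanduser(cfg["agent_workspace"])
    return cfg
```

Keys missing from `config.json` fall back to the defaults, so a minimal file only needs the options you want to change.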
@@ -191,9 +195,8 @@ pip3 install -r requirements-optional.txt
 <details>
 <summary>3. LinkAI configuration</summary>

-+ `use_linkai`: whether to use the LinkAI API; off by default. Set to true to connect to the LinkAI platform and use knowledge bases, workflows, plugins, and other capabilities; see the [API docs](https://docs.link-ai.tech/platform/api/chat)
++ `use_linkai`: whether to use the LinkAI API; off by default. Set to true to connect to the LinkAI platform and use models, knowledge bases, workflows, plugins, and other skills; see the [API docs](https://docs.link-ai.tech/platform/api/chat)
 + `linkai_api_key`: LinkAI API Key, created in the [console](https://link-ai.tech/console/interface)
 + `linkai_app_code`: code of a LinkAI app or workflow; optional, used in plain chat mode.
 </details>

 Note: all configuration options are documented in [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py).
@@ -223,8 +226,9 @@ nohup python3 app.py & tail -f nohup.out

 After this, the program runs in the background on the server. `ctrl+c` closes the log view without affecting the background process. Use `ps -ef | grep app.py | grep -v grep` to find the background process, and `kill` it first if you want to restart. To reopen the log after closing it, run `tail -f nohup.out`.

-There are also one-click start/stop scripts in the project's `scripts` directory. The default channel after startup is web, which can be switched in the config file.
+The `run.sh` script in the project root supports one-click startup and service management, including `./run.sh start`, `./run.sh stop`, `./run.sh restart`, and `./run.sh logs`; run `./run.sh help` for the full usage.

+> To access the Web console from a browser, make sure the server's port `9899` is open in the firewall or security group; for safety, it is recommended to open it only to specific IPs.
+
 ### 3. Docker Deployment
@@ -235,7 +239,7 @@ nohup python3 app.py & tail -f nohup.out

 **(1) Download the docker-compose.yml file**

 ```bash
-wget https://cdn.link-ai.tech/code/cow/docker-compose.yml
+curl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml
 ```

 After downloading, open `docker-compose.yml` and fill in the required settings, such as `CHANNEL_TYPE` and `OPEN_AI_API_KEY`.
@@ -254,17 +258,7 @@ sudo docker compose up -d  # for docker-compose 1.x, run: sudo docker-compose up -d
 sudo docker logs -f chatgpt-on-wechat
 ```

-**(3) Plugin usage**
-
-To modify plugin configuration inside the docker container, mount it in: rename the [plugin config file](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/config.json.template)
-to `config.json`, place it in the same directory as `docker-compose.yml`, and add a `volumes` mapping under the `chatgpt-on-wechat` section of `docker-compose.yml`:
-
-```
-volumes:
-  - ./config.json:/app/plugins/config.json
-```
-**Note**: for a detailed Docker deployment tutorial, see [deploying the CoW project with docker](https://www.wangpc.cc/ai/docker-deploy-cow/)
+> To access the Web console from a browser, make sure the server's port `9899` is open in the firewall or security group; for safety, it is recommended to open it only to specific IPs.

 ## Models
@@ -282,13 +276,13 @@ volumes:
   "model": "gpt-5.4",
   "open_ai_api_key": "YOUR_API_KEY",
   "open_ai_api_base": "https://api.openai.com/v1",
-  "bot_type": "chatGPT"
+  "bot_type": "openai"
 }
 ```

-- `model`: same as the OpenAI [model parameter](https://platform.openai.com/docs/models); supports gpt-5.4, the o series, gpt-4.1, and more. `gpt-5.4` is recommended in Agent mode
+- `model`: same as the OpenAI [model parameter](https://platform.openai.com/docs/models); supports gpt-5.4, gpt-5.4-mini, gpt-5.4-nano, the o series, gpt-4.1, and more. `gpt-5.4` and `gpt-5.4-mini` are recommended in Agent mode
 - `open_ai_api_base`: modify this parameter to connect through a third-party proxy endpoint
-- `bot_type`: not needed for OpenAI models. When connecting to non-OpenAI models such as Claude via a third-party proxy endpoint, set this to `chatGPT`
+- `bot_type`: not needed for OpenAI models. When connecting to non-OpenAI models such as Claude via a third-party proxy endpoint, set this to `openai`
 </details>

 <details>
@@ -300,16 +294,15 @@ volumes:

 ```json
 {
   "model": "gpt-5.4-mini",
   "use_linkai": true,
-  "linkai_api_key": "YOUR API KEY",
-  "linkai_app_code": "YOUR APP CODE"
+  "linkai_api_key": "YOUR API KEY"
 }
 ```

-+ `use_linkai`: whether to use the LinkAI API; off by default. Set to true to connect to LinkAI platform agents and use knowledge bases, workflows, databases, MCP plugins, and other rich Agent capabilities
++ `use_linkai`: whether to use the LinkAI API; off by default. Set to true to connect to LinkAI platform models and use knowledge bases, workflows, databases, plugins, and other rich Agent skills
 + `linkai_api_key`: LinkAI platform API Key, created in the [console](https://link-ai.tech/console/interface)
 + `linkai_app_code`: code of a LinkAI agent (app or workflow); optional, usable in plain chat mode. See the [docs](https://docs.link-ai.tech/platform/quick-start) for creating an agent
-+ `model`: leave the model field empty to use the agent's own model, which can be switched flexibly on the platform; all models in the [model list](https://link-ai.tech/console/models) are available
++ `model`: all models in the [model list](https://link-ai.tech/console/models) are available
 </details>

 <details>
@@ -329,7 +322,7 @@ volumes:
 Option 2: connect via the OpenAI-compatible API, configured as follows:
 ```json
 {
-  "bot_type": "chatGPT",
+  "bot_type": "openai",
   "model": "MiniMax-M2.5",
   "open_ai_api_base": "https://api.minimaxi.com/v1",
   "open_ai_api_key": ""
@@ -358,7 +351,7 @@ volumes:
 Option 2: connect via the OpenAI-compatible API, configured as follows:
 ```json
 {
-  "bot_type": "chatGPT",
+  "bot_type": "openai",
   "model": "glm-5",
   "open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
   "open_ai_api_key": ""
@@ -387,7 +380,7 @@ volumes:
 Option 2: connect via the OpenAI-compatible API, configured as follows:
 ```json
 {
-  "bot_type": "chatGPT",
+  "bot_type": "openai",
   "model": "qwen3.5-plus",
   "open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
   "open_ai_api_key": "sk-qVxxxxG"
@@ -416,7 +409,7 @@ volumes:
 Option 2: connect via the OpenAI-compatible API, configured as follows:
 ```json
 {
-  "bot_type": "chatGPT",
+  "bot_type": "openai",
   "model": "kimi-k2.5",
   "open_ai_api_base": "https://api.moonshot.cn/v1",
   "open_ai_api_key": ""
@@ -486,8 +479,8 @@ API Key creation: in the [console](https://aistudio.google.com/app/apikey?hl=zh-cn)
 {
   "model": "deepseek-chat",
   "open_ai_api_key": "sk-xxxxxxxxxxx",
-  "open_ai_api_base": "https://api.deepseek.com/v1",
-  "bot_type": "chatGPT"
+  "open_ai_api_base": "https://api.deepseek.com/v1",
+  "bot_type": "openai"
 }
 ```
@@ -542,7 +535,7 @@ API Key creation: in the [console](https://aistudio.google.com/app/apikey?hl=zh-cn)
 Option 2: connect via the OpenAI-compatible API, configured as follows:
 ```json
 {
-  "bot_type": "chatGPT",
+  "bot_type": "openai",
   "model": "ERNIE-4.0-Turbo-8K",
   "open_ai_api_base": "https://qianfan.baidubce.com/v2",
   "open_ai_api_key": "bce-v3/ALTxxxxxxd2b"
@@ -578,7 +571,7 @@ API Key creation: in the [console](https://aistudio.google.com/app/apikey?hl=zh-cn)
 Option 2: connect via the OpenAI-compatible API, configured as follows:
 ```json
 {
-  "bot_type": "chatGPT",
+  "bot_type": "openai",
   "model": "4.0Ultra",
   "open_ai_api_base": "https://spark-api-open.xf-yun.com/v1",
   "open_ai_api_key": ""
@@ -610,6 +603,23 @@ API Key creation: in the [console](https://aistudio.google.com/app/apikey?hl=zh-cn)
 - `text_to_image`: image-generation model; see the [model list](https://www.modelscope.cn/models?filter=inference_type&page=1)
 </details>

+<details>
+<summary>Coding Plan</summary>
+
+Coding Plan is a monthly coding subscription offered by various providers; all of them can be accessed via the OpenAI-compatible mode:
+
+```json
+{
+  "bot_type": "openai",
+  "model": "MODEL_NAME",
+  "open_ai_api_base": "PROVIDER_CODING_PLAN_API_BASE",
+  "open_ai_api_key": "YOUR_API_KEY"
+}
+```
+
+Currently supported providers include Alibaba Cloud, MiniMax, Zhipu GLM, Kimi, and Volcengine; for per-provider details, see the [Coding Plan docs](https://docs.cowagent.ai/models/coding-plan).
+</details>

 ## Channels
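The Coding Plan template above only varies in the model name and the provider's API base. A hypothetical helper can make that concrete; the base URLs below are taken from the provider tables in the Coding Plan documentation added elsewhere in this change set, and the function itself is illustrative, not part of the project.

```python
# Coding Plan API bases per provider, copied from the Coding Plan docs.
# The helper below is an illustrative sketch, not project code.
CODING_PLAN_BASES = {
    "aliyun": "https://coding.dashscope.aliyuncs.com/v1",
    "minimax": "https://api.minimaxi.com/v1",
    "zhipu": "https://open.bigmodel.cn/api/coding/paas/v4",
    "kimi": "https://api.kimi.com/coding/v1",
}


def coding_plan_config(provider: str, model: str, api_key: str) -> dict:
    """Return a config.json fragment for OpenAI-compatible Coding Plan access."""
    return {
        "bot_type": "openai",
        "model": model,
        "open_ai_api_base": CODING_PLAN_BASES[provider],
        "open_ai_api_key": api_key,
    }
```

Note that Coding Plan keys are usually separate from pay-as-you-go keys, so the `api_key` here must come from the provider's Coding Plan page.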
@@ -376,7 +376,7 @@ def _build_workspace_section(workspace_dir: str, language: str) -> List[str]:
         "",
         "以下文件在会话启动时**已经自动加载**到系统提示词的「项目上下文」section 中,你**无需再用 read 工具读取它们**:",
         "",
-        "- ✅ `AGENT.md`: 已加载 - 你的人格和灵魂设定。当用户修改你的名字、性格或交流风格时,用 `edit` 更新此文件",
+        "- ✅ `AGENT.md`: 已加载 - 你的人格和灵魂设定。当你的名字、性格或交流风格发生变化时,主动用 `edit` 更新此文件",
         "- ✅ `USER.md`: 已加载 - 用户的身份信息。当用户修改称呼、姓名等身份信息时,用 `edit` 更新此文件",
         "- ✅ `RULE.md`: 已加载 - 工作空间使用指南和规则",
         "",
@@ -423,7 +423,8 @@ def _build_context_files_section(context_files: List[ContextFile], language: str
     ]

     if has_agent:
-        lines.append("如果存在 `AGENT.md`,请体现其中定义的人格和语气。避免僵硬、模板化的回复;遵循其指导,除非有更高优先级的指令覆盖它。")
+        lines.append("**`AGENT.md` 是你的灵魂文件**:严格体现其中定义的人格、语气和设定,避免僵硬、模板化的回复。")
+        lines.append("当用户通过对话透露了对你性格、风格、职责、能力边界的新期望,你应该主动用 `edit` 更新 AGENT.md 以反映这些演变。")
         lines.append("")

     # add each file's contents
@@ -609,14 +609,14 @@ class AgentStreamExecutor:
                     "arguments": ""
                 }

-            if "id" in tc_delta:
+            if tc_delta.get("id"):
                 tool_calls_buffer[index]["id"] = tc_delta["id"]

             if "function" in tc_delta:
                 func = tc_delta["function"]
-                if "name" in func:
+                if func.get("name"):
                     tool_calls_buffer[index]["name"] = func["name"]
-                if "arguments" in func:
+                if func.get("arguments"):
                     tool_calls_buffer[index]["arguments"] += func["arguments"]

         # Preserve _gemini_raw_parts for Gemini thoughtSignature round-trip
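The change above switches from `in` checks to `.get()` so that deltas carrying `None` or empty strings no longer overwrite values collected earlier. A standalone sketch of that merge logic (simplified from the diff; not the project's exact method) makes the behavior testable:

```python
# Sketch of the streamed tool-call merge shown in the diff above: each delta
# may carry a partial id/name plus an arguments fragment. Truthiness checks
# (.get()) skip None/"" so they cannot clobber earlier values, while
# arguments fragments concatenate across chunks.
def merge_tool_call_deltas(deltas):
    buffer = {}
    for tc_delta in deltas:
        index = tc_delta.get("index", 0)
        if index not in buffer:
            buffer[index] = {"id": "", "name": "", "arguments": ""}
        if tc_delta.get("id"):
            buffer[index]["id"] = tc_delta["id"]
        func = tc_delta.get("function") or {}
        if func.get("name"):
            buffer[index]["name"] = func["name"]
        if func.get("arguments"):
            buffer[index]["arguments"] += func["arguments"]
    return buffer
```

With the old `in` checks, a later chunk containing `"id": ""` would have blanked the call id; the truthiness version keeps it.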
@@ -720,9 +720,9 @@ class AgentStreamExecutor:
                 )
             else:
                 if retry_count >= max_retries:
-                    logger.error(f"❌ LLM API error after {max_retries} retries: {e}")
+                    logger.error(f"❌ LLM API error after {max_retries} retries: {e}", exc_info=True)
                 else:
-                    logger.error(f"❌ LLM call error (non-retryable): {e}")
+                    logger.error(f"❌ LLM call error (non-retryable): {e}", exc_info=True)
                 raise

         # Parse tool calls
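The hunk above only adds `exc_info=True` so the full traceback is logged before the exception is re-raised. A minimal sketch of the surrounding retry/logging pattern, under the assumption that retryable errors are distinguished by exception type (the executor's real retry criteria are not shown in this diff):

```python
import logging

logger = logging.getLogger("agent")


# Hedged sketch of the retry/logging pattern from the diff: retryable errors
# are retried up to max_retries; the final failure (and any non-retryable
# error) is logged with exc_info=True, then re-raised.
def call_with_retries(fn, max_retries=3, retryable=(TimeoutError,)):
    retry_count = 0
    while True:
        try:
            return fn()
        except retryable as e:
            retry_count += 1
            if retry_count >= max_retries:
                logger.error(f"LLM API error after {max_retries} retries: {e}", exc_info=True)
                raise
        except Exception as e:
            logger.error(f"LLM call error (non-retryable): {e}", exc_info=True)
            raise
```

Passing `exc_info=True` makes `logging` attach the active exception's traceback to the record, which is why the change is useful for diagnosing API failures.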
@@ -35,7 +35,7 @@ class Vision(BaseTool):

     name: str = "vision"
     description: str = (
-        "Analyze an image (local file or URL) using Vision API. "
+        "Analyze a local image or image URL (jpg/jpeg/png) using Vision API. "
         "Can describe content, extract text, identify objects, colors, etc. "
         "Requires OPENAI_API_KEY or LINKAI_API_KEY."
     )
@@ -106,7 +106,7 @@ class AgentLLMModel(LLMModel):
             return configured_bot_type

         if not model_name or not isinstance(model_name, str):
-            return const.CHATGPT
+            return const.OPENAI
         if model_name in self._MODEL_BOT_TYPE_MAP:
             return self._MODEL_BOT_TYPE_MAP[model_name]
         if model_name.lower().startswith("minimax") or model_name in ["abab6.5-chat"]:
@@ -116,11 +116,11 @@ class AgentLLMModel(LLMModel):
         if model_name in [const.MOONSHOT, "moonshot-v1-8k", "moonshot-v1-32k", "moonshot-v1-128k"]:
             return const.MOONSHOT
         if model_name in [const.DEEPSEEK_CHAT, const.DEEPSEEK_REASONER]:
-            return const.CHATGPT
+            return const.OPENAI
         for prefix, btype in self._MODEL_PREFIX_MAP:
             if model_name.startswith(prefix):
                 return btype
-        return const.CHATGPT
+        return const.OPENAI

     @property
     def bot(self):
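The `AgentLLMModel` hunks above implement a fallback chain for inferring `bot_type`: an explicitly configured value wins, then exact model-name matches, then prefix matches, and finally `"openai"` as the default. A self-contained sketch of that chain, with deliberately abbreviated example maps (the project's full tables are larger and may assign different type strings):

```python
# Abbreviated example maps — illustrative only, not the project's full tables.
MODEL_BOT_TYPE_MAP = {"deepseek-chat": "openai", "deepseek-reasoner": "openai"}
MODEL_PREFIX_MAP = [("gemini", "gemini"), ("claude", "claude")]


def infer_bot_type(model_name, configured=None):
    """Mirror the fallback chain above: config > exact match > prefix > openai."""
    if configured:
        return configured
    if not model_name or not isinstance(model_name, str):
        return "openai"
    if model_name in MODEL_BOT_TYPE_MAP:
        return MODEL_BOT_TYPE_MAP[model_name]
    for prefix, btype in MODEL_PREFIX_MAP:
        if model_name.startswith(prefix):
            return btype
    return "openai"
```

This ordering is what lets users omit `bot_type` entirely for most models while still being able to force a value for third-party OpenAI-compatible endpoints.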
@@ -13,7 +13,7 @@ from voice.factory import create_voice
 class Bridge(object):
     def __init__(self):
         self.btype = {
-            "chat": const.CHATGPT,
+            "chat": const.OPENAI,
             "voice_to_text": conf().get("voice_to_text", "openai"),
             "text_to_voice": conf().get("text_to_voice", "google"),
             "translate": conf().get("translate", "baidu"),
@@ -5,7 +5,7 @@
 // =====================================================================
 // Version — update this before each release
 // =====================================================================
-const APP_VERSION = 'v2.0.2';
+const APP_VERSION = 'v2.0.3';

 // =====================================================================
 // i18n
@@ -992,7 +992,6 @@ let configProviders = {};
 let configApiBases = {};
 let configApiKeys = {};
 let configCurrentModel = '';
-let configHasBotType = false;
 let cfgProviderValue = '';
 let cfgModelValue = '';
@@ -1052,7 +1051,6 @@ function initConfigView(data) {
   configApiBases = data.api_bases || {};
   configApiKeys = data.api_keys || {};
   configCurrentModel = data.model || '';
-  configHasBotType = !!data.has_bot_type;

   const providerEl = document.getElementById('cfg-provider');
   const providerOpts = Object.entries(configProviders).map(([pid, p]) => ({ value: pid, label: p.label }));
@@ -1214,8 +1212,10 @@ function saveModelConfig() {
   const updates = { model: model };
   const p = configProviders[cfgProviderValue];
   updates.use_linkai = (cfgProviderValue === 'linkai');
-  if (configHasBotType) {
-    updates.bot_type = (cfgProviderValue === 'linkai') ? '' : cfgProviderValue;
+  if (cfgProviderValue === 'linkai') {
+    updates.bot_type = '';
+  } else {
+    updates.bot_type = cfgProviderValue;
   }
   if (p && p.api_base_key) {
     const base = document.getElementById('cfg-api-base').value.trim();
@@ -497,7 +497,7 @@ class ConfigHandler:
             const.DOUBAO_SEED_2_PRO, const.DOUBAO_SEED_2_CODE,
             const.CLAUDE_4_6_SONNET, const.CLAUDE_4_6_OPUS, const.CLAUDE_4_5_SONNET,
             const.GEMINI_31_FLASH_LITE_PRE, const.GEMINI_31_PRO_PRE, const.GEMINI_3_FLASH_PRE,
-            const.GPT_54, const.GPT_5, const.GPT_41, const.GPT_4o,
+            const.GPT_54, const.GPT_54_MINI, const.GPT_54_NANO, const.GPT_5, const.GPT_41, const.GPT_4o,
             const.DEEPSEEK_CHAT, const.DEEPSEEK_REASONER,
         ]
@@ -551,12 +551,12 @@ class ConfigHandler:
             "api_base_default": "https://generativelanguage.googleapis.com",
             "models": [const.GEMINI_31_FLASH_LITE_PRE, const.GEMINI_31_PRO_PRE, const.GEMINI_3_FLASH_PRE],
         }),
-        ("chatGPT", {
+        ("openai", {
             "label": "OpenAI",
             "api_key_field": "open_ai_api_key",
             "api_base_key": "open_ai_api_base",
             "api_base_default": "https://api.openai.com/v1",
-            "models": [const.GPT_54, const.GPT_5, const.GPT_41, const.GPT_4o],
+            "models": [const.GPT_54, const.GPT_54_MINI, const.GPT_54_NANO, const.GPT_5, const.GPT_41, const.GPT_4o],
         }),
         ("deepseek", {
             "label": "DeepSeek",
@@ -625,8 +625,7 @@ class ConfigHandler:
             "use_agent": use_agent,
             "title": title,
             "model": local_config.get("model", ""),
-            "bot_type": local_config.get("bot_type", ""),
-            "has_bot_type": "bot_type" in local_config,
+            "bot_type": "openai" if local_config.get("bot_type") == "chatGPT" else local_config.get("bot_type", ""),
             "use_linkai": bool(local_config.get("use_linkai", False)),
             "channel_type": local_config.get("channel_type", ""),
             "agent_max_context_tokens": local_config.get("agent_max_context_tokens", 50000),
@@ -671,9 +670,6 @@ class ConfigHandler:
                 file_cfg = json.load(f)
         else:
             file_cfg = {}
-        if "bot_type" in applied and "bot_type" not in file_cfg:
-            del applied["bot_type"]
-            local_config.pop("bot_type", None)
         file_cfg.update(applied)
         with open(config_path, "w", encoding="utf-8") as f:
             json.dump(file_cfg, f, indent=4, ensure_ascii=False)
@@ -602,6 +602,9 @@ def build_website_prompt(workspace_dir: str) -> list:
     ]

 def start(channel, channel_mgr=None):
+    if not get_deployment_id():
+        return
+
     global chat_client
     chat_client = CloudClient(api_key=conf().get("linkai_api_key"), host=conf().get("cloud_host", ""), channel=channel)
     chat_client.channel_mgr = channel_mgr
@@ -1,6 +1,7 @@
 # provider types
 OPEN_AI = "openAI"
-CHATGPT = "chatGPT"
+OPENAI = "openai"
+CHATGPT = "chatGPT"  # legacy alias for OPENAI, kept for backward compatibility
 BAIDU = "baidu"
 XUNFEI = "xunfei"
 CHATGPTONAZURE = "chatGPTOnAzure"
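This change set renames the provider constant from `"chatGPT"` to `"openai"` while keeping the old value as a legacy alias, and the `ConfigHandler` hunk elsewhere in the diff maps old config values forward when reading them. A minimal sketch of that normalization:

```python
OPENAI = "openai"
CHATGPT = "chatGPT"  # legacy alias, still accepted in old config files


# Sketch of the normalization applied when reading bot_type from config:
# legacy "chatGPT" values are mapped to the new "openai" identifier,
# everything else passes through unchanged.
def normalize_bot_type(bot_type: str) -> str:
    return OPENAI if bot_type == CHATGPT else bot_type
```

Keeping the alias means existing `config.json` files with `"bot_type": "chatGPT"` keep working after the upgrade.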
@@ -68,6 +69,8 @@ GPT_5 = "gpt-5"
 GPT_5_MINI = "gpt-5-mini"
 GPT_5_NANO = "gpt-5-nano"
 GPT_54 = "gpt-5.4"  # GPT-5.4 - Agent recommended model
+GPT_54_MINI = "gpt-5.4-mini"
+GPT_54_NANO = "gpt-5.4-nano"
 O1 = "o1-preview"
 O1_MINI = "o1-mini"
 WHISPER_1 = "whisper-1"
@@ -153,7 +156,7 @@ MODEL_LIST = [
     GPT_4o, GPT_4O_0806, GPT_4o_MINI,
     GPT_41, GPT_41_MINI, GPT_41_NANO,
     GPT_5, GPT_5_MINI, GPT_5_NANO,
-    GPT_54,
+    GPT_54, GPT_54_MINI, GPT_54_NANO,
     O1, O1_MINI,

     # DeepSeek
@@ -20,7 +20,7 @@ available_setting = {
     "proxy": "",  # proxy used for openai
     # chatgpt model; when use_azure_chatgpt is true, this is the Azure model deployment name
     "model": "gpt-3.5-turbo",  # options: gpt-4o, gpt-4o-mini, gpt-4-turbo, claude-3-sonnet, wenxin, moonshot, qwen-turbo, xunfei, glm-4, minimax, gemini, etc.; the full model list is in common/const.py
-    "bot_type": "",  # optional; fill in "chatGPT" when using a third-party OpenAI-compatible service. Bot names are listed under bot_type in common/const.py; if empty, inferred from the model name
+    "bot_type": "",  # optional; fill in "openai" when using a third-party OpenAI-compatible service (the legacy value "chatGPT" is still accepted). Bot names are listed in common/const.py; if empty, inferred from the model name
     "use_azure_chatgpt": False,  # whether to use Azure's chatgpt
     "azure_deployment_id": "",  # azure model deployment name
     "azure_api_version": "",  # azure api version
@@ -25,11 +25,11 @@ WORKDIR ${BUILD_PREFIX}
 ADD docker/entrypoint.sh /entrypoint.sh

 RUN chmod +x /entrypoint.sh \
-    && mkdir -p /home/noroot \
-    && groupadd -r noroot \
-    && useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \
-    && chown -R noroot:noroot /home/noroot ${BUILD_PREFIX} /usr/local/lib
+    && mkdir -p /home/agent/cow \
+    && groupadd -r agent \
+    && useradd -r -g agent -s /bin/bash -d /home/agent agent \
+    && chown -R agent:agent /home/agent ${BUILD_PREFIX} /usr/local/lib

-USER noroot
+USER agent

 ENTRYPOINT ["/entrypoint.sh"]
@@ -5,22 +5,39 @@ services:
     container_name: chatgpt-on-wechat
     security_opt:
       - seccomp:unconfined
     ports:
      - "9899:9899"
     environment:
       CHANNEL_TYPE: 'web'
-      OPEN_AI_API_KEY: 'YOUR API KEY'
-      MODEL: ''
-      PROXY: ''
-      SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
-      SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
-      GROUP_CHAT_PREFIX: '["@bot"]'
-      GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
-      IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
-      CONVERSATION_MAX_TOKENS: 1000
-      SPEECH_RECOGNITION: 'False'
-      CHARACTER_DESC: '你是基于大语言模型的AI智能助手,旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
-      EXPIRES_IN_SECONDS: 3600
-      USE_GLOBAL_PLUGIN_CONFIG: 'True'
+      MODEL: 'MiniMax-M2.5'
+      MINIMAX_API_KEY: ''
+      ZHIPU_AI_API_KEY: ''
+      ARK_API_KEY: ''
+      MOONSHOT_API_KEY: ''
+      DASHSCOPE_API_KEY: ''
+      CLAUDE_API_KEY: ''
+      CLAUDE_API_BASE: 'https://api.anthropic.com/v1'
+      OPEN_AI_API_KEY: ''
+      OPEN_AI_API_BASE: 'https://api.openai.com/v1'
+      GEMINI_API_KEY: ''
+      GEMINI_API_BASE: 'https://generativelanguage.googleapis.com'
+      VOICE_TO_TEXT: 'openai'
+      TEXT_TO_VOICE: 'openai'
+      VOICE_REPLY_VOICE: 'False'
+      SPEECH_RECOGNITION: 'True'
+      GROUP_SPEECH_RECOGNITION: 'False'
+      USE_LINKAI: 'False'
+      LINKAI_API_KEY: ''
+      LINKAI_APP_CODE: ''
+      FEISHU_APP_ID: ''
+      FEISHU_APP_SECRET: ''
+      DINGTALK_CLIENT_ID: ''
+      DINGTALK_CLIENT_SECRET: ''
+      WECOM_BOT_ID: ''
+      WECOM_BOT_SECRET: ''
+      AGENT: 'True'
+      AGENT_MAX_CONTEXT_TOKENS: 40000
+      AGENT_MAX_CONTEXT_TURNS: 20
+      AGENT_MAX_STEPS: 15
+    volumes:
+      - ./cow:/home/agent/cow
@@ -127,7 +127,7 @@ The Agent can make decisions based on an agent's name and description and invoke it via app_code
 Run in the command line:

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 For details and subsequent service management, see: [Project launch script](https://github.com/zhayujie/chatgpt-on-wechat/wiki/CowAgentQuickStart)
@@ -59,7 +59,8 @@
       "group": "安装部署",
       "pages": [
         "guide/quick-start",
-        "guide/manual-install"
+        "guide/manual-install",
+        "guide/upgrade"
       ]
     }
   ]
@@ -80,7 +81,8 @@
         "models/gemini",
         "models/openai",
         "models/deepseek",
-        "models/linkai"
+        "models/linkai",
+        "models/coding-plan"
       ]
     }
   ]
@@ -171,6 +173,7 @@
       "group": "发布记录",
       "pages": [
         "releases/overview",
+        "releases/v2.0.3",
         "releases/v2.0.2",
         "releases/v2.0.1",
         "releases/v2.0.0"
@@ -224,7 +227,8 @@
         "en/models/gemini",
         "en/models/openai",
         "en/models/deepseek",
-        "en/models/linkai"
+        "en/models/linkai",
+        "en/models/coding-plan"
       ]
     }
   ]
@@ -12,7 +12,8 @@
 <p align="center">
 <a href="https://cowagent.ai/">🌐 Website</a> ·
 <a href="https://docs.cowagent.ai/en/intro/index">📖 Docs</a> ·
-<a href="https://docs.cowagent.ai/en/guide/quick-start">🚀 Quick Start</a>
+<a href="https://docs.cowagent.ai/en/guide/quick-start">🚀 Quick Start</a> ·
+<a href="https://link-ai.tech/cowagent/create">☁️ Try Online</a>
 </p>

 ## Introduction
@@ -33,6 +34,10 @@
 2. Agent mode consumes more tokens than normal chat mode. Choose models based on effectiveness and cost. Agent has access to the host OS — please deploy in trusted environments.
 3. CowAgent focuses on open-source development and does not participate in, authorize, or issue any cryptocurrency.

+## Demo
+
+Try online (no deployment needed): [CowAgent](https://link-ai.tech/cowagent/create)
+
 ## Changelog

 > **2026.02.27:** [v2.0.2](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.2) — Web console overhaul (streaming chat, model/skill/memory/channel/scheduler/log management), multi-channel concurrent running, session persistence, new models including Gemini 3.1 Pro / Claude 4.6 Sonnet / Qwen3.5 Plus.
@@ -56,7 +61,7 @@ Full changelog: [Release Notes](https://docs.cowagent.ai/en/releases/overview)
 The project provides a one-click script for installation, configuration, startup, and management:

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 After running, the Web service starts by default. Access `http://localhost:9899/chat` to chat.
@@ -102,7 +107,7 @@ nohup python3 app.py & tail -f nohup.out
 ### Docker Deployment

 ```bash
-wget https://cdn.link-ai.tech/code/cow/docker-compose.yml
+curl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml
 # Edit docker-compose.yml with your config
 sudo docker compose up -d
 sudo docker logs -f chatgpt-on-wechat
@@ -128,6 +133,28 @@ Supports mainstream model providers. Recommended models for Agent mode:

 For detailed configuration of each model, see the [Models documentation](https://docs.cowagent.ai/en/models/index).

+### Coding Plan
+
+Coding Plan is a monthly subscription package offered by various providers, ideal for high-frequency Agent usage. All providers can be accessed via OpenAI-compatible mode:
+
+```json
+{
+  "bot_type": "openai",
+  "model": "MODEL_NAME",
+  "open_ai_api_base": "PROVIDER_CODING_PLAN_API_BASE",
+  "open_ai_api_key": "YOUR_API_KEY"
+}
+```
+
+- `bot_type`: Must be `openai`
+- `model`: Model name supported by the provider
+- `open_ai_api_base`: Provider's Coding Plan API Base (different from standard pay-as-you-go)
+- `open_ai_api_key`: Provider's Coding Plan API Key
+
+> Note: Coding Plan API Base and API Key are usually separate from standard pay-as-you-go ones. Please obtain them from each provider's platform.
+
+Supported providers include Alibaba Cloud, MiniMax, Zhipu GLM, Kimi, Volcengine, and more. For detailed configuration of each provider, see the [Coding Plan documentation](https://docs.cowagent.ai/en/models/coding-plan).
+
 <br/>

 ## Channels
@@ -67,7 +67,7 @@ Docker deployment does not require cloning source code or installing dependencies
 **1. Download config**

 ```bash
-wget https://cdn.link-ai.tech/code/cow/docker-compose.yml
+curl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml
 ```

 Edit `docker-compose.yml` with your configuration.
@@ -10,7 +10,7 @@ Supports Linux, macOS, and Windows. Requires Python 3.7-3.12 (3.9 recommended).
 ## Install Command

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 The script automatically performs these steps:
@@ -41,7 +41,7 @@ CowAgent can proactively think and plan tasks, operate computers and external resources
 Run the following command in your terminal for one-click install, configuration, and startup:

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 By default, the Web service starts after running. Access `http://localhost:9899/chat` to chat in the web interface.
@@ -0,0 +1,139 @@
|
||||
---
|
||||
title: Coding Plan
|
||||
description: Coding Plan model configuration
|
||||
---
|
||||
|
||||
> Coding Plan is a monthly subscription package offered by various providers, ideal for high-frequency Agent usage. CowAgent supports all Coding Plan providers via OpenAI-compatible mode.
|
||||
|
||||
<Note>
|
||||
Coding Plan API Base and API Key are usually separate from the standard pay-as-you-go ones. Please obtain them from each provider's platform.
|
||||
</Note>
|
||||
|
||||
## General Configuration
|
||||
|
||||
All providers can be accessed via the OpenAI-compatible protocol, and can be quickly configured through the web console. Set the model provider to **OpenAI**, select a custom model and enter the model code, then fill in the corresponding provider's API Base and API Key:
|
||||
|
||||
<img src="https://cdn.link-ai.tech/doc/20260318113134.png" width="800"/>
|
||||
|
||||
You can also configure directly in `config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"bot_type": "openai",
|
||||
"model": "MODEL_NAME",
|
||||
"open_ai_api_base": "PROVIDER_CODING_PLAN_API_BASE",
|
||||
"open_ai_api_key": "YOUR_API_KEY"
|
||||
}
|
||||
```
|
||||
|
||||
| Parameter | Description |
|
||||
| --- | --- |
|
||||
| `bot_type` | Must be `openai` (OpenAI-compatible mode) |
|
||||
| `model` | Model name supported by the provider |
|
||||
| `open_ai_api_base` | Provider's Coding Plan API Base URL |
|
||||
| `open_ai_api_key` | Provider's Coding Plan API Key |
|
||||
|
||||
---

## Alibaba Cloud

```json
{
    "bot_type": "openai",
    "model": "qwen3.5-plus",
    "open_ai_api_base": "https://coding.dashscope.aliyuncs.com/v1",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `qwen3.5-plus`, `qwen3-max-2026-01-23`, `qwen3-coder-next`, `qwen3-coder-plus`, `glm-5`, `glm-4.7`, `kimi-k2.5`, `MiniMax-M2.5` |
| `open_ai_api_base` | `https://coding.dashscope.aliyuncs.com/v1` |
| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |

Reference: [Quick Start](https://help.aliyun.com/zh/model-studio/coding-plan-quickstart?spm=a2c4g.11186623.help-menu-2400256.d_0_2_1.70115203zi5Igc), [Model List](https://help.aliyun.com/zh/model-studio/coding-plan)

---

## MiniMax

```json
{
    "bot_type": "openai",
    "model": "MiniMax-M2.5",
    "open_ai_api_base": "https://api.minimaxi.com/v1",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2` |
| `open_ai_api_base` | China: `https://api.minimaxi.com/v1`; Global: `https://api.minimax.io/v1` |
| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |

Reference: [China Key](https://platform.minimaxi.com/docs/coding-plan/quickstart), [Model List](https://platform.minimaxi.com/docs/guides/pricing-coding-plan), [Global Key](https://platform.minimax.io/docs/coding-plan/quickstart)

---

## Zhipu GLM

```json
{
    "bot_type": "openai",
    "model": "glm-4.7",
    "open_ai_api_base": "https://open.bigmodel.cn/api/coding/paas/v4",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `glm-5`, `glm-4.7`, `glm-4.6`, `glm-4.5`, `glm-4.5-air` |
| `open_ai_api_base` | China: `https://open.bigmodel.cn/api/coding/paas/v4`; Global: `https://api.z.ai/api/coding/paas/v4` |
| `open_ai_api_key` | Shared with standard API |

Reference: [China Quick Start](https://docs.bigmodel.cn/cn/coding-plan/quick-start), [Global Quick Start](https://docs.z.ai/devpack/quick-start)

---

## Kimi

```json
{
    "bot_type": "openai",
    "model": "kimi-for-coding",
    "open_ai_api_base": "https://api.kimi.com/coding/v1",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `kimi-for-coding` |
| `open_ai_api_base` | `https://api.kimi.com/coding/v1` |
| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |

Reference: [Key & Docs](https://www.kimi.com/code/docs/)

---

## Volcengine

```json
{
    "bot_type": "openai",
    "model": "Doubao-Seed-2.0-Code",
    "open_ai_api_base": "https://ark.cn-beijing.volces.com/api/coding/v3",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `Doubao-Seed-2.0-Code`, `Doubao-Seed-2.0-pro`, `Doubao-Seed-2.0-lite`, `Doubao-Seed-Code`, `MiniMax-M2.5`, `Kimi-K2.5`, `GLM-4.7`, `DeepSeek-V3.2` |
| `open_ai_api_base` | `https://ark.cn-beijing.volces.com/api/coding/v3` |
| `open_ai_api_key` | Shared with standard API |

Reference: [Quick Start](https://www.volcengine.com/docs/82379/1928261?lang=zh)
@@ -8,7 +8,7 @@ Use OpenAI-compatible configuration:
 ```json
 {
     "model": "deepseek-chat",
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "open_ai_api_key": "YOUR_API_KEY",
     "open_ai_api_base": "https://api.deepseek.com/v1"
 }
@@ -17,6 +17,6 @@ Use OpenAI-compatible configuration:
 | Parameter | Description |
 | --- | --- |
 | `model` | `deepseek-chat` (DeepSeek-V3), `deepseek-reasoner` (DeepSeek-R1) |
-| `bot_type` | Must be `chatGPT` (OpenAI-compatible mode) |
+| `bot_type` | Must be `openai` (OpenAI-compatible mode) |
 | `open_ai_api_key` | Create at [DeepSeek Platform](https://platform.deepseek.com/api_keys) |
 | `open_ai_api_base` | DeepSeek platform BASE URL |

@@ -19,7 +19,7 @@ OpenAI-compatible configuration is also supported:

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "glm-5",
     "open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
     "open_ai_api_key": "YOUR_API_KEY"

@@ -11,7 +11,7 @@ CowAgent supports mainstream LLMs from domestic and international providers. Mod

 ## Configuration

-Configure the model name and API key in `config.json` according to your chosen model. Each model also supports OpenAI-compatible access by setting `bot_type` to `chatGPT` and configuring `open_ai_api_base` and `open_ai_api_key`.
+Configure the model name and API key in `config.json` according to your chosen model. Each model also supports OpenAI-compatible access by setting `bot_type` to `openai` and configuring `open_ai_api_base` and `open_ai_api_key`.

 You can also use the [LinkAI](https://link-ai.tech) platform interface to flexibly switch between multiple models with support for knowledge base, workflows, and other Agent capabilities.

@@ -19,7 +19,7 @@ OpenAI-compatible configuration is also supported:

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "kimi-k2.5",
     "open_ai_api_base": "https://api.moonshot.cn/v1",
     "open_ai_api_key": "YOUR_API_KEY"

@@ -19,7 +19,7 @@ OpenAI-compatible configuration is also supported:

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "MiniMax-M2.5",
     "open_ai_api_base": "https://api.minimaxi.com/v1",
     "open_ai_api_key": "YOUR_API_KEY"

@@ -16,4 +16,4 @@ description: OpenAI model configuration
 | `model` | Matches the [model parameter](https://platform.openai.com/docs/models) of the OpenAI API. Supports o-series, gpt-5.4, gpt-5 series, gpt-4.1, etc. Recommended for Agent mode: `gpt-5.4` |
 | `open_ai_api_key` | Create at [OpenAI Platform](https://platform.openai.com/api-keys) |
 | `open_ai_api_base` | Optional. Change to use a third-party proxy |
-| `bot_type` | Not required for official OpenAI models. Set to `chatGPT` when using Claude or other non-OpenAI models via proxy |
+| `bot_type` | Not required for official OpenAI models. Set to `openai` when using Claude or other non-OpenAI models via proxy |

@@ -19,7 +19,7 @@ OpenAI-compatible configuration is also supported:

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "qwen3.5-plus",
     "open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
     "open_ai_api_key": "YOUR_API_KEY"
@@ -56,9 +56,13 @@ python3 app.py
 nohup python3 app.py & tail -f nohup.out
 ```

+<Tip>
+When deploying on a server, open port `9899` in the firewall or security group so the Web console can be reached from a browser. For security, it is recommended to allow only specific IPs.
+</Tip>
+
 ## Docker Deployment

 Docker deployment requires no source download or dependency installation. In Agent mode, source deployment is recommended for fuller access to the system.

 <Note>
 Requires [Docker](https://docs.docker.com/engine/install/) and docker-compose.
@@ -67,7 +71,7 @@ nohup python3 app.py & tail -f nohup.out
 **1. Download the config file**

 ```bash
-wget https://cdn.link-ai.tech/code/cow/docker-compose.yml
+curl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml
 ```

 Open `docker-compose.yml` and fill in the required configuration.
@@ -84,6 +88,10 @@ sudo docker compose up -d
 sudo docker logs -f chatgpt-on-wechat
 ```

+<Tip>
+When deploying on a server, open port `9899` in the firewall or security group so the Web console can be reached from a browser. For security, it is recommended to allow only specific IPs.
+</Tip>
+
 ## Core Configuration

 ```json

@@ -10,7 +10,7 @@ description: One-click install and management of CowAgent via script
 ## Install Command

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 The script automatically performs the following steps:

@@ -0,0 +1,52 @@
---
title: Upgrading
description: How to upgrade CowAgent
---

## Script Upgrade (Recommended)

If you manage the service with `run.sh`, run the following command for a one-click upgrade:

```bash
./run.sh update
```

The command automatically performs the following steps:

1. Stop the running service
2. Pull the latest code
3. Re-check dependencies
4. Start the service

## Manual Upgrade

Run the following in the project root directory:

```bash
git pull
pip3 install -r requirements.txt
```

After updating, restart the service:

```bash
# If managed with run.sh
./run.sh restart

# If running directly with nohup
kill $(ps -ef | grep app.py | grep -v grep | awk '{print $2}')
nohup python3 app.py & tail -f nohup.out
```

## Docker Upgrade

Run the following in the directory containing `docker-compose.yml`:

```bash
sudo docker compose pull
sudo docker compose up -d
```

<Tip>
It is recommended to back up `config.json` before upgrading. In Docker environments, mount the workspace directory as a volume to persist data across container rebuilds.
</Tip>

@@ -3,15 +3,20 @@ title: Project Introduction
 description: CowAgent - a super AI assistant built on large models
 ---

-<img src="https://cdn.link-ai.tech/doc/78c5dd674e2c828642ecc0406669fed7.png" alt="CowAgent" width="500px"/>
+<img src="https://cdn.link-ai.tech/doc/78c5dd674e2c828642ecc0406669fed7.png" alt="CowAgent" width="450px"/>

 **CowAgent** is a super AI assistant built on large models: it can think proactively and plan tasks, operate the computer and external resources, create and execute Skills, and keeps growing with long-term memory.

 CowAgent supports flexible switching among multiple models, handles multimodal messages such as text, voice, images, and files, can be integrated into web pages, Feishu, DingTalk, WeCom apps, and WeChat Official Accounts, and runs 7×24 on your personal computer or server.

-<Card title="GitHub" icon="github" href="https://github.com/zhayujie/chatgpt-on-wechat">
-github.com/zhayujie/chatgpt-on-wechat
-</Card>
+<CardGroup cols={2}>
+  <Card title="GitHub" icon="github" href="https://github.com/zhayujie/chatgpt-on-wechat">
+  Open-source repository; Stars and contributions are welcome
+  </Card>
+  <Card title="Try Online (No Deployment)" icon="cloud" href="https://link-ai.tech/cowagent/create">
+  Try CowAgent online instantly, no installation required
+  </Card>
+</CardGroup>

 ## Core Capabilities

@@ -41,7 +46,7 @@ CowAgent supports flexible switching among multiple models and can process text, voice, image,
 Run the following command in a terminal to install, configure, and start CowAgent in one step:

 ```bash
-bash <(curl -sS https://cdn.link-ai.tech/code/cow/run.sh)
+bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
 ```

 By default this starts the Web service; open `http://localhost:9899/chat` to chat in the browser.
@@ -0,0 +1,140 @@
---
title: Coding Plan
description: Model configuration for Coding Plan mode
---

> Coding Plans are monthly coding subscription packages offered by model providers, suited to heavy Agent usage. CowAgent can access each provider's Coding Plan endpoint via the OpenAI-compatible protocol.

<Note>
The Coding Plan API Base and API Key are usually not shared with the standard pay-as-you-go endpoints; obtain them separately from each provider's platform.
</Note>

## General Configuration Format

All providers can be accessed via the OpenAI-compatible protocol and can be quickly configured in the Web console: set the model provider to **OpenAI**, select a custom model and enter the model code, then fill in the corresponding provider's API Base and API Key:

<img src="https://cdn.link-ai.tech/doc/20260318113134.png" width="800"/>

You can also edit the `config.json` file directly:

```json
{
    "bot_type": "openai",
    "model": "MODEL_NAME",
    "open_ai_api_base": "PROVIDER_CODING_PLAN_API_BASE",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `bot_type` | Must be `openai` (OpenAI-compatible mode) |
| `model` | Model name supported by the provider |
| `open_ai_api_base` | The provider's dedicated Coding Plan API Base |
| `open_ai_api_key` | The provider's dedicated Coding Plan API Key |

---

## Alibaba Cloud

```json
{
    "bot_type": "openai",
    "model": "qwen3.5-plus",
    "open_ai_api_base": "https://coding.dashscope.aliyuncs.com/v1",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `qwen3.5-plus`, `qwen3-max-2026-01-23`, `qwen3-coder-next`, `qwen3-coder-plus`, `glm-5`, `glm-4.7`, `kimi-k2.5`, `MiniMax-M2.5` |
| `open_ai_api_base` | `https://coding.dashscope.aliyuncs.com/v1` |
| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |

Official docs: [Quick Start](https://help.aliyun.com/zh/model-studio/coding-plan-quickstart?spm=a2c4g.11186623.help-menu-2400256.d_0_2_1.70115203zi5Igc), [Model List](https://help.aliyun.com/zh/model-studio/coding-plan)

---

## MiniMax

```json
{
    "bot_type": "openai",
    "model": "MiniMax-M2.5",
    "open_ai_api_base": "https://api.minimaxi.com/v1",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2` |
| `open_ai_api_base` | China: `https://api.minimaxi.com/v1`; Global: `https://api.minimax.io/v1` |
| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |

Official docs: [China Key](https://platform.minimaxi.com/docs/coding-plan/quickstart), [Model List](https://platform.minimaxi.com/docs/guides/pricing-coding-plan), [Global Key](https://platform.minimax.io/docs/coding-plan/quickstart)

---

## Zhipu GLM

```json
{
    "bot_type": "openai",
    "model": "glm-4.7",
    "open_ai_api_base": "https://open.bigmodel.cn/api/coding/paas/v4",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `glm-5`, `glm-4.7`, `glm-4.6`, `glm-4.5`, `glm-4.5-air` |
| `open_ai_api_base` | China: `https://open.bigmodel.cn/api/coding/paas/v4`; Global: `https://api.z.ai/api/coding/paas/v4` |
| `open_ai_api_key` | Shared with the standard API |

Official docs: [China Quick Start](https://docs.bigmodel.cn/cn/coding-plan/quick-start), [Global Quick Start](https://docs.z.ai/devpack/quick-start)

---

## Kimi

```json
{
    "bot_type": "openai",
    "model": "kimi-for-coding",
    "open_ai_api_base": "https://api.kimi.com/coding/v1",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `kimi-for-coding` |
| `open_ai_api_base` | `https://api.kimi.com/coding/v1` |
| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |

Official docs: [Key & Docs](https://www.kimi.com/code/docs/)

---

## Volcengine

```json
{
    "bot_type": "openai",
    "model": "Doubao-Seed-2.0-Code",
    "open_ai_api_base": "https://ark.cn-beijing.volces.com/api/coding/v3",
    "open_ai_api_key": "YOUR_API_KEY"
}
```

| Parameter | Description |
| --- | --- |
| `model` | `Doubao-Seed-2.0-Code`, `Doubao-Seed-2.0-pro`, `Doubao-Seed-2.0-lite`, `Doubao-Seed-Code`, `MiniMax-M2.5`, `Kimi-K2.5`, `GLM-4.7`, `DeepSeek-V3.2` |
| `open_ai_api_base` | `https://ark.cn-beijing.volces.com/api/coding/v3` |
| `open_ai_api_key` | Shared with the standard API |

Official docs: [Quick Start](https://www.volcengine.com/docs/82379/1928261?lang=zh)
@@ -10,13 +10,13 @@ description: DeepSeek model configuration
     "model": "deepseek-chat",
     "open_ai_api_key": "YOUR_API_KEY",
     "open_ai_api_base": "https://api.deepseek.com/v1",
-    "bot_type": "chatGPT"
+    "bot_type": "openai"
 }
 ```

 | Parameter | Description |
 | --- | --- |
 | `model` | `deepseek-chat` (DeepSeek-V3), `deepseek-reasoner` (DeepSeek-R1) |
-| `bot_type` | Must be `chatGPT` (OpenAI-compatible mode) |
+| `bot_type` | Must be `openai` (OpenAI-compatible mode) |
 | `open_ai_api_key` | Create at the [DeepSeek Platform](https://platform.deepseek.com/api_keys) |
 | `open_ai_api_base` | DeepSeek platform BASE URL |

@@ -19,7 +19,7 @@ description: Zhipu AI GLM model configuration

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "glm-5",
     "open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
     "open_ai_api_key": "YOUR_API_KEY"

@@ -11,9 +11,13 @@ CowAgent supports mainstream LLMs from domestic and international providers; the

 ## Configuration

-Fill in the model name and API Key for your chosen model in `config.json`. Every model also supports OpenAI-compatible access: set `bot_type` to `chatGPT` and configure `open_ai_api_base` and `open_ai_api_key`.
+Fill in the model name and API Key for your chosen model in `config.json`. Every model also supports OpenAI-compatible access: set `bot_type` to `openai` and configure `open_ai_api_base` and `open_ai_api_key`.

-The [LinkAI](https://link-ai.tech) platform interface is also supported, allowing flexible switching between models with knowledge base, workflow, and other Agent capabilities.
+The [LinkAI](https://link-ai.tech) platform interface is also supported, allowing flexible switching between models with knowledge base, workflow, plugin, and other Agent capabilities.
+
+You can also manage model configuration online via the [Web console](/channels/web), with no need to edit the config file by hand:
+
+<img width="850" src="https://cdn.link-ai.tech/doc/20260227173811.png" />

 ## Supported Models

@@ -19,7 +19,7 @@ description: Kimi (Moonshot) model configuration

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "kimi-k2.5",
     "open_ai_api_base": "https://api.moonshot.cn/v1",
     "open_ai_api_key": "YOUR_API_KEY"

@@ -19,7 +19,7 @@ description: MiniMax model configuration

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "MiniMax-M2.5",
     "open_ai_api_base": "https://api.minimaxi.com/v1",
     "open_ai_api_key": "YOUR_API_KEY"

@@ -13,7 +13,7 @@ description: OpenAI model configuration

 | Parameter | Description |
 | --- | --- |
-| `model` | Matches the [model parameter](https://platform.openai.com/docs/models) of the OpenAI API; supports the o series, gpt-5.4, the gpt-5 series, gpt-4.1, etc. `gpt-5.4` is recommended for Agent mode |
+| `model` | Matches the [model parameter](https://platform.openai.com/docs/models) of the OpenAI API; supports the o series, gpt-5.4, gpt-5.4-mini, gpt-5.4-nano, the gpt-5 series, gpt-4.1, etc. `gpt-5.4` is recommended for Agent mode |
 | `open_ai_api_key` | Create at the [OpenAI Platform](https://platform.openai.com/api-keys) |
 | `open_ai_api_base` | Optional; change it to use a third-party proxy endpoint |
-| `bot_type` | Not required for official OpenAI models. Set to `chatGPT` when using Claude or other non-OpenAI models via a proxy endpoint |
+| `bot_type` | Not required for official OpenAI models. Set to `openai` when using Claude or other non-OpenAI models via a proxy endpoint |

@@ -19,7 +19,7 @@ description: Tongyi Qianwen (Qwen) model configuration

 ```json
 {
-    "bot_type": "chatGPT",
+    "bot_type": "openai",
     "model": "qwen3.5-plus",
     "open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
     "open_ai_api_key": "YOUR_API_KEY"
@@ -5,6 +5,7 @@ description: CowAgent release history

 | Version | Date | Notes |
 | --- | --- | --- |
+| [2.0.3](/releases/v2.0.3) | 2026.03.18 | Added WeCom smart bot and QQ channels, Coding Plan support, several new models, Web-side file handling, memory system upgrade |
 | [2.0.2](/releases/v2.0.2) | 2026.02.27 | Web console upgrade, multiple channels running simultaneously, session persistence |
 | [2.0.1](/releases/v2.0.1) | 2026.02.13 | Built-in Web Search tool, smart context management, various fixes |
 | [2.0.0](/releases/v2.0.0) | 2026.02.03 | Full upgrade to a super Agent assistant |
@@ -0,0 +1,91 @@
---
title: v2.0.3
description: CowAgent 2.0.3 - new WeCom smart bot and QQ channels, Web console file handling, memory system upgrade
---

## 🔌 New Channels

### WeCom Smart Bot

Added the WeCom smart bot (`wecom_bot`) channel, with streaming card-message output, support for receiving and replying to text and image messages, and channel configuration and management in the Web console.

Integration guide: [WeCom Smart Bot](https://docs.cowagent.ai/channels/wecom-bot).

Related commits: [d4480b6](https://github.com/zhayujie/chatgpt-on-wechat/commit/d4480b6), [a42f31f](https://github.com/zhayujie/chatgpt-on-wechat/commit/a42f31f), [4ecd4df](https://github.com/zhayujie/chatgpt-on-wechat/commit/4ecd4df), [8b45d6c](https://github.com/zhayujie/chatgpt-on-wechat/commit/8b45d6c)

### QQ Channel

Added the official QQ bot (`qq`) channel, supporting text and image messages in both direct and group chats.

Integration guide: [QQ Bot](https://docs.cowagent.ai/channels/qq).

Related commits: [005a0e1](https://github.com/zhayujie/chatgpt-on-wechat/commit/005a0e1), [a4d54f5](https://github.com/zhayujie/chatgpt-on-wechat/commit/a4d54f5)

## 🖥️ File Input and Processing in the Web Console

The Web console chat interface now supports file and image uploads, so files can be sent directly to the Agent for processing. The Read tool also gained the ability to parse Office documents (Word, Excel, PPT).

Related commit: [30c6d9b](https://github.com/zhayujie/chatgpt-on-wechat/commit/30c6d9b)

## 🤖 New Models

- **GPT-5.4 series**: added support for `gpt-5.4`, `gpt-5.4-mini`, and `gpt-5.4-nano` ([1623deb](https://github.com/zhayujie/chatgpt-on-wechat/commit/1623deb))
- **Gemini 3.1 Flash Lite Preview**: added support for `gemini-3.1-flash-lite-preview` ([ba915f2](https://github.com/zhayujie/chatgpt-on-wechat/commit/ba915f2))

## 💰 Coding Plan Support

Added support for providers' Coding Plans (monthly coding subscriptions), accessed uniformly through the OpenAI-compatible protocol. Alibaba Cloud, MiniMax, Zhipu GLM, Kimi, and Volcengine are currently supported.

See the [Coding Plan docs](https://docs.cowagent.ai/models/coding-plan) for configuration details.

## 🧠 Memory System Upgrade

Memory writing (Memory Flush) has been upgraded:

- Conversation content that exceeds the context window is summarized by an LLM into concise daily memory entries
- Summarization runs asynchronously in a background thread and does not block replies
- The batch context-trimming strategy was optimized to reduce flush frequency
- Added a daily scheduled flush as a fallback, so memories are not lost in low-activity scenarios
- Fixed a context-memory loss issue

Related commits: [022c13f](https://github.com/zhayujie/chatgpt-on-wechat/commit/022c13f), [c116235](https://github.com/zhayujie/chatgpt-on-wechat/commit/c116235)

## 🔧 Tool Refactoring

- **Image vision**: Image Vision was refactored from a Skill into a built-in Tool, with a new standalone Vision Provider configuration, improving stability and maintainability ([a50fafa](https://github.com/zhayujie/chatgpt-on-wechat/commit/a50fafa), [3b8b562](https://github.com/zhayujie/chatgpt-on-wechat/commit/3b8b562))
- **Web fetch**: Web Fetch was refactored from a Skill into a built-in Tool, with support for downloading and parsing remote document files (PDF, Word, Excel, PPT) ([ccb9030](https://github.com/zhayujie/chatgpt-on-wechat/commit/ccb9030), [fa61744](https://github.com/zhayujie/chatgpt-on-wechat/commit/fa61744))

## 🐳 Docker Deployment Improvements

- **Config template alignment**: `docker-compose.yml` environment variables are aligned with `config-template.json`, covering the full set of model API Key and Agent options
- **Web console port mapping**: added a `9899` port mapping, so the Web console is reachable from a browser after Docker deployment
- **Config hot reload**: each model bot's API Key and API Base are now read at request time, so configuration changed in the Web console takes effect without a restart
- **Workspace persistence**: added a `./cow` volume mount, so Agent workspace data (memory, persona, skills, etc.) persists on the host and survives container rebuilds and upgrades

## ⚡ Performance

- **Faster startup**: the Feishu channel imports its dependencies lazily, avoiding a 4-10 second startup delay ([924dc79](https://github.com/zhayujie/chatgpt-on-wechat/commit/924dc79))
- **Channel stability**: improved channel connection stability; channel configuration can now be set via environment variables ([f1c04bc](https://github.com/zhayujie/chatgpt-on-wechat/commit/f1c04bc), [46d97fd](https://github.com/zhayujie/chatgpt-on-wechat/commit/46d97fd))

## 🐛 Fixes

- **bot_type configuration**: fixed `bot_type` propagation in Agent mode ([#2691](https://github.com/zhayujie/chatgpt-on-wechat/pull/2691)) Thanks [@Weikjssss](https://github.com/Weikjssss)
- **bot_type precedence**: adjusted the resolution priority of `bot_type` in Agent mode ([#2692](https://github.com/zhayujie/chatgpt-on-wechat/pull/2692)) Thanks [@6vision](https://github.com/6vision)
- **Zhipu model config**: fixed Zhipu `bot_type` naming, Web console persistence, and regex escaping ([#2693](https://github.com/zhayujie/chatgpt-on-wechat/pull/2693)) Thanks [@6vision](https://github.com/6vision)
- **OpenAI compatibility layer**: unified error handling through the `openai_compat` layer ([#2688](https://github.com/zhayujie/chatgpt-on-wechat/pull/2688)) Thanks [@JasonOA888](https://github.com/JasonOA888)
- **OpenAI compatibility migration**: completed the `openai_compat` migration for all model bots ([#2689](https://github.com/zhayujie/chatgpt-on-wechat/pull/2689))
- **Gemini tool calls**: fixed tool-call matching for Gemini models ([eda82ba](https://github.com/zhayujie/chatgpt-on-wechat/commit/eda82ba))
- **Session concurrency**: fixed a race condition in concurrent session scenarios ([9879878](https://github.com/zhayujie/chatgpt-on-wechat/commit/9879878))
- **History restore**: fixed incomplete restoration of historical session messages; only user/assistant text messages are restored, with tool calls stripped ([b788a3d](https://github.com/zhayujie/chatgpt-on-wechat/commit/b788a3d), [a33ce97](https://github.com/zhayujie/chatgpt-on-wechat/commit/a33ce97))
- **Feishu group chat**: removed the dependency on `bot_name` in Feishu group chats ([b641bff](https://github.com/zhayujie/chatgpt-on-wechat/commit/b641bff))
- **Safari compatibility**: fixed the IME Enter key accidentally sending messages in Safari ([0687916](https://github.com/zhayujie/chatgpt-on-wechat/commit/0687916))
- **Windows compatibility**: fixed bash-style `$VAR` environment variables being converted to `%VAR%` on Windows ([7c67513](https://github.com/zhayujie/chatgpt-on-wechat/commit/7c67513))
- **MiniMax parameters**: added a `max_tokens` limit for MiniMax models ([1767413](https://github.com/zhayujie/chatgpt-on-wechat/commit/1767413))
- **.gitignore update**: added Python directory ignore rules ([#2683](https://github.com/zhayujie/chatgpt-on-wechat/pull/2683)) Thanks [@pelioo](https://github.com/pelioo)
- **Proactive AGENT.md evolution**: improved the system-prompt guidance for updating AGENT.md, from passively updating when the user edits it to proactively recognizing personality and style changes in conversation and updating automatically

## 📦 How to Upgrade

For source deployments, run `./run.sh update` for a one-click upgrade, or pull the code manually and restart. See the [upgrade docs](https://docs.cowagent.ai/guide/upgrade).

**Release date**: 2026.03.18 | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.2...master)
@@ -17,7 +17,7 @@ def create_bot(bot_type):
         from models.baidu.baidu_wenxin import BaiduWenxinBot
         return BaiduWenxinBot()

-    elif bot_type in (const.CHATGPT, const.DEEPSEEK):  # DeepSeek uses OpenAI-compatible API
+    elif bot_type in (const.OPENAI, const.CHATGPT, const.DEEPSEEK):  # OpenAI-compatible API
         from models.chatgpt.chat_gpt_bot import ChatGPTBot
         return ChatGPTBot()
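The hunk above widens the OpenAI-compatible branch of the bot factory so the new `openai` bot type routes to the same implementation as `chatGPT` and `deepseek`. A minimal self-contained sketch of the dispatch idea (the string constants here are illustrative stand-ins for the repo's `common/const.py` entries, and the returned class names are strings rather than real bot classes):

```python
# Illustrative stand-ins for entries in common/const.py
OPENAI, CHATGPT, DEEPSEEK, BAIDU = "openai", "chatGPT", "deepseek", "baidu"

def resolve_bot_class(bot_type: str) -> str:
    """Map a bot_type string to the name of the bot implementation handling it."""
    if bot_type == BAIDU:
        return "BaiduWenxinBot"
    # openai, chatGPT and deepseek all speak the OpenAI-compatible protocol,
    # so they share a single implementation
    if bot_type in (OPENAI, CHATGPT, DEEPSEEK):
        return "ChatGPTBot"
    raise ValueError(f"unsupported bot_type: {bot_type}")

print(resolve_bot_class("openai"))  # ChatGPTBot
```

Sharing one branch keeps every OpenAI-compatible provider on the same code path, which is why the docs above only ever vary `model`, `open_ai_api_base`, and `open_ai_api_key`.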
@@ -30,11 +30,20 @@ user_session = dict()
 class ClaudeAPIBot(Bot, OpenAIImage):
     def __init__(self):
         super().__init__()
-        self.api_key = conf().get("claude_api_key")
-        self.api_base = conf().get("claude_api_base") or "https://api.anthropic.com/v1"
-        self.proxy = conf().get("proxy", None)
         self.sessions = SessionManager(BaiduWenxinSession, model=conf().get("model") or "text-davinci-003")

+    @property
+    def api_key(self):
+        return conf().get("claude_api_key")
+
+    @property
+    def api_base(self):
+        return conf().get("claude_api_base") or "https://api.anthropic.com/v1"
+
+    @property
+    def proxy(self):
+        return conf().get("proxy", None)
+
     def reply(self, query, context=None):
         # acquire reply content
         if context and context.type:
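The refactor above replaces attributes captured once in `__init__` with properties that re-read the live config on every access, which is what lets Web-console edits take effect without restarting the bot. A minimal self-contained sketch of the pattern (the `_conf` dict and `conf()` function below stand in for CowAgent's global config accessor; `ClaudeAPIBotSketch` is an illustrative name, not the real class):

```python
_conf = {"claude_api_key": "old-key"}

def conf() -> dict:
    # Stand-in for CowAgent's global config accessor
    return _conf

class ClaudeAPIBotSketch:
    @property
    def api_key(self):
        # Re-read the config on every access instead of caching in __init__
        return conf().get("claude_api_key")

    @property
    def api_base(self):
        return conf().get("claude_api_base") or "https://api.anthropic.com/v1"

bot = ClaudeAPIBotSketch()
print(bot.api_key)                   # old-key
_conf["claude_api_key"] = "new-key"  # e.g. edited via the Web console
print(bot.api_key)                   # new-key, with no re-instantiation
```

The same property pattern is applied to the Dashscope, Doubao, Gemini, MiniMax, ModelScope, and Moonshot bots in the hunks below.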
@@ -35,10 +35,14 @@ class DashscopeBot(Bot):
         super().__init__()
         self.sessions = SessionManager(DashscopeSession, model=conf().get("model") or "qwen-plus")
         self.model_name = conf().get("model") or "qwen-plus"
-        self.api_key = conf().get("dashscope_api_key")
-        if self.api_key:
-            os.environ["DASHSCOPE_API_KEY"] = self.api_key
         self.client = dashscope.Generation
+        api_key = conf().get("dashscope_api_key")
+        if api_key:
+            os.environ["DASHSCOPE_API_KEY"] = api_key
+
+    @property
+    def api_key(self):
+        return conf().get("dashscope_api_key")

     @staticmethod
     def _is_multimodal_model(model_name: str) -> bool:

@@ -24,13 +24,17 @@ class DoubaoBot(Bot):
             "temperature": conf().get("temperature", 0.8),
             "top_p": conf().get("top_p", 1.0),
         }
-        self.api_key = conf().get("ark_api_key")
-        self.base_url = conf().get("ark_base_url", "https://ark.cn-beijing.volces.com/api/v3")
-        # Ensure base_url does not end with /chat/completions
-        if self.base_url.endswith("/chat/completions"):
-            self.base_url = self.base_url.rsplit("/chat/completions", 1)[0]
-        if self.base_url.endswith("/"):
-            self.base_url = self.base_url.rstrip("/")
+
+    @property
+    def api_key(self):
+        return conf().get("ark_api_key")
+
+    @property
+    def base_url(self):
+        url = conf().get("ark_base_url", "https://ark.cn-beijing.volces.com/api/v3")
+        if url.endswith("/chat/completions"):
+            url = url.rsplit("/chat/completions", 1)[0]
+        return url.rstrip("/")

     def reply(self, query, context=None):
         # acquire reply content
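The URL cleanup inside `DoubaoBot.base_url` (and its twin in `MoonshotBot` below) can be read as one small normalization rule: accept either the bare API base or a full `/chat/completions` endpoint pasted by the user, and reduce both to the bare base. A sketch of that rule as a standalone helper (the name `normalize_base_url` is illustrative, not a function in the repo):

```python
def normalize_base_url(url: str) -> str:
    """Strip a trailing /chat/completions suffix and any trailing slash,
    so users can paste either the bare API base or the full endpoint URL."""
    if url.endswith("/chat/completions"):
        url = url.rsplit("/chat/completions", 1)[0]
    return url.rstrip("/")

print(normalize_base_url("https://ark.cn-beijing.volces.com/api/v3/chat/completions"))
# https://ark.cn-beijing.volces.com/api/v3
```

Normalizing at read time, inside the property, means even a sloppy value written through the Web console is repaired on every access.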
@@ -28,21 +28,18 @@ class GoogleGeminiBot(Bot):

     def __init__(self):
         super().__init__()
-        self.api_key = conf().get("gemini_api_key")
         # Reuse chatGPT's token counting
         self.sessions = SessionManager(ChatGPTSession, model=conf().get("model") or "gpt-3.5-turbo")
         self.model = conf().get("model") or "gemini-pro"
         if self.model == "gemini":
             self.model = "gemini-pro"

-        # Support a custom API base URL
-        self.api_base = conf().get("gemini_api_base", "").strip()
-        if self.api_base:
-            # Strip the trailing slash
-            self.api_base = self.api_base.rstrip('/')
-            logger.info(f"[Gemini] Using custom API base: {self.api_base}")
-        else:
-            self.api_base = "https://generativelanguage.googleapis.com"
+    @property
+    def api_key(self):
+        return conf().get("gemini_api_key")
+
+    @property
+    def api_base(self):
+        base = conf().get("gemini_api_base", "").strip()
+        if base:
+            return base.rstrip('/')
+        return "https://generativelanguage.googleapis.com"

     def reply(self, query, context: Context = None) -> Reply:
         session_id = None

@@ -24,21 +24,19 @@ class MinimaxBot(Bot):
             "temperature": conf().get("temperature", 0.3),
             "top_p": conf().get("top_p", 0.95),
         }
-        # Use unified key name: minimax_api_key
-        self.api_key = conf().get("minimax_api_key")
-        if not self.api_key:
-            # Fallback to old key name for backward compatibility
-            self.api_key = conf().get("Minimax_api_key")
-            if self.api_key:
-                logger.warning("[MINIMAX] 'Minimax_api_key' is deprecated, please use 'minimax_api_key' instead")
-
-        # REST API endpoint
-        # Use Chinese endpoint by default, users can override in config
-        # International users should set: "minimax_api_base": "https://api.minimax.io/v1"
-        self.api_base = conf().get("minimax_api_base", "https://api.minimaxi.com/v1")
-
         self.sessions = SessionManager(MinimaxSession, model=const.MiniMax)

+    @property
+    def api_key(self):
+        key = conf().get("minimax_api_key")
+        if not key:
+            key = conf().get("Minimax_api_key")
+        return key
+
+    @property
+    def api_base(self):
+        return conf().get("minimax_api_base", "https://api.minimaxi.com/v1")
+
     def reply(self, query, context: Context = None) -> Reply:
         # acquire reply content
         logger.info("[MINIMAX] query={}".format(query))
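The new `MinimaxBot.api_key` property keeps backward compatibility with the old capitalized config key: the unified name wins, and the deprecated one is only consulted as a fallback. The lookup order can be sketched as a standalone function (names mirror the hunk above; `resolve_minimax_key` itself is illustrative, not a function in the repo):

```python
def resolve_minimax_key(cfg: dict):
    """Prefer the unified 'minimax_api_key'; fall back to the deprecated
    'Minimax_api_key' so old config.json files keep working."""
    return cfg.get("minimax_api_key") or cfg.get("Minimax_api_key")

print(resolve_minimax_key({"Minimax_api_key": "legacy"}))  # legacy
```

Because the fallback now lives in a property rather than `__init__`, renaming the key in a running deployment also takes effect immediately.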
@@ -26,8 +26,14 @@ class ModelScopeBot(Bot):
|
||||
"temperature": conf().get("temperature", 0.3), # 如果设置,值域须为 [0, 1] 我们推荐 0.3,以达到较合适的效果。
|
||||
"top_p": conf().get("top_p", 1.0), # 使用默认值
|
||||
}
|
||||
self.api_key = conf().get("modelscope_api_key")
|
||||
self.base_url = conf().get("modelscope_base_url", "https://api-inference.modelscope.cn/v1/chat/completions")
|
||||
|
||||
@property
|
||||
def api_key(self):
|
||||
return conf().get("modelscope_api_key")
|
||||
|
||||
@property
|
||||
def base_url(self):
|
||||
return conf().get("modelscope_base_url", "https://api-inference.modelscope.cn/v1/chat/completions")
|
||||
"""
|
||||
需要获取ModelScope支持API-inference的模型名称列表,请到魔搭社区官网模型中心查看 https://modelscope.cn/models?filter=inference_type&page=1。
|
||||
或者使用命令 curl https://api-inference.modelscope.cn/v1/models 对模型列表和ID进行获取。查看commend/const.py文件也可以获取模型列表。
|
||||
|
||||
@@ -26,13 +26,17 @@ class MoonshotBot(Bot):
|
||||
"temperature": conf().get("temperature", 0.3),
|
||||
"top_p": conf().get("top_p", 1.0),
|
||||
}
|
||||
self.api_key = conf().get("moonshot_api_key")
|
||||
self.base_url = conf().get("moonshot_base_url", "https://api.moonshot.cn/v1")
|
||||
# Ensure base_url does not end with /chat/completions (backward compat)
|
||||
if self.base_url.endswith("/chat/completions"):
|
||||
self.base_url = self.base_url.rsplit("/chat/completions", 1)[0]
|
||||
if self.base_url.endswith("/"):
|
||||
self.base_url = self.base_url.rstrip("/")
|
||||
|
||||
@property
|
||||
def api_key(self):
|
||||
return conf().get("moonshot_api_key")
|
||||
|
||||
@property
|
||||
def base_url(self):
|
||||
url = conf().get("moonshot_base_url", "https://api.moonshot.cn/v1")
|
||||
if url.endswith("/chat/completions"):
|
||||
url = url.rsplit("/chat/completions", 1)[0]
|
||||
return url.rstrip("/")
|
||||
|
||||
def reply(self, query, context=None):
|
||||
# acquire reply content
|
||||
|
||||
@@ -64,7 +64,7 @@ class Dungeon(Plugin):
         if e_context["context"].type != ContextType.TEXT:
             return
         bottype = Bridge().get_bot_type("chat")
-        if bottype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
+        if bottype not in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
             return
         bot = Bridge().get_bot("chat")
         content = e_context["context"].content[:]
@@ -315,7 +315,7 @@ class Godcmd(Plugin):
                 except Exception as e:
                     ok, result = False, "你没有设置私有GPT模型"
             elif cmd == "reset":
-                if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI, const.BAIDU, const.XUNFEI, const.QWEN, const.GEMINI, const.ZHIPU_AI, const.CLAUDEAPI]:
+                if bottype in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI, const.BAIDU, const.XUNFEI, const.QWEN, const.GEMINI, const.ZHIPU_AI, const.CLAUDEAPI]:
                     bot.sessions.clear_session(session_id)
                     if Bridge().chat_bots.get(bottype):
                         Bridge().chat_bots.get(bottype).sessions.clear_session(session_id)
@@ -340,7 +340,7 @@ class Godcmd(Plugin):
                 load_config()
                 ok, result = True, "配置已重载"
             elif cmd == "resetall":
-                if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI,
+                if bottype in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI,
                                const.BAIDU, const.XUNFEI, const.QWEN, const.GEMINI, const.ZHIPU_AI, const.MOONSHOT,
                                const.MODELSCOPE]:
                     channel.cancel_all_session()
@@ -99,7 +99,7 @@ class Role(Plugin):
        if e_context["context"].type != ContextType.TEXT:
            return
        btype = Bridge().get_bot_type("chat")
-       if btype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.QWEN_DASHSCOPE, const.XUNFEI, const.BAIDU, const.ZHIPU_AI, const.MOONSHOT, const.MiniMax, const.LINKAI, const.MODELSCOPE]:
+       if btype not in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.QWEN_DASHSCOPE, const.XUNFEI, const.BAIDU, const.ZHIPU_AI, const.MOONSHOT, const.MiniMax, const.LINKAI, const.MODELSCOPE]:
            logger.debug(f'不支持的bot: {btype}')
            return
        bot = Bridge().get_bot("chat")
@@ -52,6 +52,7 @@ class Tool(Plugin):
 
        # bots from future extensions are not supported yet
        if Bridge().get_bot_type("chat") not in (
+           const.OPENAI,
            const.CHATGPT,
            const.OPEN_AI,
            const.CHATGPTONAZURE,
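Each of the plugin hunks above repeats the same change: appending `const.OPENAI` to a hardcoded allowlist of supported bot types. A hedged sketch of what a shared helper could look like if these checks were centralized (`bot_supported` and the constant values below are hypothetical placeholders, not the real strings from common/const.py):

```python
# Placeholder constant values for illustration only.
OPEN_AI, OPENAI, CHATGPT, CHATGPTONAZURE, LINKAI = (
    "open_ai", "openai", "chatgpt", "chatgpt_on_azure", "linkai",
)

# One shared allowlist instead of a per-plugin hardcoded list.
SUPPORTED_CHAT_BOTS = frozenset({OPEN_AI, OPENAI, CHATGPT, CHATGPTONAZURE, LINKAI})

def bot_supported(bottype: str) -> bool:
    """Return True if the given bot type is in the shared allowlist."""
    return bottype in SUPPORTED_CHAT_BOTS

print(bot_supported("openai"))  # True
print(bot_supported("claude"))  # False
```

With a helper like this, adding a new bot type would touch one set instead of every plugin's membership check.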