Compare commits

...

121 Commits

Author SHA1 Message Date
zhayujie 21b956b983 fix: mj open auth bug 2023-10-16 16:44:06 +08:00
zhayujie 792e940279 fix: knowledge base miss suffix bug 2023-10-13 19:12:23 +08:00
zhayujie c2477b26c0 fix: summary no user_id bug 2023-10-13 18:58:13 +08:00
zhayujie 4b27de809b fix: image create prefix 2023-10-13 18:10:05 +08:00
zhayujie 572932d8e8 docs: update README.md 2023-10-13 16:31:02 +08:00
zhayujie 270dd778d9 docs: update config-template and readme 2023-10-13 16:26:29 +08:00
zhayujie dd04287b0a Merge pull request #1454 from befantasy/patch-5
Update chat_channel.py: fix SHARING type error.
2023-10-13 15:45:00 +08:00
zhayujie 36ac6d005a Merge pull request #1457 from befantasy/master
Add "ContextType.ACCEPT_FRIEND" so plugins can handle the event after a friend request is accepted.
2023-10-13 15:44:25 +08:00
zhayujie 238f05f453 fix: summary plugin group enable bug 2023-10-07 10:50:59 +08:00
zhayujie dd082bd212 fix: search miss config 2023-09-30 20:02:26 +08:00
zhayujie cfd2f27b0b feat: knowledge base search miss config 2023-09-30 15:21:26 +08:00
zhayujie a2160d135e feat: knowledge base miss prefix 2023-09-30 15:14:42 +08:00
zhayujie 16d7836369 fix: summary failed tips 2023-09-29 17:00:47 +08:00
zhayujie f3de4dcc5f fix: remove mini-program url 2023-09-29 16:37:21 +08:00
zhayujie e34523028f fix: admin auth bug 2023-09-29 15:52:34 +08:00
zhayujie efe2fbacd6 Merge branch 'master' of github.com:zhayujie/chatgpt-on-wechat 2023-09-28 16:27:52 +08:00
zhayujie 2fa1df29be fix: file size calc bug 2023-09-28 16:26:53 +08:00
befantasy f72cd13fba Update wechat_message.py 2023-09-28 16:18:04 +08:00
befantasy 5b552dffbf Update wechat_channel.py: add ContextType.ACCEPT_FRIEND 2023-09-28 16:16:30 +08:00
befantasy a0ae2d13dc Update context.py: add ContextType "ACCEPT_FRIEND" 2023-09-28 16:11:09 +08:00
befantasy f7262a0a3a Update chat_channel.py: fix SHARING type error.
chatgpt-on-wechat    | [ERROR][2023-09-27 18:48:41][chat_channel.py:211] - [WX] unknown context type: SHARING
2023-09-27 19:26:47 +08:00
zhayujie 9736f121eb Update README.md 2023-09-26 18:43:25 +08:00
zhayujie 7c8fb7eacc Merge pull request #1428 from scut-chenzk/chenzk
Fix failure to save image messages sent from WeChat locally
2023-09-26 15:59:23 +08:00
zhayujie b45eea5908 Merge pull request #1427 from befantasy/master
Add ReplyType.FILE/ReplyType.VIDEO/ReplyType.VIDEO_URL to the itchat channel to ease plugin development; the keyword plugin adds file and video matched replies
2023-09-26 01:27:35 +08:00
zhayujie 6babf4ee6c Merge pull request #1445 from befantasy/patch-3
Update godcmd.py: add option to disable debug mode
2023-09-26 00:37:17 +08:00
zhayujie 576526d4ee Merge pull request #1446 from 6vision/master
Optimize message storage for personal subscription accounts
2023-09-26 00:36:36 +08:00
zhayujie c03e31b7be fix: linkai instruction bug 2023-09-25 23:15:59 +08:00
zhayujie a1aa925019 fix: no summary config bug 2023-09-25 18:30:19 +08:00
zhayujie a5a234ed97 fix: remove file after summary 2023-09-25 16:42:36 +08:00
zhayujie 5b5dbcd78b feat: remove file word calc and support url link 2023-09-24 14:33:39 +08:00
zhayujie bd1c6361d3 Update README.md 2023-09-24 12:54:34 +08:00
zhayujie 1fc1febf03 Merge pull request #1450 from zhayujie/feat-doc-chat
feat: document summarization and chat with document content
2023-09-24 12:30:45 +08:00
zhayujie 55cc35efa9 feat: document summary and chat with content 2023-09-24 12:27:09 +08:00
vision 5ba8fdc5e7 fix 2023-09-23 14:31:54 +08:00
vision 6ea295e227 Merge pull request #1 from 6vision/feat
Long voice message support for personal subscription accounts
2023-09-23 13:46:25 +08:00
befantasy 5010c76ef7 Update godcmd.py: add option to disable debug mode 2023-09-23 13:37:01 +08:00
6vision 79c7f0c29f Long voice message support for personal subscription accounts 2023-09-23 13:27:36 +08:00
6vision 2b3e643786 Support multiple replies per request 2023-09-23 11:59:01 +08:00
chenzhenkun 90cdff327c Fix failure to save image messages sent from WeChat locally 2023-09-15 19:07:52 +08:00
zhayujie 55c116e727 Update README.md 2023-09-15 18:42:56 +08:00
befantasy 3dd83aa6b7 Update chat_channel.py 2023-09-15 18:38:31 +08:00
befantasy a74aa12641 Update wechat_channel.py 2023-09-15 18:37:05 +08:00
befantasy 151e8c69f9 Update keyword.py 2023-09-15 18:22:10 +08:00
befantasy d8bfa77705 Update keyword.py 2023-09-15 16:56:51 +08:00
befantasy 6bd286e8d5 Update wechat_channel.py to support ReplyType.FILE 2023-09-15 16:22:46 +08:00
befantasy 905532b681 Update chat_channel.py to support ReplyType.FILE 2023-09-15 16:21:27 +08:00
zhayujie 04d5c1ab01 Delete .github/ISSUE_TEMPLATE/config.yml 2023-09-15 15:45:23 +08:00
zhayujie 28be141dc7 Merge pull request #1422 from scut-chenzk/chenzk
Fix broken voice replies
2023-09-15 15:14:00 +08:00
chenzk 652b786baf Merge branch 'zhayujie:master' into chenzk 2023-09-14 23:42:00 +08:00
chenzhenkun ba6c671051 Fix failure to save received image messages locally 2023-09-14 23:39:07 +08:00
chenzhenkun ca25d0433f Fix broken voice replies 2023-09-14 17:52:11 +08:00
zhayujie 5338106dfa Merge pull request #1308 from leesonchen/master
Split voice output into segments for enterprise service accounts
2023-09-12 18:18:17 +08:00
zhayujie b6b76be4f6 fix: add summary plugin bot type 2023-09-06 16:50:23 +08:00
zhayujie 03d94fcfa0 fix: not enable user_image_create_prefix by default 2023-09-06 12:02:13 +08:00
zhayujie b2c5f0d455 feat: mj use default config 2023-09-06 11:53:33 +08:00
zhayujie 54f60dd38c chore: remove dependencies that can only be used under windows 2023-09-04 11:14:48 +08:00
zhayujie 42f181aca2 Merge pull request #1394 from resphinas/claude_bot
Update claude_ai_bot.py
2023-09-04 10:47:02 +08:00
resphina 9c3a27894f Update claude_ai_bot.py 2023-09-03 19:12:27 +08:00
resphina f7cd348912 Update claude_ai_bot.py 2023-09-03 19:04:43 +08:00
zhayujie aeaeb75d3b Merge pull request #1396 from 6vision/master
Optimize image download and storage logic
2023-09-03 17:32:30 +08:00
vision 96542b532e Update requirements-optional.txt 2023-09-03 17:14:28 +08:00
vision 139295fe0d Update requirements-optional.txt
Add the dependencies required by the WeWork personal-account channel
2023-09-03 16:47:25 +08:00
vision 13217b2ce2 Merge pull request #1 from 6vision/patch-1
Optimize image download and storage logic
2023-09-03 16:35:01 +08:00
vision 5cc8b56a7c Optimize image download and storage logic
- Implement new compression logic for files larger than 10MB to improve storage efficiency.
- Switch from JPEG to PNG to enhance image quality and compatibility.
2023-09-03 16:29:19 +08:00
resphina e23e01c95e Update claude_ai_bot.py 2023-09-03 15:40:08 +08:00
resphina bca8ba12c7 Update claude_ai_bot.py 2023-09-03 15:22:25 +08:00
vision 3c44bdbe1c Update requirements-optional.txt 2023-09-03 15:10:05 +08:00
zhayujie db93ed025b Merge branch 'master' of github.com:zhayujie/chatgpt-on-wechat 2023-09-02 21:50:28 +08:00
zhayujie 4209e108d0 fix: wework single chat no prefix circle reply 2023-09-02 21:49:43 +08:00
zhayujie 14cbf011af Merge pull request #1391 from resphinas/claude_bot
Rename claude_ai_session to claude_ai_session.py
2023-09-02 10:42:29 +08:00
resphina 03a41ec199 Rename claude_ai_session to claude_ai_session.py 2023-09-02 02:40:57 +08:00
zhayujie 125fe2a026 Merge pull request #1390 from scut-chenzk/chenzk
Chenzk
2023-09-01 19:42:21 +08:00
chenzhenkun ac4adac29e Handle WeChat @-mentions 2023-09-01 19:37:19 +08:00
chenzhenkun ac449d078e Merge remote-tracking branch 'origin/chenzk' into chenzk
# Conflicts:
#	channel/chat_channel.py
2023-09-01 19:22:02 +08:00
chenzhenkun 79be4530d4 Prevent an infinite loop when the reply hits the trigger prefix 2023-09-01 19:18:53 +08:00
chenzk 85ce52d70c Merge branch 'zhayujie:master' into chenzk 2023-09-01 18:57:52 +08:00
chenzhenkun 7ab56b9076 Add logging to help locate issues 2023-09-01 18:56:24 +08:00
zhayujie dedf976375 Merge pull request #1389 from scut-chenzk/chenzk
Fix infinite loop when the bot @-mentions itself
2023-09-01 18:42:41 +08:00
chenzhenkun 89f438208a Fix infinite loop when the bot @-mentions itself 2023-09-01 18:39:31 +08:00
zhayujie ffbc5080ae Merge pull request #1388 from resphinas/claude_bot
Implement the shared-context switch from config for the Claude integration
2023-09-01 18:34:43 +08:00
resphina 4167f13bac Update README.md 2023-09-01 18:12:48 +08:00
resphina 6ba0baabb0 Update claude_ai_bot.py 2023-09-01 18:04:39 +08:00
resphina 081003df47 Update config.py 2023-09-01 17:55:09 +08:00
resphina 559194ffb2 Update config.py 2023-09-01 17:54:03 +08:00
resphina 97a26d4a46 Update README.md 2023-09-01 17:53:21 +08:00
resphina 503c6c9b7e Update claude_ai_bot.py 2023-09-01 17:31:30 +08:00
resphina 9a1e10deff Create claude_ai_session 2023-09-01 17:30:31 +08:00
zhayujie 054f927c05 fix: at_list bug in wechat channel 2023-09-01 13:45:04 +08:00
resphina 22210747d0 Update README.md 2023-09-01 12:40:09 +08:00
resphina 53b2deb72c Update bot API documentation 2023-09-01 12:38:58 +08:00
zhayujie 6fc158e7d6 hotfix: config.py format 2023-09-01 11:32:58 +08:00
zhayujie a23a65c731 Merge pull request #1382 from resphinas/claude_bot
Add a Claude chatbot interface (reverse-engineered cookie implementation; stable, does not expire)
2023-09-01 10:48:33 +08:00
resphina 7dc7105ee2 Update requirements-optional.txt 2023-09-01 10:32:33 +08:00
resphina bac70108b2 Update requirements.txt 2023-09-01 10:32:03 +08:00
resphina 297404b21e Update config-template.json 2023-09-01 10:31:45 +08:00
resphina 33a7f8b558 Delete chatgpt-on-wechat-master.iml 2023-09-01 10:08:34 +08:00
resphina 4a670b7df7 Update config-template.json 2023-09-01 09:40:26 +08:00
resphina 79e4af315e Update log.py 2023-09-01 09:39:45 +08:00
resphina c6e31b2fdc Update chat_gpt_bot.py 2023-09-01 09:39:08 +08:00
resphina 91dc44df53 Update const.py 2023-09-01 09:38:47 +08:00
resphina 7e57f8f157 Merge branch 'master' into claude_bot 2023-09-01 09:37:10 +08:00
zhayujie 15f6b7c6d3 Merge pull request #1385 from scut-chenzk/chenzk
Support the wework enterprise WeChat bot
2023-08-31 22:44:17 +08:00
chenzhenkun b213ba541d Add plugin support to the wework enterprise WeChat bot 2023-08-31 21:02:00 +08:00
chenzhenkun 7c6ed9944e Support the wework enterprise WeChat bot 2023-08-30 20:49:00 +08:00
resphinas a5a825e439 system role remove 2023-08-29 06:45:21 +08:00
resphinas a4ab547f77 proxy update 2023-08-29 05:59:59 +08:00
resphinas 76ed763abe proxy update 2023-08-29 05:58:39 +08:00
resphinas b9e3125610 Formatting fixes (2) 2023-08-28 18:04:28 +08:00
resphina 8d9d5b7b6f Update claude_ai_bot.py 2023-08-28 17:40:27 +08:00
resphina 187601da1e Update config-template.json 2023-08-28 17:30:03 +08:00
resphina cc3a0fc367 Update config-template.json 2023-08-28 17:28:13 +08:00
resphinas 44cc4165d1 claude_bot 2023-08-28 17:22:20 +08:00
resphinas f98b43514e claude_bot 2023-08-28 17:18:00 +08:00
resphinas 3c9b1a14e9 claude bot update 2023-08-28 16:43:26 +08:00
zhayujie 827e8eddf8 chore: remove dockerhub in arm build 2023-08-27 12:28:10 +08:00
zhayujie 7bc27d6167 fix: remove docker hub register in arm build 2023-08-27 12:10:08 +08:00
zhayujie ba06edd63a fix: remove pysilk_mod 2023-08-26 17:32:52 +08:00
zhayujie cacf553a5b feat: add arm workflows 2023-08-26 17:17:03 +08:00
zhayujie d89091a8ea fix: git action deploy 2023-08-26 14:14:32 +08:00
zhayujie 01a56e1155 feat: try arm docker image 2023-08-26 12:45:16 +08:00
leeson 8224c2fc16 Split voice output into segments for enterprise service accounts 2023-07-08 23:58:07 +08:00
34 changed files with 1486 additions and 135 deletions
-6
@@ -1,6 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: 知识星球
url: https://public.zsxq.com/groups/88885848842852.html
about: 如果你想了解更多项目细节,并与开发者们交流更多关于AI技术的实践,欢迎加入星球
+71
@@ -0,0 +1,71 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Create and publish a Docker image
on:
push:
branches: ['master']
create:
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v4
with:
images: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
context: .
push: true
file: ./docker/Dockerfile.latest
platforms: linux/arm64
tags: ${{ steps.meta.outputs.tags }}-arm64
labels: ${{ steps.meta.outputs.labels }}
- uses: actions/delete-package-versions@v4
with:
package-name: 'chatgpt-on-wechat'
package-type: 'container'
min-versions-to-keep: 10
delete-only-untagged-versions: 'true'
token: ${{ secrets.GITHUB_TOKEN }}
+14 -16
@@ -5,10 +5,10 @@
最新版本支持的功能如下:
- [x] **多端部署:** 有多种部署方式可选择且功能完备,目前已支持个人微信,微信公众号和企业微信应用等部署方式
- [x] **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3, GPT-3.5, GPT-4, 文心一言, 讯飞星火
- [x] **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3.5, GPT-4, claude, 文心一言, 讯飞星火
- [x] **语音识别:** 可识别语音消息,通过文字或语音回复,支持 azure, baidu, google, openai等多种语音模型
- [x] **图片生成:** 支持图片生成 和 图生图(如照片修复),可选择 Dell-E, stable diffusion, replicate, midjourney模型
- [x] **丰富插件:** 支持个性化插件扩展,已实现多角色切换、文字冒险、敏感词过滤、聊天记录总结等插件
- [x] **丰富插件:** 支持个性化插件扩展,已实现多角色切换、文字冒险、敏感词过滤、聊天记录总结、文档总结和对话等插件
- [X] **Tool工具:** 与操作系统和互联网交互,支持最新信息搜索、数学计算、天气和资讯查询、网页总结,基于 [chatgpt-tool-hub](https://github.com/goldfishh/chatgpt-tool-hub) 实现
- [x] **知识库:** 通过上传知识库文件自定义专属机器人,可作为数字分身、领域知识库、智能客服使用,基于 [LinkAI](https://chat.link-ai.tech/console) 实现
@@ -28,9 +28,11 @@ Demo made by [Visionn](https://www.wangpc.cc/)
# 更新日志
>**2023.09.26** 插件增加 文件/文章链接 一键总结和对话的功能,使用参考:[插件说明](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/linkai#3%E6%96%87%E6%A1%A3%E6%80%BB%E7%BB%93%E5%AF%B9%E8%AF%9D%E5%8A%9F%E8%83%BD)
>**2023.08.08** 接入百度文心一言模型,通过 [插件](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/linkai) 支持 Midjourney 绘图
>**2023.06.12** 接入 [LinkAI](https://chat.link-ai.tech/console) 平台,可在线创建个人知识库,并接入微信、公众号及企业微信中,打造专属客服机器人。使用参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
>**2023.06.12** 接入 [LinkAI](https://chat.link-ai.tech/console) 平台,可在线创建领域知识库,并接入微信、公众号及企业微信中,打造专属客服机器人。使用参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
>**2023.04.26** 支持企业微信应用号部署,兼容插件,并支持语音图片交互,私人助理理想选择,[使用文档](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md)。(contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
@@ -42,19 +44,19 @@ Demo made by [Visionn](https://www.wangpc.cc/)
>**2023.03.09** 基于 `whisper API`(后续已接入更多的语音`API`服务) 实现对微信语音消息的解析和回复,添加配置项 `"speech_recognition":true` 即可启用,使用参考 [#415](https://github.com/zhayujie/chatgpt-on-wechat/issues/415)。(contributed by [wanggang1987](https://github.com/wanggang1987) in [#385](https://github.com/zhayujie/chatgpt-on-wechat/pull/385))
>**2023.03.02** 接入[ChatGPT API](https://platform.openai.com/docs/guides/chat) (gpt-3.5-turbo),默认使用该模型进行对话,需升级openai依赖 (`pip3 install --upgrade openai`)。网络问题参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
>**2023.02.09** 扫码登录存在账号限制风险,请谨慎使用,参考[#58](https://github.com/AutumnWhj/ChatGPT-wechat-bot/issues/158)
# 快速开始
## 准备
### 1. OpenAI账号注册
### 1. 账号注册
前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,参考这篇 [教程](https://www.pythonthree.com/register-openai-chatgpt/) 可以通过虚拟手机号来接收验证码。创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。
项目默认使用OpenAI接口,需前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。接口需要海外网络访问及绑定信用卡支付。
> 项目中默认使用的对话模型是 gpt3.5 turbo,计费方式是约每 500 汉字 (包含请求和回复) 消耗 $0.002,图片生成是每张消耗 $0.016。
> 默认对话模型是 openai 的 gpt-3.5-turbo,计费方式是约每 1000tokens (约750个英文单词 或 500汉字包含请求和回复) 消耗 $0.002,图片生成是Dell E模型,每张消耗 $0.016。
项目同时也支持使用 LinkAI 接口,无需代理,可使用 文心、讯飞、GPT-3、GPT-4 等模型,支持 定制化知识库、联网搜索、MJ绘图、文档总结和对话等能力。修改配置即可一键切换,参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
### 2.运行环境
@@ -157,7 +159,7 @@ pip3 install azure-cognitiveservices-speech
**4.其他配置**
+ `model`: 模型名称,目前支持 `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, `gpt-4-32k`, `wenxin` (其中gpt-4 api暂未完全开放,申请通过后可使用)
+ `model`: 模型名称,目前支持 `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, `gpt-4-32k`, `wenxin` , `claude` , `xunfei`(其中gpt-4 api暂未完全开放,申请通过后可使用)
+ `temperature`,`frequency_penalty`,`presence_penalty`: Chat API接口参数,详情参考[OpenAI官方文档。](https://platform.openai.com/docs/api-reference/chat)
+ `proxy`:由于目前 `openai` 接口国内无法访问,需配置代理客户端的地址,详情参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+ 对于图像生成,在满足个人或群组触发条件外,还需要额外的关键词前缀来触发,对应配置 `image_create_prefix `
@@ -184,10 +186,10 @@ pip3 install azure-cognitiveservices-speech
如果是开发机 **本地运行**,直接在项目根目录下执行:
```bash
python3 app.py
python3 app.py # windows环境下该命令通常为 python app.py
```
终端输出二维码后,使用微信进行扫码,当输出 "Start auto replying" 时表示自动回复程序已经成功运行了(注意:用于登录的微信需要在支付处已完成实名认证)。扫码登录后你的账号就成为机器人了,可以在微信手机端通过配置的关键词触发自动回复 (任意好友发送消息给你,或是自己发消息给好友),参考[#142](https://github.com/zhayujie/chatgpt-on-wechat/issues/142)。
终端输出二维码后,使用微信进行扫码,当输出 "Start auto replying" 时表示自动回复程序已经成功运行了(注意:用于登录的微信需要在支付处已完成实名认证)。扫码登录后你的账号就成为机器人了,可以在微信手机端通过配置的关键词触发自动回复 (任意好友发送消息给你,或是自己发消息给好友),参考[#142](https://github.com/zhayujie/chatgpt-on-wechat/issues/142)。
### 2.服务器部署
@@ -269,8 +271,4 @@ FAQs <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>
## 联系
欢迎提交PR、Issues,以及Star支持一下。程序运行遇到问题可以查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ,其次前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索。
如果你想了解更多项目细节,与开发者们交流更多关于AI技术的实践,欢迎加入星球:
<a href="https://public.zsxq.com/groups/88885848842852.html"><img width="360" src="./docs/images/planet.jpg"></a>
欢迎提交PR、Issues,以及Star支持一下。程序运行遇到问题可以查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ,其次前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索。参与更多讨论可加入技术交流群。
+1 -1
@@ -43,7 +43,7 @@ def run():
# os.environ['WECHATY_PUPPET_SERVICE_ENDPOINT'] = '127.0.0.1:9001'
channel = channel_factory.create_channel(channel_name)
if channel_name in ["wx", "wxy", "terminal", "wechatmp", "wechatmp_service", "wechatcom_app"]:
if channel_name in ["wx", "wxy", "terminal", "wechatmp", "wechatmp_service", "wechatcom_app", "wework"]:
PluginManager().load_plugins()
# startup channel
+4
@@ -39,4 +39,8 @@ def create_bot(bot_type):
elif bot_type == const.LINKAI:
from bot.linkai.link_ai_bot import LinkAIBot
return LinkAIBot()
elif bot_type == const.CLAUDEAI:
from bot.claude.claude_ai_bot import ClaudeAIBot
return ClaudeAIBot()
raise RuntimeError
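The factory above maps a bot-type constant to a lazily imported class and raises RuntimeError for anything unrecognized. A minimal standalone sketch of that dispatch pattern (the class names here are placeholders, not the project's real imports):

```python
class LinkAIBot:
    """Placeholder standing in for bot.linkai.link_ai_bot.LinkAIBot."""

class ClaudeAIBot:
    """Placeholder standing in for bot.claude.claude_ai_bot.ClaudeAIBot."""

# Registry keyed by bot-type name; the real code branches on constants
# from common.const and imports each bot module inside its branch.
_BOTS = {"linkai": LinkAIBot, "claude": ClaudeAIBot}

def create_bot(bot_type: str):
    # Mirror the factory's behavior: instance for known types,
    # RuntimeError for everything else.
    cls = _BOTS.get(bot_type)
    if cls is None:
        raise RuntimeError(f"unsupported bot type: {bot_type}")
    return cls()
```

Branch-local imports keep optional dependencies (e.g. `curl_cffi` for Claude) from loading unless that bot is actually selected.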
+222
@@ -0,0 +1,222 @@
import re
import time
import json
import uuid
from curl_cffi import requests
from bot.bot import Bot
from bot.claude.claude_ai_session import ClaudeAiSession
from bot.openai.open_ai_image import OpenAIImage
from bot.session_manager import SessionManager
from bridge.context import Context, ContextType
from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf
class ClaudeAIBot(Bot, OpenAIImage):
def __init__(self):
super().__init__()
self.sessions = SessionManager(ClaudeAiSession, model=conf().get("model") or "gpt-3.5-turbo")
self.claude_api_cookie = conf().get("claude_api_cookie")
self.proxy = conf().get("proxy")
self.con_uuid_dic = {}
if self.proxy:
self.proxies = {
"http": self.proxy,
"https": self.proxy
}
else:
self.proxies = None
self.error = ""
self.org_uuid = self.get_organization_id()
def generate_uuid(self):
random_uuid = uuid.uuid4()
random_uuid_str = str(random_uuid)
formatted_uuid = f"{random_uuid_str[0:8]}-{random_uuid_str[9:13]}-{random_uuid_str[14:18]}-{random_uuid_str[19:23]}-{random_uuid_str[24:]}"
return formatted_uuid
def reply(self, query, context: Context = None) -> Reply:
if context.type == ContextType.TEXT:
return self._chat(query, context)
elif context.type == ContextType.IMAGE_CREATE:
ok, res = self.create_img(query, 0)
if ok:
reply = Reply(ReplyType.IMAGE_URL, res)
else:
reply = Reply(ReplyType.ERROR, res)
return reply
else:
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def get_organization_id(self):
url = "https://claude.ai/api/organizations"
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://claude.ai/chats',
'Content-Type': 'application/json',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'Connection': 'keep-alive',
'Cookie': f'{self.claude_api_cookie}'
}
try:
response = requests.get(url, headers=headers, impersonate="chrome110", proxies =self.proxies, timeout=400)
res = json.loads(response.text)
uuid = res[0]['uuid']
except:
if "App unavailable" in response.text:
logger.error("IP error: The IP is not allowed to be used on Claude")
self.error = "ip所在地区不被claude支持"
elif "Invalid authorization" in response.text:
logger.error("Cookie error: Invalid authorization of claude, check cookie please.")
self.error = "无法通过claude身份验证,请检查cookie"
return None
return uuid
def conversation_share_check(self,session_id):
if conf().get("claude_uuid") is not None and conf().get("claude_uuid") != "":
con_uuid = conf().get("claude_uuid")
return con_uuid
if session_id not in self.con_uuid_dic:
self.con_uuid_dic[session_id] = self.generate_uuid()
self.create_new_chat(self.con_uuid_dic[session_id])
return self.con_uuid_dic[session_id]
def check_cookie(self):
flag = self.get_organization_id()
return flag
def create_new_chat(self, con_uuid):
"""
新建claude对话实体
:param con_uuid: 对话id
:return:
"""
url = f"https://claude.ai/api/organizations/{self.org_uuid}/chat_conversations"
payload = json.dumps({"uuid": con_uuid, "name": ""})
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://claude.ai/chats',
'Content-Type': 'application/json',
'Origin': 'https://claude.ai',
'DNT': '1',
'Connection': 'keep-alive',
'Cookie': self.claude_api_cookie,
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'TE': 'trailers'
}
response = requests.post(url, headers=headers, data=payload, impersonate="chrome110", proxies=self.proxies, timeout=400)
# Returns JSON of the newly created conversation information
return response.json()
def _chat(self, query, context, retry_count=0) -> Reply:
"""
发起对话请求
:param query: 请求提示词
:param context: 对话上下文
:param retry_count: 当前递归重试次数
:return: 回复
"""
if retry_count >= 2:
# exit from retry 2 times
logger.warn("[CLAUDEAI] failed after maximum number of retry times")
return Reply(ReplyType.ERROR, "请再问我一次吧")
try:
session_id = context["session_id"]
if self.org_uuid is None:
return Reply(ReplyType.ERROR, self.error)
session = self.sessions.session_query(query, session_id)
con_uuid = self.conversation_share_check(session_id)
model = conf().get("model") or "gpt-3.5-turbo"
# remove system message
if session.messages[0].get("role") == "system":
if model == "wenxin" or model == "claude":
session.messages.pop(0)
logger.info(f"[CLAUDEAI] query={query}")
# do http request
base_url = "https://claude.ai"
payload = json.dumps({
"completion": {
"prompt": f"{query}",
"timezone": "Asia/Kolkata",
"model": "claude-2"
},
"organization_uuid": f"{self.org_uuid}",
"conversation_uuid": f"{con_uuid}",
"text": f"{query}",
"attachments": []
})
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0',
'Accept': 'text/event-stream, text/event-stream',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://claude.ai/chats',
'Content-Type': 'application/json',
'Origin': 'https://claude.ai',
'DNT': '1',
'Connection': 'keep-alive',
'Cookie': f'{self.claude_api_cookie}',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'TE': 'trailers'
}
res = requests.post(base_url + "/api/append_message", headers=headers, data=payload,impersonate="chrome110",proxies= self.proxies,timeout=400)
if res.status_code == 200 or "pemission" in res.text:
# execute success
decoded_data = res.content.decode("utf-8")
decoded_data = re.sub('\n+', '\n', decoded_data).strip()
data_strings = decoded_data.split('\n')
completions = []
for data_string in data_strings:
json_str = data_string[6:].strip()
data = json.loads(json_str)
if 'completion' in data:
completions.append(data['completion'])
reply_content = ''.join(completions)
if "rate limi" in reply_content:
logger.error("rate limit error: The conversation has reached the system rate limit, synchronized with Claude. Please go to the official website to check when the limit lifts")
return Reply(ReplyType.ERROR, "对话达到系统速率限制,与claude同步,请进入官网查看解除限制时间")
logger.info(f"[CLAUDE] reply={reply_content}, total_tokens=invisible")
self.sessions.session_reply(reply_content, session_id, 100)
return Reply(ReplyType.TEXT, reply_content)
else:
flag = self.check_cookie()
if flag == None:
return Reply(ReplyType.ERROR, self.error)
response = res.json()
error = response.get("error")
logger.error(f"[CLAUDE] chat failed, status_code={res.status_code}, "
f"msg={error.get('message')}, type={error.get('type')}, detail: {res.text}, uuid: {con_uuid}")
if res.status_code >= 500:
# server error, need retry
time.sleep(2)
logger.warn(f"[CLAUDE] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
return Reply(ReplyType.ERROR, "提问太快啦,请休息一下再问我吧")
except Exception as e:
logger.exception(e)
# retry
time.sleep(2)
logger.warn(f"[CLAUDE] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
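`_chat` above parses Claude's `text/event-stream` body by collapsing blank lines, slicing the six-character `data: ` prefix off each line (the `[6:]`), and concatenating every `completion` field. A self-contained sketch of that parsing step, assuming well-formed JSON on every data line (the function name is ours, not the project's):

```python
import json
import re

def parse_claude_stream(raw: str) -> str:
    """Concatenate the 'completion' fields from a Claude SSE payload.

    Mirrors the parsing in _chat: collapse repeated newlines, split on
    newlines, drop the leading 'data: ' prefix, and join every
    'completion' chunk into the final reply text.
    """
    decoded = re.sub(r"\n+", "\n", raw).strip()
    completions = []
    for line in decoded.split("\n"):
        payload = line[6:].strip()  # strip the 6-char 'data: ' prefix
        data = json.loads(payload)
        if "completion" in data:
            completions.append(data["completion"])
    return "".join(completions)
```

Note the fixed `[6:]` slice assumes every line starts with exactly `data: `; a malformed or non-data line would raise in `json.loads`, which in the real code is caught by the surrounding retry logic.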
+9
@@ -0,0 +1,9 @@
from bot.session_manager import Session
class ClaudeAiSession(Session):
def __init__(self, session_id, system_prompt=None, model="claude"):
super().__init__(session_id, system_prompt)
self.model = model
# claude逆向不支持role prompt
# self.reset()
+93 -3
@@ -12,7 +12,7 @@ from bot.session_manager import SessionManager
from bridge.context import Context, ContextType
from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf
from config import conf, pconf
class LinkAIBot(Bot, OpenAIImage):
@@ -23,6 +23,7 @@ class LinkAIBot(Bot, OpenAIImage):
def __init__(self):
super().__init__()
self.sessions = SessionManager(ChatGPTSession, model=conf().get("model") or "gpt-3.5-turbo")
self.args = {}
def reply(self, query, context: Context = None) -> Reply:
if context.type == ContextType.TEXT:
@@ -72,13 +73,16 @@ class LinkAIBot(Bot, OpenAIImage):
body = {
"app_code": app_code,
"messages": session.messages,
"model": model, # 对话模型的名称, 支持 gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin
"model": model, # 对话模型的名称, 支持 gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei
"temperature": conf().get("temperature"),
"top_p": conf().get("top_p", 1),
"frequency_penalty": conf().get("frequency_penalty", 0.0), # [-2,2]之间,该值越大则更倾向于产生不同的内容
"presence_penalty": conf().get("presence_penalty", 0.0), # [-2,2]之间,该值越大则更倾向于产生不同的内容
}
logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}")
file_id = context.kwargs.get("file_id")
if file_id:
body["file_id"] = file_id
logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}, file_id={file_id}")
headers = {"Authorization": "Bearer " + linkai_api_key}
# do http request
@@ -92,6 +96,9 @@ class LinkAIBot(Bot, OpenAIImage):
total_tokens = response["usage"]["total_tokens"]
logger.info(f"[LINKAI] reply={reply_content}, total_tokens={total_tokens}")
self.sessions.session_reply(reply_content, session_id, total_tokens)
suffix = self._fecth_knowledge_search_suffix(response)
if suffix:
reply_content += suffix
return Reply(ReplyType.TEXT, reply_content)
else:
@@ -114,3 +121,86 @@ class LinkAIBot(Bot, OpenAIImage):
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
def reply_text(self, session: ChatGPTSession, app_code="", retry_count=0) -> dict:
if retry_count >= 2:
# exit from retry 2 times
logger.warn("[LINKAI] failed after maximum number of retry times")
return {
"total_tokens": 0,
"completion_tokens": 0,
"content": "请再问我一次吧"
}
try:
body = {
"app_code": app_code,
"messages": session.messages,
"model": conf().get("model") or "gpt-3.5-turbo", # 对话模型的名称, 支持 gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei
"temperature": conf().get("temperature"),
"top_p": conf().get("top_p", 1),
"frequency_penalty": conf().get("frequency_penalty", 0.0), # [-2,2]之间,该值越大则更倾向于产生不同的内容
"presence_penalty": conf().get("presence_penalty", 0.0), # [-2,2]之间,该值越大则更倾向于产生不同的内容
}
if self.args.get("max_tokens"):
body["max_tokens"] = self.args.get("max_tokens")
headers = {"Authorization": "Bearer " + conf().get("linkai_api_key")}
# do http request
base_url = conf().get("linkai_api_base", "https://api.link-ai.chat")
res = requests.post(url=base_url + "/v1/chat/completions", json=body, headers=headers,
timeout=conf().get("request_timeout", 180))
if res.status_code == 200:
# execute success
response = res.json()
reply_content = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
logger.info(f"[LINKAI] reply={reply_content}, total_tokens={total_tokens}")
return {
"total_tokens": total_tokens,
"completion_tokens": response["usage"]["completion_tokens"],
"content": reply_content,
}
else:
response = res.json()
error = response.get("error")
logger.error(f"[LINKAI] chat failed, status_code={res.status_code}, "
f"msg={error.get('message')}, type={error.get('type')}")
if res.status_code >= 500:
# server error, need retry
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self.reply_text(session, app_code, retry_count + 1)
return {
"total_tokens": 0,
"completion_tokens": 0,
"content": "提问太快啦,请休息一下再问我吧"
}
except Exception as e:
logger.exception(e)
# retry
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self.reply_text(session, app_code, retry_count + 1)
def _fecth_knowledge_search_suffix(self, response) -> str:
try:
if response.get("knowledge_base"):
search_hit = response.get("knowledge_base").get("search_hit")
first_similarity = response.get("knowledge_base").get("first_similarity")
logger.info(f"[LINKAI] knowledge base, search_hit={search_hit}, first_similarity={first_similarity}")
plugin_config = pconf("linkai")
if plugin_config.get("knowledge_base") and plugin_config.get("knowledge_base").get("search_miss_text_enabled"):
search_miss_similarity = plugin_config.get("knowledge_base").get("search_miss_similarity")
search_miss_text = plugin_config.get("knowledge_base").get("search_miss_suffix")
if not search_hit:
return search_miss_text
if search_miss_similarity and float(search_miss_similarity) > first_similarity:
return search_miss_text
except Exception as e:
logger.exception(e)
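The `_fecth_knowledge_search_suffix` helper appends a configurable "miss" suffix when the knowledge base either found no hit or matched below a similarity threshold. A standalone restatement of that decision, with a plain dict standing in for `pconf("linkai")`:

```python
def knowledge_miss_suffix(search_hit, first_similarity, plugin_conf) -> str:
    """Return the miss suffix to append, or '' when nothing applies.

    plugin_conf is a dict mirroring the linkai plugin config keys used
    above: knowledge_base.search_miss_text_enabled,
    search_miss_suffix, search_miss_similarity.
    """
    kb = plugin_conf.get("knowledge_base") or {}
    if not kb.get("search_miss_text_enabled"):
        return ""
    miss_text = kb.get("search_miss_suffix", "")
    if not search_hit:
        # No knowledge-base hit at all: always append the miss text.
        return miss_text
    threshold = kb.get("search_miss_similarity")
    if threshold and float(threshold) > first_similarity:
        # Hit, but below the configured similarity threshold.
        return miss_text
    return ""
```
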
+8
@@ -29,7 +29,10 @@ class Bridge(object):
self.btype["chat"] = const.XUNFEI
if conf().get("use_linkai") and conf().get("linkai_api_key"):
self.btype["chat"] = const.LINKAI
if model_type in ["claude"]:
self.btype["chat"] = const.CLAUDEAI
self.bots = {}
self.chat_bots = {}
def get_bot(self, typename):
if self.bots.get(typename) is None:
@@ -59,6 +62,11 @@ class Bridge(object):
def fetch_translate(self, text, from_lang="", to_lang="en") -> Reply:
return self.get_bot("translate").translate(text, from_lang, to_lang)
def find_chat_bot(self, bot_type: str):
if self.chat_bots.get(bot_type) is None:
self.chat_bots[bot_type] = create_bot(bot_type)
return self.chat_bots.get(bot_type)
def reset_bot(self):
"""
重置bot路由
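`find_chat_bot` above adds a small per-type cache on top of `create_bot`, so each bot is constructed at most once and reused afterwards. A sketch of that memoization, with a stub in place of the real factory:

```python
class Bridge:
    """Sketch of the chat_bots cache introduced in find_chat_bot.

    object() stands in for create_bot(bot_type); the real code calls
    the factory in bot_factory.
    """

    def __init__(self):
        self.chat_bots = {}

    def find_chat_bot(self, bot_type: str):
        # Construct each bot type lazily, exactly once.
        if self.chat_bots.get(bot_type) is None:
            self.chat_bots[bot_type] = object()  # stub for create_bot(bot_type)
        return self.chat_bots.get(bot_type)
```

This lets plugins request a bot by type without paying the construction (and import) cost on every call.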
+6
@@ -7,9 +7,15 @@ class ContextType(Enum):
TEXT = 1 # 文本消息
VOICE = 2 # 音频消息
IMAGE = 3 # 图片消息
FILE = 4 # 文件信息
VIDEO = 5 # 视频信息
SHARING = 6 # 分享信息
IMAGE_CREATE = 10 # 创建图片命令
ACCEPT_FRIEND = 19 # 同意好友请求
JOIN_GROUP = 20 # 加入群聊
PATPAT = 21 # 拍了拍
FUNCTION = 22 # 函数调用
def __str__(self):
return self.name
+7 -1
@@ -8,9 +8,15 @@ class ReplyType(Enum):
VOICE = 2 # 音频文件
IMAGE = 3 # 图片文件
IMAGE_URL = 4 # 图片URL
VIDEO_URL = 5 # 视频URL
FILE = 6 # 文件
CARD = 7 # 微信名片,仅支持ntchat
InviteRoom = 8 # 邀请好友进群
INFO = 9
ERROR = 10
TEXT_ = 11 # 强制文本
VIDEO = 12
MINIAPP = 13 # 小程序
def __str__(self):
return self.name
+4
@@ -33,4 +33,8 @@ def create_channel(channel_type):
from channel.wechatcom.wechatcomapp_channel import WechatComAppChannel
return WechatComAppChannel()
elif channel_type == "wework":
from channel.wework.wework_channel import WeworkChannel
return WeworkChannel()
raise RuntimeError
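`create_channel` is a plain if/elif factory that imports each channel lazily and raises `RuntimeError` for unknown types. The same behavior can be sketched with a registry of constructors (the classes here are stand-ins, not the project's real channel classes):

```python
class WechatChannel:
    pass


class WeworkChannel:
    pass


_CHANNELS = {
    "wx": WechatChannel,
    "wework": WeworkChannel,
}


def create_channel(channel_type):
    try:
        return _CHANNELS[channel_type]()
    except KeyError:
        # mirror the original: an unknown channel type is a hard error
        raise RuntimeError(f"unknown channel_type: {channel_type}")
```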
+25 -15
@@ -99,21 +99,26 @@ class ChatChannel(Channel):
match_prefix = check_prefix(content, conf().get("group_chat_prefix"))
match_contain = check_contain(content, conf().get("group_chat_keyword"))
flag = False
if match_prefix is not None or match_contain is not None:
flag = True
if match_prefix:
content = content.replace(match_prefix, "", 1).strip()
if context["msg"].is_at:
logger.info("[WX]receive group at")
if not conf().get("group_at_off", False):
if context["msg"].to_user_id != context["msg"].actual_user_id:
if match_prefix is not None or match_contain is not None:
flag = True
pattern = f"@{re.escape(self.name)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", content)
if subtract_res == content and context["msg"].self_display_name:
# nothing changed after stripping; retry with the group display name
pattern = f"@{re.escape(context['msg'].self_display_name)}(\u2005|\u0020)"
if match_prefix:
content = content.replace(match_prefix, "", 1).strip()
if context["msg"].is_at:
logger.info("[WX]receive group at")
if not conf().get("group_at_off", False):
flag = True
pattern = f"@{re.escape(self.name)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", content)
content = subtract_res
if isinstance(context["msg"].at_list, list):
for at in context["msg"].at_list:
pattern = f"@{re.escape(at)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", subtract_res)
if subtract_res == content and context["msg"].self_display_name:
# nothing changed after stripping; retry with the group display name
pattern = f"@{re.escape(context['msg'].self_display_name)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", content)
content = subtract_res
if not flag:
if context["origin_ctype"] == ContextType.VOICE:
logger.info("[WX]receive group voice, but checkprefix didn't match")
@@ -197,7 +202,12 @@ class ChatChannel(Channel):
reply = self._generate_reply(new_context)
else:
return
elif context.type == ContextType.IMAGE: # 图片消息,当前无默认逻辑
elif context.type == ContextType.IMAGE: # 图片消息,当前仅做下载保存到本地的逻辑
cmsg = context["msg"]
cmsg.prepare()
elif context.type == ContextType.SHARING: # 分享信息,当前无默认逻辑
pass
elif context.type == ContextType.FUNCTION or context.type == ContextType.FILE: # 文件消息及函数调用等,当前无默认逻辑
pass
else:
logger.error("[WX] unknown context type: {}".format(context.type))
@@ -233,7 +243,7 @@ class ChatChannel(Channel):
reply.content = reply_text
elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
reply.content = "[" + str(reply.type) + "]\n" + reply.content
elif reply.type == ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE:
elif reply.type == ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE or reply.type == ReplyType.FILE or reply.type == ReplyType.VIDEO or reply.type == ReplyType.VIDEO_URL:
pass
else:
logger.error("[WX] unknown reply type: {}".format(reply.type))
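The group-chat branch above strips the bot's @-mention with a regex that accepts either WeChat's special padding character U+2005 or an ordinary space after the name, falling back to the bot's in-group display name when the first pass changes nothing. That stripping, isolated into a standalone helper (the function name is illustrative):

```python
import re


def strip_at(content, name, display_name=None):
    # WeChat inserts U+2005 (four-per-em space) after an @-mention;
    # a plain space can also appear, so match either.
    pattern = f"@{re.escape(name)}(\u2005|\u0020)"
    stripped = re.sub(pattern, r"", content)
    if stripped == content and display_name:
        # the global nickname did not appear; retry with the group display name
        pattern = f"@{re.escape(display_name)}(\u2005|\u0020)"
        stripped = re.sub(pattern, r"", content)
    return stripped
```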
+3 -1
@@ -53,6 +53,7 @@ class ChatMessage(object):
is_at = False
actual_user_id = None
actual_user_nickname = None
at_list = None
_prepare_fn = None
_prepared = False
@@ -67,7 +68,7 @@ class ChatMessage(object):
self._prepare_fn()
def __str__(self):
return "ChatMessage: id={}, create_time={}, ctype={}, content={}, from_user_id={}, from_user_nickname={}, to_user_id={}, to_user_nickname={}, other_user_id={}, other_user_nickname={}, is_group={}, is_at={}, actual_user_id={}, actual_user_nickname={}".format(
return "ChatMessage: id={}, create_time={}, ctype={}, content={}, from_user_id={}, from_user_nickname={}, to_user_id={}, to_user_nickname={}, other_user_id={}, other_user_nickname={}, is_group={}, is_at={}, actual_user_id={}, actual_user_nickname={}, at_list={}".format(
self.msg_id,
self.create_time,
self.ctype,
@@ -82,4 +83,5 @@ class ChatMessage(object):
self.is_at,
self.actual_user_id,
self.actual_user_nickname,
self.at_list
)
+26 -3
@@ -25,7 +25,7 @@ from lib import itchat
from lib.itchat.content import *
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE])
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE, ATTACHMENT, SHARING])
def handler_single_msg(msg):
try:
cmsg = WechatMessage(msg, False)
@@ -36,7 +36,7 @@ def handler_single_msg(msg):
return None
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE], isGroupChat=True)
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE, ATTACHMENT, SHARING], isGroupChat=True)
def handler_group_msg(msg):
try:
cmsg = WechatMessage(msg, True)
@@ -167,11 +167,13 @@ class WechatChannel(ChatChannel):
logger.debug("[WX]receive voice for group msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.IMAGE:
logger.debug("[WX]receive image for group msg: {}".format(cmsg.content))
elif cmsg.ctype in [ContextType.JOIN_GROUP, ContextType.PATPAT]:
elif cmsg.ctype in [ContextType.JOIN_GROUP, ContextType.PATPAT, ContextType.ACCEPT_FRIEND]:
logger.debug("[WX]receive note msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.TEXT:
# logger.debug("[WX]receive group msg: {}, cmsg={}".format(json.dumps(cmsg._rawmsg, ensure_ascii=False), cmsg))
pass
elif cmsg.ctype == ContextType.FILE:
logger.debug(f"[WX]receive attachment msg, file_name={cmsg.content}")
else:
logger.debug("[WX]receive group msg: {}".format(cmsg.content))
context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
@@ -208,3 +210,24 @@ class WechatChannel(ChatChannel):
image_storage.seek(0)
itchat.send_image(image_storage, toUserName=receiver)
logger.info("[WX] sendImage, receiver={}".format(receiver))
elif reply.type == ReplyType.FILE: # 新增文件回复类型
file_storage = reply.content
itchat.send_file(file_storage, toUserName=receiver)
logger.info("[WX] sendFile, receiver={}".format(receiver))
elif reply.type == ReplyType.VIDEO: # 新增视频回复类型
video_storage = reply.content
itchat.send_video(video_storage, toUserName=receiver)
logger.info("[WX] sendVideo, receiver={}".format(receiver))
elif reply.type == ReplyType.VIDEO_URL: # 新增视频URL回复类型
video_url = reply.content
logger.debug(f"[WX] start download video, video_url={video_url}")
video_res = requests.get(video_url, stream=True)
video_storage = io.BytesIO()
size = 0
for block in video_res.iter_content(1024):
size += len(block)
video_storage.write(block)
logger.info(f"[WX] download video success, size={size}, video_url={video_url}")
video_storage.seek(0)
itchat.send_video(video_storage, toUserName=receiver)
logger.info("[WX] sendVideo url={}, receiver={}".format(video_url, receiver))
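The new `VIDEO_URL` branch streams the remote file into an in-memory buffer and rewinds it before handing it to itchat. The pattern, sketched against a stub response object so it runs offline (the stub stands in for `requests.get(url, stream=True)`):

```python
import io


class FakeResponse:
    # stands in for a streaming requests.Response
    def __init__(self, chunks):
        self._chunks = chunks

    def iter_content(self, chunk_size):
        yield from self._chunks


def download_to_buffer(res):
    storage = io.BytesIO()
    size = 0
    for block in res.iter_content(1024):
        size += len(block)
        storage.write(block)
    storage.seek(0)  # rewind so the sender reads from the start
    return storage, size
```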
+11 -1
@@ -7,7 +7,6 @@ from common.tmp_dir import TmpDir
from lib import itchat
from lib.itchat.content import *
class WechatMessage(ChatMessage):
def __init__(self, itchat_msg, is_group=False):
super().__init__(itchat_msg)
@@ -35,6 +34,9 @@ class WechatMessage(ChatMessage):
self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[-1]
elif "加入群聊" in itchat_msg["Content"]:
self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
elif "你已添加了" in itchat_msg["Content"]: #通过好友请求
self.ctype = ContextType.ACCEPT_FRIEND
self.content = itchat_msg["Content"]
elif "拍了拍我" in itchat_msg["Content"]:
self.ctype = ContextType.PATPAT
self.content = itchat_msg["Content"]
@@ -42,6 +44,14 @@ class WechatMessage(ChatMessage):
self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
else:
raise NotImplementedError("Unsupported note message: " + itchat_msg["Content"])
elif itchat_msg["Type"] == ATTACHMENT:
self.ctype = ContextType.FILE
self.content = TmpDir().path() + itchat_msg["FileName"] # content直接存临时目录路径
self._prepare_fn = lambda: itchat_msg.download(self.content)
elif itchat_msg["Type"] == SHARING:
self.ctype = ContextType.SHARING
self.content = itchat_msg.get("Url")
else:
raise NotImplementedError("Unsupported message type: Type:{} MsgType:{}".format(itchat_msg["Type"], itchat_msg["MsgType"]))
+6 -4
@@ -49,7 +49,7 @@ class Query:
# New request
if (
from_user not in channel.cache_dict
channel.cache_dict.get(from_user) is None
and from_user not in channel.running
or content.startswith("#")
and message_id not in channel.request_cnt # insert the godcmd
@@ -131,8 +131,10 @@ class Query:
# Only one request can access to the cached data
try:
(reply_type, reply_content) = channel.cache_dict.pop(from_user)
except KeyError:
(reply_type, reply_content) = channel.cache_dict[from_user].pop(0)
if not channel.cache_dict[from_user]: # If popping the message makes the list empty, delete the user entry from cache
del channel.cache_dict[from_user]
except IndexError:
return "success"
if reply_type == "text":
@@ -146,7 +148,7 @@ class Query:
max_split=1,
)
reply_text = splits[0] + continue_text
channel.cache_dict[from_user] = ("text", splits[1])
channel.cache_dict[from_user].append(("text", splits[1]))
logger.info(
"[wechatmp] Request {} do send to {} {}: {}\n{}".format(
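Switching `cache_dict` from a plain dict to `defaultdict(list)` turns the passive-reply cache into a per-user queue: every reply is appended, and the poller pops from the front, deleting the user's entry once the list empties. A minimal sketch of that queue discipline (helper names are illustrative):

```python
from collections import defaultdict

cache = defaultdict(list)


def push(user, kind, content):
    cache[user].append((kind, content))


def pop(user):
    # returns None when the user has nothing queued
    try:
        item = cache[user].pop(0)
    except IndexError:
        return None
    if not cache[user]:
        del cache[user]  # drop empty entries, as the channel does
    return item
```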
+45 -25
@@ -10,6 +10,7 @@ import requests
import web
from wechatpy.crypto import WeChatCrypto
from wechatpy.exceptions import WeChatClientException
from collections import defaultdict
from bridge.context import *
from bridge.reply import *
@@ -20,7 +21,7 @@ from common.log import logger
from common.singleton import singleton
from common.utils import split_string_by_utf8_length
from config import conf
from voice.audio_convert import any_to_mp3
from voice.audio_convert import any_to_mp3, split_audio
# If using SSL, uncomment the following lines, and modify the certificate path.
# from cheroot.server import HTTPServer
@@ -46,7 +47,7 @@ class WechatMPChannel(ChatChannel):
self.crypto = WeChatCrypto(token, aes_key, appid)
if self.passive_reply:
# Cache the reply to the user's first message
self.cache_dict = dict()
self.cache_dict = defaultdict(list)
# Record whether the current message is being processed
self.running = set()
# Count the request from wechat official server by message_id
@@ -82,24 +83,28 @@ class WechatMPChannel(ChatChannel):
if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:
reply_text = reply.content
logger.info("[wechatmp] text cached, receiver {}\n{}".format(receiver, reply_text))
self.cache_dict[receiver] = ("text", reply_text)
self.cache_dict[receiver].append(("text", reply_text))
elif reply.type == ReplyType.VOICE:
try:
voice_file_path = reply.content
with open(voice_file_path, "rb") as f:
# support: <2M, <60s, mp3/wma/wav/amr
response = self.client.material.add("voice", f)
logger.debug("[wechatmp] upload voice response: {}".format(response))
# 根据文件大小估计一个微信自动审核的时间,审核结束前返回将会导致语音无法播放,这个估计有待验证
f_size = os.fstat(f.fileno()).st_size
time.sleep(1.0 + 2 * f_size / 1024 / 1024)
# todo check media_id
except WeChatClientException as e:
logger.error("[wechatmp] upload voice failed: {}".format(e))
return
media_id = response["media_id"]
logger.info("[wechatmp] voice uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver] = ("voice", media_id)
voice_file_path = reply.content
duration, files = split_audio(voice_file_path, 60 * 1000)
if len(files) > 1:
logger.info("[wechatmp] voice too long {}s > 60s , split into {} parts".format(duration / 1000.0, len(files)))
for path in files:
# support: <2M, <60s, mp3/wma/wav/amr
try:
with open(path, "rb") as f:
response = self.client.material.add("voice", f)
logger.debug("[wechatmp] upload voice response: {}".format(response))
f_size = os.fstat(f.fileno()).st_size
time.sleep(1.0 + 2 * f_size / 1024 / 1024)
# todo check media_id
except WeChatClientException as e:
logger.error("[wechatmp] upload voice failed: {}".format(e))
return
media_id = response["media_id"]
logger.info("[wechatmp] voice uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver].append(("voice", media_id))
elif reply.type == ReplyType.IMAGE_URL: # 从网络下载图片
img_url = reply.content
@@ -119,7 +124,7 @@ class WechatMPChannel(ChatChannel):
return
media_id = response["media_id"]
logger.info("[wechatmp] image uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver] = ("image", media_id)
self.cache_dict[receiver].append(("image", media_id))
elif reply.type == ReplyType.IMAGE: # 从文件读取图片
image_storage = reply.content
image_storage.seek(0)
@@ -134,7 +139,7 @@ class WechatMPChannel(ChatChannel):
return
media_id = response["media_id"]
logger.info("[wechatmp] image uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver] = ("image", media_id)
self.cache_dict[receiver].append(("image", media_id))
else:
if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:
reply_text = reply.content
@@ -162,13 +167,28 @@ class WechatMPChannel(ChatChannel):
file_name = os.path.basename(file_path)
file_type = "audio/mpeg"
logger.info("[wechatmp] file_name: {}, file_type: {} ".format(file_name, file_type))
# support: <2M, <60s, AMR\MP3
response = self.client.media.upload("voice", (file_name, open(file_path, "rb"), file_type))
logger.debug("[wechatmp] upload voice response: {}".format(response))
media_ids = []
duration, files = split_audio(file_path, 60 * 1000)
if len(files) > 1:
logger.info("[wechatmp] voice too long {}s > 60s , split into {} parts".format(duration / 1000.0, len(files)))
for path in files:
# support: <2M, <60s, AMR\MP3
response = self.client.media.upload("voice", (os.path.basename(path), open(path, "rb"), file_type))
logger.debug("[wechatmp] upload voice response: {}".format(response))
media_ids.append(response["media_id"])
os.remove(path)
except WeChatClientException as e:
logger.error("[wechatmp] upload voice failed: {}".format(e))
return
self.client.message.send_voice(receiver, response["media_id"])
try:
os.remove(file_path)
except Exception:
pass
for media_id in media_ids:
self.client.message.send_voice(receiver, media_id)
time.sleep(1)
logger.info("[wechatmp] Do send voice to {}".format(receiver))
elif reply.type == ReplyType.IMAGE_URL: # 从网络下载图片
img_url = reply.content
+17
@@ -0,0 +1,17 @@
import os
import time
os.environ['ntwork_LOG'] = "ERROR"
import ntwork
wework = ntwork.WeWork()
def forever():
try:
while True:
time.sleep(0.1)
except KeyboardInterrupt:
ntwork.exit_()
os._exit(0)
+326
@@ -0,0 +1,326 @@
import io
import os
import random
import tempfile
import threading
os.environ['ntwork_LOG'] = "ERROR"
import ntwork
import requests
import uuid
from bridge.context import *
from bridge.reply import *
from channel.chat_channel import ChatChannel
from channel.wework.wework_message import *
from channel.wework.wework_message import WeworkMessage
from common.singleton import singleton
from common.log import logger
from common.time_check import time_checker
from common.utils import compress_imgfile, fsize
from config import conf
from channel.wework.run import wework
from channel.wework import run
from PIL import Image
def get_wxid_by_name(room_members, group_wxid, name):
if group_wxid in room_members:
for member in room_members[group_wxid]['member_list']:
if member['room_nickname'] == name or member['username'] == name:
return member['user_id']
return None # 如果没有找到对应的group_wxid或name,则返回None
def download_and_compress_image(url, filename, quality=30):
# 确定保存图片的目录
directory = os.path.join(os.getcwd(), "tmp")
# 如果目录不存在,则创建目录
if not os.path.exists(directory):
os.makedirs(directory)
# 下载图片
pic_res = requests.get(url, stream=True)
image_storage = io.BytesIO()
for block in pic_res.iter_content(1024):
image_storage.write(block)
# 检查图片大小并可能进行压缩
sz = fsize(image_storage)
if sz >= 10 * 1024 * 1024: # 如果图片大于 10 MB
logger.info("[wework] image too large, ready to compress, sz={}".format(sz))
image_storage = compress_imgfile(image_storage, 10 * 1024 * 1024 - 1)
logger.info("[wework] image compressed, sz={}".format(fsize(image_storage)))
# 将内存缓冲区的指针重置到起始位置
image_storage.seek(0)
# 读取并保存图片
image = Image.open(image_storage)
image_path = os.path.join(directory, f"{filename}.png")
image.save(image_path, "png")
return image_path
def download_video(url, filename):
# 确定保存视频的目录
directory = os.path.join(os.getcwd(), "tmp")
# 如果目录不存在,则创建目录
if not os.path.exists(directory):
os.makedirs(directory)
# 下载视频
response = requests.get(url, stream=True)
total_size = 0
video_path = os.path.join(directory, f"{filename}.mp4")
with open(video_path, 'wb') as f:
for block in response.iter_content(1024):
total_size += len(block)
# 如果视频的总大小超过30MB (30 * 1024 * 1024 bytes),则停止下载并返回
if total_size > 30 * 1024 * 1024:
logger.info("[WX] Video is larger than 30MB, skipping...")
return None
f.write(block)
return video_path
def create_message(wework_instance, message, is_group):
logger.debug(f"正在为{'群聊' if is_group else '单聊'}创建 WeworkMessage")
cmsg = WeworkMessage(message, wework=wework_instance, is_group=is_group)
logger.debug(f"cmsg:{cmsg}")
return cmsg
def handle_message(cmsg, is_group):
logger.debug(f"准备用 WeworkChannel 处理{'群聊' if is_group else '单聊'}消息")
if is_group:
WeworkChannel().handle_group(cmsg)
else:
WeworkChannel().handle_single(cmsg)
logger.debug(f"已用 WeworkChannel 处理完{'群聊' if is_group else '单聊'}消息")
def _check(func):
def wrapper(self, cmsg: ChatMessage):
msgId = cmsg.msg_id
create_time = cmsg.create_time # 消息时间戳
if create_time is None:
return func(self, cmsg)
if int(create_time) < int(time.time()) - 60: # 跳过1分钟前的历史消息
logger.debug("[WX]history message {} skipped".format(msgId))
return
return func(self, cmsg)
return wrapper
@wework.msg_register(
[ntwork.MT_RECV_TEXT_MSG, ntwork.MT_RECV_IMAGE_MSG, 11072, ntwork.MT_RECV_VOICE_MSG])
def all_msg_handler(wework_instance: ntwork.WeWork, message):
logger.debug(f"收到消息: {message}")
if 'data' in message:
# 首先查找conversation_id,如果没有找到,则查找room_conversation_id
conversation_id = message['data'].get('conversation_id', message['data'].get('room_conversation_id'))
if conversation_id is not None:
is_group = "R:" in conversation_id
try:
cmsg = create_message(wework_instance=wework_instance, message=message, is_group=is_group)
except NotImplementedError as e:
logger.error(f"[WX]{message.get('MsgId', 'unknown')} 跳过: {e}")
return None
delay = random.randint(1, 2)
timer = threading.Timer(delay, handle_message, args=(cmsg, is_group))
timer.start()
else:
logger.debug("消息数据中无 conversation_id")
return None
return None
def accept_friend_with_retries(wework_instance, user_id, corp_id):
result = wework_instance.accept_friend(user_id, corp_id)
logger.debug(f'result:{result}')
# @wework.msg_register(ntwork.MT_RECV_FRIEND_MSG)
# def friend(wework_instance: ntwork.WeWork, message):
# data = message["data"]
# user_id = data["user_id"]
# corp_id = data["corp_id"]
# logger.info(f"接收到好友请求,消息内容:{data}")
# delay = random.randint(1, 180)
# threading.Timer(delay, accept_friend_with_retries, args=(wework_instance, user_id, corp_id)).start()
#
# return None
def get_with_retry(get_func, max_retries=5, delay=5):
retries = 0
result = None
while retries < max_retries:
result = get_func()
if result:
break
logger.warning(f"获取数据失败,重试第{retries + 1}次······")
retries += 1
time.sleep(delay) # 等待一段时间后重试
return result
@singleton
class WeworkChannel(ChatChannel):
NOT_SUPPORT_REPLYTYPE = []
def __init__(self):
super().__init__()
def startup(self):
smart = conf().get("wework_smart", True)
wework.open(smart)
logger.info("等待登录······")
wework.wait_login()
login_info = wework.get_login_info()
self.user_id = login_info['user_id']
self.name = login_info['nickname']
logger.info(f"登录信息:>>>user_id:{self.user_id}>>>>>>>>name:{self.name}")
logger.info("静默延迟60s,等待客户端刷新数据,请勿进行任何操作······")
time.sleep(60)
contacts = get_with_retry(wework.get_external_contacts)
rooms = get_with_retry(wework.get_rooms)
directory = os.path.join(os.getcwd(), "tmp")
if not contacts or not rooms:
logger.error("获取contacts或rooms失败,程序退出")
ntwork.exit_()
os._exit(0)
if not os.path.exists(directory):
os.makedirs(directory)
# 将contacts保存到json文件中
with open(os.path.join(directory, 'wework_contacts.json'), 'w', encoding='utf-8') as f:
json.dump(contacts, f, ensure_ascii=False, indent=4)
with open(os.path.join(directory, 'wework_rooms.json'), 'w', encoding='utf-8') as f:
json.dump(rooms, f, ensure_ascii=False, indent=4)
# 创建一个空字典来保存结果
result = {}
# 遍历列表中的每个字典
for room in rooms['room_list']:
# 获取聊天室ID
room_wxid = room['conversation_id']
# 获取聊天室成员
room_members = wework.get_room_members(room_wxid)
# 将聊天室成员保存到结果字典中
result[room_wxid] = room_members
# 将结果保存到json文件中
with open(os.path.join(directory, 'wework_room_members.json'), 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=4)
logger.info("wework程序初始化完成········")
run.forever()
@time_checker
@_check
def handle_single(self, cmsg: ChatMessage):
if cmsg.from_user_id == cmsg.to_user_id:
# ignore self reply
return
if cmsg.ctype == ContextType.VOICE:
if not conf().get("speech_recognition"):
return
logger.debug("[WX]receive voice msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.IMAGE:
logger.debug("[WX]receive image msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.PATPAT:
logger.debug("[WX]receive patpat msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.TEXT:
logger.debug("[WX]receive text msg: {}, cmsg={}".format(json.dumps(cmsg._rawmsg, ensure_ascii=False), cmsg))
else:
logger.debug("[WX]receive msg: {}, cmsg={}".format(cmsg.content, cmsg))
context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=False, msg=cmsg)
if context:
self.produce(context)
@time_checker
@_check
def handle_group(self, cmsg: ChatMessage):
if cmsg.ctype == ContextType.VOICE:
if not conf().get("speech_recognition"):
return
logger.debug("[WX]receive voice for group msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.IMAGE:
logger.debug("[WX]receive image for group msg: {}".format(cmsg.content))
elif cmsg.ctype in [ContextType.JOIN_GROUP, ContextType.PATPAT]:
logger.debug("[WX]receive note msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.TEXT:
pass
else:
logger.debug("[WX]receive group msg: {}".format(cmsg.content))
context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
if context:
self.produce(context)
# 统一的发送函数,每个Channel自行实现,根据reply的type字段发送不同类型的消息
def send(self, reply: Reply, context: Context):
logger.debug(f"context: {context}")
receiver = context["receiver"]
actual_user_id = context["msg"].actual_user_id
if reply.type == ReplyType.TEXT or reply.type == ReplyType.TEXT_:
match = re.search(r"^@(.*?)\n", reply.content)
logger.debug(f"match: {match}")
if match:
new_content = re.sub(r"^@(.*?)\n", "\n", reply.content)
at_list = [actual_user_id]
logger.debug(f"new_content: {new_content}")
wework.send_room_at_msg(receiver, new_content, at_list)
else:
wework.send_text(receiver, reply.content)
logger.info("[WX] sendMsg={}, receiver={}".format(reply, receiver))
elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
wework.send_text(receiver, reply.content)
logger.info("[WX] sendMsg={}, receiver={}".format(reply, receiver))
elif reply.type == ReplyType.IMAGE: # 从文件读取图片
image_storage = reply.content
image_storage.seek(0)
# Read data from image_storage
data = image_storage.read()
# Create a temporary file
with tempfile.NamedTemporaryFile(delete=False) as temp:
temp_path = temp.name
temp.write(data)
# Send the image
wework.send_image(receiver, temp_path)
logger.info("[WX] sendImage, receiver={}".format(receiver))
# Remove the temporary file
os.remove(temp_path)
elif reply.type == ReplyType.IMAGE_URL: # 从网络下载图片
img_url = reply.content
filename = str(uuid.uuid4())
# 调用你的函数,下载图片并保存为本地文件
image_path = download_and_compress_image(img_url, filename)
wework.send_image(receiver, file_path=image_path)
logger.info("[WX] sendImage url={}, receiver={}".format(img_url, receiver))
elif reply.type == ReplyType.VIDEO_URL:
video_url = reply.content
filename = str(uuid.uuid4())
video_path = download_video(video_url, filename)
if video_path is None:
# 如果视频太大,下载可能会被跳过,此时 video_path 将为 None
wework.send_text(receiver, "抱歉,视频太大了!!!")
else:
wework.send_video(receiver, video_path)
logger.info("[WX] sendVideo, receiver={}".format(receiver))
elif reply.type == ReplyType.VOICE:
current_dir = os.getcwd()
voice_file = reply.content.split("/")[-1]
reply.content = os.path.join(current_dir, "tmp", voice_file)
wework.send_file(receiver, reply.content)
logger.info("[WX] sendFile={}, receiver={}".format(reply.content, receiver))
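`download_video` in the new wework channel caps downloads at 30 MB by counting bytes as they stream and aborting once the limit is crossed, returning `None` so the caller can send an apology instead. The guard, sketched with a stub response so it runs without network access (stub and limit are illustrative):

```python
import io


class FakeResponse:
    # stands in for a streaming requests.Response
    def __init__(self, chunks):
        self._chunks = chunks

    def iter_content(self, chunk_size):
        yield from self._chunks


def capped_download(res, limit):
    buf = io.BytesIO()
    total = 0
    for block in res.iter_content(1024):
        total += len(block)
        if total > limit:
            return None  # too large: abandon the download
        buf.write(block)
    buf.seek(0)
    return buf
```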
+211
@@ -0,0 +1,211 @@
import datetime
import json
import os
import re
import time
import pilk
from bridge.context import ContextType
from channel.chat_message import ChatMessage
from common.log import logger
from ntwork.const import send_type
def get_with_retry(get_func, max_retries=5, delay=5):
retries = 0
result = None
while retries < max_retries:
result = get_func()
if result:
break
logger.warning(f"获取数据失败,重试第{retries + 1}次······")
retries += 1
time.sleep(delay) # 等待一段时间后重试
return result
def get_room_info(wework, conversation_id):
logger.debug(f"传入的 conversation_id: {conversation_id}")
rooms = wework.get_rooms()
if not rooms or 'room_list' not in rooms:
logger.error(f"获取群聊信息失败: {rooms}")
return None
time.sleep(1)
logger.debug(f"获取到的群聊信息: {rooms}")
for room in rooms['room_list']:
if room['conversation_id'] == conversation_id:
return room
return None
def cdn_download(wework, message, file_name):
data = message["data"]
aes_key = data["cdn"]["aes_key"]
file_size = data["cdn"]["size"]
# 获取当前工作目录,然后与文件名拼接得到保存路径
current_dir = os.getcwd()
save_path = os.path.join(current_dir, "tmp", file_name)
# 下载保存图片到本地
if "url" in data["cdn"].keys() and "auth_key" in data["cdn"].keys():
url = data["cdn"]["url"]
auth_key = data["cdn"]["auth_key"]
# result = wework.wx_cdn_download(url, auth_key, aes_key, file_size, save_path) # ntwork库本身接口有问题,缺失了aes_key这个参数
"""
下载wx类型的cdn文件,以https开头
"""
data = {
'url': url,
'auth_key': auth_key,
'aes_key': aes_key,
'size': file_size,
'save_path': save_path
}
result = wework._WeWork__send_sync(send_type.MT_WXCDN_DOWNLOAD_MSG, data) # 直接用wx_cdn_download的接口内部实现来调用
elif "file_id" in data["cdn"].keys():
file_type = 2
file_id = data["cdn"]["file_id"]
result = wework.c2c_cdn_download(file_id, aes_key, file_size, file_type, save_path)
else:
logger.error(f"something is wrong, data: {data}")
return
# 输出下载结果
logger.debug(f"result: {result}")
def c2c_download_and_convert(wework, message, file_name):
data = message["data"]
aes_key = data["cdn"]["aes_key"]
file_size = data["cdn"]["size"]
file_type = 5
file_id = data["cdn"]["file_id"]
current_dir = os.getcwd()
save_path = os.path.join(current_dir, "tmp", file_name)
result = wework.c2c_cdn_download(file_id, aes_key, file_size, file_type, save_path)
logger.debug(result)
# 在下载完SILK文件之后,立即将其转换为WAV文件
base_name, _ = os.path.splitext(save_path)
wav_file = base_name + ".wav"
pilk.silk_to_wav(save_path, wav_file, rate=24000)
# 删除SILK文件
try:
os.remove(save_path)
except Exception as e:
pass
class WeworkMessage(ChatMessage):
def __init__(self, wework_msg, wework, is_group=False):
try:
super().__init__(wework_msg)
self.msg_id = wework_msg['data'].get('conversation_id', wework_msg['data'].get('room_conversation_id'))
# 使用.get()防止 'send_time' 键不存在时抛出错误
self.create_time = wework_msg['data'].get("send_time")
self.is_group = is_group
self.wework = wework
if wework_msg["type"] == 11041: # 文本消息类型
if any(substring in wework_msg['data']['content'] for substring in ("该消息类型暂不能展示", "不支持的消息类型")):
return
self.ctype = ContextType.TEXT
self.content = wework_msg['data']['content']
elif wework_msg["type"] == 11044: # 语音消息类型,需要缓存文件
file_name = datetime.datetime.now().strftime('%Y%m%d%H%M%S') + ".silk"
base_name, _ = os.path.splitext(file_name)
file_name_2 = base_name + ".wav"
current_dir = os.getcwd()
self.ctype = ContextType.VOICE
self.content = os.path.join(current_dir, "tmp", file_name_2)
self._prepare_fn = lambda: c2c_download_and_convert(wework, wework_msg, file_name)
elif wework_msg["type"] == 11042: # 图片消息类型,需要下载文件
file_name = datetime.datetime.now().strftime('%Y%m%d%H%M%S') + ".jpg"
current_dir = os.getcwd()
self.ctype = ContextType.IMAGE
self.content = os.path.join(current_dir, "tmp", file_name)
self._prepare_fn = lambda: cdn_download(wework, wework_msg, file_name)
elif wework_msg["type"] == 11072: # 新成员入群通知
self.ctype = ContextType.JOIN_GROUP
member_list = wework_msg['data']['member_list']
self.actual_user_nickname = member_list[0]['name']
self.actual_user_id = member_list[0]['user_id']
self.content = f"{self.actual_user_nickname}加入了群聊!"
directory = os.path.join(os.getcwd(), "tmp")
rooms = get_with_retry(wework.get_rooms)
if not rooms:
logger.error("更新群信息失败···")
else:
result = {}
for room in rooms['room_list']:
# 获取聊天室ID
room_wxid = room['conversation_id']
# 获取聊天室成员
room_members = wework.get_room_members(room_wxid)
# 将聊天室成员保存到结果字典中
result[room_wxid] = room_members
with open(os.path.join(directory, 'wework_room_members.json'), 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=4)
logger.info("有新成员加入,已自动更新群成员列表缓存!")
else:
raise NotImplementedError(
"Unsupported message type: Type:{} MsgType:{}".format(wework_msg["type"], wework_msg["MsgType"]))
data = wework_msg['data']
login_info = self.wework.get_login_info()
logger.debug(f"login_info: {login_info}")
nickname = f"{login_info['username']}({login_info['nickname']})" if login_info['nickname'] else login_info['username']
user_id = login_info['user_id']
sender_id = data.get('sender')
conversation_id = data.get('conversation_id')
sender_name = data.get("sender_name")
self.from_user_id = user_id if sender_id == user_id else conversation_id
self.from_user_nickname = nickname if sender_id == user_id else sender_name
self.to_user_id = user_id
self.to_user_nickname = nickname
self.other_user_nickname = sender_name
self.other_user_id = conversation_id
if self.is_group:
conversation_id = data.get('conversation_id') or data.get('room_conversation_id')
self.other_user_id = conversation_id
if conversation_id:
room_info = get_room_info(wework=wework, conversation_id=conversation_id)
self.other_user_nickname = room_info.get('nickname', None) if room_info else None
at_list = data.get('at_list', [])
tmp_list = []
for at in at_list:
tmp_list.append(at['nickname'])
at_list = tmp_list
logger.debug(f"at_list: {at_list}")
logger.debug(f"nickname: {nickname}")
self.is_at = False
if nickname in at_list or login_info['nickname'] in at_list or login_info['username'] in at_list:
self.is_at = True
self.at_list = at_list
# 检查消息内容是否包含@用户名。处理复制粘贴的消息,这类消息可能不会触发@通知,但内容中可能包含 "@用户名"。
content = data.get('content', '')
name = nickname
pattern = f"@{re.escape(name)}(\u2005|\u0020)"
if re.search(pattern, content):
logger.debug(f"Wework message {self.msg_id} includes at")
self.is_at = True
if not self.actual_user_id:
self.actual_user_id = data.get("sender")
self.actual_user_nickname = sender_name if self.ctype != ContextType.JOIN_GROUP else self.actual_user_nickname
else:
logger.error("群聊消息中没有找到 conversation_id 或 room_conversation_id")
logger.debug(f"WeworkMessage has been successfully instantiated with message id: {self.msg_id}")
except Exception as e:
logger.error(f"在 WeworkMessage 的初始化过程中出现错误:{e}")
raise e
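Both wework modules define the same `get_with_retry` helper: call a zero-argument getter up to `max_retries` times, sleeping between attempts, and return the first truthy result (or the last falsy one). A self-contained version with the logging dropped and the delay shortened for demonstration:

```python
import time


def get_with_retry(get_func, max_retries=5, delay=0.01):
    retries = 0
    result = None
    while retries < max_retries:
        result = get_func()
        if result:
            break
        retries += 1
        time.sleep(delay)  # back off briefly before the next attempt
    return result


# a getter that fails twice before succeeding
attempts = []


def flaky():
    attempts.append(1)
    return {"ok": True} if len(attempts) >= 3 else None
```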
+2 -1
@@ -8,4 +8,5 @@ LINKAI = "linkai"
VERSION = "1.3.0"
MODEL_LIST = ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4", "wenxin", "xunfei"]
CLAUDEAI = "claude"
MODEL_LIST = ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4", "wenxin", "xunfei","claude"]
+1 -3
@@ -20,9 +20,7 @@
"ChatGPT测试群"
],
"image_create_prefix": [
"画",
"看",
"找"
"画"
],
"speech_recognition": false,
"group_speech_recognition": false,
+8 -3
@@ -23,8 +23,8 @@ available_setting = {
# Bot触发配置
"single_chat_prefix": ["bot", "@bot"], # 私聊时文本需要包含该前缀才能触发机器人回复
"single_chat_reply_prefix": "[bot] ", # 私聊时自动回复的前缀,用于区分真人
"single_chat_reply_suffix": "", # 私聊时自动回复的后缀,\n 可以换行
"group_chat_prefix": ["@bot"], # 群聊时包含该前缀则会触发机器人回复
"single_chat_reply_suffix": "", # 私聊时自动回复的后缀,\n 可以换行
"group_chat_prefix": ["@bot"], # 群聊时包含该前缀则会触发机器人回复
"group_chat_reply_prefix": "", # 群聊时自动回复的前缀
"group_chat_reply_suffix": "", # 群聊时自动回复的后缀,\n 可以换行
"group_chat_keyword": [], # 群聊时包含该关键词则会触发机器人回复
@@ -59,6 +59,11 @@ available_setting = {
"xunfei_app_id": "", # Xunfei app ID
"xunfei_api_key": "", # Xunfei API key
"xunfei_api_secret": "", # Xunfei API secret
# claude configuration
"claude_api_cookie": "",
"claude_uuid": "",
# general wework configuration
"wework_smart": True, # whether wework reuses the already-logged-in WeCom client; False means running a separate instance
# voice settings
"speech_recognition": False, # whether to enable speech recognition
"group_speech_recognition": False, # whether to enable speech recognition in group chats
@@ -120,7 +125,7 @@ available_setting = {
"use_linkai": False,
"linkai_api_key": "",
"linkai_app_code": "",
"linkai_api_base": "https://api.link-ai.chat" # LinkAI service address; if it is unreachable from mainland China or latency is high, change it to https://api.link-ai.tech
"linkai_api_base": "https://api.link-ai.chat", # LinkAI service address; if it is unreachable from mainland China or latency is high, change it to https://api.link-ai.tech
}
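The `available_setting` map above acts as a whitelist with defaults: user-supplied keys outside it are ignored. A minimal sketch of that loading pattern, assuming the keys shown (a small illustrative subset, not the project's actual loader):

```python
import json

# Illustrative subset of available_setting: defaults plus a key whitelist.
AVAILABLE_SETTING = {
    "single_chat_prefix": ["bot", "@bot"],
    "claude_api_cookie": "",
    "claude_uuid": "",
    "wework_smart": True,
    "linkai_api_base": "https://api.link-ai.chat",
}

def load_config(raw_json: str) -> dict:
    user = json.loads(raw_json)
    cfg = dict(AVAILABLE_SETTING)      # start from the defaults
    for key, value in user.items():
        if key in AVAILABLE_SETTING:   # unknown keys are ignored
            cfg[key] = value
    return cfg

cfg = load_config('{"claude_api_cookie": "cookie-abc", "not_a_setting": 1}')
print(cfg["claude_api_cookie"])   # cookie-abc
print("not_a_setting" in cfg)     # False
```

This is why a new option such as `claude_api_cookie` must be added to `available_setting` before a value in `config.json` takes effect.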
+11 -2
@@ -4,8 +4,10 @@ import json
import os
import random
import string
import logging
from typing import Tuple
import bridge.bridge
import plugins
from bridge.bridge import Bridge
from bridge.context import ContextType
@@ -201,6 +203,7 @@ class Godcmd(Plugin):
self.password = gconf["password"]
self.admin_users = gconf["admin_users"] # pre-stored admin accounts that skip authentication; itchat usernames change on every login, so they cannot be used here
global_config["admin_users"] = self.admin_users
self.isrunning = True # whether the bot is running
self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context
@@ -310,6 +313,8 @@ class Godcmd(Plugin):
elif cmd == "reset":
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI, const.BAIDU, const.XUNFEI]:
bot.sessions.clear_session(session_id)
if Bridge().chat_bots.get(bottype):
Bridge().chat_bots.get(bottype).sessions.clear_session(session_id)
channel.cancel_session(session_id)
ok, result = True, "会话已重置"
else:
@@ -339,8 +344,12 @@ class Godcmd(Plugin):
else:
ok, result = False, "当前对话机器人不支持重置会话"
elif cmd == "debug":
logger.setLevel("DEBUG")
ok, result = True, "DEBUG模式已开启"
if logger.getEffectiveLevel() == logging.DEBUG: # check whether the logger is already in DEBUG mode
logger.setLevel(logging.INFO)
ok, result = True, "DEBUG模式已关闭"
else:
logger.setLevel(logging.DEBUG)
ok, result = True, "DEBUG模式已开启"
elif cmd == "plist":
plugins = PluginManager().list_plugins()
ok = True
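The reworked debug command above toggles rather than only enabling: it inspects the current effective level and flips between DEBUG and INFO. A standalone sketch of that toggle (names illustrative):

```python
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

def toggle_debug(log: logging.Logger) -> str:
    """Flip between DEBUG and INFO, mirroring the debug command above."""
    if log.getEffectiveLevel() == logging.DEBUG:
        log.setLevel(logging.INFO)
        return "DEBUG mode off"
    log.setLevel(logging.DEBUG)
    return "DEBUG mode on"

print(toggle_debug(logger))  # DEBUG mode on
print(toggle_debug(logger))  # DEBUG mode off
```

Using `getEffectiveLevel()` instead of a separate flag keeps the command idempotent even if something else changes the level in between.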
+27 -5
@@ -2,7 +2,7 @@
import json
import os
import requests
import plugins
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
@@ -51,15 +51,37 @@ class Keyword(Plugin):
content = e_context["context"].content.strip()
logger.debug("[keyword] on_handle_context. content: %s" % content)
if content in self.keyword:
logger.debug(f"[keyword] 匹配到关键字【{content}】")
logger.info(f"[keyword] 匹配到关键字【{content}】")
reply_text = self.keyword[content]
# determine the type of the matched reply content
if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".webp"]):
# a URL starting with http:// or https:// and ending in .jpg/.jpeg/.png/.gif is treated as an image URL
if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".img"]):
# a URL starting with http:// or https:// and ending in ".jpg", ".jpeg", ".png", ".gif" or ".img" is treated as an image URL
reply = Reply()
reply.type = ReplyType.IMAGE_URL
reply.content = reply_text
elif (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar"]):
# a URL starting with http:// or https:// and ending in ".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip" or ".rar" is downloaded to the tmp directory and sent to the user as a file
file_path = "tmp"
if not os.path.exists(file_path):
os.makedirs(file_path)
file_name = reply_text.split("/")[-1]  # extract the file name from the URL
file_path = os.path.join(file_path, file_name)
response = requests.get(reply_text)
with open(file_path, "wb") as f:
f.write(response.content)
# note: channel/wechat/wechat_channel.py and channel/wechat_channel.py lack the ReplyType.FILE type.
reply = Reply()
reply.type = ReplyType.FILE
reply.content = file_path
elif (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".mp4"]):
# a URL starting with http:// or https:// and ending in ".mp4" is treated as a video URL and sent to the user
reply = Reply()
reply.type = ReplyType.VIDEO_URL
reply.content = reply_text
else:
# otherwise treat it as plain text
reply = Reply()
@@ -68,7 +90,7 @@ class Keyword(Plugin):
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS # 事件结束,并跳过处理context的默认逻辑
def get_help_text(self, **kwargs):
help_text = "关键词过滤"
return help_text
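The keyword plugin's dispatch above boils down to suffix matching on the reply URL. A condensed sketch of that decision (constants and names illustrative; the real handler also downloads FILE replies into tmp/ before sending):

```python
IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".img")
FILE_EXTS = (".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar")
VIDEO_EXTS = (".mp4",)

def classify_reply(text: str) -> str:
    """Decide how a keyword reply should be delivered,
    based on the URL's suffix."""
    if text.startswith(("http://", "https://")):
        if text.endswith(IMAGE_EXTS):
            return "IMAGE_URL"
        if text.endswith(FILE_EXTS):
            return "FILE"       # downloaded locally before sending
        if text.endswith(VIDEO_EXTS):
            return "VIDEO_URL"
    return "TEXT"

print(classify_reply("https://example.com/cat.png"))   # IMAGE_URL
print(classify_reply("https://example.com/doc.pdf"))   # FILE
print(classify_reply("hello world"))                   # TEXT
```

Note that `str.startswith` and `str.endswith` accept tuples, which keeps the chained `any(...)` checks in the original down to one call each.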
+46 -13
@@ -1,18 +1,18 @@
## Plugin Overview
Enhances the bot with the knowledge base, Midjourney drawing and other capabilities provided by LinkAI. Platform: https://chat.link-ai.tech/console
Enhances the bot with the knowledge base, Midjourney drawing, document chat and other capabilities provided by LinkAI. Platform: https://chat.link-ai.tech/console
## Plugin Configuration
Copy the `config.json.template` template in the `plugins/linkai` directory to the effective `config.json`:
Copy the `config.json.template` template in the `plugins/linkai` directory to the effective `config.json`. (If this is not done, the settings in the `config.json.template` template are used by default, with the features disabled; they can then be enabled via commands.)
The configuration options are explained below:
The plugin configuration options are explained below:
```bash
{
"group_app_map": { # mapping between group chats and app codes
"测试群1": "default", # the group chat named "测试群1" uses the app whose app_code is default
"测试群2": "Kv2fXJcH"
"group_app_map": { # mapping between group chats and app codes
"测试群名称1": "default", # the group chat named "测试群名称1" uses the app whose app_code is default
"测试群名称2": "Kv2fXJcH"
},
"midjourney": {
"enabled": true, # midjourney drawing switch
@@ -21,19 +21,30 @@
"max_tasks": 3, # total number of tasks that may be submitted concurrently
"max_tasks_per_user": 1, # number of tasks a single user may submit concurrently
"use_image_create_prefix": true # whether to reuse the global drawing trigger words; if enabled, the image_create_prefix entries in `config.json` also trigger drawing
},
"summary": {
"enabled": true, # document summary and chat switch
"group_enabled": true, # whether group chats may enable it
"max_file_size": 5000 # file size limit in KB, 5MB by default; larger files are ignored
}
}
```
In the root `config.json`, set `linkai_api_key`; create the key in the [console](https://chat.link-ai.tech/console/interface) and copy it in:
```bash
"linkai_api_key": "Link_xxxxxxxxx"
```
Notes:
- In the configuration, the `group_app_map` section maps group chats to applications on the LinkAI platform, and the `midjourney` section configures mj drawing; fill them in as needed, and any feature whose configuration is omitted stays disabled by default
- In the configuration, the `group_app_map` section maps group chats to applications on the LinkAI platform, the `midjourney` section configures mj drawing, and the `summary` section configures document summarization and chat. The three sections are independent of each other and can be enabled as needed
- The actual `config.json` must be valid JSON and must not contain '#' or the comments after it
- For `docker` deployments, the plugin can be configured by mapping `plugins/config.json` into the container; see the [docs](https://github.com/zhayujie/chatgpt-on-wechat#3-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
## Plugin Usage
> Using the plugin's knowledge base management features requires enabling `linkai` chat first, which depends on the global `use_linkai` and `linkai_api_key` settings in `config.json`; the midjourney drawing feature only needs `linkai_api_key` and works whether or not `use_linkai` is enabled. See the [detailed docs](https://link-ai.tech/platform/link-app/wechat).
> Using the plugin's knowledge base management features requires enabling `linkai` chat first, which depends on the global `use_linkai` and `linkai_api_key` settings in `config.json`; the midjourney drawing and summary features only need `linkai_api_key` and work whether or not `use_linkai` is enabled. See the [detailed docs](https://link-ai.tech/platform/link-app/wechat).
After configuration, run the project and the plugin loads automatically; send `#help linkai` to see the plugin's features.
@@ -51,6 +62,8 @@
### 2. Midjourney Drawing
If `plugins/linkai/config.json` is not configured, drawing is disabled by default; running `$mj open` enables mj drawing directly with the default configuration.
Command format:
```
@@ -69,7 +82,27 @@
"$mjr 11055927171882"
```
Notes:
1. With `use_image_create_prefix` enabled, the global drawing trigger words can be reused directly, so a message starting with "画" generates an image.
2. Prompts containing sensitive words or malformed parameters may cause drawing to fail; failed generations do not consume credits
3. The `$mj open` and `$mj close` commands quickly enable and disable the drawing feature
Notes:
1. The `$mj open` and `$mj close` commands quickly enable and disable the drawing feature
2. For deployments outside mainland China, set `img_proxy` to `false`
3. With `use_image_create_prefix` enabled, the global drawing trigger words can be reused directly, so a message starting with "画" generates an image.
4. Prompts containing sensitive words or malformed parameters may cause drawing to fail; failed generations do not consume credits
5. If no image arrives, there are two possibilities: the image was generated but WeChat failed to send it (check the backend logs for an image URL; this is usually a WeChat restriction, so retry later or switch accounts), or the prompt was flagged as potentially violating, in which case mj reports no error but deletes the original image after drawing so the program cannot fetch it; this case does not consume credits.
### 3. Document Summary and Chat
#### Configuration
This feature relies on LinkAI's knowledge base and chat capabilities. Set `linkai_api_key` in the project root config.json and, following the plugin configuration notes above, add the `summary` section to the plugin's config.json with `enabled` set to true.
If you prefer not to create a `plugins/linkai/config.json`, the feature can be enabled directly with the `$linkai sum open` command.
#### Usage
Once enabled, send the bot a **file** or a **share-link card** to generate a summary, and then hold a multi-turn conversation about the file or link contents.
#### Limitations
1. Files currently support the `txt`, `docx`, `pdf`, `md` and `csv` formats. File size is limited by `max_file_size` and capped at 15MB; files of up to roughly a million characters are supported. Uploading very long files is not recommended: token consumption is high, and a summary can hardly cover everything, so details are better explored through follow-up conversation.
2. Share links currently only support WeChat Official Account articles; more article types and video links will be supported later
3. Summarization and chat are billed the same way as the LinkAI 3.5-4K model, based on the tokens of the document content
+7 -2
@@ -1,7 +1,7 @@
{
"group_app_map": {
"测试群1": "default",
"测试群2": "Kv2fXJcH"
"测试群1": "default",
"测试群2": "Kv2fXJcH"
},
"midjourney": {
"enabled": true,
@@ -10,5 +10,10 @@
"max_tasks": 3,
"max_tasks_per_user": 1,
"use_image_create_prefix": true
},
"summary": {
"enabled": true,
"group_enabled": true,
"max_file_size": 5000
}
}
+138 -23
@@ -1,26 +1,37 @@
import plugins
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from config import global_config
from plugins import *
from .midjourney import MJBot
from .summary import LinkSummary
from bridge import bridge
from common.expired_dict import ExpiredDict
from common import const
import os
from .utils import Util
@plugins.register(
name="linkai",
desc="A plugin that supports knowledge base and midjourney drawing.",
version="0.1.0",
author="https://link-ai.tech",
desire_priority=99
)
class LinkAI(Plugin):
def __init__(self):
super().__init__()
self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context
self.config = super().load_config()
if not self.config:
# config not loaded; fall back to the template's configuration
self.config = self._load_config_template()
if self.config:
self.mj_bot = MJBot(self.config.get("midjourney"))
logger.info("[LinkAI] inited")
self.sum_config = {}
if self.config:
self.sum_config = self.config.get("summary")
logger.info(f"[LinkAI] inited, config={self.config}")
def on_handle_context(self, e_context: EventContext):
"""
@@ -31,10 +42,39 @@ class LinkAI(Plugin):
return
context = e_context['context']
if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE]:
if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE, ContextType.FILE, ContextType.SHARING]:
# filter out content types that need no handling
return
if context.type == ContextType.FILE and self._is_summary_open(context):
# file handling
context.get("msg").prepare()
file_path = context.content
if not LinkSummary().check_file(file_path, self.sum_config):
return
_send_info(e_context, "正在为你加速生成摘要,请稍后")
res = LinkSummary().summary_file(file_path)
if not res:
_set_reply_text("因为神秘力量无法获取文章内容,请稍后再试吧", e_context, level=ReplyType.TEXT)
return
USER_FILE_MAP[_find_user_id(context) + "-sum_id"] = res.get("summary_id")
_set_reply_text(res.get("summary") + "\n\n💬 发送 \"开启对话\" 可以开启与文件内容的对话", e_context, level=ReplyType.TEXT)
os.remove(file_path)
return
if (context.type == ContextType.SHARING and self._is_summary_open(context)) or \
(context.type == ContextType.TEXT and LinkSummary().check_url(context.content)):
if not LinkSummary().check_url(context.content):
return
_send_info(e_context, "正在为你加速生成摘要,请稍后")
res = LinkSummary().summary_url(context.content)
if not res:
_set_reply_text("因为神秘力量无法获取文章内容,请稍后再试吧~", e_context, level=ReplyType.TEXT)
return
_set_reply_text(res.get("summary") + "\n\n💬 发送 \"开启对话\" 可以开启与文章内容的对话", e_context, level=ReplyType.TEXT)
USER_FILE_MAP[_find_user_id(context) + "-sum_id"] = res.get("summary_id")
return
mj_type = self.mj_bot.judge_mj_task_type(e_context)
if mj_type:
# MJ作图任务处理
@@ -46,10 +86,38 @@ class LinkAI(Plugin):
self._process_admin_cmd(e_context)
return
if context.type == ContextType.TEXT and context.content == "开启对话" and _find_sum_id(context):
# 文本对话
_send_info(e_context, "正在为你开启对话,请稍后")
res = LinkSummary().summary_chat(_find_sum_id(context))
if not res:
_set_reply_text("开启对话失败,请稍后再试吧", e_context)
return
USER_FILE_MAP[_find_user_id(context) + "-file_id"] = res.get("file_id")
_set_reply_text("💡你可以问我关于这篇文章的任何问题,例如:\n\n" + res.get("questions") + "\n\n发送 \"退出对话\" 可以关闭与文章的对话", e_context, level=ReplyType.TEXT)
return
if context.type == ContextType.TEXT and context.content == "退出对话" and _find_file_id(context):
del USER_FILE_MAP[_find_user_id(context) + "-file_id"]
bot = bridge.Bridge().find_chat_bot(const.LINKAI)
bot.sessions.clear_session(context["session_id"])
_set_reply_text("对话已退出", e_context, level=ReplyType.TEXT)
return
if context.type == ContextType.TEXT and _find_file_id(context):
bot = bridge.Bridge().find_chat_bot(const.LINKAI)
context.kwargs["file_id"] = _find_file_id(context)
reply = bot.reply(context.content, context)
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS
return
if self._is_chat_task(e_context):
# 文本对话任务处理
self._process_chat_task(e_context)
# 插件管理功能
def _process_admin_cmd(self, e_context: EventContext):
context = e_context['context']
@@ -60,7 +128,7 @@ class LinkAI(Plugin):
if len(cmd) == 2 and (cmd[1] == "open" or cmd[1] == "close"):
# LinkAI chat toggle command
if not _is_admin(e_context):
if not Util.is_admin(e_context):
_set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
return
is_open = True
@@ -70,7 +138,7 @@ class LinkAI(Plugin):
is_open = False
conf()["use_linkai"] = is_open
bridge.Bridge().reset_bot()
_set_reply_text(f"知识库功能{tips_text}", e_context, level=ReplyType.INFO)
_set_reply_text(f"LinkAI对话功能{tips_text}", e_context, level=ReplyType.INFO)
return
if len(cmd) == 3 and cmd[1] == "app":
@@ -78,7 +146,7 @@ class LinkAI(Plugin):
if not context.kwargs.get("isgroup"):
_set_reply_text("该指令需在群聊中使用", e_context, level=ReplyType.ERROR)
return
if not _is_admin(e_context):
if not Util.is_admin(e_context):
_set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
return
app_code = cmd[2]
@@ -91,11 +159,36 @@ class LinkAI(Plugin):
# 保存插件配置
super().save_config(self.config)
_set_reply_text(f"应用设置成功: {app_code}", e_context, level=ReplyType.INFO)
else:
_set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context,
level=ReplyType.INFO)
return
if len(cmd) == 3 and cmd[1] == "sum" and (cmd[2] == "open" or cmd[2] == "close"):
# summary feature toggle command
if not Util.is_admin(e_context):
_set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
return
is_open = True
tips_text = "开启"
if cmd[2] == "close":
tips_text = "关闭"
is_open = False
if not self.sum_config:
_set_reply_text(f"插件未启用summary功能,请参考以下链接添加插件配置\n\nhttps://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/linkai/README.md", e_context, level=ReplyType.INFO)
else:
self.sum_config["enabled"] = is_open
_set_reply_text(f"文章总结功能{tips_text}", e_context, level=ReplyType.INFO)
return
_set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context,
level=ReplyType.INFO)
return
def _is_summary_open(self, context) -> bool:
if not self.sum_config or not self.sum_config.get("enabled"):
return False
if context.kwargs.get("isgroup") and not self.sum_config.get("group_enabled"):
return False
return True
# LinkAI chat task handling
def _is_chat_task(self, e_context: EventContext):
context = e_context['context']
@@ -109,7 +202,7 @@ class LinkAI(Plugin):
"""
context = e_context['context']
# group chat application management
group_name = context.kwargs.get("msg").from_user_nickname
group_name = context.get("msg").from_user_nickname
app_code = self._fetch_group_app_code(group_name)
if app_code:
context.kwargs['app_code'] = app_code
@@ -127,7 +220,7 @@ class LinkAI(Plugin):
def get_help_text(self, verbose=False, **kwargs):
trigger_prefix = _get_trigger_prefix()
help_text = "用于集成 LinkAI 提供的知识库、Midjourney绘画等能力。\n\n"
help_text = "用于集成 LinkAI 提供的知识库、Midjourney绘画、文档总结对话等能力。\n\n"
if not verbose:
return help_text
help_text += f'📖 知识库\n - 群聊中指定应用: {trigger_prefix}linkai app 应用编码\n'
@@ -137,21 +230,34 @@ class LinkAI(Plugin):
help_text += f"🎨 绘画\n - 生成: {trigger_prefix}mj 描述词1, 描述词2.. \n - 放大: {trigger_prefix}mju 图片ID 图片序号\n - 变换: {trigger_prefix}mjv 图片ID 图片序号\n - 重置: {trigger_prefix}mjr 图片ID"
help_text += f"\n\n例如:\n\"{trigger_prefix}mj a little cat, white --ar 9:16\"\n\"{trigger_prefix}mju 11055927171882 2\""
help_text += f"\n\"{trigger_prefix}mjv 11055927171882 2\"\n\"{trigger_prefix}mjr 11055927171882\""
help_text += f"\n\n💡 文档总结和对话\n - 开启: {trigger_prefix}linkai sum open\n - 使用: 发送文件、公众号文章等可生成摘要,并与内容对话"
return help_text
def _load_config_template(self):
logger.debug("No LinkAI plugin config.json, use plugins/linkai/config.json.template")
try:
plugin_config_path = os.path.join(self.path, "config.json.template")
if os.path.exists(plugin_config_path):
with open(plugin_config_path, "r", encoding="utf-8") as f:
plugin_conf = json.load(f)
plugin_conf["midjourney"]["enabled"] = False
plugin_conf["summary"]["enabled"] = False
return plugin_conf
except Exception as e:
logger.exception(e)
# module-level helper functions
def _is_admin(e_context: EventContext) -> bool:
"""
Return whether the message was sent by an admin user
:param e_context: message context
:return: True if admin, False otherwise
"""
context = e_context["context"]
def _send_info(e_context: EventContext, content: str):
reply = Reply(ReplyType.TEXT, content)
channel = e_context["channel"]
channel.send(reply, e_context["context"])
def _find_user_id(context):
if context["isgroup"]:
return context.kwargs.get("msg").actual_user_id in global_config["admin_users"]
return context.kwargs.get("msg").actual_user_id
else:
return context["receiver"] in global_config["admin_users"]
return context["receiver"]
def _set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
@@ -159,6 +265,15 @@ def _set_reply_text(content: str, e_context: EventContext, level: ReplyType = Re
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS
def _get_trigger_prefix():
return conf().get("plugin_trigger_prefix", "$")
def _find_sum_id(context):
return USER_FILE_MAP.get(_find_user_id(context) + "-sum_id")
def _find_file_id(context):
user_id = _find_user_id(context)
if user_id:
return USER_FILE_MAP.get(user_id + "-file_id")
USER_FILE_MAP = ExpiredDict(conf().get("expires_in_seconds") or 60 * 30)
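The `USER_FILE_MAP` entries (`user_id + "-sum_id"` / `"-file_id"`) expire together with the session. A minimal sketch of such an expiring dict, assuming behavior similar to `common/expired_dict.py` (this is not the project's implementation):

```python
import time

class ExpiringDict(dict):
    """Minimal TTL dict: entries become unreadable after ttl seconds."""

    def __init__(self, ttl: float):
        super().__init__()
        self.ttl = ttl

    def __setitem__(self, key, value):
        # store the value together with its expiry deadline
        super().__setitem__(key, (value, time.time() + self.ttl))

    def __getitem__(self, key):
        value, deadline = super().__getitem__(key)
        if time.time() > deadline:
            super().__delitem__(key)  # lazily evict expired entries
            raise KeyError(key)
        return value

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

m = ExpiringDict(ttl=0.05)
m["user1-sum_id"] = "s123"
print(m.get("user1-sum_id"))  # s123 (still fresh)
time.sleep(0.1)
print(m.get("user1-sum_id"))  # None (expired)
```

Lazy eviction on read keeps the class simple; a background sweeper is unnecessary for this use case because stale summary/file ids are only ever looked up, never iterated.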
+6 -3
@@ -5,10 +5,10 @@ import requests
import threading
import time
from bridge.reply import Reply, ReplyType
import aiohttp
import asyncio
from bridge.context import ContextType
from plugins import EventContext, EventAction
from .utils import Util
INVALID_REQUEST = 410
NOT_FOUND_ORIGIN_IMAGE = 461
@@ -49,7 +49,7 @@ task_name_mapping = {
class MJTask:
def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=None, expires: int = 60 * 30,
def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=None, expires: int = 60 * 6,
status=Status.PENDING):
self.id = id
self.user_id = user_id
@@ -96,7 +96,7 @@ class MJBot:
return TaskType.VARIATION
elif cmd_list[0].lower() == f"{trigger_prefix}mjr":
return TaskType.RESET
elif context.type == ContextType.IMAGE_CREATE and self.config.get("use_image_create_prefix"):
elif context.type == ContextType.IMAGE_CREATE and self.config.get("use_image_create_prefix") and self.config.get("enabled"):
return TaskType.GENERATE
def process_mj_task(self, mj_type: TaskType, e_context: EventContext):
@@ -114,6 +114,9 @@ class MJBot:
return
if len(cmd) == 2 and (cmd[1] == "open" or cmd[1] == "close"):
if not Util.is_admin(e_context):
Util.set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
return
# midjourney toggle command
is_open = True
tips_text = "开启"
+94
@@ -0,0 +1,94 @@
import requests
from config import conf
from common.log import logger
import os
class LinkSummary:
def __init__(self):
pass
def summary_file(self, file_path: str):
file_body = {
"file": open(file_path, "rb"),
"name": file_path.split("/")[-1],
}
res = requests.post(url=self.base_url() + "/v1/summary/file", headers=self.headers(), files=file_body, timeout=(5, 300))
return self._parse_summary_res(res)
def summary_url(self, url: str):
body = {
"url": url
}
res = requests.post(url=self.base_url() + "/v1/summary/url", headers=self.headers(), json=body, timeout=(5, 180))
return self._parse_summary_res(res)
def summary_chat(self, summary_id: str):
body = {
"summary_id": summary_id
}
res = requests.post(url=self.base_url() + "/v1/summary/chat", headers=self.headers(), json=body, timeout=(5, 180))
if res.status_code == 200:
res = res.json()
logger.debug(f"[LinkSum] chat open, res={res}")
if res.get("code") == 200:
data = res.get("data")
return {
"questions": data.get("questions"),
"file_id": data.get("file_id")
}
else:
res_json = res.json()
logger.error(f"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}")
return None
def _parse_summary_res(self, res):
if res.status_code == 200:
res = res.json()
logger.debug(f"[LinkSum] url summary, res={res}")
if res.get("code") == 200:
data = res.get("data")
return {
"summary": data.get("summary"),
"summary_id": data.get("summary_id")
}
else:
res_json = res.json()
logger.error(f"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}")
return None
def base_url(self):
return conf().get("linkai_api_base", "https://api.link-ai.chat")
def headers(self):
return {"Authorization": "Bearer " + conf().get("linkai_api_key")}
def check_file(self, file_path: str, sum_config: dict) -> bool:
file_size = os.path.getsize(file_path) // 1000
if (sum_config.get("max_file_size") and file_size > sum_config.get("max_file_size")) or file_size > 15000:
logger.warn(f"[LinkSum] file size exceeds limit, No processing, file_size={file_size}KB")
return False
suffix = file_path.split(".")[-1]
support_list = ["txt", "csv", "docx", "pdf", "md"]
if suffix not in support_list:
logger.warn(f"[LinkSum] unsupported file, suffix={suffix}, support_list={support_list}")
return False
return True
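`check_file` applies two gates: size in KB (the configured `max_file_size`, with a hard 15MB cap) and then a suffix whitelist. A condensed sketch of the same checks; names are illustrative:

```python
import os

SUPPORTED_SUFFIXES = ["txt", "csv", "docx", "pdf", "md"]
HARD_LIMIT_KB = 15000  # absolute cap, regardless of plugin config

def file_ok(path, max_kb=None):
    """Reject files that are too large or whose suffix is unsupported,
    mirroring check_file above."""
    size_kb = os.path.getsize(path) // 1000
    if (max_kb and size_kb > max_kb) or size_kb > HARD_LIMIT_KB:
        return False
    return path.rsplit(".", 1)[-1] in SUPPORTED_SUFFIXES
```

Dividing by 1000 means the limit is measured in decimal kilobytes, matching the `max_file_size` comment in the plugin's config template.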
def check_url(self, url: str):
if not url:
return False
support_list = ["http://mp.weixin.qq.com", "https://mp.weixin.qq.com"]
black_support_list = ["https://mp.weixin.qq.com/mp/waerrpage"]
for black_url_prefix in black_support_list:
if url.strip().startswith(black_url_prefix):
logger.warn(f"[LinkSum] unsupported url, no need to process, url={url}")
return False
for support_url in support_list:
if url.strip().startswith(support_url):
return True
logger.debug(f"[LinkSum] unsupported url, no need to process, url={url}")
return False
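In `check_url`, ordering matters: the blacklist (the WeChat `waerrpage` error page) is consulted before the whitelist, so a blocked prefix wins even though it also starts with an allowed one. A condensed sketch (names illustrative):

```python
ALLOWED = ("http://mp.weixin.qq.com", "https://mp.weixin.qq.com")
BLOCKED = ("https://mp.weixin.qq.com/mp/waerrpage",)  # WeChat error page

def url_ok(url: str) -> bool:
    """Blacklist prefixes win over the whitelist, as in check_url above."""
    if not url:
        return False
    url = url.strip()
    if url.startswith(BLOCKED):   # check blocked prefixes first
        return False
    return url.startswith(ALLOWED)

print(url_ok("https://mp.weixin.qq.com/s/abc"))              # True
print(url_ok("https://mp.weixin.qq.com/mp/waerrpage?x=1"))   # False
print(url_ok("https://example.com/post"))                    # False
```

Passing tuples to `str.startswith` replaces the two explicit loops in the original with a single prefix test each.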
+28
@@ -0,0 +1,28 @@
from config import global_config
from bridge.reply import Reply, ReplyType
from plugins.event import EventContext, EventAction
class Util:
@staticmethod
def is_admin(e_context: EventContext) -> bool:
"""
Return whether the message was sent by an admin user
:param e_context: message context
:return: True if admin, False otherwise
"""
context = e_context["context"]
if context["isgroup"]:
actual_user_id = context.kwargs.get("msg").actual_user_id
for admin_user in global_config["admin_users"]:
if actual_user_id and actual_user_id in admin_user:
return True
return False
else:
return context["receiver"] in global_config["admin_users"]
@staticmethod
def set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
reply = Reply(level, content)
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS
+5 -2
@@ -15,12 +15,15 @@ class Plugin:
"""
# prefer the global configuration in plugins/config.json
plugin_conf = pconf(self.name)
if not plugin_conf or not conf().get("use_global_plugin_config"):
# the global config is missing, or the global-config switch is off: read the config from the plugin directory
if not plugin_conf:
# the global config is missing: read the config from the plugin directory
plugin_config_path = os.path.join(self.path, "config.json")
if os.path.exists(plugin_config_path):
with open(plugin_config_path, "r", encoding="utf-8") as f:
plugin_conf = json.load(f)
# write the result back into the in-memory global config
plugin_config[self.name] = plugin_conf
logger.debug(f"loading plugin config, plugin_name={self.name}, conf={plugin_conf}")
return plugin_conf
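The lookup order in `load_config` above is: global `plugins/config.json` first, plugin-local `config.json` as fallback, with the result cached back into the global map. A standalone sketch of that priority (the function shape is illustrative, not the actual `Plugin` method):

```python
import json
import os

def load_plugin_config(name: str, global_conf: dict, plugin_dir: str):
    """Global config wins; otherwise read the plugin's own config.json
    and cache it back into the in-memory global map."""
    plugin_conf = global_conf.get(name)
    if not plugin_conf:
        path = os.path.join(plugin_dir, "config.json")
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                plugin_conf = json.load(f)
            global_conf[name] = plugin_conf  # cache for later lookups
    return plugin_conf
```

Caching into the global map means a plugin-local file is only read once per process, and later callers see a single consistent configuration.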
+4 -2
@@ -16,15 +16,17 @@ dulwich
# wechaty
wechaty>=0.10.7
wechaty_puppet>=0.4.23
pysilk_mod>=1.6.0 # needed by send voice
# pysilk_mod>=1.6.0 # needed by send voice only in wechaty
# wechatmp wechatcom
web.py
wechatpy
# chatgpt-tool-hub plugin
--extra-index-url https://pypi.python.org/simple
chatgpt_tool_hub==0.4.6
# xunfei spark
websocket-client==1.2.0
# claude bot
curl_cffi