Compare commits

...

121 Commits

Author SHA1 Message Date
zhayujie 7c8fb7eacc Merge pull request #1428 from scut-chenzk/chenzk
Fix failure to save image messages sent from WeChat to local storage
2023-09-26 15:59:23 +08:00
zhayujie b45eea5908 Merge pull request #1427 from befantasy/master
Add ReplyType.FILE/ReplyType.VIDEO/ReplyType.VIDEO_URL to the itchat channel to ease plugin development; add file and video matched replies to the keyword plugin
2023-09-26 01:27:35 +08:00
zhayujie 6babf4ee6c Merge pull request #1445 from befantasy/patch-3
Update godcmd.py: add an option to turn off debug mode
2023-09-26 00:37:17 +08:00
zhayujie 576526d4ee Merge pull request #1446 from 6vision/master
Optimize message storage for personal subscription accounts
2023-09-26 00:36:36 +08:00
zhayujie c03e31b7be fix: linkai instruction bug 2023-09-25 23:15:59 +08:00
zhayujie a1aa925019 fix: no summary config bug 2023-09-25 18:30:19 +08:00
zhayujie a5a234ed97 fix: remove file after summary 2023-09-25 16:42:36 +08:00
zhayujie 5b5dbcd78b feat: remove file word calc and support url link 2023-09-24 14:33:39 +08:00
zhayujie bd1c6361d3 Update README.md 2023-09-24 12:54:34 +08:00
zhayujie 1fc1febf03 Merge pull request #1450 from zhayujie/feat-doc-chat
feat: document summarization and chat over document content
2023-09-24 12:30:45 +08:00
zhayujie 55cc35efa9 feat: document summary and chat with content 2023-09-24 12:27:09 +08:00
vision 5ba8fdc5e7 fix 2023-09-23 14:31:54 +08:00
vision 6ea295e227 Merge pull request #1 from 6vision/feat
Support long voice messages for personal subscription accounts
2023-09-23 13:46:25 +08:00
befantasy 5010c76ef7 Update godcmd.py: add an option to turn off debug mode 2023-09-23 13:37:01 +08:00
6vision 79c7f0c29f Support long voice messages for personal subscription accounts 2023-09-23 13:27:36 +08:00
6vision 2b3e643786 Support multiple replies for a single request 2023-09-23 11:59:01 +08:00
chenzhenkun 90cdff327c Fix failure to save image messages sent from WeChat to local storage 2023-09-15 19:07:52 +08:00
zhayujie 55c116e727 Update README.md 2023-09-15 18:42:56 +08:00
befantasy 3dd83aa6b7 Update chat_channel.py 2023-09-15 18:38:31 +08:00
befantasy a74aa12641 Update wechat_channel.py 2023-09-15 18:37:05 +08:00
befantasy 151e8c69f9 Update keyword.py 2023-09-15 18:22:10 +08:00
befantasy d8bfa77705 Update keyword.py 2023-09-15 16:56:51 +08:00
befantasy 6bd286e8d5 Update wechat_channel.py to support ReplyType.FILE 2023-09-15 16:22:46 +08:00
befantasy 905532b681 Update chat_channel.py to support ReplyType.FILE 2023-09-15 16:21:27 +08:00
zhayujie 04d5c1ab01 Delete .github/ISSUE_TEMPLATE/config.yml 2023-09-15 15:45:23 +08:00
zhayujie 28be141dc7 Merge pull request #1422 from scut-chenzk/chenzk
Fix the issue that voice replies stopped working
2023-09-15 15:14:00 +08:00
chenzk 652b786baf Merge branch 'zhayujie:master' into chenzk 2023-09-14 23:42:00 +08:00
chenzhenkun ba6c671051 Fix failure to save received image messages to local storage 2023-09-14 23:39:07 +08:00
chenzhenkun ca25d0433f Fix the issue that voice replies stopped working 2023-09-14 17:52:11 +08:00
zhayujie 5338106dfa Merge pull request #1308 from leesonchen/master
Split the voice output of enterprise service accounts into segments
2023-09-12 18:18:17 +08:00
zhayujie b6b76be4f6 fix: add summary plugin bot type 2023-09-06 16:50:23 +08:00
zhayujie 03d94fcfa0 fix: not enable user_image_create_prefix by default 2023-09-06 12:02:13 +08:00
zhayujie b2c5f0d455 feat: mj use default config 2023-09-06 11:53:33 +08:00
zhayujie 54f60dd38c chore: remove dependencies that can only be used under windows 2023-09-04 11:14:48 +08:00
zhayujie 42f181aca2 Merge pull request #1394 from resphinas/claude_bot
Update claude_ai_bot.py
2023-09-04 10:47:02 +08:00
resphina 9c3a27894f Update claude_ai_bot.py 2023-09-03 19:12:27 +08:00
resphina f7cd348912 Update claude_ai_bot.py 2023-09-03 19:04:43 +08:00
zhayujie aeaeb75d3b Merge pull request #1396 from 6vision/master
Optimize image download and storage logic
2023-09-03 17:32:30 +08:00
vision 96542b532e Update requirements-optional.txt 2023-09-03 17:14:28 +08:00
vision 139295fe0d Update requirements-optional.txt
Add dependencies required by the WeCom personal-account channel
2023-09-03 16:47:25 +08:00
vision 13217b2ce2 Merge pull request #1 from 6vision/patch-1
Optimize image download and storage logic
2023-09-03 16:35:01 +08:00
vision 5cc8b56a7c Optimize image download and storage logic
- Implement new compression logic for files larger than 10MB to improve storage efficiency.
- Switch from JPEG to PNG to enhance image quality and compatibility.
2023-09-03 16:29:19 +08:00
resphina e23e01c95e Update claude_ai_bot.py 2023-09-03 15:40:08 +08:00
resphina bca8ba12c7 Update claude_ai_bot.py 2023-09-03 15:22:25 +08:00
vision 3c44bdbe1c Update requirements-optional.txt 2023-09-03 15:10:05 +08:00
zhayujie db93ed025b Merge branch 'master' of github.com:zhayujie/chatgpt-on-wechat 2023-09-02 21:50:28 +08:00
zhayujie 4209e108d0 fix: wework single chat no prefix circle reply 2023-09-02 21:49:43 +08:00
zhayujie 14cbf011af Merge pull request #1391 from resphinas/claude_bot
Rename claude_ai_session to claude_ai_session.py
2023-09-02 10:42:29 +08:00
resphina 03a41ec199 Rename claude_ai_session to claude_ai_session.py 2023-09-02 02:40:57 +08:00
zhayujie 125fe2a026 Merge pull request #1390 from scut-chenzk/chenzk
Chenzk
2023-09-01 19:42:21 +08:00
chenzhenkun ac4adac29e Handle the case of WeChat @-mentions 2023-09-01 19:37:19 +08:00
chenzhenkun ac449d078e Merge remote-tracking branch 'origin/chenzk' into chenzk
# Conflicts:
#	channel/chat_channel.py
2023-09-01 19:22:02 +08:00
chenzhenkun 79be4530d4 Prevent infinite loops when a reply hits the trigger prefix 2023-09-01 19:18:53 +08:00
chenzk 85ce52d70c Merge branch 'zhayujie:master' into chenzk 2023-09-01 18:57:52 +08:00
chenzhenkun 7ab56b9076 Add logging to ease troubleshooting 2023-09-01 18:56:24 +08:00
zhayujie dedf976375 Merge pull request #1389 from scut-chenzk/chenzk
Fix the infinite loop when the bot @-mentions itself
2023-09-01 18:42:41 +08:00
chenzhenkun 89f438208a Fix the infinite loop when the bot @-mentions itself 2023-09-01 18:39:31 +08:00
zhayujie ffbc5080ae Merge pull request #1388 from resphinas/claude_bot
Implement the shared-context switch from the config for the claude integration
2023-09-01 18:34:43 +08:00
resphina 4167f13bac Update README.md 2023-09-01 18:12:48 +08:00
resphina 6ba0baabb0 Update claude_ai_bot.py 2023-09-01 18:04:39 +08:00
resphina 081003df47 Update config.py 2023-09-01 17:55:09 +08:00
resphina 559194ffb2 Update config.py 2023-09-01 17:54:03 +08:00
resphina 97a26d4a46 Update README.md 2023-09-01 17:53:21 +08:00
resphina 503c6c9b7e Update claude_ai_bot.py 2023-09-01 17:31:30 +08:00
resphina 9a1e10deff Create claude_ai_session 2023-09-01 17:30:31 +08:00
zhayujie 054f927c05 fix: at_list bug in wechat channel 2023-09-01 13:45:04 +08:00
resphina 22210747d0 Update README.md 2023-09-01 12:40:09 +08:00
resphina 53b2deb72c Update documentation for bot-related interfaces 2023-09-01 12:38:58 +08:00
zhayujie 6fc158e7d6 hotfix: config.py format 2023-09-01 11:32:58 +08:00
zhayujie a23a65c731 Merge pull request #1382 from resphinas/claude_bot
Add a Claude chatbot interface (reverse-engineered cookie-based implementation; stable, does not expire)
2023-09-01 10:48:33 +08:00
resphina 7dc7105ee2 Update requirements-optional.txt 2023-09-01 10:32:33 +08:00
resphina bac70108b2 Update requirements.txt 2023-09-01 10:32:03 +08:00
resphina 297404b21e Update config-template.json 2023-09-01 10:31:45 +08:00
resphina 33a7f8b558 Delete chatgpt-on-wechat-master.iml 2023-09-01 10:08:34 +08:00
resphina 4a670b7df7 Update config-template.json 2023-09-01 09:40:26 +08:00
resphina 79e4af315e Update log.py 2023-09-01 09:39:45 +08:00
resphina c6e31b2fdc Update chat_gpt_bot.py 2023-09-01 09:39:08 +08:00
resphina 91dc44df53 Update const.py 2023-09-01 09:38:47 +08:00
resphina 7e57f8f157 Merge branch 'master' into claude_bot 2023-09-01 09:37:10 +08:00
zhayujie 15f6b7c6d3 Merge pull request #1385 from scut-chenzk/chenzk
Support the wework Enterprise WeChat bot
2023-08-31 22:44:17 +08:00
chenzhenkun b213ba541d Add plugin support to the wework Enterprise WeChat bot 2023-08-31 21:02:00 +08:00
chenzhenkun 7c6ed9944e Support the wework Enterprise WeChat bot 2023-08-30 20:49:00 +08:00
resphinas a5a825e439 system role remove 2023-08-29 06:45:21 +08:00
resphinas a4ab547f77 proxy update 2023-08-29 05:59:59 +08:00
resphinas 76ed763abe proxy update 2023-08-29 05:58:39 +08:00
resphinas b9e3125610 Formatting fix 2 2023-08-28 18:04:28 +08:00
resphina 8d9d5b7b6f Update claude_ai_bot.py 2023-08-28 17:40:27 +08:00
resphina 187601da1e Update config-template.json 2023-08-28 17:30:03 +08:00
resphina cc3a0fc367 Update config-template.json 2023-08-28 17:28:13 +08:00
resphinas 44cc4165d1 claude_bot 2023-08-28 17:22:20 +08:00
resphinas f98b43514e claude_bot 2023-08-28 17:18:00 +08:00
resphinas 3c9b1a14e9 claude bot update 2023-08-28 16:43:26 +08:00
zhayujie 827e8eddf8 chore: remove dockerhub in arm build 2023-08-27 12:28:10 +08:00
zhayujie 7bc27d6167 fix: remove docker hub register in arm build 2023-08-27 12:10:08 +08:00
zhayujie ba06edd63a fix: remove pysilk_mod 2023-08-26 17:32:52 +08:00
zhayujie cacf553a5b feat: add arm workflows 2023-08-26 17:17:03 +08:00
zhayujie d89091a8ea fix: git action deploy 2023-08-26 14:14:32 +08:00
zhayujie 01a56e1155 feat: try arm docker image 2023-08-26 12:45:16 +08:00
zhayujie a64d7c42b1 fix: xunfei ws error log 2023-08-26 11:46:01 +08:00
zhayujie 36b6cc58bf fix: on_close params 2023-08-26 11:37:27 +08:00
zhayujie 5ac8a257e7 fix: add gpt-3.5-turbo in model_list 2023-08-26 10:50:31 +08:00
zhayujie 74119d0372 fix: websocket version 2023-08-25 23:57:59 +08:00
zhayujie 4e162c73e5 fix: update websocket version 2023-08-25 23:10:47 +08:00
zhayujie 5ff753a492 feat: add global model check 2023-08-25 17:26:40 +08:00
zhayujie 89400630c0 fix: xunfei client bug 2023-08-25 16:55:32 +08:00
zhayujie 3899c0cfe3 Merge pull request #1371 from uezhenxiang2023/Peter
add ElevenLabs TTS to voice factory
2023-08-25 16:15:18 +08:00
zhayujie a086f1989f feat: add xunfei spark bot 2023-08-25 16:06:55 +08:00
zhayujie 1171b04e93 fix: wenxin token discard bug 2023-08-25 12:24:16 +08:00
uezhenxiang2023 c55d81825a Merge branch 'zhayujie:master' into Peter 2023-08-25 12:12:06 +08:00
zhayujie 2dcd026e9f logs: add baidu reply log 2023-08-25 11:19:00 +08:00
zhayujie cdf8609d24 Merge pull request #1360 from zyqfork/master
Dockerfile: fall back to Debian 11 to fix the Azure Cognitive Services speech error
2023-08-25 01:24:34 +08:00
zhayujie 36580c5f7f Merge pull request #1363 from iRedScarf/master
Put the default temperature setting into config.json
2023-08-25 01:24:02 +08:00
zhayujie 1cff2521f4 fix: add web.py and linkai base url 2023-08-22 11:09:01 +08:00
uezhenxiang2023 db4998a56b replace requests with elevenlabs for audio generation 2023-08-20 10:58:26 +08:00
uezhenxiang2023 acbd506568 add ElevenLabs TTS to voice factory 2023-08-19 11:20:47 +08:00
eks 0cf8e3be73 Merge branch 'zhayujie:master' into master 2023-08-16 16:54:34 +08:00
zhayujie 2473334dfc fix: channel send compatibility and add log 2023-08-14 23:09:51 +08:00
eks 1ff72d1d37 Merge branch 'zhayujie:master' into master 2023-08-11 13:50:11 +08:00
eks 241fad5524 Update config-template.json
Put the default temperature value into config.json
2023-08-11 13:49:47 +08:00
zouyq 1b48cea50a Dockerfile: fall back to Debian 11 to fix the Azure Cognitive Services speech error
Python 3.10-slim is based on Debian 12, where using Azure text-to-speech may raise an error: the Speech SDK does not currently support OpenSSL 3.0, which is the default version in Ubuntu 22.04 and Debian 12.
2023-08-10 17:39:25 +08:00
leeson 8224c2fc16 Split the voice output of enterprise service accounts into segments 2023-07-08 23:58:07 +08:00
41 changed files with 1801 additions and 171 deletions
-6
@@ -1,6 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: 知识星球
url: https://public.zsxq.com/groups/88885848842852.html
about: 如果你想了解更多项目细节,并与开发者们交流更多关于AI技术的实践,欢迎加入星球
+71
@@ -0,0 +1,71 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Create and publish a Docker image
on:
push:
branches: ['master']
create:
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v4
with:
images: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
context: .
push: true
file: ./docker/Dockerfile.latest
platforms: linux/arm64
tags: ${{ steps.meta.outputs.tags }}-arm64
labels: ${{ steps.meta.outputs.labels }}
- uses: actions/delete-package-versions@v4
with:
package-name: 'chatgpt-on-wechat'
package-type: 'container'
min-versions-to-keep: 10
delete-only-untagged-versions: 'true'
token: ${{ secrets.GITHUB_TOKEN }}
+16 -5
@@ -5,7 +5,7 @@
最新版本支持的功能如下:
- [x] **多端部署:** 有多种部署方式可选择且功能完备,目前已支持个人微信,微信公众号和企业微信应用等部署方式
- - [x] **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3, GPT-3.5, GPT-4, 文心一言模型
+ - [x] **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3.5, GPT-4, claude, 文心一言, 讯飞星火
- [x] **语音识别:** 可识别语音消息,通过文字或语音回复,支持 azure, baidu, google, openai等多种语音模型
- [x] **图片生成:** 支持图片生成 和 图生图(如照片修复),可选择 Dell-E, stable diffusion, replicate, midjourney模型
- [x] **丰富插件:** 支持个性化插件扩展,已实现多角色切换、文字冒险、敏感词过滤、聊天记录总结等插件
@@ -28,9 +28,11 @@ Demo made by [Visionn](https://www.wangpc.cc/)
# 更新日志
>**2023.09.01** 增加 [企微个人号](https://github.com/zhayujie/chatgpt-on-wechat/pull/1385) 通道,[claude](https://github.com/zhayujie/chatgpt-on-wechat/pull/1382) 模型
>**2023.08.08** 接入百度文心一言模型,通过 [插件](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/linkai) 支持 Midjourney 绘图
- >**2023.06.12** 接入 [LinkAI](https://chat.link-ai.tech/console) 平台,可在线创建个人知识库,并接入微信、公众号及企业微信中,打造专属客服机器人。使用参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
+ >**2023.06.12** 接入 [LinkAI](https://chat.link-ai.tech/console) 平台,可在线创建领域知识库,并接入微信、公众号及企业微信中,打造专属客服机器人。使用参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
>**2023.04.26** 支持企业微信应用号部署,兼容插件,并支持语音图片交互,私人助理理想选择,[使用文档](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md)。(contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
@@ -113,7 +115,7 @@ pip3 install azure-cognitiveservices-speech
# config.json文件内容示例
{
"open_ai_api_key": "YOUR API KEY", # 填入上面创建的 OpenAI API KEY
- "model": "gpt-3.5-turbo", # 模型名称。当use_azure_chatgpt为true时,其名称为Azure上model deployment名称
+ "model": "gpt-3.5-turbo", # 模型名称, 支持 gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei
"proxy": "", # 代理客户端的ip和端口,国内环境开启代理的需要填写该项,如 "127.0.0.1:7890"
"single_chat_prefix": ["bot", "@bot"], # 私聊时文本需要包含该前缀才能触发机器人回复
"single_chat_reply_prefix": "[bot] ", # 私聊时自动回复的前缀,用于区分真人
@@ -129,7 +131,10 @@ pip3 install azure-cognitiveservices-speech
"azure_api_version": "", # 采用Azure ChatGPT时,API版本
"character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。", # 人格描述
# 订阅消息,公众号和企业微信channel中请填写,当被订阅时会自动回复,可使用特殊占位符。目前支持的占位符有{trigger_prefix},在程序中它会自动替换成bot的触发词。
- "subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持角色扮演和文字冒险等丰富插件。\n输入{trigger_prefix}#help 查看详细指令。"
+ "subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持角色扮演和文字冒险等丰富插件。\n输入{trigger_prefix}#help 查看详细指令。",
+ "use_linkai": false, # 是否使用LinkAI接口,默认关闭,开启后可国内访问,使用知识库和MJ
+ "linkai_api_key": "", # LinkAI Api Key
+ "linkai_app_code": "" # LinkAI 应用code
}
```
**配置说明:**
@@ -154,7 +159,7 @@ pip3 install azure-cognitiveservices-speech
**4.其他配置**
- + `model`: 模型名称,目前支持 `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, `gpt-4-32k`, `wenxin` (其中gpt-4 api暂未完全开放,申请通过后可使用)
+ + `model`: 模型名称,目前支持 `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, `gpt-4-32k`, `wenxin` , `claude` , `xunfei`(其中gpt-4 api暂未完全开放,申请通过后可使用)
+ `temperature`,`frequency_penalty`,`presence_penalty`: Chat API接口参数,详情参考[OpenAI官方文档。](https://platform.openai.com/docs/api-reference/chat)
+ `proxy`:由于目前 `openai` 接口国内无法访问,需配置代理客户端的地址,详情参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+ 对于图像生成,在满足个人或群组触发条件外,还需要额外的关键词前缀来触发,对应配置 `image_create_prefix `
@@ -166,6 +171,12 @@ pip3 install azure-cognitiveservices-speech
+ `character_desc` 配置中保存着你对机器人说的一段话,他会记住这段话并作为他的设定,你可以为他定制任何人格 (关于会话上下文的更多内容参考该 [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/43))
+ `subscribe_msg`:订阅消息,公众号和企业微信channel中请填写,当被订阅时会自动回复, 可使用特殊占位符。目前支持的占位符有{trigger_prefix},在程序中它会自动替换成bot的触发词。
**5.LinkAI配置 (可选)**
+ `use_linkai`: 是否使用LinkAI接口,开启后可国内访问,使用知识库和 `Midjourney` 绘画, 参考 [文档](https://link-ai.tech/platform/link-app/wechat)
+ `linkai_api_key`: LinkAI Api Key,可在 [控制台](https://chat.link-ai.tech/console/interface) 创建
+ `linkai_app_code`: LinkAI 应用code,选填
**本说明文档可能会未及时更新,当前所有可选的配置项均在该[`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py)中列出。**
## 运行
+1 -1
@@ -43,7 +43,7 @@ def run():
# os.environ['WECHATY_PUPPET_SERVICE_ENDPOINT'] = '127.0.0.1:9001'
channel = channel_factory.create_channel(channel_name)
- if channel_name in ["wx", "wxy", "terminal", "wechatmp", "wechatmp_service", "wechatcom_app"]:
+ if channel_name in ["wx", "wxy", "terminal", "wechatmp", "wechatmp_service", "wechatcom_app", "wework"]:
PluginManager().load_plugins()
# startup channel
+1 -1
@@ -2,7 +2,6 @@
import requests, json
from bot.bot import Bot
from bridge.reply import Reply, ReplyType
from bot.session_manager import SessionManager
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
@@ -77,6 +76,7 @@ class BaiduWenxinBot(Bot):
payload = {'messages': session.messages}
response = requests.request("POST", url, headers=headers, data=json.dumps(payload))
response_text = json.loads(response.text)
logger.info(f"[BAIDU] response text={response_text}")
res_content = response_text["result"]
total_tokens = response_text["usage"]["total_tokens"]
completion_tokens = response_text["usage"]["completion_tokens"]
+10 -44
@@ -9,6 +9,7 @@ from common.log import logger
]
"""
class BaiduWenxinSession(Session):
def __init__(self, session_id, system_prompt=None, model="gpt-3.5-turbo"):
super().__init__(session_id, system_prompt)
@@ -17,7 +18,6 @@ class BaiduWenxinSession(Session):
# self.reset()
def discard_exceeding(self, max_tokens, cur_tokens=None):
# pdb.set_trace()
precise = True
try:
cur_tokens = self.calc_tokens()
@@ -27,18 +27,9 @@ class BaiduWenxinSession(Session):
raise e
logger.debug("Exception when counting tokens precisely for query: {}".format(e))
while cur_tokens > max_tokens:
if len(self.messages) > 2:
self.messages.pop(1)
elif len(self.messages) == 2 and self.messages[1]["role"] == "assistant":
self.messages.pop(1)
if precise:
cur_tokens = self.calc_tokens()
else:
cur_tokens = cur_tokens - max_tokens
break
elif len(self.messages) == 2 and self.messages[1]["role"] == "user":
logger.warn("user message exceed max_tokens. total_tokens={}".format(cur_tokens))
break
if len(self.messages) >= 2:
self.messages.pop(0)
self.messages.pop(0)
else:
logger.debug("max_tokens={}, total_tokens={}, len(messages)={}".format(max_tokens, cur_tokens, len(self.messages)))
break
@@ -52,36 +43,11 @@ class BaiduWenxinSession(Session):
return num_tokens_from_messages(self.messages, self.model)
# refer to https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
def num_tokens_from_messages(messages, model):
"""Returns the number of tokens used by a list of messages."""
import tiktoken
if model in ["gpt-3.5-turbo-0301", "gpt-35-turbo"]:
return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
elif model in ["gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-35-turbo-16k"]:
return num_tokens_from_messages(messages, model="gpt-4")
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
logger.debug("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo":
tokens_per_message = 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
tokens_per_name = -1 # if there's a name, the role is omitted
elif model == "gpt-4":
tokens_per_message = 3
tokens_per_name = 1
else:
logger.warn(f"num_tokens_from_messages() is not implemented for model {model}. Returning num tokens assuming gpt-3.5-turbo.")
return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
for key, value in message.items():
num_tokens += len(encoding.encode(value))
if key == "name":
num_tokens += tokens_per_name
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens
tokens = 0
for msg in messages:
# 官方token计算规则暂不明确: "大约为 token数为 "中文字 + 其他语种单词数 x 1.3"
# 这里先直接根据字数粗略估算吧,暂不影响正常使用,仅在判断是否丢弃历史会话的时候会有偏差
tokens += len(msg["content"])
return tokens
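The comment in the new counting code above quotes Baidu's rough rule ("roughly: Chinese characters + other-language words × 1.3"), though the diff itself just counts characters. A hypothetical sketch of the quoted rule, for comparison; the function name and regexes are illustrative, not from the repo:

```python
import re

def estimate_wenxin_tokens(text: str) -> int:
    """Rough token estimate per the heuristic quoted in the diff:
    about one token per CJK character, plus ~1.3 tokens per non-CJK
    word. Only good enough for history-truncation decisions, not an
    exact tokenizer."""
    cjk_chars = re.findall(r"[\u4e00-\u9fff]", text)
    # Runs of non-CJK letters/digits count as "words" (e.g. English words)
    other_words = re.findall(r"[A-Za-z0-9]+", text)
    return len(cjk_chars) + round(len(other_words) * 1.3)
```

Either way the estimate only shifts when history is discarded, so a coarse approximation is acceptable here.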
+7 -5
@@ -14,31 +14,33 @@ def create_bot(bot_type):
# 替换Baidu Unit为Baidu文心千帆对话接口
# from bot.baidu.baidu_unit_bot import BaiduUnitBot
# return BaiduUnitBot()
from bot.baidu.baidu_wenxin import BaiduWenxinBot
return BaiduWenxinBot()
elif bot_type == const.CHATGPT:
# ChatGPT 网页端web接口
from bot.chatgpt.chat_gpt_bot import ChatGPTBot
return ChatGPTBot()
elif bot_type == const.OPEN_AI:
# OpenAI 官方对话模型API
from bot.openai.open_ai_bot import OpenAIBot
return OpenAIBot()
elif bot_type == const.CHATGPTONAZURE:
# Azure chatgpt service https://azure.microsoft.com/en-in/products/cognitive-services/openai-service/
from bot.chatgpt.chat_gpt_bot import AzureChatGPTBot
return AzureChatGPTBot()
elif bot_type == const.XUNFEI:
from bot.xunfei.xunfei_spark_bot import XunFeiBot
return XunFeiBot()
elif bot_type == const.LINKAI:
from bot.linkai.link_ai_bot import LinkAIBot
return LinkAIBot()
elif bot_type == const.CLAUDEAI:
from bot.claude.claude_ai_bot import ClaudeAIBot
return ClaudeAIBot()
raise RuntimeError
+14 -1
@@ -55,11 +55,16 @@ class ChatGPTSession(Session):
# refer to https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
def num_tokens_from_messages(messages, model):
"""Returns the number of tokens used by a list of messages."""
if model in ["wenxin", "xunfei"]:
return num_tokens_by_character(messages)
import tiktoken
if model in ["gpt-3.5-turbo-0301", "gpt-35-turbo"]:
return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
- elif model in ["gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-35-turbo-16k"]:
+ elif model in ["gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-3.5-turbo-0613",
+                "gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-35-turbo-16k"]:
return num_tokens_from_messages(messages, model="gpt-4")
try:
@@ -85,3 +90,11 @@ def num_tokens_from_messages(messages, model):
num_tokens += tokens_per_name
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens
def num_tokens_by_character(messages):
"""Returns the number of tokens used by a list of messages."""
tokens = 0
for msg in messages:
tokens += len(msg["content"])
return tokens
+222
@@ -0,0 +1,222 @@
import re
import time
import json
import uuid
from curl_cffi import requests
from bot.bot import Bot
from bot.claude.claude_ai_session import ClaudeAiSession
from bot.openai.open_ai_image import OpenAIImage
from bot.session_manager import SessionManager
from bridge.context import Context, ContextType
from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf
class ClaudeAIBot(Bot, OpenAIImage):
def __init__(self):
super().__init__()
self.sessions = SessionManager(ClaudeAiSession, model=conf().get("model") or "gpt-3.5-turbo")
self.claude_api_cookie = conf().get("claude_api_cookie")
self.proxy = conf().get("proxy")
self.con_uuid_dic = {}
if self.proxy:
self.proxies = {
"http": self.proxy,
"https": self.proxy
}
else:
self.proxies = None
self.error = ""
self.org_uuid = self.get_organization_id()
def generate_uuid(self):
random_uuid = uuid.uuid4()
random_uuid_str = str(random_uuid)
formatted_uuid = f"{random_uuid_str[0:8]}-{random_uuid_str[9:13]}-{random_uuid_str[14:18]}-{random_uuid_str[19:23]}-{random_uuid_str[24:]}"
return formatted_uuid
def reply(self, query, context: Context = None) -> Reply:
if context.type == ContextType.TEXT:
return self._chat(query, context)
elif context.type == ContextType.IMAGE_CREATE:
ok, res = self.create_img(query, 0)
if ok:
reply = Reply(ReplyType.IMAGE_URL, res)
else:
reply = Reply(ReplyType.ERROR, res)
return reply
else:
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def get_organization_id(self):
url = "https://claude.ai/api/organizations"
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://claude.ai/chats',
'Content-Type': 'application/json',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'Connection': 'keep-alive',
'Cookie': f'{self.claude_api_cookie}'
}
try:
response = requests.get(url, headers=headers, impersonate="chrome110", proxies=self.proxies, timeout=400)
res = json.loads(response.text)
uuid = res[0]['uuid']
except Exception:
if "App unavailable" in response.text:
logger.error("IP error: The IP is not allowed to be used on Claude")
self.error = "ip所在地区不被claude支持"
elif "Invalid authorization" in response.text:
logger.error("Cookie error: Invalid authorization of claude, check cookie please.")
self.error = "无法通过claude身份验证,请检查cookie"
return None
return uuid
def conversation_share_check(self,session_id):
if conf().get("claude_uuid") is not None and conf().get("claude_uuid") != "":
con_uuid = conf().get("claude_uuid")
return con_uuid
if session_id not in self.con_uuid_dic:
self.con_uuid_dic[session_id] = self.generate_uuid()
self.create_new_chat(self.con_uuid_dic[session_id])
return self.con_uuid_dic[session_id]
def check_cookie(self):
flag = self.get_organization_id()
return flag
def create_new_chat(self, con_uuid):
"""
新建claude对话实体
:param con_uuid: 对话id
:return:
"""
url = f"https://claude.ai/api/organizations/{self.org_uuid}/chat_conversations"
payload = json.dumps({"uuid": con_uuid, "name": ""})
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://claude.ai/chats',
'Content-Type': 'application/json',
'Origin': 'https://claude.ai',
'DNT': '1',
'Connection': 'keep-alive',
'Cookie': self.claude_api_cookie,
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'TE': 'trailers'
}
response = requests.post(url, headers=headers, data=payload, impersonate="chrome110", proxies=self.proxies, timeout=400)
# Returns JSON of the newly created conversation information
return response.json()
def _chat(self, query, context, retry_count=0) -> Reply:
"""
发起对话请求
:param query: 请求提示词
:param context: 对话上下文
:param retry_count: 当前递归重试次数
:return: 回复
"""
if retry_count >= 2:
# exit from retry 2 times
logger.warn("[CLAUDEAI] failed after maximum number of retry times")
return Reply(ReplyType.ERROR, "请再问我一次吧")
try:
session_id = context["session_id"]
if self.org_uuid is None:
return Reply(ReplyType.ERROR, self.error)
session = self.sessions.session_query(query, session_id)
con_uuid = self.conversation_share_check(session_id)
model = conf().get("model") or "gpt-3.5-turbo"
# remove system message
if session.messages[0].get("role") == "system":
if model == "wenxin" or model == "claude":
session.messages.pop(0)
logger.info(f"[CLAUDEAI] query={query}")
# do http request
base_url = "https://claude.ai"
payload = json.dumps({
"completion": {
"prompt": f"{query}",
"timezone": "Asia/Kolkata",
"model": "claude-2"
},
"organization_uuid": f"{self.org_uuid}",
"conversation_uuid": f"{con_uuid}",
"text": f"{query}",
"attachments": []
})
headers = {
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0',
'Accept': 'text/event-stream, text/event-stream',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://claude.ai/chats',
'Content-Type': 'application/json',
'Origin': 'https://claude.ai',
'DNT': '1',
'Connection': 'keep-alive',
'Cookie': f'{self.claude_api_cookie}',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'TE': 'trailers'
}
res = requests.post(base_url + "/api/append_message", headers=headers, data=payload, impersonate="chrome110", proxies=self.proxies, timeout=400)
if res.status_code == 200 or "permission" in res.text:
# execute success
decoded_data = res.content.decode("utf-8")
decoded_data = re.sub('\n+', '\n', decoded_data).strip()
data_strings = decoded_data.split('\n')
completions = []
for data_string in data_strings:
json_str = data_string[6:].strip()
data = json.loads(json_str)
if 'completion' in data:
completions.append(data['completion'])
reply_content = ''.join(completions)
if "rate limi" in reply_content:
logger.error("rate limit error: the conversation has hit Claude's rate limit; check the official website for when it will be lifted")
return Reply(ReplyType.ERROR, "对话达到系统速率限制,与claude同步,请进入官网查看解除限制时间")
logger.info(f"[CLAUDE] reply={reply_content}, total_tokens=invisible")
self.sessions.session_reply(reply_content, session_id, 100)
return Reply(ReplyType.TEXT, reply_content)
else:
flag = self.check_cookie()
if flag is None:
return Reply(ReplyType.ERROR, self.error)
response = res.json()
error = response.get("error")
logger.error(f"[CLAUDE] chat failed, status_code={res.status_code}, "
f"msg={error.get('message')}, type={error.get('type')}, detail: {res.text}, uuid: {con_uuid}")
if res.status_code >= 500:
# server error, need retry
time.sleep(2)
logger.warn(f"[CLAUDE] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
return Reply(ReplyType.ERROR, "提问太快啦,请休息一下再问我吧")
except Exception as e:
logger.exception(e)
# retry
time.sleep(2)
logger.warn(f"[CLAUDE] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
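The success branch above walks Claude's event-stream body line by line, strips the `data: ` prefix, parses each JSON payload, and concatenates the `completion` fields. That step in isolation might look like the following sketch (the sample stream body is made up for illustration):

```python
import json

def join_completions(raw: str) -> str:
    """Concatenate 'completion' fields from an SSE-style body where
    each data line looks like 'data: {...json...}'."""
    parts = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and other event fields
        data = json.loads(line[len("data:"):].strip())
        if "completion" in data:
            parts.append(data["completion"])
    return "".join(parts)

# Hypothetical stream body, for illustration only
sample = 'data: {"completion": "Hel"}\n\ndata: {"completion": "lo"}\n'
print(join_completions(sample))  # → Hello
```

Unlike the diff's fixed `[6:]` slice, this sketch skips lines that are not `data:` records, which is slightly more defensive against blank lines in the stream.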
+9
@@ -0,0 +1,9 @@
from bot.session_manager import Session
class ClaudeAiSession(Session):
def __init__(self, session_id, system_prompt=None, model="claude"):
super().__init__(session_id, system_prompt)
self.model = model
# claude逆向不支持role prompt
# self.reset()
+73 -4
@@ -22,8 +22,8 @@ class LinkAIBot(Bot, OpenAIImage):
def __init__(self):
super().__init__()
self.base_url = "https://api.link-ai.chat/v1"
self.sessions = SessionManager(ChatGPTSession, model=conf().get("model") or "gpt-3.5-turbo")
self.args = {}
def reply(self, query, context: Context = None) -> Reply:
if context.type == ContextType.TEXT:
@@ -73,17 +73,21 @@ class LinkAIBot(Bot, OpenAIImage):
body = {
"app_code": app_code,
"messages": session.messages,
- "model": model, # 对话模型的名称, 支持 gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin
+ "model": model, # 对话模型的名称, 支持 gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei
"temperature": conf().get("temperature"),
"top_p": conf().get("top_p", 1),
"frequency_penalty": conf().get("frequency_penalty", 0.0), # [-2,2]之间,该值越大则更倾向于产生不同的内容
"presence_penalty": conf().get("presence_penalty", 0.0), # [-2,2]之间,该值越大则更倾向于产生不同的内容
}
- logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}")
+ file_id = context.kwargs.get("file_id")
+ if file_id:
+     body["file_id"] = file_id
+ logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}, file_id={file_id}")
headers = {"Authorization": "Bearer " + linkai_api_key}
# do http request
- res = requests.post(url=self.base_url + "/chat/completions", json=body, headers=headers,
+ base_url = conf().get("linkai_api_base", "https://api.link-ai.chat")
+ res = requests.post(url=base_url + "/v1/chat/completions", json=body, headers=headers,
timeout=conf().get("request_timeout", 180))
if res.status_code == 200:
# execute success
@@ -114,3 +118,68 @@ class LinkAIBot(Bot, OpenAIImage):
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
def reply_text(self, session: ChatGPTSession, app_code="", retry_count=0) -> dict:
if retry_count >= 2:
# exit from retry 2 times
logger.warn("[LINKAI] failed after maximum number of retry times")
return {
"total_tokens": 0,
"completion_tokens": 0,
"content": "请再问我一次吧"
}
try:
body = {
"app_code": app_code,
"messages": session.messages,
"model": conf().get("model") or "gpt-3.5-turbo",  # chat model name; supports gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei
"temperature": conf().get("temperature"),
"top_p": conf().get("top_p", 1),
"frequency_penalty": conf().get("frequency_penalty", 0.0),  # in [-2, 2]; higher values favor more varied content
"presence_penalty": conf().get("presence_penalty", 0.0),  # in [-2, 2]; higher values favor more varied content
}
if self.args.get("max_tokens"):
body["max_tokens"] = self.args.get("max_tokens")
headers = {"Authorization": "Bearer " + conf().get("linkai_api_key")}
# do http request
base_url = conf().get("linkai_api_base", "https://api.link-ai.chat")
res = requests.post(url=base_url + "/v1/chat/completions", json=body, headers=headers,
timeout=conf().get("request_timeout", 180))
if res.status_code == 200:
# execute success
response = res.json()
reply_content = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
logger.info(f"[LINKAI] reply={reply_content}, total_tokens={total_tokens}")
return {
"total_tokens": total_tokens,
"completion_tokens": response["usage"]["completion_tokens"],
"content": reply_content,
}
else:
response = res.json()
error = response.get("error")
logger.error(f"[LINKAI] chat failed, status_code={res.status_code}, "
f"msg={error.get('message')}, type={error.get('type')}")
if res.status_code >= 500:
# server error, need retry
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self.reply_text(session, app_code, retry_count + 1)
return {
"total_tokens": 0,
"completion_tokens": 0,
"content": "提问太快啦,请休息一下再问我吧"
}
except Exception as e:
logger.exception(e)
# retry
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self.reply_text(session, app_code, retry_count + 1)
+250
@@ -0,0 +1,250 @@
# encoding:utf-8
import base64
import hashlib
import hmac
import json
import queue
import random
import ssl
import threading
import time
import _thread as thread
import requests
import websocket
from datetime import datetime
from time import mktime
from urllib.parse import urlencode, urlparse
from wsgiref.handlers import format_date_time
from bot.bot import Bot
from bot.session_manager import SessionManager
from bot.baidu.baidu_wenxin_session import BaiduWenxinSession
from bridge.context import ContextType, Context
from bridge.reply import Reply, ReplyType
from common import const
from common.log import logger
from config import conf
# message queue map: request_id -> queue of ReplyItem
queue_map = dict()
# reply buffer map: request_id -> accumulated reply text
reply_map = dict()
class XunFeiBot(Bot):
def __init__(self):
super().__init__()
self.app_id = conf().get("xunfei_app_id")
self.api_key = conf().get("xunfei_api_key")
self.api_secret = conf().get("xunfei_api_secret")
# v2.0 domain by default; set to "general" for v1.5
self.domain = "generalv2"
# v2.0 endpoint by default; for v1.5 use "ws://spark-api.xf-yun.com/v1.1/chat"
self.spark_url = "ws://spark-api.xf-yun.com/v2.1/chat"
self.host = urlparse(self.spark_url).netloc
self.path = urlparse(self.spark_url).path
# reuse the same session mechanism as wenxin
self.sessions = SessionManager(BaiduWenxinSession, model=const.XUNFEI)
def reply(self, query, context: Context = None) -> Reply:
if context.type == ContextType.TEXT:
logger.info("[XunFei] query={}".format(query))
session_id = context["session_id"]
request_id = self.gen_request_id(session_id)
reply_map[request_id] = ""
session = self.sessions.session_query(query, session_id)
threading.Thread(target=self.create_web_socket, args=(session.messages, request_id)).start()
depth = 0
time.sleep(0.1)
t1 = time.time()
usage = {}
while depth <= 300:
try:
data_queue = queue_map.get(request_id)
if not data_queue:
depth += 1
time.sleep(0.1)
continue
data_item = data_queue.get(block=True, timeout=0.1)
if data_item.is_end:
# request finished
del queue_map[request_id]
if data_item.reply:
reply_map[request_id] += data_item.reply
usage = data_item.usage
break
reply_map[request_id] += data_item.reply
depth += 1
except Exception as e:
depth += 1
continue
t2 = time.time()
logger.info(f"[XunFei-API] response={reply_map[request_id]}, time={t2 - t1}s, usage={usage}")
self.sessions.session_reply(reply_map[request_id], session_id, usage.get("total_tokens"))
reply = Reply(ReplyType.TEXT, reply_map[request_id])
del reply_map[request_id]
return reply
else:
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def create_web_socket(self, prompt, session_id, temperature=0.5):
logger.info(f"[XunFei] start connect, prompt={prompt}")
websocket.enableTrace(False)
wsUrl = self.create_url()
ws = websocket.WebSocketApp(wsUrl, on_message=on_message, on_error=on_error, on_close=on_close,
on_open=on_open)
data_queue = queue.Queue(1000)
queue_map[session_id] = data_queue
ws.appid = self.app_id
ws.question = prompt
ws.domain = self.domain
ws.session_id = session_id
ws.temperature = temperature
ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
def gen_request_id(self, session_id: str):
return session_id + "_" + str(int(time.time())) + str(random.randint(0, 100))
# generate the signed websocket url
def create_url(self):
# generate an RFC 1123 timestamp
now = datetime.now()
date = format_date_time(mktime(now.timetuple()))
# build the signature origin string
signature_origin = "host: " + self.host + "\n"
signature_origin += "date: " + date + "\n"
signature_origin += "GET " + self.path + " HTTP/1.1"
# sign with HMAC-SHA256
signature_sha = hmac.new(self.api_secret.encode('utf-8'), signature_origin.encode('utf-8'),
digestmod=hashlib.sha256).digest()
signature_sha_base64 = base64.b64encode(signature_sha).decode(encoding='utf-8')
authorization_origin = f'api_key="{self.api_key}", algorithm="hmac-sha256", headers="host date request-line", ' \
f'signature="{signature_sha_base64}"'
authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8')
# assemble the auth params into a dict
v = {
"authorization": authorization,
"date": date,
"host": self.host
}
# append the auth params to build the final url
url = self.spark_url + '?' + urlencode(v)
# when debugging, the url can be printed here and compared against the one generated by the official demo
return url
def gen_params(self, appid, domain, question):
"""
Generate request parameters from the appid and the user's question.
"""
data = {
"header": {
"app_id": appid,
"uid": "1234"
},
"parameter": {
"chat": {
"domain": domain,
"random_threshold": 0.5,
"max_tokens": 2048,
"auditing": "default"
}
},
"payload": {
"message": {
"text": question
}
}
}
return data
class ReplyItem:
def __init__(self, reply, usage=None, is_end=False):
self.is_end = is_end
self.reply = reply
self.usage = usage
# websocket error handler
def on_error(ws, error):
logger.error(f"[XunFei] error: {str(error)}")
# websocket close handler
def on_close(ws, one, two):
data_queue = queue_map.get(ws.session_id)
if data_queue:
data_queue.put(ReplyItem(None, is_end=True))
# websocket open handler
def on_open(ws):
logger.info(f"[XunFei] Start websocket, session_id={ws.session_id}")
thread.start_new_thread(run, (ws,))
def run(ws, *args):
data = json.dumps(gen_params(appid=ws.appid, domain=ws.domain, question=ws.question, temperature=ws.temperature))
ws.send(data)
# websocket message handler
def on_message(ws, message):
data = json.loads(message)
code = data['header']['code']
if code != 0:
logger.error(f'request error: {code}, {data}')
ws.close()
else:
choices = data["payload"]["choices"]
status = choices["status"]
content = choices["text"][0]["content"]
data_queue = queue_map.get(ws.session_id)
if not data_queue:
logger.error(f"[XunFei] can't find data queue, session_id={ws.session_id}")
return
reply_item = ReplyItem(content)
if status == 2:
usage = data["payload"].get("usage")
reply_item = ReplyItem(content, usage)
reply_item.is_end = True
ws.close()
data_queue.put(reply_item)
def gen_params(appid, domain, question, temperature=0.5):
"""
Generate request parameters from the appid and the user's question.
"""
data = {
"header": {
"app_id": appid,
"uid": "1234"
},
"parameter": {
"chat": {
"domain": domain,
"temperature": temperature,
"random_threshold": 0.5,
"max_tokens": 2048,
"auditing": "default"
}
},
"payload": {
"message": {
"text": question
}
}
}
return data
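The `create_url` method above derives the Spark websocket address from an HMAC-SHA256 signature over host, date, and request line. The scheme is deterministic given fixed inputs, so it can be sketched and checked standalone (`demo-key`/`demo-secret` and the fixed date below are placeholders, not real credentials):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def build_signed_url(base_url, host, path, date, api_key, api_secret):
    # Origin string in the exact shape the Spark gateway expects:
    # "host: ...\ndate: ...\nGET <path> HTTP/1.1"
    signature_origin = f"host: {host}\ndate: {date}\nGET {path} HTTP/1.1"
    # HMAC-SHA256 over the origin string, then base64
    digest = hmac.new(api_secret.encode("utf-8"),
                      signature_origin.encode("utf-8"),
                      digestmod=hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    authorization_origin = (f'api_key="{api_key}", algorithm="hmac-sha256", '
                            f'headers="host date request-line", signature="{signature}"')
    authorization = base64.b64encode(authorization_origin.encode("utf-8")).decode("utf-8")
    # The three auth params ride in the query string of the websocket url
    return base_url + "?" + urlencode({"authorization": authorization,
                                       "date": date,
                                       "host": host})

url = build_signed_url("ws://spark-api.xf-yun.com/v2.1/chat",
                       "spark-api.xf-yun.com", "/v2.1/chat",
                       "Mon, 01 Jan 2024 00:00:00 GMT",
                       "demo-key", "demo-secret")
```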
+10
@@ -25,9 +25,14 @@ class Bridge(object):
self.btype["chat"] = const.CHATGPTONAZURE
if model_type in ["wenxin"]:
self.btype["chat"] = const.BAIDU
if model_type in ["xunfei"]:
self.btype["chat"] = const.XUNFEI
if conf().get("use_linkai") and conf().get("linkai_api_key"):
self.btype["chat"] = const.LINKAI
if model_type in ["claude"]:
self.btype["chat"] = const.CLAUDEAI
self.bots = {}
self.chat_bots = {}
def get_bot(self, typename):
if self.bots.get(typename) is None:
@@ -57,6 +62,11 @@ class Bridge(object):
def fetch_translate(self, text, from_lang="", to_lang="en") -> Reply:
return self.get_bot("translate").translate(text, from_lang, to_lang)
def find_chat_bot(self, bot_type: str):
if self.chat_bots.get(bot_type) is None:
self.chat_bots[bot_type] = create_bot(bot_type)
return self.chat_bots.get(bot_type)
def reset_bot(self):
"""
Reset the bot routing
+5
@@ -7,9 +7,14 @@ class ContextType(Enum):
TEXT = 1  # text message
VOICE = 2  # audio message
IMAGE = 3  # image message
FILE = 4  # file message
VIDEO = 5  # video message
SHARING = 6  # sharing/link message
IMAGE_CREATE = 10  # image creation command
JOIN_GROUP = 20  # join group notification
PATPAT = 21  # "pat-pat" (tickle) notification
FUNCTION = 22  # function call
def __str__(self):
return self.name
+7 -1
@@ -8,9 +8,15 @@ class ReplyType(Enum):
VOICE = 2  # audio file
IMAGE = 3  # image file
IMAGE_URL = 4  # image URL
VIDEO_URL = 5  # video URL
FILE = 6  # file
CARD = 7  # WeChat contact card, only supported by ntchat
InviteRoom = 8  # invite friend into group
INFO = 9
ERROR = 10
TEXT_ = 11  # forced text
VIDEO = 12
MINIAPP = 13  # mini program
def __str__(self):
return self.name
+4
@@ -33,4 +33,8 @@ def create_channel(channel_type):
from channel.wechatcom.wechatcomapp_channel import WechatComAppChannel
return WechatComAppChannel()
elif channel_type == "wework":
from channel.wework.wework_channel import WeworkChannel
return WeworkChannel()
raise RuntimeError
+23 -15
@@ -99,21 +99,26 @@ class ChatChannel(Channel):
match_prefix = check_prefix(content, conf().get("group_chat_prefix"))
match_contain = check_contain(content, conf().get("group_chat_keyword"))
flag = False
if match_prefix is not None or match_contain is not None:
flag = True
if match_prefix:
content = content.replace(match_prefix, "", 1).strip()
if context["msg"].is_at:
logger.info("[WX]receive group at")
if not conf().get("group_at_off", False):
if context["msg"].to_user_id != context["msg"].actual_user_id:
if match_prefix is not None or match_contain is not None:
flag = True
pattern = f"@{re.escape(self.name)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", content)
if subtract_res == content and context["msg"].self_display_name:
# prefix removal changed nothing; try again with the group display name
pattern = f"@{re.escape(context['msg'].self_display_name)}(\u2005|\u0020)"
if match_prefix:
content = content.replace(match_prefix, "", 1).strip()
if context["msg"].is_at:
logger.info("[WX]receive group at")
if not conf().get("group_at_off", False):
flag = True
pattern = f"@{re.escape(self.name)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", content)
content = subtract_res
if isinstance(context["msg"].at_list, list):
for at in context["msg"].at_list:
pattern = f"@{re.escape(at)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", subtract_res)
if subtract_res == content and context["msg"].self_display_name:
# prefix removal changed nothing; try again with the group display name
pattern = f"@{re.escape(context['msg'].self_display_name)}(\u2005|\u0020)"
subtract_res = re.sub(pattern, r"", content)
content = subtract_res
if not flag:
if context["origin_ctype"] == ContextType.VOICE:
logger.info("[WX]receive group voice, but checkprefix didn't match")
@@ -197,7 +202,10 @@ class ChatChannel(Channel):
reply = self._generate_reply(new_context)
else:
return
elif context.type == ContextType.IMAGE:  # image message; previously no default handling
elif context.type == ContextType.IMAGE:  # image message; currently only downloaded and saved locally
cmsg = context["msg"]
cmsg.prepare()
elif context.type == ContextType.FUNCTION or context.type == ContextType.FILE:  # file messages and function calls; no default handling
pass
else:
logger.error("[WX] unknown context type: {}".format(context.type))
@@ -233,7 +241,7 @@ class ChatChannel(Channel):
reply.content = reply_text
elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
reply.content = "[" + str(reply.type) + "]\n" + reply.content
elif reply.type == ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE:
elif reply.type == ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE or reply.type == ReplyType.FILE or reply.type == ReplyType.VIDEO or reply.type == ReplyType.VIDEO_URL:
pass
else:
logger.error("[WX] unknown reply type: {}".format(reply.type))
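The group-chat hunk above strips @-mentions with `re.escape` plus the `\u2005` thin space WeChat inserts after a group @. The same pattern in isolation (the names here are hypothetical):

```python
import re

def strip_mentions(content, names):
    # Remove each "@name" followed by \u2005 (WeChat's thin space) or a plain space;
    # re.escape guards against regex metacharacters in display names.
    for name in names:
        pattern = f"@{re.escape(name)}(\u2005|\u0020)"
        content = re.sub(pattern, "", content)
    return content.strip()

text = "@bot\u2005hello @alice there"
print(strip_mentions(text, ["bot", "alice"]))  # -> hello there
```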
+3 -1
@@ -53,6 +53,7 @@ class ChatMessage(object):
is_at = False
actual_user_id = None
actual_user_nickname = None
at_list = None
_prepare_fn = None
_prepared = False
@@ -67,7 +68,7 @@ class ChatMessage(object):
self._prepare_fn()
def __str__(self):
return "ChatMessage: id={}, create_time={}, ctype={}, content={}, from_user_id={}, from_user_nickname={}, to_user_id={}, to_user_nickname={}, other_user_id={}, other_user_nickname={}, is_group={}, is_at={}, actual_user_id={}, actual_user_nickname={}".format(
return "ChatMessage: id={}, create_time={}, ctype={}, content={}, from_user_id={}, from_user_nickname={}, to_user_id={}, to_user_nickname={}, other_user_id={}, other_user_nickname={}, is_group={}, is_at={}, actual_user_id={}, actual_user_nickname={}, at_list={}".format(
self.msg_id,
self.create_time,
self.ctype,
@@ -82,4 +83,5 @@ class ChatMessage(object):
self.is_at,
self.actual_user_id,
self.actual_user_nickname,
self.at_list
)
+29 -2
@@ -25,7 +25,7 @@ from lib import itchat
from lib.itchat.content import *
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE])
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE, ATTACHMENT, SHARING])
def handler_single_msg(msg):
try:
cmsg = WechatMessage(msg, False)
@@ -36,7 +36,7 @@ def handler_single_msg(msg):
return None
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE], isGroupChat=True)
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE, ATTACHMENT, SHARING], isGroupChat=True)
def handler_group_msg(msg):
try:
cmsg = WechatMessage(msg, True)
@@ -172,6 +172,8 @@ class WechatChannel(ChatChannel):
elif cmsg.ctype == ContextType.TEXT:
# logger.debug("[WX]receive group msg: {}, cmsg={}".format(json.dumps(cmsg._rawmsg, ensure_ascii=False), cmsg))
pass
elif cmsg.ctype == ContextType.FILE:
logger.debug(f"[WX]receive attachment msg, file_name={cmsg.content}")
else:
logger.debug("[WX]receive group msg: {}".format(cmsg.content))
context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
@@ -192,10 +194,14 @@ class WechatChannel(ChatChannel):
logger.info("[WX] sendFile={}, receiver={}".format(reply.content, receiver))
elif reply.type == ReplyType.IMAGE_URL:  # download image from the network
img_url = reply.content
logger.debug(f"[WX] start download image, img_url={img_url}")
pic_res = requests.get(img_url, stream=True)
image_storage = io.BytesIO()
size = 0
for block in pic_res.iter_content(1024):
size += len(block)
image_storage.write(block)
logger.info(f"[WX] download image success, size={size}, img_url={img_url}")
image_storage.seek(0)
itchat.send_image(image_storage, toUserName=receiver)
logger.info("[WX] sendImage url={}, receiver={}".format(img_url, receiver))
@@ -204,3 +210,24 @@ class WechatChannel(ChatChannel):
image_storage.seek(0)
itchat.send_image(image_storage, toUserName=receiver)
logger.info("[WX] sendImage, receiver={}".format(receiver))
elif reply.type == ReplyType.FILE:  # new file reply type
file_storage = reply.content
itchat.send_file(file_storage, toUserName=receiver)
logger.info("[WX] sendFile, receiver={}".format(receiver))
elif reply.type == ReplyType.VIDEO:  # new video reply type
video_storage = reply.content
itchat.send_video(video_storage, toUserName=receiver)
logger.info("[WX] sendVideo, receiver={}".format(receiver))
elif reply.type == ReplyType.VIDEO_URL:  # new video URL reply type
video_url = reply.content
logger.debug(f"[WX] start download video, video_url={video_url}")
video_res = requests.get(video_url, stream=True)
video_storage = io.BytesIO()
size = 0
for block in video_res.iter_content(1024):
size += len(block)
video_storage.write(block)
logger.info(f"[WX] download video success, size={size}, video_url={video_url}")
video_storage.seek(0)
itchat.send_video(video_storage, toUserName=receiver)
logger.info("[WX] sendVideo url={}, receiver={}".format(video_url, receiver))
+8 -1
@@ -7,7 +7,6 @@ from common.tmp_dir import TmpDir
from lib import itchat
from lib.itchat.content import *
class WechatMessage(ChatMessage):
def __init__(self, itchat_msg, is_group=False):
super().__init__(itchat_msg)
@@ -42,6 +41,14 @@ class WechatMessage(ChatMessage):
self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
else:
raise NotImplementedError("Unsupported note message: " + itchat_msg["Content"])
elif itchat_msg["Type"] == ATTACHMENT:
self.ctype = ContextType.FILE
self.content = TmpDir().path() + itchat_msg["FileName"]
self._prepare_fn = lambda: itchat_msg.download(self.content)
elif itchat_msg["Type"] == SHARING:
self.ctype = ContextType.SHARING
self.content = itchat_msg.get("Url")
else:
raise NotImplementedError("Unsupported message type: Type:{} MsgType:{}".format(itchat_msg["Type"], itchat_msg["MsgType"]))
+6 -4
@@ -49,7 +49,7 @@ class Query:
# New request
if (
from_user not in channel.cache_dict
channel.cache_dict.get(from_user) is None
and from_user not in channel.running
or content.startswith("#")
and message_id not in channel.request_cnt # insert the godcmd
@@ -131,8 +131,10 @@ class Query:
# Only one request can access to the cached data
try:
(reply_type, reply_content) = channel.cache_dict.pop(from_user)
except KeyError:
(reply_type, reply_content) = channel.cache_dict[from_user].pop(0)
if not channel.cache_dict[from_user]: # If popping the message makes the list empty, delete the user entry from cache
del channel.cache_dict[from_user]
except IndexError:
return "success"
if reply_type == "text":
@@ -146,7 +148,7 @@ class Query:
max_split=1,
)
reply_text = splits[0] + continue_text
channel.cache_dict[from_user] = ("text", splits[1])
channel.cache_dict[from_user].append(("text", splits[1]))
logger.info(
"[wechatmp] Request {} do send to {} {}: {}\n{}".format(
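The wechatmp change above replaces the one-reply-per-user cache with a `defaultdict(list)`: replies are appended per user, popped from the front, and the key is deleted once the user's queue drains. The semantics in isolation:

```python
from collections import defaultdict

cache_dict = defaultdict(list)

# Producer side: replies are appended per user.
cache_dict["user1"].append(("text", "part 1"))
cache_dict["user1"].append(("text", "part 2"))

def pop_reply(user):
    # Pop the oldest cached reply; drop the user entry once drained.
    # On a missing/empty user this raises IndexError, which matches the
    # channel's "except IndexError: return" path.
    reply = cache_dict[user].pop(0)
    if not cache_dict[user]:
        del cache_dict[user]
    return reply

print(pop_reply("user1"))        # -> ('text', 'part 1')
print("user1" in cache_dict)     # -> True (one reply still queued)
```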
+45 -25
@@ -10,6 +10,7 @@ import requests
import web
from wechatpy.crypto import WeChatCrypto
from wechatpy.exceptions import WeChatClientException
from collections import defaultdict
from bridge.context import *
from bridge.reply import *
@@ -20,7 +21,7 @@ from common.log import logger
from common.singleton import singleton
from common.utils import split_string_by_utf8_length
from config import conf
from voice.audio_convert import any_to_mp3
from voice.audio_convert import any_to_mp3, split_audio
# If using SSL, uncomment the following lines, and modify the certificate path.
# from cheroot.server import HTTPServer
@@ -46,7 +47,7 @@ class WechatMPChannel(ChatChannel):
self.crypto = WeChatCrypto(token, aes_key, appid)
if self.passive_reply:
# Cache the reply to the user's first message
self.cache_dict = dict()
self.cache_dict = defaultdict(list)
# Record whether the current message is being processed
self.running = set()
# Count the request from wechat official server by message_id
@@ -82,24 +83,28 @@ class WechatMPChannel(ChatChannel):
if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:
reply_text = reply.content
logger.info("[wechatmp] text cached, receiver {}\n{}".format(receiver, reply_text))
self.cache_dict[receiver] = ("text", reply_text)
self.cache_dict[receiver].append(("text", reply_text))
elif reply.type == ReplyType.VOICE:
try:
voice_file_path = reply.content
with open(voice_file_path, "rb") as f:
# support: <2M, <60s, mp3/wma/wav/amr
response = self.client.material.add("voice", f)
logger.debug("[wechatmp] upload voice response: {}".format(response))
# estimate WeChat's auto-review delay from the file size; returning before review completes makes the voice unplayable (estimate unverified)
f_size = os.fstat(f.fileno()).st_size
time.sleep(1.0 + 2 * f_size / 1024 / 1024)
# todo check media_id
except WeChatClientException as e:
logger.error("[wechatmp] upload voice failed: {}".format(e))
return
media_id = response["media_id"]
logger.info("[wechatmp] voice uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver] = ("voice", media_id)
voice_file_path = reply.content
duration, files = split_audio(voice_file_path, 60 * 1000)
if len(files) > 1:
logger.info("[wechatmp] voice too long {}s > 60s , split into {} parts".format(duration / 1000.0, len(files)))
for path in files:
# support: <2M, <60s, mp3/wma/wav/amr
try:
with open(path, "rb") as f:
response = self.client.material.add("voice", f)
logger.debug("[wechatmp] upload voice response: {}".format(response))
f_size = os.fstat(f.fileno()).st_size
time.sleep(1.0 + 2 * f_size / 1024 / 1024)
# todo check media_id
except WeChatClientException as e:
logger.error("[wechatmp] upload voice failed: {}".format(e))
return
media_id = response["media_id"]
logger.info("[wechatmp] voice uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver].append(("voice", media_id))
elif reply.type == ReplyType.IMAGE_URL:  # download image from the network
img_url = reply.content
@@ -119,7 +124,7 @@ class WechatMPChannel(ChatChannel):
return
media_id = response["media_id"]
logger.info("[wechatmp] image uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver] = ("image", media_id)
self.cache_dict[receiver].append(("image", media_id))
elif reply.type == ReplyType.IMAGE:  # read image from file
image_storage = reply.content
image_storage.seek(0)
@@ -134,7 +139,7 @@ class WechatMPChannel(ChatChannel):
return
media_id = response["media_id"]
logger.info("[wechatmp] image uploaded, receiver {}, media_id {}".format(receiver, media_id))
self.cache_dict[receiver] = ("image", media_id)
self.cache_dict[receiver].append(("image", media_id))
else:
if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:
reply_text = reply.content
@@ -162,13 +167,28 @@ class WechatMPChannel(ChatChannel):
file_name = os.path.basename(file_path)
file_type = "audio/mpeg"
logger.info("[wechatmp] file_name: {}, file_type: {} ".format(file_name, file_type))
# support: <2M, <60s, AMR\MP3
response = self.client.media.upload("voice", (file_name, open(file_path, "rb"), file_type))
logger.debug("[wechatmp] upload voice response: {}".format(response))
media_ids = []
duration, files = split_audio(file_path, 60 * 1000)
if len(files) > 1:
logger.info("[wechatmp] voice too long {}s > 60s , split into {} parts".format(duration / 1000.0, len(files)))
for path in files:
# support: <2M, <60s, AMR\MP3
response = self.client.media.upload("voice", (os.path.basename(path), open(path, "rb"), file_type))
logger.debug("[wechatmp] upload voice response: {}".format(response))
media_ids.append(response["media_id"])
os.remove(path)
except WeChatClientException as e:
logger.error("[wechatmp] upload voice failed: {}".format(e))
return
self.client.message.send_voice(receiver, response["media_id"])
try:
os.remove(file_path)
except Exception:
pass
for media_id in media_ids:
self.client.message.send_voice(receiver, media_id)
time.sleep(1)
logger.info("[wechatmp] Do send voice to {}".format(receiver))
elif reply.type == ReplyType.IMAGE_URL:  # download image from the network
img_url = reply.content
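The long-voice support above leans on `split_audio` (from `voice.audio_convert`) to cut a recording into parts of at most 60 seconds, since the MP platform rejects longer clips. The chunking arithmetic alone, assuming durations are in milliseconds as in the `split_audio(file_path, 60 * 1000)` calls:

```python
def chunk_spans(duration_ms, limit_ms=60 * 1000):
    # Return (start, end) millisecond spans of at most limit_ms each,
    # covering the whole duration; the last span may be shorter.
    spans = []
    start = 0
    while start < duration_ms:
        end = min(start + limit_ms, duration_ms)
        spans.append((start, end))
        start = end
    return spans

print(len(chunk_spans(150_000)))  # 150s -> 3 parts: 60s + 60s + 30s
```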
+17
@@ -0,0 +1,17 @@
import os
import time
os.environ['ntwork_LOG'] = "ERROR"
import ntwork
wework = ntwork.WeWork()
def forever():
try:
while True:
time.sleep(0.1)
except KeyboardInterrupt:
ntwork.exit_()
os._exit(0)
+326
@@ -0,0 +1,326 @@
import io
import os
import random
import tempfile
import threading
os.environ['ntwork_LOG'] = "ERROR"
import ntwork
import requests
import uuid
from bridge.context import *
from bridge.reply import *
from channel.chat_channel import ChatChannel
from channel.wework.wework_message import *
from channel.wework.wework_message import WeworkMessage
from common.singleton import singleton
from common.log import logger
from common.time_check import time_checker
from common.utils import compress_imgfile, fsize
from config import conf
from channel.wework.run import wework
from channel.wework import run
from PIL import Image
def get_wxid_by_name(room_members, group_wxid, name):
if group_wxid in room_members:
for member in room_members[group_wxid]['member_list']:
if member['room_nickname'] == name or member['username'] == name:
return member['user_id']
return None  # return None when no matching group_wxid or name is found
def download_and_compress_image(url, filename, quality=30):
# directory to save the image
directory = os.path.join(os.getcwd(), "tmp")
# create the directory if it does not exist
if not os.path.exists(directory):
os.makedirs(directory)
# download the image
pic_res = requests.get(url, stream=True)
image_storage = io.BytesIO()
for block in pic_res.iter_content(1024):
image_storage.write(block)
# check the image size and compress if necessary
sz = fsize(image_storage)
if sz >= 10 * 1024 * 1024:  # image is 10 MB or larger
logger.info("[wework] image too large, ready to compress, sz={}".format(sz))
image_storage = compress_imgfile(image_storage, 10 * 1024 * 1024 - 1)
logger.info("[wework] image compressed, sz={}".format(fsize(image_storage)))
# reset the buffer pointer to the start
image_storage.seek(0)
# load the image and save it to disk
image = Image.open(image_storage)
image_path = os.path.join(directory, f"{filename}.png")
image.save(image_path, "png")
return image_path
def download_video(url, filename):
# directory to save the video
directory = os.path.join(os.getcwd(), "tmp")
# create the directory if it does not exist
if not os.path.exists(directory):
os.makedirs(directory)
# download the video
response = requests.get(url, stream=True)
total_size = 0
video_path = os.path.join(directory, f"{filename}.mp4")
with open(video_path, 'wb') as f:
for block in response.iter_content(1024):
total_size += len(block)
# stop downloading and return if the total size exceeds 30 MB (30 * 1024 * 1024 bytes)
if total_size > 30 * 1024 * 1024:
logger.info("[WX] Video is larger than 30MB, skipping...")
return None
f.write(block)
return video_path
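`download_video` above enforces a 30 MB cap while streaming and returns `None` once the cap is hit, which the caller turns into an apology message. The cap logic, decoupled from `requests` (any iterable of byte chunks will do):

```python
import io

def save_capped(chunks, max_bytes=30 * 1024 * 1024):
    # Write chunks to an in-memory buffer; abandon and return None
    # as soon as the running total exceeds max_bytes.
    buf = io.BytesIO()
    total = 0
    for block in chunks:
        total += len(block)
        if total > max_bytes:
            return None  # too large: abandon the download
        buf.write(block)
    return buf
```

Because the check runs per chunk, at most one chunk beyond the cap is ever read from the network before the transfer is abandoned.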
def create_message(wework_instance, message, is_group):
logger.debug(f"creating WeworkMessage for a {'group' if is_group else 'single'} chat")
cmsg = WeworkMessage(message, wework=wework_instance, is_group=is_group)
logger.debug(f"cmsg:{cmsg}")
return cmsg
def handle_message(cmsg, is_group):
logger.debug(f"handling a {'group' if is_group else 'single'} chat message with WeworkChannel")
if is_group:
WeworkChannel().handle_group(cmsg)
else:
WeworkChannel().handle_single(cmsg)
logger.debug(f"finished handling the {'group' if is_group else 'single'} chat message with WeworkChannel")
def _check(func):
def wrapper(self, cmsg: ChatMessage):
msgId = cmsg.msg_id
create_time = cmsg.create_time  # message timestamp
if create_time is None:
return func(self, cmsg)
if int(create_time) < int(time.time()) - 60:  # skip history messages older than 1 minute
logger.debug("[WX]history message {} skipped".format(msgId))
return
return func(self, cmsg)
return wrapper
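The `_check` decorator above drops messages whose `create_time` is more than 60 seconds old, so history replayed on reconnect is ignored. A dict-based sketch of the same filter (the real code reads attributes off a `ChatMessage` instance instead):

```python
import time

def skip_stale(handler, max_age_s=60):
    # Decorator: ignore messages older than max_age_s; messages with no
    # timestamp pass through, matching the original's None check.
    def wrapper(msg):
        create_time = msg.get("create_time")
        if create_time is not None and int(create_time) < int(time.time()) - max_age_s:
            return None  # stale history message, skipped
        return handler(msg)
    return wrapper

@skip_stale
def handle(msg):
    return "handled"

print(handle({"create_time": time.time()}))         # -> handled
print(handle({"create_time": time.time() - 3600}))  # -> None
```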
@wework.msg_register(
[ntwork.MT_RECV_TEXT_MSG, ntwork.MT_RECV_IMAGE_MSG, 11072, ntwork.MT_RECV_VOICE_MSG])
def all_msg_handler(wework_instance: ntwork.WeWork, message):
logger.debug(f"received message: {message}")
if 'data' in message:
# look up conversation_id first; fall back to room_conversation_id
conversation_id = message['data'].get('conversation_id', message['data'].get('room_conversation_id'))
if conversation_id is not None:
is_group = "R:" in conversation_id
try:
cmsg = create_message(wework_instance=wework_instance, message=message, is_group=is_group)
except NotImplementedError as e:
logger.error(f"[WX]{message.get('MsgId', 'unknown')} skipped: {e}")
return None
delay = random.randint(1, 2)
timer = threading.Timer(delay, handle_message, args=(cmsg, is_group))
timer.start()
else:
logger.debug("no conversation_id in message data")
return None
return None
def accept_friend_with_retries(wework_instance, user_id, corp_id):
result = wework_instance.accept_friend(user_id, corp_id)
logger.debug(f'result:{result}')
# @wework.msg_register(ntwork.MT_RECV_FRIEND_MSG)
# def friend(wework_instance: ntwork.WeWork, message):
# data = message["data"]
# user_id = data["user_id"]
# corp_id = data["corp_id"]
# logger.info(f"received friend request, message: {data}")
# delay = random.randint(1, 180)
# threading.Timer(delay, accept_friend_with_retries, args=(wework_instance, user_id, corp_id)).start()
#
# return None
def get_with_retry(get_func, max_retries=5, delay=5):
retries = 0
result = None
while retries < max_retries:
result = get_func()
if result:
break
logger.warning(f"failed to fetch data, retry {retries + 1}...")
retries += 1
time.sleep(delay)  # wait before retrying
return result
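`get_with_retry` polls a getter until it returns something truthy, up to `max_retries` attempts with a fixed delay between failures. Re-stated here without the logger so it runs standalone, with a stub getter that succeeds on the third call:

```python
import time

def get_with_retry(get_func, max_retries=5, delay=0.01):
    # Call get_func until it returns a truthy result or attempts run out.
    retries = 0
    result = None
    while retries < max_retries:
        result = get_func()
        if result:
            break
        retries += 1
        time.sleep(delay)  # wait before retrying
    return result

calls = {"n": 0}

def flaky():
    # Stub: fails twice, then returns data.
    calls["n"] += 1
    return {"ok": True} if calls["n"] >= 3 else None

print(get_with_retry(flaky))  # -> {'ok': True}
```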
@singleton
class WeworkChannel(ChatChannel):
NOT_SUPPORT_REPLYTYPE = []
def __init__(self):
super().__init__()
def startup(self):
smart = conf().get("wework_smart", True)
wework.open(smart)
logger.info("waiting for login...")
wework.wait_login()
login_info = wework.get_login_info()
self.user_id = login_info['user_id']
self.name = login_info['nickname']
logger.info(f"login info: >>>user_id:{self.user_id}>>>>>>>>name:{self.name}")
logger.info("waiting 60s for the client to refresh its data; do not operate the client during this time...")
time.sleep(60)
contacts = get_with_retry(wework.get_external_contacts)
rooms = get_with_retry(wework.get_rooms)
directory = os.path.join(os.getcwd(), "tmp")
if not contacts or not rooms:
logger.error("failed to fetch contacts or rooms, exiting")
ntwork.exit_()
os._exit(0)
if not os.path.exists(directory):
os.makedirs(directory)
# save contacts to a json file
with open(os.path.join(directory, 'wework_contacts.json'), 'w', encoding='utf-8') as f:
json.dump(contacts, f, ensure_ascii=False, indent=4)
with open(os.path.join(directory, 'wework_rooms.json'), 'w', encoding='utf-8') as f:
json.dump(rooms, f, ensure_ascii=False, indent=4)
# collect room members into a dict
result = {}
# iterate over each room
for room in rooms['room_list']:
# room ID
room_wxid = room['conversation_id']
# room members
room_members = wework.get_room_members(room_wxid)
# save the members into the result dict
result[room_wxid] = room_members
# write the result to a json file
with open(os.path.join(directory, 'wework_room_members.json'), 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=4)
logger.info("wework channel initialized")
run.forever()
@time_checker
@_check
def handle_single(self, cmsg: ChatMessage):
if cmsg.from_user_id == cmsg.to_user_id:
# ignore self reply
return
if cmsg.ctype == ContextType.VOICE:
if not conf().get("speech_recognition"):
return
logger.debug("[WX]receive voice msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.IMAGE:
logger.debug("[WX]receive image msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.PATPAT:
logger.debug("[WX]receive patpat msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.TEXT:
logger.debug("[WX]receive text msg: {}, cmsg={}".format(json.dumps(cmsg._rawmsg, ensure_ascii=False), cmsg))
else:
logger.debug("[WX]receive msg: {}, cmsg={}".format(cmsg.content, cmsg))
context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=False, msg=cmsg)
if context:
self.produce(context)
@time_checker
@_check
def handle_group(self, cmsg: ChatMessage):
if cmsg.ctype == ContextType.VOICE:
if not conf().get("speech_recognition"):
return
logger.debug("[WX]receive voice for group msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.IMAGE:
logger.debug("[WX]receive image for group msg: {}".format(cmsg.content))
elif cmsg.ctype in [ContextType.JOIN_GROUP, ContextType.PATPAT]:
logger.debug("[WX]receive note msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.TEXT:
pass
else:
logger.debug("[WX]receive group msg: {}".format(cmsg.content))
context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
if context:
self.produce(context)
# unified send function; each Channel implements it and dispatches on reply.type
def send(self, reply: Reply, context: Context):
logger.debug(f"context: {context}")
receiver = context["receiver"]
actual_user_id = context["msg"].actual_user_id
if reply.type == ReplyType.TEXT or reply.type == ReplyType.TEXT_:
match = re.search(r"^@(.*?)\n", reply.content)
logger.debug(f"match: {match}")
if match:
new_content = re.sub(r"^@(.*?)\n", "\n", reply.content)
at_list = [actual_user_id]
logger.debug(f"new_content: {new_content}")
wework.send_room_at_msg(receiver, new_content, at_list)
else:
wework.send_text(receiver, reply.content)
logger.info("[WX] sendMsg={}, receiver={}".format(reply, receiver))
elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
wework.send_text(receiver, reply.content)
logger.info("[WX] sendMsg={}, receiver={}".format(reply, receiver))
elif reply.type == ReplyType.IMAGE: # 从文件读取图片
image_storage = reply.content
image_storage.seek(0)
# Read data from image_storage
data = image_storage.read()
# Create a temporary file
with tempfile.NamedTemporaryFile(delete=False) as temp:
temp_path = temp.name
temp.write(data)
# Send the image
wework.send_image(receiver, temp_path)
logger.info("[WX] sendImage, receiver={}".format(receiver))
# Remove the temporary file
os.remove(temp_path)
elif reply.type == ReplyType.IMAGE_URL: # 从网络下载图片
img_url = reply.content
filename = str(uuid.uuid4())
# 调用你的函数,下载图片并保存为本地文件
image_path = download_and_compress_image(img_url, filename)
wework.send_image(receiver, file_path=image_path)
logger.info("[WX] sendImage url={}, receiver={}".format(img_url, receiver))
elif reply.type == ReplyType.VIDEO_URL:
video_url = reply.content
filename = str(uuid.uuid4())
video_path = download_video(video_url, filename)
if video_path is None:
# 如果视频太大,下载可能会被跳过,此时 video_path 将为 None
wework.send_text(receiver, "抱歉,视频太大了!!!")
else:
wework.send_video(receiver, video_path)
logger.info("[WX] sendVideo, receiver={}".format(receiver))
elif reply.type == ReplyType.VOICE:
current_dir = os.getcwd()
voice_file = reply.content.split("/")[-1]
reply.content = os.path.join(current_dir, "tmp", voice_file)
wework.send_file(receiver, reply.content)
logger.info("[WX] sendFile={}, receiver={}".format(reply.content, receiver))
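The `ReplyType.IMAGE` branch above writes an in-memory image to a temporary file before handing a path to `wework.send_image`, then deletes the file. A minimal sketch of that pattern, where `send_func` is a stand-in for the channel API:

```python
import os
import tempfile
from io import BytesIO

def send_image_via_temp_file(image_storage, send_func):
    """Write an in-memory image to a temp file, pass its path to the
    channel's send function, then clean the file up afterwards."""
    image_storage.seek(0)
    data = image_storage.read()
    with tempfile.NamedTemporaryFile(delete=False) as temp:
        temp_path = temp.name
        temp.write(data)
    try:
        return send_func(temp_path)
    finally:
        # Remove the temporary file regardless of send outcome
        os.remove(temp_path)

sent = []
send_image_via_temp_file(BytesIO(b"fake-image-bytes"),
                         lambda p: sent.append(os.path.exists(p)))
```

`delete=False` is needed because the path is handed to another API after the `with` block closes the handle; the `finally` clause guarantees cleanup.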
@@ -0,0 +1,211 @@
import datetime
import json
import os
import re
import time
import pilk
from bridge.context import ContextType
from channel.chat_message import ChatMessage
from common.log import logger
from ntwork.const import send_type
def get_with_retry(get_func, max_retries=5, delay=5):
retries = 0
result = None
while retries < max_retries:
result = get_func()
if result:
break
logger.warning(f"获取数据失败,重试第{retries + 1}次······")
retries += 1
time.sleep(delay) # 等待一段时间后重试
return result
def get_room_info(wework, conversation_id):
logger.debug(f"传入的 conversation_id: {conversation_id}")
rooms = wework.get_rooms()
if not rooms or 'room_list' not in rooms:
logger.error(f"获取群聊信息失败: {rooms}")
return None
time.sleep(1)
logger.debug(f"获取到的群聊信息: {rooms}")
for room in rooms['room_list']:
if room['conversation_id'] == conversation_id:
return room
return None
def cdn_download(wework, message, file_name):
data = message["data"]
aes_key = data["cdn"]["aes_key"]
file_size = data["cdn"]["size"]
# 获取当前工作目录,然后与文件名拼接得到保存路径
current_dir = os.getcwd()
save_path = os.path.join(current_dir, "tmp", file_name)
# 下载保存图片到本地
if "url" in data["cdn"].keys() and "auth_key" in data["cdn"].keys():
url = data["cdn"]["url"]
auth_key = data["cdn"]["auth_key"]
# result = wework.wx_cdn_download(url, auth_key, aes_key, file_size, save_path) # ntwork库本身接口有问题,缺失了aes_key这个参数
"""
下载wx类型的cdn文件,以https开头
"""
data = {
'url': url,
'auth_key': auth_key,
'aes_key': aes_key,
'size': file_size,
'save_path': save_path
}
result = wework._WeWork__send_sync(send_type.MT_WXCDN_DOWNLOAD_MSG, data) # 直接用wx_cdn_download的接口内部实现来调用
elif "file_id" in data["cdn"].keys():
file_type = 2
file_id = data["cdn"]["file_id"]
result = wework.c2c_cdn_download(file_id, aes_key, file_size, file_type, save_path)
else:
logger.error(f"something is wrong, data: {data}")
return
# 输出下载结果
logger.debug(f"result: {result}")
def c2c_download_and_convert(wework, message, file_name):
data = message["data"]
aes_key = data["cdn"]["aes_key"]
file_size = data["cdn"]["size"]
file_type = 5
file_id = data["cdn"]["file_id"]
current_dir = os.getcwd()
save_path = os.path.join(current_dir, "tmp", file_name)
result = wework.c2c_cdn_download(file_id, aes_key, file_size, file_type, save_path)
logger.debug(result)
# 在下载完SILK文件之后,立即将其转换为WAV文件
base_name, _ = os.path.splitext(save_path)
wav_file = base_name + ".wav"
pilk.silk_to_wav(save_path, wav_file, rate=24000)
# 删除SILK文件
try:
os.remove(save_path)
except Exception as e:
pass
class WeworkMessage(ChatMessage):
def __init__(self, wework_msg, wework, is_group=False):
try:
super().__init__(wework_msg)
self.msg_id = wework_msg['data'].get('conversation_id', wework_msg['data'].get('room_conversation_id'))
# 使用.get()防止 'send_time' 键不存在时抛出错误
self.create_time = wework_msg['data'].get("send_time")
self.is_group = is_group
self.wework = wework
if wework_msg["type"] == 11041: # 文本消息类型
if any(substring in wework_msg['data']['content'] for substring in ("该消息类型暂不能展示", "不支持的消息类型")):
return
self.ctype = ContextType.TEXT
self.content = wework_msg['data']['content']
elif wework_msg["type"] == 11044: # 语音消息类型,需要缓存文件
file_name = datetime.datetime.now().strftime('%Y%m%d%H%M%S') + ".silk"
base_name, _ = os.path.splitext(file_name)
file_name_2 = base_name + ".wav"
current_dir = os.getcwd()
self.ctype = ContextType.VOICE
self.content = os.path.join(current_dir, "tmp", file_name_2)
self._prepare_fn = lambda: c2c_download_and_convert(wework, wework_msg, file_name)
elif wework_msg["type"] == 11042: # 图片消息类型,需要下载文件
file_name = datetime.datetime.now().strftime('%Y%m%d%H%M%S') + ".jpg"
current_dir = os.getcwd()
self.ctype = ContextType.IMAGE
self.content = os.path.join(current_dir, "tmp", file_name)
self._prepare_fn = lambda: cdn_download(wework, wework_msg, file_name)
elif wework_msg["type"] == 11072: # 新成员入群通知
self.ctype = ContextType.JOIN_GROUP
member_list = wework_msg['data']['member_list']
self.actual_user_nickname = member_list[0]['name']
self.actual_user_id = member_list[0]['user_id']
self.content = f"{self.actual_user_nickname}加入了群聊!"
directory = os.path.join(os.getcwd(), "tmp")
rooms = get_with_retry(wework.get_rooms)
if not rooms:
logger.error("更新群信息失败···")
else:
result = {}
for room in rooms['room_list']:
# 获取聊天室ID
room_wxid = room['conversation_id']
# 获取聊天室成员
room_members = wework.get_room_members(room_wxid)
# 将聊天室成员保存到结果字典中
result[room_wxid] = room_members
with open(os.path.join(directory, 'wework_room_members.json'), 'w', encoding='utf-8') as f:
json.dump(result, f, ensure_ascii=False, indent=4)
logger.info("有新成员加入,已自动更新群成员列表缓存!")
else:
raise NotImplementedError(
"Unsupported message type: Type:{} MsgType:{}".format(wework_msg["type"], wework_msg.get("MsgType")))
data = wework_msg['data']
login_info = self.wework.get_login_info()
logger.debug(f"login_info: {login_info}")
nickname = f"{login_info['username']}({login_info['nickname']})" if login_info['nickname'] else login_info['username']
user_id = login_info['user_id']
sender_id = data.get('sender')
conversation_id = data.get('conversation_id')
sender_name = data.get("sender_name")
self.from_user_id = user_id if sender_id == user_id else conversation_id
self.from_user_nickname = nickname if sender_id == user_id else sender_name
self.to_user_id = user_id
self.to_user_nickname = nickname
self.other_user_nickname = sender_name
self.other_user_id = conversation_id
if self.is_group:
conversation_id = data.get('conversation_id') or data.get('room_conversation_id')
self.other_user_id = conversation_id
if conversation_id:
room_info = get_room_info(wework=wework, conversation_id=conversation_id)
self.other_user_nickname = room_info.get('nickname', None) if room_info else None
at_list = data.get('at_list', [])
tmp_list = []
for at in at_list:
tmp_list.append(at['nickname'])
at_list = tmp_list
logger.debug(f"at_list: {at_list}")
logger.debug(f"nickname: {nickname}")
self.is_at = False
if nickname in at_list or login_info['nickname'] in at_list or login_info['username'] in at_list:
self.is_at = True
self.at_list = at_list
# 检查消息内容是否包含@用户名。处理复制粘贴的消息,这类消息可能不会触发@通知,但内容中可能包含 "@用户名"。
content = data.get('content', '')
name = nickname
pattern = f"@{re.escape(name)}(\u2005|\u0020)"
if re.search(pattern, content):
logger.debug(f"Wechaty message {self.msg_id} includes at")
self.is_at = True
if not self.actual_user_id:
self.actual_user_id = data.get("sender")
self.actual_user_nickname = sender_name if self.ctype != ContextType.JOIN_GROUP else self.actual_user_nickname
else:
logger.error("群聊消息中没有找到 conversation_id 或 room_conversation_id")
logger.debug(f"WeworkMessage has been successfully instantiated with message id: {self.msg_id}")
except Exception as e:
logger.error(f"在 WeworkMessage 的初始化过程中出现错误:{e}")
raise e
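The `get_with_retry` helper at the top of this file retries any getter until it returns a truthy value. A self-contained sketch of that contract — `flaky_get_rooms` is a made-up stand-in for `wework.get_rooms`, and the delay is shortened for the demo:

```python
import time

def get_with_retry(get_func, max_retries=5, delay=0.01):
    """Call get_func up to max_retries times, stopping at the
    first truthy result (same contract as the helper above)."""
    retries = 0
    result = None
    while retries < max_retries:
        result = get_func()
        if result:
            break
        retries += 1
        time.sleep(delay)  # wait briefly before retrying
    return result

calls = {"n": 0}
def flaky_get_rooms():
    # Succeeds only on the third call, simulating a slow backend
    calls["n"] += 1
    return {"room_list": []} if calls["n"] >= 3 else None

rooms = get_with_retry(flaky_get_rooms)
```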
@@ -2,7 +2,11 @@
OPEN_AI = "openAI"
CHATGPT = "chatGPT"
BAIDU = "baidu"
XUNFEI = "xunfei"
CHATGPTONAZURE = "chatGPTOnAzure"
LINKAI = "linkai"
VERSION = "1.3.0"
CLAUDEAI = "claude"
MODEL_LIST = ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4", "wenxin", "xunfei","claude"]
@@ -30,6 +30,8 @@
"conversation_max_tokens": 1000,
"expires_in_seconds": 3600,
"character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
"temperature": 0.7,
"top_p": 1,
"subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输入。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持tool、角色扮演和文字冒险等丰富的插件。\n输入{trigger_prefix}#help 查看详细指令。",
"use_linkai": false,
"linkai_api_key": "",
@@ -16,15 +16,15 @@ available_setting = {
"open_ai_api_base": "https://api.openai.com/v1",
"proxy": "", # openai使用的代理
# chatgpt模型, 当use_azure_chatgpt为true时,其名称为Azure上model deployment名称
"model": "gpt-3.5-turbo", # 还支持 gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei
"use_azure_chatgpt": False, # 是否使用azure的chatgpt
"azure_deployment_id": "", # azure 模型部署名称
"azure_api_version": "", # azure api版本
# Bot触发配置
"single_chat_prefix": ["bot", "@bot"], # 私聊时文本需要包含该前缀才能触发机器人回复
"single_chat_reply_prefix": "[bot] ", # 私聊时自动回复的前缀,用于区分真人
"single_chat_reply_suffix": "", # 私聊时自动回复的后缀,\n 可以换行
"group_chat_prefix": ["@bot"], # 群聊时包含该前缀则会触发机器人回复
"group_chat_reply_prefix": "", # 群聊时自动回复的前缀
"group_chat_reply_suffix": "", # 群聊时自动回复的后缀,\n 可以换行
"group_chat_keyword": [], # 群聊时包含该关键词则会触发机器人回复
@@ -52,16 +52,25 @@ available_setting = {
"request_timeout": 60, # chatgpt请求超时时间,openai接口默认设置为600,对于难问题一般需要较长时间
"timeout": 120, # chatgpt重试超时时间,在这个时间内,将会自动重试
# Baidu 文心一言参数
"baidu_wenxin_model": "eb-instant", # 默认使用ERNIE-Bot-turbo模型
"baidu_wenxin_api_key": "", # Baidu api key
"baidu_wenxin_secret_key": "", # Baidu secret key
# 讯飞星火API
"xunfei_app_id": "", # 讯飞应用ID
"xunfei_api_key": "", # 讯飞 API key
"xunfei_api_secret": "", # 讯飞 API secret
# claude 配置
"claude_api_cookie": "",
"claude_uuid": "",
# wework的通用配置
"wework_smart": True, # 配置wework是否使用已登录的企业微信,False为多开
# 语音设置
"speech_recognition": False, # 是否开启语音识别
"group_speech_recognition": False, # 是否开启群组语音识别
"voice_reply_voice": False, # 是否使用语音回复语音,需要设置对应语音合成引擎的api key
"always_reply_voice": False, # 是否一直使用语音回复
"voice_to_text": "openai", # 语音识别引擎,支持openai,baidu,google,azure
"text_to_voice": "baidu", # 语音合成引擎,支持baidu,google,pytts(offline),azure,elevenlabs
# baidu 语音api配置, 使用百度语音识别和语音合成时需要
"baidu_app_id": "",
"baidu_api_key": "",
@@ -71,6 +80,9 @@ available_setting = {
# azure 语音api配置, 使用azure语音识别和语音合成时需要
"azure_voice_api_key": "",
"azure_voice_region": "japaneast",
# elevenlabs 语音api配置
"xi_api_key": "", # 获取api key的方法可以参考https://docs.elevenlabs.io/api-reference/quick-start/authentication
"xi_voice_id": "", # ElevenLabs提供了9种英式、美式等英语发音id,分别是"Adam/Antoni/Arnold/Bella/Domi/Elli/Josh/Rachel/Sam"
# 服务时间限制,目前支持itchat
"chat_time_module": False, # 是否开启服务时间限制
"chat_start_time": "00:00", # 服务开始时间
@@ -112,7 +124,8 @@ available_setting = {
# 知识库平台配置
"use_linkai": False,
"linkai_api_key": "",
"linkai_app_code": "",
"linkai_api_base": "https://api.link-ai.chat", # linkAI服务地址,若国内无法访问或延迟较高可改为 https://api.link-ai.tech
}
@@ -1,4 +1,4 @@
FROM python:3.10-slim-bullseye
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
@@ -32,4 +32,4 @@ RUN chmod +x /entrypoint.sh \
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
@@ -4,15 +4,15 @@ import json
import os
import random
import string
import traceback
import logging
from typing import Tuple
import bridge.bridge
import plugins
from bridge.bridge import Bridge
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from common import const
from common.log import logger
from config import conf, load_config, global_config
from plugins import *
@@ -32,6 +32,10 @@ COMMANDS = {
"args": ["口令"],
"desc": "管理员认证",
},
"model": {
"alias": ["model", "模型"],
"desc": "查看和设置全局模型",
},
"set_openai_api_key": {
"alias": ["set_openai_api_key"],
"args": ["api_key"],
@@ -257,6 +261,18 @@ class Godcmd(Plugin):
break
if not ok:
result = "插件不存在或未启用"
elif cmd == "model":
if not isadmin and not self.is_admin_in_group(e_context["context"]):
ok, result = False, "需要管理员权限执行"
elif len(args) == 0:
ok, result = True, "当前模型为: " + str(conf().get("model"))
elif len(args) == 1:
if args[0] not in const.MODEL_LIST:
ok, result = False, "模型名称不存在"
else:
conf()["model"] = args[0]
Bridge().reset_bot()
ok, result = True, "模型设置为: " + str(conf().get("model"))
elif cmd == "id":
ok, result = True, user
elif cmd == "set_openai_api_key":
@@ -294,8 +310,10 @@ class Godcmd(Plugin):
except Exception as e:
ok, result = False, "你没有设置私有GPT模型"
elif cmd == "reset":
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI, const.BAIDU, const.XUNFEI]:
bot.sessions.clear_session(session_id)
if Bridge().chat_bots.get(bottype):
Bridge().chat_bots.get(bottype).sessions.clear_session(session_id)
channel.cancel_session(session_id)
ok, result = True, "会话已重置"
else:
@@ -317,15 +335,20 @@ class Godcmd(Plugin):
load_config()
ok, result = True, "配置已重载"
elif cmd == "resetall":
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI,
const.BAIDU, const.XUNFEI]:
channel.cancel_all_session()
bot.sessions.clear_all_session()
ok, result = True, "重置所有会话成功"
else:
ok, result = False, "当前对话机器人不支持重置会话"
elif cmd == "debug":
if logger.getEffectiveLevel() == logging.DEBUG: # 判断当前日志模式是否DEBUG
logger.setLevel(logging.INFO)
ok, result = True, "DEBUG模式已关闭"
else:
logger.setLevel(logging.DEBUG)
ok, result = True, "DEBUG模式已开启"
elif cmd == "plist":
plugins = PluginManager().list_plugins()
ok = True
@@ -437,3 +460,9 @@ class Godcmd(Plugin):
def get_help_text(self, isadmin=False, isgroup=False, **kwargs):
return get_help_text(isadmin, isgroup)
def is_admin_in_group(self, context):
if context["isgroup"]:
return context.kwargs.get("msg").actual_user_id in global_config["admin_users"]
return False
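The updated `#debug` command above toggles the log level based on the logger's current effective level instead of only enabling it. The toggle logic can be isolated with the stdlib `logging` module directly:

```python
import logging

def toggle_debug(logger):
    """Flip DEBUG mode on or off, mirroring the command branch above."""
    if logger.getEffectiveLevel() == logging.DEBUG:  # 判断当前日志模式是否DEBUG
        logger.setLevel(logging.INFO)
        return "DEBUG模式已关闭"
    logger.setLevel(logging.DEBUG)
    return "DEBUG模式已开启"

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
first = toggle_debug(log)   # INFO -> DEBUG
second = toggle_debug(log)  # DEBUG -> INFO
```

`getEffectiveLevel()` resolves inherited levels, so the check also behaves sensibly when the logger's level was never set explicitly.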
@@ -2,7 +2,7 @@
import json
import os
import requests
import plugins
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
@@ -51,15 +51,37 @@ class Keyword(Plugin):
content = e_context["context"].content.strip()
logger.debug("[keyword] on_handle_context. content: %s" % content)
if content in self.keyword:
logger.info(f"[keyword] 匹配到关键字【{content}】")
reply_text = self.keyword[content]
# 判断匹配内容的类型
if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".img"]):
# 如果是以 http:// 或 https:// 开头,且".jpg", ".jpeg", ".png", ".gif", ".img"结尾,则认为是图片 URL
reply = Reply()
reply.type = ReplyType.IMAGE_URL
reply.content = reply_text
elif (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar"]):
# 如果是以 http:// 或 https:// 开头,且".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar"结尾,则下载文件到tmp目录并发送给用户
file_path = "tmp"
if not os.path.exists(file_path):
os.makedirs(file_path)
file_name = reply_text.split("/")[-1] # 获取文件名
file_path = os.path.join(file_path, file_name)
response = requests.get(reply_text)
with open(file_path, "wb") as f:
f.write(response.content)
#channel/wechat/wechat_channel.py和channel/wechat_channel.py中缺少ReplyType.FILE类型。
reply = Reply()
reply.type = ReplyType.FILE
reply.content = file_path
elif (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".mp4"]):
# 如果是以 http:// 或 https:// 开头,且".mp4"结尾,则下载视频到tmp目录并发送给用户
reply = Reply()
reply.type = ReplyType.VIDEO_URL
reply.content = reply_text
else:
# 否则认为是普通文本
reply = Reply()
@@ -68,7 +90,7 @@ class Keyword(Plugin):
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS # 事件结束,并跳过处理context的默认逻辑
def get_help_text(self, **kwargs):
help_text = "关键词过滤"
return help_text
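The keyword plugin above now dispatches on the reply URL's scheme and extension: image URLs, downloadable files, videos, and plain text each get a different reply type. That classification can be sketched as a standalone function, with reply types reduced to plain strings for the demo:

```python
# Extension lists mirror the checks in the plugin above
IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".img")
FILE_EXTS = (".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar")
VIDEO_EXTS = (".mp4",)

def classify_reply(reply_text: str) -> str:
    """Map a keyword reply string to a reply type by URL extension."""
    if reply_text.startswith(("http://", "https://")):
        if reply_text.endswith(IMAGE_EXTS):
            return "IMAGE_URL"
        if reply_text.endswith(FILE_EXTS):
            return "FILE"
        if reply_text.endswith(VIDEO_EXTS):
            return "VIDEO_URL"
    # Anything else is treated as ordinary text
    return "TEXT"
```

`str.startswith` and `str.endswith` both accept a tuple of candidates, which keeps the branches short compared with the `any(...)` generator form used in the plugin.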
@@ -1,18 +1,18 @@
## 插件说明
基于 LinkAI 提供的知识库、Midjourney绘画、文档对话等能力对机器人的功能进行增强。平台地址: https://chat.link-ai.tech/console
## 插件配置
将 `plugins/linkai` 目录下的 `config.json.template` 配置模板复制为最终生效的 `config.json`。(如果未配置则会默认使用 `config.json.template` 模板中的配置,但功能默认关闭,需要时可通过指令开启)。
以下是插件配置项说明:
```bash
{
"group_app_map": { # 群聊 和 应用编码 的映射关系
"测试群名称1": "default", # 表示在名称为 "测试群名称1" 的群聊中将使用app_code 为 default 的应用
"测试群名称2": "Kv2fXJcH"
},
"midjourney": {
"enabled": true, # midjourney 绘画开关
@@ -21,19 +21,30 @@
"max_tasks": 3, # 支持同时提交的总任务个数
"max_tasks_per_user": 1, # 支持单个用户同时提交的任务个数
"use_image_create_prefix": true # 是否使用全局的绘画触发词,如果开启将同时支持由`config.json`中的 image_create_prefix 配置触发
},
"summary": {
"enabled": true, # 文档总结和对话功能开关
"group_enabled": true, # 是否支持群聊开启
"max_file_size": 10000 # 文件的大小限制,单位KB,默认为10M,超过该大小直接忽略
}
}
```
根目录 `config.json` 中配置,`API_KEY` 在 [控制台](https://chat.link-ai.tech/console/interface) 中创建并复制过来:
```bash
"linkai_api_key": "Link_xxxxxxxxx"
```
注意:
- 配置项中 `group_app_map` 部分是用于映射群聊与LinkAI平台上的应用, `midjourney` 部分是 mj 画图的配置,`summary` 部分是文档总结及对话功能的配置。三部分的配置相互独立,可按需开启
- 实际 `config.json` 配置中应保证json格式,不应携带 '#' 及后面的注释
- 如果是`docker`部署,可通过映射 `plugins/config.json` 到容器中来完成插件配置,参考[文档](https://github.com/zhayujie/chatgpt-on-wechat#3-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)
## 插件使用
> 使用插件中的知识库管理功能需要首先开启`linkai`对话,依赖全局 `config.json` 中的 `use_linkai` 和 `linkai_api_key` 配置;而midjourney绘画 和 summary文档总结对话功能则只需填写 `linkai_api_key` 配置,`use_linkai` 无论是否关闭均可使用。具体可参考 [详细文档](https://link-ai.tech/platform/link-app/wechat)。
完成配置后运行项目,会自动运行插件,输入 `#help linkai` 可查看插件功能。
@@ -51,6 +62,8 @@
### 2.Midjourney绘画功能
若未配置 `plugins/linkai/config.json`,默认会关闭画图功能,直接使用 `$mj open` 可基于默认配置直接使用mj画图。
指令格式:
```
@@ -69,7 +82,27 @@
"$mjr 11055927171882"
```
注意事项:
1. 使用 `$mj open` 和 `$mj close` 指令可以快速打开和关闭绘图功能
2. 海外环境部署请将 `img_proxy` 设置为 `false`
3. 开启 `use_image_create_prefix` 配置后可直接复用全局画图触发词,以"画"开头便可以生成图片
4. 提示词内容中包含敏感词或者参数格式错误可能导致绘画失败,生成失败不消耗积分
5. 若未收到图片可能有两种原因:一种是收到了图片但微信发送失败,可以在后台日志查看有没有获取到图片url,一般原因是受到了wx限制,可以稍后重试或更换账号尝试;另一种情况是图片提示词存在疑似违规,mj不会直接提示错误但会在画图后删掉原图导致程序无法获取,这种情况不消耗积分
### 3.文档总结对话功能
#### 配置
该功能依赖 LinkAI的知识库及对话功能,需要在项目根目录的config.json中设置 `linkai_api_key`, 同时根据上述插件配置说明,在插件config.json添加 `summary` 部分的配置,设置 `enabled` 为 true。
如果不想创建 `plugins/linkai/config.json` 配置,可以直接通过 `$linkai sum open` 指令开启该功能。
#### 使用
功能开启后,向机器人发送 **文件** 或 **分享链接卡片** 即可生成摘要,进一步可以与文件或链接的内容进行多轮对话。
#### 限制
1. 文件目前支持 `txt`, `docx`, `pdf`, `md`, `csv` 格式,文件大小由 `max_file_size` 限制,最大不超过15M,最多可支持百万字的文件。但不建议上传字数过多的文件,一是token消耗过大,二是摘要很难覆盖到全部内容,只能通过多轮对话来了解细节。
2. 分享链接 目前仅支持 公众号文章,后续会支持更多文章类型及视频链接等
3. 总结及对话的 费用与 LinkAI 3.5-4K 模型的计费方式相同,按文档内容的tokens进行计算
@@ -1,7 +1,7 @@
{
"group_app_map": {
"测试群1": "default",
"测试群2": "Kv2fXJcH"
},
"midjourney": {
"enabled": true,
@@ -10,5 +10,10 @@
"max_tasks": 3,
"max_tasks_per_user": 1,
"use_image_create_prefix": true
},
"summary": {
"enabled": true,
"group_enabled": true,
"max_file_size": 15000
}
}
@@ -4,7 +4,11 @@ from bridge.reply import Reply, ReplyType
from config import global_config
from plugins import *
from .midjourney import MJBot
from .summary import LinkSummary
from bridge import bridge
from common.expired_dict import ExpiredDict
from common import const
import os
@plugins.register(
@@ -12,16 +16,24 @@ from bridge import bridge
desc="A plugin that supports knowledge base and midjourney drawing.",
version="0.1.0",
author="https://link-ai.tech",
desire_priority=99
)
class LinkAI(Plugin):
def __init__(self):
super().__init__()
self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context
self.config = super().load_config()
if not self.config:
# 未加载到配置,使用模板中的配置
self.config = self._load_config_template()
if self.config:
self.mj_bot = MJBot(self.config.get("midjourney"))
self.sum_config = {}
if self.config:
self.sum_config = self.config.get("summary")
logger.info("[LinkAI] inited")
def on_handle_context(self, e_context: EventContext):
"""
消息处理逻辑
@@ -31,10 +43,39 @@ class LinkAI(Plugin):
return
context = e_context['context']
if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE, ContextType.FILE, ContextType.SHARING]:
# filter content no need solve
return
if context.type == ContextType.FILE and self._is_summary_open(context):
# 文件处理
context.get("msg").prepare()
file_path = context.content
if not LinkSummary().check_file(file_path, self.sum_config):
return
_send_info(e_context, "正在为你加速生成摘要,请稍后")
res = LinkSummary().summary_file(file_path)
if not res:
_set_reply_text("总结出现异常,请稍后再试吧", e_context)
return
USER_FILE_MAP[_find_user_id(context) + "-sum_id"] = res.get("summary_id")
_set_reply_text(res.get("summary") + "\n\n💬 发送 \"开启对话\" 可以开启与文件内容的对话", e_context, level=ReplyType.TEXT)
os.remove(file_path)
return
if (context.type == ContextType.SHARING and self._is_summary_open(context)) or \
(context.type == ContextType.TEXT and LinkSummary().check_url(context.content)):
if not LinkSummary().check_url(context.content):
return
_send_info(e_context, "正在为你加速生成摘要,请稍后")
res = LinkSummary().summary_url(context.content)
if not res:
_set_reply_text("总结出现异常,请稍后再试吧", e_context)
return
_set_reply_text(res.get("summary") + "\n\n💬 发送 \"开启对话\" 可以开启与文章内容的对话", e_context, level=ReplyType.TEXT)
USER_FILE_MAP[_find_user_id(context) + "-sum_id"] = res.get("summary_id")
return
mj_type = self.mj_bot.judge_mj_task_type(e_context)
if mj_type:
# MJ作图任务处理
@@ -46,10 +87,38 @@ class LinkAI(Plugin):
self._process_admin_cmd(e_context)
return
if context.type == ContextType.TEXT and context.content == "开启对话" and _find_sum_id(context):
# 文本对话
_send_info(e_context, "正在为你开启对话,请稍后")
res = LinkSummary().summary_chat(_find_sum_id(context))
if not res:
_set_reply_text("开启对话失败,请稍后再试吧", e_context)
return
USER_FILE_MAP[_find_user_id(context) + "-file_id"] = res.get("file_id")
_set_reply_text("💡你可以问我关于这篇文章的任何问题,例如:\n\n" + res.get("questions") + "\n\n发送 \"退出对话\" 可以关闭与文章的对话", e_context, level=ReplyType.TEXT)
return
if context.type == ContextType.TEXT and context.content == "退出对话" and _find_file_id(context):
del USER_FILE_MAP[_find_user_id(context) + "-file_id"]
bot = bridge.Bridge().find_chat_bot(const.LINKAI)
bot.sessions.clear_session(context["session_id"])
_set_reply_text("对话已退出", e_context, level=ReplyType.TEXT)
return
if context.type == ContextType.TEXT and _find_file_id(context):
bot = bridge.Bridge().find_chat_bot(const.LINKAI)
context.kwargs["file_id"] = _find_file_id(context)
reply = bot.reply(context.content, context)
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS
return
if self._is_chat_task(e_context):
# 文本对话任务处理
self._process_chat_task(e_context)
# 插件管理功能
def _process_admin_cmd(self, e_context: EventContext):
context = e_context['context']
@@ -70,7 +139,7 @@ class LinkAI(Plugin):
is_open = False
conf()["use_linkai"] = is_open
bridge.Bridge().reset_bot()
_set_reply_text(f"LinkAI对话功能{tips_text}", e_context, level=ReplyType.INFO)
return
if len(cmd) == 3 and cmd[1] == "app":
@@ -91,11 +160,36 @@ class LinkAI(Plugin):
# 保存插件配置
super().save_config(self.config)
_set_reply_text(f"应用设置成功: {app_code}", e_context, level=ReplyType.INFO)
else:
_set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context,
level=ReplyType.INFO)
return
if len(cmd) == 3 and cmd[1] == "sum" and (cmd[2] == "open" or cmd[2] == "close"):
# 知识库开关指令
if not _is_admin(e_context):
_set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
return
is_open = True
tips_text = "开启"
if cmd[2] == "close":
tips_text = "关闭"
is_open = False
if not self.sum_config:
_set_reply_text(f"插件未启用summary功能,请参考以下链接添加插件配置\n\nhttps://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/linkai/README.md", e_context, level=ReplyType.INFO)
else:
self.sum_config["enabled"] = is_open
_set_reply_text(f"文章总结功能{tips_text}", e_context, level=ReplyType.INFO)
return
_set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context,
level=ReplyType.INFO)
return
def _is_summary_open(self, context) -> bool:
if not self.sum_config or not self.sum_config.get("enabled"):
return False
if not context.kwargs.get("isgroup") and not self.sum_config.get("group_enabled"):
return False
return True
# LinkAI 对话任务处理
def _is_chat_task(self, e_context: EventContext):
context = e_context['context']
@@ -109,7 +203,7 @@ class LinkAI(Plugin):
"""
context = e_context['context']
# 群聊应用管理
group_name = context.get("msg").from_user_nickname
app_code = self._fetch_group_app_code(group_name)
if app_code:
context.kwargs['app_code'] = app_code
@@ -127,7 +221,7 @@ class LinkAI(Plugin):
def get_help_text(self, verbose=False, **kwargs):
trigger_prefix = _get_trigger_prefix()
help_text = "用于集成 LinkAI 提供的知识库、Midjourney绘画、文档总结对话等能力。\n\n"
if not verbose:
return help_text
help_text += f'📖 知识库\n - 群聊中指定应用: {trigger_prefix}linkai app 应用编码\n'
@@ -137,8 +231,27 @@ class LinkAI(Plugin):
help_text += f"🎨 绘画\n - 生成: {trigger_prefix}mj 描述词1, 描述词2.. \n - 放大: {trigger_prefix}mju 图片ID 图片序号\n - 变换: {trigger_prefix}mjv 图片ID 图片序号\n - 重置: {trigger_prefix}mjr 图片ID"
help_text += f"\n\n例如:\n\"{trigger_prefix}mj a little cat, white --ar 9:16\"\n\"{trigger_prefix}mju 11055927171882 2\""
help_text += f"\n\"{trigger_prefix}mjv 11055927171882 2\"\n\"{trigger_prefix}mjr 11055927171882\""
help_text += f"\n\n💡 文档总结和对话\n - 开启: {trigger_prefix}linkai sum open\n - 使用: 发送文件、公众号文章等可生成摘要,并与内容对话"
return help_text
def _load_config_template(self):
logger.debug("No LinkAI plugin config.json, use plugins/linkai/config.json.template")
try:
plugin_config_path = os.path.join(self.path, "config.json.template")
if os.path.exists(plugin_config_path):
with open(plugin_config_path, "r", encoding="utf-8") as f:
plugin_conf = json.load(f)
plugin_conf["midjourney"]["enabled"] = False
plugin_conf["summary"]["enabled"] = False
return plugin_conf
except Exception as e:
logger.exception(e)
def _send_info(e_context: EventContext, content: str):
reply = Reply(ReplyType.TEXT, content)
channel = e_context["channel"]
channel.send(reply, e_context["context"])
# 静态方法
def _is_admin(e_context: EventContext) -> bool:
@@ -154,11 +267,26 @@ def _is_admin(e_context: EventContext) -> bool:
return context["receiver"] in global_config["admin_users"]
def _find_user_id(context):
if context["isgroup"]:
return context.kwargs.get("msg").actual_user_id
else:
return context["receiver"]
def _set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
reply = Reply(level, content)
e_context["reply"] = reply
e_context.action = EventAction.BREAK_PASS
def _get_trigger_prefix():
return conf().get("plugin_trigger_prefix", "$")
def _find_sum_id(context):
return USER_FILE_MAP.get(_find_user_id(context) + "-sum_id")
def _find_file_id(context):
return USER_FILE_MAP.get(_find_user_id(context) + "-file_id")
USER_FILE_MAP = ExpiredDict(conf().get("expires_in_seconds") or 60 * 60)
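`USER_FILE_MAP` relies on `common.expired_dict.ExpiredDict` so that per-user summary and file ids expire together with the session. A minimal sketch of such an expiring map — this is an illustration, not the project's actual implementation, which may differ:

```python
import time

class ExpiringMap(dict):
    """Dict whose entries behave as missing after expires_in_seconds."""
    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires = expires_in_seconds
        self._deadline = {}

    def __setitem__(self, key, value):
        # Every write refreshes the key's deadline
        self._deadline[key] = time.time() + self.expires
        super().__setitem__(key, value)

    def get(self, key, default=None):
        # Evict lazily on read once the deadline has passed
        if key in self._deadline and time.time() > self._deadline[key]:
            super().pop(key, None)
            self._deadline.pop(key, None)
        return super().get(key, default)

m = ExpiringMap(0.05)  # very short TTL for the demo
m["user1-sum_id"] = "s_123"
fresh = m.get("user1-sum_id")
time.sleep(0.06)
stale = m.get("user1-sum_id")
```

Lazy eviction on read keeps the sketch simple; a production version would also need to cover `__getitem__`, `__contains__`, and deletion.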
@@ -5,7 +5,6 @@ import requests
import threading
import time
from bridge.reply import Reply, ReplyType
import aiohttp
import asyncio
from bridge.context import ContextType
from plugins import EventContext, EventAction
@@ -68,8 +67,7 @@ class MJTask:
# midjourney bot
class MJBot:
def __init__(self, config):
self.base_url = conf().get("linkai_api_base", "https://api.link-ai.chat") + "/v1/img/midjourney"
self.headers = {"Authorization": "Bearer " + conf().get("linkai_api_key")}
self.config = config
self.tasks = {}
@@ -97,7 +95,7 @@ class MJBot:
return TaskType.VARIATION
elif cmd_list[0].lower() == f"{trigger_prefix}mjr":
return TaskType.RESET
elif context.type == ContextType.IMAGE_CREATE and self.config.get("use_image_create_prefix") and self.config.get("enabled"):
return TaskType.GENERATE
def process_mj_task(self, mj_type: TaskType, e_context: EventContext):
@@ -310,7 +308,7 @@ class MJBot:
# send img
reply = Reply(ReplyType.IMAGE_URL, task.img_url)
channel = e_context["channel"]
_send(channel, reply, e_context["context"])
# send info
trigger_prefix = conf().get("plugin_trigger_prefix", "$")
@@ -327,7 +325,7 @@ class MJBot:
text += f"\n\n🔄使用 {trigger_prefix}mjr 命令重新生成图片\n"
text += f"例如:\n{trigger_prefix}mjr {task.img_id}"
reply = Reply(ReplyType.INFO, text)
_send(channel, reply, e_context["context"])
self._print_tasks()
return
@@ -406,6 +404,19 @@ class MJBot:
return result
def _send(channel, reply: Reply, context, retry_cnt=0):
try:
channel.send(reply, context)
except Exception as e:
logger.error("[WX] sendMsg error: {}".format(str(e)))
if isinstance(e, NotImplementedError):
return
logger.exception(e)
if retry_cnt < 2:
time.sleep(3 + 3 * retry_cnt)
_send(channel, reply, context, retry_cnt + 1)
def check_prefix(content, prefix_list):
if not prefix_list:
return None
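The module-level `_send` helper above retries a failed channel send with a growing delay, giving up on `NotImplementedError`. Its control flow can be sketched as follows — `send_func` is injected and the delays are zeroed so the demo runs instantly:

```python
import time

def send_with_retry(send_func, reply, context, retry_cnt=0,
                    max_retries=2, base_delay=0):
    """Retry a channel send up to max_retries times with linear backoff."""
    try:
        send_func(reply, context)
    except NotImplementedError:
        # Reply type unsupported by this channel: retrying cannot help
        return
    except Exception:
        if retry_cnt < max_retries:
            time.sleep(base_delay * (retry_cnt + 1))  # 3s, 6s in the original
            send_with_retry(send_func, reply, context, retry_cnt + 1,
                            max_retries, base_delay)

attempts = []
def flaky_send(reply, context):
    # Fails twice, then succeeds, simulating a transient channel error
    attempts.append(reply)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")

send_with_retry(flaky_send, "img-url", {"receiver": "room1"})
```

Treating `NotImplementedError` as non-retryable is the key design point: only transient failures are worth the backoff.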
@@ -0,0 +1,89 @@
import os

import requests

from common.log import logger
from config import conf


class LinkSummary:
    def __init__(self):
        pass

    def summary_file(self, file_path: str):
        # keep the file handle open only for the duration of the upload
        with open(file_path, "rb") as f:
            file_body = {
                "file": f,
                "name": file_path.split("/")[-1],
            }
            res = requests.post(url=self.base_url() + "/v1/summary/file", headers=self.headers(), files=file_body, timeout=(5, 180))
        return self._parse_summary_res(res)

    def summary_url(self, url: str):
        body = {
            "url": url
        }
        res = requests.post(url=self.base_url() + "/v1/summary/url", headers=self.headers(), json=body, timeout=(5, 180))
        return self._parse_summary_res(res)

    def summary_chat(self, summary_id: str):
        body = {
            "summary_id": summary_id
        }
        res = requests.post(url=self.base_url() + "/v1/summary/chat", headers=self.headers(), json=body, timeout=(5, 180))
        if res.status_code == 200:
            res = res.json()
            logger.debug(f"[LinkSum] chat open, res={res}")
            if res.get("code") == 200:
                data = res.get("data")
                return {
                    "questions": data.get("questions"),
                    "file_id": data.get("file_id")
                }
        else:
            res_json = res.json()
            logger.error(f"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}")
        return None

    def _parse_summary_res(self, res):
        if res.status_code == 200:
            res = res.json()
            logger.debug(f"[LinkSum] url summary, res={res}")
            if res.get("code") == 200:
                data = res.get("data")
                return {
                    "summary": data.get("summary"),
                    "summary_id": data.get("summary_id")
                }
        else:
            res_json = res.json()
            logger.error(f"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}")
        return None

    def base_url(self):
        return conf().get("linkai_api_base", "https://api.link-ai.chat")

    def headers(self):
        return {"Authorization": "Bearer " + conf().get("linkai_api_key")}

    def check_file(self, file_path: str, sum_config: dict) -> bool:
        file_size = os.path.getsize(file_path) // 1000
        if (sum_config.get("max_file_size") and file_size > sum_config.get("max_file_size")) or file_size > 15000:
            logger.warn(f"[LinkSum] file size exceeds limit, no processing, file_size={file_size}KB")
            return False
        suffix = file_path.split(".")[-1]
        support_list = ["txt", "csv", "docx", "pdf", "md"]
        if suffix not in support_list:
            logger.warn(f"[LinkSum] unsupported file, suffix={suffix}, support_list={support_list}")
            return False
        return True

    def check_url(self, url: str):
        if not url:
            return False
        support_list = ["http://mp.weixin.qq.com", "https://mp.weixin.qq.com"]
        for support_url in support_list:
            if url.strip().startswith(support_url):
                return True
        logger.debug("[LinkSum] unsupported url, no need to process")
        return False
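`check_url` above whitelists only WeChat official-account article links. The same prefix check as a standalone predicate (the function name is illustrative, not from the repo):

```python
def is_supported_url(url: str) -> bool:
    """Mirror LinkSummary.check_url: only mp.weixin.qq.com article links pass."""
    if not url:
        return False
    support_list = ["http://mp.weixin.qq.com", "https://mp.weixin.qq.com"]
    return any(url.strip().startswith(prefix) for prefix in support_list)


print(is_supported_url("https://mp.weixin.qq.com/s/abc"))  # True
print(is_supported_url("https://example.com"))             # False
```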
@@ -15,8 +15,8 @@ class Plugin:
        """
        # Prefer the global config in plugins/config.json
        plugin_conf = pconf(self.name)
-       if not plugin_conf or not conf().get("use_global_plugin_config"):
-           # Fall back to the config in the plugin directory if the global config is missing or the global switch is off
+       if not plugin_conf:
+           # Fall back to the config in the plugin directory if the global config is missing
            plugin_config_path = os.path.join(self.path, "config.json")
            if os.path.exists(plugin_config_path):
                with open(plugin_config_path, "r", encoding="utf-8") as f:
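With this lookup order, a global entry in plugins/config.json takes precedence and the plugin-local config.json is read only when no global entry exists. A sketch of that priority with plain dicts standing in for `pconf` (function and key names here are hypothetical):

```python
import json
import os
import tempfile


def load_plugin_config(global_conf, plugin_dir, name):
    """Global config entry wins; fall back to <plugin_dir>/config.json only if absent."""
    plugin_conf = global_conf.get(name)
    if not plugin_conf:
        path = os.path.join(plugin_dir, "config.json")
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                plugin_conf = json.load(f)
    return plugin_conf


with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w", encoding="utf-8") as f:
        json.dump({"keyword": "local"}, f)
    print(load_plugin_config({"myplugin": {"keyword": "global"}}, d, "myplugin"))  # {'keyword': 'global'}
    print(load_plugin_config({}, d, "myplugin"))                                   # {'keyword': 'local'}
```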
@@ -16,13 +16,17 @@ dulwich

# wechaty
wechaty>=0.10.7
wechaty_puppet>=0.4.23
-pysilk_mod>=1.6.0 # needed by send voice
+# pysilk_mod>=1.6.0 # needed by send voice only in wechaty

# wechatmp wechatcom
web.py
wechatpy

# chatgpt-tool-hub plugin
--extra-index-url https://pypi.python.org/simple
chatgpt_tool_hub==0.4.6

# xunfei spark
websocket-client==1.2.0

# claude bot
curl_cffi
@@ -6,3 +6,4 @@ requests>=2.28.2
chardet>=5.1.0
Pillow
pre-commit
+web.py
@@ -0,0 +1,33 @@
import time

from elevenlabs import set_api_key, generate

from bridge.reply import Reply, ReplyType
from common.log import logger
from common.tmp_dir import TmpDir
from voice.voice import Voice
from config import conf

XI_API_KEY = conf().get("xi_api_key")
set_api_key(XI_API_KEY)
name = conf().get("xi_voice_id")


class ElevenLabsVoice(Voice):
    def __init__(self):
        pass

    def voiceToText(self, voice_file):
        pass

    def textToVoice(self, text):
        audio = generate(
            text=text,
            voice=name,
            model='eleven_multilingual_v1'
        )
        fileName = TmpDir().path() + "reply-" + str(int(time.time())) + "-" + str(hash(text) & 0x7FFFFFFF) + ".mp3"
        with open(fileName, "wb") as f:
            f.write(audio)
        logger.info("[ElevenLabs] textToVoice text={} voice file name={}".format(text, fileName))
        return Reply(ReplyType.VOICE, fileName)
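`textToVoice` above builds a unique mp3 filename from the current timestamp plus the text's hash masked to 31 bits, which keeps the number non-negative. That naming scheme in isolation (the tmp-dir prefix is illustrative):

```python
import time


def reply_voice_filename(text, tmp_dir="./tmp/"):
    """Timestamp plus 31-bit-masked text hash, as in ElevenLabsVoice.textToVoice."""
    return tmp_dir + "reply-" + str(int(time.time())) + "-" + str(hash(text) & 0x7FFFFFFF) + ".mp3"


name = reply_voice_filename("hello")
print(name)  # e.g. ./tmp/reply-1695540000-123456789.mp3
```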
@@ -29,4 +29,8 @@ def create_voice(voice_type):
        from voice.azure.azure_voice import AzureVoice

        return AzureVoice()
    elif voice_type == "elevenlabs":
        from voice.elevent.elevent_voice import ElevenLabsVoice

        return ElevenLabsVoice()
    raise RuntimeError
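`create_voice` dispatches on a string and raises a bare `RuntimeError` for unknown types. An equivalent registry-based sketch, not the repo's actual code, with stub classes standing in for the real voice implementations:

```python
class AzureVoice: ...
class ElevenLabsVoice: ...

# map voice_type strings to constructors; real code defers imports until needed
VOICE_REGISTRY = {
    "azure": AzureVoice,
    "elevenlabs": ElevenLabsVoice,
}


def create_voice(voice_type):
    try:
        return VOICE_REGISTRY[voice_type]()
    except KeyError:
        raise RuntimeError(f"unknown voice_type: {voice_type}")


print(type(create_voice("elevenlabs")).__name__)  # ElevenLabsVoice
```

Raising with a message instead of a bare `RuntimeError` makes misconfigured `voice_type` values easier to diagnose from logs.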