Mirror of https://github.com/zhayujie/chatgpt-on-wechat.git
Synced: 2026-05-08 20:31:04 +08:00
Compare commits
60 Commits
Commit SHA1s (author and date columns did not survive the mirror):

058c167f79, 49446d4872, ced560e1e1, 339102c3cd, 6331350239, 34e06fcbf8, 70aac312ff, 5e00704152, 1a9edb6907, 0c18c3a6dd, 847bb51ce4, fa60a5dc63, aaed3f9839, 21b956b983, 792e940279, c2477b26c0, 4b27de809b, 572932d8e8, 270dd778d9, dd04287b0a, 36ac6d005a, 701daedf49, 238f05f453, dd082bd212, cfd2f27b0b, a2160d135e, 16d7836369, f3de4dcc5f, e34523028f, efe2fbacd6, 2fa1df29be, f72cd13fba, 5b552dffbf, a0ae2d13dc, f7262a0a3a, 9736f121eb, 7c8fb7eacc, b45eea5908, 6babf4ee6c, 576526d4ee, c03e31b7be, a1aa925019, a5a234ed97, 5b5dbcd78b, bd1c6361d3, 1fc1febf03, 55cc35efa9, 5ba8fdc5e7, 6ea295e227, 5010c76ef7, 79c7f0c29f, 2b3e643786, 90cdff327c, 55c116e727, 3dd83aa6b7, a74aa12641, 151e8c69f9, d8bfa77705, 6bd286e8d5, 905532b681
```diff
@@ -8,7 +8,7 @@
 - [x] **基础对话:** 私聊及群聊的消息智能回复,支持多轮会话上下文记忆,支持 GPT-3.5, GPT-4, claude, 文心一言, 讯飞星火
 - [x] **语音识别:** 可识别语音消息,通过文字或语音回复,支持 azure, baidu, google, openai等多种语音模型
 - [x] **图片生成:** 支持图片生成 和 图生图(如照片修复),可选择 Dell-E, stable diffusion, replicate, midjourney模型
-- [x] **丰富插件:** 支持个性化插件扩展,已实现多角色切换、文字冒险、敏感词过滤、聊天记录总结等插件
+- [x] **丰富插件:** 支持个性化插件扩展,已实现多角色切换、文字冒险、敏感词过滤、聊天记录总结、文档总结和对话等插件
 - [X] **Tool工具:** 与操作系统和互联网交互,支持最新信息搜索、数学计算、天气和资讯查询、网页总结,基于 [chatgpt-tool-hub](https://github.com/goldfishh/chatgpt-tool-hub) 实现
 - [x] **知识库:** 通过上传知识库文件自定义专属机器人,可作为数字分身、领域知识库、智能客服使用,基于 [LinkAI](https://chat.link-ai.tech/console) 实现
```

```diff
@@ -16,7 +16,7 @@
 # 演示
 
-https://user-images.githubusercontent.com/26161723/233777277-e3b9928e-b88f-43e2-b0e0-3cbc923bc799.mp4
+https://github.com/zhayujie/chatgpt-on-wechat/assets/26161723/d5154020-36e3-41db-8706-40ce9f3f1b1e
 
 Demo made by [Visionn](https://www.wangpc.cc/)
```

```diff
@@ -27,11 +27,12 @@ Demo made by [Visionn](https://www.wangpc.cc/)
 <img width="240" src="./docs/images/contact.jpg">
 
 # 更新日志
 
+>**2023.09.26:** 插件增加 文件/文章链接 一键总结和对话的功能,使用参考:[插件说明](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/linkai#3%E6%96%87%E6%A1%A3%E6%80%BB%E7%BB%93%E5%AF%B9%E8%AF%9D%E5%8A%9F%E8%83%BD)
+
 >**2023.09.01:** 增加 [企微个人号](#1385) 通道,[claude](1388),讯飞星火模型
 
 >**2023.08.08:** 接入百度文心一言模型,通过 [插件](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/linkai) 支持 Midjourney 绘图
 
->**2023.06.12:** 接入 [LinkAI](https://chat.link-ai.tech/console) 平台,可在线创建个人知识库,并接入微信、公众号及企业微信中,打造专属客服机器人。使用参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
+>**2023.06.12:** 接入 [LinkAI](https://chat.link-ai.tech/console) 平台,可在线创建领域知识库,并接入微信、公众号及企业微信中,打造专属客服机器人。使用参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
 
 >**2023.04.26:** 支持企业微信应用号部署,兼容插件,并支持语音图片交互,私人助理理想选择,[使用文档](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md)。(contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
```

```diff
@@ -43,19 +44,19 @@ Demo made by [Visionn](https://www.wangpc.cc/)
 >**2023.03.09:** 基于 `whisper API`(后续已接入更多的语音`API`服务) 实现对微信语音消息的解析和回复,添加配置项 `"speech_recognition":true` 即可启用,使用参考 [#415](https://github.com/zhayujie/chatgpt-on-wechat/issues/415)。(contributed by [wanggang1987](https://github.com/wanggang1987) in [#385](https://github.com/zhayujie/chatgpt-on-wechat/pull/385))
 
 >**2023.03.02:** 接入[ChatGPT API](https://platform.openai.com/docs/guides/chat) (gpt-3.5-turbo),默认使用该模型进行对话,需升级openai依赖 (`pip3 install --upgrade openai`)。网络问题参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
 
 >**2023.02.09:** 扫码登录存在账号限制风险,请谨慎使用,参考[#58](https://github.com/AutumnWhj/ChatGPT-wechat-bot/issues/158)
 
 # 快速开始
 
 ## 准备
 
-### 1. OpenAI账号注册
+### 1. 账号注册
 
-前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,参考这篇 [教程](https://www.pythonthree.com/register-openai-chatgpt/) 可以通过虚拟手机号来接收验证码。创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。
+项目默认使用OpenAI接口,需前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。接口需要海外网络访问及绑定信用卡支付。
 
-> 项目中默认使用的对话模型是 gpt3.5 turbo,计费方式是约每 500 汉字 (包含请求和回复) 消耗 $0.002,图片生成是每张消耗 $0.016。
+> 默认对话模型是 openai 的 gpt-3.5-turbo,计费方式是约每 1000tokens (约750个英文单词 或 500汉字,包含请求和回复) 消耗 $0.002,图片生成是Dell E模型,每张消耗 $0.016。
 
+项目同时也支持使用 LinkAI 接口,无需代理,可使用 文心、讯飞、GPT-3、GPT-4 等模型,支持 定制化知识库、联网搜索、MJ绘图、文档总结和对话等能力。修改配置即可一键切换,参考 [接入文档](https://link-ai.tech/platform/link-app/wechat)。
+
 ### 2.运行环境
```

````diff
@@ -185,10 +186,10 @@ pip3 install azure-cognitiveservices-speech
 如果是开发机 **本地运行**,直接在项目根目录下执行:
 
 ```bash
-python3 app.py
+python3 app.py  # windows环境下该命令通常为 python app.py
 ```
 
 终端输出二维码后,使用微信进行扫码,当输出 "Start auto replying" 时表示自动回复程序已经成功运行了(注意:用于登录的微信需要在支付处已完成实名认证)。扫码登录后你的账号就成为机器人了,可以在微信手机端通过配置的关键词触发自动回复 (任意好友发送消息给你,或是自己发消息给好友),参考[#142](https://github.com/zhayujie/chatgpt-on-wechat/issues/142)。
 
 ### 2.服务器部署
````

```diff
@@ -270,8 +271,4 @@ FAQs: <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>
 ## 联系
 
-欢迎提交PR、Issues,以及Star支持一下。程序运行遇到问题可以查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ,其次前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索。
-
-如果你想了解更多项目细节,与开发者们交流更多关于AI技术的实践,欢迎加入星球:
-
-<a href="https://public.zsxq.com/groups/88885848842852.html"><img width="360" src="./docs/images/planet.jpg"></a>
+欢迎提交PR、Issues,以及Star支持一下。程序运行遇到问题可以查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ,其次前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索。参与更多讨论可加入技术交流群。
```
```diff
@@ -16,7 +16,10 @@ class BaiduWenxinBot(Bot):
     def __init__(self):
         super().__init__()
-        self.sessions = SessionManager(BaiduWenxinSession, model=conf().get("baidu_wenxin_model") or "eb-instant")
+        wenxin_model = conf().get("baidu_wenxin_model") or "eb-instant"
+        if conf().get("model") and conf().get("model") == "wenxin-4":
+            wenxin_model = "completions_pro"
+        self.sessions = SessionManager(BaiduWenxinSession, model=wenxin_model)
 
     def reply(self, query, context=None):
         # acquire reply content
```
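The model-name precedence encoded in the hunk above can be shown as a small standalone sketch (the function name and bare dict are illustrative, not the project's API):

```python
def resolve_wenxin_model(config: dict) -> str:
    """Mirror of the precedence in BaiduWenxinBot.__init__: an explicit
    baidu_wenxin_model beats the "eb-instant" default, and a global
    model == "wenxin-4" overrides both, mapping to Baidu's
    "completions_pro" endpoint name."""
    wenxin_model = config.get("baidu_wenxin_model") or "eb-instant"
    if config.get("model") == "wenxin-4":
        wenxin_model = "completions_pro"
    return wenxin_model

print(resolve_wenxin_model({}))                     # eb-instant
print(resolve_wenxin_model({"model": "wenxin-4"}))  # completions_pro
```

Note that `wenxin-4` wins even when `baidu_wenxin_model` is also set, because the check runs after the default is resolved.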
```diff
@@ -12,7 +12,7 @@ from bot.session_manager import SessionManager
 from bridge.context import Context, ContextType
 from bridge.reply import Reply, ReplyType
 from common.log import logger
-from config import conf
+from config import conf, pconf
 
 
 class LinkAIBot(Bot, OpenAIImage):
```

```diff
@@ -79,7 +79,10 @@ class LinkAIBot(Bot, OpenAIImage):
             "frequency_penalty": conf().get("frequency_penalty", 0.0),  # [-2,2]之间,该值越大则更倾向于产生不同的内容
             "presence_penalty": conf().get("presence_penalty", 0.0),  # [-2,2]之间,该值越大则更倾向于产生不同的内容
         }
-        logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}")
+        file_id = context.kwargs.get("file_id")
+        if file_id:
+            body["file_id"] = file_id
+        logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}, file_id={file_id}")
         headers = {"Authorization": "Bearer " + linkai_api_key}
 
         # do http request
```

```diff
@@ -93,6 +96,14 @@ class LinkAIBot(Bot, OpenAIImage):
             total_tokens = response["usage"]["total_tokens"]
             logger.info(f"[LINKAI] reply={reply_content}, total_tokens={total_tokens}")
             self.sessions.session_reply(reply_content, session_id, total_tokens)
+
+            agent_suffix = self._fetch_agent_suffix(response)
+            if agent_suffix:
+                reply_content += agent_suffix
+            if not agent_suffix:
+                knowledge_suffix = self._fetch_knowledge_search_suffix(response)
+                if knowledge_suffix:
+                    reply_content += knowledge_suffix
             return Reply(ReplyType.TEXT, reply_content)
 
         else:
```

```diff
@@ -180,3 +191,48 @@ class LinkAIBot(Bot, OpenAIImage):
             time.sleep(2)
             logger.warn(f"[LINKAI] do retry, times={retry_count}")
             return self.reply_text(session, app_code, retry_count + 1)
+
+    def _fetch_knowledge_search_suffix(self, response) -> str:
+        try:
+            if response.get("knowledge_base"):
+                search_hit = response.get("knowledge_base").get("search_hit")
+                first_similarity = response.get("knowledge_base").get("first_similarity")
+                logger.info(f"[LINKAI] knowledge base, search_hit={search_hit}, first_similarity={first_similarity}")
+                plugin_config = pconf("linkai")
+                if plugin_config and plugin_config.get("knowledge_base") and plugin_config.get("knowledge_base").get("search_miss_text_enabled"):
+                    search_miss_similarity = plugin_config.get("knowledge_base").get("search_miss_similarity")
+                    search_miss_text = plugin_config.get("knowledge_base").get("search_miss_suffix")
+                    if not search_hit:
+                        return search_miss_text
+                    if search_miss_similarity and float(search_miss_similarity) > first_similarity:
+                        return search_miss_text
+        except Exception as e:
+            logger.exception(e)
+
+    def _fetch_agent_suffix(self, response):
+        try:
+            plugin_list = []
+            logger.debug(f"[LinkAgent] res={response}")
+            if response.get("agent") and response.get("agent").get("chain") and response.get("agent").get("need_show_plugin"):
+                chain = response.get("agent").get("chain")
+                suffix = "\n\n- - - - - - - - - - - -"
+                i = 0
+                for turn in chain:
+                    plugin_name = turn.get('plugin_name')
+                    suffix += "\n"
+                    need_show_thought = response.get("agent").get("need_show_thought")
+                    if turn.get("thought") and plugin_name and need_show_thought:
+                        suffix += f"{turn.get('thought')}\n"
+                    if plugin_name:
+                        plugin_list.append(turn.get('plugin_name'))
+                        suffix += f"{turn.get('plugin_icon')} {turn.get('plugin_name')}"
+                        if turn.get('plugin_input'):
+                            suffix += f":{turn.get('plugin_input')}"
+                    if i < len(chain) - 1:
+                        suffix += "\n"
+                    i += 1
+                logger.info(f"[LinkAgent] use plugins: {plugin_list}")
+                return suffix
+        except Exception as e:
+            logger.exception(e)
```
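The agent-footer logic added above can be condensed into a standalone sketch; this is not the exact method (it drops the thought-display option and the per-turn spacing bookkeeping), and the `demo` response is a made-up payload:

```python
def build_agent_suffix(response: dict) -> str:
    """Condensed take on _fetch_agent_suffix: when the agent chain should be
    shown, render one line per plugin turn (icon, name, optional input) under
    a divider, for appending to the reply text."""
    agent = response.get("agent") or {}
    chain = agent.get("chain")
    if not (chain and agent.get("need_show_plugin")):
        return ""
    lines = []
    for turn in chain:
        name = turn.get("plugin_name")
        if not name:
            continue
        line = f"{turn.get('plugin_icon', '')} {name}"
        if turn.get("plugin_input"):
            line += f":{turn['plugin_input']}"
        lines.append(line)
    return "\n\n- - - - - - - - - - - -\n" + "\n".join(lines)

demo = {"agent": {"need_show_plugin": True,
                  "chain": [{"plugin_icon": "🔍", "plugin_name": "search",
                             "plugin_input": "weather"}]}}
print(build_agent_suffix(demo))
```

When the chain is absent or `need_show_plugin` is off, the suffix is empty and the caller falls back to the knowledge-base miss text.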
```diff
@@ -23,7 +23,7 @@ class Bridge(object):
             self.btype["chat"] = const.OPEN_AI
             if conf().get("use_azure_chatgpt", False):
                 self.btype["chat"] = const.CHATGPTONAZURE
-        if model_type in ["wenxin"]:
+        if model_type in ["wenxin", "wenxin-4"]:
             self.btype["chat"] = const.BAIDU
         if model_type in ["xunfei"]:
             self.btype["chat"] = const.XUNFEI
```

```diff
@@ -32,6 +32,7 @@ class Bridge(object):
         if model_type in ["claude"]:
             self.btype["chat"] = const.CLAUDEAI
         self.bots = {}
+        self.chat_bots = {}
 
     def get_bot(self, typename):
         if self.bots.get(typename) is None:
```

```diff
@@ -61,6 +62,11 @@ class Bridge(object):
     def fetch_translate(self, text, from_lang="", to_lang="en") -> Reply:
         return self.get_bot("translate").translate(text, from_lang, to_lang)
 
+    def find_chat_bot(self, bot_type: str):
+        if self.chat_bots.get(bot_type) is None:
+            self.chat_bots[bot_type] = create_bot(bot_type)
+        return self.chat_bots.get(bot_type)
+
     def reset_bot(self):
         """
         重置bot路由
```
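The `find_chat_bot` addition above is a lazy per-type cache; a minimal sketch of the pattern (the `BotCache` class and `factory` callable are illustrative stand-ins for `Bridge` and `create_bot`):

```python
class BotCache:
    """Create each bot type lazily on first request, then reuse the
    cached instance on every later lookup."""
    def __init__(self, factory):
        self._factory = factory
        self._bots = {}

    def find(self, bot_type: str):
        if self._bots.get(bot_type) is None:
            self._bots[bot_type] = self._factory(bot_type)
        return self._bots[bot_type]

cache = BotCache(factory=lambda bot_type: object())
a = cache.find("claude")
b = cache.find("claude")
print(a is b)  # True: the second lookup reuses the first instance
```

Keeping `chat_bots` separate from `bots` lets the `#reset` command clear a chat bot's sessions without disturbing the voice/translate bots routed through `get_bot`.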
```diff
@@ -9,8 +9,10 @@ class ContextType(Enum):
     IMAGE = 3  # 图片消息
     FILE = 4  # 文件信息
     VIDEO = 5  # 视频信息
+    SHARING = 6  # 分享信息
 
     IMAGE_CREATE = 10  # 创建图片命令
+    ACCEPT_FRIEND = 19  # 同意好友请求
     JOIN_GROUP = 20  # 加入群聊
     PATPAT = 21  # 拍了拍
     FUNCTION = 22  # 函数调用
```
```diff
@@ -205,6 +205,8 @@ class ChatChannel(Channel):
         elif context.type == ContextType.IMAGE:  # 图片消息,当前仅做下载保存到本地的逻辑
             cmsg = context["msg"]
             cmsg.prepare()
+        elif context.type == ContextType.SHARING:  # 分享信息,当前无默认逻辑
+            pass
         elif context.type == ContextType.FUNCTION or context.type == ContextType.FILE:  # 文件消息及函数调用等,当前无默认逻辑
             pass
         else:
```

```diff
@@ -241,7 +243,7 @@ class ChatChannel(Channel):
             reply.content = reply_text
         elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
             reply.content = "[" + str(reply.type) + "]\n" + reply.content
-        elif reply.type == ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE:
+        elif reply.type == ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE or reply.type == ReplyType.FILE or reply.type == ReplyType.VIDEO or reply.type == ReplyType.VIDEO_URL:
             pass
         else:
             logger.error("[WX] unknown reply type: {}".format(reply.type))
```
```diff
@@ -25,7 +25,7 @@ from lib import itchat
 from lib.itchat.content import *
 
 
-@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE])
+@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE, ATTACHMENT, SHARING])
 def handler_single_msg(msg):
     try:
         cmsg = WechatMessage(msg, False)
```

```diff
@@ -36,7 +36,7 @@ def handler_single_msg(msg):
     return None
 
 
-@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE], isGroupChat=True)
+@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE, ATTACHMENT, SHARING], isGroupChat=True)
 def handler_group_msg(msg):
     try:
         cmsg = WechatMessage(msg, True)
```

```diff
@@ -142,6 +142,9 @@ class WechatChannel(ChatChannel):
     @time_checker
     @_check
     def handle_single(self, cmsg: ChatMessage):
+        # filter system message
+        if cmsg.other_user_id in ["weixin"]:
+            return
         if cmsg.ctype == ContextType.VOICE:
             if conf().get("speech_recognition") != True:
                 return
```

```diff
@@ -167,11 +170,13 @@ class WechatChannel(ChatChannel):
             logger.debug("[WX]receive voice for group msg: {}".format(cmsg.content))
         elif cmsg.ctype == ContextType.IMAGE:
             logger.debug("[WX]receive image for group msg: {}".format(cmsg.content))
-        elif cmsg.ctype in [ContextType.JOIN_GROUP, ContextType.PATPAT]:
+        elif cmsg.ctype in [ContextType.JOIN_GROUP, ContextType.PATPAT, ContextType.ACCEPT_FRIEND]:
             logger.debug("[WX]receive note msg: {}".format(cmsg.content))
         elif cmsg.ctype == ContextType.TEXT:
             # logger.debug("[WX]receive group msg: {}, cmsg={}".format(json.dumps(cmsg._rawmsg, ensure_ascii=False), cmsg))
             pass
+        elif cmsg.ctype == ContextType.FILE:
+            logger.debug(f"[WX]receive attachment msg, file_name={cmsg.content}")
         else:
             logger.debug("[WX]receive group msg: {}".format(cmsg.content))
         context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)
```

```diff
@@ -208,3 +213,24 @@ class WechatChannel(ChatChannel):
             image_storage.seek(0)
             itchat.send_image(image_storage, toUserName=receiver)
             logger.info("[WX] sendImage, receiver={}".format(receiver))
+        elif reply.type == ReplyType.FILE:  # 新增文件回复类型
+            file_storage = reply.content
+            itchat.send_file(file_storage, toUserName=receiver)
+            logger.info("[WX] sendFile, receiver={}".format(receiver))
+        elif reply.type == ReplyType.VIDEO:  # 新增视频回复类型
+            video_storage = reply.content
+            itchat.send_video(video_storage, toUserName=receiver)
+            logger.info("[WX] sendVideo, receiver={}".format(receiver))
+        elif reply.type == ReplyType.VIDEO_URL:  # 新增视频URL回复类型
+            video_url = reply.content
+            logger.debug(f"[WX] start download video, video_url={video_url}")
+            video_res = requests.get(video_url, stream=True)
+            video_storage = io.BytesIO()
+            size = 0
+            for block in video_res.iter_content(1024):
+                size += len(block)
+                video_storage.write(block)
+            logger.info(f"[WX] download video success, size={size}, video_url={video_url}")
+            video_storage.seek(0)
+            itchat.send_video(video_storage, toUserName=receiver)
+            logger.info("[WX] sendVideo url={}, receiver={}".format(video_url, receiver))
```
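The `VIDEO_URL` branch above buffers a streamed download into memory before handing it to the send API; the core of that pattern, sketched with a plain iterable standing in for `requests`' `iter_content(1024)`:

```python
import io

def buffer_stream(chunks) -> io.BytesIO:
    """Accumulate a streamed download chunk-by-chunk into an in-memory
    buffer, then rewind it so the sender can read from the start as a
    file-like object."""
    buf = io.BytesIO()
    size = 0
    for block in chunks:
        size += len(block)
        buf.write(block)
    buf.seek(0)  # without the rewind, the sender would read zero bytes
    return buf

buf = buffer_stream([b"abc", b"defg"])
print(buf.read())  # b'abcdefg'
```

Buffering the whole video in memory is simple but unbounded; very large files would be safer spooled to a temp file.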
```diff
@@ -7,7 +7,6 @@ from common.tmp_dir import TmpDir
 from lib import itchat
 from lib.itchat.content import *
 
 
 class WechatMessage(ChatMessage):
     def __init__(self, itchat_msg, is_group=False):
         super().__init__(itchat_msg)
```

```diff
@@ -35,6 +34,9 @@ class WechatMessage(ChatMessage):
                 self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[-1]
             elif "加入群聊" in itchat_msg["Content"]:
                 self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
+            elif "你已添加了" in itchat_msg["Content"]:  # 通过好友请求
+                self.ctype = ContextType.ACCEPT_FRIEND
+                self.content = itchat_msg["Content"]
             elif "拍了拍我" in itchat_msg["Content"]:
                 self.ctype = ContextType.PATPAT
                 self.content = itchat_msg["Content"]
```

```diff
@@ -42,6 +44,14 @@ class WechatMessage(ChatMessage):
                 self.actual_user_nickname = re.findall(r"\"(.*?)\"", itchat_msg["Content"])[0]
             else:
                 raise NotImplementedError("Unsupported note message: " + itchat_msg["Content"])
+        elif itchat_msg["Type"] == ATTACHMENT:
+            self.ctype = ContextType.FILE
+            self.content = TmpDir().path() + itchat_msg["FileName"]  # content直接存临时目录路径
+            self._prepare_fn = lambda: itchat_msg.download(self.content)
+        elif itchat_msg["Type"] == SHARING:
+            self.ctype = ContextType.SHARING
+            self.content = itchat_msg.get("Url")
+
         else:
             raise NotImplementedError("Unsupported message type: Type:{} MsgType:{}".format(itchat_msg["Type"], itchat_msg["MsgType"]))
```
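The `ATTACHMENT` branch above records the target path immediately but defers the actual download behind `_prepare_fn`; a sketch of that deferral (class and names illustrative; the idempotence guard is an addition, the recorded code calls `prepare()` once from the channel):

```python
class LazyFile:
    """Know the file's path up front, but only fetch its bytes when a
    handler actually calls prepare()."""
    def __init__(self, path, downloader):
        self.content = path  # path is decided at message-parse time
        self._prepare_fn = lambda: downloader(path)
        self._prepared = False

    def prepare(self):
        if not self._prepared:  # guard so repeated calls fetch once
            self._prepare_fn()
            self._prepared = True

downloads = []
msg = LazyFile("tmp/report.pdf", downloads.append)
print(downloads)   # [] — nothing fetched at construction
msg.prepare()
msg.prepare()
print(downloads)   # ['tmp/report.pdf'] — fetched exactly once
```

This keeps message parsing cheap: attachments that no plugin ends up needing are never downloaded.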
```diff
@@ -49,7 +49,7 @@ class Query:
 
         # New request
         if (
-            from_user not in channel.cache_dict
+            channel.cache_dict.get(from_user) is None
             and from_user not in channel.running
             or content.startswith("#")
             and message_id not in channel.request_cnt  # insert the godcmd
```

```diff
@@ -131,8 +131,10 @@ class Query:
 
         # Only one request can access to the cached data
         try:
-            (reply_type, reply_content) = channel.cache_dict.pop(from_user)
-        except KeyError:
+            (reply_type, reply_content) = channel.cache_dict[from_user].pop(0)
+            if not channel.cache_dict[from_user]:  # If popping the message makes the list empty, delete the user entry from cache
+                del channel.cache_dict[from_user]
+        except IndexError:
             return "success"
```

```diff
@@ -146,7 +148,7 @@ class Query:
                     max_split=1,
                 )
                 reply_text = splits[0] + continue_text
-                channel.cache_dict[from_user] = ("text", splits[1])
+                channel.cache_dict[from_user].append(("text", splits[1]))
 
         logger.info(
             "[wechatmp] Request {} do send to {} {}: {}\n{}".format(
```
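The cache change running through these hunks swaps a one-slot dict for a per-user FIFO queue; the pop-and-cleanup logic can be exercised in isolation (helper name is illustrative):

```python
from collections import defaultdict

# Each user maps to a FIFO list of pending (type, content) replies, so
# multiple replies queue up instead of overwriting a single slot.
cache = defaultdict(list)
cache["user1"].append(("text", "part 1"))
cache["user1"].append(("text", "part 2"))

def pop_reply(cache, user):
    """Take the oldest pending reply; drop the user's entry once its
    queue drains, matching the except-IndexError flow above."""
    try:
        reply = cache[user].pop(0)
    except IndexError:  # defaultdict made an empty list: nothing pending
        return None
    if not cache[user]:
        del cache[user]
    return reply

print(pop_reply(cache, "user1"))  # ('text', 'part 1')
print(pop_reply(cache, "user1"))  # ('text', 'part 2')
print("user1" in cache)           # False — entry removed once drained
```

Note why the membership test had to change too: merely reading `channel.cache_dict[from_user]` on a defaultdict creates an empty entry, so the new-request check uses `.get(from_user) is None` and the empty-queue case raises `IndexError` rather than `KeyError`.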
```diff
@@ -10,6 +10,7 @@ import requests
 import web
 from wechatpy.crypto import WeChatCrypto
 from wechatpy.exceptions import WeChatClientException
+from collections import defaultdict
 
 from bridge.context import *
 from bridge.reply import *
```

```diff
@@ -46,7 +47,7 @@ class WechatMPChannel(ChatChannel):
             self.crypto = WeChatCrypto(token, aes_key, appid)
         if self.passive_reply:
             # Cache the reply to the user's first message
-            self.cache_dict = dict()
+            self.cache_dict = defaultdict(list)
             # Record whether the current message is being processed
             self.running = set()
             # Count the request from wechat official server by message_id
```

```diff
@@ -82,24 +83,28 @@ class WechatMPChannel(ChatChannel):
             if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:
                 reply_text = reply.content
                 logger.info("[wechatmp] text cached, receiver {}\n{}".format(receiver, reply_text))
-                self.cache_dict[receiver] = ("text", reply_text)
+                self.cache_dict[receiver].append(("text", reply_text))
             elif reply.type == ReplyType.VOICE:
-                try:
-                    voice_file_path = reply.content
-                    with open(voice_file_path, "rb") as f:
-                        # support: <2M, <60s, mp3/wma/wav/amr
-                        response = self.client.material.add("voice", f)
-                        logger.debug("[wechatmp] upload voice response: {}".format(response))
-                        # 根据文件大小估计一个微信自动审核的时间,审核结束前返回将会导致语音无法播放,这个估计有待验证
-                        f_size = os.fstat(f.fileno()).st_size
-                        time.sleep(1.0 + 2 * f_size / 1024 / 1024)
-                        # todo check media_id
-                except WeChatClientException as e:
-                    logger.error("[wechatmp] upload voice failed: {}".format(e))
-                    return
-                media_id = response["media_id"]
-                logger.info("[wechatmp] voice uploaded, receiver {}, media_id {}".format(receiver, media_id))
-                self.cache_dict[receiver] = ("voice", media_id)
+                voice_file_path = reply.content
+                duration, files = split_audio(voice_file_path, 60 * 1000)
+                if len(files) > 1:
+                    logger.info("[wechatmp] voice too long {}s > 60s , split into {} parts".format(duration / 1000.0, len(files)))
+
+                for path in files:
+                    # support: <2M, <60s, mp3/wma/wav/amr
+                    try:
+                        with open(path, "rb") as f:
+                            response = self.client.material.add("voice", f)
+                            logger.debug("[wechatmp] upload voice response: {}".format(response))
+                            f_size = os.fstat(f.fileno()).st_size
+                            time.sleep(1.0 + 2 * f_size / 1024 / 1024)
+                            # todo check media_id
+                    except WeChatClientException as e:
+                        logger.error("[wechatmp] upload voice failed: {}".format(e))
+                        return
+                    media_id = response["media_id"]
+                    logger.info("[wechatmp] voice uploaded, receiver {}, media_id {}".format(receiver, media_id))
+                    self.cache_dict[receiver].append(("voice", media_id))
+
             elif reply.type == ReplyType.IMAGE_URL:  # 从网络下载图片
                 img_url = reply.content
```

```diff
@@ -119,7 +124,7 @@ class WechatMPChannel(ChatChannel):
                     return
                 media_id = response["media_id"]
                 logger.info("[wechatmp] image uploaded, receiver {}, media_id {}".format(receiver, media_id))
-                self.cache_dict[receiver] = ("image", media_id)
+                self.cache_dict[receiver].append(("image", media_id))
             elif reply.type == ReplyType.IMAGE:  # 从文件读取图片
                 image_storage = reply.content
                 image_storage.seek(0)
```

```diff
@@ -134,7 +139,7 @@ class WechatMPChannel(ChatChannel):
                     return
                 media_id = response["media_id"]
                 logger.info("[wechatmp] image uploaded, receiver {}, media_id {}".format(receiver, media_id))
-                self.cache_dict[receiver] = ("image", media_id)
+                self.cache_dict[receiver].append(("image", media_id))
             else:
                 if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:
                     reply_text = reply.content
```
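The voice path above sleeps for an amount estimated from file size, so WeChat's automatic audio review finishes before the `media_id` is used in a reply; the project itself flags the estimate as unverified. The formula in isolation:

```python
def review_delay_seconds(file_size_bytes: int) -> float:
    """Heuristic from the upload loop above: wait ~1s plus 2s per MB of
    audio before replying with the uploaded media_id."""
    return 1.0 + 2 * file_size_bytes / 1024 / 1024

print(review_delay_seconds(1024 * 1024))  # 3.0  (1 MB clip)
print(review_delay_seconds(512 * 1024))   # 2.0  (0.5 MB clip)
```

Combined with the 60-second `split_audio` cap, each uploaded part stays within the platform's `<2M, <60s` limits and gets its own short review delay.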
```diff
@@ -8,6 +8,7 @@ import pilk
 from bridge.context import ContextType
 from channel.chat_message import ChatMessage
 from common.log import logger
+from ntwork.const import send_type
 
 
 def get_with_retry(get_func, max_retries=5, delay=5):
```

```diff
@@ -41,15 +42,37 @@ def cdn_download(wework, message, file_name):
     data = message["data"]
     aes_key = data["cdn"]["aes_key"]
     file_size = data["cdn"]["size"]
-    file_type = 2
-    file_id = data["cdn"]["file_id"]
 
     # 获取当前工作目录,然后与文件名拼接得到保存路径
     current_dir = os.getcwd()
     save_path = os.path.join(current_dir, "tmp", file_name)
 
-    result = wework.c2c_cdn_download(file_id, aes_key, file_size, file_type, save_path)
-    logger.debug(result)
+    # 下载保存图片到本地
+    if "url" in data["cdn"].keys() and "auth_key" in data["cdn"].keys():
+        url = data["cdn"]["url"]
+        auth_key = data["cdn"]["auth_key"]
+        # result = wework.wx_cdn_download(url, auth_key, aes_key, file_size, save_path)  # ntwork库本身接口有问题,缺失了aes_key这个参数
+        """
+        下载wx类型的cdn文件,以https开头
+        """
+        data = {
+            'url': url,
+            'auth_key': auth_key,
+            'aes_key': aes_key,
+            'size': file_size,
+            'save_path': save_path
+        }
+        result = wework._WeWork__send_sync(send_type.MT_WXCDN_DOWNLOAD_MSG, data)  # 直接用wx_cdn_download的接口内部实现来调用
+    elif "file_id" in data["cdn"].keys():
+        file_type = 2
+        file_id = data["cdn"]["file_id"]
+        result = wework.c2c_cdn_download(file_id, aes_key, file_size, file_type, save_path)
+    else:
+        logger.error(f"something is wrong, data: {data}")
+        return
+
+    # 输出下载结果
+    logger.debug(f"result: {result}")


 def c2c_download_and_convert(wework, message, file_name):
```
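The branch added above routes a CDN payload by which keys it carries; that dispatch can be checked on its own (function name and return labels are illustrative):

```python
def pick_cdn_route(cdn: dict) -> str:
    """Mirror of the branching in cdn_download: wx-type CDN entries carry
    url + auth_key, c2c entries carry file_id; anything else is rejected."""
    if "url" in cdn and "auth_key" in cdn:
        return "wx_cdn"
    if "file_id" in cdn:
        return "c2c_cdn"
    return "unknown"

print(pick_cdn_route({"url": "https://x", "auth_key": "k"}))  # wx_cdn
print(pick_cdn_route({"file_id": "f1"}))                      # c2c_cdn
print(pick_cdn_route({}))                                     # unknown
```

The wx-type branch then bypasses `wx_cdn_download` (whose ntwork signature is missing `aes_key`) by calling the name-mangled `_WeWork__send_sync` with the full parameter dict directly.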
```diff
@@ -5,8 +5,6 @@ BAIDU = "baidu"
 XUNFEI = "xunfei"
 CHATGPTONAZURE = "chatGPTOnAzure"
 LINKAI = "linkai"
-
-VERSION = "1.3.0"
 CLAUDEAI = "claude"
 
-MODEL_LIST = ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4", "wenxin", "xunfei","claude"]
+MODEL_LIST = ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4", "wenxin", "wenxin-4", "xunfei", "claude"]
```
```diff
@@ -20,9 +20,7 @@
     "ChatGPT测试群"
   ],
   "image_create_prefix": [
-    "画",
-    "看",
-    "找"
+    "画"
   ],
   "speech_recognition": false,
   "group_speech_recognition": false,
```
```diff
@@ -32,6 +32,7 @@ available_setting = {
     "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],  # 开启自动回复的群名称列表
     "group_name_keyword_white_list": [],  # 开启自动回复的群名称关键词列表
     "group_chat_in_one_session": ["ChatGPT测试群"],  # 支持会话上下文共享的群名称
+    "group_welcome_msg": "",  # 配置新人进群固定欢迎语,不配置则使用随机风格欢迎
     "trigger_by_self": False,  # 是否允许机器人触发
     "image_create_prefix": ["画", "看", "找"],  # 开启图片回复的前缀
     "concurrency_in_session": 1,  # 同一会话最多有多少条消息在处理中,大于1可能乱序
```
```diff
@@ -4,8 +4,10 @@ import json
 import os
 import random
 import string
+import logging
 from typing import Tuple
 
 import bridge.bridge
 import plugins
 from bridge.bridge import Bridge
 from bridge.context import ContextType
```

```diff
@@ -134,9 +136,9 @@ ADMIN_COMMANDS = {
 
 # 定义帮助函数
 def get_help_text(isadmin, isgroup):
-    help_text = "通用指令:\n"
+    help_text = "通用指令\n"
     for cmd, info in COMMANDS.items():
-        if cmd == "auth":  # 不提示认证指令
+        if cmd in ["auth", "set_openai_api_key", "reset_openai_api_key", "set_gpt_model", "reset_gpt_model", "gpt_model"]:  # 不显示帮助指令
             continue
         if cmd == "id" and conf().get("channel_type", "wx") not in ["wxy", "wechatmp"]:
             continue
```

```diff
@@ -149,7 +151,7 @@ def get_help_text(isadmin, isgroup):
 
     # 插件指令
     plugins = PluginManager().list_plugins()
-    help_text += "\n目前可用插件有:"
+    help_text += "\n可用插件"
     for plugin in plugins:
         if plugins[plugin].enabled and not plugins[plugin].hidden:
             namecn = plugins[plugin].namecn
```

```diff
@@ -201,6 +203,7 @@ class Godcmd(Plugin):
 
             self.password = gconf["password"]
             self.admin_users = gconf["admin_users"]  # 预存的管理员账号,这些账号不需要认证。itchat的用户名每次都会变,不可用
+            global_config["admin_users"] = self.admin_users
         self.isrunning = True  # 机器人是否运行中
 
         self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context
```

```diff
@@ -310,6 +313,8 @@ class Godcmd(Plugin):
                 elif cmd == "reset":
                     if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI, const.BAIDU, const.XUNFEI]:
                         bot.sessions.clear_session(session_id)
+                        if Bridge().chat_bots.get(bottype):
+                            Bridge().chat_bots.get(bottype).sessions.clear_session(session_id)
                         channel.cancel_session(session_id)
                         ok, result = True, "会话已重置"
                     else:
```

```diff
@@ -339,8 +344,12 @@ class Godcmd(Plugin):
                 else:
                     ok, result = False, "当前对话机器人不支持重置会话"
             elif cmd == "debug":
-                logger.setLevel("DEBUG")
-                ok, result = True, "DEBUG模式已开启"
+                if logger.getEffectiveLevel() == logging.DEBUG:  # 判断当前日志模式是否DEBUG
+                    logger.setLevel(logging.INFO)
+                    ok, result = True, "DEBUG模式已关闭"
+                else:
+                    logger.setLevel(logging.DEBUG)
+                    ok, result = True, "DEBUG模式已开启"
             elif cmd == "plist":
                 plugins = PluginManager().list_plugins()
                 ok = True
```
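The `#debug` change above turns a one-way switch into a toggle keyed on the logger's current effective level; a minimal sketch against a plain stdlib logger (return strings are illustrative, the plugin returns Chinese status text):

```python
import logging

def toggle_debug(logger: logging.Logger) -> str:
    """Flip between DEBUG and INFO instead of always enabling DEBUG,
    so issuing the command twice restores normal logging."""
    if logger.getEffectiveLevel() == logging.DEBUG:
        logger.setLevel(logging.INFO)
        return "DEBUG off"
    logger.setLevel(logging.DEBUG)
    return "DEBUG on"

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
print(toggle_debug(log))  # DEBUG on
print(toggle_debug(log))  # DEBUG off
```

Using `getEffectiveLevel()` rather than a stored flag means the toggle stays correct even if something else changes the level in between.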
```diff
@@ -6,6 +6,7 @@ from bridge.reply import Reply, ReplyType
 from channel.chat_message import ChatMessage
 from common.log import logger
 from plugins import *
+from config import conf
 
 
 @plugins.register(
```

```diff
@@ -31,6 +32,13 @@ class Hello(Plugin):
             return
 
         if e_context["context"].type == ContextType.JOIN_GROUP:
+            if "group_welcome_msg" in conf():
+                reply = Reply()
+                reply.type = ReplyType.TEXT
+                reply.content = conf().get("group_welcome_msg", "")
+                e_context["reply"] = reply
+                e_context.action = EventAction.BREAK_PASS  # 事件结束,并跳过处理context的默认逻辑
+                return
             e_context["context"].type = ContextType.TEXT
             msg: ChatMessage = e_context["context"]["msg"]
             e_context["context"].content = f'请你随机使用一种风格说一句问候语来欢迎新用户"{msg.actual_user_nickname}"加入群聊。'
```
@@ -2,7 +2,7 @@
|
||||
|
||||
import json
|
||||
import os
|
||||
|
||||
import requests
|
||||
import plugins
|
||||
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
@@ -51,15 +51,37 @@ class Keyword(Plugin):
        content = e_context["context"].content.strip()
        logger.debug("[keyword] on_handle_context. content: %s" % content)
        if content in self.keyword:
-            logger.debug(f"[keyword] 匹配到关键字【{content}】")
+            logger.info(f"[keyword] 匹配到关键字【{content}】")
            reply_text = self.keyword[content]

            # determine the type of the matched reply content
-            if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".webp"]):
-                # a URL starting with http:// or https:// and ending in .jpg/.jpeg/.png/.gif is treated as an image URL
+            if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".img"]):
+                # a URL starting with http:// or https:// and ending in ".jpg", ".jpeg", ".png", ".gif" or ".img" is treated as an image URL
                reply = Reply()
                reply.type = ReplyType.IMAGE_URL
                reply.content = reply_text

            elif (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar"]):
                # a URL ending in ".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip" or ".rar" is downloaded to the tmp directory and sent to the user
                file_path = "tmp"
                if not os.path.exists(file_path):
                    os.makedirs(file_path)
                file_name = reply_text.split("/")[-1]  # extract the file name
                file_path = os.path.join(file_path, file_name)
                response = requests.get(reply_text)
                with open(file_path, "wb") as f:
                    f.write(response.content)
                # note: the ReplyType.FILE type is missing from channel/wechat/wechat_channel.py and channel/wechat_channel.py
                reply = Reply()
                reply.type = ReplyType.FILE
                reply.content = file_path

            elif (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".mp4"]):
                # a URL ending in ".mp4" is treated as a video URL and sent to the user
                reply = Reply()
                reply.type = ReplyType.VIDEO_URL
                reply.content = reply_text

            else:
                # otherwise treat the content as plain text
                reply = Reply()
@@ -68,7 +90,7 @@ class Keyword(Plugin):

            e_context["reply"] = reply
            e_context.action = EventAction.BREAK_PASS  # end the event and skip the default context handling

    def get_help_text(self, **kwargs):
        help_text = "关键词过滤"
        return help_text
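The extension-based dispatch added in the diff above can be sketched on its own. A minimal sketch, not the plugin itself: `classify_reply` and the string return values are illustrative stand-ins, while the real code builds `Reply` objects with `ReplyType` members.

```python
# Sketch of the keyword plugin's reply-type dispatch: a configured reply value
# is classified by whether it is a URL and by its file extension.

IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".img")
FILE_EXTS = (".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".rar")
VIDEO_EXTS = (".mp4",)

def classify_reply(reply_text: str) -> str:
    """Return the reply type implied by a keyword mapping value."""
    is_url = reply_text.startswith(("http://", "https://"))
    if is_url and reply_text.endswith(IMAGE_EXTS):
        return "IMAGE_URL"
    if is_url and reply_text.endswith(FILE_EXTS):
        return "FILE"
    if is_url and reply_text.endswith(VIDEO_EXTS):
        return "VIDEO_URL"
    # anything else is sent back as plain text
    return "TEXT"

print(classify_reply("https://example.com/a.png"))  # IMAGE_URL
print(classify_reply("https://example.com/a.pdf"))  # FILE
print(classify_reply("hello"))                      # TEXT
```

Note that the check is suffix-based only, so a URL with a query string after the extension (e.g. `...a.png?x=1`) would fall through to the text branch.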
@@ -1,16 +1,16 @@
## Plugin Description

-Enhances the bot with the knowledge base, Midjourney drawing and other capabilities provided by LinkAI. Platform: https://chat.link-ai.tech/console
+Enhances the bot with the knowledge base, Midjourney drawing, document chat and other capabilities provided by LinkAI. Platform: https://chat.link-ai.tech/console

## Plugin Configuration

-Copy the `config.json.template` under `plugins/linkai` to a `config.json` that takes effect. (If it is absent, the settings in `config.json.template` are used by default and the features are disabled; they can be enabled via commands.)
+Copy the `config.json.template` under `plugins/linkai` to a `config.json` that takes effect. (If it is absent, the settings in `config.json.template` are used by default, but the features are disabled and need to be enabled via commands.)

-The configuration options are described below:
+The plugin configuration options are described below:

```bash
{
    "group_app_map": {                   # mapping between group chats and app codes
        "测试群名称1": "default",         # the group named "测试群名称1" uses the app whose app_code is "default"
        "测试群名称2": "Kv2fXJcH"
    },
@@ -21,19 +21,30 @@
        "max_tasks": 3,                  # total number of tasks that may be submitted concurrently
        "max_tasks_per_user": 1,         # number of tasks a single user may submit concurrently
        "use_image_create_prefix": true  # whether to reuse the global drawing trigger word; if enabled, the image_create_prefix in config.json also triggers drawing
    },
    "summary": {
        "enabled": true,                 # switch for the document summary and chat feature
        "group_enabled": true,           # whether the feature may be enabled in group chats
        "max_file_size": 5000            # file size limit in KB, 5M by default; larger files are ignored outright
    }
}
```

In the root `config.json`, set `linkai_api_key`, created and copied from the [console](https://chat.link-ai.tech/console/interface):

```bash
"linkai_api_key": "Link_xxxxxxxxx"
```

Notes:

-- The `group_app_map` section maps group chats to apps on the LinkAI platform, and the `midjourney` section configures mj drawing; fill them in as needed. A feature stays disabled while its configuration is absent
+- The `group_app_map` section maps group chats to apps on the LinkAI platform, the `midjourney` section configures mj drawing, and the `summary` section configures document summary and chat. The three sections are independent and can be enabled as needed
- The actual `config.json` must be valid JSON and must not contain '#' or the comments after it
- For `docker` deployments, the plugin can be configured by mapping `plugins/config.json` into the container; see the [docs](https://github.com/zhayujie/chatgpt-on-wechat#3-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)

## Plugin Usage

-> The knowledge base management features require the `linkai` chat to be enabled first, which depends on the `use_linkai` and `linkai_api_key` settings in the global `config.json`; the midjourney drawing feature only needs `linkai_api_key` and works whether or not `use_linkai` is enabled. See the [detailed docs](https://link-ai.tech/platform/link-app/wechat).
+> The knowledge base management features require the `linkai` chat to be enabled first, which depends on the `use_linkai` and `linkai_api_key` settings in the global `config.json`; the midjourney drawing and summary document chat features only need `linkai_api_key` and work whether or not `use_linkai` is enabled. See the [detailed docs](https://link-ai.tech/platform/link-app/wechat).

After configuration, run the project and the plugin loads automatically; send `#help linkai` to see its features.

@@ -73,7 +84,25 @@

Notes:
1. The `$mj open` and `$mj close` commands quickly enable and disable the drawing feature
-2. For deployments outside mainland China, set `img_proxy` to `False`
+2. For deployments outside mainland China, set `img_proxy` to `false`
3. With `use_image_create_prefix` enabled, the global drawing trigger word is reused: messages starting with "画" generate images.
4. Prompts containing sensitive words or malformed parameters may cause drawing to fail; failed generations consume no credits
5. If no image arrives there are two possibilities: either the image was generated but WeChat failed to send it (check the backend log for the image url; this is usually a wx restriction, so retry later or try another account), or the prompt was flagged as potentially violating, in which case mj reports no error but deletes the image after drawing so the program cannot fetch it; this consumes no credits.

### 3. Document summary and chat

#### Configuration

This feature relies on LinkAI's knowledge base and chat capabilities. Set `linkai_api_key` in the config.json at the project root and, following the plugin configuration notes above, add the `summary` section to the plugin config.json with `enabled` set to true.

If you prefer not to create `plugins/linkai/config.json`, the feature can also be enabled directly with the `$linkai sum open` command.

#### Usage

Once enabled, send the bot a **file** or a **shared link card** to generate a summary, then hold a multi-turn conversation about the content of the file or link.

#### Limitations

1. Files currently support the `txt`, `docx`, `pdf`, `md` and `csv` formats. File size is bounded by `max_file_size` and may not exceed 15M; files of up to about a million characters are supported. Uploading very long files is not recommended: token consumption is high, and a summary can hardly cover everything, so details have to be explored through follow-up conversation.
2. Shared links currently only support WeChat official account articles; more article types and video links will be supported later
3. Summary and chat are billed the same way as the LinkAI 3.5-4K model, by the tokens of the document content

@@ -10,5 +10,10 @@
    "max_tasks": 3,
    "max_tasks_per_user": 1,
    "use_image_create_prefix": true
  },
  "summary": {
    "enabled": true,
    "group_enabled": true,
    "max_file_size": 5000
  }
}
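Putting the pieces together, a complete comment-free `plugins/linkai/config.json` consistent with the documented options might look as follows. This is a sketch: only the options visible in this README are included, and the `enabled` and `img_proxy` keys of the `midjourney` section are assumed from the notes above rather than shown in the truncated hunk.

```json
{
  "group_app_map": {
    "测试群名称1": "default",
    "测试群名称2": "Kv2fXJcH"
  },
  "midjourney": {
    "enabled": true,
    "img_proxy": true,
    "max_tasks": 3,
    "max_tasks_per_user": 1,
    "use_image_create_prefix": true
  },
  "summary": {
    "enabled": true,
    "group_enabled": true,
    "max_file_size": 5000
  }
}
```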
@@ -1,17 +1,21 @@
import plugins
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from config import global_config
from plugins import *
from .midjourney import MJBot
from .summary import LinkSummary
from bridge import bridge

from common.expired_dict import ExpiredDict
from common import const
import os
from .utils import Util

@plugins.register(
    name="linkai",
    desc="A plugin that supports knowledge base and midjourney drawing.",
    version="0.1.0",
    author="https://link-ai.tech",
    desire_priority=99
)
class LinkAI(Plugin):
    def __init__(self):
@@ -23,7 +27,11 @@ class LinkAI(Plugin):
        self.config = self._load_config_template()
        if self.config:
            self.mj_bot = MJBot(self.config.get("midjourney"))
-        logger.info("[LinkAI] inited")
+        self.sum_config = {}
+        if self.config:
+            self.sum_config = self.config.get("summary")
+        logger.info(f"[LinkAI] inited, config={self.config}")

    def on_handle_context(self, e_context: EventContext):
        """
@@ -34,10 +42,39 @@ class LinkAI(Plugin):
            return

        context = e_context['context']
-        if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE]:
+        if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE, ContextType.FILE, ContextType.SHARING]:
            # filter content no need solve
            return

        if context.type == ContextType.FILE and self._is_summary_open(context):
            # file summary handling
            context.get("msg").prepare()
            file_path = context.content
            if not LinkSummary().check_file(file_path, self.sum_config):
                return
            _send_info(e_context, "正在为你加速生成摘要,请稍后")
            res = LinkSummary().summary_file(file_path)
            if not res:
                _set_reply_text("因为神秘力量无法获取文章内容,请稍后再试吧", e_context, level=ReplyType.TEXT)
                return
            USER_FILE_MAP[_find_user_id(context) + "-sum_id"] = res.get("summary_id")
            _set_reply_text(res.get("summary") + "\n\n💬 发送 \"开启对话\" 可以开启与文件内容的对话", e_context, level=ReplyType.TEXT)
            os.remove(file_path)
            return

        if (context.type == ContextType.SHARING and self._is_summary_open(context)) or \
                (context.type == ContextType.TEXT and LinkSummary().check_url(context.content)):
            if not LinkSummary().check_url(context.content):
                return
            _send_info(e_context, "正在为你加速生成摘要,请稍后")
            res = LinkSummary().summary_url(context.content)
            if not res:
                _set_reply_text("因为神秘力量无法获取文章内容,请稍后再试吧~", e_context, level=ReplyType.TEXT)
                return
            _set_reply_text(res.get("summary") + "\n\n💬 发送 \"开启对话\" 可以开启与文章内容的对话", e_context, level=ReplyType.TEXT)
            USER_FILE_MAP[_find_user_id(context) + "-sum_id"] = res.get("summary_id")
            return

        mj_type = self.mj_bot.judge_mj_task_type(e_context)
        if mj_type:
            # MJ drawing task handling
@@ -49,10 +86,38 @@ class LinkAI(Plugin):
            self._process_admin_cmd(e_context)
            return

        if context.type == ContextType.TEXT and context.content == "开启对话" and _find_sum_id(context):
            # open a chat session about the summarized content
            _send_info(e_context, "正在为你开启对话,请稍后")
            res = LinkSummary().summary_chat(_find_sum_id(context))
            if not res:
                _set_reply_text("开启对话失败,请稍后再试吧", e_context)
                return
            USER_FILE_MAP[_find_user_id(context) + "-file_id"] = res.get("file_id")
            _set_reply_text("💡你可以问我关于这篇文章的任何问题,例如:\n\n" + res.get("questions") + "\n\n发送 \"退出对话\" 可以关闭与文章的对话", e_context, level=ReplyType.TEXT)
            return

        if context.type == ContextType.TEXT and context.content == "退出对话" and _find_file_id(context):
            # close the chat session and clear its state
            del USER_FILE_MAP[_find_user_id(context) + "-file_id"]
            bot = bridge.Bridge().find_chat_bot(const.LINKAI)
            bot.sessions.clear_session(context["session_id"])
            _set_reply_text("对话已退出", e_context, level=ReplyType.TEXT)
            return

        if context.type == ContextType.TEXT and _find_file_id(context):
            # route follow-up questions to the bot together with the file_id
            bot = bridge.Bridge().find_chat_bot(const.LINKAI)
            context.kwargs["file_id"] = _find_file_id(context)
            reply = bot.reply(context.content, context)
            e_context["reply"] = reply
            e_context.action = EventAction.BREAK_PASS
            return

        if self._is_chat_task(e_context):
            # text chat task handling
            self._process_chat_task(e_context)

    # plugin admin commands
    def _process_admin_cmd(self, e_context: EventContext):
        context = e_context['context']
@@ -63,7 +128,7 @@ class LinkAI(Plugin):

        if len(cmd) == 2 and (cmd[1] == "open" or cmd[1] == "close"):
            # knowledge base toggle command
-            if not _is_admin(e_context):
+            if not Util.is_admin(e_context):
                _set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
                return
            is_open = True
@@ -81,7 +146,7 @@ class LinkAI(Plugin):
            if not context.kwargs.get("isgroup"):
                _set_reply_text("该指令需在群聊中使用", e_context, level=ReplyType.ERROR)
                return
-            if not _is_admin(e_context):
+            if not Util.is_admin(e_context):
                _set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
                return
            app_code = cmd[2]
@@ -94,11 +159,36 @@ class LinkAI(Plugin):
            # save the plugin config
            super().save_config(self.config)
            _set_reply_text(f"应用设置成功: {app_code}", e_context, level=ReplyType.INFO)
-        else:
-            _set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context,
-                            level=ReplyType.INFO)
-            return

        if len(cmd) == 3 and cmd[1] == "sum" and (cmd[2] == "open" or cmd[2] == "close"):
            # summary feature toggle command
            if not Util.is_admin(e_context):
                _set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
                return
            is_open = True
            tips_text = "开启"
            if cmd[2] == "close":
                tips_text = "关闭"
                is_open = False
            if not self.sum_config:
                _set_reply_text(f"插件未启用summary功能,请参考以下链接添加插件配置\n\nhttps://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/linkai/README.md", e_context, level=ReplyType.INFO)
            else:
                self.sum_config["enabled"] = is_open
                _set_reply_text(f"文章总结功能{tips_text}", e_context, level=ReplyType.INFO)
            return

        _set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context,
                        level=ReplyType.INFO)
        return

    def _is_summary_open(self, context) -> bool:
        if not self.sum_config or not self.sum_config.get("enabled"):
            return False
        if context.kwargs.get("isgroup") and not self.sum_config.get("group_enabled"):
            return False
        return True

    # LinkAI chat task handling
    def _is_chat_task(self, e_context: EventContext):
        context = e_context['context']
@@ -112,7 +202,7 @@ class LinkAI(Plugin):
        """
        context = e_context['context']
        # group app management
-        group_name = context.kwargs.get("msg").from_user_nickname
+        group_name = context.get("msg").from_user_nickname
        app_code = self._fetch_group_app_code(group_name)
        if app_code:
            context.kwargs['app_code'] = app_code
@@ -130,7 +220,7 @@ class LinkAI(Plugin):

    def get_help_text(self, verbose=False, **kwargs):
        trigger_prefix = _get_trigger_prefix()
-        help_text = "用于集成 LinkAI 提供的知识库、Midjourney绘画等能力。\n\n"
+        help_text = "用于集成 LinkAI 提供的知识库、Midjourney绘画、文档总结、联网搜索等能力。\n\n"
        if not verbose:
            return help_text
        help_text += f'📖 知识库\n - 群聊中指定应用: {trigger_prefix}linkai app 应用编码\n'
@@ -140,6 +230,7 @@ class LinkAI(Plugin):
        help_text += f"🎨 绘画\n - 生成: {trigger_prefix}mj 描述词1, 描述词2.. \n - 放大: {trigger_prefix}mju 图片ID 图片序号\n - 变换: {trigger_prefix}mjv 图片ID 图片序号\n - 重置: {trigger_prefix}mjr 图片ID"
        help_text += f"\n\n例如:\n\"{trigger_prefix}mj a little cat, white --ar 9:16\"\n\"{trigger_prefix}mju 11055927171882 2\""
        help_text += f"\n\"{trigger_prefix}mjv 11055927171882 2\"\n\"{trigger_prefix}mjr 11055927171882\""
        help_text += f"\n\n💡 文档总结和对话\n - 开启: {trigger_prefix}linkai sum open\n - 使用: 发送文件、公众号文章等可生成摘要,并与内容对话"
        return help_text

    def _load_config_template(self):
@@ -150,22 +241,23 @@ class LinkAI(Plugin):
            with open(plugin_config_path, "r", encoding="utf-8") as f:
                plugin_conf = json.load(f)
                plugin_conf["midjourney"]["enabled"] = False
                plugin_conf["summary"]["enabled"] = False
                return plugin_conf
        except Exception as e:
            logger.exception(e)

-# static helpers
-def _is_admin(e_context: EventContext) -> bool:
-    """
-    Check whether the message was sent by an admin user
-    :param e_context: the message context
-    :return: True if it was, False otherwise
-    """
-    context = e_context["context"]

def _send_info(e_context: EventContext, content: str):
    reply = Reply(ReplyType.TEXT, content)
    channel = e_context["channel"]
    channel.send(reply, e_context["context"])


def _find_user_id(context):
    if context["isgroup"]:
-        return context.kwargs.get("msg").actual_user_id in global_config["admin_users"]
+        return context.kwargs.get("msg").actual_user_id
    else:
-        return context["receiver"] in global_config["admin_users"]
+        return context["receiver"]


def _set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
@@ -173,6 +265,15 @@ def _set_reply_text(content: str, e_context: EventContext, level: ReplyType = Re
    e_context["reply"] = reply
    e_context.action = EventAction.BREAK_PASS


def _get_trigger_prefix():
    return conf().get("plugin_trigger_prefix", "$")


def _find_sum_id(context):
    return USER_FILE_MAP.get(_find_user_id(context) + "-sum_id")


def _find_file_id(context):
    user_id = _find_user_id(context)
    if user_id:
        return USER_FILE_MAP.get(user_id + "-file_id")


USER_FILE_MAP = ExpiredDict(conf().get("expires_in_seconds") or 60 * 30)
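`USER_FILE_MAP` above is an `ExpiredDict` keyed by `<user_id>-sum_id` and `<user_id>-file_id`, so summary sessions lapse on their own after the configured expiry. A minimal sketch of such a TTL mapping follows; it is an assumption about how `common/expired_dict.py` behaves, not that module's actual implementation.

```python
import time

class ExpiredDict(dict):
    """Minimal TTL dict sketch: entries vanish after `expires_in_seconds`."""

    def __init__(self, expires_in_seconds: int):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds

    def __setitem__(self, key, value):
        # store the value together with its expiry deadline
        super().__setitem__(key, (value, time.time() + self.expires_in_seconds))

    def __getitem__(self, key):
        value, deadline = super().__getitem__(key)
        if time.time() > deadline:
            # lazily evict the stale entry on access
            super().__delitem__(key)
            raise KeyError(key)
        return value

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

USER_FILE_MAP = ExpiredDict(expires_in_seconds=2)
USER_FILE_MAP["user1-sum_id"] = "s_123"
print(USER_FILE_MAP.get("user1-sum_id"))  # s_123 while fresh
```

The lazy-eviction-on-read design avoids any background cleanup thread; a key simply reads as absent once its deadline has passed.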
@@ -5,10 +5,10 @@ import requests
import threading
import time
from bridge.reply import Reply, ReplyType
import aiohttp
import asyncio
from bridge.context import ContextType
from plugins import EventContext, EventAction
from .utils import Util

INVALID_REQUEST = 410
NOT_FOUND_ORIGIN_IMAGE = 461
@@ -49,7 +49,7 @@ task_name_mapping = {


class MJTask:
-    def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=None, expires: int = 60 * 30,
+    def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=None, expires: int = 60 * 6,
                 status=Status.PENDING):
        self.id = id
        self.user_id = user_id
@@ -114,6 +114,9 @@ class MJBot:
            return

        if len(cmd) == 2 and (cmd[1] == "open" or cmd[1] == "close"):
            if not Util.is_admin(e_context):
                Util.set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
                return
            # midjourney toggle command
            is_open = True
            tips_text = "开启"
@@ -0,0 +1,94 @@
import requests
from config import conf
from common.log import logger
import os


class LinkSummary:
    def __init__(self):
        pass

    def summary_file(self, file_path: str):
        file_body = {
            "file": open(file_path, "rb"),
            "name": file_path.split("/")[-1],
        }
        res = requests.post(url=self.base_url() + "/v1/summary/file", headers=self.headers(), files=file_body, timeout=(5, 300))
        return self._parse_summary_res(res)

    def summary_url(self, url: str):
        body = {
            "url": url
        }
        res = requests.post(url=self.base_url() + "/v1/summary/url", headers=self.headers(), json=body, timeout=(5, 180))
        return self._parse_summary_res(res)

    def summary_chat(self, summary_id: str):
        body = {
            "summary_id": summary_id
        }
        res = requests.post(url=self.base_url() + "/v1/summary/chat", headers=self.headers(), json=body, timeout=(5, 180))
        if res.status_code == 200:
            res = res.json()
            logger.debug(f"[LinkSum] chat open, res={res}")
            if res.get("code") == 200:
                data = res.get("data")
                return {
                    "questions": data.get("questions"),
                    "file_id": data.get("file_id")
                }
        else:
            res_json = res.json()
            logger.error(f"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}")
        return None

    def _parse_summary_res(self, res):
        if res.status_code == 200:
            res = res.json()
            logger.debug(f"[LinkSum] url summary, res={res}")
            if res.get("code") == 200:
                data = res.get("data")
                return {
                    "summary": data.get("summary"),
                    "summary_id": data.get("summary_id")
                }
        else:
            res_json = res.json()
            logger.error(f"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}")
        return None

    def base_url(self):
        return conf().get("linkai_api_base", "https://api.link-ai.chat")

    def headers(self):
        return {"Authorization": "Bearer " + conf().get("linkai_api_key")}

    def check_file(self, file_path: str, sum_config: dict) -> bool:
        file_size = os.path.getsize(file_path) // 1000

        if (sum_config.get("max_file_size") and file_size > sum_config.get("max_file_size")) or file_size > 15000:
            logger.warn(f"[LinkSum] file size exceeds limit, No processing, file_size={file_size}KB")
            return False

        suffix = file_path.split(".")[-1]
        support_list = ["txt", "csv", "docx", "pdf", "md"]
        if suffix not in support_list:
            logger.warn(f"[LinkSum] unsupported file, suffix={suffix}, support_list={support_list}")
            return False

        return True

    def check_url(self, url: str):
        if not url:
            return False
        support_list = ["http://mp.weixin.qq.com", "https://mp.weixin.qq.com"]
        black_support_list = ["https://mp.weixin.qq.com/mp/waerrpage"]
        for black_url_prefix in black_support_list:
            if url.strip().startswith(black_url_prefix):
                logger.warn(f"[LinkSum] unsupported url, no need to process, url={url}")
                return False
        for support_url in support_list:
            if url.strip().startswith(support_url):
                return True
        logger.debug(f"[LinkSum] unsupported url, no need to process, url={url}")
        return False
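`check_url` above combines an allow-list of URL prefixes with a black-list that is checked first, so the WeChat error page is rejected even though it matches the allowed domain. That precedence can be sketched standalone; `is_summarizable_url` is a hypothetical name, while the prefixes are taken from the code above.

```python
SUPPORT_PREFIXES = ["http://mp.weixin.qq.com", "https://mp.weixin.qq.com"]
BLACK_PREFIXES = ["https://mp.weixin.qq.com/mp/waerrpage"]

def is_summarizable_url(url: str) -> bool:
    """Black-listed prefixes win over the allow-list: an error-page URL is
    rejected even though it starts with the allowed domain prefix."""
    if not url:
        return False
    url = url.strip()
    if any(url.startswith(p) for p in BLACK_PREFIXES):
        return False
    return any(url.startswith(p) for p in SUPPORT_PREFIXES)

print(is_summarizable_url("https://mp.weixin.qq.com/s/abc"))            # True
print(is_summarizable_url("https://mp.weixin.qq.com/mp/waerrpage?x=1"))  # False
```

Ordering matters here: checking the allow-list first would accept the `waerrpage` URL, since it shares the `https://mp.weixin.qq.com` prefix.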
@@ -0,0 +1,28 @@
from config import global_config
from bridge.reply import Reply, ReplyType
from plugins.event import EventContext, EventAction


class Util:
    @staticmethod
    def is_admin(e_context: EventContext) -> bool:
        """
        Check whether the message was sent by an admin user
        :param e_context: the message context
        :return: True if it was, False otherwise
        """
        context = e_context["context"]
        if context["isgroup"]:
            actual_user_id = context.kwargs.get("msg").actual_user_id
            for admin_user in global_config["admin_users"]:
                if actual_user_id and actual_user_id in admin_user:
                    return True
            return False
        else:
            return context["receiver"] in global_config["admin_users"]

    @staticmethod
    def set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
        reply = Reply(level, content)
        e_context["reply"] = reply
        e_context.action = EventAction.BREAK_PASS
@@ -21,6 +21,9 @@ class Plugin:
        if os.path.exists(plugin_config_path):
            with open(plugin_config_path, "r", encoding="utf-8") as f:
                plugin_conf = json.load(f)

                # write the loaded config into the global in-memory plugin config
                plugin_config[self.name] = plugin_conf
                logger.debug(f"loading plugin config, plugin_name={self.name}, conf={plugin_conf}")
                return plugin_conf
@@ -15,6 +15,10 @@
    "timetask": {
      "url": "https://github.com/haikerapples/timetask.git",
      "desc": "一款定时任务系统的插件"
    },
    "Apilot": {
      "url": "https://github.com/6vision/Apilot.git",
      "desc": "通过api直接查询早报、热榜、快递、天气等实用信息的插件"
    }
  }
}