Mirror of https://github.com/zhayujie/chatgpt-on-wechat.git (synced 2026-05-17 18:08:57 +08:00)

Compare commits (58 commits)
| SHA1 |
|---|
| ca916b7ce5 |
| 01e02934da |
| c81a79f7b9 |
| 1133648bf6 |
| e05bc541d7 |
| d689d20482 |
| 39dd99b272 |
| cda21acb43 |
| 9bd7d09f20 |
| b22994c2d2 |
| e027286b6d |
| d6e16995e0 |
| 782bff3a51 |
| de26dc0597 |
| 233b24ab0f |
| 2f9e5b1219 |
| dd36b8b150 |
| f81ac31fe1 |
| 74a253f521 |
| 41762a1c57 |
| a786fa4b75 |
| e4c7602c0c |
| e0d2e34980 |
| 9ef8e1be3f |
| aae9b64833 |
| 4bab4299f2 |
| 954e55f4b4 |
| 2361e3c28c |
| 8aac86f0a9 |
| 6384e9310b |
| 7a9205dfba |
| 94b47a56f4 |
| 709b5be634 |
| f970b2c168 |
| 973acb37ed |
| 1c9020a565 |
| c5f1d0042c |
| fa706e8b1d |
| 12c170f227 |
| db27dfe227 |
| 2db4673392 |
| 38619db629 |
| 930fd436ea |
| 98b8ff2fc8 |
| d0662683f9 |
| 957f2574a9 |
| 109b362ebd |
| ff3fdfa738 |
| e2636ed54a |
| dbe2f17e1a |
| 4dc535673f |
| f414b6408e |
| 3aa2e6a04d |
| 1963ff273f |
| bb737a71d5 |
| 4dbc54fa15 |
| 1d4ff796d7 |
| 44cb54a9ea |
```diff
@@ -28,6 +28,12 @@ jobs:
       - name: Checkout repository
         uses: actions/checkout@v3
 
+      - name: Login to Docker Hub
+        uses: docker/login-action@v2
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+
       - name: Log in to the Container registry
         uses: docker/login-action@v2
         with:
@@ -39,7 +45,9 @@ jobs:
         id: meta
         uses: docker/metadata-action@v4
         with:
-          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          images: |
+            ${{ env.IMAGE_NAME }}
+            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
 
       - name: Build and push Docker image
         uses: docker/build-push-action@v3
```
+2 -1

```diff
@@ -24,4 +24,5 @@ plugins/**/
 !plugins/banwords/**/
 !plugins/hello
 !plugins/role
-!plugins/keyword
+!plugins/keyword
+!plugins/linkai
```
````diff
@@ -13,11 +13,6 @@
 
 > More applications are welcome: implement the receive/send message logic following the [Terminal channel code](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/terminal/terminal_channel.py). New plugins are also welcome, see the [plugin documentation](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins).
 
-**One-click deploy:**
-
-- Personal WeChat
-
-[](https://railway.app/template/qApznZ?referralCode=RC3znh)
 
 # Demo
 
 https://user-images.githubusercontent.com/26161723/233777277-e3b9928e-b88f-43e2-b0e0-3cbc923bc799.mp4
@@ -26,13 +21,13 @@ Demo made by [Visionn](https://www.wangpc.cc/)
 
 # Community
 
-Add the assistant's WeChat to join the group:
+Add the assistant's WeChat to join the group, with the note "wechat":
 
 <img width="240" src="./docs/images/contact.jpg">
 
 # Changelog
 
->**2023.06.12:** Integrated the [LinkAI](https://chat.link-ai.tech/console) platform: create a personal knowledge base online and connect it to WeChat. Beta version, feedback welcome; see the [integration docs](https://link-ai.tech/platform/link-app/wechat).
+>**2023.06.12:** Integrated the [LinkAI](https://chat.link-ai.tech/console) platform: create a personal knowledge base online and connect it to WeChat, Official Accounts and WeCom. See the [integration docs](https://link-ai.tech/platform/link-app/wechat).
 
 >**2023.04.26:** Support deployment as a WeCom application, compatible with plugins and with voice/image interaction; an ideal choice for a personal assistant. [Docs](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md). (contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
@@ -63,6 +58,8 @@ Demo made by [Visionn](https://www.wangpc.cc/)
 Supports Linux, MacOS and Windows (can run long-term on a Linux server); `Python` must be installed.
 > Python 3.7.1~3.9.X is recommended (3.8 preferred); 3.10 and above work on MacOS but are untested on other systems.
+
+> Note: Docker and Railway deployments need neither a Python environment nor the source code; skip ahead to the next section.
 
 **(1) Clone the project:**
 
 ```bash
@@ -114,7 +111,7 @@ pip3 install azure-cognitiveservices-speech
 {
   "open_ai_api_key": "YOUR API KEY",  # the OpenAI API key created above
   "model": "gpt-3.5-turbo",  # model name; when use_azure_chatgpt is true, this is the Azure model deployment name
-  "proxy": "127.0.0.1:7890",  # proxy client ip and port
+  "proxy": "",  # proxy client ip and port; required when running behind a proxy, e.g. "127.0.0.1:7890"
   "single_chat_prefix": ["bot", "@bot"],  # in private chats, text must contain this prefix to trigger the bot
   "single_chat_reply_prefix": "[bot] ",  # prefix prepended to private-chat replies, to distinguish the bot from a human
   "group_chat_prefix": ["@bot"],  # in group chats, this prefix triggers the bot
@@ -126,6 +123,7 @@ pip3 install azure-cognitiveservices-speech
   "group_speech_recognition": false,  # whether to enable group voice recognition
   "use_azure_chatgpt": false,  # whether to use the Azure ChatGPT service instead of OpenAI's; when true, open_ai_api_base must be set, e.g. https://xxx.openai.azure.com/
   "azure_deployment_id": "",  # model deployment name when using Azure ChatGPT
+  "azure_api_version": "",  # API version when using Azure ChatGPT
   "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",  # persona description
   # subscribe message; set it for Official Account and WeCom channels, auto-replied on subscription; supports placeholders, currently {trigger_prefix}, replaced at runtime with the bot's trigger word
   "subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持角色扮演和文字冒险等丰富插件。\n输入{trigger_prefix}#help 查看详细指令。"
@@ -196,14 +194,61 @@ nohup python3 app.py & tail -f nohup.out # 在后台运行程序并通
 
 ### 3. Docker deploy
 
-See the docs: [Docker deploy](https://github.com/limccn/chatgpt-on-wechat/wiki/Docker%E9%83%A8%E7%BD%B2) (Contributed by [limccn](https://github.com/limccn)).
-
-### 4. Railway deploy (✅recommended)
-> Railway offers $5 and up to 500 hours of free quota per month.
-1. Open [Railway](https://railway.app/template/qApznZ?referralCode=RC3znh).
+> A Docker deployment needs neither the source code nor the dependencies; just fetch the docker-compose.yml config file and start the container.
+
+> Prerequisite: `docker` and `docker-compose` are installed. Installation succeeded if `docker -v` and `docker-compose version` (or `docker compose version`) print a version; see the [docker website](https://docs.docker.com/engine/install/) for downloads.
+
+#### (1) Download docker-compose.yml
+
+```bash
+wget https://open-1317903499.cos.ap-guangzhou.myqcloud.com/docker-compose.yml
+```
+
+After downloading, open `docker-compose.yml` and edit the required settings, such as `OPEN_AI_API_KEY` and `GROUP_NAME_WHITE_LIST`.
+
+#### (2) Start the container
+
+Run the following in the directory containing `docker-compose.yml`:
+
+```bash
+sudo docker compose up -d
+```
+
+If `sudo docker ps` shows a container whose NAMES is chatgpt-on-wechat, it is running.
+
+Notes:
+
+- With `docker-compose` 1.X, start the container with `sudo docker-compose up -d` instead
+- The command pulls the latest image from [docker hub](https://hub.docker.com/r/zhayujie/chatgpt-on-wechat); the latest image is rebuilt on every project release
+
+Finally, view the container logs and scan the QR code in them to log in:
+
+```bash
+sudo docker logs -f chatgpt-on-wechat
+```
+
+#### (3) Using plugins
+
+To change plugin settings inside the docker container, use a mount: rename the [plugin config file](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/config.json.template)
+to `config.json`, place it next to `docker-compose.yml`, and add a `volumes` mapping under the `chatgpt-on-wechat` service in `docker-compose.yml`:
+
+```
+volumes:
+  - ./config.json:/app/plugins/config.json
+```
+
+### 4. Railway deploy
+
+> Railway offers $5 and up to 500 hours of free quota per month. (Update 07.11: most accounts can no longer deploy for free)
+
+1. Open [Railway](https://railway.app/template/qApznZ?referralCode=RC3znh)
+2. Click the `Deploy Now` button.
+3. Set environment variables to override runtime parameters, e.g. `open_ai_api_key`, `character_desc`.
+
+**One-click deploy:**
+
+[](https://railway.app/template/qApznZ?referralCode=RC3znh)
 
 ## FAQ
 
 FAQs: <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>
````
```diff
@@ -121,6 +121,7 @@ class ChatGPTBot(Bot, OpenAIImage):
             if args is None:
                 args = self.args
             response = openai.ChatCompletion.create(api_key=api_key, messages=session.messages, **args)
             # logger.debug("[CHATGPT] response={}".format(response))
+            # logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
             return {
                 "total_tokens": response["usage"]["total_tokens"],
@@ -165,7 +166,7 @@ class AzureChatGPTBot(ChatGPTBot):
     def __init__(self):
         super().__init__()
         openai.api_type = "azure"
-        openai.api_version = "2023-03-15-preview"
+        openai.api_version = conf().get("azure_api_version", "2023-06-01-preview")
         self.args["deployment_id"] = conf().get("azure_deployment_id")
 
     def create_img(self, query, retry_count=0, api_key=None):
```
```diff
@@ -57,25 +57,25 @@ def num_tokens_from_messages(messages, model):
     """Returns the number of tokens used by a list of messages."""
     import tiktoken
 
-    if model == "gpt-3.5-turbo" or model == "gpt-35-turbo":
-        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
-    elif model == "gpt-4":
-        return num_tokens_from_messages(messages, model="gpt-4-0314")
+    if model in ["gpt-3.5-turbo-0301", "gpt-35-turbo"]:
+        return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
+    elif model in ["gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-35-turbo-16k"]:
+        return num_tokens_from_messages(messages, model="gpt-4")
 
     try:
         encoding = tiktoken.encoding_for_model(model)
     except KeyError:
         logger.debug("Warning: model not found. Using cl100k_base encoding.")
         encoding = tiktoken.get_encoding("cl100k_base")
-    if model == "gpt-3.5-turbo-0301":
+    if model == "gpt-3.5-turbo":
         tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
         tokens_per_name = -1  # if there's a name, the role is omitted
-    elif model == "gpt-4-0314":
+    elif model == "gpt-4":
         tokens_per_message = 3
         tokens_per_name = 1
     else:
-        logger.warn(f"num_tokens_from_messages() is not implemented for model {model}. Returning num tokens assuming gpt-3.5-turbo-0301.")
-        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
+        logger.warn(f"num_tokens_from_messages() is not implemented for model {model}. Returning num tokens assuming gpt-3.5-turbo.")
+        return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
     num_tokens = 0
     for message in messages:
         num_tokens += tokens_per_message
```
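The accounting in the hunk above (a fixed per-message overhead plus the encoded length of each field) can be sketched without tiktoken. The following is a minimal approximation in which `encode` is a stand-in tokenizer (whitespace splitting, purely for illustration); the real code uses tiktoken's per-model encodings:

```python
def num_tokens_from_messages(messages, model, encode=lambda s: s.split()):
    """Approximate the token count of a chat transcript.

    `encode` is a stand-in for a real tokenizer (the project uses tiktoken);
    whitespace splitting here is only for illustration.
    """
    if model == "gpt-3.5-turbo":
        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
        tokens_per_name = -1    # if there's a name, the role is omitted
    elif model == "gpt-4":
        tokens_per_message = 3
        tokens_per_name = 1
    else:
        # unknown model: fall back to the gpt-3.5-turbo accounting, as the original does
        return num_tokens_from_messages(messages, "gpt-3.5-turbo", encode)

    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens

msgs = [{"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "hi"}]
print(num_tokens_from_messages(msgs, "gpt-3.5-turbo"))  # 17 with the whitespace stand-in
```

The recursive fallback mirrors the diff's behaviour: snapshot model names are redirected to a base model whose overheads are known, and anything else is counted as if it were gpt-3.5-turbo.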
+32 -25

```diff
@@ -29,18 +29,24 @@ class LinkAIBot(Bot, OpenAIImage):
         if context.type == ContextType.TEXT:
             return self._chat(query, context)
         elif context.type == ContextType.IMAGE_CREATE:
-            ok, retstring = self.create_img(query, 0)
+            reply = None
+            ok, res = self.create_img(query, 0)
             if ok:
-                reply = Reply(ReplyType.IMAGE_URL, retstring)
+                reply = Reply(ReplyType.IMAGE_URL, res)
             else:
-                reply = Reply(ReplyType.ERROR, retstring)
+                reply = Reply(ReplyType.ERROR, res)
             return reply
         else:
             reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
             return reply
 
-    def _chat(self, query, context, retry_count=0):
+    def _chat(self, query, context, retry_count=0) -> Reply:
+        """
+        Send a chat request
+        :param query: the prompt for the request
+        :param context: the conversation context
+        :param retry_count: current recursive retry count
+        :return: the reply
+        """
         if retry_count >= 2:
             # exit from retry 2 times
             logger.warn("[LINKAI] failed after maximum number of retry times")
@@ -52,7 +58,7 @@ class LinkAIBot(Bot, OpenAIImage):
                 logger.info(f"[LINKAI] won't set appcode because a plugin ({context['generate_breaked_by']}) affected the context")
                 app_code = None
             else:
-                app_code = conf().get("linkai_app_code")
+                app_code = context.kwargs.get("app_code") or conf().get("linkai_app_code")
             linkai_api_key = conf().get("linkai_api_key")
 
             session_id = context["session_id"]
@@ -63,10 +69,8 @@ class LinkAIBot(Bot, OpenAIImage):
             if app_code and session.messages[0].get("role") == "system":
                 session.messages.pop(0)
 
-            logger.info(f"[LINKAI] query={query}, app_code={app_code}")
-
             body = {
-                "appCode": app_code,
+                "app_code": app_code,
                 "messages": session.messages,
                 "model": conf().get("model") or "gpt-3.5-turbo",  # name of the chat model
                 "temperature": conf().get("temperature"),
@@ -74,31 +78,34 @@ class LinkAIBot(Bot, OpenAIImage):
                 "frequency_penalty": conf().get("frequency_penalty", 0.0),  # in [-2,2]; higher values favour novel content
                 "presence_penalty": conf().get("presence_penalty", 0.0),  # in [-2,2]; higher values favour novel content
             }
+            logger.info(f"[LINKAI] query={query}, app_code={app_code}, mode={body.get('model')}")
             headers = {"Authorization": "Bearer " + linkai_api_key}
 
             # do http request
-            res = requests.post(url=self.base_url + "/chat/completion", json=body, headers=headers).json()
-            if not res or not res["success"]:
-                if res.get("code") == self.AUTH_FAILED_CODE:
-                    logger.exception(f"[LINKAI] please check your linkai_api_key, res={res}")
-                    return Reply(ReplyType.ERROR, "请再问我一次吧")
-
-                elif res.get("code") == self.NO_QUOTA_CODE:
-                    logger.exception(f"[LINKAI] please check your account quota, https://chat.link-ai.tech/console/account")
-                    return Reply(ReplyType.ERROR, "提问太快啦,请休息一下再问我吧")
-
-                else:
-                    # retry
-                    time.sleep(2)
-                    logger.warn(f"[LINKAI] do retry, times={retry_count}")
-                    return self._chat(query, context, retry_count + 1)
-
-            # execute success
-            reply_content = res["data"]["content"]
-            logger.info(f"[LINKAI] reply={reply_content}")
-            self.sessions.session_reply(reply_content, session_id)
-            return Reply(ReplyType.TEXT, reply_content)
+            res = requests.post(url=self.base_url + "/chat/completions", json=body, headers=headers,
+                                timeout=conf().get("request_timeout", 180))
+            if res.status_code == 200:
+                # execute success
+                response = res.json()
+                reply_content = response["choices"][0]["message"]["content"]
+                total_tokens = response["usage"]["total_tokens"]
+                logger.info(f"[LINKAI] reply={reply_content}, total_tokens={total_tokens}")
+                self.sessions.session_reply(reply_content, session_id, total_tokens)
+                return Reply(ReplyType.TEXT, reply_content)
+
+            else:
+                response = res.json()
+                error = response.get("error")
+                logger.error(f"[LINKAI] chat failed, status_code={res.status_code}, "
+                             f"msg={error.get('message')}, type={error.get('type')}")
+
+                if res.status_code >= 500:
+                    # server error, need retry
+                    time.sleep(2)
+                    logger.warn(f"[LINKAI] do retry, times={retry_count}")
+                    return self._chat(query, context, retry_count + 1)
+
+                return Reply(ReplyType.ERROR, "提问太快啦,请休息一下再问我吧")
 
         except Exception as e:
             logger.exception(e)
```
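The control flow the new `_chat` adopts (return on HTTP 200, retry only on 5xx, capped recursion depth) is a common pattern that can be sketched with a stubbed transport. In this sketch, `post` stands in for `requests.post(...)` and returns a `(status_code, payload)` pair; the names and shapes are assumptions for illustration, not the LinkAI API:

```python
def chat_with_retry(post, body, retry_count=0, max_retries=2):
    """Call `post` once; on a server error (5xx), recurse with an
    incremented counter, giving up after `max_retries` attempts."""
    if retry_count >= max_retries:
        # exit after the maximum number of retries
        return {"error": "failed after maximum number of retries"}
    status, payload = post(body)
    if status == 200:
        return payload  # success: hand the parsed body back
    if status >= 500:
        # transient server error: retry
        return chat_with_retry(post, body, retry_count + 1, max_retries)
    return {"error": f"client error {status}"}  # 4xx: retrying won't help

# a fake transport that fails once with 502, then succeeds
calls = {"n": 0}
def flaky_post(body):
    calls["n"] += 1
    return (502, None) if calls["n"] == 1 else (200, {"content": "ok"})

print(chat_with_retry(flaky_post, {}))
```

The key design point, matching the diff, is that client errors (4xx) return immediately rather than burning retries, while the recursion counter bounds the worst case even if the server never recovers.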
```diff
@@ -223,9 +223,9 @@ class ChatChannel(Channel):
                     return self._decorate_reply(context, reply)
                 if context.get("isgroup", False):
                     reply_text = "@" + context["msg"].actual_user_nickname + "\n" + reply_text.strip()
-                    reply_text = conf().get("group_chat_reply_prefix", "") + reply_text
+                    reply_text = conf().get("group_chat_reply_prefix", "") + reply_text + conf().get("group_chat_reply_suffix", "")
                 else:
-                    reply_text = conf().get("single_chat_reply_prefix", "") + reply_text
+                    reply_text = conf().get("single_chat_reply_prefix", "") + reply_text + conf().get("single_chat_reply_suffix", "")
                 reply.content = reply_text
             elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
                 reply.content = "[" + str(reply.type) + "]\n" + reply.content
```
```diff
@@ -48,6 +48,7 @@ class ChatMessage(object):
     to_user_nickname = None
     other_user_id = None
     other_user_nickname = None
+    my_msg = False
 
     is_group = False
     is_at = False
```
```diff
@@ -53,11 +53,14 @@ def _check(func):
         if msgId in self.receivedMsgs:
             logger.info("Wechat message {} already received, ignore".format(msgId))
             return
-        self.receivedMsgs[msgId] = cmsg
+        self.receivedMsgs[msgId] = True
         create_time = cmsg.create_time  # message timestamp
         if conf().get("hot_reload") == True and int(create_time) < int(time.time()) - 60:  # skip history messages older than 1 minute
             logger.debug("[WX]history message {} skipped".format(msgId))
             return
+        if cmsg.my_msg:
+            logger.debug("[WX]my message {} skipped".format(msgId))
+            return
         return func(self, cmsg)
 
     return wrapper
@@ -105,7 +108,7 @@ class WechatChannel(ChatChannel):
 
     def __init__(self):
         super().__init__()
-        self.receivedMsgs = ExpiredDict(60 * 60 * 24)
+        self.receivedMsgs = ExpiredDict(60 * 60)
 
     def startup(self):
         itchat.instance.receivingRetryCount = 600  # adjust the disconnect timeout
@@ -159,7 +162,7 @@ class WechatChannel(ChatChannel):
     @_check
     def handle_group(self, cmsg: ChatMessage):
         if cmsg.ctype == ContextType.VOICE:
-            if conf().get("speech_recognition") != True:
+            if conf().get("group_speech_recognition") != True:
                 return
             logger.debug("[WX]receive voice for group msg: {}".format(cmsg.content))
         elif cmsg.ctype == ContextType.IMAGE:
```
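The dedup in `_check` rests on `ExpiredDict`: a dict whose entries lapse a fixed number of seconds after insertion, so a message id is treated as a duplicate only within the TTL window. A minimal sketch of that idea, with an injectable clock so it can be exercised without waiting (the real class lives in the project's common utilities; only the TTL-in-seconds constructor argument is taken from the diff, the rest is an assumed implementation):

```python
import time

class ExpiredDict(dict):
    """A dict whose entries expire `expires_in_seconds` after insertion."""

    def __init__(self, expires_in_seconds, clock=time.time):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds
        self.clock = clock  # injectable for testing

    def __setitem__(self, key, value):
        # store the expiry deadline alongside the value
        super().__setitem__(key, (self.clock() + self.expires_in_seconds, value))

    def __getitem__(self, key):
        deadline, value = super().__getitem__(key)
        if self.clock() > deadline:
            del self[key]          # lazily evict on access
            raise KeyError(key)
        return value

    def __contains__(self, key):
        try:
            self[key]
            return True
        except KeyError:
            return False

now = [0.0]
msgs = ExpiredDict(60 * 60, clock=lambda: now[0])  # 1-hour window, as in the diff
msgs["msg-1"] = True
assert "msg-1" in msgs      # seen within the hour: duplicate, ignore
now[0] += 60 * 60 + 1
assert "msg-1" not in msgs  # TTL elapsed: treated as a new message
```

Storing `True` instead of the whole message object (the other change in this hunk) keeps the dedup map small, since only membership is ever checked.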
```diff
@@ -58,6 +58,8 @@ class WechatMessage(ChatMessage):
             if self.to_user_id == user_id:
                 self.to_user_nickname = nickname
             try:  # for strangers, the 'User' field may be missing
+                self.my_msg = itchat_msg["ToUserName"] == itchat_msg["User"]["UserName"] and \
+                              itchat_msg["ToUserName"] != itchat_msg["FromUserName"]
                 self.other_user_id = itchat_msg["User"]["UserName"]
                 self.other_user_nickname = itchat_msg["User"]["NickName"]
                 if self.other_user_id == self.from_user_id:
```
```diff
@@ -2,6 +2,7 @@
   "open_ai_api_key": "YOUR API KEY",
   "model": "gpt-3.5-turbo",
   "proxy": "",
+  "hot_reload": false,
   "single_chat_prefix": [
     "bot",
     "@bot"
```
```diff
@@ -19,11 +19,14 @@ available_setting = {
     "model": "gpt-3.5-turbo",
     "use_azure_chatgpt": False,  # whether to use Azure's chatgpt
     "azure_deployment_id": "",  # azure model deployment name
+    "azure_api_version": "",  # azure api version
     # bot trigger settings
     "single_chat_prefix": ["bot", "@bot"],  # in private chats, text must contain this prefix to trigger the bot
     "single_chat_reply_prefix": "[bot] ",  # prefix for private-chat replies, to distinguish the bot from a human
-    "group_chat_prefix": ["@bot"],  # in group chats, this prefix triggers the bot
+    "single_chat_reply_suffix": "",  # suffix for private-chat replies; \n starts a new line
+    "group_chat_prefix": ["@bot"],  # in group chats, this prefix triggers the bot
     "group_chat_reply_prefix": "",  # prefix for group-chat replies
+    "group_chat_reply_suffix": "",  # suffix for group-chat replies; \n starts a new line
     "group_chat_keyword": [],  # in group chats, these keywords trigger the bot
     "group_at_off": False,  # whether to disable triggering via @bot in groups
     "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],  # list of group names with auto-reply enabled
@@ -35,7 +38,8 @@ available_setting = {
     "image_create_size": "256x256",  # image size: 256x256, 512x512 or 1024x1024
     # chatgpt session settings
     "expires_in_seconds": 3600,  # expiry time of idle sessions
-    "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",  # persona description
+    # persona description
+    "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
     "conversation_max_tokens": 1000,  # maximum number of characters kept as conversation context
     # chatgpt rate limiting
     "rate_limit_chatgpt": 20,  # chatgpt call rate limit
@@ -99,6 +103,8 @@ available_setting = {
     "appdata_dir": "",  # data directory
     # plugin settings
     "plugin_trigger_prefix": "$",  # standard prefix for plugin chat commands; avoid clashing with the admin prefix "#"
+    # whether to use the global plugin config
+    "use_global_plugin_config": False,
     # knowledge-base platform settings
     "use_linkai": False,
     "linkai_api_key": "",
@@ -226,3 +232,32 @@ def subscribe_msg():
     trigger_prefix = conf().get("single_chat_prefix", [""])[0]
     msg = conf().get("subscribe_msg", "")
     return msg.format(trigger_prefix=trigger_prefix)
+
+
+# global plugin config
+plugin_config = {}
+
+
+def write_plugin_config(pconf: dict):
+    """
+    Write the global plugin config
+    :param pconf: the full plugin config
+    """
+    global plugin_config
+    for k in pconf:
+        plugin_config[k.lower()] = pconf[k]
+
+
+def pconf(plugin_name: str) -> dict:
+    """
+    Look up a plugin's config by name
+    :param plugin_name: the plugin name
+    :return: that plugin's config
+    """
+    return plugin_config.get(plugin_name.lower())
+
+
+# global config, for state that applies everywhere
+global_config = {
+    "admin_users": []
+}
```
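The `write_plugin_config`/`pconf` pair added above is essentially a case-insensitive registry: plugin names are lower-cased on write so that lookups succeed regardless of how a plugin capitalises its own name. A self-contained sketch of the same pattern (the example plugin names and settings are made up for illustration):

```python
# global plugin config; plugin names are normalised to lower case
plugin_config = {}

def write_plugin_config(all_plugins: dict) -> None:
    """Store the full plugin config, lower-casing each plugin name."""
    for name, settings in all_plugins.items():
        plugin_config[name.lower()] = settings

def pconf(plugin_name: str):
    """Fetch one plugin's config by case-insensitive name, or None."""
    return plugin_config.get(plugin_name.lower())

# hypothetical plugin settings, just to show the case-insensitive lookup
write_plugin_config({
    "Keyword": {"keyword": {"你好": "hello"}},
    "LinkAI": {"group_app_map": {}},
})
print(pconf("keyword"))   # found, despite the original "Keyword" spelling
print(pconf("LINKAI"))    # found
print(pconf("missing"))   # None
```

Normalising once at write time keeps every read a plain dict lookup, which matters because `pconf` is called on each plugin trigger.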
```diff
@@ -1,39 +0,0 @@
-FROM python:3.10-alpine
-
-LABEL maintainer="foo@bar.com"
-ARG TZ='Asia/Shanghai'
-
-ARG CHATGPT_ON_WECHAT_VER
-
-ENV BUILD_PREFIX=/app
-
-RUN apk add --no-cache \
-    bash \
-    curl \
-    wget \
-    && export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
-    grep '"tag_name":' | \
-    sed -E 's/.*"([^"]+)".*/\1/'`} \
-    && wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
-    https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
-    && tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
-    && mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
-    && rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
-    && cd ${BUILD_PREFIX} \
-    && cp config-template.json ${BUILD_PREFIX}/config.json \
-    && /usr/local/bin/python -m pip install --no-cache --upgrade pip \
-    && pip install --no-cache -r requirements.txt --extra-index-url https://alpine-wheels.github.io/index \
-    && pip install --no-cache -r requirements-optional.txt --extra-index-url https://alpine-wheels.github.io/index \
-    && apk del curl wget
-
-WORKDIR ${BUILD_PREFIX}
-
-ADD ./entrypoint.sh /entrypoint.sh
-
-RUN chmod +x /entrypoint.sh \
-    && adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
-    && chown -R noroot:noroot ${BUILD_PREFIX}
-
-USER noroot
-
-ENTRYPOINT ["/entrypoint.sh"]
```
```diff
@@ -1,29 +0,0 @@
-FROM python:3.10-alpine
-
-LABEL maintainer="foo@bar.com"
-ARG TZ='Asia/Shanghai'
-
-ARG CHATGPT_ON_WECHAT_VER
-
-ENV BUILD_PREFIX=/app
-
-ADD . ${BUILD_PREFIX}
-
-RUN apk add --no-cache bash ffmpeg espeak \
-    && cd ${BUILD_PREFIX} \
-    && cp config-template.json config.json \
-    && /usr/local/bin/python -m pip install --no-cache --upgrade pip \
-    && pip install --no-cache -r requirements.txt --extra-index-url https://alpine-wheels.github.io/index \
-    && pip install --no-cache -r requirements-optional.txt --extra-index-url https://alpine-wheels.github.io/index
-
-WORKDIR ${BUILD_PREFIX}
-
-ADD docker/entrypoint.sh /entrypoint.sh
-
-RUN chmod +x /entrypoint.sh \
-    && adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
-    && chown -R noroot:noroot ${BUILD_PREFIX}
-
-USER noroot
-
-ENTRYPOINT ["/entrypoint.sh"]
```
```diff
@@ -1,41 +0,0 @@
-FROM python:3.10
-
-LABEL maintainer="foo@bar.com"
-ARG TZ='Asia/Shanghai'
-
-ARG CHATGPT_ON_WECHAT_VER
-
-ENV BUILD_PREFIX=/app
-
-RUN apt-get update \
-    && apt-get install -y --no-install-recommends \
-    wget \
-    curl \
-    && rm -rf /var/lib/apt/lists/* \
-    && export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
-    grep '"tag_name":' | \
-    sed -E 's/.*"([^"]+)".*/\1/'`} \
-    && wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
-    https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
-    && tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
-    && mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
-    && rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
-    && cd ${BUILD_PREFIX} \
-    && cp config-template.json ${BUILD_PREFIX}/config.json \
-    && /usr/local/bin/python -m pip install --no-cache --upgrade pip \
-    && pip install --no-cache -r requirements.txt \
-    && pip install --no-cache -r requirements-optional.txt
-
-WORKDIR ${BUILD_PREFIX}
-
-ADD ./entrypoint.sh /entrypoint.sh
-
-RUN chmod +x /entrypoint.sh \
-    && mkdir -p /home/noroot \
-    && groupadd -r noroot \
-    && useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \
-    && chown -R noroot:noroot /home/noroot ${BUILD_PREFIX} /usr/local/lib
-
-USER noroot
-
-ENTRYPOINT ["/entrypoint.sh"]
```
```diff
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-# fetch latest release tag
-CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
-    grep '"tag_name":' | \
-    sed -E 's/.*"([^"]+)".*/\1/'`
-
-# build image
-docker build -f Dockerfile.alpine \
-    --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
-    -t zhayujie/chatgpt-on-wechat .
-
-# tag image
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:alpine
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
```
```diff
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-# fetch latest release tag
-CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
-    grep '"tag_name":' | \
-    sed -E 's/.*"([^"]+)".*/\1/'`
-
-# build image
-docker build -f Dockerfile.debian \
-    --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
-    -t zhayujie/chatgpt-on-wechat .
-
-# tag image
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:debian
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
```
```diff
@@ -1,23 +0,0 @@
-FROM zhayujie/chatgpt-on-wechat:alpine
-
-LABEL maintainer="foo@bar.com"
-ARG TZ='Asia/Shanghai'
-
-USER root
-
-RUN apk add --no-cache \
-    ffmpeg \
-    espeak \
-    && pip install --no-cache \
-    baidu-aip \
-    chardet \
-    SpeechRecognition
-
-# replace entrypoint
-ADD ./entrypoint.sh /entrypoint.sh
-
-RUN chmod +x /entrypoint.sh
-
-USER noroot
-
-ENTRYPOINT ["/entrypoint.sh"]
```
```diff
@@ -1,24 +0,0 @@
-FROM zhayujie/chatgpt-on-wechat:debian
-
-LABEL maintainer="foo@bar.com"
-ARG TZ='Asia/Shanghai'
-
-USER root
-
-RUN apt-get update \
-    && apt-get install -y --no-install-recommends \
-    ffmpeg \
-    espeak \
-    && pip install --no-cache \
-    baidu-aip \
-    chardet \
-    SpeechRecognition
-
-# replace entrypoint
-ADD ./entrypoint.sh /entrypoint.sh
-
-RUN chmod +x /entrypoint.sh
-
-USER noroot
-
-ENTRYPOINT ["/entrypoint.sh"]
```
```diff
@@ -1,24 +0,0 @@
-version: '2.0'
-services:
-  chatgpt-on-wechat:
-    build:
-      context: ./
-      dockerfile: Dockerfile.alpine
-    image: zhayujie/chatgpt-on-wechat-voice-reply
-    container_name: chatgpt-on-wechat-voice-reply
-    environment:
-      OPEN_AI_API_KEY: 'YOUR API KEY'
-      OPEN_AI_PROXY: ''
-      SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
-      SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
-      GROUP_CHAT_PREFIX: '["@bot"]'
-      GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
-      IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
-      CONVERSATION_MAX_TOKENS: 1000
-      SPEECH_RECOGNITION: 'true'
-      CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
-      EXPIRES_IN_SECONDS: 3600
-      VOICE_REPLY_VOICE: 'true'
-      BAIDU_APP_ID: 'YOUR BAIDU APP ID'
-      BAIDU_API_KEY: 'YOUR BAIDU API KEY'
-      BAIDU_SECRET_KEY: 'YOUR BAIDU SERVICE KEY'
```
@@ -1,117 +0,0 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# build prefix
|
||||
CHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-""}
|
||||
# path to config.json
|
||||
CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""}
|
||||
# execution command line
|
||||
CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""}
|
||||
|
||||
OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""}
|
||||
OPEN_AI_PROXY=${OPEN_AI_PROXY:-""}
|
||||
SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""}
|
||||
SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""}
|
||||
GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""}
|
||||
GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""}
|
||||
IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""}
|
||||
CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""}
|
||||
SPEECH_RECOGNITION=${SPEECH_RECOGNITION:-""}
|
||||
CHARACTER_DESC=${CHARACTER_DESC:-""}
|
||||
EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-""}
|
||||
|
||||
VOICE_REPLY_VOICE=${VOICE_REPLY_VOICE:-""}
|
||||
BAIDU_APP_ID=${BAIDU_APP_ID:-""}
|
||||
BAIDU_API_KEY=${BAIDU_API_KEY:-""}
|
||||
BAIDU_SECRET_KEY=${BAIDU_SECRET_KEY:-""}
|
||||
|
||||
# CHATGPT_ON_WECHAT_PREFIX is empty, use /app
|
||||
if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then
|
||||
CHATGPT_ON_WECHAT_PREFIX=/app
|
||||
fi
|
||||
|
||||
# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json'
|
||||
if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then
|
||||
CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json
|
||||
fi
|
||||
|
||||
# CHATGPT_ON_WECHAT_EXEC is empty, use ‘python app.py’
|
||||
if [ "$CHATGPT_ON_WECHAT_EXEC" == "" ] ; then
|
||||
CHATGPT_ON_WECHAT_EXEC="python app.py"
|
||||
fi
|
||||
|
||||
# modify content in config.json
|
||||
if [ "$OPEN_AI_API_KEY" != "" ] ; then
|
||||
sed -i "s/\"open_ai_api_key\".*,$/\"open_ai_api_key\": \"$OPEN_AI_API_KEY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
|
||||
else
|
||||
echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m"
fi

# use http_proxy as default
if [ "$HTTP_PROXY" != "" ] ; then
    sed -i "s/\"proxy\".*,$/\"proxy\": \"$HTTP_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$OPEN_AI_PROXY" != "" ] ; then
    sed -i "s/\"proxy\".*,$/\"proxy\": \"$OPEN_AI_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then
    sed -i "s/\"single_chat_prefix\".*,$/\"single_chat_prefix\": $SINGLE_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then
    sed -i "s/\"single_chat_reply_prefix\".*,$/\"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$GROUP_CHAT_PREFIX" != "" ] ; then
    sed -i "s/\"group_chat_prefix\".*,$/\"group_chat_prefix\": $GROUP_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then
    sed -i "s/\"group_name_white_list\".*,$/\"group_name_white_list\": $GROUP_NAME_WHITE_LIST,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then
    sed -i "s/\"image_create_prefix\".*,$/\"image_create_prefix\": $IMAGE_CREATE_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then
    sed -i "s/\"conversation_max_tokens\".*,$/\"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SPEECH_RECOGNITION" != "" ] ; then
    sed -i "s/\"speech_recognition\".*,$/\"speech_recognition\": $SPEECH_RECOGNITION,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$CHARACTER_DESC" != "" ] ; then
    sed -i "s/\"character_desc\".*,$/\"character_desc\": \"$CHARACTER_DESC\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$EXPIRES_IN_SECONDS" != "" ] ; then
    sed -i "s/\"expires_in_seconds\".*$/\"expires_in_seconds\": $EXPIRES_IN_SECONDS/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

# append
if [ "$BAIDU_SECRET_KEY" != "" ] ; then
    sed -i "1a \ \ \"baidu_secret_key\": \"$BAIDU_SECRET_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$BAIDU_API_KEY" != "" ] ; then
    sed -i "1a \ \ \"baidu_api_key\": \"$BAIDU_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$BAIDU_APP_ID" != "" ] ; then
    sed -i "1a \ \ \"baidu_app_id\": \"$BAIDU_APP_ID\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$VOICE_REPLY_VOICE" != "" ] ; then
    sed -i "1a \ \ \"voice_reply_voice\": $VOICE_REPLY_VOICE," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

# go to prefix dir
cd $CHATGPT_ON_WECHAT_PREFIX
# execute
$CHATGPT_ON_WECHAT_EXEC
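The sed commands above implement a simple pattern: each environment variable, when non-empty, overwrites the matching key in `config.json`. A minimal Python sketch of the same override logic (the function name `apply_env_overrides` and the trimmed `ENV_TO_KEY` table are illustrative, not part of the project):

```python
# Hypothetical helper mirroring the entrypoint's sed commands: each
# environment variable, when set to a non-empty value, overwrites the
# corresponding config.json key. Both proxy variables target the same
# key, and the later one in the table wins, matching the script order.
ENV_TO_KEY = {
    "HTTP_PROXY": "proxy",
    "OPEN_AI_PROXY": "proxy",
    "CONVERSATION_MAX_TOKENS": "conversation_max_tokens",
    "CHARACTER_DESC": "character_desc",
}

def apply_env_overrides(config: dict, env: dict) -> dict:
    """Return a copy of config with non-empty env vars applied."""
    patched = dict(config)
    for var, key in ENV_TO_KEY.items():
        value = env.get(var, "")
        if value != "":
            patched[key] = value
    return patched
```

Doing this in Python avoids sed's fragility when a config value contains characters that are special inside the substitution pattern.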
@@ -1,20 +1,24 @@
version: '2.0'
services:
  chatgpt-on-wechat:
    build:
      context: ./
      dockerfile: Dockerfile.alpine
    image: zhayujie/chatgpt-on-wechat
    container_name: sample-chatgpt-on-wechat
    container_name: chatgpt-on-wechat
    security_opt:
      - seccomp:unconfined
    environment:
      OPEN_AI_API_KEY: 'YOUR API KEY'
      OPEN_AI_PROXY: ''
      MODEL: 'gpt-3.5-turbo'
      PROXY: ''
      SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
      SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
      GROUP_CHAT_PREFIX: '["@bot"]'
      GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
      IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
      CONVERSATION_MAX_TOKENS: 1000
      SPEECH_RECOGNITION: "False"
      SPEECH_RECOGNITION: 'False'
      CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
      EXPIRES_IN_SECONDS: 3600
      EXPIRES_IN_SECONDS: 3600
      USE_GLOBAL_PLUGIN_CONFIG: 'True'
      USE_LINKAI: 'False'
      LINKAI_API_KEY: ''
      LINKAI_APP_CODE: ''
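Several of the compose variables above (`SINGLE_CHAT_PREFIX`, `GROUP_NAME_WHITE_LIST`, `IMAGE_CREATE_PREFIX`, ...) carry a JSON list inside an environment-variable string. A minimal sketch of decoding such a value on the Python side (the `raw` value is an example, matching the compose file):

```python
import json

# The list-valued variables are passed as JSON text inside an environment
# variable; json.loads turns the string back into a Python list.
raw = '["bot", "@bot"]'  # example value, as it would appear in the environment
prefixes = json.loads(raw)
```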
@@ -1,16 +0,0 @@
OPEN_AI_API_KEY=YOUR API KEY
OPEN_AI_PROXY=
SINGLE_CHAT_PREFIX=["bot", "@bot"]
SINGLE_CHAT_REPLY_PREFIX="[bot] "
GROUP_CHAT_PREFIX=["@bot"]
GROUP_NAME_WHITE_LIST=["ChatGPT测试群", "ChatGPT测试群2"]
IMAGE_CREATE_PREFIX=["画", "看", "找"]
CONVERSATION_MAX_TOKENS=1000
SPEECH_RECOGNITION=false
CHARACTER_DESC=你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。
EXPIRES_IN_SECONDS=3600

# Optional
#CHATGPT_ON_WECHAT_PREFIX=/app
#CHATGPT_ON_WECHAT_CONFIG_PATH=/app/config.json
#CHATGPT_ON_WECHAT_EXEC=python app.py
@@ -1,26 +0,0 @@
IMG:=`cat Name`
MOUNT:=
PORT_MAP:=
DOTENV:=.env
CONTAINER_NAME:=sample-chatgpt-on-wechat

echo:
	echo $(IMG)

run_d:
	docker rm $(CONTAINER_NAME) || echo
	docker run -dt --name $(CONTAINER_NAME) $(PORT_MAP) \
		--env-file=$(DOTENV) \
		$(MOUNT) $(IMG)

run_i:
	docker rm $(CONTAINER_NAME) || echo
	docker run -it --name $(CONTAINER_NAME) $(PORT_MAP) \
		--env-file=$(DOTENV) \
		$(MOUNT) $(IMG)

stop:
	docker stop $(CONTAINER_NAME)

rm: stop
	docker rm $(CONTAINER_NAME)
@@ -1 +0,0 @@
zhayujie/chatgpt-on-wechat
@@ -24,16 +24,17 @@ class Banwords(Plugin):
    def __init__(self):
        super().__init__()
        try:
            # load config
            conf = super().load_config()
            curdir = os.path.dirname(__file__)
            config_path = os.path.join(curdir, "config.json")
            conf = None
            if not os.path.exists(config_path):
                conf = {"action": "ignore"}
                with open(config_path, "w") as f:
                    json.dump(conf, f, indent=4)
            else:
                with open(config_path, "r") as f:
                    conf = json.load(f)
            if not conf:
                # if no config exists, write the default config
                config_path = os.path.join(curdir, "config.json")
                if not os.path.exists(config_path):
                    conf = {"action": "ignore"}
                    with open(config_path, "w") as f:
                        json.dump(conf, f, indent=4)

            self.searchr = WordsSearch()
            self.action = conf["action"]
            banwords_path = os.path.join(curdir, "banwords.txt")
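The pattern the banwords diff moves toward (read `config.json` if present, otherwise write a default and use it) can be sketched as a standalone helper; `load_or_create_config` is an illustrative name, not a function in the project:

```python
import json
import os

def load_or_create_config(config_path: str, default: dict) -> dict:
    """Hypothetical sketch of the load-or-create pattern above: read
    config.json if it exists, otherwise persist the default and use it."""
    if not os.path.exists(config_path):
        with open(config_path, "w") as f:
            json.dump(default, f, indent=4)
        return dict(default)
    with open(config_path, "r") as f:
        return json.load(f)
```

A second call with a different default returns the persisted config, not the new default, which is the behavior the plugin relies on.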
@@ -29,14 +29,9 @@ class BDunit(Plugin):
    def __init__(self):
        super().__init__()
        try:
            curdir = os.path.dirname(__file__)
            config_path = os.path.join(curdir, "config.json")
            conf = None
            if not os.path.exists(config_path):
            conf = super().load_config()
            if not conf:
                raise Exception("config.json not found")
            else:
                with open(config_path, "r") as f:
                    conf = json.load(f)
            self.service_id = conf["service_id"]
            self.api_key = conf["api_key"]
            self.secret_key = conf["secret_key"]
@@ -0,0 +1,38 @@
{
  "godcmd": {
    "password": "",
    "admin_users": []
  },
  "banwords": {
    "action": "replace",
    "reply_filter": true,
    "reply_action": "ignore"
  },
  "tool": {
    "tools": [
      "python",
      "url-get",
      "terminal",
      "meteo-weather"
    ],
    "kwargs": {
      "top_k_results": 2,
      "no_default": false,
      "model_name": "gpt-3.5-turbo"
    }
  },
  "linkai": {
    "group_app_map": {
      "测试群1": "default",
      "测试群2": "Kv2fXJcH"
    },
    "midjourney": {
      "enabled": true,
      "auto_translate": true,
      "img_proxy": true,
      "max_tasks": 3,
      "max_tasks_per_user": 1,
      "use_image_create_prefix": true
    }
  }
}
@@ -13,7 +13,7 @@ from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from common import const
from common.log import logger
from config import conf, load_config
from config import conf, load_config, global_config
from plugins import *

# define the command set
@@ -178,16 +178,13 @@ class Godcmd(Plugin):
    def __init__(self):
        super().__init__()

        curdir = os.path.dirname(__file__)
        config_path = os.path.join(curdir, "config.json")
        gconf = None
        if not os.path.exists(config_path):
            gconf = {"password": "", "admin_users": []}
            with open(config_path, "w") as f:
                json.dump(gconf, f, indent=4)
        else:
            with open(config_path, "r") as f:
                gconf = json.load(f)
        config_path = os.path.join(os.path.dirname(__file__), "config.json")
        gconf = super().load_config()
        if not gconf:
            if not os.path.exists(config_path):
                gconf = {"password": "", "admin_users": []}
                with open(config_path, "w") as f:
                    json.dump(gconf, f, indent=4)
        if gconf["password"] == "":
            self.temp_password = "".join(random.sample(string.digits, 4))
            logger.info("[Godcmd] 因未设置口令,本次的临时口令为%s。" % self.temp_password)
@@ -429,9 +426,11 @@ class Godcmd(Plugin):
        password = args[0]
        if password == self.password:
            self.admin_users.append(userid)
            global_config["admin_users"].append(userid)
            return True, "认证成功"
        elif password == self.temp_password:
            self.admin_users.append(userid)
            global_config["admin_users"].append(userid)
            return True, "认证成功,请尽快设置口令"
        else:
            return False, "认证失败"
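The temporary-password fallback above is a one-liner worth isolating: when no password is configured, godcmd generates a 4-digit one-off password for the current run. A minimal sketch (`make_temp_password` is an illustrative name):

```python
import random
import string

def make_temp_password() -> str:
    """Sketch of godcmd's fallback: a 4-digit temporary password.
    random.sample draws without replacement, so the four digits are
    always distinct."""
    return "".join(random.sample(string.digits, 4))
```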
@@ -34,14 +34,14 @@ class Hello(Plugin):
            e_context["context"].type = ContextType.TEXT
            msg: ChatMessage = e_context["context"]["msg"]
            e_context["context"].content = f'请你随机使用一种风格说一句问候语来欢迎新用户"{msg.actual_user_nickname}"加入群聊。'
            e_context.action = EventAction.CONTINUE  # continue the event, hand over to the next plugin or default logic
            e_context.action = EventAction.BREAK  # end the event, fall through to default handling
            return

        if e_context["context"].type == ContextType.PATPAT:
            e_context["context"].type = ContextType.TEXT
            msg: ChatMessage = e_context["context"]["msg"]
            e_context["context"].content = f"请你随机使用一种风格介绍你自己,并告诉用户输入#help可以查看帮助信息。"
            e_context.action = EventAction.CONTINUE  # continue the event, hand over to the next plugin or default logic
            e_context.action = EventAction.BREAK  # end the event, fall through to default handling
            return

        content = e_context["context"].content
@@ -54,9 +54,18 @@ class Keyword(Plugin):
            logger.debug(f"[keyword] 匹配到关键字【{content}】")
            reply_text = self.keyword[content]

            reply = Reply()
            reply.type = ReplyType.TEXT
            reply.content = reply_text
            # determine the type of the matched reply
            if (reply_text.startswith("http://") or reply_text.startswith("https://")) and any(reply_text.endswith(ext) for ext in [".jpg", ".jpeg", ".png", ".gif", ".webp"]):
                # a URL starting with http:// or https:// and ending in .jpg/.jpeg/.png/.gif/.webp is treated as an image URL
                reply = Reply()
                reply.type = ReplyType.IMAGE_URL
                reply.content = reply_text
            else:
                # otherwise treat it as plain text
                reply = Reply()
                reply.type = ReplyType.TEXT
                reply.content = reply_text

            e_context["reply"] = reply
            e_context.action = EventAction.BREAK_PASS  # end the event and skip the default context handling
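The classification branch above can be sketched in isolation: an http(s) URL ending in a known image extension becomes an image-URL reply, anything else stays text. The string return values here stand in for the `ReplyType` enum members:

```python
IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".webp")

def classify_reply(reply_text: str) -> str:
    """Sketch of the keyword plugin's branch: decide whether a configured
    reply is an image URL or plain text. str.endswith accepts a tuple,
    which replaces the any(...) loop in the original."""
    is_url = reply_text.startswith("http://") or reply_text.startswith("https://")
    if is_url and reply_text.endswith(IMAGE_EXTS):
        return "IMAGE_URL"
    return "TEXT"
```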
@@ -0,0 +1,58 @@
## Plugin Description

Enhances the bot with the knowledge-base, Midjourney drawing, and other capabilities provided by LinkAI. Console: https://chat.link-ai.tech/console

## Plugin Configuration

Copy `config.json.template` under `plugins/linkai` to `config.json`. For `docker` deployments, configuration can be done by mapping plugins/config.json into the container. The options are:

```bash
{
    "group_app_map": {            # mapping between group chats and app codes
        "测试群1": "default",      # the group named "测试群1" uses the app whose app_code is default
        "测试群2": "Kv2fXJcH"
    },
    "midjourney": {
        "enabled": true,             # midjourney drawing switch
        "auto_translate": true,      # whether to automatically translate prompts into English
        "img_proxy": true,           # whether to proxy generated images; on a server outside China, setting this to false gives faster generation
        "max_tasks": 3,              # total number of tasks that may run concurrently
        "max_tasks_per_user": 1,     # number of concurrent tasks allowed per user
        "use_image_create_prefix": true  # whether the global drawing trigger words also apply, i.e. the image_create_prefix setting in config.json
    }
}

```
Note: the actual `config.json` must be valid JSON and must not contain the '#' comments shown above.

## Plugin Usage

> The knowledge-base management features require `linkai` chat to be enabled first, which depends on the `use_linkai` and `linkai_api_key` settings in the global `config.json`; the midjourney drawing feature only needs the `linkai_api_key` setting.

After completing the configuration, run the project and the plugin starts automatically. Enter `#help linkai` to see the plugin's features.

### 1. Knowledge-base management

Lets different group chats use different applications. The mapping can be fixed in the `group_app_map` config above, or switched quickly inside a group via a command.

Switching applications requires authenticating with the admin (`godcmd`) plugin first, then entering:

`$linkai app {app_code}`

For example, entering `$linkai app Kv2fXJcH` binds the current group chat to the application whose app_code is Kv2fXJcH.

### 2. Midjourney drawing

Command format:

```
- generate an image: $mj prompt1, prompt2..
- upscale an image: $mju image_ID image_index
```

For example:

```
"$mj a little cat, white --ar 9:16"
"$mju 1105592717188272288 2"
```
@@ -0,0 +1 @@
from .linkai import *
@@ -0,0 +1,14 @@
{
  "group_app_map": {
    "测试群1": "default",
    "测试群2": "Kv2fXJcH"
  },
  "midjourney": {
    "enabled": true,
    "auto_translate": true,
    "img_proxy": true,
    "max_tasks": 3,
    "max_tasks_per_user": 1,
    "use_image_create_prefix": true
  }
}
@@ -0,0 +1,137 @@
import asyncio
import json
import threading
from concurrent.futures import ThreadPoolExecutor

import plugins
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from channel.chat_message import ChatMessage
from common.log import logger
from config import conf, global_config
from plugins import *
from .midjourney import MJBot, TaskType

# task thread pool
task_thread_pool = ThreadPoolExecutor(max_workers=4)


@plugins.register(
    name="linkai",
    desc="A plugin that supports knowledge base and midjourney drawing.",
    version="0.1.0",
    author="https://link-ai.tech",
)
class LinkAI(Plugin):
    def __init__(self):
        super().__init__()
        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context
        self.config = super().load_config()
        if self.config:
            self.mj_bot = MJBot(self.config.get("midjourney"))
        logger.info("[LinkAI] inited")

    def on_handle_context(self, e_context: EventContext):
        """
        Message handling logic
        :param e_context: message context
        """
        if not self.config:
            return

        context = e_context['context']
        if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE]:
            # filter content no need solve
            return

        mj_type = self.mj_bot.judge_mj_task_type(e_context)
        if mj_type:
            # handle MJ drawing tasks
            self.mj_bot.process_mj_task(mj_type, e_context)
            return

        if context.content.startswith(f"{_get_trigger_prefix()}linkai"):
            # app management commands
            self._process_admin_cmd(e_context)
            return

        if self._is_chat_task(e_context):
            # handle text chat tasks
            self._process_chat_task(e_context)

    # plugin management commands
    def _process_admin_cmd(self, e_context: EventContext):
        context = e_context['context']
        cmd = context.content.split()
        if len(cmd) == 1 or (len(cmd) == 2 and cmd[1] == "help"):
            _set_reply_text(self.get_help_text(verbose=True), e_context, level=ReplyType.INFO)
            return
        if len(cmd) == 3 and cmd[1] == "app":
            if not context.kwargs.get("isgroup"):
                _set_reply_text("该指令需在群聊中使用", e_context, level=ReplyType.ERROR)
                return
            if context.kwargs.get("msg").actual_user_id not in global_config["admin_users"]:
                _set_reply_text("需要管理员权限执行", e_context, level=ReplyType.ERROR)
                return
            app_code = cmd[2]
            group_name = context.kwargs.get("msg").from_user_nickname
            group_mapping = self.config.get("group_app_map")
            if group_mapping:
                group_mapping[group_name] = app_code
            else:
                self.config["group_app_map"] = {group_name: app_code}
            # save the plugin config
            super().save_config(self.config)
            _set_reply_text(f"应用设置成功: {app_code}", e_context, level=ReplyType.INFO)
        else:
            _set_reply_text(f"指令错误,请输入{_get_trigger_prefix()}linkai help 获取帮助", e_context, level=ReplyType.INFO)
        return

    # LinkAI chat task handling
    def _is_chat_task(self, e_context: EventContext):
        context = e_context['context']
        # group chat app management
        return self.config.get("group_app_map") and context.kwargs.get("isgroup")

    def _process_chat_task(self, e_context: EventContext):
        """
        Handle a LinkAI chat task
        :param e_context: chat context
        """
        context = e_context['context']
        # group chat app management
        group_name = context.kwargs.get("msg").from_user_nickname
        app_code = self._fetch_group_app_code(group_name)
        if app_code:
            context.kwargs['app_code'] = app_code

    def _fetch_group_app_code(self, group_name: str) -> str:
        """
        Get the app code bound to a group chat name
        :param group_name: group chat name
        :return: app code
        """
        group_mapping = self.config.get("group_app_map")
        if group_mapping:
            app_code = group_mapping.get(group_name) or group_mapping.get("ALL_GROUP")
            return app_code

    def get_help_text(self, verbose=False, **kwargs):
        trigger_prefix = _get_trigger_prefix()
        help_text = "用于集成 LinkAI 提供的知识库、Midjourney绘画等能力。\n\n"
        if not verbose:
            return help_text
        help_text += f'📖 知识库\n - 群聊中指定应用: {trigger_prefix}linkai app 应用编码\n\n例如: \n"$linkai app Kv2fXJcH"\n\n'
        help_text += f"🎨 绘画\n - 生成: {trigger_prefix}mj 描述词1, 描述词2.. \n - 放大: {trigger_prefix}mju 图片ID 图片序号\n\n例如:\n\"{trigger_prefix}mj a little cat, white --ar 9:16\"\n\"{trigger_prefix}mju 1105592717188272288 2\""
        return help_text


# module-level helpers
def _set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
    reply = Reply(level, content)
    e_context["reply"] = reply
    e_context.action = EventAction.BREAK_PASS


def _get_trigger_prefix():
    return conf().get("plugin_trigger_prefix", "$")
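The command dispatch in `_process_admin_cmd` is driven purely by the token count and the second token. A standalone sketch of that parsing (the function name `parse_linkai_cmd` and the tuple return shape are illustrative):

```python
def parse_linkai_cmd(content: str):
    """Hypothetical parser mirroring _process_admin_cmd above: returns
    ("help", None) for the bare or help form, ("app", app_code) for the
    three-token app form, and ("error", None) otherwise."""
    cmd = content.split()
    if len(cmd) == 1 or (len(cmd) == 2 and cmd[1] == "help"):
        return ("help", None)
    if len(cmd) == 3 and cmd[1] == "app":
        return ("app", cmd[2])
    return ("error", None)
```

Note that in the real plugin the `("app", ...)` branch additionally requires a group-chat context and an authenticated admin user before the mapping is saved.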
@@ -0,0 +1,391 @@
from enum import Enum
from config import conf
from common.log import logger
import requests
import threading
import time
from bridge.reply import Reply, ReplyType
import aiohttp
import asyncio
from bridge.context import ContextType
from plugins import EventContext, EventAction

INVALID_REQUEST = 410

class TaskType(Enum):
    GENERATE = "generate"
    UPSCALE = "upscale"
    VARIATION = "variation"
    RESET = "reset"


class Status(Enum):
    PENDING = "pending"
    FINISHED = "finished"
    EXPIRED = "expired"
    ABORTED = "aborted"

    def __str__(self):
        return self.name


class TaskMode(Enum):
    FAST = "fast"
    RELAX = "relax"


class MJTask:
    def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=None, expires: int = 60 * 30,
                 status=Status.PENDING):
        self.id = id
        self.user_id = user_id
        self.task_type = task_type
        self.raw_prompt = raw_prompt
        self.send_func = None  # send_func(img_url)
        self.expiry_time = time.time() + expires
        self.status = status
        self.img_url = None  # url
        self.img_id = None

    def __str__(self):
        return f"id={self.id}, user_id={self.user_id}, task_type={self.task_type}, status={self.status}, img_id={self.img_id}"
# midjourney bot
class MJBot:
    def __init__(self, config):
        self.base_url = "https://api.link-ai.chat/v1/img/midjourney"

        self.headers = {"Authorization": "Bearer " + conf().get("linkai_api_key")}
        self.config = config
        self.tasks = {}
        self.temp_dict = {}
        self.tasks_lock = threading.Lock()
        self.event_loop = asyncio.new_event_loop()

    def judge_mj_task_type(self, e_context: EventContext):
        """
        Determine the type of an MJ task
        :param e_context: context
        :return: task type enum
        """
        if not self.config or not self.config.get("enabled"):
            return None
        trigger_prefix = conf().get("plugin_trigger_prefix", "$")
        context = e_context['context']
        if context.type == ContextType.TEXT:
            cmd_list = context.content.split(maxsplit=1)
            if cmd_list[0].lower() == f"{trigger_prefix}mj":
                return TaskType.GENERATE
            elif cmd_list[0].lower() == f"{trigger_prefix}mju":
                return TaskType.UPSCALE
        elif context.type == ContextType.IMAGE_CREATE and self.config.get("use_image_create_prefix"):
            return TaskType.GENERATE

    def process_mj_task(self, mj_type: TaskType, e_context: EventContext):
        """
        Handle an MJ task
        :param mj_type: MJ task type
        :param e_context: chat context
        """
        context = e_context['context']
        session_id = context["session_id"]
        cmd = context.content.split(maxsplit=1)
        if len(cmd) == 1 and context.type == ContextType.TEXT:
            self._set_reply_text(self.get_help_text(verbose=True), e_context, level=ReplyType.INFO)
            return

        if not self._check_rate_limit(session_id, e_context):
            logger.warn("[MJ] midjourney task exceed rate limit")
            return

        if mj_type == TaskType.GENERATE:
            if context.type == ContextType.IMAGE_CREATE:
                raw_prompt = context.content
            else:
                # image generation
                raw_prompt = cmd[1]
            reply = self.generate(raw_prompt, session_id, e_context)
            e_context['reply'] = reply
            e_context.action = EventAction.BREAK_PASS
            return

        elif mj_type == TaskType.UPSCALE:
            # image upscaling
            clist = cmd[1].split()
            if len(clist) < 2:
                self._set_reply_text(f"{cmd[0]} 命令缺少参数", e_context)
                return
            img_id = clist[0]
            index = int(clist[1])
            if index < 1 or index > 4:
                self._set_reply_text(f"图片序号 {index} 错误,应在 1 至 4 之间", e_context)
                return
            key = f"{TaskType.UPSCALE.name}_{img_id}_{index}"
            if self.temp_dict.get(key):
                self._set_reply_text(f"第 {index} 张图片已经放大过了", e_context)
                return
            # perform the upscale
            reply = self.upscale(session_id, img_id, index, e_context)
            e_context['reply'] = reply
            e_context.action = EventAction.BREAK_PASS
            return

        else:
            self._set_reply_text(f"暂不支持该命令", e_context)
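The text branch of `judge_mj_task_type` keys off the first whitespace-separated token of the message. A standalone sketch of that dispatch, with string labels standing in for the `TaskType` enum:

```python
def judge_type(content: str, trigger_prefix: str = "$"):
    """Sketch of the text branch above: split off the first token and
    match it against the mj / mju commands, case-insensitively."""
    cmd = content.split(maxsplit=1)
    head = cmd[0].lower()
    if head == f"{trigger_prefix}mj":
        return "GENERATE"
    if head == f"{trigger_prefix}mju":
        return "UPSCALE"
    return None
```

Using `split(maxsplit=1)` keeps the rest of the message (the prompt, commas and all) intact as a single second token.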
    def generate(self, prompt: str, user_id: str, e_context: EventContext) -> Reply:
        """
        Image generation
        :param prompt: prompt words
        :param user_id: user id
        :param e_context: chat context
        :return: task ID
        """
        logger.info(f"[MJ] image generate, prompt={prompt}")
        mode = self._fetch_mode(prompt)
        body = {"prompt": prompt, "mode": mode, "auto_translate": self.config.get("auto_translate")}
        if not self.config.get("img_proxy"):
            body["img_proxy"] = False
        res = requests.post(url=self.base_url + "/generate", json=body, headers=self.headers, timeout=(5, 40))
        if res.status_code == 200:
            res = res.json()
            logger.debug(f"[MJ] image generate, res={res}")
            if res.get("code") == 200:
                task_id = res.get("data").get("task_id")
                real_prompt = res.get("data").get("real_prompt")
                if mode == TaskMode.RELAX.value:
                    time_str = "1~10分钟"
                else:
                    time_str = "1分钟"
                content = f"🚀您的作品将在{time_str}左右完成,请耐心等待\n- - - - - - - - -\n"
                if real_prompt:
                    content += f"初始prompt: {prompt}\n转换后prompt: {real_prompt}"
                else:
                    content += f"prompt: {prompt}"
                reply = Reply(ReplyType.INFO, content)
                task = MJTask(id=task_id, status=Status.PENDING, raw_prompt=prompt, user_id=user_id,
                              task_type=TaskType.GENERATE)
                # put to memory dict
                self.tasks[task.id] = task
                # asyncio.run_coroutine_threadsafe(self.check_task(task, e_context), self.event_loop)
                self._do_check_task(task, e_context)
                return reply
        else:
            res_json = res.json()
            logger.error(f"[MJ] generate error, msg={res_json.get('message')}, status_code={res.status_code}")
            if res.status_code == INVALID_REQUEST:
                reply = Reply(ReplyType.ERROR, "图片生成失败,请检查提示词参数或内容")
            else:
                reply = Reply(ReplyType.ERROR, "图片生成失败,请稍后再试")
            return reply

    def upscale(self, user_id: str, img_id: str, index: int, e_context: EventContext) -> Reply:
        logger.info(f"[MJ] image upscale, img_id={img_id}, index={index}")
        body = {"type": TaskType.UPSCALE.name, "img_id": img_id, "index": index}
        if not self.config.get("img_proxy"):
            body["img_proxy"] = False
        res = requests.post(url=self.base_url + "/operate", json=body, headers=self.headers, timeout=(5, 40))
        logger.debug(res)
        if res.status_code == 200:
            res = res.json()
            if res.get("code") == 200:
                task_id = res.get("data").get("task_id")
                logger.info(f"[MJ] image upscale processing, task_id={task_id}")
                content = f"🔎图片正在放大中,请耐心等待"
                reply = Reply(ReplyType.INFO, content)
                task = MJTask(id=task_id, status=Status.PENDING, user_id=user_id, task_type=TaskType.UPSCALE)
                # put to memory dict
                self.tasks[task.id] = task
                key = f"{TaskType.UPSCALE.name}_{img_id}_{index}"
                self.temp_dict[key] = True
                # asyncio.run_coroutine_threadsafe(self.check_task(task, e_context), self.event_loop)
                self._do_check_task(task, e_context)
                return reply
        else:
            error_msg = ""
            if res.status_code == 461:
                error_msg = "请输入正确的图片ID"
            res_json = res.json()
            logger.error(f"[MJ] upscale error, msg={res_json.get('message')}, status_code={res.status_code}")
            reply = Reply(ReplyType.ERROR, error_msg or "图片生成失败,请稍后再试")
            return reply
    def check_task_sync(self, task: MJTask, e_context: EventContext):
        logger.debug(f"[MJ] start check task status, {task}")
        max_retry_times = 90
        while max_retry_times > 0:
            time.sleep(10)
            url = f"{self.base_url}/tasks/{task.id}"
            try:
                res = requests.get(url, headers=self.headers, timeout=8)
                if res.status_code == 200:
                    res_json = res.json()
                    logger.debug(f"[MJ] task check res sync, task_id={task.id}, status={res.status_code}, "
                                 f"data={res_json.get('data')}, thread={threading.current_thread().name}")
                    if res_json.get("data") and res_json.get("data").get("status") == Status.FINISHED.name:
                        # process success res
                        if self.tasks.get(task.id):
                            self.tasks[task.id].status = Status.FINISHED
                        self._process_success_task(task, res_json.get("data"), e_context)
                        return
                    max_retry_times -= 1
                else:
                    res_json = res.json()
                    logger.warn(f"[MJ] image check error, status_code={res.status_code}, res={res_json}")
                    max_retry_times -= 20
            except Exception as e:
                max_retry_times -= 20
                logger.warn(e)
        logger.warn("[MJ] end from poll")
        if self.tasks.get(task.id):
            self.tasks[task.id].status = Status.EXPIRED

    async def check_task_async(self, task: MJTask, e_context: EventContext):
        try:
            logger.debug(f"[MJ] start check task status, {task}")
            max_retry_times = 90
            while max_retry_times > 0:
                await asyncio.sleep(10)
                async with aiohttp.ClientSession() as session:
                    url = f"{self.base_url}/tasks/{task.id}"
                    try:
                        async with session.get(url, headers=self.headers) as res:
                            if res.status == 200:
                                res_json = await res.json()
                                logger.debug(f"[MJ] task check res, task_id={task.id}, status={res.status}, "
                                             f"data={res_json.get('data')}, thread={threading.current_thread().name}")
                                if res_json.get("data") and res_json.get("data").get("status") == Status.FINISHED.name:
                                    # process success res
                                    if self.tasks.get(task.id):
                                        self.tasks[task.id].status = Status.FINISHED
                                    self._process_success_task(task, res_json.get("data"), e_context)
                                    return
                            else:
                                res_json = await res.json()
                                logger.warn(f"[MJ] image check error, status_code={res.status}, res={res_json}")
                                max_retry_times -= 20
                    except Exception as e:
                        max_retry_times -= 20
                        logger.warn(e)
                max_retry_times -= 1
            logger.warn("[MJ] end from poll")
            if self.tasks.get(task.id):
                self.tasks[task.id].status = Status.EXPIRED
        except Exception as e:
            logger.error(e)
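The polling loops above share a retry-budget idea: start with a budget of 90, charge roughly 1 per ordinary unfinished poll and 20 per error, so persistent errors abort after a handful of attempts while a slow but healthy task can be polled for up to about 15 minutes (90 polls at 10s each). A simplified sketch of that accounting (`poll_budget` and the boolean event stream are my own abstraction, not project code):

```python
def poll_budget(events):
    """Count how many polls run before the budget is exhausted: each
    successful-but-unfinished poll (True) costs 1, each error (False)
    costs 20, approximating the check_task loops above."""
    budget = 90
    polls = 0
    for ok in events:
        if budget <= 0:
            break
        polls += 1
        budget -= 1 if ok else 20
    return polls
```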
    def _do_check_task(self, task: MJTask, e_context: EventContext):
        threading.Thread(target=self.check_task_sync, args=(task, e_context)).start()

    def _process_success_task(self, task: MJTask, res: dict, e_context: EventContext):
        """
        Handle a successfully finished task
        :param task: MJ task
        :param res: request result
        :param e_context: chat context
        """
        # channel send img
        task.status = Status.FINISHED
        task.img_id = res.get("img_id")
        task.img_url = res.get("img_url")
        logger.info(f"[MJ] task success, task_id={task.id}, img_id={task.img_id}, img_url={task.img_url}")

        # send img
        reply = Reply(ReplyType.IMAGE_URL, task.img_url)
        channel = e_context["channel"]
        channel._send(reply, e_context["context"])

        # send info
        trigger_prefix = conf().get("plugin_trigger_prefix", "$")
        text = ""
        if task.task_type == TaskType.GENERATE:
            text = f"🎨绘画完成!\nprompt: {task.raw_prompt}\n- - - - - - - - -\n图片ID: {task.img_id}"
            text += f"\n\n🔎可使用 {trigger_prefix}mju 命令放大指定图片\n"
            text += f"例如:\n{trigger_prefix}mju {task.img_id} 1"
            reply = Reply(ReplyType.INFO, text)
            channel._send(reply, e_context["context"])

        self._print_tasks()
        return
    def _check_rate_limit(self, user_id: str, e_context: EventContext) -> bool:
        """
        Rate limiting for midjourney tasks
        :param user_id: user id
        :param e_context: chat context
        :return: whether a new task may be created; True: allowed, False: rate limited
        """
        tasks = self.find_tasks_by_user_id(user_id)
        task_count = len([t for t in tasks if t.status == Status.PENDING])
        if task_count >= self.config.get("max_tasks_per_user"):
            reply = Reply(ReplyType.INFO, "您的Midjourney作图任务数已达上限,请稍后再试")
            e_context["reply"] = reply
            e_context.action = EventAction.BREAK_PASS
            return False
        task_count = len([t for t in self.tasks.values() if t.status == Status.PENDING])
        if task_count >= self.config.get("max_tasks"):
            reply = Reply(ReplyType.INFO, "Midjourney作图任务数已达上限,请稍后再试")
            e_context["reply"] = reply
            e_context.action = EventAction.BREAK_PASS
            return False
        return True

    def _fetch_mode(self, prompt) -> str:
        mode = self.config.get("mode")
        if "--relax" in prompt or mode == TaskMode.RELAX.value:
            return TaskMode.RELAX.value
        return mode or TaskMode.FAST.value

    def _run_loop(self, loop: asyncio.BaseEventLoop):
        """
        Run the event loop on the thread that polls tasks
        :param loop: event loop
        """
        loop.run_forever()
        loop.stop()

    def _print_tasks(self):
        for id in self.tasks:
            logger.debug(f"[MJ] current task: {self.tasks[id]}")

    def _set_reply_text(self, content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):
        """
        Set the reply text
        :param content: reply content
        :param e_context: chat context
        :param level: reply level
        """
        reply = Reply(level, content)
        e_context["reply"] = reply
        e_context.action = EventAction.BREAK_PASS

    def get_help_text(self, verbose=False, **kwargs):
        trigger_prefix = conf().get("plugin_trigger_prefix", "$")
        help_text = "🎨利用Midjourney进行画图\n\n"
        if not verbose:
            return help_text
        help_text += f" - 生成: {trigger_prefix}mj 描述词1, 描述词2.. \n - 放大: {trigger_prefix}mju 图片ID 图片序号\n\n例如:\n\"{trigger_prefix}mj a little cat, white --ar 9:16\"\n\"{trigger_prefix}mju 1105592717188272288 2\""

        return help_text

    def find_tasks_by_user_id(self, user_id) -> list:
        result = []
        with self.tasks_lock:
            now = time.time()
            for task in self.tasks.values():
                if task.status == Status.PENDING and now > task.expiry_time:
                    task.status = Status.EXPIRED
                    logger.info(f"[MJ] {task} expired")
                if task.user_id == user_id:
                    result.append(task)
        return result


def check_prefix(content, prefix_list):
    if not prefix_list:
        return None
    for prefix in prefix_list:
        if content.startswith(prefix):
            return prefix
    return None
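The rate-limit check above leans on `find_tasks_by_user_id`, which lazily expires overdue PENDING tasks while collecting a user's tasks. The two steps can be sketched together; `FakeTask` and `count_pending` are illustrative stand-ins for `MJTask` and the plugin's counting, with plain strings for statuses:

```python
import time

class FakeTask:
    """Minimal stand-in for MJTask with just the fields the check uses."""
    def __init__(self, user_id, status="pending", expires=1800):
        self.user_id = user_id
        self.status = status
        self.expiry_time = time.time() + expires

def count_pending(tasks, user_id, now=None):
    """Expire overdue PENDING tasks, then count the user's remaining
    pending ones, mirroring find_tasks_by_user_id + _check_rate_limit."""
    now = now or time.time()
    count = 0
    for t in tasks:
        if t.status == "pending" and now > t.expiry_time:
            t.status = "expired"
        if t.user_id == user_id and t.status == "pending":
            count += 1
    return count
```

Comparing this count against `max_tasks_per_user` (and the global pending count against `max_tasks`) gives the throttle decision.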
@@ -1,6 +1,45 @@
|
||||
import os
|
||||
import json
|
||||
from config import pconf, plugin_config, conf
|
||||
from common.log import logger
|
||||
|
||||
|
||||
class Plugin:
|
||||
def __init__(self):
|
||||
self.handlers = {}
|
||||
|
||||
def load_config(self) -> dict:
|
||||
"""
|
||||
加载当前插件配置
|
||||
:return: 插件配置字典
|
||||
"""
|
||||
# 优先获取 plugins/config.json 中的全局配置
|
||||
plugin_conf = pconf(self.name)
|
||||
if not plugin_conf or not conf().get("use_global_plugin_config"):
|
||||
# 全局配置不存在 或者 未开启全局配置开关,则获取插件目录下的配置
|
||||
plugin_config_path = os.path.join(self.path, "config.json")
|
||||
if os.path.exists(plugin_config_path):
|
||||
with open(plugin_config_path, "r") as f:
|
||||
plugin_conf = json.load(f)
|
||||
logger.debug(f"loading plugin config, plugin_name={self.name}, conf={plugin_conf}")
|
||||
return plugin_conf
|
||||
|
||||
def save_config(self, config: dict):
|
||||
try:
|
||||
plugin_config[self.name] = config
|
||||
# 写入全局配置
|
||||
global_config_path = "./plugins/config.json"
|
||||
if os.path.exists(global_config_path):
|
||||
with open(global_config_path, "w", encoding='utf-8') as f:
|
||||
json.dump(plugin_config, f, indent=4, ensure_ascii=False)
|
||||
# 写入插件配置
|
||||
plugin_config_path = os.path.join(self.path, "config.json")
|
||||
if os.path.exists(plugin_config_path):
|
||||
with open(plugin_config_path, "w", encoding='utf-8') as f:
|
||||
json.dump(config, f, indent=4, ensure_ascii=False)
|
||||
|
||||
except Exception as e:
|
||||
logger.warn("save plugin config failed: {}".format(e))
|
||||
|
||||
def get_help_text(self, **kwargs):
|
||||
return "暂无帮助信息"
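The lookup order in load_config — the plugin's entry in plugins/config.json first, then the config.json in the plugin's own directory — can be sketched standalone. The dicts below stand in for the repo's config.pconf()/conf() helpers; only the precedence logic is taken from the code above:

```python
import json
import os
import tempfile

# Stand-ins for the repo's config.pconf() / conf() accessors
GLOBAL_PLUGIN_CONF = {}                          # contents of plugins/config.json, keyed by plugin name
SETTINGS = {"use_global_plugin_config": True}    # global switch

def load_plugin_config(name, plugin_dir):
    # 1) the global entry wins when present and the switch is on
    plugin_conf = GLOBAL_PLUGIN_CONF.get(name)
    if not plugin_conf or not SETTINGS.get("use_global_plugin_config"):
        # 2) otherwise fall back to <plugin_dir>/config.json
        path = os.path.join(plugin_dir, "config.json")
        if os.path.exists(path):
            with open(path, "r") as f:
                plugin_conf = json.load(f)
    return plugin_conf

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"max_tasks": 3}, f)
    local_only = load_plugin_config("midjourney", d)   # no global entry: local file used
    GLOBAL_PLUGIN_CONF["midjourney"] = {"max_tasks": 9}
    global_wins = load_plugin_config("midjourney", d)  # global entry takes precedence
```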
@@ -9,7 +9,7 @@ import sys
from common.log import logger
from common.singleton import singleton
from common.sorted_dict import SortedDict
-from config import conf
+from config import conf, write_plugin_config

from .event import *
@@ -62,6 +62,28 @@ class PluginManager:
        self.save_config()
        return pconf

    @staticmethod
    def _load_all_config():
        """
        Background: plugin configs currently live in a config.json under each plugin's own
        directory, which is inconvenient to mount when running in docker, so this adds a
        unified entry point: plugins/config.json is loaded first, and the config.json files
        in the plugin directories are unaffected.

        Loads every plugin's config from plugins/config.json and writes it into the global
        config in config.py for use inside plugins. A plugin instance can then fetch its
        config via config.pconf(plugin_name).
        """
        all_config_path = "./plugins/config.json"
        try:
            if os.path.exists(all_config_path):
                # read from all plugins config
                with open(all_config_path, "r", encoding="utf-8") as f:
                    all_conf = json.load(f)
                    logger.info(f"load all config from plugins/config.json: {all_conf}")

                # write to global config
                write_plugin_config(all_conf)
        except Exception as e:
            logger.error(e)

    def scan_plugins(self):
        logger.info("Scanning plugins ...")
        plugins_dir = "./plugins"
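A minimal standalone sketch of this unified-config flow, with stand-ins for the repo's write_plugin_config and pconf helpers (the names mirror the code above; the module-level storage dict is an assumption for illustration):

```python
import json
import os
import tempfile

# Stand-in for the global store that config.write_plugin_config / config.pconf wrap
_plugin_config = {}

def write_plugin_config(conf_dict):
    # Merge the unified config into the global store
    _plugin_config.update(conf_dict)

def pconf(plugin_name):
    # Fetch one plugin's entry, or None if it was never configured
    return _plugin_config.get(plugin_name)

def load_all_config(all_config_path):
    # Load plugins/config.json once at startup, if present
    if os.path.exists(all_config_path):
        with open(all_config_path, "r", encoding="utf-8") as f:
            write_plugin_config(json.load(f))

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"tool": {"tools": ["news"]}}, f)
    load_all_config(path)

result = pconf("tool")
```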
@@ -88,7 +110,7 @@ class PluginManager:
                self.loaded[plugin_path] = importlib.import_module(import_path)
                self.current_plugin_path = None
            except Exception as e:
-                logger.exception("Failed to import plugin %s: %s" % (plugin_name, e))
+                logger.warn("Failed to import plugin %s: %s" % (plugin_name, e))
                continue
        pconf = self.pconf
        news = [self.plugins[name] for name in self.plugins]
@@ -123,7 +145,7 @@ class PluginManager:
            try:
                instance = plugincls()
            except Exception as e:
-                logger.exception("Failed to init %s, disabled. %s" % (name, e))
+                logger.warn("Failed to init %s, disabled. %s" % (name, e))
                self.disable_plugin(name)
                failed_plugins.append(name)
                continue
@@ -149,6 +171,8 @@ class PluginManager:
    def load_plugins(self):
        self.load_config()
        self.scan_plugins()
+        # load the full set of plugin configs
+        self._load_all_config()
        pconf = self.pconf
        logger.debug("plugins.json config={}".format(pconf))
        for name, plugin in pconf["plugins"].items():
@@ -11,6 +11,10 @@
    "summary": {
        "url": "https://github.com/lanvent/plugin_summary.git",
        "desc": "A plugin that summarizes chat history"
    },
+    "timetask": {
+        "url": "https://github.com/haikerapples/timetask.git",
+        "desc": "A scheduled-task plugin"
+    }
  }
}
@@ -114,18 +114,19 @@ $tool reset: reset the tool.
---

###### Note 1: tools marked with * need an api-key before use (add the entry under kwargs in config.json); some tools require access to the external internet
-#### [How to apply](https://github.com/goldfishh/chatgpt-tool-hub/blob/master/docs/apply_optional_tool.md)
+## [How to apply for each tool's api](https://github.com/goldfishh/chatgpt-tool-hub/blob/master/docs/apply_optional_tool.md)

## config.json configuration notes
-###### The default tools need no configuration; other tools must be configured manually. An example:
+###### The default tools need no configuration; other tools must be configured manually. Here we add the morning-news and bing-search tools:
```json
{
-  "tools": ["wikipedia", "other tools you want to add"], // names of the extra tools you want to use
+  "tools": ["bing-search", "news", "other tools you want to add"], // names of the extra tools; this adds "bing-search" and "news" (the news tool auto-loads sub-tools such as morning-news and finance-news)
  "kwargs": {
      "debug": true, // configure this when you run into problems and ask for help
      "request_timeout": 120, // openai api timeout
      "no_default": false, // whether to drop the 4 default tools
-      // tools marked with * need an api-key; fill it in here, see `How to apply` above for the api_name
+      "bing_subscription_key": "4871f273a4804743", // tools marked with * need an api-key; this one is for bing-search, see `How to apply for each tool's api` above for the api_name
+      "morning_news_api_key": "5w1kjNh9VQlUc", // the api-key for morning-news
  }
}
```
@@ -10,7 +10,6 @@ from bridge.bridge import Bridge
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
-from common import const
from common.log import logger
from config import conf
from plugins import *

@@ -119,15 +118,8 @@ class Tool(Plugin):
            return

    def _read_json(self) -> dict:
-        curdir = os.path.dirname(__file__)
-        config_path = os.path.join(curdir, "config.json")
-        tool_config = {"tools": [], "kwargs": {}}
-        if not os.path.exists(config_path):
-            return tool_config
-        else:
-            with open(config_path, "r") as f:
-                tool_config = json.load(f)
-        return tool_config
+        default_config = {"tools": [], "kwargs": {}}
+        return super().load_config() or default_config

    def _build_tool_kwargs(self, kwargs: dict):
        tool_model_name = kwargs.get("model_name")

@@ -137,6 +129,7 @@ class Tool(Plugin):
            "debug": kwargs.get("debug", False),
            "openai_api_key": conf().get("open_ai_api_key", ""),
+            "open_ai_api_base": conf().get("open_ai_api_base", "https://api.openai.com/v1"),
            "deployment_id": conf().get("azure_deployment_id", ""),
            "proxy": conf().get("proxy", ""),
            "request_timeout": request_timeout if request_timeout else conf().get("request_timeout", 120),
            # note: tool has not yet been tested against other models, but the config sources are still prioritized here; in general a plugin's config can override the global config
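The request_timeout entry above illustrates the precedence the note describes: a value from the plugin's kwargs wins, falling back to the global conf(), then to a hard default. A small sketch of that chain, with plain dicts standing in for the plugin and global configs:

```python
# Stand-ins: plugin-level kwargs and the global config
plugin_kwargs = {"debug": True}         # no request_timeout set at the plugin level
global_conf = {"request_timeout": 180}

def resolve(key, default):
    # plugin config first, then global config, then a hard default
    value = plugin_kwargs.get(key)
    if value is None:
        value = global_conf.get(key, default)
    return value

timeout = resolve("request_timeout", 120)  # falls through to the global value
debug = resolve("debug", False)            # taken from the plugin kwargs
```

Checking against `None` rather than truthiness matters here: it lets an explicit falsy plugin value (e.g. `"debug": false`) still override the global config.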
@@ -25,4 +25,4 @@ wechatpy

# chatgpt-tool-hub plugin

--extra-index-url https://pypi.python.org/simple
-chatgpt_tool_hub==0.4.4
+chatgpt_tool_hub==0.4.6
@@ -1,8 +1,8 @@
-openai==0.27.2
+openai>=0.27.8
HTMLParser>=0.0.2
PyQRCode>=1.2.1
qrcode>=7.4.2
requests>=2.28.2
chardet>=5.1.0
Pillow
pre-commit