Compare commits

..

36 Commits

Author SHA1 Message Date
zhayujie 41762a1c57 Merge pull request #1332 from zhayujie/feat-1.3.3
fix: reduce memory usage
2023-07-21 17:18:56 +08:00
zhayujie a786fa4b75 fix: reduce the expiration time and avoid storing the original message text to decrease memory usage 2023-07-21 17:16:34 +08:00
zhayujie e4c7602c0c docs: update README.md 2023-07-21 17:14:11 +08:00
zhayujie e0d2e34980 Merge pull request #1328 from zhayujie/feat-1.3.3
feat: support global plugin config for docker env
2023-07-21 10:50:16 +08:00
zhayujie 9ef8e1be3f feat: move loading config method to base class 2023-07-20 16:08:19 +08:00
zhayujie aae9b64833 fix: reduce unnecessary error traceback logs 2023-07-20 14:46:41 +08:00
zhayujie 4bab4299f2 fix: global plugin config read 2023-07-20 14:24:40 +08:00
zhayujie 954e55f4b4 feat: add plugin global config to support docker volumes 2023-07-20 11:36:02 +08:00
zhayujie 2361e3c28c docs: update README for railway cancelled free service 2023-07-19 18:23:59 +08:00
zhayujie 8aac86f0a9 Merge pull request #1291 from 6vision/master
(tool)fix azure model
2023-07-05 01:44:06 +08:00
vision 6384e9310b plugin(tool): update to 0.4.6
1. temp fix for the summary tool not ending bug
2. compatibility with the 0613 gpt-3.5 models
3. add azure's model name: gpt-35-turbo
2023-07-05 01:06:53 +08:00
vision 7a9205dfba fix azure model
Update chatgpt_tool_hub to 0.4.6 and pull the latest code; the tool plugin can then use the azure API!
2023-07-05 01:01:46 +08:00
Jianglang 94b47a56f4 Merge pull request #1282 from haikerapples/master_haiker_timetask
Add built-in timetask plugin
2023-07-01 18:37:07 +08:00
zhayujie 709b5be634 fix: group voice config and azure model calc support 2023-07-01 13:17:08 +08:00
haikerwang f970b2c168 Add built-in timetask plugin 2023-06-29 00:58:57 +08:00
zhayujie 973acb37ed docs: update README.md 2023-06-27 22:28:51 +08:00
zhayujie 1c9020a565 docs: update README.md 2023-06-26 23:52:32 +08:00
zhayujie c5f1d0042c docs: update README.md 2023-06-26 20:11:35 +08:00
zhayujie fa706e8b1d Merge pull request #1275 from zhayujie/feat-docker
chore: remove useless docker files
2023-06-26 14:16:18 +08:00
zhayujie 12c170f227 chore: remove useless docker files 2023-06-26 14:05:08 +08:00
zhayujie db27dfe227 docs: modify docker deploy steps 2023-06-26 13:10:51 +08:00
zhayujie 2db4673392 chore: fixed openai version 2023-06-26 12:29:09 +08:00
zhayujie 38619db629 Merge pull request #1274 from zhayujie/feat-dockerhub
feat: modify docker-compose file to pull image from dockerhub
2023-06-26 12:00:57 +08:00
zhayujie 930fd436ea feat: modify docker-compose file to pull image from dockerhub 2023-06-26 11:58:55 +08:00
zhayujie 98b8ff2fc8 Merge pull request #1271 from zhayujie/feat-dockerhub
feat: publish to dockerhub in github CI simultaneously
2023-06-26 01:24:24 +08:00
zhayujie d0662683f9 feat: publish to dockerhub in github CI simultaneously 2023-06-26 01:20:04 +08:00
zhayujie 957f2574a9 Merge pull request #1257 from 6vision/master
add reply_suffix
2023-06-17 16:50:11 +08:00
vision 109b362ebd Update config.py 2023-06-17 16:42:52 +08:00
vision ff3fdfa738 add reply_suffix 2023-06-17 16:36:08 +08:00
vision e2636ed54a add replay_suffix
Add an optional config parameter for the auto-reply suffix
2023-06-17 15:53:49 +08:00
vision dbe2f17e1a add reply_suffix
Add optional configs for the private and group chat reply suffixes
2023-06-17 15:46:03 +08:00
zhayujie 4dc535673f Merge pull request #1252 from 6vision/master
Update Tool README.md
2023-06-16 15:48:04 +08:00
vision f414b6408e Update README.md 2023-06-16 15:08:57 +08:00
lanvent 3aa2e6a04d fix: calculate tokens correctly for *0613 models 2023-06-16 00:51:29 +08:00
lanvent 1963ff273f chore(hello): change plugin logic 2023-06-14 13:40:20 +08:00
lanvent bb737a71d5 feat: update counting tokens for new models 2023-06-14 13:36:07 +08:00
33 changed files with 226 additions and 452 deletions
+9 -1
@@ -28,6 +28,12 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v3
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to the Container registry
uses: docker/login-action@v2
with:
@@ -39,7 +45,9 @@ jobs:
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
images: |
${{ env.IMAGE_NAME }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
+55 -11
@@ -13,11 +13,6 @@
> More applications are welcome to integrate: refer to the [Terminal code](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/terminal/terminal_channel.py) to implement the receive/send message logic. New plugins are also welcome; see the [plugin documentation](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins).
**One-click deploy:**
- Personal WeChat
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/qApznZ?referralCode=RC3znh)
# Demo
https://user-images.githubusercontent.com/26161723/233777277-e3b9928e-b88f-43e2-b0e0-3cbc923bc799.mp4
@@ -26,13 +21,13 @@ Demo made by [Visionn](https://www.wangpc.cc/)
# Community group
Add the assistant's WeChat to join the group:
Add the assistant's WeChat to join the group, please note "wechat" in your request
<img width="240" src="./docs/images/contact.jpg">
# Changelog
>**2023.06.12** Integrated the [LinkAI](https://chat.link-ai.tech/console) platform: create a personal knowledge base online and connect it to WeChat. Beta version, feedback welcome; see the [integration docs](https://link-ai.tech/platform/link-app/wechat).
>**2023.06.12** Integrated the [LinkAI](https://chat.link-ai.tech/console) platform: create a personal knowledge base online and connect it to WeChat, Official Accounts, and WeChat Work. See the [integration docs](https://link-ai.tech/platform/link-app/wechat).
>**2023.04.26** Support deploying as a WeChat Work application, compatible with plugins and supporting voice and image interaction, an ideal choice for a personal assistant. [Docs](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md). (contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
@@ -63,6 +58,8 @@ Demo made by [Visionn](https://www.wangpc.cc/)
Supports Linux, MacOS, and Windows (can run long-term on a Linux server); `Python` must also be installed.
> Python 3.7.1~3.9.X is recommended, preferably 3.8. Version 3.10 and above works on MacOS; it may not run correctly on other systems.
> Note: Docker or Railway deployment requires no Python environment or source download; skip straight to the next section.
**(1) Clone the repository:**
```bash
@@ -196,14 +193,61 @@ nohup python3 app.py & tail -f nohup.out # 在后台运行程序并通
### 3. Docker deployment
Reference: [Docker deployment](https://github.com/limccn/chatgpt-on-wechat/wiki/Docker%E9%83%A8%E7%BD%B2) (Contributed by [limccn](https://github.com/limccn))
> Docker deployment requires no source download or dependency installation; just fetch the docker-compose.yml config file and start the container.
### 4. Railway deployment (✅ recommended)
> Railway provides $5 and up to 500 hours of free quota per month.
1. Go to [Railway](https://railway.app/template/qApznZ?referralCode=RC3znh).
> Prerequisite: `docker` and `docker-compose` must be installed. Installation succeeded if `docker -v` and `docker-compose version` (or `docker compose version`) print a version number; download from the [docker website](https://docs.docker.com/engine/install/).
#### (1) Download the docker-compose.yml file
```bash
wget https://open-1317903499.cos.ap-guangzhou.myqcloud.com/docker-compose.yml
```
After downloading, open `docker-compose.yml` and modify the required configs, such as `OPEN_AI_API_KEY` and `GROUP_NAME_WHITE_LIST`.
#### (2) Start the container
Run the following command in the directory containing `docker-compose.yml` to start the container:
```bash
sudo docker compose up -d
```
If `sudo docker ps` shows a container with NAMES chatgpt-on-wechat, it is running successfully.
Note:
- If `docker-compose` is a 1.X version, run `sudo docker-compose up -d` to start the container instead
- The command pulls the latest image from [docker hub](https://hub.docker.com/r/zhayujie/chatgpt-on-wechat); the latest image is built on each new project release
Finally, run the following command to view the container logs; scan the QR code in the logs to complete login:
```bash
sudo docker logs -f chatgpt-on-wechat
```
#### (3) Using plugins
To modify plugin configs inside the docker container, use a volume mount: rename the [plugin config file](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/config.json.template)
to `config.json`, place it in the same directory as `docker-compose.yml`, and add a `volumes` mapping under the `chatgpt-on-wechat` section of `docker-compose.yml`:
```
volumes:
- ./config.json:/app/plugins/config.json
```
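For context, a minimal `docker-compose.yml` with this mount added might look like the following sketch (the image name matches the docker hub repo mentioned above; the env values are placeholders, not a complete configuration):

```yaml
version: '2.0'
services:
  chatgpt-on-wechat:
    image: zhayujie/chatgpt-on-wechat
    container_name: chatgpt-on-wechat
    environment:
      OPEN_AI_API_KEY: 'YOUR API KEY'
    volumes:
      # global plugin config mounted from the host
      - ./config.json:/app/plugins/config.json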
### 4. Railway deployment
> Railway provides $5 and up to 500 hours of free quota per month. (Update 07.11: most accounts can no longer deploy for free)
1. Go to [Railway](https://railway.app/template/qApznZ?referralCode=RC3znh)
2. Click the `Deploy Now` button.
3. Set environment variables to override runtime parameters, e.g. `open_ai_api_key`, `character_desc`
**One-click deploy:**
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/qApznZ?referralCode=RC3znh)
## FAQ
FAQs <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>
+1
@@ -121,6 +121,7 @@ class ChatGPTBot(Bot, OpenAIImage):
if args is None:
args = self.args
response = openai.ChatCompletion.create(api_key=api_key, messages=session.messages, **args)
# logger.debug("[CHATGPT] response={}".format(response))
# logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
return {
"total_tokens": response["usage"]["total_tokens"],
+8 -8
@@ -57,25 +57,25 @@ def num_tokens_from_messages(messages, model):
"""Returns the number of tokens used by a list of messages."""
import tiktoken
if model == "gpt-3.5-turbo" or model == "gpt-35-turbo":
return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
elif model == "gpt-4":
return num_tokens_from_messages(messages, model="gpt-4-0314")
if model in ["gpt-3.5-turbo-0301", "gpt-35-turbo"]:
return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
elif model in ["gpt-4-0314", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-16k-0613", "gpt-35-turbo-16k"]:
return num_tokens_from_messages(messages, model="gpt-4")
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
logger.debug("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo-0301":
if model == "gpt-3.5-turbo":
tokens_per_message = 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
tokens_per_name = -1 # if there's a name, the role is omitted
elif model == "gpt-4-0314":
elif model == "gpt-4":
tokens_per_message = 3
tokens_per_name = 1
else:
logger.warn(f"num_tokens_from_messages() is not implemented for model {model}. Returning num tokens assuming gpt-3.5-turbo-0301.")
return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
logger.warn(f"num_tokens_from_messages() is not implemented for model {model}. Returning num tokens assuming gpt-3.5-turbo.")
return num_tokens_from_messages(messages, model="gpt-3.5-turbo")
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
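The per-message accounting in the hunk above can be sketched standalone. A whitespace split stands in for tiktoken's encoder here (an assumption for illustration, so absolute counts differ from real tokenizer output); only the per-message/per-name arithmetic matches the diff:

```python
def num_tokens_from_messages(messages, model="gpt-3.5-turbo"):
    # encode() is a stand-in for tiktoken's encoding.encode()
    encode = lambda text: text.split()
    if model == "gpt-3.5-turbo":
        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
        tokens_per_name = -1    # if there's a name, the role is omitted
    elif model == "gpt-4":
        tokens_per_message = 3
        tokens_per_name = 1
    else:
        raise NotImplementedError(model)
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens

msgs = [{"role": "user", "content": "hello there"}]
print(num_tokens_from_messages(msgs))  # 4 + 1 + 2 + 3 = 10
```

The diff's change is that newer model names (`gpt-4-0613`, `gpt-3.5-turbo-16k`, `gpt-35-turbo-16k`, ...) are routed onto these same two accounting schemes.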
+2 -2
@@ -223,9 +223,9 @@ class ChatChannel(Channel):
return self._decorate_reply(context, reply)
if context.get("isgroup", False):
reply_text = "@" + context["msg"].actual_user_nickname + "\n" + reply_text.strip()
reply_text = conf().get("group_chat_reply_prefix", "") + reply_text
reply_text = conf().get("group_chat_reply_prefix", "") + reply_text + conf().get("group_chat_reply_suffix", "")
else:
reply_text = conf().get("single_chat_reply_prefix", "") + reply_text
reply_text = conf().get("single_chat_reply_prefix", "") + reply_text + conf().get("single_chat_reply_suffix", "")
reply.content = reply_text
elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:
reply.content = "[" + str(reply.type) + "]\n" + reply.content
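The suffix change above reduces to a small pure function; the names below are illustrative, not the project's API:

```python
def decorate_reply(reply_text, conf, is_group, nickname=None):
    """Apply the configured prefix and the newly added suffix, mirroring the diff."""
    if is_group:
        # group replies @-mention the original speaker first
        reply_text = "@" + nickname + "\n" + reply_text.strip()
        return conf.get("group_chat_reply_prefix", "") + reply_text + conf.get("group_chat_reply_suffix", "")
    return conf.get("single_chat_reply_prefix", "") + reply_text + conf.get("single_chat_reply_suffix", "")

conf = {"single_chat_reply_prefix": "[bot] ", "single_chat_reply_suffix": "\n--"}
print(decorate_reply("hi", conf, is_group=False))  # "[bot] hi\n--"
```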
+3 -3
@@ -53,7 +53,7 @@ def _check(func):
if msgId in self.receivedMsgs:
logger.info("Wechat message {} already received, ignore".format(msgId))
return
self.receivedMsgs[msgId] = cmsg
self.receivedMsgs[msgId] = True
create_time = cmsg.create_time # message timestamp
if conf().get("hot_reload") == True and int(create_time) < int(time.time()) - 60: # skip history messages older than 1 minute
logger.debug("[WX]history message {} skipped".format(msgId))
@@ -105,7 +105,7 @@ class WechatChannel(ChatChannel):
def __init__(self):
super().__init__()
self.receivedMsgs = ExpiredDict(60 * 60 * 24)
self.receivedMsgs = ExpiredDict(60 * 60)
def startup(self):
itchat.instance.receivingRetryCount = 600 # adjust the disconnect timeout
@@ -159,7 +159,7 @@ class WechatChannel(ChatChannel):
@_check
def handle_group(self, cmsg: ChatMessage):
if cmsg.ctype == ContextType.VOICE:
if conf().get("speech_recognition") != True:
if conf().get("group_speech_recognition") != True:
return
logger.debug("[WX]receive voice for group msg: {}".format(cmsg.content))
elif cmsg.ctype == ContextType.IMAGE:
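Two of the memory fixes above live in the dedup store: the TTL drops from 24 hours to 1 hour, and only `True` is cached per message id instead of the whole message object. A minimal sketch of the expiring-dict interface the channel relies on (this is an assumption for illustration, not the project's actual `ExpiredDict`):

```python
import time

class ExpiredDict(dict):
    """Minimal sketch: entries expire `expires_in_seconds` after insertion."""
    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds

    def __setitem__(self, key, value):
        super().__setitem__(key, (value, time.time() + self.expires_in_seconds))

    def __contains__(self, key):
        if not super().__contains__(key):
            return False
        _, expiry = super().__getitem__(key)
        if time.time() > expiry:
            super().__delitem__(key)  # lazily drop expired entries
            return False
        return True

    def __getitem__(self, key):
        if key not in self:
            raise KeyError(key)
        return super().__getitem__(key)[0]

received = ExpiredDict(60 * 60)  # one hour, as in the diff
received["msg-1"] = True         # store only a flag, not the message text
print("msg-1" in received)       # True
```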
+1
@@ -2,6 +2,7 @@
"open_ai_api_key": "YOUR API KEY",
"model": "gpt-3.5-turbo",
"proxy": "",
"hot_reload": false,
"single_chat_prefix": [
"bot",
"@bot"
+28 -2
@@ -22,8 +22,10 @@ available_setting = {
# Bot trigger config
"single_chat_prefix": ["bot", "@bot"], # in private chat, the text must contain this prefix to trigger a bot reply
"single_chat_reply_prefix": "[bot] ", # auto-reply prefix in private chat, to distinguish the bot from a real person
"group_chat_prefix": ["@bot"], # in group chat, containing this prefix triggers a bot reply
"single_chat_reply_suffix": "", # auto-reply suffix in private chat; \n can be used for a line break
"group_chat_prefix": ["@bot"], # in group chat, containing this prefix triggers a bot reply
"group_chat_reply_prefix": "", # auto-reply prefix in group chat
"group_chat_reply_suffix": "", # auto-reply suffix in group chat; \n can be used for a line break
"group_chat_keyword": [], # in group chat, containing this keyword triggers a bot reply
"group_at_off": False, # whether to disable the @bot trigger in group chat
"group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"], # list of group names with auto-reply enabled
@@ -35,7 +37,8 @@ available_setting = {
"image_create_size": "256x256", # image size; options: 256x256, 512x512, 1024x1024
# chatgpt session params
"expires_in_seconds": 3600, # expiration time for idle sessions
"character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。", # persona description
# persona description
"character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
"conversation_max_tokens": 1000, # max number of characters kept for context memory
# chatgpt rate limit config
"rate_limit_chatgpt": 20, # chatgpt call rate limit
@@ -226,3 +229,26 @@ def subscribe_msg():
trigger_prefix = conf().get("single_chat_prefix", [""])[0]
msg = conf().get("subscribe_msg", "")
return msg.format(trigger_prefix=trigger_prefix)
# global plugin config
plugin_config = {}
def write_plugin_config(pconf: dict):
"""
Write the global plugin config
:param pconf: full plugin config
"""
global plugin_config
for k in pconf:
plugin_config[k.lower()] = pconf[k]
def pconf(plugin_name: str) -> dict:
"""
Get a plugin's config by name
:param plugin_name: plugin name
:return: the plugin's config
"""
return plugin_config.get(plugin_name.lower())
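The two helpers added above are self-contained; reproduced here with a quick usage check showing that keys are normalized to lowercase, so lookups are case-insensitive:

```python
# Global plugin config, as added in config.py above
plugin_config = {}

def write_plugin_config(pconf_dict: dict):
    """Write the global plugin config; keys are normalized to lowercase."""
    global plugin_config
    for k in pconf_dict:
        plugin_config[k.lower()] = pconf_dict[k]

def pconf(plugin_name: str) -> dict:
    """Get a plugin's config by (case-insensitive) name."""
    return plugin_config.get(plugin_name.lower())

write_plugin_config({"Godcmd": {"password": "", "admin_users": []}})
print(pconf("godcmd"))  # {'password': '', 'admin_users': []}
print(pconf("GODCMD"))  # same entry: lookup is case-insensitive
```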
-39
@@ -1,39 +0,0 @@
FROM python:3.10-alpine
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
ARG CHATGPT_ON_WECHAT_VER
ENV BUILD_PREFIX=/app
RUN apk add --no-cache \
bash \
curl \
wget \
&& export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`} \
&& wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
&& tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
&& rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& cd ${BUILD_PREFIX} \
&& cp config-template.json ${BUILD_PREFIX}/config.json \
&& /usr/local/bin/python -m pip install --no-cache --upgrade pip \
&& pip install --no-cache -r requirements.txt --extra-index-url https://alpine-wheels.github.io/index\
&& pip install --no-cache -r requirements-optional.txt --extra-index-url https://alpine-wheels.github.io/index\
&& apk del curl wget
WORKDIR ${BUILD_PREFIX}
ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh \
&& adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
&& chown -R noroot:noroot ${BUILD_PREFIX}
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
-29
@@ -1,29 +0,0 @@
FROM python:3.10-alpine
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
ARG CHATGPT_ON_WECHAT_VER
ENV BUILD_PREFIX=/app
ADD . ${BUILD_PREFIX}
RUN apk add --no-cache bash ffmpeg espeak \
&& cd ${BUILD_PREFIX} \
&& cp config-template.json config.json \
&& /usr/local/bin/python -m pip install --no-cache --upgrade pip \
&& pip install --no-cache -r requirements.txt --extra-index-url https://alpine-wheels.github.io/index\
&& pip install --no-cache -r requirements-optional.txt --extra-index-url https://alpine-wheels.github.io/index
WORKDIR ${BUILD_PREFIX}
ADD docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh \
&& adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
&& chown -R noroot:noroot ${BUILD_PREFIX}
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
-41
@@ -1,41 +0,0 @@
FROM python:3.10
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
ARG CHATGPT_ON_WECHAT_VER
ENV BUILD_PREFIX=/app
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
wget \
curl \
&& rm -rf /var/lib/apt/lists/* \
&& export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`} \
&& wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
&& tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
&& rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& cd ${BUILD_PREFIX} \
&& cp config-template.json ${BUILD_PREFIX}/config.json \
&& /usr/local/bin/python -m pip install --no-cache --upgrade pip \
&& pip install --no-cache -r requirements.txt \
&& pip install --no-cache -r requirements-optional.txt
WORKDIR ${BUILD_PREFIX}
ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh \
&& mkdir -p /home/noroot \
&& groupadd -r noroot \
&& useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \
&& chown -R noroot:noroot /home/noroot ${BUILD_PREFIX} /usr/local/lib
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
-15
@@ -1,15 +0,0 @@
#!/bin/bash
# fetch latest release tag
CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`
# build image
docker build -f Dockerfile.alpine \
--build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
-t zhayujie/chatgpt-on-wechat .
# tag image
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:alpine
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
-15
@@ -1,15 +0,0 @@
#!/bin/bash
# fetch latest release tag
CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`
# build image
docker build -f Dockerfile.debian \
--build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
-t zhayujie/chatgpt-on-wechat .
# tag image
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:debian
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
@@ -1,23 +0,0 @@
FROM zhayujie/chatgpt-on-wechat:alpine
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
USER root
RUN apk add --no-cache \
ffmpeg \
espeak \
&& pip install --no-cache \
baidu-aip \
chardet \
SpeechRecognition
# replace entrypoint
ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
@@ -1,24 +0,0 @@
FROM zhayujie/chatgpt-on-wechat:debian
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
USER root
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ffmpeg \
espeak \
&& pip install --no-cache \
baidu-aip \
chardet \
SpeechRecognition
# replace entrypoint
ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
@@ -1,24 +0,0 @@
version: '2.0'
services:
chatgpt-on-wechat:
build:
context: ./
dockerfile: Dockerfile.alpine
image: zhayujie/chatgpt-on-wechat-voice-reply
container_name: chatgpt-on-wechat-voice-reply
environment:
OPEN_AI_API_KEY: 'YOUR API KEY'
OPEN_AI_PROXY: ''
SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
GROUP_CHAT_PREFIX: '["@bot"]'
GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
CONVERSATION_MAX_TOKENS: 1000
SPEECH_RECOGNITION: 'true'
CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
EXPIRES_IN_SECONDS: 3600
VOICE_REPLY_VOICE: 'true'
BAIDU_APP_ID: 'YOUR BAIDU APP ID'
BAIDU_API_KEY: 'YOUR BAIDU API KEY'
BAIDU_SECRET_KEY: 'YOUR BAIDU SERVICE KEY'
@@ -1,117 +0,0 @@
#!/bin/bash
set -e
# build prefix
CHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-""}
# path to config.json
CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""}
# execution command line
CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""}
OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""}
OPEN_AI_PROXY=${OPEN_AI_PROXY:-""}
SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""}
SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""}
GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""}
GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""}
IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""}
CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""}
SPEECH_RECOGNITION=${SPEECH_RECOGNITION:-""}
CHARACTER_DESC=${CHARACTER_DESC:-""}
EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-""}
VOICE_REPLY_VOICE=${VOICE_REPLY_VOICE:-""}
BAIDU_APP_ID=${BAIDU_APP_ID:-""}
BAIDU_API_KEY=${BAIDU_API_KEY:-""}
BAIDU_SECRET_KEY=${BAIDU_SECRET_KEY:-""}
# CHATGPT_ON_WECHAT_PREFIX is empty, use /app
if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then
CHATGPT_ON_WECHAT_PREFIX=/app
fi
# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json'
if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then
CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json
fi
# CHATGPT_ON_WECHAT_EXEC is empty, use python app.py
if [ "$CHATGPT_ON_WECHAT_EXEC" == "" ] ; then
CHATGPT_ON_WECHAT_EXEC="python app.py"
fi
# modify content in config.json
if [ "$OPEN_AI_API_KEY" != "" ] ; then
sed -i "s/\"open_ai_api_key\".*,$/\"open_ai_api_key\": \"$OPEN_AI_API_KEY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
else
echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m"
fi
# use http_proxy as default
if [ "$HTTP_PROXY" != "" ] ; then
sed -i "s/\"proxy\".*,$/\"proxy\": \"$HTTP_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$OPEN_AI_PROXY" != "" ] ; then
sed -i "s/\"proxy\".*,$/\"proxy\": \"$OPEN_AI_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then
sed -i "s/\"single_chat_prefix\".*,$/\"single_chat_prefix\": $SINGLE_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then
sed -i "s/\"single_chat_reply_prefix\".*,$/\"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$GROUP_CHAT_PREFIX" != "" ] ; then
sed -i "s/\"group_chat_prefix\".*,$/\"group_chat_prefix\": $GROUP_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then
sed -i "s/\"group_name_white_list\".*,$/\"group_name_white_list\": $GROUP_NAME_WHITE_LIST,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then
sed -i "s/\"image_create_prefix\".*,$/\"image_create_prefix\": $IMAGE_CREATE_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then
sed -i "s/\"conversation_max_tokens\".*,$/\"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$SPEECH_RECOGNITION" != "" ] ; then
sed -i "s/\"speech_recognition\".*,$/\"speech_recognition\": $SPEECH_RECOGNITION,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$CHARACTER_DESC" != "" ] ; then
sed -i "s/\"character_desc\".*,$/\"character_desc\": \"$CHARACTER_DESC\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$EXPIRES_IN_SECONDS" != "" ] ; then
sed -i "s/\"expires_in_seconds\".*$/\"expires_in_seconds\": $EXPIRES_IN_SECONDS/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
# append
if [ "$BAIDU_SECRET_KEY" != "" ] ; then
sed -i "1a \ \ \"baidu_secret_key\": \"$BAIDU_SECRET_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$BAIDU_API_KEY" != "" ] ; then
sed -i "1a \ \ \"baidu_api_key\": \"$BAIDU_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$BAIDU_APP_ID" != "" ] ; then
sed -i "1a \ \ \"baidu_app_id\": \"$BAIDU_APP_ID\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$VOICE_REPLY_VOICE" != "" ] ; then
sed -i "1a \ \ \"voice_reply_voice\": $VOICE_REPLY_VOICE," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
# go to prefix dir
cd $CHATGPT_ON_WECHAT_PREFIX
# excute
$CHATGPT_ON_WECHAT_EXEC
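The removed entrypoint's override mechanism was plain `sed` substitution on `config.json`, one pattern per env var. A reduced reproduction of one substitution (the file path and values here are throwaway, not the container's real paths):

```shell
# Write a throwaway config with the same line shape as config-template.json
cat > /tmp/demo-config.json <<'EOF'
{
  "open_ai_api_key": "YOUR API KEY",
  "proxy": "",
  "expires_in_seconds": 3600
}
EOF
OPEN_AI_API_KEY="sk-demo"
# The same substitution the entrypoint performed for each env var:
# match the whole key line (which ends in a comma) and rewrite its value
sed -i "s/\"open_ai_api_key\".*,$/\"open_ai_api_key\": \"$OPEN_AI_API_KEY\",/" /tmp/demo-config.json
grep open_ai_api_key /tmp/demo-config.json
```

Note the `$` anchor: keys without a trailing comma (like `expires_in_seconds` above) needed the separate pattern the script used for the last entry.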
@@ -1,20 +1,23 @@
version: '2.0'
services:
chatgpt-on-wechat:
build:
context: ./
dockerfile: Dockerfile.alpine
image: zhayujie/chatgpt-on-wechat
container_name: sample-chatgpt-on-wechat
container_name: chatgpt-on-wechat
security_opt:
- seccomp:unconfined
environment:
OPEN_AI_API_KEY: 'YOUR API KEY'
OPEN_AI_PROXY: ''
MODEL: 'gpt-3.5-turbo'
PROXY: ''
SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
GROUP_CHAT_PREFIX: '["@bot"]'
GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
CONVERSATION_MAX_TOKENS: 1000
SPEECH_RECOGNITION: "False"
SPEECH_RECOGNITION: 'False'
CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
EXPIRES_IN_SECONDS: 3600
EXPIRES_IN_SECONDS: 3600
USE_LINKAI: 'False'
LINKAI_API_KEY: ''
LINKAI_APP_CODE: ''
-16
@@ -1,16 +0,0 @@
OPEN_AI_API_KEY=YOUR API KEY
OPEN_AI_PROXY=
SINGLE_CHAT_PREFIX=["bot", "@bot"]
SINGLE_CHAT_REPLY_PREFIX="[bot] "
GROUP_CHAT_PREFIX=["@bot"]
GROUP_NAME_WHITE_LIST=["ChatGPT测试群", "ChatGPT测试群2"]
IMAGE_CREATE_PREFIX=["画", "看", "找"]
CONVERSATION_MAX_TOKENS=1000
SPEECH_RECOGNITION=false
CHARACTER_DESC=你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。
EXPIRES_IN_SECONDS=3600
# Optional
#CHATGPT_ON_WECHAT_PREFIX=/app
#CHATGPT_ON_WECHAT_CONFIG_PATH=/app/config.json
#CHATGPT_ON_WECHAT_EXEC=python app.py
-26
@@ -1,26 +0,0 @@
IMG:=`cat Name`
MOUNT:=
PORT_MAP:=
DOTENV:=.env
CONTAINER_NAME:=sample-chatgpt-on-wechat
echo:
echo $(IMG)
run_d:
docker rm $(CONTAINER_NAME) || echo
docker run -dt --name $(CONTAINER_NAME) $(PORT_MAP) \
--env-file=$(DOTENV) \
$(MOUNT) $(IMG)
run_i:
docker rm $(CONTAINER_NAME) || echo
docker run -it --name $(CONTAINER_NAME) $(PORT_MAP) \
--env-file=$(DOTENV) \
$(MOUNT) $(IMG)
stop:
docker stop $(CONTAINER_NAME)
rm: stop
docker rm $(CONTAINER_NAME)
-1
@@ -1 +0,0 @@
zhayujie/chatgpt-on-wechat
+10 -9
@@ -24,16 +24,17 @@ class Banwords(Plugin):
def __init__(self):
super().__init__()
try:
# load config
conf = super().load_config()
curdir = os.path.dirname(__file__)
config_path = os.path.join(curdir, "config.json")
conf = None
if not os.path.exists(config_path):
conf = {"action": "ignore"}
with open(config_path, "w") as f:
json.dump(conf, f, indent=4)
else:
with open(config_path, "r") as f:
conf = json.load(f)
if not conf:
# write the default config if none exists
config_path = os.path.join(curdir, "config.json")
if not os.path.exists(config_path):
conf = {"action": "ignore"}
with open(config_path, "w") as f:
json.dump(conf, f, indent=4)
self.searchr = WordsSearch()
self.action = conf["action"]
banwords_path = os.path.join(curdir, "banwords.txt")
+2 -7
@@ -29,14 +29,9 @@ class BDunit(Plugin):
def __init__(self):
super().__init__()
try:
curdir = os.path.dirname(__file__)
config_path = os.path.join(curdir, "config.json")
conf = None
if not os.path.exists(config_path):
conf = super().load_config()
if not conf:
raise Exception("config.json not found")
else:
with open(config_path, "r") as f:
conf = json.load(f)
self.service_id = conf["service_id"]
self.api_key = conf["api_key"]
self.secret_key = conf["secret_key"]
+24
@@ -0,0 +1,24 @@
{
"godcmd": {
"password": "",
"admin_users": []
},
"banwords": {
"action": "replace",
"reply_filter": true,
"reply_action": "ignore"
},
"tool": {
"tools": [
"python",
"url-get",
"terminal",
"meteo-weather"
],
"kwargs": {
"top_k_results": 2,
"no_default": false,
"model_name": "gpt-3.5-turbo"
}
}
}
+7 -10
@@ -178,16 +178,13 @@ class Godcmd(Plugin):
def __init__(self):
super().__init__()
curdir = os.path.dirname(__file__)
config_path = os.path.join(curdir, "config.json")
gconf = None
if not os.path.exists(config_path):
gconf = {"password": "", "admin_users": []}
with open(config_path, "w") as f:
json.dump(gconf, f, indent=4)
else:
with open(config_path, "r") as f:
gconf = json.load(f)
config_path = os.path.join(os.path.dirname(__file__), "config.json")
gconf = super().load_config()
if not gconf:
if not os.path.exists(config_path):
gconf = {"password": "", "admin_users": []}
with open(config_path, "w") as f:
json.dump(gconf, f, indent=4)
if gconf["password"] == "":
self.temp_password = "".join(random.sample(string.digits, 4))
logger.info("[Godcmd] 因未设置口令,本次的临时口令为%s" % self.temp_password)
+2 -2
@@ -34,14 +34,14 @@ class Hello(Plugin):
e_context["context"].type = ContextType.TEXT
msg: ChatMessage = e_context["context"]["msg"]
e_context["context"].content = f'请你随机使用一种风格说一句问候语来欢迎新用户"{msg.actual_user_nickname}"加入群聊。'
e_context.action = EventAction.CONTINUE # continue the event, hand it to the next plugin or default logic
e_context.action = EventAction.BREAK # end the event, fall through to default handling
return
if e_context["context"].type == ContextType.PATPAT:
e_context["context"].type = ContextType.TEXT
msg: ChatMessage = e_context["context"]["msg"]
e_context["context"].content = f"请你随机使用一种风格介绍你自己,并告诉用户输入#help可以查看帮助信息。"
e_context.action = EventAction.CONTINUE # continue the event, hand it to the next plugin or default logic
e_context.action = EventAction.BREAK # end the event, fall through to default handling
return
content = e_context["context"].content
+22
@@ -1,6 +1,28 @@
import os
import json
from config import pconf
from common.log import logger
class Plugin:
def __init__(self):
self.handlers = {}
def load_config(self) -> dict:
"""
Load the current plugin's config
:return: plugin config dict
"""
# prefer the global config in plugins/config.json
plugin_conf = pconf(self.name)
if not plugin_conf:
# no global config found; fall back to config.json in the plugin directory
plugin_config_path = os.path.join(self.path, "config.json")
if os.path.exists(plugin_config_path):
with open(plugin_config_path, "r") as f:
plugin_conf = json.load(f)
logger.debug(f"loading plugin config, plugin_name={self.name}, conf={plugin_conf}")
return plugin_conf
def get_help_text(self, **kwargs):
return "暂无帮助信息"
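The priority in `load_config` above (global `plugins/config.json` first, then the plugin's own `config.json`) can be sketched without the `Plugin` class; the function and variable names below are illustrative:

```python
import json
import os
import tempfile

def load_plugin_config(name, global_conf, plugin_dir):
    """Prefer the global config entry; fall back to <plugin_dir>/config.json."""
    conf = global_conf.get(name.lower())
    if not conf:
        path = os.path.join(plugin_dir, "config.json")
        if os.path.exists(path):
            with open(path, "r") as f:
                conf = json.load(f)
    return conf

global_conf = {"banwords": {"action": "replace"}}
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"action": "ignore"}, f)
    a = load_plugin_config("Banwords", global_conf, d)  # global entry wins
    b = load_plugin_config("tool", {}, d)               # falls back to the plugin dir
print(a)  # {'action': 'replace'}
print(b)  # {'action': 'ignore'}
```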
+27 -3
@@ -9,7 +9,7 @@ import sys
from common.log import logger
from common.singleton import singleton
from common.sorted_dict import SortedDict
from config import conf
from config import conf, write_plugin_config
from .event import *
@@ -62,6 +62,28 @@ class PluginManager:
self.save_config()
return pconf
@staticmethod
def _load_all_config():
"""
Background: plugin configs currently live in config.json under each plugin's directory, which is awkward
to map into docker at runtime. This adds a unified entry point: plugins/config.json is loaded first, and each plugin's own config.json is unaffected.
Loads all plugin configs from plugins/config.json and writes them into the global config in config.py for plugins to use.
A plugin instance can then fetch its config via config.pconf(plugin_name).
"""
all_config_path = "./plugins/config.json"
try:
if os.path.exists(all_config_path):
# read from all plugins config
with open(all_config_path, "r", encoding="utf-8") as f:
all_conf = json.load(f)
logger.info(f"load all config from plugins/config.json: {all_conf}")
# write to global config
write_plugin_config(all_conf)
except Exception as e:
logger.error(e)
def scan_plugins(self):
logger.info("Scaning plugins ...")
plugins_dir = "./plugins"
@@ -88,7 +110,7 @@ class PluginManager:
self.loaded[plugin_path] = importlib.import_module(import_path)
self.current_plugin_path = None
except Exception as e:
logger.exception("Failed to import plugin %s: %s" % (plugin_name, e))
logger.warn("Failed to import plugin %s: %s" % (plugin_name, e))
continue
pconf = self.pconf
news = [self.plugins[name] for name in self.plugins]
@@ -123,7 +145,7 @@ class PluginManager:
try:
instance = plugincls()
except Exception as e:
logger.exception("Failed to init %s, diabled. %s" % (name, e))
logger.warn("Failed to init %s, diabled. %s" % (name, e))
self.disable_plugin(name)
failed_plugins.append(name)
continue
@@ -149,6 +171,8 @@ class PluginManager:
def load_plugins(self):
self.load_config()
self.scan_plugins()
# load all plugin configs
self._load_all_config()
pconf = self.pconf
logger.debug("plugins.json config={}".format(pconf))
for name, plugin in pconf["plugins"].items():
+4
@@ -11,6 +11,10 @@
"summary": {
"url": "https://github.com/lanvent/plugin_summary.git",
"desc": "总结聊天记录的插件"
},
"timetask": {
"url": "https://github.com/haikerapples/timetask.git",
"desc": "一款定时任务系统的插件"
}
}
}
+5 -4
@@ -114,18 +114,19 @@ $tool reset: 重置工具。
---
###### Note 1: tools marked with * require an api-key to use (add the entry under kwargs in config.json); some tools need external network access
#### [How to apply](https://github.com/goldfishh/chatgpt-tool-hub/blob/master/docs/apply_optional_tool.md)
## [How to apply for tool api keys](https://github.com/goldfishh/chatgpt-tool-hub/blob/master/docs/apply_optional_tool.md)
## config.json configuration
###### The default tools need no config; other tools must be configured manually. An example:
###### The default tools need no config; other tools must be configured manually. Example adding the morning-news and bing-search tools:
```json
{
"tools": ["wikipedia", "你想要添加的其他工具"], // fill in the names of the extra tools you want
"tools": ["bing-search", "news", "你想要添加的其他工具"], // fill in extra tool names; here the "bing-search" and "news" tools are added (the news tool auto-loads sub-tools such as morning-news and finance-news)
"kwargs": {
"debug": true, // set this when you run into problems and ask for help
"request_timeout": 120, // openai API timeout
"no_default": false, // whether to skip the 4 default tools
// tools marked with * need an api-key filled in here; see `How to apply` above for the api_name
"bing_subscription_key": "4871f273a4804743", // tools marked with * need an api-key; here the key for bing-search is filled in, see `How to apply for tool api keys` above for the api_name
"morning_news_api_key": "5w1kjNh9VQlUc", // the api key for morning-news
}
}
+3 -10
@@ -10,7 +10,6 @@ from bridge.bridge import Bridge
from bridge.context import ContextType
from bridge.reply import Reply, ReplyType
from common import const
from common.log import logger
from config import conf
from plugins import *
@@ -119,15 +118,8 @@ class Tool(Plugin):
return
def _read_json(self) -> dict:
curdir = os.path.dirname(__file__)
config_path = os.path.join(curdir, "config.json")
tool_config = {"tools": [], "kwargs": {}}
if not os.path.exists(config_path):
return tool_config
else:
with open(config_path, "r") as f:
tool_config = json.load(f)
return tool_config
default_config = {"tools": [], "kwargs": {}}
return super().load_config() or default_config
def _build_tool_kwargs(self, kwargs: dict):
tool_model_name = kwargs.get("model_name")
@@ -137,6 +129,7 @@ class Tool(Plugin):
"debug": kwargs.get("debug", False),
"openai_api_key": conf().get("open_ai_api_key", ""),
"open_ai_api_base": conf().get("open_ai_api_base", "https://api.openai.com/v1"),
"deployment_id": conf().get("azure_deployment_id", ""),
"proxy": conf().get("proxy", ""),
"request_timeout": request_timeout if request_timeout else conf().get("request_timeout", 120),
# note: the tool has not yet been tested with other models, but config sources are still prioritized; plugin config can generally override global config
+1 -1
@@ -25,4 +25,4 @@ wechatpy
# chatgpt-tool-hub plugin
--extra-index-url https://pypi.python.org/simple
chatgpt_tool_hub==0.4.4
chatgpt_tool_hub==0.4.6
+2 -2
@@ -1,8 +1,8 @@
openai==0.27.2
openai>=0.27.8
HTMLParser>=0.0.2
PyQRCode>=1.2.1
qrcode>=7.4.2
requests>=2.28.2
chardet>=5.1.0
Pillow
pre-commit
pre-commit