mirror of
https://github.com/zhayujie/chatgpt-on-wechat.git
synced 2026-05-07 11:59:23 +08:00
Compare commits

59 commits:

- a660aa2133
- 5e48dd50ac
- 2d3ffa1738
- 663967680a
- b190db73dc
- 475d2f7911
- a1323c9de8
- 260c374a56
- 3d264207a8
- 3f889ab75f
- 852adb72a2
- cfd423c991
- 021ee2312e
- 0f830f2317
- 3ef7855384
- d760b045d5
- 53cc1df369
- 9b2da6c431
- b3e1f56fb9
- 1aa2382843
- 3c04325aae
- b404e2c51f
- 5b0f0e8b6c
- f9b0ad7697
- 224ee6bd89
- 1dc39af423
- 2c8da59b47
- 9e3a5395c7
- 54290f7e5d
- 1bb5c6dc0d
- b204d305a1
- 8fa4041fc2
- 8107165792
- fc4912c640
- 36ed9d02b7
- d6c92e1fd5
- 4ccad86010
- 38ad01a387
- e014b0406c
- a4e8e64b5d
- 48e258dd67
- 574f05cc6f
- c2e4d88842
- 99b4700b49
- 32cff41df5
- 8eace7e30e
- d02508df41
- 3db452ef71
- d7a8854fa1
- 882e6c3576
- 51f0b898f0
- e6112568ed
- 720ad07f83
- cc19017c01
- 55fe38d5fb
- 494c5a6222
- 1711a5c064
- d38fc61043
- e5ab350bbf
@@ -1,6 +1,6 @@

### Prerequisites

- 1. The network can access the openai API [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+ 1. The network can access the openai API
2. python is installed: version between 3.7 and 3.10, with dependencies installed
3. No similar problem was found among existing issues
4. No similar problem exists in the [FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs)
@@ -1,8 +1,10 @@

.DS_Store
.idea
.wechaty/
__pycache__/
venv*
*.pyc
config.json
QR.png
nohup.out
tmp
@@ -10,12 +10,16 @@

- [x] **Multi-account:** supports running multiple WeChat accounts at the same time
- [x] **Image generation:** supports generating images from a description and sending them to private or group chats automatically
- [x] **Context memory**: supports multi-turn conversation memory, maintaining an independent context session for each friend
- [x] **Speech recognition:** supports receiving and processing voice messages, replying by text or voice

# Changelog

>**2023.03.09:** Implemented parsing of and replies to WeChat voice messages on top of the `whisper API`; add the config item `"speech_recognition":true` to enable it, see [#415](https://github.com/zhayujie/chatgpt-on-wechat/issues/415) for usage. (contributed by [wanggang1987](https://github.com/wanggang1987) in [#385](https://github.com/zhayujie/chatgpt-on-wechat/pull/385))

>**2023.03.02:** Integrated the [ChatGPT API](https://platform.openai.com/docs/guides/chat) (gpt-3.5-turbo), now the default conversation model; the openai dependency must be upgraded (`pip3 install --upgrade openai`). For network problems see [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)

- >**2023.02.20:** Added [python-wechaty](https://github.com/wechaty/python-wechaty) as an optional channel; it uses the Pad protocol and is relatively stable, but the Token is paid (see [#244](https://github.com/zhayujie/chatgpt-on-wechat/pull/244) for usage, contributed by [ZQ7](https://github.com/ZQ7))
+ >**2023.02.20:** Added [python-wechaty](https://github.com/wechaty/python-wechaty) as an optional channel; it uses the Pad protocol, but the Token is paid (see [#244](https://github.com/zhayujie/chatgpt-on-wechat/pull/244) for usage, contributed by [ZQ7](https://github.com/ZQ7))

>**2023.02.09:** Logging in by scanning the QR code carries a risk of account suspension; use with caution, see [#58](https://github.com/AutumnWhj/ChatGPT-wechat-bot/issues/158)
@@ -58,15 +62,14 @@

Supports Linux, MacOS, and Windows (it can run long-term on a Linux server); `Python` must also be installed.

> Python 3.7.1–3.9.X is recommended; 3.10 and above works on MacOS, but may not run properly on other systems.

- 1. Clone the project code:
+ **(1) Clone the project code:**

```bash
git clone https://github.com/zhayujie/chatgpt-on-wechat
cd chatgpt-on-wechat/
```

- 2. Install the required core dependencies:
+ **(2) Install the core dependencies (required):**

```bash
pip3 install itchat-uos==1.5.0.dev0
@@ -74,13 +77,17 @@ pip3 install --upgrade openai
```

Note: `itchat-uos` uses the pinned version 1.5.0.dev0; `openai` uses the latest version, which must be above 0.27.0.

**(3) Extended dependencies (optional):**

Dependencies for speech recognition and voice replies: [#415](https://github.com/zhayujie/chatgpt-on-wechat/issues/415).

## Configuration

The configuration template is `config-template.json` in the project root; copy it to create the effective `config.json` file:

```bash
cp config-template.json config.json
```

Then fill in `config.json`. The default options are explained below and can be customized as needed:
@@ -89,6 +96,7 @@ cp config-template.json config.json

```bash
# Example config.json contents
{
  "open_ai_api_key": "YOUR API KEY",                        # the OpenAI API KEY created above
+ "model": "gpt-3.5-turbo",                                 # model name
  "proxy": "127.0.0.1:7890",                                # ip and port of the proxy client
  "single_chat_prefix": ["bot", "@bot"],                    # in private chats, text must contain this prefix to trigger a bot reply
  "single_chat_reply_prefix": "[bot] ",                     # prefix prepended to automatic replies in private chats, to distinguish them from a real person
@@ -96,7 +104,8 @@ cp config-template.json config.json
  "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],  # list of group names with auto-reply enabled
  "image_create_prefix": ["画", "看", "找"],                  # prefixes that trigger image replies
  "conversation_max_tokens": 1000,                          # maximum number of characters kept as context memory
- "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。"   # persona description
+ "speech_recognition": false,                              # whether to enable speech recognition
+ "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",  # persona description
}
```
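The example above annotates each field with `#` comments, but strict JSON has no comment syntax, so a literal copy of the annotated example would not parse. A small sketch (a hypothetical helper, not part of the project) that strips such comments before parsing — naive in that it assumes no `#` appears inside a string value:

```python
import json

def load_annotated_json(text: str) -> dict:
    """Strip '#' comments (naive: assumes no '#' inside string values) and parse."""
    lines = []
    for line in text.splitlines():
        stripped = line.split('#', 1)[0].rstrip()
        if stripped:
            lines.append(stripped)
    return json.loads('\n'.join(lines))

example = '''
{
  "open_ai_api_key": "YOUR API KEY",      # your key
  "single_chat_prefix": ["bot", "@bot"]   # trigger prefixes
}
'''
conf = load_annotated_json(example)
print(conf["single_chat_prefix"])  # ['bot', '@bot']
```

In practice, simply write plain JSON in the real `config.json` and keep the comments only in documentation.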
**Configuration notes:**

@@ -112,12 +121,22 @@ cp config-template.json config.json

+ By default, the bot auto-replies whenever it is @-mentioned in a group; in addition, any group message detected to start with "@bot" also triggers an auto-reply (convenient for triggering it yourself), which corresponds to the `group_chat_prefix` config item
+ Optional: the `group_name_keyword_white_list` config item fuzzy-matches group names, and `group_chat_keyword` fuzzy-matches group message content; usage is the same as for the two items above. (Contributed by [evolay](https://github.com/evolay))

- **3. Other settings**
+ **3. Speech recognition**

+ Adding `"speech_recognition": true` enables speech recognition; by default openai's whisper model transcribes the message to text and the bot also replies with text. Currently only private chats are supported (note that since voice messages cannot be matched against a prefix, once enabled the bot auto-replies to all voice messages);
+ Adding `"voice_reply_voice": true` enables voice replies to voice messages, but a key for the corresponding speech-synthesis platform must be configured; due to limitations of the itchat protocol, only mp3 voice files can be sent, whereas with wechaty the reply is a real WeChat voice message.

**4. Other settings**

+ `model`: model name; currently supports `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, and `gpt-4-32k` (the gpt-4 api is not yet open)
+ `temperature`, `frequency_penalty`, `presence_penalty`: Chat API parameters, see the [official OpenAI documentation](https://platform.openai.com/docs/api-reference/chat)
+ `proxy`: since the `openai` API is currently not reachable from mainland China, the address of a proxy client must be configured, see [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+ For image generation, besides meeting the private or group trigger conditions, an extra keyword prefix is required, configured via `image_create_prefix`
+ Parameters of the OpenAI conversation and image APIs (content freedom, reply length limit, image size, etc.) can be adjusted directly in the [code](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/bot/openai/open_ai_bot.py) `bot/openai/open_ai_bot.py`, following the [completions API](https://beta.openai.com/docs/api-reference/completions) and [images API](https://beta.openai.com/docs/api-reference/images) docs.
+ `conversation_max_tokens`: the maximum number of characters of context that can be remembered (one question plus one answer forms one exchange; once the accumulated characters exceed the limit, the earliest exchange is removed first)
+ `rate_limit_chatgpt`, `rate_limit_dalle`: maximum chat and image-generation rates per minute; requests beyond the limit are queued and processed in order.
+ `clear_memory_commands`: in-conversation commands that clear the preceding memory; the string array lets you customize command aliases.
+ `hot_reload`: keeps the WeChat scan-login state after the program exits; off by default.
+ The `character_desc` config holds a message said to the bot; it remembers it as its persona, and you can give it any personality you like (for more on session context see this [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/43))
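The `conversation_max_tokens` behavior described above — drop the earliest exchange first once the accumulated size exceeds the limit — can be sketched as follows. Plain character counts stand in for the project's token accounting, so this is an illustration of the trimming policy, not the actual `Session.save_session` logic:

```python
def trim_session(session: list, max_tokens: int) -> list:
    """Drop the oldest user/assistant exchange (keeping the system prompt)
    until the rough size fits within max_tokens."""
    def size(msgs):
        return sum(len(m["content"]) for m in msgs)
    # session[0] is the system prompt; exchanges start at index 1
    while size(session) > max_tokens and len(session) > 3:
        del session[1:3]  # remove the earliest question/answer pair
    return session

session = [{"role": "system", "content": "persona"}]
for i in range(5):
    session.append({"role": "user", "content": "q" * 100})
    session.append({"role": "assistant", "content": "a" * 100})
trimmed = trim_session(session, 500)
print(len(trimmed))  # 5: system prompt plus the two most recent exchanges
```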
@@ -141,8 +160,7 @@ python3 app.py

```bash
touch nohup.out                           # create the log file on the first run
nohup python3 app.py & tail -f nohup.out  # run in the background and output the QR code via the log
```

- After scanning the QR code to log in, the program runs in the background on the server; `ctrl+c` then closes the log without affecting the background process. Use `ps -ef | grep app.py | grep -v grep` to view the background process, and `kill` it first if you want to restart the program. To reopen the log after closing it, just run `tail -f nohup.out`.
- The scripts/ directory has corresponding scripts that can be invoked
+ After scanning the QR code to log in, the program runs in the background on the server; `ctrl+c` then closes the log without affecting the background process. Use `ps -ef | grep app.py | grep -v grep` to view the background process, and `kill` it first if you want to restart the program. To reopen the log after closing it, just run `tail -f nohup.out`. In addition, the `scripts` directory provides one-command scripts to start and stop the program.

> **Note:** if, after scanning, the phone says the login confirmation needs a 5s wait while the terminal QR code refreshes again with `Log in time out, reloading QR code`, modify one line of code as described in this [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/8) to resolve it.

+4 -3
@@ -1,6 +1,7 @@

"""
bot factory
"""
+from common import const


def create_bot(bot_type):
@@ -9,17 +10,17 @@ def create_bot(bot_type):
    :param bot_type: bot type code
    :return: bot instance
    """
-    if bot_type == 'baidu':
+    if bot_type == const.BAIDU:
        # Baidu Unit conversation API
        from bot.baidu.baidu_unit_bot import BaiduUnitBot
        return BaiduUnitBot()

-    elif bot_type == 'chatGPT':
+    elif bot_type == const.CHATGPT:
        # ChatGPT web API
        from bot.chatgpt.chat_gpt_bot import ChatGPTBot
        return ChatGPTBot()

-    elif bot_type == 'openAI':
+    elif bot_type == const.OPEN_AI:
        # official OpenAI conversation model API
        from bot.openai.open_ai_bot import OpenAIBot
        return OpenAIBot()

+50 -35
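The refactor above swaps string literals for constants from `common.const`, a module whose contents are not part of this diff. Inferred from the literals being replaced, it plausibly looks like:

```python
# common/const.py -- assumed contents, inferred from the string
# literals ('baidu', 'chatGPT', 'openAI') that the diff replaces
BAIDU = "baidu"
CHATGPT = "chatGPT"
OPEN_AI = "openAI"
```

Centralizing the codes this way lets the factory and the bridge agree on bot types without scattering magic strings.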
@@ -1,72 +1,85 @@

# encoding:utf-8

from bot.bot import Bot
-from config import conf
+from config import conf, load_config
from common.log import logger
from common.token_bucket import TokenBucket
from common.expired_dict import ExpiredDict
import openai
import time

if conf().get('expires_in_seconds'):
-    user_session = ExpiredDict(conf().get('expires_in_seconds'))
+    all_sessions = ExpiredDict(conf().get('expires_in_seconds'))
else:
-    user_session = dict()
+    all_sessions = dict()

# OpenAI conversation model API (usable)
class ChatGPTBot(Bot):
    def __init__(self):
        openai.api_key = conf().get('open_ai_api_key')
        if conf().get('open_ai_api_base'):
            openai.api_base = conf().get('open_ai_api_base')
        proxy = conf().get('proxy')
        if proxy:
            openai.proxy = proxy
        if conf().get('rate_limit_chatgpt'):
            self.tb4chatgpt = TokenBucket(conf().get('rate_limit_chatgpt', 20))
        if conf().get('rate_limit_dalle'):
            self.tb4dalle = TokenBucket(conf().get('rate_limit_dalle', 50))

    def reply(self, query, context=None):
        # acquire reply content
        if not context or not context.get('type') or context.get('type') == 'TEXT':
            logger.info("[OPEN_AI] query={}".format(query))
-            from_user_id = context['from_user_id']
-            if query == '#清除记忆':
-                Session.clear_session(from_user_id)
+            session_id = context.get('session_id') or context.get('from_user_id')
+            clear_memory_commands = conf().get('clear_memory_commands', ['#清除记忆'])
+            if query in clear_memory_commands:
+                Session.clear_session(session_id)
                return '记忆已清除'
            elif query == '#清除所有':
                Session.clear_all_session()
                return '所有人记忆已清除'
+            elif query == '#更新配置':
+                load_config()
+                return '配置已更新'

-            new_query = Session.build_session_query(query, from_user_id)
-            logger.debug("[OPEN_AI] session query={}".format(new_query))
+            session = Session.build_session_query(query, session_id)
+            logger.debug("[OPEN_AI] session query={}".format(session))

            # if context.get('stream'):
            #     # reply in stream
-            #     return self.reply_text_stream(query, new_query, from_user_id)
+            #     return self.reply_text_stream(query, new_query, session_id)

-            reply_content = self.reply_text(new_query, from_user_id, 0)
-            logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, from_user_id, reply_content["content"]))
+            reply_content = self.reply_text(session, session_id, 0)
+            logger.debug("[OPEN_AI] new_query={}, session_id={}, reply_cont={}".format(session, session_id, reply_content["content"]))
            if reply_content["completion_tokens"] > 0:
-                Session.save_session(reply_content["content"], from_user_id, reply_content["total_tokens"])
+                Session.save_session(reply_content["content"], session_id, reply_content["total_tokens"])
            return reply_content["content"]

        elif context.get('type', None) == 'IMAGE_CREATE':
            return self.create_img(query, 0)

-    def reply_text(self, query, user_id, retry_count=0) -> dict:
+    def reply_text(self, session, session_id, retry_count=0) -> dict:
        '''
        call openai's ChatCompletion to get the answer
-        :param query: query content
-        :param user_id: from user id
+        :param session: a conversation session
+        :param session_id: session id
        :param retry_count: retry count
        :return: {}
        '''
        try:
            if conf().get('rate_limit_chatgpt') and not self.tb4chatgpt.get_token():
                return {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
            response = openai.ChatCompletion.create(
-                model="gpt-3.5-turbo",  # conversation model name
-                messages=query,
-                temperature=0.9,  # in [0,1]; larger values make the reply more random
+                model=conf().get("model") or "gpt-3.5-turbo",  # conversation model name
+                messages=session,
+                temperature=conf().get('temperature', 0.9),  # in [0,1]; larger values make the reply more random
                #max_tokens=4096,  # maximum number of characters in the reply
                top_p=1,
-                frequency_penalty=0.0,  # in [-2,2]; larger values favor generating different content
-                presence_penalty=0.0,  # in [-2,2]; larger values favor generating different content
+                frequency_penalty=conf().get('frequency_penalty', 0.0),  # in [-2,2]; larger values favor generating different content
+                presence_penalty=conf().get('presence_penalty', 0.0),  # in [-2,2]; larger values favor generating different content
            )
-            logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
+            # logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
            return {"total_tokens": response["usage"]["total_tokens"],
                    "completion_tokens": response["usage"]["completion_tokens"],
                    "content": response.choices[0]['message']['content']}
@@ -76,7 +89,7 @@ class ChatGPTBot(Bot):

            if retry_count < 1:
                time.sleep(5)
                logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1))
-                return self.reply_text(query, user_id, retry_count+1)
+                return self.reply_text(session, session_id, retry_count+1)
            else:
                return {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
        except openai.error.APIConnectionError as e:
@@ -91,11 +104,13 @@ class ChatGPTBot(Bot):

        except Exception as e:
            # unknown exception
            logger.exception(e)
-            Session.clear_session(user_id)
+            Session.clear_session(session_id)
            return {"completion_tokens": 0, "content": "请再问我一次吧"}

    def create_img(self, query, retry_count=0):
        try:
            if conf().get('rate_limit_dalle') and not self.tb4dalle.get_token():
                return "请求太快了,请休息一下再问我吧"
            logger.info("[OPEN_AI] image_query={}".format(query))
            response = openai.Image.create(
                prompt=query,  # image description
@@ -110,16 +125,16 @@ class ChatGPTBot(Bot):

            if retry_count < 1:
                time.sleep(5)
                logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1))
-                return self.reply_text(query, retry_count+1)
+                return self.create_img(query, retry_count+1)
            else:
-                return "提问太快啦,请休息一下再问我吧"
+                return "请求太快啦,请休息一下再问我吧"
        except Exception as e:
            logger.exception(e)
            return None

class Session(object):
    @staticmethod
-    def build_session_query(query, user_id):
+    def build_session_query(query, session_id):
        '''
        build query with conversation history
        e.g. [
@@ -129,28 +144,28 @@ class Session(object):

            {"role": "user", "content": "Where was it played?"}
        ]
        :param query: query content
-        :param user_id: from user id
+        :param session_id: session id
        :return: query content with conversation
        '''
-        session = user_session.get(user_id, [])
+        session = all_sessions.get(session_id, [])
        if len(session) == 0:
            system_prompt = conf().get("character_desc", "")
            system_item = {'role': 'system', 'content': system_prompt}
            session.append(system_item)
-            user_session[user_id] = session
+            all_sessions[session_id] = session
        user_item = {'role': 'user', 'content': query}
        session.append(user_item)
        return session

    @staticmethod
-    def save_session(answer, user_id, total_tokens):
+    def save_session(answer, session_id, total_tokens):
        max_tokens = conf().get("conversation_max_tokens")
        if not max_tokens:
            # default 1000
            max_tokens = 1000
        max_tokens = int(max_tokens)

-        session = user_session.get(user_id)
+        session = all_sessions.get(session_id)
        if session:
            # append conversation
            gpt_item = {'role': 'assistant', 'content': answer}
@@ -174,9 +189,9 @@ class Session(object):

                dec_tokens = dec_tokens - max_tokens

    @staticmethod
-    def clear_session(user_id):
-        user_session[user_id] = []
+    def clear_session(session_id):
+        all_sessions[session_id] = []

    @staticmethod
    def clear_all_session():
-        user_session.clear()
+        all_sessions.clear()
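The bot above leans on two helpers from `common/` whose implementations are not included in this diff: `TokenBucket` (constructed with a per-minute rate, queried via `get_token()`) and `ExpiredDict` (a dict whose entries expire after `expires_in_seconds`). Minimal sketches matching those call-site interfaces — assumptions, not the project's actual code:

```python
import time

class TokenBucket:
    """Per-minute rate limiter: get_token() returns False once the allowance
    for the current window is exhausted (sketch, interface inferred from usage)."""
    def __init__(self, tokens_per_minute):
        self.capacity = tokens_per_minute
        self.tokens = float(tokens_per_minute)
        self.rate = tokens_per_minute / 60.0   # refill per second
        self.last = time.monotonic()

    def get_token(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class ExpiredDict(dict):
    """dict whose entries disappear after expires_in_seconds (sketch)."""
    def __init__(self, expires_in_seconds):
        super().__init__()
        self.ttl = expires_in_seconds
    def __setitem__(self, key, value):
        super().__setitem__(key, (time.monotonic() + self.ttl, value))
    def __getitem__(self, key):
        deadline, value = super().__getitem__(key)
        if time.monotonic() > deadline:
            del self[key]
            raise KeyError(key)
        return value
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

tb = TokenBucket(60)       # 60 requests per minute
print(tb.get_token())      # first request passes: True
d = ExpiredDict(0.01)
d["user"] = ["session"]
time.sleep(0.05)
print(d.get("user"))       # entry has expired: None
```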
@@ -12,13 +12,17 @@ user_session = dict()

class OpenAIBot(Bot):
    def __init__(self):
        openai.api_key = conf().get('open_ai_api_key')
        if conf().get('open_ai_api_base'):
            openai.api_base = conf().get('open_ai_api_base')
        proxy = conf().get('proxy')
        if proxy:
            openai.proxy = proxy

    def reply(self, query, context=None):
        # acquire reply content
        if not context or not context.get('type') or context.get('type') == 'TEXT':
            logger.info("[OPEN_AI] query={}".format(query))
-            from_user_id = context['from_user_id']
+            from_user_id = context.get('from_user_id') or context.get('session_id')
            if query == '#清除记忆':
                Session.clear_session(from_user_id)
                return '记忆已清除'
@@ -41,7 +45,7 @@ class OpenAIBot(Bot):

    def reply_text(self, query, user_id, retry_count=0):
        try:
            response = openai.Completion.create(
-                model="text-davinci-003",  # conversation model name
+                model=conf().get("model") or "text-davinci-003",  # conversation model name
                prompt=query,
                temperature=0.9,  # in [0,1]; larger values make the reply more random
                max_tokens=1200,  # maximum number of characters in the reply
@@ -163,4 +167,4 @@ class Session(object):

    @staticmethod
    def clear_all_session():
        user_session.clear()
+16 -1

@@ -1,4 +1,7 @@

from bot import bot_factory
from voice import voice_factory
from config import conf
from common import const


class Bridge(object):
@@ -6,4 +9,16 @@ class Bridge(object):

        pass

    def fetch_reply_content(self, query, context):
-        return bot_factory.create_bot("chatGPT").reply(query, context)
+        bot_type = const.CHATGPT
+        model_type = conf().get("model")
+        if model_type in ["gpt-3.5-turbo", "gpt-4", "gpt-4-32k"]:
+            bot_type = const.CHATGPT
+        elif model_type in ["text-davinci-003"]:
+            bot_type = const.OPEN_AI
+        return bot_factory.create_bot(bot_type).reply(query, context)

    def fetch_voice_to_text(self, voiceFile):
        return voice_factory.create_voice("openai").voiceToText(voiceFile)

    def fetch_text_to_voice(self, text):
        return voice_factory.create_voice("baidu").textToVoice(text)
+7 -1

@@ -11,7 +11,7 @@ class Channel(object):

        """
        raise NotImplementedError

-    def handle(self, msg):
+    def handle_text(self, msg):
        """
        process received msg
        :param msg: message object
@@ -29,3 +29,9 @@ class Channel(object):

    def build_reply_content(self, query, context=None):
        return Bridge().fetch_reply_content(query, context)

    def build_voice_to_text(self, voice_file):
        return Bridge().fetch_voice_to_text(voice_file)

    def build_text_to_voice(self, text):
        return Bridge().fetch_text_to_voice(text)
@@ -14,4 +14,7 @@ def create_channel(channel_type):

    elif channel_type == 'wxy':
        from channel.wechat.wechaty_channel import WechatyChannel
        return WechatyChannel()
    elif channel_type == 'terminal':
        from channel.terminal.terminal_channel import TerminalChannel
        return TerminalChannel()
    raise RuntimeError
@@ -0,0 +1,29 @@

from channel.channel import Channel
import sys


class TerminalChannel(Channel):
    def startup(self):
        context = {"from_user_id": "User"}
        print("\nPlease input your question")
        while True:
            try:
                prompt = self.get_input("User:\n")
            except KeyboardInterrupt:
                print("\nExiting...")
                sys.exit()

            print("Bot:")
            sys.stdout.flush()
            for res in super().build_reply_content(prompt, context):
                print(res, end="")
                sys.stdout.flush()
            print("\n")

    def get_input(self, prompt):
        """
        Multi-line input function
        """
        print(prompt, end="")
        line = input()
        return line
@@ -3,22 +3,25 @@

"""
wechat channel
"""

import itchat
import json
from itchat.content import *
from channel.channel import Channel
from concurrent.futures import ThreadPoolExecutor
from common.log import logger
from common.tmp_dir import TmpDir
from config import conf
import requests
import io
import time

thread_pool = ThreadPoolExecutor(max_workers=8)


@itchat.msg_register(TEXT)
def handler_single_msg(msg):
-    WechatChannel().handle(msg)
+    WechatChannel().handle_text(msg)
    return None


@@ -28,24 +31,55 @@ def handler_group_msg(msg):

    return None


@itchat.msg_register(VOICE)
def handler_single_voice(msg):
    WechatChannel().handle_voice(msg)
    return None


class WechatChannel(Channel):
    def __init__(self):
        pass

    def startup(self):
        # login by scan QRCode
-        itchat.auto_login(enableCmdQR=2)
+        itchat.auto_login(enableCmdQR=2, hotReload=conf().get('hot_reload', False))

        # start message listener
        itchat.run()

-    def handle(self, msg):
-        logger.debug("[WX]receive msg: " + json.dumps(msg, ensure_ascii=False))
+    def handle_voice(self, msg):
+        if conf().get('speech_recognition') != True:
+            return
+        logger.debug("[WX]receive voice msg: " + msg['FileName'])
+        thread_pool.submit(self._do_handle_voice, msg)

+    def _do_handle_voice(self, msg):
+        from_user_id = msg['FromUserName']
+        other_user_id = msg['User']['UserName']
+        if from_user_id == other_user_id:
+            file_name = TmpDir().path() + msg['FileName']
+            msg.download(file_name)
+            query = super().build_voice_to_text(file_name)
+            if conf().get('voice_reply_voice'):
+                self._do_send_voice(query, from_user_id)
+            else:
+                self._do_send_text(query, from_user_id)

+    def handle_text(self, msg):
+        logger.debug("[WX]receive text msg: " + json.dumps(msg, ensure_ascii=False))
+        content = msg['Text']
+        self._handle_single_msg(msg, content)

+    def _handle_single_msg(self, msg, content):
        from_user_id = msg['FromUserName']
        to_user_id = msg['ToUserName']              # recipient id
        other_user_id = msg['User']['UserName']     # counterpart id
        content = msg['Text']
        create_time = msg['CreateTime']             # message time
        match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
+        if conf().get('hot_reload') == True and int(create_time) < int(time.time()) - 60:  # skip history messages from more than 1 minute ago
+            logger.debug("[WX]history message skipped")
+            return
        if "」\n- - - - - - - - - - - - - - -" in content:
            logger.debug("[WX]reference query skipped")
            return
@@ -60,9 +94,8 @@ class WechatChannel(Channel):

            if img_match_prefix:
                content = content.split(img_match_prefix, 1)[1].strip()
                thread_pool.submit(self._do_send_img, content, from_user_id)
            else:
-                thread_pool.submit(self._do_send, content, from_user_id)
+                thread_pool.submit(self._do_send_text, content, from_user_id)
        elif to_user_id == other_user_id and match_prefix:
            # a message you sent to a friend yourself
            str_list = content.split(match_prefix, 1)
@@ -73,13 +106,17 @@ class WechatChannel(Channel):

                content = content.split(img_match_prefix, 1)[1].strip()
                thread_pool.submit(self._do_send_img, content, to_user_id)
            else:
-                thread_pool.submit(self._do_send, content, to_user_id)
+                thread_pool.submit(self._do_send_text, content, to_user_id)

    def handle_group(self, msg):
        logger.debug("[WX]receive group msg: " + json.dumps(msg, ensure_ascii=False))
        group_name = msg['User'].get('NickName', None)
        group_id = msg['User'].get('UserName', None)
+        create_time = msg['CreateTime']  # message time
+        if conf().get('hot_reload') == True and int(create_time) < int(time.time()) - 60:  # skip history group messages from more than 1 minute ago
+            logger.debug("[WX]history group message skipped")
+            return
        if not group_name:
            return ""
        origin_content = msg['Content']
@@ -105,16 +142,30 @@ class WechatChannel(Channel):

        thread_pool.submit(self._do_send_group, content, msg)

    def send(self, msg, receiver):
-        logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver))
        itchat.send(msg, toUserName=receiver)
+        logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver))

-    def _do_send(self, query, reply_user_id):
+    def _do_send_voice(self, query, reply_user_id):
        try:
            if not query:
                return
            context = dict()
            context['from_user_id'] = reply_user_id
            reply_text = super().build_reply_content(query, context)
            if reply_text:
+                replyFile = super().build_text_to_voice(reply_text)
+                itchat.send_file(replyFile, toUserName=reply_user_id)
+                logger.info('[WX] sendFile={}, receiver={}'.format(replyFile, reply_user_id))
        except Exception as e:
            logger.exception(e)

+    def _do_send_text(self, query, reply_user_id):
+        try:
+            if not query:
+                return
+            context = dict()
+            context['session_id'] = reply_user_id
+            reply_text = super().build_reply_content(query, context)
+            if reply_text:
+                self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
        except Exception as e:
@@ -138,8 +189,8 @@ class WechatChannel(Channel):

            image_storage.seek(0)

            # send the image
-            logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
            itchat.send_image(image_storage, reply_user_id)
+            logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
        except Exception as e:
            logger.exception(e)

@@ -147,11 +198,19 @@ class WechatChannel(Channel):

        if not query:
            return
        context = dict()
        context['from_user_id'] = msg['ActualUserName']
+        group_name = msg['User']['NickName']
+        group_id = msg['User']['UserName']
+        group_chat_in_one_session = conf().get('group_chat_in_one_session', [])
+        if ('ALL_GROUP' in group_chat_in_one_session or \
+                group_name in group_chat_in_one_session or \
+                self.check_contain(group_name, group_chat_in_one_session)):
+            context['session_id'] = group_id
+        else:
+            context['session_id'] = msg['ActualUserName']
        reply_text = super().build_reply_content(query, context)
        if reply_text:
            reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip()
-            self.send(conf().get("group_chat_reply_prefix", "") + reply_text, msg['User']['UserName'])
+            self.send(conf().get("group_chat_reply_prefix", "") + reply_text, group_id)

    def check_prefix(self, content, prefix_list):
@@ -168,3 +227,4 @@ class WechatChannel(Channel):

            if content.find(ky) != -1:
                return True
        return None
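`check_prefix` is called throughout this channel, but its body falls outside the hunks shown. From the call sites — the result is compared against `None` and also passed to `content.split(match_prefix, 1)` — it plausibly returns the matched prefix string, e.g.:

```python
def check_prefix(content, prefix_list):
    """Return the first prefix that content starts with, else None.
    Sketch inferred from call sites; not the project's exact code."""
    for prefix in prefix_list or []:
        if content.startswith(prefix):
            return prefix
    return None

print(check_prefix("bot hello", ["bot", "@bot"]))  # bot
```

Note that with this reading, an empty-string entry in `single_chat_prefix` would match every message, since `startswith("")` is always true.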
@@ -10,12 +10,16 @@ import json

import time
import asyncio
import requests
+import pysilk
+import wave
+from pydub import AudioSegment
from typing import Optional, Union
from wechaty_puppet import MessageType, FileBox, ScanStatus  # type: ignore
from wechaty import Wechaty, Contact
from wechaty.user import Message, Room, MiniProgram, UrlLink
from channel.channel import Channel
from common.log import logger
+from common.tmp_dir import TmpDir
from config import conf

@@ -89,6 +93,48 @@ class WechatyChannel(Channel):

                        await self._do_send_img(content, to_user_id)
                    else:
                        await self._do_send(content, to_user_id)
+        elif room is None and msg.type() == MessageType.MESSAGE_TYPE_AUDIO:
+            if not msg.is_self():  # received a voice message
+                # download the voice file
+                voice_file = await msg.to_file_box()
+                silk_file = TmpDir().path() + voice_file.name
+                await voice_file.to_file(silk_file)
+                logger.info("[WX]receive voice file: " + silk_file)
+                # convert the file to wav audio
+                wav_file = silk_file.replace(".slk", ".wav")
+                with open(silk_file, 'rb') as f:
+                    silk_data = f.read()
+                pcm_data = pysilk.decode(silk_data)
+
+                with wave.open(wav_file, 'wb') as wav_data:
+                    wav_data.setnchannels(1)
+                    wav_data.setsampwidth(2)
+                    wav_data.setframerate(24000)
+                    wav_data.writeframes(pcm_data)
+                if os.path.exists(wav_file):
+                    converter_state = "true"   # wav conversion succeeded
+                else:
+                    converter_state = "false"  # wav conversion failed
+                logger.info("[WX]receive voice converter: " + converter_state)
+                # recognize the speech as text
+                query = super().build_voice_to_text(wav_file)
+                # check the keyword prefix
+                match_prefix = self.check_prefix(query, conf().get('single_chat_prefix'))
+                if match_prefix is not None:
+                    if match_prefix != '':
+                        str_list = query.split(match_prefix, 1)
+                        if len(str_list) == 2:
+                            query = str_list[1].strip()
+                    # send the reply
+                    if conf().get('voice_reply_voice'):
+                        await self._do_send_voice(query, from_user_id)
+                    else:
+                        await self._do_send(query, from_user_id)
+                else:
+                    logger.info("[WX]receive voice check prefix: " + 'False')
+                # remove cached files
+                os.remove(wav_file)
+                os.remove(silk_file)
        elif room and msg.type() == MessageType.MESSAGE_TYPE_TEXT:
            # group & text message
            room_id = room.room_id
@@ -101,6 +147,13 @@ class WechatyChannel(Channel):

            match_prefix = (is_at and not config.get("group_at_off", False)) \
                or self.check_prefix(content, config.get('group_chat_prefix')) \
                or self.check_contain(content, config.get('group_chat_keyword'))
+            # When Wechaty judges is_at to be True, the returned content already has the @ filtered out; when is_at is False, the full content is returned
+            # So if a custom prefix matches, strip the prefix (plus trailing space) from the content, to support features like custom prefix-triggered AI image generation
+            prefixes = config.get('group_chat_prefix')
+            for prefix in prefixes:
+                if content.startswith(prefix):
+                    content = content.replace(prefix, '', 1).strip()
+                    break
            if ('ALL_GROUP' in config.get('group_name_white_list') or room_name in config.get(
                    'group_name_white_list') or self.check_contain(room_name, config.get(
                    'group_name_keyword_white_list'))) and match_prefix:
@@ -109,7 +162,7 @@ class WechatyChannel(Channel):

                    content = content.split(img_match_prefix, 1)[1].strip()
                    await self._do_send_group_img(content, room_id)
                else:
-                    await self._do_send_group(content, room_id, from_user_id, from_user_name)
+                    await self._do_send_group(content, room_id, room_name, from_user_id, from_user_name)

    async def send(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver):
        logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver))
@@ -128,13 +181,46 @@ class WechatyChannel(Channel):

            if not query:
                return
            context = dict()
-            context['from_user_id'] = reply_user_id
+            context['session_id'] = reply_user_id
            reply_text = super().build_reply_content(query, context)
            if reply_text:
                await self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
        except Exception as e:
            logger.exception(e)

    async def _do_send_voice(self, query, reply_user_id):
        try:
            if not query:
                return
            context = dict()
            context['session_id'] = reply_user_id
            reply_text = super().build_reply_content(query, context)
            if reply_text:
                # convert the mp3 file to silk format
                mp3_file = super().build_text_to_voice(reply_text)
                silk_file = mp3_file.replace(".mp3", ".silk")
                # Load the MP3 file
                audio = AudioSegment.from_file(mp3_file, format="mp3")
                # Convert to WAV format
                audio = audio.set_frame_rate(24000).set_channels(1)
                wav_data = audio.raw_data
                sample_width = audio.sample_width
                # Encode to SILK format
                silk_data = pysilk.encode(wav_data, 24000)
                # Save the silk file
                with open(silk_file, "wb") as f:
                    f.write(silk_data)
                # send the voice message
                t = int(time.time())
                file_box = FileBox.from_file(silk_file, name=str(t) + '.silk')
                await self.send(file_box, reply_user_id)
                # remove cached files
                os.remove(mp3_file)
                os.remove(silk_file)
        except Exception as e:
            logger.exception(e)

    async def _do_send_img(self, query, reply_user_id):
        try:
            if not query:
@@ -159,11 +245,17 @@ class WechatyChannel(Channel):

        except Exception as e:
            logger.exception(e)

-    async def _do_send_group(self, query, group_id, group_user_id, group_user_name):
+    async def _do_send_group(self, query, group_id, group_name, group_user_id, group_user_name):
        if not query:
            return
        context = dict()
        context['from_user_id'] = str(group_id) + '-' + str(group_user_id)
        group_chat_in_one_session = conf().get('group_chat_in_one_session', [])
|
||||
if ('ALL_GROUP' in group_chat_in_one_session or \
|
||||
group_name in group_chat_in_one_session or \
|
||||
self.check_contain(group_name, group_chat_in_one_session)):
|
||||
context['session_id'] = str(group_id)
|
||||
else:
|
||||
context['session_id'] = str(group_id) + '-' + str(group_user_id)
|
||||
reply_text = super().build_reply_content(query, context)
|
||||
if reply_text:
|
||||
reply_text = '@' + group_user_name + ' ' + reply_text.strip()
|
||||
|
||||
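The `_do_send_group` change above routes a group message either to one shared session per group or to one session per (group, member) pair, depending on the `group_chat_in_one_session` config. A minimal standalone sketch of that decision (function name and arguments here are illustrative, not the project's API; the real code also consults `check_contain` for keyword matching, approximated below with a substring test):

```python
def pick_session_id(group_id, user_id, group_name, shared_groups):
    """Return the chat-session key: one per group if the group is configured
    to share a session, otherwise one per (group, user) pair."""
    shared = ('ALL_GROUP' in shared_groups
              or group_name in shared_groups
              or any(kw in group_name for kw in shared_groups))
    return str(group_id) if shared else '%s-%s' % (group_id, user_id)

print(pick_session_id(7, 42, 'DevChat', ['ALL_GROUP']))  # → 7
print(pick_session_id(7, 42, 'DevChat', []))             # → 7-42
```

With a shared session, everyone in the group contributes to (and sees) one conversation history; otherwise each member keeps an independent context.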
@@ -0,0 +1,4 @@
# bot_type
OPEN_AI = "openAI"
CHATGPT = "chatGPT"
BAIDU = "baidu"
@@ -0,0 +1,20 @@

import os
import pathlib

from config import conf


class TmpDir(object):
    """A temporary directory for cache files, created on demand when
    speech recognition is enabled.
    """

    tmpFilePath = pathlib.Path('./tmp/')

    def __init__(self):
        pathExists = os.path.exists(self.tmpFilePath)
        if not pathExists and conf().get('speech_recognition') == True:
            os.makedirs(self.tmpFilePath)

    def path(self):
        return str(self.tmpFilePath) + '/'
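The exists-then-makedirs pair above can race if two threads construct the object at once. A sketch of the same idea using `pathlib.Path.mkdir` with `exist_ok=True`, which is idempotent and race-free (class name is illustrative, not the project's):

```python
import pathlib

class CacheDir:
    """mkdir-on-demand cache directory; mkdir(exist_ok=True) avoids the
    check-then-create race of os.path.exists() + os.makedirs()."""
    def __init__(self, base='./tmp'):
        self._path = pathlib.Path(base)
        self._path.mkdir(parents=True, exist_ok=True)  # safe to call repeatedly

    def path(self):
        return str(self._path) + '/'
```

Pass a scratch directory as `base` when experimenting so the working tree stays clean.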
@@ -0,0 +1,45 @@
import threading
import time


class TokenBucket:
    def __init__(self, tpm, timeout=None):
        self.capacity = int(tpm)  # bucket capacity
        self.tokens = 0  # initial token count
        self.rate = int(tpm) / 60  # tokens generated per second
        self.timeout = timeout  # timeout while waiting for a token
        self.cond = threading.Condition()  # condition variable
        self.is_running = True
        # start the token-generating thread
        threading.Thread(target=self._generate_tokens).start()

    def _generate_tokens(self):
        """Generate tokens."""
        while self.is_running:
            with self.cond:
                if self.tokens < self.capacity:
                    self.tokens += 1
                self.cond.notify()  # wake up threads waiting for a token
            time.sleep(1 / self.rate)

    def get_token(self):
        """Acquire a token."""
        with self.cond:
            while self.tokens <= 0:
                flag = self.cond.wait(self.timeout)
                if not flag:  # timed out
                    return False
            self.tokens -= 1
        return True

    def close(self):
        self.is_running = False


if __name__ == "__main__":
    token_bucket = TokenBucket(20, None)  # a bucket producing 20 tokens per minute
    # token_bucket = TokenBucket(20, 0.1)
    for i in range(3):
        if token_bucket.get_token():
            print(f"第{i+1}次请求成功")
    token_bucket.close()
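The same rate limit can be expressed without a background thread by refilling the bucket lazily from elapsed time on each acquisition attempt. A deterministic sketch with an injected clock (names and the injected-clock design are illustrative, not the project's implementation):

```python
class LazyTokenBucket:
    """Token bucket refilled from elapsed wall time instead of a
    generator thread; `now` is injected so the example is deterministic."""
    def __init__(self, tpm, now):
        self.capacity = int(tpm)
        self.rate = tpm / 60.0          # tokens per second
        self.tokens = 0.0
        self.now = now
        self.last = now()

    def try_acquire(self):
        t = self.now()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

clock = iter([0.0, 1.0, 1.1, 31.0])
bucket = LazyTokenBucket(60, lambda: next(clock))   # 60 tpm -> 1 token/s
print([bucket.try_acquire() for _ in range(3)])     # → [True, False, True]
```

The lazy variant needs no `close()` and no thread, at the cost of not being able to block a caller until a token is ready, which the Condition-based version above does support.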
+16 -12
@@ -1,12 +1,16 @@
{
  "open_ai_api_key": "YOUR API KEY",
  "proxy": "",
  "single_chat_prefix": ["bot", "@bot"],
  "single_chat_reply_prefix": "[bot] ",
  "group_chat_prefix": ["@bot"],
  "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],
  "image_create_prefix": ["画", "看", "找"],
  "conversation_max_tokens": 1000,
  "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
  "expires_in_seconds": 3600
}
{
  "open_ai_api_key": "YOUR API KEY",
  "model": "gpt-3.5-turbo",
  "proxy": "",
  "single_chat_prefix": ["bot", "@bot"],
  "single_chat_reply_prefix": "[bot] ",
  "group_chat_prefix": ["@bot"],
  "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],
  "image_create_prefix": ["画", "看", "找"],
  "speech_recognition": false,
  "voice_reply_voice": false,
  "conversation_max_tokens": 1000,
  "expires_in_seconds": 3600,
  "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。"
}
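The template gains three keys: `model` plus the two flags that gate the voice features, both off by default. A quick check of how a trimmed copy of the new template parses:

```python
import json

# trimmed copy of the new keys in config-template.json
template = '''
{
  "model": "gpt-3.5-turbo",
  "speech_recognition": false,
  "voice_reply_voice": false,
  "expires_in_seconds": 3600
}
'''
conf = json.loads(template)
# JSON false becomes Python False, so both voice features start disabled
print(conf["speech_recognition"], conf["voice_reply_voice"])  # → False False
```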
@@ -9,7 +9,7 @@ config = {}

def load_config():
    global config
    config_path = "config.json"
    config_path = "./config.json"
    if not os.path.exists(config_path):
        raise Exception('配置文件不存在,请根据config-template.json模板创建config.json文件')
+19 -25
@@ -3,7 +3,7 @@ FROM python:3.7.9-alpine
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'

ARG CHATGPT_ON_WECHAT_VER=1.0.2
ARG CHATGPT_ON_WECHAT_VER

ENV BUILD_PREFIX=/app \
    BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE'
@@ -12,35 +12,29 @@ RUN apk add --no-cache \
    bash \
    curl \
    wget \
    gcc \
    g++ \
    ca-certificates \
    openssh \
    libffi-dev

RUN wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz \
    https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${CHATGPT_ON_WECHAT_VER}.tar.gz \
    && tar -xzf chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz \
    && mv chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER} ${BUILD_PREFIX} \
    && rm chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz
    && export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
        grep '"tag_name":' | \
        sed -E 's/.*"([^"]+)".*/\1/'`} \
    && wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
    https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
    && tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
    && mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
    && rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
    && cd ${BUILD_PREFIX} \
    && cp config-template.json ${BUILD_PREFIX}/config.json \
    && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json \
    && /usr/local/bin/python -m pip install --no-cache --upgrade pip \
    && pip install --no-cache \
    itchat-uos==1.5.0.dev0 \
    openai \
    && apk del curl wget

WORKDIR ${BUILD_PREFIX}

RUN cd ${BUILD_PREFIX} \
    && cp config-template.json ${BUILD_PREFIX}/config.json \
    && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json

RUN /usr/local/bin/python -m pip install --no-cache --upgrade pip \
    && pip install --no-cache \
    itchat-uos==1.5.0.dev0 \
    openai \
    wechaty

ADD ./entrypoint.sh /entrypoint.sh

RUN chmod +x /entrypoint.sh

RUN adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
RUN chmod +x /entrypoint.sh \
    && adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
    && chown noroot:noroot ${BUILD_PREFIX}

USER noroot
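The Dockerfile change drops the pinned `CHATGPT_ON_WECHAT_VER=1.0.2` default: if the build arg is unset, `${CHATGPT_ON_WECHAT_VER:-…}` falls back to the latest GitHub release tag, extracted from the releases API response with `grep`/`sed`. The same extraction mirrored in Python, on an illustrative sample payload:

```python
import re

# illustrative fragment of the GitHub releases/latest JSON response
payload = '{"url": "https://api.github.com/...", "tag_name": "1.2.0", "name": "1.2.0"}'

def resolve_tag(explicit, api_json):
    """Mirror of `${VER:-$(curl ... | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\\1/')}`:
    prefer the explicit version, else pull tag_name out of the API JSON."""
    if explicit:
        return explicit
    m = re.search(r'"tag_name":\s*"([^"]+)"', api_json)
    return m.group(1) if m else None

print(resolve_tag("", payload))       # → 1.2.0
print(resolve_tag("1.0.2", payload))  # → 1.0.2
```

In production code a JSON parser would be more robust than a regex, but the regex faithfully reflects what the shell pipeline in the Dockerfile does.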
+24 -26
@@ -3,42 +3,40 @@ FROM python:3.7.9
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'

ARG CHATGPT_ON_WECHAT_VER=1.0.2
ARG CHATGPT_ON_WECHAT_VER

ENV BUILD_PREFIX=/app \
    BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE'

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    wget \
    curl && \
    rm -rf /var/lib/apt/lists/*

RUN wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz \
    https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${CHATGPT_ON_WECHAT_VER}.tar.gz \
    && tar -xzf chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz \
    && mv chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER} ${BUILD_PREFIX} \
    && rm chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz
    curl \
    && rm -rf /var/lib/apt/lists/* \
    && export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
        grep '"tag_name":' | \
        sed -E 's/.*"([^"]+)".*/\1/'`} \
    && wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
    https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
    && tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
    && mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
    && rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
    && cd ${BUILD_PREFIX} \
    && cp config-template.json ${BUILD_PREFIX}/config.json \
    && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json \
    && /usr/local/bin/python -m pip install --no-cache --upgrade pip \
    && pip install --no-cache \
    itchat-uos==1.5.0.dev0 \
    openai

WORKDIR ${BUILD_PREFIX}

RUN cd ${BUILD_PREFIX} \
    && cp config-template.json ${BUILD_PREFIX}/config.json \
    && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json

RUN /usr/local/bin/python -m pip install --no-cache --upgrade pip \
    && pip install --no-cache \
    itchat-uos==1.5.0.dev0 \
    openai \
    wechaty

ADD ./entrypoint.sh /entrypoint.sh

RUN chmod +x /entrypoint.sh

RUN groupadd -r noroot \
    && useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \
    && chown -R noroot:noroot ${BUILD_PREFIX}
RUN chmod +x /entrypoint.sh \
    && groupadd -r noroot \
    && useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \
    && chown -R noroot:noroot ${BUILD_PREFIX}

USER noroot
@@ -1,10 +1,16 @@
#!/bin/bash

CHATGPT_ON_WECHAT_TAG=1.0.2
# fetch latest release tag
CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
    grep '"tag_name":' | \
    sed -E 's/.*"([^"]+)".*/\1/'`

# build image
docker build -f Dockerfile.alpine \
    --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
    -t zhayujie/chatgpt-on-wechat .

docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
# tag image
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:alpine
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
@@ -1,9 +1,15 @@
#!/bin/bash

CHATGPT_ON_WECHAT_TAG=1.0.2
# fetch latest release tag
CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
    grep '"tag_name":' | \
    sed -E 's/.*"([^"]+)".*/\1/'`

# build image
docker build -f Dockerfile.debian \
    --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
    -t zhayujie/chatgpt-on-wechat .

# tag image
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:debian
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
@@ -0,0 +1,23 @@
FROM zhayujie/chatgpt-on-wechat:alpine

LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'

USER root

RUN apk add --no-cache \
    ffmpeg \
    espeak \
    && pip install --no-cache \
    baidu-aip \
    chardet \
    SpeechRecognition

# replace entrypoint
ADD ./entrypoint.sh /entrypoint.sh

RUN chmod +x /entrypoint.sh

USER noroot

ENTRYPOINT ["/entrypoint.sh"]
@@ -0,0 +1,24 @@
FROM zhayujie/chatgpt-on-wechat:debian

LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'

USER root

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    ffmpeg \
    espeak \
    && pip install --no-cache \
    baidu-aip \
    chardet \
    SpeechRecognition

# replace entrypoint
ADD ./entrypoint.sh /entrypoint.sh

RUN chmod +x /entrypoint.sh

USER noroot

ENTRYPOINT ["/entrypoint.sh"]
@@ -0,0 +1,24 @@
version: '2.0'
services:
  chatgpt-on-wechat:
    build:
      context: ./
      dockerfile: Dockerfile.alpine
    image: zhayujie/chatgpt-on-wechat-voice-reply
    container_name: chatgpt-on-wechat-voice-reply
    environment:
      OPEN_AI_API_KEY: 'YOUR API KEY'
      OPEN_AI_PROXY: ''
      SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
      SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
      GROUP_CHAT_PREFIX: '["@bot"]'
      GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
      IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
      CONVERSATION_MAX_TOKENS: 1000
      SPEECH_RECOGNITION: 'true'
      CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
      EXPIRES_IN_SECONDS: 3600
      VOICE_REPLY_VOICE: 'true'
      BAIDU_APP_ID: 'YOUR BAIDU APP ID'
      BAIDU_API_KEY: 'YOUR BAIDU API KEY'
      BAIDU_SECRET_KEY: 'YOUR BAIDU SERVICE KEY'
+117
@@ -0,0 +1,117 @@
#!/bin/bash
set -e

# build prefix
CHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-""}
# path to config.json
CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""}
# execution command line
CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""}

OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""}
OPEN_AI_PROXY=${OPEN_AI_PROXY:-""}
SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""}
SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""}
GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""}
GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""}
IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""}
CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""}
SPEECH_RECOGNITION=${SPEECH_RECOGNITION:-""}
CHARACTER_DESC=${CHARACTER_DESC:-""}
EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-""}

VOICE_REPLY_VOICE=${VOICE_REPLY_VOICE:-""}
BAIDU_APP_ID=${BAIDU_APP_ID:-""}
BAIDU_API_KEY=${BAIDU_API_KEY:-""}
BAIDU_SECRET_KEY=${BAIDU_SECRET_KEY:-""}

# CHATGPT_ON_WECHAT_PREFIX is empty, use /app
if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then
    CHATGPT_ON_WECHAT_PREFIX=/app
fi

# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json'
if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then
    CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json
fi

# CHATGPT_ON_WECHAT_EXEC is empty, use 'python app.py'
if [ "$CHATGPT_ON_WECHAT_EXEC" == "" ] ; then
    CHATGPT_ON_WECHAT_EXEC="python app.py"
fi

# modify content in config.json
if [ "$OPEN_AI_API_KEY" != "" ] ; then
    sed -i "s/\"open_ai_api_key\".*,$/\"open_ai_api_key\": \"$OPEN_AI_API_KEY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
else
    echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m"
fi

# use http_proxy as default
if [ "$HTTP_PROXY" != "" ] ; then
    sed -i "s/\"proxy\".*,$/\"proxy\": \"$HTTP_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$OPEN_AI_PROXY" != "" ] ; then
    sed -i "s/\"proxy\".*,$/\"proxy\": \"$OPEN_AI_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then
    sed -i "s/\"single_chat_prefix\".*,$/\"single_chat_prefix\": $SINGLE_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then
    sed -i "s/\"single_chat_reply_prefix\".*,$/\"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$GROUP_CHAT_PREFIX" != "" ] ; then
    sed -i "s/\"group_chat_prefix\".*,$/\"group_chat_prefix\": $GROUP_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then
    sed -i "s/\"group_name_white_list\".*,$/\"group_name_white_list\": $GROUP_NAME_WHITE_LIST,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then
    sed -i "s/\"image_create_prefix\".*,$/\"image_create_prefix\": $IMAGE_CREATE_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then
    sed -i "s/\"conversation_max_tokens\".*,$/\"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SPEECH_RECOGNITION" != "" ] ; then
    sed -i "s/\"speech_recognition\".*,$/\"speech_recognition\": $SPEECH_RECOGNITION,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$CHARACTER_DESC" != "" ] ; then
    sed -i "s/\"character_desc\".*,$/\"character_desc\": \"$CHARACTER_DESC\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$EXPIRES_IN_SECONDS" != "" ] ; then
    sed -i "s/\"expires_in_seconds\".*$/\"expires_in_seconds\": $EXPIRES_IN_SECONDS/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

# append
if [ "$BAIDU_SECRET_KEY" != "" ] ; then
    sed -i "1a \ \ \"baidu_secret_key\": \"$BAIDU_SECRET_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$BAIDU_API_KEY" != "" ] ; then
    sed -i "1a \ \ \"baidu_api_key\": \"$BAIDU_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$BAIDU_APP_ID" != "" ] ; then
    sed -i "1a \ \ \"baidu_app_id\": \"$BAIDU_APP_ID\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$VOICE_REPLY_VOICE" != "" ] ; then
    sed -i "1a \ \ \"voice_reply_voice\": $VOICE_REPLY_VOICE," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

# go to prefix dir
cd $CHATGPT_ON_WECHAT_PREFIX
# execute
$CHATGPT_ON_WECHAT_EXEC
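Each override in the entrypoint rewrites one line of config.json with a sed substitution keyed on the property name rather than a fixed line number, so the script no longer breaks when lines shift. The same pattern mirrored in Python (config fragment and key values here are illustrative):

```python
import re

config = '''{
  "open_ai_api_key": "YOUR API KEY",
  "proxy": "",
  "expires_in_seconds": 3600
}'''

def set_key(text, key, value):
    # mirrors: sed -i "s/\"key\".*,$/\"key\": \"value\",/"
    return re.sub(r'"%s".*,$' % key,
                  '"%s": "%s",' % (key, value),
                  text, flags=re.MULTILINE)

out = set_key(config, "open_ai_api_key", "sk-test")
print('"open_ai_api_key": "sk-test",' in out)  # → True
```

Like the sed version, this is textual rather than JSON-aware; it works because each key occupies exactly one comma-terminated line of the template.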
@@ -8,11 +8,13 @@ services:
    container_name: sample-chatgpt-on-wechat
    environment:
      OPEN_AI_API_KEY: 'YOUR API KEY'
      WECHATY_PUPPET_SERVICE_TOKEN: 'WECHATY PUPPET SERVICE TOKEN'
      OPEN_AI_PROXY: ''
      SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
      SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
      GROUP_CHAT_PREFIX: '["@bot"]'
      GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
      IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
      CONVERSATION_MAX_TOKENS: 1000
      CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
      SPEECH_RECOGNITION: 'false'
      CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
      EXPIRES_IN_SECONDS: 3600
+26 -12
@@ -9,13 +9,16 @@ CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""}
CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""}

OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""}
OPEN_AI_PROXY=${OPEN_AI_PROXY:-""}
SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""}
SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""}
GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""}
GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""}
IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""}
CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""}
SPEECH_RECOGNITION=${SPEECH_RECOGNITION:-""}
CHARACTER_DESC=${CHARACTER_DESC:-""}
EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-""}

# CHATGPT_ON_WECHAT_PREFIX is empty, use /app
if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then
@@ -34,43 +37,54 @@ fi

# modify content in config.json
if [ "$OPEN_AI_API_KEY" != "" ] ; then
    sed -i "2c \"open_ai_api_key\": \"$OPEN_AI_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"open_ai_api_key\".*,$/\"open_ai_api_key\": \"$OPEN_AI_API_KEY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
else
    echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m"
fi

if [ "$WECHATY_PUPPET_SERVICE_TOKEN" != "" ] ; then
    sed -i "3c \"wechaty_puppet_service_token\": \"$WECHATY_PUPPET_SERVICE_TOKEN\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
else
    echo -e "\033[31m[Info] You need to set WECHATY_PUPPET_SERVICE_TOKEN if you use wechaty!\033[0m"
# use http_proxy as default
if [ "$HTTP_PROXY" != "" ] ; then
    sed -i "s/\"proxy\".*,$/\"proxy\": \"$HTTP_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$OPEN_AI_PROXY" != "" ] ; then
    sed -i "s/\"proxy\".*,$/\"proxy\": \"$OPEN_AI_PROXY\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then
    sed -i "4c \"single_chat_prefix\": $SINGLE_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"single_chat_prefix\".*,$/\"single_chat_prefix\": $SINGLE_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then
    sed -i "5c \"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"single_chat_reply_prefix\".*,$/\"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$GROUP_CHAT_PREFIX" != "" ] ; then
    sed -i "6c \"group_chat_prefix\": $GROUP_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"group_chat_prefix\".*,$/\"group_chat_prefix\": $GROUP_CHAT_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then
    sed -i "7c \"group_name_white_list\": $GROUP_NAME_WHITE_LIST," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"group_name_white_list\".*,$/\"group_name_white_list\": $GROUP_NAME_WHITE_LIST,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then
    sed -i "8c \"image_create_prefix\": $IMAGE_CREATE_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"image_create_prefix\".*,$/\"image_create_prefix\": $IMAGE_CREATE_PREFIX,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then
    sed -i "9c \"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS," $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"conversation_max_tokens\".*,$/\"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$SPEECH_RECOGNITION" != "" ] ; then
    sed -i "s/\"speech_recognition\".*,$/\"speech_recognition\": $SPEECH_RECOGNITION,/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$CHARACTER_DESC" != "" ] ; then
    sed -i "10c \"character_desc\": \"$CHARACTER_DESC\"" $CHATGPT_ON_WECHAT_CONFIG_PATH
    sed -i "s/\"character_desc\".*,$/\"character_desc\": \"$CHARACTER_DESC\",/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

if [ "$EXPIRES_IN_SECONDS" != "" ] ; then
    sed -i "s/\"expires_in_seconds\".*$/\"expires_in_seconds\": $EXPIRES_IN_SECONDS/" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi

# go to prefix dir
@@ -1,12 +1,14 @@
OPEN_AI_API_KEY=YOUR API KEY
WECHATY_PUPPET_SERVICE_TOKEN=WECHATY PUPPET SERVICE TOKEN
OPEN_AI_PROXY=
SINGLE_CHAT_PREFIX=["bot", "@bot"]
SINGLE_CHAT_REPLY_PREFIX="[bot] "
GROUP_CHAT_PREFIX=["@bot"]
GROUP_NAME_WHITE_LIST=["ChatGPT测试群", "ChatGPT测试群2"]
IMAGE_CREATE_PREFIX=["画", "看", "找"]
CONVERSATION_MAX_TOKENS=1000
SPEECH_RECOGNITION=false
CHARACTER_DESC=你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。
EXPIRES_IN_SECONDS=3600

# Optional
#CHATGPT_ON_WECHAT_PREFIX=/app
@@ -0,0 +1,36 @@

"""
baidu voice service
"""
import time
from aip import AipSpeech
from common.log import logger
from common.tmp_dir import TmpDir
from voice.voice import Voice
from config import conf

class BaiduVoice(Voice):
    APP_ID = conf().get('baidu_app_id')
    API_KEY = conf().get('baidu_api_key')
    SECRET_KEY = conf().get('baidu_secret_key')
    client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)

    def __init__(self):
        pass

    def voiceToText(self, voice_file):
        pass

    def textToVoice(self, text):
        result = self.client.synthesis(text, 'zh', 1, {
            'spd': 5, 'pit': 5, 'vol': 5, 'per': 111
        })
        if not isinstance(result, dict):
            fileName = TmpDir().path() + '语音回复_' + str(int(time.time())) + '.mp3'
            with open(fileName, 'wb') as f:
                f.write(result)
            logger.info('[Baidu] textToVoice text={} voice file name={}'.format(text, fileName))
            return fileName
        else:
            logger.error('[Baidu] textToVoice error={}'.format(result))
            return None
@@ -0,0 +1,51 @@

"""
google voice service
"""

import pathlib
import subprocess
import time
import speech_recognition
import pyttsx3
from common.log import logger
from common.tmp_dir import TmpDir
from voice.voice import Voice


class GoogleVoice(Voice):
    recognizer = speech_recognition.Recognizer()
    engine = pyttsx3.init()

    def __init__(self):
        # speech rate
        self.engine.setProperty('rate', 125)
        # volume
        self.engine.setProperty('volume', 1.0)
        # voice 0 is male, voice 1 is female
        voices = self.engine.getProperty('voices')
        self.engine.setProperty('voice', voices[1].id)

    def voiceToText(self, voice_file):
        new_file = voice_file.replace('.mp3', '.wav')
        subprocess.call('ffmpeg -i ' + voice_file +
                        ' -acodec pcm_s16le -ac 1 -ar 16000 ' + new_file, shell=True)
        with speech_recognition.AudioFile(new_file) as source:
            audio = self.recognizer.record(source)
        try:
            text = self.recognizer.recognize_google(audio, language='zh-CN')
            logger.info(
                '[Google] voiceToText text={} voice file name={}'.format(text, voice_file))
            return text
        except speech_recognition.UnknownValueError:
            return "抱歉,我听不懂。"
        except speech_recognition.RequestError as e:
            return "抱歉,无法连接到 Google 语音识别服务;{0}".format(e)

    def textToVoice(self, text):
        textFile = TmpDir().path() + '语音回复_' + str(int(time.time())) + '.mp3'
        self.engine.save_to_file(text, textFile)
        self.engine.runAndWait()
        logger.info(
            '[Google] textToVoice text={} voice file name={}'.format(text, textFile))
        return textFile
@@ -0,0 +1,27 @@

"""
openai voice service
"""
import json
import openai
from config import conf
from common.log import logger
from voice.voice import Voice


class OpenaiVoice(Voice):
    def __init__(self):
        openai.api_key = conf().get('open_ai_api_key')

    def voiceToText(self, voice_file):
        logger.debug(
            '[Openai] voice file name={}'.format(voice_file))
        file = open(voice_file, "rb")
        reply = openai.Audio.transcribe("whisper-1", file)
        text = reply["text"]
        logger.info(
            '[Openai] voiceToText text={} voice file name={}'.format(text, voice_file))
        return text

    def textToVoice(self, text):
        pass
@@ -0,0 +1,16 @@
"""
Voice service abstract class
"""

class Voice(object):
    def voiceToText(self, voice_file):
        """
        Send voice to voice service and get text
        """
        raise NotImplementedError

    def textToVoice(self, text):
        """
        Send text to voice service and get voice
        """
        raise NotImplementedError
@@ -0,0 +1,20 @@
"""
voice factory
"""


def create_voice(voice_type):
    """
    create a voice instance
    :param voice_type: voice type code
    :return: voice instance
    """
    if voice_type == 'baidu':
        from voice.baidu.baidu_voice import BaiduVoice
        return BaiduVoice()
    elif voice_type == 'google':
        from voice.google.google_voice import GoogleVoice
        return GoogleVoice()
    elif voice_type == 'openai':
        from voice.openai.openai_voice import OpenaiVoice
        return OpenaiVoice()
    raise RuntimeError
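The factory above falls through to a bare `RuntimeError` for unknown types. The same dispatch can be written as a registry lookup with a descriptive error, which also avoids growing an if/elif chain as backends are added. A sketch with stand-in classes (the dummy classes here are illustrative; the real ones live in the `voice.*` packages):

```python
# stand-ins for the real voice backends
class BaiduVoice: pass
class GoogleVoice: pass
class OpenaiVoice: pass

_VOICES = {'baidu': BaiduVoice, 'google': GoogleVoice, 'openai': OpenaiVoice}

def create_voice(voice_type):
    """Look up the backend class by type code and instantiate it."""
    try:
        return _VOICES[voice_type]()
    except KeyError:
        raise RuntimeError('unknown voice type: %r' % voice_type)

print(type(create_voice('google')).__name__)  # → GoogleVoice
```

The original's lazy per-branch imports keep optional dependencies (baidu-aip, pyttsx3) from loading unless their backend is selected; a registry can preserve that by storing import paths instead of classes.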