Compare commits

...

43 Commits

Author SHA1 Message Date
zhayujie a582a46ce9 fix: call super init 2023-06-12 14:05:47 +08:00
zhayujie abf80a3266 docs: update README 2023-06-12 13:52:49 +08:00
Jianglang d768f5c66d Update README.md 2023-06-11 00:02:18 +08:00
lanvent b25e843351 feat(link_ai_bot.py): add support for creating images using OpenAI's DALL-E API 2023-06-10 23:52:25 +08:00
lanvent 419a3e518e feat: make plugin compatible with LINKAI in most cases 2023-06-10 23:42:43 +08:00
lanvent d1b867a7c0 feat: support scene without app code in linkai 2023-06-10 21:28:15 +08:00
lanvent c34d70b3cb fix: add warning log when pysilk module is not installed 2023-06-10 11:22:12 +08:00
lanvent a33df9312f fix: warning message when using azure model 2023-06-10 11:06:50 +08:00
Jianglang ebf8db0b37 Merge pull request #1238 from chenzefeng09/fix_baidu_voice_init
fix: baidu voice init params type error
2023-06-10 00:48:41 +08:00
chenzefeng.09 e539ae3b69 fix: baidu voice init params type error 2023-06-09 18:54:58 +08:00
lanvent 4c5e8850aa fix: env vars type error (#1127) 2023-06-09 14:46:43 +08:00
zhayujie 94c0af3037 feat: support scene without app code 2023-06-08 23:57:59 +08:00
zhayujie 165182c68f config: remove the config temporarily and consider integrating it as a plugin 2023-06-08 20:58:59 +08:00
Jianglang 65b9542599 Merge pull request #1221 from Zhaoyi-Yan/patch-3
add \n after @nickname for group chat
2023-06-08 11:53:14 +08:00
Jianglang d01d1f8830 Merge pull request #1220 from Zhaoyi-Yan/patch-2
Add azure_deployment_id to Readme for Azure chatgpt.
2023-06-08 11:48:44 +08:00
Jianglang ad3e9f3d42 Update README.md 2023-06-08 11:44:17 +08:00
Jianglang 4589974095 Update README.md 2023-06-08 11:42:39 +08:00
Jianglang ed4553ddf8 Update README.md 2023-06-08 11:42:12 +08:00
Zhaoyi-Yan ff97ae73f1 add \n after @nickname for group chat 2023-06-06 15:16:57 +08:00
Zhaoyi-Yan f96b4d2781 Add azure_deployment_id to Readme for Azure chatgpt. 2023-06-06 14:44:09 +08:00
zhayujie ce32cfffdb docs: update README.md 2023-06-06 14:02:32 +08:00
zhayujie f66df8531e Update README.md 2023-06-06 09:54:34 +08:00
zhayujie dfe1c23e76 Merge pull request #1218 from zhayujie/feature-app-market
feat: no quota hint and add group qrcode
2023-06-05 23:55:25 +08:00
zhayujie 07fd81919f docs: update readme 2023-06-05 23:53:34 +08:00
zhayujie 210042bb81 feat: no quota hint and add group qrcode 2023-06-05 23:21:24 +08:00
lanvent 12dc7427e9 make railway happy 2023-06-02 22:15:20 +08:00
lanvent b476085110 fix: custom GPT model bug 2023-05-30 23:42:06 +08:00
zhayujie 776cdaf63c Merge pull request #1168 from zhayujie/feature-app-market
fix: config name optimize
2023-05-29 16:36:38 +08:00
zhayujie 69b6855745 fix: comment modify 2023-05-29 15:55:48 +08:00
zhayujie 3590babd8b fix: config name optimize 2023-05-29 15:52:26 +08:00
zhayujie c29d391c1d Merge pull request #1167 from zhayujie/feature-app-market
feature: support online knowledge base
2023-05-29 15:41:12 +08:00
zhayujie 50e44dbb2a fix: session save 2023-05-28 22:12:36 +08:00
zhayujie 34277a3940 feat: add app market 2023-05-28 19:08:23 +08:00
lanvent f1a00d58ca chore(Dockerfile.latest): comment out the sed command to replace apt source with tuna mirror
The sed command to replace the apt source with the tuna mirror has been commented out. This is because the command is not necessary for the current build and may cause issues in the future.
2023-05-17 22:24:25 +08:00
Jianglang d1a5f17ae8 Merge pull request #1102 from goldfishh/master
plugin(tool): update to 0.4.4
2023-05-17 16:13:03 +08:00
goldfishh 6409f49609 plugin(tool): update to 0.4.4
1. Support Azure and API-forwarding services
2. Fix the error when the browser proxy address has no prefix
3. Optimize the core prompt
4. Fix problems raised in a series of issues
2023-05-16 00:22:32 +08:00
Jianglang 9ee0ea88b5 Merge pull request #1089 from taoguoliang/master-fork
feat(commands): add the set_gpt_model, reset_gpt_model, and gpt_model commands
2023-05-15 23:34:04 +08:00
Jianglang a3819d8673 Merge pull request #1096 from lichengzhe/master
Handle Cloudflare Bad Gateway exceptions with automatic retry.
2023-05-15 23:32:03 +08:00
lichengzhe 2d7dd71a3d Bad Gateway exception retry 2023-05-15 14:04:55 +08:00
lichengzhe 0e8195ae61 Bad Gateway exception retry 2023-05-15 13:55:14 +08:00
taoguoliang 3e92d07618 feat(commands): add the set_gpt_model, reset_gpt_model, and gpt_model commands 2023-05-13 16:57:02 +08:00
Jianglang e59597280d Merge pull request #1079 from 6vision/6vision-patch-1
Update README.md
2023-05-11 20:21:05 +08:00
vision f2e3d69d8a Update README.md
After the news tools were consolidated their name changed; moved the entry so it draws more attention
2023-05-11 15:49:55 +08:00
26 changed files with 238 additions and 45 deletions
+17 -8
@@ -17,9 +17,6 @@
- Personal WeChat
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/qApznZ?referralCode=RC3znh)
- WeChat Work app
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/-FHS--?referralCode=RC3znh)
# Demo
@@ -27,9 +24,17 @@ https://user-images.githubusercontent.com/26161723/233777277-e3b9928e-b88f-43e2-
Demo made by [Visionn](https://www.wangpc.cc/)
# Community group
Add the assistant's WeChat to join the group:
<img width="240" src="./docs/images/contact.jpg">
# Changelog
>**2023.04.26:** Supports WeChat Work app deployment, compatible with plugins, supports voice and image interaction and Railway deployment, [docs](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md). (contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
>**2023.06.12** Integrated the [LinkAI](https://chat.link-ai.tech/console) platform: create a personal knowledge base online and connect it to WeChat. Beta version, feedback welcome; see the [integration docs](https://link-ai.tech/platform/link-app/wechat)
>**2023.04.26** Supports WeChat Work app deployment, compatible with plugins, supports voice and image interaction, an ideal choice for a personal assistant, [docs](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatcom/README.md). (contributed by [@lanvent](https://github.com/lanvent) in [#944](https://github.com/zhayujie/chatgpt-on-wechat/pull/944))
>**2023.04.05** Supports WeChat Official Account deployment, compatible with plugins, supports voice and image interaction, [docs](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/wechatmp/README.md). (contributed by [@JS00000](https://github.com/JS00000) in [#686](https://github.com/zhayujie/chatgpt-on-wechat/pull/686))
@@ -120,6 +125,7 @@ pip3 install azure-cognitiveservices-speech
"speech_recognition": false, # whether to enable speech recognition
"group_speech_recognition": false, # whether to enable speech recognition in group chats
"use_azure_chatgpt": false, # whether to use the Azure ChatGPT service instead of the OpenAI one; when true, open_ai_api_base must be set, e.g. https://xxx.openai.azure.com/
"azure_deployment_id": "", # model deployment name when using Azure ChatGPT
"character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。", # persona description
# Subscription message; fill it in for the Official Account and WeChat Work channels. It is auto-replied on subscription and supports placeholders; currently {trigger_prefix} is supported and is replaced with the bot's trigger word at runtime.
"subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持角色扮演和文字冒险等丰富插件。\n输入{trigger_prefix}#help 查看详细指令。"
@@ -147,11 +153,11 @@ pip3 install azure-cognitiveservices-speech
**4. Other settings**
+ `model`: model name; currently supports `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, `gpt-4-32k` (the gpt-4 API is not yet open)
+ `model`: model name; currently supports `gpt-3.5-turbo`, `text-davinci-003`, `gpt-4`, `gpt-4-32k` (the gpt-4 API is not yet fully open; it becomes usable once your application is approved)
+ `temperature`, `frequency_penalty`, `presence_penalty`: Chat API parameters; for details see the [OpenAI docs.](https://platform.openai.com/docs/api-reference/chat)
+ `proxy`: the `openai` API is currently unreachable from mainland China, so the address of a proxy client must be configured; for details see [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+ For image generation, besides the personal or group trigger conditions, an extra keyword prefix is required, configured via `image_create_prefix`
+ For the OpenAI chat and image API parameters (content freedom, reply length limit, image size, etc.), refer to the [chat API](https://beta.openai.com/docs/api-reference/completions) and [image API](https://beta.openai.com/docs/api-reference/completions) docs and adjust them directly in the [code](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/bot/openai/open_ai_bot.py) `bot/openai/open_ai_bot.py`
+ For the OpenAI chat and image API parameters (content freedom, reply length limit, image size, etc.), refer to the [chat API](https://beta.openai.com/docs/api-reference/completions) and [image API](https://beta.openai.com/docs/api-reference/completions) docs, then check in [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py) which of them are configurable in this project
+ `conversation_max_tokens`: the maximum number of characters of context the bot can remember (one question and one answer form a round; when the accumulated characters exceed the limit, the oldest rounds are removed first)
+ `rate_limit_chatgpt`, `rate_limit_dalle`: the maximum chat and image-generation rates per minute; requests beyond the limit are queued and handled in order.
+ `clear_memory_commands`: in-conversation commands that clear the previous context; a string array whose command aliases can be customized.
@@ -159,7 +165,7 @@ pip3 install azure-cognitiveservices-speech
+ `character_desc` stores a message you say to the bot; it remembers it as its persona, so you can give it any personality (for more on conversation context see this [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/43))
+ `subscribe_msg`: subscription message; fill it in for the Official Account and WeChat Work channels. It is auto-replied on subscription and supports placeholders; currently {trigger_prefix} is supported and is replaced with the bot's trigger word at runtime.
**All optional settings are listed in this [file](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py).**
**This document may not always be up to date; all currently optional settings are listed in [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py).**
## Run
@@ -202,9 +208,12 @@ nohup python3 app.py & tail -f nohup.out # run the program in the background and
FAQs <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>
Or consult the [project assistant](https://chat.link-ai.tech/app/Kv2fXJcH) online directly (beta; its corpus is still being improved, so replies are for reference only)
## Contact
PRs, Issues, and Stars are all welcome. If the program hits a problem, first check the [FAQ](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs), then search in [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues). If you want to learn more project details and discuss AI practice with the developers, join the planet:
PRs, Issues, and Stars are all welcome. If the program hits a problem, you can check the [FAQ](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs), then search in [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues).
If you want to learn more project details and discuss AI practice with the developers, join the planet:
<a href="https://public.zsxq.com/groups/88885848842852.html"><img width="360" src="./docs/images/planet.jpg"></a>
+5
@@ -33,4 +33,9 @@ def create_bot(bot_type):
from bot.chatgpt.chat_gpt_bot import AzureChatGPTBot
return AzureChatGPTBot()
elif bot_type == const.LINKAI:
from bot.linkai.link_ai_bot import LinkAIBot
return LinkAIBot()
raise RuntimeError
+18 -7
@@ -36,7 +36,7 @@ class ChatGPTBot(Bot, OpenAIImage):
"model": conf().get("model") or "gpt-3.5-turbo", # name of the chat model
"temperature": conf().get("temperature", 0.9), # in [0,1]; larger values make replies less deterministic
# "max_tokens":4096, # maximum number of characters in a reply
"top_p": 1,
"top_p": conf().get("top_p", 1),
"frequency_penalty": conf().get("frequency_penalty", 0.0), # in [-2,2]; larger values favor producing different content
"presence_penalty": conf().get("presence_penalty", 0.0), # in [-2,2]; larger values favor producing different content
"request_timeout": conf().get("request_timeout", None), # request timeout; the openai library defaults to 600, and hard questions usually need longer
@@ -66,12 +66,16 @@ class ChatGPTBot(Bot, OpenAIImage):
logger.debug("[CHATGPT] session query={}".format(session.messages))
api_key = context.get("openai_api_key")
model = context.get("gpt_model")
new_args = None
if model:
new_args = self.args.copy()
new_args["model"] = model
# if context.get('stream'):
# # reply in stream
# return self.reply_text_stream(query, new_query, session_id)
reply_content = self.reply_text(session, api_key)
reply_content = self.reply_text(session, api_key, args=new_args)
logger.debug(
"[CHATGPT] new_query={}, session_id={}, reply_cont={}, completion_tokens={}".format(
session.messages,
@@ -102,7 +106,7 @@ class ChatGPTBot(Bot, OpenAIImage):
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def reply_text(self, session: ChatGPTSession, api_key=None, retry_count=0) -> dict:
def reply_text(self, session: ChatGPTSession, api_key=None, args=None, retry_count=0) -> dict:
"""
call openai's ChatCompletion to get the answer
:param session: a conversation session
@@ -114,7 +118,9 @@ class ChatGPTBot(Bot, OpenAIImage):
if conf().get("rate_limit_chatgpt") and not self.tb4chatgpt.get_token():
raise openai.error.RateLimitError("RateLimitError: rate limit exceeded")
# if api_key == None, the default openai.api_key will be used
response = openai.ChatCompletion.create(api_key=api_key, messages=session.messages, **self.args)
if args is None:
args = self.args
response = openai.ChatCompletion.create(api_key=api_key, messages=session.messages, **args)
# logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
return {
"total_tokens": response["usage"]["total_tokens"],
@@ -134,18 +140,23 @@ class ChatGPTBot(Bot, OpenAIImage):
result["content"] = "我没有收到你的消息"
if need_retry:
time.sleep(5)
elif isinstance(e, openai.error.APIError):
logger.warn("[CHATGPT] Bad Gateway: {}".format(e))
result["content"] = "请再问我一次"
if need_retry:
time.sleep(10)
elif isinstance(e, openai.error.APIConnectionError):
logger.warn("[CHATGPT] APIConnectionError: {}".format(e))
need_retry = False
result["content"] = "我连接不到你的网络"
else:
logger.warn("[CHATGPT] Exception: {}".format(e))
logger.exception("[CHATGPT] Exception: {}".format(e))
need_retry = False
self.sessions.clear_session(session.session_id)
if need_retry:
logger.warn("[CHATGPT] 第{}次重试".format(retry_count + 1))
return self.reply_text(session, api_key, retry_count + 1)
return self.reply_text(session, api_key, args, retry_count + 1)
else:
return result
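The per-user model override in this diff copies the shared argument dict before mutating it, so one user's `gpt_model` cannot leak into other sessions. A minimal sketch of the pattern (class and field names are illustrative, not the project's):

```python
class BotArgs:
    """Sketch of copy-before-override for per-request argument dicts."""

    def __init__(self, default_model="gpt-3.5-turbo"):
        # shared defaults used for every request
        self.args = {"model": default_model, "temperature": 0.9}

    def build_args(self, context):
        # per-user model override: copy the defaults so the shared dict stays untouched
        model = context.get("gpt_model")
        if not model:
            return self.args
        new_args = self.args.copy()
        new_args["model"] = model
        return new_args
```

Calling `build_args({"gpt_model": "gpt-4"})` returns a fresh dict with the override, while the shared defaults keep their original model.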
+6 -5
@@ -57,16 +57,17 @@ def num_tokens_from_messages(messages, model):
"""Returns the number of tokens used by a list of messages."""
import tiktoken
if model == "gpt-3.5-turbo" or model == "gpt-35-turbo":
return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
elif model == "gpt-4":
return num_tokens_from_messages(messages, model="gpt-4-0314")
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
logger.debug("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo" or model == "gpt-35-turbo":
return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
elif model == "gpt-4":
return num_tokens_from_messages(messages, model="gpt-4-0314")
elif model == "gpt-3.5-turbo-0301":
if model == "gpt-3.5-turbo-0301":
tokens_per_message = 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
tokens_per_name = -1 # if there's a name, the role is omitted
elif model == "gpt-4-0314":
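The reordered function above resolves model aliases before asking tiktoken for an encoding. A rough sketch of the per-message accounting it performs for `gpt-3.5-turbo-0301`, with a whitespace splitter standing in for tiktoken's encoder (so the counts are illustrative, not real token counts):

```python
def num_tokens_from_messages(messages, encode=lambda s: s.split()):
    # gpt-3.5-turbo-0301 framing:
    # every message follows <|start|>{role/name}\n{content}<|end|>\n
    tokens_per_message = 4
    tokens_per_name = -1  # if a name is present, the role is omitted
    n = 0
    for message in messages:
        n += tokens_per_message
        for key, value in message.items():
            n += len(encode(value))
            if key == "name":
                n += tokens_per_name
    n += 3  # every reply is primed with <|start|>assistant<|message|>
    return n
```

With a real `tiktoken` encoding in place of the stub, this mirrors the bookkeeping the diff reorders.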
+108
@@ -0,0 +1,108 @@
# access LinkAI knowledge base platform
# docs: https://link-ai.tech/platform/link-app/wechat
import time
import requests
from bot.bot import Bot
from bot.chatgpt.chat_gpt_session import ChatGPTSession
from bot.openai.open_ai_image import OpenAIImage
from bot.session_manager import SessionManager
from bridge.context import Context, ContextType
from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf
class LinkAIBot(Bot, OpenAIImage):
# authentication failed
AUTH_FAILED_CODE = 401
NO_QUOTA_CODE = 406
def __init__(self):
super().__init__()
self.base_url = "https://api.link-ai.chat/v1"
self.sessions = SessionManager(ChatGPTSession, model=conf().get("model") or "gpt-3.5-turbo")
def reply(self, query, context: Context = None) -> Reply:
if context.type == ContextType.TEXT:
return self._chat(query, context)
elif context.type == ContextType.IMAGE_CREATE:
ok, retstring = self.create_img(query, 0)
reply = None
if ok:
reply = Reply(ReplyType.IMAGE_URL, retstring)
else:
reply = Reply(ReplyType.ERROR, retstring)
return reply
else:
reply = Reply(ReplyType.ERROR, "Bot不支持处理{}类型的消息".format(context.type))
return reply
def _chat(self, query, context, retry_count=0):
if retry_count >= 2:
# exit from retry 2 times
logger.warn("[LINKAI] failed after maximum number of retry times")
return Reply(ReplyType.ERROR, "请再问我一次吧")
try:
# load config
if context.get("generate_breaked_by"):
logger.info(f"[LINKAI] won't set appcode because a plugin ({context['generate_breaked_by']}) affected the context")
app_code = None
else:
app_code = conf().get("linkai_app_code")
linkai_api_key = conf().get("linkai_api_key")
session_id = context["session_id"]
session = self.sessions.session_query(query, session_id)
# remove system message
if app_code and session.messages[0].get("role") == "system":
session.messages.pop(0)
logger.info(f"[LINKAI] query={query}, app_code={app_code}")
body = {
"appCode": app_code,
"messages": session.messages,
"model": conf().get("model") or "gpt-3.5-turbo", # name of the chat model
"temperature": conf().get("temperature"),
"top_p": conf().get("top_p", 1),
"frequency_penalty": conf().get("frequency_penalty", 0.0), # in [-2,2]; larger values favor producing different content
"presence_penalty": conf().get("presence_penalty", 0.0), # in [-2,2]; larger values favor producing different content
}
headers = {"Authorization": "Bearer " + linkai_api_key}
# do http request
res = requests.post(url=self.base_url + "/chat/completion", json=body, headers=headers).json()
if not res or not res["success"]:
if res.get("code") == self.AUTH_FAILED_CODE:
logger.exception(f"[LINKAI] please check your linkai_api_key, res={res}")
return Reply(ReplyType.ERROR, "请再问我一次吧")
elif res.get("code") == self.NO_QUOTA_CODE:
logger.exception(f"[LINKAI] please check your account quota, https://chat.link-ai.tech/console/account")
return Reply(ReplyType.ERROR, "提问太快啦,请休息一下再问我吧")
else:
# retry
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
# execute success
reply_content = res["data"]["content"]
logger.info(f"[LINKAI] reply={reply_content}")
self.sessions.session_reply(reply_content, session_id)
return Reply(ReplyType.TEXT, reply_content)
except Exception as e:
logger.exception(e)
# retry
time.sleep(2)
logger.warn(f"[LINKAI] do retry, times={retry_count}")
return self._chat(query, context, retry_count + 1)
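`_chat` above retries failed requests up to two times with a short sleep in between, both on error responses and on exceptions. A stripped-down sketch of that retry loop, with a hypothetical `send` callable standing in for the HTTP request:

```python
import time

MAX_RETRY = 2  # the bot gives up after two retries

def chat_with_retry(send, query, retry_count=0, delay=0):
    # send() is a hypothetical callable returning {"success": bool, "content": str};
    # delay=0 keeps the sketch fast — the real bot sleeps 2 seconds between tries
    if retry_count >= MAX_RETRY:
        return "请再问我一次吧"  # the bot's "please ask me again" fallback reply
    res = send(query)
    if res and res.get("success"):
        return res["content"]
    time.sleep(delay)
    return chat_with_retry(send, query, retry_count + 1, delay)
```

A transient failure that succeeds on the second attempt returns normally; persistent failures fall through to the fallback reply.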
+2
@@ -23,6 +23,8 @@ class Bridge(object):
self.btype["chat"] = const.OPEN_AI
if conf().get("use_azure_chatgpt", False):
self.btype["chat"] = const.CHATGPTONAZURE
if conf().get("use_linkai") and conf().get("linkai_api_key"):
self.btype["chat"] = const.LINKAI
self.bots = {}
def get_bot(self, typename):
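The bridge picks the chat backend by letting later config checks override earlier ones, so `use_linkai` (with a key set) wins over `use_azure_chatgpt`. A minimal sketch with a plain dict standing in for `conf()`:

```python
def select_chat_bot(conf):
    """Later checks win: linkai > azure > openai (conf is a plain dict here)."""
    btype = "openAI"
    if conf.get("use_azure_chatgpt", False):
        btype = "chatGPTOnAzure"
    # linkai is only selected when both the flag and an api key are present
    if conf.get("use_linkai") and conf.get("linkai_api_key"):
        btype = "linkai"
    return btype
```

Note that setting `use_linkai` without `linkai_api_key` leaves the earlier selection in place, matching the guard in the diff.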
+4 -1
@@ -50,6 +50,7 @@ class ChatChannel(Channel):
cmsg = context["msg"]
user_data = conf().get_user_data(cmsg.from_user_id)
context["openai_api_key"] = user_data.get("openai_api_key")
context["gpt_model"] = user_data.get("gpt_model")
if context.get("isgroup", False):
group_name = cmsg.other_user_nickname
group_id = cmsg.other_user_id
@@ -161,6 +162,8 @@ class ChatChannel(Channel):
reply = e_context["reply"]
if not e_context.is_pass():
logger.debug("[WX] ready to handle context: type={}, content={}".format(context.type, context.content))
if e_context.is_break():
context["generate_breaked_by"] = e_context["breaked_by"]
if context.type == ContextType.TEXT or context.type == ContextType.IMAGE_CREATE: # text and image messages
reply = super().build_reply_content(context.content, context)
elif context.type == ContextType.VOICE: # voice messages
@@ -219,7 +222,7 @@ class ChatChannel(Channel):
reply = super().build_text_to_voice(reply.content)
return self._decorate_reply(context, reply)
if context.get("isgroup", False):
reply_text = "@" + context["msg"].actual_user_nickname + " " + reply_text.strip()
reply_text = "@" + context["msg"].actual_user_nickname + "\n" + reply_text.strip()
reply_text = conf().get("group_chat_reply_prefix", "") + reply_text
else:
reply_text = conf().get("single_chat_reply_prefix", "") + reply_text
-1
@@ -23,7 +23,6 @@ from common.time_check import time_checker
from config import conf, get_appdata_dir
from lib import itchat
from lib.itchat.content import *
from plugins import *
@itchat.msg_register([TEXT, VOICE, PICTURE, NOTE])
+4 -6
@@ -1,6 +1,6 @@
# WeChat Work app channel
WeChat Work officially provides customer-service, app, and other APIs; this channel uses the app API.
WeChat Work officially provides customer-service, app, and other APIs; this channel uses the self-built app API.
Since customer-service support may still be developed in the future, this channel's type name is `wechatcom_app`
@@ -72,13 +72,11 @@ Error code: 60020, message: "not allow to access from your ip, ...from ip: xx.xx
This means the IP is not trusted; refer to the `trusted enterprise IP` setting from the previous step and add this IP to it.
### Railway deployment
~~### Railway deployment~~ (no longer works as of 2023-06-08)
Official Accounts cannot be deployed on `Railway`, but WeChat Work apps [can](https://railway.app/template/-FHS--?referralCode=RC3znh)!
~~Official Accounts cannot be deployed on `Railway`, but WeChat Work apps [can](https://railway.app/template/-FHS--?referralCode=RC3znh)!~~
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/-FHS--?referralCode=RC3znh)
After filling in the settings, put the deployed address ```**.railway.app/wxcomapp``` into the URL field of the previous step. Send a message, watch the logs, and add any IP from the errors to the trusted IPs. (The trusted IP must be re-added after every restart.)
~~After filling in the settings, put the deployed address ```**.railway.app/wxcomapp``` into the URL field of the previous step. Send a message, watch the logs, and add any IP from the errors to the trusted IPs. (The trusted IP must be re-added after every restart.)~~
## Testing
+1
@@ -3,5 +3,6 @@ OPEN_AI = "openAI"
CHATGPT = "chatGPT"
BAIDU = "baidu"
CHATGPTONAZURE = "chatGPTOnAzure"
LINKAI = "linkai"
VERSION = "1.3.0"
+4 -1
@@ -28,5 +28,8 @@
"conversation_max_tokens": 1000,
"expires_in_seconds": 3600,
"character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
"subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输入。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持tool、角色扮演和文字冒险等丰富的插件。\n输入{trigger_prefix}#help 查看详细指令。"
"subscribe_msg": "感谢您的关注!\n这里是ChatGPT,可以自由对话。\n支持语音对话。\n支持图片输入。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持tool、角色扮演和文字冒险等丰富的插件。\n输入{trigger_prefix}#help 查看详细指令。",
"use_linkai": false,
"linkai_api_key": "",
"linkai_app_code": ""
}
+4
@@ -99,6 +99,10 @@ available_setting = {
"appdata_dir": "", # data directory
# plugin settings
"plugin_trigger_prefix": "$", # standard prefix for chat commands provided by plugins; avoid clashing with the admin command prefix "#"
# knowledge base platform settings
"use_linkai": False,
"linkai_api_key": "",
"linkai_app_code": ""
}
+1 -1
@@ -6,7 +6,7 @@ ARG TZ='Asia/Shanghai'
ARG CHATGPT_ON_WECHAT_VER
RUN echo /etc/apt/sources.list
RUN sed -i 's/deb.debian.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list
# RUN sed -i 's/deb.debian.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list
ENV BUILD_PREFIX=/app
ADD . ${BUILD_PREFIX}
Binary file not shown (image, 151 KiB)

+3 -3
@@ -1,7 +1,7 @@
providers = ['python']
[phases.setup]
nixPkgs = ['python310']
cmds = ['apt-get update','apt-get install -y --no-install-recommends ffmpeg espeak libavcodec-extra','python -m venv /opt/venv && . /opt/venv/bin/activate && pip install -r requirements-optional.txt']
cmds = ['apt-get update','apt-get install -y --no-install-recommends ffmpeg espeak libavcodec-extra']
[phases.install]
cmds = ['python -m venv /opt/venv && . /opt/venv/bin/activate && pip install -r requirements.txt && pip install -r requirements-optional.txt']
[start]
cmd = "python ./app.py"
+1 -1
@@ -64,7 +64,7 @@ class Dungeon(Plugin):
if e_context["context"].type != ContextType.TEXT:
return
bottype = Bridge().get_bot_type("chat")
if bottype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE]:
if bottype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
return
bot = Bridge().get_bot("chat")
content = e_context["context"].content[:]
+3
@@ -50,3 +50,6 @@ class EventContext:
def is_pass(self):
return self.action == EventAction.BREAK_PASS
def is_break(self):
return self.action == EventAction.BREAK or self.action == EventAction.BREAK_PASS
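The new `is_break` helper treats both `BREAK` and `BREAK_PASS` as break states, which is what lets the channel record `breaked_by`. A self-contained sketch (the real `EventAction` is an enum; plain strings stand in for it here, and the dict base mirrors `e_context["breaked_by"]` access):

```python
class EventContext(dict):
    """Sketch of the plugin event context with break-state helpers."""

    BREAK, BREAK_PASS, CONTINUE = "BREAK", "BREAK_PASS", "CONTINUE"

    def __init__(self, action=CONTINUE):
        super().__init__()
        self.action = action

    def is_pass(self):
        # BREAK_PASS: stop other plugins and skip the default handler
        return self.action == self.BREAK_PASS

    def is_break(self):
        # both break variants stop the remaining plugins
        return self.action in (self.BREAK, self.BREAK_PASS)
```

So `is_pass()` implies `is_break()`, but a plain `BREAK` still lets the default handler run.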
+34 -2
@@ -41,6 +41,18 @@ COMMANDS = {
"alias": ["reset_openai_api_key"],
"desc": "重置为默认的api_key",
},
"set_gpt_model": {
"alias": ["set_gpt_model"],
"desc": "设置你的私有模型",
},
"reset_gpt_model": {
"alias": ["reset_gpt_model"],
"desc": "重置你的私有模型",
},
"gpt_model": {
"alias": ["gpt_model"],
"desc": "查询你使用的模型",
},
"id": {
"alias": ["id", "用户"],
"desc": "获取用户id", # user ids do not change in wechaty and wechatmp, so they can be used to bind an administrator
@@ -264,8 +276,28 @@ class Godcmd(Plugin):
ok, result = True, "你的OpenAI私有api_key已清除"
except Exception as e:
ok, result = False, "你没有设置私有api_key"
elif cmd == "set_gpt_model":
if len(args) == 1:
user_data = conf().get_user_data(user)
user_data["gpt_model"] = args[0]
ok, result = True, "你的GPT模型已设置为" + args[0]
else:
ok, result = False, "请提供一个GPT模型"
elif cmd == "gpt_model":
user_data = conf().get_user_data(user)
model = conf().get("model")
if "gpt_model" in user_data:
model = user_data["gpt_model"]
ok, result = True, "你的GPT模型为" + str(model)
elif cmd == "reset_gpt_model":
try:
user_data = conf().get_user_data(user)
user_data.pop("gpt_model")
ok, result = True, "你的GPT模型已重置"
except Exception as e:
ok, result = False, "你没有设置私有GPT模型"
elif cmd == "reset":
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE]:
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
bot.sessions.clear_session(session_id)
channel.cancel_session(session_id)
ok, result = True, "会话已重置"
@@ -288,7 +320,7 @@ class Godcmd(Plugin):
load_config()
ok, result = True, "配置已重载"
elif cmd == "resetall":
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE]:
if bottype in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
channel.cancel_all_session()
bot.sessions.clear_all_session()
ok, result = True, "重置所有会话成功"
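The `set_gpt_model` / `gpt_model` / `reset_gpt_model` commands above store a per-user override in user data and fall back to the global model when none is set. A minimal sketch of that lookup logic (the dict-based store is illustrative; the project keeps this in `conf().get_user_data`):

```python
user_store = {}  # hypothetical per-user settings keyed by user id

def set_gpt_model(user, model):
    user_store.setdefault(user, {})["gpt_model"] = model

def get_gpt_model(user, default="gpt-3.5-turbo"):
    # per-user override wins; otherwise fall back to the global model
    return user_store.get(user, {}).get("gpt_model", default)

def reset_gpt_model(user):
    # pop with a default so resetting an unset model is a no-op
    user_store.get(user, {}).pop("gpt_model", None)
```

This is the same shape as the channel change earlier in the diff, which copies `user_data.get("gpt_model")` into the context for the bot to use.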
+3
@@ -163,6 +163,9 @@ class PluginManager:
logger.debug("Plugin %s triggered by event %s" % (name, e_context.event))
instance = self.instances[name]
instance.handlers[e_context.event](e_context, *args, **kwargs)
if e_context.is_break():
e_context["breaked_by"] = name
logger.debug("Plugin %s breaked event %s" % (name, e_context.event))
return e_context
def set_plugin_priority(self, name: str, priority: int):
+1 -1
@@ -99,7 +99,7 @@ class Role(Plugin):
if e_context["context"].type != ContextType.TEXT:
return
btype = Bridge().get_bot_type("chat")
if btype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE]:
if btype not in [const.OPEN_AI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:
return
bot = Bridge().get_bot("chat")
content = e_context["context"].content[:]
+4 -2
@@ -57,6 +57,8 @@ $tool reset: reset the tool.
### 6. news: a collection of news tools
> news update: version 0.4 consolidated the news tools; adding the single tool name `news` to the config file automatically loads all news tools
#### 6.1. news-api *
###### Fetch current and historical news articles from more than 80,000 sources worldwide
@@ -75,7 +77,7 @@ $tool reset: reset the tool.
> This tool requires installing the google-chrome dependency of the browser tool
> news update: version 0.4 consolidated the news tools; adding the single tool name news automatically loads all news tools
### 7. bing-search *
###### Bing search engine, so you no longer have to worry about which keywords to search with
@@ -129,7 +131,7 @@ $tool reset: reset the tool.
```
Note: the config.json file is optional and the tool still works without it; tools marked with * need their api-key key-value pairs filled into kwargs
- `tools`: the tools loaded when this plugin initializes; the headings above are the corresponding tool names, and tools marked with * must have their api-key configured in kwargs
- `tools`: the tools loaded when this plugin initializes; the top-level headings above are the corresponding tool names, and tools marked with * must have their api-key configured in kwargs
- `kwargs`: settings used when the tools run, usually holding the **api-key** pairs or environment settings
- `debug`: output extra chatgpt-tool-hub information for debugging
- `request_timeout`: timeout for calling the openai API; defaults to the wechat-on-chatgpt setting, and can be configured separately
+1
@@ -55,6 +55,7 @@ class Tool(Plugin):
const.CHATGPT,
const.OPEN_AI,
const.CHATGPTONAZURE,
const.LINKAI,
):
return
+1 -1
@@ -25,4 +25,4 @@ wechatpy
# chatgpt-tool-hub plugin
--extra-index-url https://pypi.python.org/simple
chatgpt_tool_hub==0.4.3
chatgpt_tool_hub==0.4.4
+3 -1
@@ -17,13 +17,15 @@ class BaiduTranslator(Translator):
self.url = endpoint + path
self.appid = conf().get("baidu_translate_app_id")
self.appkey = conf().get("baidu_translate_app_key")
if not self.appid or not self.appkey:
raise Exception("baidu translate appid or appkey not set")
# For list of language codes, please refer to `https://api.fanyi.baidu.com/doc/21`, need to convert to ISO 639-1 codes
def translate(self, query: str, from_lang: str = "", to_lang: str = "en") -> str:
if not from_lang:
from_lang = "auto" # baidu supports auto-detection
salt = random.randint(32768, 65536)
sign = self.make_md5(self.appid + query + str(salt) + self.appkey)
sign = self.make_md5("{}{}{}{}".format(self.appid, query, salt, self.appkey))
headers = {"Content-Type": "application/x-www-form-urlencoded"}
payload = {"appid": self.appid, "q": query, "from": from_lang, "to": to_lang, "salt": salt, "sign": sign}
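The signature fix above replaces string concatenation with `format`, which avoids a `TypeError` when `salt` is an `int`. A runnable sketch of the Baidu Translate signing step as shown in this diff:

```python
import hashlib

def make_sign(appid, query, salt, appkey):
    # Baidu Translate signature: MD5 over appid + query + salt + appkey;
    # formatting stringifies salt, so an int salt no longer raises TypeError
    raw = "{}{}{}{}".format(appid, query, salt, appkey)
    return hashlib.md5(raw.encode("utf-8")).hexdigest()
```

The signature is identical whether `salt` arrives as `12345` or `"12345"`, which is exactly what the concatenation version got wrong.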
+7 -1
@@ -1,7 +1,13 @@
import shutil
import wave
import pysilk
from common.log import logger
try:
import pysilk
except ImportError:
logger.warn("import pysilk failed, wechaty voice message will not be supported.")
from pydub import AudioSegment
sil_supports = [8000, 12000, 16000, 24000, 32000, 44100, 48000] # sample rates supported when converting slk to wav
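The diff above wraps the `pysilk` import in try/except so a missing optional dependency no longer breaks module import. The same guard can be written generically (the helper name is illustrative):

```python
import importlib

def optional_import(name):
    # guard an optional dependency so the rest of the module still loads;
    # callers must check for None before using the dependent feature
    try:
        return importlib.import_module(name)
    except ImportError:
        print("import {} failed, related features will be disabled".format(name))
        return None
```

This keeps the failure local to the feature that needs the package, instead of crashing at import time as the unguarded `import pysilk` did.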
+3 -3
@@ -43,9 +43,9 @@ class BaiduVoice(Voice):
with open(config_path, "r") as fr:
bconf = json.load(fr)
self.app_id = conf().get("baidu_app_id")
self.api_key = conf().get("baidu_api_key")
self.secret_key = conf().get("baidu_secret_key")
self.app_id = str(conf().get("baidu_app_id"))
self.api_key = str(conf().get("baidu_api_key"))
self.secret_key = str(conf().get("baidu_secret_key"))
self.dev_id = conf().get("baidu_dev_pid")
self.lang = bconf["lang"]
self.ctp = bconf["ctp"]