Mirror of https://github.com/zhayujie/chatgpt-on-wechat.git, synced 2026-05-16 17:29:18 +08:00
Compare commits
43 Commits
| SHA1 |
|---|
| ad7ab088fe |
| f2ae3e2fd8 |
| 733f9d1f10 |
| 2886f48788 |
| 04078fd4fa |
| 2c2217daad |
| 5de600c689 |
| 1d4966b69c |
| 7ad16731fd |
| 5df341fef2 |
| 39a5487f39 |
| 6a98bc2d5a |
| b154dd7e86 |
| 3d4d1c734a |
| f10911bc3b |
| 44e5979a03 |
| 598bc6569d |
| d667ccb396 |
| efbc9de9d1 |
| ebed4e7832 |
| fb598fba82 |
| 2c4d79e952 |
| a2db765ade |
| df3f19b534 |
| f67dae5b0b |
| cd5f58ff2c |
| 7be9e7d0a8 |
| 47c675f999 |
| cfa738087f |
| 73b4d63545 |
| 48900dfbc4 |
| a3153815c8 |
| 8729a31119 |
| b81d947dbb |
| 999b2ea51f |
| 0b802a61ec |
| 02ca1f8772 |
| 820b255e24 |
| bca0939c9d |
| 01d0af841d |
| 18e9d6a9b9 |
| e27e5958a5 |
| 2c5b1d5a8d |
```diff
@@ -1,7 +1,7 @@
 ### Preliminary checks
 
-1. Running on a mainland-China network, with no proxy enabled
+1. The network can access the OpenAI API [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
 2. Python is installed: version between 3.7 and 3.10, with dependencies installed
 3. No similar issue was found among existing issues
 4. No similar question exists in the [FAQs](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs)
```
```diff
@@ -5,3 +5,4 @@ venv*
 *.pyc
 config.json
 QR.png
+nohup.out
```
```diff
@@ -3,7 +3,7 @@
 > ChatGPT has recently taken the internet by storm with its powerful conversational and information-synthesis abilities: it can write code, revise papers, and tell stories. This inspires a bold idea: can its dialogue model turn our WeChat into an intelligent chatbot that gives unexpected replies in conversations with friends, so we never again have to worry about a girlfriend interrupting our ~~gaming~~ work?
 
-A ChatGPT-based WeChat chatbot that generates replies through the [OpenAI](https://github.com/openai/openai-quickstart-python) API and uses [itchat](https://github.com/littlecodersh/ItChat) to receive and automatically reply to WeChat messages. Implemented features:
+A ChatGPT-based WeChat chatbot that generates replies through the [ChatGPT](https://github.com/openai/openai-python) API and uses [itchat](https://github.com/littlecodersh/ItChat) to receive and automatically reply to WeChat messages. Implemented features:
 
 - [x] **Text conversation:** receives WeChat messages in private and group chats, generates replies with ChatGPT, and responds automatically
 - [x] **Customizable rules:** supports triggering automatic replies by configurable rules in private chats, and a whitelist of groups for automatic replies
@@ -13,6 +13,10 @@
 # Changelog
+>**2023.03.02:** Integrated the [ChatGPT API](https://platform.openai.com/docs/guides/chat) (gpt-3.5-turbo), now the default conversation model; the openai dependency must be upgraded (`pip3 install --upgrade openai`). For network issues see [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+
+>**2023.02.20:** Added [python-wechaty](https://github.com/wechaty/python-wechaty) as an optional channel; the Pad protocol it uses is relatively stable, but the token is paid (see [#244](https://github.com/zhayujie/chatgpt-on-wechat/pull/244), contributed by [ZQ7](https://github.com/ZQ7))
+
 >**2023.02.09:** Logging in by scanning the QR code carries a risk of account suspension; use with caution, see [#58](https://github.com/AutumnWhj/ChatGPT-wechat-bot/issues/158)
 
 >**2023.02.05:** Implemented multi-turn context in the official OpenAI API scheme (GPT-3 model)
@@ -44,7 +48,7 @@
 ### 1. OpenAI account registration
 
-Go to the [OpenAI signup page](https://beta.openai.com/signup) to create an account; this [tutorial](https://www.cnblogs.com/damugua/p/16969508.html) shows how to receive the verification code via a virtual phone number. Once the account is created, go to the [API keys page](https://beta.openai.com/account/api-keys), create an API key, and save it; this key will be configured in the project later.
+Go to the [OpenAI signup page](https://beta.openai.com/signup) to create an account; this [tutorial](https://www.pythonthree.com/register-openai-chatgpt/) shows how to receive the verification code via a virtual phone number. Once the account is created, go to the [API keys page](https://beta.openai.com/account/api-keys), create an API key, and save it; this key will be configured in the project later.
 
 > The conversation model used in the project is davinci, billed at roughly $0.02 per 750 characters (request plus reply); image generation costs $0.016 per image. A new account comes with a free $18 quota, and once it is used up you can re-register with a different email.
@@ -68,7 +72,7 @@ cd chatgpt-on-wechat/
 pip3 install itchat-uos==1.5.0.dev0
 pip3 install --upgrade openai
 ```
-Note: `itchat-uos` is pinned to version 1.5.0.dev0; `openai` uses the latest version, which must be above 0.25.0.
+Note: `itchat-uos` is pinned to version 1.5.0.dev0; `openai` uses the latest version, which must be above 0.27.0.
 
 ## Configuration
@@ -85,6 +89,7 @@ cp config-template.json config.json
 # Example config.json contents
 {
   "open_ai_api_key": "YOUR API KEY",          # the OpenAI API key created above
   "proxy": "127.0.0.1:7890",                  # IP and port of the proxy client
   "single_chat_prefix": ["bot", "@bot"],      # in private chats, the text must contain this prefix to trigger the bot
   "single_chat_reply_prefix": "[bot] ",       # prefix prepended to automatic replies in private chats, to distinguish them from a real person
   "group_chat_prefix": ["@bot"],              # in group chats, messages containing this prefix trigger the bot
@@ -109,6 +114,7 @@ cp config-template.json config.json
 **3. Other configuration**
 
 + `proxy`: since the `openai` API is currently unreachable from mainland China, the address of a proxy client must be configured; see [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351) for details
 + For image generation, besides meeting the private or group trigger conditions, an extra keyword prefix is required, configured via `image_create_prefix`
 + Parameters of the OpenAI conversation and image APIs (creativity, reply length limit, image size, etc.) can be adjusted directly in the [code](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/bot/openai/open_ai_bot.py) `bot/openai/open_ai_bot.py`, with reference to the [completions API](https://beta.openai.com/docs/api-reference/completions) and [images API](https://beta.openai.com/docs/api-reference/images) docs
 + `conversation_max_tokens`: the maximum number of characters of context that can be remembered (one question plus one answer forms a round; when the accumulated conversation exceeds the limit, the earliest round is dropped first)
@@ -136,6 +142,7 @@ touch nohup.out  # create the log file on first run
 nohup python3 app.py & tail -f nohup.out  # run the program in the background and print the QR code via the log
 ```
 After scanning the QR code to log in, the program runs in the background on the server; `ctrl+c` closes the log without affecting the background process. Use `ps -ef | grep app.py | grep -v grep` to find the background process, and `kill` it first if you want to restart the program. After closing the log, reopen it at any time with `tail -f nohup.out`.
+The scripts/ directory contains corresponding scripts that can be invoked
 
 > **Note:** if your phone asks you to wait 5s for login verification after scanning, while the terminal QR code refreshes again with `Log in time out, reloading QR code`, change one line of code as described in this [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/8) to fix it.
@@ -146,7 +153,7 @@ nohup python3 app.py & tail -f nohup.out
 ### 3. Docker deployment
 
-See the documentation [Docker deployment](https://github.com/zhayujie/chatgpt-on-wechat/wiki/Docker%E9%83%A8%E7%BD%B2) (contributed by [limccn](https://github.com/limccn)).
+See the documentation [Docker deployment](https://github.com/limccn/chatgpt-on-wechat/wiki/Docker%E9%83%A8%E7%BD%B2) (contributed by [limccn](https://github.com/limccn)).
 
 ## FAQ
```
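The annotated `config.json` example in the README diff above is illustrative; JSON itself does not allow `#` comments, so the template file they are copied from omits them. As a hedged sketch, a minimal loader in the spirit of this project's `conf()` helper (names and behavior assumed, not the repository's actual implementation) might look like:

```python
import json

# Hypothetical minimal config loader mirroring the config.json described
# above; the project's real config module may differ.
_config = None


def load_config(path="config.json"):
    """Read the JSON config file once and cache it module-wide."""
    global _config
    with open(path, encoding="utf-8") as f:
        _config = json.load(f)
    return _config


def conf():
    """Return the cached config dict (empty dict if not loaded yet)."""
    return _config if _config is not None else {}
```

Keys such as `single_chat_prefix` would then be read with `conf().get('single_chat_prefix')`, as the bot and channel code below does.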
+171
-500
```diff
@@ -1,511 +1,182 @@
-"""
-A simple wrapper for the official ChatGPT API
-"""
-import argparse
-import json
-import os
-import sys
-from datetime import date
-
-import openai
-import tiktoken
+# encoding:utf-8
+
+from bot.bot import Bot
+from config import conf
+from common.log import logger
+from common.expired_dict import ExpiredDict
+import openai
+import time
 
-ENGINE = os.environ.get("GPT_ENGINE") or "text-chat-davinci-002-20221122"
+if conf().get('expires_in_seconds'):
+    user_session = ExpiredDict(conf().get('expires_in_seconds'))
+else:
+    user_session = dict()
 
-ENCODER = tiktoken.get_encoding("gpt2")
-
-
-def get_max_tokens(prompt: str) -> int:
-    """
-    Get the max tokens for a prompt
-    """
-    return 4000 - len(ENCODER.encode(prompt))
```
```diff
-# ['text-chat-davinci-002-20221122']
-class Chatbot:
-    """
-    Official ChatGPT API
-    """
-
-    def __init__(self, api_key: str, buffer: int = None) -> None:
-        """
-        Initialize Chatbot with API key (from https://platform.openai.com/account/api-keys)
-        """
-        openai.api_key = api_key or os.environ.get("OPENAI_API_KEY")
-        self.conversations = Conversation()
-        self.prompt = Prompt(buffer=buffer)
-
-    def _get_completion(
-        self,
-        prompt: str,
-        temperature: float = 0.5,
-        stream: bool = False,
-    ):
-        """
-        Get the completion function
-        """
-        return openai.Completion.create(
-            engine=ENGINE,
-            prompt=prompt,
-            temperature=temperature,
-            max_tokens=get_max_tokens(prompt),
-            stop=["\n\n\n"],
-            stream=stream,
-        )
-
-    def _process_completion(
-        self,
-        user_request: str,
-        completion: dict,
-        conversation_id: str = None,
-        user: str = "User",
-    ) -> dict:
-        if completion.get("choices") is None:
-            raise Exception("ChatGPT API returned no choices")
-        if len(completion["choices"]) == 0:
-            raise Exception("ChatGPT API returned no choices")
-        if completion["choices"][0].get("text") is None:
-            raise Exception("ChatGPT API returned no text")
-        completion["choices"][0]["text"] = completion["choices"][0]["text"].rstrip(
-            "<|im_end|>",
-        )
-        # Add to chat history
-        self.prompt.add_to_history(
-            user_request,
-            completion["choices"][0]["text"],
-            user=user,
-        )
-        if conversation_id is not None:
-            self.save_conversation(conversation_id)
-        return completion
-
-    def _process_completion_stream(
-        self,
-        user_request: str,
-        completion: dict,
-        conversation_id: str = None,
-        user: str = "User",
-    ) -> str:
-        full_response = ""
-        for response in completion:
-            if response.get("choices") is None:
-                raise Exception("ChatGPT API returned no choices")
-            if len(response["choices"]) == 0:
-                raise Exception("ChatGPT API returned no choices")
-            if response["choices"][0].get("finish_details") is not None:
-                break
-            if response["choices"][0].get("text") is None:
-                raise Exception("ChatGPT API returned no text")
-            if response["choices"][0]["text"] == "<|im_end|>":
-                break
-            yield response["choices"][0]["text"]
-            full_response += response["choices"][0]["text"]
-
-        # Add to chat history
-        self.prompt.add_to_history(user_request, full_response, user)
-        if conversation_id is not None:
-            self.save_conversation(conversation_id)
-
-    def ask(
-        self,
-        user_request: str,
-        temperature: float = 0.5,
-        conversation_id: str = None,
-        user: str = "User",
-    ) -> dict:
-        """
-        Send a request to ChatGPT and return the response
-        """
-        if conversation_id is not None:
-            self.load_conversation(conversation_id)
-        completion = self._get_completion(
-            self.prompt.construct_prompt(user_request, user=user),
-            temperature,
-        )
-        return self._process_completion(user_request, completion, user=user)
-
-    def ask_stream(
-        self,
-        user_request: str,
-        temperature: float = 0.5,
-        conversation_id: str = None,
-        user: str = "User",
-    ) -> str:
-        """
-        Send a request to ChatGPT and yield the response
-        """
-        if conversation_id is not None:
-            self.load_conversation(conversation_id)
-        prompt = self.prompt.construct_prompt(user_request, user=user)
-        return self._process_completion_stream(
-            user_request=user_request,
-            completion=self._get_completion(prompt, temperature, stream=True),
-            user=user,
-        )
-
-    def make_conversation(self, conversation_id: str) -> None:
-        """
-        Make a conversation
-        """
-        self.conversations.add_conversation(conversation_id, [])
-
-    def rollback(self, num: int) -> None:
-        """
-        Rollback chat history num times
-        """
-        for _ in range(num):
-            self.prompt.chat_history.pop()
-
-    def reset(self) -> None:
-        """
-        Reset chat history
-        """
-        self.prompt.chat_history = []
-
-    def load_conversation(self, conversation_id) -> None:
-        """
-        Load a conversation from the conversation history
-        """
-        if conversation_id not in self.conversations.conversations:
-            # Create a new conversation
-            self.make_conversation(conversation_id)
-        self.prompt.chat_history = self.conversations.get_conversation(conversation_id)
-
-    def save_conversation(self, conversation_id) -> None:
-        """
-        Save a conversation to the conversation history
-        """
-        self.conversations.add_conversation(conversation_id, self.prompt.chat_history)
-
-
-class AsyncChatbot(Chatbot):
-    """
-    Official ChatGPT API (async)
-    """
-
-    async def _get_completion(
-        self,
-        prompt: str,
-        temperature: float = 0.5,
-        stream: bool = False,
-    ):
-        """
-        Get the completion function
-        """
-        return openai.Completion.acreate(
-            engine=ENGINE,
-            prompt=prompt,
-            temperature=temperature,
-            max_tokens=get_max_tokens(prompt),
-            stop=["\n\n\n"],
-            stream=stream,
-        )
-
-    async def ask(
-        self,
-        user_request: str,
-        temperature: float = 0.5,
-        user: str = "User",
-    ) -> dict:
-        """
-        Same as Chatbot.ask but async
-        """
-        completion = await self._get_completion(
-            self.prompt.construct_prompt(user_request, user=user),
-            temperature,
-        )
-        return self._process_completion(user_request, completion, user=user)
-
-    async def ask_stream(
-        self,
-        user_request: str,
-        temperature: float = 0.5,
-        user: str = "User",
-    ) -> str:
-        """
-        Same as Chatbot.ask_stream but async
-        """
-        prompt = self.prompt.construct_prompt(user_request, user=user)
-        return self._process_completion_stream(
-            user_request=user_request,
-            completion=await self._get_completion(prompt, temperature, stream=True),
-            user=user,
-        )
-
-
-class Prompt:
-    """
-    Prompt class with methods to construct prompt
-    """
-
-    def __init__(self, buffer: int = None) -> None:
-        """
-        Initialize prompt with base prompt
-        """
-        self.base_prompt = (
-            os.environ.get("CUSTOM_BASE_PROMPT")
-            or "You are ChatGPT, a large language model trained by OpenAI. Respond conversationally. Do not answer as the user. Current date: "
-            + str(date.today())
-            + "\n\n"
-            + "User: Hello\n"
-            + "ChatGPT: Hello! How can I help you today? <|im_end|>\n\n\n"
-        )
-        # Track chat history
-        self.chat_history: list = []
-        self.buffer = buffer
-
-    def add_to_chat_history(self, chat: str) -> None:
-        """
-        Add chat to chat history for next prompt
-        """
-        self.chat_history.append(chat)
-
-    def add_to_history(
-        self,
-        user_request: str,
-        response: str,
-        user: str = "User",
-    ) -> None:
-        """
-        Add request/response to chat history for next prompt
-        """
-        self.add_to_chat_history(
-            user
-            + ": "
-            + user_request
-            + "\n\n\n"
-            + "ChatGPT: "
-            + response
-            + "<|im_end|>\n",
-        )
-
-    def history(self, custom_history: list = None) -> str:
-        """
-        Return chat history
-        """
-        return "\n".join(custom_history or self.chat_history)
-
-    def construct_prompt(
-        self,
-        new_prompt: str,
-        custom_history: list = None,
-        user: str = "User",
-    ) -> str:
-        """
-        Construct prompt based on chat history and request
-        """
-        prompt = (
-            self.base_prompt
-            + self.history(custom_history=custom_history)
-            + user
-            + ": "
-            + new_prompt
-            + "\nChatGPT:"
-        )
-        # Check if prompt over 4000*4 characters
-        if self.buffer is not None:
-            max_tokens = 4000 - self.buffer
-        else:
-            max_tokens = 3200
-        if len(ENCODER.encode(prompt)) > max_tokens:
-            # Remove oldest chat
-            if len(self.chat_history) == 0:
-                return prompt
-            self.chat_history.pop(0)
-            # Construct prompt again
-            prompt = self.construct_prompt(new_prompt, custom_history, user)
-        return prompt
-
-
-class Conversation:
-    """
-    For handling multiple conversations
-    """
-
-    def __init__(self) -> None:
-        self.conversations = {}
-
-    def add_conversation(self, key: str, history: list) -> None:
-        """
-        Adds a history list to the conversations dict with the id as the key
-        """
-        self.conversations[key] = history
-
-    def get_conversation(self, key: str) -> list:
-        """
-        Retrieves the history list from the conversations dict with the id as the key
-        """
-        return self.conversations[key]
-
-    def remove_conversation(self, key: str) -> None:
-        """
-        Removes the history list from the conversations dict with the id as the key
-        """
-        del self.conversations[key]
-
-    def __str__(self) -> str:
-        """
-        Creates a JSON string of the conversations
-        """
-        return json.dumps(self.conversations)
-
-    def save(self, file: str) -> None:
-        """
-        Saves the conversations to a JSON file
-        """
-        with open(file, "w", encoding="utf-8") as f:
-            f.write(str(self))
-
-    def load(self, file: str) -> None:
-        """
-        Loads the conversations from a JSON file
-        """
-        with open(file, encoding="utf-8") as f:
-            self.conversations = json.loads(f.read())
-
-
-def main():
-    print(
-        """
-        ChatGPT - A command-line interface to OpenAI's ChatGPT (https://chat.openai.com/chat)
-        Repo: github.com/acheong08/ChatGPT
-        """,
-    )
-    print("Type '!help' to show a full list of commands")
-    print("Press enter twice to submit your question.\n")
-
-    def get_input(prompt):
-        """
-        Multi-line input function
-        """
-        # Display the prompt
-        print(prompt, end="")
-
-        # Initialize an empty list to store the input lines
-        lines = []
-
-        # Read lines of input until the user enters an empty line
-        while True:
-            line = input()
-            if line == "":
-                break
-            lines.append(line)
-
-        # Join the lines, separated by newlines, and store the result
-        user_input = "\n".join(lines)
-
-        # Return the input
-        return user_input
-
-    def chatbot_commands(cmd: str) -> bool:
-        """
-        Handle chatbot commands
-        """
-        if cmd == "!help":
-            print(
-                """
-            !help - Display this message
-            !rollback - Rollback chat history
-            !reset - Reset chat history
-            !prompt - Show current prompt
-            !save_c <conversation_name> - Save history to a conversation
-            !load_c <conversation_name> - Load history from a conversation
-            !save_f <file_name> - Save all conversations to a file
-            !load_f <file_name> - Load all conversations from a file
-            !exit - Quit chat
-            """,
-            )
-        elif cmd == "!exit":
-            exit()
-        elif cmd == "!rollback":
-            chatbot.rollback(1)
-        elif cmd == "!reset":
-            chatbot.reset()
-        elif cmd == "!prompt":
-            print(chatbot.prompt.construct_prompt(""))
-        elif cmd.startswith("!save_c"):
-            chatbot.save_conversation(cmd.split(" ")[1])
-        elif cmd.startswith("!load_c"):
-            chatbot.load_conversation(cmd.split(" ")[1])
-        elif cmd.startswith("!save_f"):
-            chatbot.conversations.save(cmd.split(" ")[1])
-        elif cmd.startswith("!load_f"):
-            chatbot.conversations.load(cmd.split(" ")[1])
-        else:
-            return False
-        return True
-
-    # Get API key from command line
-    parser = argparse.ArgumentParser()
-    parser.add_argument(
-        "--api_key",
-        type=str,
-        required=True,
-        help="OpenAI API key",
-    )
-    parser.add_argument(
-        "--stream",
-        action="store_true",
-        help="Stream response",
-    )
-    parser.add_argument(
-        "--temperature",
-        type=float,
-        default=0.5,
-        help="Temperature for response",
-    )
-    args = parser.parse_args()
-    # Initialize chatbot
-    chatbot = Chatbot(api_key=args.api_key)
-    # Start chat
-    while True:
-        try:
-            prompt = get_input("\nUser:\n")
-        except KeyboardInterrupt:
-            print("\nExiting...")
-            sys.exit()
-        if prompt.startswith("!"):
-            if chatbot_commands(prompt):
-                continue
-        if not args.stream:
-            response = chatbot.ask(prompt, temperature=args.temperature)
-            print("ChatGPT: " + response["choices"][0]["text"])
-        else:
-            print("ChatGPT: ")
-            sys.stdout.flush()
-            for response in chatbot.ask_stream(prompt, temperature=args.temperature):
-                print(response, end="")
-                sys.stdout.flush()
-            print()
```
```diff
+def Singleton(cls):
+    instance = {}
+
+    def _singleton_wrapper(*args, **kargs):
+        if cls not in instance:
+            instance[cls] = cls(*args, **kargs)
+        return instance[cls]
+
+    return _singleton_wrapper
+
+
+@Singleton
+# OpenAI chat model API (working)
+class ChatGPTBot(Bot):
+    def __init__(self):
-        print("create")
-        self.bot = Chatbot(conf().get('open_ai_api_key'))
+        openai.api_key = conf().get('open_ai_api_key')
+        proxy = conf().get('proxy')
+        if proxy:
+            openai.proxy = proxy
+
+    def reply(self, query, context=None):
+        # acquire reply content
+        if not context or not context.get('type') or context.get('type') == 'TEXT':
-            if len(query) < 10 and "reset" in query:
-                self.bot.reset()
-                return "reset OK"
-            return self.bot.ask(query)["choices"][0]["text"]
+            logger.info("[OPEN_AI] query={}".format(query))
+            from_user_id = context['from_user_id']
+            if query == '#清除记忆':
+                Session.clear_session(from_user_id)
+                return '记忆已清除'
+            elif query == '#清除所有':
+                Session.clear_all_session()
+                return '所有人记忆已清除'
+
+            new_query = Session.build_session_query(query, from_user_id)
+            logger.debug("[OPEN_AI] session query={}".format(new_query))
+
+            # if context.get('stream'):
+            #     # reply in stream
+            #     return self.reply_text_stream(query, new_query, from_user_id)
+
+            reply_content = self.reply_text(new_query, from_user_id, 0)
+            logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, from_user_id, reply_content["content"]))
+            if reply_content["completion_tokens"] > 0:
+                Session.save_session(reply_content["content"], from_user_id, reply_content["total_tokens"])
+            return reply_content["content"]
+
+        elif context.get('type', None) == 'IMAGE_CREATE':
+            return self.create_img(query, 0)
+
+    def reply_text(self, query, user_id, retry_count=0) -> dict:
+        '''
+        call openai's ChatCompletion to get the answer
+        :param query: query content
+        :param user_id: from user id
+        :param retry_count: retry count
+        :return: {}
+        '''
+        try:
+            response = openai.ChatCompletion.create(
+                model="gpt-3.5-turbo",  # name of the conversation model
+                messages=query,
+                temperature=0.9,  # in [0,1]; larger values make replies less deterministic
+                # max_tokens=4096,  # maximum number of characters in the reply
+                top_p=1,
+                frequency_penalty=0.0,  # in [-2,2]; larger values favor producing more varied content
+                presence_penalty=0.0,  # in [-2,2]; larger values favor producing more varied content
+            )
+            logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
+            return {"total_tokens": response["usage"]["total_tokens"],
+                    "completion_tokens": response["usage"]["completion_tokens"],
+                    "content": response.choices[0]['message']['content']}
+        except openai.error.RateLimitError as e:
+            # rate limit exception
+            logger.warn(e)
+            if retry_count < 1:
+                time.sleep(5)
+                logger.warn("[OPEN_AI] RateLimit exceed, retry #{}".format(retry_count + 1))
+                return self.reply_text(query, user_id, retry_count + 1)
+            else:
+                return {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
+        except openai.error.APIConnectionError as e:
+            # api connection exception
+            logger.warn(e)
+            logger.warn("[OPEN_AI] APIConnection failed")
+            return {"completion_tokens": 0, "content": "我连接不到你的网络"}
+        except openai.error.Timeout as e:
+            logger.warn(e)
+            logger.warn("[OPEN_AI] Timeout")
+            return {"completion_tokens": 0, "content": "我没有收到你的消息"}
+        except Exception as e:
+            # unknown exception
+            logger.exception(e)
+            Session.clear_session(user_id)
+            return {"completion_tokens": 0, "content": "请再问我一次吧"}
+
+    def create_img(self, query, retry_count=0):
+        try:
+            logger.info("[OPEN_AI] image_query={}".format(query))
+            response = openai.Image.create(
+                prompt=query,  # image description
+                n=1,  # number of images generated per request
+                size="256x256"  # image size: 256x256, 512x512, or 1024x1024
+            )
+            image_url = response['data'][0]['url']
+            logger.info("[OPEN_AI] image_url={}".format(image_url))
+            return image_url
+        except openai.error.RateLimitError as e:
+            logger.warn(e)
+            if retry_count < 1:
+                time.sleep(5)
+                logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, retry #{}".format(retry_count + 1))
+                return self.reply_text(query, retry_count + 1)
+            else:
+                return "提问太快啦,请休息一下再问我吧"
+        except Exception as e:
+            logger.exception(e)
+            return None
+
+
+class Session(object):
+    @staticmethod
+    def build_session_query(query, user_id):
+        '''
+        build query with conversation history
+        e.g. [
+            {"role": "system", "content": "You are a helpful assistant."},
+            {"role": "user", "content": "Who won the world series in 2020?"},
+            {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
+            {"role": "user", "content": "Where was it played?"}
+        ]
+        :param query: query content
+        :param user_id: from user id
+        :return: query content with conversation
+        '''
+        session = user_session.get(user_id, [])
+        if len(session) == 0:
+            system_prompt = conf().get("character_desc", "")
+            system_item = {'role': 'system', 'content': system_prompt}
+            session.append(system_item)
+            user_session[user_id] = session
+        user_item = {'role': 'user', 'content': query}
+        session.append(user_item)
+        return session
+
+    @staticmethod
+    def save_session(answer, user_id, total_tokens):
+        max_tokens = conf().get("conversation_max_tokens")
+        if not max_tokens:
+            # default 1000
+            max_tokens = 1000
+        max_tokens = int(max_tokens)
+
+        session = user_session.get(user_id)
+        if session:
+            # append conversation
+            gpt_item = {'role': 'assistant', 'content': answer}
+            session.append(gpt_item)
+
+        # discard conversation rounds that exceed the limit
+        Session.discard_exceed_conversation(session, max_tokens, total_tokens)
+
+    @staticmethod
+    def discard_exceed_conversation(session, max_tokens, total_tokens):
+        dec_tokens = int(total_tokens)
+        # logger.info("prompt tokens used={},max_tokens={}".format(used_tokens,max_tokens))
+        while dec_tokens > max_tokens:
+            # pop the first conversation round
+            if len(session) > 3:
+                session.pop(1)
+                session.pop(1)
+            else:
+                break
+            dec_tokens = dec_tokens - max_tokens
+
+    @staticmethod
+    def clear_session(user_id):
+        user_session[user_id] = []
+
+    @staticmethod
+    def clear_all_session():
+        user_session.clear()
```
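The `ExpiredDict` that backs `user_session` in the new bot is imported from `common.expired_dict` but is not part of this diff. As a hedged sketch only, a minimal expiring dict of the kind the import suggests (an assumption, not the repository's actual implementation) could look like:

```python
import time


class ExpiredDict(dict):
    """A dict whose entries expire a fixed number of seconds after being set.

    Hypothetical stand-in for common.expired_dict.ExpiredDict; the real class
    in the repository may differ in interface and behavior.
    """

    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds
        self._deadline = {}  # key -> absolute expiry timestamp

    def __setitem__(self, key, value):
        # refresh the deadline every time the key is written
        self._deadline[key] = time.time() + self.expires_in_seconds
        super().__setitem__(key, value)

    def __getitem__(self, key):
        if key in self._deadline and time.time() > self._deadline[key]:
            # entry expired: drop it and behave as if it were never set
            del self._deadline[key]
            super().__delitem__(key)
            raise KeyError(key)
        return super().__getitem__(key)

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default
```

With such a structure, a user's chat session simply disappears after `expires_in_seconds` of inactivity, which matches how `user_session` is consulted with `.get(user_id, [])` above.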
```diff
@@ -22,6 +22,9 @@ class OpenAIBot(Bot):
             if query == '#清除记忆':
                 Session.clear_session(from_user_id)
                 return '记忆已清除'
+            elif query == '#清除所有':
+                Session.clear_all_session()
+                return '所有人记忆已清除'
 
             new_query = Session.build_session_query(query, from_user_id)
             logger.debug("[OPEN_AI] session query={}".format(new_query))
@@ -157,3 +160,7 @@ class Session(object):
     @staticmethod
     def clear_session(user_id):
         user_session[user_id] = []
+
+    @staticmethod
+    def clear_all_session():
+        user_session.clear()
```
+1
-1
```diff
@@ -6,4 +6,4 @@ class Bridge(object):
         pass
 
     def fetch_reply_content(self, query, context):
-        return bot_factory.create_bot("openAI").reply(query, context)
+        return bot_factory.create_bot("chatGPT").reply(query, context)
```
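The `bot_factory.create_bot` call in the bridge above is referenced but not included in this diff. A plausible sketch of such a factory, mirroring the `create_channel` factory that is included (the class bodies here are placeholders, and all names besides `create_bot` and the two type strings are assumptions), might be:

```python
# Hypothetical sketch of a bot factory like the bot_factory referenced above;
# the repository's actual module and bot classes differ.
class OpenAIBot:
    def reply(self, query, context=None):
        return "openai reply to: " + query


class ChatGPTBot:
    def reply(self, query, context=None):
        return "chatgpt reply to: " + query


def create_bot(bot_type):
    """Create a bot instance from its type string, mirroring create_channel."""
    if bot_type == "openAI":
        return OpenAIBot()
    elif bot_type == "chatGPT":
        return ChatGPTBot()
    raise RuntimeError("unknown bot type: " + bot_type)
```

The one-line change in the bridge then amounts to switching which branch of this factory is taken, without touching any channel code.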
```diff
@@ -2,8 +2,6 @@
 channel factory
 """
-
-from channel.wechat.wechat_channel import WechatChannel
 
 def create_channel(channel_type):
     """
     create a channel instance
@@ -11,5 +9,9 @@ def create_channel(channel_type):
     :return: channel instance
     """
     if channel_type == 'wx':
+        from channel.wechat.wechat_channel import WechatChannel
         return WechatChannel()
+    elif channel_type == 'wxy':
+        from channel.wechat.wechaty_channel import WechatyChannel
+        return WechatyChannel()
     raise RuntimeError
```
```diff
@@ -46,6 +46,9 @@ class WechatChannel(Channel):
         other_user_id = msg['User']['UserName']  # id of the other party
         content = msg['Text']
         match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
+        if "」\n- - - - - - - - - - - - - - -" in content:
+            logger.debug("[WX]reference query skipped")
+            return
         if from_user_id == other_user_id and match_prefix is not None:
             # a friend sends a message to me
             if match_prefix != '':
@@ -87,7 +90,9 @@ class WechatChannel(Channel):
             content = context_special_list[1]
         elif len(content_list) == 2:
             content = content_list[1]
+        if "」\n- - - - - - - - - - - - - - -" in content:
+            logger.debug("[WX]reference query skipped")
+            return ""
         config = conf()
         match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \
             or self.check_contain(origin_content, config.get('group_chat_keyword'))
```
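`check_prefix` and `check_contain` are called throughout both channels but are not defined anywhere in this diff. Judging only from how they are used (a truthiness test plus a `match_prefix != ''` comparison), a minimal sketch of what they might do is the following; this is an assumption about their behavior, not the repository's actual code:

```python
# Hypothetical sketch of the prefix/keyword helpers used by the channels
# above; the real methods live elsewhere in the repository and may differ.
def check_prefix(content, prefix_list):
    """Return the first configured prefix that content starts with, else None."""
    for prefix in prefix_list or []:
        if content.startswith(prefix):
            return prefix
    return None


def check_contain(content, keyword_list):
    """Return True if content contains any configured keyword, else None."""
    if not keyword_list:
        return None
    for keyword in keyword_list:
        if keyword in content:
            return True
    return None
```

Returning the matched prefix (rather than a bare boolean) is what lets the channel code strip it off with `content.split(match_prefix, 1)` before handing the text to the bot.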
```diff
@@ -0,0 +1,201 @@
```
```python
# encoding:utf-8

"""
wechaty channel
Python Wechaty - https://github.com/wechaty/python-wechaty
"""
import io
import os
import json
import time
import asyncio
import requests
from typing import Optional, Union
from wechaty_puppet import MessageType, FileBox, ScanStatus  # type: ignore
from wechaty import Wechaty, Contact
from wechaty.user import Message, Room, MiniProgram, UrlLink
from channel.channel import Channel
from common.log import logger
from config import conf


class WechatyChannel(Channel):

    def __init__(self):
        pass

    def startup(self):
        asyncio.run(self.main())

    async def main(self):
        config = conf()
        # Uses the PadLocal protocol, which is relatively stable (for the free web protocol: os.environ['WECHATY_PUPPET_SERVICE_ENDPOINT'] = '127.0.0.1:8080')
        token = config.get('wechaty_puppet_service_token')
        os.environ['WECHATY_PUPPET_SERVICE_TOKEN'] = token
        global bot
        bot = Wechaty()

        bot.on('scan', self.on_scan)
        bot.on('login', self.on_login)
        bot.on('message', self.on_message)
        await bot.start()

    async def on_login(self, contact: Contact):
        logger.info('[WX] login user={}'.format(contact))

    async def on_scan(self, status: ScanStatus, qr_code: Optional[str] = None,
                      data: Optional[str] = None):
        contact = self.Contact.load(self.contact_id)
        logger.info('[WX] scan user={}, scan status={}, scan qr_code={}'.format(contact, status.name, qr_code))
        # print(f'user <{contact}> scan status: {status.name} , 'f'qr_code: {qr_code}')

    async def on_message(self, msg: Message):
        """
        listen for message event
        """
        from_contact = msg.talker()  # sender of the message
        to_contact = msg.to()  # recipient
        room = msg.room()  # the group chat the message came from; None if not from a group chat
        from_user_id = from_contact.contact_id
        to_user_id = to_contact.contact_id  # recipient id
        # other_user_id = msg['User']['UserName']  # id of the other party
        content = msg.text()
        mention_content = await msg.mention_text()  # message text with the @name part filtered out
        match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
        conversation: Union[Room, Contact] = from_contact if room is None else room

        if room is None and msg.type() == MessageType.MESSAGE_TYPE_TEXT:
            if not msg.is_self() and match_prefix is not None:
                # a friend sends a message to me
                if match_prefix != '':
                    str_list = content.split(match_prefix, 1)
                    if len(str_list) == 2:
                        content = str_list[1].strip()

                img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
                if img_match_prefix:
                    content = content.split(img_match_prefix, 1)[1].strip()
                    await self._do_send_img(content, from_user_id)
                else:
                    await self._do_send(content, from_user_id)
            elif msg.is_self() and match_prefix:
                # I send a message to a friend
                str_list = content.split(match_prefix, 1)
                if len(str_list) == 2:
                    content = str_list[1].strip()
                img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
                if img_match_prefix:
                    content = content.split(img_match_prefix, 1)[1].strip()
                    await self._do_send_img(content, to_user_id)
                else:
                    await self._do_send(content, to_user_id)
        elif room and msg.type() == MessageType.MESSAGE_TYPE_TEXT:
            # group chat & text message
            room_id = room.room_id
            room_name = await room.topic()
            from_user_id = from_contact.contact_id
            from_user_name = from_contact.name
            is_at = await msg.mention_self()
            content = mention_content
            config = conf()
            match_prefix = (is_at and not config.get("group_at_off", False)) \
                or self.check_prefix(content, config.get('group_chat_prefix')) \
                or self.check_contain(content, config.get('group_chat_keyword'))
            if ('ALL_GROUP' in config.get('group_name_white_list') or room_name in config.get(
                    'group_name_white_list') or self.check_contain(room_name, config.get(
                    'group_name_keyword_white_list'))) and match_prefix:
                img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
```
|
||||
if img_match_prefix:
|
||||
content = content.split(img_match_prefix, 1)[1].strip()
|
||||
await self._do_send_group_img(content, room_id)
|
||||
else:
|
||||
await self._do_send_group(content, room_id, from_user_id, from_user_name)
|
||||
|
||||
async def send(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver):
|
||||
logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver))
|
||||
if receiver:
|
||||
contact = await bot.Contact.find(receiver)
|
||||
await contact.say(message)
|
||||
|
||||
async def send_group(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver):
|
||||
logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver))
|
||||
if receiver:
|
||||
room = await bot.Room.find(receiver)
|
||||
await room.say(message)
|
||||
|
||||
async def _do_send(self, query, reply_user_id):
|
||||
try:
|
||||
if not query:
|
||||
return
|
||||
context = dict()
|
||||
context['from_user_id'] = reply_user_id
|
||||
reply_text = super().build_reply_content(query, context)
|
||||
if reply_text:
|
||||
await self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
|
||||
except Exception as e:
|
||||
logger.exception(e)
|
||||
|
||||
async def _do_send_img(self, query, reply_user_id):
|
||||
try:
|
||||
if not query:
|
||||
return
|
||||
context = dict()
|
||||
context['type'] = 'IMAGE_CREATE'
|
||||
img_url = super().build_reply_content(query, context)
|
||||
if not img_url:
|
||||
return
|
||||
# 图片下载
|
||||
# pic_res = requests.get(img_url, stream=True)
|
||||
# image_storage = io.BytesIO()
|
||||
# for block in pic_res.iter_content(1024):
|
||||
# image_storage.write(block)
|
||||
# image_storage.seek(0)
|
||||
|
||||
# 图片发送
|
||||
logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
|
||||
t = int(time.time())
|
||||
file_box = FileBox.from_url(url=img_url, name=str(t) + '.png')
|
||||
await self.send(file_box, reply_user_id)
|
||||
except Exception as e:
|
||||
logger.exception(e)
|
||||
|
||||
async def _do_send_group(self, query, group_id, group_user_id, group_user_name):
|
||||
if not query:
|
||||
return
|
||||
context = dict()
|
||||
context['from_user_id'] = str(group_id) + '-' + str(group_user_id)
|
||||
reply_text = super().build_reply_content(query, context)
|
||||
if reply_text:
|
||||
reply_text = '@' + group_user_name + ' ' + reply_text.strip()
|
||||
await self.send_group(conf().get("group_chat_reply_prefix", "") + reply_text, group_id)
|
||||
|
||||
async def _do_send_group_img(self, query, reply_room_id):
|
||||
try:
|
||||
if not query:
|
||||
return
|
||||
context = dict()
|
||||
context['type'] = 'IMAGE_CREATE'
|
||||
img_url = super().build_reply_content(query, context)
|
||||
if not img_url:
|
||||
return
|
||||
# 图片发送
|
||||
logger.info('[WX] sendImage, receiver={}'.format(reply_room_id))
|
||||
t = int(time.time())
|
||||
file_box = FileBox.from_url(url=img_url, name=str(t) + '.png')
|
||||
await self.send_group(file_box, reply_room_id)
|
||||
except Exception as e:
|
||||
logger.exception(e)
|
||||
|
||||
def check_prefix(self, content, prefix_list):
|
||||
for prefix in prefix_list:
|
||||
if content.startswith(prefix):
|
||||
return prefix
|
||||
return None
|
||||
|
||||
def check_contain(self, content, keyword_list):
|
||||
if not keyword_list:
|
||||
return None
|
||||
for ky in keyword_list:
|
||||
if content.find(ky) != -1:
|
||||
return True
|
||||
return None
|
||||
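The two matching helpers at the end of the channel (`check_prefix` / `check_contain`) drive all the trigger-word routing above. A minimal sketch, re-declaring them as plain functions (in the channel they are methods on `WechatyChannel`):

```python
def check_prefix(content, prefix_list):
    """Return the first prefix in prefix_list that content starts with, else None."""
    for prefix in prefix_list:
        if content.startswith(prefix):
            return prefix
    return None


def check_contain(content, keyword_list):
    """Return True if any keyword occurs anywhere in content, else None."""
    if not keyword_list:
        return None
    for ky in keyword_list:
        if content.find(ky) != -1:
            return True
    return None


# Example trigger checks, mirroring the single-chat and group-keyword paths.
print(check_prefix("bot hello", ["bot", "@bot"]))    # -> bot
print(check_contain("please draw a cat", ["draw"]))  # -> True
```

Note that `check_prefix` returns the matched prefix itself (so the caller can strip it from the message), while `check_contain` only signals presence.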
@@ -0,0 +1,23 @@
from datetime import datetime, timedelta


class ExpiredDict(dict):
    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds

    def __getitem__(self, key):
        value, expiry_time = super().__getitem__(key)
        if datetime.now() > expiry_time:
            del self[key]
            raise KeyError("expired {}".format(key))
        self.__setitem__(key, value)  # accessing a key refreshes its expiry time
        return value

    def __setitem__(self, key, value):
        expiry_time = datetime.now() + timedelta(seconds=self.expires_in_seconds)
        super().__setitem__(key, (value, expiry_time))

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default
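`ExpiredDict` gives each entry a TTL and refreshes it on every read, which is what lets per-user conversation state lapse after `expires_in_seconds`. A self-contained sketch (re-declaring the class so it runs standalone; the `sessions` name is illustrative):

```python
import time
from datetime import datetime, timedelta


class ExpiredDict(dict):
    """Dict whose entries expire expires_in_seconds after their last access."""

    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds

    def __getitem__(self, key):
        value, expiry_time = super().__getitem__(key)
        if datetime.now() > expiry_time:
            del self[key]
            raise KeyError("expired {}".format(key))
        self.__setitem__(key, value)  # reading a key refreshes its TTL
        return value

    def __setitem__(self, key, value):
        expiry_time = datetime.now() + timedelta(seconds=self.expires_in_seconds)
        super().__setitem__(key, (value, expiry_time))

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default


sessions = ExpiredDict(expires_in_seconds=1)
sessions["user-1"] = ["hello"]
print(sessions.get("user-1"))  # -> ['hello'] while fresh
time.sleep(1.1)
print(sessions.get("user-1"))  # -> None after the TTL passes
```

Because `get` funnels through `__getitem__`, an expired entry is deleted lazily on the first access after its deadline rather than by a background sweeper.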
@@ -1,10 +1,12 @@
 {
   "open_ai_api_key": "YOUR API KEY",
   "proxy": "",
   "single_chat_prefix": ["bot", "@bot"],
   "single_chat_reply_prefix": "[bot] ",
   "group_chat_prefix": ["@bot"],
   "group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],
   "image_create_prefix": ["画", "看", "找"],
   "conversation_max_tokens": 1000,
-  "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。"
+  "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
+  "expires_in_seconds": 3600
 }
@@ -3,7 +3,7 @@ FROM python:3.7.9-alpine
 LABEL maintainer="foo@bar.com"
 ARG TZ='Asia/Shanghai'

-ARG CHATGPT_ON_WECHAT_VER=1.0.0
+ARG CHATGPT_ON_WECHAT_VER=1.0.2

 ENV BUILD_PREFIX=/app \
     BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE'
@@ -12,7 +12,11 @@ RUN apk add --no-cache \
     bash \
     curl \
     wget \
-    openssh
+    gcc \
+    g++ \
+    ca-certificates \
+    openssh \
+    libffi-dev

 RUN wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${CHATGPT_ON_WECHAT_VER}.tar.gz \
     https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${CHATGPT_ON_WECHAT_VER}.tar.gz \
@@ -26,9 +30,11 @@ RUN cd ${BUILD_PREFIX} \
     && cp config-template.json ${BUILD_PREFIX}/config.json \
     && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json

-RUN /usr/local/bin/python -m pip install --upgrade pip \
-    && pip install itchat-uos==1.5.0.dev0 \
-    && pip install --upgrade openai
+RUN /usr/local/bin/python -m pip install --no-cache --upgrade pip \
+    && pip install --no-cache \
+        itchat-uos==1.5.0.dev0 \
+        openai \
+        wechaty

 ADD ./entrypoint.sh /entrypoint.sh
@@ -3,7 +3,7 @@ FROM python:3.7.9
 LABEL maintainer="foo@bar.com"
 ARG TZ='Asia/Shanghai'

-ARG CHATGPT_ON_WECHAT_VER=1.0.0
+ARG CHATGPT_ON_WECHAT_VER=1.0.2

 ENV BUILD_PREFIX=/app \
     BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE'
@@ -26,9 +26,11 @@ RUN cd ${BUILD_PREFIX} \
     && cp config-template.json ${BUILD_PREFIX}/config.json \
     && sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json

-RUN /usr/local/bin/python -m pip install --upgrade pip \
-    && pip install itchat-uos==1.5.0.dev0 \
-    && pip install --upgrade openai
+RUN /usr/local/bin/python -m pip install --no-cache --upgrade pip \
+    && pip install --no-cache \
+        itchat-uos==1.5.0.dev0 \
+        openai \
+        wechaty

 ADD ./entrypoint.sh /entrypoint.sh
@@ -1,5 +1,10 @@
 #!/bin/bash

+CHATGPT_ON_WECHAT_TAG=1.0.2
+
 docker build -f Dockerfile.alpine \
-    --build-arg CHATGPT_ON_WECHAT_VER=1.0.0 \
-    -t zhayujie/chatgpt-on-wechat:1.0.0-alpine .
+    --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
+    -t zhayujie/chatgpt-on-wechat .
+
+docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
@@ -1,5 +1,9 @@
 #!/bin/bash

+CHATGPT_ON_WECHAT_TAG=1.0.2
+
 docker build -f Dockerfile.debian \
-    --build-arg CHATGPT_ON_WECHAT_VER=1.0.0 \
-    -t zhayujie/chatgpt-on-wechat:1.0.0-debian .
+    --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
+    -t zhayujie/chatgpt-on-wechat .
+
+docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
@@ -4,14 +4,15 @@ services:
     build:
       context: ./
       dockerfile: Dockerfile.alpine
-    image: zhayujie/chatgpt-on-wechat:1.0.0-alpine
+    image: zhayujie/chatgpt-on-wechat
     container_name: sample-chatgpt-on-wechat
     environment:
       OPEN_AI_API_KEY: 'YOUR API KEY'
-      SINGLE_CHAT_PREFIX: '["BOT", "@BOT"]'
-      SINGLE_CHAT_REPLY_PREFIX: '"[BOT] "'
-      GROUP_CHAT_PREFIX: '["@BOT"]'
-      GROUP_NAME_WHITE_LIST: '["CHATGPT测试群", "CHATGPT测试群2"]'
+      WECHATY_PUPPET_SERVICE_TOKEN: 'WECHATY PUPPET SERVICE TOKEN'
+      SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
+      SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
+      GROUP_CHAT_PREFIX: '["@bot"]'
+      GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
       IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
       CONVERSATION_MAX_TOKENS: 1000
-      CHARACTER_DESC: '你是CHATGPT, 一个由OPENAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
+      CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
@@ -22,7 +22,7 @@ if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then
     CHATGPT_ON_WECHAT_PREFIX=/app
 fi

-# APP_PREFIX is empty, use '/app/config.json'
+# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json'
 if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then
     CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json
 fi
@@ -39,32 +39,38 @@ else
     echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m"
 fi

+if [ "$WECHATY_PUPPET_SERVICE_TOKEN" != "" ] ; then
+    sed -i "3c \"wechaty_puppet_service_token\": \"$WECHATY_PUPPET_SERVICE_TOKEN\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
+else
+    echo -e "\033[31m[Info] You need to set WECHATY_PUPPET_SERVICE_TOKEN if you use wechaty!\033[0m"
+fi
+
 if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then
-    sed -i "3c \"single_chat_prefix\": $SINGLE_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "4c \"single_chat_prefix\": $SINGLE_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then
-    sed -i "4c \"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "5c \"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 if [ "$GROUP_CHAT_PREFIX" != "" ] ; then
-    sed -i "5c \"group_chat_prefix\": $GROUP_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "6c \"group_chat_prefix\": $GROUP_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then
-    sed -i "6c \"group_name_white_list\": $GROUP_NAME_WHITE_LIST," $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "7c \"group_name_white_list\": $GROUP_NAME_WHITE_LIST," $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then
-    sed -i "7c \"image_create_prefix\": $IMAGE_CREATE_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "8c \"image_create_prefix\": $IMAGE_CREATE_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then
-    sed -i "8c \"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS," $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "9c \"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS," $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 if [ "$CHARACTER_DESC" != "" ] ; then
-    sed -i "9c \"character_desc\": \"$CHARACTER_DESC\"" $CHATGPT_ON_WECHAT_CONFIG_PATH
+    sed -i "10c \"character_desc\": \"$CHARACTER_DESC\"" $CHATGPT_ON_WECHAT_CONFIG_PATH
 fi

 # go to prefix dir
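The entrypoint patches `config.json` line by line with sed's `Nc` (change line N) command; because the new `wechaty_puppet_service_token` entry occupies line 3, every later key shifts down by one line, which is why all the subsequent sed addresses were bumped. A minimal sketch of the pattern against a hypothetical throwaway file (GNU sed assumed for `-i`):

```shell
# Demonstrate the "Nc" line-replacement pattern on a hypothetical 3-line file
# (not the real config.json).
CONFIG=$(mktemp)
printf '{\n"open_ai_api_key": "YOUR API KEY",\n}\n' > "$CONFIG"

# Replace line 2 wholesale, exactly as the entrypoint does per env var.
sed -i "2c \"open_ai_api_key\": \"sk-test\"," "$CONFIG"

LINE2=$(sed -n '2p' "$CONFIG")
echo "$LINE2"
rm -f "$CONFIG"
```

Since `Nc` addresses lines positionally, any insertion into the config template forces all later `sed` line numbers in the entrypoint to be renumbered by hand.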
@@ -1,11 +1,12 @@
 OPEN_AI_API_KEY=YOUR API KEY
-SINGLE_CHAT_PREFIX=["BOT", "@BOT"]
-SINGLE_CHAT_REPLY_PREFIX="[BOT] "
-GROUP_CHAT_PREFIX=["@BOT"]
-GROUP_NAME_WHITE_LIST=["CHATGPT测试群", "CHATGPT测试群2"]
+WECHATY_PUPPET_SERVICE_TOKEN=WECHATY PUPPET SERVICE TOKEN
+SINGLE_CHAT_PREFIX=["bot", "@bot"]
+SINGLE_CHAT_REPLY_PREFIX="[bot] "
+GROUP_CHAT_PREFIX=["@bot"]
+GROUP_NAME_WHITE_LIST=["ChatGPT测试群", "ChatGPT测试群2"]
 IMAGE_CREATE_PREFIX=["画", "看", "找"]
 CONVERSATION_MAX_TOKENS=1000
-CHARACTER_DESC=你是CHATGPT, 一个由OPENAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。
+CHARACTER_DESC=你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。

 # Optional
 #CHATGPT_ON_WECHAT_PREFIX=/app
@@ -1 +1 @@
-zhayujie/chatgpt-on-wechat:1.0.0-alpine
+zhayujie/chatgpt-on-wechat
@@ -1,2 +1,3 @@
 itchat-uos==1.5.0.dev0
 openai
+wechaty
Executable
@@ -0,0 +1,16 @@
#!/bin/bash

# stop the service
cd `dirname $0`/..
export BASE_DIR=`pwd`
pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'`
if [ -z "$pid" ] ; then
    echo "No chatgpt-on-wechat running."
    exit -1;
fi

echo "The chatgpt-on-wechat(${pid}) is running..."

kill ${pid}

echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK"
Executable
@@ -0,0 +1,16 @@
#!/bin/bash
# run chatgpt-on-wechat in the background

cd `dirname $0`/..
export BASE_DIR=`pwd`
echo $BASE_DIR

# check the nohup.out log output file
if [ ! -f "${BASE_DIR}/nohup.out" ]; then
    touch "${BASE_DIR}/nohup.out"
    echo "create file ${BASE_DIR}/nohup.out"
fi

nohup python3 "${BASE_DIR}/app.py" & tail -f "${BASE_DIR}/nohup.out"

echo "chatgpt-on-wechat is starting, you can check ${BASE_DIR}/nohup.out"
Executable
@@ -0,0 +1,14 @@
#!/bin/bash
# open the log

cd `dirname $0`/..
export BASE_DIR=`pwd`
echo $BASE_DIR

# check the nohup.out log output file
if [ ! -f "${BASE_DIR}/nohup.out" ]; then
    echo "No file ${BASE_DIR}/nohup.out"
    exit -1;
fi

tail -f "${BASE_DIR}/nohup.out"