Compare commits

...

75 Commits

Author SHA1 Message Date
zhayujie 38ad01a387 docs: update doc for voice 2023-03-09 01:43:16 +08:00
zhayujie e014b0406c Merge pull request #382 from Bachery/master
support group chat in one session
2023-03-09 00:41:02 +08:00
zhayujie a4e8e64b5d Merge pull request #407 from zhayujie/feat-voice
fix: remove prefix match in voice msg
2023-03-09 00:36:05 +08:00
ubuntu 48e258dd67 fix: remove prefix match in voice msg 2023-03-09 00:31:36 +08:00
Bachery 574f05cc6f Merge branch 'zhayujie:master' into master 2023-03-08 17:28:27 +01:00
Bachery c2e4d88842 fix compatibility 2023-03-08 17:27:32 +01:00
zhayujie 99b4700b49 Merge pull request #385 from wanggang1987/google_voice
Voice support
2023-03-09 00:26:32 +08:00
Bachery 32cff41df5 add option: group_chat_in_one_session 2023-03-08 13:11:37 +01:00
Bachery 8eace7e30e Merge branch 'zhayujie:master' into master 2023-03-08 12:50:15 +01:00
wanggang d02508df41 [voice] Readme modify 2023-03-08 16:39:25 +08:00
wanggang 3db452ef71 [voice] using baidu service to gen reply voice 2023-03-08 15:22:46 +08:00
wanggang d7a8854fa1 [voice] add support for whisper-1 model 2023-03-08 11:32:27 +08:00
wanggang 882e6c3576 [voice] add support for whisper 2023-03-08 11:02:01 +08:00
zhayujie 51f0b898f0 Merge pull request #386 from limccn/feature/docker-support-proxy
feat: docker support proxy
2023-03-08 00:16:06 +08:00
zhayujie e6112568ed Merge pull request #395 from goldfishh/master
fix a minor typo
2023-03-08 00:15:09 +08:00
wanggang 720ad07f83 [voice] fix issue 2023-03-07 23:33:25 +08:00
wanggang cc19017c01 [voice] add text to voice 2023-03-07 23:28:57 +08:00
goldfishh 55fe38d5fb fix a minor typo 2023-03-07 22:49:55 +08:00
limccn 494c5a6222 feat: container support proxy with tag 1.0.4 2023-03-07 14:42:53 +08:00
wanggang 1711a5c064 [voice] fix google voice exception issue 2023-03-07 14:42:06 +08:00
wanggang d38fc61043 [voice] add google voice support 2023-03-07 14:29:59 +08:00
Bachery e5ab350bbf support group chat in one session 2023-03-06 22:39:40 +01:00
zhayujie ad7ab088fe docs: update chatgpt sign up tutorial 2023-03-06 22:23:00 +08:00
zhayujie f2ae3e2fd8 Merge pull request #362 from zhayujie/fix-tokens-limit
fix: tokens limit optimization
2023-03-06 00:54:02 +08:00
ubuntu 733f9d1f10 fix: tokens limit optimization 2023-03-06 00:51:53 +08:00
zhayujie 2886f48788 Merge pull request #360 from zwssunny/master
Fix session tokens calculation
2023-03-06 00:25:46 +08:00
zwssunny 04078fd4fa Merge branch 'master' of github.com:zwssunny/chatgpt-on-wechat 2023-03-05 22:26:26 +08:00
zhanws 2c2217daad Merge branch 'zhayujie:master' into master 2023-03-05 22:16:26 +08:00
zwssunny 5de600c689 Fix session tokens calculation 2023-03-05 22:15:15 +08:00
zhayujie 1d4966b69c docs: update issue template 2023-03-05 19:54:35 +08:00
zwssunny 7ad16731fd Add scripts under scripts/ for starting a server deployment 2023-03-05 17:54:00 +08:00
zhanws 5df341fef2 Merge branch 'zhayujie:master' into master 2023-03-05 17:51:06 +08:00
zwssunny 39a5487f39 Trim session length using the token counts returned by the openai API 2023-03-05 17:46:35 +08:00
zhayujie 6a98bc2d5a Merge pull request #354 from zwssunny/master
Add handling for over-long sessions
2023-03-05 17:23:56 +08:00
zhanws b154dd7e86 Merge branch 'zhayujie:master' into master 2023-03-05 09:51:08 +08:00
zwssunny 3d4d1c734a Handle over-long sessions 2023-03-05 09:43:59 +08:00
zhayujie f10911bc3b Merge branch 'master' of github.com:zhayujie/chatgpt-on-wechat 2023-03-05 01:19:44 +08:00
zhayujie 44e5979a03 docs: update README.md 2023-03-05 01:18:46 +08:00
zhayujie 598bc6569d docs: update README.md 2023-03-05 00:47:40 +08:00
zhayujie d667ccb396 docs: update README.md for proxy 2023-03-05 00:36:20 +08:00
zwssunny efbc9de9d1 Add over-long session handling 2023-03-04 23:44:57 +08:00
zhayujie ebed4e7832 Merge pull request #348 from lanvent/dev
Release conversation history after a timeout
2023-03-04 22:20:25 +08:00
zhayujie fb598fba82 Merge pull request #349 from sunxin18/add_more_exception
[feat] catch connection failed exception
2023-03-04 21:51:31 +08:00
lanvent 2c4d79e952 Add session expiration time 2023-03-04 21:50:18 +08:00
sunxin.181 a2db765ade [feat] catch connection failed exception 2023-03-04 21:43:58 +08:00
lanvent df3f19b534 Ignore quoted messages 2023-03-04 20:39:24 +08:00
zhayujie f67dae5b0b fix: use default max_token 2023-03-04 17:01:26 +08:00
zhayujie cd5f58ff2c docs: temporarily removed optional config 2023-03-04 12:57:47 +08:00
zhayujie 7be9e7d0a8 Merge pull request #330 from alin299/master
feat:add proxy option
2023-03-04 12:44:04 +08:00
alin 47c675f999 feat:add proxy option 2023-03-03 15:23:14 +08:00
zhayujie cfa738087f docs: update readme doc for chatgpt-3.5 2023-03-02 15:48:49 +08:00
zhayujie 73b4d63545 Merge pull request #303 from zhayujie/feat-gpt-3.5
feat: support gpt-3.5 api
2023-03-02 15:42:28 +08:00
ubuntu 48900dfbc4 feat: support gpt-3.5 api 2023-03-02 15:41:11 +08:00
zhayujie a3153815c8 Merge pull request #286 from zwssunny/master
feat: add deploy scripts
2023-02-28 09:18:38 +08:00
zwssunny 8729a31119 Create a scripts directory dedicated to invocation scripts 2023-02-28 08:46:46 +08:00
zwssunny b81d947dbb Add the #清除所有 command to clear all users' memory 2023-02-27 22:33:31 +08:00
zwssunny 999b2ea51f Merge branch 'master' of github.com:zhayujie/chatgpt-on-wechat 2023-02-27 22:27:51 +08:00
zwssunny 0b802a61ec Add server execution scripts 2023-02-27 22:24:14 +08:00
zhayujie 02ca1f8772 Merge pull request #281 from limccn/feature/docker-add-wechaty-support
feat: container support wechaty with tag 1.0.2
2023-02-26 12:43:27 +08:00
limccn 820b255e24 feat: container support v1.0.2 wechaty 2023-02-25 19:27:59 +08:00
zhayujie bca0939c9d fix: model import isolation 2023-02-21 00:02:39 +08:00
zhayujie 01d0af841d Merge pull request #244 from ZQ7/master
add wechaty
2023-02-20 23:52:40 +08:00
ZQ7 18e9d6a9b9 add wechaty 2023-02-20 12:03:28 +08:00
zhayujie e27e5958a5 docs: update docker wiki ref 2023-02-18 19:30:26 +08:00
zhayujie 2c5b1d5a8d docs: update openai account sign up wiki #236 2023-02-18 10:54:28 +08:00
zhayujie 2aa146341f Merge pull request #223 from limccn/master
feat: use entrypoint.sh override environment variables
2023-02-16 21:31:48 +08:00
limc.cn 4da52405fb Merge branch 'zhayujie:master' into master 2023-02-16 11:17:15 +08:00
limccn badceb2798 feat: use entrypoint.sh override environment variables 2023-02-16 11:14:09 +08:00
zhayujie 0ff40ac443 docs: update README.md 2023-02-14 21:54:02 +08:00
zhayujie 0632a22ddf docs: update README.md 2023-02-14 21:53:08 +08:00
zhayujie 99a2980458 Merge pull request #208 from limccn/master
feat: support docker/container running
2023-02-14 21:31:59 +08:00
limccn 296f3d0d47 feat: support docker/container running 2023-02-14 14:17:28 +08:00
zhayujie fa127c869e docs: update README.md 2023-02-14 01:05:50 +08:00
zhayujie 25f222dfdf docs: update readme doc #200 2023-02-14 01:04:25 +08:00
zhayujie e723fa94c4 fix: update readme #191 2023-02-13 09:55:45 +08:00
31 changed files with 1025 additions and 530 deletions
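A commit table like the one above is exactly what `git log` prints for a revision range. A minimal sketch in a throwaway repo (the tag name and commit messages below are illustrative stand-ins, not taken from the real repository):

```shell
# Reproduce a "Compare commits" listing with plain git, in a throwaway repo.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: update readme"
git tag base
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "docs: update doc for voice"
# Author, abbreviated SHA, subject and date, like the columns above:
git log --pretty=format:'%an %h %s %ad' --date=iso base..HEAD
```

Against a clone of the real repository, the range would be the two endpoint SHAs of the comparison instead of `base..HEAD`.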
+2 -2
@@ -1,7 +1,7 @@
### 前置确认
1. 运行于国内网络环境,未开代理
2. python 已安装:版本在 3.7 ~ 3.10 之间,依赖已安装
1. 网络能够访问openai接口 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
2. python 已安装:版本在 3.7 ~ 3.10 之间,依赖已安装
3. 在已有 issue 中未搜索到类似问题
4. [FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) 中无类似问题
+2
@@ -5,3 +5,5 @@ venv*
*.pyc
config.json
QR.png
nohup.out
tmp
+36 -8
@@ -3,16 +3,24 @@
> ChatGPT近期以强大的对话和信息整合能力风靡全网,可以写代码、改论文、讲故事,几乎无所不能,这让人不禁有个大胆的想法,能否用他的对话模型把我们的微信打造成一个智能机器人,可以在与好友对话中给出意想不到的回应,而且再也不用担心女朋友影响我们 ~~打游戏~~ 工作了。
基于ChatGPT的微信聊天机器人,通过 [OpenAI](https://github.com/openai/openai-quickstart-python) 接口生成对话内容,使用 [itchat](https://github.com/littlecodersh/ItChat) 实现微信消息的接收和自动回复。已实现的特性如下:
基于ChatGPT的微信聊天机器人,通过 [ChatGPT](https://github.com/openai/openai-python) 接口生成对话内容,使用 [itchat](https://github.com/littlecodersh/ItChat) 实现微信消息的接收和自动回复。已实现的特性如下:
- [x] **文本对话:** 接收私聊及群组中的微信消息,使用ChatGPT生成回复内容,完成自动回复
- [x] **规则定制化:** 支持私聊中按指定规则触发自动回复,支持对群组设置自动回复白名单
- [x] **多账号:** 支持多微信账号同时运行
- [x] **图片生成:** 支持根据描述生成图片,并自动发送至个人聊天或群聊
- [x] **上下文记忆**:支持多轮对话记忆,且为每个好友维护独立的上下会话
- [x] **语音识别:** 支持接收和处理语音消息,通过文字或语音回复
# 更新日志
>**2023.03.09** 基于 `whisper API` 实现对微信语音消息的解析和回复,添加配置项 `"speech_recognition":true` 即可启用。(contributed by [wanggang1987](https://github.com/wanggang1987) in [#385](https://github.com/zhayujie/chatgpt-on-wechat/pull/385))
>**2023.03.02** 接入[ChatGPT API](https://platform.openai.com/docs/guides/chat) (gpt-3.5-turbo),默认使用该模型进行对话,需升级openai依赖 (`pip3 install --upgrade openai`)。网络问题参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
>**2023.02.20** 增加 [python-wechaty](https://github.com/wechaty/python-wechaty) 作为可选渠道,使用Pad协议相对稳定,但Token收费 (使用参考[#244](https://github.com/zhayujie/chatgpt-on-wechat/pull/244)contributed by [ZQ7](https://github.com/ZQ7))
>**2023.02.09** 扫码登录存在封号风险,请谨慎使用,参考[#58](https://github.com/AutumnWhj/ChatGPT-wechat-bot/issues/158)
>**2023.02.05** 在openai官方接口方案中 (GPT-3模型) 实现上下文对话
@@ -44,7 +52,7 @@
### 1. OpenAI账号注册
前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,参考这篇 [教程](https://www.cnblogs.com/damugua/p/16969508.html) 可以通过虚拟手机号来接收验证码。创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。
前往 [OpenAI注册页面](https://beta.openai.com/signup) 创建账号,参考这篇 [教程](https://www.pythonthree.com/register-openai-chatgpt/) 可以通过虚拟手机号来接收验证码。创建完账号则前往 [API管理页面](https://beta.openai.com/account/api-keys) 创建一个 API Key 并保存下来,后面需要在项目中配置这个key。
> 项目中使用的对话模型是 davinci,计费方式是约每 750 字 (包含请求和回复) 消耗 $0.02,图片生成是每张消耗 $0.016,账号创建有免费的 $18 额度,使用完可以更换邮箱重新注册。
@@ -68,7 +76,7 @@ cd chatgpt-on-wechat/
pip3 install itchat-uos==1.5.0.dev0
pip3 install --upgrade openai
```
注:`itchat-uos`使用指定版本1.5.0.dev0`openai`使用最新版本,需高于0.25.0。
注:`itchat-uos`使用指定版本1.5.0.dev0`openai`使用最新版本,需高于0.27.0。
## 配置
@@ -84,7 +92,8 @@ cp config-template.json config.json
```bash
# config.json文件内容示例
{
"open_ai_api_key": "YOUR API KEY" # 填入上面创建的 OpenAI API KEY
"open_ai_api_key": "YOUR API KEY", # 填入上面创建的 OpenAI API KEY
"proxy": "127.0.0.1:7890", # 代理客户端的ip和端口
"single_chat_prefix": ["bot", "@bot"], # 私聊时文本需要包含该前缀才能触发机器人回复
"single_chat_reply_prefix": "[bot] ", # 私聊时自动回复的前缀,用于区分真人
"group_chat_prefix": ["@bot"], # 群聊时包含该前缀则会触发机器人回复
@@ -107,8 +116,13 @@ cp config-template.json config.json
+ 默认只要被人 @ 就会触发机器人自动回复;另外群聊天中只要检测到以 "@bot" 开头的内容,同样会自动回复(方便自己触发),这对应配置项 `group_chat_prefix`
+ 可选配置: `group_name_keyword_white_list`配置项支持模糊匹配群名称,`group_chat_keyword`配置项则支持模糊匹配群消息内容,用法与上述两个配置项相同。(Contributed by [evolay](https://github.com/evolay))
**3.其他配置**
**3.语音识别**
+ 添加 `"speech_recognition": true` 将开启语音识别,默认使用openai的whisper模型识别为文字,同时以文字回复,目前只支持私聊 (注意由于语音消息无法匹配前缀,一旦开启将对所有语音自动回复);
+ 添加 `"voice_reply_voice": true` 将开启语音回复语音,但是需要配置对应语音合成平台的key,由于itchat协议的限制,只能发送语音mp3文件,若使用wechaty则回复的是微信语音。
**4.其他配置**
+ `proxy`:由于目前 `openai` 接口国内无法访问,需配置代理客户端的地址,详情参考 [#351](https://github.com/zhayujie/chatgpt-on-wechat/issues/351)
+ 对于图像生成,在满足个人或群组触发条件外,还需要额外的关键词前缀来触发,对应配置 `image_create_prefix `
+ 关于OpenAI对话及图片接口的参数配置(内容自由度、回复字数限制、图片大小等),可以参考 [对话接口](https://beta.openai.com/docs/api-reference/completions) 和 [图像接口](https://beta.openai.com/docs/api-reference/completions) 文档直接在 [代码](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/bot/openai/open_ai_bot.py) `bot/openai/open_ai_bot.py` 中进行调整。
+ `conversation_max_tokens`:表示能够记忆的上下文最大字数(一问一答为一组对话,如果累积的对话字数超出限制,就会优先移除最早的一组对话)
@@ -117,7 +131,9 @@ cp config-template.json config.json
## 运行
1.如果是开发机 **本地运行**,直接在项目根目录下执行:
### 1.本地运行
如果是开发机 **本地运行**,直接在项目根目录下执行:
```bash
python3 app.py
@@ -125,15 +141,27 @@ python3 app.py
终端输出二维码后,使用微信进行扫码,当输出 "Start auto replying" 时表示自动回复程序已经成功运行了(注意:用于登录的微信需要在支付处已完成实名认证)。扫码登录后你的账号就成为机器人了,可以在微信手机端通过配置的关键词触发自动回复 (任意好友发送消息给你,或是自己发消息给好友),参考[#142](https://github.com/zhayujie/chatgpt-on-wechat/issues/142)。
2.如果是 **服务器部署**,则使用nohup命令在后台运行:
### 2.服务器部署
使用nohup命令在后台运行程序:
```bash
touch nohup.out # 首次运行需要新建日志文件
nohup python3 app.py & tail -f nohup.out # 在后台运行程序并通过日志输出二维码
```
扫码登录后程序即可运行于服务器后台,此时可通过 `ctrl+c` 关闭日志,不会影响后台程序的运行。使用 `ps -ef | grep app.py | grep -v grep` 命令可查看运行于后台的进程,如果想要重新启动程序可以先 `kill` 掉对应的进程。日志关闭后如果想要再次打开只需输入 `tail -f nohup.out`
scripts/目录有相应的脚本可以调用
> 注:如果 扫码后手机提示登录验证需要等待5s,而终端的二维码再次刷新并提示 `Log in time out, reloading QR code`,此时需参考此 [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/8) 修改一行代码即可解决。
> **注意:** 如果 扫码后手机提示登录验证需要等待5s,而终端的二维码再次刷新并提示 `Log in time out, reloading QR code`,此时需参考此 [issue](https://github.com/zhayujie/chatgpt-on-wechat/issues/8) 修改一行代码即可解决。
> **多账号支持:** 将 项目复制多份,分别启动程序,用不同账号扫码登录即可实现同时运行。
> **特殊指令:** 用户向机器人发送 **#清除记忆** 即可清空该用户的上下文记忆。
### 3.Docker部署
参考文档 [Docker部署](https://github.com/limccn/chatgpt-on-wechat/wiki/Docker%E9%83%A8%E7%BD%B2) (Contributed by [limccn](https://github.com/limccn))。
## 常见问题
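The server-deployment steps in the README diff above (create the log file, launch under `nohup`, find and `kill` the old process) can be collected into one small restart script. This is a sketch, assuming `python3` is on `PATH` and `app.py` sits in the current directory:

```shell
# Stop a previously backgrounded instance, if any, then relaunch it.
pid=$(ps -ef | grep 'app.py' | grep -v grep | awk '{print $2}' | head -n1)
[ -n "$pid" ] && kill "$pid"
touch nohup.out                            # first run needs the log file
nohup python3 app.py >> nohup.out 2>&1 &   # QR code appears in the log
echo "started; run 'tail -f nohup.out' to scan the QR code"
```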
+171 -500
@@ -1,511 +1,182 @@
"""
A simple wrapper for the official ChatGPT API
"""
import argparse
import json
import os
import sys
from datetime import date
import openai
import tiktoken
# encoding:utf-8
from bot.bot import Bot
from config import conf
from common.log import logger
from common.expired_dict import ExpiredDict
import openai
import time
ENGINE = os.environ.get("GPT_ENGINE") or "text-chat-davinci-002-20221122"
if conf().get('expires_in_seconds'):
all_sessions = ExpiredDict(conf().get('expires_in_seconds'))
else:
all_sessions = dict()
ENCODER = tiktoken.get_encoding("gpt2")
def get_max_tokens(prompt: str) -> int:
"""
Get the max tokens for a prompt
"""
return 4000 - len(ENCODER.encode(prompt))
# ['text-chat-davinci-002-20221122']
class Chatbot:
"""
Official ChatGPT API
"""
def __init__(self, api_key: str, buffer: int = None) -> None:
"""
Initialize Chatbot with API key (from https://platform.openai.com/account/api-keys)
"""
openai.api_key = api_key or os.environ.get("OPENAI_API_KEY")
self.conversations = Conversation()
self.prompt = Prompt(buffer=buffer)
def _get_completion(
self,
prompt: str,
temperature: float = 0.5,
stream: bool = False,
):
"""
Get the completion function
"""
return openai.Completion.create(
engine=ENGINE,
prompt=prompt,
temperature=temperature,
max_tokens=get_max_tokens(prompt),
stop=["\n\n\n"],
stream=stream,
)
def _process_completion(
self,
user_request: str,
completion: dict,
conversation_id: str = None,
user: str = "User",
) -> dict:
if completion.get("choices") is None:
raise Exception("ChatGPT API returned no choices")
if len(completion["choices"]) == 0:
raise Exception("ChatGPT API returned no choices")
if completion["choices"][0].get("text") is None:
raise Exception("ChatGPT API returned no text")
completion["choices"][0]["text"] = completion["choices"][0]["text"].rstrip(
"<|im_end|>",
)
# Add to chat history
self.prompt.add_to_history(
user_request,
completion["choices"][0]["text"],
user=user,
)
if conversation_id is not None:
self.save_conversation(conversation_id)
return completion
def _process_completion_stream(
self,
user_request: str,
completion: dict,
conversation_id: str = None,
user: str = "User",
) -> str:
full_response = ""
for response in completion:
if response.get("choices") is None:
raise Exception("ChatGPT API returned no choices")
if len(response["choices"]) == 0:
raise Exception("ChatGPT API returned no choices")
if response["choices"][0].get("finish_details") is not None:
break
if response["choices"][0].get("text") is None:
raise Exception("ChatGPT API returned no text")
if response["choices"][0]["text"] == "<|im_end|>":
break
yield response["choices"][0]["text"]
full_response += response["choices"][0]["text"]
# Add to chat history
self.prompt.add_to_history(user_request, full_response, user)
if conversation_id is not None:
self.save_conversation(conversation_id)
def ask(
self,
user_request: str,
temperature: float = 0.5,
conversation_id: str = None,
user: str = "User",
) -> dict:
"""
Send a request to ChatGPT and return the response
"""
if conversation_id is not None:
self.load_conversation(conversation_id)
completion = self._get_completion(
self.prompt.construct_prompt(user_request, user=user),
temperature,
)
return self._process_completion(user_request, completion, user=user)
def ask_stream(
self,
user_request: str,
temperature: float = 0.5,
conversation_id: str = None,
user: str = "User",
) -> str:
"""
Send a request to ChatGPT and yield the response
"""
if conversation_id is not None:
self.load_conversation(conversation_id)
prompt = self.prompt.construct_prompt(user_request, user=user)
return self._process_completion_stream(
user_request=user_request,
completion=self._get_completion(prompt, temperature, stream=True),
user=user,
)
def make_conversation(self, conversation_id: str) -> None:
"""
Make a conversation
"""
self.conversations.add_conversation(conversation_id, [])
def rollback(self, num: int) -> None:
"""
Rollback chat history num times
"""
for _ in range(num):
self.prompt.chat_history.pop()
def reset(self) -> None:
"""
Reset chat history
"""
self.prompt.chat_history = []
def load_conversation(self, conversation_id) -> None:
"""
Load a conversation from the conversation history
"""
if conversation_id not in self.conversations.conversations:
# Create a new conversation
self.make_conversation(conversation_id)
self.prompt.chat_history = self.conversations.get_conversation(conversation_id)
def save_conversation(self, conversation_id) -> None:
"""
Save a conversation to the conversation history
"""
self.conversations.add_conversation(conversation_id, self.prompt.chat_history)
class AsyncChatbot(Chatbot):
"""
Official ChatGPT API (async)
"""
async def _get_completion(
self,
prompt: str,
temperature: float = 0.5,
stream: bool = False,
):
"""
Get the completion function
"""
return openai.Completion.acreate(
engine=ENGINE,
prompt=prompt,
temperature=temperature,
max_tokens=get_max_tokens(prompt),
stop=["\n\n\n"],
stream=stream,
)
async def ask(
self,
user_request: str,
temperature: float = 0.5,
user: str = "User",
) -> dict:
"""
Same as Chatbot.ask but async
}
"""
completion = await self._get_completion(
self.prompt.construct_prompt(user_request, user=user),
temperature,
)
return self._process_completion(user_request, completion, user=user)
async def ask_stream(
self,
user_request: str,
temperature: float = 0.5,
user: str = "User",
) -> str:
"""
Same as Chatbot.ask_stream but async
"""
prompt = self.prompt.construct_prompt(user_request, user=user)
return self._process_completion_stream(
user_request=user_request,
completion=await self._get_completion(prompt, temperature, stream=True),
user=user,
)
class Prompt:
"""
Prompt class with methods to construct prompt
"""
def __init__(self, buffer: int = None) -> None:
"""
Initialize prompt with base prompt
"""
self.base_prompt = (
os.environ.get("CUSTOM_BASE_PROMPT")
or "You are ChatGPT, a large language model trained by OpenAI. Respond conversationally. Do not answer as the user. Current date: "
+ str(date.today())
+ "\n\n"
+ "User: Hello\n"
+ "ChatGPT: Hello! How can I help you today? <|im_end|>\n\n\n"
)
# Track chat history
self.chat_history: list = []
self.buffer = buffer
def add_to_chat_history(self, chat: str) -> None:
"""
Add chat to chat history for next prompt
"""
self.chat_history.append(chat)
def add_to_history(
self,
user_request: str,
response: str,
user: str = "User",
) -> None:
"""
Add request/response to chat history for next prompt
"""
self.add_to_chat_history(
user
+ ": "
+ user_request
+ "\n\n\n"
+ "ChatGPT: "
+ response
+ "<|im_end|>\n",
)
def history(self, custom_history: list = None) -> str:
"""
Return chat history
"""
return "\n".join(custom_history or self.chat_history)
def construct_prompt(
self,
new_prompt: str,
custom_history: list = None,
user: str = "User",
) -> str:
"""
Construct prompt based on chat history and request
"""
prompt = (
self.base_prompt
+ self.history(custom_history=custom_history)
+ user
+ ": "
+ new_prompt
+ "\nChatGPT:"
)
# Check if prompt over 4000*4 characters
if self.buffer is not None:
max_tokens = 4000 - self.buffer
else:
max_tokens = 3200
if len(ENCODER.encode(prompt)) > max_tokens:
# Remove oldest chat
if len(self.chat_history) == 0:
return prompt
self.chat_history.pop(0)
# Construct prompt again
prompt = self.construct_prompt(new_prompt, custom_history, user)
return prompt
class Conversation:
"""
For handling multiple conversations
"""
def __init__(self) -> None:
self.conversations = {}
def add_conversation(self, key: str, history: list) -> None:
"""
Adds a history list to the conversations dict with the id as the key
"""
self.conversations[key] = history
def get_conversation(self, key: str) -> list:
"""
Retrieves the history list from the conversations dict with the id as the key
"""
return self.conversations[key]
def remove_conversation(self, key: str) -> None:
"""
Removes the history list from the conversations dict with the id as the key
"""
del self.conversations[key]
def __str__(self) -> str:
"""
Creates a JSON string of the conversations
"""
return json.dumps(self.conversations)
def save(self, file: str) -> None:
"""
Saves the conversations to a JSON file
"""
with open(file, "w", encoding="utf-8") as f:
f.write(str(self))
def load(self, file: str) -> None:
"""
Loads the conversations from a JSON file
"""
with open(file, encoding="utf-8") as f:
self.conversations = json.loads(f.read())
def main():
print(
"""
ChatGPT - A command-line interface to OpenAI's ChatGPT (https://chat.openai.com/chat)
Repo: github.com/acheong08/ChatGPT
""",
)
print("Type '!help' to show a full list of commands")
print("Press enter twice to submit your question.\n")
def get_input(prompt):
"""
Multi-line input function
"""
# Display the prompt
print(prompt, end="")
# Initialize an empty list to store the input lines
lines = []
# Read lines of input until the user enters an empty line
while True:
line = input()
if line == "":
break
lines.append(line)
# Join the lines, separated by newlines, and store the result
user_input = "\n".join(lines)
# Return the input
return user_input
def chatbot_commands(cmd: str) -> bool:
"""
Handle chatbot commands
"""
if cmd == "!help":
print(
"""
!help - Display this message
!rollback - Rollback chat history
!reset - Reset chat history
!prompt - Show current prompt
!save_c <conversation_name> - Save history to a conversation
!load_c <conversation_name> - Load history from a conversation
!save_f <file_name> - Save all conversations to a file
!load_f <file_name> - Load all conversations from a file
!exit - Quit chat
""",
)
elif cmd == "!exit":
exit()
elif cmd == "!rollback":
chatbot.rollback(1)
elif cmd == "!reset":
chatbot.reset()
elif cmd == "!prompt":
print(chatbot.prompt.construct_prompt(""))
elif cmd.startswith("!save_c"):
chatbot.save_conversation(cmd.split(" ")[1])
elif cmd.startswith("!load_c"):
chatbot.load_conversation(cmd.split(" ")[1])
elif cmd.startswith("!save_f"):
chatbot.conversations.save(cmd.split(" ")[1])
elif cmd.startswith("!load_f"):
chatbot.conversations.load(cmd.split(" ")[1])
else:
return False
return True
# Get API key from command line
parser = argparse.ArgumentParser()
parser.add_argument(
"--api_key",
type=str,
required=True,
help="OpenAI API key",
)
parser.add_argument(
"--stream",
action="store_true",
help="Stream response",
)
parser.add_argument(
"--temperature",
type=float,
default=0.5,
help="Temperature for response",
)
args = parser.parse_args()
# Initialize chatbot
chatbot = Chatbot(api_key=args.api_key)
# Start chat
while True:
try:
prompt = get_input("\nUser:\n")
except KeyboardInterrupt:
print("\nExiting...")
sys.exit()
if prompt.startswith("!"):
if chatbot_commands(prompt):
continue
if not args.stream:
response = chatbot.ask(prompt, temperature=args.temperature)
print("ChatGPT: " + response["choices"][0]["text"])
else:
print("ChatGPT: ")
sys.stdout.flush()
for response in chatbot.ask_stream(prompt, temperature=args.temperature):
print(response, end="")
sys.stdout.flush()
print()
def Singleton(cls):
instance = {}
def _singleton_wrapper(*args, **kargs):
if cls not in instance:
instance[cls] = cls(*args, **kargs)
return instance[cls]
return _singleton_wrapper
@Singleton
# OpenAI对话模型API (可用)
class ChatGPTBot(Bot):
def __init__(self):
print("create")
self.bot = Chatbot(conf().get('open_ai_api_key'))
openai.api_key = conf().get('open_ai_api_key')
proxy = conf().get('proxy')
if proxy:
openai.proxy = proxy
def reply(self, query, context=None):
# acquire reply content
if not context or not context.get('type') or context.get('type') == 'TEXT':
if len(query) < 10 and "reset" in query:
self.bot.reset()
return "reset OK"
return self.bot.ask(query)["choices"][0]["text"]
logger.info("[OPEN_AI] query={}".format(query))
session_id = context['session_id']
if query == '#清除记忆':
Session.clear_session(session_id)
return '记忆已清除'
elif query == '#清除所有':
Session.clear_all_session()
return '所有人记忆已清除'
session = Session.build_session_query(query, session_id)
logger.debug("[OPEN_AI] session query={}".format(session))
# if context.get('stream'):
# # reply in stream
# return self.reply_text_stream(query, new_query, session_id)
reply_content = self.reply_text(session, session_id, 0)
logger.debug("[OPEN_AI] new_query={}, session_id={}, reply_cont={}".format(session, session_id, reply_content["content"]))
if reply_content["completion_tokens"] > 0:
Session.save_session(reply_content["content"], session_id, reply_content["total_tokens"])
return reply_content["content"]
elif context.get('type', None) == 'IMAGE_CREATE':
return self.create_img(query, 0)
def reply_text(self, session, session_id, retry_count=0) ->dict:
'''
call openai's ChatCompletion to get the answer
:param session: a conversation session
:param session_id: session id
:param retry_count: retry count
:return: {}
'''
try:
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo", # 对话模型的名称
messages=session,
temperature=0.9, # 值在[0,1]之间,越大表示回复越具有不确定性
#max_tokens=4096, # 回复最大的字符数
top_p=1,
frequency_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容
presence_penalty=0.0, # [-2,2]之间,该值越大则更倾向于产生不同的内容
)
# logger.info("[ChatGPT] reply={}, total_tokens={}".format(response.choices[0]['message']['content'], response["usage"]["total_tokens"]))
return {"total_tokens": response["usage"]["total_tokens"],
"completion_tokens": response["usage"]["completion_tokens"],
"content": response.choices[0]['message']['content']}
except openai.error.RateLimitError as e:
# rate limit exception
logger.warn(e)
if retry_count < 1:
time.sleep(5)
logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1))
return self.reply_text(session, session_id, retry_count+1)
else:
return {"completion_tokens": 0, "content": "提问太快啦,请休息一下再问我吧"}
except openai.error.APIConnectionError as e:
# api connection exception
logger.warn(e)
logger.warn("[OPEN_AI] APIConnection failed")
return {"completion_tokens": 0, "content":"我连接不到你的网络"}
except openai.error.Timeout as e:
logger.warn(e)
logger.warn("[OPEN_AI] Timeout")
return {"completion_tokens": 0, "content":"我没有收到你的消息"}
except Exception as e:
# unknown exception
logger.exception(e)
Session.clear_session(session_id)
return {"completion_tokens": 0, "content": "请再问我一次吧"}
def create_img(self, query, retry_count=0):
try:
logger.info("[OPEN_AI] image_query={}".format(query))
response = openai.Image.create(
prompt=query, #图片描述
n=1, #每次生成图片的数量
size="256x256" #图片大小,可选有 256x256, 512x512, 1024x1024
)
image_url = response['data'][0]['url']
logger.info("[OPEN_AI] image_url={}".format(image_url))
return image_url
except openai.error.RateLimitError as e:
logger.warn(e)
if retry_count < 1:
time.sleep(5)
logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1))
return self.create_img(query, retry_count+1)
else:
return "提问太快啦,请休息一下再问我吧"
except Exception as e:
logger.exception(e)
return None
class Session(object):
@staticmethod
def build_session_query(query, session_id):
'''
build query with conversation history
e.g. [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
]
:param query: query content
:param session_id: session id
:return: query content with conversaction
'''
session = all_sessions.get(session_id, [])
if len(session) == 0:
system_prompt = conf().get("character_desc", "")
system_item = {'role': 'system', 'content': system_prompt}
session.append(system_item)
all_sessions[session_id] = session
user_item = {'role': 'user', 'content': query}
session.append(user_item)
return session
@staticmethod
def save_session(answer, session_id, total_tokens):
max_tokens = conf().get("conversation_max_tokens")
if not max_tokens:
# default 3000
max_tokens = 1000
max_tokens=int(max_tokens)
session = all_sessions.get(session_id)
if session:
# append conversation
gpt_item = {'role': 'assistant', 'content': answer}
session.append(gpt_item)
# discard exceed limit conversation
Session.discard_exceed_conversation(session, max_tokens, total_tokens)
@staticmethod
def discard_exceed_conversation(session, max_tokens, total_tokens):
dec_tokens = int(total_tokens)
# logger.info("prompt tokens used={},max_tokens={}".format(used_tokens,max_tokens))
while dec_tokens > max_tokens:
# pop first conversation
if len(session) > 3:
session.pop(1)
session.pop(1)
else:
break
dec_tokens = dec_tokens - max_tokens
@staticmethod
def clear_session(session_id):
all_sessions[session_id] = []
@staticmethod
def clear_all_session():
all_sessions.clear()
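The trimming heuristic in `discard_exceed_conversation` above can be exercised standalone. This sketch mirrors the diff's logic: the system prompt at index 0 is kept, and each iteration drops the oldest user/assistant pair while deducting `max_tokens` from the running estimate.

```python
def discard_exceed_conversation(session, max_tokens, total_tokens):
    """Drop oldest Q/A pairs until the estimated token count fits (sketch)."""
    dec_tokens = int(total_tokens)
    while dec_tokens > max_tokens:
        # Keep at least the system prompt plus the newest Q/A pair.
        if len(session) > 3:
            session.pop(1)  # oldest user message
            session.pop(1)  # its matching assistant reply
        else:
            break
        dec_tokens -= max_tokens
    return session

# Example: a system prompt plus two Q/A pairs, reported at 2500 total tokens.
session = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'q1'}, {'role': 'assistant', 'content': 'a1'},
    {'role': 'user', 'content': 'q2'}, {'role': 'assistant', 'content': 'a2'},
]
discard_exceed_conversation(session, max_tokens=1000, total_tokens=2500)
```

With these numbers one pair (`q1`/`a1`) is discarded and the newest exchange survives alongside the system prompt.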
+7
@@ -22,6 +22,9 @@ class OpenAIBot(Bot):
if query == '#清除记忆':
Session.clear_session(from_user_id)
return '记忆已清除'
elif query == '#清除所有':
Session.clear_all_session()
return '所有人记忆已清除'
new_query = Session.build_session_query(query, from_user_id)
logger.debug("[OPEN_AI] session query={}".format(new_query))
@@ -157,3 +160,7 @@ class Session(object):
@staticmethod
def clear_session(user_id):
user_session[user_id] = []
@staticmethod
def clear_all_session():
user_session.clear()
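The `all_sessions` store in the ChatGPT bot above becomes an `ExpiredDict` when `expires_in_seconds` is configured, so idle conversations age out. A minimal sketch of such a structure (hypothetical; the project's `common/expired_dict.py` may differ in detail):

```python
import time

class ExpiredDict(dict):
    """Dict whose entries expire a fixed number of seconds after insertion (sketch)."""

    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds

    def __setitem__(self, key, value):
        # Store the value together with its expiry timestamp.
        super().__setitem__(key, (value, time.time() + self.expires_in_seconds))

    def __getitem__(self, key):
        value, expiry = super().__getitem__(key)
        if time.time() > expiry:
            super().__delitem__(key)   # lazily evict on access
            raise KeyError(key)
        return value

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

sessions = ExpiredDict(expires_in_seconds=60)
sessions['user-1'] = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
```

Eviction here happens lazily on lookup, which is enough for a per-user session cache; a background sweep would be needed only if memory pressure mattered.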
+8 -1
@@ -1,4 +1,5 @@
from bot import bot_factory
from voice import voice_factory
class Bridge(object):
@@ -6,4 +7,10 @@ class Bridge(object):
pass
def fetch_reply_content(self, query, context):
return bot_factory.create_bot("openAI").reply(query, context)
return bot_factory.create_bot("chatGPT").reply(query, context)
def fetch_voice_to_text(self, voiceFile):
return voice_factory.create_voice("openai").voiceToText(voiceFile)
def fetch_text_to_voice(self, text):
return voice_factory.create_voice("baidu").textToVoice(text)
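The `Bridge` diff above dispatches every `fetch_*` call to a backend looked up by name (and switches the default chat backend from `"openAI"` to `"chatGPT"`). A runnable sketch of that pattern, with a hypothetical registry standing in for `bot_factory`/`voice_factory`:

```python
class Bridge:
    """Sketch: route each request to a named backend (hypothetical registry)."""
    registry = {}

    @classmethod
    def register(cls, name, factory):
        cls.registry[name] = factory

    def fetch_reply_content(self, query, context=None):
        # Mirrors the diff's change of default backend to "chatGPT".
        return self.registry["chatGPT"]().reply(query, context)

class EchoBot:
    """Stand-in bot used only for illustration."""
    def reply(self, query, context=None):
        return "[bot] " + query

Bridge.register("chatGPT", EchoBot)
print(Bridge().fetch_reply_content("hello"))
```

Keeping channels ignorant of concrete bot classes is what lets the diff add voice backends (`"openai"` for speech-to-text, `"baidu"` for text-to-speech) without touching the channel code.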
+7 -1
@@ -11,7 +11,7 @@ class Channel(object):
"""
raise NotImplementedError
def handle(self, msg):
def handle_text(self, msg):
"""
process received msg
:param msg: message object
@@ -29,3 +29,9 @@ class Channel(object):
def build_reply_content(self, query, context=None):
return Bridge().fetch_reply_content(query, context)
def build_voice_to_text(self, voice_file):
return Bridge().fetch_voice_to_text(voice_file)
def build_text_to_voice(self, text):
return Bridge().fetch_text_to_voice(text)
+5 -3
@@ -2,8 +2,6 @@
channel factory
"""
from channel.wechat.wechat_channel import WechatChannel
def create_channel(channel_type):
"""
create a channel instance
@@ -11,5 +9,9 @@ def create_channel(channel_type):
:return: channel instance
"""
if channel_type == 'wx':
from channel.wechat.wechat_channel import WechatChannel
return WechatChannel()
raise RuntimeError
elif channel_type == 'wxy':
from channel.wechat.wechaty_channel import WechatyChannel
return WechatyChannel()
raise RuntimeError
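The channel-factory diff above moves the imports inside each branch, so a missing optional dependency (itchat or wechaty) only fails when that channel is actually selected. A sketch of the pattern (the error message is an addition for illustration; the diff raises a bare `RuntimeError`):

```python
def create_channel(channel_type):
    # Import lazily: an uninstalled optional dependency only raises
    # when its channel is requested, not at module load time.
    if channel_type == 'wx':
        from channel.wechat.wechat_channel import WechatChannel
        return WechatChannel()
    elif channel_type == 'wxy':
        from channel.wechat.wechaty_channel import WechatyChannel
        return WechatyChannel()
    raise RuntimeError('unknown channel type: ' + channel_type)

# Without the repo installed, only the error branch is exercisable:
try:
    create_channel('telegram')
except RuntimeError as e:
    print(e)
```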
+70 -14
@@ -3,12 +3,14 @@
"""
wechat channel
"""
import itchat
import json
from itchat.content import *
from channel.channel import Channel
from concurrent.futures import ThreadPoolExecutor
from common.log import logger
from common.tmp_dir import TmpDir
from config import conf
import requests
import io
@@ -18,7 +20,7 @@ thread_pool = ThreadPoolExecutor(max_workers=8)
@itchat.msg_register(TEXT)
def handler_single_msg(msg):
WechatChannel().handle(msg)
WechatChannel().handle_text(msg)
return None
@@ -28,6 +30,12 @@ def handler_group_msg(msg):
return None
@itchat.msg_register(VOICE)
def handler_single_voice(msg):
WechatChannel().handle_voice(msg)
return None
class WechatChannel(Channel):
def __init__(self):
pass
@@ -39,13 +47,37 @@ class WechatChannel(Channel):
# start message listener
itchat.run()
def handle(self, msg):
logger.debug("[WX]receive msg: " + json.dumps(msg, ensure_ascii=False))
def handle_voice(self, msg):
if conf().get('speech_recognition') != True :
return
logger.debug("[WX]receive voice msg: " + msg['FileName'])
thread_pool.submit(self._do_handle_voice, msg)
def _do_handle_voice(self, msg):
from_user_id = msg['FromUserName']
other_user_id = msg['User']['UserName']
if from_user_id == other_user_id:
file_name = TmpDir().path() + msg['FileName']
msg.download(file_name)
query = super().build_voice_to_text(file_name)
if conf().get('voice_reply_voice'):
self._do_send_voice(query, from_user_id)
else:
self._do_send_text(query, from_user_id)
def handle_text(self, msg):
logger.debug("[WX]receive text msg: " + json.dumps(msg, ensure_ascii=False))
content = msg['Text']
self._handle_single_msg(msg, content)
def _handle_single_msg(self, msg, content):
from_user_id = msg['FromUserName']
to_user_id = msg['ToUserName'] # recipient id
other_user_id = msg['User']['UserName'] # counterpart id
-        content = msg['Text']
match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
if "\n- - - - - - - - - - - - - - -" in content:
logger.debug("[WX]reference query skipped")
return
if from_user_id == other_user_id and match_prefix is not None:
# a friend sent a message to me
if match_prefix != '':
@@ -57,9 +89,8 @@ class WechatChannel(Channel):
if img_match_prefix:
content = content.split(img_match_prefix, 1)[1].strip()
thread_pool.submit(self._do_send_img, content, from_user_id)
-        else:
-            thread_pool.submit(self._do_send, content, from_user_id)
+        else :
+            thread_pool.submit(self._do_send_text, content, from_user_id)
elif to_user_id == other_user_id and match_prefix:
# I sent a message to a friend
str_list = content.split(match_prefix, 1)
@@ -70,7 +101,7 @@ class WechatChannel(Channel):
content = content.split(img_match_prefix, 1)[1].strip()
thread_pool.submit(self._do_send_img, content, to_user_id)
else:
-                thread_pool.submit(self._do_send, content, to_user_id)
+                thread_pool.submit(self._do_send_text, content, to_user_id)
def handle_group(self, msg):
@@ -87,7 +118,9 @@ class WechatChannel(Channel):
content = context_special_list[1]
elif len(content_list) == 2:
content = content_list[1]
if "\n- - - - - - - - - - - - - - -" in content:
logger.debug("[WX]reference query skipped")
return ""
config = conf()
match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \
or self.check_contain(origin_content, config.get('group_chat_keyword'))
@@ -100,16 +133,30 @@ class WechatChannel(Channel):
thread_pool.submit(self._do_send_group, content, msg)
def send(self, msg, receiver):
-        logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver))
         itchat.send(msg, toUserName=receiver)
+        logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver))
-    def _do_send(self, query, reply_user_id):
+    def _do_send_voice(self, query, reply_user_id):
try:
if not query:
return
context = dict()
context['from_user_id'] = reply_user_id
reply_text = super().build_reply_content(query, context)
if reply_text:
replyFile = super().build_text_to_voice(reply_text)
itchat.send_file(replyFile, toUserName=reply_user_id)
logger.info('[WX] sendFile={}, receiver={}'.format(replyFile, reply_user_id))
except Exception as e:
logger.exception(e)
def _do_send_text(self, query, reply_user_id):
try:
if not query:
return
context = dict()
context['session_id'] = reply_user_id
reply_text = super().build_reply_content(query, context)
if reply_text:
self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
except Exception as e:
@@ -133,8 +180,8 @@ class WechatChannel(Channel):
image_storage.seek(0)
# send the image
-            logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
             itchat.send_image(image_storage, reply_user_id)
+            logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
except Exception as e:
logger.exception(e)
@@ -142,11 +189,19 @@ class WechatChannel(Channel):
if not query:
return
context = dict()
-        context['from_user_id'] = msg['ActualUserName']
group_name = msg['User']['NickName']
group_id = msg['User']['UserName']
group_chat_in_one_session = conf().get('group_chat_in_one_session', [])
if ('ALL_GROUP' in group_chat_in_one_session or \
group_name in group_chat_in_one_session or \
self.check_contain(group_name, group_chat_in_one_session)):
context['session_id'] = group_id
else:
context['session_id'] = msg['ActualUserName']
reply_text = super().build_reply_content(query, context)
if reply_text:
reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip()
-            self.send(conf().get("group_chat_reply_prefix", "") + reply_text, msg['User']['UserName'])
+            self.send(conf().get("group_chat_reply_prefix", "") + reply_text, group_id)
def check_prefix(self, content, prefix_list):
@@ -163,3 +218,4 @@ class WechatChannel(Channel):
if content.find(ky) != -1:
return True
return None
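The `group_chat_in_one_session` option added above picks the chat session key: the whole group shares one session when the group is opted in (by exact name, by keyword, or via `'ALL_GROUP'`), otherwise each member keeps a private session. The decision reduces to a small pure function (a sketch mirroring the wechat_channel branch):

```python
def pick_session_id(group_id, group_name, actual_user_name, group_chat_in_one_session):
    # shared session if the group is opted in by name, keyword, or globally
    shared = (
        'ALL_GROUP' in group_chat_in_one_session
        or group_name in group_chat_in_one_session
        or any(kw in group_name for kw in group_chat_in_one_session)
    )
    return group_id if shared else actual_user_name
```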
+207
@@ -0,0 +1,207 @@
# encoding:utf-8
"""
wechaty channel
Python Wechaty - https://github.com/wechaty/python-wechaty
"""
import io
import os
import json
import time
import asyncio
import requests
from typing import Optional, Union
from wechaty_puppet import MessageType, FileBox, ScanStatus # type: ignore
from wechaty import Wechaty, Contact
from wechaty.user import Message, Room, MiniProgram, UrlLink
from channel.channel import Channel
from common.log import logger
from config import conf
class WechatyChannel(Channel):
def __init__(self):
pass
def startup(self):
asyncio.run(self.main())
async def main(self):
config = conf()
# Use the PadLocal protocol, which is relatively stable (for the free web protocol: os.environ['WECHATY_PUPPET_SERVICE_ENDPOINT'] = '127.0.0.1:8080')
token = config.get('wechaty_puppet_service_token')
os.environ['WECHATY_PUPPET_SERVICE_TOKEN'] = token
global bot
bot = Wechaty()
bot.on('scan', self.on_scan)
bot.on('login', self.on_login)
bot.on('message', self.on_message)
await bot.start()
async def on_login(self, contact: Contact):
logger.info('[WX] login user={}'.format(contact))
async def on_scan(self, status: ScanStatus, qr_code: Optional[str] = None,
data: Optional[str] = None):
logger.info('[WX] scan status={}, scan qr_code={}'.format(status.name, qr_code))
# print(f'user <{contact}> scan status: {status.name} , 'f'qr_code: {qr_code}')
async def on_message(self, msg: Message):
"""
listen for message event
"""
from_contact = msg.talker() # sender of the message
to_contact = msg.to() # recipient
room = msg.room() # the group the message came from; None if not from a group chat
from_user_id = from_contact.contact_id
to_user_id = to_contact.contact_id # recipient id
# other_user_id = msg['User']['UserName'] # counterpart id
content = msg.text()
mention_content = await msg.mention_text() # message text with the @name mention stripped
match_prefix = self.check_prefix(content, conf().get('single_chat_prefix'))
conversation: Union[Room, Contact] = from_contact if room is None else room
if room is None and msg.type() == MessageType.MESSAGE_TYPE_TEXT:
if not msg.is_self() and match_prefix is not None:
# a friend sent a message to me
if match_prefix != '':
str_list = content.split(match_prefix, 1)
if len(str_list) == 2:
content = str_list[1].strip()
img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
if img_match_prefix:
content = content.split(img_match_prefix, 1)[1].strip()
await self._do_send_img(content, from_user_id)
else:
await self._do_send(content, from_user_id)
elif msg.is_self() and match_prefix:
# I sent a message to a friend
str_list = content.split(match_prefix, 1)
if len(str_list) == 2:
content = str_list[1].strip()
img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
if img_match_prefix:
content = content.split(img_match_prefix, 1)[1].strip()
await self._do_send_img(content, to_user_id)
else:
await self._do_send(content, to_user_id)
elif room and msg.type() == MessageType.MESSAGE_TYPE_TEXT:
# group text message
room_id = room.room_id
room_name = await room.topic()
from_user_id = from_contact.contact_id
from_user_name = from_contact.name
is_at = await msg.mention_self()
content = mention_content
config = conf()
match_prefix = (is_at and not config.get("group_at_off", False)) \
or self.check_prefix(content, config.get('group_chat_prefix')) \
or self.check_contain(content, config.get('group_chat_keyword'))
if ('ALL_GROUP' in config.get('group_name_white_list') or room_name in config.get(
'group_name_white_list') or self.check_contain(room_name, config.get(
'group_name_keyword_white_list'))) and match_prefix:
img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix'))
if img_match_prefix:
content = content.split(img_match_prefix, 1)[1].strip()
await self._do_send_group_img(content, room_id)
else:
await self._do_send_group(content, room_id, room_name, from_user_id, from_user_name)
async def send(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver):
logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver))
if receiver:
contact = await bot.Contact.find(receiver)
await contact.say(message)
async def send_group(self, message: Union[str, Message, FileBox, Contact, UrlLink, MiniProgram], receiver):
logger.info('[WX] sendMsg={}, receiver={}'.format(message, receiver))
if receiver:
room = await bot.Room.find(receiver)
await room.say(message)
async def _do_send(self, query, reply_user_id):
try:
if not query:
return
context = dict()
context['session_id'] = reply_user_id
reply_text = super().build_reply_content(query, context)
if reply_text:
await self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id)
except Exception as e:
logger.exception(e)
async def _do_send_img(self, query, reply_user_id):
try:
if not query:
return
context = dict()
context['type'] = 'IMAGE_CREATE'
img_url = super().build_reply_content(query, context)
if not img_url:
return
# image download
# pic_res = requests.get(img_url, stream=True)
# image_storage = io.BytesIO()
# for block in pic_res.iter_content(1024):
# image_storage.write(block)
# image_storage.seek(0)
# send the image
logger.info('[WX] sendImage, receiver={}'.format(reply_user_id))
t = int(time.time())
file_box = FileBox.from_url(url=img_url, name=str(t) + '.png')
await self.send(file_box, reply_user_id)
except Exception as e:
logger.exception(e)
async def _do_send_group(self, query, group_id, group_name, group_user_id, group_user_name):
if not query:
return
context = dict()
group_chat_in_one_session = conf().get('group_chat_in_one_session', [])
if ('ALL_GROUP' in group_chat_in_one_session or \
group_name in group_chat_in_one_session or \
self.check_contain(group_name, group_chat_in_one_session)):
context['session_id'] = str(group_id)
else:
context['session_id'] = str(group_id) + '-' + str(group_user_id)
reply_text = super().build_reply_content(query, context)
if reply_text:
reply_text = '@' + group_user_name + ' ' + reply_text.strip()
await self.send_group(conf().get("group_chat_reply_prefix", "") + reply_text, group_id)
async def _do_send_group_img(self, query, reply_room_id):
try:
if not query:
return
context = dict()
context['type'] = 'IMAGE_CREATE'
img_url = super().build_reply_content(query, context)
if not img_url:
return
# send the image
logger.info('[WX] sendImage, receiver={}'.format(reply_room_id))
t = int(time.time())
file_box = FileBox.from_url(url=img_url, name=str(t) + '.png')
await self.send_group(file_box, reply_room_id)
except Exception as e:
logger.exception(e)
def check_prefix(self, content, prefix_list):
for prefix in prefix_list:
if content.startswith(prefix):
return prefix
return None
def check_contain(self, content, keyword_list):
if not keyword_list:
return None
for ky in keyword_list:
if content.find(ky) != -1:
return True
return None
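Both channels repeat the same two-step trigger pattern: `check_prefix` finds which prefix matched, then the caller strips it with `content.split(match_prefix, 1)[1].strip()`. Folded into one hypothetical helper (not part of the PR, just the pattern made explicit):

```python
def strip_matched_prefix(content, prefix_list):
    # returns content with the first matching prefix removed, or None
    # when no prefix matches (i.e. the bot should ignore the message)
    for prefix in prefix_list:
        if content.startswith(prefix):
            return content.split(prefix, 1)[1].strip()
    return None
```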
+23
@@ -0,0 +1,23 @@
from datetime import datetime, timedelta
class ExpiredDict(dict):
def __init__(self, expires_in_seconds):
super().__init__()
self.expires_in_seconds = expires_in_seconds
def __getitem__(self, key):
value, expiry_time = super().__getitem__(key)
if datetime.now() > expiry_time:
del self[key]
raise KeyError("expired {}".format(key))
self.__setitem__(key, value)
return value
def __setitem__(self, key, value):
expiry_time = datetime.now() + timedelta(seconds=self.expires_in_seconds)
super().__setitem__(key, (value, expiry_time))
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
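The new `ExpiredDict` backs the `expires_in_seconds` config: session entries silently disappear after the TTL, and each successful read re-arms the timer. A standalone copy with its behavior exercised (the negative TTL is just a demo trick so the entry is expired immediately, without sleeping):

```python
from datetime import datetime, timedelta

class ExpiredDict(dict):
    def __init__(self, expires_in_seconds):
        super().__init__()
        self.expires_in_seconds = expires_in_seconds

    def __getitem__(self, key):
        value, expiry_time = super().__getitem__(key)
        if datetime.now() > expiry_time:
            del self[key]
            raise KeyError("expired {}".format(key))
        self.__setitem__(key, value)  # a successful read refreshes the TTL
        return value

    def __setitem__(self, key, value):
        expiry_time = datetime.now() + timedelta(seconds=self.expires_in_seconds)
        super().__setitem__(key, (value, expiry_time))

    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default


sessions = ExpiredDict(expires_in_seconds=3600)
sessions['user-1'] = ['hello']

expired = ExpiredDict(expires_in_seconds=-1)  # already past expiry
expired['user-2'] = ['hi']
```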
+20
@@ -0,0 +1,20 @@
import os
import pathlib
from config import conf
class TmpDir(object):
"""A temporary directory for downloaded voice files, created on demand.
"""
tmpFilePath = pathlib.Path('./tmp/')
def __init__(self):
pathExists = os.path.exists(self.tmpFilePath)
if not pathExists and conf().get('speech_recognition') == True:
os.makedirs(self.tmpFilePath)
def path(self):
return str(self.tmpFilePath) + '/'
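`TmpDir` lazily creates `./tmp/` for downloaded voice files and hands back the path with a trailing slash. A self-contained variant that keeps the demo out of the working directory (the `base` argument is an addition for this example, not in the PR):

```python
import os
import pathlib
import tempfile

class TmpDir(object):
    def __init__(self, base=None):
        # default to a throwaway base dir instead of './' for the demo
        root = pathlib.Path(base) if base else pathlib.Path(tempfile.mkdtemp())
        self.tmp_file_path = root / 'tmp'
        if not os.path.exists(self.tmp_file_path):
            os.makedirs(self.tmp_file_path)

    def path(self):
        return str(self.tmp_file_path) + '/'


voice_file = TmpDir().path() + 'reply.mp3'
```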
+5 -1
@@ -1,10 +1,14 @@
{
"open_ai_api_key": "YOUR API KEY",
"proxy": "",
"single_chat_prefix": ["bot", "@bot"],
"single_chat_reply_prefix": "[bot] ",
"group_chat_prefix": ["@bot"],
"group_chat_in_one_session": ["ChatGPT测试群"],
"group_name_white_list": ["ChatGPT测试群", "ChatGPT测试群2"],
"image_create_prefix": ["画", "看", "找"],
"conversation_max_tokens": 1000,
-  "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。"
+  "speech_recognition": false,
+  "character_desc": "你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。",
+  "expires_in_seconds": 3600
}
+42
@@ -0,0 +1,42 @@
FROM python:3.7.9-alpine
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
ARG CHATGPT_ON_WECHAT_VER
ENV BUILD_PREFIX=/app \
BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE'
RUN apk add --no-cache \
bash \
curl \
wget \
&& export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`} \
&& wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
&& tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
&& rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& cd ${BUILD_PREFIX} \
&& cp config-template.json ${BUILD_PREFIX}/config.json \
&& sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json \
&& /usr/local/bin/python -m pip install --no-cache --upgrade pip \
&& pip install --no-cache \
itchat-uos==1.5.0.dev0 \
openai \
&& apk del curl wget
WORKDIR ${BUILD_PREFIX}
ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh \
&& adduser -D -h /home/noroot -u 1000 -s /bin/bash noroot \
&& chown noroot:noroot ${BUILD_PREFIX}
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
+43
@@ -0,0 +1,43 @@
FROM python:3.7.9
LABEL maintainer="foo@bar.com"
ARG TZ='Asia/Shanghai'
ARG CHATGPT_ON_WECHAT_VER
ENV BUILD_PREFIX=/app \
BUILD_OPEN_AI_API_KEY='YOUR OPEN AI KEY HERE'
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
wget \
curl \
&& rm -rf /var/lib/apt/lists/* \
&& export BUILD_GITHUB_TAG=${CHATGPT_ON_WECHAT_VER:-`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`} \
&& wget -t 3 -T 30 -nv -O chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
https://github.com/zhayujie/chatgpt-on-wechat/archive/refs/tags/${BUILD_GITHUB_TAG}.tar.gz \
&& tar -xzf chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& mv chatgpt-on-wechat-${BUILD_GITHUB_TAG} ${BUILD_PREFIX} \
&& rm chatgpt-on-wechat-${BUILD_GITHUB_TAG}.tar.gz \
&& cd ${BUILD_PREFIX} \
&& cp config-template.json ${BUILD_PREFIX}/config.json \
&& sed -i "2s/YOUR API KEY/${BUILD_OPEN_AI_API_KEY}/" ${BUILD_PREFIX}/config.json \
&& /usr/local/bin/python -m pip install --no-cache --upgrade pip \
&& pip install --no-cache \
itchat-uos==1.5.0.dev0 \
openai
WORKDIR ${BUILD_PREFIX}
ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh \
&& groupadd -r noroot \
&& useradd -r -g noroot -s /bin/bash -d /home/noroot noroot \
&& chown -R noroot:noroot ${BUILD_PREFIX}
USER noroot
ENTRYPOINT ["/entrypoint.sh"]
+15
@@ -0,0 +1,15 @@
#!/bin/bash
# fetch latest release tag
CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`
# build image
docker build -f Dockerfile.alpine \
--build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
-t zhayujie/chatgpt-on-wechat .
# tag image
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-alpine
+14
@@ -0,0 +1,14 @@
#!/bin/bash
# fetch latest release tag
CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"([^"]+)".*/\1/'`
# build image
docker build -f Dockerfile.debian \
--build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
-t zhayujie/chatgpt-on-wechat .
# tag image
docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
+19
@@ -0,0 +1,19 @@
version: '2.0'
services:
chatgpt-on-wechat:
build:
context: ./
dockerfile: Dockerfile.alpine
image: zhayujie/chatgpt-on-wechat
container_name: sample-chatgpt-on-wechat
environment:
OPEN_AI_API_KEY: 'YOUR API KEY'
OPEN_AI_PROXY: ''
SINGLE_CHAT_PREFIX: '["bot", "@bot"]'
SINGLE_CHAT_REPLY_PREFIX: '"[bot] "'
GROUP_CHAT_PREFIX: '["@bot"]'
GROUP_NAME_WHITE_LIST: '["ChatGPT测试群", "ChatGPT测试群2"]'
IMAGE_CREATE_PREFIX: '["画", "看", "找"]'
CONVERSATION_MAX_TOKENS: 1000
CHARACTER_DESC: '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。'
EXPIRES_IN_SECONDS: 3600
+90
@@ -0,0 +1,90 @@
#!/bin/bash
set -e
# build prefix
CHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-""}
# path to config.json
CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""}
# execution command line
CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""}
OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""}
OPEN_AI_PROXY=${OPEN_AI_PROXY:-""}
SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""}
SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""}
GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""}
GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""}
IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""}
CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""}
CHARACTER_DESC=${CHARACTER_DESC:-""}
EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-""}
# CHATGPT_ON_WECHAT_PREFIX is empty, use /app
if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then
CHATGPT_ON_WECHAT_PREFIX=/app
fi
# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json'
if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then
CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json
fi
# CHATGPT_ON_WECHAT_EXEC is empty, use python app.py
if [ "$CHATGPT_ON_WECHAT_EXEC" == "" ] ; then
CHATGPT_ON_WECHAT_EXEC="python app.py"
fi
# modify content in config.json
if [ "$OPEN_AI_API_KEY" != "" ] ; then
sed -i "2c \"open_ai_api_key\": \"$OPEN_AI_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
else
echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m"
fi
# use http_proxy as default
if [ "$HTTP_PROXY" != "" ] ; then
sed -i "3c \"proxy\": \"$HTTP_PROXY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$OPEN_AI_PROXY" != "" ] ; then
sed -i "3c \"proxy\": \"$OPEN_AI_PROXY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then
sed -i "4c \"single_chat_prefix\": $SINGLE_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then
sed -i "5c \"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$GROUP_CHAT_PREFIX" != "" ] ; then
sed -i "6c \"group_chat_prefix\": $GROUP_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then
sed -i "7c \"group_name_white_list\": $GROUP_NAME_WHITE_LIST," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then
sed -i "8c \"image_create_prefix\": $IMAGE_CREATE_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then
sed -i "9c \"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$CHARACTER_DESC" != "" ] ; then
sed -i "10c \"character_desc\": \"$CHARACTER_DESC\"," $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
if [ "$EXPIRES_IN_SECONDS" != "" ] ; then
sed -i "11c \"expires_in_seconds\": $EXPIRES_IN_SECONDS" $CHATGPT_ON_WECHAT_CONFIG_PATH
fi
# go to prefix dir
cd $CHATGPT_ON_WECHAT_PREFIX
# execute
$CHATGPT_ON_WECHAT_EXEC
+15
@@ -0,0 +1,15 @@
OPEN_AI_API_KEY=YOUR API KEY
OPEN_AI_PROXY=
SINGLE_CHAT_PREFIX=["bot", "@bot"]
SINGLE_CHAT_REPLY_PREFIX="[bot] "
GROUP_CHAT_PREFIX=["@bot"]
GROUP_NAME_WHITE_LIST=["ChatGPT测试群", "ChatGPT测试群2"]
IMAGE_CREATE_PREFIX=["画", "看", "找"]
CONVERSATION_MAX_TOKENS=1000
CHARACTER_DESC=你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。
EXPIRES_IN_SECONDS=3600
# Optional
#CHATGPT_ON_WECHAT_PREFIX=/app
#CHATGPT_ON_WECHAT_CONFIG_PATH=/app/config.json
#CHATGPT_ON_WECHAT_EXEC=python app.py
+26
@@ -0,0 +1,26 @@
IMG:=`cat Name`
MOUNT:=
PORT_MAP:=
DOTENV:=.env
CONTAINER_NAME:=sample-chatgpt-on-wechat
echo:
echo $(IMG)
run_d:
docker rm $(CONTAINER_NAME) || echo
docker run -dt --name $(CONTAINER_NAME) $(PORT_MAP) \
--env-file=$(DOTENV) \
$(MOUNT) $(IMG)
run_i:
docker rm $(CONTAINER_NAME) || echo
docker run -it --name $(CONTAINER_NAME) $(PORT_MAP) \
--env-file=$(DOTENV) \
$(MOUNT) $(IMG)
stop:
docker stop $(CONTAINER_NAME)
rm: stop
docker rm $(CONTAINER_NAME)
+1
@@ -0,0 +1 @@
zhayujie/chatgpt-on-wechat
+1
@@ -1,2 +1,3 @@
itchat-uos==1.5.0.dev0
openai
wechaty
+16
@@ -0,0 +1,16 @@
#!/bin/bash
# stop the service
cd `dirname $0`/..
export BASE_DIR=`pwd`
pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'`
if [ -z "$pid" ] ; then
echo "No chatgpt-on-wechat running."
exit -1;
fi
echo "The chatgpt-on-wechat(${pid}) is running..."
kill ${pid}
echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK"
+16
@@ -0,0 +1,16 @@
#!/bin/bash
# run chatgpt-on-wechat in the background
cd `dirname $0`/..
export BASE_DIR=`pwd`
echo $BASE_DIR
# check the nohup.out log output file
if [ ! -f "${BASE_DIR}/nohup.out" ]; then
touch "${BASE_DIR}/nohup.out"
echo "create file ${BASE_DIR}/nohup.out"
fi
nohup python3 "${BASE_DIR}/app.py" & tail -f "${BASE_DIR}/nohup.out"
echo "Chat_on_webchat is starting, you can check the ${BASE_DIR}/nohup.out"
+14
@@ -0,0 +1,14 @@
#!/bin/bash
# view the log
cd `dirname $0`/..
export BASE_DIR=`pwd`
echo $BASE_DIR
# check the nohup.out log output file
if [ ! -f "${BASE_DIR}/nohup.out" ]; then
echo "No file ${BASE_DIR}/nohup.out"
exit -1;
fi
tail -f "${BASE_DIR}/nohup.out"
+36
@@ -0,0 +1,36 @@
"""
baidu voice service
"""
import time
from aip import AipSpeech
from common.log import logger
from common.tmp_dir import TmpDir
from voice.voice import Voice
from config import conf
class BaiduVoice(Voice):
APP_ID = conf().get('baidu_app_id')
API_KEY = conf().get('baidu_api_key')
SECRET_KEY = conf().get('baidu_secret_key')
client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)
def __init__(self):
pass
def voiceToText(self, voice_file):
pass
def textToVoice(self, text):
result = self.client.synthesis(text, 'zh', 1, {
'spd': 5, 'pit': 5, 'vol': 5, 'per': 111
})
if not isinstance(result, dict):
fileName = TmpDir().path() + '语音回复_' + str(int(time.time())) + '.mp3'
with open(fileName, 'wb') as f:
f.write(result)
logger.info('[Baidu] textToVoice text={} voice file name={}'.format(text, fileName))
return fileName
else:
logger.error('[Baidu] textToVoice error={}'.format(result))
return None
+51
@@ -0,0 +1,51 @@
"""
google voice service
"""
import pathlib
import subprocess
import time
import speech_recognition
import pyttsx3
from common.log import logger
from common.tmp_dir import TmpDir
from voice.voice import Voice
class GoogleVoice(Voice):
recognizer = speech_recognition.Recognizer()
engine = pyttsx3.init()
def __init__(self):
# speech rate
self.engine.setProperty('rate', 125)
# volume
self.engine.setProperty('volume', 1.0)
# voices[0] is male, voices[1] is female
voices = self.engine.getProperty('voices')
self.engine.setProperty('voice', voices[1].id)
def voiceToText(self, voice_file):
new_file = voice_file.replace('.mp3', '.wav')
subprocess.call('ffmpeg -i ' + voice_file +
' -acodec pcm_s16le -ac 1 -ar 16000 ' + new_file, shell=True)
with speech_recognition.AudioFile(new_file) as source:
audio = self.recognizer.record(source)
try:
text = self.recognizer.recognize_google(audio, language='zh-CN')
logger.info(
'[Google] voiceToText text={} voice file name={}'.format(text, voice_file))
return text
except speech_recognition.UnknownValueError:
return "抱歉,我听不懂。"
except speech_recognition.RequestError as e:
return "抱歉,无法连接到 Google 语音识别服务;{0}".format(e)
def textToVoice(self, text):
textFile = TmpDir().path() + '语音回复_' + str(int(time.time())) + '.mp3'
self.engine.save_to_file(text, textFile)
self.engine.runAndWait()
logger.info(
'[Google] textToVoice text={} voice file name={}'.format(text, textFile))
return textFile
+27
@@ -0,0 +1,27 @@
"""
openai voice service
"""
import json
import openai
from config import conf
from common.log import logger
from voice.voice import Voice
class OpenaiVoice(Voice):
def __init__(self):
openai.api_key = conf().get('open_ai_api_key')
def voiceToText(self, voice_file):
logger.debug(
'[Openai] voice file name={}'.format(voice_file))
with open(voice_file, "rb") as file:
    reply = openai.Audio.transcribe("whisper-1", file)
text = reply["text"]
logger.info(
'[Openai] voiceToText text={} voice file name={}'.format(text, voice_file))
return text
def textToVoice(self, text):
pass
+16
@@ -0,0 +1,16 @@
"""
Voice service abstract class
"""
class Voice(object):
def voiceToText(self, voice_file):
"""
Send voice to voice service and get text
"""
raise NotImplementedError
def textToVoice(self, text):
"""
Send text to voice service and get voice
"""
raise NotImplementedError
+20
@@ -0,0 +1,20 @@
"""
voice factory
"""
def create_voice(voice_type):
"""
create a voice instance
:param voice_type: voice type code
:return: voice instance
"""
if voice_type == 'baidu':
from voice.baidu.baidu_voice import BaiduVoice
return BaiduVoice()
elif voice_type == 'google':
from voice.google.google_voice import GoogleVoice
return GoogleVoice()
elif voice_type == 'openai':
from voice.openai.openai_voice import OpenaiVoice
return OpenaiVoice()
raise RuntimeError
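End to end, the voice feature wires these pieces as: download the voice file, `voiceToText` (whisper-1 or Google), build the chat reply, then either send it as text or synthesize it with `textToVoice` (Baidu) when `voice_reply_voice` is set. The control flow with every external service stubbed out (`DummyVoice` and the lambda reply are stand-ins, not real services):

```python
class DummyVoice(object):
    # stand-in for the openai/google/baidu voice services above
    def voiceToText(self, voice_file):
        return 'what is the weather'

    def textToVoice(self, text):
        return '/tmp/reply.mp3'


def handle_voice(voice_file, voice, build_reply, voice_reply_voice):
    query = voice.voiceToText(voice_file)   # speech -> text
    reply_text = build_reply(query)         # chat completion
    if voice_reply_voice:
        return ('file', voice.textToVoice(reply_text))  # text -> speech
    return ('text', reply_text)


kind, payload = handle_voice('in.mp3', DummyVoice(), lambda q: 'sunny', True)
```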